\section{INTRODUCTION} With the rapid growth of renewable Distributed Energy Resources (DERs) in power systems and their inherent uncertainties, economic energy management and planning are of great importance for microgrids of different scales. A microgrid is defined as a group of interconnected DERs, loads, and storage units that acts as a unified entity in the electricity market and can operate either connected to or islanded from the main electricity grid. In the connected mode, the microgrid is coupled to the main grid at the Point of Common Coupling (PCC), and any power exchange between the microgrid and the main grid is measured at this point. DERs within the microgrid may provide part or all of the microgrid's energy demand, thereby reducing the energy drawn from the main grid. Electricity providers use different pricing schemes to encourage certain consumption behaviors among consumers and to make power grids more efficient and reliable. Microgrids may use such energy price data to optimally schedule their loads, storage units, and dispatchable DERs \cite{shariatzadeh2015demand}. Storage provides additional flexibility for benefiting from such pricing schemes. Optimal microgrid scheduling should take operational costs into account and compute an optimal storage/DER dispatch schedule \cite{parhizi2015market,habib2016model,khodaei2015microgrid} based on load requirements, hardware constraints, storage capacity, and the intermittent, uncertain nature of renewables. Load and renewable generation are two major sources of uncertainty in microgrids that can strongly impact economic performance \cite{baziar2013considering}. Various approaches have been proposed to address uncertainty in the predicted load \cite{sarantis2017optimal, samadi2013tackling}. 
Renewable uncertainty generally has a greater impact when a significant portion of microgrid energy is provided by renewable DERs. Various robust and stochastic microgrid scheduling methods have been studied to address this challenge. A widely popular approach is to formulate the problem as a stochastic program by considering different scenarios for the uncertain parameters together with their probabilities \cite{parisio2013stochastic}. The objective is then to minimize the cost of current decisions plus the expected cost of future decisions. Alternatively, uncertain generation can be handled by robust optimization formulations, where all possible scenarios within the uncertainty set are accounted for and constraint satisfaction is guaranteed regardless of the realization of the random variables within that set \cite{li2011comparative,malysz2014optimal}. If a feasible solution exists, it tends to be more conservative yet less computationally intensive than the stochastic formulation. The problem can also be studied within the framework of Chance Constrained Programming (CCP). Assuming a known distribution for the uncertain variables, CCP computes a minimizing solution that meets the inequality constraints with a specified probability \cite{wu2011economic,charnes1959chance}. Although less conservative for most realizations of the random variables, this approach may lead to constraint violation if the assumed distribution is inaccurate or if extreme realizations occur. In the context of day-ahead microgrid scheduling, the scheduling interval is relatively long. In addition, forecasts of uncertain renewable generation are updated, and more accurate forecasts become available as time progresses. These are two motivations for taking a model predictive approach to the scheduling problem. 
If such an approach is taken, one need not determine, at the beginning of the 24-hr interval, a unique schedule that is robust against all possible uncertainties. Instead, some possible realizations may be allowed to violate the conditions at steps far into the future; future updates of the model predictive solution guarantee that no actual condition violation occurs. This framework thus allows for more optimal (less conservative) solutions to the problem. Different works have taken a model predictive approach to the scheduling problem \cite{jin2017user,prodan2014model}. Reference \cite{parisio2013stochastic} compares the performance of stochastic and deterministic MPC in economic scheduling of microgrids. Also, \cite{zhang2016model} provides a model predictive economic scheduling solution and discusses the impact of forecast error. In this work, the optimal scheduling problem is considered for a grid-connected microgrid with solar generation and energy storage. An approximate solar forecast and its uncertainty bounds are assumed to be known for the day-ahead microgrid operation and are updated at 15-min intervals. To reduce conservatism in the presence of the inherent uncertainty of PV generation, the problem is formulated within a model predictive framework, and hard constraints are replaced with relaxed versions at each MPC step. At each step, with the newly updated predictions, a quadratic programming problem is solved to yield an economic solution. \textbf{Notation.} The set of time intervals over 24 hours is denoted as $\mathcal{T}=\{1,2,...,T\}$. $s(t)$ is a random variable representing solar generation during interval $t$, while $\bar{s}(t,\tau),s_u(t,\tau),s_l(t,\tau)$ are the solar generation forecast, its upper bound, and its lower bound for interval $t$ based on predictions made at step $\tau$. 
Similarly, $c(t)$ is a random variable representing the charge level of the battery during interval $t$, and $\bar{c}(t,\tau),c_u(t,\tau),c_l(t,\tau)$ are the charge level forecast, its upper bound, and its lower bound for interval $t$ at MPC step $\tau$. In the remainder of the paper, the $\tau$ index for the MPC step is dropped for simplicity when this does not lead to confusion. $d(t), e(t),$ and $r(t)$ denote the load demand, the energy to/from the inverter, and the energy flow at the PCC during interval $t$. Energy variables ($s,d,e,r$) are assumed uniform over the intervals and represent total energy over each 15-min interval. \section{SYSTEM DESCRIPTION AND PROBLEM FORMULATION} We study a grid-connected microgrid consisting of solar generation, battery energy storage, and several loads. Strict load requirements enforce that the complete demand must be met at all times. The storage unit can be exploited for power shifting by using stored energy at times of high market price, thereby reducing the energy drawn from the main grid at those times. The economic scheduling problem is expected to propose a power flow solution at the point of common coupling that, while reducing a certain cost function, meets the microgrid's operational constraints and guarantees robustness against PV prediction uncertainty. This work develops a model-predictive robust optimization scheme based on 24-hr predictions of solar energy, load demand, and price. The 24-hr horizon is divided into $\Delta_T=15$~min intervals, and the optimization algorithm for the remaining intervals of the 24-hr period is run at the beginning of each MPC step; only the result for the impending interval is implemented. We first formulate the benchmark scheduling problem without uncertainties. \subsection{Benchmark Problem} The microgrid is assumed to have a certain energy demand for each of the considered intervals $t \in \mathcal{T}$. 
The sum of energy from the main grid and energy from DER/storage during each interval should meet this load demand \begin{align} d(t) = e(t) + r(t) \label{demand_balance} \end{align} Additionally, variation in the charge level of the battery can be expressed as \begin{align} c(t+1) = c(t) + s(t) - e(t) \label{battery_balance} \end{align} It should be noted that, as indicated in this equation, a unique decision variable $e(t)$ (inverter energy to/from the microgrid) should be applicable regardless of the actual realization of PV generation. This means that uncertainty in PV generation should only translate into uncertainty in the state of charge of the battery. \textbf{Constraints.} The microgrid optimal scheduling problem should be solved with consideration of the operational constraints of the system. A few of the most essential constraints are formulated in this section. The inverter can only provide power to the microgrid subject to its operational limits \begin{align} P_{min}\cdot\Delta_T\leq e(t) \leq P_{max}\cdot\Delta_T \label{inv_limits} \end{align} where $P_{min/max}$ is the minimum/maximum inverter power. The state of charge of the battery should stay within its upper and lower limits \begin{eqnarray} C_{min}<\bar{c}(t,\tau)<C_{max} \label{SoC_constraint1} \end{eqnarray} The scheduling should further enforce that the final charge level of the battery at the end of the 24-hr interval equals its initial charge. \begin{align} \bar{c}(t=T,\tau)=C_0 \label{C_0} \end{align} Taking into account the last two constraints, the set of feasible battery charge levels can be denoted as \begin{align}\nonumber \mathcal{C}=\{c(t,\tau) \mid C_{min}&<c(t,\tau)<C_{max}\ \forall t \geq \tau;\\ & c(t=T,\tau)=C_0\} \label{charge_constraint} \end{align} \textbf{Cost Function.} The optimal scheduling problem is expected to minimize a cost function comprising the cost of electricity and other operational costs of the microgrid. The cost of energy provided by the renewable source is assumed to be negligible. 
The total cost associated with microgrid operation is formulated below. The first term accounts for the energy charge during the 24-hr period, and the second term is the demand charge incurred during the time interval with maximum energy consumption. The last term is a penalty on battery charge/discharge to reduce energy loss due to the battery round-trip efficiency. \begin{align} f = \sum\limits_{t=1}^T v(t)r(t) + k r_{max} + \alpha \sum\limits_{t=1}^T e(t)^2 \label{CF} \end{align} where $v(t)$ is the electricity unit price at time $t$, $r_{max}$ is the energy flow at the PCC during the interval with maximum energy consumption, and $k$ and $\alpha$ are weighting coefficients. \subsection{Renewable Forecast Uncertainty} Numerous solar forecasting methods have been explored in the literature. In this work, we assume that the expected value of solar generation during interval $t$ at MPC step $\tau$, $E\{s(t)\}_\tau = \bar{s}(t,\tau)$, as well as its upper and lower bounds $s_u(t,\tau),s_l(t,\tau)$, are available for all $t\geq \tau$. These bounds should guarantee, with a high degree of confidence, that the realized solar generation falls within their range. The solar forecast data is updated with the most recent measurements at each step. Clearly, more accurate predictions and smaller uncertainty bounds are available when the forecast interval $t$ is closer to the current step $\tau$. For each MPC step $\tau$, we define the sequence $S_u=\{s_u(\tau),s_u(\tau+1), \dots,s_u(T)\}$ as a solar scenario consisting of the upper forecast points for all times $\tau \leq t \leq T$. This scenario represents an unlikely realization of $s(t)$ over the remainder of the 24-hr period which takes the upper solar forecast value at each time. In a similar way we define $S_l$ and $\bar{S}$ as the lower and average solar scenarios. 
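To make the pieces of the formulation concrete, the balance relations \eqref{demand_balance}--\eqref{battery_balance} and the cost \eqref{CF} can be assembled in a few lines of code (a minimal sketch with illustrative numbers; the function names are ours and not part of the formulation):

```python
# Sketch of the balance equations and cost function of the scheduling
# problem; all names and numeric values are illustrative only.

def pcc_energy(d_t, e_t):
    """r(t) from the demand balance d(t) = e(t) + r(t)."""
    return d_t - e_t

def next_charge(c_t, s_t, e_t):
    """Battery update c(t+1) = c(t) + s(t) - e(t)."""
    return c_t + s_t - e_t

def total_cost(v, r, e, k, alpha):
    """Energy charge + demand charge + battery-use penalty, as in (7)."""
    energy_charge = sum(vt * rt for vt, rt in zip(v, r))
    demand_charge = k * max(r)                      # r_max at the PCC
    battery_penalty = alpha * sum(et ** 2 for et in e)
    return energy_charge + demand_charge + battery_penalty

# Two 15-min intervals with demand d = [50, 40] kWh and inverter
# dispatch e = [20, 10] kWh give PCC energy flows r = [30, 30] kWh.
r = [pcc_energy(d, e) for d, e in zip([50.0, 40.0], [20.0, 10.0])]
```

In the actual scheduling problem, $e(t)$ and $r(t)$ are decision variables of a quadratic program rather than fixed numbers; the sketch only shows how the cost and balance terms are evaluated.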
In order for a unique decision variable $e(t)$ to be implementable under all possible solar scenarios, the uncertainty in solar generation should only translate into uncertainty in the state of charge of the battery. Motivated by obtaining a single scheduling plan that accommodates the solar generation scenarios $(\bar{S},S_u,S_l)$, we extend the aforementioned problem into a robust scheduling problem. \section{Uncertainty Handling and Robust Scheduling} \subsection{Existing Approaches} A rudimentary approach would be to enforce the hard constraint \eqref{SoC_constraint1} on the charge level under all possible solar generation scenarios. \begin{align}\nonumber C_{min}<c_u(t,\tau)<C_{max}\\ C_{min}<c_l(t,\tau)<C_{max} \end{align} Such a constraint over the 24-hr period could make the solutions highly conservative or even infeasible \cite{li2011comparative}. To reduce this conservatism, knowing that the solution will be updated at later steps, a chance constrained formulation would require \[ P(C_{min}<c_u(t,\tau)<C_{max}, C_{min}<c_l(t,\tau)<C_{max})\geq \beta \] where $P(\cdot)$ is the probability of constraint satisfaction. This constraint is a rational relaxation of the previous one and can therefore yield feasible and less conservative solutions at each MPC step, with the solution updated at every step. It however requires knowledge of the distribution of the uncertainty and demands higher computational complexity than the previous approach. A different approach would be to replace the SoC constraints for the upper and lower SoC scenarios with an additional term in the cost function that penalizes SoC limit violation \cite{sarantis2017optimal}. 
\begin{equation} \begin{array}{c} f_{aug} = \sum\limits_{t=1}^T \max\{0,c_u(t)-C_{max}\} \\ + \sum\limits_{t=1}^T \max\{0,C_{min}-c_l(t)\} \end{array} \label{CF1} \end{equation} Although this additional cost term acts as a soft constraint on SoC limit violation, it has no structure to differentiate between intervals with and without uncertainty. \subsection{Proposed Method} We propose a framework that generalizes the regular SoC constraints (\ref{charge_constraint}) in a structured way, making them applicable to uncertain solar scenarios. The idea is to enforce the hard constraint (\ref{charge_constraint}) only on the expected solar forecast, and a relaxed version of it on the other possible solar generation scenarios $(S_u,S_l)$. Instead of limiting all possible charge level realizations to fall within the upper and lower limits, we allow them to grow linearly out of bounds during uncertain intervals, while controlling their growth by a parameter representing the tightness of the constraint. The rationale behind this scheme is that the uncertainty in cumulative solar generation is additive over time, and therefore the storage scheduling should allow more relaxed constraints further into the future in order for the solution to remain feasible. We define the parameter $\eta$ to characterize such relaxed constraints on the battery SoC under the upper and lower PV scenarios. Figure \ref{uncertainty1}-top shows the proposed constraint on the battery charge level across different uncertainty scenarios at the beginning of the 24-hr interval. Also, Figure \ref{uncertainty1}-bottom shows the updated constraints at time $\tau$ for the remainder of the 24-hr interval. 
These constraints can be formulated as \begin{eqnarray}\nonumber C_{l}(t,\tau)<c_u(t,\tau)<C_{u}(t,\tau)\\ C_{l}(t,\tau)<c_l(t,\tau)<C_{u}(t,\tau) \label{SoC_constraint2} \end{eqnarray} \begin{gather*} C_{l[u]}(t,\tau)= \begin{cases} C_{min[max]}& \hspace{8mm} t <t_a \ \\ C_{min[max]} -[+] (t-\tau)\cdot\eta & t_a<t<t_b \ \\ C_{min[max]} -[+] (t_b-\tau)\cdot\eta& t_b<t\ \end{cases} \end{gather*}% where $t_a$ and $t_b$ are the time stamps of the start and end of the uncertainty in the solar prediction. For $\eta=0$, this condition is equivalent to the strict conditions in (\ref{SoC_constraint1}). Applying the hard constraint ($\eta=0$) may yield no feasible solution over the entire 24-hr interval, which means the problem is not solvable if hard SoC constraints are enforced on all possible scenarios at all times. For larger values of $\eta$, condition (\ref{SoC_constraint2}) relaxes conditions (\ref{SoC_constraint1}) as time progresses. Condition \eqref{SoC_constraint2} can be tested iteratively with different values of $\eta$ to obtain the smallest $\eta~(\eta^*)$ for which a solution exists; solutions then exist for all $\eta>\eta^*$. The larger the choice of $\eta$, the less conservative and less robust the solution will be. The benefit of this soft SoC constraint, as opposed to the hard SoC requirement \eqref{SoC_constraint1}, is reduced conservatism of the optimal solution while keeping the different scenarios under control. Moreover, by running the algorithm at regular 15-min periods with updated forecasts, we ensure that the resulting schedule strictly meets the hard SoC requirements \eqref{SoC_constraint1}. \begin{figure}[thpb] \centering \includegraphics[width=0.75\columnwidth]{conceptual_constraint_v2} \caption{ \textbf{Top.} Uncertainty plot at the beginning of the 24-hr interval. Expected solar generation and its upper and lower uncertainty bounds are shown in blue. 
Hard SoC limits (green) are replaced by soft constraints (orange) to reach feasible solutions and reduce conservatism when there is uncertainty in the solar prediction. \textbf{Bottom.} Uncertainty plot at time $\tau=10~hr$ of the 24-hr interval. The updated solar forecast and its upper and lower uncertainty bounds for the remainder of the 24-hr interval $(t\geq \tau)$ are shown in blue.} \label{uncertainty1} \end{figure} \textbf{Robustness Analysis.} At each MPC step, we want to make sure that the output $e(t)$ computed for the next step with soft SoC constraints does not lead to SoC limit violation. To achieve this, one can shift the soft constraint one step to the right so that the impending step always follows the hard limits while subsequent steps follow the soft limits. Another, less conservative, approach is to limit the next step's optimal solution according to \begin{align} c(\tau)+s_u(\tau)-C_{max}\leq e(\tau)\leq c(\tau)+s_l(\tau)-C_{min} \end{align} Based on the previous discussion, we can formalize Algorithm 1 for microgrid robust optimal scheduling. 
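The relaxed SoC bounds, the next-step safeguard, and the iterative search for the smallest feasible $\eta$ can be sketched as follows (a hedged illustration; \texttt{feasible} is a hypothetical stand-in for solving the full quadratic program, and all names are ours):

```python
# Illustrative sketch of the soft-constraint machinery; `feasible`
# is a hypothetical placeholder for a full QP solve.

def soft_bounds(t, tau, t_a, t_b, C_min, C_max, eta):
    """Relaxed SoC limits: flat before t_a, widening linearly at
    rate eta inside [t_a, t_b], and held constant after t_b."""
    if t < t_a:
        slack = 0.0
    elif t < t_b:
        slack = (t - tau) * eta
    else:
        slack = (t_b - tau) * eta
    return C_min - slack, C_max + slack

def safe_inverter_energy(e_opt, c_tau, s_l, s_u, C_min, C_max):
    """Clamp the impending decision e(tau) so that no solar
    realization in [s_l, s_u] can violate the hard SoC limits."""
    lower = c_tau + s_u - C_max
    upper = c_tau + s_l - C_min
    return min(max(e_opt, lower), upper)

def smallest_feasible_eta(feasible, eta0=1.0, eps=0.01):
    """Bisection for the smallest eta with a feasible schedule."""
    if feasible(0.0):
        return 0.0              # hard constraints already solvable
    lo, hi = 0.0, eta0
    while not feasible(hi):     # grow until feasibility is bracketed
        hi *= 2.0
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if feasible(mid) else (mid, hi)
    return hi
```

The bisection keeps \texttt{hi} feasible and \texttt{lo} infeasible, so the returned value overestimates $\eta^*$ by at most \texttt{eps}, matching the stopping rule $|\eta_{current}-\eta_{last}|<\epsilon$.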
\begin{algorithm} \caption{Microgrid Robust Optimal Scheduling}\label{algorithm} \begin{algorithmic}[1] \State Start at the beginning of the 24-hr interval, $\tau=1$ \State Obtain price and load demand data for the next 24 hr \While{$\tau \leq T$} \State Update the solar forecast and its bounds for $t=\tau : T$ \State Initialize the parameter $\eta = \eta_0$ \Repeat \State Solve the quadratic programming problem with \quad cost function $(7)$ subject to constraints \quad $(1-5,11,12)$ using any available solver \State Update $\eta$ using bisection to find smaller values \quad of $\eta$ that give a feasible solution \Until{$|\eta_{current}-\eta_{last}|<\epsilon$} \State $\eta^* = \eta$ \State Obtain the optimal scheduling solution with $\eta = \eta^*$ for $t=\tau:T$ but implement only $e(t=\tau)$ \State Wait until the beginning of the next scheduling period and the arrival of updated forecast data \EndWhile \end{algorithmic} \end{algorithm} \section{RESULTS} We study the microgrid of a medical facility that is planning integration into the main grid. The solar generation forecast at the beginning of the 24-hr horizon and its uncertainty bounds for the considered facility are illustrated in Figure \ref{uncertainty1}. Load demand and price data are illustrated in Figure \ref{Power}, and the microgrid specifications are listed in Table 1. \begin{figure}[thpb] \centering \includegraphics[width=6.5cm]{Power_v2} \caption{Optimization results for scheduling performed at time $\tau=1$.} \label{Power} \end{figure} \begin{figure}[thpb] \centering \includegraphics[width=6.5cm]{SoC_v2} \caption{SoC variation within the relaxed constraints for MPC step 1 (time step $\tau=1$) with the expected $\bar{S}$, upper $S_u$, and lower $S_l$ solar generation scenarios.} \label{SoC} \end{figure} \begin{figure}[thpb] \centering \includegraphics[trim=.8in 0 .5in 0,clip,width=\columnwidth]{Multiple_storage_min_eta} \caption{Effect of storage size on scheduling with uncertainty. 
With increasing storage size, $\eta^*$ decreases, meaning that the relaxed constraints come closer to the hard SoC constraints.} \label{SoC_min_eta} \end{figure} We seek to compute an economic schedule for the inverter power that can be implemented regardless of the realization of the solar profile within the estimated boundaries. An attempt to solve the problem with the given solar bounds and the hard constraints of equation (\ref{SoC_constraint1}) on all solar scenarios reveals that no feasible solution exists. Even if such a solution existed, it would be highly conservative due to the requirement that all possible scenarios stay within bounds even for time intervals far from the current one, for which accurate predictions do not exist. Since we assume no knowledge of the uncertainty distribution except its upper and lower bounds, CCP is not a good fit for our problem. Next, we investigate the microgrid scheduling problem over 15-min intervals using the proposed algorithm. Algorithm 1 is implemented on a system with parameters $C=1$ MWh, $C_0=500$ kWh, $SoC_{min}=20 \% $, $SoC_{max}=90 \%$, $P_{min}=-250$ kW, $P_{max}=250$ kW, $\eta_0=1$, and $\epsilon=0.01$. The Gurobi commercial solver is used to solve the quadratic programming problem at each step. The resulting power profile as well as the battery charge levels are presented in Figures \ref{Power} and \ref{SoC}, which show the result of scheduling at time $\tau=1$. The maximum power flow at the PCC over the entire 24-hr interval is remarkably reduced by flattening the PCC power profile. The inverter power profile also has low volatility in this case, which makes it robust against unmodelled prediction errors. The evolution of the battery charge level within the soft SoC constraints under the three solar generation scenarios is shown in Figure \ref{SoC}. \begin{figure}[thpb] \centering \includegraphics[width=9.0cm]{MPC_v2} \caption{Evolution of battery SoC over the 24-hr interval shown at 9 steps during the day. 
The dashed line shows the current step of optimization and the purple line shows the actual realization of the battery charge level for past times. While the results of optimization at earlier steps show apparent SoC limit violations at times far from the current step, no actual SoC violation occurs as time proceeds because the optimization is updated at every MPC step.} \label{MPC} \end{figure} To investigate the effect of storage size under solar uncertainty, Figure \ref{SoC_min_eta} shows economic scheduling results at the start of the 24-hr horizon for microgrids with different storage sizes; the minimum viable $\eta~(\eta^*)$ for solution feasibility is obtained for each case. For smaller storage sizes (800, 1000, 1200, and 1400 kWh), $\eta^*$ is greater than zero, meaning that no feasible solution would exist had the soft constraints not replaced the hard ones. For a storage of size 1600 kWh, the problem is solvable with $\eta=0$, or equivalently hard SoC constraints. Figure \ref{MPC} illustrates the evolution of the battery charge level as a result of scheduling over the 24-hr interval. It is seen that as time proceeds and updated solar predictions become available, the soft SoC constraints become tighter, and no violation of the hard SoC limits is observed at the end of the 24-hr interval. \section{CONCLUSIONS} A robust microgrid scheduling algorithm is proposed to optimize the power exchange between a microgrid and the main grid under uncertain solar prediction. To implement the algorithm, upper and lower bounds on solar generation suffice to describe the uncertainty. By relaxing the original constraint on the charge level of the battery, optimal solutions are sought in a larger space of possible battery charge levels. The problem is formulated as a quadratic program that is solved at 15-min steps over a 24-hr interval, using updated prediction data at each step. 
The model predictive formulation of the problem ensures that apparent violations of the hard constraints at future steps do not lead to actual charge level violations. The results indicate scheduling profiles that are in agreement with the defined cost function and follow the expected requirements under different uncertainty scenarios. The effect of storage size on handling the uncertainty is also investigated. \balance \bibliographystyle{IEEEtran}
\section{Introduction} Natural language spatial video grounding is a vital task for video-text understanding \cite{luo2017comprehension, zhou2019grounded, hu2019you, Zhang_Tan_Yu_Zhao_Kuang_Liu_Zhou_Yang_Wu_2020, li2021adaptive}, which aims to detect the objects described by a natural language query in each video frame, as shown in Figure~\ref{fig:1_1}. There is a substantial and rapidly-growing research literature studying this problem with dense annotations \cite{li2017,yamaguchi2017spatio,sadhu2020video}, where each frame that contains objects relevant to the language query is manually labeled with bounding boxes. Obviously, such annotations require tremendous human effort and can hardly be obtained in real-world scenarios. Recently, some works have investigated weakly-supervised video grounding with solely the video-text correspondence rather than object-text annotations \cite{huang2018finding,chen2019object,shi2019not,chen2019weakly,zhou2018weakly}. However, the performance is less satisfactory under such weak supervision. In practice, we are more likely to have a limited annotation budget rather than full annotation or no annotation. In addition, as humans, after seeing the language query paired with an object in a single frame for the first time, we are able to generalize this correspondence and identify objects in more frames. Towards this end, we investigate another practical problem setting, \textit{i}.\textit{e}., one-shot spatial video grounding, where solely one relevant frame per video is labeled with bounding boxes. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{figure1_1.PNG} \caption{An example of spatially grounding natural language in video frames.} \label{fig:1_1} \end{figure} Existing methods that are devised for supervised video grounding are not directly applicable to this novel setting. 
We summarize several critical challenges: \begin{itemize}[leftmargin=*] \item On the one hand, most of them incorporate a multi-stage training process, \textit{i}.\textit{e}., firstly training a clip localization module, and then training an object localization module in the second stage. However, in one-shot spatial video grounding, there are no temporal annotations indicating the start/end time of the relevant clip with which to train the clip localization module. Moreover, many of them extract video features in a pre-processed manner using a feature extractor or object detector pretrained on large-scale datasets. Such independent modeling limits the cooperation of the different modules, especially when labels are few. Therefore, an end-to-end training framework for one-shot spatial video grounding is urgently needed. \item On the other hand, there are video frames that are irrelevant either to the natural language query or to the labeled frame. These irrelevant frames increase the computational complexity of end-to-end training and introduce confounding between the frame label and (irrelevant) visual features. \item Lastly, with fewer supervision signals, deep representation learning may become error-prone or easily under-fitting, especially for end-to-end training. \end{itemize} To address these challenges, we devise an end-to-end model via the \underline{I}nformation \underline{T}ree for \underline{O}ne-\underline{S}hot natural language spatial video grounding (IT-OS). Different from previous works, we design a novel tree structure to shield the one-shot learning from frames that are irrelevant to either the language query or the labeled frame. We devise several self-supervised tasks based on the tree structure to strengthen the representation learning under limited supervision signals. 
Specifically, the calculation process of the key module, the information tree, contains four steps: (1) To construct the information tree, we view the video frame features as leaf nodes, and then compress adjacent nodes into non-leaf nodes based on their visual similarity and their semantic similarity with the language query; (2) We search the information tree and select branch paths that are consistently relevant to the language query both at the abstractive non-leaf node level and at the fine-grained leaf node level; (3) We drop I) the leaf nodes that do not belong to the same semantic unit as the labeled node; and II) the non-leaf nodes on the low-relevance branch paths. We also down-weight the importance of the leaf nodes that belong to the same semantic unit as the labeled node but are on low-relevance paths; (4) Finally, we input the extracted and weighted information to the transformer, and conduct training with the one-shot label and self-supervised tasks, including masked feature prediction and video-text matching. We note that both the information tree and the transformer are jointly trained in an end-to-end manner. We conduct experiments on two benchmark datasets, which demonstrate the effectiveness of IT-OS over state-of-the-art methods. Extensive analysis, including ablation studies and case studies, jointly demonstrates the merits of IT-OS for one-shot video grounding. Our contributions can be summarized as follows: \begin{itemize}[leftmargin=*] \item To the best of our knowledge, we take the initiative to investigate one-shot natural language spatial video grounding. We design an end-to-end model named IT-OS via an information tree to address the challenges brought by limited labels. \item By leveraging the language query, several novel modules on the information tree, such as tree construction, branch search, and branch cropping, are proposed. 
Moreover, to strengthen the deep representation learning under limited supervision signals, we introduce several self-supervised tasks based on the information tree. \item We experiment with our IT-OS model on two benchmark datasets. Comparisons with the state-of-the-art and extensive model analysis jointly demonstrate the effectiveness of IT-OS. \end{itemize} \section{Related works} \vpara{Natural Language Video Grounding.} Among numerous multimedia understanding applications~\cite{Zhang_Jiang_Wang_Kuang_Zhao_Zhu_Yu_Yang_Wu_2020,Zhang_Tan_Zhao_Yu_Kuang_Jiang_Zhou_Yang_Wu_2020, zhang2021consensus,zhang2021magic, zhang2020relational,kai2021learning, zhang2020counterfactual}, natural language video grounding has attracted the attention of more and more researchers recently. There are mainly three branches: temporal grounding \cite{ross2018grounding,lu2019debug,zhang2019cross,lin2020weakly,lin2020moment,zhang2021parallel,li2022compositional,gao2021relation, yang2021deconfounded}, spatio-temporal grounding \cite{tang2021human,zhang2020object,zhang2020does,su2021stvgbert}, and spatial grounding. We focus on the last one. Deep neural networks have convincingly demonstrated high capability in many domains \cite{wu2020biased, wu2022learning, guo2021semi, li2020multi, li2020ib, li2020unsupervised}, especially for video related tasks \cite{miao2021vspw, miao2020memory, xiao2020visual, xiao2021video}, like video grounding. For example, \cite{li2017} uses a neural network to detect language-query-related objects in the first frame and tracks the detected objects through the whole video. \cite{yamaguchi2017spatio} and \cite{vasudevan2018object} go further: they extract all the object proposals through a pretrained detector and choose the right proposal described by the text. Supervised training for natural language video object detection incurs high labeling costs. 
To reduce them, some researchers adopt a weakly-supervised learning fashion using the multiple instance learning (MIL) method \cite{huang2018finding,chen2019object,shi2019not,chen2019weakly,zhou2018weakly}. \cite{wang2021weakly} transfers contextualized knowledge in cross-modal alignment to alleviate the unstable training problem in MIL. Based on contrastive learning \cite{zhang2021reconstrast}, \cite{da2021asynce} proposes an AsyNCE loss to disentangle false-positive frames in MIL, which allows for mitigating the uncertainty from negative instance-sentence pairs. Weakly supervised false-positive identification based on contrastive learning has also witnessed success in some other domains~\cite{Zhang_Yao_Zhao_Chua_Wu_2021,yao2021contrastive}. \vpara{One-shot Learning for Videos.} One-shot learning has been applied to some other video tasks. \cite{yang2018one} proposes a meta-learning-based approach to perform one-shot action localization by capturing task-specific prior knowledge. \cite{wu2018exploit} investigates the one-shot video person re-identification task by progressively improving the discriminative capability of a CNN via stepwise learning. Different from these works, \cite{caelles2017one} and \cite{meinhardt2020make} define one-shot learning as only one frame being labeled per video. Specifically, \cite{caelles2017one} uses a fully convolutional neural network architecture to solve the one-shot video segmentation task, while \cite{meinhardt2020make} decouples the detection task and uses a modified Mask-RCNN to predict local segmentation masks. Following this setting, we investigate one-shot natural language spatial video grounding and devise a novel information-tree-based end-to-end framework for the task. 
\section{Method} \subsection{Model Overview} \vpara{Problem Formulation.} Given a video $V=\{v^i\}_{i=1,2,\dots,I}$ and a natural language query $C$, spatial video grounding aims to localize the query-described object among all the objects $O^i=\{o_j^i\}_{j=1,2,\dots,J}$ in each frame. $I$ denotes the number of frames in the video, and $J$ is the number of objects in each frame. In \textit{\textbf{one-shot}} spatial video grounding, only one frame $v^i$ in video $V$ is labeled with the region boxes of the target objects $O^i$. \begin{figure*}[t] \begin{center} \includegraphics[width=0.9\textwidth]{Architecture.pdf} \caption{ The overall schema of the proposed end-to-end one-shot video grounding via information tree (IT-OS), which contains query-guided tree construction, query-based branch search \& cropping, and a transformer encoder enhanced by self-supervised tasks. } \label{fig:schema} \end{center} \end{figure*} \vpara{Pipeline of IT-OS.} As shown in Figure~\ref{fig:schema}, there are mainly four steps involved in the end-to-end modeling of IT-OS: \begin{itemize}[leftmargin=*] \item Firstly, we extract features from the input video and the input caption. Specifically, for the video, we use ResNet-101 \cite{he2016deep} as the image encoder to extract the frame feature maps; for the language query, we employ the language model RoBERTa \cite{liu2019roberta}. Both the vision encoder and the language encoder are jointly optimized with the whole network. \item Secondly, we build the information tree to obtain the representation of the video. The information tree is built upon the frame feature maps, which serve as the leaf nodes. Leaf nodes are further merged based on node-node and node-query relevance to form non-leaf nodes and the root node. Nodes on unnecessary branches are deleted conditioned on the language query. \item Thirdly, we utilize a transformer encoder to reason over the remaining nodes and the language features.
Upon the transformer, we devise two self-supervised tasks, \textit{i}.\textit{e}., masked feature modeling and video-text matching, which enhance representation learning under limited labels. \end{itemize} \vpara{Prediction and Training.} We follow the common prediction and training protocol of visual transformers used in other object detection models \cite{wang2021end}. We input the embedding parameters $E_{de}$ and the multi-modal features $F_{de}$ generated by the transformer encoder into the transformer decoder $D$. Then, the decoder $D$ outputs possible prediction region features for each frame. For each possible region, a probability $P$ and a bounding box $B$ are generated: \begin{equation} P, B = D(F_{de}, E_{de}). \label{eq:decoder_pred} \end{equation} We choose the box $B$ with the highest probability value $P$ for each frame as the target box. During the training process, we first calculate the possible prediction regions. Then, we match the possible regions with the target boxes and choose the best match for each frame. Finally, we use the matches to train our IT-OS model. \subsection{Information Tree Module} In this section, we elaborate on the information tree module in detail. We illustrate how to construct the information tree, how to extract critical information from it, and how to design the self-supervised learning based on the tree. To ease the illustration, we take a video of $6$ frames as an example and show the process in Figure~\ref{fig:schema}. \subsubsection{Tree Construction} \label{sec:treeconstruct} Given the frame features generated by the CNN, we build the information tree by merging adjacent frame features in a specified order. Specifically, the frame features output by the image encoder are the leaf nodes $N=\{n^i\}_{i=1}^{2M}$. A sliding window of size 2 and step 2 is applied to these nodes, and the nodes in each window are evaluated for merging.
We calculate the \textit{semantic relevance difference} between each node pair and the language query, and compute the \textit{visual relevance} between the nodes in each pair. For the visual relevance calculation, we max-pool the feature maps of the $i$th node pair to obtain the feature vectors $f_v^{2i-1}$ and $f_v^{2i}$. Then, we compute the cosine similarity $r_{vv}^i$ between $f_v^{2i-1}$ and $f_v^{2i}$ as the visual relevance. Next, we calculate the semantic relevance $r_{tv}^{2i-1}$ and $r_{tv}^{2i}$ between the text feature $f_t$ and the nodes of the $i$th node pair: \begin{equation} r_{tv}^{2i-1}=\sigma((w_{t}*f_t)*(w_{v}*f_v^{2i-1})^T), \label{eq:text_node_attention_1} \end{equation} \begin{equation} r_{tv}^{2i}=\sigma((w_{t}*f_t)*(w_{v}*f_v^{2i})^T), \label{eq:text_node_attention_2} \end{equation} where $w_t$ and $w_v$ are learnable parameters, and $\sigma$ is the sigmoid activation function. The semantic relevance difference $d_{tv}^i$ of the $i$th node pair is: \begin{equation} d_{tv}^i=|r_{tv}^{2i-1}-r_{tv}^{2i}|+\gamma*r_{vv}^i, \label{eq:video_node_attention} \end{equation} where $\gamma$ is a hyperparameter. With the relevance difference values, we rank the node pairs and pick out the top $\lambda$, where $\lambda$ is a hyperparameter that can be set as a constant or a percentage. We merge each selected node pair: \begin{equation} n^{new}=w_{mg}*(n^{2i-1}+n^{2i})+b_{mg}, \label{eq:node_merge} \end{equation} where $w_{mg}$ and $b_{mg}$ are trainable. Finally, the new node $n^{new}$ replaces the old nodes $n^{2i-1}$ and $n^{2i}$ in the queue. We repeat this process until only one node remains in the queue. By saving all nodes and the composite relationships between nodes generated during merging, we obtain the information tree. \subsubsection{Branch Search} We use a branch to denote a subtree. To filter critical local and global information, we perform branch search and selection.
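To make the merging procedure of Section \ref{sec:treeconstruct} concrete, one round of the sliding-window merge can be sketched as follows. This is a minimal, framework-free Python illustration under simplifying assumptions, not our actual implementation: node features are plain lists, the learnable projections $w_t$, $w_v$, $w_{mg}$, and $b_{mg}$ are omitted, and the semantic relevance is approximated by a sigmoid of a raw dot product.

```python
import math

def cosine(a, b):
    # visual relevance r_vv between two pooled node feature vectors
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def semantic_relevance(text_feat, node_feat):
    # r_tv: sigmoid of a dot product (learnable projections omitted here)
    return sigmoid(sum(t * v for t, v in zip(text_feat, node_feat)))

def merge_round(nodes, text_feat, gamma=0.5, top_ratio=0.5):
    """One sliding-window pass (window 2, step 2): score each pair by
    d = |r_tv(left) - r_tv(right)| + gamma * r_vv, then merge the top pairs."""
    scored = []
    for i in range(0, len(nodes) - 1, 2):
        left, right = nodes[i], nodes[i + 1]
        d = abs(semantic_relevance(text_feat, left)
                - semantic_relevance(text_feat, right)) + gamma * cosine(left, right)
        scored.append((d, i))
    k = max(1, int(len(scored) * top_ratio))
    to_merge = {i for _, i in sorted(scored, reverse=True)[:k]}
    out, i = [], 0
    while i < len(nodes):
        if i in to_merge and i + 1 < len(nodes):
            # merged node: element-wise sum (learnable w_mg, b_mg omitted)
            out.append([a + b for a, b in zip(nodes[i], nodes[i + 1])])
            i += 2
        else:
            out.append(nodes[i])  # unmerged nodes stay in the queue
            i += 1
    return out
```

Repeating `merge_round` until one node remains yields the root, and recording each merge gives the parent-child structure of the tree.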
We first select branches that contain fewer than $\delta_{max}$ and more than $\delta_{min}$ leaf nodes, where $\delta_{max}$ and $\delta_{min}$ are hyperparameters. We calculate the \textit{semantic relevance} between the branches' root nodes and the language query based on Equation \ref{eq:text_node_attention_1}. \vpara{Training.} During training, we directly select the branch that contains the labeled leaf node and whose root node has the highest semantic relevance. This selection improves the training efficiency. \vpara{Inference.} During inference, all frames should be processed. We conduct an iterative search with multiple search steps. In each step, we select the branch with the highest semantic relevance and remove the selected branch from the information tree. After the search, we have multiple selected branches, and each branch is forwarded to the following processes. \subsubsection{Branch Cropping} Note that not all the non-leaf nodes in the selected branches are closely related to the input caption. We remove non-leaf nodes whose semantic relevance is less than $\Delta$, a hyperparameter. Their descendant non-leaf nodes are also removed. To reserve enough frame nodes for training, we do not remove the descendant leaf nodes. Instead, we down-weight them with $\lambda=0.5$. For other leaf nodes, $\lambda=1$. The remaining leaf nodes and non-leaf nodes represent the critical local information and the global information, respectively. We multiply the feature of node $i$ by the node's semantic relevance $r_{tv}^i$ and the weight $\lambda$: \begin{equation} f_{v_{new}}^i=f_{v}^i*r_{tv}^i*\lambda, \label{eq:feature_reweight} \end{equation} where $f_{v_{new}}^i$ is the feature vector input into the transformer. As such, Equation \ref{eq:feature_reweight} considers both the local relevance $r_{tv}$ and the global weight $\lambda$ with respect to the language query. \subsubsection{Self-supervised Tasks} We feed the extracted information and the language query into a transformer encoder.
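The cropping and reweighting rule of Equation \ref{eq:feature_reweight} can be sketched as follows. This is a simplified Python illustration rather than the actual implementation: nodes are plain dictionaries, relevance values are assumed precomputed, and ancestor removal is flagged directly on each node.

```python
def crop_branch(nodes, delta=0.7):
    """Branch cropping sketch. Each node is a dict with keys:
    'feat' (feature vector), 'r_tv' (semantic relevance to the query),
    'leaf' (bool), and 'parent_removed' (True if an ancestor non-leaf
    node was cropped). Non-leaf nodes below the relevance threshold
    `delta` are removed; descendant leaf nodes of removed nodes are
    kept but down-weighted with lambda = 0.5 instead of 1.0."""
    kept = []
    for node in nodes:
        if not node['leaf'] and node['r_tv'] < delta:
            continue  # crop low-relevance non-leaf nodes
        lam = 0.5 if (node['leaf'] and node['parent_removed']) else 1.0
        # feature reweighting: f_new = f * r_tv * lambda
        reweighted = [x * node['r_tv'] * lam for x in node['feat']]
        kept.append(dict(node, feat=reweighted))
    return kept
```

The surviving reweighted features are exactly what is passed on to the transformer encoder in the next step.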
As shown in Figure~\ref{fig:schema}, we design two self-supervised tasks: 1) predicting the masked text features and the masked local/global video information; 2) judging whether the text and the video match. For the transformer, the input tokens $F_{in}$ consist of three types: the local information, the global information, and the text features. We further introduce a 2-D position embedding for video tokens and a type embedding for all tokens, which are added to the tokens' features. Then, the features $F_{in}$ are input into the transformer encoder $E$. After encoding, the fusion features $F_{out}$ are output: \begin{equation} F_{out}=E(F_{in}). \label{eq:trans_encoder} \end{equation} We predict the original features for masked language tokens and masked video tokens (leaf/non-leaf nodes in the selected branch) using multilayer perceptrons: \begin{equation} {\hat f}_{in}^i=MLP_t(f_{out}^i), \ \ {\hat f}_{in}^j = MLP_v (f_{out}^j), \label{eq:MLP_video} \end{equation} where $MLP_t$ and $MLP_v$ are the multilayer perceptrons for text and video features, respectively. We view masked token modeling as feature regression and adopt the L2 distance as the loss function. In addition, the language query is replaced with a mismatched one at a rate of 50\%. We propose to predict whether the video and language are matched, \textit{i}.\textit{e}., whether the video contains the event described by the language query, based on the output representation of the token \texttt{[CLS]}. When the video and the language are not matched, we do not train the model with the one-shot label.
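The two self-supervised objectives can be sketched as follows: the L2 regression target for masked feature modeling, and the construction of (video, caption, label) pairs for video-text matching with a 50\% replacement rate. This is a simplified, framework-free Python illustration; names such as `make_matching_batch` are illustrative and not part of our implementation.

```python
import random

def l2_loss(pred, target):
    # Masked feature modeling is treated as feature regression with an
    # L2-distance loss between the predicted and the original features.
    return sum((p - t) ** 2 for p, t in zip(pred, target)) ** 0.5

def make_matching_batch(videos, captions, rate=0.5, rng=None):
    # Video-text matching: with probability `rate`, a video is paired with a
    # randomly drawn caption; the boolean label records whether the pair
    # matches. When the pair does not match, the one-shot box label is unused.
    rng = rng or random.Random(0)
    batch = []
    for video, caption in zip(videos, captions):
        if rng.random() < rate:
            drawn = rng.choice(captions)  # likely a mismatched caption
            batch.append((video, drawn, drawn == caption))
        else:
            batch.append((video, caption, True))
    return batch
```

The matching label supervises a binary classifier on the \texttt{[CLS]} token, while `l2_loss` is applied only at the masked token positions.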
\section{Experiments} \subsection{Experimental Setup} \begin{table*} \centering {\setlength{\tabcolsep}{1.0em}\begin{tabular}{c|cccc|cccc}\hline \multirow{2}{*}{Method} & \multicolumn{4}{c|}{Declarative Sentence Grounding} &\multicolumn{4}{c}{Interrogative Sentence Grounding}\\ & 0.4 & 0.5 & 0.6 & Avg & 0.4 & 0.5 & 0.6 & Avg \\\hline GroundeR & 24.6 & 18.2 & 13.7 & 18.9 & 25.3 & 18.9 & 14.4 & 19.5 \\ STPR & 25.7 & 20.1 & 14.6 & 19.9 & 27.1 & 21.0 & 16.0 & 21.4 \\ STGRN & 27.6 & 20.9 & 16.3 & 21.5 & 28.5 & 21.9 & 17.2 & 22.5 \\ VOGnet & 32.1 & 24.4 & 19.9 & 25.8 & 33.1 & 25.5 & 20.9 & 26.7 \\ OMRN & 34.4 & 27.6 & 21.9 & 28.0 & 35.7 & 28.7 & 23.0 & 29.1 \\\hline VOGnet* & 36.4 & 29.4 & 22.0 & 29.3 & 37.0 & 28.4 & 22.6 & 29.3 \\ OMRN* & 39.5 & 30.0 & 22.3 & 30.6 & 38.9 & 30.5 & 24.1 & 31.2 \\ \hline \textbf{IT-OS} & \textbf{46.8} & \textbf{35.8} & \textbf{23.2} & \textbf{35.3} & \textbf{46.2} & \textbf{34.6} & \textbf{25.2} & \textbf{35.3}\\\hline \end{tabular}} \caption{\label{sota_VidSTG} Comparison with baselines on VidSTG. Note that all methods are trained under the one-shot setting. The $*$ indicates baselines that use MDETR as the object detector backbone, the same as IT-OS.} \end{table*} \begin{table} \centering {\setlength{\tabcolsep}{0.7em}\begin{tabular}{c|cccc}\hline Method & 0.4 & 0.5 & 0.6 & Avg \\\hline GroundeR & 32.1 & 27.8 & 24.3 & 28.1 \\ STPR & 33.4 & 28.9 & 25.4 & 29.2 \\ STGRN & 35.5 & 30.4 & 26.3 & 30.7 \\ VOGnet & 38.8 & 32.7 & 26.9 & 32.8 \\ OMRN & 40.1 & 34.5 & 28.4 & 34.4\\\hline VOGnet* & 41.2 & 35.8 & 29.5 & 35.5 \\ OMRN* & 45.5 & 37.7 & 30.4 & 37.9\\\hline \textbf{IT-OS} & \textbf{51.9} & \textbf{42.9} & \textbf{33.6} & \textbf{42.8}\\\hline \end{tabular}} \caption{\label{sota_VID-sentence} Comparison with baselines on VID-sentence. All methods are trained under the one-shot setting.
The $*$ indicates baselines that use MDETR as the object detector backbone.} \end{table} \vpara{Datasets.} We consider two video grounding benchmarks for evaluation: (1) {\textit{VidSTG \cite{zhang2020does}}} is a large-scale benchmark dataset for video grounding, constructed based on the VidOR dataset \cite{shang2019annotating}. VidSTG contains $10,000$ videos and $99,943$ sentences, with $55,135$ interrogative sentences and $44,808$ declarative sentences. These sentences describe $79$ types of objects appearing in the videos. We follow the official dataset split of \cite{zhang2020does}. (2) \textit{VID-sentence \cite{chen2019weakly}} is another widely used video grounding benchmark, constructed based on the VID \cite{ILSVRC15} dataset. There are 30 categories and $7,654$ video clips in this dataset. We report the results of all methods on the validation set of the VID-sentence dataset; we obtain similar observations and conclusions on the test set. \vpara{Implementation Detail.} For video preprocessing, we randomly resize the frames with a maximum size of $640\times640$. Other data augmentation methods, such as random horizontal flipping and random-size cropping, are applied at the same time. During training, the learning rate is $0.00005$ by default and decays by a factor of $10$ every $35$ epochs. The batch size is $1$ and the maximum number of training epochs is $100$. We implement IT-OS in PyTorch and train it on a Linux server. For model hyperparameters, we set $\lambda=60\%$ and $\Delta=0.7$. Most natural language spatial video grounding models use a pretrained detection model as the backbone. Thus, like them, we initialize the detection components of IT-OS from the official pretrained MDETR \cite{kamath2021mdetr}. \vpara{Evaluation Metrics.} We follow the evaluation protocol of \cite{chen2019weakly}.
Specifically, we compute the \underline{I}ntersection \underline{o}ver \underline{U}nion (IoU) metric between the predicted spatial bounding box and the ground truth per frame. The prediction for a video is considered ``accurate'' if the average IoU over all frames exceeds a threshold $\alpha$, which is set to $0.4$, $0.5$, and $0.6$ during testing. \vpara{Baselines.} Since existing video grounding methods are not directly applicable to the one-shot setting, we extend several state-of-the-art methods as baselines. Specifically, to have a comprehensive comparison, we consider 1) fully supervised models, including \textbf{VOGnet} \cite{sadhu2020video}, \textbf{OMRN} \cite{zhang2020object}, and \textbf{STGRN} \cite{zhang2020does}; and 2) other widely known methods, including the video person grounding method \textbf{STPR} \cite{yamaguchi2017spatio} and the visual grounding method \textbf{GroundeR} \cite{rohrbach2016grounding}. \subsection{Performance Comparison} The experimental results for one-shot video grounding on the VidSTG and VID-sentence datasets are shown in Tables \ref{sota_VidSTG} and \ref{sota_VID-sentence}, respectively. According to the results, we have the following observations: \begin{itemize}[leftmargin=*] \item Not surprisingly, although extended to the video grounding setting, baselines from other domains, including the video person grounding method STPR and the visual grounding method GroundeR, achieve inferior results on video grounding benchmarks. They lack domain-specific knowledge and might fail to effectively model the spatial-temporal relationships between videos and language queries. \item IT-OS consistently achieves the best performance on both benchmarks and across multiple experimental settings by a large margin. Remarkably, IT-OS boosts the average performance of the previous state of the art, OMRN, from $28.0/29.1/34.4$ to $35.3/35.3/42.8$ on VidSTG (declarative/interrogative) and VID-sentence, respectively. This demonstrates the superiority of IT-OS on one-shot video grounding.
\item The baselines are implemented with the backbones used in their original papers, which differ from ours. To further disentangle the sources of performance improvement, we re-implement the best-performing baselines (VOGnet* and OMRN*) with the same object detection backbone, MDETR, as IT-OS. Although the new backbone brings performance improvement, the best-performing baseline, OMRN*, still underperforms IT-OS by over $4$ points in average accuracy on both datasets. This further confirms the effectiveness of our novel model designs, eliminating the interference of different pre-training parameters. We attribute the improvement to the end-to-end modeling, where different modules can simultaneously benefit from each other. In addition, the proposed information tree alleviates the negative effects of irrelevant frames and effectively models the interactions between the video global/local information and the language query. The self-supervised tasks based on the information tree enhance representation learning under the limited one-shot labels. \end{itemize} \begin{table}[t] \centering {\setlength{\tabcolsep}{0.6em}\begin{tabular}{c|cccc}\hline Method & 0.4 & 0.5 & 0.6 & Avg\\ \hline GroundeR & 42.72 & 33.77 & 27.05 & 34.51 \\ STPR & 47.95 & 36.19 & 30.41 & 38.18 \\ STGRN & 49.25 & 44.03 & 34.89 & 42.72 \\ VOGnet & 53.17 & 43.47 & 33.77 & 43.47 \\ OMRN & 55.22 & 46.64 & 37.50 & 46.45 \\\hline IT-OS (OS) & 51.87 & 42.91 & 33.58 & 42.79 \\\hline \end{tabular}} \caption{\label{sota_VID-sentence_supervised} Comparison with baselines on VID-sentence. The baselines are trained with full supervision.
OS denotes IT-OS trained under the one-shot setting.} \end{table} \begin{table*}[t] \centering {\setlength{\tabcolsep}{0.8em}\begin{tabular}{ccc|cccc|cccc}\hline \multicolumn{3}{c|}{\multirow{2}{*}{}} & \multicolumn{4}{c|}{Declarative Sentence Grounding} &\multicolumn{4}{c}{Interrogative Sentence Grounding}\\ $\Gamma_{self}$ &$\Gamma_{tree}$ & $\Gamma_{crop}$ & 0.4 & 0.5 & 0.6 & Avg & 0.4 & 0.5 & 0.6 & Avg \\\hline & & & 39.00 & 30.52 & 17.61 & 29.05 & 38.78 & 28.75 & 19.67 & 29.07 \\ \checkmark & & & 40.52 & 32.32 & 18.83 & 30.56 & 40.82 & 31.44 & 20.66 & 30.97 \\ &\checkmark & & 42.34 & 32.65 & 20.35 & 31.78 & 42.26 & 32.02 & 21.89 & 32.06 \\ \checkmark& \checkmark & & 44.16 & 33.38 & 21.11 & 32.89 & 44.55 & 33.78 & 23.19 & 33.84 \\ &\checkmark&\checkmark& 44.77 & 34.62 & 22.93 & 34.11 & 44.30 & 33.23 & 24.17 & 33.90 \\ \checkmark&\checkmark&\checkmark & \textbf{46.75} & \textbf{35.81} & \textbf{23.23} & \textbf{35.26} & \textbf{46.16} & \textbf{34.55} & \textbf{25.19} & \textbf{35.30}\\\hline \end{tabular}} \caption{\label{ablationstudy} Ablation study on the VidSTG dataset.} \end{table*} {\setlength{\tabcolsep}{0.3em}\begin{table}[t] \centering \begin{tabular}{ccc|cccc}\hline $\Gamma_{self}$ &$\Gamma_{tree}$ & $\Gamma_{crop}$ & 0.4 & 0.5 & 0.6 & Avg \\\hline & & & 44.40 & 35.07 & 27.24 & 35.57 \\ \checkmark & & & 46.64 & 36.38 & 28.54 & 37.19 \\ &\checkmark & & 47.95 & 38.99 & 29.85 & 38.93 \\ \checkmark& \checkmark & & 49.44 & 40.30 & 31.16 & 40.30 \\ & \checkmark&\checkmark& 50.19 & 40.49 & 32.46 & 41.04 \\ \checkmark&\checkmark&\checkmark & \textbf{51.87} & \textbf{42.91} & \textbf{33.58} & \textbf{42.79} \\\hline \end{tabular} \caption{\label{ablationstudyVIDS} Ablation study on the VID-sentence dataset.} \end{table} } \subsection{Comparison with Fully Supervised Methods} We are interested in 1) how different baselines perform under the fully supervised setting; and 2) how one-shot IT-OS performs compared to these baselines.
Towards this end, we train multiple baselines and IT-OS with all labels on the VID-sentence dataset. The experimental results are shown in Table~\ref{sota_VID-sentence_supervised}. From the table, we have the following findings: \begin{itemize}[leftmargin=*] \item Remarkably, the performance gap between one-shot IT-OS and the fully supervised OMRN is less than $4\%$. Such a minor gap demonstrates the effectiveness of IT-OS in learning with limited annotations. This is a significant practical merit, since annotation budgets are often limited in real-world applications. \item Surprisingly, one-shot IT-OS can still outperform some weaker baselines such as GroundeR and STPR. These results further reveal the benefit of end-to-end modeling for video grounding. \end{itemize} \subsection{Ablation Study} We are interested in how different building blocks contribute to the effectiveness of IT-OS. To this end, we surgically remove several components from IT-OS and construct different architectures. The investigated components include the information tree ($\Gamma_{tree}$), the branch cropping ($\Gamma_{crop}$), and the self-supervised training ($\Gamma_{self}$). Note that, except for the branch cropping, the remaining sub-components cannot be removed independently, so we do not conduct separate ablation studies for them. Results on the VidSTG and VID-sentence datasets are shown in Table~\ref{ablationstudy} and Table~\ref{ablationstudyVIDS}, respectively. There are several observations: \begin{itemize}[leftmargin=*] \item Overall, removing any component incurs a performance drop, demonstrating the necessity and effectiveness of the information tree, branch search \& cropping, and self-supervised training. \item Stacking multiple components outperforms the architecture with a single component. This result reveals that the proposed components can benefit from each other in end-to-end training and jointly boost one-shot video grounding.
\end{itemize} \subsection{Case Study} We conduct a case study to visually examine the ability of IT-OS in detail. Specifically, we randomly sample $3$ videos from the datasets and sample $6$ frames from each video for visualization. We compare our IT-OS model with the baseline method OMRN and with the base ablation model of IT-OS, which removes the self-supervised module and the information tree. As shown in Figure~\ref{fig:4_1}, we have the following key findings: (1) IT-OS identifies the target among all objects in the video more accurately than the best-performing previous method, demonstrating the stronger representation extraction and analysis capabilities of our model. (2) Even when the target object is selected correctly, IT-OS localizes a more precise spatial area than the previous two-stage methods. This reflects that the end-to-end IT-OS acquires more accurate domain knowledge by training the whole model on the target dataset. (3) After adding the information tree and the self-supervised module, IT-OS outputs more precise bounding boxes. This reveals that combining the two modules introduces stronger supervision signals for model training, giving the model stronger detection ability. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{figure4_1.PNG} \caption{Examples of the detection result visualization. IT-OS (Base) denotes the IT-OS model without the self-supervised module and the information tree. GT denotes the target labels.} \label{fig:4_1} \end{figure} \section{Conclusion} In this paper, we introduce one-shot learning into the natural language spatial video grounding task to reduce the labeling cost. To achieve this goal, the key is to make full use of the single labeled frame of each video. Invalid frames unrelated to the input text and target objects introduce confounding factors into the one-shot training process.
To address this, we design an end-to-end model (IT-OS) based on an information tree. Specifically, the information tree module merges frames with similar semantics into one node. Then, by searching the tree and cropping the invalid nodes, we obtain complete and valid semantic units of the video. Finally, two self-supervised tasks are used to compensate for the insufficient supervision. \section*{Acknowledgements} This work is supported in part by the National Natural Science Foundation of China (Grant No.62037001, No.61836002, No.62072397). This work is also partially funded by Hangzhou Hikvision Digital Technology.
\section*{CONTENTS} \section{Introduction} In this paper, we extend the results of Todorov \cite{Todorov96} (on the existence of generalized solutions for a general set of differential operators) in two directions. If $P$ is a linear partial differential operator of order r, written as $P\in LPDO(r)$, with $C^{\infty}$ coefficients, and ${\la}_P$ is its (total) symbol, then Todorov proves the existence of generalized solutions $f$ for the equation $\rz P(f)(\rz x)=\rz g(\rz x)$ for all $x\in\bbr^m$ outside $\SZ_{{\la}_P}$ and for quite general $g$, where ${\SZ}_h$ denotes the $\{x\in\bbr^n:h(x)=0\}$. From a slightly different perspective, Todorov's result says that for a general set of standard $g$, there exists internal $f\in\rz\dcm$, such that $(\rz\! P(f)-\rz\! g)|_{^\s\bbr^m}=0$. ($^\s\bbr^m$ denotes the standard vectors in the internal vector space $\rz\bbr^m$). That is, $\rz P(f)-g$ vanishes pointwise, ie., has $0^{th}$ order contact with $\rz\bbr^n$ at each point of $^\s\bbr^m$. The standard geometry and jet definitions with respect to PDEs (partial differential equations) are recalled in the next section. The first extension of Todorov in this paper is to give a straightforward construction that there exists internal smooth maps $f$ such that $\rz\! P(f)-\rz\! g$ has infinite order contact with $\rz\bbr^n$ at all points of $^\s\bbr^m$. (See Theorem \ref{thm:lin eqn,infinite contact soln}.) Another way of stating this is that Todorov solves the equations at each standard $x$; here we solve the equations along with the infinite family of integrability differential equations associated to the given differential equation at each such $x$. But the two corollaries carry the critical import of this theorem. Corollary \ref{cor: infinite order Todorov} is the infinite order direct descendant of Todorov's result. It depends on the standard jet space work done in Section \ref{section: standard jet work,prolong and rank}. 
The needed standard statement following from this work lies in Corollary \ref{lem:symb prolong is surj}. The second corollary becomes possible only within the perspective of this paper. We can consider those partial differential operators whose (total) symbols vanish to some finite order, ie., have any finite order contact with $\rz\bbr^m$ along $^\s\bbr^m$; see Section \ref{section:prolong and vanishing}\;. Note here that Todorov only consider the $0^{th}$ order vanishing case. Our Corollary \ref{cor: infinite order solns for finite contact} says that as long as the vanishing order of $\rz g$ at standard points is controlled by that of the symbol of $\rz P$, we can find internal smooth $f$ solving $\rz P(f)-\rz g =0$ to infinite order on $^\s\bbr^m$. Such a theorem is unstatable within the venue of Todorov's setting. The work in standard geometry allowing the proof of this result occurs in section \ref{section:prolong and vanishing}; see Corollary \ref{cor:removing finite zeroes}. Parenthetically, it's conceivable that the PDE jet results of Sections \ref{section:prolong and vanishing} and \ref{section: standard jet work,prolong and rank} exist in the literature, but the author could not find them. The overwhelming bulk of the work in this paper concerns the linear theory. But, the nonlinear PDE jet framework and NSA fit quite well together, and so the second direction of extension of the result of Todorov is into nonlinear partial differential equations, NLPDEs. There is a well developed theory of nonlinear partial differential equations within the jet bundle framework, exemplified in the texts of Pommaret,\cite{Pommaret1978} Olver, \cite{Olver1993} and Vinogradov, \cite{VinogradovGeomJetSpNLDE1986}. Nonstandard analysis is as comfortable in this framework as in the linear. 
So, in Section \ref{section: nonlinear work}, we introduce simple conditions, $PCP$ and ${}^\s PCP$, on the symbols of general NLPDEs of finite order, and give an easy proof of existence of generalized solutions in the sense of Todorov for those nonlinear partial differential operators satisfying these criteria. We show that Todorov's nonvanishing condition on the symbol implies that his $LPDO$'s satisfy ${}^\s PCP$. But our theorem asserts the existence of generalized solutions in the far broader nonlinear arena. The standard import of these results is yet to be worked out. See the conclusion for a curious result on this. We prove a result that might appear startling: almost all internal smooth functions are solutions on $^\s\bbr^m$ of any standard differential operator that has the zero function as a solution. It seems that the work of Baty, et al., \cite{BatyOneDimlGas2007}, might be a useful framing for this. That is, their analysis needs a lot of elbow room on the infinitesimal level to allow adjusting, eg., the infinitesimal widths of Heaviside jumps, etc. It seems that the results here might be interpreted as saying that the formal (nonstandard) jet theory of symbols allows such roominess for such empirically motivated adjustments. The author relies on a jet bundle framework when some might consider it too big a machine for the job. Yet, from the point of view of nonstandard analysis, the jet bundle framework is natural and, eg., allows an easy generalization of Todorov's result to the nonlinear case. The total and principal symbols of a differential operator have a natural geometric setting which, when extended to the nonstandard world, allows a geometric consideration of generalized solutions and, in fact, of generalized differential operators via *smooth symbols.
Todorov defines his differential equation and constructs his solutions within spaces of generalized functions defined on $\rz\Om$, where $\Om$ is an arbitrary open subset of $\bbr^m$, and gets his localizable differential algebra of generalized functions by `quotienting' out the parts of the $\rz C^{\infty}$ functions defined on nonnearstandard points of $\rz \Om$. (Note also his NSA jazzed-up version of the constructions of Colombeau, Oberguggenberger, and company in eg., \cite{TodorVern2008}, of which we will say more later). On the other hand, the present paper focuses almost exclusively on extending Todorov's existence result to more general classes of differential operators and very little on a broader analysis of his differential algebra of generalized functions. (In a follow-up to this paper, we will refine the results appearing here within the aforementioned nonstandard version of Colombeau's algebra of generalized functions constructed by Todorov and Oberguggenberger, \cite{OT98}). Accordingly, our constructions occur on all of $\rz\bbr^m_{nes}$. If we restrict to differential operators whose finite vanishing order sets don't have infinitely many components, so that they have no nontrivial *limiting behavior at nonnearstandard points, the results here should hold without change for Todorov's localizable differential algebras. The geometric theory of differential equations and their symmetries, as exemplified in eg., Olver, \cite{Olver1993}, and Pommaret, \cite{Pommaret1978}, is a natural framework within which to integrate the generalizing notions of NSA. This is the first of a series of papers in which the author intends to attempt a theory of generalized solutions (existence and regularity) and symmetries of differential equations within the context of the extensive jet theory.
Note that although this approach seems to be new, there are a growing number of research programs moving beyond classical approaches; see eg., Colombeau, \cite{Colombeau1985}, Oberguggenberger, \cite{Oberguggengerger1992}, and Rosinger, \cite{Rosinger1987}. For a good overview of the new theories of generalized functions, see eg., Hoskins and Pinto, \cite{HoskinsPintoGeneralizFcns2005}, and for specific surveys of the obstacles to the construction of a nonlinear generalization of distributions and a comparison of the characteristics of these new theories, see Oberguggenberger, \cite{Oberguggenberger1995a}, and more recently Colombeau, \cite{ColombeauSurvGeneralizFcnsInfinites2006}. Note, in particular, the flurry of work extending the arena of Colombeau algebras into mathematical physics involving differential geometry and topology that Kunzinger, \cite{Kunzinger2007}, summarizes. Further note that Oberguggenberger and Todorov, see eg., \cite{OT98}, have shown how much of the theoretical foundations of Schwartz type Colombeau algebras can be simplified and strengthened within the venue of nonstandard constructions, and recent work of Todorov and coworkers, see eg., \cite{TodorVern2008}, has extended the results within this model. See also Todorov's lecture notes, \cite{TodorovNotes}. None of these approaches considers the symbol of the differential operator, and the nonstandard extension of its geometric milieu, as the primary object of study. This is the perspective of the current work. Finally, it seems that nonstandard methods are much more encompassing than the impressive work of the Colombeau school of generalized functions; eg., consider the work of the mathematical physicists working around Baty, eg., see \cite{BatyOneDimlGas2007} and \cite{BatyShockwaveStar2009}.
Note, in particular, the perspective of Baty, et al., on p.~37 of \cite{BatyOneDimlGas2007} (with respect to the benefits of nonstandard methods) where they note that the generalized functions of the Colombeau school \begin{quote} ...are not smooth functions and do not support all of the operations of ordinary algebra and calculus, the multiplication of singular generalized functions is accomplished via a weak equality called association. In contrast to such calculations, the objects manipulated in equations (4.4) to (4.13) (and indeed in the following section of this report) are smooth nonstandard functions. \end{quote} In reading their papers, it's clear that their need for ``ordinary algebra'', etc., is critical to their analysis. The author also believes that, here also, having at hand the full capacity of mathematics via transfer straightforwardly allows many of the constructions of this paper. \section{Some jet PDE basics and nonstandard variations} \subsection{Nonstandard analysis} \subsubsection{Resources} Good introductions to nonstandard analysis abound. One might start with the pedestrian tour of its basics in the introduction of the author's dissertation, \cite{McGaffeyPhD}; then get deeper with the introduction of Lindstr{\o}m, \cite{Lindstrom1988}, and follow this with the constructive introduction of Henson, \cite{Henson1997}. There is also Nelson's (axiomatic) internal set theory, a theory with similar goals and achievements to Robinson's (currently superstructure/ultrapower) nonstandard analysis; a good text being that of Lutz and Goze, \cite{LutzGoze1981}. One could also check out strategic outgrowths from these nonstandard schools, eg., the work of Di Nasso and Forti, \cite{DiNassoFortiTopExtens2006}. One might also check out their (and Benci's) constructive survey, \cite{BenciFortiNassoEightfoldPath2006}, of a variety of means to a nonstandard mathematics (which also includes a good introduction).
There are yet other approaches to a nonstandard mathematics, notably the Russian school; for a good example, see \cite{InfinitesAnalyGordonKusraevKutateladzeBk2002}. \subsubsection{Impressionistic introduction with terminology} Let's give an impressionistic introduction to nonstandard mathematics via the (extended) ultrapower constructions. There are many motivations for the need of a nonstandard mathematics. To have our real numbers, $\bbr$, embedded in a much more robust object, $\rz\bbr$, with the properties of the real numbers, but also containing infinite and infinitesimal quantities, is a boon to a direct formalization of intuitive strategies. Model theoretically, these have been around for more than 60 years (some would argue much longer) via eg., ultrapowers or the compactness theorem. The ultrapower is the construction generally least involved with theoretical matters of the foundations of math (but see \cite{DiNassoFortiTopExtens2006}). Thinking of eg., infinite numbers as limiting properties of sequences of real numbers, one might attempt to construct nonstandard real numbers as equivalence classes of such sequences, ie., $\rz\bbr=\bbr^\bbn/\sim$ where $/\!\sim$ denotes the forming of such equivalence classes. Clearly, one can extend the operations and relations on $\bbr$ to $\bbr^\bbn$ coordinatewise, getting a partially ordered ring; but almost all of the nice properties of $\bbr$ are lost. But, if $\SP(\bbn)=2^\bbn$ denotes the power set of $\bbn$, it turns out that there are objects $\SU\subset\SP(\bbn)$, the ultrafilters, such that defining our equivalence relation in terms of elements of $\SU$ preserves all ``well stated'' properties of $\bbr$.
More specifically, given $(r_i),(s_i)\in\bbr^\bbn$, define $(r_i)\sim (s_i)$ if $\{i:r_i=s_i\}\in\SU$ and $(r_i)<(s_i)$ if, again, $\{i:r_i<s_i\}\in\SU$; we then find that the extended ring operations and partial order are well defined on the quotient $\bbr^\bbn/\SU$ and, in fact, it's not hard to prove that we get a totally ordered field containing $\bbr$ (the set of equivalence classes of constant sequences) as a subfield. For $r\in\bbr$, let $\rz r=(r)/\sim$, the equivalence class containing the corresponding constant sequence, and let $\bsm{{}^\s A}=\{\rz r:r\in A\}\subset\rz\bbr$ denote the image of $A\subset\bbr$ in our nonstandard model of $\bbr$; eg., ${}^\s\bbr$ is the image of $\bbr$. In general, we will let $\bsm{\bk{r_i}}$ denote the equivalence class of a sequence $(r_i)\in\bbr^\bbn$. Given this, existence of infinite elements is clear: if $(r_i)\in\bbr^\bbn$ with $r_i\ra\infty$ as $i\ra\infty$, then for each $s\in\bbr$, the set $\{i:r_i>s\}$ is in the Fr\'echet filter $\SF$ of cofinite sets (and $\SF\subset\SU$) and so, by our definition, $\bk{r_i}>\rz s$. Note that to verify the field properties and total ordering of $\rz\bbr$, we need the full strength of ultrafilters, eg., the maximality property: if $A\subset\bbn$, then precisely one of $A$ or $\bbn\ssm A$ is in $\SU$. In particular, if $\om=\bk{m_i}\in\rz\bbn$ where $m_i\uparrow\infty$ as $i\ra\infty$, then $\om$ is infinite. Since we can form the $\SU$ equivalence class of arbitrary sequences of real numbers and get a much larger field with all of the `same' well formed properties as $\bbr$, why can't we do this for $\bbn$, $\bbq$, $\bbq(\sqrt{5})$, etc., and get `enlarged' versions of these? We can, but try to do this with the algebra $F(\bbr)$ of real valued smooth maps on $\bbr$; ie., consider $F(\bbr)^\bbn/\sim$ as before. Clearly, this is a ring, but do these `functions' (on $\rz\bbr$) have, in some good sense, all of the properties of functions in $F(\bbr)$?
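For concreteness, here is the standard first example of an infinitesimal in the above notation (valid for any $\SU$ containing the cofinite sets): let $\Ft=\bk{1/i}$. For each real $s>0$, the set $\{i:1/i<s\}$ is cofinite and so lies in $\SU$, while $\{i:1/i>0\}=\bbn\in\SU$; hence
\begin{align}
0<\Ft<\rz s\quad\text{for every}\;s\in\bbr,\;s>0,
\end{align}
ie., $\Ft$ is a positive infinitesimal, and its multiplicative inverse $\bk{i}$ is an infinite element of $\rz\bbr$.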
Ignoring subtleties, the simple answer is yes, simply because these elements are internal and therefore fall under the aegis of the all-encompassing {\it principle of transfer}; but let's see, to some extent, how this works in this case. Let's consider, for example, the nonstandard support (*support) of an equivalence class $\bk{f_i}\in\rz F(\bbr)$. (Recall that if $f\in F(\bbr)$, then the support of $f$, $supp(f)$, is the closure of the set of $t\in\bbr$ where $f(t)\not=0$.) But then, as we seem to be following a recipe of extending everything componentwise and then taking the quotient, if $A_i=supp(f_i)$, then $\rz supp(\bk{f_i})$ must be the equivalence class $\bk{A_i}$. Yet, how is this a subset of $\rz\bbr$? This is a special case of the next problem of nonstandard analysis: extending `is an element of' to our ultrapower constructions. Miraculously, the properties of ultrafilters (eg., our $\SU$) allow one to (simplemindedly!) define $\bk{r_i}\in\bk{A_i}$ if $\{i:r_i\in A_i\}\in\SU$. (This really should be written $\bk{r_i}\;\rz\!\!\in\bk{A_i}$, but starring all extended operations, relations, etc. can rapidly get confusing.) Note that these subsets of $\SP(\rz\bbr)$ of the form $\bk{A_i}$ are called {\it internal sets} and are {\it precisely those subsets that extend the properties of $\SP(\bbr)$ (and therefore shall be denoted $\rz\SP(\bbr)$) via the principle of transfer}. For example, the typical bounded subset $\SC$ of $\rz\bbr$ does not have a nonstandard supremum, ie., $\rz\sup \SC$ does not exist; in particular, the theorem that bounded subsets of $\bbr$ have suprema does not transfer to all *bounded elements of $\SP(\rz\bbr)$. (For example, the set of {\it infinitesimals}, denoted $\mu(0)$ here, certainly does not have a supremum.) Nonetheless, the transfer principle certainly does apply to the internal $\bk{A_i}$, and the proper definition is (surprise!) $\rz\sup\bk{A_i}=\bk{\sup A_i}$.
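As a small worked instance of this recipe (our own illustration): let $\SA=\bk{A_i}$ with $A_i=[0,1-\f{1}{i}]$. Then $\SA$ is an internal *bounded subset of $\rz\bbr$ and
\begin{align}
\rz\sup\SA=\bk{\sup A_i}=\bk{1-\f{1}{i}},
\end{align}
an element of $\rz\bbr$ infinitesimally close to $\rz 1$ (but strictly smaller); by contrast, $\mu(0)$ is external and, as just noted, has no *supremum at all.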
Note here that other notable examples of external subsets of $\rz\bbr$ are ${}^\s\bbr$ (and in fact ${}^\s A$ for any infinite subset $A\subset\bbr$), \begin{align} \rz\bbr_{nes}=\{\Ft\in\rz\bbr:|\Ft-\rz s|\in\mu(0)\;\text{for some}\;s\in\bbr\}, \end{align} the {\it nearstandard real numbers}, $\rz\bbr_\infty$, the infinite real numbers, and the infinite natural numbers, $\rz\bbn_\infty$, whose elements are nonetheless said to be {\it *finite}. If $A\subset\bbr$ is finite, let $|A|\in\bbn$ denote its cardinality, let $\om=\bk{m_i}\in\rz\bbn$ be an infinite *finite integer and $A_i\subset\bbr$ be such that $\{i:|A_i|=m_i\}\in\SU$. Then we say that $\bk{A_i}\subset\rz\bbr$ is a {\it *finite subset of *cardinality} $\om$. Although (for $\om$ infinite) these sets are infinite (in fact uncountable!), transfer implies that *finite subsets of $\rz\bbr$ have the `same' properly stated properties that finite subsets of $\bbr$ have. Nonetheless, for sufficiently {\it saturated} ultrapowers, there exist *finite subsets of $\rz\bbr$ containing ${}^\s\bbr$. These will play a role in this paper. We still haven't considered how the elements of $\rz F(\bbr)=F(\bbr)^\bbn/\sim$ can be considered as functions on $\rz\bbr$, but by now the reader can see that we must define $\bk{f_i}(\bk{x_i})=\bk{f_i(x_i)}$ and hope that the properties of $\SU$ ensure that this is well defined (ie., independent of choice of representatives) and is a function. This can indeed be verified, and these functions are the {\it internal functions} in $F(\rz\bbr)$; the function $\Ff:\rz\bbr\ra\rz\bbr$ defined by $\Ff(x)=x$ if $x\sim 0$, ie., if $x$ is infinitesimal, and $\Ff(x)=0$ if $x\not\sim 0$ is an {\it external function}, ie., it does not satisfy the internality criteria allowing the use of transfer. For example, it is *bounded (bounded in $\rz\bbr$), but $\rz\sup\Ff$ does not exist.
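A basic example of a *finite set (standard in the literature, stated here in our notation) is the hyperfinite grid: with $\om=\bk{i}$ and $A_i=\{0,\f{1}{i},\f{2}{i},\ldots,1\}$, the internal set
\begin{align}
\SA=\bk{A_i}=\Big\{0,\f{1}{\om},\f{2}{\om},\ldots,1\Big\}\subset\rz[0,1]
\end{align}
is a *finite set of *cardinality $\om+1$. By transfer it has a least and a greatest element, and every standard $r\in[0,1]$ lies within $\f{1}{\om}$ of some point of $\SA$; it does not, however, contain ${}^\s[0,1]$ (no irrational standard point lies in it), for which the saturation statement above is needed.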
Yet again, it's straightforward that for *bounded $\bk{f_i}$, $\rz\sup\bk{f_i}$ is well defined precisely by our recipe: $\bk{\sup f_i}\in\rz\bbr$ (this *supremum may be an infinite element of $\rz\bbr$). Internal subsets of $\rz\bbr$ of the form $\bk{A}$ (ie., the equivalence class containing the constant sequence $(A,A,\ldots)$ for some $A\subset\bbr$) are called the {\it standard sets}. Following our recipe for denoting the equivalence class of a constant sequence by starring, $\bk{A}$ is usually denoted $\rz A$. For perspective, note that the copy of $[0,1]$ lying in $\rz[0,1]$, ie., ${}^\s[0,1]\subset\rz[0,1]$, is very sparse. For example, given an infinitesimal, $0<\Ft=\bk{t_i}\in\rz\bbr$ (eg., suppose $t_i\downarrow 0$ as $i\ra\infty$) and $r\in(0,1)$, then $\rz r+[-\Ft,\Ft]\subset\rz[0,1]$, but intersects ${}^\s[0,1]$ only at $\rz r$. Let's consider the {\it standard function} $\rz \sin(x)$. First of all, $\rz\sin$ is defined essentially as we defined standard sets, $\rz A$, ie., the $f_i$ above are all the function $\sin$. So if we define the {\it *domain} of $\bk{f_i}$ as we have all else: $\rz dom(\bk{f_i})\dot=\bk{dom(f_i)}$, we see that $\rz dom(\rz\sin)$ is all of $\rz\bbr$. (Or, as the domain of $\sin$ is $\bbr$, transfer says that the *domain of $\rz\sin$ is $\rz\bbr$.) A consequence of our constructive approach is the fact that $\rz dom(\rz\sin)=\bk{dom(f_i)}$ is internal. It's not hard to check that $\rz\sin$ is really an extension of $\sin:\bbr\ra[-1,1]$: first, restricting the graph of $\rz\sin$ to ${}^\s\bbr$ gives just the image of the graph of $\sin$ in $\rz\bbr^2$; second, all of the symmetry and character properties hold; and third, it has all of the (transferred) analytic properties that $\sin$ has. Before we conclude this tour, let's look at the standard part map. We defined the external (subring) $\rz\bbr_{nes}\subset\rz\bbr$ above.
It should not be surprising that this is precisely those $\bk{r_i}\in\rz\bbr$ satisfying $|\bk{r_i}|<\rz t$ for some $t\in\bbr$ (here $|\;|:\rz\bbr\ra\rz[0,\infty)$ is defined as all else). But by its definition, any $\bk{r_i}\in\rz\bbr_{nes}$ satisfies $\bk{r_i}\sim \rz r$ for some (clearly unique) $r\in\bbr$, ie., there is a well defined map (homomorphism onto!) $\fst:\rz\bbr_{nes}\ra\bbr$, the {\it standard part map}. Sometimes we will write ${}^o\bk{r_i}$ for $\fst\bk{r_i}$. Note then that if $\bk{f_i}:\rz\bbr\ra\rz\bbr$ has image in $\rz\bbr_{nes}$, then we can define $\fst\bk{f_i}:\bbr\ra\bbr$ to be the map $r\in\bbr\mapsto\fst(\bk{f_i}(\rz r))$. Given this, if $\om=\bk{m_i}\in\rz 2\bbn$ with $m_i\uparrow\infty$ as $i\ra\infty$ (eg., $\om$ is infinite), consider $f_i$ given by $x\mapsto\sin(m_ix)$, so that writing $\xi=\bk{x_i}\in\rz\bbr$, we have $\rz\sin(\om \xi)=\bk{\sin(m_ix_i)}$. By transfer, $\xi\mapsto\rz\sin(\om\xi)$ has all of the symmetry and analytic properties of $x\mapsto\sin(2mx)$ for some $m\in\bbn$, eg., solves the nonstandard *differential equation $\Ff''+\om^2\Ff=0$; yet its standard part is not even Lebesgue measurable! \subsubsection{Formal tools} The four pillars of nonstandard analysis are the internal definition principle, transfer, saturation and (several versions of) ``overflow''. In order to discern the internal sets among all external sets, one can use the internal definition principle. It is basically an algorithmic way of determining if some object is of the form $\bk{S_i}$ and depends on the fact that all internal sets are elements of some standard set $\rz T$. It asserts that if $P(H)$ is a statement, with internal parameters, about a variable quantity $H$ in an internal set $\SX$ (of *functions, *measures, etc.), then $\{H\in\SX:P(H)\;\text{is true}\}$ is internal. As described above, transfer allows us to, eg., translate to the nonstandard world careful statements about regular mathematics.
Here we will need it to, eg., transfer to the nonstandard world the existence of maps of a certain type that have specified values on finite sets. Next, saturation has a variety of guises, one of which will be important here. Besides the need for the monads associated with neighborhood filters for a given topology, the specific form of saturation (see Stroyan and Luxemburg, \cite{StrLux76}, chapter 8) that will be needed here, in Section 4, ensures that *finite sets are sufficiently large. Specifically, let $X$ be an infinite set of cardinality not bigger than that of $\SP(\dcm)$. Then there exists a *finite set $\SA$ with ${}^\s X\subset\SA\subset\rz X$. This can be situated so that $\SA$ carries the same, well formed finitely stated, characteristics as $X$ (transfer). We will use this in the situation where $X$ is a particular collection of smooth maps or smooth sections of a bundle. We will also use an overflow type result that depends on our nonstandard model being polysaturated. See \cite{StrLux76}, chapter 7; below we paraphrase their theorem 7.6.2 for our use. \begin{theorem}\label{thm:extend external map} Suppose that $A\subset\rz\bbr^m$ is a subset, not necessarily internal, with cardinality less than that of $\rz\bbr^n$. Suppose that $F:A\ra\dstrcmn$ is any map. Then there is an internal subset $\bar{A}\subset\rz\bbr^m$ and an internal map $\bar{F}:\bar{A}\ra\dstrcmn$ such that $A\subset \bar{A}$ and $\bar{F}|_{A}=F$. \end{theorem} \subsection{Jet bundle constructions} In this section we cover enough of the basics of jets and the jet bundle formulation of (linear) differential operators, sufficient to formulate and prove our results. \subsubsection{Jet bundle setup} We will briefly summarize that part of jet theory that we need. Although the following formulation is straightforwardly generalized to smooth manifolds, for brevity's sake we will restrict to the Euclidean case. Let $\bsm{P_k(m,n)}$ denote the polynomial maps of degree at most $k$ from $\bbr^m$ to $\bbr^n$.
If $f\in\dcmn$, $x\in\bbr^m$ and $k\in\bbn$, let $\bsm{T^k_xf}\in P_k(m,n)$ denote the $k$th order Taylor polynomial of $f$ at $x$. By transfer, if $\rz P_k(m,n)$ denotes the $\rz\bbr$ vector space of internal polynomials from $\rz\bbr^m$ to $\rz\bbr^n$, $f\in\dstrcmn$ and $x\in\rz\bbr^m$, we have $\rz T^k_xf$, the internal $k^{th}$ order Taylor polynomial of $f$ at $x$. Note that transfer implies that this has all of the properties of the Taylor polynomial, suitably interpreted. Note also that although for $f=\rz g$ the map $\xi\mapsto\rz T^k_\xi f$ is simply the transfer of the standard map $x\mapsto T^k_xg$, for $\xi\in\rz\bbr^m_{\infty}$ or $k\in\rz\bbn_{\infty}$ the polynomial $\rz T^k_\xi f$ can be very pathological. We define an equivalence relation on \cmn\; as follows. We say that $f\in\dcmn$ \textbf{vanishes to $k$th order at $x$} if $T^k_xf=0$ and, for $f,g\in\dcmn$, we say that $f$ equals $g$ to $k$th order, written $\bsm{f\overset{x_k}\sim g}$, if $T^k_x(f-g)=0$. This defines an equivalence relation on \cmn. Let $\bsm{j^k_xf}$ denote the equivalence class containing $f$. We denote the set of equivalence classes by $\bsm{\SJ^k_{m,n,x}}$. There are a variety of definitions of $\SJ^k_{m,n,x}$ and it's not hard to show that one can identify $\SJ^k_{m,n,x}$ with the set of Taylor polynomials of order $k$ at $x$ of smooth maps $(\bbr^m,x)\ra\bbr^n$ and we can identify $j^k_xf$ with $T^k_xf$. (An equivalence class consists of all maps with a given $k^{th}$ order Taylor polynomial at $x$.) Let $\bsm{\SJ^k_{m,n}}=\cup_{x\in\bbr^m}\SJ^k_{m,n,x}$. $\SJ^k_{m,n}$ is a smooth fiber bundle, in fact, as our maps have range $\bbr^n$, a vector bundle, over $\bbr^m$ with fiber over $x\in\bbr^m$ given by $\SJ^k_{m,n,x}$. Let $\bsm{\pi^k_0}:\SJ^k_{m,n}\ra\bbr^m$ denote the bundle projection. Note also that if $l,k\in\bbn$ with $l>k$, then $\SJ^l_{m,n}$ is a bundle over $\SJ^k_{m,n}$; let $\bsm{\pi^l_k}$ denote the bundle projection.
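A simple illustration of these equivalence classes (our own example, with $m=n=1$): let $f(x)=\sin x$ and $g(x)=x$. Then $T^2_0f=x=T^2_0g$, so that
\begin{align}
f\overset{0_2}\sim g,\qquad j^2_0f=j^2_0g\in\SJ^2_{1,1,0},
\end{align}
while $T^3_0(f-g)=-x^3/6\not=0$, so $j^3_0f\not=j^3_0g$; the $2$-jet at $0$ remembers exactly the Taylor data $(f(0),f'(0),f''(0))=(0,1,0)$.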
We let $\bsm{C^{\infty}(\SJ^k_{m,n})}$ denote the $\bbr$ vector space of $C^{\infty}$ sections of $\SJ^k_{m,n}$. If $f\in\dcmn$, there is a canonical section of $\pi^k_0$ given by $\bsm{j^kf}:x\mapsto j^k_xf$. There is a canonical map, the operation of taking the $k$ jet: \begin{align} \bsm{j^k}:\dcmn \ra C^{\infty}(\SJ^k_{m,n})\quad f\mapsto j^kf \end{align} For later purposes we also need to define the infinite jet, $\bsm{j^{\infty}_xf}$, for $f\in\dcmn$. Doing a simplified rendering of projective limits, we will define the vector space of infinite jets at $x\in\bbr^m$, $\bsm{\SJ^{\infty}_{m,n,x}}$, to be the set of sequences $(f^0,f^1,f^2,\ldots)$ such that $f^k\in\SJ^k_{m,n,x}$ for all $k$ and, for all nonnegative integers $j<k$, $\pi^k_j(f^k)=f^j$. Then for a given $f\in\dcmn$, the infinite jet of $f$ at $x$, $j^\infty_xf$, is clearly the well defined element of $\SJ^\infty_{m,n,x}$ given by $(j^0_xf,j^1_xf,j^2_xf,\ldots)$. It is easy to see that $\SJ^{\infty}_{m,n,x}$ is an infinite dimensional vector space over $\bbr$, operations given componentwise, and that, for each $x\in\bbr^m$, the map $\bsm{j^{\infty}_x}:f\mapsto j^{\infty}_xf:\dcmn\ra\SJ^{\infty}_{m,n,x}$ is a vector space surjection with kernel the subspace of $g\in \dcmn$ such that $j^k_xg=0$ for all integers $k$, ie., $g$ vanishes to infinite order at $x$. We will also need the forgetful fiber projection $\bsm{\pi_{k,x}}:\SJ^{\infty}_{m,n,x}\ra\SJ^k_{m,n,x}$. As the range space is linear, $\pi_{k,x}$ is a surjective linear morphism and clearly has kernel the (ideal of) formal power series that vanish to $k^{th}$ order at $x$; see above. From the canonical (global) coordinate framing $\bsm{x_i}$, $1\leq i\leq m$ on $\bbr^m$, and $\bsm{y^j}$, $1\leq j\leq n$ on $\bbr^n$, we get induced coordinates, $\bsm{x_i, y^j_{\a}}$ for $|\a|\leq k$ and $1\leq j\leq n$, on $\SJ^k_{m,n}$ defined as follows. The $x_i$ are just the pullback of the coordinates on the base $\bbr^m$.
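The kernel of $j^{\infty}_x$ is far from trivial; the standard example (for $m=n=1$, $x=0$) is
\begin{align}
g(t)=\begin{cases} e^{-1/t^2}, & t\not=0,\\ 0, & t=0,\end{cases}
\end{align}
which is smooth with $\p^kg(0)=0$ for every $k\geq0$, so that $j^{\infty}_0g=0$ although $g$ is nonzero on every punctured neighborhood of $0$. (The surjectivity of $j^{\infty}_x$ asserted above is Borel's theorem: every formal power series at $x$ is the Taylor series of some smooth map.)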
If $\phi\in\SJ^k_{m,n}$, then $\phi\in\SJ^k_{m,n,x_0}$ for some $x_0\in\bbr^m$ and so $\phi$ can be written as $j^k_{x_0}f$ for some $f\in\dcmn$. Then \begin{align} x_i(\phi)=x_{0,i},\quad y^j_{\a}(\phi)\doteq\p^{\a}(f^j)(x_0)=\phi^j_{\a}. \end{align} where $\bsm{\p^{\a}}$ denotes $\f{\p^{|\a|}}{{({\p}x_1)}^{\a_1}\cdots{({\p}x_m)}^{\a_m}}$ where $\a=(\a_1,\ldots,\a_m)$. If $\la\in C^{\infty}(\SJ^k_{m,n},\bbr^n)$, then with respect to the induced coordinates, we write this as $\bsm{\la(x_i,y^j_{\a})}$. Therefore, for later use, we can Taylor expand $\la$ \textbf{with respect to the $x_i$ coordinates}, around a given $p_0$, as follows. Let $p_0=(x_{0,i},y^j_{0,\a})$ be a coordinate representation as above. Let $\bsm{\la^l}$ denote the $l^{th}$ coordinate of $\la$ with respect to the canonical coordinates on $\bbr^n$. Then the Taylor expansion to order $s$ in the base coordinates is \begin{align}\label{taylor exp 1} \la^l(x_i,y^j_{\a})=\sum_{|\beta|\leq s}K_{\beta}(x-x_0)^{\beta}\p^{\beta}(\la^l)(p_0)+ \tl{\la}^l(x_i,y^j_{\a}) \end{align} where $\tl{\la}^l\in C^{\infty}(\SJ^k_{m,n},\bbr^n)$ has all $x$-derivatives of order $\leq s$ vanishing at $p_0$, the $K_{\beta}$ are the usual factorial constants, $(x-x_0)^{\beta}=(x_1-x_{0,1})^{\beta_1}\cdots (x_m-x_{0,m})^{\beta_m}$, and $\bsm{\p^{\beta}}$ is the ${\beta}^{th}$ partial derivative with respect to the base coordinates. Therefore, for our purposes the vanishing order, normally defined in terms of the power of the maximal ideal at the given point in terms of the $x$ coordinates, will be defined in terms of the degree of vanishing derivatives (in $x$ coordinates) at the given point. We need to emphasize that we are considering vanishing order of the smooth maps on the jet bundle only with respect to dependence on the base coordinates.
Given the usual framing $\p_i=\f{\p}{\p x_i}$, $1\leq i\leq m$ for $T\bbr^m$, $\p_{i,x}$ being the frame for $T_x\bbr^m$, we have an induced framing of $T\SJ^k_{m,n}$ given by adjoining to these tangent horizontal vectors the vectors $\p_{y^j_{\a}}= \f{\p}{\p y^j_\a}$ that are tangent to the fibers of the bundle projection $\pi^k_0$ at $x$, for $j=1,\ldots,n$ and $|\a|\leq k$. The notion of contact is useful in understanding the sharpening of the results here vis-\`a-vis the results of Todorov. Given $x\in\bbr^m$ and a nonnegative integer $s$, we say that $f\in\dcmn$ has \textbf{contact }$\bsm{s}$ with $\bbr^m$ at $x$ if $f(x)=0$ and the graph of $f$, $\G_f\subset\bbr^m\x\bbr^n$, is flat to $s^{th}$ order at $(x,0)$; that is, if $T^s_xf=0$, ie., if $j^s_xf$ is the equivalence class containing the zero $s$ jet at $x$. We say that \textbf{$\bsm{f,g\in\dcmn}$ have $\bsm{s^{th}}$ order contact at $x$} if $f-g$ has $s^{th}$ order contact with $\bbr^m$, the graph of the $0$ function, at $x$. It should be obvious that this is an equivalence relation and that the set of all $g\in\dcmn$ that belong to the $s^{th}$ order contact class of $f$ is precisely the affine subset with the same $s^{th}$ order jet as $f$. \subsubsection{Prolonging jet maps and total derivatives} Let $\bsm{\bbr^p_m}$ denote the product bundle with fiber $\bbr^p$ and base $\bbr^m$; if $p=1$, we will denote this bundle by $\bsm{\bbr_m}$. If $x\in\bbr^m$, let $\bsm{\bbr^p_{m,x}}$ denote the vector space fiber of $\bbr^p_m$ over $x$. In the following we will be using \textbf{vector bundle} \textbf{maps}, ie., smooth maps of bundles over the same base that preserve fibers and cover the identity map on the base. The symbols of linear differential operators, $\la:\SJ^r_{m,1}\ra\bbr^p_m$, are maps of this type. The set of such maps is a $C^{\infty}(\bbr^m,\bbr)$ module and will be denoted by $\bsm{C^{\infty}(\SJ^r_{m,1},\bbr^p_m)}$.
If $\la: \SJ^k_{m,n}\ra \bbr^p_m$ is such a smooth bundle map, and $l\in\bbn$, then there exists a smooth bundle map $\bsm{\la^{(l)}}:\SJ^{k+l}_{m,n}\ra \SJ^l_{m,p}$\;, called the $\bsm{l^{th}}$\textbf{-prolongation of} $\bsm{\la}$, such that the following diagram is commutative \begin{align}\label{diag1} \begin{CD} \SJ^{k+l}_{m,n} @>\la^{(l)} >> \SJ^l_{m,p} \\ @V\pi^{k+l}_k VV @V\pi^l_0VV\\ \SJ^k_{m,n} @>\la >> \bbr^p_m \end{CD} \end{align} That is, since $j^s_x(\la\circ j^kf)$ depends only on derivatives up to order $s$ of $y\mapsto j_y^kf$ at $x$, and so only on the $k+s$ jet of $f$ at $x$, the following definition is well defined. If $f\in\dcmn$, then \begin{align}\label{eqn:coord free prolng formula} \bsm{\la^{(s)}}(j^{k+s}_x(f))=j^s_x(\la\circ j^kf). \end{align} The prolongation of vector fields on $\bbr^m$ to vector fields on $\SJ^k_{m,n}$ is given by fairly complicated recursion formulas. For treatments of prolongations of vector fields in somewhat different contexts, see Olver, \cite{Olver1993}, p.~110, or Pommaret, \cite{Pommaret1978}, p.~253. Pommaret gives a remarkably easy derivation of these expressions. We only need the prolongation of coordinate vector fields, ie., total derivatives, and these have far simpler expressions. These give explicit local expressions of prolongations and so allow us to computationally investigate the effect of successive prolongations on maps of jets. For each coordinate tangent field $\p_i$ on $T\bbr^m$, for $1\leq i\leq m$, we have an explicit expression for the corresponding lifted local section of $T\SJ^k_{m,n}$, the \textbf{total derivative} $\bsm{\p^\#_i}$, defined as follows. \begin{align}\label{total der sum} \p_i^\#= \p_i+ \sum_{\substack{|\a|\leq k\\1\leq j\leq n}} y^j_{\a^i}\p_{y^j_{\a}} \end{align} where $\a^i=(\a_1,\ldots,\a_i+1,\ldots,\a_m)$.
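In the simplest case $m=n=1$, $k=1$ (our own illustration), with jet coordinates $(x,y,y_1)$ on $\SJ^1_{1,1}$ and $y_2$ the additional coordinate on $\SJ^2_{1,1}$, formula (\ref{total der sum}) reads
\begin{align}
\p_x^\#=\p_x+y_1\p_{y}+y_2\p_{y_1}.
\end{align}
For instance, applied to $\la=xy_1$ this gives $\p_x^\#(\la)=y_1+xy_2$, which on $j^2_xf$ takes the value $f'(x)+xf''(x)=\f{d}{dx}\big(xf'(x)\big)$, consistent with the defining property (\ref{eqn:coord free prolng formula}).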
Note that $\p_i^\#$ depends on coordinates of order $k+1$, ie., for $\la\in C^{\infty}(\SJ^k_{m,n},\bbr^n_m)$, the functions $\p_i^\#(\la)$ are coordinates of a map $\la^{(1)}:\SJ^{k+1}_{m,n}\ra\SJ^1_{m,n}$ with respect to the induced jet coordinates. In fact, we have the following. \begin{lemma}\label{lem:loc coord of 1-prolong} Suppose that $\la:\SJ^k_{m,n}\ra\bbr^n_m$ is a smooth bundle map. Let $\la^j$ for $j=1,\ldots,n$ denote the coordinates of $\la$ with respect to the standard coordinate basis for $\bbr^n$. Then the components of $\la^{(1)}$ with respect to the given coordinates are the $\p_i^\#(\la^j)$. \end{lemma} \begin{proof} One can verify this lemma and the expression (\ref{total der sum}) using the local version of the definition for prolonging jet maps (\ref{eqn:coord free prolng formula}) when $s=1$, eg., see \cite{Olver1993}, p.~109; ie., \begin{align}\label{1 prolong coord eqn} \p_i^\#(\la)(j_x^{k+1}(f))=\p_i(\la\circ j^k f)(x) \end{align} and then applying the chain rule to the right side of (\ref{1 prolong coord eqn}). \end{proof} \subsection{Differential operators and their prolongations} In order to align with Todorov's setup we will now restrict the dimension of the range space to be $1$. The superscript $j$ enumerating range space components will no longer appear. Let $\bsm{LPDO_r}$ denote the vector space of linear partial differential operators of degree less than or equal to $r$ with coefficients in \cm. Suppose that $P\in LPDO_r$. Then there exists a smooth bundle map $\bsm{\la_P}: \SJ^r_{m,1}\ra\bbr_m$, called the \textbf{total symbol of $\bsm{P}$}, such that if $f\in\dcm$, then $P(f)=\la_P\circ j^rf$ as elements of \cm. If $\la:\SJ^r_{m,1}\ra\bbr_m$ is the symbol of an $r^{th}$ order differential operator, $P$, then the \textbf{$\bsm{s^{th}}$ prolongation} $\bsm{\la^{(s)}}:\SJ^{r+s}_{m,1}\ra\SJ^s_{m,1}$ is defined on $r+s$ jets above.
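To see the prolongation of an operator's symbol concretely (our own example, with $m=1$): let $P=\f{d}{dx}+x$, so $\la_P=y_1+xy\in C^{\infty}(\SJ^1_{1,1},\bbr_1)$. Then
\begin{align}
\p_x^\#(\la_P)=y+xy_1+y_2,
\end{align}
so $\la_P^{(1)}:\SJ^2_{1,1}\ra\SJ^1_{1,1}$ has components $\big(y_1+xy,\;y+xy_1+y_2\big)$; evaluated on $j^2_xf$ these are $\big(f'+xf,\;(f'+xf)'\big)(x)$, ie., $P(f)$ together with its first derivative, as (\ref{eqn:coord free prolng formula}) requires.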
As prolongations of differential operators mapping $\dcm$ to $\dcm$ are systems, we will use the notation $\bsm{LPDO_{r+s}^s}$ for $(r+s)^{th}$ order linear differential operators from $\dcm$ to smooth sections of $\SJ^s_{m,1}$. In particular, if $P\in LPDO_r$, and $s\in\bbn$, then there is $\bsm{P^{(s)}}\in LPDO_{r+s}^s$\;, called the $\bsm{s^{th}}$ \textbf{prolongation of} $\bsm{P}$, defined as the differential operator whose symbol is the $s^{th}$ prolongation of $\la_P$. We will have more to say about its nature later. On the nonstandard level, note that if $g\in\dcm$, then, at every standard $x$, ie., $\rz x\in\;^\s\bbr^m$, and for each $k\in{}^\s\bbn_0$ (where $\bbn_0=\bbn\cup\{0\}$), $\rz\!j_{*x}^k(\rz g)=\rz\!(j^k_xg)$. That is, the internal operator $\bsm{\rz\!j^k_{*x}}|_{\dcgcm}$ operating on standard functions is just the transfer of $j^k_x$. (At nonstandard points this is not true.) It therefore follows that $\rz\la\circ\rz j^k$, restricted to a standard map $\rz g$ at a standard point $\rz x$, is just the *transfer of $\la\circ j^k_xg\in\bbr_{m,x}$. We shall give a sufficient account of the remainder of the nonstandard material needed after we recall a bit more standard geometry. \section{Standard geometry: Prolongation and Vanishing}\label{section:prolong and vanishing} This section proves the standard results that allow the proof of Corollary \ref{cor: infinite order Todorov}. The idea here is to desingularize our total symbol by ``lifting'' it to a sufficiently high jet level where we can then invoke a version of Todorov's result. In order to do this, we need some sort of correspondence between solutions of $P$ and those of $P^{(s)}$. We also need this procedure to decrease the vanishing order of the coefficients of the $P^{(k)}$ as $k$ increases. Here is the idea.
On the one hand, we can think of a symbol, $\la=\la_P$ of an LPDE as a smooth family of linear maps $x\mapsto\la_x$, and therefore one can think of the vanishing order of $\la$ at $x_0$ as the ``flatness'' of the graph of this map at the point $x_0\in\bbr^m$. This relates directly to the Taylor polynomials of the smooth coefficients of $\la$ at $x_0$. On the other hand and more abstractly, there is a classic ``desingularization'' machinery for jet bundle maps, eg., symbols of differential operators, that carries solutions to solutions, ie., prolongation. In this section we will relate the intuitive vanishing order to this prolongation method in order to get crude controls between jets in the domain and range of the prolongation of $\la$, in terms of these singularities. We first need the proper notion of vanishing order of a linear bundle map $\la:\SJ^k_{m,1}\ra\bbr$. \begin{definition} Let $x_0\in\bbr^m$, and $c\in\bbn$. Then we say that $\bsm{\la}$\textbf{ vanishes to order (exactly)} $\bsm{c}$ \textbf{(in $\bsm{x}$) along $\bsm{(\pi^k_0)^{-1}(x_0)}$}, written $\bsm{x_0\in\SZ^c(\la)}$, if, first, $\p^{\a}(\la)(p)=0$ for all $\a$ with $|\a|\leq c$ and for all $p\in (\pi^k_0)^{-1}(x_0)$ and, secondly, there exists some $\beta$ with $|\beta|=c+1$ and $p_0\in (\pi^k_0)^{-1}(x_0)$ such that $\p^{\beta}(\la)(p_0)\not =0$. Note, as always, that $\p^{\a}$ denotes the $\a^{th}$ derivative with respect to the $x_i$ coordinates. \end{definition} First note that this is more transparently stated as follows. Writing $\la=\sum_{\a}f_\a y_\a$ for some smooth $f_\a$'s, the first condition is equivalent to stating that for each coefficient $f_\a$, $\p^\beta f_\a (x_0)=0$ for all multiindices $\beta$ satisfying $|\beta|\leq c$; with the second condition being that there exists a coefficient $f_\a$ and a multiindex $\beta$ with $|\beta|=c+1$ such that $\p^\beta f_\a(x_0)\not=0$.
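For a concrete illustration of this definition (our own example, not taken from the text that follows), let $m=1$, $k=1$ and $\la=x_1^2\,y_{(1)}$, so that the only nonzero coefficient is $f_{(1)}(x_1)=x_1^2$. Then
\begin{align*}
f_{(1)}(0)=\p_1f_{(1)}(0)=0,\qquad \p_1^2f_{(1)}(0)=2\not=0,
\end{align*}
so that $0\in\SZ^1(\la)$; ie., $\la$ vanishes to order exactly $1$ along $(\pi^1_0)^{-1}(0)$, while $x_0\not\in\SZ^c(\la)$ for every $c$ when $x_0\not=0$, as then $f_{(1)}(x_0)\not=0$.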
Note also that although the notion of contact is closely related to vanishing order, we will not pursue this connection here. In the next lemma, we don't need to restrict to jet mappings that are the symbols of elements of $LPDO_k$. When we consider how total derivatives change the vanishing order of jet maps, it will be essential to consider the particular form of jet maps that are symbols of elements of $LPDO_k$. Below we will be using the following notation. If $\beta=(\beta_1,\cdots,\beta_m)$ is a multiindex of order $k$; ie., $|\beta|=\beta_1+\cdots+\beta_m=k$ and $1\leq i_0\leq m$, then $\bsm{\beta_{i_0}}=(\beta_1,\ldots,\beta_{i_0}+1,\beta_{i_0+1},\ldots,\beta_m)$; so eg., $|\beta_{i_0}|=|\beta|+1$. \begin{lemma}\label{lem1:decreasing order of 0} Let $\la\in C^{\infty}(\SJ^k_{m,1},\bbr_m)$ and suppose that $p_0\in\SJ^k_{m,1}$ is in $\SZ^c(\la)$. Then for some $i\in\{1,\ldots,m\}$, we have $p_0\in\SZ^{c-1}(\p_i(\la))$. \end{lemma} \begin{proof} By hypothesis, $\p^{\a}(\la)(p_0)=0$ for $|\a|\leq c$ and there is a multiindex $\beta$ with $|\beta|=c+1$ such that $\p^{\beta}(\la)(p_0)\not =0$. Write $\beta=\a_i$ for some multiindex $\a$ with $|\a|=c$ and $i\in\{1,\ldots,m\}$. That is, $\p^{\a}(\p_i\la)(p_0)\not =0$. But if $\a$ is a multiindex such that $|\a|\leq c-1$, then $|\a_i|\leq c$ and so $\p^{\a}(\p_i\la)(p_0)=\p^{\a_i}(\la)(p_0)=0$ by hypothesis. But then by definition $p_0\in\SZ^{c-1}(\p_i\la)$, as we wanted to show. \end{proof} \subsection{Lifting solutions} At this point, it is important to note the explicit form of the jet maps that are symbols of elements of $LPDO_k$. \begin{lemma}\label{lem: LPDO symbol} Suppose that $P\in LPDO_k$. Then $\la=\la_P:\SJ^k_{m,1}\ra\bbr_m$ can be written in local coordinates $(x_i,y_{\a})$ in the following form $\la= \sum_{|\a|\leq k }f_{\a}y_{\a}$ where $f_{\a}\in\dcm$. \end{lemma} \begin{proof} This is clear. \end{proof} A remark is in order.
Since for each $x\in\bbr^m$, $\la_P=\la_{P,x}:\SJ^r_{m,1,x}\ra\bbr_{m,x}$ is linear, we will consider the vanishing order of the family $x\mapsto\la_{P,x}$. \begin{lemma} Suppose that $P\in LPDO_r$, $s\in \bbn$ and $f\in\dcm$ solves $P^{(s)}(f)=0$. Then $f$ solves $P(f)=0$. \end{lemma} \begin{proof} This is an easy unfolding of the definitions. Operationally, $P^{(s)}$ is defined on $f\in\dcm$ as follows: $P^{(s)}(f)=j^s\circ P(f)$. But then $P^{(s)}(f)=0$ implies that $j^s\circ P(f)=0$. But for $g\in\dcm$, $j^s(g)=0$ as a section of $\SJ^s_{m,1}$ if and only if $g$ is identically $0$; eg., letting $g=P(f)$, we get $P(f)=0$. \end{proof} \subsection{Lifting zero sets} \subsubsection{Prolongation of linear symbols} \begin{remark} If $\la:\SJ^r_{m,1}\ra\bbr_m$ is the symbol of $P\in LPDO_r$, then \textbf{we will instead write $\SZ^c(\la)$ for $\pi^r(\SZ^c(\la))$. In particular, $\SZ^c(\la)$ will now be considered as a subset of the base space,} $\bbr^m$. Note that since $\la$ is linear on the fibers, with particular form as noted in Lemma \ref{lem: LPDO symbol}, to say that $\bsm{x\mapsto\la_x}$ \textbf{vanishes to order $c$ at $x_0$ is the same as our fiber condition, as this says that the coefficient $f_\a$ of our generic jet derivative $y_\a$ vanishes to order $c$ at $x_0$.} So, in the linear case, we have the following definition. \end{remark} \begin{definition} Suppose that $\la$ is the symbol of $P\in LPDO_r$. Let $\bsm{\SZ^c(\la)}$ denote the set of $x\in\bbr^m$ where all of the coefficients of $\la$ vanish to order $c$ and at least one does not vanish to order $c+1$. If $\la$ is such a linear jet bundle map, let $\bsm{\SZ_\la}$ denote those $x\in\bbr^m$ where $\la_x$ is the zero linear map. Given this, it should be obvious that the conclusion of Lemma \ref{lem1:decreasing order of 0} holds with $\pi^r(\SZ^c(\la))$ in place of $\SZ^c(\la)$. \end{definition} With this development, we have the following initiating lemma.
\begin{lemma}\label{lem2:decreasing order of 0} Suppose that $P\in LPDO_r$ with $\la\in C^{\infty}(\SJ^r_{m,1},\bbr_m)$ the symbol of $P$. Let $c\in \bbn$ and $x_0\in\SZ^c(\la)$. Then $x_0\in\SZ^{c-1}(\la^{(1)})$. In particular, if $x_0\in\SZ^1(\la)$, then $\la^{(1)}_{x_0}\not=0$. \end{lemma} \begin{proof} So we have that $x_0$ satisfies $\p^{\a}\la(x_0)=0$ if $|\a|\leq c$, and there exists a multiindex $\beta$, with $|\beta|=c+1$, such that $\p^{\beta}\la(x_0)\not=0$. By Lemma \ref{lem:loc coord of 1-prolong} we only need to verify that for all $i,\a$ with $|\a|\leq c-1$, $\p^{\a}(\p_i^\#\la)(x_0)=0$ and there exists $i_0,\tl{\a}$ with $|\tl{\a}|=c$ such that $\p^{\tl{\a}}(\p_{i_0}^\#\la)(x_0)\not=0$. To this end, consider the following statement. Let $d\in\bbn$ and suppose that for all $\a$ with $|\a|\leq d$, we have that $\p^{\a}\la(p_0)=0$. Then for arbitrary given $i_0$ and $\tl{\a}$ with $|\tl{\a}|=d$, $\p^{\tl{\a}}(\p^\#_{i_0}\la)(p_0)=0$ if and only if $\p^{\tl{\a}}(\p_{i_0}\la)(p_0)=0$. One can see that from this statement and Lemma \ref{lem1:decreasing order of 0} the proof of our lemma follows immediately; so it suffices to prove the above statement. First of all, we need only to verify this statement for $\la=\la_P$, for $P=\sum_{|\a|\leq r}f_{\a}\p^{\a}$ where $f_{\a}\in\dcm$ for all $\a$; that is, if $\la=\sum_{|\a|\leq r}f_{\a}y_{\a}$. In the following calculations we will leave out evaluation at $x_0$; it is implicit. So \begin{align} \p^\#_i(\la)=(\p_i+ \sum_{|\a|\leq r}y_{\a_i}\p_{y_{\a}})(\sum_{|\beta|\leq r}f_{\beta}y_{\beta})\notag \\ =\sum_{|\beta|\leq r}\p_i(f_{\beta})y_{\beta}+\sum_{|\a|\leq r}f_{\a}y_{\a_i}. \end{align} That is, \begin{align} \p^\#_i(\la)=\sum_{|\a|\leq r}((\p_if_{\a})y_{\a}+f_{\a}y_{\a_i}).
\end{align} Taking $\p^{\tl{\a}}$, for $|\tl{\a}|\leq d$, of both sides and interchanging derivatives, we get \begin{align}\label{total der expression} \p^{\tl{\a}}(\p^\#_i\la)=\sum_{|\beta|\leq r}\left(\p_i(\p^{\tl{\a}}f_{\beta})y_{\beta}+\p^{\tl{\a}}(f_{\beta})y_{\beta_i}\right). \end{align} But by hypothesis, the second term on the right side of (\ref{total der expression}) is $0$ and the truth of the statement follows. \end{proof} \subsubsection{Higher prolongations; coordinate calculations} We want to prove a higher order prolongation version of the previous lemma. We need some preliminaries before we state the lemma. If $\a$ is a multiindex, let $\p_\a^\#$ denote the $\a^{th}$ iteration of the coordinate total derivatives; ie., if $\a=(\a_1,\ldots,\a_m)$ is a multiindex of order $k=\a_1+\cdots+\a_m$, then $\p^\#_\a=(\p^\#_1)^{\a_1}\circ\cdots\circ (\p^\#_m)^{\a_m}$, it being understood that if $\a_j=0$, then the corresponding $j^{th}$ factor is missing. As defined, $\p^\#_\a$ sends functions on $\SJ^r_{m,1}$ to functions on $\SJ^{r+k}_{m,1}$. Next, note that if $\la=\sum_\a f_\a y_\a$ is the symbol of $P\in LPDO_r$, then $\la^{(k)}$ is the symbol of the $(r+k)^{th}$ order operator $P^{(k)}$. As such, using the induced coordinates $y_\a$, $|\a|\leq k$ on the range, $\SJ^k_{m,1}$, we can write $\la^{(k)}_x$ as $((\p_\a^\#\la)_x)_{|\a|\leq k}$. That is, $\la^{(k)}$ is given by the family of linear maps \begin{align} x\mapsto ((\p^\#_\a\la)_x)_\a\;:\SJ^{r+k}_{m,1,x}\ra\SJ^k_{m,1,x}. \end{align} Note then that this family of maps vanishes to order $c$ at $x_0$ precisely when for each $\a$ with $|\a|\leq k$, the component $x\mapsto\p^\#_\a(\la)_x$ vanishes to order $c$. Given this, we have the following extension of the previous lemma to general prolongations. \begin{lemma} Suppose that the bundle map $\la:\SJ^r_{m,1}\ra\bbr_m$ is the symbol of $P\in LPDO_r$ and suppose that $x_0\in \SZ^c(\la)$. Then $\la^{(c+1)}_{x_0}$ is a nonzero linear map.
\end{lemma} \begin{proof} By the remarks before the lemma, it suffices to prove that $\la^{(c+1)}_{x_0}$ has a nonzero component at $x_0$; that is, for some $\g$ with $|\g|\leq c+1$, $\p^\#_\g(\la)_{x_0}$ is a nonzero linear map. First note that if $g\in\dcm$ and $\a,\g$ are appropriate multiindices, then \begin{align} \p^\#_\g(gy_\a)=\sum_{\e+\rho=\g}\binom{\g}{\e}\p_\e g\cdot y_{\a+\rho}. \end{align} Suppose that $g$ vanishes to order exactly $c$ at $x_0$; so that there is an index $i_0$ and multiindex $\bar{\beta}$ with $|\bar{\beta}|=c$, such that $\p_\a g(x_0)=0$ for $|\a|\leq c$ and $\p_{\bar{\beta}_{i_0}}g(x_0)\not= 0$. Then, at $x_0$, $\p^\#_{\bar{\beta}_{i_0}}(gy_\a)=\p_{\bar{\beta}_{i_0}}g\cdot y_\a$. This follows upon inspection: $\bar{\beta}_{i_0}$ is the only multiindex of length $c+1$ occurring in the sum, and its coefficient is $1$. All other multiindices occurring are of length $\leq c$ and so these terms are zero by the hypotheses on $g$. So now consider a general symbol $\la=\sum_{|\a|\leq r}f_\a y_\a$ and suppose, by hypothesis, that $x_0\in \SZ^c(\la)$. So there exists a multiindex $\a_0$ with $|\a_0|\leq r$, a multiindex $\bar{\beta}$ of order $c$ and an index $i_0\in\{1,\ldots,m\}$ such that $\p_{\bar{\beta}_{i_0}}f_{\a_0}(x_0)\not=0$. We will show that the $\bar{\beta}_{i_0}$ component of $\la^{(c+1)}_{x_0}$ is nonzero; ie., that $\p^\#_{\bar{\beta}_{i_0}}(\la)_{x_0}$ is a nonzero linear map. Now \begin{align} \p^\#_{\bar{\beta}_{i_0}}(\la)_{x_0}=\sum_{|\a|\leq r}\p^\#_{\bar{\beta}_{i_0}}(f_\a y_\a)_{x_0}\qquad\quad\qquad\qquad \notag \\ \qquad\qquad =\sum_{|\a|\leq r}\sum_{\g\leq\bar{\beta}_{i_0}}\tbinom{\bar{\beta}_{i_0}}{\g}\p_\g(f_\a)(x_0)(y_{\a+(\bar{\beta}_{i_0}-\g)})_{x_0} \notag \\ \qquad =\sum_{|\a|\leq r}\sum_{\substack{\g\leq\bar{\beta}_{i_0}\\|\g|=c+1}}\tbinom{\bar{\beta}_{i_0}}{\g}\p_\g(f_\a)(x_0)(y_{\a+(\bar{\beta}_{i_0}-\g)})_{x_0}\notag \\ =\sum_{|\a|\leq r}\p_{\bar{\beta}_{i_0}}f_\a(x_0)y_\a|_{x_0}.
\quad\quad \qquad\qquad \end{align} But the linear forms $y_\a|_{x_0}$ are linearly independent and the coefficient of $y_{\a_0}|_{x_0}$ is nonzero by hypothesis, hence the conclusion follows. \end{proof} \begin{corollary}\label{cor:removing finite zeroes} Suppose that $P\in LPDO_r$ with $\la\in C^{\infty}(\SJ^r_{m,1},\bbr_m)$ the symbol of $P$. Suppose that $c_0=\sup\{c\in\bbn:\SZ^c(\la)\;\text{is nonempty}\}$ is finite. Then $\SZ(\la^{(c_0+1)})$ is empty. That is, for each $x\in\bbr^m$, $rank(\la^{(c_0+1)}_x)\geq 1$. \end{corollary} \begin{proof} This is an immediate consequence of the above lemma. \end{proof} \section{Standard Geometry: Prolongation and rank}\label{section: standard jet work,prolong and rank} In Section \ref{section:prolong and vanishing}, we were concerned with the vanishing order of the (total) symbol of an element of $LPDO_r$. Our proofs involved calculations with the induced local coordinate formulation of prolongations of jet bundle maps. In this section, we will prove the standard results needed in the proof of Corollary \ref{cor: infinite order Todorov}. Here, we are a bit more traditionally concerned with the prolongation effects of regularity hypotheses on the principal symbol of an element of $LPDO_r$. Our constructions will instead be in the tradition of diagram chasing through commutative diagrams of jet bundles. Here we will show that if the principal symbol $\un{\la}:\SJ^r_{m,1,x}\ra\bbr_m$ is nonvanishing, then for each $k\in\bbn$, $\la^{(k)}:\SJ^{r+k}_{m,1,x}\ra\SJ^k_{m,1,x}$ has maximal rank. We will use this fact in constructing solutions for $P(f)=g$. We will first look at a coordinate argument that indicates this, and will then give a proof of this fact using a typical jet bundle argument. Suppose that $\la:\SJ^r_{m,1}\ra\bbr_m$ is the symbol of $P\in LPDO_r$. Let's begin by looking at first order prolongations.
Suppose that $x_0\in \bbr^m$ and that $\la_{x_0}:\SJ^r_{m,1,x_0}\ra\bbr_{x_0}$ is nonzero in the sense that some coefficient $a_{\tl{\a}}$, for $|\tl{\a}|=r$, is nonzero at $x_0$. Then we will verify that the first prolongation of $\la$, $\la^{(1)}_{x_0}:\SJ^{r+1}_{m,1,x_0}\ra\SJ^1_{m,1,x_0}$, has ``top order part'' of maximal rank. So we need to show that the rank of the ``top order part'' of $\la^{(1)}_{x_0}:\SJ^{r+1}_{m,1,x_0}\ra\SJ^1_{m,1,x_0}$ is $m$, as this is the dimension of $T^*_{x_0}$, the top order part of the fiber of $\SJ^1_{m,1}$. Write $\la_{x}=\sum_{|\a|\leq r}a_{\a}(x)y_{\a}$ and, in coordinates, \begin{align} \la^{(1)}_{x_0}=(\la_{x_0},\p^{\#}_1\la_{x_0},\ldots,\p^{\#}_m\la_{x_0}) \end{align} where, as before for each $i$, \begin{align}\label{eqn:sum formula;tot der} \p^{\#}_i\la_{x_0}=\sum_{|\a|\leq r}(\p_i a_{\a}(x_0)y_{\a}+a_{\a}(x_0)y_{\a_i}). \end{align} where by assumption $a_{\tl{\a}}(x_0)\not=0$ for some $\tl{\a}$ with $|\tl{\a}|=r$. But then the linear forms $a_{\tl{\a}}(x_0)y_{\tl{\a}_i}$ for $i=1,\ldots,m$ are linearly independent. Therefore, the linear forms $\p^{\#}_i\la_{x_0}$, by their above expressions (\ref{eqn:sum formula;tot der}), are also linearly independent. Hence, the component map $(\p^{\#}_1\la_{x_0},\ldots,\p^{\#}_m\la_{x_0})$ surjects onto the top order part of the fiber of $\SJ^1_{m,1}\ra\bbr^m$ over $x_0$. Given that $\la$ is the (total) symbol of $P\in LPDO_r$, the ``top order part'' of $\la$, $\un{\la}$, is the principal symbol of $P$. We need a little more jet stuff to properly define the principal symbol and proceed with the general statement for arbitrary prolongations. The set of $j^{k+1}_xf$ of $f\in\dcm$ that vanish to $k^{th}$ order at $x$ has a canonical $\bbr$ vector space identification with $\bsm{\mathcal S_{k+1} T^*_x}$, the ${k+1}^{st}$ symmetric power of $T^*_x$, the cotangent space to $\bbr^m$ at $x$. (See Pommaret, \cite{Pommaret1978}, pp.~47--48.)
In fact, for every $x\in\bbr^m$ and $k\in\bbn$ we have a canonical injection of vector spaces, $\mathcal S_{k}T^*_x\stackrel{i_{k}}{\ra}\SJ^{k}_{m,1,x}$ which is a canonical isomorphism when $k=1$, ie., $T^*_x\cong \SJ^1_{m,1,x}$. This injection embeds in an exact sequence: \begin{align} 0\ra\mathcal S_{k+1}T^*_x\stackrel{i_{k+1}}{\ra}\SJ^{k+1}_{m,1,x}\stackrel{\pi^{k+1}_k}{\ra}\SJ^k_{m,1,x}\ra 0. \end{align} In fact this and much of the following hold in far greater generality; eg., giving exact sequences of jets of bundles over any paracompact smooth manifold. Next, when $k=r$, the order of our operator, note that expression (\ref{eqn:sum formula;tot der}), when evaluated on $j^{r+1}_xf\in\mathcal S_{r+1}T^*_x$, gives \begin{align}\label{eqn: principal symbol sum} \p^\#_i\la_{x}(j^{r+1}_xf)=\sum_{|\a|\leq r}a_\a(x)y_{\a_i}(j^{r+1}_xf), \end{align} the other terms being zero as $j^r_xf=0$. Now for each $k=0,1,2,\ldots$, the \textbf{principal symbol of} $\bsm{P^{(k)}}$, denoted $\bsm{\un{\la}^{(k)}}:\mathcal S_{r+k}T^*_x\ra\mathcal S_{k}T^*_x$, is the $\bbr$ linear map induced by the restriction of $\la^{(k)}$ to $\mathcal S_{r+k}T^*_x$. As the principal symbol $\un{\la}:\mathcal S_{r}T^*_x\ra\bbr_{m,x}$ can be represented by $\sum_{|\a|=r}a_\a y_\a$, then $\un{\la}^{(1)}:\mathcal S_{r+1}T^*_x\ra T^*_x$ can be written $\sum_i\sum_{|\a|=r}a_{\a} y_{\a_i}\otimes dx_i$. See the discussion below. Also note that the linear maps $y_\a|_{\mathcal S_{k}T^*_x}$ decompose as the $\a^{th}$ symmetric product of the coordinate cotangent vectors, and we have the canonical $\bbr$ vector space identification $\mathcal S_{r+1}T^*_x\cong \mathcal S_r T^*_x\otimes T^*_x$. With these preliminaries we can prove the following. \begin{lemma} Suppose that for $x\in\bbr^m$, $\un{\la}_x:\mathcal S_rT^*_x\ra\bbr_{m,x}$ is nonzero, ie., a surjection. Then $\la^{(k)}_x$ is a surjection for all $k\in\bbn$.
\end{lemma} \begin{proof} The remark above allows us to decompose the expression for $\un{\la}^{(1)}$ as $1_{T^*_x}\otimes\un{\la}$. Actually this holds at all levels (see Pommaret, \cite{Pommaret1978}, p.~193). That is, \begin{align} \un{\la}_x^{(k)}=\un{\la}_x\otimes 1|_{\mathcal S_kT^*_x}. \end{align} But note that in the category of finite dimensional $\bbr$ vector spaces, the tensor product of surjections is a surjection, hence by hypothesis $\un{\la}^{(k)}$ is a surjection for each $k\in\bbn$. We will prove, by induction on $k$, the order of prolongation, that $\la^{(k)}$ is a surjection. The result holds for $k=0$. Suppose that it holds for some $k\geq 0$. We will prove that it holds for $k+1$. By the inductive hypothesis, the remark on $\un{\la}^{(k)}$ directly above and general facts on jets, we have a commutative diagram of exact sequences of linear maps over $x$\\ \begin{align}\label{diag2} \begin{CD} 0 @. 0\\ @VVV @VVV\\ \mathcal S_{r+k+1}T^*_x @> \un{\la}^{(k+1)} >>\mathcal S_{k+1}T^*_x @>>> 0\\ @VVi_{r+k+1}V @VVi_{k+1}V\\ \SJ^{r+k+1}_{m,1,x} @>\la^{(k+1)} >> \SJ^{k+1}_{m,1,x} \\ @VV\pi^{r+k+1}_{r+k} V @VV\pi^{k+1}_{k} V\\ \SJ^{r+k}_{m,1,x} @>\la^{(k)} >> \SJ^k_{m,1,x} @>>> 0\\ @VVV @VVV\\ 0 @. 0\\ \end{CD} \end{align} and we wish to prove that the middle row is a surjection. Suppose that $\eta_{k+1}\in\SJ^{k+1}_{m,1,x}$. We will find $\z\in\SJ^{r+k+1}_{m,1,x}$ such that $\la^{(k+1)}(\z)=\eta_{k+1}$. The proof will be a typical ``diagram chase''. Let $\eta_k=\pi^{k+1}_k(\eta_{k+1})$. Then, by hypothesis, there exists $\eta_{r+k}\in\SJ^{r+k}_{m,1,x}$ such that $\la^{(k)}(\eta_{r+k})=\eta_k$. Let $\eta_{r+k+1}\in (\pi^{r+k+1}_{r+k})^{-1}(\eta_{r+k})$. So commutativity of the lower square implies that \begin{align} \pi^{k+1}_k\circ\la^{(k+1)}(\eta_{r+k+1})=\la^{(k)}\circ\pi^{r+k+1}_{r+k}(\eta_{r+k+1})=\eta_{k}= \pi^{k+1}_{k}(\eta_{k+1}).
\end{align} That is, $\pi^{k+1}_k(\la^{(k+1)}(\eta_{r+k+1})-\eta_{k+1})=0$, and so by exactness of the right sequence, there is $\s_{k+1}\in\mathcal S_{k+1}T^*_x$ such that \begin{align}\label{eqn:comm diagram,1} i_{k+1}(\s_{k+1})=\la^{(k+1)}(\eta_{r+k+1})-\eta_{k+1}. \end{align} But $\un{\la}^{(k+1)}$ is surjective, ie., there exists $\s_{r+k+1}\in\mathcal S_{r+k+1}T^*_x$ such that \\ $\un{\la}^{(k+1)}(\s_{r+k+1})=\s_{k+1}$. So by this and commutativity of the top square, we have \begin{align}\label{eqn:comm diagram,2} \la^{(k+1)}\circ i_{r+k+1}(\s_{r+k+1})=i_{k+1}\circ \un{\la}^{(k+1)}(\s_{r+k+1})=i_{k+1}(\s_{k+1}). \end{align} Combining the equalities in (\ref{eqn:comm diagram,1}) and (\ref{eqn:comm diagram,2}), we get \begin{align} \la^{(k+1)}(\eta_{r+k+1}-i_{r+k+1}(\s_{r+k+1}))=\eta_{k+1}. \end{align} That is, $\z\doteq\eta_{r+k+1}-i_{r+k+1}(\s_{r+k+1})$ is the element of $\SJ^{r+k+1}_{m,1,x}$ we are looking for. \end{proof} Note that we did not use the full strength of our setting; we did not use the exactness of the left vertical sequence. \begin{corollary}\label{lem:symb prolong is surj} Suppose that $\la:\SJ^r_{m,1}\ra\bbr_m$ is such that $\un{\la}_{x_0}:\mathcal S_rT^*_{x_0}\ra\bbr_{x_0}$ is nonzero as in the previous lemma. Then if $g\in\dcm$, $j^k_{x_0}g\in Im(\la^{(k)}_{x_0})$, for every $k\in\bbn_0$. \end{corollary} \begin{proof} This is an immediate consequence of the previous lemma. \end{proof} This corollary will allow us to extend Todorov's pointwise equality to an infinite jetwise equality in the next section. \section{The Main Linear Theorem} Before we begin the transfer of our results to the nonstandard world, we need to put in place a bit more of the framework for the infinite jet results in this section. First, let $N_s$ denote the finite set of multiindices of length $m$ and weight less than or equal to $s$; ie., indexing the fiber jet coordinates for $\SJ^s_{m,1}$. Let $\ov{N_s}\subset N_s$ denote the subset of multiindices of weight equal to $s$.
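As a sanity check on this indexing (a standard count, not spelled out in the paper), the multiindices of weight at most $s$, respectively of weight exactly $s$, number
\begin{align*}
|N_s|=\binom{m+s}{m},\qquad |\ov{N_s}|=\binom{m+s-1}{m-1},
\end{align*}
the latter being $\dim\mathcal S_sT^*_x$. The Pascal identity $\binom{m+s}{m}=\binom{m+s-1}{m}+\binom{m+s-1}{m-1}$, ie., $|N_s|=|N_{s-1}|+|\ov{N_s}|$, mirrors the exact sequence $0\ra\mathcal S_{s}T^*_x\ra\SJ^{s}_{m,1,x}\ra\SJ^{s-1}_{m,1,x}\ra 0$ of the previous section.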
Given the notational material, we now examine how the lifting works. Todorov proved a result crudely stated as follows: given $g$, there exists nonstandard $f$ such that $P(f)(x)=g(x)$ at each standard $x$. Our intention is to prove that such an $f$ exists with $j^s_x(P(f))=j^s_xg$ for all standard $x$ and all $s\in\;^\s\bbn$. This will be a consequence of the material in the previous section, the transfer of the Borel lemma and a few more standard preliminaries. The mapping $\la^{(s)}$ can be seen as the intermediary of $j^s(P(f))$ as follows. If $s\in\bbn$ and $P(f)=g$, we have that \begin{align} j^s_{x_0}(P(f))=P^{(s)}(f)(x_0)=\la^{(s)}_{x_0}(j^{r+s}_{x_0}(f))=j^s_{x_0}(\la\circ j^rf)=j^s_{x_0}g. \end{align} We can therefore get a good estimate on the size of the range of the successive prolongations of $P$ at $x_0$ by watching the mapping properties of $\la^{(s)}$. We will denote by $\bsm{\la^{(\infty)}}$ the infinite prolongation of $\la$ given by \begin{align} j^{r,\infty}_xf\mapsto j^{\infty}_x(\la\circ j^rf): \SJ^{(r,\infty)}_{m,1,x}\ra\SJ^{(\infty)}_{m,1,x} \end{align} where $j^{r,\infty}_xf=(j^r_xf,j^{r+1}_xf,j^{r+2}_xf,\ldots)$, ie., $\la^{(\infty)}$ being the map whose components are already defined. In this section and the next, the transfer of the Borel Lemma will be used. Here is a statement of the version we will use. \begin{lemma}[Borel Lemma]\label{lem: borel} Let $x\in\bbr^m$ and suppose that $\phi\in\SJ^\infty_{m,1,x}$. Then there exists $f\in\dcm$ such that $\phi = j^\infty_xf$. \end{lemma} Note, implicit in this result is the fact that this determination depends only on the germ of $f$ at $x$. \subsection{Transfer of jet preliminaries} To prove the main theorem we need to transfer the above jet formulation to the internal arena, inserting the homogeneous version of Todorov's result into a jet level high enough so that the symbol has the correct form.
If $\bsm{{}^\s LPDO_r}$ denotes those elements $P$ of $\rz LPDO_r$ whose coefficients are standard elements of \cm , then these correspond to symbols $\la_P\in{}^\s C^{\infty}(\SJ^r_{m,1},\bbr)$. Therefore, a special case of the *transfer of Corollary \ref{cor:removing finite zeroes} is the following statement. \begin{corollary} Let $r\in{}^\s\bbn$, $D_a=\{x\in\bbr^m:|x|\leq a\}$ and $\rz P\in {}^\s LPDO_r$ with $\la$ the symbol of $P$. Suppose that $\max\{c\in {}^\s\bbn:\rz\SZ^c({\la})\cap \rz D_a\;\text{is nonempty}\}$ is bounded in ${}^\s\bbn$ independently of $a\in\bbn$. Then there exists $s\in {}^\s \bbn$ such that if $\la'$ is the symbol of $P^{(s)}$, then $\SZ_{\la'}\cap\rz\bbr^m_{nes}$ is empty. \end{corollary} \begin{proof} In *transferring Corollary \ref{cor:removing finite zeroes}, we need only note the following for this corollary to follow. We *transfer that corollary for the situation where $\SZ(\la_P)\subset D_a$ for a given $0<a\in\bbn$, noting that $\cup_{a>0}\rz D_a=\rz\bbr^m_{nes}$ and that the hypothesis implies that there exists $a_0\in\bbn$ such that $m(a)\doteq\max\{c\in {}^\s\bbn:\rz\SZ^c({\la})\cap \rz D_a\;\text{is nonempty}\}$ satisfies $m(a)\leq m(a_0)$ for all $a\in\bbn$. \end{proof} \begin{remark} Suppose that $\la_P\in C^{\infty}(\SJ^r_{m,1},\bbr)$ is such that for every bounded $B\subset\bbr^m$, $\SZ(\la_P)\cap B$ has no accumulation points. Then $\rz\la_P$ can't have the property that it vanishes to infinite, but hyperfinite, order at some point in $\rz\bbr^m_{nes}$. It therefore follows that we can't use *transfer to generalize this result, in the given context, to points where $\la_P$ vanishes to infinite, hyperfinite order. In order to proceed we need a particular type of nonstandard partition of unity construction. For $0<c\in\rz\bbr$ and $y\in\rz\bbr^m$, let $D_c(y)$ denote the disk centered at $y$ with radius $c$.
\end{remark} \begin{lemma}[*Weak partition of unity]\label{lem:POU} Suppose that for every $x\in \bbr^m$, we have $f^x\in \dstrcmn$. Then there exists $f\in\dstrcmn$ and $0<\delta\sim 0$ such that for each $x\in\bbr^m$, $f|_{D_\delta(x)}=f^x|_{D_\delta(x)}$. \end{lemma} \begin{proof} First of all, sufficient saturation implies that the (external) map $^\s\bbr^m\ra\dstrcmn:x\mapsto f^x$ extends to an internal map $\SI:\rz\bbr^m\ra\dstrcm:l\mapsto f^l$; see Theorem \ref{thm:extend external map}. Let $\SL\subset\rz\bbr^m$ be a *finite subset such that $^\s\bbr^m\subset\SL$. Choose $0<\delta\in\rz\bbr$ such that $\delta<\f{1}{10}\rz\!\min\{|l-l'|:l,l'\in\SL,l\not=l'\}$. By the *transfer of a variation on a weak form of the partition of unity construction, there exists $\psi_l\in\dstrcm$ for each $l\in\SL$ such that $\sum_{l\in\SL}\psi_l(x)=1$ for each $x\in\rz\bbr^m$ and for each $l\in\SL$, $\psi_l|_{D_\delta(l)}\equiv 1$. (As the *cardinality of $\SL$ is *finite, we don't have to worry about *local finiteness of the sum of the $\psi_l$'s.) Then the function $f\doteq\rz\sum_{l\in\SL}\psi_lf^l$ has the properties we need. \end{proof} \begin{remark} In a follow-up paper, a numerically controlled version of this lemma (and the corresponding one in the nonlinear section) will allow proof of most of the existence results in this paper within the category of Colombeau-Todorov algebras. \end{remark} Let $\la:\SJ^r_{m,1}\ra\SJ^0_{m,1}$ denote the symbol of a $P\in LPDO_r$. \begin{definition} Let $\bsm{finsupp(P)}$ or $\bsm{finsupp(\la)}$ denote the subset of $\bbr^m$ given by $\cup\{\SZ^c(\la):c=0,1,2,\ldots\}$. For each $x\in\bbr^m$ and $k\in\bbn_0$, let $\bsm{\SJ^k_{\la,x}}$ denote the subspace of $\SJ^k_{m,1,x}$ given by $\la^{(k)}(\SJ^{r+k}_{m,1,x})$. We write $g\not=0(\la,x)$ if $j^k_xg\not=0$ for some $k\in\bbn$. Let $\bsm{\SV^m_x}<\dcm$ denote the ideal of $f\in\dcm$ such that $j^{\infty}_xf=0$.
\end{definition} \begin{lemma}\label{lem: infin diml soln space at x} If $x\in finsupp(\la)$, then there exists $g\in\dcm$ such that $j^{k_0}_xg\not=0$ for some $k_0\in\bbn$ and $j^k_xg\in\SJ^k_{\la,x}$ for every integer $k\geq 0$. \end{lemma} \begin{proof} Given Corollary \ref{cor:removing finite zeroes}, the assertion amounts to specifying that the derivatives at each level must lie in a given set and hence is an easy consequence of the Borel lemma. \end{proof} \begin{definition} Let $\bsm{\frak I_{\la,x}}=\{g\in\dcm:j^k_xg\in\SJ^k_{\la,x}\;\text{for all}\;k\in\bbn_0\}$. \end{definition} Note, of course, that $\SV^m_x<\frak I_{\la,x}$. So by the above lemma, $\frak I_{\la,x}$ is infinite dimensional. Therefore, $\rz\frak I_{*\la,*x}$ is a *infinite dimensional $\rz\bbr$ subspace of \strcm. In the nonstandard world, we have the following analogous definition. \begin{definition} Let \begin{align} \pmb{^\s\frak I_{\la,x}}= \{g\in\dstrcm: \rz j^k_{*x}g\in\rz\SJ^k_{*\la,*x} \;\text{for all}\;k\in\; ^\s\bbn_0\}. \end{align} \end{definition} Note that $^\s\frak I_{\la,x}$ is an external $\rz\bbr$ vector space and $\rz\frak I_{*\la,*x}\subset\;^\s\frak I_{\la,x}\subset\dstrcm$. In particular, $^\s\frak I_{\la,x}$ is infinite dimensional. Note that its *dimensionality is not well defined. We have one more definition. \begin{definition} Let $\bsm{^\s\frak I_\la}$ denote the set of $g\in\dstrcm$ such that for all $\rz x\in\;^\s\bbr^m$, $g\in\;^\s\frak I_{\la,x}$. \end{definition} \begin{lemma}\label{lem: finite vanish implies section} Suppose that $^\s\frak I_{*\la,*x}\not=0$ for some $x\in\bbr^m$. Then $^\s\frak I_\la\not=0$. \end{lemma} \begin{proof} For each $x\in\bbr^m$, choose $f^x\in\dstrcm$ with $f^x\in\; ^\s\frak I_{*\la,*x}$, such that for some $x$, $f^x\not=0(\la,x)$. By Lemma \ref{lem:POU}, there exists $f\in\dstrcm$ and $0<\delta\sim0$ such that $f|_{D_\delta(x)}=f^x|_{D_\delta(x)}$ for each $x\in\bbr^m$.
But then, for each $x\in\bbr^m$ and each $k\in\bbn_0$, $\rz\!j^k_{*x}f=\rz\!j^k_{*x}f^x\in\rz\SJ^k_{*\la,*x}$. That is, $f\in\;^\s\frak I_\la$, and $f\not=0(\la,x)$ for some $x$. \end{proof} \begin{lemma}\label{lem:infty soln jet at x} Let $x\in\bbr^m$, and $g\in\frak I_{\la,x}$. Then there exists $f\in\dcm$ such that $\la^{(\infty)}_x(j^{r,\infty}_xf)=j^{\infty}_xg$. \end{lemma} \begin{proof} First of all, for every $k\in\bbn_0$, there exists $\g_k\in\SJ^{r+k}_{m,1,x}$ such that $\la^{(k)}(\g_k)=j^k_xg$. This just follows from the definition of $\frak I_{\la,x}$. Since this holds for all $k$, there exists $\g\in\SJ^{r,\infty}_{m,1,x}$ with $\la^{(\infty)}_x(\g)=j^{\infty}_xg$: just let $\g$ be such that $\pi^{\infty}_k(\g)=\g_k$ for each $k$, the $\g_k$ having been chosen compatibly with the projections. But note that for $\g\in\SJ^{r,\infty}_{m,1,x}$, the Borel Lemma, Lemma \ref{lem: borel}, implies that there exists $f\in\dcm$ such that $j^{r,\infty}_xf=\g$. \end{proof} \begin{notation} If $f\in\dstrcm$, we will denote \begin{align} \bsm{\rz\!j^\s_x(f)}=(\rz(j^k_{*x})(f))_{k\in\bbn_0},\;\text{an external sequence}. \end{align} Similarly, if $\la$ is an internal jet map and we are considering, for each $k\in\bbn$ (not $\rz\bbn$), $\la^{(k)}_{*x}$, the \textbf{internal} prolongation of $\la$ at the standard point $\rz x$, ie., $(\la^{(k)}_{*x})_{k\in\;^\s\bbn}$, then we will also write this as $\bsm{\la^{(\s)}_x}$; eg., if $\la$ or $f$ are standard and we are considering only this family of internal prolongations of $\rz\la$ or $\rz f$, then we will write $\bsm{\rz\!j^\s_x(\rz\!f)}$ or $\bsm{\rz\!\la^{(\s)}_x}$. \end{notation} In the situation when $f\in\dcm$, $\rz\!j^\s_x(\rz f)$ is just the external sequence of standard numbers, $(\rz(j^k_xf))_{k\in\bbn_0}$. This notation can be unwieldy; some of the parentheses or *'s may be left out if the meaning is still clear.
Note that if \begin{align} \SV^m=\bigcap_{x\in\bbr^m}\SV^m_x=\{f\in\dstrcm:\rz\!j^\s_x(f)=0,\;\text{for all}\;x\in\bbr^m\}, \end{align} then $\SV^m<\;^\s\frak I_\la$. Although $\SV^m$ is a $\rz \bbr$ vector space, it is nonetheless external. To get a sense of the size of $\SV^m$ in \strcMn, note that $\SL<\SV^m$ where $\SL$ is the *finite codimensional subspace of \strcMn\; defined in the concluding section of the paper. Therefore, we have the following consequence of Lemma \ref{lem: finite vanish implies section}. \begin{corollary}\label{cor: section at pt implies infnte diml sections} Suppose that $^\s\frak I_{*\la,*x}\not=0$ for some $x\in\bbr^m$. Then $^\s\frak I_\la$ is *infinite dimensional. \end{corollary} \begin{remark} Suppose that $f\in\dcm,\;\ov{f}\in\dstrcm$ such that for some standard $x$, and $0<\delta\sim 0$, $\ov{f}|_{D_\delta(*x)}=\rz\!f|_{D_\delta(*x)}$. Then the internal jet sequence $\rz j^{\infty}_{*x}\ov{f}\doteq(\rz j^k_{*x}\ov{f})_{k\in*\bbn}$ is just the *transfer of the standard sequence $(j^k_xf)_{k\in\bbn_0}$, ie., when the set of jet indices is restricted to the external set $^\s\bbn_0$. That is, in the above notation, $\rz\!j^\s_x(\ov{f})= (\rz\!j^{k}_xf)_{k\in\bbn_0}$. \end{remark} \subsection{Many generalized solutions with high contact} The following result is the main linear result of the paper, although its import is not apparent without the following corollaries. \begin{theorem}\label{thm:lin eqn,infinite contact soln} Suppose that $P\in LPDO_r$. Then, for every $g\in\; ^\s\frak I_\la$, there exists $f\in\dstrcm$ such that \begin{align} \rz\!j^{\s}_x(\rz P(f))=\rz\!j^{\s}_xg\;\text{for every}\;x\in\bbr^m. \end{align} That is, $\rz P(f)$ has $^\s$infinite order *contact with $g$ at all points of $^\s\bbr^m$. \end{theorem} \begin{proof} Suppose that $g\in\;^\s\frak I_{\la}$.
By Lemma \ref{lem:infty soln jet at x} if $x\in\bbr^m$, there is $f^x\in\dcm $ such that \begin{align}\label{eqn:infinite jet soln at point} \la^{(\infty)}_{P,x}(j^{\infty}_xf^x)=j^{\infty}_xg. \end{align} By Lemma \ref{lem:POU} there exists $\ov{f}\in\dstrcm$ such that for every $x\in\bbr^m$ $\ov{f}|_{D_\delta(*x)}=\rz f^x|_{D_\delta(*x)}$. By the remark above, for each such standard $x$, $\rz\!j^{\s}_x\ov{f}=\rz\!j^{\s}_{*x}(\rz\!f^x)$. But this implies that, at each standard $x$, \begin{align} \rz\!\la^{(\s)}_{*x}(\rz\!j^{\s}_{*x}\ov{f})=\rz\!\la^{(\s)}_{*x}(\rz\!j^{\s}_{*x}\rz\!f^x). \end{align} Coupling this with the transfer of expression (\ref{eqn:infinite jet soln at point}) restricted to standard indices, we now have $\ov{f}\in\dstrcm$ such that \begin{align}\label{eqn:infinite prolong,global soln in symb} \rz\la^{(\s)}_{*x}(\rz j^{\s}_{*x}\ov{f})=\rz j^{\s}_xg \end{align} for each $x\in\bbr^m$. But, by definition of prolongation, *transferred, \begin{align}\label{eqn:infin prolong P is infin prolong lam} \rz j^{\infty}_{*x}(\rz P(\ov{f}))=\rz\la^{(\infty)}_{*x}(\rz j^{r,\infty}_{*x}\ov{f}). \end{align} Stringing together expressions (\ref{eqn:infinite prolong,global soln in symb}) and (\ref{eqn:infin prolong P is infin prolong lam}), restricted to standard indices, gets our result, as this holds for every standard $x$. \end{proof} \begin{corollary}\label{cor: infinite order Todorov} Suppose that $P\in LPDO_r$ with symbol $\la$, and principal symbol $\un{\la}$. Suppose that for each $x\in\bbr^m,\un{\la}_x\not=0$. Then for every $g\in\dstrcm$, there exists $f\in\dstrcm$ with \begin{align} *j^{\infty}_{*x}(*P(f))=*j^{\infty}_{*x}g\;\text{for every}\;x\in\bbr^m. \end{align} \end{corollary} \begin{proof} By Lemma \ref{lem:symb prolong is surj}, if $g\in\dcm$, then $g\in\; ^\s\frak I_\la$. But then the result is a direct consequence of the previous theorem.
\end{proof} \begin{remark} To put this result in perspective, note that Todorov, \cite{Todorov96}, proves the $0^{th}$ order jet case in his paper, with a slightly weaker hypothesis. \end{remark} For those $x\in\bbr^m$ where $\la_x=0$, what is a trivial case for the $0$-jet, as Todorov notes, becomes a nontrivial thickened result when the consideration becomes the infinite jet at standard points where some finite prolongation $\la^{(k)}_x$ is nonzero. For this situation we have the following result. \begin{corollary}\label{cor: infinite order solns for finite contact} Suppose that $finsupp(P)=\bbr^m$. Then there exists an *infinite dimensional subspace $^\s\frak I_P<\dstrcm$ such that if $g\in\; ^\s\frak I_P$, then there exists $f\in\dstrcm$ such that \begin{align} \rz j^{\s}_{*x}(\rz P(f))=\rz j^{\s}_{*x}g\;\text{for every}\;x\in\bbr^m. \end{align} \end{corollary} \begin{proof} As $finsupp(P)=\bbr^m$, if $x\in\bbr^m$, Lemma \ref{lem: infin diml soln space at x} implies that $\mathcal S^P_x\doteq\{j^{\infty}_xg:g\in \frak I_{\la,x}\}$ is nonzero. Therefore, the result follows from Corollary \ref{cor: section at pt implies infnte diml sections} and the above theorem. \end{proof} That is, even if the symbol vanishes at points of $\bbr^m$, as long as this vanishing order is finite at each such point, then there exist many $g\in\dstrcm$, satisfying the above compatibility conditions, such that $\rz P(f)=g$ is solved to infinite order along $^\s\bbr^m$ by $f\in\dstrcm$. \subsection{Solutions for singular Lewy operator} Before we move to the next section, let's look at the Lewy operator, see \cite{Todorov96}, p.679, $\SL=\p_1+i\p_2-2i(x_1+ix_2)\p_3$ acting on smooth complex valued functions on $\bbr^3$. First of all, note that the results just proved hold just as well with complex valued functions; the proofs are identical. Second, note that the principal symbol, $\un{\la}_{\;\SL}$ of $\SL$ is the same as the total symbol $\la_\SL=y_1+iy_2-2i(x_1+ix_2)y_3$.
Inspection shows that these maps are nonvanishing, hence $\SL$ satisfies the hypotheses in Corollary \ref{cor: infinite order Todorov}, ie., for any $g\in \rz C^\infty(\bbr^3,\bbc)$, there exist (many) $f\in\rz C^\infty (\bbr^3,\bbc)$ such that $\rz\SL(f)(\rz x)=g(\rz x)$ to infinite order at each $x\in\bbr^3$. But we can say more. Suppose that $h=(h_1,h_2,h_3)$ is such that $h_i\in C^\infty(\bbr^3,\bbr)$ for each $i$ and $h$ vanishes to finite order at each $x\in\bbr^3$. {\it Let $\wh{\SL}=h_1(x)\p_1+ih_2(x)\p_2-2ih_3(x)(x_1+ix_2)\p_3$, a kind of singular Lewy operator with finite singularities at each $x\in\bbr^3$. Then Corollary \ref{cor: infinite order solns for finite contact} implies that for any $g\in C^\infty(\bbr^3,\bbc)$ that vanishes where $\la_{\wh{\SL}}$ vanishes to order at least that of $h$, there exists $f\in \rz C^\infty(\bbr^3,\bbc)$ such that $\rz\wh{\SL}(f)(\rz x)=\rz g(\rz x)$ holds to infinite order at all $x\in\bbr^3$}. \section{Nonlinear PDE's and the pointwise lifting property}\label{section: nonlinear work} In this section $P$ can now be an arbitrary smooth nonlinear PDO of finite order. Only the rudiments of a nonlinear development parallel to the linear considerations in the previous sections will be attempted in this paper. The point here is that the framework is not an impediment to a consistent consideration of generalized objects. First, as is natural within our framework, we straightforwardly extend the notion of solution of a differential equation, as defined in Todorov's paper, to include nonlinear as well as linear differential equations. In analogy with $LPDO_r$, a (possibly nonlinear) order $r$ partial differential operator, $P:\dcm\ra \dcm$, is a mapping given by $P(f)(x)=\la(j^r_xf)$ where now the total symbol of $P$, $\la:\SJ^r_{m,1}\ra\bbr$, is a possibly nonlinear smooth bundle map. Let $\bsm{NLDO_r}$ denote this set of operators.
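To make the definition concrete, here is a small illustrative example of our own (not taken from Todorov's paper) of a nonlinear operator and its total symbol, written in the fiber coordinates $y_\a$ of $\SJ^r_{m,1}$ used in the linear case:

```latex
% Illustrative example (not from the source): take m = 1 and r = 1, with
% fiber coordinates (y_0, y_1) on \SJ^1_{1,1} recording the 0- and 1-jet
% components. The first-order operator
\begin{align}
P(f)(x) = \left(f'(x)\right)^2 + f(x), \qquad
\la_P = y_1^2 + y_0 : \SJ^1_{1,1} \ra \bbr,
\end{align}
% lies in NLDO_1, and P(f)(x) = \la_P(j^1_x f) as required; here \la_P is
% smooth on fibers but, unlike the LPDO_r case, no longer linear in the y_\a.
```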
\begin{definition} Given $g\in\dstrcm$, we say that $f\in\dstrcm$ is a solution of $\rz P(f)=g$ if $\rz P(f)(\rz x)=g(\rz x)$ for every $x\in\bbr^m$. \end{definition} We will consider a simple set theoretic condition on pairs $(P,g)$ (or $(\la_P,g)$), the \textbf{pointwise covering property}, $\bsm{PCP}$. An easy (saturation) proof will show that if $(P,g)$ satisfies this property, written $(P,g)\in PCP$, or $(\la_P,g)\in PCP$, then $P(f)=g$ has generalized solutions in the sense of Todorov. We will then show that the main theorem is a corollary of this result by verifying that our linear differential equation satisfies $PCP$. \begin{definition} Let $\la\in C^{\infty}(\SJ^k_{m,1},\bbr)$ and $g\in\dcm$. We say that \textbf{the pair $\bsm{(\la,g)}$ satisfies $\bsm{PCP}$}, if for each $x\in\bbr^m$, there exists $p\in (\pi^k)^{-1}(x)$, such that $\la(p)=g(x)$. If $\la\in \rz C^{\infty}(\SJ^k_{m,1},\bbr)$ and $g\in\dstrcm$, then we say that \textbf{the pair $\bsm{(\rz\la,g)}$ satisfies $\bsm{{}^\s PCP}$} if for all $x\in {}^\s\bbr^m$, there exists $p\in\rz(\pi^k)^{-1}(x)$, such that $\la(p)=g(\rz x)$. \end{definition} \begin{remark} Note that finding $p\in(\pi^r)^{-1}(x)$ such that $\la(p)=g(x)$ is identical to finding $h\in\dcm$ such that $h$ solves $P(h)=g$ at the single point $x$. Also, note the relationship between $PCP$ and ${}^\s PCP$. If $\la\in C^{\infty}(\SJ^k_{m,1},\bbr)$ and $g\in\dcm$ are such that $(\la,g)\in PCP$, then $(\rz\la,\rz g)\in {}^\s PCP$. On the other hand, if $\la\in SC^{\infty}(\SJ^k_{m,1},\bbr)$ and $g\in\dscm$ are such that $(\la,g)\in {}^\s PCP$, then $(^o\la,^og)\in PCP$. (Recall that if $X,Y$ are Hausdorff topological spaces and $f:\rz X\ra\rz Y$ is such that $f$ maps nearstandard points of $\rz X$ to those of $\rz Y$, then the standard part of $f$, $^of:X\ra Y$, is a well-defined map.) \end{remark} The following lemma verifies that the PCP condition restricted to linear differential operators has Todorov's criterion as a special case.
We are working with the symbol of the operator. \begin{lemma} Let $P\in LPDO_r$ and write $$ \la_P=\sum_{|\a|\leq r}f^{\a}y_{\a}. $$ Suppose that $\sum_{|\a|\leq r}|f^{\a}(x)|\not=0$ for all $x\in\bbr^m$. Then for all $g\in\dstrcm$, $(\rz\la_P,g)\in {}^\s PCP$. \end{lemma} \begin{proof} Let $x_0\in\bbr^m$. We will not write $\rz x_0$ when we transfer. The condition guarantees that there exists a multiindex $\a$ such that $f^{\a}(x_0)\not=0$. Let $\G=\{\a:c_{\a}\doteq f^{\a}(x_0)\not=0\}$. If $\G$ has only one element, $\a_0$, let $h\in\dstrcm$ be such that \begin{align} \rz\p^{\a_0}(h)(x_0)=\f{g(x_0)}{\rz f^{\a_0}(x_0)}. \end{align} Then if $\kappa\in\rz\SJ^r_{m,1}$ is given by $\rz j^r_{x_0}h$, we get that \begin{align} \rz\la_P(\kappa)=\rz\la_P(\rz j^r_{x_0}(h)) &= \rz\sum_{|\a|\leq r}f^{\a}(x_0)y_{\a}(j^r_{x_0}h)\label{eqn:PCP}\\ &= f^{\a_0}(x_0)y_{\a_0}(j^r_{x_0}h)\notag\\ &= f^{\a_0}(x_0)\f{g(x_0)}{f^{\a_0}(x_0)}=g(x_0)\notag \end{align} as we wanted. So suppose that $\G$ has at least two elements. Let $\a_0\in\G$ and let $\La=\G-\{\a_0\}$. Choose $h\in\dstrcm$ so that if $\a\in\La$, then $\rz\p^{\a}h(x_0)=0$ and (as in the first case) such that $\rz\p^{\a_0}(h)(x_0)=\f{g(x_0)}{f^{\a_0}(x_0)}$. Then as in expressions (\ref{eqn:PCP}), we get $\rz P(h)(x_0)=g(x_0)$. \end{proof} Given the above lemma we shall see that Todorov's result is a corollary of this lemma and the next theorem proving the existence of solutions of PCP operators. Before we proceed to the theorem, we need some NSA preliminaries. First we give a simple example of the construction we will need. Let $F(\bbr)$ be all maps from $\bbr$ to $\bbr$ and let $F^{\infty}(\bbr)=\{f\in F(\bbr):f\;\text{is smooth}\}$. Let $f\in F(\bbr)$; then there exists an (internal) element $\tl{f}\in\rz F^{\infty}(\bbr)$ such that $\tl{f}|_{{}^\s\bbr}=f$, as the following argument shows. Let $\SY_1\subset \rz\bbr$ be *finite such that ${}^\s\bbr\subset\SY_1$ and let $\SY_2=\rz f(\SY_1)$.
Then $\SY_2$ is obviously a *finite subset of $\rz\bbr$. Now consider the following elementary standard statement. If $S_1,S_2$ are finite subsets of $\bbr$, of the same cardinality, and $h:S_1\ra S_2$ is a bijection, there exists $\tl{h}\in F^{\infty}(\bbr)$ such that $\tl{h}|_{S_1}=h$. This follows from a simple partition of unity argument. Now *transfer this to get existence of $\tl{f}\in\rz F^{\infty}(\bbr)$ such that $\tl{f}|_{\SY_1}=\rz f|_{\SY_1}$. In particular, $\tl{f}|_{{}^\s\bbr}=\rz f|_{{}^\s\bbr}=f|_{\bbr}$, as we wanted. Now we want to do the same construction in the venue of bundles and their sections. Let $\bsm{\G(\SJ^r_{ m,1})}=\{s:\bbr^m\ra\SJ^r_{m,1}|\;\pi^r\circ s=\bbi_{\bbr^m}\}$, ie., set theoretic sections of $\pi^r$. Let $\bsm{\G^{\infty}(\SJ^r_{m,1})}=\{s\in\G(\SJ^r_{ m,1}):s\;\text{is a smooth map}\}$. We have the following lemma. \begin{lemma}\label{lem:first standard jet approx} Suppose that $s\in\G(\SJ^r_{m,1})$. Then there exists $\tl{s}\in\rz\G^{\infty}(\SJ^r_{m,1})$, such that $\tl{s}|_{{}^\s\bbr^m}=s|_{\bbr^m}$. \end{lemma} \begin{proof} As with the above example, let $\SX\subset\rz\bbr^m$ be *finite such that $\bbr^m\subset\SX$. We have the following elementary fact. If $B=\{b_1,\ldots,b_l\}$ is a finite subset of the base and $P=\{p_1,\ldots,p_l\}\subset\SJ^r_{m,1}$ is a finite subset such that $p_j\in(\pi^r)^{-1}(b_j)$ for each $j$, then there exists $s\in\G(\SJ^r_{m,1})$ such that $s(b_j)=p_j$ for all $j$. Now *transfer this statement, applying the *transferred statement to the *finite subset $\SX$ in the base and the *finite subset $\rz s(\SX)$ of points in the *bundle over $\SX$. That is, we can infer the existence of an internal section $\tl{s}\in\rz\G^{\infty}(\SJ^r_{m,1})$ such that for all $x\in\SX$, $\tl{s}(x)=\rz s(x)$, in particular $\tl{s}|_{{}^\s\bbr^m}=s$, as we wanted.
\end{proof} In the context of this lemma, we have that $(\rz\la,g)\in{}^\s PCP$ is equivalent to the existence of a set theoretic section $s\in\rz\G(\SJ^r_{m,1})$ such that the pointwise condition $\rz\la\circ \rz s=g$ holds on ${}^\s\bbr^m$. It's important to note that, generally speaking, such sections are far from integrable; that is, equal to $j^rf$ for some smooth $f\in\dcm$. But again, by a transfer argument, we can find such a section. \begin{lemma}\label{lem:second standard jet approx} Suppose that $s\in\rz\G^{\infty}(\SJ^r_{m,1})$ and $\SX\subset\rz\bbr^m$ is *finite. Then there exists $f\in\dstrcm$ such that $\rz j^rf|_{\SX}=s|_{\SX}$. \end{lemma} \begin{proof} This just follows from the *transfer of the following obvious standard statement about jets. If $\{p_1,p_2,\ldots,p_l\}\subset\SJ^r_{m,1}$ is such that the base points $x_j=\pi^r(p_j)$ are all distinct, then there exists $f\in\dcm$ such that $j^r_{x_j}f=p_j$ for all $j$. \end{proof} With these preliminaries, the proof of the following result is immediate. \begin{theorem} Let $\SD\in NLDO_r$ and let $\la_{\SD}=\la\in C^{\infty}(\SJ^r_{m,1},\bbr)$ and $g\in\dstrcm$. Suppose that $(\rz\la,g)\in {}^\s PCP$. Then $\SD(f)=g$ has a generalized solution, $f$, in the sense of Todorov. \end{theorem} \begin{proof} By the remark above, $(\rz\la,g)\in {}^\s PCP$ is equivalent to the existence of an $s\in\rz\G(\SJ^r_{m,1})$ such that for every $x\in\bbr^m$, $\la_{\rz\SD}(s(\rz x))=g(\rz x)$. But by Lemma \ref{lem:first standard jet approx}, there exists $\tl{s}\in\rz\G^{\infty}(\SJ^r_{m,1})$ such that $\tl{s}(\rz x)=s(\rz x)$ for all $x\in \bbr^m$. And by Lemma \ref{lem:second standard jet approx}, there exists $f\in\dstrcm$, such that for all $x\in\bbr^m$, $\rz j^r_{* x}(f)=\tl{s}(\rz x)$. \end{proof} Todorov's existence result (being for linear operators only) is a special consequence of the previous development. \begin{corollary} Suppose that $g\in\dstrcm$ and $P\in LPDO_r$ is such that $\la_P$ is nonvanishing on $\bbr^m$.
Then there exists $f\in\dstrcm$, such that for all $x\in\bbr^m$, $P(f)(\rz x)=g(\rz x)$, ie., $f$ is a solution of $P(f)=g$ in the manner of Todorov. \end{corollary} \begin{proof} This is clear. \end{proof} Given the nonlinear setting of this section, proving results analogous to those in the linear sections appears to require much more involved preliminaries and so will be pursued at a later date. Nonetheless, it seems clear that we can consider some general criteria revolving around when $(P,g)\in PCP$. In particular, it appears that we can prove {\it a universal existence theorem asserting that any possible space of generalized functions that has the $PCP$ property is already contained in our nonstandard space}. This, too, will appear as time allows. \section{Conclusion} \subsection{Too many solutions?} In this paper I have used some of the machinery of the geometry of partial differential equations to explore the possibilities of the approach of Todorov. (We have yet to work through the nonlinear analogs of the linear results presented here; this will entail a much more extensive use of the jet theory of nonlinear partial differential operators. Note that, even more starkly than in this paper, no counterpart in standard mathematics exists.) The implications of the results of this paper are still not clear. Yet one thing should be obvious: the class of internally smooth maps is remarkably `flabby', as compared to the standard world. As an indication of this, we have the following construction. Let $\bsm{\SL}<\dstrcmn$ be the $\rz\bbr$ linear subspace of $\dstrcmn$ defined as follows. Let $\SY\subset\rz\bbr^m$ be a *finite subset such that $^\s\bbr^m\subset\SY$. Let $\omega\in\rz\bbn_{\infty}$. Then, the set $\bsm{\SL}=\{f\in\dstrcmn:\rz\!
j^{\om}_x f=0\;\text{for all}\;x\in\SY\}$ is a *cofinite dimensional subspace of \strcmn, as this set of conditions on elements $f$ of \strcmn\; is *finite: it is given by specifying the values of $j^{\om}_{x}f$, a *finite number of *Taylor coefficients, at the *finite set of points $\SY\subset\rz\bbr^m$. Now, by construction, $\SL\cap\;^\s C^\infty(\bbr^m,\bbr^n)=\{0\}$, and we have the following diagram \begin{align} \begin{CD} \SL\\ @VjVV \\ \dstrcmn @>\text{ *$j^\om$}>> \rz \SJ^{\om}_{m,n} @>\rho>> \rz \SJ^{\om}_{m,n}|_{^\s\!\bbr^m}\\ @AiAA \\ \rz\bbr\otimes\;^\s C^\infty(\bbr^m,\bbr^n) \end{CD} \end{align} where the maps $i$ and $j$ are $\rz\bbr$ subspace injections and $\rho$ is the highly external restriction to the fibers over $^\s\bbr^m$. Let $\Phi =\rho\circ\rz j^{\om}$. Then the following holds. \begin{lemma} $\Phi|_{Im(j)}$ has image $\{0\}$ and $\Phi|_{Im(i)}$ is an injection. \end{lemma} \begin{proof} By construction, we have $\Phi(f)=0$ for every $f\in\SL$. On the other hand, if an element $f\in\; \dcgcmn$ satisfies $\rz\!j^{\om}_x(\rz f)=0$ for each $x\in\SY$, then in particular $j^{\infty}_x f=0$ for each $x\in\bbr^m$, that is, $f=0$. This therefore holds for all $f\in\rz\bbr\otimes\;\dcgcmn$. \end{proof} So we have that the subspace of elements of\; \strcmn \; whose $^\s$\! infinite *jet vanishes everywhere on $^\s\bbr^m$ is all of \;\strcmn\; up to a *finite dimensional subspace containing all standard smooth maps. It should therefore be clear that we have the immediate corollary that exemplifies the ability to bend almost all of \strcmn \; away from contact with the world of standard differential equations, at least at standard points. \begin{corollary} If $P\in NLDO_r$ for some $r\in\bbn$ is such that $P(\text{zero map})= \text{zero map}$, then $\rz P(f)(\rz x)=0$ for all $f\in\SL$ and all $x\in\bbr^m$.
\end{corollary} \begin{proof} All classical differential operators $P$ of order $r$, factor as $\rz P=\rz\la_P\circ \rz j^r=\rz\la_P\circ\rz\pi^{\om}_r\circ \rz j^{\om}$ and by above $j^{\om}(\SL)|\;^\s\bbr^m =\{0\}$. \end{proof} {\it That is, all classical partial differential operators sending the zero map to the zero map operate as zero maps on ``almost all'' of \;\strcmn\;.} One perspective on the results here should not be a surprise: that *smooth functions (and with some thought *analytic functions) are far too flabby on a full infinitesimal scale. From a positive viewpoint, one could see how this might allow an investigator to have wide latitude in `Tayloring' generalized functions (on the monadic level) to get appropriate rigidities-growth or to test various empirical results by infinitesimal adjustings of singular parameters. The remark (in the introduction) with respect to the work of Baty, etal, see eg., \cite{BatyShockWave2008} seems relevant to the second perspective. The algebras of Oberguggenberger and Todorov, \cite{OT98} and the further developments in eg., Todorov and Vernaeve, \cite{TodorVern2008} seems to be good examples of the Tayloring capacities. \subsection{Prospects and goals} Only the rudiments of jets on the one hand, and nonstandard analysis, on the other have been deployed in this paper. In follow up articles we intend to use (*transferred) tools from smooth function theory along with a more extensive use of jet theory to extend both the linear and nonlinear existence results. Further, deploying more nuanced version of the jet material of section \ref{section: standard jet work,prolong and rank} over certain types of infinite points in the jet fibers, we intend to prove results on regularity of solutions of partial differential operators, linear or nonlinear, whose symbols satisfy certain properness conditions. 
Our first paper along this line, \cite{McGafRegSolnNPDE}, gives a regularity theorem for a broad class of nonlinear differential operators. We also intend to extend the framework established here to include the results of Akiyama, \cite{AkiyamaNSSolvabilityOpsOnVBs}, in the manner in which we have included the results of Todorov. The method is an extension from internal mappings with *finite support to internal smooth modules of bundle sections with *finite support. Furthermore, as noted in the introduction, we will refine the arguments in this paper to Todorov's nonstandard Colombeau algebras. Given that all of the usual constructions on the symmetries of differential equations (as in eg., Olver, \cite{Olver1993}) are straightforwardly lifted to the nonstandard universe, we are also looking into developing a theoretical framework for generalized symmetries (eg., shock symmetries) of differential equations, continuing within the jet theoretic framework begun here. \bibliographystyle{amsplain}
\section{Introduction} Understanding chemical compositions has been a central aspect of atmospheric characterization for planets within and beyond the Solar System. Photochemical kinetics models establish the link between our knowledge of chemical reactions and various planetary processes (e.g., atmospheric dynamics, radiative transfer, outgassing process, etc.), providing a theoretical basis for interpreting observations and addressing habitability. Hot Jupiters are the first discovered and best characterized class of exoplanets. Transit and eclipse observations have made various initial detections of chemical species in their atmospheres such as Na, K, \ce{H2O}, \ce{CH4}, CO, \ce{CO2} \citep[e.g. see the review of ][]{Kreidberg2018}. An extreme class of exceedingly irradiated hot Jupiters around bright stars has equilibrium temperatures higher than 2000 K. They are prime targets for emission observations, and recent high-resolution spectroscopic measurements reveal atomic and ionic features that make their atmospheres resemble those of low-mass stars \citep[e.g., ][]{Birkby2013,Brogi2014,Jens2018}. The majority of discovered exoplanets have sizes between Earth and Neptune. Their heavy elemental abundances (i.e. metallicity) can vary considerably, as often inferred from water detections \citep[e.g., ][]{Wakeford2017,Chachan2019}. While \ce{CH4} is expected to be more abundant in cooler (T$_{\textrm{eq}}$ $\lesssim$ 1000 K) atmospheres, understanding how disequilibrium chemistry and other processes alter the \ce{CH4}/CO abundance ratio remains an ongoing task. The direct imaging technique provides a complementary window to resolve young planets at far orbits \citep[e.g., see the reviews of][]{Crossfield2015,Pueyo2018}. The new generation of instruments like GPI and SPHERE \citep{Chauvin2018} has identified a number of interesting young Jupiter analogs.
These young planets are self-luminous from their heat of formation and receive UV fluxes from the star at the same time, giving insights on the planet-forming conditions outside the snow lines and the transition between planets and brown dwarfs. Across the various types of planetary atmospheres mentioned above, photochemical kinetics and atmospheric transport are the dominant mechanisms that control the major chemical abundances. Photodissociation occurs when molecules are split into reactive radicals by high-energy photons, while atmospheric transport shapes the abundance distribution. Disequilibrium processes can drive abundances considerably away from the chemical equilibrium state and are best studied in chemical kinetics models. Kinetics models stem from simulating the atmospheric compositions in Solar System planets \citep[e.g., ][]{Kasting1979,Yung1984,Nair1994,Wilson2004,lavvas2008,Hu2012,Krasnopolsky2012}, which focus on photochemistry and radical reactions. The low temperature regime makes thermochemistry less relevant in most cases. \cite{Liang2003} first applied a photochemical kinetics model, Caltech/JPL KINETICS \citep{Allen1981}, to the hot Jupiter HD 209458b and identified the photochemical source of water for producing atomic H. However, some reaction rates in their study were extrapolated from measurements at low temperatures and are not suitable for hot Jupiter conditions. \cite{Line2010} adopt the high-temperature rate coefficients for the major molecules and use the lower boundary to mimic mixing from the thermochemical equilibrium region. A new group of models incorporating kinetics data valid at high temperatures has since emerged. \cite{Zahnle09} reverse the reactions to ensure kinetics consistent with thermodynamic calculations and consider sulfur chemistry on hot Jupiters. \cite{Moses11} implement high-temperature reactions in KINETICS to model hot Jupiters HD 189733b and HD 209458b with detailed pathway analysis.
\cite{Venot12} adopt the combustion mechanisms validated for industrial applications to model the same canonical hot Jupiters but find different quenching and photolysis profiles from \cite{Moses11}. \cite{Hobbs2021} recently extend \cite{Zahnle09} to include sulfur photochemistry and find the inclusion of sulfur can impact other non-sulfur species on HD 209458b and 51 Eridani b. As the discovery of diverse exoplanets progresses, more kinetics models have been applied to study a wide range of aspects, such as the compositional diversity within an atmospheric-grid framework \citep{Moses2013,Miguel2014,Karan2019}, atmospheric evolution with loss and/or outgassing processes \citep{Hu2015,Wordsworth2018,Lincowski2018}, prebiotic chemistry driven by high-energy radiation \citep{rimmer16,Rimmer2019}, and detectability of habitable planets \citep{Arney2019,bio_review}. A number of recent attempts at atmospheric composition measurements have been hindered by aerosol layers \citep{Kreidberg2014,wasp80b}. Aerosol particles are possibly ubiquitous, with diverse compositions \citep{Gao2020} including cloud particles formed from condensation or produced by photolysis at high altitudes. Microphysics models \citep{Helling2006,Lavvas2017,Yui2018,Gao2018b,Ohno2020} have investigated trends and properties of aerosols for various environments. One particularly interesting candidate class of aerosols is the sulfur family, such as sulfuric clouds \citep{Hu2013,Misra2015,Loftus2019} in an oxidizing atmosphere or elemental sulfur in a reducing atmosphere \citep{Hu2013,Gao2017}. Photochemistry generally sets off the initial steps in the gas phase; the condensable species can then form particles when saturated over a broad range of altitudes \citep{Gao2017}. The relatively simple sulfur particles in \ce{H2}-dominated atmospheres allow a consistent photochemical-aerosol kinetics modeling, which we will conduct in this work.
Although the formation pathways of organic haze particles are highly complex, we will focus on a group of haze precursors and investigate their photochemical stability in the hope of providing complementary insights on the haze-forming conditions. The exclusive access to often proprietary chemical models motivates us to develop an open-source, chemical kinetics code VULCAN \citep{tsai17}. The initial version of VULCAN includes a reduced-size C-H-O thermochemical network and treats eddy diffusion. In \cite{tsai17}, VULCAN is validated by comparing the quench behavior with A{\footnotesize RGO} \citep{rimmer16} and \cite{Moses11}. Since then, VULCAN has been continuously updated and applied to several studies such as \cite{Zilinskas2020} who identify key molecules of hot super-Earths with nitrogen-dominated atmospheres, and \cite{Shulyak2020} who explore the effects of XUV for different stellar types. In this work, we present the new version of 1-D photochemical model VULCAN, with embedded chemical networks now including hydrogen, oxygen, carbon, nitrogen, and sulfur. The chemical network is customizable and does not require separating fast and slow species. The major updates of VULCAN from \cite{tsai17} are: \renewcommand\labelitemi{\tiny$\bullet$} \begin{itemize} \item C-H-N-O-S chemical networks with about 100 species, including a simplified benzene forming mechanism \item Photochemistry with options for temperature-dependent UV cross sections input \item Condensation and particle settling included \item Advection, eddy diffusion, and molecular diffusion included for the transport processes \item Choice of various boundary conditions \end{itemize} In Section \ref{model}, we describe model details that have been updated since \cite{tsai17}. In Section \ref{validation}, we validate photochemistry and various new features of VULCAN with simulations of HD 189733b, Jupiter, and Earth. 
A comprehensive model comparison for HD 189733b between \cite{Moses11}, \cite{Venot12}, and VULCAN is given. In Section \ref{case}, we perform case studies with a focus on the effects of sulfur chemistry and haze precursors. We discuss caveats, implications and opportunities for future work in Section \ref{discussion} and summarize the highlights in Section \ref{sec:summary}. \section{Kinetics model}\label{model} \subsection{Basic Equations and Numerics} The 1D photochemical kinetics model solves a set of Eulerian continuity equations, \begin{equation} \frac{\partial n_i}{\partial t} = {\cal P}_i - {\cal L}_i - \frac{\partial \phi_i}{\partial z}, \label{eq:master} \end{equation} where $n_i$ is the number density (cm$^{-3}$) of species $i$ and $t$ denotes the time. ${\cal P}_i$ and ${\cal L}_i$ are the production and loss rates (cm$^{-3}$ s$^{-1}$) of species $i$, from both thermochemical and photochemical reactions. The system of (\ref{eq:master}) has the same form as that in \cite{tsai17}, except that only eddy diffusion was considered for the transport flux $\phi_i$ there. The transport flux, now including advection, eddy diffusion, and molecular and thermal diffusion, while assuming hydrostatic balance, is written as \citep[e.g., ][]{topa87} \begin{equation} \phi_i = n_i \, v -K_{\rm zz} n_{\rm tot} \frac{\partial X_i}{\partial z} -D_i[\frac{\partial n_i}{\partial z} + n_i(\frac{1}{H_i} + \frac{1+\alpha_T}{T}\frac{dT}{dz})], \label{eq:flux} \end{equation} where $v$ is the vertical wind velocity, $K_{\rm zz}$ and $D_i$ are the eddy diffusion and molecular diffusion coefficients, respectively, $H_i$ is the molecular scale height for species $i$ with molecular mass $m_i$, i.e. $H_i=\frac{k_BT}{m_i g}$ ($g$: gravity; $T$: temperature; $k_B$: the Boltzmann constant), and $\alpha_T$ is the thermal diffusion factor.
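As a rough numerical illustration of the three contributions to the transport flux (\ref{eq:flux}), a minimal sketch follows. This is not VULCAN's actual implementation; the function and variable names here are our own, and all quantities are assumed to be given on a common altitude grid:

```python
import numpy as np

def transport_flux(n_i, X_i, T, z, v, Kzz, D_i, H_i, alpha_T, n_tot):
    """Sketch of the vertical transport flux phi_i (cm^-2 s^-1):
    advection + eddy diffusion + molecular/thermal diffusion.
    n_i, X_i, T, v, n_tot are 1-D arrays on the altitude grid z (cm);
    Kzz, D_i, H_i, alpha_T may be scalars or arrays of the same shape."""
    dX_dz = np.gradient(X_i, z)   # mixing-ratio gradient (eddy term)
    dn_dz = np.gradient(n_i, z)   # number-density gradient
    dT_dz = np.gradient(T, z)     # temperature gradient (thermal diffusion)
    phi_adv = n_i * v                                    # advection
    phi_eddy = -Kzz * n_tot * dX_dz                      # eddy diffusion
    phi_mol = -D_i * (dn_dz + n_i * (1.0 / H_i
                      + (1.0 + alpha_T) / T * dT_dz))    # molecular + thermal
    return phi_adv + phi_eddy + phi_mol
```

For an isothermal, well-mixed column with no wind, only the $-D_i n_i/H_i$ piece of the molecular term survives, which is a quick sanity check on the signs.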
While advection is commonly ignored in 1-D models, we keep the advection term and distinguish it from eddy diffusion with respect to their intrinsic differences. For example, a plume of smoke is carried along the direction of the wind until diffusion becomes important and dissipates the smoke into the surrounding air. Physically, the first term of the transport flux (\ref{eq:flux}) describes advection in the direction of the wind. The second term is eddy diffusion, which acts to smear out the compositional gradient. The molecular diffusion in the third term becomes important at low pressure and drives each constituent toward diffusive equilibrium, which is different for each species based on its individual scale height. The direction of thermal diffusion depends on the sign of the thermal diffusion factor: a positive sign means the component will diffuse toward the colder region, and vice versa. Thermal diffusion is often a secondary effect compared to eddy diffusion or molecular diffusion, except for light species in the thermosphere with large temperature gradients \citep{Nicolet1968}. The molecular diffusion coefficient has the expression $b/N$ from gas kinetic theory, where $b$ is a parameter for binary gas mixtures. The binary parameter $b$ and the thermal diffusion factor $\alpha_T$ are ideally determined experimentally for each binary mixture. In practice, we simplify the atmosphere to a binary system with the dominant gas as the main constituent and the rest in turn as the minor constituent. Specifically, we adopt the molecular diffusion coefficient of a binary mixture that is available from the experimental data and scale that of other mixtures based on the fact that $b$ is proportional to the mean relative speed of the two gases, i.e.
given $D_{1-2}$ for the dominant gas 1 and minor gas 2, the molecular diffusion coefficient for gas 1 and any other minor gas $i$ can be scaled as \begin{equation}\label{eq:D_scale} D_{1-i} = D_{1-2} \sqrt{\frac{m_2}{m_i}\,\frac{m_1 + m_i}{m_1 + m_2}}. \end{equation} The molecular diffusion coefficient and the thermal diffusion factor for atmospheres dominated by \ce{H2}, \ce{N2}, and \ce{CO2} are listed in Appendix \ref{app:Dzz}. A second-order central difference is used to discretize the spatial derivative of the diffusion flux, as in \cite{tsai17}, except that a first-order upwind scheme \citep{Jacob2017} is applied for advection. The finite difference form for the derivative of the transport flux of layer $j$ is \begin{equation} \frac{\phi_{i,j+1/2} - \phi_{i,j-1/2}}{\Delta z_j}, \label{diff_flux} \end{equation} with the upper and lower interfaces of layer $j$ labeled as $j+1/2$ and $j-1/2$, respectively, in the staggered structure. The full expression for the transport flux in Equation (\ref{eq:flux}) at the upper and lower interfaces is then \begin{equation} \begin{split} &\phi_{i,j+1/2} = \phi^{adv}_{i,j+1/2} - (K_{{\rm zz},j+1/2} + D_{i,j+1/2}) n_{{\rm tot},j+1/2}\\ &\times \frac{X_{i,j+1} - X_{i,j}}{\Delta z_{j+1/2}} - D_{i,j+1/2} n_{{\rm tot},j+1/2} X_{i,j+1/2}(\frac{1}{H_i} - \frac{1}{H_0} + \\ &\frac{\alpha_T}{T_{j+1/2}} \frac{T_{j+1} - T_j}{\Delta z_{j+1/2}})\\ &\phi_{i,j-1/2} = \phi^{adv}_{i,j-1/2} - (K_{{\rm zz},j-1/2} + D_{i,j-1/2}) n_{{\rm tot},j-1/2}\\ &\times \frac{X_{i,j} - X_{i,j-1}}{\Delta z_{j-1/2}} - D_{i,j-1/2} n_{{\rm tot},j-1/2} X_{i,j-1/2}(\frac{1}{H_i} - \frac{1}{H_0} + \\ &\frac{\alpha_T}{T_{j-1/2}} \frac{T_{j} - T_{j-1}}{\Delta z_{j-1/2}})\\ &\phi^{adv}_{i,j+1/2} = \begin{cases} v_{j+1/2} n_{i,j}, & \text{for } v_{j+1/2} > 0 \\ v_{j+1/2} n_{i,j+1}, & \text{for } v_{j+1/2} < 0 \end{cases}\\ &\phi^{adv}_{i,j-1/2} = \begin{cases} v_{j-1/2} n_{i,j-1}, & \text{for } v_{j-1/2} > 0 \\ v_{j-1/2} n_{i,j}, & \text{for } v_{j-1/2} < 0 \end{cases} \end{split} \label{eq:flux2} \end{equation} where $H_0$ is
the atmospheric scale height with altitude-dependent gravity, and we have approximated the physical quantities at the interfaces by the average of the two adjacent layers: $n_{{\rm tot},j \pm1/2} = \frac{n_{{\rm tot},j} + n_{{\rm tot},j \pm1}}{2}$, $X_{i,j\pm1/2} = \frac{X_{i,j} + X_{i,j\pm1}}{2}$, and $T_{j \pm1/2} = \frac{T_{j} + T_{j \pm1}}{2}$. The advection flux $\phi^{adv}$ in Equation (\ref{eq:flux2}) depends only on the upstream layer in the upwind scheme. Equation (\ref{eq:master}) reduces to a system of ordinary differential equations (ODEs) after replacing the spatial derivative of the transport flux in Equation (\ref{eq:master}) with (\ref{diff_flux}) and (\ref{eq:flux2}) and assigning proper boundary conditions. The numerical scheme using the Rosenbrock method to integrate the ``stiff'' system (\ref{eq:master}) forward in time until steady state is achieved is described in detail in \cite{tsai17}. \subsection{Boundary Conditions} The solutions to the system of ODEs derived from Equation (\ref{eq:master}) need to satisfy the given boundary conditions. The boundary conditions encompass various planetary processes that are crucial in regulating the atmosphere. Three basic quantities are commonly used to describe boundary conditions \citep[e.g.][]{Hu2012}: flux, velocity, and mixing ratio. We will elucidate their corresponding implications for the lower and upper boundaries. The flux in Equation (\ref{eq:flux2}) depends on the layers above and below, so the fluxes at the top and bottom boundaries are left unspecified by the interior scheme and must be prescribed. Assigning a constant flux is a common way to represent surface emission at the lower boundary for rocky planets and inflow/outflow at the upper boundary. For example, CO and \ce{CH4} surface sources play a key role in Earth's troposphere; meteoritic inflow or hydrodynamic escape outflow can be prescribed as a constant flux at the upper boundary \citep[e.g., ][]{Wordsworth2018}. 
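To make the discretization concrete, the following minimal Python sketch (with hypothetical variable names; for brevity, the eddy term here acts on the density gradient on a uniform grid, whereas the full scheme of Equation (\ref{eq:flux2}) uses mixing-ratio gradients and interface-averaged quantities) assembles the upwind advective flux, the diffusive flux, and prescribed boundary fluxes into the transport term of the continuity equation:

```python
import numpy as np

def flux_divergence(n, v, Kzz, dz, phi_bot=0.0, phi_top=0.0):
    """Sketch of the discretized transport term -d(phi)/dz for one species.

    n        : number density at the J layer centers (bottom to top)
    v        : vertical wind at the J-1 interior interfaces
    Kzz      : eddy diffusion coefficient at the interior interfaces
    dz       : layer thickness (uniform grid assumed for simplicity)
    phi_bot, phi_top : prescribed boundary fluxes (zero flux by default)
    """
    # upwind advective flux: take the density of the upstream layer
    phi_adv = np.where(v > 0, v * n[:-1], v * n[1:])
    # diffusive flux ~ -Kzz * dn/dz (central difference at the interface)
    phi_dif = -Kzz * (n[1:] - n[:-1]) / dz
    phi = np.empty(len(n) + 1)
    phi[1:-1] = phi_adv + phi_dif
    phi[0], phi[-1] = phi_bot, phi_top   # boundary fluxes close the scheme
    # continuity (transport part): dn/dt = -(phi_upper - phi_lower)/dz
    return -(phi[1:] - phi[:-1]) / dz
```

Because the interior fluxes telescope in the divergence, the column total of $n\,\Delta z$ is conserved whenever the prescribed boundary fluxes are zero.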
Alternatively, a diffusion-limited flux can be assigned at the upper boundary, which assumes the escape flux is limited by diffusive transport into the exosphere. The diffusion-limited flux reads \begin{equation} \phi_{i,\textrm{top}} = - D_{i,\textrm{top}} n_i (\frac{1}{H_i} - \frac{1}{H_0}) \end{equation} and can be applied to any set of light species in the code. Without additional constraints, we often simply assume the flux to be zero, which means no net exchange of material. This zero-flux boundary condition is generally suitable for the lower boundary of most gas giants when it is placed at sufficient depth \citep{Moses11,rimmer16,tsai17}. When no boundary condition is specified, VULCAN defaults to zero flux. In addition to the flux, a velocity is useful for representing sources and sinks that scale with the species abundance. For example, the (dry/wet) deposition velocity is conventionally used to parametrize removal processes such as gas absorption or uptake by the surface \citep{Hu2012,Seinfeld2016}. At the upper boundary, an upward velocity can be assigned to account for escape or for any other process producing inflow/outflow \citep{Krasnopolsky2012}. A flux and a velocity can also be assigned together to describe the overall boundary condition of a single species. Constant mixing ratios are prescribed as the boundary condition when the detailed exchange processes are complex but the precise abundance is known. For example, the water vapor at the surface of an ocean planet with a substantial reservoir of water is expected to be set by saturation according to the relative humidity. Assigning constant mixing ratios is also practical for regional models, such as a Venus model with its lower boundary placed at the cloud layer, where the composition around the cloud layer is prescribed \citep{Krasnopolsky2012}. 
Since a constant mixing ratio does not allow the composition at the boundary to change, this boundary condition should not be used in conjunction with flux or velocity boundary conditions. \subsection{Chemical Networks}\label{sec:network} We have extended the previous C-H-O network of \cite{tsai17} to include nitrogen and sulfur in a hierarchical manner, i.e., C-H-O\footnote{We have updated the C-H-O network from \cite{tsai17} by adding \ce{HO2} and \ce{H2O2}.}, C-H-N-O, and C-H-N-O-S networks. Each network is provided in a reduced version and a full version, where ``reduced'' refers to both the oxidation state and the network size. The reduced versions strip off species and mechanisms (e.g., the ozone cycle) that are only important in oxidizing conditions; they are more computationally efficient and suited to typical hydrogen-dominated atmospheres. The full versions are designed for a wide range of main atmospheric constituents, from reducing to oxidizing. 
Hydrocarbon species are truncated at two carbons, while some higher-order hydrocarbons are included as necessary sinks for the two-carbon species or as haze precursors. The chemical network files with rate coefficients for the forward reactions can be found at \url{https://github.com/exoclime/VULCAN/tree/master/atm}. The full version of the C-H-N-O-S network includes 96 species: \ce{H}, \ce{H2}, \ce{O}, \ce{^1O}, \ce{O2}, \ce{O3}, \ce{OH}, \ce{H2O}, \ce{HO2}, \ce{H2O2}, \ce{CH}, \ce{C}, \ce{CH2}, \ce{^1CH2}, \ce{CH3}, \ce{CH4}, \ce{C2}, \ce{C2H2}, \ce{C2H}, \ce{C2H3}, \ce{C2H4}, \ce{C2H5}, \ce{C2H6}, \ce{C4H2}, \ce{C3H3}, \ce{C3H2}, \ce{C3H4}, \ce{C6H5}, \ce{C6H6}, \ce{C4H3}, \ce{C4H5}, \ce{CO}, \ce{CO2}, \ce{CH2OH}, \ce{HCO}, \ce{H2CO}, \ce{CH3O}, \ce{CH3OH}, \ce{CH3CO}, \ce{H2CCO}, \ce{HCCO}, \ce{CH3O2}, \ce{CH3OOH}, \ce{N}, \ce{N(^2D)}, \ce{N2}, \ce{NH}, \ce{CN}, \ce{HCN}, \ce{NH2}, \ce{NH3}, \ce{NO}, \ce{N2H2}, \ce{N2H}, \ce{N2H3}, \ce{N2H4}, \ce{HNO}, \ce{H2CN}, \ce{HC3N}, \ce{CH3CN}, \ce{CH2CN}, \ce{C2H3CN}, \ce{HNCO}, \ce{NO2}, \ce{N2O}, \ce{CH2NH2}, \ce{CH2NH}, \ce{CH3NH2}, \ce{CH3CHO}, \ce{NO3}, \ce{HNO3}, \ce{HNO2}, \ce{NCO}, \ce{N2O5}, \ce{S}, \ce{S2}, \ce{S3}, \ce{S4}, \ce{S8}, \ce{SH}, \ce{H2S}, \ce{HS2}, \ce{SO}, \ce{SO2}, \ce{SO3}, \ce{CS}, \ce{OCS}, \ce{CS2}, \ce{NS}, \ce{HCS}, \ce{HSO}, \ce{HSO3}, \ce{H2SO4}, \ce{CH3S}, \ce{CH3SH}, \ce{S2O} and about 570 forward thermochemical reactions and 69 photodissociation branches. All thermochemical reactions are reversed using the equilibrium constant derived from the NASA polynomials, as described in \cite{tsai17}, to ensure that chemical equilibrium can be kinetically achieved\footnote{We report a significant discrepancy in the new NASA 9-polynomials of \ce{CH2NH} (\url{http://garfield.chem.elte.hu/Burcat/NEWNASA.TXT}) compared to the early NASA 7-polynomials and other sources, which can lead to errors of several orders of magnitude. We use the fit from the NASA 7-polynomials for \ce{CH2NH} instead.}. 
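As an illustration of the reversal procedure, the sketch below (a hypothetical helper, not the actual VULCAN implementation) converts a forward rate coefficient into the reverse one via the equilibrium constant, with the Gibbs free-energy change of reaction supplied in units of $RT$ as obtained from the NASA polynomials; cgs units and a 1 bar reference pressure are assumed:

```python
import numpy as np

KB_CGS = 1.380649e-16   # Boltzmann constant (erg/K)
P_REF = 1.0e6           # reference pressure of 1 bar (dyn/cm^2)

def reverse_rate(k_fwd, dG_RT, dn, T):
    """Reverse rate coefficient from detailed balance: k_rev = k_fwd / K_eq.

    dG_RT : Gibbs free-energy change of reaction divided by R*T
            (products minus reactants, from the NASA polynomials)
    dn    : change in the number of moles (products minus reactants),
            converting the pressure-based K_p to a number-density-based K_c
    """
    K_eq = np.exp(-dG_RT) * (P_REF / (KB_CGS * T)) ** dn
    return k_fwd / K_eq
```

For a thermoneutral reaction with $\Delta n = 0$, $K_{\rm eq} = 1$ and the reverse rate equals the forward rate, as expected.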
We also provide an option for customizing modular networks. A subgroup of species can be freely picked, and only the reactions that involve the selected species form the new modular chemical network. Unlike equilibrium chemistry computed by minimizing the Gibbs free energy, setting up a sensible kinetics network requires care to incorporate trace species that are important intermediates. We have incorporated a simplified benzene mechanism into the generally two-carbon based kinetics, with the motivation of considering benzene in the context of haze precursors, as will be discussed in Section \ref{sec:haze}. The intention is to capture the main formation pathways at minimum cost in terms of the size of the network. We adopt one of the possible benzene-forming pathways through propargyl (\ce{C3H3}) recombination, \ce{C3H3 + C3H3 ->[\textrm{M}] C6H6} \citep{Frenklach2002}, where \ce{C3H3} is produced by \ce{CH3 + C2H -> C3H3 + H}. We then add hydrocarbons such as \ce{C3H2}, \ce{C3H4}, and \ce{C6H5} for the hydrogen abstraction reactions of \ce{C3H3} and \ce{C6H6} to complete the mechanism. The rate coefficients of the reactions are broadly drawn from the following: (1) the NIST database\footnote{\url{https://kinetics.nist.gov}}, (2) the KIDA database\footnote{\url{http://kida.obs.u-bordeaux1.fr/}}, and (3) literature sources including \cite{Moses2005,lavvas2008,Moses11,Zahnle2016}. Although most rate coefficients are chosen to be validated over as wide a temperature range as possible (300--2500 K), some rate coefficients are still only measured over limited temperature ranges, which has been a long-standing issue in kinetics. The kinetics becomes even more uncertain when sulfur is involved. For example, elemental sulfur in the gas phase exists in many allotropic forms, but the chain-forming reactions between the allotropes are poorly constrained. 
The recombination rates of S that form the first sulfur bond, \ce{S + S ->[\textrm{M}] S2}, from two early measurements \citep{Fair1969,Nicholas1979} differ by four orders of magnitude. A recent calculation by \cite{Du2008} confirms the value of \cite{Fair1969}, and we adopt the rate coefficient from \cite{Du2008} in our network. To address the uncertainties in sulfur kinetics, we perform sensitivity tests for selected key reactions in Section \ref{case}. \subsection{Computing Photochemistry} Stars are the ultimate energy source of disequilibrium chemistry. The stellar radiation interacting with the atmosphere can be converted into internal energy or initiate chemical reactions. Photodissociation describes the process in which energetic photons break molecules apart, schematically written as a unimolecular reaction with photons (h$\nu$) \begin{equation}\label{re:photolysis} \ce{A ->[h$\nu$] B + C}. \end{equation} Photodissociation typically produces active free radicals and initiates a chain of reactions that are essential to atmospheric chemistry (e.g., the ozone cycle on Earth or the organic haze formation on Titan). The radiative flux that drives photolysis is conventionally defined by the number of photons {\it from all directions} per unit time per unit area per unit wavelength and is referred to as the actinic flux, $J(z,\lambda)$, with $z$ being altitude and $\lambda$ wavelength. $J(z,\lambda)$ consists of two components, the direct beam and diffuse radiation: \begin{equation} J(z,\lambda) = J(\infty,\lambda)e^{-\tau(z,\lambda)/\mu} + J_{\textrm{diff}}(z,\lambda), \label{eq:sflux} \end{equation} where $\tau$ is the optical depth and $\mu = \cos\theta$, with $\theta$ being the zenith angle of the incident beam. 
The first term of Equation (\ref{eq:sflux}) describes the attenuated actinic flux reaching the plane perpendicular to the direction of the beam (there is no cosine prefactor as there is for radiative heating, since the intercepting molecules are randomly oriented and the absorption is independent of the direction of the stellar beam). The optical depth $\tau$, which accounts for extinction from both absorption and scattering, is calculated as \begin{equation} \tau = \int \left[ \sum_i (\sigma_{a,i} + \sigma_{s,i}) n_i \right] dz \label{eq:tau} \end{equation} where $\sigma_{a,i}$ and $\sigma_{s,i}$ are the absorption and scattering cross sections, respectively. The absorption cross section $\sigma_{a,i}$ can differ from the photodissociation cross section because absorption is not necessarily followed by dissociation. The diffuse flux $J_{\textrm{diff}}$ is the scattered radiation defined by integrating the diffuse specific intensity over all directions. We use the two-stream approximation of \cite{Malik2019} to first solve for the diffuse flux and convert it to total intensity using the first Eddington coefficient \citep{Heng2018}: \begin{equation} J_{\textrm{diff}}(z,\lambda) = F_{\textrm{diff}} / \epsilon \end{equation} where $F_{\textrm{diff}}$ is the total diffuse flux, $F_{\textrm{diff}} \equiv F_{\uparrow}^{\textrm{diff}} + F_{\downarrow}^{\textrm{diff}}$, and $\epsilon$ is the first Eddington coefficient, with a value of 0.5 for isotropic flux. Although multiple scattering is not explicitly included in the expression of \cite{Malik2019}, the process can be approximated through iteration, and we find that the equilibrium state of multiple scattering can normally be reached within 200 iterations for a strongly irradiated hot Jupiter. In the code, we provide the option to update the actinic flux only periodically to save computing time. 
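The direct-beam component of the actinic flux can be sketched as follows (a minimal illustration with hypothetical names; the diffuse component from the two-stream solver would be added on top of this):

```python
import numpy as np

def direct_actinic_flux(J_top, sigma_a, sigma_s, n, dz, mu=0.5):
    """First term of the actinic flux: J_top * exp(-tau/mu), with the
    optical depth accumulated from the top of the atmosphere downward.

    sigma_a, sigma_s : absorption and scattering cross sections (cm^2)
    n                : total number density per layer, ordered top-down (cm^-3)
    dz               : layer thickness (cm)
    mu               : cosine of the stellar zenith angle
    """
    dtau = (sigma_a + sigma_s) * n * dz   # extinction optical depth per layer
    tau = np.cumsum(dtau)                 # column optical depth from the top
    return J_top * np.exp(-tau / mu)
```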
Once the actinic flux has been obtained, the photolysis rate coefficient can be determined by integrating the product of the actinic flux and the absorption cross section over wavelength, \begin{equation} k = \int_{\lambda} q(\lambda) \sigma_a(\lambda) J(z,\lambda) d\lambda, \label{eq:photo_rate} \end{equation} where $q(\lambda)$ is the quantum yield (photons$^{-1}$), describing the probability of triggering a photolysis branch for each absorbed photon. The photolysis rate of Reaction (\ref{re:photolysis}) is then \begin{equation} \frac{d n_{\textrm{A}} }{dt} = - k n_{\textrm{A}}. \end{equation} In VULCAN, we adopt the cross sections from the Leiden Observatory database\footnote{\url{http://home.strw.leidenuniv.nl/~ewine/photo}} \citep{Heays2017} whenever possible, which provides tabulated photoabsorption, photodissociation, and photoionisation cross sections with uncertainty rankings. The data have been benchmarked against other established databases, such as the PHIDRATES database\footnote{\url{http://phidrates.space.swri.edu}} \citep{Huebner1992,Huebner2015}, as detailed in \cite{Heays2017}. The full list of photolysis reactions and references is given in Table \ref{tab:photo_rates}. The spectral resolution of the stellar flux and cross sections can be important when computing Equation (\ref{eq:photo_rate}) numerically. The minimum resolution used in the model should be sufficient to resolve the line structures in the stellar spectra and cross sections. We discuss the errors from under-resolving in Appendix \ref{app:resolution}. \subsection{Temperature-Dependent UV Cross Sections}\label{sec:Tcross} Most laboratory measurements of UV cross sections are conducted at room temperature or lower, which might raise reliability issues when they are applied to high-temperature atmospheres. 
\cite{Heays2017} suggested that as temperature increases by a few hundred K, the excitation of vibrational and rotational levels (limited to $v \leq 2$) in many cases only causes minor broadening of the cross sections and does not alter their wavelength-integrated values. However, for molecules with prominent transitions between excited vibrational states (e.g., \ce{CO2}), the temperature dependence of the cross section and photolysis rate can be important. Recent work has started to investigate the high-temperature UV cross sections of a few molecules \citep{Venot2013,Venot2018}. Given the available data, we have included temperature-dependent photoabsorption cross sections of \ce{H2O} (EXOMOL\footnote{\url{http://www.exomol.com/data/data-types/xsec_VUV/}}), \ce{CO2} \citep{Venot2018} (with 1160 K from EXOMOL), \ce{NH3} (EXOMOL), \ce{O2} \citep{O2-lowT,O2-highT}, SH \citep{SH_cross}, \ce{H2S} \citep{SH_cross}, \ce{COS} \citep{SH_cross}, and \ce{CS2} \citep{SH_cross} in the current version of VULCAN. The temperature dependence of the UV cross sections of these molecules can be found in Figure \ref{fig:cross_T}. It is evident that both the absorption threshold and the cross sections of \ce{CO2} exhibit strong temperature dependence. For \ce{H2O}, we have incorporated the recent measurement of the cross section above 200 nm \citep{Ranjan20}. We follow \cite{Ranjan20} in taking a log-linear fit for the noisy data above 216 nm. In addition, we have included the measured data from \cite{Schulz2002} for temperatures above 1500 K. A layer-by-layer interpolation of the temperature-dependent cross sections is implemented in the model, i.e., the cross section of a single species is allowed to vary across the atmosphere with temperature. The interpolation is linear in temperature and logarithmic in the cross section. 
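The interpolation scheme just described, together with the photolysis integral of Equation (\ref{eq:photo_rate}), can be sketched as follows (hypothetical helper functions; trapezoidal quadrature is our choice here, not necessarily the exact quadrature used in the code):

```python
import numpy as np

def interp_cross_section(T, T_grid, sigma_grid):
    """Interpolate tabulated cross sections at temperature T:
    linear in temperature, logarithmic in the cross section.

    T_grid     : tabulated temperatures, shape (nT,)
    sigma_grid : cross sections, shape (nT, n_wavelength)
    """
    log_sigma = np.array([np.interp(T, T_grid, np.log(sigma_grid[:, i]))
                          for i in range(sigma_grid.shape[1])])
    return np.exp(log_sigma)

def photolysis_rate(wl, q, sigma, J):
    """k = integral of q(lambda) sigma(lambda) J(z, lambda) d(lambda),
    evaluated with the trapezoidal rule."""
    y = q * sigma * J
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(wl))
```

A midpoint temperature thus yields the geometric mean of the two tabulated cross sections, consistent with log-space interpolation.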
With limited data, we find that linear interpolation in temperature generally underestimates the cross sections, and our implementation is therefore a conservative estimate of how photolysis increases with temperature. \subsection{Condensation and Rainout} VULCAN handles condensation and evaporation using the growth rate of particles, assuming sufficient activated nuclei. For a schematic condensation/evaporation reaction \begin{equation} \ce{A_{(gas)} <-> A_{(particle)}}, \end{equation} the reaction rate is given by the mass balance equation \citep{Seinfeld2016} \begin{equation} \frac{d n_{\textrm{A}} }{dt} = - \frac{D_{\ce{A}} m_{\ce{A}}}{\rho_p r_p^2} (n_{\ce{A}} - n^{\textrm{sat}}_{\ce{A}}) n_{\ce{A}} \label{eq:conden-rate} \end{equation} where $D_{\ce{A}}$ and $m_{\ce{A}}$ are the molecular diffusion coefficient and molecular mass of gas A, $\rho_p$ and $r_p$ are the density and radius of the particle, and $n_{\ce{A}}$ and $n^{\textrm{sat}}_{\ce{A}}$ are the number density and saturation number density of gas A, respectively. Equation (\ref{eq:conden-rate}) describes the growth rate by diffusion for particles of size $r_p$ in the continuum regime (particles larger than the mean free path, i.e., Knudsen number $K_n < 1$). A negative value of Equation (\ref{eq:conden-rate}) corresponds to condensation when $n_{\ce{A}} > n^{\textrm{sat}}_{\ce{A}}$, and a positive value corresponds to evaporation when $n_{\ce{A}} < n^{\textrm{sat}}_{\ce{A}}$. Our condensation expression takes the same form as in \cite{Hu2012,rimmer16}, except that those works use the growth rate of particles in the kinetic regime (particles smaller than the mean free path, i.e., $K_n > 1$). 
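Equation (\ref{eq:conden-rate}) translates directly into a one-line rate function, and its sign behavior can be checked with the sketch below (a hypothetical helper; cgs units assumed):

```python
def condensation_rate(n, n_sat, D, m, rho_p, r_p):
    """Mass-balance rate in the continuum regime: negative (condensation)
    when n > n_sat, positive (evaporation) when n < n_sat.

    D     : molecular diffusion coefficient of the gas (cm^2/s)
    m     : molecular mass of the gas (g)
    rho_p : particle density (g/cm^3)
    r_p   : particle radius (cm)
    """
    return -(D * m) / (rho_p * r_p ** 2) * (n - n_sat) * n
```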
When applying $K_n = \lambda / r_p$ with the mean free path $\lambda = \frac{\pi \mu v_{th}}{4 P}$, where $\mu$ is the dynamic viscosity, $v_{th}$ the thermal velocity, and $P$ the pressure, a \ce{H2} atmosphere enters the kinetic regime with $K_n > 10$ above 1 mbar for a temperature of 400 K and above 0.1 $\mu$bar for a temperature of 1000 K. We find that for most applications, condensation occurs in the lower atmosphere with micron-sized or larger particles, and the continuum regime is more suitable. Since condensation typically operates on a relatively short timescale, we implement an option to switch off condensation and fix the abundances of the condensing species and the condensates after dynamic equilibrium has been reached. The approach is similar to the quasi-steady-state assumption (QSSA) method, which decouples the fast and slow reactions to ease the computational load. After the gas condenses into particles, the particles fall at the terminal settling velocity ($v_s$) derived from Stokes' law \citep{Seinfeld2016}, \begin{equation} v_s = \frac{2}{9} \frac{\rho_p r_p^2 g}{\mu} \label{eq:vs} \end{equation} where $\mu$ is the atmospheric dynamic viscosity, with values taken from \cite{Cloutman2000} for the corresponding background gas. We have again assumed a large particle size to simplify the slip correction factor (the correction for non-continuum effects) to unity in Equation (\ref{eq:vs}). In this work, we have implemented and will demonstrate the condensation of \ce{H2O}, \ce{NH3}, \ce{S2}, and \ce{S8} in the following sections. \subsection{Chemistry of Ti and V Compounds}\label{sec:Ti} TiO (titanium oxide) and VO (vanadium oxide) are present in the gas phase in cool stars and brown dwarfs where temperatures exceed 2000 K. Highly irradiated hot Jupiters have been suggested to manifest inverted temperature structures in the stratosphere due to the strong optical absorption of TiO and VO vapor \citep{Hubeny03}. 
The pioneering work proposing the role of TiO and VO in irradiated atmospheres \citep{Fortney08} is based on equilibrium chemistry, where the authors argue that the conversion between TiO and \ce{TiO2} is fast enough for TiO to remain in chemical equilibrium. However, it is not clear whether the conversion reactions involving Ti or other titanium species are similarly fast. For example, the interconversion \ce{CO <-> CO2} is relatively fast, but the ultimate CO abundance is still controlled by the slower \ce{CO <-> CH4} interconversion. In addition to TiO, titanium hydride (TiH) has been suggested to be important in brown dwarfs by \cite{Burrows2005}. As thermodynamic data for TiH are not available in the literature or standard databases, \cite{Burrows2005} performed ab initio calculations of the Gibbs free energy of TiH (based on the partition function obtained from the spectroscopic constants). To explore the kinetics of titanium and vanadium, we expand the species list to include Ti, TiO, \ce{TiO2}, TiH, TiC, TiN, V, and VO. As only Ti, TiO, and \ce{TiO2} are available among titanium compounds in the NASA polynomials, we adopt the thermodynamic data of TiH from \cite{Burrows2005}, of TiC from \cite{Woitke2018}, and the rest from \cite{tsuji73}. While there are a few low-temperature measurements of reactions of titanium/vanadium species using laser vaporization, kinetics data at high temperature are nearly non-existent. As a first step, we perform simple estimates of the unknown rate constants of titanium/vanadium species. First, we look for kinetics data of analogous transition metals, such as Fe, and assume the same rate coefficient as the analogous reaction if it has been measured at high temperature. When high-temperature data are not available, we estimate the temperature dependence based on transition state theory. 
For an endothermic reaction, we approximate the activation energy (the exponential term in the Arrhenius expression) by the enthalpy difference between the products and reactants, assuming the energy increase of the transition state is small compared to the enthalpy difference for reactions involving radicals\footnote{To verify our approach, we compared the activation energy estimated from the enthalpy difference to that of well-measured reactions; e.g., the endothermic reactions \ce{H2O + H -> OH + H2} and \ce{CO2 + H -> CO + OH} have activation energies of 10800 K \citep{Davidson1989} and 13300 K \citep{Tsang1986}, respectively, whereas our estimate yields 7200 K and 10300 K, respectively.}. Once the activation energy is obtained, the pre-exponential factor is adjusted to fit the reference value at low temperature. The titanium/vanadium kinetics we adopt are listed in Table \ref{tab:tio}. For photolysis, we include photodissociation of TiO, \ce{TiO2}, TiH, TiC, and VO. We estimate their UV cross sections from that of FeO \citep{FeO2005} at 252.39 nm and scale the photolysis threshold according to their bond dissociation energies. \subsection{Photochemical Haze Precursors}\label{sec:haze} Observations have informed us that clouds or photochemical hazes are ubiquitous in a diverse range of planetary atmospheres. Microphysics models that include processes such as nucleation, coagulation, condensation, and evaporation of particles \citep[e.g., ][]{Gao2018b,Yui2019,Lavvas2017} simulate the formation and distribution of aerosol particles of various sizes. Given the complexity and uncertainty of the polymerizing pathways, one common approach is to select precursor species as a proxy and assume they will further grow into complex hydrocarbons \citep{Morley2013,Yui2018}. Typical choices of haze precursors include \ce{C2H}$_x$ and HCN, a choice that is also limited by our kinetics knowledge and computing capacity. 
In this work, we preferentially consider precursors that are more closely related to forming polycyclic aromatic hydrocarbons (PAHs) or nitriles. PAHs are complex hydrocarbons made of multiple aromatic rings, commonly found in smog pollution on Earth and expected to be associated with the organic haze on Titan \citep{Zhao2018}. In the polar regions of Jupiter, where charged particles are the main energy source, ion chemistry has also been suggested to promote the formation of PAHs and organic haze \citep{Wong2003}. Once the first aromatic ring, benzene, has formed, the thermodynamic state (enthalpy and entropy) does not vary much with the processes of attaching and arranging the rings. From the kinetics point of view, the classic mechanism for making complex hydrocarbons, H-Abstraction-Carbon-Addition (HACA), requires aromatic hydrocarbons and acetylene in the primary abstraction and addition steps \citep[e.g.][]{Frenklach2020}. It is conceivable that benzene formation is the rate-limiting step in forming complex hydrocarbons, as the growth rate increases downstream of benzene. In practice, while the fundamental pathways leading to PAHs remain elusive \citep{Wang2011,Zhao2018}, combustion studies can, to a certain degree, provide a good handle on the formation of benzene. Therefore, we suggest considering benzene as an important haze precursor. One important caveat in modeling benzene is that its photodissociation branches are poorly quantified \citep[see e.g.][]{Lebonnois2005}. The main photolysis products are possibly the phenyl radical (\ce{C6H5}) and the benzyne radical (\ce{C6H4}) \citep{Suto1992}. If these radicals absorb photons again, they could fragment into smaller, linear molecules such as \ce{C4H3} and \ce{C3H3}. We adopt the cross sections of \ce{C6H6} from \cite{Boechat04} and \cite{Capalbo16}. 
For simplicity, we assume that benzene dissociation primarily yields the phenyl radical (\ce{C6H5}), with a small fraction leading to \ce{C3H3} ($\sim$ 15$\%$, based on \citealp{Kislov2004}). Although HCN is the basic molecule of nitrile chemistry, it is unlikely that most of the HCN will convert into complex nitriles. Nitrile formation is more likely to be limited by the less abundant \ce{H2CN}, \ce{CH2NH}, or \ce{CH3CN}. Hence we include these species along with \ce{HC3N} to represent the nitrile-family precursors. For sulfur gases, in addition to the condensation of sulfur allotropes (S$_x$), we also consider \ce{CS2}, following the laboratory experiments of \cite{He2020}. Overall, we compose the following species as photochemical haze precursors: \ce{C2H2}, \ce{C2H6}, \ce{C4H2}, \ce{C6H6}, HCN, \ce{HC3N}, \ce{CH2NH}, \ce{CH3CN}, \ce{CS2}. \section{Model Validation}\label{validation} \begin{table*} \begin{center} \caption{Model Validation Setup} \begin{tabular}{lllllll} \hline \hline Planet & P-T profile & Network\footnote{files available in supplementary material} & stellar UV & Gravity\footnote{at the surface for Earth and defined at 1 bar for gaseous planets}& Upper Boundary & Lower Boundary\\ &&&&(cm s$^{-2}$)&\\ \hline \hline HD 189733b & \cite{Moses11} & N-C-H-O & Eps Eri\footnote{from the StarCat database (\url{https://casa.colorado.edu/~ayres/StarCAT}) \citep{Ayres2010} and following the same scaling adjustment as \cite{Moses11}} & 2140 & H escape\footnote{Assuming diffusion-limited escape rate} & zero-flux\\ \hline \multirow{2}*{Jupiter} & \cite{Moses2005} & N-C-H-O-lowT & \cite{Gueymard2018} & 2479 & \ce{H2O}, CO, \ce{CO2} & zero-flux\\ &+ dry adiabat& & & & inflow & \\ \hline Earth & COSPAR & S-N-C-H-O-full & \cite{Gueymard2018}& 980 & H, \ce{H2} escape & Table \ref{tab:BC_Earth}\\ \hline \hline \end{tabular} \end{center} \label{tab:validation} \end{table*} \subsection{HD 189733b}\label{sec:hd189} We have benchmarked our thermochemical kinetics 
results using a C-H-O network with vertical transport against \cite{Moses11} for HD 189733b and HD 209458b in \cite{tsai17}. In this work, we compare our results including N-C-H-O photochemistry to \cite{Moses11} and \cite{Venot12} (M11 and V12 hereafter). V12 use a chemical kinetics scheme derived from combustion applications and find disequilibrium abundances of \ce{CH4} and \ce{NH3} different from those in M11. Since then, a size-reduced network based on V12 has been developed \citep{Venot2019}, with the motivation of supporting computationally heavy simulations. In particular, the controversial methanol mechanism, which has been identified as the cause of the differences in \ce{CH4}-CO conversion \citep{Moses11,Moses2014}, is further updated and analyzed in \cite{Venot2020}. Therefore, aiming to resolve the model discrepancies, we run an additional model with VULCAN implemented with the updated reduced network from \cite{Venot2020}. The planetary parameters and model settings are listed in Table \ref{tab:validation}. Before diving into the detailed comparison, we provide an overview of the chemical profiles and absorption properties of HD 189733b and HD 209458b in Figure \ref{fig:HD189-209}. \subsubsection{Disequilibrium Effects} The left panels of Figure \ref{fig:HD189-209} depict how vertical mixing and photochemistry drive the composition out of equilibrium on HD 189733b, by isolating the two effects. The underlying processes can be understood as a general property of hot Jupiters, as discussed in \cite{Moses11,Venot12,Moses2014,Hobbs19,Karan2019}. Equilibrium chemistry prevails in the deep, hot region, whereas energetic photons dissociate molecules and produce reactive radicals in the upper atmosphere. 
Between the two regions, the composition distribution is controlled by vertical transport, viz., species in equilibrium at depth are transported upward and become quenched where vertical mixing dominates over chemical reactions, while photochemical products are mixed downward and initiate sequences of reactions. The right panels of Figure \ref{fig:HD189-209} show the UV photosphere, where the optical depth equals one, decomposed into the contributions from the main molecules. Our photochemical model captures several general transmission properties of irradiated \ce{H2}-atmospheres: \ce{H2} provides the dominant absorption in the EUV (10--120 nm), whereas \ce{H2O} and \ce{CO} are the dominant absorbers in the FUV (120--200 nm). The window around 160--200 nm is particularly important for water dissociation, which drives a catalytic cycle converting \ce{H2} into atomic H \citep{Liang2003,Moses11}. In the NUV (300--400 nm), radiation can penetrate deep down to $\sim$ 1 bar before being scattered. The photospheres in Figure \ref{fig:HD189-209} descend from about 1 $\mu$bar to 10 mbar (from the end of \ce{H2} shielding to the tail of ammonia absorption), denoting the photochemically active region of the atmosphere. HD 209458b shares qualitatively similar results with HD 189733b. Owing to its higher temperature and inverted thermal structure (see Figure 1 of \cite{Moses11}), the quench level is lifted higher and photolysis has little influence, as can be seen in Figure \ref{fig:HD189-209}. The composition distribution on HD 209458b can be described by a lower equilibrium region and an upper quenched region. We now focus on HD 189733b for the model comparison, as disequilibrium processes contribute more than on the hotter HD 209458b (see \cite{Hobbs19} for a model comparison of HD 209458b). 
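The quench levels discussed above arise where the chemical timescale overtakes the vertical mixing timescale. A minimal sketch of this criterion, assuming (as is common practice) a mixing length of order the scale height so that $\tau_{\rm mix} \sim H^2/K_{\rm zz}$ (the layer ordering and timescale inputs are hypothetical):

```python
import numpy as np

def quench_index(tau_chem, Kzz, H):
    """Return the index of the first layer (ordered from depth upward) where
    the chemical timescale exceeds the mixing timescale tau_mix ~ H^2 / Kzz,
    i.e., where the abundance becomes quenched; None if never quenched."""
    tau_mix = H ** 2 / Kzz
    quenched = tau_chem > tau_mix
    return int(np.argmax(quenched)) if quenched.any() else None
```

Above this level the mixing ratio is frozen at its equilibrium value at the quench point, which is why small shifts of the quench level translate into sizeable abundance differences for species with steep equilibrium gradients.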
\begin{figure*}[tph] \begin{center} \includegraphics[width=\columnwidth]{fig/HD189-vulcan} \includegraphics[width=\columnwidth]{fig/photosphere_HD189} \includegraphics[width=\columnwidth]{fig/HD209-vulcan} \includegraphics[width=\columnwidth]{fig/photosphere_HD209} \end{center} \caption{C-H-N-O photochemical kinetics results (top left) for HD 189733b (solid), compared with a model including vertical mixing but no photochemistry (dashed) and with thermochemical equilibrium (dotted). The temperature-pressure structure and eddy diffusion (K$_{zz}$) profile are taken from the dayside-average profiles in \cite{Moses11} (their Figures 1 and 2). On the top right, we show the pressure level where energetic photons are mostly absorbed, i.e., optical depth $\tau$ = 1 (black), decomposed into the main absorbers. The bottom panels show the same as the top panels but for HD 209458b.} \label{fig:HD189-209} \end{figure*} \subsubsection{Model Comparison with \cite{Moses11} and \cite{Venot12}} The HD 189733b model comparisons between VULCAN, M11, and V12 are showcased in Figure \ref{fig:HD189-VM}, where the top row highlights the major species and the following rows are grouped into carbon, oxygen, and nitrogen species. For the major species, VULCAN produces profiles more consistent with M11, while there are notable differences from V12 in H, \ce{CH4}, \ce{NH3}, and HCN. \ce{CH4} and \ce{NH3} are quenched from below the 1 bar level until being attacked by H around 1 mbar. Hence the differences from V12 in the photospheric region ($\sim$ 1 bar -- 1 mbar) are due to thermochemical kinetics rather than photochemical sources. Nitrogen species generally exhibit larger variations among the models, reflecting the kinetics uncertainties. \paragraph{Quenching of \ce{CH4} and \ce{NH3}}\mbox{} The sharp gradients of the equilibrium distributions of \ce{CH4} and \ce{NH3} (Figure \ref{fig:HD189-209}) imply that the abundances are sensitive to the quench levels, viz. 
small differences in the quench levels can lead to considerable differences in abundance. The key reactions responsible for the conversion at the quench levels deserve a closer look. The match of the quenched \ce{CH4} abundance between VULCAN and M11 has been discussed in \cite{tsai17}, in which we identify a similar pathway of \ce{CH4} destruction as M11. The inclusion of nitrogen does not change this conclusion since nitrogen does not participate in the \ce{CH4}-CO conversion. It can be seen that \ce{CH4} is quenched at a higher level with a lower mixing ratio in V12, as a result of faster \ce{CH4}-CO conversion. \cite{Moses2014} identified the faster methanol decomposition \ce{H + CH3OH -> CH3 + H2O} measured by \cite{Hidaka1989} and adopted in V12 as the key reaction responsible for the shorter \ce{CH4} timescale in V12. \cite{Moses2014} suggested that the rate is overestimated by \cite{Hidaka1989}, based on the high energy barrier of the reaction. In response, \cite{Venot2019} removed the controversial reaction from \cite{Hidaka1989} and updated their chemical scheme with a newly validated \ce{CH3OH} combustion work \citep{Burke2016}, given the importance of methanol as an intermediate species for \ce{CH4}-CO conversion. Intriguingly, \cite{Venot2019} still find a methane abundance rather close to that in V12. Attempting to resolve this mystery, we further run our model with the \cite{Venot2020} reduced scheme\footnote{The reduced scheme captures the key reactions at work from V12 and has been benchmarked against V12 \citep{Venot2019}. The two schemes are approximately equivalent regarding the quenching of main species.} integrated with the new \ce{CH3OH} mechanism. We did not incorporate the photolysis scheme from V12, but photolysis has no effect on the quenching comparison below 1 bar. 
Contrary to the findings in \cite{Venot2020}, their new scheme implemented in our model indeed shows a slower \ce{CH4}-CO conversion and brings the \ce{CH4} profile closer to VULCAN and M11 (dotted line in Figure \ref{fig:HD189-VM}-(b)). Our model implemented with the \cite{Venot2020} scheme predicts a quenched methane mixing ratio of 1.13 $\times 10^{-5}$, close to 1.51 $\times 10^{-5}$ in M11 and 1.26 $\times 10^{-5}$ in our nominal model, whereas V12 with the faster methanol decomposition from \cite{Hidaka1989} predicts 5.20 $\times 10^{-6}$. We conclude that the methanol decomposition indeed results in faster \ce{CH4}-CO conversion and subsequently lowers the \ce{CH4} abundance in V12. For nitrogen chemistry, the high-temperature kinetics is more uncertain and many reducing reactions relevant for \ce{H2}-atmospheres are not available in the NIST database. We drew data from the combustion literature (\cite{Dean2000}, same as M11) and the KIDA database. In particular, there are considerable uncertainties regarding the rates of the reactions that control the \ce{NH3}-\ce{N2} conversion, as extensively discussed in \cite{Moses2014}. We follow the suggestions in \cite{Moses2014} and adopt the rate coefficient of \ce{ NH3 + NH2 -> N2H3 + H2} from \cite{Dean1984} and that of \ce{ NH2 + NH2 -> N2H2 + H2} from \cite{Klippenstein2009}, since the rate from \cite{Konnov2001} used in V12 was measured at low temperatures. As \ce{NH3} progressively becomes fully quenched in the region between a few hundred bar and 1 bar, more than a single pathway and rate-limiting step for the \ce{NH3}-\ce{N2} conversion effectively control the \ce{NH3} abundance. 
For pressures greater than $\sim$ 30 bar, we identify the pathway \begin{eqnarray} \begin{aligned} \ce{ NH3 + H &-> NH2 + H2}\\ \ce{ NH3 + NH2 &-> N2H3 + H2} \; (\textrm{i})\\ \ce{ N2H3 &->[\textrm{M}] N2H2 + H} \; (\textrm{ii})\\ \ce{ N2H2 + H &-> N2H + H2}\\ \ce{ N2H &->[\textrm{M}] N2 + H}\\ \noalign{\vglue 5pt} \hline \noalign{\vglue 5pt} \mbox{net} : \ce{2NH3 &-> N2 + 3H2}. \end{aligned} \label{path-nh3-1} \end{eqnarray} where the rate-limiting step switches from (\ref{path-nh3-1})-(i) to (\ref{path-nh3-1})-(ii) with increasing pressure. In the region with pressure between 30 and 1 bar, we find two pathways with comparable contributions: \begin{eqnarray} \begin{aligned} 2(\ce{ NH3 + H &-> NH2 + H2})\\ \ce{ NH2 + H &-> NH + H2}\\ \ce{ NH + NH2 &-> N2H2 + H} \; (\textrm{iii})\\ \ce{ N2H2 + H &-> N2H + H2}\\ \ce{ N2H &->[\textrm{M}] N2 + H}\\ \ce{ H2 &->[\textrm{M}] 2H}\\ \noalign{\vglue 5pt} \hline % \noalign{\vglue 5pt} \mbox{net} : \ce{2NH3 &-> N2 + 3H2}. \end{aligned} \label{path-nh3-2} \end{eqnarray} and \begin{eqnarray} \begin{aligned} \ce{ NH3 + H &-> NH2 + H2}\\ \ce{ NH2 + H &-> NH + H2}\\ \ce{ NH + H &-> N + H2}\\ \ce{ NH3 + N &-> N2H + H2} \; (\textrm{iv})\\ \ce{ N2H &->[\textrm{M}] N2 + H}\\ \ce{ H2 &->[\textrm{M}] 2H}\\ \noalign{\vglue 5pt} \hline % \noalign{\vglue 5pt} \mbox{net} : \ce{2NH3 &-> N2 + 3H2}. \end{aligned} \label{path-nh3-3} \end{eqnarray} where (\ref{path-nh3-2})-(iii) and (\ref{path-nh3-3})-(iv) are the rate-limiting steps. Our pathways (\ref{path-nh3-1}) and (\ref{path-nh3-2}) are identical to those in M11 ((5) and (6) in \cite{Moses11}), although we find that (\ref{path-nh3-1})-(i) still plays a role in controlling the \ce{NH3} quenching, even with the high energy barrier given by \cite{Dean1984}. 
As we adopt the same rates for several key reactions relevant to the \ce{NH3}-\ce{N2} conversion, our model reproduces the \ce{NH3} profile of M11 very closely, whereas V12, with a faster \ce{NH3}-\ce{N2} conversion, predicts a higher quench level and a lower abundance for \ce{NH3} (Figure \ref{fig:HD189-VM}-(b)). In all, we reiterate that further investigation of the key reactions (e.g., (\ref{path-nh3-1})-(i), (\ref{path-nh3-1})-(ii), (\ref{path-nh3-2})-(iii), (\ref{path-nh3-3})-(iv)) at high temperatures is required to improve our ability to accurately model the \ce{NH3}-\ce{N2} system. \begin{figure*}[!ht] \begin{center} \includegraphics[width=\columnwidth]{fig/HD189-main-1} \includegraphics[width=\columnwidth]{fig/HD189-main-2} \includegraphics[width=\columnwidth]{fig/HD189-C-1} \includegraphics[width=\columnwidth]{fig/HD189-C-2} \includegraphics[width=\columnwidth]{fig/HD189-O-1} \includegraphics[width=\columnwidth]{fig/HD189-O-2} \end{center} \caption{Comparison of atmospheric compositions on HD 189733b computed by VULCAN (solid), \cite{Moses11} (dashed), and \cite{Venot12} (dashed-dotted), showing volume mixing ratios of main species ((a), (b)), carbon species ((c), (d)), oxygen species ((e), (f)), and nitrogen species ((g), (h)) (some species are not included in V12). Additionally, dotted lines for \ce{CH4} and \ce{CO2} are from running VULCAN with the updated methanol scheme from \cite{Venot2020}.} \label{fig:HD189-VM} \end{figure*} \begin{figure*}[!ht] \ContinuedFloat \begin{center} \includegraphics[width=\columnwidth]{fig/HD189-N-1} \includegraphics[width=\columnwidth]{fig/HD189-N-2} \end{center} \caption{(cont.)} \end{figure*} \paragraph{Production of \ce{CO2} and HCN}\mbox{} Another unexpected change in \cite{Venot2020} is that \ce{CO2} remains in chemical equilibrium across the atmosphere. Our model with the \cite{Venot2020} scheme implemented confirms the same result. 
This differs markedly from all other models, including V12, where \ce{CO2} is enhanced by photochemically produced OH: \begin{equation} \vspace{-0.2cm} \ce{CO + OH -> CO2 + H} \vspace{+0.03cm} \label{re:CO} \end{equation} This reaction with the OH radical is expected to rapidly convert CO into \ce{CO2}, and its rate is well studied owing to its importance in the terrestrial atmosphere as well as in combustion kinetics. The rate coefficient of Reaction (\ref{re:CO}) adopted in \cite{Venot2020}, 2.589 $\times$ 10$^{-16}$ ($T$/300)$^{1.5}$ exp(251.4 / $T$), has a pre-exponential factor about two orders of magnitude smaller than the typical values listed in NIST, as compared in Figure \ref{fig:rate_CO2}. The slow CO oxidation shuts off the \ce{CO2} production and makes \ce{CO2} retain chemical equilibrium in \cite{Venot2020}. We are not sure if this rate constant is part of the updated methanol scheme from \cite{Burke2016}, as to our knowledge, the base network in \cite{Burke2016} takes the rate coefficient of Reaction (\ref{re:CO}) from \cite{joshi06}, which is consistent with the literature and faster than that in \cite{Venot2020}. The dissociation of \ce{CH4} and \ce{NH3} leads to the formation of HCN, the primary photochemical product that couples carbon and nitrogen on HD 189733b. HCN becomes the most abundant carbon-bearing molecule next to CO in the upper atmosphere. We identify the pathway in the HCN-dominated region between 1 mbar and 1 $\mu$bar as \begin{eqnarray} \begin{aligned} \ce{ CH4 + H &-> CH3 + H2}\\ \ce{ NH3 + H &-> NH2 + H2}\\ \ce{ NH2 + H &-> NH + H2}\\ \ce{ NH + H &-> N + H2}\\ \ce{ CH3 + N &-> H2CN + H}\\ \ce{ H2CN + H &-> HCN + H2}\\ 2(\ce{ H2O &->[h\nu] OH + H})\\ 2(\ce{ OH + H2 &-> H2O + H})\\ \noalign{\vglue 5pt} \hline % \noalign{\vglue 5pt} \mbox{net} : \ce{CH4 + NH3 &-> HCN + 3H2} \end{aligned} \label{path-hcn} \end{eqnarray} which is identical to (14) of \cite{Moses11}. 
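As an aside, the rate coefficient quoted above for Reaction (\ref{re:CO}) follows the standard modified Arrhenius form $k(T) = A\,(T/300)^{n}\exp(-E_a/k_B T)$, which is simple to evaluate directly. The helper function below is ours, with units of cm$^3$ s$^{-1}$ assumed and a representative temperature chosen for illustration.

```python
import math

def k_modified_arrhenius(T, A, n, Ea_over_kB):
    """Modified Arrhenius form: k(T) = A * (T/300)^n * exp(-Ea_over_kB / T).
    Ea_over_kB is the activation energy divided by k_B, in Kelvin."""
    return A * (T / 300.0) ** n * math.exp(-Ea_over_kB / T)

# CO + OH -> CO2 + H as quoted in the text from Venot et al. (2020);
# the positive exponential term corresponds to Ea_over_kB = -251.4 K.
k_1000 = k_modified_arrhenius(1000.0, 2.589e-16, 1.5, -251.4)
print(f"k(1000 K) = {k_1000:.2e} cm^3 s^-1")  # ~2.0e-15
```

Note the positive exponential term in the quoted expression, i.e. a mildly negative effective activation energy; the two-order-of-magnitude offset from typical literature values thus sits entirely in the pre-exponential factor.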
HCN in V12 naturally follows the scarcer \ce{CH4} and \ce{NH3} and is present in a lower abundance. We note that \cite{Pearce2019} have run simulations and discovered previously unknown rate coefficients; e.g., the destruction of HCN by reacting with the excited \ce{N(^2D)} could be an important sink of HCN. \paragraph{Photolysis Effects}\label{sec:photo}\mbox{} In the upper stratosphere above 1 mbar, the model differences most likely come from photochemical sources. However, it is less straightforward to compare model discrepancies originating from photochemistry, as each step in converting photon fluxes into photolysis rates can give rise to deviations, including stellar fluxes, cross sections, branching ratios, radiative transfer, etc. For simplicity, we will directly inspect the computed photolysis rates from M11, V12, and VULCAN. We limit our comparison to water photolysis, owing to its importance in producing H radicals and the frontline role of H in reacting with molecules such as \ce{CH4} and \ce{NH3} \citep{Liang2003,Moses11}. Figure \ref{fig:hd189-H2OJ} compares the photodissociation rates of the main branch \ce{H2O ->[h\nu] OH + H} computed by the three models. The water photodissociation rate in VULCAN is about twice as large as that in M11 and around one order of magnitude larger than that in V12. The \ce{H2O} photolysis rates evidently correlate with the H and OH profiles in Figure \ref{fig:HD189-VM}-(a), -(e), and molecules in V12 (e.g. \ce{CH4}) generally tend to survive toward higher altitudes. The disagreement is present already at the top of the models, with the same deviation also found across other photolytic species, such as \ce{CH4} and \ce{NH3}. This implies that the model implementation of stellar fluxes is the first-order contribution to the photochemical differences. However, \cite{Venot12} reported no differences when switching to the same stellar flux as M11 and suggested that Rayleigh scattering could be the source of the disagreement. 
We have tested switching off Rayleigh scattering and found negligible changes, since Rayleigh scattering only dominates in the deep region where photochemistry has ceased (see Figure \ref{fig:HD189-209}). We note that errors from insufficient spectral resolution can contribute to the photolysis rates as well (see Appendix \ref{app:resolution}). Overall, more attention should be paid to calibrating the stellar irradiation for future photochemical model benchmarks, and we suggest using \ce{H2O} photolysis as a baseline. \vspace{-0.1cm} \begin{figure}[tph] \begin{center} \includegraphics[width=\columnwidth]{fig/HD189-H2OJ} \end{center} \caption{Comparison of the photodissociation rates (s$^{-1}$) of the main branch of \ce{H2O} on HD 189733b computed by VULCAN, M11 \citep{Moses11}, and V12 \citep{Venot12}.} \label{fig:hd189-H2OJ} \end{figure} \paragraph{Carbon Species Comparison}\mbox{} Panels (c) and (d) in Figure \ref{fig:HD189-VM} show the same comparison for other important carbon-bearing species. Atomic carbon is liberated from CO photodissociation near the top of the atmosphere. CO photolysis appears to be stronger in M11 and generates more atomic carbon around the $\mu$bar level. The carbon vapor exceeds saturation and can potentially condense in the upper atmosphere. We will examine the implication of C condensation in Section \ref{HD189-C-conden}. In the lower stratosphere, various hydrocarbon production is initiated by methane abstraction, i.e. H being successively stripped from \ce{CH4} to form more reactive unsaturated hydrocarbons. The hydrocarbon profiles predicted by M11, V12, and our model are consistent with the divergence of the parent \ce{CH4}, except that acetylene (\ce{C2H2}) is also governed by atomic C in the upper atmosphere. \ce{C2H2} is the most favoured unsaturated hydrocarbon on HD 189733b. In the CO-photolysis region, atomic C can couple with nitrogen into CN and eventually produce \ce{C2H2} by dissociation of \ce{HC3N}. 
Yet we find \ce{CH4} to still be the dominant source for producing \ce{C2H2} below 1 $\mu$bar via a pathway such as\vspace{-0.3cm} \begin{eqnarray} \begin{aligned} 2(\ce{ CH4 + H &-> CH3 + H2})\\ \ce{ CH3 + H &-> CH2 + H2}\\ \ce{ CH2 + H &-> CH + H2}\\ \ce{ CH + H &-> C + H2}\\ \ce{ CH3 + C &-> C2H2 + H}\\ 2(\ce{ H2O &->[h\nu] H + OH})\\ 2(\ce{ OH + H2 &-> H2O + H})\\ \noalign{\vglue 5pt} \hline % \noalign{\vglue 5pt} \mbox{net} : \ce{2CH4 &-> C2H2 + 3H2}. \end{aligned} \end{eqnarray} Our scheme predicts \ce{C2H2} with a maximum abundance a factor of a few smaller than V12 and about an order of magnitude smaller than M11. Ethylene (\ce{C2H4}) is the next most abundant hydrocarbon after acetylene and peaks around 10 mbar. \ce{C2H4} and other \ce{C2H_x} production stems from the \ce{CH3} association reaction via the pathway \begin{eqnarray} \vspace{-2cm} \begin{aligned} \ce{ H2O &->[h\nu] H + OH}\\ \ce{ OH + H2 &-> H2O + H}\\ 2(\ce{ CH4 + H &-> CH3 + H2})\\ \ce{ CH3 + CH3 &->[M] C2H6}\\ \ce{ C2H6 + H &-> C2H5 + H2}\\ \ce{ C2H5 &->[M] C2H4 + H}\\ \noalign{\vglue 5pt} \hline % \noalign{\vglue 5pt} \mbox{net} : \ce{2CH4 &-> C2H4 + 2H2} \end{aligned} \vspace{-0.5cm} \end{eqnarray} where forming \ce{C2H6} is usually the rate-limiting step. The abundances of \ce{C2H4} and \ce{C2H6} in our model are in agreement with M11 within an order of magnitude. The kinetics beyond C$_2$ hydrocarbons becomes less constrained \citep{Moses11,Venot2013}. As discussed in Section \ref{sec:network}, we intended to capture the major pathways of producing \ce{C6H6} as a proxy for haze precursors without invoking an exhaustive suite of hydrocarbons. 
In our model, \ce{C6H6} is formed by the pathway \begin{eqnarray} \label{re:path-c6h6} \begin{aligned} 9(\ce{ H2O + h$\nu$ -> H + OH})\\ 9(\ce{ OH + H2 -> H2O + H})\\ 6(\ce{ CH4 + H -> CH3 + H2})\\ 4(\ce{ CH3 + H -> CH2 + H2})\\ 4(\ce{ CH2 + H -> CH + H2})\\ 4(\ce{ CH + H -> C + H2})\\ 2(\ce{ CH3 + C -> C2H2 + H})\\ 2(\ce{ C + C2H2 -> C3H2})\\ 2(\ce{ C3H2 + H + M -> C3H3 + M})\\ \ce{ C3H3 + C3H3 + M -> C6H6 + M}\\ \hline \nonumber \mbox{net} : \ce{6CH4 -> C6H6 + 9H2} . \end{aligned} \end{eqnarray} where the recombination of \ce{C3H3} is the rate-limiting step (akin to the cooler atmosphere of Jupiter \citep{Moses2005}). Figure \ref{fig:HD189-VM}-(d) shows that \ce{C4H2} and \ce{C6H6} predicted by our reduced scheme have considerably lower abundances than those in M11. Given the agreement of \ce{C3H3} up until 10$^{-5}$ bar, we suspect that the differences in \ce{C6H6} between VULCAN and M11 are due to photodissociation effects from \ce{C6H6} as well as other species such as CO. With all the uncertainties and complexity mentioned in Section \ref{sec:haze}, we do not consider the predicted abundances of \ce{C4H2} and \ce{C6H6} to be accurate; they should rather serve the purpose of assessing the haze precursors. \paragraph{Oxygen Species Comparison}\mbox{} Panels (e) and (f) of Figure \ref{fig:HD189-VM} compare oxygen-bearing species. The deviation of O, OH, and \ce{O2} again follows the discrepancy in \ce{H2O} photolysis, similar to H. There is a minor shift of the equilibrium abundance of \ce{H2CO} in V12, possibly from the thermodynamic data difference between JANAF and the NASA polynomials, as pointed out in \cite{tsai17}. All three models exhibit somewhat different quench levels and profiles for \ce{CH2OH} and \ce{CH3OH}, which are generally important intermediates for \ce{CH4}-CO interconversion \citep{Moses11,tsai18,Venot2020}. Nevertheless, this is not reflected in the \ce{CH4} abundance since \ce{CH4} has already been quenched in the deeper region. The updated methanol scheme in \cite{Venot2020} also provides \ce{CH2OH} and \ce{CH3OH} distributions more consistent with M11 and VULCAN. 
Since VULCAN adopted the same rate coefficients from the ab initio calculations of M11 for the three methanol reactions, the difference between VULCAN and M11 is more likely associated with reactions involving \ce{CH2OH}. \paragraph{Nitrogen Species Comparison}\mbox{} Panels (g) and (h) of Figure \ref{fig:HD189-VM} compare nitrogen-bearing species. N and \ce{NH2} follow the same quench level as \ce{NH3} (panel (b)), since they are part of the \ce{NH3}-\ce{N2} conversion. A considerable amount of atomic N is produced above the mbar level by hydrogen abstraction of ammonia, similar to that of methane. Atomic N is oxidized by OH into NO in the upper atmosphere. NO reacts rapidly with atomic C into CN, as the C-N bond is stronger than the N-O bond. CN is an important source of nitrile production, e.g., CN reacts with \ce{C2H2} to form \ce{HC3N}. Our model shows a slower \ce{HC3N} production and predicts a peak \ce{HC3N} value about two orders of magnitude lower than M11. The carbon-nitrogen-bearing species are grouped in Figure \ref{fig:HD189-VM}-(h). Since \ce{NH3} is quenched in deeper layers than \ce{CH4}, the quench levels of carbon-nitrogen-bearing species in general also follow \ce{NH3}. Despite their trace abundances, \ce{CH2NH} and HNCO participate in the HCN-forming mechanism and become important at high pressures. We find HCN formed around 10 mbar via \ce{CH2NH} and \ce{CH3NH2} in a pathway identical to (7) in \cite{Moses11}.\\ We conclude that our model of HD 189733b is validated by thoroughly reproducing the composition distributions within the uncertainty range enclosed by M11 and V12. The kinetics data we employed generally yield quenching behavior close to M11, while our model predicts lower \ce{C2H2}, \ce{C4H2}, \ce{C6H6}, and \ce{HC3N} than M11 in the upper atmosphere. 
Contrary to what has been reported in \cite{Venot2020}, we find that the updated methanol scheme in fact increases the quenched \ce{CH4} abundance, making it more consistent with M11 and this work. The photochemical part of the atmosphere is more complex to diagnose, but we suggest that the implementation of stellar fluxes is the main factor in the discrepancy between M11, V12, and VULCAN. \subsection{Jupiter} The modeling work for Jovian chemistry broadly falls into two categories addressing two main regions: the stratosphere and the deep troposphere. The stratospheric compositions are governed by photochemical kinetics, with the main focus on understanding the formation of various hydrocarbons. For stratospheric models, fixed mixing ratios or fluxes at the lower boundary need to be specified \citep{Yung1980,Moses2005}. As for the deep tropospheric compositions below the clouds, with sparse observational constraints, kinetics models attempt to infer the interior water content based on other quenched species \citep{Visscher2010,Wang2016}. Since chemical equilibrium is expected to hold in the deep interior, the elemental ratios essentially control the reservoir of gases and vertical mixing determines the quenched compositions in the upper troposphere. Here, our objective is to validate the chemical scheme at low temperatures against observed hydrocarbons and to verify the condensation scheme. We take a general approach by connecting the deep troposphere to the stratosphere and solving the continuity equations consistently. Our lower boundary at 5 kbar lies far down in the region ruled by equilibrium chemistry, so a zero-flux condition can be applied there. In this setup, fixed-abundance lower boundary conditions are not required, unlike in stratosphere-only models \citep[e.g., ][]{Moses2005,Hue2018}. The compositions in the lower stratosphere are physically determined by condensation and transport from the troposphere in the model. 
\subsubsection{Model Setup} The temperature profile in the stratosphere and the top of the troposphere (above 6.7 bar) is taken from \cite{Moses2005} and extended to 5000 bar following the dry adiabat, with T = 427.71 K at 22 bar measured by the Galileo probe as the reference point. We use the same eddy diffusion profile for the stratosphere as Model A of \cite{Moses2005}, which is derived from multiple observations. The eddy diffusion is assumed to be constant at 10$^8$ cm$^2$/s in the convective region below 6.7 bar. The temperature and eddy-diffusion profiles adopted for our Jupiter model are shown in Figure \ref{fig:Jupiter-TP}. Heavy elements in Jupiter are enhanced compared to solar metallicity, although the oxygen abundance is still unclear. We assign the elemental abundances for the Jupiter model as He/H = 0.0785 \citep{Atreya2020}, C/H = 1.19$\times$10$^{-3}$ \citep{Atreya2020}, O/H = 3.03$\times$10$^{-4}$ (0.5 times solar), and N/H = 2.28$\times$10$^{-4}$ \citep{Li2017}. Sulfur is not included in our Jupiter validation for simplicity. We include condensation of \ce{H2O} and \ce{NH3}, assuming a single particle size with an average radius of 0.5 $\mu$m for the cloud condensates. Oxygen sources from micrometeoroids are prescribed at the upper boundary at 10$^{-8}$ bar following \cite{Moses2005}, with influxes (molecules cm$^{-2}$ s$^{-1}$) of \ce{H2O} = 4 $\times$ 10$^4$, \ce{CO} = 4 $\times$ 10$^6$, and \ce{CO2} = 1 $\times$ 10$^4$. \begin{figure}[htp] \begin{center} \includegraphics[width=\columnwidth]{fig/Jupiter-TP} \end{center} \caption{The temperature, eddy diffusion, and deep vertical velocities used for our Jupiter model. The temperature and eddy diffusion in the stratosphere are taken from \cite{Moses2005}, while a dry adiabat and uniform eddy diffusion with $K_\textrm{zz}$ = 10$^8$ cm$^2$/s are assumed for the troposphere. 
The upward (positive) and downward (negative) vertical velocities are prescribed by Equation (\ref{vz}) with a maximum speed of 5 cm/s at 0.7 bar.} \label{fig:Jupiter-TP} \end{figure} \subsubsection{Comparing to Stratospheric Observations and \cite{Moses2005}} The top panel of Figure \ref{fig:Jupiter-mix} displays the vertical distributions of key species computed by our model, compared to \cite{Moses2005} and various observations. First, \ce{CH4} is the major carbon-bearing species across the atmosphere. It is well mixed until photolysis and separation by molecular diffusion take place at low pressure. The \ce{CH4} distribution in our model matches the observations well \citep{Drossart1999}. We verify that our treatment of molecular diffusion accurately reproduces the decrease of \ce{CH4} due to molecular diffusion above the homopause. Second, our model successfully predicts the major \ce{C2} hydrocarbons, which stem from \ce{CH4} photolysis in the stratosphere. Our model tends to predict lower abundances for the unsaturated hydrocarbons \ce{C2H2} and \ce{C2H4} than \cite{Moses2005} in the lower stratosphere, but both profiles are within the observational constraints. The UV photosphere in Figure \ref{fig:Jupiter-mix} indicates that \ce{CH4} dominates the absorption from Ly-$\alpha$ to about 150 nm. We find the main scheme for converting \ce{CH4} to \ce{C2H6} in the upper atmosphere is \begin{eqnarray} \begin{aligned} 2(\ce{ CH4 &->[h\nu] CH3 + H})\\ 2(\ce{ CH3 + CH3 &->[M] C2H6})\\ \ce{ 2H &->[M] H2}\\ \noalign{\vglue 5pt} \hline % \noalign{\vglue 5pt} \mbox{net} : \ce{2CH4 &-> C2H6 + H2} \end{aligned} \end{eqnarray} and the photodissociation branch of methane is replaced by \ce{ CH4 ->[h\nu] ^1CH2 + H2} followed by \ce{ ^1CH2 + H2 ->CH3 + H} at higher pressures. We confirm that the formation of hydrocarbons on Jupiter is sensitive to hydrogen abstraction and three-body association reactions, as discussed in detail in \cite{Moses2005}. 
Particularly in the lower stratosphere where the temperature drops below 200 K, the rate constants fall out of their valid temperature range or are not well constrained. We find it particularly important to adopt the low-temperature rate constants for the \ce{CH4} and \ce{C2Hx} recombination reactions, i.e. \ce{CH3 + H ->[M] CH4 }, \ce{H + C2H2 ->[M] C2H3}, \ce{H + C2H3 ->[M] C2H4}, \ce{H + C2H4 ->[M] C2H5}, and \ce{H + C2H5 ->[M] C2H6}. We also adopt the rate-constant limits below certain threshold temperatures derived by \cite{Moses2005}. Third, our condensation scheme predicts that water-ice clouds start at 3.6 bar and ammonia clouds at 0.7 bar, as shown in Figure \ref{fig:Jupiter-mix}, consistent with the thermodynamic prediction with 0.5 times solar O/H \citep{Atreya2005,Weiden1973}. Ammonium hydrosulfide (\ce{NH4SH}) clouds are not considered since sulfur is not included. Last, compared to \cite{Moses2005}, our model produces lower abundances of \ce{C4H2}, and \ce{C6H6} is produced at higher altitudes, which reflects the uncertainties in the high-order hydrocarbons and the photolysis branches of \ce{C6H6}. \begin{figure}[htp] \begin{center} \includegraphics[width=\columnwidth]{fig/Jupiter-1} \includegraphics[width=\columnwidth]{fig/Jupiter-2} \includegraphics[width=\columnwidth]{fig/Jupiter-tau} \end{center} \caption{The top panel shows the vertical mixing-ratio profiles of important chemical species in our Jupiter model (solid), compared with various observations (data points) of hydrocarbons and the stratospheric distributions from Model A of \cite{Moses2005} (dashed). We follow \cite{rimmer16} in placing factor-of-two error bars in pressure when they are not given in the observational data. The vapor mixing ratios and cloud densities (g/cm$^3$) of the condensible \ce{H2O} and \ce{NH3} are displayed in the middle panel. 
The bottom panel illustrates the UV photosphere where $\tau$ = 1, decomposed into the main absorbers.}\label{fig:Jupiter-mix} \end{figure} \begin{figure}[htp] \begin{center} \includegraphics[width=\columnwidth]{fig/Jupiter-NH3} \end{center} \caption{The deep ammonia distribution in parts per million (ppm) computed by our Jupiter model (black), assuming chemical equilibrium, with eddy diffusion only, and including upward/downward advection for the updraft/downdraft branches (Figure \ref{fig:Jupiter-TP}), respectively. The red and green profiles show the inferred ammonia distribution at 2$^\circ$ N latitude and 12$^\circ$ N latitude based on Juno microwave measurements by \cite{Li2017}, where the shaded areas enclose the 16th and 84th percentiles of the samples in their MCMC runs.} \label{fig:nh3-Jupiter} \end{figure} \subsubsection{Spatial Variation of Ammonia Due to Vertical Advection} During the Juno spacecraft's first flyby in 2016, the microwave radiometer (MWR) on Juno measured the thermal emission below the clouds, which was inverted to obtain the global distribution of ammonia from the cloud level down to the few-hundred-bar level. A curious plume-like feature associated with the latitudinal variation of ammonia was seen \citep{juno17}. To explore the local impact of advection, we test how the upward and downward motion in a plume can shape the deep ammonia distribution in Jupiter. Although the Galileo probe has provided constraints on the structure of Jupiter's deep zonal wind \citep{Atkinson1997} and Juno also sheds light on the vertical extension of the zonal wind \citep{Stevenson2020}, we do not have observational constraints on the deep vertical wind. Hence we consider a simple but physically motivated (mass-conserving) vertical wind structure without tuning it to fit the data. We assume updraft and downdraft plumes starting from the bottom of the \ce{NH3}-ice clouds at 0.7 bar, in addition to eddy diffusion, as depicted in the right panel of Figure \ref{fig:Jupiter-TP}. 
For the non-divergent advection to conserve mass in a 1-D column, the vertical velocity at layer $j$ with number density $n_j$ follows \begin{equation} v_j n_j = v_\textrm{top} n_\textrm{top} = \textrm{constant} \label{vz} \end{equation} such that the net flux remains zero at each layer. For this test, we arbitrarily choose the maximum wind velocity at the top to be 5 cm/s. This choice makes the advection timescale shorter than the diffusion timescale in the low-pressure region, i.e. $v_j$ $\gtrsim$ K$_\textrm{zz} / H$, which allows us to see the influence of advection. Figure \ref{fig:nh3-Jupiter} compares the computed distribution of ammonia to that retrieved from Juno measurements (\cite{Li2017}; also see updates in \cite{Li2020}) at two different latitudes. First, the ammonia distribution predicted by chemical equilibrium is rather uniform with depth, only slightly increasing from 350 ppm to 400 ppm. Next, vertical mixing by eddy diffusion alone quenches ammonia from the deep interior below 1000 bar and thus brings ammonia to a slightly lower but uniform concentration of 300 ppm. There is almost no visible difference when including the upward advection, since ammonia has already been quenched by eddy diffusion from the deep region. Last, the uniform distribution of ammonia is altered in the downdraft, where the downward motion transports the lower concentration of \ce{NH3} from the condensing region. Our \ce{NH3} distribution is qualitatively consistent with the \ce{NH3}-depleted branch at 12$^{\circ}$ N from \cite{Li2017}, where \ce{NH3} reaches a local minimum around 7 bar. We emphasize that this shape cannot be reproduced by eddy diffusion alone. Although eddy diffusion is probably still essential in practice for parametrizing a range of mixing processes, we demonstrate that including vertical advection can be useful. Advection can play an even bigger part in 2-D systems \citep{Zhang2013,Hue2015}.
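As a concrete illustration, the mass-conserving wind of Equation \ref{vz} amounts to a one-line rescaling of the velocity by the density profile. The number densities below are hypothetical placeholder values, not the actual Jupiter model grid:

```python
import numpy as np

def advection_velocity(n, v_top):
    """Non-divergent vertical wind in a 1-D column:
    v_j * n_j = v_top * n_top, so the net mass flux vanishes at each layer."""
    n = np.asarray(n, dtype=float)
    return v_top * n[-1] / n  # layers ordered bottom -> top

# Hypothetical number densities (cm^-3), bottom to top of the column
n = np.array([1e20, 1e19, 1e18, 1e17])
v = advection_velocity(n, v_top=5.0)     # 5 cm/s at the top, as in the text
assert np.allclose(v * n, 5.0 * n[-1])   # flux v_j * n_j constant everywhere
```

The velocity therefore grows with altitude as the density drops, which is why the 5 cm/s top-layer choice dominates diffusion only in the low-pressure region.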
\subsection{Present-Day Earth} Our chemical network has so far been applied only to hydrogen-dominated, reducing atmospheres. In this section, we validate our full S-N-C-H-O network with the oxidizing atmosphere of present-day Earth. The interaction with the surface is particularly crucial in regulating the composition of a terrestrial atmosphere. Surface emission and deposition via biological and geological activities have to be taken into account. Our implementation of the top boundary fluxes and condensation scheme has been validated for Jupiter in the previous section. We will proceed to verify the lower boundary with surface emission and deposition in the Earth model. \begin{table}[h] \begin{center} \caption{Lower Boundary Conditions for the Earth Validation}\label{tab:BC_Earth} \begin{tabular}{|l|c|c|} \hline Species & Surface Emission\footnote{Global emissions are typically measured as a mass budget (Tg/yr), which we convert to a molecular flux using the surface area of the Earth = 5.1 $\times$ 10$^{18}$ cm$^2$ for our 1-D photochemical model.} & V$_{\textrm{dep}}$ \footnote{Adopted from \cite{Hauglustaine1994}}\\ & (molecules cm$^{-2}$ s$^{-1}$) & (cm s$^{-1}$)\\ \hline CO\footnote{\cite{IPCC2001}} & 3.7 $\times$ 10$^{11}$ & 0.03\\ \ce{CH4}\footnote{\cite{Seinfeld2016}\label{S16}} & 1.6 $\times$ 10$^{11}$ & 0\\ NO\textsuperscript{\ref{S16}} & 1.3 $\times$ 10$^{10}$ & 0.001\\ \ce{N2O}\textsuperscript{\ref{S16}} & 2.3 $\times$ 10$^{9}$ & 0.0001\\ \ce{NH3}\textsuperscript{\ref{S16}} & 1.5 $\times$ 10$^{9}$ & 1\\ \ce{NO2} & 0 & 0.01\\ \ce{NO3} & 0 & 0.1\\ \ce{SO2}\textsuperscript{\ref{S16}} & 9 $\times$ 10$^{9}$ & 1\\ \ce{H2S}\textsuperscript{\ref{S16}} & 2 $\times$ 10$^{8}$ & 0.015\\ \ce{COS}\textsuperscript{\ref{S16}} & 5.4 $\times$ 10$^7$ & 0.003\\ \ce{H2SO4}\textsuperscript{\ref{S16}} & 7 $\times$ 10$^8$ & 1\\ \ce{HCN}\footnote{\cite{Li2003}\label{L03}} & 1.7 $\times$ 10$^8$ & 0.13\\ \ce{CH3CN}\textsuperscript{\ref{L03}} & 1.3 $\times$ 10$^8$ & 0.13\\ \ce{HNO3} & 0
& 4\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[h] \begin{center} \includegraphics[width=\columnwidth]{fig/Earth-TP} \caption{The temperature (at the equator in January from CIRA-86, with references described in the text) and eddy diffusion (K$_\textrm{zz}$) profiles \citep{Massie1981} for the Earth model.}\label{fig:Earth-TP} \end{center} \end{figure} \subsubsection{Model Setup} We follow \cite{Hu2012}, taking the monthly mean temperature at the equator in January 1986 (CIRA-86) from the empirical model COSPAR International Reference Atmosphere\footnote{\url{https://ccmc.gsfc.nasa.gov/pub/modelweb/atmospheric/cira/cira86ascii}} \citep{COSPARI,COSPARII} as the background temperature profile and the eddy diffusion coefficients from \cite{Massie1981}, as shown in Figure \ref{fig:Earth-TP}. The winter atmosphere has a colder and hence drier tropopause and better represents the globally averaged water vapor content (see \cite{Chiou1997} and the discussion in \cite{Hu2012}). Unlike gas giants, terrestrial atmospheres typically do not extend to a thermochemical equilibrium region. Instead, biochemical (e.g., plants and anthropogenic pollution) and geological (e.g., volcanic outgassing) fluxes provide surface sources and sinks that are key to regulating the atmosphere. For the lower boundary condition, the global emission budget provides estimates of the surface fluxes, which are conventionally recorded in units of mass rate (Tg yr$^{-1}$) and need to be converted to number flux (molecules cm$^{-2}$ s$^{-1}$) in our 1-D model. For Earth and any ocean worlds with a large surface water reservoir, the standard setup is to fix the surface-water mixing ratio \citep{Kasting1980,Hu2012,Lincowski2018}. We set the surface mixing ratio of water to 0.00894, corresponding to 25$\%$ relative humidity.
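The two bookkeeping steps above, converting a global mass budget (Tg yr$^{-1}$) to a molecular surface flux and translating relative humidity into a surface water mixing ratio, can be sketched as follows. The \ce{CH4} budget of 600 Tg/yr, the 300 K surface temperature, and the simple Clausius-Clapeyron constants are illustrative assumptions, not values taken from the model:

```python
import numpy as np

A_EARTH_CM2 = 5.1e18   # Earth's surface area (cm^2), as in the table footnote
AVOGADRO = 6.022e23    # molecules per mole
SEC_PER_YEAR = 3.154e7

def budget_to_flux(tg_per_year, molar_mass_g):
    """Convert a global emission budget (Tg/yr) into a surface
    number flux (molecules cm^-2 s^-1) for a 1-D column model."""
    grams_per_sec = tg_per_year * 1e12 / SEC_PER_YEAR
    return grams_per_sec / molar_mass_g * AVOGADRO / A_EARTH_CM2

def h2o_surface_mixing_ratio(t_surf=300.0, rel_hum=0.25, p_surf_hpa=1013.0):
    """Surface H2O mixing ratio from relative humidity, using a simple
    Clausius-Clapeyron form e_s = 6.11 hPa * exp(L/Rv (1/273.15 - 1/T))."""
    e_sat = 6.11 * np.exp(5423.0 * (1.0 / 273.15 - 1.0 / t_surf))
    return rel_hum * e_sat / p_surf_hpa

# A ~600 Tg/yr CH4 budget (illustrative) maps to ~1.4e11 molecules cm^-2 s^-1,
# the same order of magnitude as the CH4 entry in the boundary-condition table.
flux_ch4 = budget_to_flux(600.0, molar_mass_g=16.04)
```

With a surface temperature near 300 K, `h2o_surface_mixing_ratio()` returns roughly 0.0089, close to the 0.00894 quoted in the text, though the model itself may use a different saturation-vapor formula.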
Surface \ce{CO2} is also fixed at 400 ppm for simplicity, since we do not consider several major sources and sinks of \ce{CO2} at the surface, such as respiration, photosynthesis, ocean uptake, and weathering. The specific emission fluxes and deposition velocities for the lower boundary are listed in Table \ref{tab:BC_Earth}, while a zero-flux boundary is assumed for all remaining species. We initialize the atmospheric composition with well-mixed (constant with altitude) 78$\%$ \ce{N2}, 20$\%$ \ce{O2}, 400 ppm \ce{CO2}, 934 ppm Ar, and 0.2 ppb \ce{SO2}. For the solar flux, we adopt a recently revised high-resolution spectrum \citep{Gueymard2018}, which is derived from various observations and models (see Table 1 of \cite{Gueymard2018}). The solar radiation was cut off below 100 nm in \cite{Hu2012} to account for the missing absorption from the thermosphere. We do not find this necessary, as we set the top layer to the lower thermosphere around 110 km and the EUV absorption is naturally accounted for. We have also tried including the absorption from atomic oxygen and nitrogen and found no differences in the neutral chemistry of the lower atmosphere, since \ce{N2} and \ce{O2} have already screened out the bulk of the EUV. Chlorine chemistry and lightning sources for odd nitrogen are not included in this validation. \subsubsection{Results} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{fig/photosphere_Earth} \end{center} \caption{The UV photosphere, i.e. optical depth $\tau$ = 1 (black), in our Earth model, overlaid with the composition-decomposed photosphere for several key molecules.} \label{fig:tau_Earth} \end{figure} \begin{figure}[htp] \begin{center} \includegraphics[width=\columnwidth]{fig/Earth-1} \includegraphics[width=\columnwidth]{fig/Earth-2} \includegraphics[width=\columnwidth]{fig/Earth-S} \end{center} \caption{The globally averaged vertical distribution of key species in present-day Earth's atmosphere compared to observations.
The \ce{H2O} mixing ratio is from the US Standard Atmosphere 1976\textsuperscript{*}. Satellite observations of CO in the tropics and \ce{NH3} within 30$^\circ$--40$^\circ$ N and 70$^\circ$--80$^\circ$ E in 2003 are measured by the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) \citep{Fischer2008}. The remaining unlabelled observational data are from \cite{Massie1981,Hudson1979}. When errors are not included in the published observations, we follow \cite{Hu2012} in placing one-order-of-magnitude error bars for the diurnal and spatial variations.} \small\textsuperscript{*}e.g. \url{https://www.digitaldutch.com/atmoscalc/help.htm} \label{fig:earth-mix} \end{figure} Molecular oxygen (\ce{O2}) and ozone (\ce{O3}) are the main players in Earth's photochemistry. \ce{O2} absorbs VUV below 200 nm and \ce{O3} takes up the radiation longward of about 200 nm, blocking the harmful UV from reaching life on the surface. The penetration level of the solar UV flux shown in Figure \ref{fig:tau_Earth} indicates that ozone absorbs predominantly between 20 km and 50 km. The basics of the oxygen--ozone cycle are described by the Chapman mechanism \citep[e.g., ][]{Yung1999,Jacob2011}. Our full chemical network encompasses the catalytic cycles involving hydrogen oxide and nitrogen oxide radicals that are responsible for the ozone sinks in the stratosphere. Although the catalytic cycle of chlorine, which accounts for additional ozone loss, is not included, we are able to reproduce the observed globally averaged ozone distribution in Figure \ref{fig:earth-mix}. Our condensation scheme captures the cold trap of water in the troposphere, i.e. the water vapor entering the stratosphere is set by the tropopause temperature. Above the tropopause, water is supplied by diffusive transport from the troposphere and oxidation of \ce{CH4}.
We find the conversion in the stratosphere goes through the steps \begin{eqnarray} \begin{aligned} \ce{CH4 + OH &-> H2O + CH3}\\ 2(\ce{O3 &->[h\nu] O2 + O(^1D)})\\ 2(\ce{O(^1D) + N2 &-> O + N2})\\ \ce{CH3 + O &-> H2CO + H}\\ \ce{O + H2CO &-> HCO + OH}\\ \ce{HCO + O2 &-> CO + HO2}\\ \ce{CO + OH &-> CO2 + H}\\ \ce{HO2 + OH &-> H2O + O2}\\ 2(\ce{H + O2 &->[M] HO2})\\ \noalign{\vglue 5pt} \hline \noalign{\vglue 5pt} \textrm{net} : \ce{CH4 + 2OH + 2O3 &-> CO2 + 2H2O + 2HO2} \end{aligned} \label{re:CH4-H2O} \end{eqnarray} effectively turning one \ce{CH4} molecule into two \ce{H2O} molecules \citep{Noel2018}. \ce{H2O} is eventually photodissociated in the mesosphere, producing \ce{H2}, as indicated by the profiles in Figure \ref{fig:earth-mix}. Overall, our model produces a water distribution consistent with observations, considering the diurnal and spatial variations. The two oxides of nitrogen, NO and \ce{NO2}, cycle rapidly in the presence of ozone: \begin{subequations} \label{re:NOx} \begin{align} \begin{split} \ce{NO + O3 &-> NO2 + O2}\\ \ce{NO2 + O &-> NO + O2}\\ \hline \nonumber \mbox{net} : \ce{O3 + O &-> 2O2} \end{split} \end{align} \end{subequations} Thus NO and \ce{NO2} are conventionally grouped as \ce{NO_x}. The burning of fossil fuel accounts for about half of the global tropospheric \ce{NO_x} emission (e.g. Table 2.6 of \cite{Seinfeld2016}). \ce{NO_x} is mainly lost by oxidation into nitric acid (\ce{HNO3}): \ce{NO2 + OH ->[\textrm{M}] HNO3}. Our model reproduces the distribution of \ce{NO_x}, whereas our overestimated \ce{HNO3} in the upper stratosphere is likely attributable to the missing hydration removal that operates in the actual atmosphere. Nitrous oxide (\ce{N2O}) is mainly emitted by soil bacteria, prescribed by the surface emission at the lower boundary. There are no efficient \ce{N2O} removal reactions in the troposphere and \ce{N2O} remains well-mixed as one of the important greenhouse gases. \ce{N2O} is predominantly removed by photodissociation in the stratosphere.
Our calculated \ce{N2O} is in agreement with the observations for the troposphere and stratosphere. Similar to \cite{Hu2012}, our model slightly overpredicts its abundance above 50 km, which is likely due to the missing photolysis branch of \ce{N2O} that produces excited oxygen \ce{O(^1S)}. \ce{CH4} is the most abundant hydrocarbon in Earth's atmosphere, with surface emission largely coming from human activities (e.g. agriculture) as well as natural sources (e.g. wetlands). \ce{CH4} is oxidized into CO and eventually \ce{CO2} by OH through multiple steps similar to (\ref{re:CH4-H2O}) in the stratosphere. CO is produced by combustion activities with about 0.1 ppm concentration near the surface \citep{Seinfeld2016}, as a result of the balance among the emission flux, OH oxidation, and dry deposition. CO is continuously removed by OH throughout the troposphere and generated by photodissociation of \ce{CO2} in the thermosphere and mesosphere, as depicted by their distributions in Figure \ref{fig:earth-mix}. As the major oxidizing agent, OH is an important diagnostic species for Earth's photochemical models. It is mainly produced in the stratosphere during daytime, initiated by ozone photolysis, and regenerated in the troposphere by \ce{NO_x} \citep[see e.g., ][]{Jacob2011}. The OH distribution in our model is consistent with that in \cite{Massie1981}. We will further discuss using the calculated OH concentration to estimate the chemical timescale of long-lived species against oxidation in the next section. Carbonyl sulfide (OCS) is the main sulfur species in the troposphere, emitted by direct outgassing or by oxidation of carbon disulfide (\ce{CS2}) and dimethyl sulfide (DMS) released by the ocean \citep{Seinfeld2016,Barkley2008}. OCS is rather stable in the troposphere until entering the stratosphere, where it is photodissociated or oxidized by OH and ultimately turned into sulfuric acid.
Sulfur dioxide (\ce{SO2}) is another important sulfur-containing pollutant from fossil fuel combustion. \ce{SO2} oxidation begins in the troposphere with \begin{equation*} \ce{SO2 + OH ->[\textrm{M}] HSO3} \end{equation*} The \ce{HSO3} radical rapidly reacts with oxygen to form \ce{SO3} \begin{equation*} \ce{HSO3 + O2 -> SO3 + HO2} \end{equation*} followed by sulfuric acid formation \begin{equation*} \ce{SO3 + H2O -> H2SO4} \end{equation*} The sulfur-containing gases in our model generally agree with the observed global distributions, while the mismatch of \ce{H2SO4} is expected, as our model does not include \ce{H2SO4} photodissociation and the heterogeneous reactions that efficiently remove \ce{H2SO4} from the gas phase. \begin{figure}[htp] \begin{center} \includegraphics[width=\columnwidth]{fig/Earth-timescale} \end{center} \caption{Calculated chemical timescales of some environmentally important gases compared to the dynamical timescale of eddy diffusion in the Earth validation model. The thick lines indicate the region where the oxidation is dominated by OH (i.e. $\tau_{\ce{OH}}$ $\simeq$ $\tau_{\ce{chem}}$).}\label{fig:earth-tau} \end{figure} \subsubsection{Chemical Lifetime} The oxidizing capacity of Earth's atmosphere is important for decontaminating toxic and greenhouse gases, such as CO, \ce{CH4}, and various volatile organic compounds. The oxidizing power is not only essential for regulating habitable conditions but also key to addressing the stability of biosignature gases on other terrestrial planets. Here we present a brief overview of the key timescales for some important trace gases from our Earth model. The OH radical is the primary daytime oxidizing agent in our biosphere.
The chemical timescale of species A against oxidation by OH ($\tau^{\ce{A}}_{\ce{OH}}$) can be estimated from the computed OH concentration as \begin{equation} \tau^{\ce{A}}_{\ce{OH}} = \frac{[\ce{A}]}{k_{\ce{A}-\ce{OH}}[\ce{A}][\ce{OH}]} = \frac{1}{k_{\ce{A}-\ce{OH}}[\ce{OH}]} \end{equation} where $k_{\ce{A-OH}}$ is the rate coefficient of the oxidizing reaction \ce{A + OH}. In the upper atmosphere where molecular collisions are less frequent, the excited \ce{O(^1D)} produced by ozone photolysis is not immediately stabilized and becomes the main oxidant. We consider the two major oxidizing paths across the atmosphere and write the chemical timescale against oxidation as \begin{equation} \tau^{\ce{A}}_{\textrm{chem}} = \frac{1}{k_{\ce{A}-\ce{OH}}[\ce{OH}] + k_{\ce{A}-\ce{O(^1D)}}[\ce{O(^1D)}]} \end{equation} Figure \ref{fig:earth-tau} illustrates the chemical timescales ($\tau_{\textrm{chem}}$) along with the photolysis timescales (1/$k_{\textrm{photo}}$) for several trace gases, where $\tau_{\ce{OH}}$ (thick lines) inversely correlates with temperature in general. We can gain some insights by comparing $\tau_{\textrm{chem}}$ to the dynamical timescale of vertical mixing ($\tau_{\textrm{dyn}}$ = H$^2$/K$_\textrm{zz}$): In the troposphere, \ce{CH4} and \ce{N2O} display rather well-mixed abundances due to their longer chemical lifetimes. CO and \ce{NH3} have $\tau_{\textrm{chem}}$ comparable to $\tau_{\textrm{dyn}}$ and exhibit a negative gradient with altitude from oxidation removal. In the stratosphere, \ce{NH3} is rapidly photodissociated, while \ce{CH4} is transported from the troposphere and oxidized into CO. In the thermosphere above $\sim$ 80 km, the oxidation by \ce{O(^1D)} takes over for most species, but mixing processes, having a shorter timescale there, control the chemical distribution. For example, the CO abundance starts to increase with altitude from about 60 km as a result of downward transport of CO produced by \ce{CO2} photodissociation in the upper atmosphere.
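A minimal numerical sketch of this timescale comparison follows; the rate coefficient, OH concentration, scale height, and eddy diffusion value below are order-of-magnitude illustrative assumptions, not numbers taken from our model:

```python
def chem_timescale(k_oh, n_oh, k_o1d=0.0, n_o1d=0.0):
    """tau_chem = 1 / (k_A-OH [OH] + k_A-O(1D) [O(1D)]), in seconds."""
    return 1.0 / (k_oh * n_oh + k_o1d * n_o1d)

def dyn_timescale(scale_height_cm, kzz):
    """tau_dyn = H^2 / Kzz, in seconds."""
    return scale_height_cm**2 / kzz

# Illustrative tropospheric values: k(CH4+OH) ~ 6e-15 cm^3 s^-1 near 290 K,
# [OH] ~ 1e6 cm^-3, H ~ 8 km, Kzz ~ 1e5 cm^2 s^-1.
tau_ch4 = chem_timescale(6e-15, 1e6)   # ~ 5 years
tau_mix = dyn_timescale(8e5, 1e5)      # ~ a few months
# CH4 survives many mixing times, consistent with its well-mixed
# tropospheric profile in the figure.
assert tau_ch4 > 10 * tau_mix
```

With these numbers $\tau_\textrm{chem} \sim 10^8$ s while $\tau_\textrm{dyn} \sim 10^6$--$10^7$ s, reproducing the qualitative ordering discussed above for \ce{CH4}.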
In summary, we validate our photochemical model with HD 189733b, Jupiter, and Earth, covering a wide range of temperatures and oxidizing states. The inclusion of nitrogen and sulfur chemistry, along with the implementation of advection, condensation, and boundary conditions, is verified by comparing with models and/or observations. The discrepancies among previous models of HD 189733b are identified for future investigation. \begin{table}[t] \setlength\tabcolsep{1pt} \caption{Parameters of the planetary systems.} \begin{tabular}{p{1.8cm}p{1.6cm}p{1.75cm}p{1.25cm}p{1.25cm}} \hline Parameter & WASP-33b & HD 189733b & GJ 436b & 51 Eri b\\ a\footnote{orbital distance} (AU) & 0.02558 & 0.03142 & 0.02887 & 11.1 \\ T$_\textrm{int}$ (K) & 200 & --- & 100/400 & 760\\ R$_\textrm{s}$ (R$_\odot$) & 1.51 & 0.805 & 0.464 & 1.45\\ R$_\textrm{p}$ (R$_\textrm{J}$) & 1.603 & 1.138 & 0.38 & 1.11\\ g\footnote{gravity at the 1 bar level} (cm s$^{-2}$) & 2700 & 2140 & 1156 & 18197 \\ $\overline{\theta}$\footnote{mean stellar zenith angle} & 58 & 48 & 58 & 67\\ stellar type & A5 & K1-K2 & M2.5 & F0\\ \hline \end{tabular} \label{tab:planet_para} \end{table} \section{Case study}\label{case} In this section, we select WASP-33b (an ultra-hot Jupiter), HD 189733b (a hot Jupiter), GJ 436b (a warm Neptune), and 51 Eridani b (a young Jupiter) to perform case studies. Each case represents a distinctive class among gas giants with \ce{H2}-dominated atmospheres. The effective temperatures of these objects span 700--3000 K, and their host stars cover various stellar types. We investigate how disequilibrium processes play a part in these cases, with additional attention to the effects of sulfur chemistry and photochemical haze precursors. All the P-T profiles in this section are generated using the open-source radiative-transfer model HELIOS, except that we keep the same P-T profile of HD 189733b as in Section \ref{sec:hd189} for comparative purposes.
HELIOS employs the two-stream approximation and the correlated-k method to solve for the radiative-convective equilibrium temperature consistent with thermochemical equilibrium abundances. The gaseous opacities include \ce{H2O}, \ce{CH4}, CO, \ce{CO2}, \ce{NH3}, \ce{HCN}, \ce{C2H2}, NO, SH, \ce{H2S}, \ce{SO2}, \ce{SO3}, SiH, CaH, MgH, NaH, AlH, CrH, AlO, SiO, CaO, CIA$_{\ce{H2-H2}}$, and CIA$_{\ce{H2-He}}$, and additionally TiO, VO, Na, K, and H$^-$ for WASP-33b. The P-T profiles are fixed without taking into account the radiative feedback from disequilibrium chemistry (but see \cite{Drummond2016} for the effects on HD 189733b). The astronomical parameters used are listed in Table \ref{tab:planet_para}. The dayside-average stellar zenith angle is used for WASP-33b and GJ 436b and the global-average stellar zenith angle is used for 51 Eri b (see Appendix \ref{app:mu}), except that we keep the same value for HD 189733b to compare with the results in Section \ref{sec:hd189}. The stellar UV fluxes adopted for each system are compared in Figure \ref{fig:case-sflux}, with a detailed description in each section. For the eddy diffusion ($K_{\textrm{zz}}$) profiles in our case studies (except that we again retain the same profile for HD 189733b from \cite{Moses11}), we assume $K_{\textrm{zz}}$ to be constant in the convective region and increasing roughly with the inverse square root of pressure in the stratosphere \citep{Lindzen1981,Vivien2013}. The expression as a function of pressure in bar ($P_{\textrm{bar}}$) takes a similar form as in \cite{Charnay2015} or \cite{Moses2016}: \begin{equation} K_{\textrm{zz}} = K_{\textrm{deep}} \left(\frac{P_{\textrm{tran}}}{P_{\textrm{bar}}}\right)^{0.4}, \label{eq:Kzz} \end{equation} where $P_{\textrm{tran}}$ is the transition pressure level informed by the radiative-transfer calculation. More irradiated atmospheres have deeper radiative-convective transition levels and greater $P_{\textrm{tran}}$.
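The parametrization of Equation \ref{eq:Kzz} can be sketched as a short function; the $K_\textrm{deep}$ = 10$^8$ cm$^2$ s$^{-1}$ and $P_\textrm{tran}$ = 1 bar used below are placeholder values for illustration, not fitted to any planet in this section:

```python
import numpy as np

def eddy_kzz(p_bar, k_deep, p_tran):
    """Eddy diffusion profile: constant K_deep below the radiative-convective
    transition pressure p_tran, rising as P^-0.4 above it (lower pressure)."""
    p_bar = np.asarray(p_bar, dtype=float)
    return np.where(p_bar >= p_tran, k_deep,
                    k_deep * (p_tran / p_bar) ** 0.4)

# Pressure grid from 100 bar (deep) down to 1e-6 bar (top of atmosphere)
p = np.logspace(2, -6, 9)
kzz = eddy_kzz(p, k_deep=1e8, p_tran=1.0)
assert np.all(np.diff(kzz) >= 0)  # K_zz never decreases with altitude
```

The profile stays flat at $K_\textrm{deep}$ through the convective region and then grows by a factor of $10^{0.4}$ per decade of decreasing pressure above the transition.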
A common way of estimating $K_{\textrm{deep}}$ in the convective region is to apply mixing length theory with knowledge of the convective heat flux. For WASP-33b, most of the modeled atmosphere is in the radiative region. We choose $K_{\textrm{deep}}$ such that the overall pressure-dependent $K_{\textrm{zz}}$ profile matches that derived from the vertical wind in the general circulation model (GCM). For GJ 436b, $K_{\textrm{deep}}$ is treated as a loosely constrained free parameter along with the internal heating we explore (Section \ref{sec:GJ436b_input}). $K_{\textrm{deep}}$ is likely more important in controlling the quenched species for cooler planets, such as 51 Eri b, for which we adopt a value of $K_{\textrm{deep}}$ that produces quenched \ce{CH4} consistent with the observations. We run nominal models for all planets in this section with the S-N-C-H-O chemical network\footnote{included in the supplementary material}. We recognize there are considerable uncertainties in sulfur kinetics, as discussed in Section \ref{sec:network}. In order to gauge the effects of the uncertainties in our sulfur scheme, we explore the sensitivity to sulfur chain-forming reactions for GJ 436b and to OCS recombination for 51 Eridani b. After the chemical abundances are obtained, we use the open-source tool PLATON \citep{Zhang2019,Zhang2020} to generate transmission spectra and HELIOS for the emission spectra. \begin{figure}[!ht] \begin{center} \includegraphics[width=\columnwidth]{fig/case_sflux_1au} \includegraphics[width=\columnwidth]{fig/case_sflux_toa} \end{center} \caption{Stellar UV fluxes normalized at 1 AU (top) and at the top of the planet's atmosphere (bottom) adopted in our case-study models.} \label{fig:case-sflux} \end{figure} \subsection{WASP-33b} WASP-33b is among the hottest gas giants discovered, with a dayside temperature around 3000 K \citep{Essen2020}.
To date it remains the only case showing evidence of both a temperature inversion and \ce{TiO} features \citep{Serindag2021}, which makes WASP-33b an interesting target for testing the stability of TiO/VO along with other molecules. Previous work on ultra-hot Jupiters is limited by the assumption of chemical equilibrium \citep{Kitzmann2018,parmentier18,Zhang2018}. Here, we verify the equilibrium assumption by exploring how disequilibrium processes affect the titanium and vanadium compounds with different C/O ratios. \subsubsection{Stellar UV-flux and Eddy Diffusion} The host star WASP-33 is an A5-type star with an effective temperature of about 7400 K. We use the UV spectrum of HD 40136 (F0 type) merged with a 7000 K atlas spectrum from \cite{Sarah2013} as an analogue. The star is fast-rotating and exhibits pulsations, which might add further uncertainty to the UV flux. Nevertheless, as we will see in Section \ref{sec:wasp33b-NEQ}, photodissociation mainly converts molecules to atoms at such high temperatures and the results should be qualitatively robust. Vertical wind generally correlates with the planet's effective temperature \citep{Tan2019,Komacek2019,Baxter2021}. We assume the value of $K_{\textrm{zz}}$ based on the simulations in \cite{Tan2019}, where the global RMS vertical wind increases with decreasing pressure and reaches about 100 m/s at 1 mbar (personal communication). The vertical wind translates to K$_\textrm{zz}$ $\sim$ 10$^{11}$ cm$^2$s$^{-1}$ around 1 mbar. The temperature and eddy diffusion profiles for WASP-33b are shown in Figure \ref{fig:TP-wasp33b}. \begin{figure}[pth] \begin{center} \includegraphics[width=\columnwidth]{fig/wasp33b-TP.pdf} \end{center} \caption{The temperature-pressure and eddy diffusion ($K_\textrm{zz}$) profiles for WASP-33b.
Solar elemental abundance (solid) and C/O = 1.1 (dashed) are assumed for calculating the temperature structure.} \label{fig:TP-wasp33b} \end{figure} \subsubsection{Chemical Equilibrium} We first look at the trends associated with thermal dissociation governed by thermochemical equilibrium under carbon-poor and carbon-rich conditions, for which we assume solar C/O and C/O = 1.1, respectively. Figure \ref{fig:Ti-EQ} illustrates how titanium compounds vary with temperature in equilibrium at 1 mbar. For solar C/O, titanium mainly exists in the form of Ti and TiO. As the temperature exceeds about 2500 K, TiO becomes unstable against thermal dissociation and its abundance falls with temperature. For C/O = 1.1, TiO is depleted due to the scarcity of oxygen, as oxygen preferably combines with the excess carbon to form CO \citep{Madhu12}. Atomic titanium is the major species across this temperature range, and TiC, TiH, and TiO have comparable abundances. The effects of thermal dissociation on WASP-33b are clearly visible in the equilibrium profiles in Figure \ref{fig:wasp33b-mix}. The blistering heat of WASP-33b makes all elements predominantly exist in atomic form above 0.1 bar, where the temperature starts to increase with altitude and exceeds 3000 K, while CO with its strong C--O bond is the only molecule that survives the high temperature. For solar C/O, as the majority of C is locked in CO, atomic C tracks the temperature structure, whereas oxides such as \ce{H2O}, \ce{TiO}, VO, and \ce{TiO2} display inverse trends with temperature. For C/O = 1.1, atomic O swaps places with C, and \ce{TiO} and VO are significantly depleted.
\begin{figure}[tph] \begin{center} \includegraphics[width=\columnwidth]{fig/Ti-species-solar} \includegraphics[width=\columnwidth]{fig/Ti-species-CtoO11} \end{center} \caption{The equilibrium mixing ratios of several gas-phase titanium species at 1 mbar as a function of temperature for solar elemental abundance (top) and C/O = 1.1 (bottom).} \label{fig:Ti-EQ} \end{figure} \subsubsection{The effects of Disequilibrium Chemistry}\label{sec:wasp33b-NEQ} \begin{figure*}[tph] \begin{center} \includegraphics[width=\columnwidth]{fig/wasp33b-solar-1} \includegraphics[width=\columnwidth]{fig/wasp33b-CtoO11-1} \includegraphics[width=\columnwidth]{fig/wasp33b-solar-Ti} \includegraphics[width=\columnwidth]{fig/wasp33b-CtoO11-Ti} \end{center} \caption{The composition profiles for the main species of interest for WASP-33b, assuming solar C/O (left) and C/O = 1.1 (right). The equilibrium abundances are plotted in dotted lines.} \label{fig:wasp33b-mix} \end{figure*} For a typical hot Jupiter (e.g. HD 189733b), vertical mixing plays a major role in controlling the chemical distribution in the photosphere. However, this is not the case for WASP-33b, as the comparison of equilibrium and disequilibrium mixing-ratio profiles in Figure \ref{fig:wasp33b-mix} shows. Although the strength of eddy diffusion also increases with temperature, faster thermochemical reactions still prevail over vertical mixing. The deviation of the disequilibrium profiles above the temperature-inverted region ($\sim$ 10$^{-4}$ bar) is due to photodissociation, which reduces molecular species and produces more atoms. In the absence of vertical quenching, the depleted TiO in a carbon-rich condition cannot be replenished by vertical transport from the deep region, as seen in Figure \ref{fig:wasp33b-mix}. In the photodissociation region, in principle, stronger vertical mixing can transport more molecules upward against photodissociation. We have performed additional tests with the eddy diffusion profile varied by a factor of 10.
Yet we find the changes are minor and our results are not very sensitive to $K_\textrm{zz}$. For sulfur species, atomic S is also the favored form, followed by the SH radical. The formation of \ce{S2} and other polysulfur species (\ce{S_x}) is entirely shut down at this extremely high temperature. Sulfur does not couple to oxygen, carbon, or nitrogen since it mostly remains in atomic form. Lastly, because the adopted stellar spectrum is truncated around Lyman-$\alpha$, we have further extended the stellar spectrum to include the EUV flux shortward of Lyman-$\alpha$ using the synthetic spectra by PHOENIX\footnote{\url{http://phoenix.astro.physik.uni-goettingen.de/}}. Apart from more C atoms from CO photodissociation above 10$^{-5}$ bar, we find no notable differences in any other species. Overall, the composition distribution of WASP-33b resembles that of a hot Jupiter, except without a vertical quench region. The atmosphere of WASP-33b can be divided into a photochemically influenced region and a thermochemical equilibrium region, with the transition at the top of the temperature-inverted layer around 10$^{-4}$ bar. \begin{figure}[tph] \begin{center} \includegraphics[width=\columnwidth]{fig/wasp33b-transit} \end{center} \caption{Synthetic transmission spectra for WASP-33b computed from modeled compositions assuming solar elemental abundance, C/O = 0.75, and C/O = 1.1. The absorption features of TiO and \ce{H2O} are indicated by the color bands.} \label{fig:wasp33b-transit} \end{figure} \subsubsection{Transmission Spectra} We have computed the synthetic spectra from equilibrium and disequilibrium abundances and found no observable differences in either transmission or emission spectra. The photochemical region above the temperature inversion is too optically thin, even though molecules like \ce{H2O} and TiO are strongly photodissociated there. High-resolution spectroscopy might be better suited to probing the atomic species in this region.
Alternatively, the equilibrium abundances of TiO/VO are sensitive to changes in the elemental abundances. Figure \ref{fig:wasp33b-transit} demonstrates that the opacity in the optical is most sensitive to the change of TiO/VO when C/O is close to unity, showing even greater variation than the \ce{H2O} absorption between 1.2 and 2 $\mu$m. In conclusion, we find that photodissociation only impacts the upper atmosphere of WASP-33b where P $<$ 0.1 mbar; chemical equilibrium is generally a valid assumption, as has been found for KELT-9b \citep{Kitzmann2018} and other ultra-hot Jupiters with dayside temperatures above 3000 K. Atmospheric mixing might still play an important role in atmospheres cooler than WASP-33b. Our first attempt to solve the kinetics of titanium species can provide an interesting avenue for investigating other metals such as Fe and Ca in future studies of ultra-hot Jupiters. \begin{comment} The effects of disequilibrium chemistry on titanium and vanadium species are summarized in Figure \ref{fig:TiO}, with solar composition (left) and C/O = 10 (right) using two temperature profiles,TP-D (top) and TP-G (bottom) in Figure \ref{fig:TP_grid}. The temperature profile D is marginally above the condensation temperature of TiO/VO and a typically maximum possible value of the eddy diffusion coefficient ($K_{\mbox{zz}}$ = 10$^{12}$ cm$^2$ s$^{-1}$) is assumed. Hence it can be considered as the upper limit for disequilibrium since it is roughly the lowest temperature that can still hold TiO/VO, as thermochemistry proceeds faster toward equilibrium with higher temperature. First, we see that the atomic Ti and V are not sensitive to pressure change and disequilibrium processes for both solar composition and C/O = 10. Second, as seen in Figure \ref{fig:TiO-EQ}, the equilibrium abundances of TiO and VO begin to decrease with altitude when T $\gtrsim$ 2500 K. For solar composition, their photochemical abundances are also close to chemical equilibrium.
It is because TiO and VO closely follow \ce{H2O} and \ce{H2O} remains close to chemical equilibrium due to the efficient recycle mechanism \citep{Moses2011}, with the effect of photodissociation only seen above 10$^{-6}$ bar. For C/O = 10, TiO/VO are, however, considerably enhanced by photochemistry in the upper atmosphere following water. As water would have been depleted at equilibrium in carbon-rich conditions, it is photochemically generated back in the upper atmosphere. The whole process is initiated by the photodissociation of CO that produces a large amount of atomic oxygen. Atomic oxygen rapidly reacts with hydrogen to form hydroxyl and oxidizes Ti into TiO, via the reaction \ce{Ti + OH} $\rightarrow$ \ce{TiO + H} \citep{Parmentier2018,Decin2018}. The scheme is \begin{subequations} \label{re:Ti-TiO} \begin{align} \begin{split} \ce{ CO +} h \nu &\rightarrow \ce{C + O} \\ \ce{O + H2} &\rightarrow \ce{OH + H}\\ \ce{Ti + OH} &\rightarrow \ce{TiO + H}\\ \ce{2H} &\rightarrow \ce{H2}\\ \hline \nonumber \mbox{net} : \ce{CO + Ti} &\rightarrow \ce{C + TiO} \end{split} \end{align} \end{subequations} Similarly, water is enhanced by the photolysis of CO through \begin{subequations} \label{re:Ti-TiO} \begin{align} \begin{split} \ce{ CO +} h \nu &\rightarrow \ce{C + O} \\ \ce{O + H2} &\rightarrow \ce{OH + H}\\ \ce{OH + H2} &\rightarrow \ce{H2O + H}\\ \ce{2H} &\rightarrow \ce{H2}\\ \hline \nonumber \mbox{net} : \ce{CO + H2} &\rightarrow \ce{C + H2O} \end{split} \end{align} \end{subequations} Last, TiH is almost not affected by vertical mixing or photodissociation in all scenarios. As predicted by equilibrium chemistry (Figure \ref{fig:TiO-EQ}), TiH and TiC dominate over TiO in most parts below 10$^{-5}$ bar for C/O = 10. 
\begin{figure*} \begin{center} \end{center} \caption{Photochemical kinetics results of Ti and V bearing species for solar elemental composition (left) and C/O = 10 (right) assuming constant eddy diffusion $K_{\mbox{zz}}$ = 10$^{12}$ cm$^2$ s$^{-1}$, compared with chemical equilibrium. The mixing ratios from two temperature profiles, TP-D and TP-F, are shown in the same plot. Photochemical kinetics and chemical equilibrium from TP-D are plotted in solid lines and dotted lines. Photochemical kinetics and chemical equilibrium from TP-F are plotted in dashed-dotted lines and dashed lines, respectively.} \label{fig:TiO} \end{figure*} \end{comment} \subsection{HD189733b} We have benchmarked our model of HD 189733b in Section \ref{sec:hd189}, where we attempt to keep the astronomical and chemical setup as close to \cite{Moses11,Venot12} as possible for comparison. In this section, we include the following updates and aspects that have not been considered in previous work:\\ {\tiny$\bullet$} Recently observed stellar UV flux of HD 189733 \citep{Bourrier20}\\ {\tiny$\bullet$} Sulfur chemistry\\ {\tiny$\bullet$} Condensation of carbon vapor\\ \subsubsection{Stellar UV Flux} \cite{Bourrier20} combine HST and XMM-Newton observations and derive semi-synthetic UV spectra up to 160 nm. For our model benchmark in Section \ref{sec:hd189}, solar flux is used for wavelengths below 115 nm and the observed spectrum of epsilon Eridani (a K2-type analogue) is adopted for 115--283 nm. The previously adopted and newly observed stellar fluxes are compared in Figure \ref{fig:HD189-flux}. The EUV flux of HD 189733 is modestly higher than that of the Sun, but the photochemically important FUV ($\lambda >$ 122 nm) flux appears to be weaker. Nevertheless, the change in the UV flux turns out to only slightly decrease atomic H (by about 20\%). The overall impact of the updated EUV flux on neutral chemistry is in fact insignificant.
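To make this kind of comparison quantitative, one can integrate the photon flux over the EUV and FUV bands separately. The sketch below uses placeholder flat spectra, not the actual observed fluxes, purely to illustrate the bookkeeping of converting energy flux to photon flux before integrating.

```python
import numpy as np

H_PLANCK = 6.626e-34  # J s
C_LIGHT = 2.998e8     # m s^-1

def band_photon_flux(wav_nm, flux, lo_nm, hi_nm):
    """Integrate a spectrum (W m^-2 nm^-1 on a wavelength grid in nm)
    over [lo_nm, hi_nm] in photon units. Photon energy is h*c/lambda,
    so the photon flux density is F * lambda / (h*c)."""
    m = (wav_nm >= lo_nm) & (wav_nm <= hi_nm)
    w, f = wav_nm[m], flux[m]
    photons = f * (w * 1e-9) / (H_PLANCK * C_LIGHT)  # photons m^-2 s^-1 nm^-1
    # trapezoidal integration over wavelength
    return float(np.sum(0.5 * (photons[1:] + photons[:-1]) * np.diff(w)))

# Placeholder flat spectra standing in for the previously adopted flux and
# a newly observed one with higher EUV but weaker FUV (illustrative only).
wav = np.linspace(10.0, 283.0, 1000)
f_old = np.full_like(wav, 1.0e-3)
f_new = np.where(wav < 122.0, 1.2e-3, 0.8e-3)

euv_ratio = band_photon_flux(wav, f_new, 10, 122) / band_photon_flux(wav, f_old, 10, 122)
fuv_ratio = band_photon_flux(wav, f_new, 122, 283) / band_photon_flux(wav, f_old, 122, 283)
# euv_ratio > 1 while fuv_ratio < 1: more ionizing photons, fewer FUV photons.
```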
\begin{figure} \begin{center} \includegraphics[width=\columnwidth]{fig/HD189-sflux} \end{center} \caption{The UV flux of HD 189733 received at 1 A.U. derived from recent observations \citep{Bourrier20} compared to the previously adopted spectrum, which consists of solar EUV and epsilon Eridani following the same approach as \cite{Moses11} and used in Section \ref{sec:hd189}. The spectra are binned for clarity.} \label{fig:HD189-flux} \end{figure} \subsubsection{Sulfur Chemistry}\label{sec:HD189-S} We next run the same model but with sulfur kinetics included. The sulfur species from our photochemical calculation are illustrated in Figure \ref{fig:HD189-S} and are broadly consistent with previous work \citep{Zahnle09}. Hydrogen sulfide (\ce{H2S}) is the thermodynamically stable form of sulfur in a hydrogen-dominated atmosphere. \ce{H2S} is mostly destroyed by hydrogen abstraction \begin{equation} \vspace{-0.1cm} \ce{H2S + H -> SH + H2} \label{re:H2S} \vspace{-0.1cm} \end{equation} and restored by the reverse reaction of (\ref{re:H2S}). The forward and backward reactions of (\ref{re:H2S}) essentially dictate the level where \ce{H2S} starts to lose its stability. On HD 189733b, \ce{H2S} is dissociated above 1 mbar and predominantly turned into S. SH and \ce{S2} also reach maximum values at the level where \ce{H2S} dissociation sets in. The implication is that both \ce{SH} and \ce{S2} absorb shortwave radiation and could potentially provide stratospheric heating, especially under super-solar metallicity conditions as discussed in \cite{Zahnle09}. We find SO accumulates in the upper atmosphere from the oxidation of \ce{S + OH} $\rightarrow$ \ce{SO + H}. The highly reactive SO is known to self-react into the SO dimer (\ce{(SO)2}) and may facilitate the formation of \ce{S2O} and \ce{S2} \citep{Pinto2021} or revert to S and \ce{SO2}. In our model, however, SO is either photodissociated or reacted with atomic H back to S in the low-pressure region.
Elemental S might be subject to photoionization, as we will discuss in Section \ref{discussion}. One notable effect of photochemistry with sulfur is that several sulfur species absorb in the MUV/NUV. As illustrated in Figure \ref{fig:HD189-S}, sulfur species raise the UV photosphere above $\sim$ 230 nm, compared to the sulfur-free case, where there is no efficient absorption beyond the ammonia bands. We find \ce{H2S} responsible for the dominant absorption in the NUV (300--400 nm), rather than SH as reported in \cite{Zahnle09}; the difference might be caused by the isothermal atmosphere at 1400 K used in \cite{Zahnle09}. The absorption of \ce{S2} between 250 and 300 nm and the SH peak around 325 nm could provide observable features. Figure \ref{fig:HD189-S-noS} highlights the compositional differences when sulfur is present. Sulfur species can play an interesting role in catalyzing conversion schemes that take multiple steps. In particular, \ce{CH4} is more depleted down to about 1 mbar. We find sulfur provides a catalytic pathway for \ce{CH4}--CO conversion. As \ce{CH4} and \ce{H2S} react with atomic H to liberate carbon and sulfur, they couple to form carbon monosulfide (CS). Carbon in CS is further oxidized into OCS and eventually ends up in CO through H-abstraction, via a pathway such as \begin{eqnarray} \begin{aligned} \ce{ CH4 + H &-> CH3 + H2}\\ \ce{ CH3 + H &-> CH2 + H2}\\ \ce{ CH2 + S &-> HCS + H}\\ \ce{ HCS + H &-> CS + H2}\\ \ce{ SH + H &-> S + H2}\\ \ce{ S + OH &-> SO + H}\\ \ce{ CS + SO &-> OCS + S}\\ \ce{ OCS + H &-> CO + SH}\\ \ce{H2O &->[h\nu] OH + H}\\ \ce{SH &->[h\nu] S + H}\\ \ce{S + H2 &-> SH + H}\\ \noalign{\vglue 5pt} \hline % \noalign{\vglue 5pt} \mbox{net} : \ce{CH4 + H2O &-> CO + 3H2}. \end{aligned} \label{re:path-ch4-co-S} \end{eqnarray} Note there is no net change of sulfur species in the cycle.
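As a sanity check, the net reaction of a multi-step pathway such as (\ref{re:path-ch4-co-S}) can be verified by summing the stoichiometry of the elementary steps and cancelling intermediates. A minimal sketch (species names as plain strings; photons omitted from the bookkeeping):

```python
from collections import Counter

def net_reaction(steps):
    """Sum a list of elementary steps (reactants, products) and cancel
    species that appear equally on both sides, returning the net reaction
    as {species: coefficient} with negative = consumed, positive = produced."""
    net = Counter()
    for reactants, products in steps:
        for sp in reactants:
            net[sp] -= 1
        for sp in products:
            net[sp] += 1
    return {sp: n for sp, n in net.items() if n != 0}

# Pathway (re:path-ch4-co-S); the two photolysis steps are written
# without the photon.
steps = [
    (["CH4", "H"], ["CH3", "H2"]),
    (["CH3", "H"], ["CH2", "H2"]),
    (["CH2", "S"], ["HCS", "H"]),
    (["HCS", "H"], ["CS", "H2"]),
    (["SH", "H"], ["S", "H2"]),
    (["S", "OH"], ["SO", "H"]),
    (["CS", "SO"], ["OCS", "S"]),
    (["OCS", "H"], ["CO", "SH"]),
    (["H2O"], ["OH", "H"]),   # photolysis
    (["SH"], ["S", "H"]),     # photolysis
    (["S", "H2"], ["SH", "H"]),
]
net = net_reaction(steps)
# All S-bearing intermediates cancel, leaving CH4 + H2O -> CO + 3 H2.
assert net == {"CH4": -1, "H2O": -1, "CO": 1, "H2": 3}
```

The same bookkeeping confirms that no sulfur species appears in the net reaction, i.e. sulfur acts purely catalytically in this cycle.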
The rate-limiting reaction in pathway (\ref{re:path-ch4-co-S}) is the carbon-sulfur step \ce{ CH2 + S -> HCS + H}, which is about three orders of magnitude faster than the pathway without sulfur around 1 mbar. Interestingly, we find SH plays an analogous role to \ce{H2O} in catalytically converting \ce{H2} to H, causing the minor H increase in Figure \ref{fig:HD189-S-noS}. The presence of sulfur species enhances the destruction of methane and might partly contribute to the scarcity of methane detection on hot Jupiters \citep[e.g., ][and references within]{Baxter2021}. \ce{H2S} has also been reported to speed up the oxidation of methane in combustion experiments \citep{Gersen2017} under the oxidizing, high-pressure conditions of gas engines. The decrease of \ce{CH4} naturally reduces its downstream products to a great extent. The column density shown in Figure \ref{fig:haze-bar} reflects the reduction of haze precursors with the participation of sulfur. Based on our fiducial analysis of HD 189733b, we suggest that organic haze formation is likely to be partly suppressed by sulfur kinetics on a hot Jupiter, rather than enhanced, as suggested by experimental simulations under \ce{CO2}-rich conditions \citep{He2020}. \begin{figure*}[ht] \begin{center} \includegraphics[width=\columnwidth]{fig/HD189-S} \includegraphics[width=\columnwidth]{fig/HD189-tau-S} \end{center} \caption{Left: Mixing ratios of the major sulfur species computed in the model of HD 189733b. The photochemical kinetics results are in solid lines and equilibrium abundances in dotted lines.
Right: The pressure level of optical depth $\tau$ = 1 as a function of wavelength while including (black) and excluding (grey) sulfur chemistry, along with the main individual contributions from sulfur species.} \label{fig:HD189-S} \end{figure*} \begin{figure}[htp] \begin{center} \includegraphics[width=\columnwidth]{fig/HD189-S-effects} \end{center} \caption{Mixing ratio profiles of main species on HD 189733b that exhibit differences between models including (solid) and excluding (dashed) sulfur kinetics.} \label{fig:HD189-S-noS} \end{figure} \subsubsection{Condensation of Carbon Vapor}\label{HD189-C-conden} Atomic carbon vapor (C) is produced by CO dissociation (including both photodissociation and thermal dissociation in the thermosphere) or the reaction \ce{N + CN -> C + N2} above $\sim$ 0.1 mbar, and also by H abstraction with \ce{CHx} species in the lower region. The saturation vapor pressure of C falls off rapidly with decreasing temperature in the upper atmosphere, as shown in Figure \ref{fig:HD189-C-conden}. In fact, the disequilibrium abundance of C starts to exceed the saturation concentration above 10 mbar. The realistic timescale for graphite growth by condensation involves detailed microphysics and is beyond the scope of this study. As a simple test, we explore the kinetic effects after carbon vapor is fully condensed. We run our HD 189733b model including sulfur chemistry and do not allow C to become supersaturated, but simply fix the abundance of C in the gas phase to its saturation mixing ratio. This is physically equivalent to assuming instantaneous condensation and unlimited condensation nuclei. Figure \ref{fig:HD189-C-conden} demonstrates the consequences when C is instantaneously condensed, which mainly impacts the region above 0.1 mbar. Without the condensation of C, \ce{CH4} can be replenished by the hydrogenation sequence of C (i.e. C $\rightarrow$ CH $\rightarrow$ \ce{CH2} $\rightarrow$ \ce{CH3} $\rightarrow$ \ce{CH4}).
This channel is closed once C condenses out, and \ce{CH4} is further depleted in the upper atmosphere. CS is reduced in the same way, but \ce{C2H2} and HCN remain almost unaffected (they are already reduced compared to the model without sulfur). In the end, we find that the condensation of C has limited effects on other gas-phase compositions in the upper atmosphere. \begin{figure}[htp] \begin{center} \includegraphics[width=\columnwidth]{fig/HD189-C_conden} \end{center} \caption{Several carbon-containing species from the nominal model (dashed) compared to those from the model with limited C due to instantaneous condensation. The saturation mixing ratio of C is plotted as a dotted curve.} \label{fig:HD189-C-conden} \end{figure} \subsubsection{Transmission Spectra} Here we first take a look at the observational consequences of the model differences among \cite{Moses11}, \cite{Venot12}, and VULCAN, which we examined in Section \ref{sec:hd189}. Figure \ref{fig:HD189-transit-VM11V12} showcases the transmission spectra of HD 189733b generated from the compositions computed by VULCAN, \cite{Moses11}, and \cite{Venot12}. The lower quenched abundances of \ce{CH4} and \ce{NH3} in \cite{Venot12} are responsible for the primary spectral differences, while the spectra from VULCAN and \cite{Moses11} are fairly close. The ammonia absorption around 8--12 $\mu$m could be a useful diagnostic of the quenching mechanism of nitrogen chemistry. Overall, we find these model uncertainties lead to about half of the spectral deviation caused by disequilibrium chemistry. We then examine the effects of including sulfur chemistry on the transmission spectra in Figure \ref{fig:HD189-transit-S}. While the features from sulfur-containing species are almost obscured by other molecules such as \ce{H2O} and \ce{CH4} in the near-IR, there are still visible differences due to sulfur's impact on \ce{CH4} and \ce{NH3}.
Since the coupling to sulfur reduces the abundances of \ce{CH4} and \ce{NH3}, the transit depth is smaller in the presence of sulfur. The differences caused by sulfur chemistry are smaller than those between equilibrium and disequilibrium \ce{CH4} and \ce{NH3} abundances, but not trivial. Current observations are not capable of placing conclusive constraints and we need to rely on future facilities with higher resolving power (e.g., JWST, ARIEL). \begin{figure}[htp] \begin{center} \includegraphics[width=\columnwidth]{fig/HD189-transit-VM11V12} \end{center} \caption{Synthetic transmission spectra for HD 189733b generated from chemical abundances computed by VULCAN, \cite{Moses11}, \cite{Venot12}, and when assuming chemical equilibrium. Transit observations from \cite{Pont2013} and \cite{Mc2014} are shown as data points with error bars. The absorption features for the main molecules are indicated by the color bands.} \label{fig:HD189-transit-VM11V12} \end{figure} \begin{figure}[htp] \begin{center} \includegraphics[width=\columnwidth]{fig/HD189-transit-S} \end{center} \caption{Same as Figure \ref{fig:HD189-transit-VM11V12} but with abundances computed from our model while including or excluding sulfur species.} \label{fig:HD189-transit-S} \end{figure} \subsection{GJ 436b}\label{sec:GJ436b} GJ 436b is a Neptune-sized planet in a close orbit around an M dwarf star. This warm Neptune has received tremendous attention since its discovery \citep{Butler2004}, including multiple primary-transit and secondary-eclipse observations with {\it Spitzer} (\cite{Stevenson2010,Madhu2011,Morley2017} and references within), as well as transmission spectroscopy with HST WFC3 \citep{Demory2007,Knutson2014}. {\it Spitzer} 3.6 $\mu$m and 4.5 $\mu$m emission indicates the atmosphere is rich in CO/\ce{CO2} and poor in \ce{CH4}. Yet precise constraints on the CO/\ce{CH4} ratio and the mechanism behind it remain inconclusive.
Forward models have suggested that high metallicity \citep{Moses2013,Morley2017} and a hot interior from tidal heating \citep{Agundez2014} can explain the observed CO/\ce{CH4}, but these are inconsistent with the low water content (less than 10$^{-4}$) obtained by the retrieval analysis of \cite{Madhu2011}. \cite{Hu2015} propose that a remnant helium-dominated atmosphere resulting from hydrogen escape can naturally deplete \ce{CH4} and \ce{H2O}. However, the Lyman-$\alpha$ absorption still appears to indicate a hydrogen-dominated atmosphere for GJ 436b \citep{Khodachenko2019}. For this work, we restrict ourselves to 100 times solar metallicity (Neptune-like) and explore the effects of vertical mixing and internal heat in the presence of sulfur. \begin{figure}[htp] \begin{center} \includegraphics[width=\columnwidth]{fig/GJ436b-TPK} \end{center} \caption{The temperature-pressure and eddy diffusion ($K_\textrm{zz}$) profiles for GJ 436b, showing low (T$_{\textrm{int}}$ = 100 K) and high (T$_{\textrm{int}}$ = 400 K) internal heating and weak (dashed) and strong (solid) vertical mixing. The [\ce{CH4}]/[CO] = 1 equilibrium transition curve for 100 times solar metallicity is shown by the dotted curve.} \label{fig:GJ436b-TPK} \end{figure} \begin{figure*}[htp] \begin{center} \includegraphics[width=\columnwidth]{fig/GJ436b-Tint100-K6} \includegraphics[width=\columnwidth]{fig/GJ436b-Tint400-K6} \includegraphics[width=\columnwidth]{fig/GJ436b-Tint100-K8} \includegraphics[width=\columnwidth]{fig/GJ436b-Tint400-K8} \end{center} \caption{The mixing ratio profiles (solid) along with equilibrium profiles (dotted) of several main species on GJ 436b for different assumptions of internal temperature and vertical mixing.
The left/right columns correspond to low/high ($T_{\textrm{int}}$ = 100 K/$T_{\textrm{int}}$ = 400 K) internal heating and the top/bottom rows correspond to weak/strong vertical mixing.} \label{fig:GJ436b-mix} \end{figure*} \begin{figure*}[htp] \begin{center} \includegraphics[width=\columnwidth]{fig/GJ436b-Tint100-K6-S} \includegraphics[width=\columnwidth]{fig/GJ436b-Tint400-K6-S} \includegraphics[width=\columnwidth]{fig/GJ436b-Tint100-K8-S} \includegraphics[width=\columnwidth]{fig/GJ436b-Tint400-K8-S} \end{center} \caption{Same as Figure \ref{fig:GJ436b-mix} but for atomic O and sulfur species.} \label{fig:GJ436b-S} \end{figure*} \begin{figure*}[htp] \begin{center} \includegraphics[width=\columnwidth]{fig/GJ436b-Tint100-K6-noS} \includegraphics[width=\columnwidth]{fig/GJ436b-Tint400-K6-noS} \includegraphics[width=\columnwidth]{fig/GJ436b-Tint100-K8-noS} \includegraphics[width=\columnwidth]{fig/GJ436b-Tint400-K8-noS} \end{center} \caption{The abundances of several main species that show differences between models including (solid) and excluding (dashed) sulfur kinetics. Each panel corresponds to the same internal heating and vertical mixing as Figure \ref{fig:GJ436b-mix}.} \label{fig:GJ436-noS} \end{figure*} \begin{figure}[htp] \begin{center} \includegraphics[width=\columnwidth]{fig/photosphere_GJ436b-M2-Tint400-KminE6.pdf} \includegraphics[width=\columnwidth]{fig/photosphere_GJ436b-M2-Tint400-KminE8.pdf} \end{center} \caption{The pressure level of optical depth $\tau$ = 1 for GJ 436b with high internal heating (T$_{\textrm{int}}$ = 400 K) while including (black) and excluding (grey) sulfur chemistry, along with the main contribution from sulfur species.
The top and bottom panels are for weak and strong vertical mixing.} \label{fig:GJ436b-photosphere} \end{figure} \subsubsection{Model Input}\label{sec:GJ436b_input} Following the best-fit parameters in \cite{Morley2017}, we set up a low and a high internal heating scenario by running HELIOS assuming T$_{\textrm{int}}$ = 100 K and 400 K, respectively. The stellar UV flux of GJ 436 is adopted from the MUSCLES survey (version 2.2; \cite{France2016,Youngblood2016,Loyd2016}). The eddy diffusion profile is prescribed by (\ref{eq:Kzz}) with $P_{\textrm{tran}}$ = 1 bar, which is where the radiative-convective transition is located in our radiative transfer calculation. We also explore weak and strong vertical mixing scenarios, based on the GCM simulation by \cite{Lewis2010}. The average vertical wind from \cite{Lewis2010} translates to effective eddy diffusion coefficients ranging from 10$^8$ cm$^2$ s$^{-1}$ at 100 bar to 10$^{11}$ cm$^2$ s$^{-1}$ at 0.1 mbar, assuming the mixing length to be the atmospheric scale height. Since this choice of mixing length generally overestimates the strength of eddy diffusion \citep{Smith1998,Vivien2013,Bordwell2018}, we take it as an upper limit and adopt it for the strong vertical mixing scenario. We correspondingly have $K_{\textrm{deep}}$ = 10$^8$ cm$^2$ s$^{-1}$ for the strong mixing scenario and assume $K_{\textrm{deep}}$ = 10$^6$ cm$^2$ s$^{-1}$ for the weak mixing scenario. \subsubsection{Effects of Vertical Mixing and Internal Heating} Resolving the CO/\ce{CH4} abundance ratio is the leading question regarding the atmospheric composition of GJ 436b. Since we do not vary the elemental abundances, the photospheric abundance of \ce{CH4} primarily depends on the quench level, which is physically controlled by the strength of vertical mixing and the thermal structure. Figure \ref{fig:GJ436b-TPK} shows that with 100 times solar metallicity, both temperature profiles lie within the CO-dominated region.
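The mixing-length estimate used in the model input above ($K_{\textrm{zz}} \approx wH$, with the mixing length set to the pressure scale height $H = k_B T/\mu m_{\textrm{H}} g$) can be sketched as follows; the temperature, gravity, and wind speed below are illustrative round numbers for a warm Neptune, not the model's actual inputs.

```python
K_B = 1.380649e-23   # J K^-1, Boltzmann constant
M_H = 1.6726e-27     # kg, hydrogen-atom mass

def scale_height(T, mu, g):
    """Pressure scale height H = k_B * T / (mu * m_H * g), in meters."""
    return K_B * T / (mu * M_H * g)

def kzz_mixing_length(w, T, mu, g):
    """Effective eddy diffusion K_zz ~ w * H in cm^2 s^-1, taking the
    mixing length to be the scale height (generally an upper limit)."""
    return w * scale_height(T, mu, g) * 1e4   # m^2 s^-1 -> cm^2 s^-1

# Illustrative numbers: T = 700 K, mu = 2.3, g = 12.8 m s^-2, and an
# average vertical wind of w = 1 m s^-1 (all hypothetical).
kzz = kzz_mixing_length(1.0, 700.0, 2.3, 12.8)
# kzz comes out of order 1e9 cm^2 s^-1, within the 1e8--1e11 range
# quoted above for the wind-derived eddy diffusion.
```

Because the true mixing length is typically smaller than a full scale height, this estimate is best read as an upper bound, which is why we assign it to the strong mixing scenario.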
As illustrated by the equilibrium profiles in Figure \ref{fig:GJ436b-mix}, for low internal heating ($T_{\textrm{int}}$ = 100 K), the temperature is close to the \ce{CH4}/CO transition and the equilibrium \ce{CH4} abundance oscillates below 10$^{-4}$ bar, whereas for high internal heating ($T_{\textrm{int}}$ = 400 K) the equilibrium \ce{CH4} abundance decreases monotonically with increasing pressure from about 10$^{-4}$ bar. This non-monotonic variation of \ce{CH4} with depth was pointed out by \cite{Karan2019}, who suggest it can lead to a non-monotonic dependence on increasing $K_{\textrm{zz}}$ for low internal heating. However, in the physically motivated range of $K_{\textrm{zz}}$ we explored, \ce{CH4} is always quenched in the confined region below 1 bar where \ce{CH4} increases with depth, as shown in Figure \ref{fig:GJ436b-mix}. Therefore, stronger vertical mixing produces a higher quenched \ce{CH4} abundance for low internal heating and conversely a lower quenched \ce{CH4} abundance for high internal heating. For low internal heating, \ce{CH4} remains in considerable amounts in both weak and strong mixing cases, with photospheric CO/\ce{CH4} ratios of about 25 and 4, respectively. This methane efficiently converts to other hydrocarbons (e.g., \ce{C2H2} and \ce{C6H6}) and HCN, especially in the photochemically active region above 1 mbar. For high internal heating, the \ce{CH4} abundance is significantly reduced compared to that with low internal heating, regardless of vertical mixing. The photospheric CO/\ce{CH4} ratio for weak and strong mixing is about 2000 and 7000, respectively. \ce{C2H2} and HCN are also diminished with \ce{CH4}, except that \ce{CH4} can still be transported to the upper atmosphere in the strong mixing scenario. For nitrogen species, \ce{N2} predominates at high metallicity and quenched \ce{NH3} exhibits greater abundances than equilibrium in all cases.
The \ce{NH3}--\ce{N2} conversion is slower than \ce{CH4}--\ce{CO}, and hence \ce{NH3} is quenched deeper than \ce{CH4}. Photochemically produced HCN can overtake \ce{NH3} and \ce{CH4} in the upper atmosphere, but at higher altitudes than on HD 189733b due to the weaker UV flux of GJ 436. Interestingly, the quench level of \ce{NH3} appears to vary little with vertical mixing and mainly responds to the change of internal heating (see the pressure and temperature dependence of \ce{NH3}--\ce{N2} conversion in \cite{tsai18}). The insensitivity of \ce{NH3} to vertical mixing could provide additional constraints on the thermal properties of the deep atmosphere. \subsubsection{Effects of Sulfur Species}\label{sec-GJ436-S} Figure \ref{fig:GJ436b-S} shows the distribution of the main sulfur species for each scenario. \ce{H2S} remains the major sulfur-bearing molecule. The region where \ce{H2S} is stable extends to a lower pressure of about 0.1 mbar compared to HD 189733b, because less atomic H is photochemically produced on GJ 436b. Above the \ce{H2S}-stable region, the sulfur speciation is more diverse than the S/\ce{H2S} dichotomy in a hot Jupiter. The stratospheric temperature of GJ 436b is too warm for elemental sulfur to grow into large allotropes but allows rich interaction of sulfur and oxygen species in the upper stratosphere. The distribution is sensitive to mixing processes: \ce{SO2} takes over \ce{H2S} for weak vertical mixing, while S, \ce{S2}, and SO are in turn the leading sulfur species for strong vertical mixing. Since sulfur species {\it do not} quench in the deep region like \ce{CH4} and \ce{NH3}, they are not affected by internal heating. Instead, sulfur species are more sensitive to photochemical products transported from the upper atmosphere. Sulfur also impacts non-sulfur species. Figure \ref{fig:GJ436-noS} compares our models of GJ 436b that include and exclude sulfur chemistry.
H is considerably reduced between 0.1 and 10$^{-4}$ bar in the presence of sulfur, opposite to what is seen on HD 189733b. This is because the photolysis of SH, which provides the catalytic H production on HD 189733b, is absent, as SH is less favored on GJ 436b. Instead, hydrogen is recycled faster by \ce{H + H2S -> H2 + SH} in the \ce{H2S}-stable region of GJ 436b, as indicated in Figure \ref{fig:S-H2S-rate}. The reduction of H subsequently slows down the production of \ce{C2H2} and HCN, even though the \ce{CH4} abundance is almost unchanged. In the photochemically active region above $\sim$ 0.1 mbar, atomic C preferentially combines with sulfur into OCS or CS, which further lowers \ce{C2H2} and HCN in the upper atmosphere. As \ce{NH3} is oxidized by atomic O into NO in this region, nitrogen sulfide (NS) accelerates the oxidation of \ce{NH3} while coupling to sulfur, analogous to the role of HCS in destroying \ce{CH4} on HD 189733b. The catalytic pathway for oxidizing \ce{NH3} is \begin{eqnarray} \begin{aligned} \ce{NH3 + H &-> NH2 + H2}\\ \ce{NH2 + S &-> NS + H2}\\ \ce{NS + O &-> NO + S}\\ \noalign{\vglue 5pt} \hline % \noalign{\vglue 5pt} \mbox{net} : \ce{NH3 + H + O &-> NO + 2H2}. \end{aligned} \label{path:NH3-S} \end{eqnarray} Similar to HD 189733b, sulfur species raise the UV photosphere longward of 200 nm, as shown in Figure \ref{fig:GJ436b-photosphere}. The absorption feature around 200--300 nm reflects the aforementioned sensitivity to vertical mixing, with \ce{SO2} predominating in the weak mixing scenario and SO and \ce{S2} in the strong mixing scenario.
\begin{figure}[htp] \begin{center} \includegraphics[width=\columnwidth]{fig/H-H2S-rate-T400-KE8} \end{center} \caption{The rates of the reactions that are key to recycling H back to \ce{H2}, in the GJ 436b model including sulfur kinetics with $T_{\textrm{int}}$ = 400 K and strong vertical mixing (corresponding to the bottom right panel of Figure \ref{fig:GJ436-noS}).} \label{fig:S-H2S-rate} \end{figure} \subsubsection{Sensitivity to \ce{S_x} Polymerization} Since the growth from \ce{S2} to larger sulfur allotropes is suppressed in our GJ 436b model, we perform a sensitivity test to see if \ce{S_x} beyond \ce{S2} can be produced with faster polysulfur recombination rates. The three-body recombination reactions that interconvert \ce{S2}--\ce{S4}--\ce{S8} are the main polymerization steps. We follow \cite{Zahnle2016} and raise the rate of \ce{S4} recombination by a factor of 10 and that of \ce{S8} recombination by a factor of 100 to test faster polysulfur formation. We find the abundances of \ce{S4} and \ce{S8} remain low and almost unchanged. We confirm that the stratosphere of GJ 436b is too warm for elemental sulfur to grow beyond \ce{S2} in appreciable quantities, even after taking into account the uncertainties in the sulfur polymerization rates. \begin{figure}[htp] \begin{center} \includegraphics[width=\columnwidth]{fig/GJ436-transit-T100} \includegraphics[width=\columnwidth]{fig/GJ436-transit-T400} \end{center} \caption{Synthetic transmission spectra computed for our GJ 436b model assuming $T_{\textrm{int}}$ = 100 K (top) and 400 K (bottom) with weak and strong vertical mixing. The model without sulfur chemistry (for $T_{\textrm{int}}$ = 400 K and weak vertical mixing) is also plotted for comparison. The {\it HST}/WFC3 points from \cite{Knutson2014} have been shifted down by 200 ppm, following \cite{Lothringer2018}.
The wavebands of main molecular absorption are indicated.} \label{fig:GJ436-transit} \end{figure} \begin{figure}[htp] \begin{center} \includegraphics[width=\columnwidth]{fig/GJ436b-emission} \end{center} \caption{Synthetic emission spectra computed for our GJ 436b models including sulfur chemistry, in comparison with {\it Spitzer} secondary-eclipse data.} \label{fig:GJ436-emission} \end{figure} \subsubsection{Transmission and Emission Spectra} The observational consequences of varying vertical mixing and internal heating for GJ 436b in transmission spectroscopy are shown in Figure \ref{fig:GJ436-transit}. The early analysis of {\it Spitzer} data \citep{Knutson2011} showed inter-epoch variability, which is reduced in the investigations of \cite{Lanotte2014,Morello2015}. Methane absorption at 2.1--2.5 and 3--4 $\mu$m and sulfur dioxide absorption at 7--10 $\mu$m are the most promising diagnostic features. For $T_{\textrm{int}}$ = 100 K, stronger vertical mixing leads to a higher \ce{CH4} abundance, and the strong mixing scenario can be marginally ruled out by the {\it Spitzer} 3.6-$\mu$m data. For $T_{\textrm{int}}$ = 400 K, vertical mixing conversely reduces the \ce{CH4} abundance, consistent with previous work by \cite{Agundez2014} and \cite{Morley2017}. The 3.6 and 4.5 $\mu$m {\it Spitzer} data are consistent with our models under weak/strong vertical mixing or in chemical equilibrium. While \ce{CH4} is not sensitive to the strength of vertical mixing in the high internal heating scenario, \ce{SO2} shows a strong dependence on mixing processes and is favored in our weak mixing scenario, which is potentially detectable by {\it JWST}/MIRI. In addition, \ce{S2} is more favored with strong vertical mixing and provides strong absorption features in the UV. Figure \ref{fig:GJ436-emission} shows the synthetic emission spectra for GJ 436b compared to {\it Spitzer} observations.
While the 3.6 $\mu$m data prefer the $T_{\textrm{int}}$ = 400 K models, the $T_{\textrm{int}}$ = 100 K models are favored by the 8 $\mu$m data. On the other hand, the already large column abundance of CO makes the thermal emission at 4.5 $\mu$m insensitive to internal heating or vertical mixing. The models somewhat overpredict the flux at 4.5 $\mu$m, as in the previous study \citep{Morley2017}. We conclude that our models demonstrate and confirm that the combination of moderately high ($\gtrsim$ 100 times) solar metallicity and internal heating can explain the low \ce{CH4}/CO ratio loosely constrained by the {\it Spitzer} 3.6 and 4.5 $\mu$m observations, regardless of the strength of mixing. Sulfur species do not quench in the deep region like \ce{CH4} or \ce{NH3} but are closely tied to photolysis and mixing processes in the upper stratosphere. This independence from the thermal properties of the interior makes sulfur chemistry a complementary avenue for characterizing GJ 436b, in conjunction with the long-standing quest to constrain \ce{CH4}/CO. \subsection{51 Eridani b} 51 Eridani b (51 Eri b) is a young Jupiter-mass giant in a wide orbit around an F-type star. Unlike irradiated hot Jupiters, the residual heat from formation predominates over the stellar flux. In the discovery work, \cite{Macintosh2015} suggest 51 Eri b has an effective temperature of 550--750 K with vertically quenched \ce{CH4} and CO. Water vapor should not condense owing to the heat at depth, in contrast to the colder Jupiter in our solar system. The combination of the hot interior and photochemically active stratosphere makes 51 Eri b a unique testbed for atmospheric chemistry, as explored by \cite{Zahnle2016,Moses2016}. \begin{figure}[htp] \begin{center} \includegraphics[width=\columnwidth]{fig/51Erib-TPK} \end{center} \caption{The temperature-pressure and eddy diffusion ($K_\textrm{zz}$) profiles for 51 Eri b, assuming solar and 10 times solar metallicity.
The [\ce{CH4}]/[CO] = 1 equilibrium transition curves corresponding to the two metallicities are shown by the dotted curves.} \label{fig:51Erib-TPK} \end{figure} \subsubsection{Model Setup} We adopt $T_{\textrm{eff}}$ = 760 K as suggested by the retrieval work of \cite{Samland2017} for calculating the temperature profile of 51 Eri b (although we find little difference between setting $T_{\textrm{eff}}$ = 760 K and $T_{\textrm{eff}}$ = 700 K as assumed in previous work \citep{Moses2016,Zahnle2016}). \cite{Samland2017} also derive a super-solar metallicity of 10 times solar based on the K-band emission. To explore the effects of metallicity, we construct one temperature profile with solar metallicity and one with 10 times solar metallicity. The resulting P-T profiles are shown in Figure \ref{fig:51Erib-TPK}. We did not include a thermosphere, as \cite{Moses2016} added an arbitrary 1000 K inversion layer above 1 $\mu$bar but found little effect on the chemistry. The eddy diffusion takes the same form as Equation (\ref{eq:Kzz}), with the radiative-convective transition P$_{\textrm{tran}}$ and $K_{\textrm{deep}}$ set to 0.1 bar and 10$^5$ cm$^2$s$^{-1}$, respectively. The stellar UV flux of 51 Eridani is assembled from various sources. The observations from the International Ultraviolet Explorer (IUE)\footnote{\url{https://archive.stsci.edu/iue/}} cover the wavelength range between 115 and 198 nm. The EUV flux ($\lambda <$ 115 nm) is adopted from the synthetic spectrum of HR 8799 \citep{Sanz2011}\footnote{\url{http://sdc.cab.inta-csic.es/xexoplanets}}, following \cite{Moses2016}. For wavelengths greater than 198 nm, we use a theoretical stellar spectrum with a similar temperature from the ATLAS9 Model Atmosphere Database\footnote{\url{https://archive.stsci.edu/prepds/bosz/}}, computed with the BOSZ stellar atmosphere model \citep{BOSZ2017}, assuming $T_{\textrm{eff}}$ = 7250 K, log(g) = 4, log[Fe/H] = 0, and radius = 1.45 R$_\odot$.
The stellar irradiation received by the planet in our 51 Eridani model is about 50 $\%$ stronger than in previous work, since the orbit has been updated from 13 -- 14 AU to 11.1 AU according to \cite{DeRosa2020}. \subsubsection{Disequilibrium Chemistry Compared with Previous Work} \cite{Zahnle2016} investigate sulfur hazes in the atmosphere of 51 Eri b. \cite{Moses2016} use an extensive N-C-H-O chemical network ($\sim$ 1650 reactions), which includes more complex hydrocarbons, to study the quenching and photochemical effects. The mixing ratios of the main species in our 51 Eri b model for solar and 10 times solar metallicity are displayed in the top left panel of Figure \ref{fig:51Erib-mix}. The main C, H, N, O chemical species in our model are overall consistent with both \cite{Zahnle2016} and \cite{Moses2016}, as we summarize in the following paragraph. The top row of Figure \ref{fig:51Erib-mix} shows how disequilibrium processes control the composition distribution. First, \ce{CH4}-CO conversion is quenched at about 1 bar, and thus CO predominates over \ce{CH4}. Likewise, \ce{N2} is the predominant nitrogen-bearing species over \ce{NH3}. Second, without a fast recycling mechanism like that on a hot Jupiter, strong photolysis of water makes the upper atmosphere oxidizing and produces considerable \ce{CO2} and \ce{O2}. Third, \ce{C2H2} and \ce{HCN} are photochemically generated in the upper atmosphere, similar to hot Jupiters. While \ce{C6H6} is produced to about the 10 ppb level in \cite{Moses2016}, \ce{C6H6} is greatly reduced in our model including sulfur, as seen for GJ 436b. The most notable difference between \cite{Zahnle2016} and our model in terms of sulfur chemistry is that \ce{S8} reaches condensable levels in our atmosphere. Although produced at about the same level, \ce{S8} is close to saturation but does not condense in the nominal model of \cite{Zahnle2016}.
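The magnitude of this irradiation update follows directly from the inverse-square dilution of starlight. A quick check under the quoted orbital distances:

```python
# Flux scales as 1/a^2: moving the planet from the older 13-14 AU orbit to
# 11.1 AU (DeRosa et al. 2020) strengthens the received stellar flux by
# roughly 40-60 percent, i.e. "about 50%" stronger irradiation.
a_old_au = (13.0, 14.0)   # older orbital-distance range, AU
a_new_au = 11.1           # updated orbital distance, AU
ratios = [(a / a_new_au) ** 2 for a in a_old_au]
```

The two bounds bracket the $\sim$50\% enhancement stated above.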
Since we adopt the same saturation vapor pressure of sulfur allotropes \citep{Lyon2008} as \cite{Zahnle2016}, the different condensation behavior should be due to a warmer upper stratosphere in \cite{Zahnle2016}, who indeed noted that \ce{S8} would condense if the temperature were just a few degrees lower. \begin{figure*}[!ht] \begin{center} \includegraphics[width=\columnwidth]{fig/51Eri-solar-1} \includegraphics[width=\columnwidth]{fig/51Eri-10Xsolar-1} \includegraphics[width=\columnwidth]{fig/51Eri-solar-2} \includegraphics[width=\columnwidth]{fig/51Eri-10Xsolar-2} \includegraphics[width=\columnwidth]{fig/51Eri-solar-3} \includegraphics[width=\columnwidth]{fig/51Eri-10Xsolar-3} \end{center} \caption{The computed abundance profiles of 51 Eri b, assuming solar (left panels) and 10 times solar (right panels) metallicity. The top row presents the main species, with equilibrium profiles shown as dotted lines. The middle row shows the main sulfur species, and the bottom row shows \ce{S2}/\ce{S8} vapor (solid), \ce{S2}/\ce{S8} condensate particles (dashed-dotted), and the saturation mixing ratios of \ce{S2}/\ce{S8} (dotted). The particle abundances are plotted as the ratio of the number density of particles to the total number density of gas molecules, multiplied by 10$^{10}$.} \label{fig:51Erib-mix} \end{figure*} \begin{figure*}[!ht] \begin{center} \includegraphics[width=\columnwidth]{fig/51Eri-solar-noS} \includegraphics[width=\columnwidth]{fig/51Eri-10Xsolar-noS} \end{center} \caption{The abundances of several main species that differ between models with (solid) and without (dashed) sulfur kinetics for 51 Eri b.} \label{fig:51Erib-noS} \end{figure*} \subsubsection{Effects of Super-Solar Metallicity} The left and right columns of Figure \ref{fig:51Erib-mix} compare the results of solar and 10 times solar metallicity.
The model with 10 times solar metallicity has a slightly hotter troposphere (Figure \ref{fig:51Erib-TPK}), which favors CO over \ce{CH4}. Although the equilibrium abundance of \ce{CH4} in the stratosphere is increased in the 10 $\times$ solar metallicity model, \ce{CH4} is in fact decreased, with a lower \ce{CH4}/CO ratio at the quench level. In the end, the 10 $\times$ solar metallicity model shows higher \ce{CO}, \ce{CO2}, and \ce{H2O} and lower \ce{CH4} abundances, which in turn reduces other hydrocarbons as well. The mixing ratio of \ce{CO2} exceeds that of \ce{CH4} for 10 $\times$ solar metallicity and can reach $\sim$ 0.1$\%$ in the upper atmosphere. The production of \ce{O2} also rises with metallicity, following the increase of water. \subsubsection{Effects of Sulfur}\label{sec:51Eri-S} In contrast to HD 189733b and GJ 436b, \ce{H2S} can only remain stable against hydrogen abstraction (\ref{re:H2S}) at higher pressures, above about 0.05 bar. The reverse rate of (\ref{re:H2S}) drops significantly in the cooler stratosphere of 51 Eri b and prohibits the reformation of \ce{H2S}. The active SH radical produced by \ce{H2S} leads to a rich variety of sulfur species, as illustrated in the middle row of Figure \ref{fig:51Erib-mix}. Compared to \cite{Zahnle2016}, our model exhibits a more oxidized upper stratosphere due to the stronger UV irradiation from the closer orbit in our setting (11.1 AU compared to 13.2 AU). Nonetheless, both models predict efficient polymerization forming a great abundance of \ce{S8}. Since \ce{S8} is the final reservoir of the sulfur chain reactions, we find the condensation of \ce{S8} does not affect other sulfur species. Elemental S is still the leading sulfur species above the \ce{S8}-condensing layers until it is oxidized into SO and \ce{SO2} in the upper stratosphere. The equilibrium abundance of \ce{H2S} scales with metallicity, which leads to more production of \ce{S8} vapor as metallicity increases.
The 10 times solar metallicity model also has a slightly warmer stratosphere, which allows a higher saturation pressure of sulfur. In the end, both the gas and condensates of \ce{S2} and \ce{S8} increase with metallicity. The effects of coupling to sulfur on other species are highlighted in Figure \ref{fig:51Erib-noS}. The most remarkable feature is the enhanced oxygen abundance in the upper atmosphere with sulfur. In the absence of sulfur, atomic O can be released from \ce{H2O} with the aid of \ce{CO2}: \begin{eqnarray} \begin{aligned} \ce{H2O &->[h\nu] OH + H}\\ \ce{CO + OH &-> CO2 + H}\\ \ce{CO2 &->[h\nu] CO + O}\\ \noalign{\vglue 5pt} \hline % \noalign{\vglue 5pt} \mbox{net} : \ce{H2O &->[2 h\nu] 2H + O}. \end{aligned} \label{re:path-ch4-co-S} \end{eqnarray} When sulfur is present, SO and \ce{SO2} dissociate more readily than \ce{CO2} around Ly-$\alpha$ and provide a faster channel to liberate O from \ce{H2O}: \begin{eqnarray} \begin{aligned} \ce{H2O &->[h\nu] OH + H}\\ \ce{SO + OH &-> SO2 + H}\\ \ce{SO2 &->[h\nu] SO + O}\\ \noalign{\vglue 5pt} \hline % \noalign{\vglue 5pt} \mbox{net} : \ce{H2O &->[2 h\nu] 2H + O}. \end{aligned} \end{eqnarray} or \begin{eqnarray} \begin{aligned} \ce{H2O &->[h\nu] OH + H}\\ \ce{S + OH &-> SO + H}\\ \ce{SO &->[h\nu] S + O}\\ \noalign{\vglue 5pt} \hline % \noalign{\vglue 5pt} \mbox{net} : \ce{H2O &->[2 h\nu] 2H + O}. \end{aligned} \end{eqnarray} The excess atomic O readily reacts with OH to form \ce{O2}. This more oxidizing region, along with NS, accelerates the oxidation of \ce{NH3} via the same pathway (\ref{path:NH3-S}), but more pronounced than on GJ 436b. On the other hand, \ce{CH4} is unaffected because the intermediates HCS and CS are deficient in the colder stratosphere of 51 Eri b. Lastly, the coupling to \ce{H2S} also helps atomic H recycle back to \ce{H2} faster, as seen on GJ 436b.
\begin{figure}[htp] \begin{center} \includegraphics[width=\columnwidth]{fig/rate_COS} \end{center} \caption{The low-pressure limit rate coefficient of \ce{S + CO ->[\textrm{M}] OCS} estimated in this work (\ref{COS_rate}), compared to those in the literature.} \label{fig:rate_COS} \end{figure} \begin{figure}[htp] \includegraphics[width=\columnwidth]{fig/COS-rev} \caption{Main sulfur species from our nominal model with solar metallicity (solid) compared to those adopting the faster rate of (\ref{Re_OCS}) from \cite{Zahnle2016} (dashed).} \label{fig:sen-rev} \end{figure} \subsubsection{Sensitivity to OCS Recombination} The fate of elemental S after being released from \ce{H2S} is critical in sulfur kinetics. Several reactions potentially control whether S proceeds through chain formation into larger polysulfur species (\ce{S_x}), forms OCS, or is oxidized to SO and \ce{SO2}. To address the effects of kinetic uncertainties, \cite{Zahnle2016} explore the sensitivity to \ce{H2S} recombination and S$_x$ polymerization. The authors find that a faster \ce{H2S} recombination counteracts the destruction of \ce{H2S} and reduces the production of \ce{S8}, while their results are not sensitive to the rates of polymerization into \ce{S4} and \ce{S8} within the tested ranges. We have tested the polymerization rates for GJ 436b and confirmed that \ce{S8} formation is generally insensitive to them. For 51 Eri b, we recognize that the recombination of OCS \begin{equation} \ce{S + CO ->[\textrm{M}] OCS} \label{Re_OCS} \end{equation} could be important in determining the oxidation rate of sulfur. The rate coefficient of Reaction (\ref{Re_OCS}) has in fact not been measured. Only the reverse step of (\ref{Re_OCS}), the dissociation of OCS, has available data at high temperatures. Recently, \cite{Ranjan20} have also identified this reaction as modestly altering the CO abundance in a \ce{CO2}-\ce{N2} atmosphere and advocate laboratory investigation.
Here, we explain how the rate coefficient of Reaction (\ref{Re_OCS}) is estimated in our nominal model and then explore the sensitivity to its uncertainty for 51 Eri b. Reaction (\ref{Re_OCS}) is spin-forbidden and usually many orders of magnitude slower than a typical three-body reaction. Since the measured high-temperature dissociation reaction has a high activation energy, extrapolating the dissociation rate (the reversal of (\ref{Re_OCS})) to low temperatures would result in unrealistic rates. Instead, we estimate the activation energy from the well-studied analogous reaction, \ce{O + CO ->[\textrm{M}] CO2}. The pre-exponential factor is then determined by matching the reverse of the dissociation rate at 2000 K from \cite{Oya1994}. The low-pressure limit rate of (\ref{Re_OCS}) we estimate is \begin{equation}\label{COS_rate} k_\textrm{0} = 4.47 \times 10^{-34} \textrm{exp}(-1510/T). \end{equation} We compare the rate coefficient (\ref{COS_rate}) with those assumed in \cite{Zahnle2016} and in the Venus literature \citep{Zhang2012,Krasnopolsky2013} in Figure \ref{fig:rate_COS}. The rate coefficients diverge especially toward lower temperatures, the temperature range relevant to the stratosphere of 51 Eri b. Despite the discrepancies among the models, the rate constants in \cite{Zhang2012}, \cite{Krasnopolsky2013}, and this work exhibit consistent temperature dependence from 1000 K to 200 K, whereas that in \cite{Zahnle2016} surprisingly has almost no temperature dependence. Since the rate constant of (\ref{Re_OCS}) from \cite{Zahnle2016} is the most different from the literature and yields the fastest OCS formation rate, we use it as the upper limit to test the sensitivity. We run our nominal model with solar metallicity but adopt the rate constant of (\ref{Re_OCS}) from \cite{Zahnle2016}. The effects of faster OCS recombination are illustrated in Figure \ref{fig:sen-rev}.
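As a quick numerical illustration, the estimated Arrhenius expression (\ref{COS_rate}) can be evaluated at the anchoring temperature and at stratospheric temperatures. The customary three-body units (cm$^6$ s$^{-1}$) are assumed here; they are not stated explicitly in the text.

```python
import numpy as np

# Evaluate the estimated low-pressure-limit rate coefficient of
# S + CO + M -> OCS + M:  k0 = 4.47e-34 * exp(-1510/T)  (Eq. COS_rate).
def k0_ocs(T):
    """Low-pressure-limit rate coefficient (assumed units: cm^6 s^-1)."""
    return 4.47e-34 * np.exp(-1510.0 / T)

k_hot = k0_ocs(2000.0)    # temperature where the rate is anchored to Oya et al.
k_strat = k0_ocs(300.0)   # stratospheric temperatures relevant to 51 Eri b
```

The two-orders-of-magnitude drop between 2000 K and 300 K illustrates why the near-temperature-independent rate of \cite{Zahnle2016} yields much faster OCS formation in a cold stratosphere.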
With the OCS recombination rate from \cite{Zahnle2016}, the OCS mixing ratio is significantly increased above 0.1 bar. \ce{S8} is slightly reduced but remains the major sulfur carrier between 10$^{-2}$ and 10$^{-4}$ bar, consistent with the model results in \cite{Zahnle2016}. The abundances of S and \ce{S2} are subsequently affected by the more abundant OCS photodissociation, but that of \ce{S8} remains the same, as set by condensation. Given these differences, we reiterate the need for further investigation to pin down the reaction rate of OCS recombination. \begin{figure}[!ht] \includegraphics[width=\columnwidth]{fig/51Eri-emission-solar.pdf} \includegraphics[width=\columnwidth]{fig/51Eri-emission-10Xsolar.pdf} \caption{Synthetic emission spectra of 51 Eri b produced from equilibrium abundances, disequilibrium abundances, and with an \ce{S8} condensate layer. Data points show GPI observations from \cite{Macintosh2015} and SPHERE observations from \cite{Samland2017}.} \label{fig:51Eri-emission} \end{figure} \begin{figure}[htp] \includegraphics[width=\columnwidth]{fig/S-diagram} \caption{A schematic diagram illustrating the main pathways for sulfur kinetics in an \ce{H2}-dominated atmosphere. The dashed line represents the transition from the lower region, where sulfur is predominantly locked in \ce{H2S}, to the upper region, where \ce{H2S} is subject to dissociation. Rectangles indicate stable species, whereas ellipses indicate active radical or intermediate species.} \label{fig:S-diagram} \end{figure} \subsubsection{Emission Spectra} Figure \ref{fig:51Eri-emission} demonstrates the effects of disequilibrium chemistry and \ce{S8} clouds on the planetary emission spectra. For both metallicities, quenched \ce{CH4} and \ce{H2O} have lower abundances than at equilibrium, leading to higher emission in the H and J bands from the deeper region. The 10 times solar metallicity model further reduces \ce{CH4} and enhances the flux at 1.6 -- 1.8 $\mu$m.
We assume a particle size of 1 $\mu$m for the \ce{S8} condensates, which scatter strongly and reduce the emission in this wavelength range. However, using the higher effective temperature and metallicity from \cite{Samland2017}, our models generate emission that is too high in the H and J bands and fail to reproduce the observed spectra. We conclude that either $T_{\textrm{eff}}$ is lower than that determined by \cite{Samland2017} and/or additional cloud layers \citep[e.g.][]{Moses2016} are required to match the lower observed emission. \subsection{Sulfur Mechanism} Figure \ref{fig:S-diagram} summarizes the important pathways for sulfur species in the irradiated \ce{H2}-dominated atmospheres we explored in this section. \ce{H2S} is the dominant sulfur molecule, thermochemically stable over a wide range of temperatures in the lower atmosphere, followed by OCS. The photochemistry of sulfur is initiated by the SH and S produced from \ce{H2S} dissociation, leading to multiple channels, including chain formation and oxidation, depending on the atmospheric conditions. Sulfur chain formation is highly temperature sensitive: \ce{S2} is favored at about 600 -- 800 K, and \ce{S8} can only form below $\sim$ 500 K (e.g., in the stratosphere of 51 Eri b). When OH is sufficiently produced by \ce{H2O} photolysis, S is most likely oxidized into SO and \ce{SO2} in the upper atmosphere. S also participates in accelerating \ce{CH4} and \ce{NH3} destruction via coupling to \ce{CH2} or \ce{NH2}, as seen in our HD 189733b and GJ 436b models. \begin{figure*}[htp] \includegraphics[width=\columnwidth]{fig/haze_bar-S} \includegraphics[width=\columnwidth]{fig/haze_bar-noS} \caption{The column number densities (molecules cm$^{-2}$) above 1 mbar of haze precursors for the simulated atmospheres in Section \ref{case}, including sulfur (left) and without sulfur (right). Some molecular abundances are negligible and not shown for WASP-33b. For GJ 436b, the models with T$_{\textrm{int}}$ = 400 K and weak/strong vertical mixing are used.} \label{fig:haze-bar} \end{figure*} \subsection{Trends of Photochemical Haze Precursors} Figure \ref{fig:haze-bar} summarizes the column densities of haze precursors above 1 mbar for the simulated atmospheres in Section \ref{case}.
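As a rough consistency check on the magnitude of such columns, a uniformly mixed trace species above a pressure level $P$ has a hydrostatic column of $f\,P/(\bar{m}\,g)$. The mean molecular mass and gravity below are illustrative \ce{H2}-atmosphere values, not parameters taken from any of the models.

```python
# Hydrostatic column estimate: the total number column above pressure P is
# N_tot = P / (mu * g), so a species with uniform mixing ratio f contributes
# f * N_tot. All in cgs units; mu and g are illustrative assumptions.
amu = 1.66053906660e-24  # g

def species_column(f, P_cgs, mu=2.3 * amu, g=1.0e3):
    """Column number density (molecules cm^-2) above pressure P (dyn cm^-2)."""
    return f * P_cgs / (mu * g)

# Example: 1 ppb of a precursor above 1 mbar (= 1e3 dyn cm^-2)
col = species_column(1e-9, 1e3)
```

The result, a few $\times$ 10$^{14}$ cm$^{-2}$ per ppb, is a useful yardstick for reading the precursor columns in Figure \ref{fig:haze-bar}.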
Across the various irradiated \ce{H2}-atmospheres we explored, we find HCN to be consistently the most prevalent precursor. This is not surprising, as HCN is a robust photochemical product of \ce{CH4} and \ce{NH3} and has also recently been detected on HD 209458b \citep{Giacobbe2021}. Nonetheless, it does not necessarily imply that HCN will lead to complex nitrile formation, since HCN is not the limiting factor, as discussed in Section \ref{sec:haze}. A more careful assessment at high temperatures is required before extrapolating the haze-forming mechanism operating below 200 K on Titan. We observe a general increasing trend with decreasing temperature for the more indicative nitrile precursor \ce{HC3N} but the opposite for \ce{CH3CN}. The same trend is seen for the hydrocarbon precursors \ce{C4H2} and \ce{C6H6}. Only HCN and \ce{C2H2} can reach appreciable levels on WASP-33b, although photochemical hazes are not expected there. For GJ 436b, most of the precursors are not very sensitive to eddy diffusion. For 51 Eri b, almost all precursors are reduced with increasing metallicity, except for \ce{CS2}, since it contains no H. In fact, \ce{CS2} is most favored on HD 189733b, which suggests sulfur-containing hazes under hot-Jupiter conditions, as carbon and sulfur can couple closely. In addition to sulfur condensates, 51 Eri b might also be covered by nitrile-type hazes according to the precursor distribution. \section{Discussion}\label{discussion} \subsection{High-Temperature UV Cross Sections} We have implemented layer-by-layer UV cross sections according to the temperature at each atmospheric level in VULCAN. Due to the sparsely available data, we did not perform a systematic study in this work. Nonetheless, we have gained some insights through the case studies in Section \ref{case}. We find that the effects of the temperature dependence for \ce{H2O} are mostly negligible in an \ce{H2}-dominated atmosphere.
However, this is solely based on the limited wavelength-range measurements we assembled. For the high-temperature ($T >$ 1000 K) cross sections, only wavelengths longer than about 190 nm are included (Figure \ref{fig:cross_T}). The high-temperature cross sections in the FUV could have larger effects. We confirm the analysis in \cite{Venot2013a} that although the \ce{CO2} abundance is not directly influenced by the temperature dependence of \ce{CO2} photolysis, the shielding effects can impact other species. As \ce{CO2} absorbs more strongly with increasing temperature, the UV photosphere is lifted to lower pressure. The production of radicals, such as H and OH, is reduced and subsequently alters other species. However, we also find that the shielding effects of \ce{CO2} are completely overshadowed when sulfur species are included (e.g., see the right panel of Figure \ref{fig:HD189-S-noS}). The temperature dependence of \ce{CO2} photolysis should be more pronounced in \ce{CO2}-dominated atmospheres. \subsection{Implications of Ionization} Ions are not included in this work; we are working on including ion chemistry in the next update of VULCAN. Photoionization is known to be critical in initiating haze formation \citep{Wong2003,Krasnopolsky2009,Plessis2012,Lavvas2013}. Even thermal ionization can be important for ultra-hot Jupiters. In our study of WASP-33b, atomic Ti and V in the upper atmosphere are expected to be partly ionized and contribute free electrons. Since Ti has an ionization threshold of 180 nm, compared to about 240 nm for Na, the effects of photoionization on Ti and V should be similar to and probably smaller than those on the alkali atoms, as investigated in \cite{Lavvas2017}. An important outcome of photoionization is that the increased electron abundance can lead to more hydrogen anions (H$^-$) than predicted by thermal equilibrium, which are found to be important opacity sources in some hot Jupiter atmospheres \citep{Lewis2020}.
In terms of sulfur chemistry, several sulfur species have relatively low ionization thresholds and can be subject to photoionization. For example, atomic S starts to ionize near Lyman-$\alpha$. Since S is likely the dominant sulfur species in a hot Jupiter's stratosphere (Section \ref{sec:HD189-S}), S can be photoionized and branch into various organic molecules through ion-exchange reactions. \subsection{More Intriguing Questions about Sulfur} In Section \ref{case}, we find that the coupling to sulfur chemistry impacts the core C-H-N-O kinetics in several ways for \ce{H2}-dominated atmospheres. The coupling effects essentially depend on whether the sulfur-containing intermediates are active, which is not well understood in general, as it can vary with atmospheric conditions such as temperature and bulk composition. \cite{Gersen2017} find that \ce{CH3S} and \ce{CH3SH} provide more efficient pathways for methane oxidation in a combustion (oxidizing) environment. \cite{He2020S} also observe the photochemical formation of \ce{CH3S} and \ce{CH3SH} in a \ce{CO2}-rich gas mixture in their experiments. Although we have included reactions involving \ce{CH3S} and \ce{CH3SH} in our sulfur mechanism, they are not identified as important in the pathway analysis for any of the \ce{H2}-atmospheres we investigated. The chemical role of \ce{CH3SH} is worth further study in the broad context of biologically produced sulfur. Throughout this work, the temperature profiles are fixed, without considering radiative feedback. The radiative effects might be more prominent in the presence of sulfur, such as the absorption of SH and \ce{S2} in the optical and NUV. 51 Eri b and other directly imaged planets with a relatively cold stratosphere ($\lesssim$ 500 K) under sufficient UV irradiation sit in the sweet spot for testing the radiative feedback of sulfur condensates.
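The wavelength thresholds quoted in this subsection follow from the standard photon energy conversion $\lambda\,[\mathrm{nm}] \approx 1239.84/E\,[\mathrm{eV}]$. A minimal check for atomic S, whose ionization energy of about 10.36 eV is a literature value assumed here:

```python
# Convert an ionization energy (eV) to the threshold photon wavelength (nm):
# lambda = hc / E, with hc = 1239.84 eV nm.
def threshold_nm(E_eV):
    return 1239.84 / E_eV

# Atomic S (ionization energy ~10.36 eV, NIST value assumed) has a threshold
# near 120 nm, just shortward of Lyman-alpha at 121.6 nm.
lam_S = threshold_nm(10.36)
```

This is why S photoionization is tied to the strong stellar flux around Lyman-$\alpha$.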
\section{Summary}\label{sec:summary} In this paper, we present a thorough theoretical framework of the updated photochemical code VULCAN. We validate our models for the atmospheres of hot Jupiters, Jupiter, and the modern Earth and carry out a comparative study on representative cases of extrasolar giant planets: WASP-33b, HD 189733b, GJ 436b, and 51 Eridani b. The highlights of our results are: \renewcommand\labelitemi{\tiny$\bullet$} \begin{itemize} \item We have carefully validated the model of HD 189733b. The updated methanol scheme in \cite{Venot2020} is found to bring the quenching behavior of methane close to that in \cite{Moses11} and VULCAN. We point out that the photochemical source plays a non-trivial part in the model differences between \cite{Moses11}, \cite{Venot12}, and VULCAN. \item We demonstrate that advective transport in the downdraft can qualitatively explain the deep ammonia distribution in Jupiter, which cannot be explained by eddy diffusion alone. \item The implementation of surface boundary conditions and condensation in an oxygen-rich atmosphere is validated in the present-day Earth model. A general oxidation timescale analysis is provided for assessing the chemical lifetime of biosignature gases. \item The atmosphere of WASP-33b is not affected by vertical quenching but consists of an upper photolytic region and a thermochemical equilibrium region below. For GJ 436b, we find \ce{NH3} insensitive to vertical mixing, and the sulfur species governed by photolysis and mixing in the upper stratosphere are independent of the deep thermal structure, which can be complementary to the \ce{CH4}/CO metric for breaking degeneracies. The quenched CO always predominates over \ce{CH4} on 51 Eri b, and sulfur aerosols (chiefly \ce{S8}) condense out in the stratosphere. \item We find that the coupling to sulfur chemistry impacts C-H-N-O kinetics. Sulfur can provide catalytic paths to destroy \ce{CH4} and \ce{NH3} and generally lowers the hydrocarbon abundances.
\ce{H2S} helps H recycle back to \ce{H2} faster on the cooler GJ 436b and 51 Eri b. The dissociation of SO and \ce{SO2} also makes the upper atmosphere of 51 Eri b more oxidizing. \item We suggest including several photochemical haze precursors, such as \ce{C6H6} and \ce{HC3N}, which are more indicative than the commonly considered HCN and \ce{C2H2}. We observe a general increasing trend with decreasing temperature for \ce{C4H2}, \ce{C6H6}, and \ce{HC3N} but the opposite for \ce{CH3CN}. \end{itemize} \section*{Model availability} The results in this work are produced by version 2.0 of VULCAN (\url{https://github.com/exoclime/VULCAN/releases/tag/v2.0}). In addition to the public code, the configuration files used for the models in Section \ref{validation} are available on \url{https://github.com/exoclime/VULCAN}, and the main model output in Section \ref{validation} and Section \ref{case} can be found in the supplementary material. Software: Python; Numpy \citep{numpy}; Scipy \citep{scipy}; Matplotlib \citep{matplotlib} \acknowledgments S.-M.T gratefully thanks M. Zhang for customizing PLATON to read non-equilibrium compositions. S.-M.T also thanks O. Venot and J. Moses for sharing the output of HD 189733b for model comparison, C. Li for providing the retrieved ammonia results from Juno measurements, L.M. Lara for fruitful discussions about setting up photochemistry, P. Rimmer for the compiled observational data of Jupiter, and N. Wogan for pointing out a typo in Equation (14) in an earlier version of this paper. S.-M.T acknowledges support from the PlanetS National Center of Competence in Research (NCCR) and the University of Oxford. M.M. acknowledges support from NASA under the XRP grant No. 18-2XRP18\_2-0076. E.K.L. acknowledges support from the University of Oxford and CSH Bern through the Bernoulli fellowship. K.H.
acknowledges support from the PlanetS National Center of Competence in Research (NCCR) of the Swiss National Science Foundation and the European Research Council Consolidator Grant EXOKLEIN (No. 771620). This work was supported by the European Research Council Advanced Grant EXOCONDENSE (\#740963; PI: R.T. Pierrehumbert).
\section*{Supplementary information} \setcounter{equation}{0} \setcounter{figure}{0} \setcounter{table}{0} \makeatletter \renewcommand{\theequation}{S\arabic{equation}} \renewcommand{\thefigure}{S\arabic{figure}} \section*{S1. STM tip-superconductor junctions} \subsection{Equations} The six equations which relate the scattering amplitudes at each node, see Fig.\ \ref{fig:1}, for both the electron and hole parts of the wavefunction, are: \begin{align} \epsilon ( 1 + R_N ) &= t_{tip} ( e_N^* + R_N e_N ) + t_{tip,K} \left( \frac{T_{K,1}}{N_{K,1}} + \frac{T_{K,2}}{N_{K,2}} \right) + t_{tip,K'} \left( \frac{T_{K',1}}{N_{K',1}} + \frac{T_{K',2}}{N_{K',2}} \right) \nonumber \\ \epsilon \left( \frac{T_{K,1}}{N_{K,1}} + \frac{T_{K,2}}{N_{K,2}} \right) &= t^*_{tip,K} ( 1 + R_N ) + t_{sc,K} \left( \frac{T_{K,1}}{N_{K,1}} e_{K,1} + \frac{T_{K,2}}{N_{K,2}} e_{K,2} \right) +t_{K,K'} \left( \frac{T_{K',1}}{N_{K',1}} + \frac{T_{K',2}}{N_{K',2}} \right) \nonumber \\ \epsilon \left( \frac{T_{K',1}}{N_{K',1}} + \frac{T_{K',2}}{N_{K',2}} \right) &= t^*_{tip,K'} ( 1 + R_N ) + t_{sc,K'} \left( \frac{T_{K',1}}{N_{K',1}} e_{K',1} + \frac{T_{K',2}}{N_{K',2}} e_{K',2} \right)+ t_{K,K'}^* \left( \frac{T_{K,1}}{N_{K,1}} + \frac{T_{K,2}}{N_{K,2}} \right) \nonumber \\ \epsilon R_A &= - t_{tip} R_A e_N^* - t_{tip,K} \left( \frac{T_{K,1} A_{K,1}}{N_{K,1}} + \frac{T_{K,2} A_{K,2}}{N_{K,2}} \right) - t_{tip,K'} \left( \frac{T_{K',1} A_{K',1}}{N_{K',1}} + \frac{T_{K',2} A_{K',2}}{N_{K',2}} \right) \nonumber \\ \epsilon \left( \frac{T_{K,1} A_{K,1}}{N_{K,1}} + \frac{T_{K,2} A_{K,2}}{N_{K,2}} \right) &= - t_{tip,K}^* R_A - t_{sc,K} \left( \frac{T_{K,1} A_{K,1}}{N_{K,1}} e_{K,1} + \frac{T_{K,2} A_{K,2}}{N_{K,2}} e_{K,2} \right) \nonumber \\ &- t_{K,K'} \left( \frac{T_{K',1} A_{K',1}}{N_{K',1}} + \frac{T_{K',2} A_{K',2}}{N_{K',2}} \right) \nonumber \\ \epsilon \left( \frac{T_{K',1} A_{K',1}}{N_{K',1}} + \frac{T_{K',2} A_{K',2}}{N_{K',2}} \right) &= - t_{tip,K'}^* R_A - t_{sc,K'} \left( \frac{T_{K',1} 
A_{K',1}}{N_{K',1}} e_{K',1} + \frac{T_{K',2} A_{K',2}}{N_{K',2}} e_{K',2} \right) \nonumber \\ &- t_{K,K'}^* \left( \frac{T_{K,1} A_{K,1}}{N_{K,1}} + \frac{T_{K,2} A_{K,2}}{N_{K,2}} \right) \label{equations} \end{align} where $R_N , R_A, T_{K,1}, T_{K,2} , T_{K'_1} , T_{K',2}$ are the normal reflection coefficient, the Andreev reflection coefficient (associated to the tip electron and hole channels, respectively), and the transmission coefficients for the electron and hole channels in each valley, $\epsilon$ is the energy, and: \begin{align} e_N &= - \frac{\epsilon}{2 t_{tip}} + i \sqrt{1 - \frac{\epsilon^2}{4 t_{tip}^2}} \nonumber \\ e_{K,1} &= \left\{ \begin{array}{cc} - i \sqrt{1 - \frac{\Delta_K^2 - \epsilon^2}{4 t_{sc,K}^2}} + i \sqrt{\frac{\Delta_K^2 - \epsilon^2}{4 t_{sc,K}^2}} &| \epsilon | \le \Delta _K \\ - i \sqrt{1 + \frac{\epsilon^2 - \Delta_K^2}{4 t_{sc,K}^2}} + \sqrt{\frac{\epsilon^2 - \Delta_K^2}{4 t_{sc,K}^2}} &| \epsilon | > \Delta _K \end{array} \right. & e_{K,2} &= \left\{ \begin{array}{cc} i \sqrt{1 - \frac{\Delta_K^2 - \epsilon^2}{4 t_{sc,K}^2}} - i \sqrt{\frac{\Delta_K^2 - \epsilon^2}{4 t_{sc,K}^2}} &| \epsilon | \le \Delta _K \\ i \sqrt{1 - \frac{\epsilon^2 - \Delta_K^2}{4 t_{sc,K}^2}} + \sqrt{\frac{\epsilon^2 - \Delta_K^2}{4 t_{sc,K}^2}} &| \epsilon | > \Delta_K \end{array} \right. \nonumber \\ A_{K,1} &= \left\{ \begin{array}{cc} \frac{- \epsilon - i \sqrt{\Delta_{K}^2-\epsilon^2}}{\Delta_{K}} & |\epsilon | \le \Delta _K \\ \frac{- \epsilon - \sqrt{\epsilon^2 - \Delta_{K}^2}}{\Delta_{K}} & |\epsilon | > \Delta _K \end{array} \right. & A_{K,2} &= \left\{ \begin{array}{cc} \frac{- \epsilon + i \sqrt{\Delta_{K}^2-\epsilon^2}}{\Delta_{K}} & |\epsilon | \le \Delta _K \\ \frac{- \epsilon + \sqrt{\epsilon^2 - \Delta_{K}^2}}{\Delta_{K}} & |\epsilon | > \Delta _K \end{array} \right. \nonumber \\ N_{K,1} &= \sqrt{1 + | A_{K,1} |^2} & N_{K,2} &= \sqrt{1 + | A_{K,2} |^2} \end{align} and similar expressions for $K'$. 
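As a numerical sanity check on the amplitudes defined above, note that the Andreev coefficients $A_{K,1}$ and $A_{K,2}$ have unit modulus inside the gap and satisfy $A_{K,1} A_{K,2} = 1$ in both regimes (they are the two roots of the same quadratic). A short sketch, assuming $\epsilon$ and $\Delta_K$ real:

```python
import math

# Andreev amplitudes A_{K,1}, A_{K,2} for gap Delta at energy eps, following
# the subgap (|eps| <= Delta) and above-gap (|eps| > Delta) branches above.
def A_pair(eps, Delta):
    if abs(eps) <= Delta:
        s = math.sqrt(Delta**2 - eps**2)
        return complex(-eps, -s) / Delta, complex(-eps, s) / Delta
    s = math.sqrt(eps**2 - Delta**2)
    return (-eps - s) / Delta, (-eps + s) / Delta

A1, A2 = A_pair(0.3, 1.0)   # subgap: |A1| = |A2| = 1, A1*A2 = 1
```

Above the gap, one amplitude decays and the other grows, but their product remains 1, consistent with $A_{K,1} A_{K,2} = (\epsilon^2 - (\epsilon^2 - \Delta_K^2))/\Delta_K^2$.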
The model can be extended to intravalley superconducting gaps with angular dependence, like the cases discussed in Ref.\ \cite{sukhachov2022andreev}.\ Each individual scattering angle can be treated as an independent channel.\ Reflection and transmission coefficients need to be defined for each angle, and the total currents are then given by integrals over all angles. \subsection{Subgap Andreev conductance} The enhancement of the conductance for voltages below the superconducting gap discussed in \cite{blonder1982transition} arises from processes where an incoming electron is reflected as a hole, or vice versa.\ In an $s$-wave superconductor, and in the limit of perfect transmission, these processes lead to a conductance which is twice the normal-state conductance, as shown in the BTK theory \cite{blonder1982transition}.\ In an $s$-wave superconductor, an electron is injected into the superconductor as a coherent sum of even combinations of plane waves with momenta $\vec{k}$ and $- \vec{k}$, because, due to time-reversal symmetry, the normal-superconductor hopping elements satisfy $t_{tip,\vec{k}} = t_{tip,-\vec{k}}$.\ This even combination couples to another even combination of hole states, which can move back into the normal electrode.\ In a non-$s$-wave superconductor, the superconducting gap changes sign as a function of momentum.\ If there are pairs of momenta such that $\Delta_{\vec{k}} = - \Delta_{\vec{k'}}$ and $t_{tip,\vec{k}} = t_{tip,\vec{k'}}$, the amplitudes of the injected electron will be equal for $\vec{k}$ and $\vec{k'}$, but, inside the superconductor, it will be coupled to holes with amplitudes of opposite signs.\ Such a hole state cannot tunnel back into the normal electrode, and the subgap Andreev conductance will be fully suppressed.\ Local tunneling processes, as expected in an STM experiment, imply momentum-independent hopping elements, $t_{tip,\vec{k}} = t$.\ Hence, we can expect that the subgap Andreev conductance will be suppressed when a normal tip is 
coupled to generic $p$- and $d$-wave superconductors \cite{sukhachov2022andreev}, and also in the case of the $f$-wave, two-valley superconductor considered here.\ The situation changes when there is intervalley scattering; see below. \subsection{Tip-induced Andreev states} We can understand the formation of subgap Andreev states induced by tip-mediated intervalley elastic scattering using the simple model shown in Fig.\ \ref{fig:1}.\ The system reduces to a simple tight-binding model: \begin{align} {\cal H} &= {\cal H}_{sc1} + {\cal H}_{sc2} + {\cal H}_{tip} \nonumber \\ {\cal H}_{sc1} &= t_{sc1} \sum_{n=-\infty}^0 \left( c_{e,n}^\dagger c_{e,n-1} - c_{h,n}^\dagger c_{h,n-1} \right) + \Delta_{sc1} \sum_{n=-\infty}^0 c_{e,n}^\dagger c_{h,n} + h. c. \nonumber \\ {\cal H}_{sc2} &= t_{sc2} \sum_{n=1}^{\infty} \left( c_{e,n}^\dagger c_{e,n+1} - c_{h,n}^\dagger c_{h,n+1} \right) + \Delta_{sc2} \sum_{n=1}^{\infty} c_{e,n}^\dagger c_{h,n} + h. c. \nonumber \\ {\cal H}_{tip} &= t_{K,K'} \left( c_{e,0}^\dagger c_{e,1} - c_{h,0}^\dagger c_{h,1} \right) + h. c. 
\end{align} The Green's function at sites $n=0$ and $n=1$ of the system can be obtained from the Green's functions at the same sites in the absence of intervalley coupling: \begin{align} \left( \begin{array}{cc} G_{0,0} ( \omega ) &G_{0,1} ( \omega ) \\ G_{1,0} ( \omega ) &G_{1,1} ( \omega ) \end{array} \right) &= \left( \begin{array}{cc} \bar{G}_{0,0}^{-1} ( \omega ) &t_{K,K'} {\cal I}_2 \\ t_{K,K'} {\cal I}_2 &\bar{G}_{1,1}^{-1} ( \omega ) \end{array} \right)^{-1} \end{align} where $\bar{G}_{0,0} ( \omega )$ and $\bar{G}_{1,1} ( \omega )$ are surface Green's functions associated with ${\cal H}_{sc1}$ and ${\cal H}_{sc2}$, and ${\cal I}_2$ is the $2 \times 2$ identity matrix.\ By changing to a basis defined by $c_{e,n} \pm c_{h,n}$, these matrix functions are: \begin{align} \bar{G}_{0,0}^{-1} ( \omega ) &= \left( \begin{array}{cc} \frac{\omega - \Delta_{sc1}}{2} + \frac{\sqrt{( \omega^2 - \Delta_{sc1}^2) ( \omega^2 - \Delta_{sc1}^2 - 4 t_{sc1}^2)}}{2 ( \omega + \Delta_{sc1} )} &0\\ 0 &\frac{\omega + \Delta_{sc1}}{2} + \frac{\sqrt{( \omega^2 - \Delta_{sc1}^2) ( \omega^2 - \Delta_{sc1}^2 - 4 t_{sc1}^2)}}{2 ( \omega - \Delta_{sc1} )} \end{array} \right) \end{align} and an equivalent expression for $\bar{G}_{1,1}^{-1} ( \omega )$.\ Finally, for $t_{sc1} = t_{sc2} = t$ and $\Delta_{sc1} = \Delta_{sc2} = \Delta$, the Andreev states are defined by the equations: \begin{align} \frac{\omega \mp \Delta}{2} + \frac{\sqrt{( \omega^2 - \Delta^2) ( \omega^2 - \Delta^2 - 4 t^2)}}{2 ( \omega \pm \Delta )} &= \pm t_{K,K'} \end{align} For $t_{K,K'} \ll \Delta , t$ this equation gives Andreev states near the edge of the superconducting gap, $\omega = \pm \Delta$, and for $t_{K,K'} = t$ the Andreev states move to the center of the gap, $\omega = 0$.\ The parameter $t$ describes a high-energy cutoff of the order of the bandwidth.\ For TBG near a magic angle, it is reasonable to expect that the perturbation due to the tip in the contact regime is such that $t_{K,K'} \gtrsim t$, so that, in an 
$f$-wave superconductor, Andreev states near the center of the gap will exist. \subsection{Weak coupling conductance} \begin{figure}[h] \centering{\includegraphics[width=16cm]{FigureS1.png}}% \caption{Normal and Andreev reflection and total conductance in an STM tip-superconducting TBG junction in the weak coupling regime.\ The parameters used are $t_{tip} = 10, \, t_{sc,K} = t_{sc,K'} = 1, \, t_{tip,K} = t_{tip,K'} = 1 / \sqrt{2}, \, t_{K,K'} = 0.2, \, \Delta_K = 0.05 , \, \Delta_{K'} = \pm 0.05$.\label{fig:s1}} \end{figure} We show in Fig.\ \ref{fig:s1} results in the regime where the normal transmission of the junction is small.\ The intervalley coupling has also been reduced.\ The Andreev reflection for the $s$-wave superconductor is notably suppressed.\ On the other hand, the intervalley coupling still induces subgap states near the edges of the gap of the $f$-wave superconductor.\ As a result, Andreev reflection persists, and leads to a peak in the junction conductance.\ Still, the conductances above and below the gap are similar for $s$-wave and $f$-wave pairings.\ Therefore, in this weak coupling regime, it would be difficult for an experiment to distinguish the two pairings. \section*{S2. Josephson junctions} \subsection{Details of the model} The junction lattice is built following the same procedure as in Ref.\ \cite{sainzcruz21high}.\ To avoid edge transport, we impose periodic boundary conditions from top to bottom, which leads to a folding of the Brillouin zone.\ The folded bandstructure has more than two flat bands, e.g.\ four in Fig.\ \ref{fig:1}(a).\ Following the notation of \cite{sainzcruz21high}, we build a TBG nanotube with chiral vectors (44,2)@(-44,-2), which has a twist angle of 4.41$^{\circ}$ and 2704 sites in its unit cell. 
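The quoted twist angle can be recovered from the wrapping vectors (a sketch; the identification of the twist with twice the chiral angle of the (44,2) vector is our reading of the (44,2)@(-44,-2) construction, not stated explicitly in the text):

```python
import math

def chiral_angle_deg(n, m):
    """Chiral angle of an (n, m) wrapping vector on the honeycomb lattice, in degrees."""
    return math.degrees(math.atan2(math.sqrt(3) * m, 2 * n + m))

# For the (44,2)@(-44,-2) TBG nanotube, the relative twist between the two
# walls corresponds to twice the chiral angle of the (44,2) wrapping vector.
twist = 2 * chiral_angle_deg(44, 2)
print(f"twist angle = {twist:.2f} deg")  # close to the quoted 4.41 deg
assert abs(twist - 4.41) < 0.01
```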
We use a tight-binding Hamiltonian given by \cite{lin2018minimum} \begin{dmath} {{\cal H}_0=-\sum_{i\neq j,m}\gamma_{ij}^{mm}(c^{\dagger}_{i,m}c_{j,m}+h.c.)}-\sum_{i, j,m}\gamma_{ij}^{m,m+1}(c^{\dagger}_{i,m}c_{j,m+1}+h.c.)+\sum_{i,m}V_H(n)c^{\dagger}_{i,m}c_{i,m}\, , \label{eq:s3} \end{dmath} where $i,j$ run over the lattice sites and $m$ is the layer index.\ ${\cal H}_0$ includes only nearest-neighbor intralayer hopping, $\gamma_{ij}^{mm}=t_{\parallel}$, and interlayer hopping that decays exponentially away from the vertical direction, $\gamma_{ij}^{m,m+1}=t_{\perp}e^{-(\sqrt{r^2+d^2}-d)/\lambda_\perp}\frac{d^2}{r^2+d^2}$, where $d=0.335$ nm is the distance between layers, $t_{\parallel}=3.09$ eV and $t_{\perp}=0.39$ eV are the intralayer and interlayer hopping amplitudes, and $\lambda_\perp=0.027$ nm is a cutoff for the interlayer hopping \cite{lin2018minimum}.\ $V_H(n)=\frac{2 \rho(n)}{\epsilon L_M}\sum^3_{i=1}\cos({G_i}\cdot{r})$, where $G_i$ are the reciprocal lattice vectors, $r$ is the position, $L_M$ the moiré period, $\epsilon=4$ the dielectric constant due to hBN encapsulation, and $\rho(n)$ a filling-dependent parameter.\ To obtain $\rho(n)$, we fit the bandstructure of the tight-binding model to the continuum model of Ref.\ \cite{moon2014optical} and perform a self-consistent calculation.\ We employ a scaling approximation, based on the fact that, within the continuum model, the bands of TBG depend, to first order, on a dimensionless parameter \cite{bistritzer2011moire}, \begin{equation} \alpha = \frac{at_{\perp}}{2\hbar v_F \sin ( \theta / 2 )} \propto \frac{t_{\perp}}{t_{\parallel}\theta}\, . 
\label{eq:s4} \end{equation} where $a$ is the lattice constant and $v_F$ is the Fermi velocity.\ Thus, a small angle $\theta$ can be simulated with a larger one, $\theta^{\prime}$, by applying the transformations $t_{\parallel}\rightarrow\frac{1}{\lambda} t_{\parallel}$, $a\rightarrow\lambda a$, $d\rightarrow\lambda d$, with $\lambda=\sin(\frac{\theta^{\prime}}{2})/\sin(\frac{\theta}{2})$ \cite{gonzalez2017,vahedi2021magnetism,sainzcruz21high}.\ This approximation reproduces well the low-energy bandstructure, as shown in Fig.\ \ref{fig:3}(a) in the main text.\ We use scaling factors $\lambda \sim$ 4. To obtain the results in Fig.\ \ref{fig:4}, we have exploited the fact that most of the critical current comes from states near the superconducting gap, $\Delta=1$ meV; indeed, we have observed that states in the window $[-2\Delta, 2\Delta]$ carry over 95\% of the current.\ Therefore, we have approximated Eq.\ \ref{eq:2} in the main text as: \begin{equation} \mathcal{I}\approx\frac{\partial}{\partial\phi}\sum_{|\epsilon_i|<2\Delta}\epsilon_i \end{equation} For the calculation of the current in the mixed junction, spin is explicitly included in the model, so the Hamiltonian in Eq.\ \ref{eq:2} is doubled.\ The $f$-wave electrode is at filling $n=2.4$ and the $s$-wave electrode at $n=-2.4$.\ The interface region interpolates smoothly between these two fillings.\ The gaps are similarly smoothed (see Fig.\ \ref{fig:s2}), in contrast to SNS and SIS junctions, for which we use hard boundary conditions. \begin{figure}[h] \centering{\includegraphics[width=6.5cm]{FigureS2.png}}% \caption{Superconducting gaps and filling at the interface between $f$-wave and $s$-wave electrodes in the mixed junction, versus position in units of the moiré period, measured from the center of the junction. 
\label{fig:s2}} \end{figure} Regarding the length of the system, we have found that 16 moiré unit cells per electrode are enough to reach converged results, except in the mixed armchair junction, where 41 cells per electrode are needed.\ The low-energy spectrum was obtained with the ARPACK library.\ The complexity was approximately $\mathcal{O}(N^2)$, with $N$ the number of sites in the system.\ To verify that the algorithm was working as intended, we first reproduced some of the results for one-dimensional chains obtained with a Green's function technique in Ref.\ \cite{perfetto2009equilibrium}. \subsection{Toy model junction} In the main text we showed that the critical current in mixed $f$-wave/$s$-wave TBG junctions has a phase periodicity of $\pi$, half that of conventional junctions.\ This phenomenon was also found in Ref.\ \cite{zazunov2012supercurrent} in one-dimensional mixed chains.\ Fig.\ \ref{fig:s3} depicts a toy model which reproduces the result:\ in a chain of atoms with spin-unpolarized Kitaev ($p$-wave) pairing \cite{kitaev2001unpaired} on one side and $s$-wave pairing on the other, the current is $\pi$-periodic.\ Note that the saw-tooth profiles are a consequence of fully ballistic transport \cite{ishii1970josephson, golubov2004current}. \begin{figure}[h] \centering{\includegraphics[width=12.5cm]{FigureS3.png}}% \caption{(a) Schematic of a mixed one-dimensional toy model junction, with Kitaev pairing on one side and $s$-wave pairing on the other.\ (b) CPR of the mixed junction, compared to SNS junctions.\ The parameters used are $t=1$, $\Delta_K=0.1$ and $\Delta_s=0.2$.\ SNS junctions have five metallic atoms in the link.\label{fig:s3}} \end{figure} \subsection{Andreev spectra} \begin{figure}[h] \centering{\includegraphics[width=16cm]{FigureS4.png}}% \caption{Subgap Andreev spectra in TBG SNS junctions, as a function of the phase, for $f$-wave and $s$-wave pairings and for electron (top row) and hole (bottom row) domes, at different twist angles. 
\label{fig:s4}} \end{figure} Figure \ref{fig:s4} shows the subgap Andreev spectra in TBG junctions in the SNS configuration, for $f$-wave and $s$-wave pairings at different fillings and twist angles.\ As stated in the main text, these states carry most of the current in SNS junctions.\ $f$-wave and $s$-wave pairings have similar spectra, except for the quasi-flat levels mentioned earlier, which are precursors of Majorana modes.\ The spectrum changes rapidly near the magic angle (compare 1.06$^{\circ}$ and 1.09$^{\circ}$).\ The fact that the critical current increases with twist angle in these junctions is seen here as the growth of the slope of the Andreev levels with angle.\ There is a marked electron-hole asymmetry. \end{document}
\section{Introduction\label{grb050406:introd}} The Swift Gamma-Ray Burst Explorer (\citealt{SWIFT}) was successfully launched on 2004 Nov 20. Its payload includes one wide-field instrument, the gamma-ray (15--350 keV) Burst Alert Telescope (BAT; \citealt{BAT}), and two narrow-field instruments, the X-Ray Telescope (XRT; \citealt{XRT2}) and the Ultraviolet/Optical Telescope (UVOT; \citealt{UVOT}). BAT detects the bursts, calculates their position to $\sim 1$--$4 \arcmin$ accuracy and triggers an autonomous slew of the observatory to point the two narrow-field instruments. The XRT, which operates in the 0.2--10\,keV energy range, can provide $\sim 5\arcsec$ positions, while the UVOT, which operates in the 1700--6000\,\AA{} wavelength range, can further refine the afterglow localization to $\sim 0\farcs5$. With its unique fast re-pointing capabilities Swift set out to investigate the very early phases of gamma-ray burst (GRB) afterglows, beginning as early as one minute after the BAT trigger. During the initial activation and calibration phases, which ended on 2005 Apr 5, Swift discovered 25 GRBs. The narrow-field instruments were re-pointed towards seven of them within a few hundred seconds, and such is the case for GRB 050406. On 2005 Apr 6 at 15:58:48.40 UT, BAT triggered on GRB 050406 (trigger 113872; \citealt{GCN3180}), and located it at RA(J2000$) = 02^{\rm h} 17^{\rm m} 53^{\rm s}$, Dec(J2000$) =-50^{\circ} 10\arcmin 52\arcsec$, with an uncertainty of 3 arcmin (95\% containment; \citealt{GCN3183}). The derived value for the time during which 90\% of the burst fluence is observed was $T_{90}=5\pm1$\,s in the 15--350 keV band. In the 15--25 keV band the light curve peak had a fast-rise, exponential decay (FRED) profile, while in the 25--50 keV band, the shape was more symmetric, with the peak starting $\sim 2$\,s earlier (\citealt{GCN3183}). 
Both the peak and time-averaged spectra were well fit by a simple power law, with a photon index of 2.38$\pm$0.34 for the time-averaged spectrum (90\% confidence; \citealt{GCN3183}). The fluence in the 15--350 keV band was $9.0 \times 10^{-8}$ erg cm$^{-2}$. The gamma-ray characteristics of this burst, i.e.\ the softness of the observed spectrum and the absence of significant emission above $\sim 50$ keV, classify GRB 050406 as an X-ray flash (XRF; \citealt{Heiseea01}). From now on, we shall therefore refer to this event as XRF 050406. Swift executed a prompt slew. The XRT imaged the BAT field only 84\,s after the trigger, but no bright X-ray source could be detected within the field of view. However, a refined on-ground analysis revealed a previously uncatalogued X-ray source (\citealt{GCN3181,GCN3184}). From the very first examination of the down-linked data it was clear that the afterglow of this burst was peculiar. Indeed, after an initial decay, the X-ray count rate began rising, peaking at $\approx 220$\,s, and subsequently decaying again (\citealt{GCN3184}). Ground-based observations started as soon as the burst discovery was reported via the GCN network. The Magellan/Clay telescope imaged the XRT error circle with LDSS-3 in the $R$ and $i$ bands and found a single faint source ($R=22.0\pm0.09$ mag, 7.8 hr after the burst) located at RA(J2000$)=02^{\rm h} 17^{\rm m} 52\fs3$, Dec(J2000$)=-50^{\circ} 11\arcmin 15\arcsec$ with an uncertainty of $\sim 0\farcs5$ in each coordinate (\citealt{GCN3185,Bergerea05b}). Similarly to XRT, UVOT also imaged the field at the end of the slew (starting from $\sim 88$ s after the trigger) and though it failed to detect the afterglow on-board (\citealt{GCN3182}), subsequent on-ground analysis revealed a source within the XRT error circle at the 4.3- (19.0 mag), 3.0- and 2.5-$\sigma$ detection levels in the $U$, $B$ and $V$ bands, respectively. 
The UVOT position was RA(J2000$)=02^{\rm h} 17^{\rm m} 52\fs2$, Dec(J2000$)=-50^{\circ} 11\arcmin 15\farcs8$, consistent with the Magellan one. By the time of the second UVOT observation, 1.3 hr later, the source was no longer detected in the $U$ band, confirming it as the afterglow of XRF 050406. \citet{Schady06} obtained an estimate of $z=2.44\pm0.36$ from fitting the broad-band spectrum (combined UVOT and XRT data). In this paper we present observations of the first Swift burst where a flare is clearly detected in its X-ray light curve, during which the source count rate increased by a factor of $\ga 6$. This feature had never been observed before in Swift data, and had rarely been observed in any X-ray afterglow (\citealt{piroea05}). This paper is organized as follows. In Sect.\ \ref{grb050406:dataredu} we describe our observations and data reduction; in Sect.\ \ref{grb050406:dataanal} we describe our spatial, timing and spectral data analysis; in Sect.\ \ref{grb050406:discussion} we discuss our findings. Finally, in Sect.\ \ref{grb050406:conclusions} we summarize our conclusions. Throughout this paper the quoted uncertainties are given at 90\% confidence level for one interesting parameter (i.e., $\Delta \chi^2 =2.71$) unless otherwise stated. Times are referred to the BAT trigger $T_0$, $t=T-T_0$. The decay and spectral indices are parameterized as follows, $F(\nu,t) \propto t^{-\alpha} \nu^{-\beta}$, where $F(\nu,t)$ (erg cm$^{-2}$ s$^{-1}$ Hz$^{-1}$) is the monochromatic flux as a function of time $t$ and frequency $\nu$; we also use $\Gamma = \beta +1$ as the photon index, $N(E) \propto E^{-\Gamma}$ (ph keV$^{-1}$ cm$^{-2}$ s$^{-1}$). \section{Observations and data reduction\label{grb050406:dataredu}} \subsection{BAT observations\label{grb050406:batobs}} Table~\ref{grb050406:tab_obs} reports the log of the observations that were used for this work. 
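The $T_{90}$ durations quoted for this burst are defined through the cumulative counts: the interval between the times at which 5\% and 95\% of the total fluence has been collected. A minimal sketch of that estimator on binned counts (synthetic data, not the actual BAT pipeline):

```python
def t90(bin_edges, counts):
    """T90 from binned counts: time between the 5% and 95% cumulative levels."""
    total = sum(counts)
    # Cumulative fraction at each bin edge (linear within a bin).
    cum = [0.0]
    for c in counts:
        cum.append(cum[-1] + c / total)

    def crossing(level):
        for i in range(len(counts)):
            if cum[i + 1] >= level:
                # Linear interpolation inside bin i.
                frac = (level - cum[i]) / (cum[i + 1] - cum[i])
                return bin_edges[i] + frac * (bin_edges[i + 1] - bin_edges[i])
        return bin_edges[-1]

    return crossing(0.95) - crossing(0.05)

# Synthetic check: a constant rate over [0, 10] s must give T90 = 9 s.
edges = [0.1 * i for i in range(101)]
rate = [5.0] * 100
assert abs(t90(edges, rate) - 9.0) < 1e-9
```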
The BAT data were analyzed using the standard BAT analysis software distributed within FTOOLS v6.0. The burst is detected in the first two standard bands (15--25 and 25--50\,keV) while virtually no flux is observed above 50 keV. We find $T_{\rm 90} = 6.1\pm 1.0$\,s in the 15--150 keV band. The BAT spectra were extracted over the full time interval over which the burst was detected ($T_{\rm tot}$), in the interval covering the 1-s peak $T_{\rm peak}$, and for the $T_{\rm 90}$ and $T_{\rm 50}$ intervals. Response matrices were generated with the task {\tt batdrmgen} using the latest spectral redistribution matrices. For our spectral fitting (XSPEC v11.3.2) we considered the 15--150 keV energy range. All spectra are well fit with a simple power law with photon index $\Gamma_\gamma \sim 2.65$ (see details in Table~\ref{grb050406:tab_specfits}). There is no evidence of a spectral break within the BAT energy range, thus constraining the peak energy $E_{\rm p}<15$\,keV. The indices are steeper (softer) although consistent with the ones reported by \citet{GCN3183}, due to the different energy ranges used for the spectral fitting. No significant improvements are found using either a cutoff power-law or a Band model (\citealt{Band}). The 1-s peak photon flux was $(2.3_{-0.4}^{+2.8})\times 10^{-8}$ erg cm$^{-2}$ s$^{-1}$ (15--350 keV band), while the fluence was $\mathcal{F} = (1.0 ^{+1.13}_{-0.36})\times 10^{-7}$ erg cm$^{-2}$ (15--350 keV band). This fluence corresponds to an isotropic-equivalent energy $E_{\rm iso} = (1.4^{+1.6}_{-0.6})\times 10^{51}$ erg (in the rest frame 52--1204 keV) assuming $z=2.44\pm0.36$ (\citealt{Schady06}). \subsection{XRT observations\label{grb050406:xrtobs}} \begin{figure} \resizebox{\hsize}{!}{\includegraphics[angle=270]{figure1.ps}} \caption{XRT image of XRF 050406, obtained from the total $\sim$163\,ks PC mode data. The field is centred on the 3$\arcmin$ radius BAT error circle. 
Also shown is the XRT 4\farcs2 error circle, as well as the Magellan (\citealt{GCN3185}) and UVOT (\citealt{GCN3186}) optical counterpart positions; the optical points are so close they cannot be distinguished on this scale. S2 is a serendipitous source located at RA(J2000$)=02^{\rm h} 17^{\rm m} 52\fs9$, Dec(J2000$)=-50^{\circ} 10\arcmin 36\farcs1$. } \label{grb050406:fig_map} \end{figure} In order to cover the dynamic range and rapid variability expected from GRB afterglows and to provide rapid-response, automated observations, XRT was designed to support different readout modes that optimize the collected information as the flux of the burst diminishes. The XRT supports four major readout modes: one imaging mode (IM), two timing modes, Piled-up/Low-rate Photodiode (PuPD and LrPD) and Windowed Timing (WT), and one Photon-Counting (PC) mode. A detailed description of XRT modes can be found in \citet{xrtmodes}. In the nominal operating state the mode switching is based on the source flux and is fully automated (auto state) to minimize pile-up in the data. The XRT observations of XRF 050406 started on 2005 Apr 6 at 16:00:12 UT, only 84\,s after the trigger, and ended on 2005 Apr 22, thus summing up a total net exposure (in PC mode) of $\sim 163$ ks spread over a $\sim$16\,d baseline. The monitoring is organized into 9 observations (000, 001, 002, 005, 006, 008, 009, 010, 011) and 183 snapshots (continuous pointings at the target). This was the first burst to occur after the formal end of the calibration phase (2005 Apr 5), and the first (000) observation was performed as an automated target (AT) with XRT in auto state. Therefore, during observation 000 the automated mode switching made XRT take an initial 2.5\,s image (IM at $t=84$\,s), immediately followed by one PuPD ($t=90$\,s) and one LrPD ($t=91$\,s) frame. Then at $t=92$\,s a series of 5 WT frames was taken until the on-board measured count rate was low enough for XRT to switch to PC mode ($t=99$\,s). 
After this, XRT repeatedly switched between WT and PC modes because of an increased background level (see below). Since the signal-to-noise (S/N) in these late WT frames is low, we did not include them in our analysis (Table~\ref{grb050406:tab_obs}). The XRT data were first processed by the Swift Data Center at NASA/GSFC into Level 1 products (calibrated and quality-flagged event lists). Then they were further processed with the XRTDAS (v1.4.0) software package written by the Agenzia Spaziale Italiana (ASI) Science Data Center and distributed within FTOOLS v6.0 to produce the final cleaned event lists. We ran the task {\tt xrtpipeline} (v0.8.8) applying standard filtering and screening criteria, i.e., we cut out temporal intervals during which the CCD temperature was higher than $-47$ $^\circ$C, and we removed hot and flickering pixels. These are present because the CCD is operating at a temperature higher than the design temperature of $-100$ $^\circ$C due to a failure in the active cooling system. An on-board event threshold of $\sim$0.2 keV (un-reconstructed pulse-height PHAS[1]$>80$) was also applied to the central pixel, which has been shown to reduce most of the background due to either the bright Earth limb or the CCD dark current (which depends on the CCD temperature). These two sources of background are the main reason for the switching between PC and WT mode even when the source count rate is below 1 count s$^{-1}$. Throughout the monitoring campaign the CCD temperature was $<-50$ $^\circ$C, with the exception of part of observations 002 and 005, where it became as high as $-43.5$ and $-45$ $^\circ$C, respectively; those data were therefore screened out. For our analysis we further selected XRT grades 0--12 and 0--2 for PC and WT data, respectively (according to Swift nomenclature; \citealt{XRT2}). 
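As a cross-check of the energetics quoted in Sect.\ 2.1, the isotropic-equivalent energy follows from the fluence via $E_{\rm iso} = 4\pi d_L^2 \mathcal{F}/(1+z)$. A sketch of that arithmetic (the flat $\Lambda$CDM parameters $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_{\rm m} = 0.3$ are our assumptions, not stated in the text):

```python
import math

C_KM_S = 299792.458          # speed of light, km/s
MPC_CM = 3.0857e24           # 1 Mpc in cm

def lum_dist_cm(z, h0=70.0, om=0.3):
    """Luminosity distance in a flat LCDM cosmology (trapezoidal integration)."""
    n = 10000
    dz = z / n
    integral = 0.0
    for i in range(n + 1):
        zi = i * dz
        e = math.sqrt(om * (1 + zi) ** 3 + (1 - om))
        w = 0.5 if i in (0, n) else 1.0
        integral += w / e * dz
    d_c = C_KM_S / h0 * integral          # comoving distance, Mpc
    return (1 + z) * d_c * MPC_CM         # luminosity distance, cm

fluence = 1.0e-7                          # erg cm^-2 (15-350 keV, Sect. 2.1)
z = 2.44
d_l = lum_dist_cm(z)
e_iso = 4 * math.pi * d_l**2 * fluence / (1 + z)
print(f"E_iso ~ {e_iso:.2e} erg")         # of order the quoted 1.4e51 erg
assert 1.0e51 < e_iso < 1.8e51
```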
\section{Data analysis\label{grb050406:dataanal}} \subsection{Spatial analysis\label{grb050406:spatial}} Figure~\ref{grb050406:fig_map} shows the 163 ks XRT image accumulated in PC mode in the 0.2--10 keV energy band. We detected two previously uncatalogued sources within 1 arcmin of the optical burst coordinates. The brighter of the two, which we identified as the fading X-ray counterpart of the burst, is present in the first four XRT snapshots. The source is piled-up during the initial 500\,s of PC data. Therefore, to obtain an unbiased position, we rely on the remainder of the PC data in the first observation, which has a net exposure of 49.8\,ks. We used the {\tt xrtcentroid} task (v0.2.7) and found that the afterglow position is RA(J2000$)=2^{\rm h} 17^{\rm m} 52\fs4$, Dec(J2000$)=-50^{\circ} 11\arcmin 13\farcs6$. We estimate its uncertainty to be 4\farcs2 (90\% confidence level). This position takes into account the correction for the misalignment between the telescope and the satellite optical axis (\citealt{centroids}). Figure~\ref{grb050406:fig_map} shows the XRT error circle, as well as the 3\arcmin{} BAT error circle (\citealt{GCN3183}; 95\% containment) and the optical counterpart coordinates determined by Magellan (\citealt{GCN3185}) and by UVOT (\citealt{GCN3186}). The XRT coordinates are 23\arcsec{} from the BAT ones, and 1\farcs6 and 2\farcs8 from the Magellan and UVOT ones, respectively. XRF 050406 was detected ({\tt XIMAGE} v4.3) in the first four snapshots individually, but not from the 5th on. The second source, S2, is located at RA(J2000$)=02^{\rm h} 17^{\rm m} 52\fs9$, Dec(J2000$)=-50^{\circ} 10\arcmin 36\farcs1$ and has a constant rate of $(3.8\pm 0.7) \times 10^{-4}$ counts s$^{-1}$ throughout the observation campaign. \begin{figure*} \resizebox{\hsize}{!}{\includegraphics{figure2.ps}} \caption{X-ray light curve of the XRF 050406 afterglow in the 0.2--10\,keV energy band. 
The curve is background-subtracted and the time is referred to the BAT trigger, 2005 Apr 06 at 15:58:48.4 UT (\citealt{GCN3180}). The last point after 10$^6$\,s is a 3-$\sigma$ upper limit. {\bf Inset}: details of the first $\sim$ 1000\,s, which include data in all XRT modes. The (yellow) diamonds represent LrPD mode data taken during the latter portion of the slewing phase; the (cyan) triangle is the initial IM point (84\,s after the trigger, see Table~\ref{grb050406:tab_obs}), the downward-pointing arrow is a LrPD limit (pointing, 91\,s after the trigger), the (blue) circles are WT mode data (starting from 92\,s after the trigger), and the (red) squares are PC mode data (starting from 99\,s after the trigger). The data have been corrected for pile-up (where appropriate) and PSF losses. The solid (red) line represents the best-fit broken power-law model to the light curve (excluding the flare). } \label{grb050406:fig_lcvs} \end{figure*} \subsection{Temporal analysis\label{grb050406:timing}} During the first 500\,s of the XRT observation the intensity of the afterglow was high enough to cause pile-up in the PC mode data. To account for this effect we extracted the source events within an annulus with a 30-pixel outer radius ($\sim71$\arcsec) and a 2-pixel inner radius. These values were derived by comparing the observed and nominal PSF. For the PC data collected after the first 500\,s, the entire circular region (30-pixel radius) was used, instead. In both cases we further disregarded data within a circular region centred on the serendipitous source S2 (which lies within the 30-pixel PC source extraction region) with a 7.17 pixel (17\arcsec) radius. The WT data were extracted in a rectangular region 31 pixels long along the image strip (and 20 pixels wide), which excludes the data from the source S2. The selected extraction regions correspond to $\sim 69$\,\% (piled-up PC), $\sim 95$\,\% (non piled-up PC), and $\sim 94$\,\% (WT) of the XRT PSF. 
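The quoted PSF fractions can be reproduced from the analytic enclosed-energy fraction of a King profile, ${\rm EEF}(r) = 1 - [1 + (r/r_c)^2]^{1-\beta}$. A sketch (the King parameters $r_c \simeq 5.8\arcsec$, $\beta \simeq 1.55$ and the $2\farcs36$ pixel scale are typical XRT calibration values, assumed here rather than taken from the text):

```python
def king_eef(r_arcsec, rc=5.8, beta=1.55):
    """Enclosed-energy fraction of a King-profile PSF within radius r."""
    return 1.0 - (1.0 + (r_arcsec / rc) ** 2) ** (1.0 - beta)

PIX = 2.36  # XRT pixel scale, arcsec/pixel (assumed)

# Annulus used for the piled-up PC data: inner radius 2 px, outer radius 30 px.
frac_annulus = king_eef(30 * PIX) - king_eef(2 * PIX)
print(f"annulus PSF fraction ~ {frac_annulus:.2f}")  # close to the quoted 69%
assert abs(frac_annulus - 0.69) < 0.02
```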
To account for the background, data were also extracted in PC mode within a circular region (radius $130\arcsec =54.8$ pixels) and in WT mode within a rectangular box (40$\times$20 pixels), in locations far from background sources. The mean PC background in the 0.2--10 keV band was found to be constant throughout the observations and, normalized to the PC source extraction region, it had a value of $\sim 2.6 \times 10^{-3}$ counts s$^{-1}$. The mean WT background in the same energy band and normalized to the WT source region was $\sim 4.6 \times 10^{-2}$ counts s$^{-1}$. Figure~\ref{grb050406:fig_lcvs} shows the background-subtracted light curve extracted in the 0.2--10 keV energy band, with the BAT trigger as origin of time. We considered WT data for the first snapshot of the first observation, and PC data for all 9 available observations (see Table~\ref{grb050406:tab_obs}). During the initial phases of the afterglow evolution ($t< 4\times10^4$\,s) we binned the source counts with a minimum of 30 counts per time bin, and dynamically subtracted the normalized background counts in each bin. The PC mode data were corrected for the effects of pile-up. We note that, enforcing the minimum number of counts per time bin, we created several bins during the first snapshot, but subsequently needed to merge data belonging to snapshots 1 and 2 (point at $\sim 4$\,ks), then from snapshots 3 and 4 (point at $\sim 20$\,ks), and later on from snapshots 5 through 8 (point at $\sim 35$\,ks). Afterwards, we used {\tt XIMAGE} with the option {\tt SOSTA}, which calculates vignetting- and PSF-corrected count rates within a specified box, and the background in a user-specified region. To ensure uniformity with the early light curve, the background was estimated in the same region as the one used for the initial part of the light curve. We thus obtained a signal-to-noise ratio S/N $\ga 3$ (the only exception being the point at $\sim 33$ ks, which has S/N $\ga 2$). 
The last point is a 3-$\sigma$ upper limit. This latter method is preferred for the construction of the late part of the light curve since it better accounts for the background in a low-counts regime. We note, however, that extracting the light curve in the same 30-pixel source region up to the end of the last observation, we obtained fully consistent results, albeit with a noisier light curve. We also note that the residual contribution of the serendipitous source S2 within the source extraction region is $\la 19\%$ of the S2 counts, which corresponds to $\la (7\pm1)\times 10^{-5}$ counts s$^{-1}$. Therefore, S2 only makes a marginal contribution to the afterglow light curve, which amounts to $<$20\% of the last point. The light curve clearly shows a complex behaviour, with a power law decay underlying a remarkable flare which peaks at $\approx 210$ s after the BAT trigger (see Fig.~\ref{grb050406:fig_lcvs}, inset). To fit the light curve we used the BAT trigger as reference time and we only considered spectroscopic-mode data obtained while XRT was pointing, thus excluding the early LrPD, the LrPD upper limit and the IM point. Further excluding the data taken during the flare (180\,s $<t<300$\,s), a fit with a simple power law yields $\chi^2_{\rm red}=4.32$ (12 degrees of freedom, d.o.f.), which is unacceptable. A fit with a broken power law $F(t) = K t^{-\alpha_1}$ for $t<t_{\rm b}$ and $F(t) = K\,t_{\rm b}^{-\alpha_1} \, (t/t_{\rm b})^{-\alpha_2}$ for $t>t_{\rm b}$, where $t_{\rm b}$ is the time of the break, yields $\alpha_1=1.58^{+0.18}_{-0.16}$ and $\alpha_2=0.50\pm0.14$, and a break at $\sim 4200$ s after the BAT trigger. This latter model yields a good fit ($\chi^2_{\rm red}=1.20$, 10 d.o.f.), a significant improvement over the simple power law (null hypothesis probability $=1.7\times 10^{-3}$, equivalent to 3.2 $\sigma$), but some of the parameters are not well constrained. 
Alternatively, a fit with two smoothly joined power laws $F(t) = K^\prime [(t/t_{\rm b})^{-\alpha_1} + (t/t_{\rm b})^{-\alpha_2}]$ yields $\chi^2_{\rm red}=1.29$ (10 d.o.f.) with similar values for the inferred parameters. A summary of the fits to the light curve can be found in Table~\ref{grb050406:tab_lcvfits}. As a reference, the 0.2--10 keV unabsorbed flux at $t_{\rm b}$ is $(4\pm1)\times 10^{-13}$ erg cm$^{-2}$ s$^{-1}$ (we adopted a count rate to unabsorbed flux conversion factor of $6.5 \times 10^{-11}$ erg cm$^{-2}$ count$^{-1}$, obtained from the best fit models derived in Sect.~\ref{grb050406:spectral}) and the luminosity in the 0.7--34.4 keV band is $(1.9\pm0.9)\times 10^{46}$ erg s$^{-1}$. During the flare a rebrightening of the source by a factor of $\ga 6$ in flux was observed between $t \sim 154$\,s and the peak at $\sim 210$\,s. Both the rising and the falling parts of the flare had very steep slopes that, when fit with a simple power law, yield $\alpha_{\rm 1,flare}=-5.8^{+1.6}_{-2.1}$ and $\alpha_{\rm 2,flare}=6.7\pm1.0$. When the underlying power-law afterglow is subtracted, the fit yields $\alpha_{\rm 1,flare}=-6.8^{+2.4}_{-2.1}$ and $\alpha_{\rm 2,flare}=6.8^{+3.6}_{-2.0}$ and the peak is at $213\pm7$\,s from the BAT trigger. In all cases the errors are dominated by the uncertainty in the placement of the flare boundaries. As a simple parametric description, the flare can also be characterised as a Gaussian. A combined broken power law and Gaussian model fit yields a peak at $211.1^{+5.4}_{-4.4}$\,s ($61.4^{+1.6}_{-1.3}$\,s in the rest frame) and a width of $17.9^{+12.3}_{-4.6}$\,s ($\chi^2_{\rm red}=1.58$, 17 d.o.f.). In this case the ratio of the characteristic time-scale to the peak time is $\delta t / t_{\rm peak} \sim 0.08$ or 0.20, when using the Gaussian width or its FWHM ($42.2^{+29.0}_{-10.8}$\,s), respectively. 
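The Gaussian width, the FWHM, and the two timescale ratios quoted above are related by FWHM $= 2\sqrt{2\ln 2}\,\sigma$; a quick numerical check with the best-fit values:

```python
import math

# Consistency check of the Gaussian flare parameters quoted above:
# FWHM = 2 * sqrt(2 ln 2) * sigma, and the two timescale ratios.
sigma, t_peak = 17.9, 211.1   # best-fit Gaussian width and peak time (s)

fwhm = 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma   # ~42.2 s
ratio_sigma = sigma / t_peak                          # ~0.08
ratio_fwhm = fwhm / t_peak                            # ~0.20
```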
In either case, $\delta t / t_{\rm peak} \ll 1$, which puts severe constraints on the emission mechanisms that can produce the flare. We shall address this issue in the discussion section. Integration of the Gaussian best-fitting function yields an estimate of the fluence of the flare, $(1.4 \pm 1.0) \times 10^{-8}$ erg cm$^{-2}$, corresponding to an energy of $(2.0 \pm 1.4) \times 10^{50}$ erg. The large error reflects the uncertainty on the actual model used for the integration of the flare. We also extracted events from the first snapshot WT data in two more energy bands, 0.2--1 keV (soft, S) and 1--10 keV (hard, H), as well as the total band, 0.2--10 keV. We used the same regions as the ones described above, a constant time binning of 30\,s and dynamically subtracted their respective backgrounds. Figure~\ref{grb050406:fig_hr} shows the three background-subtracted light curves, as well as the ratio H$/$S. Indeed, during the rising portion of the flare the hard band flux increases by a factor of $\ga 6$ while the soft band flux only increases slightly, so that the spectrum of the flare starts off harder than the underlying afterglow, and then evolves into a softer state as its flux decreases; this can be seen in the following time bin, when the soft band flux peaks with a flare to pre-flare flux ratio of $\sim 3.5$. This yields an indication of spectral evolution during the flare as a $\sim 3$-$\sigma$ excess over a constant fit to H$/$S. It should be noted that this behaviour is reminiscent of that observed in the {\it prompt} emission (\citealt{Fordea95}), with the harder band peak preceding the softer band peak. At $t \sim 1.7 \times 10^{5}$\,s a second faint bump is observed. Its significance is not high, since it is detected as a 2-$\sigma$ excess over the underlying afterglow. Similar late-time bumps have been observed in other Swift-detected GRBs (e.g.\ GRB 050502B; \citealt{Falconeea05}). 
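The flare fluence-to-energy conversion quoted above can be reproduced with the standard isotropic-energy formula $E = 4\pi d_L^2\,S/(1+z)$. Here the redshift follows from the observer- to rest-frame peak-time conversion given earlier, while the luminosity distance is an assumed concordance-cosmology value for that redshift, not a number from the text:

```python
import math

# Reproducing the flare fluence-to-energy conversion quoted above.
# z follows from the observer/rest-frame peak times given earlier
# (211.1 s / 61.4 s ~ 1 + z); the luminosity distance is an assumed
# approximate concordance-cosmology value, not taken from the text.

z = 211.1 / 61.4 - 1.0          # ~2.44
d_L = 6.3e28                    # cm (assumed, approximate)
fluence = 1.4e-8                # erg/cm^2, 0.2-10 keV flare fluence

E_flare = 4.0 * math.pi * d_L**2 * fluence / (1.0 + z)   # ~2e50 erg
```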
\begin{figure} \resizebox{\hsize}{!}{\includegraphics{figure3.ps}} \caption{WT background-subtracted light curves. {\bf (a)}: Total band ({\bf T}, 0.2--10 keV). {\bf (b)}: Soft band ({\bf S}, 0.2--1 keV). {\bf (c)}: Hard band ({\bf H}, 1--10 keV). {\bf (d)}: Ratio of hard to soft count rates. } \label{grb050406:fig_hr} \end{figure} \subsection{Spectral analysis\label{grb050406:spectral}} The afterglow of XRF 050406 was very faint, hence it is not possible to perform time-resolved spectroscopy to distinguish the spectral properties of the afterglow proper from those of the flare observed in the light curve. Therefore, we proceeded as follows. Spectra of the source and background were extracted in the regions described in Sect.~\ref{grb050406:timing} from the first observation (000) event files. PC and WT spectra were extracted during the first $\sim$ 500\,s of the PC observation (see Table~\ref{grb050406:tab_specfits} for times referred to $T_0$), when PC data are piled up and when the flare is observed in the light curve. We also extracted PC spectra after the first 500\,s during the first 4 snapshots. For the latter we used a circular region with a 10 pixel radius (corresponding to $\sim 80$\,\% of the XRT PSF) to minimize the background and to be able to use the Cash statistics (\citealt{Cash79}). Ancillary response files (ARF) were generated with the task {\tt xrtmkarf} within FTOOLS v6.0 using the latest ARF distribution (v003). These ARFs account for different extraction regions and PSF corrections. We used the latest spectral redistribution matrices (v007). The energy ranges adopted for spectral fitting were 0.5--10 keV and 0.2--10 keV for WT and PC, respectively. We first performed a fit with an absorbed ({\tt wabs} in XSPEC) power law to the WT data (166 counts), which were rebinned with a minimum of 20 counts per energy bin to allow $\chi^2$ fitting within XSPEC. 
The Hydrogen column was initially kept as a free parameter, and then frozen to the Galactic value ($N_{\rm H}^{\rm G}= 2.8 \times 10^{20}$ cm$^{-2}$, \citealt{DL90}) when the fit yielded a value lower than (although consistent with) the Galactic one. The fit was good, $\chi^2_{\rm red}=1.0$ for 6 d.o.f., and yielded $\Gamma=2.11_{-0.28}^{+0.31}$. We then performed a fit with the same model to the remainder of the PC data during snapshots 1 through 4 using Cash statistics, which is more appropriate given the low number of counts (21 un-binned counts), and calculated the goodness of the fit via $10^4$ Monte Carlo simulations. The fit was good and yielded consistent results. We also performed simultaneous fits to the WT and PC (60 counts) spectra extracted during the first $\sim 500$\,s (using $\chi^2$ statistics) and of the PC data alone (using Cash statistics), also obtaining consistent results. Table~\ref{grb050406:tab_specfits} summarizes the results of the fits. We note that, given the current accuracy of the XRT calibration (5\% systematic uncertainty for all observing modes and grade selections in the 0.5--10 keV range; e.g., \citealt{Romanospie05}), an excess of $N_{\rm H}$ cannot be excluded, and we find a 3-$\sigma$ upper limit to the total (Galactic plus intrinsic) Hydrogen column along the line of sight of $N_{\rm H} < 9 \times 10^{20}$ cm$^{-2}$. We can therefore conclude that, during the first 600\,s after the burst, which include the X-ray flare observed in the light curve, the mean photon index is $\Gamma=2.1\pm0.3$, and that the photon index does not vary after the end of the flare. However, we do have clues regarding the presence of spectral evolution during the flare coming from the hardness ratio analysis (Sect.~\ref{grb050406:timing}), even though the statistics are not high enough to show it in the spectral analysis. 
As we will discuss later (Sect.~\ref{grb050406:disc_flares}), other afterglows with larger amplitude X-ray flares demonstrate a strong spectral evolution of the flares. \section{Discussion\label{grb050406:discussion}} \subsection{Gamma-ray properties: similarity of XRFs and GRBs\label{grb050406:disc_prompt}} The duration of this burst ($T_{\rm 90} = 5\pm 1$\,s in the 15--350 keV band) places it in the short tail of the long GRB population (\citealt{Kouveliotouea93}). Its fluence is relatively low ($1.0 \times 10^{-7}$ erg cm$^{-2}$ in the 15--350 keV band), though not unusually so. The gamma-ray characteristics of this burst are consistent with a classification as an X-ray flash (\citealt{Heiseea01}), or as an ``X-ray rich GRB'' (XRR). The softness of the observed spectrum, which is well fit in the 15--150 keV band with a simple power law with photon index $\Gamma_\gamma=2.65$, and with no significant emission observed above $\sim 50$ keV, implies that the peak energy is below the BAT bandpass ($E_{\rm p} < 15$ keV). The operational definition of XRFs/XRRs (e.g.\ \citealt{Lambea04}) is of a fast transient X-ray source characterized by a softness ratio SR$=\log [\mathcal{F} ({\mbox{2--30\,keV}})/\mathcal{F} ({\mbox{30--400\,keV}})] > 0$ for an XRF and $-0.5 < $SR$ < 0$ for an XRR. Extrapolation of the BAT spectrum, with the assumption of $E_{\rm p} < 2$ keV, yields SR$=0.8^{+0.5}_{-0.4}$, which classifies this burst as an XRF. However, a break in the spectrum may well be present in the 2--15 keV band. In the most conservative case, i.e.\ assuming no flux below 15 keV, this event would be an XRR GRB, with SR$ = -0.2^{+0.2}_{-0.3}$. The isotropic-equivalent gamma-ray energy of this event is $E_{\rm iso} = (1.4^{+1.6}_{-0.6})\times 10^{51}$ erg (Sect.~\ref{grb050406:batobs}), which effectively puts XRF~050406 in the low-energy tail of GRB energies (\citealt{Bloomea03b}). 
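The operational XRF/XRR/GRB boundaries quoted above translate directly into a classification rule; a minimal sketch, applied to the two limiting softness ratios from the text:

```python
# The operational XRF / XRR / GRB boundaries quoted above, written as a
# classification rule; a minimal sketch, not part of the analysis itself.

def classify(sr):
    """Classify from SR = log10(F(2-30 keV) / F(30-400 keV))."""
    if sr > 0.0:
        return "XRF"
    if sr > -0.5:
        return "XRR"
    return "GRB"

# The two limiting cases discussed in the text:
label_soft = classify(0.8)    # assuming E_p < 2 keV
label_hard = classify(-0.2)   # assuming no flux below 15 keV
```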
Assuming that the Amati relation (\citealt{Amatiea02}) holds, we can infer a rest-frame $E_{\rm p}^{\rm rest} \sim 55$~keV, which corresponds to an observer-frame $E_{\rm p} \sim 15$ keV. This value is consistent with the nondetection of $E_{\rm p}$ in the BAT energy range. To date, X-ray afterglows of XRFs have been detected in just a few cases (XRF 011030, XRF 020427: \citealt{Bloomea03,Levanea05}; XRF 030723: \citealt{Butlerea04}; XRF 040701: \citealt{Fox04}; XRF 050315: \citealt{Vaughanea06}). This is one of the first examples of a well-studied X-ray light curve of an XRF. Its main characteristics are not qualitatively different from those of normal GRBs (\citealt{Chincaea05,Nousekea05}). As observations accumulate, it is becoming clear that these two classes of phenomena share many properties, and both have afterglows with similar characteristics (\citealt{Sakamotoea05}). This is a clue that both types of events may have a common origin, a view supported by recent evidence that some XRFs are associated with supernovae (\citealt{Soderbergea04,Bersierea05,Fynboea04}). \subsection{X-ray flares: evidence for prolonged engine activity\label{grb050406:disc_flares}} The general behaviour of the afterglow of XRF 050406 is typical. The observed X-ray photon index ($\Gamma_{\rm X} = 2.1$) is common among X-ray afterglows (\citealt{Chincaea05,dePasqualeea05}). The light curve shows a break from a relatively steep decay ($\alpha_1=1.58$) to a flatter one ($\alpha_2=0.50$). Its overall shape is similar to the one typically observed by the XRT (\citealt{Chincaea05,Nousekea05}), even though the initial slope is less steep than average. The most striking characteristic of this burst is the strong flare in its X-ray light curve, a feature which had never been detected by Swift before and had previously been observed in very few GRBs (GRB~970508, \citealt{piroea99}; GRB~011121 and GRB~011211, \citealt{piroea05}). 
The fluence of the flare is $\sim 1.4 \times 10^{-8}$ erg cm$^{-2}$ in the 0.2--10 keV band, which amounts to $\sim 14$\,\% of the observed (15--350 keV band) prompt fluence. A better estimate of the flare-to-prompt energy ratio would require the knowledge of the prompt spectral energy distribution (SED). Since the actual peak energy of the prompt SED is unknown ($E_{\rm p} < 15$\,keV), the extrapolation of the BAT fluence to the XRT band is highly uncertain. For plausible values of $E_{\rm p}$, the flare to prompt fluence ratio is in the 1--10\,\% range. The observed rebrightening is by a factor of 6 in flux, presents a peak at $t_{\rm peak}=213\pm7$\,s and takes place on a very short timescale, with a ratio of the characteristic time-scale and the peak time $\delta t / t_{\rm peak} \ll 1$. Both the rising and the falling parts of the afterglow-subtracted flare had very steep slopes, $\alpha_{\rm 1,flare} \approx -7$ and $\alpha_{\rm 2,flare} \approx 7$, assuming the burst trigger as the time origin. According to the standard relativistic fireball model, the prompt emission is caused by internal shocks within the expanding fireball, while the afterglow is produced by the fireball shocking the external medium (external shocks, \citealt{Piran99,ZM04}). Available models to explain flares include refreshed shocks (\citealt{Reesea98}), external shocks with a clumpy medium (\citealt{Lazzatiea02}) and angular inhomogeneities in the outflow (\citealt{Fenimoreea99,Nakarea03}). However, it can be argued (\citealt{Burrowsea05b}, \citealt{Zhangea05}, \citealt{Nousekea05}) that such models cannot produce the observed large flux variations $\delta F / F_{\rm peak} \gg 1$ in such short timescales $\delta t / t_{\rm peak} \ll 1$ (\citealt{Iokaea05}). Similarly, none of the above mechanisms would explain the steep slopes observed in the flare. 
External reverse shocks, created when the fireball slows down because of the interaction with the external medium, are expected to emerge at optical and radio wavelengths, hence synchrotron self-Compton (SSC) must be invoked to produce emission in the X-ray band. This would require carefully balanced conditions (\citealt{Kobayashiea05}). \citet{piroea05} suggested that the X-ray flares observed in GRB 011121 and GRB 011211 were due to the onset of the afterglow. The steep slopes and the short timescale variability can only be accounted for within the thick shell scenario (\citealt{saripiran99}). \citet{GalliPiro05} successfully modeled XRF 011030 using this model. In this scenario, the emission before and after the flare is due to different processes (prompt tail and afterglow, respectively), hence a discontinuity is generally expected in the light curve underlying the flare. This is not the case for XRF 050406, where the same component describes the X-ray emission both before and after the flare. Even if fine-tuning may explain this particular event, the lack of a light curve break is common to a large fraction of the flares observed by Swift (\citealt{flaresproc}). Therefore, while the explanation of flares in terms of the afterglow onset is attractive, it is unlikely to be applicable to the vast majority of the X-ray flares seen by XRT. A promising mechanism to produce the flare is late internal shocks (\citealt{Fanea05,Zhangea05,Kingea05,Pernaea06}), which implies that the central engine is still active at $t=213$\,s, even though the prompt emission ended after $t \sim 6$\,s. The late-time activity in this case must have a reduced power with respect to the prompt emission, as the relative fluences indicate. Such a mechanism would naturally explain the steep rise and decay slopes. Moreover, the energy required to power the flare would be much lower than in the other scenarios (\citealt{Zhangea05}). 
The indications of spectral evolution throughout the flare further support this interpretation. The flare appears to be harder than the underlying afterglow, which suggests a distinct origin for this emission. Furthermore, the spectral evolution follows the typical hard-to-soft pattern. Such a behaviour is commonly observed in the prompt emission spikes of GRBs (e.g.\ \citealt{Fordea95}), which are produced in internal shocks. Further evidence of late engine activity comes from both the flat part of the light curve ($\alpha_2 \approx 0.5$, see Sect.\ \ref{grb050406:disc_ag}) and possibly from the presence of the late-time bump observed at $t \sim 1.7 \times 10^{5}$\,s. Following the discovery of a flare in the afterglow of XRF 050406, initially reported by \citet{Burrowsea05b}, many others were identified: GRB 050502B (\citealt{Falconeea05}), GRB 050724 (\citealt{Barthelmyea05b}) and GRB 050904 (\citealt{Cusumanoea05c}), just to mention a few. At the time of writing (2005 Oct), $\sim 50$\,\% of the bursts towards which XRT was immediately re-pointed showed flares, making flaring quite a common behaviour. Furthermore, all the characteristics of the XRF 050406 flare have now been observed in most flaring GRBs (see \citealt{flaresproc} for a recent review). For example, highly significant spectral evolution throughout the flare has been reported in GRB 050502B (which hosted the brightest flare observed so far) and GRB 050724. In several cases the flares present large amplitudes and occur on short timescales. Furthermore, several flares are often observed in the same event, at times ranging from $\sim 100$\,s to $10^4$--$10^5$\,s after the burst. Finally, in most cases the afterglow is clearly present {\it before} the onset of the flare, with decay slope and flux level consistent with those measured after the flare. The present case shows that flares are present both in XRFs and in GRBs. 
Since flares are likely tied to the central engine activity, this finding further supports the idea that a similar mechanism is at work for both kinds of events (\citealt{Fanea05}). \subsection{The X-ray afterglow light curve\label{grb050406:disc_ag}} The prompt reaction of Swift has allowed us to observe the X-ray light curves of GRB afterglows starting from a few tens of seconds after the burst explosion. In most cases the X-ray light curves are characterized by an initial steep decay (up to $\sim 500$\,s) followed by a shallow decay, and then by a steeper decay, with a second break normally occurring a few thousand seconds later (\citealt{Chincaea05,Nousekea05}). The early steep decay seen in the X-ray light curve can be explained as the tail of the prompt emission (however, see \citealt{Panaitescuea05}). The few cases where the XRT light curve lies well above the extrapolation of the prompt emission into the X-ray band can be explained either by a strong spectral evolution or by an X-ray flare with the maximum located before the XRT observation (\citealt{Tagliaferriea05}). There are other instances where the first steep decay is not observed at all (e.g.\ \citealt{Campanaea05}). In the case of XRF 050406, however, the initial slope is shallower than the steep values $3 \la \alpha \la 5$ observed in other early afterglows (\citealt{Tagliaferriea05}). Moreover, the curvature relation $\alpha=\beta+2$ (\citealt{Kumarea00,Dermer04}) is not satisfied, even after taking into account the effects pointed out by \citet{Zhangea05} that would alter this relation. Therefore, we also investigate whether the initial decline seen in XRF 050406 is consistent with afterglow emission. 
Comparison of spectral indices and temporal decay slopes with theoretical relativistic fireball models (e.g.\ Table~2 in \citealt{Zhangea05}) indicates that the first decay index $\alpha_1=1.58\pm0.17$ and energy index $\beta=1.1\pm0.3$ rule out fast cooling models (for which the injection frequency $\nu_{\rm m}$ exceeds the cooling frequency $\nu_{\rm c}$) for $\nu < \nu_{\rm m}$. For $\nu > \nu_{\rm m}$, the $\alpha(\beta)=(3\beta-1)/2$ closure relation is satisfied within the errors and an electron power-law distribution index $p \approx 2.5$ is obtained. The same relation holds for the slow cooling regime (where $\nu_{\rm c} > \nu_{\rm m}$) for $\nu > \nu_{\rm c}$ (both wind and ISM). In this case a consistent solution is also found for $\nu_{\rm m} < \nu < \nu_{\rm c}$, although with a large $p \approx 3$. The ISM environment is favoured on the basis of a better-satisfied closure relation. In conclusion, the spectral indices and temporal decay slopes of the first part of the X-ray curve can be interpreted in terms of relativistic fireball models, even though the large uncertainties associated with the slopes do not allow us to choose among the available models. An alternative explanation for the initial XRT emission is the presence of an additional flare which started before the beginning of the XRT observation, and of which we only see the decaying part. The superposition of two (and possibly more, fainter) flares would then mimic the initial steep power law decay. However, this interpretation seems less likely, since the X-ray flares observed by Swift within the first several hundred seconds after the prompt emission all have temporal decay indices much steeper than the observed XRF 050406 pre-flare index. At $t \sim 4400$ s the XRT light curve breaks to $\alpha_2 \approx 0.5$. Such a flat decay cannot be explained in terms of the standard afterglow model. 
The only possibility would be to observe, in the fast cooling regime, the segment with $\nu_c < \nu < \nu_m$ (where $\alpha = 0.25$ is expected, marginally consistent with the observed value). However, the fast cooling regime is expected to end much earlier. To maintain the observed decay unbroken up to $\sim 10^6$\,s, large values of the equipartition parameters $\varepsilon_e$ and $\varepsilon_B$ or of the Compton parameter would be required. We consider this possibility quite unlikely. Another possibility is that the angular energy profile of the fireball is not trivial (a structured jet), so that emission coming from the (brighter) wings of the jet may increase the observed flux as the fireball Lorentz factor decreases (\citealt{Panaitescuea05}). An interesting explanation for the shallow-decay phase is injection of new energy into the fireball through refreshed shocks (\citealt{Sariea00,ZM01}). For this to happen, the energy release inside refreshed shocks must be sizeable, since the whole fireball dynamics has to be modified. Assuming an energy injection rate $\dot{E} \propto t^{-q}$, we find $q$ in the range 0 to 0.5 depending on the model details (\citealt{Zhangea05}). In this model, the initial part of the XRT afterglow light curve can be due to standard afterglow emission only if the fireball evolution is not influenced at these stages. Indeed, the energy supply provided by refreshed shocks is steadily growing, and at the beginning it cannot alter the fireball dynamics. In this case, the break would identify the time when the new, injected energy is comparable to the fireball energy. On the contrary, if the first XRT phase were due to late engine activity, then the energy injection could have begun much earlier and its emission would have been masked. 
Integration of the light curve from the onset of the flat slope phase yields $\mathcal{F} \approx 3 \times 10^{-8} (t_{\rm end}/7.6 \times 10^{5} ~ {\rm s})^{0.5}$ erg cm$^{-2}$, where $t_{\rm end}$ is the time at which the shallow phase ends, for which we can only set a lower limit. We note that this depends weakly on the onset time of the shallow phase, therefore the calculated fluence is correct in both scenarios presented. For comparison, the amount of energy released during the steep phase of the light curve (excluding the flare) is $\mathcal{F} \approx 2 \times 10^{-8} (t_{\rm start}/ 100~{\rm s})^{-0.6}$ erg cm$^{-2}$. We note that the shallow phase lasts a considerable time. \citet{Zhangea05} propose three explanations for the energy injection mechanism. In the impulsive case (\citealt{Sariea00}), the central engine ejects material with a wide distribution of Lorentz factors. In this case, slower-moving shells will catch up with the fireball at later times. We can estimate the minimum Lorentz factor as $\Gamma_{\rm min} \la 2 (E_{\rm iso,50} /n_0)^{1/8} (1+z)^{3/8}$, where $E_{\rm iso}=E_{\rm iso,50} \times 10^{50}$ erg is the isotropic-equivalent energy, and $n_0$ is the external medium particle density in units of cm$^{-3}$. This implies that the acceleration process works from ultra- to mildly-relativistic velocities. Within the putative Poynting flux scenario (\citealt{Zhangea05b}), the energy supply is provided by the transfer of magnetic energy to the fireball, and the time at which the injection stops is related to the ratio $\sigma$ of the electromagnetic to baryonic kinetic energy. If this scenario is correct, we can infer a lower limit of $\sigma = (t_{\rm end}/t_{\rm start})^{1-q} > 10$--$100$, for $q=0.5$--0, where $t_{\rm start} < t_{\rm b}$ is the start time of injection. Therefore, after the end of the energy transfer phase, the energy of the blast-wave would be increased by a comparable factor. 
In the third scenario (the prolonged energy output by the central engine, \citealt{ZM01}), the end of the injection phase is simply the end of the engine activity. In this case, the engine activity produces a large amount of energy, particularly so since the radiative efficiency may be lower during the late afterglow than during the prompt emission, as is generally the case. This was previously noticed by \citet{Nousekea05} in a sample of several Swift GRBs. The monitoring of XRF 050406 was discontinued 22 days after the trigger. By then, the source was no longer detectable and only a 3-$\sigma$ upper limit could be drawn at $\approx 3.6 \times 10^{-4}$ counts s$^{-1}$. In order for the afterglow energy not to diverge, a further, late break is necessary. One interesting possibility is that this may be due to seeing the edge of the jet. A steepening in the light curve is expected when the fireball Lorentz factor becomes comparable to the inverse of the jet half-opening angle. Such a late break is not unexpected for an XRF. The few XRFs with known redshift (\citealt{Soderbergea04,Bersierea05,Fynboea04}) have a very low isotropic-energy release, and this may be at least in part accommodated if they have very wide jets. This picture is consistent with the result of Frail et al.\ (2001; see also \citealt{Ghirlandaea04}), who found that low-energy GRBs tend to have wider opening angles. Using the standard formalism (\citealt{Rhoads99,Sariea99}), the jet half-opening angle is $\vartheta_{\rm j} = 16 \, t_{\rm j,6}^{3/8} n_0^{1/8} (\eta/0.2)^{1/8} E_{\rm iso,50}^{-1/8}$ deg, where $t_{\rm j}=t_{\rm j,6}\times 10^6$\,s is the jet break time and $\eta$ is the burst radiative efficiency. Therefore, using our lower limit on the jet break time $t_{\rm j} \ga 10^6$\,s, we can infer a lower limit on the jet half-opening angle of 16 deg. This value is at the high end of the distribution of jet angles (\citealt{Bloomea03b}). 
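The jet-angle scaling quoted above can be evaluated numerically; a sketch in which every parameter other than the break time is kept at the normalization value of the formula ($n_0 = 1$, $\eta = 0.2$, $E_{\rm iso,50} = 1$), which is an illustrative assumption rather than a measurement:

```python
# Evaluating the jet half-opening angle scaling quoted above. Parameters
# other than the break time are kept at the normalization values of the
# formula (n0 = 1, eta = 0.2, E_iso,50 = 1); this is an illustrative
# assumption, not a measurement.

def theta_jet_deg(t_j6, n0=1.0, eta=0.2, e_iso50=1.0):
    """theta_j = 16 t_{j,6}^{3/8} n0^{1/8} (eta/0.2)^{1/8} E_{iso,50}^{-1/8} deg."""
    return 16.0 * t_j6 ** 0.375 * n0 ** 0.125 * (eta / 0.2) ** 0.125 * e_iso50 ** -0.125

# With the lower limit t_j >~ 1e6 s (i.e. t_j6 >= 1), theta_j >= 16 deg;
# a later break time only raises the limit (3/8 power).
theta_min = theta_jet_deg(1.0)
```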
\section{Summary and conclusions\label{grb050406:conclusions}} XRF 050406 is classified as an X-ray flash, with fluence $\sim 1 \times 10^{-7}$ erg cm$^{-2}$ (15--350 keV), a soft spectrum ($\Gamma_\gamma=2.65$), no significant flux above $\sim 50$ keV and a peak energy $E_{\rm p} < 15$ keV. Its main characteristics are however not qualitatively different from those of normal GRBs. As observations accumulate, it becomes clear that these two classes of phenomena share many properties, and both have afterglows with similar characteristics. This is a clue that both events may have a common origin. XRF 050406 is the first Swift-detected burst that showed a flare in its X-ray light curve, a feature now found in $\sim 50$\,\% of the XRT afterglows. The flare peaked at $\sim 210$ s after the BAT trigger ($\sim 61$ s in the rest frame). The best fit of the afterglow decay is obtained with a broken power law with $\alpha_1=1.58\pm0.17$, $\alpha_2=0.50^{+0.14}_{-0.13}$, and a break at $\sim 4400$ s after the BAT trigger. The mean photon index is $\Gamma_{\rm X} = 2.1\pm0.3$. During the X-ray flare a flux variation of $\delta F / F_{\rm peak} \sim 6$ in a timescale $\delta t / t_{\rm peak} \ll 1$ is observed, and its measured fluence in the 0.2--10 keV band is $\sim 1.4 \times 10^{-8}$ erg cm$^{-2}$ [$(2.0 \pm 1.4) \times 10^{50}$ erg], which corresponds to 1--15\% of the prompt fluence. We argued that the flare-producing mechanism is late internal shocks, which implies that the central engine is still active at $t \sim 210$\,s, though with a reduced power with respect to the prompt emission. We showed possible indications of spectral variations during the flare, and a flattening of the X-ray light curve after $t \sim 4400$\,s in support of continued central engine activity at late times. 
Since XRF 050406 was observed, flares have been detected by XRT in both X-ray flashes and normal GRBs, indicating that flares are linked to some common properties of both kinds of bursts, and probably tied to their central engine. \begin{acknowledgements} This work is supported at OAB by ASI grant I/R/039/04, at Penn State by NASA contract NAS5-00136 and at the University of Leicester by PPARC. We gratefully acknowledge the contributions of dozens of members of the XRT and UVOT team at OAB, PSU, UL, GSFC, ASDC, and MSSL and our subcontractors, who helped make this instrument possible. \end{acknowledgements}
\section{Introduction} \label{sec1} The Allen-Cahn equation is a reaction diffusion equation with a bistable reaction term $f$; see (\ref{reaction}) for detailed conditions on $f$. This equation describes physical phenomena such as dynamical phase transitions, and, in one dimension, it has the form: \begin{align} \label{eq:pde} \begin{cases} \dot{u} ^\varepsilon (t,x) &= \Delta u^\varepsilon (t,x)+\displaystyle{\frac{1}{\varepsilon}} f(u^\varepsilon (t,x) ),\ \ \ t > 0,\ x\in \mathbb{R},\\ u^\varepsilon (0,x) &= u_0 ^\varepsilon (x),\ \ \ x\in \mathbb{R}, \end{cases} \end{align} where $\varepsilon >0$, $\dot{u} = \frac{\partial u}{\partial t}$ and $\Delta u=\frac{\partial ^2 u}{\partial x^2}$. We assume that the function $f$ has $\pm 1$ as stable points and satisfies $\int _{-1} ^1 f(u) du=0$. Then, it is expected that the solution $u^\varepsilon$ tends to $\pm 1$ as $\varepsilon \to 0$ in a very short time and an interface appears to separate the two different phases $\pm1$. In recent studies of the deterministic case, the behavior of solutions has been investigated in detail. For example, Chen \cite{xc} studied the initial value problem (\ref{eq:pde}) in one dimension and classified the behaviors of solutions into four stages: (i) Phase separation: In a very short time $u^\varepsilon$ tends to $\pm 1$. In other words, interfaces are generated in a time of order $O(\varepsilon |\log \varepsilon|)$. (ii) Generation of metastable patterns: By a time of order $O(1)$, $u^\varepsilon$ enters a neighborhood of standing waves associated with $f$. (iii) Super-slow motion of interfaces: An approximating ODE governs the very slow interface motion for a long time of order $O(e^\frac{C}{\varepsilon})$ with $C>0$. (iv) Annihilation of interfaces: Under the super-slow motion, when two interfaces come close enough, they are annihilated and the remaining interfaces resume the super-slow motion. 
We are interested in the first generation time of interfaces and in an appropriate time scale for the interface motion when a random external noise term is added. Carr and Pego \cite{cp} studied the one-dimensional deterministic case, and they proved that the proper time scale for interface motion is of order $O(\exp(\frac{C}{\varepsilon}))$, as mentioned above. Funaki \cite{f94} and \cite{f} studied the stochastic case with an additive noise: \begin{align} \label{eq:spde} \begin{cases} \dot{u} ^\varepsilon (t,x) &= \Delta u^\varepsilon (t,x)+\displaystyle{\frac{1}{\varepsilon}} f(u^\varepsilon (t,x) ) + \varepsilon ^\gamma a(x) \dot{W} _t(x),\ \ \ t> 0,\ x\in \mathbb{R},\\ u^\varepsilon (0,x) &= u^\varepsilon _0 (x),\ x\in \mathbb{R}, \ \ \ u^\varepsilon (t,\pm \infty) = \pm 1,\ \ \ t\geq 0, \end{cases} \end{align} where $a\in C_0^\infty (\mathbb{R})$. Here $\dot{W} _t(x)$ is a space-time white noise on $\mathbb{R}$ which formally has the covariance structure \begin{align} \label{cov} E[\dot{W}_t (x) \dot{W}_s (y)] = \delta(t-s) \delta(x-y) \end{align} where $\delta$ is the Dirac delta function (see also Bertini et al. \cite{bbb} and \cite{bbbp}). Funaki \cite{f} showed that the proper time scale is of order $O(\varepsilon^{-2\gamma - \frac{1}{2}})$. This behavior of the solution corresponds to phase (iii) of the deterministic case. The motion of the interface in the stochastic case is much faster than in the deterministic case only in this phase, because of the strong effect of the noise. Funaki treated the case in which an interface is already formed at the initial time. In this paper, we investigate more general initial values and, in particular, compute the first generation time of the interface. We further study whether we can connect it to the motion of the interface in the case that the initial value is not an interface. \subsection{Setting of the model} We consider the SPDE (\ref{eq:spde}) of Allen-Cahn type in one dimension. 
The reaction term $f\in C^2( \mathbb{R} )$ satisfies the following conditions: \begin{align} \label{reaction} \begin{cases} \text{(i)}f \text{ has only three zeros }\pm 1, 0, & \\ \text{(ii)}f'( \pm 1) =:-p < 0,\ f'(0)=: \mu > 0, & \\ \text{(iii)}f(u)\leq C(1+|u|^q)\text{ with some }C,q>0,& \\ \text{(iv)}f'(u)\leq c\text{ with some }c>0, & \\ \text{(v)}f\text{ is odd,} &\\ \text{(vi)}f(u)\leq -p(u-1) \ (u\geq 1). \end{cases} \end{align} The conditions (i) and (ii) imply that the reaction term is bistable and has only $u=\pm 1$ as stable points. The existence of the global solution for the SPDE (\ref{eq:spde}) is ensured by (iii) and (iv) (see p.222 and Section 2 of \cite{f}, and Section 2 of \cite{f94}). Moreover, we need the assumption (iv) in order to prove a comparison theorem by applying the maximum principle for parabolic PDEs (see Section 2 of \cite{fr}). The condition (v) implies $\int _{-1}^{1}f(u)du=0$, from which we see that the corresponding traveling wave solution is actually a standing wave. We impose (vi) for a technical reason. We can take $f(u)=u-u^3$ as an example of $f$. Next, we describe the external noise term. First, we fix a filtered probability space $(\Omega, \mathcal{F}, P, \{ \mathcal{F} _t\} _{t\geq 0})$ and consider stochastic processes defined on it. Let $\dot{W}_t (x)$ be the space-time white noise which formally has the covariance structure (\ref{cov}) and is an $\{ \mathcal{F} _t\} _{t\geq 0}$-adapted process. We can rewrite the equation (\ref{eq:spde}) in the mild form: \begin{align} u^\varepsilon(t)= S_t u_0 ^\varepsilon + \frac{1}{\varepsilon} \int _0^t S_{t-s}f(u^\varepsilon (s))ds + \int _0^t S_{t-s} \varepsilon ^\gamma a dW_s \nonumber \end{align} where $S_t$ is the integral operator defined by $S_t u(x):=\int _{\mathbb{R}} p(t,x,y)u(y)dy$ and $p(t,x,y) :=\frac{1}{\sqrt{4\pi t}}e^{-\frac{(x-y)^2}{4t}}$. We give a mathematical meaning to the last term as a stochastic integral with respect to an operator-valued integrand.
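For the example $f(u)=u-u^3$, the conditions (\ref{reaction}) can be checked directly; this short verification also identifies the constants $p$, $\mu$, $c$, $C$ and $q$ in this case: \begin{align} &f(u)=u-u^3=-u(u-1)(u+1), && \text{so (i) holds with zeros } 0, \pm 1, \nonumber \\ &f'(u)=1-3u^2,\ f'(\pm 1)=-2=:-p,\ f'(0)=1=:\mu, && \text{which is (ii)}, \nonumber \\ &f(u)\leq 1+|u|^3,\ f'(u)\leq 1, && \text{so (iii), (iv) hold with } C=c=1,\ q=3, \nonumber \\ &f(u)+p(u-1)=-(u-1)^2(u+2)\leq 0\ \ (u\geq 1), && \text{which is (vi)}, \nonumber \end{align} and (v) is clear since $f$ is odd.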
Another way to interpret (\ref{eq:spde}) is as a weak solution, namely $u^\varepsilon(t)$ satisfies \begin{align} \langle u^\varepsilon(t) - u^\varepsilon(0) , \varphi \rangle =\int _0 ^t \langle u^\varepsilon(s) , \Delta \varphi \rangle ds + \frac{1}{\varepsilon} \int _0 ^t \langle f(u^\varepsilon(s)) , \varphi \rangle ds + \varepsilon ^\gamma \int _0 ^t \langle \varphi , adW_s \rangle \nonumber \end{align} for all $\varphi \in C_0 ^\infty (\mathbb{R})$. Here $\langle \cdot , \cdot \rangle$ denotes the inner product on $L^2(\mathbb{R})$. It is well known that every mild solution is a weak solution and vice versa (see \cite{dpz}). Moreover, we assume that $u_0 ^\varepsilon \in C^2(\mathbb{R})$ and there exist constants $C_0 >1$, $C$, $C'>0$ and $\kappa >1$ such that \begin{align} \label{eq:ini} \begin{cases} \text{(i)} \| u_0 ^\varepsilon \| _\infty + \| u_0 ^{\varepsilon \prime} \| _\infty + \| u_0 ^{\varepsilon \prime \prime} \| _\infty \leq C_0, & \\ \text{(ii)} \text{There exists a unique } \xi _0 \in [-1,1] \text{ independent of } \varepsilon >0 \text{ such that } u_0 ^\varepsilon (\xi _0)= 0, & \\ \text{(iii)} | u_0 ^\varepsilon (x)| \geq C \varepsilon ^{\frac{1}{2}}\ (|x-\xi _0| \geq C' \varepsilon ^{\frac{1}{2}}), & \\ \text{(iv)} | u_0 ^\varepsilon (x) - 1 | + | u_0 ^{\varepsilon \prime} (x) | + | u_0 ^{\varepsilon \prime \prime} (x) | \leq \varepsilon ^{\kappa }C_\mu \exp(-\frac{\sqrt{\mu} x}{2}) \ (x\geq 1), & \\ \text{(v)} | u_0 ^\varepsilon (x) + 1 | + | u_0 ^{\varepsilon \prime} (x) | + | u_0 ^{\varepsilon \prime \prime} (x) | \leq \varepsilon ^{\kappa } C_\mu \exp(\frac{\sqrt{\mu} x}{2}) \ (x\leq -1), & \\ \end{cases} \end{align} where $C_\mu:=\frac{\mu}{4} \wedge 1$ and $\| \cdot \| _\infty$ is the supremum norm on $C(\mathbb{R})$. Here the constant $\mu$ is defined in (\ref{reaction}). Conditions on the constant $\kappa >1$ are stated in Theorem \ref{thm23} and in Section \ref{sec3}. We use the assumption (i) throughout this paper.
Because we consider the case that only one interface is formed, we assume the condition (ii). We use the conditions (iii), (iv) and (v) in order to prove the generation of interface for the deterministic case in Section \ref{sec2} as a preparation. In this paper, we assume that the support of $a$ is included in $[-1,1]$ without loss of generality. For each $n \in \mathbb{N}$, the Sobolev space $H^n(\mathbb{R})$ is defined by $H^n(\mathbb{R}) := \{ f\in L^2(\mathbb{R}) \,|\, \|f \|_{H^n(\mathbb{R})} < \infty \}$ equipped with the norm $\|f \|_{H^n(\mathbb{R})} := \sum _{k=0} ^n \|\nabla ^k f\|_{L^2(\mathbb{R})}$ where $\nabla ^k f (x) := \frac{d^k f}{dx^k}(x)$. \subsection{Main result} As mentioned above, in this paper we discuss the generation of interfaces and give estimates on the first generation time of interfaces. After that, we connect this to the motion of the interface in the one-dimensional case, which was introduced in \cite{f}. Before we state the main result, we define the function $m$, called a standing wave, which satisfies the following ODE: \begin{align} \label{ode11} \begin{cases} \Delta m + f(m) =0,\ m(0)=0,\ m(\pm \infty) =\pm 1, \\ m\text{ is monotone increasing}. \end{cases} \end{align} We comment on this function below. Now we formulate our main result. \begin{theorem} \label{thm23} Assume that $u_0^\varepsilon$ satisfies (\ref{eq:ini}), set $\bar{u}^\varepsilon (t,x) := u^\varepsilon (\varepsilon ^{-2\gamma -\frac{1}{2}} t,x)$, and let $\gamma$ be a constant such that \begin{align} \label{gamma} &{\rm there \ exist \ constants\ } \kappa > \kappa' > 1{\rm \ which \ satisfy} \nonumber\\ &\begin{cases} (\kappa ' +\frac{21}{40} + \frac{\gamma}{10})\vee 2\kappa ' < \kappa < \gamma -\frac{C_f}{\mu},\\ 1< \kappa ' <\frac{1}{20} + \frac{\gamma}{5}. \end{cases} \end{align} Then there exist an a.s.
positive random variable $C(\omega) \in L^\infty (\Omega )$ and a stochastic process $\xi_t ^\varepsilon$ such that \begin{align} P( \| \bar{u} ^\varepsilon (t,\cdot )-\chi _{\xi^\varepsilon _t}(\cdot )\| _{L^2(\mathbb{R})} \leq \delta \ for \ all \ t \in [C(\omega )\varepsilon^{2\gamma +\frac{3}{2}} |\log \varepsilon |,T] ) \to 1\ \ \ (\varepsilon \to 0), \nonumber \end{align} for all $\delta >0$ and $T>0$. Moreover, the distribution of the process $\xi^\varepsilon _t$ on $C([0,T], \mathbb{R})$ converges weakly to that of $\xi _t$, where $\xi_t$ obeys the SDE starting at $\xi_0$: \begin{align} \label{eq:sde} d\xi_t = \alpha _1 a(\xi_t ) dB_t + \alpha _2 a(\xi_t)a'(\xi_t)dt, \end{align} where $\alpha _1$ and $\alpha _2 \in \mathbb{R}$ are defined as \begin{align} &\alpha _1 := \frac{1}{\| \nabla m \|_{L^2}} \nonumber \\ &\alpha _2 := - \frac{1}{\| \nabla m \|_{L^2} ^2} \int _0^\infty \int _{\mathbb{R}} \int _{\mathbb{R}} x p(t,x,y;m)^2 f''(m(y)) \nabla m(y) dxdydt \nonumber \end{align} and $p(t,x,y;m)$ denotes the fundamental solution for $\frac{\partial}{\partial t} - \Delta - f'(m)$ (see also \cite{f}, p. 252). \end{theorem} The condition (\ref{gamma}) requires at least $\gamma > \frac{19}{4}$. This is the same condition as in Funaki's result (see Theorem 8.1 in \cite{f}). In this case, we can regard $C(\omega )\varepsilon^{2\gamma +\frac{3}{2}} |\log \varepsilon |$ as the first generation time in the time scale of order $O(\varepsilon^{-2\gamma -\frac{1}{2}})$. This is the same order as the first generation time for the deterministic case if we do not change the time scale. Our result covers that of \cite{f}, and the time scale for the interface motion is the same. Now we briefly explain the idea of Funaki \cite{f94} and \cite{f}.
In \cite{f}, he showed that $\bar{u}^\varepsilon$ converges to $\chi _{\xi^\varepsilon _t}$ as $\varepsilon \to 0$, and the interface motion at the limit is described by (\ref{eq:sde}) in the case that the initial value is $u_0 ^\varepsilon = m(\varepsilon ^{-\frac{1}{2}}(x-\xi _0 ))$. He took the Ginzburg-Landau free energy as a Lyapunov functional corresponding to the equation (\ref{eq:pde}), which is defined by \begin{align} \mathcal{H}^\varepsilon (u) := \int _{\mathbb{R}} \left \{ \frac{1}{2}|\nabla u|^2 +\frac{1}{\varepsilon}F(u) \right \} dx \nonumber \end{align} where $f=-F'$. Note that the solution $u^\varepsilon$ of (\ref{eq:spde}) is not differentiable in $x$. Then, the set of minimizers of $\mathcal{H}^\varepsilon$ in the class of functions $u$ satisfying $u(\pm \infty)=\pm 1$ is given by $M^\varepsilon :=\{ m(\varepsilon ^{-\frac{1}{2}} (x-\eta)) | \eta \in \mathbb{R} \}$. Here we define a coordinate in a neighborhood of $M^1$, called the Fermi coordinate. For $u$ such that $u-m \in L^2(\mathbb{R})$, we set $dist(u,M^1):=\inf _{\eta \in \mathbb{R}} \| u - m(\cdot -\eta) \| _{L^2(\mathbb{R})}$. If $dist (u,M^1)< \beta$ for some $\beta>0$, then there exists a unique constant $\eta(u) \in \mathbb{R}$ which attains $\inf _{\eta \in \mathbb{R}} \| u - m(\cdot -\eta) \| _{L^2(\mathbb{R})}$. Thus, we can write $u=m_{\eta(u)}+s(u)$ where $m_{\eta}(x)=m(x-\eta)$. We call the coordinate $(\eta(u),s(u)) \in \mathbb{R} \times L^2(\mathbb{R})$ the Fermi coordinate. If we change the time scale as $\bar{u}^\varepsilon(t,x):= u^\varepsilon(\varepsilon^{-2\gamma - \frac{1}{2}}t,x)$, then $\bar{u}^\varepsilon$ satisfies the SPDE \begin{align} \label{scalingspde} \dot{\bar{u}}^\varepsilon = \varepsilon^{-2\gamma -\frac{1}{2}}\left \{ \Delta \bar{u}^\varepsilon +\frac{1}{\varepsilon}f(\bar{u}^\varepsilon)\right \} + (\varepsilon^{-2\gamma -\frac{1}{2}})^{\frac{1}{2}}\cdot \varepsilon^{\gamma} a(x) \dot{W}_t(x) \end{align} in law. We give a formal proof of (\ref{scalingspde}).
We have that \begin{align} \bar{u}^\varepsilon (t) - \bar{u}^\varepsilon (0) &= u^\varepsilon (\varepsilon^{-2\gamma - \frac{1}{2}} t) - u_0 ^\varepsilon \nonumber \\ &= \int _0 ^{\varepsilon^{-2\gamma -\frac{1}{2}}t} \left \{ \Delta u^\varepsilon (s) +\frac{1}{\varepsilon}f(u^\varepsilon (s))\right \} ds + \varepsilon^{\gamma} a(x) W_{\varepsilon^{-2\gamma -\frac{1}{2}}t} (x), \nonumber \\ &= \varepsilon^{-2\gamma -\frac{1}{2}} \int _0 ^t \left \{ \Delta \bar{u}^\varepsilon (s) +\frac{1}{\varepsilon}f(\bar{u}^\varepsilon (s))\right \} ds + (\varepsilon^{-2\gamma -\frac{1}{2}})^{\frac{1}{2}} \cdot \varepsilon^{\gamma} a(x) W_t (x), \nonumber \end{align} from the SPDE (\ref{eq:spde}). In the third line, the first term follows from the change of variables $s \mapsto \varepsilon^{2\gamma +\frac{1}{2}}s$, and the second term comes from the self-similarity of the space-time white noise (formally, $W_{a^2 t}(x) = a W_t(x)$ in law). Because of the strong effect of the drift term, the solution of (\ref{eq:spde}) started from $m(\varepsilon ^{-\frac{1}{2}} (x-\xi _0)) \in M^\varepsilon$ should be attracted to $M^\varepsilon$. From this observation, Funaki \cite{f} showed that the solution $\bar{u}^\varepsilon$ does not leave a tubular neighborhood of $M^\varepsilon$ in the $L^2$-sense if the initial value is on $M^\varepsilon$, by investigating the structure of the functional $\mathcal{H}^\varepsilon$ around the minimizers $M^\varepsilon$. He then derived an SDE as the dynamics of the interface by defining an appropriate coordinate on this neighborhood. However, in our case, the initial value is not close to $M^\varepsilon$. Thus, we need to show that the solution $u^\varepsilon$ enters a neighborhood of $M^\varepsilon$ in a short time with high probability. We call this behavior the generation of interface.
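The self-similarity used in the last step can be checked formally at the level of covariances. Writing $W_t(x)=\int _0^t \dot{W}_s(x)ds$ and using (\ref{cov}), we have, for $c>0$, \begin{align} E[W_t(x)W_s(y)] = (t\wedge s)\,\delta (x-y), \ \ \ E[W_{ct}(x)W_{cs}(y)] = c\,(t\wedge s)\,\delta (x-y) = E[(\sqrt{c}\,W_t)(x)(\sqrt{c}\,W_s)(y)]. \nonumber \end{align} Since both fields are centered Gaussian, equality of covariances gives $W_{ct}(x)=\sqrt{c}\,W_t(x)$ in law; taking $c=\varepsilon ^{-2\gamma -\frac{1}{2}}$ produces the factor $(\varepsilon^{-2\gamma -\frac{1}{2}})^{\frac{1}{2}}$ in front of the noise in (\ref{scalingspde}).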
We first prove the generation of interface in the deterministic case in Section \ref{sec2} as a preparation. We refer to the comparison argument in \cite{ham}. The proof of the main result is given in Section \ref{sec3}. \section{The deterministic results} \label{sec2} In this section, we will show the generation of interface for the solution of the PDE (\ref{eq:pde}). We assume that there exist positive constants $p$, $\mu>0$ such that the reaction term $f$ satisfies \begin{align} \label{reaction2} \begin{cases} \text{(i)}f \text{ has only three zeros }a_\pm, a_0, & \\ \text{(ii)}f'( a_\pm) =-p < 0,\ f'(a_0)=\mu > 0, & \\ \text{(iii)}f(u)\leq C(1+|u|^q)\text{ with some }C,q>0,& \\ \text{(iv)}f'(u)\leq c\text{ with some }c>0, & \\ \text{(v)}f(u)\leq -p(u-a_+) \ (u\geq a_+), & \\ \text{(vi)}f(u)\geq -p(u-a_-) \ (u\leq a_-), \end{cases} \end{align} for some $a_-< a_0 < a_+$. We work with $a_\pm$ and $a_0$ because we need to change the stable points in order to construct super and sub solutions in Section \ref{sec3}. The initial value $u_0 ^\varepsilon$ satisfies the condition (\ref{eq:ini}) with $\pm 1$ replaced by $a_\pm$ throughout the rest of this section. We may take $C_0$ large enough that $[a_- ,a_+]$ is included in $[-2C_0 , 2C_0]$. The argument in this section is based on Alfaro et al.\ \cite{ham}. They proved that, for small $\eta >0$, the solution $u^\varepsilon$ forms an interface of width $O(\varepsilon ^{\frac{1}{2}})$ and each phase enters the $\eta$-neighborhood of $a_\pm$ uniformly at the time $t=\frac{1}{2\mu}\varepsilon | \log \varepsilon |$ (see Theorem 3.1 of \cite{ham}). However, in order to connect to the motion of interface, we need to show that the solution $u^\varepsilon$ enters the $\varepsilon ^\kappa$-neighborhood of $M^\varepsilon$ in the $L^2$-sense, that is, $\inf_{\eta \in \mathbb{R}} \| u^\varepsilon - m(\varepsilon ^{-\frac{1}{2}}(\cdot -\eta)) \|_{L^2(\mathbb{R})} \leq \varepsilon ^\kappa$, for $\kappa >0$ and $\varepsilon ^\kappa \ll \eta$.
Thus, we need to consider times after $t=\frac{1}{2\mu}\varepsilon | \log \varepsilon |$. \subsection{Auxiliary estimates} \label{sec2-1} We first prepare some preliminary results. We consider the ODE: \begin{align} \begin{cases} \dot{Y} (\tau , \xi) = f( Y(\tau , \xi )),\ \ \ \tau >0,\\ Y(0, \xi)=\xi \in [-2C_0 ,2C_0].\end{cases} \nonumber \end{align} \begin{lemma} \label{lem31} There exists a constant $\eta_0 \in (0, a_+ - a_0)$ such that, for any $\eta \in (0,\eta_0 )$ and $\alpha >0$, there exists a positive constant $C>0$ such that \begin{align} Y(C|\log \varepsilon | ,\xi ) \geq a_+ -\eta \ \ \ (for\ all\ \xi \in [a_0+ \varepsilon ^\alpha ,a_+ -\eta]) \nonumber \end{align} for sufficiently small $\varepsilon >0$. The constant $C$ can be taken depending only on $\alpha$ and $f$. \end{lemma} \begin{proof} First, we take $\eta_0 \in (0, a_+ - a_0)$ small enough and fix $\eta \in (0,\eta_0 )$. We specify $\eta _0$ in the proof of the next lemma. Since the solutions $Y(\tau,\xi)$ are larger than $Y(\tau,a_0 + \varepsilon ^\alpha)$ for all $\xi \in (a_0 + \varepsilon ^\alpha ,a_+ -\eta]$, the conclusion follows once we can show it for $Y(\tau,a_0 + \varepsilon ^\alpha)$. Corollary 3.5 in \cite{ham} implies that there exists a positive constant $C_1(\eta)>0$ such that \begin{align} C_1(\eta) e^{\mu \tau} \varepsilon ^\alpha \leq Y(\tau,a_0+ \varepsilon ^\alpha) -a_0 \nonumber \end{align} for $\tau >0$ as long as $Y(\tau,a_0 + \varepsilon ^\alpha)$ remains in $(a_0 ,a_+ -\eta]$. The inequality $C_1(\eta) e^{\mu \tau} \varepsilon ^\alpha \geq a_+ -a_0 -\eta$ implies that \begin{align} \tau \geq \frac{\alpha}{\mu} |\log \varepsilon |+\frac{1}{\mu} \log \frac{a_+ -a_0 -\eta}{C_1(\eta)} . \nonumber \end{align} Thus, if we take $\frac{\tilde{\alpha}}{\mu}$ as the constant $C$ for some $\tilde{\alpha} >\alpha$ and take $\varepsilon >0$ sufficiently small, the lemma is proved.
\qed \end{proof} \begin{lemma} \label{lem32} There exists a constant $\eta_0 \in (0, a_+ - a_0)$ such that, for any $\eta \in (0,\eta_0 )$ and $\kappa >0$, there exists a positive constant $C>0$ such that \begin{align} Y(C|\log \varepsilon | ,\xi )\geq a_+ -\varepsilon ^\kappa \ \ \ (for\ all\ \xi \in [a_+ -\eta , a_+ - \varepsilon ^\kappa]) \nonumber \end{align} for sufficiently small $\varepsilon >0$. The constant $C$ can be taken depending only on $\kappa$ and $f$. \end{lemma} \begin{proof} By the same observation as in the proof of Lemma \ref{lem31}, we only need to consider the solution $Y(\tau ,a_+ -\eta)$. We take $\eta_0 \in (0, a_+ - a_0)$ small enough that the sign of the derivative $f''(u)$ does not change on $u\in [a_+ -\eta _0 ,a_+ )$, and fix $\eta \in (0,\eta_0 )$. First, we consider the case that $f''(u) \leq 0$ on $[a_+ -\eta ,a_+)$. The inequality $f(u) \geq -\frac{f(a_+ -\eta)}{\eta}(u-a_+ )$ on $u\in [a_+ -\eta ,a_+ )$ and an easy computation give us \begin{align} Y(\tau ,a_+ -\eta ) \geq a_+ -\eta e^{-\frac{f(a_+ -\eta)}{\eta} \tau} \nonumber \end{align} for all $\tau >0$. Recalling that $f'(a_+ )$ is negative and $f''(u) \leq 0$ on $u\in [a_+ -\eta ,a_+ )$, we can show that the inequality \begin{align} \tau \geq \frac{1}{f'(a_+)}\{ \kappa |\log \varepsilon |+ \log \eta \} \geq -\frac{\eta}{f(a_+-\eta)}\{ \kappa |\log \varepsilon |+ \log \eta \} \nonumber \end{align} implies that $a_+ -\eta e^{-\frac{f(a_+ -\eta)}{\eta} \tau} \geq a_+ -\varepsilon ^\kappa$. We can show this lemma by taking $C= \frac{\tilde{\kappa}}{p}$ for some $\tilde{\kappa} > \kappa$. The case that $f''(u) \geq 0$ on $u\in [a_+ -\eta ,a_+)$ is easier than the other one, because of the estimate $f(u) \geq -p(u-a_+)$ on $u\in [a_+-\eta ,a_+)$. The same argument as above gives us the same estimate and this completes the proof of this lemma. \qed \end{proof} Combining Lemmas \ref{lem31} and \ref{lem32}, we obtain the following useful estimate.
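In the model case $f(u)=u-u^3$ (so that $a_\pm =\pm 1$, $a_0 =0$, $\mu =1$ and $p=2$), the ODE for $Y$ can be solved in closed form, which illustrates both lemmas at once; this explicit formula is given for illustration only and is not used in the proofs. Substituting $z=Y^{-2}$ turns $\dot{Y}=Y-Y^3$ into the linear equation $\dot{z}=2-2z$, which gives \begin{align} Y(\tau ,\xi )=\frac{\xi e^\tau}{\sqrt{1+\xi ^2(e^{2\tau}-1)}},\ \ \ \tau \geq 0,\ \xi \in (0,1). \nonumber \end{align} Started from $\xi =a_0+\varepsilon ^\alpha =\varepsilon ^\alpha$, the solution reaches values of order one at $\tau \approx \alpha |\log \varepsilon |=\frac{\alpha}{\mu}|\log \varepsilon |$, as in Lemma \ref{lem31}, while for large $\tau$ we have $a_+ -Y(\tau ,\xi ) \sim \frac{1}{2}(\xi ^{-2}-1)e^{-2\tau}=O(e^{-p\tau})$, so that $Y$ enters the $\varepsilon ^\kappa$-neighborhood of $a_+ =1$ after a further time of order $\frac{\kappa}{p}|\log \varepsilon |$, as in Lemma \ref{lem32}.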
We need this estimate when we connect the generation and motion of interface. \begin{proposition} \label{thm32} For each $\alpha >0$ and $\kappa>0$, there exists a positive constant $C_1>0$ such that \begin{align} |Y(C_1 |\log \varepsilon | ,\xi )- a_+ |\leq \varepsilon ^\kappa \ \ \ for\ all\ \xi \in [a_0 + \varepsilon ^\alpha ,2C_0] \nonumber \end{align} for sufficiently small $\varepsilon >0$. The constant $C_1$ can be taken depending only on $\alpha$, $\kappa$ and $f$. \end{proposition} \begin{proof} From the condition (v) of (\ref{reaction2}), we have that $f(u)\leq -p(u-a_+)$ on $u\in [a_+, 2C_0]$. Thus an argument similar to the proof of Lemma \ref{lem32} gives us \begin{align} Y(C|\log \varepsilon | ,\xi ) \leq a_+ +\varepsilon ^\kappa \ \ \ for\ all\ \xi \in [a_+ ,2C_0] \nonumber \end{align} if we take $C=\frac{\tilde{\kappa}}{p}$ for $\tilde{\kappa} > \kappa$. If we set $\tilde{\kappa} > \kappa$ and $\tilde{\alpha} > \alpha$, the solution $Y$ started from $[a_0+ \varepsilon ^\alpha ,a_+ -\eta]$ becomes larger than $a_+ -\eta$ by the time $t=\frac{\tilde{\alpha}}{\mu} |\log \varepsilon|$ from Lemma \ref{lem31}, and the solution started from $[a_+ - \eta ,2C_0]$ goes into $[a_+ - \varepsilon ^\kappa ,a_+ + \varepsilon ^\kappa]$ by the time $t=\frac{\tilde{\kappa}}{p} |\log \varepsilon|$ from Lemma \ref{lem32}. Thus, we can prove this proposition if we take $C_1 =\frac{\tilde{\alpha}}{\mu} + \frac{\tilde{\kappa}}{p}$ for $\tilde{\kappa} > \kappa$ and $\tilde{\alpha} > \alpha$. \qed \end{proof} We can obtain a similar estimate to that of Proposition \ref{thm32} in the case $\xi \in [-2C_0 ,a_0 -\varepsilon ^\alpha ]$. We state it below.
\begin{proposition} \label{thm33} For each $\alpha >0$ and $\kappa>0$, there exists a positive constant $C_1>0$ such that \begin{align} |Y(C_1 |\log \varepsilon | ,\xi ) -a_- |\leq \varepsilon ^\kappa \ \ \ for\ all\ \xi \in [-2C_0 ,a_0 -\varepsilon ^\alpha ] \nonumber \end{align} for sufficiently small $\varepsilon >0$. In particular, the constant $C_1$ can be taken depending only on $\alpha$, $\kappa$ and $f$. \end{proposition} \subsection{Construction of super and sub solutions} We set \begin{align} w_\varepsilon ^\pm (t,x) = Y \left( \frac{t}{\varepsilon} , u_0 ^\varepsilon (x) \pm \varepsilon h(x) (e^\frac{\mu t}{\varepsilon }-1 ) \right) \nonumber \end{align} for a bounded positive function $h(x)\in C_b ^2 (\mathbb{R})$ which satisfies \begin{align} \label{condh} \begin{cases} {\rm (i)}\ \mu h \geq \{ u_0 ^{\varepsilon \prime} +\varepsilon h' (e^{\frac{\mu t}{\varepsilon}} -1) \} ^2\ for\ all\ t\in [0,C_1\varepsilon |\log \varepsilon|]\ and\ x\in \mathbb{R}, \\ {\rm (ii)}\ \mu h \geq \{ u _0 ^{\varepsilon \prime} +\varepsilon h' (e^{\frac{\mu t}{\varepsilon}} -1) \} ^2 + \{ \Delta u _0 ^{\varepsilon} + \varepsilon \Delta h (e^{\frac{\mu t}{\varepsilon}} -1) \}\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ for\ all\ t\in [0,C_1\varepsilon |\log \varepsilon|]\ and\ x\in \mathbb{R}, \\ {\rm (iii)}\ \varepsilon ^\kappa C_\mu \exp(-\frac{\sqrt{\mu} x}{2}) + h(x) ( \varepsilon ^{1-C_1 \mu}-\varepsilon ) \leq C \varepsilon^\kappa \exp (-\frac{\sqrt{\mu} x}{2}) \ for\ all\ x \geq K, \\ {\rm (iv)}\ \varepsilon ^\kappa C_\mu \exp(\frac{\sqrt{\mu} x}{2}) + h(x) ( \varepsilon ^{1-C_1 \mu}-\varepsilon ) \leq C \varepsilon^\kappa \exp(\frac{\sqrt{\mu} x}{2}) \ for\ all\ x \leq -K, \\ {\rm (v)}\ \lim _{\varepsilon \to 0} ( \varepsilon ^{1-C_1 \mu}-\varepsilon ) \| h \|_\infty =0 , \end{cases} \end{align} for some constants $C>0$, $K>1$ and $\varepsilon _0 >0$, and for any $\varepsilon \in (0,\varepsilon _0]$.
The constants $\mu$ and $C_\mu$ are introduced in (\ref{reaction}) and (\ref{eq:ini}), respectively. We need to construct the function $h$ satisfying (\ref{condh}). \begin{lemma} \label{consth} There exists a function $h \in C_b ^2 (\mathbb{R})$ which satisfies (\ref{condh}). \end{lemma} \begin{proof} For simplicity, we set $a_\varepsilon =\varepsilon (e^{\frac{\mu t}{\varepsilon}} -1)$. Note that $0< a_\varepsilon < \varepsilon ^{1-C_1 \mu}-\varepsilon$ if $t\in [0, C_1 \varepsilon |\log \varepsilon |)$. We can decompose the initial value as $u_0 ^\varepsilon =\tilde{u}_0 ^\varepsilon + \varepsilon ^\kappa g$ where ${\rm supp}\, \tilde{u}_0 ^\varepsilon \subset [-1,1]$, $|g| + |g'| + |g''| \leq C_\mu \exp (- \frac{\sqrt{\mu} |x|}{2})$ and $\tilde{u}_0 ^\varepsilon$, $g \in C^2(\mathbb{R})$ (see (\ref{eq:ini})). Now we take $h=\varphi +\varepsilon ^\kappa \psi$ where $\varphi$, $\psi \in C_b ^2(\mathbb{R})$ are positive. First, we construct $\varphi$ satisfying \begin{align} \label{condphi1} &\mu \varphi \geq 4( \tilde{u} _0 ^{\varepsilon \prime} ) ^2 + 4 ( a_\varepsilon \varphi ' ) ^2 ,\\ \label{condphi2} &\mu \varphi \geq 4( \tilde{u} _0 ^{\varepsilon \prime} ) ^2 + 4 ( a_\varepsilon \varphi ' ) ^2 + \tilde{u} _0 ^{\varepsilon \prime \prime} +a_\varepsilon \varphi '' +\varepsilon ^{\bar{\kappa}} 1_{[-1,1]}, \end{align} where $1_{[-1,1]}$ is an indicator function and $0 < \bar{\kappa} < \kappa$. We take a constant $K>1$ which does not depend on $\varepsilon$. We set $\varphi(x)= \exp(\varepsilon ^{-\beta} (x+K))$ when $x<-K$ and $\varphi(x)= \exp(-\varepsilon ^{-\beta} (x-K))$ when $x>K$, for some $0<\beta <\frac{1-C_1\mu}{2}$. By using the conditions $\tilde{u} _0 ^{\varepsilon \prime} =\tilde{u} _0 ^{\varepsilon \prime \prime} =0$ for $|x|>K$ and $a_\varepsilon \varepsilon ^{-\beta} \to 0$ as $\varepsilon \to 0$, the estimates (\ref{condphi1}) and (\ref{condphi2}) are established when $|x|>K$. On $[-1,1]$, we can take $\varphi$ to be a constant larger than $\frac{4 C_0 ^2 + C_0}{\mu}$.
Thus we need to connect $\varphi$ on $[-K ,-1]$ and $[1,K]$. We consider only $[-K ,-1]$. We take $\varphi ''$ to be a linear function on $[-K ,-K + \varepsilon ^{2\beta} ]$ with $\varphi ''(-K + \varepsilon ^{2\beta} ) =0$. Then $\varphi$ becomes a monotonically increasing function on $[-K ,-K + \varepsilon ^{2\beta} ]$, and $\varphi (-K + \varepsilon ^{2\beta} ) =1 + C \varepsilon ^{\beta}$ for some $C>0$. Next, we take a concave function $\varphi(x)$ on $[-K + \varepsilon ^{2\beta} , -1]$ such that $\varphi (-1) > \frac{4 C_0 ^2 + C_0}{\mu}$, $\varphi '(-1)=\varphi ''(-1)=0$ and $\varphi$ is twice differentiable at $x= -K + \varepsilon ^{2\beta}$. For example, we can take $\varphi ''$ equal to some negative constant on $[-K + \varepsilon ^{2\beta} + \delta , -1-\delta]$ for some small $\delta>0$, interpolate on $[-K + \varepsilon ^{2\beta} , -1] \backslash [-K + \varepsilon ^{2\beta} + \delta , -1-\delta]$ by linear functions and integrate them to get $\varphi$ satisfying the conditions above. Because $\varphi '' \leq 0$, $\varphi$ is concave, and $\varphi '(x) = O(\varepsilon ^{-\beta})$. Note that $\varphi (-1 )= O(\varepsilon ^{-\beta})$. Combining this with the conditions $a_\varepsilon \varepsilon ^{-\beta} \to 0$ and $\tilde{u} _0 ^{\varepsilon \prime} =\tilde{u} _0 ^{\varepsilon \prime \prime}=0$, we can prove (\ref{condphi1}) and (\ref{condphi2}). We take $\varphi$ symmetrically on $[1,K]$. Next we construct $\psi$ satisfying \begin{align} \label{condpsi1} &\varepsilon ^\kappa \mu \psi \geq 4( \varepsilon ^\kappa g' )^2 + 4 ( \varepsilon ^\kappa a_\varepsilon \psi ' ) ^2 ,\\ \label{condpsi2} &\varepsilon ^\kappa \mu \psi \geq 4( \varepsilon ^\kappa g' )^2 + 4 ( \varepsilon ^\kappa a_\varepsilon \psi ' ) ^2 + \varepsilon ^\kappa \psi '' + \varepsilon ^\kappa a_\varepsilon \psi ''. \end{align} When $|x|>1$, we take $\psi (x) = \exp (- \frac{\sqrt{\mu} |x|}{2})$.
Then the right-hand side of (\ref{condpsi1}) and the sum of the first, second and fourth terms of the right-hand side of (\ref{condpsi2}) are smaller than $\varepsilon ^\kappa \mu \psi$ for sufficiently small $\varepsilon >0$. The third term of (\ref{condpsi2}) is $\frac{\varepsilon ^\kappa \mu}{4} \psi$. This term is smaller than $\varepsilon ^\kappa \mu \psi $ and larger than $\varepsilon ^\kappa g''$ from the definition of $C_\mu$. We use the conditions (iv) and (v) of (\ref{eq:ini}) here. Let us discuss the case $|x| \leq 1$. We take $\psi \in C^2(\mathbb{R})$ which is twice differentiable at $x=-1$, $\psi '' (-1+\delta) = 0$ for some $\delta \in (0,1)$ and $\psi ''$ is monotonically decreasing on $[-1,-1+\delta]$. For example, we can take a cubic function because we have four conditions for the values of $\psi$, $\psi '$ and $\psi ''$ at $x=-1$ and $x=-1+\delta$. In particular, $\psi$ is positive on $[-1,-1+\delta]$. We can take $\psi$ similarly and symmetrically on $[1-\delta , 1]$. We connect $\psi$ by a concave function on $[-1+\delta , 1-\delta]$ which is twice differentiable at $x=-1+\delta$, $1-\delta$. For example, we can take a quartic function $\psi (x) = ax^4 +bx^2 + c$ for certain $a,b,c \in \mathbb{R}$, because we have six conditions on $\psi$, $\psi '$ and $\psi ''$ and take a symmetric function $\psi$. Moreover, $\psi$ is positive on $\mathbb{R}$. In a similar way as above, we can show (\ref{condpsi1}) and (\ref{condpsi2}) on $[-1,1]$. We use the conditions $\mu \psi (\pm 1) > \psi '' (\pm1)$, the monotone decrease (resp.\ increase) of $\psi ''$ on $[-1, -1+\delta]$ (resp.\ $[1-\delta ,1]$) and $\psi '' <0$ on $[-1+\delta ,1-\delta]$. Summing (\ref{condphi1}) and (\ref{condpsi1}), we have that \begin{align} \mu h \geq 2(u_0 ^{\varepsilon \prime})^2 + 2 ( a_\varepsilon h ' ) ^2 \geq (u_0 ^{\varepsilon \prime} + a_\varepsilon h ' )^2. \nonumber \end{align} Here we use $(a+b)^2 \leq 2a^2 +2b^2$ twice.
Similarly, we get \begin{align} \mu h \geq \{ u _0 ^{\varepsilon \prime}+a_\varepsilon h' \} ^2 + \{ u_0 ^{\varepsilon \prime \prime} + a_\varepsilon h'' \}, \nonumber \end{align} from (\ref{condphi2}) and (\ref{condpsi2}). Note that $\psi '' + \varepsilon ^{\bar{\kappa} -\kappa} 1_{[-1,1]} > g''$ for sufficiently small $\varepsilon$ because of the constant $C_\mu$ and the boundedness of $g''$ on $[-1,1]$. We also use the estimate $\psi '' > g''$ when $|x|>1$, which we showed in the previous paragraph. Here we note that $\varphi$ and $\varepsilon ^\kappa \psi$ depend on $\varepsilon$; however, by construction they are bounded by a constant of order $O(\varepsilon ^{-\beta})$ which is larger than $4C_0 ^2 +C_0$ for sufficiently small $\varepsilon >0$. This and the convergence $a_\varepsilon \varepsilon ^{-\beta} \to 0$ show the condition (v) of (\ref{condh}). We can see (iii) and (iv) from the construction of $\varphi$ and $\psi$. \qed \end{proof} Now we prove that $w_\varepsilon ^\pm$ are super and sub solutions for (\ref{eq:pde}) by applying the maximum principle. See (\ref{condh}) for the precise condition on $h$. Our claim in this subsection is formulated in the following proposition. \begin{proposition} \label{thm61} If we fix a constant $0 < C_1 < \frac{1}{\mu}$ and a positive function $h(x)\in C_b ^2 (\mathbb{R})$ which satisfies (\ref{condh}), then there exists $\varepsilon _0 >0$ such that for all $\varepsilon \in (0,\varepsilon_0]$, $t \in [0, C_1 \varepsilon |\log \varepsilon |)$ and $x \in \mathbb{R}$, we have that $w_{\varepsilon} ^- (t,x) \leq u^{\varepsilon} (t,x) \leq w_{\varepsilon } ^+ (t,x)$ where $u^{\varepsilon}$ is the solution of (\ref{eq:pde}). \end{proposition} Before proving this proposition, we introduce some notation.
For $\xi \neq a_\pm,$ $a_0$, we define the following function: \begin{align} \label{def5-1} A (\tau ,\xi ) := \frac{Y_{\xi \xi } (\tau ,\xi )}{Y_\xi (\tau ,\xi )}, \end{align} where $Y_\xi$ and $Y_{\xi \xi}$ denote the derivatives of $Y$ with respect to $\xi$. We get an ODE: \begin{align} \label{ode6-1} \begin{cases} {Y}_{\xi \tau} (\tau ,\xi ) = {Y}_\xi (\tau ,\xi ) f'( Y (\tau ,\xi ) ),\ \ \ \tau > 0,\\ Y _\xi (0, \xi )=1, \end{cases} \end{align} and we obtain \begin{align} \label{eq5-1} {Y}_\xi (\tau ,\xi ) = \exp \left( \int _0 ^\tau f'\left( Y (s ,\xi )\right) ds \right),\ \ \ \tau \geq0, \end{align} from (\ref{ode6-1}). In particular, ${Y}_\xi$ is positive and thus we can define $A (\tau ,\xi )$ as (\ref{def5-1}). We get \begin{align} A (\tau, \xi ) = \int _0 ^\tau Y_\xi (s,\xi ) f''\left( Y(s,\xi )\right) ds,\ \ \ \tau \geq 0, \nonumber \end{align} by computing $Y_{\xi \xi}$ from (\ref{eq5-1}). Now we prove Proposition \ref{thm61} by using the maximum principle. \begin{proof}[Proof of Proposition \ref{thm61}] We fix $0 < C_1 < \frac{1}{\mu}$. First, we need to check that the initial conditions $\xi$ in $w_\varepsilon ^\pm$ lie in $[-2C_0,2C_0]$. When $t \in [0,C_1 \varepsilon | \log \varepsilon |]$ and $\varepsilon$ is sufficiently small, we have that \begin{align} u_0 ^{\varepsilon} + \varepsilon h(x) ( e^{\frac{\mu t}{\varepsilon}} -1 ) \leq C_0 + h(x) (\varepsilon ^{1-C_1 \mu} - \varepsilon) \leq 2C_0 \nonumber \end{align} where $h$ is taken as in Lemma \ref{consth} and $C_1< \frac{1}{\mu}$. Here we use the condition (v) of (\ref{condh}). In the same way, we can estimate $u_0 ^{\varepsilon} - \varepsilon h(x) ( e^{\frac{\mu t}{\varepsilon}} -1 )\geq -2C_0$. Let $\mathcal{L}$ be the operator defined by \begin{align} \mathcal{L} (u)(t,x) := \dot{u}(t,x) - \Delta u(t,x) - \frac{1}{\varepsilon}f(u(t,x)).
\nonumber \end{align} From the maximum principle, if $\mathcal{L} (w_{\varepsilon} ^+)\geq 0$ then $w_{\varepsilon } ^+\geq u^{\varepsilon }$ (see Theorem 9 in \cite{fr} and proof of Lemma 2.2 in \cite{f} for the comparison of solutions from the maximum principle). A direct computation gives us \begin{align} \mathcal{L} (w_{\varepsilon} ^+)(t&,x) = \frac{1}{\varepsilon} Y_\tau + \mu h e^{\frac{\mu t}{\varepsilon}} Y_\xi - \{ \Delta u_0 ^{\varepsilon} +\varepsilon \Delta h (e^{\frac{\mu t}{\varepsilon}} -1) \} Y_\xi \nonumber \\ &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ - \{ u_0 ^{\varepsilon \prime} +\varepsilon h' (e^{\frac{\mu t}{\varepsilon}} -1) \} ^2 Y_{\xi \xi} - \frac{1}{\varepsilon}f(Y )\nonumber \\ &= \mu h e^{\frac{\mu t}{\varepsilon}} Y_\xi - \{ \Delta u_0 ^{\varepsilon } +\varepsilon \Delta h (e^{\frac{\mu t}{\varepsilon}} -1) \} Y_\xi - \{ u _0 ^{\varepsilon \prime} +\varepsilon h' (e^{\frac{\mu t}{\varepsilon}} -1) \} ^2 Y_{\xi \xi} \nonumber \\ &= \left[ \mu h e^{\frac{\mu t}{\varepsilon}} - \{ \Delta u_0 ^{\varepsilon } +\varepsilon \Delta h (e^{\frac{\mu t}{\varepsilon}} -1) \} - \{ u_0 ^{\varepsilon \prime} +\varepsilon h' (e^{\frac{\mu t}{\varepsilon}} -1) \} ^2 A \right ] Y_{\xi} \nonumber \\ &\geq \left[ \left \{ \mu h - \{ u_0 ^{\varepsilon \prime} +\varepsilon h' (e^{\frac{\mu t}{\varepsilon}} -1) \} ^2 \right \} e^{\frac{\mu t}{\varepsilon}} - \{ \Delta u_0 ^{\varepsilon } +\varepsilon \Delta h (e^{\frac{\mu t}{\varepsilon}} -1) \} \right] Y_{\xi} \nonumber \\ &\geq \left [ \mu h - \{ u_0 ^{\varepsilon \prime} +\varepsilon h' (e^{\frac{\mu t}{\varepsilon}} -1) \} ^2 - \{ \Delta u_0 ^{\varepsilon } +\varepsilon \Delta h (e^{\frac{\mu t}{\varepsilon}} -1) \} \right ] Y_{\xi} \nonumber \end{align} for all $x \in \mathbb{R}$ and $t\in[0,C_1 \varepsilon |\log \varepsilon |]$. Note that the function $Y_{\xi} $ is positive. The definition of $A$ gives us the third equality. 
The fourth inequality comes from Lemma 3.7 of \cite{ham} and the fifth inequality comes from the condition (i) of (\ref{condh}). From (ii) of (\ref{condh}), we see that $\mathcal{L} (w_{\varepsilon} ^+)\geq 0$. So we have proved that $w_{\varepsilon } ^+ \geq u^{\varepsilon }$ holds for all $t \in [0, C_1 \varepsilon |\log \varepsilon |]$ and $x \in \mathbb{R}$. The reverse inequality $w_{\varepsilon} ^- \leq u^{\varepsilon}$ can be proved in a similar way. \qed \end{proof} \subsection{The generation of interface in the deterministic case} Now we formulate and prove the conclusion of this section. \begin{proposition} \label{pro21} Let $u^\varepsilon$ be the solution of PDE (\ref{eq:pde}) and let $\mu$ be as defined in (\ref{reaction}). Then there exist $K>1$, $\kappa > \frac{1}{2}$, $C>0$ and $\widetilde{C}>0$ such that, for sufficiently large $C_1 \in (0, \frac{1}{\mu})$, any $0<\bar{\beta}< 1-C_1 \mu$ and sufficiently small $\varepsilon >0$, the following hold.\\ {\rm (i)} $a_- - \varepsilon ^\kappa \leq u ^\varepsilon (C_1 \varepsilon |\log \varepsilon |,x) \leq a_+ +\varepsilon ^\kappa \ (x \in [-K,K])$, \\ {\rm (ii)} $u ^\varepsilon (C_1 \varepsilon |\log \varepsilon |,x) \geq a_+ -\varepsilon ^\kappa$ (for all $x \in [-K,K]$ such that $u_0 ^{\varepsilon}(x) \geq a_0 + C \varepsilon ^{\bar{\beta}}$), $u ^\varepsilon (C_1 \varepsilon |\log \varepsilon |,x) \leq a_- +\varepsilon ^\kappa$ (for all $x \in [-K,K]$ such that $u_0 ^{\varepsilon}(x) \leq a_0 - C \varepsilon ^{\bar{\beta}}$),\\ {\rm (iii)} $| u ^\varepsilon (C_1 \varepsilon |\log \varepsilon |,x) - a_+ | \leq \widetilde{C} \varepsilon ^\kappa \exp (-\frac{\sqrt{\mu} x}{2}) \ (x\geq K )$, \ $| u ^\varepsilon (C_1 \varepsilon |\log \varepsilon |,x) - a_- | \leq \widetilde{C} \varepsilon ^\kappa \exp (\frac{\sqrt{\mu} x}{2}) \ (x\leq -K)$.
\end{proposition} \begin{proof} (i) Propositions \ref{thm32} and \ref{thm61} imply that \begin{align} u^\varepsilon (C_1\varepsilon |\log \varepsilon|,x) &\leq w_\varepsilon ^+ (C_1\varepsilon |\log \varepsilon|,x) \nonumber \\ &\leq Y \left( C_1 |\log \varepsilon| , u_0 ^{\varepsilon} (x) + h(x) ( \varepsilon ^{1-C_1 \mu}-\varepsilon ) \right) \leq a_+ + \varepsilon ^\kappa . \nonumber \end{align} Recall that the estimate $|u_0 ^{\varepsilon} (x) + h(x) ( \varepsilon ^{1-C_1 \mu}-\varepsilon )| \leq 2C_0$ holds for all $x \in \mathbb{R}$, $C_1\in (0,\frac{1}{\mu})$ and sufficiently small $\varepsilon >0$. The proof of the lower bound is similar.\\ (ii) We only show the first estimate. From Proposition \ref{thm61}, we obtain \begin{align} u^\varepsilon (C_1\varepsilon |\log \varepsilon| & ,x) \geq w_\varepsilon ^- (C_1\varepsilon |\log \varepsilon| ,x) = Y \left( C_1 |\log \varepsilon| , u_0 ^{\varepsilon} (x) - h(x) ( \varepsilon ^{1-C_1 \mu}-\varepsilon ) \right). \nonumber \end{align} Here we need to observe the neighborhood of $\xi_0$, the zero of $u_0 ^{\varepsilon}$. The condition $u_0 ^{\varepsilon} (x) - h(x) ( \varepsilon ^{1-C_1 \mu}-\varepsilon ) \geq \varepsilon^\alpha$ is equivalent to \begin{align} \label{est7-1} u_0 ^{\varepsilon} (x) \geq h(x) ( \varepsilon ^{1-C_1 \mu}-\varepsilon ) + \varepsilon^\alpha , \end{align} for $\alpha >0$. Thus there exists a positive constant $C>0$ such that $u_0 ^{\varepsilon} (x) \geq C\varepsilon ^{\bar{\beta}}$ implies (\ref{est7-1}) for $\alpha >\bar{\beta}$, by taking $2\beta =1-C_1 \mu -\bar{\beta}$ in the construction of $h$ (see the proof of Lemma \ref{consth}).\\ (iii) We only show the first.
From the definition of $h \in C_b ^2 (\mathbb{R})$, we immediately see that \begin{align} u^\varepsilon (C_1\varepsilon |\log \varepsilon| ,x) -a_+ &\leq w_\varepsilon ^+ (C_1\varepsilon |\log \varepsilon| ,x) - a_+ \nonumber \\ &=Y \left( C_1 |\log \varepsilon| , u_0 ^{\varepsilon} (x) + h(x) ( \varepsilon ^{1-C_1 \mu}-\varepsilon ) \right)-a_+\nonumber \\ &\leq u_0 ^{\varepsilon} (x) + h(x) ( \varepsilon ^{1-C_1 \mu}-\varepsilon )-a_+ \leq C \varepsilon^\kappa \exp \left ( -\frac{\sqrt{\mu} x}{2}\right ), \nonumber \end{align} for all $x>K$ from the condition (iii) of (\ref{condh}). The first inequality comes from Proposition \ref{thm61}. We also have that \begin{align} u^\varepsilon (&C_1\varepsilon |\log \varepsilon| ,x) -a_+ \geq - C \varepsilon^\kappa \exp \left ( -\frac{\sqrt{\mu} x}{2}\right ) . \nonumber \end{align} We get $|u^\varepsilon (C_1\varepsilon |\log \varepsilon| ,x) -a_- | \leq C \varepsilon^\kappa \exp (\frac{\sqrt{\mu} x}{2})$ for $x<-K$ in a similar way. \qed \end{proof} \section{Proof of Theorem \ref{thm23}} \label{sec3} In this section, we consider the SPDE (\ref{eq:spde}). Recall that the external noise term $\dot{W}^\varepsilon _t (x)$ is given by $\varepsilon ^\gamma a(x) \dot{W} _t (x)$ where $\dot{W} _t (x)$ is a space-time white noise and $a\in C_0^\infty ([-1,1])$, and that the reaction term $f$ satisfies (\ref{reaction}) and the initial value $u^\varepsilon _0$ satisfies (\ref{eq:ini}). Throughout this section, we define the constant $C_f$ and fix $\kappa ' >1$, and assume that the constants $C_1$, $\alpha$ and $\kappa$ satisfy \begin{align} \label{const} \begin{cases} C_f := \underset{u \in [-2C_0,2C_0]}{\sup} f'(u),& \\ \alpha >\frac{1}{2},\ \kappa > \kappa ' >1, & \\ \frac{\alpha}{\mu} + \frac{\kappa}{p} +\bar{\delta} \leq C_1 \leq \frac{1}{\mu}-\bar{\delta}, \end{cases} \end{align} for sufficiently small $\bar{\delta}>0$. The constants $p$ and $\mu$ are introduced in (\ref{reaction}), and $C_0$ is introduced in (\ref{eq:ini}).
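Note that the admissible interval for $C_1$ in (\ref{const}) is nonempty: multiplying the last condition by $\mu$, it is equivalent to \begin{align} \alpha + \frac{\mu \kappa}{p} \leq 1 - 2\mu \bar{\delta}, \nonumber \end{align} which holds, for instance, by taking $\alpha$ close to $\frac{1}{2}$, $p$ sufficiently large and $\bar{\delta}$ sufficiently small.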
In particular, the constant $C_1>0$ is the same constant as in Proposition \ref{pro21}. \subsection{Preliminary results} At the beginning of this section, we recall a result about a property of the solution $u^\varepsilon$; see Section 2 of \cite{f} or Theorem 3.1 of \cite{f94}. \begin{proposition} \label{thm31} If $|u^\varepsilon _0 (x)|\leq K$, then we have that \begin{align} \lim_{\varepsilon \to 0} P\left ( |u^\varepsilon (t,x)| \leq \max \{K,1\} +\delta \ for\ all\ t \in [0,\varepsilon ^{-n}]\ and\ all\ x\in \mathbb{R}\right ) =1, \nonumber \end{align} for all $n$ and $\delta >0$. \end{proposition} From this result, we see that the solution $u^\varepsilon$ stays in the interval $[-2C_0 ,2C_0]$ with high probability. We introduce a stopping time \begin{align} \tau _1 :=\inf \{t >0 | |u^\varepsilon (t,x)| > 2C_0\ for\ some\ x\in \mathbb{R} \}, \nonumber \end{align} so that $u^\varepsilon$ stays in $[-2C_0 ,2C_0]$ until the time $\tau _1$. The probability that $\tau _1 \geq \varepsilon ^{-n}$ occurs tends to 1 as $\varepsilon \to 0$ for each $n\in \mathbb{N}$ from Proposition \ref{thm31}. Next we prove that the solution $u^\varepsilon$ of (\ref{eq:spde}) is close to a solution $u$ of (\ref{eq:pde}) if the initial values are the same. The proof is based on the proof of Proposition 12.1 of \cite{dpz}. As a preparation, we show an estimate for a stochastic convolution. Note that we apply this result for small $T$ later. \begin{lemma} \label{lem87} For each $p>4$ and $a(x)\in C_0 ^\infty (\mathbb{R})$, there exists a positive constant $C_{a,p}>0$ such that \begin{align} E\left [ \underset{t\in[0,T]}{\sup}\left \| \int _0^t \langle S_{t-s} a(\cdot), dW_s (\cdot )\rangle \right \| _{L^2(\mathbb{R})}^p\right ] \leq C_{a,p} T, \nonumber \end{align} holds for every $0<T<1$. \end{lemma} \begin{proof} We use the factorization method (see Proposition 5.9, Theorem 5.10 and Proposition 7.3 of \cite{dpz}).
Note that $\| S_{t}a \| _{HS} ^2 \leq C t^{-\frac{1}{2}} \| a (\cdot ) \| _{L^2(\mathbb{R})} ^2$ for some constant $C>0$, where $\| \cdot \| _{HS}$ is the Hilbert-Schmidt norm on $L^2(\mathbb{R})$ and $a:L^2(\mathbb{R}) \to L^2(\mathbb{R})$ is the multiplication operator defined by $(af)(x):=a(x)f(x)$. Indeed, from the Chapman-Kolmogorov equation, we have that $\| S_{t}a \| _{HS} ^2 = \int _\mathbb{R} \int _\mathbb{R} p(t,x,y)^2 a(y)^2dxdy = \int _\mathbb{R} p(2t,y,y) a(y)^2dy = \frac{1}{\sqrt{8\pi t}} \| a (\cdot ) \| _{L^2(\mathbb{R})} ^2$. From this observation and the stochastic Fubini theorem, we have that \begin{align} \int _0^t \langle S_{t-s} a(\cdot), dW_s (\cdot )\rangle = \frac{\sin \pi \alpha}{\pi} \int _0 ^t (t-s)^{\alpha -1} S_{t-s} Y(s)ds, \nonumber \end{align} where \begin{align} Y(s)=\int _0 ^s (s-r )^{-\alpha} \langle S_{s-r} a(\cdot), dW_r (\cdot )\rangle \nonumber \end{align} for each $\alpha \in (\frac{1}{p} , \frac{1}{4})$. Taking $q>1$ such that $\frac{1}{p}+\frac{1}{q} = 1$, we get \begin{align} \underset{t\in [0,T]}{\sup} \left \| \int _0^t \langle S_{t-s} a(\cdot), dW_s (\cdot )\rangle \right \| _{ L^2(\mathbb{R})} ^p & \leq \left ( \frac{\sin \pi \alpha}{\pi}\right ) ^p \underset{t\in [0,T]}{\sup} \left ( \int _0 ^t |t-s|^{\alpha -1} \|Y(s)\|_{ L^2(\mathbb{R})} ds \right )^p \nonumber \\ & \leq C \underset{t\in [0,T]}{\sup} \left ( \int _0 ^t |t-s|^{q(\alpha -1)} ds \right )^{\frac{p}{q}} \cdot \left ( \int _0 ^T \|Y(s)\|_{ L^2(\mathbb{R})} ^p ds \right ) \nonumber \\ & \leq C \int _0 ^T \|Y(s)\|_{ L^2(\mathbb{R})} ^p ds \nonumber \end{align} from H\"{o}lder's inequality, because $q(\alpha -1) > -1$ and $T \in (0,1)$.
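Indeed, since $\frac{1}{p}+\frac{1}{q}=1$, the condition $q(\alpha -1) > -1$ is equivalent to \begin{align} \alpha > 1 - \frac{1}{q} = \frac{1}{p}, \nonumber \end{align} which holds by the choice $\alpha \in (\frac{1}{p}, \frac{1}{4})$.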
Next we derive an estimate for $Y(s)$: \begin{align} E\left [ \|Y(s)\| _{L^2(\mathbb{R})} ^p\right ] & \leq E\left [ \underset{s' \in [0,s]}{\sup}\left \| \int _0 ^{s'} (s-r)^{-\alpha} \langle S_{s-r} a(\cdot), dW_r (\cdot )\rangle \right \| _{L^2(\mathbb{R})} ^p\right ] \nonumber \\ & \leq C_p \left ( \int _0 ^{s} (s-r )^{-2\alpha} \left \| S_{s-r} a\right \| _{HS} ^2 dr \right )^{\frac{p}{2}} \leq C_{a,p} s^{\frac{p}{2}(\frac{1}{2} -2\alpha)}. \nonumber \end{align} We have used Burkholder's inequality in the second line. Summing up these estimates and noting $\frac{1}{2} -2\alpha > 0$, we obtain the lemma. \qed \end{proof} \begin{proposition} \label{thm81} Let $u(t,x)$ be a solution of PDE (\ref{eq:pde}) where $f$ satisfies (\ref{reaction}) and the initial value $u^\varepsilon _0$ satisfies (\ref{eq:ini}). Then, we have that \begin{align} \lim _{\varepsilon \downarrow 0}P\left( \underset{t\in [0, \frac{ \varepsilon}{\mu} |\log \varepsilon | ]}{\sup} \| u^\varepsilon (t,\cdot ) - u(t,\cdot) \|_{L^2(\mathbb{R})} \leq \varepsilon ^{\kappa } \right) =1, \nonumber \end{align} where $\kappa < \gamma -\frac{C_f}{\mu}$. \end{proposition} \begin{proof} First, we consider the mild form \begin{align} u^\varepsilon(t) -u(t)=\frac{1}{\varepsilon} \int _0^t S_{t-s}\{ f(u^\varepsilon (s)) -f(u (s)) \} ds +u_1(t) , \nonumber \end{align} where $u_1(t):=\varepsilon ^\gamma \int _0^t \langle S_{t-s}a(\cdot ),dW_s(\cdot) \rangle$. We now consider the stopping times $\sigma := \inf \{ t>0 | \| u^\varepsilon(t) -u(t) \|_{L^2} > \varepsilon ^{\kappa } \}$ and $\tau_2 := \tau _1\wedge \sigma$.
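Note that, up to the time $\tau _2$, both $u^\varepsilon$ and $u$ take values in $[-2C_0,2C_0]$, and that the heat semigroup $S_t$ is a contraction on $L^2(\mathbb{R})$. Hence, treating $f$ as Lipschitz continuous on $[-2C_0,2C_0]$ with constant $C_f$, we can estimate \begin{align} \left \| S_{t-s}\{ f(u^\varepsilon (s)) -f(u (s)) \} \right \| _{L^2} \leq C_f \| u^\varepsilon (s) -u (s) \| _{L^2},\ \ \ 0 \leq s \leq t \wedge \tau _2 . \nonumber \end{align}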
From Proposition \ref{thm31} and the definition of the positive constant $C_f >0$ given in (\ref{const}), we obtain \begin{align} \| u^\varepsilon(t \wedge \tau_2) -u(&t \wedge \tau_2) \|_{L^2} \leq \frac{C_f}{\varepsilon} \int _0^{t \wedge \tau_2} \| u^\varepsilon (s) -u (s) \|_{L^2} ds + \| u_1(t \wedge \tau _2) \|_{L^2} \nonumber \\ &\leq \frac{C_f}{\varepsilon} \int _0^t \| u^\varepsilon (s \wedge \tau_2) -u (s \wedge \tau_2) \|_{L^2} ds + \| u_1(t \wedge \tau _2) \|_{L^2}. \nonumber \end{align} From Gronwall's inequality, we have that \begin{align} \| u^\varepsilon(t \wedge \tau_2) -u(t \wedge \tau_2) \|_{L^2} & \leq \varepsilon ^\gamma \exp \left (\frac{C_f T}{\varepsilon} \right) \cdot \underset{t\in [0, T]}{\sup}\| \varepsilon ^{-\gamma} u_1(t \wedge \tau _2) \|_{L^2}\nonumber \\ & \leq \varepsilon ^\gamma \exp \left (\frac{C_f T}{\varepsilon} \right) \cdot \underset{t\in [0, T]}{\sup}\|\varepsilon ^{-\gamma} u_1(t) \|_{L^2} \nonumber \end{align} for each $T>0$. From the estimate in Lemma \ref{lem87}, we obtain \begin{align} E[\| u^\varepsilon(T \wedge \tau_2) -u(T \wedge \tau_2) \|_{L^2} ^p] \leq C_p \varepsilon ^{p \gamma} \exp \left (\frac{pC_f T}{\varepsilon} \right) T \nonumber \end{align} for every $p>4$ and $0<T<1$. As a result, for sufficiently large $p$, we obtain \begin{align} P(\sigma \leq \frac{ \varepsilon}{\mu} |\log \varepsilon |) &\leq P(\tau _2 \leq \frac{ \varepsilon}{\mu} |\log \varepsilon |) \leq \varepsilon ^{-p\kappa } E[\| u^\varepsilon(\frac{ \varepsilon}{\mu} |\log \varepsilon | \wedge \tau_2) -u(\frac{ \varepsilon}{\mu} |\log \varepsilon | \wedge \tau_2) \|_{L^2} ^p]\nonumber \\ &\leq C \varepsilon ^{p(\gamma -\kappa -\frac{C_f}{\mu}) +1} |\log \varepsilon | \nonumber \end{align} from Chebyshev inequality with the choice of $T=\frac{ \varepsilon}{\mu} |\log \varepsilon |$. 
The right-hand side is of order $O(\varepsilon ^{p(\gamma -\kappa -\frac{C_f}{\mu}) +1} |\log \varepsilon |)$ and hence converges to $0$ as $\varepsilon \to 0$, since $p(\gamma -\kappa -\frac{C_f}{\mu}) +1$ is strictly positive by the conditions on $\gamma$ and $\kappa$. This estimate implies the conclusion. \qed \end{proof} Next we need to modify Proposition \ref{thm31}. The outline of the proof is similar to that of Theorem 2.1 in \cite{f}. First we consider the stochastic process $u_1(t)$ from the proof of Proposition \ref{thm81}. Here $u_1$ satisfies the stochastic heat equation: \begin{align} \begin{cases} \dot{u}_1 (t,x) &= \Delta u_1 (t,x)+\varepsilon ^\gamma a(x) \dot{W}_t(x) ,\ \ \ t > 0,\ x\in \mathbb{R}, \nonumber \\ u_1 (0,x) &= 0,\ \ \ x\in \mathbb{R}. \nonumber \end{cases} \end{align} Now we refer to a result which asserts that the perturbation of the noise is very small; see Lemma 2.1 in \cite{f}. \begin{lemma} \label{lem81} There exists a random variable $Y(\omega) \in \cap _{p\geq 1} L^p (\Omega )$ such that \begin{align} |u_1 (t,x)| \leq \varepsilon ^\gamma Y(\omega), \ \ \ t\in [0,1],\ x \in \mathbb{R},\ 0<\varepsilon <1.
\nonumber \end{align} \end{lemma} Next we consider the PDE \begin{align} \begin{cases} \dot{\bar{u}}_\pm ^{\varepsilon , \delta} (t,x) &= \Delta \bar{u}_\pm ^{\varepsilon , \delta} (t,x)+\displaystyle{\frac{1}{\varepsilon}} f_\pm ^\delta( \bar{u}_\pm ^{\varepsilon , \delta} (t,x) ) ,\ \ \ t > 0,\ x\in \mathbb{R}, \nonumber \\ \bar{u}_\pm ^{\varepsilon , \delta} (0,x) &= u_0 ^\varepsilon (x)\pm \delta ,\ \ \ x\in \mathbb{R}, \nonumber \end{cases} \end{align} for small $\delta >0$, where the functions $f_\pm ^\delta \in C^2(\mathbb{R})$ satisfy the following conditions: \begin{align} f_+ ^\delta(u) \geq \sup _{|v|\leq \delta} f(u+v),\ f_+ ^\delta(\pm 1+\delta) =0,\ f_+ ^\delta(-\delta)=0,\ \frac{d}{du}f_+ ^\delta (-\delta)=\mu , \nonumber \\ f_- ^\delta(u) \leq \inf _{|v|\leq \delta} f(u+v),\ f_- ^\delta(\pm 1-\delta) =0,\ f_- ^\delta(\delta)=0,\ \frac{d}{du}f_- ^\delta (\delta)=\mu , \nonumber \end{align} and $u_0^\varepsilon$ satisfies (\ref{eq:ini}). Note that we choose the reaction terms $f_\pm ^\delta$ to satisfy (\ref{reaction2}). Thus we can apply the result of Section \ref{sec2} to the solutions $\bar{u}_\pm ^{\varepsilon , \delta}$. We set the stopping time $\tau _3:=\inf \{ t>0 | |u_1(t,x)|>\delta\ for\ some \ x\in \mathbb{R}\}$. \begin{lemma} \label{lem82} On the event $\{ \omega \in \Omega | \tau_3 \geq 1 \}$, we have that \begin{align} \bar{u}_- ^{\varepsilon ,\delta}(t,x) -\delta \leq u^\varepsilon (t,x) \leq \bar{u}_+ ^{\varepsilon ,\delta}(t,x) +\delta, \ \ \ t\in [0,1],\ x \in \mathbb{R}, \nonumber \end{align} where $u^\varepsilon$ is the solution of (\ref{eq:spde}). \end{lemma} \begin{proof} We only consider the upper bound on $\{ \omega \in \Omega | \tau_3 \geq 1 \}$.
We consider the PDE \begin{align} \begin{cases} \dot{u}_2 (t,x) &= \Delta u_2 (t,x)+\displaystyle{\frac{1}{\varepsilon}}f(u_1 +u_2) ,\ \ \ t > 0,\ x\in \mathbb{R}, \nonumber \\ u_2 (0,x) &= u_0 ^\varepsilon (x),\ \ \ x\in \mathbb{R}, \nonumber \end{cases} \end{align} where $u_0 ^\varepsilon$ satisfies (\ref{eq:ini}), and take the function $v(t,x)= \bar{u}_+ ^{\varepsilon ,\delta}(t,x)-u_2 (t,x)$. Here $u_1$ is defined in the proof of Proposition \ref{thm81}. Note that $u^\varepsilon = u_1 + u_2$. The rest of this proof is similar to that of Lemma 2.2 of \cite{f}. \qed \end{proof} \begin{proposition} \label{thm82} Let $u^\varepsilon$ be the solution of (\ref{eq:spde}) and assume that the initial value $u_0^\varepsilon$ satisfies (\ref{eq:ini}). Then there exist some positive constants $C_1, \ C>0$ and $K>1$ such that \begin{align} \lim _{\varepsilon \downarrow 0}P\left( | u^\varepsilon (t,x) -1 | \leq \varepsilon ^{\kappa } \left( C\exp \left( -\frac{\sqrt{\mu} x}{2}\right) +1 \right ) \ for \ all \ t \in [0, C_1 \varepsilon |\log \varepsilon |],\ x \geq K \right) =1, \nonumber \\ \lim _{\varepsilon \downarrow 0}P\left( | u^\varepsilon (t,x) +1 | \leq \varepsilon ^{\kappa }\left( C\exp \left( \frac{\sqrt{\mu} x}{2}\right) +1 \right ) \ for \ all \ t \in [0, C_1 \varepsilon |\log \varepsilon |],\ x \leq -K \right) =1, \nonumber \end{align} for all $0< \kappa <\gamma$. \end{proposition} \begin{proof} We only prove the first one. 
By taking $\delta =\varepsilon^{\kappa }$ and $K$ as in Proposition \ref{pro21}, we obtain \begin{align} &P \left ( | u^\varepsilon (t,x) -1 | \leq \varepsilon ^{\kappa } \left ( C\exp \left ( -\frac{\sqrt{\mu} x}{2} \right ) +1\right ) \ for \ all \ t \in [0, C_1 \varepsilon |\log \varepsilon |],\ x \geq K \right ) \nonumber \\ &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \geq P(\varepsilon ^\gamma Y(\omega )\leq \varepsilon ^{\kappa }) \geq 1- \varepsilon ^{p (\gamma - \kappa )}E[Y^p] \nonumber \end{align} for all $p\geq1$ from Lemma \ref{lem82}, Chebyshev inequality and Proposition \ref{pro21}. We apply Proposition \ref{pro21} for the solutions $\bar{u}_\pm ^{\varepsilon , \delta}$. \qed \end{proof} \begin{proposition} \label{pro81} Let $\underbar{u}$ and $\bar{u}$ be the solutions of (\ref{eq:spde}) which satisfy $\underbar{u}(0,x) \leq \bar{u}(0,x)$ for all $x \in \mathbb{R}$. Then $\underbar{u}(t,x) \leq \bar{u}(t,x)$ holds for all $x \in \mathbb{R}$ and $t \in [0,\infty )$ $P$-a.s. \end{proposition} \begin{proof} We can show this proposition by applying the maximum principle in a similar way to the proof of Lemma \ref{lem82}. \qed \end{proof} \subsection{Energy estimates} Let $u(t,x)$ be a solution of PDE (\ref{eq:pde}) where $f$ satisfies (\ref{reaction}) and $u^\varepsilon _0$ satisfies (\ref{eq:ini}). We set $t = C_1 \varepsilon |\log \varepsilon |$, the generation time of $u$, where the constant $C_1\in (0, \frac{1}{\mu})$ is given in Proposition \ref{pro21}. Because $\kappa >1$, we immediately see that \begin{align} \label{est810} dist(u(C_1\varepsilon |\log \varepsilon |,\cdot ),M^\varepsilon )\leq C\varepsilon ^{\bar{\beta}}, \end{align} from Proposition \ref{pro21}. Proposition \ref{thm81} and (\ref{est810}) imply that the solution $u^\varepsilon$ of SPDE (\ref{eq:spde}) is in the $C(\varepsilon ^{\bar{\beta}} + \varepsilon ^{\kappa })$-neighborhood of $M^\varepsilon$, though this is not enough.
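Indeed, on the event appearing in Proposition \ref{thm81}, this is a consequence of the triangle inequality: \begin{align} dist(u^\varepsilon (C_1\varepsilon |\log \varepsilon |,\cdot ),M^\varepsilon ) &\leq \| u^\varepsilon (C_1\varepsilon |\log \varepsilon |,\cdot ) - u(C_1\varepsilon |\log \varepsilon |,\cdot ) \|_{L^2(\mathbb{R})} + dist(u(C_1\varepsilon |\log \varepsilon |,\cdot ),M^\varepsilon ) \nonumber \\ &\leq \varepsilon ^{\kappa } + C\varepsilon ^{\bar{\beta}}. \nonumber \end{align}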
In order to show the main result, we need much better estimates. We now construct super and sub solutions of $u^\varepsilon$. We see that \begin{align} u^\varepsilon(t,x) \leq \bar{u}_+ ^{\varepsilon ,\delta}(t,x)+\delta ,\ \ \ t\in [0,T \wedge \tau _3] ,\ x\in \mathbb{R}, \nonumber \end{align} from Lemmas \ref{lem81} and \ref{lem82}. Recall that $\tau _3 := \inf \{t>0| |u_1 (t,x)| > \delta \ for\ some\ x \in \mathbb{R} \}$. If $\tau _3 \geq C_1 \varepsilon |\log \varepsilon |$, then \begin{align} \bar{u}_+ ^{\varepsilon ,\delta}(C_1 \varepsilon |\log \varepsilon | \wedge \tau _3,x) & = \bar{u}_+ ^{\varepsilon ,\delta}(C_1 \varepsilon |\log \varepsilon | ,x)\nonumber \\ &\leq m(\varepsilon ^{-\frac{1}{2}} (x-\xi_0 - C \varepsilon^{\bar{\beta}})) + C' \varepsilon ^{\kappa } \nonumber \end{align} by applying Proposition \ref{pro21} to $\bar{u}_+ ^{\varepsilon ,\delta}$ for $\delta = \varepsilon ^{\kappa }$ and $\kappa >0$. The function $m$ is defined in (\ref{ode11}). Moreover, we see that \begin{align} \int _{\mathbb{R} \backslash [-2,2]} | u^\varepsilon (C_1 \varepsilon |\log \varepsilon |,x) - m(\varepsilon ^{-\frac{1}{2}} x)|^2 dx \leq \varepsilon ^{2\kappa } \nonumber \end{align} from Propositions \ref{pro21} and \ref{thm81}. We consider $t= C_1\varepsilon |\log \varepsilon |$ as an initial time. Namely, we consider the SPDE (\ref{eq:spde}) with $u_0 ^\varepsilon$ replaced by $u ^\varepsilon (C_1\varepsilon |\log \varepsilon |, x)$. We can construct super and sub solutions $u_\pm ^\varepsilon$ for SPDE (\ref{eq:spde}) which satisfy \begin{align} \label{in} \begin{cases} \text{(i)} u_\pm ^\varepsilon \text{ obey the SPDE (\ref{eq:spde}) for all }t > 0, \\ \text{(ii)} dist(u_\pm ^\varepsilon (t,\cdot),M^\varepsilon )\leq C \varepsilon ^{\kappa }\ (\text{for all }t\in [ 0 , (\frac{1}{\mu}-C_1)\varepsilon |\log \varepsilon |\wedge \tau _2 \wedge \tau _3]).
\end{cases} \end{align} Indeed, by combining the estimates above, we take the initial value of the super solution as follows: \begin{align} \label{ini3} u_+ ^\varepsilon (0,x):=(1 - \chi_1 (x))u^\varepsilon (0,x) + \chi_2 (x) (m(\varepsilon ^{-\frac{1}{2}} (x-\xi_0 - C \varepsilon^{\bar{\beta}})) + C' \varepsilon ^{\kappa }), \end{align} where $\chi _1$ and $\chi _2$ are some positive cutoff functions in $C_0 ^\infty (\mathbb{R})$ which take values in $[0,1]$. The function $\chi _1$ takes the value 1 when $x\in [-1,1]$ and 0 when $x\in \mathbb{R} \backslash [-2,2]$. The function $\chi _2$ takes the value 1 when $x\in [-2,2]$ and 0 when $x\in \mathbb{R} \backslash [-3,3]$. One can easily check that $u_+ ^\varepsilon (0,x)\geq u ^\varepsilon (0,x)$ and that $u_+ ^\varepsilon (t,x)$ dominates the solution $u ^\varepsilon (t,x)$ for all $t \in [0,\varepsilon ^{-2\gamma -\frac{1}{2}}T]$ and $x\in \mathbb{R}$ from Proposition \ref{pro81}. The super solution $u_+ ^\varepsilon (t,x)$ satisfies (ii) of (\ref{in}) because of Lemma 9.1 of \cite{f} and Proposition \ref{thm81} in this section. We can construct $u_- ^\varepsilon$ in a similar way. Indeed, we can take the initial value of $u_- ^\varepsilon$ as \begin{align} u_- ^\varepsilon (0,x):=(1 - \chi_1 (x))u^\varepsilon (0,x) + \chi_2 (x) (m(\varepsilon ^{-\frac{1}{2}} (x-\xi_0 + C \varepsilon^{\bar{\beta}})) - C' \varepsilon ^{\kappa }). \end{align} The functions $\chi_1$ and $\chi_2$ are the same as above, and $u_- ^\varepsilon$ also satisfies (ii) of (\ref{in}). Now we show that $u_\pm ^\varepsilon$ stay in the $\varepsilon ^{\kappa '}$-neighborhood of $M^\varepsilon$ in the $L^2$-sense, not only for $t \in [0,(\frac{1}{\mu}-C_1)\varepsilon |\log \varepsilon |]$ but also for $t \in [0,\varepsilon ^{-2\gamma -\frac{1}{2}}T]$ for some $1<\kappa ' <\kappa $, with high probability. In order to show this, we prove that $u_\pm ^\varepsilon$ enter the $\varepsilon ^{\kappa '}$-neighborhood of $M^\varepsilon$ in the $H^1$-sense.
We only consider the super solution $u_+ ^\varepsilon$. We change the scale of the solution $u_+ ^\varepsilon$ in the time and space variables as follows: \begin{align} v(t,x) := u_+ ^\varepsilon (\varepsilon ^{-2\gamma-\frac{1}{2}}t,\varepsilon ^{\frac{1}{2}}x),\ \ \ t\in[0,\infty ), \ x\in \mathbb{R}. \nonumber \end{align} We define an approximation $u^\delta (t,x):= (\rho ^{\delta(\cdot)} (\cdot) \ast u(t, \cdot ))(x)$ of the function $u$, where $\rho$ is a function satisfying \begin{align} \begin{cases} \text{(i)}\int_\mathbb{R} \rho (t)dt=1, & \nonumber \\ \text{(ii)}\ \mathrm{supp}\, \rho \subset [0,1], & \nonumber \\ \text{(iii)}\rho \in C^\infty (\mathbb{R} ), \nonumber \\ \end{cases} \end{align} and $\rho^{\delta(x)}$ satisfies the following conditions: \begin{align} \begin{cases} \rho^{\delta(x)} = \frac{1}{\delta}\rho( \frac{x}{\delta}) \ \ \ (|x|\leq \varepsilon ^{-\frac{1}{2}}+1), \nonumber \\ \rho^{\delta(x)} = \frac{1}{\delta(x)}\rho( \frac{x}{\delta(x)}) \ \ \ ( \varepsilon ^{-\frac{1}{2}}+1\leq |x| \leq \varepsilon ^{-\frac{1}{2}} +2), \nonumber \\ \rho^{\delta(x)} = \rho ^0 \ \ \ (|x|\geq \varepsilon ^{-\frac{1}{2}}+2 ), \nonumber \end{cases} \end{align} where we denote $\rho ^0 \ast u =u$ formally, $\delta (\cdot) \in C^\infty (\mathbb{R})$ and $0 \leq \delta (x) \leq \delta$. See Sections 4 and 6 of \cite{f} for the precise conditions on this convolution. Before the proof, we state the SPDE which $v(t,x)$ satisfies in the sense of law: \begin{align} \dot{v}(t,x) = \varepsilon ^{-2\gamma-\frac{3}{2}} \{ \Delta v + f(v) \} + \varepsilon ^{-\frac{1}{2}} a(\varepsilon ^{\frac{1}{2}} x) \dot{W} _t (x). \nonumber \end{align} From Proposition \ref{thm81} and the result on the generation of the interface for the PDE, we only need to consider the case $dist(v(0,\cdot),M) \leq \varepsilon ^{\kappa -\frac{1}{4}}$, by the strong Markov property, where $M:=M^1=\{ m(x-\eta ) | \eta \in \mathbb{R} \}$.
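For the reader's convenience, we sketch the formal scaling computation behind this SPDE. Writing $c=\varepsilon ^{-2\gamma -\frac{1}{2}}$ and $b=\varepsilon ^{\frac{1}{2}}$, the chain rule gives $\dot{v}(t,x) = c\, \dot{u}_+ ^\varepsilon (ct,bx)$ and $\Delta v(t,x) = \varepsilon \Delta u_+ ^\varepsilon (ct,bx)$, while the scaling property of space-time white noise formally gives $\dot{W}_{ct}(bx) \overset{d}{=} (cb)^{-\frac{1}{2}} \dot{W}_t(x) = \varepsilon ^{\gamma} \dot{W}_t(x)$. Hence \begin{align} \dot{v}(t,x) = c \left \{ \varepsilon ^{-1} \Delta v + \varepsilon ^{-1} f(v) + \varepsilon ^{\gamma} a(\varepsilon ^{\frac{1}{2}} x) \dot{W}_{ct}(bx) \right \} \overset{d}{=} \varepsilon ^{-2\gamma-\frac{3}{2}} \{ \Delta v + f(v) \} + \varepsilon ^{-\frac{1}{2}} a(\varepsilon ^{\frac{1}{2}} x) \dot{W} _t (x). \nonumber \end{align}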
Moreover we see that there exists a unique Fermi coordinate $v_t = s(v_t) + m_{\eta(v_t)}$ if $t \in [0,(\frac{1}{\mu}-C_1)\varepsilon^{2\gamma +\frac{3}{2}} |\log \varepsilon |\wedge \varepsilon^{2\gamma +\frac{1}{2}} \tau_2 \wedge\varepsilon^{2\gamma +\frac{1}{2}} \tau_3]$ from (ii) of (\ref{in}). See Section \ref{sec1} for the Fermi coordinate. \begin{lemma} \label{lem83} For each $T>0$, $t\in [0,T\wedge \tau _1]$ and $p>1$, there exist positive random variables $Y^\varepsilon(\omega )$, $Z ^\varepsilon(\omega ) \in L^p(\Omega )$ such that $\sup _{0<\varepsilon <1}E[(Y^\varepsilon )^p] < \infty ,\ \sup _{0<\varepsilon <1}E[(Z ^\varepsilon )^p] < \infty$, \begin{align} \| v_t - v_t ^\delta \|_{L^2(\mathbb{R})} \leq Y^\varepsilon \delta + Z^\varepsilon \varepsilon ^{\frac{1}{16} + \frac{3\gamma}{4} -\alpha} \delta ^{\frac{1}{4} -\alpha '} \nonumber \end{align} for sufficiently small $\alpha$ and $\alpha ' >0$.
\end{lemma} \begin{proof} First, we fix a time $t \in [0,(\frac{1}{\mu}-C_1) \varepsilon ^{2\gamma +\frac{3}{2}} |\log \varepsilon| \wedge \varepsilon^{2\gamma +\frac{1}{2}} \tau_2 \wedge \varepsilon^{2\gamma +\frac{1}{2}} \tau_3]$ at which $u^\varepsilon (\varepsilon^{-2\gamma-\frac{1}{2}}t)$ satisfies the conditions of Proposition \ref{thm82}. From the definition of the Fermi coordinate, we get \begin{align} \label{est8-2} \| s(v_t ^\delta) \|_{H^1} &= \| v_t ^\delta -m_{\eta(v_t ^\delta)} \|_{H^1} \nonumber \\ &\leq \| v_t ^\delta -m_{\eta(v_t )} ^\delta \|_{H^1} + \| m_{\eta(v_t )} ^\delta - m_{\eta(v_t )} \|_{H^1} + \| m_{\eta(v_t )} -m_{\eta(v_t ^\delta )} \|_{H^1}\nonumber \\ &\leq \|s^\delta (v_t)\|_{H^1} + C\delta + C' \| v_t - v_t ^\delta \|_{L^2}, \end{align} for $t\leq (\frac{1}{\mu}-C_1) \varepsilon ^{2\gamma +\frac{3}{2}} |\log \varepsilon| \wedge \varepsilon^{2\gamma +\frac{1}{2}} \tau_2 \wedge \varepsilon^{2\gamma +\frac{1}{2}} \tau_3$. We just use the triangle inequality in the second line. We denote the approximation of $s(v_t)$ convoluted with $\rho ^\delta$ by $s^\delta (v_t)$ in the third line. We can estimate the second term of the second line by the integrability and the differentiability of $m_{\eta(v_t )} ^\delta - m_{\eta(v_t )}$ (see Lemma 5.4 of \cite{f}). From Lemma 5.5 of \cite{f} and a direct calculation, we get the estimate of the third term in the third line. And thus, we need to derive the estimate of $\|s^\delta (v_t)\|_{H^1}$, because Lemma \ref{lem83} completes these estimates if we take $\delta _\varepsilon =\varepsilon ^{\frac{1}{10}+\frac{2\gamma}{5}}$, the same choice of $\delta$ as in Section 5 of \cite{f}. Now we consider the estimate in $L^2$-norm.
An easy computation gives us \begin{align} \|s^\delta (v_t)\|_{L^2} &\leq \|s^\delta (v_t)-s(v_t) \|_{L^2} + \|s(v_t)\|_{L^2}\nonumber \\ &\leq \| v_t ^\delta - v_t \|_{L^2} + \| m_{\eta(v_t )} ^\delta - m_{\eta(v_t )} \|_{L^2} + \|s(v_t)\|_{L^2}\nonumber \\ &\leq \| v_t ^\delta - v_t \|_{L^2} + C\delta + \varepsilon ^{\kappa -\frac{1}{4}}. \nonumber \end{align} We use the triangle inequality and the definition of the Fermi coordinate throughout these estimates. Thus Lemma \ref{lem83} and the order of $\delta _\varepsilon$ complete the estimate. Next we need to consider $\|\nabla s^\delta (v_t)\|_{L^2}$ where $\nabla$ means $\frac{d}{dx}$. We divide the integration $\|\nabla s^\delta (v_t)\|_{L^2} ^2$ into four parts as below. \begin{align} \label{est8-1} \int_\mathbb{R} (\nabla s^\delta (v_t) )^2 dx = &\int_{I_\varepsilon ^- \cup I_\varepsilon ^+}(\nabla s (v_t) )^2 dx + \int_{-\varepsilon ^{-\frac{1}{2}}-1} ^{\varepsilon ^{-\frac{1}{2}}+1} (\nabla s^\delta (v_t) )^2 dx\nonumber \\ &+\left( \int_{-\varepsilon ^{-\frac{1}{2}}-2} ^{-\varepsilon ^{-\frac{1}{2}}-1} + \int_{\varepsilon ^{-\frac{1}{2}}+1} ^{\varepsilon ^{-\frac{1}{2}}+2}\right) \{ (\nabla s^\delta (v_t) )^2 - (\nabla s(v_t) )^2 \}dx, \end{align} where $I_\varepsilon ^- := (-\infty,-\varepsilon ^{-\frac{1}{2}}-1]$ and $I_\varepsilon ^+ := [\varepsilon ^{-\frac{1}{2}}+1,\infty)$. First, we derive the estimate for the second term of the right-hand side of (\ref{est8-1}).
From the definition of $s^\delta (v_t)$, $\delta _\varepsilon$ and $\kappa $, a direct calculation gives us \begin{align} \int_{-\varepsilon ^{-\frac{1}{2}}-1} ^{\varepsilon ^{-\frac{1}{2}}+1} (\nabla s^\delta (v_t) )^2 dx &= \int_{-\varepsilon ^{-\frac{1}{2}}-1} ^{\varepsilon ^{-\frac{1}{2}}+1} ((\nabla \rho ^\delta) \ast s(v_t) )^2 dx \nonumber \\ &\leq C(\varepsilon ^{-\frac{1}{2}} +1)\delta^{-\frac{1}{2}}\int_\mathbb{R} ( s(v_t) )^2 dx \leq C(\varepsilon ^{-\frac{1}{2}} +1)\delta^{-\frac{1}{2}} \varepsilon ^{2\kappa -\frac{1}{2}} \leq\varepsilon ^{2\kappa '} . \nonumber \end{align} Next we consider the last term of (\ref{est8-1}). Note that $s^\delta (v_t)$ and $s (v_t)$ are both differentiable if $|x|\geq \varepsilon ^{-\frac{1}{2}}+1$. Recalling the estimate of $ \| m_{\eta(v_t )} ^\delta - m_{\eta(v_t )} \|_{H^1}$ in (\ref{est8-2}), we can assert that the last term is of order $O(\delta ^2)$. Finally, we show that the value of the first term of (\ref{est8-1}) becomes less than $\varepsilon ^{2\kappa '}$ before the time of order $O(\varepsilon ^{2\gamma +\frac{3}{2}} |\log \varepsilon |)$ on the event $\{ (\frac{1}{\mu}-C_1) \varepsilon ^{2\gamma +\frac{3}{2}} |\log \varepsilon| < \varepsilon^{2\gamma +\frac{1}{2}}\tau_2 \wedge \varepsilon^{2\gamma +\frac{1}{2}} \tau_3 \}$. This completes the estimate. Here we only consider the problem on the domain $I_\varepsilon ^+$. We regard $v_t$ as the solution of the boundary value problem \begin{align} \begin{cases} \dot{v}(t,x) = \varepsilon ^{-2\gamma-\frac{3}{2}} \{ \Delta v + f(v) \} , \nonumber \\ v(t,\varepsilon ^{-\frac{1}{2}}+1)=V_t(\omega ),\ \ \ v(0,x)=v_0(x), \nonumber \end{cases} \end{align} for each fixed $\omega \in \Omega$. We note that the boundary value $V_t(\omega )$ is almost 1 for all $t \in [0,\tau _3]$ from the observation in Proposition \ref{thm82}.
From Proposition \ref{thm82} and the condition of the solution $m$, $\| m-m_{\eta (v_t)}\|_{L^2(I_\varepsilon ^+)}$ decays as $\varepsilon \to 0$, and its order is $O(\exp (-\frac{C}{\varepsilon}))$ (see Lemma 2.1 of \cite{ham}). And thus, this integral is negligible. The triangle inequality allows us to consider only $s_t:= v_t - m$. For simplicity we use the notation $\nabla$, $\Delta$ and $\partial _t$. From the form of the PDE, for all $T\in [0,(\frac{1}{\mu}-C_1) \varepsilon ^{2\gamma +\frac{3}{2}} |\log \varepsilon| \wedge \varepsilon^{2\gamma +\frac{1}{2}} \tau_2 \wedge \varepsilon^{2\gamma +\frac{1}{2}} \tau _3]$, we obtain \begin{align} \int _0^T \int_{I_\varepsilon ^+}\partial _t s(t,x) s(t,x) dx dt = \varepsilon ^{-2\gamma-\frac{3}{2}} &\int _0^T \int_{I_\varepsilon ^+}\Delta s(t,x) s(t,x) dx dt \nonumber \\ &+ \varepsilon ^{-2\gamma-\frac{3}{2}} \int _0^T \int_{I_\varepsilon ^+}\{ f(v(t,x))-f(m(x)) \} s(t,x) dx dt, \nonumber \end{align} and the integration by parts, the equality $\partial _t s(t,x) s(t,x)=\frac{1}{2}\partial _t (s(t,x))^2$ and the boundary condition $v(t,\infty)=1$ give us \begin{align} \int _0^T \int_{I_\varepsilon ^+} ( \nabla s(t,x) )^2 dx dt\leq \frac{\varepsilon^{2\gamma+\frac{3}{2}}}{2} \|s_0 \|_{L^2(I_\varepsilon ^+)} ^2 &+ \int _0^T \int_{I_\varepsilon ^+}\{ f(v(t,x))-f(m(x)) \} s(t,x) dx dt\nonumber \\ &-\int _0^T \{ \nabla s(t,\varepsilon ^{-\frac{1}{2}}+1) \} s(t,\varepsilon ^{-\frac{1}{2}}+1) dt. 
\nonumber \end{align} We can derive the same estimate for $I_\varepsilon ^-$ in a similar way, and hence we obtain \begin{align} \int _0^T \int_{I_\varepsilon ^+ \cup I_\varepsilon ^-} ( \nabla s(t,x) )^2 dx dt\leq \frac{\varepsilon^{2\gamma+\frac{3}{2}}}{2} \|s_0 \|_{L^2(I_\varepsilon ^+ \cup I_\varepsilon ^-)} ^2 &+ \int _0^T \int_{I_\varepsilon ^+ \cup I_\varepsilon ^-}\{ f(v(t,x))-f(m(x)) \} s(t,x) dx dt\nonumber \\ &-\int _0^T \{ \nabla s(t,\varepsilon ^{-\frac{1}{2}}+1) \} s(t,\varepsilon ^{-\frac{1}{2}}+1) dt\nonumber \\ &+\int _0^T \{ \nabla s(t,-\varepsilon ^{-\frac{1}{2}}-1) \} s(t,-\varepsilon ^{-\frac{1}{2}}-1) dt.\nonumber \end{align} The first term of the right-hand side is dominated by $\varepsilon ^{2\gamma+\frac{1}{2}+2\kappa }$ because we consider the initial value as (\ref{ini3}). The second term is negative because $v(t,x)$ and $m(x)$ are almost $1$ if $|x|\geq \varepsilon ^{-\frac{1}{2}}+1$ from Proposition \ref{thm82}. From Lemma 6.2 in \cite{f}, $\nabla s(t,\varepsilon ^{-\frac{1}{2}}+1)$ and $ \nabla s(t,-\varepsilon ^{-\frac{1}{2}}-1)$ are bounded for all $t \in [0,\tau _2]$. Thus, the third and fourth terms are dominated by $CT \varepsilon ^{\kappa}$ ($0< \kappa <\gamma$) from Proposition \ref{thm82}. Summing up all of these estimates, we obtain \begin{align} T \underset{t\in[0,T]}{\inf} \int_{I_\varepsilon ^+ \cup I_\varepsilon ^-} ( \nabla s(t,x) )^2 dx &\leq \int _0^{T} \int_{I_\varepsilon ^+ \cup I_\varepsilon ^-} ( \nabla s(t,x) )^2 dx dt \nonumber \\ &\leq C \varepsilon ^{2\gamma+\frac{1}{2}+2\kappa } + CT \varepsilon ^{\kappa} , \nonumber \end{align} on the event $\{ T < \varepsilon^{2\gamma +\frac{1}{2}}\tau_2 \wedge \varepsilon^{2\gamma +\frac{1}{2}}\tau_3 \}$. We now take $T:= \varepsilon ^{2\gamma +\frac{3}{2}+\alpha}|\log \varepsilon |$. 
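The passage from the time-integrated bound to a bound at a single time is the elementary averaging inequality: dividing the estimate above by $T$ gives

```latex
\underset{t\in[0,T]}{\inf} \int_{I_\varepsilon ^+ \cup I_\varepsilon ^-} ( \nabla s(t,x) )^2 dx
  \leq \frac{1}{T} \int _0^{T} \int_{I_\varepsilon ^+ \cup I_\varepsilon ^-} ( \nabla s(t,x) )^2 dx\, dt
  \leq C \varepsilon ^{2\gamma+\frac{1}{2}+2\kappa } T^{-1} + C \varepsilon ^{\kappa} ,
```

and with the choice $T= \varepsilon ^{2\gamma +\frac{3}{2}+\alpha}|\log \varepsilon |$ the first term on the right is of order $\varepsilon ^{2\kappa -1-\alpha}|\log \varepsilon |^{-1}$.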
From this estimate, we see that the integral $\int_{I_\varepsilon ^+ \cup I_\varepsilon ^-} ( \nabla s(t,x) )^2 dx$ becomes less than $\varepsilon ^{2 \kappa '}$ at some time $T_\varepsilon (\omega) \leq O(\varepsilon ^{2\gamma +\frac{3}{2} +\alpha } |\log \varepsilon |)$ $P$-a.s. for sufficiently small $\alpha >0$. Note that $P(\varepsilon^{2\gamma +\frac{1}{2}} \tau_2 \wedge \varepsilon^{2\gamma +\frac{1}{2}} \tau_3 \geq (\frac{1}{\mu}-C_1) \varepsilon ^{2\gamma +\frac{3}{2}} |\log \varepsilon |) \to 1$ as $\varepsilon \to 0$. This is the conclusion of the lemma. \qed \end{proof} \begin{lemma} \label{lem85} If we can take $\kappa > \kappa ' >1$ which satisfy $(\kappa' +\frac{21}{40} + \frac{\gamma}{10})\vee 2\kappa' < \gamma -\frac{C_f}{\mu}$ and $1< \kappa ' <\frac{1}{20} + \frac{\gamma}{5}$, then there exist a positive random variable $\widetilde{C}(\omega ) \in L^\infty (\Omega )$ and a sequence $\{ \delta _\varepsilon \}$ which converges to $0$ as $\varepsilon \to 0$ such that \begin{align} \lim _{\varepsilon \downarrow 0}P\left( \| s(v_t ^{\delta _\varepsilon}) \|_{H^1} \leq \varepsilon ^{\kappa '} \ for \ all \ t \in [\widetilde{C}(\omega ) \varepsilon^{2\gamma +\frac{3}{2}} |\log \varepsilon |,T] \right) =1. \nonumber \end{align} \end{lemma} \begin{proof} The same argument as in the proof of Proposition 5.4 in \cite{f} (from p.241 to 244) completes the proof of this lemma from Lemma \ref{lem84}. The positive random variable $\widetilde{C}(\omega)$ can be taken as $\widetilde{C}(\omega)=(\varepsilon^{-2\gamma -\frac{3}{2}} |\log \varepsilon |^{-1} \tau_{\delta _\varepsilon}) \wedge 1$ and it is in the class of $L^\infty (\Omega)$ from Lemma \ref{lem84}. 
\qed \end{proof} \begin{corollary} \label{cor85} If we can take $\kappa > \kappa ' >1$ which satisfy $(\kappa' +\frac{21}{40} + \frac{\gamma}{10})\vee 2\kappa' < \gamma -\frac{C_f}{\mu}$ and $1< \kappa ' <\frac{1}{20} + \frac{\gamma}{5}$, then there exists a positive random variable $\widetilde{C}(\omega ) \in L^\infty (\Omega )$ such that \begin{align} \lim _{\varepsilon \downarrow 0}P\left( s(v_t) \in H^\alpha \ for \ all \ t \in [\widetilde{C}(\omega ) \varepsilon^{2\gamma +\frac{3}{2}} |\log \varepsilon |,T] \right) =1, \nonumber \end{align} for all $0< \alpha < \frac{1}{4}$. \end{corollary} \begin{proof} We obtain this corollary from Lemma \ref{lem85} and the same argument as Lemma 5.6 of \cite{f}. \qed \end{proof} \begin{proposition} \label{thm83} If we can take $\kappa > \kappa ' >1$ which satisfy $(\kappa' +\frac{21}{40} + \frac{\gamma}{10})\vee 2\kappa' < \gamma -\frac{C_f}{\mu}$ and $1< \kappa ' <\frac{1}{20} + \frac{\gamma}{5}$, then there exists a positive random variable $\widetilde{C}(\omega ) \in L^\infty (\Omega )$ such that \begin{align} \lim _{\varepsilon \downarrow 0}P \left( dist (v_t, M)\leq \varepsilon ^{\kappa '} \ for\ all\ t \in [\widetilde{C}(\omega)\varepsilon^{2\gamma +\frac{3}{2}} |\log \varepsilon |,T] \right) =1. \nonumber \end{align} \end{proposition} \begin{proof} We prove this from Lemma \ref{lem85} in the same way as the proof of Theorem 5.1 in \cite{f}. \qed \end{proof} We obtain similar results for the sub-solution $u_- ^\varepsilon$ in a similar way to the proofs of Corollary \ref{cor85} and Proposition \ref{thm83}. We set $w(t,x) := u_- ^\varepsilon (\varepsilon ^{-2\gamma-\frac{1}{2}}t,\varepsilon ^{\frac{1}{2}}x)$ for all $t\in[0,\infty )$ and $x\in \mathbb{R}$. 
\begin{corollary} \label{cor86} If we can take $\kappa > \kappa ' >1$ which satisfy $(\kappa' +\frac{21}{40} + \frac{\gamma}{10})\vee 2\kappa' < \gamma -\frac{C_f}{\mu}$ and $1< \kappa ' <\frac{1}{20} + \frac{\gamma}{5}$, then there exists a positive random variable $\widetilde{C}(\omega ) \in L^\infty (\Omega )$ such that \begin{align} \lim _{\varepsilon \downarrow 0}P\left( s(w_t) \in H^\alpha \ for \ all \ t \in [\widetilde{C}(\omega ) \varepsilon^{2\gamma +\frac{3}{2}} |\log \varepsilon |,T] \right) =1, \nonumber \end{align} for all $0< \alpha < \frac{1}{4}$. \end{corollary} \begin{proposition} \label{thm84} If we can take $\kappa > \kappa ' >1$ which satisfy $(\kappa' +\frac{21}{40} + \frac{\gamma}{10})\vee 2\kappa' < \gamma -\frac{C_f}{\mu}$ and $1< \kappa ' <\frac{1}{20} + \frac{\gamma}{5}$, then there exists a positive random variable $\widetilde{C}(\omega ) \in L^\infty (\Omega )$ such that \begin{align} \lim _{\varepsilon \downarrow 0}P \left( dist (w_t, M)\leq \varepsilon ^{\kappa '} \ for\ all\ t \in [\widetilde{C}(\omega )\varepsilon^{2\gamma +\frac{3}{2}} |\log \varepsilon |,T] \right) =1. \nonumber \end{align} \end{proposition} From these results, we obtain the dynamics of $u_\pm ^\varepsilon$ by the same argument as in the case $v_0 \in M$ (see Sections 7 and 8 of \cite{f}). \begin{proof}[Proof of Theorem \ref{thm23}] First, we note that condition (\ref{gamma}) ensures that all of the lemmas, propositions and corollaries in this section hold. Now we construct super- and sub-solutions again. We consider $C_1\varepsilon |\log \varepsilon |$ as the initial time $0$. From (\ref{in}), we can see $u_- ^\varepsilon (0,x)\leq u_0 ^\varepsilon (x) \leq u_+ ^\varepsilon(0,x)$, and Proposition \ref{pro81} allows us to compare the solutions as follows: \begin{align} \label{est:compa} u_- ^\varepsilon (t,x)\leq u^\varepsilon (t,x) \leq u_+ ^\varepsilon (t,x)\ for \ all \ t\in[0,\varepsilon ^{-2\gamma -\frac{1}{2}}T] \ and \ x \in \mathbb{R}. 
\end{align} Because $dist(u_\pm ^\varepsilon(0,\cdot) ,M^\varepsilon )\leq C\varepsilon ^{\kappa }$, by taking $\bar{u}_\pm ^\varepsilon(t,x) := u_\pm ^\varepsilon(\varepsilon ^{-2\gamma -1}t,x)$ and $\kappa >1$, we can show that \begin{align} P\left ( \underset{ t \in [0,T] }{ \sup } \| \bar{u}_\pm ^\varepsilon(t,\cdot )-\chi _{\xi^{\varepsilon ,\pm} _t}(\cdot )\| _{L^2(\mathbb{R})} \leq \delta \right ) \to 1\ \ \ (\varepsilon \to 0) \nonumber \end{align} in the same way as \cite{f}. The stochastic processes $\xi^{\varepsilon ,\pm} _t$ can be defined in the same way as (8.3) of \cite{f} (replace $v^\varepsilon$ in (8.3) by $v$ and $w$). We see that $|\xi^{\varepsilon ,\pm} _0-\xi_0| \leq \varepsilon^{\bar{\beta}} $, and this difference of initial values does not affect the proof of tightness for $\{ P_\pm ^\varepsilon \}$, which are the distributions of $\{ \xi _t ^{\varepsilon ,\pm} \}$ on $C([0,T],\mathbb{R})$. Here $\bar{\beta}>0$ is a constant defined in Proposition \ref{pro21}. Thus the distribution of the process $\{ \xi^{\varepsilon ,\pm} _t\}$ on $C([0,T],\mathbb{R})$ converges to that of $\{ \xi _t\}$ weakly as $\varepsilon \to 0$. 
Therefore we obtain \begin{align} &\| \bar{u}^\varepsilon (t,\cdot )-\chi _{\xi^{\varepsilon } _t}(\cdot )\| _{L^2(\mathbb{R})} \nonumber \\ &\leq \| \bar{u}^\varepsilon (t,\cdot )-\bar{u}_+ ^\varepsilon (t,\cdot )\| _{L^2} + \| \bar{u}_+ ^\varepsilon (t,\cdot )-\chi _{\xi^{\varepsilon ,+ } _t}(\cdot )\| _{L^2} \nonumber \\ &\leq \| \bar{u}_+ ^\varepsilon (t,\cdot )-\bar{u}_- ^\varepsilon (t,\cdot )\| _{L^2} + \| \bar{u}_+ ^\varepsilon (t,\cdot )-\chi _{\xi^{\varepsilon ,+ } _t}(\cdot )\| _{L^2} \nonumber \\ &\leq 2 \| \bar{u}_+ ^\varepsilon (t,\cdot )-\chi _{\xi^{\varepsilon ,+ } _t}(\cdot )\| _{L^2} + \| \chi _{\xi^{\varepsilon ,+ } _t}(\cdot ) - \chi _{\xi^{\varepsilon ,- } _t}(\cdot )\| _{L^2} + \| \chi _{\xi^{\varepsilon ,- } _t}(\cdot )-\bar{u}_- ^\varepsilon (t,\cdot )\| _{L^2}, \nonumber \end{align} for all $t \in [0,T\wedge \varepsilon ^{2\gamma +\frac{1}{2}} (\tilde{\sigma} _\varepsilon \wedge \bar{\sigma} _\varepsilon)]$, by taking $\xi ^\varepsilon _t := \xi^{\varepsilon ,+} _t$. Here we set the stopping times \begin{align} &\tilde{\sigma} _\varepsilon := \inf \{t>0 | dist (v_t ,M)> \varepsilon ^{\kappa '} \ or \ \|v_t\|_{L^\infty}>2C_0 \ or \ v_t \notin H^\alpha +m \} \nonumber \\ &\bar{\sigma} _\varepsilon := \inf \{t>0 | dist (w_t ,M)> \varepsilon ^{\kappa '} \ or \ \|w_t\|_{L^\infty}>2C_0 \ or \ w_t \notin H^\alpha +m \} \nonumber \end{align} for fixed $\alpha <\frac{1}{4}$. Indeed, the first and third inequalities follow from the triangle inequality, and (\ref{est:compa}) gives the second. From Propositions \ref{thm31}, \ref{thm83}, \ref{thm84}, Corollaries \ref{cor85} and \ref{cor86}, we see that $P(T \leq \varepsilon ^{2\gamma +\frac{1}{2}} (\tilde{\sigma} _\varepsilon \wedge \bar{\sigma} _\varepsilon)) \to 1$ as $\varepsilon \to 0$. This completes the proof of the theorem by taking $C(\omega):=C_1 + \widetilde{C}(\omega)$. Here, $C_1$ is introduced in Proposition \ref{pro21}. 
\qed \end{proof} \ \\ {\bf Acknowledgements} The author would like to thank Professor T. Funaki for his tremendous support and incisive advice. This work was supported by the Program for Leading Graduate Schools, MEXT, Japan, and the Japan Society for the Promotion of Science (JSPS).
\makeatletter \renewcommand\section{\@startsection{section}{1}{\z@}{-\bigskipamount}{\medskipamount}{\large\bfseries\raggedright}} \renewcommand\subsection{\@startsection{subsection}{2}{\z@}{-\medskipamount}{\smallskipamount}{\bfseries\raggedright}} \makeatother \renewcommand{\c}{\mathsf{c}} \newcommand{\fin}{\mathsf{fin}} \newcommand{\Ca}{\mathsf{Ca}} \newcommand{\Co}{\mathsf{Co}} \renewcommand{\r}{\mathsf{r}} \newcommand{\sign}{\operatorname{sign}} \newcommand{\limD}{\operatorname{Dlim}} \renewcommand{\limD}{\mathop{\operatorname{Dlim}}} \newcommand{\limsupD}{\mathop{\operatorname{Dlimsup}}} \newcommand{\liminfD}{\mathop{\operatorname{Dliminf}}} \newcommand{\uperp}{\{u\}^\perp} \newcommand{\dom}{\operatorname{dom}} \newcommand{\codom}{\operatorname{codom}} \newcommand{\gr}{\operatorname{gr}} \newcommand{\ffrown}{\text{\raisebox{3pt}[0pt][0pt]{$\frown$}}} \renewcommand{\O}{\underset{\ffrown}{<}} \newcommand{\OG}{\underset{\ffrown}{>}} \newcommand{\ssim}{\text{\raisebox{2.5pt}[0pt][0pt]{$\sim$}}} \newcommand{\lsim}{\underset{\ssim}{<}} \newcommand{\gsim}{\underset{\ssim}{>}} \renewcommand{\gg}{>\kern-2pt>} \renewcommand{\ll}{<\kern-2pt<} \newcommand{\esssup}{\operatorname{ess\,sup}} \renewcommand{\S}{\operatorname{\mathsf{S}\!}} \renewcommand{\S}{\mathsf{S}} \newcommand{\V}{\mathsf{V}} \newcommand{\intr}[2]{\overline{#1,#2}} \newcommand{\Alo}{A_{\mathsf{lo}}} \newcommand{\Ahi}{A_{\mathsf{hi}}} \newcommand{\Ahii}[1]{A_{\mathsf{hi,#1}}} \newcommand{\Blo}{B_{\mathsf{lo}}} \newcommand{\Bhi}{B_{\mathsf{hi}}} \renewcommand{\gg}{>\kern-2pt>} \renewcommand{\ll}{<\kern-2pt<} \newcommand{\lf}{\left\lfloor} \newcommand{\rf}{\right\rfloor} \newcommand{\lp}{\left(} \newcommand{\rp}{\right)} \newcommand{\lb}{\left\{} \newcommand{\rb}{\right\}} \newcommand{\dd}{\partial} \renewcommand{\dd}{\operatorname{d}\!} \renewcommand{\le}{\leqslant} \renewcommand{\ge}{\geqslant} \renewcommand{\leq}{\leqslant} \renewcommand{\geq}{\geqslant} \renewcommand{\succeq}{\succcurlyeq} \renewcommand{\preceq}{\preccurlyeq} 
\renewcommand{\smallint}{\textstyle{\int}} \newcommand{\lc}{\mathsf{L\!C}} \newcommand{\lin}{\mathsf{Lin}} \renewcommand{\aa}{\overline{a}} \newcommand{\m}{\mathbf{m}} \newcommand{\s}{\mathbf{s}} \newcommand{\vv}{\mathbf{v}} \newcommand{\w}{\mathbf{w}} \newcommand{\xx}{\mathbf{x}} \newcommand{\al}{\alpha} \newcommand{\be}{\beta} \newcommand{\ga}{\gamma} \newcommand{\Ga}{\Gamma} \newcommand{\ep}{\varepsilon} \newcommand{\ka}{\kappa} \renewcommand{\th}{\theta} \newcommand{\g}{\gamma} \newcommand{\si}{\sigma} \newcommand{\la}{\lambda} \newcommand{\La}{\Lambda} \newcommand{\de}{\delta} \newcommand{\De}{\Delta} \newcommand{\vpi}{\varphi} \newcommand{\Om}{X} \newcommand{\B}{\mathfrak{B}} \newcommand{\BB}{\mathcal{B}} \newcommand{\D}{\mathcal{D}} \newcommand{\EE}{\mathcal{E}} \newcommand{\X}{\mathcal{X}} \newcommand{\Y}{\mathcal{Y}} \newcommand{\ZZ}{\mathcal{Z}} \renewcommand{\L}{\mathcal{S}} \newcommand{\M}{\mathcal{M}} \newcommand{\NN}{\mathcal{N}} \newcommand{\LD}{\mathcal{L}\!\mathcal{D}} \renewcommand{\LD}{\mathcal{L}{\kern -1.9pt}\mathcal{D}} \renewcommand{\LD}{\mathcal{D}} \renewcommand{\LD}{\mathcal{L}{\kern -4pt}\mathcal{C}} \renewcommand{\LD}{\mathcal{R}{\kern -3pt}\mathcal{C}} \newcommand{\id}{\mathrm{id}} \newcommand{\iid}{\overset{\mathrm{i.i.d.}}{\sim}} \renewcommand{\leq}[1]{\overset{#1}{\preceq}} \renewcommand{\geq}[1]{\overset{#1}{\succeq}} \newcommand{\ii}[1]{\mathrm{I}\!\left\{#1\right\}} \newcommand{\st}{\mathsf{st}} \newcommand{\ST}{\mathsf{ST}} \newcommand{\SST}{\mathrm{S\!T}} \newcommand{\bs}{\mathsf{BS}} \newcommand{\BS}{\mathrm{B\!S}} \newcommand{\BC}{\mathrm{B\!C}} \newcommand{\supp}{\operatorname{\mathrm{supp}}} \newcommand{\kurt}{\operatorname{\mathrm{kurt}}} \renewcommand{\P}{\operatorname{\mathsf{P}}} \newcommand{\PP}{\operatorname{\mathsf{P}}} \newcommand{\E}{\operatorname{\mathsf{E}}} \newcommand{\Var}{\operatorname{\mathsf{Var}}} \newcommand{\N}{\mathbb{N}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\Z}{\mathbb{Z}} 
\newcommand{\R}{{\mathbb{R}}} \newcommand{\C}{\mathbb{C}} \newcommand{\Mminus}[1]{\mathcal{M}_-^{#1}} \newcommand{\F}[1]{\mathcal{F}_+^{#1}} \renewcommand{\F}{\mathcal{F}} \renewcommand{\H}[1]{\mathcal{H}_+^{#1}} \renewcommand{\H}{\mathcal{H}} \newcommand{\tH}[1]{{\tilde{\mathcal{H}}}_+^{#1}} \renewcommand{\tH}{{\tilde{\mathcal{H}}}} \newcommand{\Hminus}[1]{\mathcal{H}_-^{#1}} \newcommand{\Fminus}[1]{\mathcal{F}_-^{#1}} \newcommand{\FF}[1]{\mathcal{F}^{#1}} \newcommand{\GG}[1]{\mathcal{G}^{#1}} \newcommand{\G}[1]{\mathcal{G}^{#1}_+} \renewcommand{\G}{\Psi} \newcommand{\GPlus}[1]{\mathcal{G}^{#1}_{++}} \renewcommand{\PP}[1]{\mathcal{P}^{#1}_+} \renewcommand{\PP}{\mathcal{P}} \newcommand{\ta}{{\tilde{a}}} \newcommand{\tb}{{\tilde{b}}} \newcommand{\tf}{{\tilde{f}}} \newcommand{\tg}{{\tilde{g}}} \newcommand{\tI}{{\tilde{I}}} \newcommand{\tr}{{\tilde{r}}} \newcommand{\tw}{{\tilde{w}}} \newcommand{\tww}{{\tilde{\w}}} \newcommand{\tx}{{\tilde{x}}} \newcommand{\tz}{{\tilde{z}}} \newcommand{\tP}{{\tilde{P}}} \newcommand{\txi}{{\tilde{\xi}}} \newcommand{\tQ}{{\tilde{Q}}} \newcommand{\tA}{{\tilde{A}}} \newcommand{\tB}{{\tilde{B}}} \newcommand{\teta}{{\tilde{\eta}}} \newcommand{\tm}{{\tilde{m}}} \newcommand{\hF}{{\hat\F}} \newcommand{\hG}{{\hat\G}} \newcommand{\opt}{\operatorname{\mathsf{opt}}} \newcommand{\near}[1]{\mathrel{\underset{#1}{\scalebox{1.1}{$\nearrow$}}}} \newcommand{\sear}[1]{\mathrel{\underset{#1}{\scalebox{1.1}{$\searrow$}}}} \newcommand{\A}{\mathcal{A}} \newcommand{\Rad}{\mathsf{Rad}} \renewcommand{\d}{\mathrm{d}} \newcommand{\cl}{\operatorname{cl}} \newcommand{\Is}{I^{\mathsf{symm}}} \newcommand{\as}{\stackrel{\mathrm{a.s.}}{=}} \newcommand{\inter}{\mathrm{int}\,} \newcommand{\vp}{\varepsilon} \newcommand{\req}[1]{(\ref{#1})} \pagenumbering{arabic} \errorcontextlines=999 \begin{document} \title{Measure extension by local approximation} \author{Iosif Pinelis} \address{Department of Mathematical Sciences\\ Michigan Technological University\\ Hough\-ton, 
Michigan 49931, USA\\ E-mail: [email protected]} \keywords{ Measures, measure extension, rings of sets, algebras of sets, $\si$-algebras of sets} \subjclass[2010]{Primary 28A12; secondary 60A10} \begin{abstract} Measurable sets are defined as those locally approximable, in a certain sense, by sets in the given algebra (or ring). A corresponding measure extension theorem is proved. It is also shown that a set is locally approximable in the mentioned sense if and only if it is Carath\'eodory-measurable. \end{abstract} \maketitle \tableofcontents \section{Introduction, summary, and discussion }\label{intro} Let $m\colon\A\to[0,\infty]$ be a measure -- that is, a nonnegative $\si$-additive function defined on an algebra $\A$ over a set $\Om$ such that $m(\emptyset)=0$. The measure extension problem is to extend the measure $m$ to a measure on a $\si$-algebra containing $\A$. This problem was solved by Carath\'eodory; see e.g.\ \cite{halmos}. The key in his solution was to consider the set \begin{equation}\label{eq:M_Ca} \M_\Ca:=\{E\subseteq\Om\colon m^*(F)=m^*(F\cap E)+m^*(F\cap E^\c)\ \; \forall F\subseteq\Om\} \end{equation} of all Carath\'eodory-measurable subsets of $\Om$, where, as usual, $m^*$ denotes the outer measure corresponding to $m$, and ${}^\c$ denotes the complement (to $\Om$). It is then shown that $\M_\Ca$ is a $\si$-algebra containing $\A$, and the restriction of $m^*$ to $\M_\Ca$ is a measure extending $m$. When the measure $m$ is finite, one can also introduce the inner measure $m_*$ by the formula $m_*(E):=m(\Om)-m^*(E^\c)$ for all $E\subseteq\Om$, and then the key condition $m^*(F)=m^*(F\cap E)+m^*(F\cap E^\c)$ in \eqref{eq:M_Ca} can be rewritten in the case $F=\Om$ as $m^*(E)=m_*(E)$. This equality of the outer and inner measures on all Carath\'eodory-measurable subsets of $\Om$ may explain the intuition behind the definition \eqref{eq:M_Ca}. 
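As a purely illustrative sanity check of these definitions, one can compute everything by brute force on a finite set, taking for $\A$ the algebra generated by a partition into three atoms; all names and numbers in the sketch below are ours, not from the text. Sets that are unions of atoms satisfy $m^*(E)=m_*(E)$ and the Carath\'eodory condition, while sets that cut through an atom fail both.

```python
from itertools import combinations

# Toy model (ours, not from the text): Omega = {0,...,5}, and the algebra A
# consists of all unions of three "atoms"; m is additive over the atoms.
blocks = [frozenset({0, 1}), frozenset({2, 3}), frozenset({4, 5})]
weights = dict(zip(blocks, [1.0, 2.0, 3.0]))
omega = frozenset(range(6))

def algebra():
    """Enumerate A: every union of a subset of the atoms."""
    for r in range(len(blocks) + 1):
        for combo in combinations(blocks, r):
            yield frozenset().union(*combo) if combo else frozenset()

def m(a):
    """The measure on A: sum of the weights of the atoms inside a."""
    return sum(w for b, w in weights.items() if b <= a)

def m_star(e):
    """Outer measure: cheapest cover of e by a member of the finite algebra."""
    return min(m(a) for a in algebra() if e <= a)

def m_inner(e):
    """Inner measure: m(Omega) - m*(complement of e)."""
    return m(omega) - m_star(omega - e)

def caratheodory(e):
    """Check m*(F) = m*(F ∩ E) + m*(F ∩ E^c) over every F ⊆ Omega."""
    all_f = (frozenset(c) for r in range(len(omega) + 1)
             for c in combinations(omega, r))
    return all(m_star(f) == m_star(f & e) + m_star(f - e) for f in all_f)

E1 = frozenset({0, 1, 2, 3})   # a union of atoms: measurable
E2 = frozenset({0, 2})         # cuts through two atoms: not measurable
print(m_star(E1), m_inner(E1), caratheodory(E1))   # 3.0 3.0 True
print(m_star(E2), m_inner(E2), caratheodory(E2))   # 3.0 0.0 False
```

Finiteness of $\A$ lets the infimum in the definition of $m^*$ be taken over single covering sets of $\A$, which is what `m_star` does.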
Moreover, one can show -- see e.g.\ Theorem~\ref{th:M comp} at the end of this section -- that the condition $\forall F\subseteq\Om$ in \eqref{eq:M_Ca} can be equivalently replaced by $\forall A\in\A_\fin$, where \begin{equation}\label{eq:A_fin} \A_\fin:=\{A\in\A\colon m(A)<\infty\}. \end{equation} That is, \begin{equation}\label{eq:M_Ca,loc} \M_\Ca=\bigcap_{A\in\A_\fin}\M_A, \end{equation} where \begin{equation*} \M_A:=\{E\subseteq\Om\colon(m_A)^*(A\cap E)=(m_A)_*(A\cap E)\}, \end{equation*} $m_A$ is the restriction of the measure $m$ to the algebra $\A_A:=\{B\in\A\colon B\subseteq A\}$ over the set $A$, and $(m_A)^*$ and $(m_A)_*$ are the outer and inner measures corresponding to $m_A$. This ``localized'' restatement of the definition of $\M_\Ca$ brings it closer to the mentioned intuition of the desired equality of the outer and inner measures of measurable sets. Another approach to the measure extension problem is based on an approximation idea, which may be more immediately intuitive. For any subsets $E$ and $F$ of $\Om$, define the ``distance'' between them by the formula \begin{equation}\label{eq:d} d(E,F):=m^*(E+F), \end{equation} where $E+F$ denotes the symmetric difference between $E$ and $F$. The idea is then to define the set of all measurable subsets of $\Om$ as the closure of the algebra $\A$ with respect to the pseudometric $d$. This idea was carried out in \cite[Appendix~1]{borovkov} in the case when $m$ is a probability measure. Of course, one can proceed quite similarly for any finite measure $m$; cf.\ e.g.\ \cite[Theorem~1.5.6]{bogachev}. However, without modifications, this approach will not work in general even when the measure $m$ is $\si$-finite. For instance, suppose that $\Om=\R$, $\A$ is the smallest algebra containing all left-open intervals $(a,b]$ in $\R$, and $m$ is the Lebesgue measure on $\A$, so that $m$ is $\si$-finite. Let now $E:=\bigcup_{n\in\Z}(2n,2n+1]$, so that $E$ is in the $\si$-algebra $\si(\A)$ generated by $\A$. 
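A small numerical sketch (ours; a plain Riemann sum, ignoring interval endpoints since they form a null set) illustrates how the ``distance'' from this $E$ to any single interval $A=(a,b]$ grows linearly with the window $[-N,N]$:

```python
import math

def in_E(x):
    """E = union over n in Z of (2n, 2n+1]; endpoints form a null set."""
    return math.floor(x) % 2 == 0

def d_window(N, a_lo, a_hi, cells_per_unit=100):
    """Riemann sum of |1_E - 1_A| over [-N, N], where A = (a_lo, a_hi]."""
    h = 1.0 / cells_per_unit
    mismatch = 0
    for k in range(int(2 * N * cells_per_unit)):
        x = -N + (k + 0.5) * h          # cell midpoint, never an integer
        in_A = a_lo < x <= a_hi
        if in_E(x) != in_A:
            mismatch += 1
    return mismatch * h

# The measure of (E symmetric-difference A) inside [-N, N] grows like N - 1:
for N in (10, 100, 1000):
    print(N, round(d_window(N, 0.0, 1.0), 6))
```

By contrast, on each fixed $A\in\A_\fin$ one can pick $B\in\A$ agreeing with $E$ on $A$, so the localized distances introduced below all vanish for this $E$.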
Then it is easy to see that $d(E,A)=\infty$ for any $A\in\A$. Moreover, \cite[Example~4.19]{wise-hall} shows that there exist a set $\Om$, an algebra $\A$ over $\Om$, a $\si$-finite measure $\mu$ on $\si(\A)$, and a set $E\in\si(\A)$ such that $\mu(E)<\infty$ but $\mu(E+A)\ge\mu(E)>0$ for all $A\in\A$. The approximation idea can be saved, though, by combining it with appropriate localization. That is, a measurable set may be only ``locally'' approximable by sets in the algebra $\A$. Specifically, for any $A\in\A$ consider the following ``localized'' version of the definition \eqref{eq:d}: \begin{equation*} d_A(E,F):=m^*\big(A\cap(E+F)\big) \end{equation*} for any subsets $E$ and $F$ of $\Om$. Thus, for the ``distance'' $d$ defined by \eqref{eq:d}, we have $d=d_\Om$. Now recall \eqref{eq:A_fin} and let $\M$ denote the set of all subsets $E$ of $\Om$ such that for each $A\in\A_\fin$ and each real $\vp>0$ there is some $B=B_{E,A,\vp}\in\A$ such that $d_A(E,B)<\vp$: \begin{equation}\label{eq:M} \M:=\{E\subseteq\Om\colon\forall A\in\A_\fin\;\forall\vp>0\;\exists B\in\A\ d_A(E,B)<\vp\}. \end{equation} Note that here we use the ``$m$-finite'' subset $\A_\fin$ of the algebra $\A$ (rather than $\A$ itself); this localization idea is similar to the one that led us to \eqref{eq:M_Ca,loc}. \begin{theorem}\label{th:si-alg} $\M$ is a $\si$-algebra over $\Om$, and $\M\supseteq\A$. \end{theorem} The necessary proofs are given in Section~\ref{proofs}. \begin{theorem}\label{th:si-add} The outer measure $m^*$ is $\si$-additive on the $\si$-algebra $\M$. \end{theorem} So, in view of \eqref{eq:m*=m}, the restriction \begin{equation*} \bar m:=m^*\big|_\M \end{equation*} of $m^*$ to $\M$ is a measure that extends $m$ from the algebra $\A$ to the $\si$-algebra $\M$. \begin{theorem}\label{th:uniq} If the measure $m$ on $\A$ is $\si$-finite, then the $\si$-additive extension of $m$ to $\M$ is unique. 
\end{theorem} Of course, the $\si$-finiteness condition in Theorem~\ref{th:uniq} is essential; for instance, see \cite[Example~4.20]{wise-hall}. Let us also compare the set $\M$, defined by \eqref{eq:M}, of all sets locally approximable by sets in algebra $\A$ with the set $\M_\Ca$, defined by \eqref{eq:M_Ca}, of all sets measurable in the Carath\'eodory sense, as well as with the completion \begin{equation*} \M_\Co:=\{E\subseteq\Om\colon \exists S\in\si(\A)\ d_\Om(E,S)=0\} \end{equation*} of the $\si$-algebra $\si(\A)$ generated by algebra $\A$. Let us also consider the following ``$\A_\fin$'' counterparts of the Carath\'eodory set $\M_\Ca$ and the set $\M_\Co$: \begin{align*} \M_{\Ca,\A_\fin}&:=\{E\subseteq\Om\colon m^*(A)=m^*(A\cap E)+m^*(A\cap E^\c)\ \;\forall A\in\A_\fin\}, \\ \M_{\Co,\A_\fin}&:=\{E\subseteq\Om\colon\forall A\in\A_\fin\ \exists S\in\si(\A)\ d_A(E,S)=0\}. \end{align*} \begin{theorem}\label{th:M comp} $\M_\Co\subseteq \M_{\Co,\A_\fin}=\M=\M_\Ca=\M_{\Ca,\A_\fin}$. If the measure $m$ on $\A$ is $\si$-finite, then $\M_\Co= \M_{\Co,\A_\fin}=\M=\M_\Ca=\M_{\Ca,\A_\fin}$. \end{theorem} In \cite[Theorem~2.3]{weizs-meas}, it was shown that the restriction of $m^*$ to $\M_\Co$ is $\si$-additive. The $\si$-finiteness condition in the second sentence of Theorem~\ref{th:M comp} is essential; cf.\ e.g.\ \cite[Example~4.28]{wise-hall}. \begin{remark} The condition $\Om\in\A$ will never be used in the proofs of Theorems~\ref{th:si-alg}--\ref{th:M comp} (to be given in Section~\ref{proofs}). So, these theorems will hold even if $\A$ is only assumed to be a ring (but not necessarily an algebra) of subsets of $\Om$. \end{remark} \section{Proofs}\label{proofs} First here, let us recall the definition and basic properties of the outer measure corresponding to the given measure $m$ on the algebra $\A$. Take any set $E\subseteq\Om$. Let $c(E)$ denote the set of all sequences $(A_n):=(A_n)_{n=1}^\infty$ in $\A$ such that $\bigcup_n A_n\supseteq E$. 
Let also $c_\d(E)$ denote the set of all disjoint sequences $(A_n)\in c(E)$, so that $A_i\cap A_j=\emptyset$ whenever $(A_n)\in c_\d(E)$ and $i\ne j$. Consider the outer measure \begin{equation}\label{eq:outer} m^*(E):=\inf\Big\{\sum_n m(A_n)\colon(A_n)\in c(E)\Big\} \end{equation} of the set $E$ corresponding to the measure $m$ on the algebra $\A$. The following properties of the outer measure are well known and easy to check: for any subsets $E,E_1,E_2,\dots$ of $\Om$, one has \begin{description} \item[{\bf positivity}] $m^*(E)\ge0$; \item[{\bf monotonicity}] if $E_1\subseteq E_2$ then $m^*(E_1)\le m^*(E_2)$; \item[{\bf subadditivity}] $m^*\big(\bigcup_n E_n\big)\le\sum_n m^*(E_n)$; \item[{\bf ``disjoint'' version}] in the definition \eqref{eq:outer} of the outer measure, one may replace $c(E)$ by $c_\d(E)$. \end{description} The latter property follows immediately by the simple and well-known \begin{remark}\label{rem:CU} For any sequence $(A_n)$ in $\A$ and the sequence $(B_n)$ defined by the condition $B_n=A_n\setminus \bigcup_{j<n}A_j$, one has the following: $(B_n)$ is a disjoint sequence in $\A$, $m(B_n)\le m(A_n)$ for all $n$, and $\bigcup_n A_n=\bigcup_n B_n$. \end{remark} Another useful property of the outer measure is just a bit more involved: \begin{proposition}\label{prop:1} Take any disjoint sequence $(A_n)$ in $\A$. Then \begin{equation*} m^*\Big(\bigcup_n A_n\Big)=\sum_n m(A_n). \end{equation*} \end{proposition} \begin{proof} Let $E:=\bigcup_n A_n$. Then trivially $(A_n)\in c_\d(E)\subseteq c(E)$, so that, by \eqref{eq:outer}, $m^*(E)\le\sum_n m(A_n)$. To prove the reverse inequality, take any $(B_k)\in c_\d(E)$ and any natural $N$. Let $C_N:=\cup_{n=1}^N A_n$, so that $C_N\in\A$. Then, by the $\si$-additivity of $m$ on $\A$, \begin{multline*} \sum_{n\le N} m(A_n)=\sum_{n\le N} \sum_k m(A_n\cap B_k) =\sum_k \sum_{n\le N} m(A_n\cap B_k) =\sum_k m(C_N\cap B_k) \\ \le\sum_k m(B_k). 
\end{multline*} Taking now the infimum over all $(B_k)\in c_\d(E)$ and recalling the ``disjoint'' version of the definition of the outer measure, we see that $\sum_{n\le N} m(A_n)\le m^*(E)$. Finally, letting $N\to\infty$, we confirm the reverse inequality, $\sum_n m(A_n)\le m^*(E)$, which completes the proof of Proposition~\ref{prop:1}. \end{proof} An immediate and important consequence of Proposition~\ref{prop:1} is that \begin{equation}\label{eq:m*=m} m^*(A)=m(A) \quad\text{for all}\ A\in\A; \end{equation} that is, $m^*$ is an extension of $m$ from $\A$ to the set of all subsets of $\Om$. \medskip Note the following properties of the functions $d_A$: for any $A\in\A_\fin$ and any subsets $E,B,E_1,E_2,\dots,B_1,B_2,\dots$ of $\Om$, \begin{enumerate}[(I)] \item \label{i} $d_A$ is a pseudometric; \item \label{ii} $d_A(E,B)=d_A(E^\c,B^\c)=d_A(E^\c,A\setminus B)$; \item \label{iii} $d_A\Big(\bigcup\limits_n E_n,\bigcup\limits_n B_n\Big)\le\sum\limits_n d_A(E_n,B_n)$ and \\ $d_A\Big(\bigcap\limits_n E_n,\bigcap\limits_n B_n\Big)\le\sum\limits_n d_A(E_n,B_n)$; \item \label{iv} $m^*(A\cap E)\le m^*(A\cap B)+d_A(E,B)$. \end{enumerate} Property \eqref{i} follows because the outer measure $m^*$ is nonnegative, monotone, and subadditive, whereas $E_1+E_3=E_1+E_2+E_2+E_3\subseteq(E_1+E_2)\cup(E_2+E_3)$ and $E_1+E_2=E_2+E_1$. Concerning Property \eqref{ii}, it is enough to note that $E+B=E^\c+B^\c$. To check Property \eqref{iii}, note that $\bigcup\limits_n E_n+\bigcup\limits_n B_n\subseteq\bigcup\limits_n(E_n+B_n)$ and $\bigcap\limits_n E_n+\bigcap\limits_n B_n\subseteq\bigcup\limits_n(E_n+B_n)$, and then use again the monotonicity and subadditivity of $m^*$. Finally, Property \eqref{iv} as well follows by the monotonicity and subadditivity of $m^*$, since $A\cap E\subseteq(A\cap B)\cup\big(A\cap(E+B)\big)$. Now we are ready to present \begin{proof}[Proof of Theorem~\ref{th:si-alg}] Take any $A\in\A_\fin$ and any real $\vp>0$. 
Note first that $X\in\M$, since $d_A(X,A)=0<\vp$. Moreover, if $E\in\A$, then $d_A(E,B)=0<\vp$ for $B:=E\in\A$; so, $\M\supseteq\A$. That $\M$ is closed with respect to the complement easily follows from Property \eqref{ii} of $d_A$, which yields $d_A(E^\c,A\setminus B)<\vp$ if $d_A(E,B)<\vp$. Also, Property \eqref{iii} of $d_A$ shows that $\M$ is closed with respect to the finite unions. So, $\M$ is an algebra. To complete the proof of Theorem~\ref{th:si-alg}, it remains to show that $\M$ is closed with respect to the countable unions. First here, take any disjoint sequence $(A_n)$ in $\A$. Then for any natural $N$ \begin{equation}\label{eq:cap A_n} d_A\Big(\bigcup_n A_n,\bigcup_{n\le N} A_n\Big) =m^*\Big(\bigcup\limits_{n>N} (A\cap A_n)\Big)=\sum_{n>N} m(A\cap A_n) \end{equation} by Proposition~\ref{prop:1}. On the other hand, for any natural $L$, \begin{equation*} \sum_{n\le L} m(A\cap A_n) =m\Big(\bigcup_{n\le L}(A\cap A_n)\Big) \le m(A)<\infty, \end{equation*} since $A\in\A_\fin$. Hence, $\sum_n m(A\cap A_n) \le m(A)<\infty$, which implies that \break $\sum_{n>N} m(A\cap A_n)\to0$ as $N\to\infty$. So, by \eqref{eq:cap A_n}, $\bigcup_n A_n\in\M$ -- for any disjoint sequence $(A_n)$ in $\A$. Moreover, since $\M$ is an algebra, in view of Remark~\ref{rem:CU} it now follows that $\bigcup_n B_n\in\M$ for any, not necessarily disjoint, sequence $(B_n)$ in $\A$. Finally, take any $E_1,E_2,\dots$ in $\M$. Then for each $n$ there is some $B_n\in\A$ such that $d_A(E_n,B_n)<\vp/2^n$. By the last paragraph, $\bigcup_n B_n\in\M$, and so, $d_A\big(\bigcup_n B_n,B\big)<\vp$ for some $B\in\A$. By Properties \eqref{i} and \eqref{iii} of $d_A$, \begin{equation*} \begin{aligned} d_A\Big(\bigcup\limits_n E_n,B\Big) &\le d_A\Big(\bigcup\limits_n E_n,\bigcup\limits_n B_n\Big) + d_A\Big(\bigcup\limits_n B_n,B\Big) \\ &\le\sum\limits_n d_A(E_n,B_n)+\vp \le\sum\limits_n \vp/2^n+\vp=2\vp. \end{aligned} \end{equation*} This completes the proof of Theorem~\ref{th:si-alg}. 
\end{proof} \begin{proof}[Proof of Theorem~\ref{th:si-add}] In view of the subadditivity property of $m^*$, it is enough to show that $m^*$ is finitely superadditive; that is, for any disjoint $E_1$ and $E_2$ in $\M$, one has \begin{equation}\label{eq:super} m^*(E_1\cup E_2)\overset{\text{?}}\ge m^*(E_1)+m^*(E_2). \end{equation} Take indeed any such $E_1$ and $E_2$. Take also any real $\vp>0$. If $m^*(E_1\cup E_2)=\infty$, then inequality \eqref{eq:super} is trivial. So, without loss of generality $m^*(E_1\cup E_2)<\infty$. Hence, in view of the ``disjoint'' version of the definition of the outer measure, for some sequence $(A_n)\in c_\d(E_1\cup E_2)$ we have \begin{equation}\label{eq:sum m<infty} \sum_n m(A_n)\le m^*(E_1\cup E_2)+\vp<\infty, \end{equation} and so, for any natural $N$ and $C:=C_N:=\bigcup_{n\le N}A_n\in\A_\fin$, \begin{equation}\label{eq:infty>} \infty>m^*(E_1\cup E_2)\ge\sum_n m(A_n)-\vp\ge\sum_{n\le N} m(A_n)-\vp=m(C)-\vp. \end{equation} Further, since $E_1$ and $E_2$ are in $\M$, one can find $B_1$ and $B_2$ in $\A$ such that \begin{equation}\label{eq:d<ep} d_C(E_\al,B_\al)<\vp; \end{equation} here and in what follows, $\al$ is $1$ or $2$. By Property~\eqref{iv} of the pseudometrics $d_A$, \begin{equation}\label{eq:m*1} m^*(C\cap E_\al)\le m^*(C\cap B_\al)+d_C(E_\al,B_\al)\le m^*(C\cap B_\al)+\vp. \end{equation} On the other hand, using first the monotonicity of $m^*$ and the condition $(A_n)\in c_\d(E_1\cup E_2)$, and then Proposition~\ref{prop:1} and \eqref{eq:sum m<infty}, we have \begin{equation}\label{eq:m*2} m^*(C^\c\cap E_\al) \le m^*\big(C^\c\cap\bigcup_n A_n\big) =m^*\big(\bigcup_{n>N} A_n\big) =\sum_{n>N}m(A_n)<\vp \end{equation} if $N$ is large enough, as will be assumed in the sequel. It follows from the subadditivity of $m^*$, \eqref{eq:m*1}, \eqref{eq:m*2}, and \eqref{eq:m*=m} that \begin{equation*} m^*(E_\al)\le m^*(C\cap E_\al)+m^*(C^\c\cap E_\al) \le m^*(C\cap B_\al)+2\vp=m(C\cap B_\al)+2\vp. 
\end{equation*} Therefore, \begin{equation}\label{eq:m*+m*<} \begin{aligned} m^*(E_1)+m^*(E_2)-4\vp&\le m(C\cap B_1)+m(C\cap B_2) \\ &= m\big(C\cap (B_1\cup B_2)\big)+m(C\cap B_1\cap B_2) \\ &\le m(C)+d_C(E_1,B_1)+d_C(E_2,B_2)\le m(C)+2\vp; \end{aligned} \end{equation} the penultimate inequality here holds by Property~\eqref{iii} of the pseudometrics $d_A$, taking also into account that $C\cap (B_1\cup B_2)\subseteq C$ and $C\cap E_1\cap E_2\subseteq E_1\cap E_2=\emptyset$, whereas the last inequality in \eqref{eq:m*+m*<} follows immediately from \eqref{eq:d<ep}. Comparing the multi-line display \eqref{eq:m*+m*<} with \eqref{eq:infty>}, we see that $m^*(E_1\cup E_2)\ge m^*(E_1)+m^*(E_2)-7\vp$, which concludes the proof of \eqref{eq:super} and thus the proof of Theorem~\ref{th:si-add}. \end{proof} \begin{proof}[Proof of Theorem~\ref{th:uniq}] Recall that the $\si$-finiteness of the measure $m$ on $\A$ means that there is a disjoint sequence $(D_n)$ in $\A_\fin$ such that $\bigcup_n D_n=\Om$. In the rest of this proof, let $(D_n)$ be such a sequence. In addition to the restriction $\bar m$ of $m^*$ to $\M$, let $\tm$ be another measure that extends $m$ from the algebra $\A$ to the $\si$-algebra $\M$. Take any $E\in\M$. By the ``disjoint'' version of the definition of the outer measure, for each real $r>m^*(E)$ there is a sequence $(A_n)\in c_\d(E)$ such that $\sum_n m(A_n)<r$. So, $\tm(E)\le\tm\big(\bigcup_n A_n\big)=\sum_n\tm(A_n)=\sum_n m(A_n)<r$, for any real $r>m^*(E)$. Thus, $\tm(E)\le m^*(E)=\bar m(E)$. So, for each $n$ one has $\tm(D_n\cap E)\le\bar m(D_n\cap E)\le m(D_n)<\infty$ and \break $\tm(D_n\cap E^\c)\le\bar m(D_n\cap E^\c)\le m(D_n)<\infty$. Adding now inequalities \break $\tm(D_n\cap E)\le\bar m(D_n\cap E)$ and $\tm(D_n\cap E^\c)\le\bar m(D_n\cap E^\c)$, we get $\tm(D_n)<\bar m(D_n)$ unless $\tm(D_n\cap E)=\bar m(D_n\cap E)$. But $\tm(D_n)<\bar m(D_n)$ contradicts the condition that both $\bar m$ and $\tm$ are extensions of the measure $m$ on $\A$. 
Thus, $\tm(D_n\cap E)=\bar m(D_n\cap E)$ for all $n$, whence $\tm(E)=\sum_n\tm(D_n\cap E)=\sum_n\bar m(D_n\cap E)=\bar m(E)$, for all $E\in\M$. \end{proof} The following characterization of the outer measure will be useful in the proof of Theorem~\ref{th:M comp}, and it may also be of independent interest. \begin{proposition}\label{prop:tight} Take any $E\subseteq\Om$. Then \begin{equation*} \begin{aligned} m^*(E)&=m^*_\circ(E):=\inf\{\bar m(S)\colon S\in\M,\ S\supseteq E\} \\ &=m^*_{\circ\circ}(E):=\inf\{\bar m(S)\colon S\in\si(\A),\ S\supseteq E\}. \end{aligned} \end{equation*} Moreover, the second of the two infima is attained, and hence the first one is attained. \end{proposition} \begin{proof} That $m^*_{\circ}(E)\le m^*_{\circ\circ}(E)$ follows because $\si(\A)\subseteq\M$. That $m^*(E)\le m^*_{\circ}(E)$ follows because for any $S\in\M$ such that $S\supseteq E$ one has $m^*(E)\le m^*(S)=\bar m(S)$. So, $m^*(E)\le m^*_{\circ}(E)\le m^*_{\circ\circ}(E)$. Thus, to complete the proof of Proposition~\ref{prop:tight}, it is enough to construct some $S\in\si(\A)$ such that $S\supseteq E$ and $\bar m(S)\le m^*(E)$. Such a construction is easy. Indeed, for each natural $k$ there is a sequence $\big(A_n^{(k)}\big)_{n=1}^\infty\in c_\d(E)$ such that $\si(\A)\ni B^{(k)}:=\bigcup_n A_n^{(k)}\supseteq E$ and $\bar m(B^{(k)})=\sum_n m(A_n^{(k)})\le m^*(E)+1/k$. Let now $S:=\bigcap_k B^{(k)}$. Then indeed $S\in\si(\A)$, $S\supseteq E$, and $\bar m(S)\le m^*(E)$. \end{proof} \begin{proof}[Proof of Theorem~\ref{th:M comp}] The first sentence of Theorem~\ref{th:M comp} will be verified in the following six steps. \textbf{Step 1: verification of $\M_\Co\subseteq\M_{\Co,\A_\fin}$.} This follows immediately, because $0\le d_A(E,S)\le d_X(E,S)$ for any subsets $E$, $S$, and $A$ of $X$. \textbf{Step 2: verification of $\M_{\Co,\A_\fin}\subseteq\M$.} Take any $E\in\M_{\Co,\A_\fin}$. Take next any real $\vp>0$ and any $A\in\A_\fin$. Then $d_A(E,S)=0$ for some $S\in\si(\A)$. 
Take any such $S$. Then, by Theorem~\ref{th:si-alg}, $S\in\M$ and therefore $d_A(S,B)<\vp$ for some $B\in\A$, whence, by the triangle inequality, $d_A(E,B)\le d_A(E,S)+d_A(S,B)<\vp$. So, $E\in\M$. Step 2 is complete. \textbf{Step 3: verification of $\M_{\Co,\A_\fin}\supseteq\M$.} Take any $E\in\M$. Take next any real $\vp>0$ and any $A\in\A_\fin$. Then for each natural $k$ there is some set $B_k\in\A$ such that $d_A(E,B_k)\le\vp/2^k$. Let $C:=\bigcup_j C_j$, where $C_j:=\bigcap_{k>j}B_k$. Note that $C\in\si(\A)$ and $C=\bigcup_{j>N} C_j$ for any natural $N$, since $C_j\subseteq C_{j+1}$ for all $j$. So, by Property~\ref{iii} of $d_A$, we have $d_A(E,C_j)\le\sum_{k>j}d_A(E,B_k)\le\sum_{k>j}\vp/2^k=\vp/2^j$ for all $j$, whence $d_A(E,C)\le\sum_{j>N}d_A(E,C_j)\le\vp/2^N$, for any natural $N$. Thus, $d_A(E,C)=0$, which means that $E\in\M_{\Co,\A_\fin}$. Step 3 is complete. \textbf{Step 4: verification of $\M\subseteq\M_\Ca$.} Take any $E\in\M$ and then any $F\subseteq\Om$. By Proposition~\ref{prop:tight}, for some $S\in\M$ one has $S\supseteq F$ and $\bar m(S)=m^*(F)$. Then $S\cap E\in\M$, $S\cap E^\c\in\M$, $S\cap E\supseteq F\cap E$, and $S\cap E^\c\supseteq F\cap E^\c$, whence \begin{equation*} \begin{aligned} m^*(F)=\bar m(S)=\bar m(S\cap E)+\bar m(S\cap E^\c) &=m^*(S\cap E)+m^*(S\cap E^\c) \\ &\ge m^*(F\cap E)+m^*(F\cap E^\c), \end{aligned} \end{equation*} so that $m^*(F)\ge m^*(F\cap E)+m^*(F\cap E^\c)$. The reverse inequality follows by the subadditivity of $m^*$. Thus, $E\in\M_\Ca$, for any $E\in\M$. Step 4 is complete. \textbf{Step 5: verification of $\M_\Ca\subseteq\M_{\Ca,\A_\fin}$.} This is trivial. \textbf{Step 6: verification of $\M_{\Ca,\A_\fin}\subseteq\M$.} Take any $E\in\M_{\Ca,\A_\fin}$ and then any $A\in\A_\fin$ and any real $\vp>0$. We want to show here that $d_A(E,B)\le3\vp$ for some $B\in\A$. The conditions $A\in\A_\fin$ and $E\in\M_{\Ca,\A_\fin}$ yield \begin{equation}\label{eq:Ca} \infty>m(A)=m^*(A)=m^*(A\cap E)+m^*(A\cap E^\c). 
\end{equation} Next, for some sequences $(S_n)\in c_\d(A\cap E)$ and $(T_n)\in c_\d(A\cap E^\c)$ we have \begin{equation}\label{eq:subset M} \begin{alignedat}{2} &\M\ni S:=\bigcup_n S_n\supseteq A\cap E,\quad &&\bar m(S)=\sum_n m(S_n)\le m^*(A\cap E)+\vp, \\ &\M\ni T:=\bigcup_n T_n\supseteq A\cap E^\c,\quad &&\bar m(T)=\sum_n m(T_n)\le m^*(A\cap E^\c)+\vp. \end{alignedat} \end{equation} Moreover, without loss of generality $S\cup T\subseteq A$; otherwise, replace $S_n$ and $T_n$ by $A\cap S_n$ and $A\cap T_n$, respectively. Since $S\supseteq A\cap E$ and $T\supseteq A\cap E^\c$, it follows that \begin{equation}\label{eq:S,T,A} S\cup T=A. \end{equation} By the first inequality in \eqref{eq:subset M}, $\sum_n m(S_n)<\infty$, because $m^*(A\cap E)\le m^*(A)=m(A)<\infty$. So, for some natural $N$ we have $\sum_{n>N} m(S_n)\le\vp$. Let now $B:=\bigcup_{n\le N}S_n$. Then $B\in\A$ and \begin{equation}\label{eq:d(S,B)} d_A(S,B)=m^*\Big(\bigcup_{n>N}S_n\Big)=\sum_{n>N} m(S_n)\le\vp. \end{equation} Since $S\subseteq A$ and $A\cap E^\c\subseteq T$, we have $S\setminus(A\cap E) \subseteq S\cap T$. Therefore and in view of \eqref{eq:subset M}, \eqref{eq:S,T,A}, and \eqref{eq:Ca}, \begin{multline*} d_A(E,S)=m^*\big(S\setminus(A\cap E)\big) \le m^*(S\cap T)=\bar m(S\cap T) \\ =\bar m(S)+\bar m(T)-\bar m(A) \le m^*(A\cap E)+\vp+m^*(A\cap E^\c)+\vp-m(A)=2\vp. \end{multline*} This and \eqref{eq:d(S,B)} yield the desired result: \begin{equation*} d_A(E,B)\le d_A(E,S)+d_A(S,B)\le3\vp. \end{equation*} This completes Step~6 and thus the entire proof of the first sentence of Theorem~\ref{th:M comp}. It remains to verify its second sentence. To do this, assume that $m$ is $\si$-finite, so that there is a disjoint sequence $(D_n)$ in $\A_\fin$ such that $\bigcup_n D_n=\Om$. Take now any $E\in\M_{\Co,\A_\fin}$. Since $E\in\M_{\Co,\A_\fin}$, for each natural $n$ there is some set $S_n\in\si(\A)$ such that $d_{D_n}(E,S_n)=0$. 
Here one can replace $S_n$ by $D_n\cap S_n$, so that without loss of generality $S_n\subseteq D_n$. Let then $S:=\bigcup_n S_n$, so that $S\in\si(\A)$ and $D_n\cap S=S_n$ for each $n$. Since $m^*$ is $\si$-additive on $\M=\M_{\Co,\A_\fin}$ and $\M\supseteq\si(\A)$, it follows that $d_X(E,S)=\sum_n d_{D_n}(E,S)=\sum_n d_{D_n}(E,D_n\cap S)=\sum_n d_{D_n}(E,S_n)=0$. So, $E\in\M_\Co$, for any $E\in\M_{\Co,\A_\fin}$ -- if $m$ is $\si$-finite. Theorem~\ref{th:M comp} is now completely proved. \end{proof} \bibliographystyle{abbrv}
\section{Introduction} We are witnessing a convergence of multi-modal AI~\cite{devlin2018bert,bao2021beit} where the architecture and the learning algorithm are unified for various data modalities. This exciting direction abandons the domain-specific knowledge for an individual data modality, but rather pursues a solution far more generalizable. For self-supervised representation learning, masked modeling~\cite{devlin2018bert} or simply the masking mechanism has emerged as an effective approach. The input data is represented by a 2D tensor with a sequential dimension and a channel dimension in a modality-agnostic way~\cite{baevski2022data2vec}. The sequential dimension can be spatial in images, temporal in audio, and syntactic in languages. The masking mechanism withholds information along the sequential dimension, and exploits it for supervision. As a result, models learned from the masking supervision demonstrate strong capability for capturing correlations between sequential tokens~\cite{he2022masked}. \begin{figure} \centering \includegraphics[width=1\linewidth]{figs/Figure1.pdf} \caption{ We represent data as a matrix with a sequential dimension and a channel dimension. As a generic data augmentation, masking drops tokens along the sequential dimension. The proposed randomized quantization instead withholds information along the channel dimension. In this figure, we use 1D data of 10 sequential tokens for illustration. Data values are coded in grayscale.} \label{fig:short} \end{figure} The channel dimension describes the data feature at each sequential location, for example, RGB color at a spatial location or spectrogram frequency at a time step. Despite being generic, masking approaches have neglected to exploit supervision along the channel dimension. While the number of channels for images is as small as three, the channels for audio and tabular data can be as many as hundreds. 
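To make the two axes concrete, a toy illustration of our own (not from the paper's code): for data shaped (sequence, channels), sequential masking zeroes out whole tokens (rows), whereas a channel-wise operation touches every token but only withholds within-channel detail.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 10 sequential tokens, 4 channels (e.g., spectrogram frames)
x = rng.normal(size=(10, 4))

# Masking withholds information along the sequential dimension:
# drop roughly 40% of the tokens (rows) entirely.
keep = rng.random(10) > 0.4
masked = x * keep[:, None]

# A channel-wise transform instead acts along the channel dimension,
# preserving every token; rounding here is a crude stand-in for the
# quantization introduced later in the paper.
channelwise = np.round(x, decimals=0)

assert masked.shape == x.shape == channelwise.shape
```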
Formulating the self-supervision from the channel dimension holds much potential for representation learning. In this paper, we draw a connection between masking and quantization, and explore quantization as a novel form of masking along the channel dimension. The data in each channel is dynamically quantized through a non-uniform quantizer, with both the quantization bins and the per-bin quantization values randomly sampled. In this way, information within each quantization bin is masked out, yet information across bins is retained. The information removed by quantization is controlled by the number and size of the bins, which has been rigorously studied in theory~\cite{shannon1959coding}. The larger the distortion rate, the stronger the quantization when it is used as an augmentation for representation learning. The extreme case of using only a single bin is equivalent to dropping the entire channel. We systematically study various quantization configurations for their effects as a data augmentation, for example, with respect to the number of bins, uniform or non-uniform bins, and methods to select quantization values. We apply the randomized quantizer as the only augmentation, or in conjunction with augmentations along the sequential dimension, on the state-of-the-art Siamese representation learning methods MoCo-v3~\cite{chen2021empirical} and BYOL~\cite{niizumi2021byol}. In comparisons with previous domain-agnostic augmentations based on MixUp~\cite{zhang2017mixup}, our approach achieves state-of-the-art results by a large margin on vision, audio, and point cloud tasks, as well as on the DABS benchmark. Compared with domain-specific augmentations, our approach achieves competitive performance against handcrafted augmentations on vision, and state-of-the-art performance on audio and 3D point clouds. 
Our contributions can be summarized as follows: \begin{itemize} \vspace{-5pt} \item[-] We propose a simple yet effective data augmentation based on quantization, which is orthogonal to masking along the sequential dimension. \item[-] We demonstrate the generality and strong performance of randomized quantization for vision, audio, and 3D point clouds in a data-agnostic way. \item[-] We show that randomized quantization can augment intermediate features of a network on the DABS benchmark, which consists of numerous modalities. \end{itemize} \section{Approach} This paper proposes a novel generic data augmentation for representation learning based on quantization. We first provide preliminaries on quantization. We then introduce two factors to inject randomness into the quantization procedure. \subsection{Preliminaries: Quantization} \begin{figure} \centering \includegraphics[width=1\linewidth]{figs/quantizer.pdf} \caption{An illustration of a quantizer with five bins.} \label{fig:quantizer} \end{figure} A quantizer is a function defined by a set of non-overlapping intervals or bins $S = \{S_i=[a_i, a_{i+1})\},\ i=0,1,\dots,n-1$, and a set of reproduction values $y_i$, where $n$ is the number of intervals and reproduction values. The quantizer maps values within an interval to a single scalar, defined as $q(x) = y_i$ for $x\in S_i$. Formally, it can be written as \begin{equation} q(x) = \sum_i y_i \cdot 1_{S_i}(x), \end{equation} where the indicator function $1_{S_i}(x) = 1$ if $x\in S_i$ and $1_{S_i}(x) = 0$ otherwise. Figure~\ref{fig:quantizer} gives an illustration of a quantizer with five intervals. Quantization represents the original signal using a finite number of bits and hence introduces error into the recovered signal. The central research problem is to find better tradeoffs between communication bandwidth and reproduction error. Quantization can be categorized into uniform and non-uniform quantization. 
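As a minimal sketch of the mapping $q(x)=\sum_i y_i\cdot 1_{S_i}(x)$ above (our own illustration; the function and variable names are not from the paper), a five-bin uniform quantizer can be written as:

```python
import numpy as np

def quantize(x, edges, values):
    """Map each entry of x to the reproduction value of its bin.

    edges:  sorted bin endpoints a_0 < a_1 < ... < a_n (n bins)
    values: one reproduction value y_i per bin S_i = [a_i, a_{i+1})
    """
    # np.digitize against the interior endpoints gives the bin index
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, len(values) - 1)
    return values[idx]

# A uniform 5-bin quantizer on [0, 1): evenly spaced edges, midpoint values
edges = np.linspace(0.0, 1.0, 6)          # a_0, ..., a_5
values = (edges[:-1] + edges[1:]) / 2.0   # y_i = bin midpoints

x = np.array([0.03, 0.41, 0.97])
print(quantize(x, edges, values))         # -> [0.1 0.5 0.9]
```

A non-uniform quantizer uses the same mapping with unevenly spaced `edges` or `values`.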
A uniform quantizer has intervals and values which are evenly spaced, whereas a non-uniform quantizer allows either intervals or values to be unevenly spaced. Uniform quantizers are amenable to hardware deployment. However, non-uniform quantizers may perform better depending on the probability distribution of $x$. \begin{figure*}[t] \centering \includegraphics[width=1\linewidth]{figs/Figure3.pdf} \caption{Visualization of quantized images with different numbers of bins. The images are quantized by a uniform quantizer. Using fewer than three quantization bins causes severe information reduction, while fifteen or more bins lead to a negligible difference from the original image. An intermediate number of bins (e.g., five to ten) is well-suited for image augmentation.} \label{fig:img_bins} \end{figure*} \begin{figure} \centering \includegraphics[width=1\linewidth]{figs/Figure4_vis_region_nubmer.pdf} \caption{Ablation study on the number of quantization bins. The peak performance is reached at 8 bins. Fewer bins deliver heavier augmentations and more bins deliver weaker augmentations.} \label{fig:numbins} \end{figure} \subsection{Randomized Quantization as Augmentation} We aim to exploit quantization as a data withholding tool for representation learning. The information within each quantization bin is withheld, while the information across bins is retained. For data augmentation, a key aspect is the complexity of the augmentation space. We design the complexity of quantization augmentation by randomizing the intervals and the reproduction values. Concretely, given $S_i = [a_i, a_{i+1})$, the endpoints $a_i$ are generated by \begin{equation} a_0, a_1, \dots, a_{n-1} = \text{sort}(a'_0, a'_1, \dots, a'_{n-1}) \end{equation} \begin{equation} \label{eq:random1} a'_i = U(\text{min}(x), \text{max}(x)),\quad i=0,1,\dots,n-1, \end{equation} where $U$ denotes random sampling with a uniform distribution over the interval. 
The reproduction value $y_i$ is randomly sampled within the corresponding interval, \begin{equation} \label{eq:random2} y_i = U(a_i, a_{i+1}). \end{equation} The resultant randomized quantizer is non-uniform. The number of quantization bins $n$ is the hyperparameter of the augmentation. \subsection{Data-Agnostic Augmentation} The proposed randomized quantization augmentation can be applied to the channel dimension of any data modality. The physical interpretation of the augmentation depends on the nature of the data modality. In Figure~\ref{fig:vis_img_audio}, we visualize the augmentations for images and audio. On images, it removes high-frequency details but highlights object boundaries and edges. It also alters color appearance significantly. On audio, listening to the augmented sound, we find that the augmentation tends to enhance specific frequency signals, such as low-frequency or high-frequency sounds. On point clouds, where the channel dimension represents xyz coordinates, it tends to downsample local structures but highlights the global shape. The augmentation could instead be applied to intermediate embeddings of a neural network, as studied in Section~\ref{exp_dabs}. The physical meaning of the augmentation on feature embeddings is less interpretable. \begin{table}[t] \centering \caption{ Ablation study of the two randomness factors for randomized quantization described in Eq.~\ref{eq:random1} and Eq.~\ref{eq:random2}. We examine the effect of randomized bins and random reproduction values for each bin. These two factors increase the complexity of the augmentation and significantly improve the performance. 
} \label{tab:abl_non_uniform} { \begin{tabular}{c|c|c|c} & random bins & random values & top1 acc\\ \hline baseline & \xmark & \xmark & 50.0 \\ + quantize & \xmark & \xmark & 54.8 \\ + quantize &\xmark & \cmark & 62.6 \\ + quantize & \cmark & \xmark & 66.0 \\ + quantize & \cmark & \cmark &\textbf{67.9}\\ \end{tabular} } \end{table} \subsection{Siamese Representation Learning} Siamese representation learning or contrastive learning relies heavily on the quality of the augmentations~\cite{niizumi2021byol,zhao2021distilling}. We apply the proposed augmentation on Siamese representation learning. At each training iteration, we sample two views from a data instance using randomized quantization by itself or in conjunction with other augmentations. Two views are processed by a deep neural network to extract feature representations. Loss terms such as InfoNCE~\cite{oord2018representation} and L2 are applied on the two views. We follow MoCo-v3~\cite{chen2021empirical} and BYOL~\cite{niizumi2021byol} in this paper, and we refer readers to the original papers for details. \section{Detailed results on the DABS benchmark} \begin{table*}[h] \centering \label{tab:cmp_dabs} \setlength{\tabcolsep}{12pt} \begin{tabular}{l|c|c|c|c|c} Dataset& Domain & Metric & None & e-Mix~\cite{tamkin2021dabs}& Ours\\ \hline CIFAR-10 & Images & Accuracy & 24.20 & 39.43 & \textbf{47.70} \\ Birds & Images & Accuracy & 1.62 & 3.86 & \textbf{4.16} \\ VGG Flower & Images & Accuracy & 9.03 & 25.96 & \textbf{30.20} \\ DTD (Textures) & Images & Accuracy & 7.39 & 8.83 & \textbf{10.90} \\ GTSRB (Traffic) & Images & Accuracy & 14.33 & 65.07 & \textbf{86.80} \\ FGVC-Aircraft & Images & Accuracy & 2.70 & 10.15 & \textbf{12.60} \\ LibriSpeech Sp. ID & Speech & Accuracy & 17.12 & 60.18 & \textbf{62.70} \\ VoxCeleb Sp. 
ID & Speech & Accuracy & 0.59 & 2.43 & \textbf{2.69} \\ AudioMNIST & Speech & Accuracy & 33.13 & 80.35 & \textbf{82.80} \\ Google Speech & Speech & Accuracy & 4.87 & 19.22 & \textbf{26.00} \\ Fluent Locations & Speech & Accuracy & 62.09 & 60.93 & \textbf{65.20} \\ Fluent Actions & Speech & Accuracy & 26.15 & 29.87 & \textbf{31.40} \\ Fluent Objects & Speech & Accuracy & 30.13 & 39.89 & \textbf{40.80} \\ COLA & English Text & Pearson Corr. & 0.00 & \textbf{8.40} & 8.27 \\ MNLI\_Matched & English Text & Accuracy & 35.80 & \textbf{37.80} & 36.70 \\ MNLI\_Mismatched & English Text & Accuracy & 36.60 & \textbf{37.50} & 37.00 \\ MRPC & English Text & Accuracy & 68.40 & 66.20 & \textbf{68.90} \\ QNLI & English Text & Accuracy & 57.70 & \textbf{57.90} & 57.40 \\ QQP & English Text & Accuracy & 65.10 & 64.30 & \textbf{65.50} \\ RTE & English Text & Accuracy & \textbf{54.50} & 51.30 & 52.70 \\ SST2 & English Text & Accuracy & 57.00 & \textbf{58.10} & 55.80 \\ STSB & English Text & Accuracy & 4.20 & 11.40 & \textbf{13.70} \\ WNLI & English Text & Accuracy & 43.60 & 47.90 & \textbf{50.70} \\ PAWS-X EN & Multilingual Text & Accuracy & \textbf{57.85} & 54.85 & 56.20 \\ PAWS-X FR & Multilingual Text & Accuracy & \textbf{57.80} & 55.90 & 55.90\\ PAWS-X ES & Multilingual Text & Accuracy & \textbf{58.55} & 55.50 & 54.80 \\ PAWS-X DE & Multilingual Text & Accuracy & \textbf{58.85} & 56.50 & 55.50 \\ PAWS-X ZH & Multilingual Text & Accuracy & \textbf{57.35} & 55.35 & 54.20 \\ PAWS-X JP & Multilingual Text & Accuracy & \textbf{57.55} & 57.35 & 56.70 \\ PAWS-X KO & Multilingual Text & Accuracy & \textbf{58.80} & 57.70 & 56.60 \\ PAMAP2 & Sensor & Accuracy & 69.81 & 79.48 & \textbf{84.90} \\ CheXpert & Chest X-Rays & Avg. AUROC & 68.14 & 72.40 & \textbf{73.40} \\ ChestX-ray8 & Chest X-Rays & Avg. 
AUROC & 57.00 & 63.00 & \textbf{64.70} \\ VQA & Vision/Language & Accuracy & \textbf{57.50} & 48.90 & 54.40 \\ \end{tabular} \caption{Detailed comparisons with e-Mix~\cite{tamkin2021dabs} on the DABS benchmark.} \end{table*} {\small \bibliographystyle{ieee_fullname} \section{Conclusion} We propose randomized quantization as a novel data augmentation tool for self-supervised representation learning. Quantization effectively withholds information within the quantization bins but retains the information across bins. It can be applied to arbitrary data along the channel dimension without domain-specific knowledge. Randomized quantization significantly outperforms existing domain-agnostic augmentations based on Mixup. It compares favorably against domain-specific augmentations on vision, and attains state-of-the-art results on audio and 3D point clouds. We also explored its capability on the feature representations in a neural network for a wide range of data modalities. Experimental results on the DABS benchmark demonstrate state-of-the-art results for speech, text, images, and multiple sensors. Jointly applying augmentations on the input data and feature representations is a promising direction, and we leave it for future work. \section*{Broader Impacts} Although the proposed augmentation is generic in its formulation, it is not guaranteed to work beyond the modalities investigated in this paper. The study is limited to classification tasks. Generalization to other downstream tasks remains under-explored. 
{\small \bibliographystyle{ieee_fullname} \section{Related Works} \noindent \textbf{Self-supervised learning} extracts labels from the data itself and tasks the network to learn transferable representations. Among the earliest forms of self-supervised models are auto-encoders~\cite{hinton1993autoencoders} and generative models~\cite{hinton2009deep}. But since the input and the output are identical, a neural network may easily find shortcuts and use memorization to solve the generation task. Advances in recent years show that information needs to be withheld from the input to prevent cheating~\cite{doersch2015unsupervised}. Pretext tasks such as colorization~\cite{zhang2016colorful}, inpainting~\cite{pathak2016context}, and jigsaw puzzles~\cite{noroozi2016unsupervised} were proposed in vision, while masked modeling~\cite{devlin2018bert}, next sentence prediction~\cite{kiros2015skip,jernite2017discourse}, and replaced word prediction~\cite{clark2020electra} were developed in natural language processing. Speediness~\cite{benaim2020speednet,huang2021ascnet} and temporal order~\cite{wei2018learning,misra2016shuffle} have been exploited for video representation learning. 
Due to space limitations, we omit the literature for speech~\cite{baevski2020wav2vec}, tabular data~\cite{arik2021tabnet}, graph-structured data~\cite{sun2019infograph} and many other modalities. The optimal pretext task for each target problem may be different. However, there exists enormous interest in obtaining a single foundation model~\cite{bommasani2021opportunities} for all downstream applications. Instead of withholding data for supervision, contrastive models~\cite{wu2018unsupervised,oord2018representation} create new data via data augmentation and compare features extracted using a Siamese network for supervision. Siamese representation learning can be categorized by whether to use explicit negatives~\cite{niizumi2021byol}, ways to define negatives~\cite{bachman2019learning}, and various loss formulations~\cite{caron2021emerging,zbontar2021barlow}. However, the main driving signal for learning lies in the data augmentations. \vspace{2pt} \noindent \textbf{Data augmentation} enlarges the number of data instances by leveraging prior knowledge of the data and target problem properties. For supervised learning, data augmentation aids in reducing overfitting and regularization~\cite{zhang2021understanding}. For self-supervised learning, the information gap created by two augmentations provides learning supervision. Typically, the data augmentation function extracts partial information from the data and optionally adds corruptions. Popular image-specific augmentations include cropping, scaling, color jittering, Gaussian blurring, cut-out~\cite{devries2017improved}, cut-mix~\cite{yun2019cutmix}, and auto-augment, which searches for a data augmentation policy~\cite{cubuk2018autoaugment}. In natural language processing, synonym replacement~\cite{wei2019eda}, back translation~\cite{brislin1970back}, random word insertion and deletion are most common. 
For audio and speech, altering the pitch, changing the playback speed, and masking either along the time axis or the frequency axis~\cite{park2019specaugment} may improve performance. Additionally, augmenting data through a generative model~\cite{bowles2018gan} such as a GAN is a viable approach. \vspace{2pt} \noindent \textbf{Domain-agnostic augmentation} aims to generalize modality-specific and domain-specific augmentations into a unified formulation. Finding such general priors in data is challenging. One line of work follows Mixup~\cite{zhang2017mixup}, which is initially proposed to improve empirical risk minimization of supervised learning by linearly interpolating data and labels. Because of its generality, later works have explored its application on other data modalities~\cite{guo2020nonlinear}, a wide range of problems~\cite{lucas2018mixed,hendrycks2019augmix}, as well as representation learning~\cite{tamkin2021dabs,lee2020mix, verma2021towards}. Another important line of work generalizes masked modeling~\cite{devlin2018bert}, which was initially proposed for language modeling, to other data modalities and domains~\cite{he2022masked,tong2022videomae,xu2022masked}. The masking mechanism samples a subset of the input data, while Mixup introduces additional corruptions which are not observed in the original data instance. Our randomized quantization augments along the channel dimension in a manner orthogonal to masking. \begin{figure*} \centering \includegraphics[width=1\linewidth]{figs/Figure5_multiple_modality_1.pdf} \caption{Visualizing randomized quantization augmentation on images, audio, and 3d point clouds. The first row presents the original signal, and the bottom three rows are augmented views. 
Randomized quantization alters color and enhances edges on images, spatially subsamples coordinates on point clouds, and enhances frequency channels for audio.} \label{fig:vis_img_audio} \end{figure*} \vspace{2pt} \noindent \textbf{Quantization} represents numerical values with a fixed discrete set of numbers so as to reduce communication bandwidth and maintain representation quality. The rounding error was first analyzed a century ago~\cite{sheppard1897calculation}, and the theory based on variable-rate quantization~\cite{shannon1948mathematical} and Huffman coding~\cite{huffman1952method} revolutionized the communications industry. We refer readers to a survey~\cite{gray1998quantization} that describes this area from a theoretical perspective. Quantization for efficient neural networks~\cite{gholami2021survey} aims to reduce neural network latency while maintaining model accuracy. The advances of half-precision~\cite{banner2018scalable,wang2018training} and mixed-precision training~\cite{courbariaux2014training,gupta2015deep,micikevicius2017mixed} have accelerated model training by an order of magnitude. Works have shown that neural networks can be completely binarized~\cite{lin2017towards,wu2015adjustable,courbariaux2015binaryconnect} with reasonable performance. Stochastic quantization~\cite{chen2020statistical,fan2020training,bengio2013estimating} is a technique for learning and compressing model weights in a way that avoids local minima induced by low-precision weight representations. A prior work~\cite{fu2022contrastive} shows that weight perturbations by quantization can enhance contrastive learning, especially in semi-supervised scenarios. This work is the first to consider quantization as a data augmentation, in particular for self-supervised representation learning. In this context, the goal of quantization is not to reduce reproduction error but to effectively withhold information. The information gap between two random quantizations provides the supervision.
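The information-withholding view of quantization can be made concrete with a short sketch. Below is a minimal NumPy implementation of a randomized scalar quantizer in the spirit described above (our illustration, not the authors' code; placing the outer bin edges at the signal's minimum and maximum is our assumption):

```python
import numpy as np

def randomized_quantize(x, n_bins=8, rng=None):
    """Randomly quantize a 1-D signal, e.g., one channel of a data instance.

    Bin edges are sampled uniformly over [min(x), max(x)] and sorted; each
    bin is represented by a value drawn uniformly inside it. This is a
    sketch of the idea, not the authors' implementation.
    """
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = float(x.min()), float(x.max())
    # Random, sorted interior edges; the outer edges cover the full range.
    inner = np.sort(rng.uniform(lo, hi, size=n_bins - 1))
    edges = np.concatenate(([lo], inner, [hi]))
    # One random reproduction value inside each bin.
    values = rng.uniform(edges[:-1], edges[1:])
    # Map every element to the reproduction value of its bin.
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    return values[idx]
```

Applying the function independently to each channel of a data instance yields one augmented view; a second call with fresh random bins and reproduction values yields the other view.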
\section{Ablation Study} \begin{table}[t] \centering \caption{ Representation learning with randomized quantization augmentation benefits from more training epochs.} \label{tab:training_length} \begin{tabular}{c|c|c|c} &100-ep&300-ep&800-ep\\ \hline MoCo-v3 &67.9&71.6& 72.1\\ BYOL & 67.2&71.0 & 71.6\\ \end{tabular} \end{table} \begin{table}[t] \centering \caption{Comparisons with alternative domain-agnostic augmentation techniques under the linear classification protocol on ImageNet. CR is short for center crop, and RRC is short for random resized crop.
Our randomized quantization approach achieves state-of-the-art results among prior domain-agnostic augmentations.} \label{tab:cmp_daa} \setlength{\tabcolsep}{12pt} \scalebox{1}{ \begin{tabular}{l|c|c} Augmentations&MoCo-v3&BYOL\\ \hline CR&10.1&9.9\\ CR + DACL~\cite{verma2021towards}&32.7&33.2\\ CR + i-Mix~\cite{lee2020mix}&30.3&28.7\\ CR + Ours&\textbf{42.9}& \textbf{43.0}\\ \hline RRC&50.0&49.3\\ RRC + DACL~\cite{verma2021towards} &57.2&57.6\\ RRC + i-Mix~\cite{lee2020mix}& 55.4&49.9\\ RRC + Ours&\textbf{67.9}&\textbf{67.2}\\ \end{tabular} } \end{table} We choose visual representation learning for an ablation study. Random resized cropping is taken as the baseline augmentation, and we apply our randomized quantization after it. Following the MoCo-v3 framework~\cite{chen2021empirical}, we use ResNet-50~\cite{he2016deep} as the backbone network. The optimizer is consistent with MoCo-v3, and the network is optimized for 100 epochs. Representation learning is conducted on the ImageNet-1K dataset~\cite{deng2009imagenet}, and linear classification accuracy is reported on the validation set. We ablate three design factors of the proposed quantization-based augmentation which affect its ability to mask channel-wise information. \vspace{2pt} \noindent \textbf{Randomizing bins.} The performance of representation learning depends on the complexity of the pretext tasks created from random augmentations. In Table~\ref{tab:abl_non_uniform}, the baseline approach using the random resized crop augmentation obtains 50.0\% top-1 accuracy. Using a fixed uniform quantizer improves the performance modestly to 54.8\%. Randomizing the locations and sizes of bins allows for uneven masking and creates more useful pretext tasks. It improves the performance significantly to 66.0\%. \vspace{2pt} \noindent \textbf{Randomizing reproduction values.} Quantization is also affected by how each bin is represented. Commonly, the values within a bin's range are represented by the midpoint.
As an alternative, we also consider taking a random value in the range. Intuitively, random reproduction values introduce bias into the quantization error, making it no longer zero-mean and bringing a stronger augmentation effect. This is found to benefit representation learning, yielding an increase of 1.9 points on top of randomizing the bins in Table~\ref{tab:abl_non_uniform}. \vspace{2pt} \noindent \textbf{Number of quantization bins.} Figure~\ref{fig:numbins} illustrates the effect of various numbers of quantization bins. Intuitively, fewer bins lead to a stronger masking effect and higher cross-view variation. We vary the number of bins and find strong performance with 5--10 bins, peaking at 8 bins. This observation is similar to spatial masking in MAE~\cite{he2022masked}, where an optimal masking ratio is found. In Figure~\ref{fig:img_bins}, we visualize quantized images with different numbers of quantization bins. To make the visualization consistent and comparable, we use the uniform quantizer in this case. It can be observed that too much information is withheld when using too few bins, while too many bins withhold too little information. \vspace{2pt} \noindent \textbf{Training epochs.} We further study training with the augmentation for more epochs. In Table~\ref{tab:training_length}, the performance improves from 67.9\% with 100 epochs to 71.6\% with 300 epochs and 72.1\% with 800 epochs. With this complex augmentation, the network benefits from longer training. \section{Multi-Modality Experiments} We examine pre-training with the proposed augmentation across a variety of modalities including 1) vision (Section~\ref{exp_img}); 2) 3D point clouds (Section~\ref{exp_point_cld}); 3) audio (Section~\ref{exp_audio}); and 4) the DABS benchmark~\cite{tamkin2021dabs} (Section~\ref{exp_dabs}), which is comprised of data from multiple domains: natural images, multi-channel sensor data, English text, speech recordings, multilingual text, chest x-rays, and images with text descriptions.
The hyper-parameter $n$, the number of bins, is tuned for each modality. We leave the description of the corresponding datasets, settings and evaluation metrics to each section. \subsection{Images} \label{exp_img} \begin{table}[t] \centering \caption{ Comparisons with image-specific augmentations under the linear classification protocol on ImageNet. CJ stands for color jittering, and Full includes random resized crop, color jittering, grayscaling, Gaussian blurring and solarization. Randomized quantization is stronger than color jittering by a large margin. It falls behind the full handcrafted augmentations by just 1\%. } \label{tab:cmp_specific} \setlength{\tabcolsep}{12pt} \begin{tabular}{l|c|c} Method& MoCo-v3 & BYOL\\ \hline Ours & 42.9 & 43.0 \\ RRC & 50.0 & 49.4\\ RRC + CJ & 60.1 & 61.1\\ RRC + Ours & 67.9 & 67.2\\ Full & \textbf{68.9} & \textbf{68.9} \end{tabular} \end{table} We compare the proposed randomized quantization augmentation against domain-agnostic augmentation baselines, as well as domain-specific augmentations designed for images. The number of quantization bins is chosen as $n=8$. The experimental protocol follows the ablation study. \vspace{2pt} \noindent \textbf{Comparisons with domain-agnostic augmentations.} Recent works on domain-agnostic augmentation are predominantly adapted from Mixup~\cite{zhang2017mixup}. For example, i-Mix~\cite{lee2020mix} linearly interpolates input data, with corresponding virtual labels generated from the current batch. Similarly, DACL~\cite{verma2021towards} interpolates inputs but treats the interpolation as a way of adding noise to the original data. With two calls of the mixing function, two views are created for one image. In Table~\ref{tab:cmp_daa}, we compare our approach to these methods under two spatial operations: center crop (CR) and random resized crop (RRC). Center crop amounts to no augmentation, and random resized crop is frequently used in vision applications.
Our evaluation is based on two Siamese representation learning frameworks, MoCo-v3 and BYOL, since BYOL is reported to respond differently to augmentations. Randomized quantization outperforms the Mixup-based augmentations. As a standalone augmentation, randomized quantization obtains an accuracy of 42.9\% with MoCo-v3, which outperforms DACL and i-Mix by a large margin. In conjunction with random resized crop, a 10\% margin is maintained. The results using MoCo-v3 and BYOL training objectives are similar. Overall, randomized quantization achieves state-of-the-art results compared to domain-agnostic baselines in the vision domain. \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{figs/Figure_color_jitter_vs_ours.pdf} \caption{Visual comparisons between color jittering and randomized quantization. Randomized quantization exhibits greater change in visual appearance and stronger edge enhancement.} \label{fig:img_cmp_color_jittering} \end{figure} \vspace{2pt} \noindent \textbf{Comparisons with domain-specific augmentations.} We further compare with image-specific augmentations for visual representation learning in Table~\ref{tab:cmp_specific}. We find that randomized quantization is much stronger than color jittering, which is carefully designed with pixel-level prior knowledge such as brightness, contrast, and saturation. In Figure~\ref{fig:img_cmp_color_jittering}, we visualize color jittering and our augmentation. It can be observed that our augmentation leads to stronger and more diverse changes in visual appearance than color jittering. Our augmentation is 1\% weaker than the full augmentation, which applies random resized crop, color jittering, grayscaling, Gaussian blurring and solarization successively. \begin{table}[t] \centering \caption{Linear probing and finetuning results for the shape classification task on the ModelNet40 dataset. Pre-training is conducted on the ShapeNet dataset.
Our augmentation improves the classification accuracy substantially at various ratios of training data, especially with very limited data (1\%). ``lin'' and ``ft'' denote linear probing and finetuning respectively.} \label{tab:pointcls} \setlength{\tabcolsep}{4pt} \begin{tabular}{l|c|c|c|c|c|c} &1\%&2\%&5\%&10\%&20\%&100\%\\ \hline FoldingNet (lin) &56.4 &66.9& 75.6& 81.2& 83.6&88.4\\ MID-FC (lin)&61.5 &73.1& \textbf{80.2}& 84.2& {86.9}&90.3\\ Ours (lin)& \textbf{66.7} & \textbf{74.3}&80.0&\textbf{84.5}& \textbf{87.2} &\textbf{90.5}\\ \hline Scratch &58.5& 71.2& 80.1& 85.4& 88.7&92.9\\ MID-FC (ft)&67.3 &76.5& 83.6& {88.4}& 90.2&\textbf{93.0}\\ Ours (ft)&\textbf{71.3}& \textbf{78.5}& \textbf{84.9}& \textbf{88.6}& \textbf{90.6}&\textbf{93.0}\\ \end{tabular} \end{table} \begin{table}[t] \centering \caption{Linear probing and finetuning results for the shape segmentation task on the ShapeNet Part dataset. Pre-training is conducted on ShapeNet. Our augmentation improves the performance substantially at various ratios of training data. ``lin'' and ``ft'' denote linear probing and finetuning respectively.
} \label{tab:pointseg} \setlength{\tabcolsep}{4pt} \begin{tabular}{l|c|c|c|c|c|c} &\multicolumn{3}{c|}{C.mIoU}&\multicolumn{3}{c}{I.mIoU}\\ \cline{2-7} &1\%&5\%& 100\%&1\%&5\%& 100\%\\ \hline Multi-Task (lin) & - & 73.9 &- & 68.2 & 80.7 & - \\ MID-FC (lin)&66.2& 76.5&82.8&72.4&80.9&84.1\\ Ours (lin)&\textbf{70.6}&\textbf{76.9}&\textbf{82.9}&\textbf{77.4}&\textbf{81.9}&\textbf{84.3}\\ \hline MID-FC (ft)&67.6&77.8&84.3&76.2&82.1&\textbf{85.5}\\ Ours (ft)&\textbf{69.5}&\textbf{78.4}&\textbf{ 84.4}&\textbf{77.8}&\textbf{82.3}&\textbf{85.5} \\ \end{tabular} \end{table} \begin{table*}[t] \centering \caption{Downstream dataset details for audio representation learning.} \label{tab:audio_datasets} \setlength{\tabcolsep}{4pt} \begin{tabular}{c|c|c|c|c} Name&Task&\#Classes &Data size& Avg duration (s)\\ \hline NSynth (NS)~\cite{engel2017neural}&Musical instrument classification&11&305,979&4.0\\ UrbanSound8K (US8K)~\cite{salamon2014dataset}&Urban sound classification &10&8,732&4.0\\ VoxCeleb1 (VC1)~\cite{nagrani2017voxceleb}&Speaker identification&1,211&153,514&8.2\\ VoxForge (VF)~[from~Voxforge.org]&Language identification&6&176,438&5.8\\ Speech Commands V2 (SPCV2)~\cite{warden2018speech}&Command classification &35&105,829&1.0\\ Speech Commands V2 (SPCV2/12)~\cite{warden2018speech}&Command classification &12&105,829&1.0\\ \end{tabular} \end{table*} \begin{table*}[t] \centering \caption{Linear probing results for audio representation learning on six downstream datasets. Pre-training is conducted on the AudioSet dataset. 
Our model outperforms BYOL-A on four out of the six datasets, achieving an improvement of 1.8\% on average.} \label{tab:cmp_audio} { \setlength{\tabcolsep}{12pt} \begin{tabular}{c|cccccc|c} Method&NS&US8K&VC1&VF&SPCV2/12&SPCV2&Average \\ \hline TRILL~\cite{shor2020towards}& - &-&17.9&88.1&74.9&-&-\\ COLA~\cite{saeed2021contrastive}&63.4&-&29.9&71.3&71.7&62.4&-\\ OpenL3~\cite{cramer2019look}&-&78.2&-&-&-&-&-\\ COALA~\cite{favory2020coala}&73.1&72.7&-&-&-&-&-\\ COLA*~\cite{saeed2021contrastive}&70.2&78.5&30.4&79.5&76.7&76.8&68.7\\ BYOL-A~\cite{niizumi2021byol} &74.1&79.1&40.1&90.2&91.0&92.2&77.8\\ \hline Ours&\textbf{74.2}&78.0&\textbf{45.7}&\textbf{92.6}&\textbf{95.1}&92.1&\textbf{79.6}\\ \multicolumn{8}{l}{\small* denotes a re-implemented result by the BYOL-A authors.} \end{tabular} } \end{table*} \subsection{3D Point Clouds} \label{exp_point_cld} We explore self-supervised representation learning on point clouds, which are represented by unordered sets of xyz coordinates. The pretraining is conducted on the ShapeNet~\cite{chang2015shapenet} dataset consisting of 57,449 3D shapes. Octree-based Sparse CNN~\cite{wang2017cnn} is used as the backbone network, which takes 3D point clouds as input and extracts point features as well as shape features. We follow the MID-Net~\cite{wang2021unsupervised} model as the baseline, which is trained with a point-wise and an instance-wise contrastive loss. The model is trained with an SGD optimizer with a batch size of 32 and a weight decay of 5e-4. The initial learning rate is 0.03 and decreases by a factor of 10 after 200 and 300 epochs, and the training process terminates after 400 epochs. For data augmentation, we follow the baseline~\cite{wang2021unsupervised} to normalize each point cloud into a unit sphere, randomly rotate it along its upright axis, and randomly translate and scale it in $[-0.25, 0.25]$ and $[0.75, 1.25]$ respectively.
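For reference, the baseline augmentation pipeline just described can be sketched in a few lines (a minimal NumPy sketch under our reading of the text; the function name and the choice of $z$ as the upright axis are our assumptions):

```python
import numpy as np

def base_augment(points, rng=None):
    """Sketch of the baseline point-cloud augmentation described in the
    text: normalize into the unit sphere, randomly rotate about the
    upright axis (assumed z here), then randomly scale in [0.75, 1.25]
    and translate in [-0.25, 0.25]."""
    rng = np.random.default_rng() if rng is None else rng
    pts = points - points.mean(axis=0)                 # center the shape
    pts = pts / np.linalg.norm(pts, axis=1).max()      # fit the unit sphere
    theta = rng.uniform(0.0, 2.0 * np.pi)              # rotation about z
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    pts = pts @ rot.T
    pts = pts * rng.uniform(0.75, 1.25)                # random scale
    pts = pts + rng.uniform(-0.25, 0.25, size=3)       # random translation
    return pts
```

The randomized quantization of the xyz channels would then be applied on top of this base pipeline.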
For evaluation, we experiment on two downstream tasks: shape classification and shape segmentation. We apply the randomized quantization augmentation after the base augmentations. Unlike images and audio, which lie on regular grids, strong quantization of point cloud coordinates drastically degrades the 3D point data. We thus choose to use a larger number of bins, $n=30$, in order to maintain more information. In practice, since the 3D points are sparsified by quantization, we observe a substantial training speedup as a side benefit. Shape classification is conducted on ModelNet40~\cite{wu20153d}, which is composed of 13,834 3D models from 40 categories. For each shape, we extract a global feature with the pre-trained backbone and then train a linear classifier, or finetune the network, and report the average classification accuracy in Table~\ref{tab:pointcls}. We compare with FoldingNet~\cite{Yang2018a} and MID-FC~\cite{wang2021unsupervised}. With our augmentation, we improve the classification accuracy over the baseline MID-FC~\cite{wang2021unsupervised} substantially, especially when the training data is limited, as shown in Table~\ref{tab:pointcls}. For example, with 1\% of the training data, we improve the classification accuracy by 5.2 and 4.0 points on linear probing and finetuning respectively. Shape segmentation is conducted on ShapeNet Part~\cite{yi2017large} with 16,881 3D point clouds from 16 categories. For each shape, we extract point-wise features with the pre-trained backbone and then train a segmentation head composed of two fully connected layers, or finetune the network, and report the mean IoU across all categories (C.mIoU) and the mean IoU across all instances (I.mIoU) in Table~\ref{tab:pointseg}. We compare with two unsupervised pretraining methods, MID-FC~\cite{wang2021unsupervised} and Multi-Task~\cite{Hassani2019}. Our results are consistently better than the baselines across different ratios of training data.
Similarly to ModelNet40 classification, we observe significant improvements with limited (1\% and 5\%) training data. For example, with 1\% of the training data, we improve the segmentation performance by 4.4 and 1.9 IoU points on linear probing and finetuning respectively. Since the only difference between our method and MID-FC is our quantization augmentation, the performance improvements of our method on shape classification and segmentation over MID-FC clearly demonstrate the effectiveness of our augmentation. \begin{table*}[t] \centering \caption{ Evaluation of the representation performance over six modalities on the DABS benchmark. Representations are trained on a single primary dataset for each modality and evaluated on a number of downstream datasets. The performance for each modality is averaged across the downstream datasets and shown in the table. } \label{tab:cmp_dabs} { \setlength{\tabcolsep}{10pt} \begin{tabular}{c|cccccc|c} Method&Natural Images&Text&Speech&Sensors&Chest x-rays &Images \& Text & Average \\ \cline{1-8} Scratch&10.1&42.3&24.9&69.8&68.1&\textbf{57.5} & 45.5 \\ e-Mix&27.9&44.1&41.8&79.5&72.4&48.9 & 52.4\\ Ours&\textbf{32.1}&\textbf{44.7}&\textbf{44.5}&\textbf{84.9}&\textbf{73.4}&54.5&\textbf{55.6}\\ \end{tabular} } \end{table*} \subsection{Audio} \label{exp_audio} We apply randomized quantization to audio representation learning. We largely follow the experimental settings of BYOL-A~\cite{niizumi2021byol} and treat it as our baseline. We use AudioSet~\cite{gemmeke2017audio} as the pretraining dataset, which consists of 1,963,807 audio samples of 527 classes.\footnote{When we downloaded this dataset, some data links were invalid, so we were only able to gather a subset of the full dataset (1,733,046 / 1,963,807).} AudioSet covers a comprehensive set of classes, ranging from human voices to animal sounds to environmental sounds.
The pretrained representation is evaluated on six downstream audio classification datasets, covering musical instrument classification, urban sound classification, speaker identification, language identification and command classification. A summary of the downstream datasets can be found in Table~\ref{tab:audio_datasets}. We convert audio clips into the commonly used log-scaled spectrogram representation. Random resized crop is used to extract a $64\times96$ frequency-temporal segment for training. We replace the Mixup augmentation used in BYOL-A with our randomized quantization, with the number of bins set to 5. We follow prior works~\cite{koizumi2020ntt,takeuchi2020effects,niizumi2021byol} by using a lightweight 2D convolutional network as the backbone. The backbone encoder produces a feature of 2048 dimensions, which is then fed to the projection and prediction heads for representation learning. We train the network using the Adam optimizer with a base learning rate of 3e-4 and a batch size of 256 for 100 epochs. Table~\ref{tab:cmp_audio} summarizes the results on the six downstream classification tasks. Compared against BYOL-A with the Mixup augmentation, our randomized quantization outperforms it on four out of the six tasks. Our approach is particularly strong on the VoxCeleb1 dataset, where it leads by a margin of 5.6\%; with 1,211 classes, this is the hardest classification task among the six. Our improvements tend to be smaller for tasks with fewer classes. On average, the proposed augmentation surpasses the current state-of-the-art BYOL-A by a margin of 1.8\%. \subsection{DABS} \label{exp_dabs} We further study the capability of augmenting intermediate features in a neural network, which are less interpretable than the input data. We conduct the experiment on the public benchmark DABS~\cite{tamkin2021dabs}, which is designed to study domain-agnostic self-supervised representation learning.
It contains six data modalities\footnote{The benchmark also provides an additional multi-lingual text modality. However, it is not evaluated in the original paper and we thus omit it.}, covering natural RGB images, multichannel sensor data, English text, audio, chest x-ray images, as well as captioned images. In each domain, pre-training is conducted on a large-scale dataset, and the learned representations are evaluated with linear probing on various in-domain downstream datasets. The average performance over the in-domain downstream datasets is reported. We refer the reader to the benchmark for a full description of the pretraining datasets and in-domain evaluation datasets. Following e-Mix~\cite{tamkin2021dabs}, a transformer~\cite{vaswani2017attention} is adopted as the backbone architecture for its generality across diverse domains. A lightweight domain-specific embedding module is applied before feeding the input tokens to the transformer. The transformer contains 12 layers with 8 attention heads, 256 hidden dimensions, and a dropout probability of $0.1$. The representations across tokens are averaged and projected to a 128-dimensional vector for SimCLR representation learning. The network is optimized with the Adam optimizer with a learning rate of 1e-4 and a weight decay of 1e-4. The training protocol follows e-Mix, and all modalities share the same recipe. We apply randomized quantization on the token embeddings before the transformer. Since the quantization function has zero gradient almost everywhere, we simply keep the token embedding module at its initialization without updating it. The straight-through estimator could potentially be useful, but it is not the focus of this work. Table~\ref{tab:cmp_dabs} summarizes the results for this benchmark. Our model outperforms the baseline e-Mix on all modalities. The improvements on natural images, speech, and sensors are larger than 3\%, while the improvements on text and chest x-rays are relatively smaller, less than 1\%.
Both e-Mix and our pretraining seem to hurt the representation quality for captioned images. We hypothesize that the two modalities of images and texts pose significant challenges for a naive contrastive learning approach. \section{Approach} This paper proposes a novel generic data augmentation for representation learning based on quantization. We first provide preliminaries on quantization. We then introduce two factors to inject randomness into the quantization procedure. \subsection{Preliminaries: Quantization} \begin{figure} \centering \includegraphics[width=1\linewidth]{figs/quantizer.pdf} \caption{An illustration of a quantizer with five bins.} \label{fig:quantizer} \end{figure} A quantizer is a function which consists of a set of non-overlapping intervals or bins $S = \{S_i=[a_i, a_{i+1})\}, i=0,1,...,n-1$, and a set of reproduction values $y_i$.
$n$ is the number of intervals and reproduction values. The quantizer maps values within an interval to a single scalar, defined as $q(x) = y_i$ for $x\in S_i$. Formally, it can be written as \begin{equation} q(x) = \sum_i y_i \cdot 1_{S_i}(x), \end{equation} where the indicator function $1_{S_i}(x) = 1$ if $x\in S_i$ and $1_{S_i}(x) = 0$ otherwise. Figure~\ref{fig:quantizer} gives an illustration of a quantizer with five intervals. Quantization represents the original signal using a finite number of bits and hence introduces error into signal recovery. The central research problem is to find better tradeoffs between communication bandwidth and reproduction error. Quantization can be categorized into uniform and non-uniform quantization. A uniform quantizer has intervals and values which are evenly spaced, whereas a non-uniform quantizer allows either intervals or values to be unevenly spaced. Uniform quantizers are amenable to hardware deployment. However, non-uniform quantizers may perform better depending on the probability distribution of $x$. \begin{figure*}[t] \centering \includegraphics[width=1\linewidth]{figs/Figure3.pdf} \caption{Visualization of quantized images with different numbers of bins. The images are quantized by a uniform quantizer. Fewer than three quantization bins cause severe information reduction, while fifteen or more bins lead to a negligible difference from the original image. An intermediate number of bins (e.g., five to ten) is well-suited for image augmentation.} \label{fig:img_bins} \end{figure*} \begin{figure} \centering \includegraphics[width=1\linewidth]{figs/Figure4_vis_region_nubmer.pdf} \caption{Ablation study on the number of quantization bins. The peak performance is reached at 8 bins.
Fewer bins deliver a heavier augmentation, while more bins deliver a weaker one.} \label{fig:numbins} \end{figure} \subsection{Randomized Quantization as Augmentation} We aim to exploit quantization as a data withholding tool for representation learning. The information within each quantization bin is withheld, while the information across bins is retained. For data augmentation, a key aspect is the complexity of the augmentation space. We control the complexity of the quantization augmentation by randomizing the intervals and the reproduction values. Concretely, given $S_i = [a_i, a_{i+1})$, the edges $a_i$ are generated by \begin{equation} a_0, a_1, ... ,a_{n-1} = \text{sort}(a'_0, a'_1, ..., a'_{n-1}), \end{equation} \begin{equation} \label{eq:random1} a'_i = U(\text{min}(x), \text{max}(x)), i=0,1,...,n-1, \end{equation} where $U$ denotes random sampling with a uniform distribution over the interval. The reproduction value $y_i$ is randomly sampled within the corresponding interval, \begin{equation} \label{eq:random2} y_i = U (a_i, a_{i+1}). \end{equation} The resultant randomized quantizer is non-uniform. The number of quantization bins $n$ is the hyperparameter of the augmentation. \subsection{Data-Agnostic Augmentation} The proposed randomized quantization augmentation can be applied along the channel dimension of arbitrary data modalities. The physical interpretation of the augmentation depends on the nature of the data modality. In Figure~\ref{fig:vis_img_audio}, we visualize the augmentations for images and audio. On images, it removes high-frequency details but highlights object boundaries and edges. It also alters color appearance significantly. On audio, we examine the augmented sound acoustically and find that the augmentation tends to enhance specific frequency bands, such as low-frequency or high-frequency sounds.
On point clouds, where the channel dimension represents xyz coordinates, it tends to downsample local structures but highlight the global shape. The augmentation can instead be applied to intermediate embeddings of a neural network, as studied in Section~\ref{exp_dabs}. The physical meaning of the augmentation on feature embeddings is less interpretable. \begin{table}[t] \centering \caption{ Ablation study of the two randomness factors of randomized quantization described in Eq.~\ref{eq:random1} and Eq.~\ref{eq:random2}. We examine the effect of randomized bins and random reproduction values for each bin. These two factors increase the complexity of the augmentation and significantly improve the performance. } \label{tab:abl_non_uniform} { \begin{tabular}{c|c|c|c} & random bins & random values & top1 acc\\ \hline baseline & \xmark & \xmark & 50.0 \\ + quantize & \xmark & \xmark & 54.8 \\ + quantize &\xmark & \cmark & 62.6 \\ + quantize & \cmark & \xmark & 66.0 \\ + quantize & \cmark & \cmark &\textbf{67.9}\\ \end{tabular} } \end{table} \subsection{Siamese Representation Learning} Siamese representation learning, or contrastive learning, relies heavily on the quality of the augmentations~\cite{niizumi2021byol,zhao2021distilling}. We apply the proposed augmentation to Siamese representation learning. At each training iteration, we sample two views from a data instance using randomized quantization, either by itself or in conjunction with other augmentations. The two views are processed by a deep neural network to extract feature representations, and loss terms such as InfoNCE~\cite{oord2018representation} and L2 are applied to the two views. We follow MoCo-v3~\cite{chen2021empirical} and BYOL~\cite{niizumi2021byol} in this paper, and we refer readers to the original papers for details. \section{Conclusion} We propose randomized quantization as a novel data augmentation tool for self-supervised representation learning.
Quantization effectively withholds information within the quantization bins but retains the information across bins. It can be applied to arbitrary data along the channel dimension without domain-specific knowledge. Randomized quantization significantly outperforms existing domain-agnostic augmentations based on Mixup. It compares favorably against domain-specific augmentations on vision, and attains state-of-the-art results on audio and 3D point clouds. We also explore its capability on the feature representations in a neural network for a wide range of data modalities. Experimental results on the DABS benchmark demonstrate state-of-the-art results for speech, text, images and multiple sensors. Jointly applying augmentations on the input data and feature representations is a promising direction, and we leave it for future work. \section*{Broader Impacts} Although the proposed augmentation is generic in its formulation, it is not guaranteed to work beyond the modalities investigated in this paper. The study is limited to classification tasks. Generalization to other downstream tasks remains under-explored. \section{Ablation Study} \begin{table}[t] \centering \caption{ Representation learning with randomized quantization augmentation benefits from more training epochs.} \label{tab:training_length} \begin{tabular}{c|c|c|c} &100-ep&300-ep&800-ep\\ \hline MoCo-v3 &67.9&71.6& 72.1\\ BYOL & 67.2&71.0 & 71.6\\ \end{tabular} \end{table} \begin{table}[t] \centering \caption{Comparisons with alternative domain-agnostic augmentation techniques under the linear classification protocol on ImageNet. CR is short for center crop, and RRC is short for random resized crop. Our randomized quantization approach achieves state-of-the-art results against prior art.} \label{tab:cmp_daa} \setlength{\tabcolsep}{12pt} \scalebox{1}{ \begin{tabular}{l|c|c} Augmentations&MoCo-v3&BYOL\\ \hline CR&10.1&9.9\\ CR + DACL~\cite{verma2021towards}&32.7&33.2\\ CR + i-Mix~\cite{lee2020mix}&30.3&28.7\\ CR + Ours&\textbf{42.9}& \textbf{43.0}\\ \hline RRC&50.0&49.3\\ RRC + DACL~\cite{verma2021towards} &57.2&57.6\\ RRC + i-Mix~\cite{lee2020mix}& 55.4&49.9\\ RRC + Ours&\textbf{67.9}&\textbf{67.2}\\ \end{tabular} } \end{table} We choose visual representation learning for the ablation study. Random resized cropping is taken as the baseline augmentation, and we apply our randomized quantization after it. Following the MoCo-v3 framework~\cite{chen2021empirical}, we use ResNet-50~\cite{he2016deep} as the backbone network. The optimizer settings are consistent with MoCo-v3, and the network is trained for 100 epochs.
Representation learning is conducted on the ImageNet-1K dataset~\cite{deng2009imagenet}, and linear classification accuracy is reported on the validation set. We ablate three design factors of the proposed quantization-based augmentation which affect its ability to mask channel-wise information. \vspace{2pt} \noindent \textbf{Randomizing bins.} The performance of representation learning depends on the complexity of the pretext tasks created from random augmentations. In Table~\ref{tab:abl_non_uniform}, the baseline approach using the random resized crop augmentation obtains 50.0\% top-1 accuracy. Using a fixed uniform quantizer improves the performance mildly to 54.8\%. Randomizing the locations and sizes of bins allows for uneven masking and creates more useful pretext tasks, improving the performance significantly to 66.0\%. \vspace{2pt} \noindent \textbf{Randomizing reproduction values.} Quantization is also affected by how each bin is represented. Commonly, the values within a bin's range are represented by the midpoint. As an alternative, we also consider taking a random value in the range. Intuitively, random reproduction values bias the quantization error, making it no longer zero-mean and bringing a stronger augmentation effect. This is found to benefit representation learning, yielding an increase of 1.9 points on top of randomized bins in Table~\ref{tab:abl_non_uniform}. \vspace{2pt} \noindent \textbf{Number of quantization bins.} Figure~\ref{fig:numbins} illustrates the effect of varying the number of quantization bins. Intuitively, fewer bins lead to a stronger masking effect and higher cross-view variation. We vary the number of bins and find strong performance with 5--10 bins, peaking at 8 bins. This observation is similar to spatial masking in MAE~\cite{he2022masked}, where an optimal masking ratio is found. In Figure~\ref{fig:img_bins}, we visualize quantized images with different numbers of quantization bins.
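The intuition that fewer bins withhold more information can be checked with a quick numerical sketch (illustrative only; a uniform midpoint quantizer on a synthetic grid, not the paper's randomized quantizer): the mean squared quantization error shrinks roughly as $1/n^2$ with the number of bins $n$.

```python
def uniform_quant_mse(n_bins, n_samples=100000):
    """Mean squared error of a uniform n-bin midpoint quantizer on a
    dense grid over [0, 1): a proxy for how much information is removed."""
    width = 1.0 / n_bins
    total = 0.0
    for k in range(n_samples):
        x = (k + 0.5) / n_samples
        # Midpoint of the bin containing x.
        mid = (min(int(x / width), n_bins - 1) + 0.5) * width
        total += (x - mid) ** 2
    return total / n_samples

# MSE decreases monotonically as the bin count grows,
# so fewer bins correspond to a stronger augmentation.
errors = {n: uniform_quant_mse(n) for n in (2, 4, 8, 16)}
```

For a uniform source the closed form is $\text{MSE} = 1/(12 n^2)$, matching the sketch above; the optimal bin count for augmentation (around 8 in Figure~\ref{fig:numbins}) balances this information removal against retaining enough signal to learn from.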
For these visualizations we use a uniform quantizer so that the images are directly comparable. It can be observed that too much information is withheld when using too few bins, while too many bins withhold too little information. \vspace{2pt} \noindent \textbf{Training epochs.} We further study training with the augmentation for more epochs. In Table~\ref{tab:training_length}, the performance improves from 67.9\% with 100 epochs to 71.6\% with 300 epochs and 72.1\% with 800 epochs. With this complex augmentation, the network benefits from longer training. \section{Multi-Modality Experiments} We examine pre-training with the proposed augmentation across a variety of modalities including 1) vision (Section~\ref{exp_img}); 2) 3D point clouds (Section~\ref{exp_point_cld}); 3) audio (Section~\ref{exp_audio}); and 4) the DABS benchmark~\cite{tamkin2021dabs} (Section~\ref{exp_dabs}), comprising data from multiple domains: natural images, multi-channel sensor data, English text, speech recordings, multilingual text, chest x-rays, and images with text descriptions. The hyper-parameter $n$, the number of bins, is tuned for each modality. We leave the description of the corresponding datasets, settings, and evaluation metrics to each section. \subsection{Images} \label{exp_img} \begin{table}[t] \centering \caption{ Comparisons with image-specific augmentations under the linear classification protocol on ImageNet. CJ stands for color jittering, and Full includes random resized crop, color jittering, grayscaling, Gaussian blurring and solarization. Randomized quantization is stronger than color jittering by a large margin. It falls behind the full handcrafted augmentations by just 1\%.
} \label{tab:cmp_specific} \setlength{\tabcolsep}{12pt} \begin{tabular}{l|c|c} Method& MoCo-v3 & BYOL\\ \hline Ours & 42.9 & 43.0 \\ RRC & 50.0 & 49.4\\ RRC + CJ & 60.1 & 61.1\\ RRC + Ours & 67.9 & 67.2\\ Full & \textbf{68.9} & \textbf{68.9} \end{tabular} \end{table} We compare the proposed randomized quantization augmentation against domain-agnostic augmentation baselines, as well as domain-specific augmentations designed for images. The number of quantization bins is set to $n=8$. The experimental protocol follows the ablation study. \vspace{2pt} \noindent \textbf{Comparisons with domain-agnostic augmentations.} Recent works on domain-agnostic augmentation are predominantly adapted from Mixup~\cite{zhang2017mixup}. For example, i-Mix~\cite{lee2020mix} linearly interpolates input data, with the corresponding virtual labels generated from the current batch. Similarly, DACL~\cite{verma2021towards} interpolates inputs but treats the interpolation as a way of adding noise to the original data. With two calls of the mixing function, two views are created for one image. In Table~\ref{tab:cmp_daa}, we compare our approach to these methods on top of two spatial operations: center crop (CR) and random resized crop (RRC). Center crop amounts to no augmentation, and random resized crop is frequently used in vision applications. Our evaluation is based on two Siamese representation learning frameworks, MoCo-v3 and BYOL, since BYOL is reported to respond differently to augmentations. Randomized quantization performs best against the Mixup-based augmentations. As a standalone augmentation, randomized quantization obtains an accuracy of 42.9\% with MoCo-v3, outperforming DACL and i-Mix by a large margin. In conjunction with random resized crop, a 10\% margin is maintained. The results using the MoCo-v3 and BYOL training objectives are similar. Overall, randomized quantization achieves state-of-the-art results against domain-agnostic baselines in the vision domain.
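For contrast, the Mixup-style view construction underlying these baselines can be sketched as follows. This is a deliberately simplified, hypothetical version; the actual i-Mix and DACL formulations also involve virtual labels and noise schedules:

```python
import random

def mixup_view(x, x_other, lam_min=0.5, rng=None):
    """Simplified Mixup-style view in the spirit of i-Mix / DACL:
    linearly interpolate an instance with another one, introducing
    values that never occur in the original instance."""
    rng = rng or random.Random()
    lam = rng.uniform(lam_min, 1.0)  # dominant weight on the anchor instance
    return [lam * a + (1.0 - lam) * b for a, b in zip(x, x_other)]
```

The key difference is that a mixed view imports content from a second instance, whereas randomized quantization only re-represents values already within the instance's own range.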
\begin{figure}[t] \centering \includegraphics[width=1\linewidth]{figs/Figure_color_jitter_vs_ours.pdf} \caption{Visual comparisons between color jittering and randomized quantization. Randomized quantization exhibits greater change in visual appearance and stronger edge enhancement.} \label{fig:img_cmp_color_jittering} \end{figure} \vspace{2pt} \noindent \textbf{Comparisons with domain-specific augmentations.} We further compare with image-specific augmentations for visual representation learning in Table~\ref{tab:cmp_specific}. We find that randomized quantization is much stronger than color jittering, which is designed with pixel-level prior knowledge such as brightness, contrast, and saturation. In Figure~\ref{fig:img_cmp_color_jittering}, we visualize color jittering and our augmentation. It can be observed that our augmentation leads to stronger and more diverse changes in visual appearance than color jittering. Our augmentation is 1\% weaker than the full augmentation, which applies random resized crop, color jittering, grayscaling, Gaussian blurring and solarization in succession. \begin{table}[t] \centering \caption{Linear probing and finetuning results for the shape classification task on the ModelNet40 dataset. Pre-training is conducted on the ShapeNet dataset. Our augmentation improves the classification accuracy substantially across various ratios of training data, especially with very limited data (1\%).
``lin'' and ``ft'' denote linear probing and finetuning, respectively.} \label{tab:pointcls} \setlength{\tabcolsep}{4pt} \begin{tabular}{l|c|c|c|c|c|c} &1\%&2\%&5\%&10\%&20\%&100\%\\ \hline FoldingNet (lin) &56.4 &66.9& 75.6& 81.2& 83.6&88.4\\ MID-FC (lin)&61.5 &73.1& \textbf{80.2}& 84.2& {86.9}&90.3\\ Ours (lin)& \textbf{66.7} & \textbf{74.3}&80.0&\textbf{84.5}& \textbf{87.2} &\textbf{90.5}\\ \hline Scratch &58.5& 71.2& 80.1& 85.4& 88.7&92.9\\ MID-FC (ft)&67.3 &76.5& 83.6& {88.4}& 90.2&\textbf{93.0}\\ Ours (ft)&\textbf{71.3}& \textbf{78.5}& \textbf{84.9}& \textbf{88.6}& \textbf{90.6}&\textbf{93.0}\\ \end{tabular} \end{table} \begin{table}[t] \centering \caption{Linear probing and finetuning results for the shape segmentation task on the ShapeNet Part dataset. Pre-training is conducted on ShapeNet. Our augmentation improves the performance substantially across various ratios of training data. ``lin'' and ``ft'' denote linear probing and finetuning, respectively. } \label{tab:pointseg} \setlength{\tabcolsep}{4pt} \begin{tabular}{l|c|c|c|c|c|c} &\multicolumn{3}{c|}{C.mIoU}&\multicolumn{3}{c}{I.mIoU}\\ \cline{2-7} &1\%&5\%& 100\%&1\%&5\%& 100\%\\ \hline Multi-Task (lin) & - & 73.9 &- & 68.2 & 80.7 & - \\ MID-FC (lin)&66.2& 76.5&82.8&72.4&80.9&84.1\\ Ours (lin)&\textbf{70.6}&\textbf{76.9}&\textbf{82.9}&\textbf{77.4}&\textbf{81.9}&\textbf{84.3}\\ \hline MID-FC (ft)&67.6&77.8&84.3&76.2&82.1&\textbf{85.5}\\ Ours (ft)&\textbf{69.5}&\textbf{78.4}&\textbf{ 84.4}&\textbf{77.8}&\textbf{82.3}&\textbf{85.5} \\ \end{tabular} \end{table} \begin{table*}[t] \centering \caption{Downstream dataset details for audio representation learning.} \label{tab:audio_datasets} \setlength{\tabcolsep}{4pt} \begin{tabular}{c|c|c|c|c} Name&Task&\#Classes &Data size& Avg duration (s)\\ \hline NSynth (NS)~\cite{engel2017neural}&Musical instrument classification&11&305,979&4.0\\ UrbanSound8K (US8K)~\cite{salamon2014dataset}&Urban sound classification &10&8,732&4.0\\ VoxCeleb1 (VC1)~\cite{nagrani2017voxceleb}&Speaker
identification&1,211&153,514&8.2\\ VoxForge (VF)~[from~Voxforge.org]&Language identification&6&176,438&5.8\\ Speech Commands V2 (SPCV2)~\cite{warden2018speech}&Command classification &35&105,829&1.0\\ Speech Commands V2 (SPCV2/12)~\cite{warden2018speech}&Command classification &12&105,829&1.0\\ \end{tabular} \end{table*} \begin{table*}[t] \centering \caption{Linear probing results for audio representation learning on six downstream datasets. Pre-training is conducted on the AudioSet dataset. Our model outperforms BYOL-A on four out of the six datasets, achieving an improvement of 1.8\% on average.} \label{tab:cmp_audio} { \setlength{\tabcolsep}{12pt} \begin{tabular}{c|cccccc|c} Method&NS&US8K&VC1&VF&SPCV2/12&SPCV2&Average \\ \hline TRILL~\cite{shor2020towards}& - &-&17.9&88.1&74.9&-&-\\ COLA~\cite{saeed2021contrastive}&63.4&-&29.9&71.3&71.7&62.4&-\\ OpenL3~\cite{cramer2019look}&-&78.2&-&-&-&-&-\\ COALA~\cite{favory2020coala}&73.1&72.7&-&-&-&-&-\\ COLA*~\cite{saeed2021contrastive}&70.2&78.5&30.4&79.5&76.7&76.8&68.7\\ BYOL-A~\cite{niizumi2021byol} &74.1&79.1&40.1&90.2&91.0&92.2&77.8\\ \hline Ours&\textbf{74.2}&78.0&\textbf{45.7}&\textbf{92.6}&\textbf{95.1}&92.1&\textbf{79.6}\\ \multicolumn{8}{l}{\small* denotes a re-implemented result by the BYOL-A authors.} \end{tabular} } \end{table*} \subsection{3D Point Clouds} \label{exp_point_cld} We explore self-supervised representation learning on point clouds, which are represented by an unordered set of xyz coordinates. Pretraining is conducted on the ShapeNet~\cite{chang2015shapenet} dataset, consisting of 57,449 3D shapes. An Octree-based Sparse CNN~\cite{wang2017cnn} is used as the backbone network, which takes 3D point clouds as input and extracts point features as well as shape features. We take MID-Net~\cite{wang2021unsupervised} as the baseline, which is trained with a point-wise and an instance-wise contrastive loss. The model is trained with an SGD optimizer with a batch size of 32 and a weight decay of 5e-4.
The initial learning rate is 0.03 and decreases by a factor of 10 after 200 and 300 epochs; the training process terminates after 400 epochs. For data augmentation, we follow the baseline~\cite{wang2021unsupervised} to normalize each point cloud into a unit sphere, randomly rotate it about its upright axis, and randomly translate and scale it in $[-0.25, 0.25]$ and $[0.75, 1.25]$, respectively. For evaluation, we experiment on two downstream tasks: shape classification and shape segmentation. We apply the randomized quantization augmentation after the base augmentations. Unlike images and audio, which are snapped to grids, strong quantization of point cloud coordinates would drastically degrade the 3D point data. We thus choose a larger number of bins, $n=30$, in order to maintain more information. In practice, since the 3D points are sparsified by quantization, we observe a substantial training speedup as a side benefit. Shape classification is conducted on ModelNet40~\cite{wu20153d}, which is composed of 13,834 3D models from 40 categories. For each shape, we extract a global feature with the pre-trained backbone and then train a linear classifier, or finetune the network, and report the average classification accuracy in Table~\ref{tab:pointcls}. We compare with FoldingNet~\cite{Yang2018a} and MID-FC~\cite{wang2021unsupervised}. With our augmentation, we improve the classification accuracy over the baseline MID-FC~\cite{wang2021unsupervised} substantially, especially when the training data is limited, as shown in Table~\ref{tab:pointcls}. For example, with 1\% of the training data, we improve the classification accuracy by 5.2 and 4.0 points on linear probing and finetuning, respectively. Shape segmentation is conducted on ShapeNet Part~\cite{yi2017large} with 16,881 3D point clouds from 16 categories.
For each shape, we extract point-wise features with the pre-trained backbone and then train a segmentation head composed of two fully connected layers, or finetune the network, and report the mean IoU across all categories (C.mIoU) and the mean IoU across all instances (I.mIoU) in Table~\ref{tab:pointseg}. We compare with two unsupervised pretraining methods, MID-FC~\cite{wang2021unsupervised} and Multi-Task~\cite{Hassani2019}. Our results are consistently better than the baselines across different ratios of training data. Similarly to ModelNet40 classification, we observe significant improvements with limited (1\% and 5\%) training data. For example, with 1\% of the training data, we improve the segmentation performance by 4.4 and 1.9 IoU points on linear probing and finetuning, respectively. Since the only difference between our method and MID-FC is our quantization augmentation, the performance improvements of our method on shape classification and segmentation over MID-FC clearly demonstrate the effectiveness of our augmentation. \begin{table*}[t] \centering \caption{ Evaluation of the representation performance over six modalities on the DABS benchmark. Representations are trained on a single primary dataset for each modality and evaluated on a number of downstream datasets. The performance for each modality is averaged across the downstream datasets and shown in the table. } \label{tab:cmp_dabs} { \setlength{\tabcolsep}{10pt} \begin{tabular}{c|cccccc|c} Method&Natural Images&Text&Speech&Sensors&Chest x-rays &Images \& Text & Average \\ \cline{1-8} Scratch&10.1&42.3&24.9&69.8&68.1&\textbf{57.5} & 45.5 \\ e-Mix&27.9&44.1&41.8&79.5&72.4&48.9 & 52.4\\ Ours&\textbf{32.1}&\textbf{44.7}&\textbf{44.5}&\textbf{84.9}&\textbf{73.4}&54.5&\textbf{55.6}\\ \end{tabular} } \end{table*} \subsection{Audio} \label{exp_audio} We apply randomized quantization to audio representation learning.
We largely follow the experimental settings of BYOL-A~\cite{niizumi2021byol} and treat it as our baseline. We use AudioSet~\cite{gemmeke2017audio} as the pretraining dataset, which consists of 1,963,807 audio samples of 527 classes.\footnote{When we downloaded this dataset, some data links were invalid, so we were only able to gather a subset of the full dataset (1,733,046 / 1,963,807).} AudioSet covers a comprehensive set of classes, ranging from human voices to animal sounds to environmental sounds. The pretrained representation is evaluated on six downstream audio classification datasets, covering musical instrument classification, urban sound classification, speaker identification, language identification and command classification. A summary of the downstream datasets can be found in Table~\ref{tab:audio_datasets}. We convert audio clips into the commonly used log-scaled spectrogram representation. Random resized crop is used to extract a $64\times96$ frequency-temporal segment for training. We replace the Mixup augmentation used in BYOL-A with our randomized quantization, with the number of bins set to 5. We follow prior works~\cite{koizumi2020ntt,takeuchi2020effects,niizumi2021byol} in using a lightweight 2D convolutional network as the backbone. The backbone encoder produces a 2048-dimensional feature, which is then fed to the projection and prediction heads for representation learning. We train the network using the Adam optimizer with a base learning rate of 3e-4 and a batch size of 256 for 100 epochs. Table~\ref{tab:cmp_audio} summarizes the results on the six downstream classification tasks. Compared against BYOL-A with the Mixup augmentation, our randomized quantization outperforms it on four of the six tasks. Our approach is particularly strong on the VoxCeleb1 dataset, where it leads by a margin of 5.6\%; with 1,211 classes, this is the hardest classification task among the six. Our improvements tend to be smaller for tasks with fewer classes.
On average, the proposed augmentation surpasses the current state-of-the-art BYOL-A by a margin of 1.8\%. \subsection{DABS} \label{exp_dabs} We further study the capability of augmenting intermediate features in a neural network, which are less interpretable than the input data. We conduct the experiment on the public DABS benchmark~\cite{tamkin2021dabs}, which is designed to study domain-agnostic self-supervised representation learning. It contains six data modalities\footnote{The benchmark also provides an additional multi-lingual text modality. However, it is not evaluated in the original paper and we thus omit it.}, covering natural RGB images, multichannel sensor data, English text, audio, chest x-ray images, and captioned images. In each domain, pre-training is conducted on a large-scale dataset, and the learned representations are evaluated with linear probing on various in-domain downstream datasets. The average performance across the in-domain downstream datasets is reported. We refer the reader to the benchmark for a full description of the pretraining and in-domain evaluation datasets. Following e-Mix~\cite{tamkin2021dabs}, a transformer~\cite{vaswani2017attention} is adopted as the backbone architecture for its generality across diverse domains. A lightweight domain-specific embedding module is applied before feeding the input tokens to the transformer. The transformer contains 12 layers with 8 attention heads, 256 hidden dimensions, and a dropout probability of $0.1$. The representations across tokens are averaged and projected to a 128-dimensional vector for SimCLR representation learning. The network is optimized with Adam with a learning rate of 1e-4 and weight decay of 1e-4. The training protocol follows e-Mix, and all modalities share the same recipe. We apply randomized quantization on the token embeddings before the transformer.
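A sketch of applying the augmentation channel-wise to a seq_len-by-dim token-embedding matrix is given below. This is illustrative pure Python with hypothetical function names, not the benchmark code; each channel receives its own random bins and random reproduction values.

```python
import random
import bisect

def quantize_tokens(tokens, n_bins=8, rng=None):
    """Randomized quantization applied independently to each channel
    of a seq_len x dim token-embedding matrix (list of lists)."""
    rng = rng or random.Random()
    seq_len, dim = len(tokens), len(tokens[0])
    out = [[0.0] * dim for _ in range(seq_len)]
    for c in range(dim):
        col = [tokens[t][c] for t in range(seq_len)]
        lo, hi = min(col), max(col)
        # Random sorted bin edges over this channel's value range.
        edges = [lo] + sorted(rng.uniform(lo, hi)
                              for _ in range(n_bins - 1)) + [hi]
        # Random reproduction value per bin.
        reps = [rng.uniform(edges[i], edges[i + 1]) for i in range(n_bins)]
        for t, v in enumerate(col):
            i = min(bisect.bisect_right(edges, v) - 1, n_bins - 1)
            out[t][c] = reps[i]
    return out
```

Sampling this function twice on the same token matrix yields the two views used for contrastive training.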
Since the quantization function has zero gradient almost everywhere, we simply keep the token embedding module at its random initialization without updating it. The straight-through estimator could potentially be useful, but it is not the focus of this work. Table~\ref{tab:cmp_dabs} summarizes the results for this benchmark. Our model outperforms the baseline e-Mix on all modalities. The improvements on natural images, speech, and sensors are larger than 3\%, while the improvements on text and chest x-rays are relatively smaller, at less than 1\%. Both e-Mix and our pretraining seem to hurt the representation quality for captioned images. We hypothesize that the two modalities of images and text pose significant challenges for a naive contrastive learning approach. \section{Introduction} We are witnessing a convergence of multi-modal AI~\cite{devlin2018bert,bao2021beit}, in which the architecture and the learning algorithm are unified across data modalities. This exciting direction abandons domain-specific knowledge for individual data modalities in pursuit of a far more generalizable solution. For self-supervised representation learning, masked modeling~\cite{devlin2018bert}, or simply the masking mechanism, has emerged as an effective approach. The input data is represented by a 2D tensor with a sequential dimension and a channel dimension in a modality-agnostic way~\cite{baevski2022data2vec}. The sequential dimension can be spatial in images, temporal in audio, and syntactic in languages. The masking mechanism withholds information along the sequential dimension and exploits it for supervision. As a result, models learned from the masking supervision demonstrate a strong capability for capturing correlations between sequential tokens~\cite{he2022masked}. \begin{figure} \centering \includegraphics[width=1\linewidth]{figs/Figure1.pdf} \caption{ We represent data as a matrix with a sequential dimension and a channel dimension.
As a generic data augmentation, masking drops tokens along the sequential dimension. The proposed randomized quantization instead withholds information along the channel dimension. In this figure, we use 1D data of 10 sequential tokens for illustration. Data values are coded in grayscale.} \label{fig:short} \end{figure} The channel dimension describes the data feature at each sequential location, for example, RGB color at a spatial location or spectrogram frequency at a time step. Despite being generic, masking approaches have neglected to exploit supervision along the channel dimension. While images have as few as three channels, audio and tabular data can have hundreds. Formulating self-supervision along the channel dimension therefore holds much potential for representation learning. In this paper, we draw a connection between masking and quantization, and explore quantization as a novel form of masking along the channel dimension. The data in each channel is dynamically quantized through a non-uniform quantizer, with the reproduction values randomly sampled within randomly sampled quantization bins. In this way, information within each quantization bin is masked out, yet information across bins is retained. The information removed by quantization is controlled by the number and sizes of the bins, which have been rigorously studied in theory~\cite{shannon1959coding}. The larger the distortion, the stronger the augmentation effect for representation learning. The extreme case of using only a single bin is equivalent to dropping the entire channel. We systematically study various quantization configurations for their effects as a data augmentation, for example, with respect to the number of bins, uniform or non-uniform bins, and methods to select reproduction values.
We apply the randomized quantizer as the only augmentation, or in conjunction with augmentations along the sequential dimension, to the state-of-the-art Siamese representation learning methods MoCo-v3~\cite{chen2021empirical} and BYOL~\cite{niizumi2021byol}. In comparison with previous domain-agnostic augmentations based on Mixup~\cite{zhang2017mixup}, our approach achieves state-of-the-art results by a large margin on vision, audio, and point cloud tasks, as well as on the DABS benchmark. Compared with domain-specific augmentations, our approach achieves competitive performance against handcrafted augmentations on vision, and state-of-the-art performance on audio and 3D point clouds. Our contributions can be summarized as follows: \begin{itemize} \vspace{-5pt} \item[-] We propose a simple yet effective data augmentation based on quantization, which is orthogonal to masking along the sequential dimension. \item[-] We demonstrate the generality and strong performance of randomized quantization for vision, audio, and 3D point clouds in a data-agnostic way. \item[-] We show that randomized quantization can augment intermediate features of a network on the DABS benchmark, which consists of numerous modalities. \end{itemize} \section{Related Works} \noindent \textbf{Self-supervised learning} extracts labels from the data itself and tasks the network with learning transferable representations. Among the earliest forms of self-supervised models are auto-encoders~\cite{hinton1993autoencoders} and generative models~\cite{hinton2009deep}. However, since the input and the output are identical, a neural network may easily find shortcuts and rely on memorization to solve the generation task. Advances in recent years show that information needs to be withheld from the input to prevent cheating~\cite{doersch2015unsupervised}.
Pretext tasks such as colorization~\cite{zhang2016colorful}, inpainting~\cite{pathak2016context}, and jigsaw puzzles~\cite{noroozi2016unsupervised} were proposed in vision, while masked modeling~\cite{devlin2018bert}, next sentence prediction~\cite{kiros2015skip,jernite2017discourse}, and replaced word prediction~\cite{clark2020electra} were developed in natural language processing. Speediness~\cite{benaim2020speednet,huang2021ascnet} and temporal order~\cite{wei2018learning,misra2016shuffle} have been exploited for video representation learning. Due to space limitations, we omit the literature for speech~\cite{baevski2020wav2vec}, tabular data~\cite{arik2021tabnet}, graph-structured data~\cite{sun2019infograph} and many other modalities. The optimal pretext task for each target problem may be different. However, there exists enormous interest in obtaining a single foundation model~\cite{bommasani2021opportunities} for all downstream applications. Instead of withholding data for supervision, contrastive models~\cite{wu2018unsupervised,oord2018representation} create new data via data augmentation and compare features extracted with a Siamese network for supervision. Siamese representation learning methods can be categorized by whether explicit negatives are used~\cite{niizumi2021byol}, how negatives are defined~\cite{bachman2019learning}, and the loss formulation~\cite{caron2021emerging,zbontar2021barlow}. However, the main driving signal for learning lies in the data augmentations. \vspace{2pt} \noindent \textbf{Data augmentation} enlarges the number of data instances by leveraging prior knowledge of the data and the properties of the target problem. For supervised learning, data augmentation reduces overfitting and provides regularization~\cite{zhang2021understanding}. For self-supervised learning, the information gap created by two augmentations provides the learning supervision.
Typically, the data augmentation function extracts partial information from the data and optionally adds corruptions. Popular image-specific augmentations include cropping, scaling, color jittering, Gaussian blurring, cut-out~\cite{devries2017improved}, cut-mix~\cite{yun2019cutmix}, and AutoAugment, which searches for a data augmentation policy~\cite{cubuk2018autoaugment}. In natural language processing, synonym replacement~\cite{wei2019eda}, back translation~\cite{brislin1970back}, and random word insertion and deletion are most common. For audio and speech, altering the pitch, changing the playback speed, and masking along either the time axis or the frequency axis~\cite{park2019specaugment} may improve performance. Additionally, augmenting data through a generative model~\cite{bowles2018gan} such as a GAN is a viable approach. \vspace{2pt} \noindent \textbf{Domain-agnostic augmentation} aims to generalize modality-specific and domain-specific augmentations into a unified formulation. Finding such general priors in data is challenging. One line of work follows Mixup~\cite{zhang2017mixup}, which was initially proposed to improve empirical risk minimization in supervised learning by linearly interpolating data and labels. Because of its generality, later works have explored its application to other data modalities~\cite{guo2020nonlinear}, a wide range of problems~\cite{lucas2018mixed,hendrycks2019augmix}, as well as representation learning~\cite{tamkin2021dabs,lee2020mix, verma2021towards}. Another important line of work generalizes masked modeling~\cite{devlin2018bert}, which was initially proposed for language modeling, to other data modalities and domains~\cite{he2022masked,tong2022videomae,xu2022masked}. The masking mechanism samples a subset of the input data, while Mixup introduces additional corruptions which are not observed in the original data instance. Our randomized quantization augments data along the channel dimension, in a manner orthogonal to masking.
\begin{figure*} \centering \includegraphics[width=1\linewidth]{figs/Figure5_multiple_modality_1.pdf} \caption{Visualizing randomized quantization augmentation on images, audio, and 3D point clouds. The first row presents the original signal, and the bottom three rows are augmented views. Randomized quantization alters color and enhances edges on images, spatially samples coordinates on point clouds, and enhances frequency channels for audio.} \label{fig:vis_img_audio} \end{figure*} \vspace{2pt} \noindent \textbf{Quantization} represents numerical values with a fixed discrete set of numbers so as to reduce communication bandwidth while maintaining representation quality. The rounding error was first analyzed a century ago~\cite{sheppard1897calculation}, and the theory based on variable-rate quantization~\cite{shannon1948mathematical} and Huffman coding~\cite{huffman1952method} revolutionized the communications industry. We refer readers to a survey~\cite{gray1998quantization} that describes this area from a theoretical perspective. Quantization for efficient neural networks~\cite{gholami2021survey} aims to reduce neural network latency while maintaining model accuracy. Advances in half-precision~\cite{banner2018scalable,wang2018training} and mixed-precision training~\cite{courbariaux2014training,gupta2015deep,micikevicius2017mixed} have accelerated model training by an order of magnitude. Works have shown that neural networks can be completely binarized~\cite{lin2017towards,wu2015adjustable,courbariaux2015binaryconnect} with reasonable performance. Stochastic quantization~\cite{chen2020statistical,fan2020training,bengio2013estimating} is a technique for learning and compressing model weights in a way that avoids the local minima induced by low-precision weight representations. A prior work~\cite{fu2022contrastive} shows that weight perturbations by quantization can enhance contrastive learning, especially in semi-supervised scenarios.
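As a point of reference for the quantization literature above, a textbook uniform quantizer illustrates the basic bandwidth/fidelity trade-off: $k$ bits yield $2^k$ levels and a worst-case rounding error of half a step. This baseline is for illustration only and is not a scheme from any of the cited works.

```python
import numpy as np

def uniform_quantize(x, bits=4):
    """Map x onto 2**bits evenly spaced levels spanning its range;
    rounding to the nearest level bounds the error by half a step."""
    levels = 2 ** bits
    lo, hi = x.min(), x.max()
    step = (hi - lo) / (levels - 1)
    q = np.round((x - lo) / step) * step + lo
    return q, step
```

In contrast to this deterministic rounding, the randomized quantizer of this work deliberately varies the bins, trading fidelity for a stochastic information gap.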
This work is the first to consider quantization as a data augmentation, especially for self-supervised representation learning. In this context, the goal of quantization is not to reduce the error rate but to effectively withhold information. The information gap between two random quantizations provides the supervision. \section{Detailed results on the DABS benchmark} \begin{table*}[h] \centering \label{tab:cmp_dabs} \setlength{\tabcolsep}{12pt} \begin{tabular}{l|c|c|c|c|c} Dataset& Domain & Metric & None & e-Mix~\cite{tamkin2021dabs}& Ours\\ \hline CIFAR-10 & Images & Accuracy & 24.20 & 39.43 & \textbf{47.70} \\ Birds & Images & Accuracy & 1.62 & 3.86 & \textbf{4.16} \\ VGG Flower & Images & Accuracy & 9.03 & 25.96 & \textbf{30.20} \\ DTD (Textures) & Images & Accuracy & 7.39 & 8.83 & \textbf{10.90} \\ GTSRB (Traffic) & Images & Accuracy & 14.33 & 65.07 & \textbf{86.80} \\ FGVC-Aircraft & Images & Accuracy & 2.70 & 10.15 & \textbf{12.60} \\ LibriSpeech Sp. ID & Speech & Accuracy & 17.12 & 60.18 & \textbf{62.70} \\ VoxCeleb Sp. ID & Speech & Accuracy & 0.59 & 2.43 & \textbf{2.69} \\ AudioMNIST & Speech & Accuracy & 33.13 & 80.35 & \textbf{82.80} \\ Google Speech & Speech & Accuracy & 4.87 & 19.22 & \textbf{26.00} \\ Fluent Locations & Speech & Accuracy & 62.09 & 60.93 & \textbf{65.20} \\ Fluent Actions & Speech & Accuracy & 26.15 & 29.87 & \textbf{31.40} \\ Fluent Objects & Speech & Accuracy & 30.13 & 39.89 & \textbf{40.80} \\ COLA & English Text & Pearson Corr. 
& 0.00 & \textbf{8.40} & 8.27 \\ MNLI\_Matched & English Text & Accuracy & 35.80 & \textbf{37.80} & 36.70 \\ MNLI\_Mismatched & English Text & Accuracy & 36.60 & \textbf{37.50} & 37.00 \\ MRPC & English Text & Accuracy & 68.40 & 66.20 & \textbf{68.90} \\ QNLI & English Text & Accuracy & 57.70 & \textbf{57.90} & 57.40 \\ QQP & English Text & Accuracy & 65.10 & 64.30 & \textbf{65.50} \\ RTE & English Text & Accuracy & \textbf{54.50} & 51.30 & 52.70 \\ SST2 & English Text & Accuracy & 57.00 & \textbf{58.10} & 55.80 \\ STSB & English Text & Accuracy & 4.20 & 11.40 & \textbf{13.70} \\ WNLI & English Text & Accuracy & 43.60 & 47.90 & \textbf{50.70} \\ PAWS-X EN & Multilingual Text & Accuracy & \textbf{57.85} & 54.85 & 56.20 \\ PAWS-X FR & Multilingual Text & Accuracy & \textbf{57.80} & 55.90 & 55.90 \\ PAWS-X ES & Multilingual Text & Accuracy & \textbf{58.55} & 55.50 & 54.80 \\ PAWS-X DE & Multilingual Text & Accuracy & \textbf{58.85} & 56.50 & 55.50 \\ PAWS-X ZH & Multilingual Text & Accuracy & \textbf{57.35} & 55.35 & 54.20 \\ PAWS-X JP & Multilingual Text & Accuracy & \textbf{57.55} & 57.35 & 56.70 \\ PAWS-X KO & Multilingual Text & Accuracy & \textbf{58.80} & 57.70 & 56.60 \\ PAMAP2 & Sensor & Accuracy & 69.81 & 79.48 & \textbf{84.90} \\ CheXpert & Chest X-Rays & Avg. AUROC & 68.14 & 72.40 & \textbf{73.40} \\ ChestX-ray8 & Chest X-Rays & Avg. AUROC & 57.00 & 63.00 & \textbf{64.70} \\ VQA & Vision/Language & Accuracy & \textbf{57.50} & 48.90 & 54.40 \\ \end{tabular} \caption{Detailed comparisons with e-Mix~\cite{tamkin2021dabs} on the DABS benchmark.} \end{table*}
\section{Introduction}\label{s:intro} Let $G$ be a simple algebraic group over an algebraically closed field $k$ of characteristic $p \geqslant 0$ and let $\mathcal{C}_1, \ldots, \mathcal{C}_t$ be non-central conjugacy classes in $G$, where $t \geqslant 2$ is an integer. Consider the irreducible subvariety $X = \mathcal{C}_1 \times \cdots \times \mathcal{C}_t$ of the Cartesian product $G^t$. Given a tuple $x = (x_1, \ldots, x_t) \in X$, let $G(x)$ denote the Zariski closure of the subgroup $\langle x_1, \ldots, x_t\rangle$ and set \[ \Delta = \{ x \in X \, : \, G(x) = G\}. \] In this setting, a basic problem is to determine whether or not $\Delta$ is empty. Note that if $k$ is algebraic over a finite field then $G$ is locally finite and thus $\Delta$ is always empty, so this problem is only interesting when $k$ is not algebraic over a finite field, which is a hypothesis we adopt throughout the paper. This problem has been the subject of several recent papers and some general results have been established. In characteristic zero, for example, a theorem of Guralnick \cite{gurnato} shows that $\Delta$ is always an open subset of $X$. In the general setting, \cite[Theorem 2]{BGG1} states that $\Delta$ is non-empty if and only if it is dense in $X$, while \cite[Theorem 1]{BGG2} reveals that $\Delta$ is non-empty if and only if it is generic, which means that it contains the complement of a countable union of proper closed subvarieties of $X$. If $G$ is a simple algebraic group of exceptional type, then \cite[Theorem 7]{BGG1} states that $\Delta$ is non-empty if $t \geqslant 5$, and the same conclusion holds for $t \geqslant 4$ when $G = G_2$. Moreover, this is best possible since there are examples where $t=4$ ($t=3$ for $G=G_2$) and $\Delta$ is empty (see \cite[Theorem 3.22]{BGG1}). 
In this paper, we extend the earlier work in \cite{BGG1} by studying the special case where $G$ is an exceptional type group in positive characteristic $p$ and each $\mathcal{C}_i$ is a conjugacy class of unipotent elements of order $p$ (note that every nontrivial unipotent element has order $p$ if $p \geqslant h$, where $h$ is the Coxeter number of $G$). In this situation, our aim is to classify the varieties $X = \mathcal{C}_1 \times \cdots \times \mathcal{C}_t$ where $t \geqslant 2$ and $\Delta$ is empty. This is in a similar spirit to the main theorem of \cite{BGG2}, which considers the analogous problem for symplectic and orthogonal groups when the $\mathcal{C}_i$ comprise elements of prime order modulo the centre of the group. In turn, this extends earlier work of Gerhardt \cite{Ger} on linear groups. Notice that all of these results are independent of the isogeny class of $G$ since the centre of $G$ is contained in the Frattini subgroup and thus a subgroup $H$ is dense in $G$ if and only if $HZ/Z$ is dense in $G/Z$, where $Z$ is any central subgroup of $G$. We will typically work with the simply connected form of the group. Our main result is the following (Tables \ref{tab:main} and \ref{tab:special} are presented at the end of the paper in Section \ref{s:tab}). Note that any two involutions generate a dihedral group, which explains why we assume $p \geqslant 3$ when $t=2$. \begin{theorem}\label{t:main} Let $G$ be a simple algebraic group of exceptional type over an algebraically closed field of characteristic $p>0$ that is not algebraic over a finite field. Set $X = \mathcal{C}_1 \times \cdots \times \mathcal{C}_t$, where $t \geqslant 2$ and each $\mathcal{C}_i$ is a conjugacy class of elements of order $p$ in $G$. Assume $p \geqslant 3$ if $t=2$. Then $\Delta$ is empty if and only if $X$ is one of the cases recorded in Tables \ref{tab:main} and \ref{tab:special}. 
\end{theorem} \begin{remk}\label{r:main} Some remarks on the statement of Theorem \ref{t:main} are in order. \begin{itemize}\addtolength{\itemsep}{0.2\baselineskip} \item[{\rm (a)}] We adopt the notation for unipotent classes from \cite{LS_book} (see Tables 22.1.1-22.1.5 in \cite{LS_book}) and it is worth noting that this sometimes differs from the notation used by other authors. For example, the unipotent class in $E_7$ labelled $(A_1^3)^{(1)}$ in \cite{LS_book} is denoted $(3A_1)''$ in \cite{Law1,Spal}. Similarly, the class $E_6(a_3)$ in \cite{Law1,LS_book} is labelled $A_5+A_1$ in \cite{Spal}. \item[{\rm (b)}] By inspecting the relevant tables in \cite{Law1}, it is easy to read off the required condition on $p$ to ensure that each $\mathcal{C}_i$ contains elements of order $p$. To do this, we consider the action of $G$ on a suitable $kG$-module $V$ and we inspect the Jordan form of a representative $y \in \mathcal{C}_i$ in this representation, noting that $y$ has order $p$ if and only if every Jordan block has size at most $p$. For example, if $G = E_8$ and $y \in G$ is contained in the class $A_2$, then \cite[Table 9]{Law1} states that the Jordan form of $y$ on the adjoint module for $G$ is as follows \[ \left\{\begin{array}{ll} (J_4^2,J_3^{54},J_1^{78}) & p = 2 \\ (J_3^{57},J_1^{77}) & p = 3 \\ (J_5,J_3^{55},J_1^{78}) & p \geqslant 5 \end{array}\right. \] where $J_i$ denotes a standard unipotent Jordan block of size $i$. Therefore, the elements in this class have order $p$ if and only if $p \geqslant 3$ (they have order $4$ when $p=2$). \item[{\rm (c)}] We have chosen to record the cases $X = \mathcal{C}_1 \times \cdots \times \mathcal{C}_t$ with $\Delta$ empty over two tables, rather than one. This is essentially an artefact of our proof and it will be convenient to make a distinction between the cases in Table \ref{tab:main} and \ref{tab:special} (for example, see Theorem \ref{t:fixV} below). 
\item[{\rm (d)}] To avoid unnecessary repetition, the tuples in Tables \ref{tab:main} and \ref{tab:special} are listed up to reordering, and also up to graph automorphisms when $(G,p) = (G_2,3)$ or $(F_4,2)$. For example, if $(G,p) = (G_2,3)$ and $\tau$ is a graph automorphism of $G$, then $\tau$ interchanges the classes of long and short root elements (denoted by $A_1$ and $\tilde{A}_1$), whence $\Delta$ is also empty when $t=3$ and $(\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3) = (\tilde{A}_1,\tilde{A}_1,\tilde{A}_1)$. \end{itemize} \end{remk} The Zariski closure of a unipotent conjugacy class is a union of unipotent classes and this leads naturally to a partial order on the set of unipotent classes of $G$, which has been completely determined by Spaltenstein (see \cite[Section II.10]{Spal} for $G = G_2$ and \cite[Section IV.2]{Spal} for the other types). This is relevant here because Proposition \ref{p:clos} (see \cite[Lemma 2.2]{BGG2}) states that $\Delta$ is empty if and only if $G(y) \ne G$ for all $y \in \bar{X}$, where $\bar{X}$ denotes the Zariski closure of $X$ in $G^t$. This observation allows us to present the following reformulation of Theorem \ref{t:main} (see Remark \ref{r:spal} for some brief comments on Spaltenstein's notation in \cite{Spal}). \begin{corol}\label{c:main2} The set $\Delta$ is empty if and only if $X$ is contained in the closure of one of the varieties $Y = \mathcal{C}_1' \times \cdots \times \mathcal{C}_t'$ recorded in Table \ref{tab:clos}, up to reordering and graph automorphisms if $(G,p) = (G_2,3)$ or $(F_4,2)$. 
\end{corol} {\small \begin{table} \[ \begin{array}{ll} \hline G & (\mathcal{C}_1', \ldots, \mathcal{C}_t') \\ \hline G_2 & (A_1,A_1,A_1), (A_1,G_2(a_1)) \\ F_4 & (A_1,A_1,A_1,A_1), (A_1,A_1,A_2), (A_1,\tilde{A}_1,\tilde{A}_1), (A_1,B_3), (\tilde{A}_1,\tilde{A}_2), (\tilde{A}_1, B_2), (A_2,A_2) \\ E_6 & (A_1,A_1,A_1,A_1), (A_1,A_1,A_2), (A_1,A_1^2,A_1^2), (A_1,D_4), (A_1,A_4), (A_1^2,A_3), (A_1^2,A_2^2), (A_2,A_2) \\ E_7 & (A_1,A_1,A_1,A_1), (A_1,A_1,A_2A_1), (A_1,(A_1^3)^{(1)},(A_1^3)^{(1)}), (A_1,(A_5)^{(1)}), (A_1,D_5(a_1)) \\ & ((A_1^3)^{(1)},(A_3A_1)^{(1)}), (A_2,A_2A_1) \\ E_8 & (A_1,A_1,A_1,A_1^2), (A_1,A_1,A_3), (A_1,A_1^2,A_2), (A_1,D_5), (A_1,D_4A_2), (A_1^2,D_4), (A_2,A_3) \\ \hline \end{array} \] \caption{The varieties $Y = \mathcal{C}_1' \times \cdots \times \mathcal{C}_t'$ in Corollary \ref{c:main2}} \label{tab:clos} \end{table} } Let $V$ be a $kG$-module with $C_V(G) = 0$ and write $\mathcal{C}_i = y_i^G$. Let $C_V(y_i)$ be the $1$-eigenspace of $y_i$ on $V$ and note that $\dim C_V(y_i)$ coincides with the number of Jordan blocks in the Jordan form of $y_i$ on $V$. If \begin{equation}\label{e:di} \sum_{i=1}^t \dim C_V(y_i) > (t-1)\dim V \end{equation} then the intersection $\bigcap_i C_V(y_i)$ is nonzero and thus $G(x)$ has a nontrivial fixed space on $V$ for all $x \in X$. In particular, this condition implies that $\Delta$ is empty. With this observation in hand, our next result is an immediate corollary of Theorem \ref{t:main}. Here $V$ is the specific Weyl module for $G$ (or its dual) recorded in Table \ref{tab:mod}, where we label a set of fundamental dominant weights for $G$ in the usual way (see \cite{Bou}). Notice that $C_V(G)=0$ and the Jordan form of $y_i$ on $V$ has been determined by Lawther \cite{Law1}, which allows us to compute the sum in \eqref{e:di}. In part (ii) of the corollary, $\tau$ is a graph automorphism of $G$ and we write $\mathcal{C}_i^{\tau}$ for the image of the class $\mathcal{C}_i$ under $\tau$. 
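The dimension count underlying \eqref{e:di} is the elementary fact that a subspace of codimension $c_i$ imposes at most $c_i$ linear conditions, so $\dim \bigcap_i C_V(y_i) \geqslant \sum_i \dim C_V(y_i) - (t-1)\dim V$. The following numerical sketch verifies this bound for random subspaces realized as null spaces; the dimensions used are illustrative only and are not taken from Lawther's tables.

```python
import numpy as np

def intersection_dim(mats):
    """Dimension of the intersection of the null spaces of the given
    matrices; each null space plays the role of a fixed space C_V(y_i)."""
    stacked = np.vstack(mats)
    return stacked.shape[1] - np.linalg.matrix_rank(stacked)

# dim V = 10 with three subspaces of dimensions 8, 7, 6:
# 8 + 7 + 6 = 21 > (3 - 1) * 10 = 20, so the intersection is nonzero.
rng = np.random.default_rng(0)
n = 10
mats = [rng.normal(size=(n - d, n)) for d in (8, 7, 6)]  # generic null space dim = d
assert intersection_dim(mats) >= 1
```

When the sum of dimensions does not exceed $(t-1)\dim V$, a generic intersection is zero, which is why \eqref{e:di} is a strict inequality.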
{\small \begin{table} \[ \begin{array}{cccccc} \hline G & G_2 & F_4 & E_6 & E_7 & E_8 \\ \hline V & W_G(\l_1)^* & W_G(\l_4)^* & W_G(\l_1) & W_G(\l_7) & W_G(\l_8) \\ \dim V & 7 & 26 & 27 & 56 & 248 \\ \hline \end{array} \] \caption{The $kG$-module $V$ in Corollary \ref{c:main}} \label{tab:mod} \end{table}} \begin{corol}\label{c:main} Let $V$ be the $kG$-module in Table \ref{tab:mod}. Then $\Delta$ is empty if and only if one of the following holds: \begin{itemize}\addtolength{\itemsep}{0.2\baselineskip} \item[{\rm (i)}] The inequality in \eqref{e:di} holds. \item[{\rm (ii)}] $(G,p) = (G_2,3)$ or $(F_4,2)$, and \eqref{e:di} holds for $\mathcal{C}_1^{\tau} \times \cdots \times \mathcal{C}_t^{\tau}$. \item[{\rm (iii)}] $X$ is one of the cases listed in Table \ref{tab:special}, up to reordering and graph automorphisms if $(G,p) = (F_4,2)$. \end{itemize} \end{corol} \begin{remk}\label{r:natural} The previous corollary reveals that we can identify most cases where $\Delta$ is empty in Theorem \ref{t:main} just by considering the action of the relevant conjugacy class representatives on a suitable $kG$-module. Here we briefly recall that similar results for classical groups have been established in \cite{BGG2,Ger}, with respect to the natural module. Let $G$ be a simple algebraic group of classical type over an algebraically closed field $k$ of characteristic $p \geqslant 0$ which is not algebraic over a finite field. Assume $p \ne 2$ if $G$ is of type $B_n$. Let $V$ be the natural module for $G$ and set $X = \mathcal{C}_1 \times \cdots \times \mathcal{C}_t$, where $t \geqslant 2$ and each $\mathcal{C}_i = y_i^G$ is a non-central conjugacy class. Note that $C_V(G) = 0$ and thus $\Delta$ is empty if \eqref{e:di} holds. 
In \cite{Ger}, Gerhardt proves that if $G = {\rm SL}(V)$ and $\dim V \geqslant 3$ then $\Delta$ is empty if and only if \eqref{e:di} holds, or if $t=2$ and both $y_1$ and $y_2$ have quadratic minimal polynomials on $V$ (if the latter property holds, then every composition factor of $G(x)$ on $V$ is at most $2$-dimensional for all $x \in X$). A similar result is proved for the groups ${\rm Sp}(V)$ and ${\rm SO}(V)$ in \cite{BGG2} under the assumption that the $y_i$ have prime order modulo $Z(G)$. The analysis of the latter groups is more complicated and several exceptions arise where $\Delta$ is empty and \eqref{e:di} does not hold for the natural module $V$ (see \cite[Tables 1,2]{BGG2}). \end{remk} We have noted that $\Delta$ is empty if the inequality in \eqref{e:di} is satisfied for a suitable $kG$-module $V$ with $C_V(G)=0$. In order to study the general case, let $\mathcal{M}$ be a complete set of representatives of the conjugacy classes of closed positive dimensional maximal subgroups of $G$. Then $\mathcal{M}$ is finite and the classification of the subgroups in $\mathcal{M}$ was completed by Liebeck and Seitz in \cite{LS04} (noting the existence of an additional subgroup $H = F_4$ in $\mathcal{M}$, which arises when $(G,p) = (E_8,3)$; see \cite{CST}). Following \cite{BGG1}, set \[ \Delta^+ = \{x \in X \,:\, \dim G(x) > 0 \} \] and note that $\Delta = \Delta^+ \cap \Lambda$, where $\Lambda$ is the set of $x \in X$ such that $G(x)$ is not contained in a positive dimensional maximal subgroup of $G$. In addition, for $H \in \mathcal{M}$ we define \begin{equation}\label{e:XHH} X_H = \{ x \in X \,:\, \mbox{$G(x) \leqslant H^g$ for some $g \in G$} \}. \end{equation} By combining \cite[Corollary 4]{BGG1} and \cite[Theorem 1.1]{GMT}, we see that $\Delta^{+}$ is empty if and only if $(G,p) = (G_2,3)$ or $(F_4,2)$, with $t=2$ and $\{\mathcal{C}_1,\mathcal{C}_2\} = \{A_1,\tilde{A}_1\}$, so we may assume $\Delta^+$ is non-empty. 
Then $\Delta^+$ is a dense subset of $X$ by \cite[Theorem 1]{BGG1} and thus $\Delta$ is non-empty if $\Lambda$ contains a non-empty open subset of $X$. As explained in \cite{BGG1}, we can work with fixed point spaces in order to study the existence (or otherwise) of such a subset of $\Lambda$. For $H \in \mathcal{M}$, let $\O$ be the coset variety $G/H$ and let $C_{\O}(y) = \{ \a \in \O \,:\, \a^{y} = \a\}$ be the fixed point space of an element $y \in G$ on $\O$. Set \[ \a(G,H,y) = \frac{\dim C_{\O}(y)}{\dim \O} \] and write \[ \Sigma_X(H) = \sum_{i=1}^t \a(G,H,y_i), \] where $X = \mathcal{C}_1 \times \cdots \times \mathcal{C}_t$ and $\mathcal{C}_i = y_i^G$ as before. As noted in the proof of \cite[Theorem 5]{BGG1}, if \begin{equation}\label{e:fix} \Sigma_X(H) < t-1 \end{equation} then $X_H$ is contained in a proper closed subset of $X$. In particular, if the inequality in \eqref{e:fix} holds for all $H \in \mathcal{M}$, then $\Lambda$ contains a non-empty open subset (recall that $\mathcal{M}$ is a finite set) and thus $\Delta$ is non-empty. In order to establish the inequality in \eqref{e:fix} for a given subgroup $H \in \mathcal{M}$, we need upper bounds on $\dim C_{\O}(y)$ for elements $y \in G$ of order $p$, where $\O = G/H$. Fixed point spaces for actions of exceptional algebraic groups were studied by Lawther, Liebeck and Seitz \cite{LLS1} in a general setting, but we will often require sharper bounds on $\dim C_{\O}(y)$ for the unipotent elements we are interested in here. To do this, first observe that $\dim C_{\O}(y)=0$ if $H$ does not contain a conjugate of $y$, so we may as well assume $y \in H$. Then \cite[Proposition 1.14]{LLS1} states that \[ \dim C_{\O}(y) = \dim \O - \dim y^G + \dim (y^G \cap H). \] In order to apply this formula, given an element $y \in H$ of order $p$, we need to identify the $G$-class of $y$ and we need to compute $\dim(y^G \cap H)$. In many cases, we can appeal to earlier work of Lawther to do this. 
Specifically, if $M$ is a maximal closed connected reductive subgroup of $G$, then the $G$-class of each unipotent $M$-class is determined in \cite{Law2} and we will make extensive use of these results. Indeed, this essentially reduces our problem to the following two cases: \begin{itemize}\addtolength{\itemsep}{0.2\baselineskip} \item[{\rm (a)}] $H \in \mathcal{M}$ is reductive and $|H:H^0|$ is divisible by $p$; \item[{\rm (b)}] $H \in \mathcal{M}$ is a parabolic subgroup. \end{itemize} In (a) we need to consider the possible existence of elements $y \in H \setminus H^0$ of order $p$. In this situation, we will often work with the restriction of a suitable $kG$-module $W$ to $H^0$ in order to determine the Jordan form on $W$ of such an element $y$. From here, in almost all cases, we can then identify the $G$-class of $y$ by inspecting \cite{Law1}. The case where $H$ is a maximal parabolic subgroup requires special attention. Here we adopt an indirect approach, which involves working with the corresponding permutation character $1^{G_{\sigma}}_{H_{\sigma}}$ in order to compute $\dim C_{\O}(y)$, where $\sigma$ is a Steinberg endomorphism of $G$ and $G_{\sigma} = G(q)$ for some $p$-power $q$. In turn, this relies heavily on work of Geck and L\"{u}beck \cite{Geck,Lub}, who have very recently completed the computation of the Green functions for finite exceptional groups of Lie type in all characteristics (see Section \ref{s:parab} for more details). For most varieties $X = \mathcal{C}_1 \times \cdots \times \mathcal{C}_t$ as in Theorem \ref{t:main}, we will show that either \eqref{e:di} is satisfied for the $kG$-module $V$ in Table \ref{tab:mod}, in which case $\Delta$ is empty, or the inequality in \eqref{e:fix} holds for all $H \in \mathcal{M}$ and thus $\Delta$ is non-empty. 
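For readers who wish to reproduce this bookkeeping, the criterion \eqref{e:fix} is elementary arithmetic once the three dimensions entering the formula for $\dim C_{\O}(y)$ are known. The sketch below makes the computation explicit; the dimensions in the usage example are placeholders, not actual values from the paper.

```python
from fractions import Fraction

def sigma_X(dim_Omega, class_data):
    """class_data: list of (dim y_i^G, dim(y_i^G cap H)) pairs.
    Returns Sigma_X(H) = sum_i alpha(G, H, y_i), where
    dim C_Omega(y_i) = dim Omega - dim y_i^G + dim(y_i^G cap H)."""
    total = Fraction(0)
    for dim_class, dim_meet in class_data:
        fix = dim_Omega - dim_class + dim_meet
        total += Fraction(fix, dim_Omega)
    return total

def excludes_H(dim_Omega, class_data):
    """True if Sigma_X(H) < t - 1, so X_H lies in a proper closed subset."""
    return sigma_X(dim_Omega, class_data) < len(class_data) - 1
```

For instance, with hypothetical values $\dim \O = 20$, $t = 2$ and class data $(12, 6)$ and $(16, 10)$, one gets $\Sigma_X(H) = 14/20 + 14/20 = 7/5 > 1$, so the criterion fails for this $H$ and a different argument is needed.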
In this way, the proof of Theorem \ref{t:main} is reduced to the configurations in Table \ref{tab:special}, where in each case the inequality in \eqref{e:di} is not satisfied and $\Sigma_X(H) \geqslant t-1$ for some $H \in \mathcal{M}$. We need a different approach to show that $\Delta$ is empty in these special cases. To do this, we formalise a technique which was first introduced in \cite{BGG2} (see the proof of \cite[Lemma 4.1]{BGG2}, for example), which provides an essentially uniform method for handling all of these cases. The basic idea is as follows. First we embed a set of class representatives $y_i \in \mathcal{C}_i$ in a carefully chosen closed connected proper subgroup $L$ of $G$ and we then consider the morphism \[ \varphi: \mathcal{D}_1 \times \cdots \times \mathcal{D}_t \times G \to X, \;\; (a_1,\ldots, a_t,g) \mapsto (a_1^g, \ldots, a_t^g), \] where $\mathcal{D}_i = y_i^L$. By studying the fibres of this map, we aim to show that $\varphi$ is a dominant morphism and that every tuple in the image of $\varphi$ topologically generates a subgroup contained in a conjugate of $L$. Then the set $X_L$ defined in \eqref{e:XHH} contains a non-empty open subset of $X$, so $\Delta$ is not dense in $X$ and thus \cite[Theorem 2]{BGG1} implies that $\Delta$ is empty. We refer the reader to Proposition \ref{p:fibre} for more details. \begin{remk}\label{r:prime} In Theorem \ref{t:main} we assume that each unipotent class $\mathcal{C}_i = y_i^G$ contains elements of order $p$. To conclude this introductory section, we briefly discuss the more general problem, where the $\mathcal{C}_i$ are arbitrary nontrivial unipotent classes. \begin{itemize}\addtolength{\itemsep}{0.2\baselineskip} \item[{\rm (a)}] As noted above, every nontrivial unipotent element has order $p$ if $p \geqslant h$, where $h$ is the Coxeter number of $G$, so Theorem \ref{t:main} gives a complete solution to the problem under this hypothesis. 
\item[{\rm (b)}] It is straightforward to show that $\Delta$ is empty for every tuple in Table \ref{tab:main} for all $p \geqslant 0$. Indeed, if $V$ is the $kG$-module in Table \ref{tab:mod} then by inspecting \cite{Law1} one checks that \eqref{e:di} holds and thus $\bigcap_i C_V(y_i) \ne 0$ in each case. Similarly, for $p=2$ we find that $\Delta$ is empty when $t=2$ and $(\mathcal{C}_1,\mathcal{C}_2)$ is one of the following (in each case, $\mathcal{C}_2$ contains elements of order $4$): \vspace{1mm} \begin{itemize}\addtolength{\itemsep}{0.2\baselineskip} \item $G = F_4$: $(A_1,(B_2)_2)$, $(A_1,(\tilde{A}_2A_1)_2)$, $(A_1,(C_3(a_1))_2)$, $((\tilde{A}_1)_2, A_2)$ \item$G = E_7$: $(A_1,(A_3A_2)_2)$ \end{itemize} \item[{\rm (c)}] Suppose the characteristic $p$ is a good prime for $G$ in the usual sense, so $p \geqslant 5$, with $p \geqslant 7$ for $G = E_8$. Then one can check that every class $\mathcal{C}_i$ appearing in one of the tuples in Table \ref{tab:special} contains elements of order $p$ (and thus $\Delta$ is empty by Theorem \ref{t:main}), with the exception of the following two cases (up to reordering): \vspace{1mm} \begin{itemize}\addtolength{\itemsep}{0.2\baselineskip} \item $G = E_7$, $p=5$ and $(\mathcal{C}_1,\mathcal{C}_2) = (A_1,(A_5)^{(1)})$ \item $G = E_8$, $p=7$ and $(\mathcal{C}_1,\mathcal{C}_2) = (A_1,D_5)$ \end{itemize} \vspace{1mm} \noindent Without some additional work, we cannot resolve these cases by appealing to Proposition \ref{p:fibre}, due to the prime order hypothesis adopted in \cite[Theorem 7]{BGG2} (for the group $L=D_6$) and in the present paper (for $L = E_7$). \item[{\rm (d)}] We claim that $\Delta$ is empty in the two special cases highlighted in part (c), which allows us to conclude that $\Delta$ is empty for every tuple in Table \ref{tab:special} when $p$ is a good prime for $G$. We thank Bob Guralnick for suggesting the following argument. 
Let $x = (x_1,x_2) \in X$ and note that $G(x) \ne G$ if $G(x)$ acts reducibly on the adjoint module for $G$. Write $k = R/M$, where $R$ is a ring of algebraic integers and $M$ is a maximal ideal, and lift $x$ to $z = (z_1,z_2) \in \mathcal{C}_1' \times \mathcal{C}_2'$, where $\mathcal{C}_1'$ and $\mathcal{C}_2'$ are the corresponding unipotent conjugacy classes in $G(R)$, with the same labels as $\mathcal{C}_1$ and $\mathcal{C}_2$. By a standard compactness argument, which relies on the fact that $\Delta$ is empty in the two cases of interest when the characteristic of the underlying field is sufficiently large, we deduce that $G(R)$ is not topologically generated by $z_1$ and $z_2$. In turn, this implies that $\langle z_1,z_2\rangle$ acts reducibly on the adjoint module for $G(R)$ and hence $\langle x_1, x_2\rangle$ is reducible on the adjoint module for $G$. Therefore, $G(x) \ne G$ and we conclude that $\Delta$ is empty. Alternatively, if $G = E_7$ and $(\mathcal{C}_1,\mathcal{C}_2) = (A_1,(A_5)^{(1)})$ then we can use \cite[Corollary 3.20]{BGG2} to show that $\Delta$ is empty for all $p \geqslant 0$ (see Remark \ref{r:e7_lie}). \item[{\rm (e)}] We conjecture that the conclusion to Theorem \ref{t:main} holds in good characteristic, regardless of the orders of the elements in each unipotent class. \end{itemize} \end{remk} \vspace{3mm} \noindent \textbf{Acknowledgements.} The author thanks Bob Guralnick and Donna Testerman for helpful discussions on the content of this paper. He also thanks the Institute of Mathematics at the \'{E}cole Polytechnique F\'{e}d\'{e}rale de Lausanne for their generous hospitality during a research visit in 2022. \section{Preliminaries}\label{s:prel} Here we record some preliminary results which will be useful in the proof of Theorem \ref{t:main}. Throughout this section, $G$ denotes a simple algebraic group over an algebraically closed field $k$ of characteristic $p \geqslant 0$. 
\subsection{Fixed point spaces and conjugacy classes}\label{ss:fix} Let $H$ be a closed subgroup of $G$ and consider the natural action of $G$ on the coset variety $\O = G/H$. For $y \in G$ we define \[ C_{\O}(y) = \{ \a \in \O \,:\, \a^y = \a\}. \] As briefly explained in Section \ref{s:intro}, bounds on the dimensions of these fixed point spaces will play a key role in the proof of Theorem \ref{t:main}. In order to obtain such bounds, we will repeatedly apply the following result (see \cite[Proposition 1.14]{LLS1}). \begin{prop}\label{p:dim} We have \[ \dim C_{\O}(y) = \dim \O - \dim y^G + \dim(y^G \cap H) \] for all $y \in H$. \end{prop} Here $\dim \O = \dim G - \dim H$ and it is also easy to compute $\dim y^G = \dim G - \dim C_G(y)$ if we can identify the $G$-class of $y$. However, the final term $\dim(y^G\cap H)$ is more difficult to calculate, in general, since one needs to understand the embedding of $H$-classes in $G$. In the main setting we are interested in here, with $G$ of exceptional type and $y \in G$ unipotent, we will work extensively with Lawther's results in \cite{Law2} on the fusion of unipotent $H$-classes in $G$ when $H$ is a maximal closed connected reductive subgroup of $G$. Note that if $H$ is reductive (and possibly disconnected) then $y^G \cap H = \bigcup_i y_i^H$ is a finite union of $H$-classes for all $y \in H$ (see \cite{Gur}) and thus \[ \dim(y^G \cap H) = \max_i \dim y_i^H \] in this situation. In order to prove Theorem \ref{t:main} we need detailed information on the unipotent classes in simple algebraic groups and there is an extensive literature to draw upon. First assume $G$ is an exceptional group. Here we refer the reader to \cite{LS_book} and specifically Tables 22.1.1-5, where the relevant conjugacy classes are listed. Throughout this paper, we will adopt the labelling of unipotent classes given in these tables, noting that this sometimes differs from the notation used elsewhere (for example, see Remark \ref{r:main}(a)). 
The centraliser dimensions are recorded in the third column of each table, so it is easy to compute $\dim y^G$ for each unipotent element $y \in G$. We will also make extensive use of Lawther's work in \cite{Law1}, where he determines the Jordan form of each unipotent element on certain $kG$-modules $V$, including the module defined in Table \ref{tab:mod}. In particular, we can use \cite{Law1} to read off the unipotent classes containing elements of order $p$, recalling that $y$ has order $p$ if and only if every Jordan block in the Jordan form of $y$ on $V$ has size at most $p$. We will also need some results from \cite{LS_book} on unipotent classes in classical type algebraic groups. Let $G$ be such a group, with natural module $V$. As noted in \cite[Theorem 3.1]{LS_book}, if $p \ne 2$ then each unipotent class $y^G$ is essentially determined by the Jordan form of $y$ on $V$, which also encodes the dimension of the class. The description of unipotent classes is more complicated when $p=2$ and $G$ is a symplectic or orthogonal group (see \cite[Theorem 4.2]{LS_book}). Here the conjugacy classes of involutions were originally determined by Aschbacher and Seitz \cite[Sections 7,8]{AS} and we will often use their notation for class representatives. The Zariski closure of a unipotent conjugacy class $y^G$ is a union of unipotent classes and this yields a natural partial order on the set of unipotent classes of $G$, where we write $z^G \preccurlyeq y^G$ if $z^G$ is contained in the closure of $y^G$. In view of Proposition \ref{p:clos} below, we are interested in this closure relation when $G$ is an exceptional type group and here we can appeal to work of Spaltenstein \cite{Spal}, which gives a complete description of the corresponding ordering in this setting. For $G = G_2$, we refer the reader to \cite[Section II.10]{Spal}, while the relevant closure diagrams for the remaining exceptional groups are presented in \cite[Section IV.2]{Spal}. 
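The order-$p$ criterion just recalled can also be phrased computationally: in characteristic $p$, a unipotent Jordan block $J_i$ has order equal to the least power $p^k$ with $p^k \geqslant i$, since $(J_i - I)^i = 0$ and $(J_i)^{p^k} = I + (J_i - I)^{p^k}$. Hence an element has order $p$ precisely when every block has size at most $p$. A short sketch (the function name is ours), with the $A_2$ class of $E_8$ from Remark \ref{r:main}(b) as the example:

```python
def unipotent_order(block_sizes, p):
    """Order of a unipotent element with the given Jordan block sizes in
    characteristic p: the least power p^k with p^k >= max block size."""
    m = max(block_sizes)
    order = 1
    while order < m:
        order *= p
    return order

# Class A_2 in E_8 on the adjoint module: blocks (J_4^2, J_3^54, J_1^78)
# when p = 2, so the elements have order 4 rather than 2.
```

This matches the statement in Remark \ref{r:main}(b) that elements of this class have order $p$ if and only if $p \geqslant 3$.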
\begin{rem}\label{r:spal} As previously remarked, our labelling of unipotent classes (which is consistent with \cite{LS_book}) does not always agree with the notation adopted by Spaltenstein in \cite{Spal}. This sometimes means that extra care is required when interpreting Spaltenstein's closure diagrams. For example, suppose $G = F_4$ and $p=2$. Here \cite{LS_book} uses the notation $\tilde{A}_1$ for the class of short root elements in $G$ and $(\tilde{A}_1)_2$ for the special class of involutions with dimension $22$. But this is opposite to the notation in \cite{Spal}, where $(\tilde{A}_1)_2$ denotes the class of short root elements. In particular, Spaltenstein's closure diagram (see \cite[p.250]{Spal}) shows that the closure of every nontrivial unipotent class contains long or short root elements, which is consistent with \cite[Corollary 3.3]{GMal}. \end{rem} We will also need the following version of \cite[Proposition 1.4]{LLS1} on graph automorphisms of order $p$. In Table \ref{tab:graph}, we write $C_H(u)$ for the centraliser of a long root element $u \in H$. \begin{prop}\label{p:graph} Let $G$ be a simple algebraic group of type $A_r$, $E_6$ or $D_4$ over an algebraically closed field of characteristic $p$, where $p=2,2$ or $3$, respectively. Let $\tau$ be a graph automorphism of $G$ of order $p$ and let $y_1, \ldots, y_n$ be a complete set of representatives of the $G$-classes of elements of order $p$ in the coset $G\tau$. Then $n$ and $d_i = \dim y_i^G$ are recorded in Table \ref{tab:graph}, together with the structure of $C_G(y_i)$. 
\end{prop} {\small \begin{table} \[ \begin{array}{lcccl} \hline G & p & n & d_i & C_G(y_i) \\ \hline A_{2m} & 2 & 1 & m(2m+3) & B_m \\ A_{2m-1} & 2 & 2 & 2m^2 \pm m-1 & C_m, \, C_{C_m}(u) \\ E_6 & 2 & 2 & 26, 42 & F_4, \, C_{F_4}(u) \\ D_4 & 3 & 2 & 14, 20 & G_2, \, C_{G_2}(u) \\ \hline \end{array} \] \caption{The classes and centralisers of graph automorphisms of order $p$} \label{tab:graph} \end{table}} \subsection{Subgroup structure}\label{ss:sub} Let $G$ be a simple algebraic group of exceptional type and let $\mathcal{M}$ be a set of representatives of the conjugacy classes of closed positive dimensional maximal subgroups of $G$. We may write \[ \mathcal{M} = \mathcal{P} \cup \mathcal{R}, \] where the subgroups in $\mathcal{P}$ are parabolic and those in $\mathcal{R}$ are reductive (and possibly disconnected). Recall that the conjugacy classes of maximal parabolic subgroups are parameterised by the nodes in the Dynkin diagram of $G$; we will use $P_i$ for a maximal parabolic subgroup corresponding to the $i$-th node in the Dynkin diagram (throughout this paper, we adopt the standard labelling of Dynkin diagrams as in \cite{Bou}). The classification of the subgroups in $\mathcal{R}$ was completed by Liebeck and Seitz in \cite{LS04}, noting that an additional conjugacy class of maximal subgroups of type $F_4$ arises when $(G,p) = (E_8,3)$; see \cite{CST}. The subgroups in $\mathcal{R}$ are recorded in Table \ref{tab:max}. \begin{rem} In Table \ref{tab:max} we use the notation $\tilde{L}$ to denote a simple factor $L$ of $H^0$ which is generated by short root subgroups of $G$. In addition, if $G = E_r$ then $T$ is a maximal torus of $G$ and $W = N_G(T)/T$ is the Weyl group, where $W$ is isomorphic to ${\rm PGSp}_4(3)$, $2 \times {\rm Sp}_6(2)$ and $2.{\rm O}_{8}^{+}(2)$ for $r = 6,7$ and $8$, respectively.
\end{rem} \renewcommand{\arraystretch}{1.2} {\small \begin{table} \[ \begin{array}{ll} \hline G & \mathcal{R} \\ \hline G_2 & A_2.2, \, \tilde{A}_2.2 \, (p=3), \, A_1\tilde{A}_1, \, A_1 \, (p \geqslant 7) \\ F_4 & B_4, \, C_4 \, (p=2), \, A_1C_3 \, (p \geqslant 3), \, A_1G_2 \, (p \geqslant 3), \, G_2 \, (p=7), \, A_1 \, (p \geqslant 13), \, D_4.S_3, \, \tilde{D}_4.S_3 \, (p=2), \, A_2\tilde{A}_2.2 \\ E_6 & F_4, \, A_1A_5, \, C_4 \, (p \geqslant 3), \, A_2G_2, \, G_2 \, (p \ne 7), \, D_4T_2.S_3,\, A_2^3.S_3, \, A_2.2 \, (p \geqslant 5), \, T.W \\ E_7 & A_1D_6, \, A_1F_4, \, G_2C_3, \, A_1G_2 \, (p \geqslant 3), \, A_1A_1 \, (p \geqslant 5), \, A_1\, (\mbox{$2$ classes; $p \geqslant 17,19$}), \, E_6T_1.2, \, A_7.2 \\ & A_2A_5.2,\, A_1^3D_4.S_3, \, (2^2 \times D_4).S_3 \, (p \geqslant 3), \, A_1^7.{\rm L}_3(2), \, A_2.2 \, (p \geqslant 5), \, T.W \\ E_8 & A_1E_7, \, A_2E_6, \, D_8, \, G_2F_4, \, F_4 \, (p=3), \, B_2 \, (p \geqslant 5), \, A_1 \, (\mbox{$3$ classes, $p \geqslant 23,29,31$}), \, A_8.2, \, D_4^2.(S_3 \times 2) \\ & A_4^2.4, \, A_1G_2^2.2 \, (p \geqslant 3), \, A_2^4.{\rm GL}_2(3), \, A_1^8.{\rm AGL}_3(2), \, A_2A_1.2 \, (p \geqslant 5), \, A_1 \times S_5 \, (p \geqslant 7),\, T.W \\ \hline \end{array} \] \caption{The collection $\mathcal{R}$ of reductive maximal subgroups of $G$} \label{tab:max} \end{table}} \renewcommand{\arraystretch}{1} The following result on the maximal overgroups of long root elements will be useful later. Recall that the long root elements comprise the class labelled $A_1$ in \cite[Tables 22.2.1-5]{LS_book}. \begin{thm}\label{t:long} Let $H \in \mathcal{R}$. Then $H$ contains a long root element of $G$ only if $H \in \mathcal{L}$, where $\mathcal{L}$ is defined in Table \ref{tab:long}. \end{thm} \begin{proof} We need to rule out the existence of long root elements in each subgroup $H \in \mathcal{R} \setminus \mathcal{L}$ and we can typically use \cite{Law2} to do this. For example, suppose $G = E_8$. 
If $H = F_4$ and $p=3$ then the $G$-class of each unipotent $H$-class is presented in \cite[Table 2]{CST} and we immediately deduce that $H$ does not contain any long root elements. By inspecting \cite[Tables 36, 37]{Law2}, we see that the same conclusion holds when $H = A_2A_1.2$ or $B_2$ (both with $p \geqslant 5$), and for $H = A_1$ we can appeal to \cite[Table 27]{Law2}. Finally, suppose $p \geqslant 7$ and $H = A_1 \times S_5$. If $y \in H$ has order $p$, then we may embed $y$ in a connected maximal rank subgroup $A_4^2$ of $G$ so that $y = y_1y_2$ and each $y_i \in A_4$ is regular. Then the $G$-class of $y$ is recorded in \cite[Table 26]{Law2} and we conclude that $H$ does not contain any long root elements. The other groups can be handled in the same way and we omit the details. Note that if $G = E_7$ and $H = (2^2 \times D_4).S_3$ (with $p \geqslant 3$) then the proof of \cite[Proposition 5.12]{BGS} shows that $H$ does not contain long root elements. \end{proof} \renewcommand{\arraystretch}{1.2} {\small \begin{table} \[ \begin{array}{ll} \hline G & \mathcal{L} \\ \hline G_2 & A_2.2, \, A_1\tilde{A}_1 \\ F_4 & B_4, \, C_4 \, (p=2), \, A_1C_3 \, (p \geqslant 3), \, A_1G_2 \, (p \geqslant 3), \, D_4.S_3, \, \tilde{D}_4.S_3 \, (p=2), \, A_2\tilde{A}_2.2 \\ E_6 & F_4, \, A_1A_5, \, C_4 \, (p \geqslant 3), \, A_2G_2, \, D_4T_2.S_3,\, A_2^3.S_3, \, T.W \\ E_7 & A_1D_6, \, A_1F_4, \, G_2C_3, \, E_6T_1.2, \, A_7.2,\, A_2A_5.2,\, A_1^3D_4.S_3, \, (2^2 \times D_4).S_3 \, (p \geqslant 3), \, A_1^7.{\rm L}_3(2), \, T.W \\ E_8 & A_1E_7, \, A_2E_6, \, D_8, \, G_2F_4, \, A_8.2, \, D_4^2.(S_3 \times 2),\, A_4^2.4, \, A_1G_2^2.2 \, (p \geqslant 3), \, A_2^4.{\rm GL}_2(3), \, A_1^8.{\rm AGL}_3(2),\, T.W \\ \hline \end{array} \] \caption{The subgroup collection $\mathcal{L}$ in Theorem \ref{t:long}} \label{tab:long} \end{table}} \renewcommand{\arraystretch}{1} \subsection{Topological generation}\label{ss:top} For the remainder of Section \ref{s:prel}, let us adopt some of the 
notation introduced in Section \ref{s:intro}. Let $G$ be a simple algebraic group defined over an algebraically closed field $k$ of characteristic $p \geqslant 0$ and let us assume $k$ is not algebraic over a finite field. Let $t \geqslant 2$ and set $X = \mathcal{C}_1 \times \cdots \times \mathcal{C}_t$, where each $\mathcal{C}_i = y_i^G$ is a non-central conjugacy class. Given a tuple $x = (x_1, \ldots, x_t) \in X$, let $G(x)$ be the Zariski closure of $\langle x_1, \ldots, x_t \rangle$ and set \begin{align*} \Delta & = \{x \in X \,:\, G(x) = G\} \\ \Delta^{+} & = \{x \in X \,:\, \dim G(x) > 0\}. \end{align*} In addition, let $\Lambda$ be the set of elements $x \in X$ such that $G(x)$ is not contained in a positive dimensional maximal subgroup of $G$. Note that $\Delta = \Delta^+ \cap \Lambda$. The following basic observation will be very useful (see \cite[Lemma 2.2]{BGG2}) and it explains why we are interested in the closure relation on unipotent classes discussed in Section \ref{ss:fix}. \begin{prop}\label{p:clos} Let $\bar{X}$ be the Zariski closure of $X$ in $G^t$ and assume $G = G(y)$ for some $y \in \bar{X}$. Then $\Delta$ is non-empty. \end{prop} We will also need the following result (see \cite[Theorems 1 and 2]{BGG1}). \begin{thm}\label{t:plus} Let $\Sigma = \Delta$ or $\Delta^+$. Then $\Sigma$ is non-empty if and only if it is dense in $X$. \end{thm} For $H \in \mathcal{M}$, let $\O$ be the coset variety $G/H$ and set \begin{equation}\label{e:alpha} \a(G,H,y) = \frac{\dim C_{\O}(y)}{\dim \O} \end{equation} for $y \in G$. Then define \begin{equation}\label{e:sig} \Sigma_X(H) = \sum_{i=1}^t \a(G,H,y_i) \end{equation} where $\mathcal{C}_i = y_i^G$ for each $i$. The following result, which is essentially \cite[Theorem 5]{BGG1}, will be a key tool in the proof of Theorem \ref{t:main}. In particular, it establishes a bridge between topological generation and the dimensions of fixed point spaces on coset varieties. 
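Note that if $t = 2$, then the condition $\Sigma_X(H) < t-1$ appearing in Proposition \ref{p:fix} below is simply the assertion that
\[
\dim C_{\O}(y_1) + \dim C_{\O}(y_2) < \dim \O
\]
for the coset variety $\O = G/H$.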
\begin{prop}\label{p:fix} If $\Delta^+$ is non-empty and $\Sigma_X(H)<t-1$ for all $H \in \mathcal{M}$, then $\Delta$ is non-empty. \end{prop} \begin{proof} Fix a subgroup $H \in \mathcal{M}$. Since $\Sigma_X(H)<t-1$, the proof of \cite[Theorem 5]{BGG1} shows that \[ X_H = \{ x \in X \,:\, \mbox{$G(x) \leqslant H^g$ for some $g \in G$}\} \] is contained in a proper closed subset of $X$. Therefore, there is a non-empty open subset $U_H$ of $X$ such that for all $x \in U_H$, $G(x)$ is not contained in any conjugate of $H$. Since $\mathcal{M}$ is finite and the inequality $\Sigma_X(H)<t-1$ holds for all $H \in \mathcal{M}$, it follows that $\bigcap_{H \in \mathcal{M}} U_H$ is a non-empty open subset of $X$ contained in $\Lambda$. Finally, we recall that $\Delta^+$ is dense in $X$ by Theorem \ref{t:plus} and thus $\Delta = \Delta^+ \cap \Lambda$ is non-empty. \end{proof} In order to apply Proposition \ref{p:fix} in the proof of Theorem \ref{t:main}, we will need the following result. This follows by combining \cite[Corollary 4]{BGG1} and \cite[Theorem 1.1]{GMT}. \begin{thm}\label{t:plus2} Let $G$ be a simple algebraic group of exceptional type and assume $X = \mathcal{C}_1 \times \cdots \times \mathcal{C}_t$, where each $\mathcal{C}_i$ is a nontrivial unipotent class. Then $\Delta^+$ is empty if and only if $(G,p) = (G_2,3)$ or $(F_4,2)$, with $t=2$ and $\{\mathcal{C}_1,\mathcal{C}_2\} = \{A_1,\tilde{A}_1\}$. \end{thm} To conclude this preliminary section, we state and prove the following result. This is based on a method first introduced in the proof of \cite[Lemma 4.1]{BGG2} and it will turn out to be a very useful tool in the proof of Theorem \ref{t:main}. Indeed, we will use it to show that $\Delta$ is empty for every case recorded in Table \ref{tab:special}. This result applies in the general setting, where $X = \mathcal{C}_1 \times \cdots \times \mathcal{C}_t$ and the $\mathcal{C}_i$ are arbitrary conjugacy classes. 
\begin{prop}\label{p:fibre} Suppose there exists a closed connected subgroup $L$ of $G$ and a tuple $x = (x_1, \ldots, x_t) \in X$ such that all of the following conditions are satisfied: \begin{itemize}\addtolength{\itemsep}{0.2\baselineskip} \item[{\rm (i)}] $M = N_G(L)$ is connected and $x_i \in M$ for all $i$. \item[{\rm (ii)}] $G(x)^0 = L$ and $G(y)^0 \leqslant L$ for all $y \in Y$, where $Y = \mathcal{D}_1 \times \cdots \times \mathcal{D}_t$ and $\mathcal{D}_i = x_i^M$. \item[{\rm (iii)}] $\dim M = \dim Y + \dim G - \dim X$. \end{itemize} Then $\Delta$ is empty. \end{prop} \begin{proof} Consider the morphism \[ \varphi: Y \times G \to X, \;\; (d_1,\ldots,d_t,g) \mapsto (d_1^g, \ldots, d_t^g) \] and set $Z = \{ (x_1^{a^{-1}}, \ldots, x_t^{a^{-1}},a) \,:\, a \in M\}$. Then $Z$ is contained in the fibre $\varphi^{-1}(x)$ and we claim that $Z = \varphi^{-1}(x)$. To see this, let $(x_1^{g^{-1}}, \ldots, x_t^{g^{-1}},g) \in \varphi^{-1}(x)$ be an arbitrary element, so $g \in G$ and for each $i$ we may write $x_i^{g^{-1}} = x_i^{a_i}$ for some $a_i \in M$. Set $y = (x_1^{a_1}, \ldots, x_t^{a_t}) \in Y$. Then \[ \langle x_1, \ldots, x_t \rangle = \langle x_1^{a_1}, \ldots, x_t^{a_t} \rangle^g \] and using the conditions in (ii) we deduce that \[ L = G(x)^0 = (G(y)^0)^g \leqslant L^g. \] Since $L$ is connected, it follows that $L = L^g$ and thus $g \in N_G(L)$, which is equal to $M$ by (i). This justifies the claim and we conclude that $\dim \varphi^{-1}(x) = \dim M$. Next observe that $\varphi$ is a morphism of irreducible varieties, so we have \[ \dim \varphi^{-1}(x) \geqslant \dim (Y \times G) - \dim \overline{{\rm im}(\varphi)} \geqslant \dim (Y \times G) - \dim X = \dim M, \] where the final equality holds by (iii). But we have already shown that $\dim \varphi^{-1}(x) = \dim M$, so this implies that $\dim \overline{{\rm im}(\varphi)} = \dim X$ and thus $\varphi$ is dominant. As a consequence, ${\rm im}(\varphi)$ contains a non-empty open subset $U$ of $X$. 
Finally, if $z = (d_1^g, \ldots, d_t^g) \in {\rm im}(\varphi)$, then $G(z)^0 = (G(y)^0)^g \leqslant L^g$ with $y = (d_1, \ldots, d_t) \in Y$ and thus $G(z) \ne G$. It follows that $U \cap \Delta$ is empty, whence $\Delta$ is non-dense in $X$ and therefore empty by Theorem \ref{t:plus}. \end{proof} \section{Fixed spaces}\label{s:fixV} Let $G$ be a simple algebraic group of exceptional type over an algebraically closed field $k$ of characteristic $p>0$ and assume $k$ is not algebraic over a finite field. Fix an integer $t \geqslant 2$ and set $X = \mathcal{C}_1 \times \cdots \times \mathcal{C}_t$, where each $\mathcal{C}_i = y_i^G$ is a unipotent conjugacy class of elements of order $p$. Let $V$ be the $kG$-module defined in Table \ref{tab:mod} and let $C_V(y_i)$ be the fixed space of $y_i$ on $V$, noting that $\dim C_V(y_i)$ coincides with the number of Jordan blocks in the Jordan form of $y_i$ on $V$. Note that \[ C_V(G) = \{ v \in V \,:\, \mbox{$v^g = v$ for all $g \in G$} \} = 0. \] The purpose of this section is to record the following result. Since the Jordan form on $V$ of every unipotent element in $G$ is recorded in \cite{Law1}, the proof is a routine exercise. \begin{thm}\label{t:fixV} Let $V$ be the $kG$-module in Table \ref{tab:mod}. Then \[ \sum_{i=1}^t \dim C_V(y_i) > (t-1)\dim V \] if and only if $(\mathcal{C}_1, \ldots, \mathcal{C}_t)$ is one of the cases recorded in Table \ref{tab:main}, up to reordering and graph automorphisms if $(G,p) = (G_2,3)$ or $(F_4,2)$. \end{thm} \begin{cor}\label{c:fixV} The set $\Delta$ is empty for every case $X$ arising in Table \ref{tab:main}. \end{cor} \begin{proof} If the inequality in Theorem \ref{t:fixV} is satisfied, then \[ \dim \bigcap_{i=1}^t C_V(x_i) \geqslant \sum_{i=1}^t \dim C_V(x_i) - (t-1)\dim V > 0, \] so $\bigcap_i C_V(x_i)$ is nontrivial and thus $\langle x_1, \ldots, x_t \rangle$, and also $G(x)$, has a nontrivial fixed space on $V$ for all $x = (x_1, \ldots, x_t) \in X$. But $C_V(G) = 0$ and thus $\Delta$ is empty.
\end{proof} \section{Parabolic actions}\label{s:parab} Define $G$ and $X = \mathcal{C}_1 \times \cdots \times \mathcal{C}_t$ as in the previous section and let $\mathcal{P} = \{P_1, \ldots, P_r\}$ be a set of representatives of the conjugacy classes of maximal parabolic subgroups of $G$, where $r$ is the rank of $G$. Fix a subgroup $H \in \mathcal{P}$ and let $\O = G/H$ be the corresponding coset variety. Following \cite{BGG1}, we can use a character-theoretic approach to compute the dimensions of the fixed point spaces $C_{\O}(y)$ for each unipotent element $y \in G$. Let $\sigma$ be a Steinberg endomorphism of $G$ with finite fixed point subgroup $G_{\sigma} = G(q)$ for some $p$-power $q$ and let $y \in G$ be unipotent. By inspecting the relevant tables in \cite[Chapter 22]{LS_book}, we observe that $y^G \cap G(q)$ is non-empty and thus every unipotent class in $G$ has a representative in $G(q)$. We may assume $H$ is $\sigma$-stable, so $H_{\sigma}$ is a maximal parabolic subgroup of $G_{\sigma}$ and we can consider the corresponding permutation character $\chi = 1^{G_{\sigma}}_{H_{\sigma}}$. According to \cite[Lemma 2.4]{LLS2}, the character $\chi$ admits the following decomposition \begin{equation}\label{e:dec} \chi = \sum_{\phi \in \widehat{W}}n_{\phi}R_{\phi} \end{equation} where $\widehat{W}$ is the set of complex irreducible characters of the Weyl group $W$ of $G$. Here the $R_{\phi}$ are almost characters of $G_{\sigma}$ and the coefficients are given by the inner products $n_{\phi} = \langle 1^W_{W_H},\phi \rangle$, where $W_H$ is the corresponding parabolic subgroup of $W$. In each case, the precise decomposition of $\chi$ as in \eqref{e:dec} is presented in \cite[Section 2]{LLS2}. The restriction of each almost character $R_{\phi}$ to unipotent elements yields the Green functions of $G_{\sigma}$, as defined by Deligne and Lusztig \cite{DL}. 
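For example, if $\phi = 1_W$ is the trivial character of $W$, then Frobenius reciprocity gives
\[
n_{1_W} = \langle 1^W_{W_H}, 1_W \rangle = \langle 1_{W_H}, 1_{W_H} \rangle = 1,
\]
so the trivial character always appears in the decomposition \eqref{e:dec} with coefficient $1$.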
Building on earlier work due to Beynon-Spaltenstein, Lusztig and Shoji, the computation of the Green functions for exceptional groups of Lie type in all characteristics has very recently been completed by Geck and L\"{u}beck \cite{Geck, Lub}. This allows us to compute $\chi(z)$ for every $p$-element $z \in G_{\sigma}$ and in each case we obtain a polynomial in $q$. By considering the degrees of these polynomials and by appealing to Lang-Weil \cite{LW}, we can read off $\dim C_{\O}(y)$ for each unipotent element $y \in G$. More precisely, if we write $(y^G)_{\sigma} = \bigcup_j z_j^{G_{\sigma}}$ as a union of $G_{\sigma}$-classes, then $\dim C_{\O}(y)$ is the maximal degree of the polynomials $\chi(z_j)$. This approach allows us to compute $\dim C_{\O}(y)$ for every unipotent element $y \in G$. For example, suppose $G = F_4$, $p \geqslant 3$, $H = P_1$ and $y$ is contained in the class labelled $B_2$. Then $\dim \O = 15$ and by inspecting \cite[p.414]{LLS2} we observe that \[ \chi = R_{\phi_{1,0}} + R_{\phi_{2,1}} + R_{\phi_{2,2}}+R_{\phi_{1,3}'} \] in terms of Carter's notation for irreducible characters of $W$ (see \cite{Car}). From \cite[Table 22.2.4]{LS_book}, we see that $(y^G)_{\sigma} = z_1^{G_{\sigma}} \cup z_2^{G_{\sigma}}$, where \[ |C_{G_{\sigma}}(z_1)| = 2q^{10}|{\rm SL}_2(q)|^2, \;\; |C_{G_{\sigma}}(z_2)| = 2q^{10}|{\rm SL}_2(q^2)|. \] By taking the appropriate Green functions, we compute \[ \chi(z_1) = 2q^4+3q^3+2q^2+q+1,\;\; \chi(z_2) = q^3+q+1 \] and we conclude that $\dim C_{\O}(y) = 4$. By proceeding in this way, it is routine to verify the following result. In part (ii), $\tau$ is a graph automorphism of $G$. In each case, the dimension of $\O=G/H$ is recorded in \cite[Table 10]{BGS}. \begin{thm}\label{t:par} Let $V$ be the $kG$-module in Table \ref{tab:mod} and suppose $\Sigma_X(H) \geqslant t-1$ for some maximal parabolic subgroup $H$ of $G$. 
Then one of the following holds: \begin{itemize}\addtolength{\itemsep}{0.2\baselineskip} \item[{\rm (i)}] The inequality in \eqref{e:di} is satisfied. \item[{\rm (ii)}] $(G,p) = (G_2,3)$ or $(F_4,2)$, and \eqref{e:di} holds for $\mathcal{C}_1^{\tau} \times \cdots \times \mathcal{C}_t^{\tau}$. \item[{\rm (iii)}] $G = F_4$ and $(\mathcal{C}_1, \ldots, \mathcal{C}_t)$ is either $(A_1,\tilde{A}_1,(\tilde{A}_1)_2)$ or $(\tilde{A}_1,\tilde{A}_2)$, up to reordering. \item[{\rm (iv)}] $G = E_6$, $E_7$ or $E_8$ and $(\mathcal{C}_1, \ldots, \mathcal{C}_t)$ is one of the cases recorded in Table \ref{tab:special}, up to reordering. \end{itemize} \end{thm} The following corollary is obtained by combining Theorem \ref{t:par} with Theorem \ref{t:main}. \begin{cor}\label{c:parab} The set $\Delta$ is empty if and only if one of the following holds, up to reordering and graph automorphisms: \begin{itemize}\addtolength{\itemsep}{0.2\baselineskip} \item[{\rm (i)}] \eqref{e:di} holds with $V$ as in Table \ref{tab:mod}. \item[{\rm (ii)}] $\Sigma_X(H) \geqslant t-1$ for some maximal parabolic subgroup $H$ of $G$. \item[{\rm (iii)}] $G=F_4$ and $(\mathcal{C}_1, \ldots, \mathcal{C}_t) = (A_1, \tilde{A}_1, \tilde{A}_1)$, $(A_1, (\tilde{A}_1)_2, (\tilde{A}_1)_2)$, $(\tilde{A}_1,A_2\tilde{A}_1)$ or $(\tilde{A}_1,B_2)$. \end{itemize} \end{cor} \begin{proof} In view of Theorems \ref{t:main} and \ref{t:par}, this follows by inspecting the cases appearing in Table \ref{tab:special} with $G=F_4$, excluding the two configurations in part (iii) of Theorem \ref{t:par}. \end{proof} \begin{rem} Two comments on the statement of Corollary \ref{c:parab}: \begin{itemize}\addtolength{\itemsep}{0.2\baselineskip} \item[{\rm (a)}] In part (ii), we may assume $(G,H)$ is one of the following: \[ (F_4,P_1), \, (E_6, P_1), \, (E_7, P_1), \, (E_8,P_8), \] where the maximal parabolic subgroups are labelled in the usual manner. \item[{\rm (b)}] Consider the special cases recorded in part (iii). 
If $(\mathcal{C}_1, \mathcal{C}_2,\mathcal{C}_3) = (A_1, \tilde{A}_1, \tilde{A}_1)$ then the proof of Proposition \ref{p:f4_dim3} shows that $\Sigma_X(B_4) = 2$. Similarly, for $(A_1, (\tilde{A}_1)_2, (\tilde{A}_1)_2)$ we get $\Sigma_X(C_4) = 2$, while $\Sigma_X(B_4) = 1$ for $(\tilde{A}_1,A_2\tilde{A}_1)$ and $(\tilde{A}_1,B_2)$. \end{itemize} \end{rem} \section{Proof of Theorem \ref{t:main}}\label{s:main} We are now ready to prove Theorem \ref{t:main}. Throughout this section, $G$ and $X$ are defined as in the statement of Theorem \ref{t:main} and recall that we adopt the notation for unipotent classes from \cite{LS_book}. We partition the proof into several subsections according to the group $G$. \subsection{$G = G_2$}\label{ss:g2} We begin by assuming $G = G_2$. Information on the unipotent conjugacy classes of $G$ is recorded in \cite[Table 22.1.5]{LS_book} and we adopt the notation therein for labelling the classes. By inspecting \cite[Table 1]{Law1}, it is easy to determine the required condition on $p$ to ensure that the elements in a given unipotent class have order $p$. In addition, note that if $p=3$ then a graph automorphism $\tau$ of $G$ interchanges the classes labelled $A_1$ and $\tilde{A}_1$ comprising long and short root elements, respectively, while the remaining classes are stable under $\tau$. Let $\mathcal{M}$ be a set of representatives of the conjugacy classes of closed positive dimensional maximal subgroups of $G$ and write $\mathcal{M} = \mathcal{P} \cup \mathcal{R}$, where the subgroups in $\mathcal{P}$ and $\mathcal{R}$ are parabolic and reductive, respectively. The subgroups in $\mathcal{R}$ are listed in Table \ref{tab:max}. We will need the following result on fixed point spaces. Recall that the expression $\a(G,H,y)$ is defined in \eqref{e:alpha}. In Table \ref{tab:beta_g2}, $\delta_{r,p}$ is the Kronecker delta, where $\delta_{r,p}=1$ if $p=r$, otherwise $\delta_{r,p}=0$. 
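To illustrate how the entries in Table \ref{tab:beta_g2} arise, consider $H = A_2.2$ and $y \in G_2(a_1)$, so $\dim \O = 6$ and $\dim y^G = 10$. As noted in the proof of Proposition \ref{p:g2_dim3} (via \cite[Section 4.2]{Law2}), here $\dim(y^G \cap H) = 6$ and thus
\[
\dim C_{\O}(y) = \dim \O - \dim y^G + \dim(y^G \cap H) = 6-10+6 = 2,
\]
which gives $\a(G,H,y) = 1/3$, in agreement with the table.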
\begin{prop}\label{p:g2_dim3} Let $H \in \mathcal{R}$ and let $y \in G$ be an element of order $p$. Then $\a(G,H,y) \leqslant \b$, where $\b$ is recorded in Table \ref{tab:beta_g2}. \end{prop} {\small \begin{table} \[ \begin{array}{l|ccccc} & A_1 & \tilde{A}_1 & (\tilde{A}_1)_3 & G_2(a_1) & G_2 \\ \hline A_2.2 & 2/3 & \delta_{2,p}/2 & 0 & 1/3 & 0 \\ \tilde{A}_2.2 & 0 & 2/3 & 0 & 1/3 & 0 \\ A_1\tilde{A}_1 & 1/2 & (1+\delta_{3,p})/4 & 0 & 1/4 & 0 \\ A_1 & 0 & 0 & 0 & 0 & 1/11 \\ \end{array} \] \caption{The upper bound $\a(G,H,y) \leqslant \b$ in Proposition \ref{p:g2_dim3}} \label{tab:beta_g2} \end{table}} \begin{proof} We consider each possibility for $H$ in turn, working with Proposition \ref{p:dim} to obtain the required upper bound on $\dim C_{\O}(y)$. First assume $H = A_2.2$, so $\dim \O = 6$ and $H^0$ is generated by long root subgroups of $G$. Let $y \in G$ be an element of order $p$. Now the $G$-class of each unipotent class in $H^0$ is determined by Lawther in \cite[Section 4.2]{Law2} and as a consequence we compute $\dim(y^G \cap H^0) = 4,6$ if $y \in A_1, G_2(a_1)$, respectively, otherwise $\dim(y^G \cap H^0)=0$. If $p \geqslant 3$, or if $p = 2$ and $y^G \cap (H \setminus H^0)$ is empty, then $y^G \cap H = y^G \cap H^0$ and we can calculate $\dim C_{\O}(y)$ via Proposition \ref{p:dim}. So to complete the analysis of this case, we may assume $p=2$ and $y \in H \setminus H^0$ is an involution. Here $y$ acts as a graph automorphism on $H^0$ (see \cite[p.3]{LS04}) and thus $C_{H^0}(y) = B_1$ by Proposition \ref{p:graph}. In order to determine the $G$-class of such an element $y$, let us consider the decomposition \[ V\downarrow H^0 = U \oplus U^* \oplus 0, \] where $V = W_G(\l_1)$ is the $7$-dimensional Weyl module for $G$ with highest weight $\l_1$ and $U$ and $0$ are the natural and trivial modules for $H^0$, respectively. 
Since $y$ interchanges the two $3$-dimensional summands, we deduce that $y$ has Jordan form $(J_2^3,J_1)$ on $V$ and by inspecting \cite[Table 1]{Law1} we see that $y \in \tilde{A}_1$. Therefore, $\a(G,H,y) \leqslant \delta_{2,p}/2$ when $y$ is in the class $\tilde{A}_1$. Next assume $H = \tilde{A}_2.2$ and $p=3$, where $H^0$ is generated by short root subgroups. Here $H$ is the image of a maximal subgroup $A_2.2$ under a graph automorphism $\tau$ (where the connected component $A_2$ is generated by long root subgroups) and so the result follows immediately from our analysis of the previous case, recalling that $\tau$ interchanges the $G$-classes labelled $A_1$ and $\tilde{A}_1$. Finally, if $H = A_1\tilde{A}_1$ or $A_1$ then the $G$-class of each unipotent $H$-class is determined in \cite{Law2} and the desired result quickly follows. \end{proof} We are now in a position to prove Theorem \ref{t:main} for $G = G_2$. Recall that $\Delta$ is non-empty when $t \geqslant 4$ by \cite[Theorem 7]{BGG1}, so we may assume $t \in \{2,3\}$. \begin{thm}\label{t:g2} The conclusion to Theorem \ref{t:main} holds when $G = G_2$. \end{thm} \begin{proof} First assume $t=3$. By Corollary \ref{c:fixV}, we know that $\Delta$ is empty when $\mathcal{C}_i = A_1$ for all $i$. By considering Proposition \ref{p:clos} and the closure relation on the set of unipotent classes in $G$ (see \cite[Section II.10]{Spal}), it suffices to show that $\Delta$ is non-empty when $X = \mathcal{C}_1 \times \mathcal{C}_2 \times \mathcal{C}_3$ with $\mathcal{C}_1 = \mathcal{C}_2 = A_1$ and $\mathcal{C}_3 = \tilde{A}_1$. By Proposition \ref{p:fix}, we just need to verify the bound $\Sigma_X(H)<2$ for all $H \in \mathcal{M}$, where $\Sigma_X(H)$ is defined as in \eqref{e:sig}. By Theorem \ref{t:par}, this bound holds when $H \in \mathcal{P}$ is a maximal parabolic subgroup. And for $H \in \mathcal{R}$, the desired result follows from the upper bounds on $\a(G,H,y)$ in Proposition \ref{p:g2_dim3}. 
A very similar argument applies when $t=2$ and $p \geqslant 3$. Here it suffices to show that $\Sigma_X(H)<1$ for all $H \in \mathcal{R}$ when $(\mathcal{C}_1,\mathcal{C}_2) = (A_1,G_2)$, $(\tilde{A}_1,(\tilde{A}_1)_3)$ or $(\tilde{A}_1,\tilde{A}_1)$, with $p \geqslant 5$ in the latter case (if $p=3$ then $(\tilde{A}_1,\tilde{A}_1)$ is the image of $(A_1,A_1)$ under a graph automorphism). Once again, the result follows by applying the bounds presented in Proposition \ref{p:g2_dim3}. \end{proof} \subsection{$G = F_4$}\label{ss:f4} Next assume $G = F_4$. Here we refer the reader to \cite[Table 22.1.4]{LS_book} for information on the unipotent classes in $G$, including the notation we use to label the classes. Note that if $p=2$ then a graph automorphism interchanges the $G$-classes labelled $A_1$ and $\tilde{A}_1$ comprising long and short root elements, respectively, and it fixes the classes $(\tilde{A}_1)_2$ and $A_1\tilde{A}_1$. As before, we write $\mathcal{M} = \mathcal{P} \cup \mathcal{R}$ for a set of representatives of the conjugacy classes of closed positive dimensional maximal subgroups of $G$ (see Table \ref{tab:max} for the subgroups in $\mathcal{R}$). We also write $\mathcal{L}$ for the subset of $\mathcal{R}$ defined in Table \ref{tab:long} (see Theorem \ref{t:long}). We begin by establishing the following result on fixed point spaces. \begin{prop}\label{p:f4_dim3} Let $H \in \mathcal{L}$ and let $y \in G$ be an element of order $p$ in one of the following conjugacy classes \[ A_1, \, \tilde{A}_1, \, (\tilde{A}_1)_2, \, A_1\tilde{A}_1, \, A_2, \, \tilde{A}_2, \, A_2\tilde{A}_1, \, \tilde{A}_2A_1, \, C_3. \] Then $\a(G,H,y) \leqslant \b$, where $\b$ is recorded in Table \ref{tab:beta_f4}. 
\end{prop} {\small \begin{table} \[ \begin{array}{l|ccccccccc} & A_1 & \tilde{A}_1 & (\tilde{A}_1)_2 & A_1\tilde{A}_1 & A_2 & \tilde{A}_2 & A_2\tilde{A}_1 & \tilde{A}_2A_1 & C_3 \\ \hline B_4 & 3/4 & (5-\delta_{2,p})/8 & 5/8 & 1/2 & 1/2 & 0 & 3/8 & 0 & 0 \\ C_4 & 1/2 & 3/4 & 5/8 & 1/2 & & & & & \\ A_1C_3 & 9/14 & 4/7 & & 3/7 & 3/7 & 3/7 & 0 & 2/7 & 1/7 \\ A_1G_2 & 5/7 & 0 & & 3/7 & 3/7 & 13/35 & 0 & 11/35 & 0 \\ A_2\tilde{A}_2.2 & 2/3 & (3+\delta_{2,p})/6 & 0 & 1/2 & 1/3 & 1/3 & 1/3 & 1/3 & 0 \\ D_4.S_3 & 3/4 & 5/8 & 7/12 & 1/2 & 1/2 & 1/3 & 0 & 1/3 & 0 \\ \tilde{D}_4.S_3 & 5/8 & 3/4 & 7/12 & 1/2 & & & & & \\ \end{array} \] \caption{The upper bound $\a(G,H,y) \leqslant \b$ in Proposition \ref{p:f4_dim3}} \label{tab:beta_f4} \end{table}} \begin{proof} If $H \in \{B_4,C_4,A_1C_3,A_1G_2\}$ then Lawther \cite{Law2} has determined the $G$-class of each $H$-class of unipotent elements and by appealing to Proposition \ref{p:dim} it is a straightforward exercise to compute $\a(G,H,y)$ in each case. For example, if $H = B_4$ and $p=2$, then the $G$-class of each $H$-class of involutions in $H$ is recorded in Table \ref{tab:b4} (in the first column, we use the notation from \cite[Table 4]{Law2} for the $H$-class of $y$, with the corresponding label from \cite{AS} given in the second column). As a consequence, we deduce that if $y \in A_1\tilde{A}_1$ then \[ \dim C_{\O}(y) = \dim \O - \dim y^G + \dim(y^G \cap H) = 16-28+20 = 8 \] and thus $\a(G,H,y) = 1/2$. {\small \begin{table} \[ \begin{array}{ccccc} \hline \mbox{$H$-class of $y$} & & \dim y^H & \mbox{$G$-class of $y$} & \dim y^G \\ \hline A_1 & a_2 & 12 & A_1 & 16 \\ B_1 & b_1 & 8 & \tilde{A}_1 & 16 \\ B_1^{(2)} & c_2 & 14 & (\tilde{A}_1)_2 & 22 \\ 2A_1 & a_4 & 16 & (\tilde{A}_1)_2 & 22 \\ A_1+B_1 & b_3 & 18 & A_1\tilde{A}_1 & 28 \\ A_1+B_1^{(2)} & c_4 & 20 & A_1\tilde{A}_1 & 28 \\ \hline \end{array} \] \caption{The case $G = F_4$, $H = B_4$, $p=2$} \label{tab:b4} \end{table}} Next assume $H = A_2\tilde{A}_2.2$. 
The $G$-class of each $H^0$-class of unipotent elements is determined in \cite[Section 4.7]{Law2} and this allows us to compute $\dim(y^G \cap H^0)$. This gives $\dim(y^G \cap H)$, and hence $\dim C_{\O}(y)$ via Proposition \ref{p:dim}, unless $p=2$ and $y^G \cap (H \setminus H^0)$ is non-empty. So we may assume $p=2$ and $y \in H \setminus H^0$ is an involution. Here $y$ induces a graph automorphism on both $A_2$ factors of $H^0$ and thus $C_{H^0}(y) = B_1^2$ (see Proposition \ref{p:graph}). In order to identify the $G$-class of $y$, let $V = W_G(\l_4)$ be the $26$-dimensional Weyl module with highest weight $\l_4$ and note that \[ V\downarrow H^0 = (U \otimes U) \oplus (U^* \otimes U^*) \oplus (0 \otimes \mathcal{L}(A_2)) \] (see \cite[Table 2]{Thomas}), where $U$, $\mathcal{L}(A_2)$ and $0$ are the natural, adjoint and trivial modules for $A_2$, respectively. Now $y$ interchanges the two $9$-dimensional summands and we calculate that it has Jordan form $(J_2^3,J_1^2)$ on $\mathcal{L}(A_2)$ (to do this, one just needs to consider the action of the transpose map on the space of trace-zero $3 \times 3$ matrices over $k$). Therefore, $y$ has Jordan form $(J_2^{12},J_1^2)$ on $V$ and by inspecting \cite[Table 3]{Law1} we deduce that $y$ is in the $G$-class labelled $A_1\tilde{A}_1$. Since $\dim(y^G \cap H^0) = 8$, we deduce that $\dim(y^G \cap H) \leqslant 10$ and thus $\a(G,H,y) \leqslant 1/2$. Next suppose $H = D_4.S_3$. Here $H^0 < B_4 < G$ and so we can use \cite[Section 4.4]{Law2} to compute $\dim(y^G \cap H^0)$. For example, suppose $p=2$ and observe that there are three $H$-classes of involutions in $H^0$, represented by the elements $a_2$, $c_2$ and $c_4$ in the notation of \cite{AS} (note that involutions of type $c_2$, $a_4$ and $a_4'$ are conjugate under a triality graph automorphism of $H^0$). The corresponding class in $B_4$ has the same label and the $G$-class can be read off from Table \ref{tab:b4}. 
So to complete the analysis of this case, we may assume $p \in \{2,3\}$ and $y \in H \setminus H^0$ has order $p$. First assume $p=2$. Here we can proceed as above, noting that $D_4.2 < B_4$ and the relevant involutions are of type $b_1$ and $b_3$, which we view as graph automorphisms of $H^0$. By consulting Table \ref{tab:b4}, we see that the $b_1$-involutions are contained in the $G$-class $\tilde{A}_1$, while those of type $b_3$ are in the class labelled $A_1\tilde{A}_1$. So for $y \in \tilde{A}_1$ we deduce that $\dim(y^G \cap H) \leqslant 7$ and thus $\a(G,H,y) \leqslant 5/8$. Similarly, if $y \in A_1\tilde{A}_1$ then $\dim(y^G \cap H) = 16$ and $\a(G,H,y) = 1/2$. Now assume $p=3$ and $y \in H \setminus H^0$ has order $3$. Here $y$ acts as a triality graph automorphism on $H^0$ and there are two $H^0$-classes to consider, represented by $y_1$ and $y_2$, where $C_{H^0}(y_1) = G_2$ and $C_{H^0}(y_2) = C_{G_2}(u)$ with $u \in G_2$ a long root element (see Proposition \ref{p:graph}). As above, let $V = W_G(\l_4)$ and note that \[ V\downarrow H^0 = U_1 \oplus U_2 \oplus U_3 \oplus 0^2, \] where the $U_i$ denote the three $8$-dimensional irreducible modules for $H^0$ (namely, the natural module and the two spin modules) and $0$ is the trivial module (see \cite[Table 2]{Thomas}). Now $y$ cyclically permutes $U_1,U_2$ and $U_3$, whence the Jordan form of $y$ on $V$ has $8$ Jordan blocks of size $3$. By inspecting \cite[Table 3]{Law1}, this places $y$ in one of the $G$-classes labelled $\tilde{A}_2$ or $\tilde{A}_2A_1$. In fact, by arguing as in the proof of \cite[Proposition 5.14]{BGS} we can show that $y \in \tilde{A}_2A_1$ when $C_{H^0}(y) = C_{G_2}(u)$. Therefore, we conclude that $\dim(y^G \cap H) \leqslant 14$ and $\a(G,H,y) \leqslant 1/3$ if $y \in \tilde{A}_2$. Similarly, we get $\a(G,H,y) \leqslant 1/3$ if $y \in \tilde{A}_2A_1$. 
Finally, let us observe that the result for $H = \tilde{D}_4.S_3$ with $p=2$ follows immediately from our analysis of the previous case, noting that $H$ is the image of a maximal subgroup $D_4.S_3$ of $G$ under a graph automorphism, where the connected component of the latter group is generated by long root subgroups. \end{proof} If $t \geqslant 5$ then $\Delta$ is non-empty by \cite[Theorem 7]{BGG1}, so we may assume $t \in \{2,3,4\}$. First we handle the case $t=4$. \begin{thm}\label{t:f4_4} The conclusion to Theorem \ref{t:main} holds when $G = F_4$ and $t=4$. \end{thm} \begin{proof} If $\mathcal{C}_i = A_1$ for all $i$ then $\Delta$ is empty by Corollary \ref{c:fixV}, and the same conclusion holds if $p=2$ and $\mathcal{C}_i = \tilde{A}_1$ for all $i$. Therefore, it remains to show that $\Delta$ is non-empty in all other cases. In view of Proposition \ref{p:fix}, it suffices to show that $\Sigma_X(H)<3$ for all $H \in \mathcal{M}$. Let $y \in G$ be a unipotent element of order $p$ and let $H \in \mathcal{M}$. By \cite[Theorem 3.1]{BGG1} we have $\a(G,H,y) \leqslant 3/4$ if $y \in A_1$, or if $p=2$ and $y \in \tilde{A}_1$, otherwise $\a(G,H,y)<2/3$. Therefore, \[ \Sigma_X(H) < 3\cdot \frac{3}{4}+\frac{2}{3} < 3 \] and the result follows. \end{proof} Now assume $t=3$. We begin by showing that $\Delta$ is empty for the three special cases recorded in Table \ref{tab:special}. \begin{lem}\label{l:f4_30} The set $\Delta$ is empty if $t=3$, $p=2$ and $(\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3) = (\tilde{A}_1,(\tilde{A}_1)_2,(\tilde{A}_1)_2)$ or $(A_1,\tilde{A}_1,(\tilde{A}_1)_2)$. \end{lem} \begin{proof} In view of Proposition \ref{p:clos}, noting that $A_1$ is contained in the closure of $(\tilde{A}_1)_2$ (see \cite{Spal} and Remark \ref{r:spal}), it suffices to show that $\Delta$ is empty for $(\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3) = (\tilde{A}_1,(\tilde{A}_1)_2,(\tilde{A}_1)_2)$. 
Write $\mathcal{C}_i = y_i^G$ and observe that we may embed each $y_i$ in a maximal closed subgroup $L = C_4$, where $y_1$ is an $a_2$-type involution and $y_2, y_3$ are of type $a_4$ (see \cite[Section 4.5]{Law2}). If $W$ denotes the natural module for $L$, then \[ \sum_{i=1}^3 \dim C_W(y_i) = 6+4+4 = 14 < 2\dim W \] and by applying \cite[Theorem 7]{BGG2} it follows that we may assume $G(y) = L$. If we set $\mathcal{D}_i = y_i^L$ then $\dim \mathcal{D}_1 = 12$ and $\dim \mathcal{D}_i = 16$ for $i=2,3$, whence \[ \dim (\mathcal{D}_1 \times \mathcal{D}_2 \times \mathcal{D}_3) + \dim G - \dim X = 36 = \dim L. \] Since $N_G(L) = L$, one can check that all of the conditions in Proposition \ref{p:fibre} are satisfied (with $M=L$). We conclude that $\Delta$ is empty. \end{proof} \begin{lem}\label{l:f4_300} The set $\Delta$ is empty if $t=3$ and $(\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3) = (A_1, \tilde{A}_1, \tilde{A}_1)$. \end{lem} \begin{proof} This is clear when $p=2$, since $\Delta$ is empty for the triple $(\tilde{A}_1, A_1, A_1)$ by Corollary \ref{c:fixV}, and this triple is the image of $(\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3)$ under a graph automorphism. Now assume $p \geqslant 3$ and embed each $y_i$ in a maximal closed subgroup $L = B_4$, where $y_1$ has Jordan form $(J_2^2,J_1^5)$ on the natural module for $L$, while $y_2$ and $y_3$ both have Jordan form $(J_2^4,J_1)$. Set $\mathcal{D}_i = y_i^L$ and note that $N_G(L) = L$ and \[ \dim \mathcal{C}_1 = 16, \; \dim \mathcal{C}_2 = \dim \mathcal{C}_3 = 22, \; \dim \mathcal{D}_1 = 12, \; \dim \mathcal{D}_2 = \dim \mathcal{D}_3 = 16. \] In addition, by \cite[Theorem 7]{BGG2}, we may assume that the $y_i$ topologically generate $L$. Setting $Y = \mathcal{D}_1 \times \mathcal{D}_2 \times \mathcal{D}_3$ we compute $\dim Y + \dim G - \dim X = \dim L$ and thus $\Delta$ is empty by Proposition \ref{p:fibre}. \end{proof} \begin{thm}\label{t:f4_3} The conclusion to Theorem \ref{t:main} holds when $G = F_4$ and $t=3$. 
\end{thm} \begin{proof} Set $\mathcal{C}_i = y_i^G$. In view of Corollary \ref{c:fixV} and Lemmas \ref{l:f4_30} and \ref{l:f4_300}, we just need to show that $\Delta$ is non-empty when $(\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3)$ is not one of the triples in Tables \ref{tab:main} or \ref{tab:special}, up to reordering and graph automorphisms when $p=2$. To do this, we will verify the bound $\Sigma_X(H)<2$ for all $H \in \mathcal{M}$ and apply Proposition \ref{p:fix}. By Theorem \ref{t:par}, this bound holds when $H \in \mathcal{P}$, so we may assume $H \in \mathcal{R}$. First assume $\mathcal{C}_1 = \mathcal{C}_2 = A_1$. Here $p \geqslant 3$ and by considering the closure relation on unipotent classes, it suffices to show that $\Sigma_X(H)<2$ when $\mathcal{C}_3 = \tilde{A}_2$ or $A_2\tilde{A}_1$. If $H \not\in \mathcal{L}$ then $H \cap \mathcal{C}_i$ is empty for $i=1,2$, so $\Sigma_X(H) = \a(G,H,y_3)$ and the result follows. Therefore, we may assume $H \in \mathcal{L}$ and one can check that the bounds in Proposition \ref{p:f4_dim3} are sufficient. Next suppose $\mathcal{C}_1 = A_1$ and $\mathcal{C}_2 \ne A_1$. Here it is sufficient to show that $\Sigma_X(H)<2$ for the triple $(A_1,\tilde{A}_1,A_1\tilde{A}_1)$ and once again we find that the bounds in Proposition \ref{p:f4_dim3} are good enough. Finally, if $\mathcal{C}_1 \ne A_1$ (and also $\mathcal{C}_1 \ne \tilde{A}_1$ if $p=2$), then \cite[Theorem 3.1]{BGG1} implies that $\a(G,H,y_i)<2/3$ for all $H \in \mathcal{M}$ and thus $\Sigma_X(H)<2$. \end{proof} Finally, let us assume $t=2$. First we handle the special cases in Table \ref{tab:special}. \begin{lem}\label{l:f4_31} The set $\Delta$ is empty if $t=2$ and $(\mathcal{C}_1,\mathcal{C}_2) = (\tilde{A}_1,\tilde{A}_2)$, $(\tilde{A}_1,A_2\tilde{A}_1)$ or $(\tilde{A}_1,B_2)$. \end{lem} \begin{proof} Write $\mathcal{C}_i = y_i^G$ and first assume $(\mathcal{C}_1,\mathcal{C}_2) = (\tilde{A}_1,\tilde{A}_2)$. 
We may embed $y_1,y_2 \in L = C_3$, where $M = N_G(L) = A_1C_3$ is a maximal closed subgroup of $G$. Here $y_1$ and $y_2$ have respective Jordan forms $(J_2^2,J_1^2)$ and $(J_3^2)$ on the natural module for $L$, so \cite[Theorem 7]{BGG2} implies that $G(x) = L$ for some $x \in X$. Let $\mathcal{D}_i = y_i^M$ and note that $\dim \mathcal{D}_1 = 10$ and $\dim \mathcal{D}_2 = 14$. Then \[ \dim(\mathcal{D}_1 \times \mathcal{D}_2)+\dim G - \dim X = 24 = \dim M \] and we now conclude by applying Proposition \ref{p:fibre}. To complete the proof, it suffices to show that $\Delta$ is empty when $(\mathcal{C}_1,\mathcal{C}_2) = (\tilde{A}_1,B_2)$ since the class $A_2\tilde{A}_1$ is contained in the closure of the class labelled $B_2$. Here we embed $y_1,y_2 \in L = B_4$, with respective Jordan forms $(J_2^4,J_1)$ and $(J_4^2,J_1)$ on the natural module for $L$ and the reader can check that the same argument applies, via \cite[Theorem 7]{BGG2} and Proposition \ref{p:fibre} (note that $L$ is maximal, so $M = N_G(L) = L$). \end{proof} \begin{rem}\label{r:f4_lie} Suppose $t=2$ and $(\mathcal{C}_1,\mathcal{C}_2) = (\tilde{A}_1,\tilde{A}_2)$. Let $W = \mathcal{L}(G)$ be the Lie algebra of $G$ and suppose the field $k$ has arbitrary characteristic $p \geqslant 0$. By inspecting \cite[Table 4]{Law1} we see that $\dim C_W(y_1) \geqslant 30$ and $\dim C_W(y_2) =22$, whence \[ \dim W = 52 < 4+30+22 \leqslant {\rm rk}(G) + \sum_{i=1}^2\dim C_W(y_i) - \dim Z, \] where ${\rm rk}(G)$ and $Z$ denote the rank of $G$ and the centre of $W$, respectively (note that $Z = 0$). Therefore, \cite[Corollary 3.20]{BGG2} implies that $\Delta$ is empty for all $p \geqslant 0$. On the other hand, if $(\mathcal{C}_1,\mathcal{C}_2) = (\tilde{A}_1,A_2\tilde{A}_1)$ and $p \ne 2$, then $\dim C_W(y_1) = 30$, $\dim C_W(y_2) = 18$ and thus \cite[Corollary 3.20]{BGG2} is inconclusive in this case. 
Similarly, one can check that this alternative approach via \cite[Corollary 3.20]{BGG2} is ineffective when $(\mathcal{C}_1,\mathcal{C}_2) = (\tilde{A}_1,B_2)$. \end{rem} \begin{thm}\label{t:f4_2} The conclusion to Theorem \ref{t:main} holds when $G = F_4$, $t=2$ and $p \geqslant 3$. \end{thm} \begin{proof} By combining Corollary \ref{c:fixV} and Lemma \ref{l:f4_31}, it remains to show that $\Delta$ is non-empty for every pair $(\mathcal{C}_1,\mathcal{C}_2)$ not appearing in Tables \ref{tab:main} and \ref{tab:special} (as usual, up to reordering and graph automorphisms). In view of Proposition \ref{p:fix} and Theorem \ref{t:par}, it suffices to verify the bound $\Sigma_X(H)<1$ for all $H \in \mathcal{R}$. Note that if $H \in \mathcal{R} \setminus \mathcal{L}$, then $H = G_2$ (with $p=7$) or $H = A_1$ and $p \geqslant 13$ (see Tables \ref{tab:max} and \ref{tab:long}). By \cite[Section 5.2]{Law2}, $H = G_2$ meets the nontrivial unipotent classes labelled $A_1\tilde{A}_1$, $\tilde{A}_2A_1$, $F_4(a_3)$ and $F_4(a_2)$. Similarly, the nontrivial unipotent elements in $H = A_1$ are in the $G$-class labelled $F_4$ (see \cite[Table 27]{Law2}). Suppose $\mathcal{C}_1 = A_1$. As before, if $H \not\in \mathcal{L}$ then $\a(G,H,y_1) = 0$ for $y_1 \in \mathcal{C}_1$ and the desired bound follows. So we only need to consider the subgroups in $\mathcal{L}$. By carefully reviewing the closure relation on unipotent classes, we may assume $\mathcal{C}_2$ is the class labelled $C_3$ and the desired bound $\Sigma_X(H)<1$ follows from Proposition \ref{p:f4_dim3}. Similarly, if $\mathcal{C}_1 = \tilde{A}_1$ then we may assume $\mathcal{C}_2 = \tilde{A}_2A_1$ and $H \in \mathcal{L}$, finding once again that the bounds in Proposition \ref{p:f4_dim3} are good enough. The same argument applies if $\mathcal{C}_1 \in \{ A_1\tilde{A}_1, A_2\}$, noting that in each case we may assume $\mathcal{C}_2 \in \{\tilde{A}_2, A_2\tilde{A}_1\}$. 
Finally, let us assume $\mathcal{C}_i \not\in \{A_1, \tilde{A}_1, A_1\tilde{A}_1, A_2\}$ for $i=1,2$. By considering closures, it suffices to show that $\Delta$ is non-empty when each $\mathcal{C}_i$ is contained in $\{\tilde{A}_2, A_2\tilde{A}_1\}$ and it is easy to check that the bounds in Proposition \ref{p:f4_dim3} are sufficient. \end{proof} This completes the proof of Theorem \ref{t:main} for $G = F_4$. \subsection{$G = E_6$}\label{ss:e6} In this section we assume $G = E_6$. We adopt the labelling of unipotent classes given in \cite[Table 22.1.3]{LS_book} and we define the subgroup collections $\mathcal{M}$, $\mathcal{P}$, $\mathcal{R}$ and $\mathcal{L}$ as in Section \ref{ss:sub}. We begin with the following result on fixed point spaces. \begin{prop}\label{p:e6_dim3} Let $H \in \mathcal{L}$ and let $y \in G$ be an element of order $p$ in one of the following conjugacy classes \[ A_1, \, A_1^2, \, A_1^3, \, A_2, \, A_2A_1, \, A_2^2A_1, \, A_4A_1. \] Then $\a(G,H,y) \leqslant \b$, where $\b$ is recorded in Table \ref{tab:beta_e6}. \end{prop} {\small \begin{table} \[ \begin{array}{l|ccccccc} & A_1 & A_1^2 & A_1^3 & A_2 & A_2A_1 & A_2^2A_1 & A_4A_1 \\ \hline A_1A_5 & 7/10 & 3/5 & 1/2 & 9/20 & 2/5 & 3/10 & 1/5 \\ F_4 & 10/13 & 8/13 & 7/13 & 7/13 & 0 & 4/13 & 0 \\ C_4 & 2/3 & 4/7 & 10/21 & 10/21 & 0 & 2/7 & 0 \\ A_2G_2 & 5/7 & 1/2 & 1/2 & 3/7 & 11/28 & 9/28 & 0 \\ A_2^3.S_3 & 2/3 & 5/9 & 1/2 & 1/3 & 1/3 & 1/3 & 0 \\ D_4T_2.S_3 & 3/4 & 7/12 & 25/48 & 1/2 & 0 & 1/3 & 0 \\ T.W & 17/24 & 23/36 & 19/36 & 1/2 & 4/9 & 1/3 & 2/9 \\ \end{array} \] \caption{The upper bound $\a(G,H,y) \leqslant \b$ in Proposition \ref{p:e6_dim3}} \label{tab:beta_e6} \end{table}} \begin{proof} If $y \in A_1$ is a long root element then the upper bound on $\a(G,H,y)$ in Table \ref{tab:beta_e6} follows immediately from \cite[Theorem 3.1]{BGG1}. Now assume $y \not\in A_1$. 
If $H \in \{A_1A_5, F_4, C_4, A_2G_2\}$ then Lawther \cite{Law2} has determined the $G$-class of each $H$-class of unipotent elements and it is an easy exercise to compute $\a(G,H,y)$ in each case. Next assume $H = A_2^3.S_3$. The $G$-class of each unipotent class in $H^0$ is determined in \cite[Section 4.9]{Law2} and this allows us to compute $\dim(y^G \cap H^0)$. Therefore, to complete the analysis of this case we may assume $p \in \{2,3\}$ and $y \in H \setminus H^0$ has order $p$. If $p=2$ then the proof of \cite[Lemma 4.12]{BTh0} shows that $C_{H^0}(y) = A_2B_1$ and $y$ is contained in the $G$-class labelled $A_1^3$. So for $y \in A_1^3$ we conclude that $\dim(y^G \cap H^0) = 12$ and $\dim(y^G \cap (H \setminus H^0)) \leqslant 13$, whence $\a(G,H,y) \leqslant 1/2$. Similarly, if $p=3$ and $y \in H \setminus H^0$ has order $3$ then $y$ is contained in the $G$-class $A_2^2$, which is not one of the classes we need to consider. Now suppose $H = D_4T_2.S_3$. Set $J = (H^0)' = D_4$ and note that $J.S_3 < F_4 < G$. We studied the embedding $J.S_3 < F_4$ in the proof of Proposition \ref{p:f4_dim3} and we note that the $G$-class of each unipotent class in $F_4$ is recorded in \cite[Table 22.1.4]{LS_book}. In this way, it is straightforward to determine the $G$-class of each unipotent class in $H^0$ and then compute $\dim(y^G \cap H^0)$. So we may assume $p \in \{2,3\}$ and $y \in H \setminus H^0$ has order $p$. First assume $p=2$. In the notation of \cite{AS}, we may view $y$ as a $b_1$ or $b_3$ type involution in $D_4.2 = {\rm O}_8(k)$ and we recall that the $b_1$-involutions are contained in the $F_4$-class $\tilde{A}_1$ (which in turn places $y$ in the $G$-class labelled $A_1^2$), while those of type $b_3$ are in the $A_1\tilde{A}_1$ class of $F_4$, which is contained in the $A_1^3$ class of $G$. As a consequence, we deduce that $\dim C_{\O}(y) = 36, 28$ if $y \in A_1, A_1^2$, respectively. 
And if $y \in A_1^3$ then $\dim(y^G \cap H^0) = 16$ and $\dim(y^G \cap (H \setminus H^0)) \leqslant 17$, which implies that $\dim C_{\O}(y) \leqslant 25$ and thus $\a(G,H,y) \leqslant 25/48$ as recorded in Table \ref{tab:beta_e6}. Now suppose $p=3$, in which case $y$ acts as a triality graph automorphism on $J = D_4$. By inspecting the proof of Proposition \ref{p:f4_dim3} we see that $y$ is in the $F_4$-class $\tilde{A}_2$ if $C_J(y) = G_2$ (in which case, $y$ is in the $G$-class labelled $A_2^2$), otherwise $y$ is in the $F_4$-class $\tilde{A}_2A_1$, which places $y$ in the $G$-class $A_2^2A_1$. So for $y \in A_2^2A_1$ we get $\dim(y^G \cap H^0) = 0$ and $\dim(y^G \cap (H \setminus H^0)) \leqslant 22$, which yields $\a(G,H,y) \leqslant 1/3$. Finally, if $H = T.W$ is the normaliser of a maximal torus, then the results in Table \ref{tab:beta_e6} (for $y \not\in A_1$) follow from the trivial bound $\dim(y^G \cap H) \leqslant \dim H = 6$. \end{proof} \begin{thm}\label{t:e6_4} The conclusion to Theorem \ref{t:main} holds when $G = E_6$ and $t=4$. \end{thm} \begin{proof} By Corollary \ref{c:fixV} we know that $\Delta$ is empty when $\mathcal{C}_i = A_1$ for all $i$. In the remaining cases, it suffices to show that $\Sigma_X(H)<3$ for all $H \in \mathcal{M}$. By \cite[Theorem 3.1]{BGG1} we have $\a(G,H,y) \leqslant 10/13$ if $y \in A_1$, otherwise $\a(G,H,y)<2/3$, whence $\Sigma_X(H) < 3\cdot 10/13+2/3$ and the result follows. \end{proof} \begin{thm}\label{t:e6_3} The conclusion to Theorem \ref{t:main} holds when $G = E_6$ and $t=3$. \end{thm} \begin{proof} In view of Corollary \ref{c:fixV}, we see that $\Delta$ is empty for each triple in Table \ref{tab:main}. Therefore, in the remaining cases it suffices to show that $\Sigma_X(H)<2$ for all $H \in \mathcal{M}$. By Theorem \ref{t:par}, we only need to check this for $H \in \mathcal{R}$. Set $\mathcal{C}_i = y_i^G$. 
If $\mathcal{C}_i \ne A_1$ for all $i$, then \cite[Theorem 3.1]{BGG1} implies that $\a(G,H,y_i)<2/3$ and thus $\Sigma_X(H) <2$. Therefore, for the remainder we may assume $\mathcal{C}_1 = A_1$ and $H \in \mathcal{L}$. First assume $\mathcal{C}_2 = A_1$. By inspecting \cite{Spal}, we see that the class labelled $A_2A_1$ is contained in the closure of $\mathcal{C}_3$, so in view of Proposition \ref{p:clos} we may assume $\mathcal{C}_3 = A_2A_1$. One can now check that the bounds in Proposition \ref{p:e6_dim3} imply that $\Sigma_X(H)<2$. Similarly, if $\mathcal{C}_2 = A_1^2$ then we may assume $\mathcal{C}_3 = A_1^3$ and once again the bounds in Proposition \ref{p:e6_dim3} are good enough. Finally, if $\mathcal{C}_2 \ne A_1,A_1^2$ then by considering closures we may assume $\mathcal{C}_2 = \mathcal{C}_3 = A_1^3$. As before, the result now follows by applying the bounds presented in Proposition \ref{p:e6_dim3}. \end{proof} Finally, let us assume $t=2$. First we handle the special case from Table \ref{tab:special}. \begin{lem}\label{l:e6_3} The set $\Delta$ is empty if $t=2$ and $(\mathcal{C}_1,\mathcal{C}_2) = (A_1^2,A_2^2)$. \end{lem} \begin{proof} Write $\mathcal{C}_i = y_i^G$ and observe that we may embed $y_1$ and $y_2$ in a subgroup $L = A_5$ with $M = N_G(L) = A_1A_5$ so that the respective Jordan forms on the natural module for $L$ are $(J_2^2,J_1^2)$ and $(J_3^2)$. Then by applying the main theorem of \cite{Ger}, we may assume $G(y) = L$ for $y = (y_1,y_2) \in X$. Setting $\mathcal{D}_i = y_i^M$ and $Y = \mathcal{D}_1 \times \mathcal{D}_2$ we compute $\dim Y = 40$ and thus \[ \dim Y + \dim G - \dim X = 38 = \dim M. \] We now conclude via Proposition \ref{p:fibre}. \end{proof} \begin{thm}\label{t:e6_2} The conclusion to Theorem \ref{t:main} holds when $G = E_6$, $t=2$ and $p \geqslant 3$. \end{thm} \begin{proof} Write $\mathcal{C}_i = y_i^G$. 
By combining Corollary \ref{c:fixV} and Lemma \ref{l:e6_3}, we have already shown that $\Delta$ is empty for all of the pairs $(\mathcal{C}_1,\mathcal{C}_2)$ recorded in Tables \ref{tab:main} and \ref{tab:special}. So in view of Theorem \ref{t:par}, it suffices to verify the bound $\Sigma_X(H)<1$ in each of the remaining cases, for every subgroup $H \in \mathcal{R}$. First assume $\mathcal{C}_1 = A_1$. If $H \not\in \mathcal{L}$ then $\Sigma_X(H) = \a(G,H,y_2)$ and the result follows, so we may as well assume $H \in \mathcal{L}$. In addition, by considering the closure relation on unipotent classes, we only need to check that $\Sigma_X(H)<1$ when $\mathcal{C}_2$ is the class labelled $A_4A_1$. The result now follows by applying the relevant upper bounds in Proposition \ref{p:e6_dim3}. Next suppose $\mathcal{C}_1 = A_1^2$. Here we may assume $\mathcal{C}_2 = A_2^2A_1$ and by inspecting \cite{Law2} we see that $H$ meets $\mathcal{C}_1$ only if $H \in \mathcal{L}$. So we are free to assume that $H \in \mathcal{L}$ and we can now verify the bound $\Sigma_X(H)<1$ from the bounds in Proposition \ref{p:e6_dim3}. Similarly, if $\mathcal{C}_1 = A_1^3$ or $A_2$ then we may assume $\mathcal{C}_2 = A_2A_1$ and $H \in \mathcal{L}$, noting once again that the desired result follows via Proposition \ref{p:e6_dim3} (note that if $H \not\in \mathcal{L}$, then either $\mathcal{C}_1$ or $\mathcal{C}_2$ does not meet $H$ and the bound $\Sigma_X(H)<1$ clearly holds). Finally, let us assume $\mathcal{C}_i \not\in \{A_1,A_1^2,A_1^3,A_2\}$ for $i=1,2$. Here we may assume $\mathcal{C}_1 = \mathcal{C}_2 = A_2A_1$ and $H \in \mathcal{L}$, in which case the result follows in the usual fashion via Proposition \ref{p:e6_dim3}. \end{proof} This completes the proof of Theorem \ref{t:main} for $G = E_6$. \subsection{$G = E_7$}\label{ss:e7} In this section we prove Theorem \ref{t:main} for $G = E_7$. 
The unipotent classes in $G$ and their respective dimensions are recorded in \cite[Table 22.1.2]{LS_book} and we adopt the labelling of classes therein. We remark that there are several differences between our choice of labels and those used by other authors. For example, the classes $(A_1^3)^{(1)}$, $(A_3A_1)^{(1)}$ and $(A_5)^{(1)}$ are respectively labelled $(3A_1)''$, $(A_3+A_1)''$ and $(A_5)''$ in \cite{Law2,Law1,Spal}. We define the subgroup collections $\mathcal{M}$, $\mathcal{P}$, $\mathcal{R}$ and $\mathcal{L}$ as in Section \ref{ss:sub}. We begin with the following result on fixed point spaces. \begin{prop}\label{p:e7_dim3} Let $H \in \mathcal{L}$ and let $y \in G$ be an element of order $p$ in one of the following conjugacy classes \[ A_1, \, A_1^2, \, (A_1^3)^{(1)}, \, (A_1^3)^{(2)}, \, A_1^4, \, A_2, \, A_2A_1^2, \, A_2^2A_1, \, A_4A_2. \] Then $\a(G,H,y) \leqslant \b$, where $\b$ is recorded in Table \ref{tab:beta_e7}. \end{prop} {\small \begin{table} \[ \begin{array}{l|ccccccccc} & A_1 & A_1^2 & (A_1^3)^{(1)} & (A_1^3)^{(2)} & A_2 & A_1^4 & A_2A_1^2 & A_2^2A_1 & A_4A_2 \\ \hline A_1D_6 & 3/4 & 5/8 & 5/8 & 1/2 & 1/2 & (15+\delta_{2,p})/32 & 3/8 & 5/16 & 0 \\ A_1F_4 & 10/13 & 8/13 & 7/13 & 7/13 & 7/13 & 19/39 & 5/13 & 1/3 & 0 \\ G_2C_3 & 5/7 & 29/49 & 4/7 & 25/49 & 3/7 & 24/49 & 18/49 & 16/49 & 0 \\ A_7.2 & 5/7 & 3/5 & 43/70 & 19/35 & 18/35 & 2/5 & 13/35 & 11/35 &1/5 \\ A_2A_5.2 & 11/15 & 3/5 & 3/5 & 23/45 & 7/15 & (14+\delta_{2,p})/30 & 17/45 & 1/3 & 1/5 \\ T_1E_6.2 & 7/9 & 17/27 & 1/2 & 5/9 & 5/9 & \delta_{2,p}/2 & 11/27 & 1/3 & 0 \\ A_1^3D_4.S_3 & 3/4 & 7/12 & 31/48 & 25/48 & 1/2 & (11+\delta_{2,p})/24 & 3/8 & 1/3 & 0 \\ A_1^7.{\rm GL}_3(2) & 5/7 & 37/56 & 9/14 & 31/56 & 15/28 & (27+\delta_{2,p})/56 & 11/28 & 9/28 & 5/28 \\ T.W & 31/42 & 9/14 & 79/126 & 23/42 & 67/126 & \delta_{2,p}/2 & 17/42 & 43/126 & 3/14 \\ \end{array} \] \caption{The upper bound $\a(G,H,y) \leqslant \b$ in Proposition \ref{p:e7_dim3}} \label{tab:beta_e7} \end{table}} 
\begin{proof} If $y \in A_1$ is a long root element then the upper bound on $\a(G,H,y)$ in Table \ref{tab:beta_e7} follows from \cite[Theorem 3.1]{BGG1}, so for the remainder we may assume $y \not\in A_1$. Now if $H$ is one of the subgroups $A_1D_6, A_1F_4$ or $G_2C_3$, then the $G$-class of each $H$-class of unipotent elements is determined in \cite{Law2} and it is straightforward to verify the result via Proposition \ref{p:dim}. The subgroups $A_7.2$ and $A_2A_5.2$ can be handled in a similar fashion, with some additional work when $p=2$. Indeed, by inspecting \cite[Sections 4.11, 4.12]{Law2} we can compute $\dim(y^G \cap H^0)$, so the analysis is reduced to the situation where $p=2$ and $y \in H \setminus H^0$ is an involution. First assume $H = A_7.2$. Here $y$ induces a graph automorphism on $H^0 = A_7$ and there are two $H^0$-classes of such elements, represented by $y_1$ and $y_2$, where $C_{H^0}(y_1) = C_4$ and $C_{H^0}(y_2) = C_{C_4}(u)$ with $u \in C_4$ a long root element (see Proposition \ref{p:graph}). As explained in the proof of \cite[Lemma 3.18]{BGS}, we find that $y_1$ is contained in the $G$-class $(A_1^3)^{(1)}$, while $y_2$ is in the class labelled $A_1^4$. Hence for $y \in (A_1^3)^{(1)}$ we deduce that $\dim(y^G \cap H^0) = 0$ and $\dim (y^G \cap (H \setminus H^0)) \leqslant 27$, which implies that $\a(G,H,y) \leqslant 43/70$. Similarly, for $y \in A_1^4$ we get $\a(G,H,y) \leqslant 2/5$ if $p=2$, otherwise $\a(G,H,y) = 0$. Now assume $H = A_2A_5.2$, $p=2$ and $y \in H \setminus H^0$ is an involution. Here $y$ induces a graph automorphism on both simple factors of $H^0$ and we deduce that there are two $H^0$-classes to consider, represented by $y_1$ and $y_2$, where $C_{H^0}(y_1) = B_1C_3$ and $C_{H^0}(y_2) = B_1C_{C_3}(u)$ with $u \in C_3$ a long root element. As explained in the proof of \cite[Lemma 3.12]{BGG1}, the element $y_2$ is contained in the $G$-class $A_1^4$ and we claim that $y_1$ is in the class $(A_1^3)^{(2)}$. 
To see this, let $V$ be the $56$-dimensional Weyl module $W_G(\l_7)$ and observe that \[ V\downarrow H^0 = (U_1 \otimes U_2) \oplus (U_1^* \otimes U_2^*) \oplus (0 \otimes \Lambda^3(U_2)), \] where $U_1$ and $U_2$ denote the natural modules for $A_2$ and $A_5$, respectively, and $0$ is the trivial module for $A_2$ (see \cite[Table 4]{Thomas}, for example). Now $y_1$ interchanges the first two summands and it has Jordan form $(J_2^6,J_1^8)$ on the final summand. Therefore, $y_1$ has Jordan form $(J_2^{24},J_1^8)$ on $V$ and by inspecting \cite[Table 7]{Law1} we conclude that $y_1$ is in the class $(A_1^3)^{(2)}$ as claimed. As a consequence, we deduce that $\dim(y^G \cap H) = \dim(y^G \cap H^0) = 20$ if $y \in (A_1^3)^{(2)}$ (for all $p$) and thus $\a(G,H,y) = 23/45$. Similarly, we calculate that $y_2$ has Jordan form $(J_2^{28})$ on $V$, which places $y_2$ in the class labelled $(A_1^3)^{(1)}$ or $A_1^4$. In fact, by arguing as in the proof of \cite[Lemma 3.12]{BGG1}, we see that $y_2$ is in $A_1^4$. Therefore, if $y \in A_1^4$ then either $p \geqslant 3$ and $\a(G,H,y) = 7/15$, or $p=2$ and $\a(G,H,y) \leqslant 1/2$. Next suppose $H = T_1E_6.2$. Clearly, every unipotent element in the connected component $H^0$ is contained in the $E_6$ factor and the corresponding classes in $E_6$ and $G$ have the same label (note that the $E_6$-class labelled $A_1^3$ meets a Levi subgroup $A_6T_1$ of $G$, so it is contained in the $G$-class $(A_1^3)^{(2)}$). This allows us to compute $\dim(y^G \cap H^0)$ and so to complete the analysis of this case we may assume $p=2$ and $y \in H \setminus H^0$ is an involution. Here $y$ induces a graph automorphism on the $E_6$ factor and it inverts the $1$-dimensional central torus $T_1$. By Proposition \ref{p:graph}, there are two $H^0$-classes of involutions of this form, represented by $y_1$ and $y_2$, where $C_{H^0}(y_1) = F_4$ and $C_{H^0}(y_2) = C_{F_4}(u)$, with $u \in F_4$ a long root element. 
As explained in the proof of \cite[Lemma 4.1]{LLS1}, we calculate that each $y_i$ has Jordan form $(J_2^{28})$ on $V = W_G(\l_7)$ and by inspecting \cite[Table 7]{Law1} we deduce that each $y_i$ is in one of the $G$-classes labelled $(A_1^3)^{(1)}$ or $A_1^4$. In fact, the proof of \cite[Lemma 4.1]{LLS1} shows that $y_2$ is in $A_1^4$, so for $y \in (A_1^3)^{(1)}$ we get $\dim(y^G \cap H^0) = 0$ and $\dim(y^G \cap (H \setminus H^0)) \leqslant 27$, which implies that $\a(G,H,y) \leqslant 1/2$. Now let us turn to the case $H = A_1^3D_4.S_3$. Write $H^0 = H_1H_2$, where $H_1 = A_1$ and $H_2 = D_2D_4<D_6$. In particular, $H^0$ is contained in a maximal closed subgroup $A_1D_6$ and so we can appeal to \cite[Section 4.10]{Law2} in order to compute $\dim(y^G \cap H^0)$. For example, suppose $p=2$ and let us adopt the notation from \cite{AS} for unipotent involutions in orthogonal groups. Let $z_1,z_2 \in D_2$ be involutions of type $a_2$ and $c_2$, respectively, so $\dim z_1^{D_2} = 2$ and $\dim z_2^{D_2} = 4$. Then it is straightforward to write down a set of representatives of the classes of involutions in $D_2D_4$ and we can easily identify the corresponding class in $D_6$, recalling that an involution of the form $uv \in D_2D_4$ is of type $a$ in $D_6$ if and only if $u$ and $v$ are both $a$-type involutions (or if one of them is type $a$ and the other is trivial). We can then use \cite[Section 4.10]{Law2} to identify the $G$-class of each involution $y \in H^0$ and this allows us to compute the dimension of $y^G \cap H^0$ as follows: \[ \begin{array}{l|ccccc} \mbox{$G$-class of $y$} & A_1 & A_1^2 & (A_1^3)^{(1)} & (A_1^3)^{(2)} & A_1^4 \\ \hline \dim(y^G \cap H^0) & 10 & 12 & 14 & 16 & 22 \end{array} \] Now assume $p \in \{2,3\}$ and $y \in H \setminus H^0$ has order $p$. Suppose $p=3$. Here $y$ induces a triality graph automorphism on the $D_4$ factor of $H^0$ and it cyclically permutes the three $A_1$ factors. 
Therefore, $C_{H^0}(y) = A_1G_2$ or $A_1C_{G_2}(u)$, where $u \in G_2$ is a long root element, and thus $\dim(y^G \cap (H \setminus H^0)) \leqslant 26$. Now by considering the restriction of $V = W_G(\l_7)$ to $H^0$, it is straightforward to show that $y$ has Jordan form $(J_3^{18},J_1^2)$ on $V$ and therefore $y$ is in one of the classes labelled $A_2^2$ or $A_2^2A_1$. In particular, if $y$ is in the class $A_2^2A_1$, then $\dim(y^G \cap H^0) = 0$ and thus $\dim(y^G \cap H) \leqslant 26$, which yields $\a(G,H,y) \leqslant 1/3$. Now assume $p=2$. Here $y$ induces a graph automorphism on $D_4$ (of type $b_1$ or $b_3$ in the notation of \cite{AS}) and it interchanges two of the $A_1$ factors. Therefore, we have at most four $H^0$-classes of involutions to consider, represented by the elements \[ y_1 = (b_1,1), \, y_2 = (b_3,1), \, y_3 = (b_1,z), \, y_4 = (b_3,z), \] where the first component indicates the type of graph automorphism of $D_4$ induced by $y_i$ and the second indicates the action on the fixed $A_1$ factor, which is either trivial or an involution $z$. Now \begin{align*} V \downarrow H^0 = & \, (U_1 \otimes U_2 \otimes U_3 \otimes 0) \oplus (U_1 \otimes 0 \otimes 0 \otimes W_{D_4}(\omega_1)) \oplus (0 \otimes U_2 \otimes 0 \otimes W_{D_4}(\omega_3)) \\ & \, \oplus (0 \otimes 0 \otimes U_3 \otimes W_{D_4}(\omega_4)), \end{align*} where $U_i$ is the natural module for the $i$-th $A_1$ factor, $0$ is the trivial module and the $\omega_i$ are fundamental dominant weights for $D_4$ (see \cite[Table 4]{Thomas}). Using this decomposition, we calculate that $y_1$ and $y_2$ have respective Jordan forms $(J_2^{20},J_1^{16})$ and $(J_2^{24},J_1^8)$ on $V$, while $y_3$ and $y_4$ have Jordan form $(J_2^{28})$. By inspecting \cite[Table 7]{Law1}, we deduce that $y_1$ and $y_2$ are respectively contained in the $G$-classes $A_1^2$ and $(A_1^3)^{(2)}$, while $y_3$ and $y_4$ are in the classes labelled $(A_1^3)^{(1)}$ or $A_1^4$. 
Therefore, if $y \in A_1^2$ then $\dim(y^G \cap H) = 12$ (for all $p$) and thus $\a(G,H,y) = 7/12$. Similarly, if $y \in (A_1^3)^{(2)}$ then $\dim(y^G \cap H) \leqslant 18$ and $\a(G,H,y) \leqslant 25/48$, while for $y \in (A_1^3)^{(1)}$ we deduce that $\dim (y^G \cap H) \leqslant 20$, which yields $\a(G,H,y) \leqslant 31/48$. Similarly, if $y \in A_1^4$ then $\dim(y^G \cap H) = \dim(y^G \cap H^0) = 18+4\delta_{2,p}$ and thus $\a(G,H,y) = (11+\delta_{2,p})/24$. Next assume $H = A_1^7.{\rm GL}_3(2)$. Here $\dim(y^G \cap H^0) \leqslant 14$ for all $y \in H^0$ and it is straightforward to check that $\dim(y^G \cap (H\setminus H^0)) \leqslant 14$ for the relevant unipotent classes we are interested in. Note that if $p=7$ and $y \in H \setminus H^0$, then $y$ acts as a $7$-cycle on the $A_1$ factors of $H^0$ and by considering the restriction of $V = W_G(\l_7)$ to $H^0$ we deduce that $y$ has Jordan form $(J_7^8)$ on $V$, which places $y$ in the $G$-class labelled $A_6$ (see \cite[Table 7]{Law1}) and this is not one of the classes we are interested in. Therefore, $\dim(y^G \cap H) \leqslant 14$ and we immediately obtain the corresponding upper bound on $\a(G,H,y)$ in Table \ref{tab:beta_e7} unless $p \geqslant 3$ and $y$ is contained in the class labelled $A_1^4$. So to complete the argument for $H = A_1^7.{\rm GL}_3(2)$, let us assume $y \in A_1^4$ and $p \geqslant 3$. First we claim that $y^G \cap H = y^G \cap H^0$. To see this, first observe that if $p=3$ and $x \in H \setminus H^0$ has order $3$, then $x$ induces a permutation of the $A_1$ factors of $H^0$ with cycle-shape $(3^2,1)$ (for example, this follows by considering the summands arising in the decomposition of $W \downarrow H^0$, where $W = W_G(\l_1)$ is the adjoint module for $G$; see \cite[Table 4]{Thomas}). This implies that $x$ has at least $16$ Jordan blocks of size $3$ on $V = W_G(\l_7)$ and thus $x$ is not in the class $A_1^4$. 
Similarly, we noted above that if $p=7$ and $x \in H \setminus H^0$ has order $7$, then $x$ is contained in the $G$-class $A_6$. So in order to establish the bound $\a(G,H,y) \leqslant 27/56$, we need to show that $\dim(y^G \cap H^0) \leqslant 12$. To do this, write $H^0 = H_1H_2$, where $H_1 = A_1$ and $H_2 = D_2^3<D_6$, so we have $H^0 < A_1D_6 < G$ and we can determine the $G$-class of each unipotent $H^0$-class by appealing to \cite[Section 4.12]{Law2}. In this way, it is straightforward to check that if $x = x_1 \cdots x_7 \in H^0$ has order $p$ and each $x_i$ is nontrivial, then $x$ is contained in the $G$-class labelled $A_2A_1^3$. In particular, if $y \in A_1^4$ then $\dim(y^G \cap H^0) \leqslant 12$ as required. Finally, let us assume $H = T.W$. Here the trivial bound $\dim(y^G \cap H) \leqslant 7$ is sufficient unless $y \in A_1^4$ and $p \geqslant 3$. In the latter case, we claim that $\a(G,H,y) = 0$. To see this, we may assume $p \in \{3,5,7\}$ and $x \in H \setminus H^0$ has order $p$ (recall that $W = 2 \times {\rm Sp}_6(2)$ and so the prime divisors of $|W|$ are $2,3,5$ and $7$). Here $x$ induces a permutation of order $p$ on the set of $1$-dimensional root spaces in the Lie algebra $V = W_G(\l_1)$ and we deduce that $x$ admits a Jordan block of size $p$ in its action on $V$. So by inspecting \cite[Table 8]{Law1}, we may assume $p=3$. But here we find that $x$ has at least $32$ Jordan blocks of size $3$ on $V$ and once again this is incompatible with the Jordan form of elements in the $A_1^4$ class. This justifies the claim and the result follows. \end{proof} In order to prove Theorem \ref{t:main} for $G = E_7$, we may assume $t \in \{2,3,4\}$ since $\Delta$ is always non-empty if $t \geqslant 5$ by \cite[Theorem 7]{BGG1}. \begin{thm}\label{t:e7_4} The conclusion to Theorem \ref{t:main} holds when $G = E_7$ and $t=4$. 
\end{thm} \begin{proof} By Corollary \ref{c:fixV} we know that $\Delta$ is empty when $\mathcal{C}_i = A_1$ for all $i$, so it remains to show that $\Delta$ is non-empty in all other cases. As before, it suffices to verify the bound $\Sigma_X(H)<3$ for all $H \in \mathcal{M}$ and by appealing to \cite[Theorem 3.1]{BGG1} we deduce that $\a(G,H,y) \leqslant 7/9$ if $y \in A_1$, otherwise $\a(G,H,y)<2/3$. Therefore, $\Sigma_X(H) < 3\cdot 7/9+2/3 = 3$ and the result follows. \end{proof} Now assume $t \in \{2,3\}$. First we handle the special cases in Table \ref{tab:special}. \begin{prop}\label{p:e7_00} The set $\Delta$ is empty if $X$ is one of the cases in Table \ref{tab:special}. \end{prop} \begin{proof} Write $\mathcal{C}_i = y_i^G$ and first assume $t=3$. Since the $G$-class $A_1^2$ is contained in the closure of $(A_1^3)^{(1)}$ (see \cite[Chapter 4]{Spal}), it suffices to show that $\Delta$ is empty when $(\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3)$ is the triple $(A_1,(A_1^3)^{(1)},(A_1^3)^{(1)})$. To handle this case, first observe that we may embed each $y_i$ in a subgroup $L = D_6$ with $M = N_G(L) = A_1D_6$. More precisely, if $W$ denotes the natural module for $L$, then we may assume $y_1$ has Jordan form $(J_2^2,J_1^8)$ on $W$, while $y_2$ and $y_3$ both have Jordan form $(J_2^6)$ (see \cite[Section 4.10]{Law2} and note that if $p=2$ then $y_1$ is an involution of type $a_2$, while $y_2$ and $y_3$ are both of type $a_6$, in the notation of \cite{AS}). In addition, since $\sum_i \dim C_W(y_i) = 22 < 2\dim W$, \cite[Theorem 7]{BGG2} implies that we may assume the $y_i$ topologically generate $L$. Set $\mathcal{D}_i = y_i^M$ and note that $\dim \mathcal{C}_1 = 34$, $\dim \mathcal{C}_i = 54$, $\dim \mathcal{D}_1 = 18$ and $\dim \mathcal{D}_i = 30$ for $i=2,3$. 
In particular, we compute $\dim X = 142$ and $\dim Y = 78$, where $Y = \mathcal{D}_1 \times \mathcal{D}_2 \times \mathcal{D}_3$, whence \[ \dim G + \dim Y - \dim X = 69 = \dim M \] and thus Proposition \ref{p:fibre} implies that $\Delta$ is empty. For the remainder, let us assume $t=2$. First assume $(\mathcal{C}_1,\mathcal{C}_2) = (A_1,(A_5)^{(1)})$, so $\dim \mathcal{C}_1 = 34$ and $\dim \mathcal{C}_2 = 102$. Here we may embed $y_1$ and $y_2$ in $L = D_6$, where $M = N_G(L) = A_1D_6$ and the $y_i$ have respective Jordan forms $(J_2^2,J_1^8)$ and $(J_6^2)$ on the natural module for $L$. In view of \cite[Theorem 7]{BGG2}, we may assume $y_1$ and $y_2$ topologically generate $L$ and we note that $\dim y_i^M = 18,54$ for $i=1,2$, respectively. The result now follows by applying Proposition \ref{p:fibre}. The case $(\mathcal{C}_1,\mathcal{C}_2) = (A_1^2, (A_3A_1)^{(1)})$ is very similar. Here we embed the $y_i$ in $L = D_6$ with respective Jordan forms $(J_2^4,J_1^4)$ and $(J_4^2,J_2^2)$ and once again the result follows by applying Proposition \ref{p:fibre} and \cite[Theorem 7]{BGG2}. Since the $G$-class $A_2^2$ is contained in the closure of the class $(A_3A_1)^{(1)}$, we deduce that $\Delta$ is also empty when $(\mathcal{C}_1,\mathcal{C}_2) = (A_1^2,A_2^2)$. Finally, let us assume $\mathcal{C}_1 = (A_1^3)^{(1)}$. By considering closures, it suffices to show that $\Delta$ is empty when $\mathcal{C}_2 = (A_3A_1)^{(1)}$ or $A_2A_1^3$. If $\mathcal{C}_2 = (A_3A_1)^{(1)}$ then we can proceed as above, embedding the $y_i$ in $L = D_6$ with respective Jordan forms $(J_2^6)$ and $(J_4^2,J_2^2)$ on the natural module. We leave the reader to check the details. Now suppose $\mathcal{C}_2 = A_2A_1^3$ and note that $\dim \mathcal{C}_1 = 54$ and $\dim \mathcal{C}_2 = 84$. To handle this case we need to modify the standard approach via Proposition \ref{p:fibre} because we cannot embed $y_2$ in a $D_6$ subgroup. 
First observe that we may embed $y_1$ and $y_2$ in $M = N_G(L) = A_1D_6$, where $y_1 \in L = D_6$ has Jordan form $(J_2^6)$ on the natural module for $L$ and $y_2 = u_2v_2 \in A_1D_6$, where $u_2$ is an element of order $p$ in the $A_1$ factor and $v_2 \in L$ has Jordan form $(J_3^3,J_1^3)$. By applying \cite[Theorem 7]{BGG2}, we see that $L$ is topologically generated by $y_1$ and $v_2$, so if we set $y = (y_1,y_2) \in X$ then \[ G(y) \leqslant \overline{\langle y_1,u_2,v_2 \rangle} = \langle L, u_2 \rangle < M. \] Since $\langle L, u_2 \rangle^0 = L$ and $G(y)$ projects onto $L$, it follows that $G(y)^0 = L$. Similarly, if we set $\mathcal{D}_i = y_i^M$ and $Y = \mathcal{D}_1 \times \mathcal{D}_2$ then $G(z)^0 \leqslant L$ for all $z \in Y$. Finally, we compute $\dim \mathcal{D}_1 = 30$ and $\dim \mathcal{D}_2 = 44$, so $\dim G + \dim Y - \dim X = 69 = \dim M$ and we now conclude by applying Proposition \ref{p:fibre}. \end{proof} \begin{rem}\label{r:e7_lie} It is worth noting that several cases in Table \ref{tab:special} can be handled by arguing as in Remark \ref{r:f4_lie}, which relies on \cite[Corollary 3.20]{BGG2}. For example, suppose $t=2$ and $(\mathcal{C}_1,\mathcal{C}_2) = ((A_1^3)^{(1)}, A_2A_1^3)$, so $p \geqslant 3$. Let $W = \mathcal{L}(G)$ be the Lie algebra of $G$. By inspecting \cite[Table 8]{Law1} we see that $\dim C_W(y_i) = 79$, $49$ for $i=1,2$, respectively, and thus \[ \dim W = 133 < 7+79+49 = {\rm rk}(G) + \sum_{i=1}^2\dim C_W(y_i) - \dim Z, \] where $Z$ is the centre of $W$ (since $p \geqslant 3$, we have $Z=0$). Therefore, $\Delta$ is empty by \cite[Corollary 3.20]{BGG2}. However, if $t=2$, $(\mathcal{C}_1,\mathcal{C}_2) = ((A_1^3)^{(1)}, (A_3A_1)^{(1)})$ and $p \geqslant 5$, then \[ \dim W = {\rm rk}(G) + \sum_{i=1}^2\dim C_W(y_i) - \dim Z \] and so the previous approach is inconclusive in this case. \end{rem} \begin{thm}\label{t:e7_3} The conclusion to Theorem \ref{t:main} holds when $G = E_7$ and $t=3$. 
\end{thm} \begin{proof} Set $\mathcal{C}_i = y_i^G$. By applying Corollary \ref{c:fixV} and Proposition \ref{p:e7_00}, we see that $\Delta$ is empty for the cases in Tables \ref{tab:main} and \ref{tab:special}. So to complete the proof, it suffices to show that $\Sigma_X(H)<2$ for all $H \in \mathcal{M}$ and for each of the remaining possibilities for $X$. By Theorem \ref{t:par}, this holds if $H \in \mathcal{P}$ so we only need to consider the subgroups in $\mathcal{R}$. If $\mathcal{C}_i \ne A_1$ for all $i$, then \cite[Theorem 3.1]{BGG1} implies that $\Sigma_X(H) <2$ since we have $\a(G,H,y_i)<2/3$ for each $i$. Therefore, for the remainder we may assume $\mathcal{C}_1 = A_1$ and $H \in \mathcal{L}$ (indeed, if $H \not\in \mathcal{L}$ then $\a(G,H,y_1) = 0$ and thus $\Sigma_X(H) < 4/3$). First assume $\mathcal{C}_2 = A_1$. By inspecting \cite[Chapter 4]{Spal}, we see that the class labelled $A_2A_1^2$ is contained in the closure of $\mathcal{C}_3$. Therefore, in view of Proposition \ref{p:clos}, it suffices to show that $\Sigma_X(H)<2$ when $\mathcal{C}_3 = A_2A_1^2$ and it is easy to check that the bounds in Proposition \ref{p:e7_dim3} are sufficient. Similarly, if $\mathcal{C}_2 = A_1^2$ then we may assume $\mathcal{C}_3 = (A_1^3)^{(2)}$ and once again the bounds in Proposition \ref{p:e7_dim3} are good enough. Finally, if $\mathcal{C}_2 \ne A_1,A_1^2$ then by considering closures we may assume $\mathcal{C}_2 = (A_1^3)^{(1)}$ and $\mathcal{C}_3 = (A_1^3)^{(2)}$. As before, the result follows via Proposition \ref{p:e7_dim3}. \end{proof} \begin{thm}\label{t:e7_2} The conclusion to Theorem \ref{t:main} holds when $G = E_7$, $t=2$ and $p \geqslant 3$. \end{thm} \begin{proof} Write $\mathcal{C}_i = y_i^G$. If $(\mathcal{C}_1,\mathcal{C}_2)$ is one of the cases in Tables \ref{tab:main} or \ref{tab:special}, then $\Delta$ is empty by Corollary \ref{c:fixV} and Proposition \ref{p:e7_00}. 
So by appealing to Proposition \ref{p:fix} and Theorem \ref{t:par}, we just need to verify the bound $\Sigma_X(H) <1$ in each of the remaining cases, where $H$ is an arbitrary subgroup in the collection $\mathcal{R}$. First assume $\mathcal{C}_1 = A_1$. If $H \not\in \mathcal{L}$ then $\Sigma_X(H) < 2/3$ by \cite[Theorem 3.1]{BGG1}, so we may assume $H \in \mathcal{L}$. By considering the closure relation on unipotent classes, we may assume $\mathcal{C}_2$ is the class labelled $A_4A_2$ and the result now follows by applying the relevant upper bounds in Proposition \ref{p:e7_dim3}. Next assume $\mathcal{C}_1 = A_1^2$ or $(A_1^3)^{(1)}$. Here it is sufficient to show that $\Delta$ is non-empty when $\mathcal{C}_2 = A_2^2A_1$. If $H \in \mathcal{L}$ then the bounds in Proposition \ref{p:e7_dim3} yield $\Sigma_X(H)<1$, so we may assume $H \not\in \mathcal{L}$. If $\mathcal{C}_i \cap H$ is empty for $i=1$ or $2$ then \cite[Theorem 3.1]{BGG1} gives $\Sigma_X(H)<2/3$ and so we can assume that both $\mathcal{C}_1$ and $\mathcal{C}_2$ have representatives in $H$. By inspecting \cite{Law2}, we can rule out $H = A_2.2$, $A_1^2$ and $A_1$, which leaves the two possibilities $H = A_1G_2$ and $(2^2 \times D_4).S_3$. If $H = A_1G_2$ then we use \cite[Section 5.9]{Law2} to show that $\Sigma_X(H) \leqslant 27/29$. Now assume $H = (2^2 \times D_4).S_3$. Here $H^0 = D_4 < A_7 < G$, where $D_4 < A_7$ is the standard embedding corresponding to the natural module for $D_4$. By appealing to \cite[Section 4.11]{Law2}, it is easy to compute $\dim(y^G \cap H^0)$ for each unipotent element $y \in G$. For the relevant classes, we get \[ \dim(y^G \cap H^0) = \left\{\begin{array}{ll} 10 & \mbox{if $y \in A_1^2$} \\ 0 & \mbox{if $y \in (A_1^3)^{(1)}$ or $A_2^2A_1$.} \end{array}\right. 
\] If $p=3$ and $y \in H \setminus H^0$ has order $3$, then $y$ acts as a triality graph automorphism on $H^0$ and by arguing as in the proof of \cite[Lemma 3.18]{BGG1} we deduce that $y$ has at least $35$ Jordan blocks of size $3$ on the adjoint module $W_G(\l_1)$. By inspecting \cite[Table 8]{Law1}, we conclude that $\a(G,H,y) = 3/5$ if $y \in A_1^2$ and $\a(G,H,y) = 0$ for $y \in (A_1^3)^{(1)}$. For $y \in A_2^2A_1$ we observe that $\dim(y^G \cap H) \leqslant 20$, whence $\a(G,H,y) \leqslant 1/3$ and $\Sigma_X(H) \leqslant 3/5+1/3 <1$. Now suppose $\mathcal{C}_1 = (A_1^3)^{(2)}$ or $A_2$. Here we need to show that $\Sigma_X(H)<1$ for $\mathcal{C}_2 = A_2A_1^2$ and one can check that the bounds in Proposition \ref{p:e7_dim3} are effective for $H \in \mathcal{L}$. Now assume $H \not\in \mathcal{L}$. By inspecting \cite{Law2}, the problem is quickly reduced to the case where $H = (2^2 \times D_4).S_3$. By arguing as in the previous paragraph, we compute $\a(G,H,y) = 0$ for $y \in (A_1^3)^{(2)}$ and $\a(G,H,y) = 17/35$ for $y \in A_2$. And if $y \in A_2A_1^2$ we observe that $\dim(y^G \cap H^0) = 16$ and $\dim(y^G \cap (H \setminus H^0)) \leqslant 20$, which yields $\a(G,H,y) \leqslant 43/105$. Bringing these bounds together, we conclude that $\Sigma_X(H) \leqslant 17/35+43/105<1$. To complete the proof we may assume $\mathcal{C}_i \not\in \{A_1,A_1^2,(A_1^3)^{(1)},(A_1^3)^{(2)},A_2\}$ for $i=1,2$. Here it suffices to show that $\Delta$ is non-empty when $\mathcal{C}_1 = \mathcal{C}_2 = A_1^4$. Since $(2^2 \times D_4).S_3$ does not meet the class $A_1^4$, and similarly for the subgroups $A_2.2$, $A_1^2$ and $A_1$, we may assume $H \in \mathcal{L}$. Recalling that $p$ is odd, one can now check that the bound on $\a(G,H,y)$ in Proposition \ref{p:e7_dim3} for $y \in A_1^4$ is sufficient in every case. \end{proof} This completes the proof of Theorem \ref{t:main} for $G = E_7$. 
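To illustrate how the fixed point ratio bounds in this section are obtained, the following worked check recovers the bound $\a(G,H,y) \leqslant 43/105$ from the proof above, for $H = (2^2 \times D_4).S_3$ and $y \in A_2A_1^2$. This is only a sketch: we assume that $\a(G,H,y) = \dim C_{\O}(y)/\dim \O$ with $\O = G/H$, that Proposition \ref{p:dim} gives the dimension formula displayed below, and that $\dim y^G = 82$ (the value consistent with the stated bound).

```latex
% Hedged worked check; assumptions as stated in the surrounding text.
% Here \dim G = 133, \dim H = \dim D_4 = 28, so \dim \O = 133 - 28 = 105,
% and the proof above records \dim(y^G \cap H) \leqslant 20 for y in A_2A_1^2.
\[
\dim C_{\O}(y) = \dim \O - \dim y^G + \dim(y^G \cap H)
  \leqslant 105 - 82 + 20 = 43,
\qquad
\a(G,H,y) = \frac{\dim C_{\O}(y)}{\dim \O} \leqslant \frac{43}{105}.
\]
```

The same arithmetic with $\dim(y^G \cap H) = 12$ and $\dim y^G = 66$ for $y \in A_2$ recovers the value $\a(G,H,y) = 51/105 = 17/35$ stated in the proof.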
\subsection{$G = E_8$}\label{ss:e8} In order to complete the proof of Theorem \ref{t:main} we may assume $G = E_8$. We refer the reader to \cite[Table 22.1.1]{LS_book} for a list of the unipotent classes in $G$ and their respective dimensions. As usual, we follow \cite{LS_book} in our choice of notation for the unipotent classes in $G$ and we define $\mathcal{M}$, $\mathcal{P}$, $\mathcal{R}$ and $\mathcal{L}$ as in Section \ref{ss:sub}. \begin{prop}\label{p:e8_dim3} Let $H \in \mathcal{L}$ and let $y \in G$ be an element of order $p$ in one of the following conjugacy classes \[ A_1, \, A_1^2, \, A_1^3, \, A_1^4, \, A_2, \, A_2A_1^3, \, A_2^2A_1^2, \, A_4A_3. \] Then $\a(G,H,y) \leqslant \b$, where $\b$ is recorded in Table \ref{tab:beta_e8}. \end{prop} {\small \begin{table} \[ \begin{array}{l|cccccccc} & A_1 & A_1^2 & A_1^3 & A_1^4 & A_2 & A_2A_1^3 & A_2^2A_1^2 & A_4A_3 \\ \hline A_1E_7 & 11/14 & 9/14 & 4/7 & (27+\delta_{2,p})/56 & 4/7 & 3/8 & 9/28 & 0 \\ D_8 & 3/4 & 5/8 & 9/16 & (15+\delta_{2,p})/32 & 35/64 & 3/8 & 5/16 & 3/16 \\ G_2F_4 & 10/13 & 8/13 & 7/13 & 45/91 & 7/13 & 34/91 & 30/91 & 0 \\ A_2E_6.2 & 7/9 & 17/27 & 5/9 & (26+\delta_{2,p})/54 & 5/9 & 31/81 & 1/3 & 0 \\ A_8.2 & 3/4 & 13/21 & 23/42 & (20+\delta_{2,p})/42 & 1/2 & 31/84 & 0 & 4/21 \\ A_4^2.4 & 3/4 & 31/50 & 27/50 & (24+\delta_{2,p})/50 & 1/2 & 37/100 & 8/25 & 1/5 \\ D_4^2.(S_3 \times 2) & 3/4 & 5/8 & 9/16 & (15+\delta_{2,p})/32 & 17/32 & 13/32 & 1/3 & 0 \\ A_2^4.{\rm GL}_2(3) & 3/4 & 11/18 & 31/54 & (26+\delta_{2,p})/54 & 31/54 & 7/18 & 1/3 & 0 \\ A_1G_2^2.2 & 165/217 & 137/217 & 113/217 & 103/217 & 113/217 & 81/217 & 71/217 & 0 \\ A_1^8.{\rm AGL}_3(2) & 3/4 & 37/56 & 4/7 & (13+\delta_{2,p})/28 & 9/16 & 43/112 & 9/28 & 5/28 \\ T.W & 61/80 & 13/20 & 17/30 & \delta_{2,p}/2 & 67/120 & 47/120 & 1/3 & 1/5 \\ \end{array} \] \caption{The upper bound $\a(G,H,y) \leqslant \b$ in Proposition \ref{p:e8_dim3}} \label{tab:beta_e8} \end{table}} \begin{proof} If $y \in A_1$ is a long root element, then 
$\a(G,H,y)$ is given in \cite[Table 1]{BGG1}, noting that $\a(G,H,y)=3/4$ when $H = D_4^2.(S_3 \times 2)$ and $p=2$. Indeed, in this case there are no long root elements in $H \setminus H^0$ (see below) and this allows us to deduce that $\a(G,H,y)=3/4$ for all $p$. For the remainder, we will assume $y \not\in A_1$. Let $V = W_G(\l_8)$ be the adjoint module for $G$. Suppose $H \in \{A_1E_7, D_8, G_2F_4\}$. Here $H$ is connected and the $G$-class of each unipotent class in $H$ is determined in \cite{Law2}. From this we can compute $\dim(y^G \cap H)$ and then obtain $\dim C_{\O}(y)$ via Proposition \ref{p:dim}. Next assume $H = A_2E_6.2$. Here the $G$-class of each unipotent class in the connected component $H^0$ is recorded in \cite[Section 4.15]{Law2} and this allows us to compute $\dim(y^G \cap H^0)$. So to complete the analysis of this case, we may assume $p=2$ and $y \in H \setminus H^0$ is an involution. Here $y$ acts as a graph automorphism on both simple factors of $H^0$ and by applying Proposition \ref{p:graph} we deduce that there are two $H^0$-classes of such elements, represented by $y_1$ and $y_2$, where $C_{H^0}(y_1) = B_1F_4$ and $C_{H^0}(y_2) = B_1C_{F_4}(u)$, with $u \in F_4$ a long root element. Here $\dim (y_i^G \cap (H \setminus H^0)) = 31, 47$ for $i=1,2$ and by considering the Jordan form of $y_1$ and $y_2$ on $V$ (see the proof of \cite[Lemma 3.11]{BGG1}) we see that $y_1$ is in the $G$-class labelled $A_1^3$, whereas $y_2$ is in $A_1^4$. If $y \in A_1^3$ then $\dim(y^G \cap H^0) = 40$ and thus $\a(G,H,y) = 5/9$. Similarly, if $y \in A_1^4$ then $\dim(y^G \cap H^0) = 44$, so $\dim(y^G \cap H) \leqslant 44+3\delta_{2,p}$ and we deduce that $\a(G,H,y) \leqslant (26+\delta_{2,p})/54$. The case $H = A_8.2$ is very similar, working with \cite[Section 4.16]{Law2} to compute $\dim(y^G \cap H^0)$. 
If $p=2$ and $y \in H \setminus H^0$ is an involution, then $y$ induces a graph automorphism on $H^0$, so $C_{H^0}(y) = B_4$, $\dim (y^G \cap (H\setminus H^0)) = 44$ and by arguing as in the proof of \cite[Proposition 5.11]{BGS} we deduce that $y$ is contained in the $G$-class $A_1^4$. Since $\dim (y^G \cap H^0) = 40$, we conclude that $\dim(y^G \cap H) \leqslant 40+4\delta_{2,p}$ and thus $\a(G,H,y) \leqslant (20+\delta_{2,p})/42$ as claimed. Now suppose $H = A_4^2.4$. Once again, we can compute $\dim (y^G \cap H^0)$ by inspecting \cite{Law2}, so we may assume $p=2$ and $y \in H \setminus H^0$ is an involution. By considering the restriction of $V$ to $H^0$ (see \cite[Table 5]{Thomas}) we deduce that $y$ induces a graph automorphism on both $A_4$ factors of $H^0$, so $C_{H^0}(y) = B_2^2$, $\dim (y^G \cap (H \setminus H^0)) = 28$ and we calculate that $y$ is in the $G$-class labelled $A_1^4$ (see the proof of \cite[Lemma 4.4]{BTh0}, for example). Since $\dim(y^G \cap H^0) = 24$, we conclude that $\a(G,H,y) \leqslant (24+\delta_{2,p})/50$ when $y \in A_1^4$. Next let us turn to the case $H = D_4^2.(S_3 \times 2)$. Here $H^0 < D_8 < G$ and the embedding of $H^0$ in $D_8$ is transparent. Therefore, we can identify the $G$-class of each unipotent element in $H^0$ by appealing to \cite[Section 4.13]{Law2}. In turn, this allows us to compute $\dim (y^G \cap H^0)$ and so the analysis of this case is reduced to the situation where $p \in \{2,3\}$ and $y \in H \setminus H^0$ has order $p$. First assume $p=3$ and $y \in H \setminus H^0$ has order $3$. Here $y$ induces a triality graph automorphism on both $D_4$ factors and thus Proposition \ref{p:graph} implies that $\dim(y^G \cap (H \setminus H^0)) \leqslant 40$. 
Next observe that \[ V \downarrow H^0 = \mathcal{L}(H^0) \oplus (U_1 \otimes U_1) \oplus (U_2 \otimes U_2) \oplus (U_3 \otimes U_3) \] (see \cite[Table 5]{Thomas}), where $\mathcal{L}(H^0)$ is the Lie algebra of $H^0$ and $U_1$, $U_2$ and $U_3$ denote the three $8$-dimensional irreducible modules for $D_4$ (that is, the natural module and the two spin modules). Now $y$ cyclically permutes the three $64$-dimensional summands in this decomposition, which means that the Jordan form of $y$ on $V$ has at least $64$ Jordan blocks of size $3$. By inspecting \cite[Table 9]{Law1}, it follows that $y$ is not contained in any of the $G$-classes labelled $A_1, A_1^2, A_1^3, A_1^4$ or $A_2$. And if $y \in A_2A_1^3$ then the bound $\dim(y^G \cap H) \leqslant 40$ yields $\a(G,H,y) \leqslant 13/32$. Similarly, we get $\a(G,H,y) \leqslant 1/3$ if $y \in A_2^2A_1^2$. Now assume $p=2$ and $y \in H \setminus H^0$ is an involution. Then up to conjugacy, $y$ either interchanges the two $D_4$ factors, or it acts as a graph automorphism on both factors. If $y$ swaps the two factors, then $y$ embeds in $D_8$ as an involution of type $(4A_1)'$ or $(4A_1)''$ in the notation of \cite[Table 8]{Law2} (that is, $y \in D_8$ is an involution of type $a_8$ or $a_8'$ in the notation of \cite{AS}) and by inspecting \cite[Section 4.13]{Law2} we deduce that $y$ is in the $G$-class $A_1^3$ or $A_1^4$. Similarly, if $y$ acts as a $b_1$-type graph automorphism on both factors, then $y \in D_8$ is of type $D_2$ and is therefore contained in the $G$-class $A_1^2$ (this corrects an error in the proof of \cite[Proposition 5.11]{BGS}, where it is incorrectly stated that $y$ is in the class $A_1$). And if $y$ acts as a $b_3$ graph automorphism on both factors, then $y$ is contained in the $D_8$-class $2A_1+D_2$ and is therefore in the $G$-class $A_1^4$. Similarly, if $y$ acts as a $b_1$ automorphism on one factor and $b_3$ on the other, then $y$ is in the $G$-class $A_1^3$. 
So for $p=2$ we conclude that $\dim(y^G \cap (H \setminus H^0)) \leqslant 30$ if $y \in A_1^4$ (since the class of $b_3$ graph automorphisms of $D_4$ has dimension $15$) and thus $\dim (y^G \cap H) = 26+6\delta_{2,p}$, which in turn yields $\a(G,H,y) = (15+\delta_{2,p})/32$. Similarly, if $y \in A_1^2$ then $\dim(y^G \cap H) = 20$ and $\a(G,H,y) = 5/8$. On the other hand, if $y \in A_1^3$ then $\dim(y^G \cap H) \leqslant 28$ and we deduce that $\a(G,H,y) \leqslant 9/16$. Now suppose $H = A_2^4.{\rm GL}_2(3)$. Here $H^0 < A_2E_6$ and so we can work with the information in \cite[Sections 4.9, 4.15]{Law2} to compute $\dim(y^G \cap H^0)$. Now assume $p \in \{2,3\}$ and $y \in H \setminus H^0$ has order $p$. Suppose $p=2$. If $y \in A_1^2$ then the proof of \cite[Lemma 3.11]{BGG1} shows that $\dim(y^G \cap H) =8$ and we deduce that $\a(G,H,y) = 11/18$. For $y \in A_1^3,A_1^4$ we observe that $\dim (y^G \cap (H\setminus H^0)) \leqslant 20$ (maximal if $y$ acts as a graph automorphism on each $A_2$ factor of $H^0$) and the result follows since $\dim(y^G \cap H^0) = 12,16$, respectively. Finally, suppose $p=3$. By arguing as in the proof of \cite[Lemma 3.11]{BGG1} we see that there are at least $54$ Jordan blocks of size $3$ in the Jordan form of $y$ on $V$ and so by inspecting \cite[Table 9]{Law1} we deduce that $y$ is not contained in any of the $G$-classes labelled $A_1$, $A_1^2$, $A_1^3$ or $A_1^4$. Moreover, by considering the action of $y$ on the simple factors of $H^0$, we see that $\dim(y^G \cap (H\setminus H^0)) \leqslant 22$ (with equality if $y$ cyclically permutes three of the factors and acts as a regular element on the fixed factor). By comparing this estimate with $\dim(y^G \cap H^0)$, it follows that $\dim(y^G \cap H) \leqslant 22$ for $y \in A_2, A_2A_1^3$, while $\dim(y^G \cap H) \leqslant 24$ for $y \in A_2^2A_1^2$. In each case, this gives the bound $\a(G,H,y) \leqslant \b$ presented in Table \ref{tab:beta_e8}. 
Next assume $H = A_1G_2^2.2$ and note that $p \geqslant 3$, so each $y \in H$ of order $p$ is contained in $H^0$. Since $H^0 = H_1H_2 < F_4G_2 < G$ with $H_1 = A_1G_2 < F_4$ and $H_2 = G_2$, we can use the information in \cite[Sections 5.3, 5.12]{Law2} to compute $\dim(y^G \cap H)$ and the result follows. Now suppose $H = A_1^8.{\rm AGL}_3(2)$ and note that $\dim y^{H^0} \leqslant 16$ if $y \in H^0$. If $p \in \{2,3,7\}$ and $y \in H \setminus H^0$ then $y$ induces a nontrivial permutation of the $A_1$ factors of $H^0$ and it is straightforward to check that $\dim y^{H^0} \leqslant 16$ for the elements of order $p$ we need to consider. For example, if $y$ has cycle-shape $(3^2,1^2)$ on the set of $A_1$ factors, then $\dim y^{H^0} \leqslant 6+6+4 = 16$, with equality if $y$ acts nontrivially on the two fixed factors. And if $p=7$ and $y$ induces a $7$-cycle on the $A_1$ factors, then by considering the restriction of $V = W_G(\l_8)$ to $H^0$ we deduce that $y$ has at least $32$ Jordan blocks of size $7$ on $V$, which is incompatible with the Jordan form of the elements we are interested in (see \cite[Table 9]{Law1}). We conclude that $\dim(y^G \cap H) \leqslant 16$ and this gives the bound $\a(G,H,y)\leqslant \b$ in Table \ref{tab:beta_e8} unless $y \in A_1^4$ and $p \geqslant 3$. In the latter case, we can argue as in the proof of Proposition \ref{p:e7_dim3} (for the case $H = A_1^7.{\rm GL}_3(2)$) to show that $y^G \cap H = y^G \cap H^0$ and $\dim(y^G \cap H) = 8$, which yields $\a(G,H,y) = 13/28$. Finally, if $H = T.W$ then the trivial bound $\dim (y^G \cap H) \leqslant 8$ is sufficient unless $y \in A_1^4$ and $p \geqslant 3$. In the latter case, by arguing as in the proof of Proposition \ref{p:e7_dim3}, we deduce that $y^G \cap H$ is empty and thus $\a(G,H,y) = 0$. This completes the proof of the proposition. \end{proof} We are now ready to prove Theorem \ref{t:main} for $G = E_8$. In view of \cite[Theorem 7]{BGG1}, we may assume $t \in \{2,3,4\}$. 
First we handle the cases recorded in Table \ref{tab:special}. \begin{prop}\label{p:e8_00} The set $\Delta$ is empty if $X$ is one of the cases in Table \ref{tab:special}. \end{prop} \begin{proof} Write $\mathcal{C}_i = y_i^G$ and first assume $t=4$, so $(\mathcal{C}_1, \mathcal{C}_2, \mathcal{C}_3, \mathcal{C}_4) = (A_1,A_1,A_1,A_1^2)$. Here we may embed each $y_i$ in a subgroup $L = E_7$ such that $M=N_G(L)=A_1E_7$ is a maximal closed subgroup of $G$. Set $\mathcal{D}_i = y_i^M = y_i^L$ and note that the $L$-class $\mathcal{D}_i$ and the $G$-class $\mathcal{C}_i$ have the same labels (see \cite[Section 4.14]{Law2}), so $\dim \mathcal{C}_i = 58$ and $\dim \mathcal{D}_i = 34$ for $i=1,2,3$, $\dim \mathcal{C}_4 = 92$ and $\dim \mathcal{D}_4 = 52$. By Theorem \ref{t:e7_4}, we may assume that the $y_i$ topologically generate $L$. It is now straightforward to check that all of the conditions in Proposition \ref{p:fibre} are satisfied and we conclude that $\Delta$ is empty. Next assume $t=3$. By considering Proposition \ref{p:clos} and the closure relation on unipotent classes (see \cite{Spal}), it suffices to show that $\Delta$ is empty for $(\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3) = (A_1,A_1,A_3)$ and $(A_1,A_1^2,A_2)$. Suppose $(\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3) = (A_1,A_1,A_3)$. As before, we may embed each $y_i$ in $L = E_7$, where $M = N_G(L) = A_1E_7$ and the classes $\mathcal{C}_i$ and $\mathcal{D}_i = y_i^M = y_i^L$ have the same labels. Moreover, by applying Theorem \ref{t:e7_3}, we may assume that the $y_i$ topologically generate $L$. If we set $Y = \mathcal{D}_1 \times \mathcal{D}_2 \times \mathcal{D}_3$, then $\dim X = 264$, $\dim Y = 152$ and thus $\Delta$ is empty by Proposition \ref{p:fibre}. The case $(\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3) = (A_1,A_1^2,A_2)$ is entirely similar. For the remainder we may assume $t=2$. Suppose $\mathcal{C}_1 = A_1$. 
By considering Proposition \ref{p:clos} and the closure relation on unipotent classes in $G$, we may assume that $\mathcal{C}_2 \in \{ D_5, D_4A_2\}$. Suppose $\mathcal{C}_2 = D_5$. Here we may embed $y_1$ and $y_2$ in $L = E_7$ so that $M = N_G(L) = A_1E_7$ and the classes $y_i^M = y_i^L$ and $\mathcal{C}_i$ have the same labels. In addition, we may assume that the $y_i$ topologically generate $L$ (see Theorem \ref{t:e7_2}) and we now apply Proposition \ref{p:fibre}, noting that the condition in part (iii) holds since $\dim X = 258$ and $\dim Y = 146$. Now assume $(\mathcal{C}_1,\mathcal{C}_2) = (A_1,D_4A_2)$. This case requires a slight variation of the usual argument (this is analogous to the case we considered in the final paragraph of the proof of Proposition \ref{p:e7_00}). First we embed the $y_i$ in $M = A_1E_7 = N_G(L)$, where $L = E_7$. More precisely, we take $y_1$ to be in the $A_1$-class of $L$ and we choose $y_2 = u_2v_2 \in A_1E_7$, where $u_2$ is an element of order $p$ in the $A_1$ factor and $v_2 \in L$ is contained in the $E_7$-class labelled $D_5(a_1)A_1$ (see \cite[Table 23]{Law2}). Set $\mathcal{D}_i = y_i^M$ and $Y = \mathcal{D}_1 \times \mathcal{D}_2$, so $\dim X = 256$ and $\dim Y = 144$. By Theorem \ref{t:e7_2}, we may assume that $y_1$ and $v_2$ topologically generate $L$. Setting $y = (y_1,y_2) \in X$, this implies that $G(y)^0 = L$ and we have $G(z)^0 \leqslant L$ for all $z \in Y$. We now apply Proposition \ref{p:fibre} to conclude, noting that $\dim Y + \dim G - \dim X = \dim M$. By considering the closure relation on unipotent classes, it remains to show that $\Delta$ is empty when $(\mathcal{C}_1,\mathcal{C}_2) = (A_1^2,D_4)$ or $(A_2,A_3)$. In both cases, this is a straightforward application of Proposition \ref{p:fibre}, where we embed each $y_i$ in $L = E_7$. We omit the details. \end{proof} \begin{thm}\label{t:e8_4} The conclusion to Theorem \ref{t:main} holds when $G = E_8$ and $t=4$. 
\end{thm} \begin{proof} By combining Corollary \ref{c:fixV} and Propositions \ref{p:fix} and \ref{p:e8_00}, we see that it suffices to show that $\Sigma_X(H)<3$ for all $H \in \mathcal{M}$ whenever $X$ is not one of the cases in Tables \ref{tab:main} and \ref{tab:special}. By Theorem \ref{t:par}, this inequality holds if $H \in \mathcal{P}$, so we may assume $H \in \mathcal{R}$. Fix an element $y \in G$ of order $p$. If $H \ne A_1E_7$ then \cite[Theorem 3.1]{BGG1} states that $\a(G,H,y) \leqslant 7/9$ if $y \in A_1$, otherwise $\a(G,H,y)<2/3$. Therefore, $\Sigma_X(H) < 3\cdot 7/9 + 2/3 = 3$ as required. Now assume $H = A_1E_7$. By inspecting Table \ref{tab:beta_e8} we observe that $\a(G,H,y) \leqslant 11/14, 9/14$ if $y \in A_1, A_1^2$, respectively, and $\a(G,H,y) \leqslant 4/7$ for all other unipotent elements of order $p$. This implies that $\Sigma_X(H) \leqslant 3\cdot 11/14+4/7 <3$ and the result follows. \end{proof} \begin{thm}\label{t:e8_3} The conclusion to Theorem \ref{t:main} holds when $G = E_8$ and $t=3$. \end{thm} \begin{proof} Set $\mathcal{C}_i = y_i^G$. By Corollary \ref{c:fixV} and Proposition \ref{p:e8_00}, we know that $\Delta$ is empty for each case in Tables \ref{tab:main} and \ref{tab:special}. Therefore, it remains to show that $\Delta$ is non-empty in all the other cases. As usual, to do this we will work with Proposition \ref{p:fix} and Theorem \ref{t:par}, which imply that it suffices to show that $\Sigma_X(H)<2$ for all $H \in \mathcal{R}$. If $\mathcal{C}_i \ne A_1$ for all $i$, then \cite[Theorem 3.1]{BGG1} gives $\a(G,H,y_i)<2/3$ and thus $\Sigma_X(H) <2$. Therefore, we may assume $\mathcal{C}_1 = A_1$ and $H \in \mathcal{L}$, which brings the bounds in Proposition \ref{p:e8_dim3} into play. First assume $\mathcal{C}_2 = A_1$. By considering Proposition \ref{p:clos} and the closure relation on unipotent classes, we may assume $\mathcal{C}_3 = A_2A_1^3$ and the desired bound $\Sigma_X(H)<2$ now follows via Proposition \ref{p:e8_dim3}. 
Similarly, if $\mathcal{C}_2 = A_1^2$ then we may assume $\mathcal{C}_3 = A_1^4$ and once again the bounds in Proposition \ref{p:e8_dim3} are good enough. Finally, if $\mathcal{C}_2 \ne A_1,A_1^2$ then by considering closures we may assume $\mathcal{C}_2 = \mathcal{C}_3 = A_1^3$ and the result follows by applying Proposition \ref{p:e8_dim3}. \end{proof} \begin{thm}\label{t:e8_2} The conclusion to Theorem \ref{t:main} holds when $G = E_8$, $t=2$ and $p \geqslant 3$. \end{thm} \begin{proof} Write $\mathcal{C}_i = y_i^G$. In the usual way, by combining Corollary \ref{c:fixV} and Proposition \ref{p:e8_00}, we observe that $\Delta$ is empty for each pair $(\mathcal{C}_1,\mathcal{C}_2)$ in Tables \ref{tab:main} and \ref{tab:special}. In order to show that $\Delta$ is non-empty in each of the remaining cases, it is enough to verify the bound $\Sigma_X(H) <1$ for all $H \in \mathcal{R}$. First assume $\mathcal{C}_1 = A_1$. If $H \not\in \mathcal{L}$ then $\a(G,H,y_1) = 0$ and the desired inequality clearly holds, so we may assume $H \in \mathcal{L}$. By considering the possibilities for $\mathcal{C}_2$ and the closure relation on unipotent classes (see \cite[Chapter 4]{Spal}), it suffices to show that $\Sigma_X(H)<1$ for $\mathcal{C}_2 = A_4A_3$ and this follows immediately from the upper bounds in Proposition \ref{p:e8_dim3}. Next assume $\mathcal{C}_1 = A_1^2$. By considering closures, we may assume $\mathcal{C}_2 = A_2^2A_1^2$. If $H \in \mathcal{L}$ then the bounds in Proposition \ref{p:e8_dim3} imply that $\Sigma_X(H)<1$ as required. On the other hand, if $H \not\in \mathcal{L}$ then either $H \cap \mathcal{C}_1$ is empty (and thus $\Sigma_X(H) = \a(G,H,y_2) < 2/3$ by \cite[Theorem 3.1]{BGG1}), or by inspecting \cite{CST,Law2} we deduce that $H = F_4$ and $p=3$. In the latter case, the information in \cite[Table 2]{CST} yields $\a(G,H,y_1) = 30/49$ and $\a(G,H,y_2) = 16/49$, whence $\Sigma_X(H) = 46/49$. Now suppose $\mathcal{C}_1 = A_1^3$ or $A_2$. 
In the usual way, by considering closures, we may assume $\mathcal{C}_2 = A_2A_1^3$. If $H \cap \mathcal{C}_1$ is empty then $\Sigma_X(H)<2/3$ by \cite[Theorem 3.1]{BGG1}, so we may assume $H$ contains elements in the class $\mathcal{C}_1$. By inspecting \cite{CST,Law2}, this implies that $H \in \mathcal{L}$ and one can check that the upper bounds on $\a(G,H,y_i)$ in Proposition \ref{p:e8_dim3} are sufficient in all cases. Finally, let us assume $\mathcal{C}_i \not\in \{A_1,A_1^2,A_1^3,A_2\}$ for $i=1,2$. Here the closure of $\mathcal{C}_i$ contains the class $A_1^4$, so we may assume $\mathcal{C}_1 = \mathcal{C}_2 = A_1^4$. If $H \in \mathcal{L}$ then the bounds in Proposition \ref{p:e8_dim3} are sufficient (recall that $p \geqslant 3$). On the other hand, if $H \not\in \mathcal{L}$ then by inspecting \cite{CST,Law2} we see that we may assume $H = F_4$ and $p=3$, in which case the fusion information in \cite[Table 2]{CST} yields $\a(G,H,y) = 45/98$ for $y \in A_1^4$ and thus $\Sigma_X(H) = 45/49$. \end{proof} This completes the proof of Theorem \ref{t:main} for $G = E_8$, which in turn completes the proof of Theorem \ref{t:main} in full generality. \newpage \section{The tables}\label{s:tab} Here we present Tables \ref{tab:main} and \ref{tab:special} from Theorem \ref{t:main}. We refer the reader to Remark \ref{r:main} for comments on the notation for unipotent classes adopted in the tables. 
\vspace{3mm} {\small \begin{table}[h] \renewcommand{\thetable}{A} \[ \begin{array}{lll} \hline G & t & (\mathcal{C}_1, \ldots, \mathcal{C}_t) \\ \hline G_2 & 3 & (A_1, A_1, A_1) \\ & 2 & (A_1,A_1), (A_1,\tilde{A}_1), (A_1,(\tilde{A}_1)_3), (A_1,G_2(a_1)) \\ & & \\ F_4 & 4 & (A_1, A_1, A_1, A_1) \\ & 3 & (A_1, A_1, A_1), (A_1, A_1, \tilde{A}_1), (A_1, A_1, (\tilde{A}_1)_2), (A_1, A_1, A_1\tilde{A}_1), (A_1, A_1, A_2)\\ & 2 & (A_1,A_1), (A_1,\tilde{A}_1), (A_1, A_1\tilde{A}_1), (A_1,A_2), (A_1, \tilde{A}_2), (A_1, A_2\tilde{A}_1), (A_1, \tilde{A}_2A_1), (A_1, B_2), (A_1,C_3(a_1)) \\ & & (A_1, F_4(a_3)), (A_1, B_3), (\tilde{A}_1, \tilde{A}_1), (\tilde{A}_1, A_1\tilde{A}_1), (\tilde{A}_1, A_2), (A_1\tilde{A}_1, A_1\tilde{A}_1), (A_1\tilde{A}_1, A_2), (A_2,A_2) \\ & & \\ E_6 & 4 & (A_1, A_1, A_1, A_1) \\ & 3 & (A_1, A_1, A_1), (A_1, A_1, A_1^2), (A_1, A_1, A_1^3), (A_1, A_1, A_2), (A_1, A_1^2, A_1^2) \\ & 2 & (A_1,A_1), (A_1,A_1^2), (A_1,A_1^3), (A_1,A_2), (A_1,A_2A_1), (A_1,A_2^2), (A_1,A_2A_1^2), (A_1,A_3), (A_1,A_2^2A_1) \\ & & (A_1,A_3A_1), (A_1,D_4(a_1)), (A_1,A_4), (A_1,D_4), (A_1^2,A_1^2), (A_1^2,A_1^3), (A_1^2,A_2), (A_1^2,A_2A_1) \\ & & (A_1^2,A_2A_1^2), (A_1^2,A_3), (A_1^3,A_1^3), (A_1^3,A_2), (A_2,A_2) \\ & & \\ E_7 & 4 & (A_1, A_1, A_1, A_1) \\ & 3 & (A_1,A_1,A_1), (A_1,A_1,A_1^2), (A_1,A_1,(A_1^3)^{(1)}), (A_1,A_1,(A_1^3)^{(2)}), (A_1,A_1,A_2), (A_1,A_1,A_1^4) \\ & & (A_1,A_1,A_2A_1), (A_1,A_1^2,A_1^2) \\ & 2 & (A_1,A_1), (A_1,A_1^2), (A_1,(A_1^3)^{(1)}), (A_1,(A_1^3)^{(2)}), (A_1,A_2), (A_1,A_1^4), (A_1,A_2A_1), (A_1,A_2A_1^2) \\ & & (A_1,A_2A_1^3), (A_1,A_2^2), (A_1,A_3), (A_1,(A_3A_1)^{(1)}), (A_1,A_2^2A_1), (A_1,(A_3A_1)^{(2)}), (A_1,D_4(a_1)) \\ & & (A_1,A_3A_1^2), (A_1,D_4), (A_1,D_4(a_1)A_1), (A_1,A_3A_2), (A_1,A_4), (A_1,A_3A_2A_1), (A_1,D_4A_1) \\ & & (A_1,A_4A_1), (A_1,D_5(a_1)), (A_1^2,A_1^2), (A_1^2,(A_1^3)^{(1)}), (A_1^2,(A_1^3)^{(2)}), (A_1^2,A_2), (A_1^2,A_1^4), (A_1^2,A_2A_1) \\ & & (A_1^2,A_2A_1^2), (A_1^2,A_2A_1^3), (A_1^2,A_3), 
((A_1^3)^{(1)},(A_1^3)^{(2)}), ((A_1^3)^{(1)},A_2), ((A_1^3)^{(2)}, (A_1^3)^{(2)}) \\ & & ((A_1^3)^{(2)},A_2), ((A_1^3)^{(2)},A_1^4), ((A_1^3)^{(2)},A_2A_1), (A_2,A_2), (A_2, A_1^4), (A_2, A_2A_1) \\ & & \\ E_8 & 4 & (A_1, A_1, A_1, A_1) \\ & 3 & (A_1,A_1, A_1), (A_1,A_1,A_1^2), (A_1,A_1,A_1^3), (A_1,A_1,A_2), (A_1,A_1,A_1^4), (A_1,A_1^2,A_1^2) \\ & 2 & (A_1,A_1), (A_1,A_1^2), (A_1,A_1^3), (A_1,A_2), (A_1,A_1^4), (A_1,A_2A_1), (A_1,A_2A_1^2), (A_1,A_3), (A_1,A_2A_1^3) \\ & & (A_1,A_2^2), (A_1,A_2^2A_1), (A_1,A_3A_1), (A_1,D_4(a_1)), (A_1,D_4), (A_1,A_2^2A_1^2), (A_1,A_3A_1^2) \\ & & (A_1, D_4(a_1)A_1), (A_1,A_3A_2), (A_1,A_4), (A_1,A_3A_2A_1), (A_1,D_4A_1),(A_1,D_4(a_1)A_2) \\ & & (A_1,A_4A_1), (A_1,A_3^2), (A_1^2, A_1^2), (A_1^2,A_1^3), (A_1^2,A_2), (A_1^2,A_1^4), (A_1^2,A_2A_1), (A_1^2,A_2A_1^2) \\ & & (A_1^2,A_3), (A_1^2,A_2A_1^3), (A_1^3, A_1^3), (A_1^3,A_2), (A_1^3,A_1^4), (A_2, A_2), (A_2,A_1^4) \\ \hline \end{array} \] \caption{The varieties $X = \mathcal{C}_1 \times \cdots \times \mathcal{C}_t$ in Theorem \ref{t:main} with $\Delta = \emptyset$, Part I} \label{tab:main} \end{table} } {\small \begin{table}[h] \renewcommand{\thetable}{B} \[ \begin{array}{lll} \hline G & t & (\mathcal{C}_1, \ldots, \mathcal{C}_t) \\ \hline F_4 & 3 & (A_1, \tilde{A}_1, \tilde{A}_1), (A_1, \tilde{A}_1, (\tilde{A}_1)_2), (A_1, (\tilde{A}_1)_2, (\tilde{A}_1)_2) \\ & 2 & (\tilde{A}_1,\tilde{A}_2), (\tilde{A}_1, A_2\tilde{A}_1), (\tilde{A}_1,B_2) \\ E_6 & 2 & (A_1^2,A_2^2) \\ E_7 & 3 & (A_1,A_1^2,(A_1^3)^{(1)}), (A_1,(A_1^3)^{(1)},(A_1^3)^{(1)}) \\ & 2 & (A_1,(A_5)^{(1)}), (A_1^2,A_2^2), (A_1^2,(A_3A_1)^{(1)}), ((A_1^3)^{(1)}, (A_1^3)^{(1)}), ((A_1^3)^{(1)},A_1^4), ((A_1^3)^{(1)},A_2A_1) \\ & & ((A_1^3)^{(1)},A_2A_1^2), ((A_1^3)^{(1)},A_2A_1^3), ((A_1^3)^{(1)},A_2^2), ((A_1^3)^{(1)},A_3), ((A_1^3)^{(1)},(A_3A_1)^{(1)}) \\ E_8 & 4 & (A_1, A_1, A_1, A_1^2) \\ & 3 & (A_1,A_1,A_2A_1), (A_1,A_1,A_2A_1^2), (A_1,A_1,A_3), (A_1,A_1^2,A_1^3), (A_1,A_1^2,A_2) \\ & 2 & (A_1,D_5(a_1)), 
(A_1,A_4A_1^2), (A_1,A_4A_2), (A_1,A_4A_2A_1), (A_1,D_5(a_1)A_1), (A_1,A_5), (A_1,D_4A_2) \\ & & (A_1,E_6(a_3)), (A_1,D_5), (A_1^2,A_2^2), (A_1^2,A_2^2A_1),(A_1^2,A_3A_1), (A_1^2,D_4(a_1)), (A_1^2,D_4), (A_1^3,A_2A_1) \\ & & (A_1^3,A_2A_1^2), (A_1^3,A_3), (A_2,A_2A_1), (A_2,A_2A_1^2), (A_2,A_3) \\ \hline \end{array} \] \caption{The varieties $X = \mathcal{C}_1 \times \cdots \times \mathcal{C}_t$ in Theorem \ref{t:main} with $\Delta = \emptyset$, Part II} \label{tab:special} \end{table} } \clearpage
\section{Introduction and Statement of Results} Let $j(\tau)$ be the $\textrm{SL}_2(\mathbb{Z})$-modular function defined by $$j(\tau) := \frac{E_4(\tau)^3}{\Delta(\tau)} = \sum_{n=-1}^{\infty}c(n)e^{2\pi in\tau} = e^{-2\pi i\tau} + 744 + 196884e^{2\pi i\tau} + \cdots,$$ where $E_{2k}(\tau)$ is the weight $2k$ Eisenstein series and $\Delta(\tau) := (E_4(\tau)^3 - E_6(\tau)^2)/1728$ is the modular discriminant. It is well known that $j(\tau)$ parametrizes isomorphism classes of elliptic curves over $\mathbb{C}$ and gives a bijective map from the fundamental domain $X_0(1) \setminus \{\infty\}$ to $\mathbb{C}$. A natural question is to ask for a description of the inverse map. Gauss offered a solution to the inverse problem in terms of the arithmetic-geometric mean (AGM) by making use of the theory of elliptic functions. The elliptic Weierstrass $\wp$-function satisfies the differential equation $$(\wp'(z))^2 = (\wp(z))^3 - g_2\wp(z) - g_3,$$ where $g_2 := 60E_4(\tau)$ and $g_3 := 140E_6(\tau)$ are the \textit{elliptic invariants}. The cubic equation above defines an elliptic curve over $\mathbb{C}$ whose $j$-invariant is equal to $j(\tau)$. Given $\alpha \in \mathbb{C}$, one can find $\tau \in X_0(1) \setminus \{\infty\}$ such that $j(\tau) = \alpha$ by producing an elliptic curve model of the above form with $j$-invariant $\alpha$. Then $\tau$ is given by the ratio of the fundamental periods $\omega_1$ and $\omega_2$ of the associated $\wp$-function. The theory of elliptic functions tells us that the inverse of the $\wp$-function is an elliptic integral. Using this fact, one can show that $\omega_1$ and $\omega_2$ are given in terms of so-called period integrals. Gauss showed that the period integrals are left unchanged by replacing certain parameters with their arithmetic and geometric means. By passing to the limit, he was able to evaluate the period integrals in terms of the AGM. The AGM can then be numerically evaluated using the Gaussian hypergeometric series. 
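Gauss's AGM procedure described above is short to state concretely. The following is a minimal numerical sketch (the function name and stopping tolerance are our own choices, not from the literature being discussed):

```python
import math

def agm(a, b, tol=1e-15):
    # Gauss's arithmetic-geometric mean: replace (a, b) by their arithmetic
    # and geometric means; both sequences converge quadratically to a
    # common limit M(a, b).
    while abs(a - b) > tol:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

# Gauss's 1799 observation: M(1, sqrt(2)) equals pi divided by the
# lemniscatic period 2 * int_0^1 dt / sqrt(1 - t^4).
M = agm(1.0, math.sqrt(2.0))
```

For example, the complete elliptic integral $K(k)$ satisfies $M(1,\sqrt{1-k^2}) = \pi/(2K(k))$, which is how period integrals such as the ones above are evaluated numerically in practice.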
It is natural to ask for a solution to the inverse problem that relies only on the properties of $j(\tau)$ as a function and makes no reference to an elliptic curve model. The theory of polar harmonic Maass forms offers such a solution. As it turns out, the logarithmic derivatives of meromorphic modular forms are polar harmonic Maass forms, as was shown by Bringmann et al. in \cite{divmf}. The inverse problem can then be reformulated in terms of locating the pole of the logarithmic derivative of $j(\tau) - \alpha$. This can be done using the asymptotic formula for the Fourier coefficients of such polar harmonic Maass forms offered in \cite{divmf}. Using their work, we prove the following theorem, which can be found in the M.S. thesis \cite{MSBS} of the author: \begin{thm}\label{mainthm} Let $\alpha \in \mathbb{C}$ and let $z \in X_0(1) \setminus \{\infty\}$ be such that $j(z) = \alpha$. Define $$H_z(\tau) := -\frac{1}{2\pi i}\frac{j'(\tau)}{j(\tau) - \alpha} = \sum_{n=0}^{\infty}a(n)e^{2\pi in\tau}.$$ Write $z = x + iy$. Then $y$ is given by $$y = \lim_{n \to \infty}\frac{\log\vert a(n) \vert}{2\pi n}.$$ If $\alpha = 0$, then $x = -\tfrac{1}{2}$. If $\alpha \neq 0$, let $$c(n) = \begin{cases} \Re(a(n))e^{-2\pi ny_0} &\text{if } \lim_{n\to\infty}\vert a(n) \vert e^{-2\pi ny} = 1, \\ \frac{1}{2}a(n)e^{-2\pi ny_0} &\text{otherwise}, \end{cases}$$ where $y_0 \approx y$ is obtained from the partial approximations $b(n) := \frac{\log\vert a(n) \vert}{2\pi n}$. Let $w_n = \cos^{-1}(c(n))$. Then an approximation for $x$ is given by one of the following formulas: \begin{equation*} \begin{aligned} x &\approx \pm\frac{1}{2\pi}(w_n \pm w_{n-1}) \\ x &\approx \pm\frac{1}{2\pi}(w_n + w_{n-1} - 2\pi). \end{aligned} \end{equation*} \noindent Two remarks: \begin{enumerate} \item \emph{There is some ambiguity in the value of $x$ in the above theorem. 
However, it is not difficult to determine the correct value of $x$ by resubstituting the possible values into $j(\tau)$.} \item \emph{It would be interesting to study the convergence of the above algorithm in detail.} \end{enumerate} \end{thm} This paper is organized as follows: In Section 2, we briefly recall the basic facts about harmonic Maass forms. In Section 3, we use the work of Bringmann et al. in \cite{divmf} to prove \Cref{mainthm}. We conclude with Section 4, where we offer some examples of \Cref{mainthm} in practice. \section{Preliminaries on Harmonic Maass Forms} Recall that a \textit{harmonic Maass form} of integer weight $k$ is a real-analytic function $f \colon \mathbb{H} \to \mathbb{C}$ which satisfies the modular transformation law, is annihilated by the weight $k$ hyperbolic Laplacian operator $\Delta_k$, and exhibits at most linear exponential growth at the cusps. If $f$ has poles in $\mathbb{H}$, we say that $f$ is a \textit{polar harmonic Maass form}. The theory of monstrous moonshine tells us that the Fourier expansion of $j(\tau)$ is the McKay-Thompson series for the identity, meaning that the Fourier coefficients $c(n)$ are the graded dimensions of the Monster module $V^\natural$. From moonshine we also obtain the infinite product identity $$j(z) - j(\tau) = e^{-2\pi iz}\prod_{m > 0, n \in \mathbb{Z}} \left(1 - e^{2\pi imz}e^{2\pi in\tau}\right)^{c(mn)},$$ known as the \textit{denominator formula} for the Monster Lie algebra. It turns out that the denominator formula, when viewed as a function of $\tau$, is a polar harmonic Maass form with a simple pole at $z$. More specifically, the denominator formula is equivalent to a theorem of Asai, Kaneko, and Ninomiya (see Theorem 3 of \cite{valuesofmfdivmf}). 
The theorem states that if we define $$H_z(\tau) := \sum_{n=0}^{\infty}j_n(z)e^{2\pi in\tau} = \frac{E_4(\tau)^2E_6(\tau)}{\Delta(\tau)} \frac{1}{j(\tau) - j(z)} = -\frac{1}{2\pi i} \frac{j'(\tau)}{j(\tau) - j(z)},$$ then the functions $j_n(\tau)$ form a Hecke system. Namely, if we let $j_0(\tau) = 1$ and $j_1(\tau) = j(\tau) - 744$, then the others are given by $$j_n(\tau) = j_1(\tau) \mid T(n),$$ where $T(n)$ is the $n$th normalized Hecke operator. In \cite{divmf} Bringmann et al. generalize the above theorem by constructing weight $2$ polar harmonic Maass forms $H_{N,z}^{\ast}(\tau)$ which generalize the $H_z(\tau)$. Their work extends the result for $j(\tau)$ to meromorphic modular forms on all of the modular curves $X_0(N)$. They also give asymptotics for the Fourier coefficients of the $H_{N,z}^{\ast}(\tau)$ in terms of ``Ramanujan-like'' expansions, sums of the form \begin{equation}\label{Ramanujan sums} \sum_{\substack{\lambda \in \Lambda_z \\ \lambda \leq n}} \sum_{(c,d) \in S_{\lambda}}e\left(-\frac{n}{\lambda}r_z(c,d)\right)e^{\tfrac{2\pi n\Im(z)}{\lambda}}. \end{equation} Here we define $e(w) := e^{2\pi iw}$ for $w \in \mathbb{C}$. The definitions of the objects appearing in the sum are as follows. For an arbitrary solution $a,b \in \mathbb{Z}$ to $ad - bc = 1$, we define \begin{equation*} \begin{aligned} r_z(c,d) &:= ac\vert z \vert^2 + (ad + bc)\Re(z) + bd, \\ \Lambda_z &:= \{\alpha\vert z \vert^2 + \beta\Re(z) + \gamma^2 : \alpha, \beta, \gamma \in \mathbb{Z}\}, \\ S_{\lambda} &:= \{(c,d) \in N\mathbb{N}_0 \times \mathbb{Z} : \gcd(c,d) = 1 \textrm{ and } Q_z(c,d) = \lambda\}, \\ Q_z(c,d) &:= c^2\vert z \vert^2 + 2cd\Re(z) + d^2. \end{aligned} \end{equation*} \noindent Note that $r_z(c,d)$ is not uniquely defined. However $e(-nr_z(c,d)/Q_z(c,d))$ is well defined. We quote Theorem 1.1 from \cite{divmf} below. 
\begin{thm}\label{PHMF} If $z \in \mathbb{H}$, then $H^{\ast}_{N,z}(\tau)$ is a weight $2$ polar harmonic Maass form on $\Gamma_0(N)$ which vanishes at all cusps and has a single simple pole at $z$. Moreover, the following are true: \begin{enumerate}[font=\normalfont] \item If $z \in \mathbb{H}$ and $\Im(\tau) > \max\{\Im(z), \tfrac{1}{\Im(z)}\}$, then we have that $$H^{\ast}_{N,z}(\tau) = \frac{3}{\pi[\emph{SL}_2(\mathbb{Z}) : \Gamma_0(N)]\Im(\tau)} + \sum_{n=1}^{\infty}j_{N,n}(z)q^n.$$ \item For $\gcd(N,n) = 1$, we have $j_{N,n}(\tau) = j_{N,1}(\tau) \mid T(n)$. \item For $n \mid N$, we have $j_{N,n}(\tau) = j_{\tfrac{N}{n},1}(n\tau)$. \item As $n \to \infty$, we have $$j_{N,n}(\tau) = \sum_{\substack{\lambda \in \Lambda_{\tau} \\ \lambda \leq n}} \sum_{(c,d) \in S_{\lambda}}e\left(-\frac{n}{\lambda}r_{\tau}(c,d)\right)e^{\tfrac{2\pi n\Im(\tau)}{\lambda}} + O_{\tau}(n).$$ \end{enumerate} \end{thm} \noindent If we let $N = 1$, then $j_{1,n}(\tau) = j_n(\tau)$ and we recover the $H_z(\tau)$ up to the addition of the weight $2$ nonholomorphic Eisenstein series $E^{\ast}_2(\tau) := -\tfrac{3}{\pi\Im(\tau)} + E_2(\tau)$. We also quote Corollary 1.3 from \cite{divmf}, which we will use to obtain the imaginary part of $j^{-1}(\alpha)$. \begin{cor}\label{Im(z) cor} Suppose that $f(\tau)$ is a meromorphic modular form of weight $k$ on $\Gamma_0(N)$ whose divisor is not supported at cusps. Let $y_1$ be the largest imaginary part of any point in the divisor of $f(\tau)$ lying in $\mathbb{H}$. Then if $-\frac{1}{2\pi i}\frac{f'(\tau)}{f(\tau)} =: \sum_{n \gg -\infty} a(n)e^{2\pi in\tau}$, we have that $$y_1 = \limsup_{n\to\infty} \frac{\log\vert a(n)\vert}{2\pi n}.$$ \end{cor} \section{Proof of \Cref{mainthm}} In this section we will use the results gathered in the previous section to prove \Cref{mainthm}. 
We first rewrite the asymptotic formula given in \Cref{PHMF} as a sum over corresponding matrices $M = (\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}) \in \Gamma_{\infty} \backslash \Gamma_0(N)$. Direct substitution and simplification shows that $r_z(c,d)/Q_z(c,d) = \Re(Mz)$ and $\Im(z)/Q_z(c,d) = \Im(Mz)$, thus \Cref{PHMF} (4) is equivalent to $$j_{N,n}(z) = \sum_{\substack{M \in \Gamma_{\infty} \backslash \Gamma_0(N) \\ n\Im(Mz) \geq \Im(z)}}e^{-2\pi inMz} + O_z(n).$$ In the case where $N = 1$, we have $\Im(Mz) \leq \Im(z)$ for all $M \in \textrm{SL}_2(\mathbb{Z})$, thus the $\lambda = 1$ terms dominate in the formula given in \Cref{PHMF} (4). Separating out the $c = 0$ term, we have $$j_n(z) \approx e^{-2\pi inz} + \sum_{c \geq 1} \sum_{\substack{d \in \mathbb{Z} \\ \gcd(c,d) = 1 \\ \vert cz + d \vert^2 = 1}} e\left(n\frac{d - a}{c}\right)e^{2\pi in\bar{z}},$$ where the $e^{-2\pi inz}$ arises from the $c = 0$ term. If $z \in X_0(1) \setminus \{\infty\}$ and $\vert z \vert > 1$, then $Q_z(c,d) = 1$ has no solutions for $c \geq 1$. If $\vert z \vert = 1$ and $z \neq e^{2\pi i/3}$, then the only solution is $(c,d) = (1,0)$ and the second term reduces to $e^{2\pi in\bar{z}}$. Writing $z = x + iy$, we conclude that \begin{equation}\label{Re(z) asymptotics} j_n(z)e^{-2\pi ny} \sim \begin{cases} e^{-2\pi inx} \qquad &\textrm{if $\vert z \vert > 1$} \\ 2\cos(2\pi nx) \qquad &\textrm{if $\vert z \vert = 1, z \neq e^{2\pi i/3}$} \end{cases}. \end{equation} \noindent We remark that $j(e^{2\pi i/3}) = 0$ is well known, thus we can exclude the case where $z = e^{2\pi i/3}$. Let $c(n)$ and $w_n$ be defined as in \Cref{mainthm}. The conditions on $\vert z \vert$ in \Cref{Re(z) asymptotics} are equivalent to the conditions on $\lim_{n\to\infty}\vert a(n) \vert e^{-2\pi ny}$ in the definition of $c(n)$. We will now prove the claimed formulas for $x$ and $y$. \Cref{Im(z) cor} proves the claimed formula in \Cref{mainthm} for $y$. 
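For numerical experiments, the coefficients $a(n)$ of $H_z(\tau)$ can be generated by elementary power-series arithmetic: writing $q = e^{2\pi i\tau}$, we have $\frac{1}{2\pi i}\frac{d}{d\tau} = q\frac{d}{dq}$, so $H_z(\tau) = -q\frac{dj}{dq}\big/(j - \alpha)$ as a formal series in $q$. The following sketch (our own helper names and truncation bookkeeping; it is an illustration, not code from \cite{divmf}) computes $j = E_4^3/\Delta$ and then this quotient:

```python
from fractions import Fraction

def sigma(n, k):
    # sum of the k-th powers of the positive divisors of n
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

def mul(f, g, N):
    # product of two power series (lists of q-coefficients), truncated at q^N
    h = [Fraction(0)] * (N + 1)
    for i, fi in enumerate(f[:N + 1]):
        for j, gj in enumerate(g[:N + 1 - i]):
            h[i + j] += fi * gj
    return h

def div(f, g, N):
    # quotient f/g of power series with g[0] != 0, truncated at q^N
    h = [Fraction(0)] * (N + 1)
    for n in range(N + 1):
        h[n] = (f[n] - sum(g[n - k] * h[k] for k in range(n))) / g[0]
    return h

def H_coefficients(alpha, N):
    # coefficients a(0), ..., a(N) of -(1/(2 pi i)) j'/(j - alpha)
    M = N + 1
    E4 = [Fraction(1)] + [Fraction(240 * sigma(n, 3)) for n in range(1, M + 1)]
    E6 = [Fraction(1)] + [Fraction(-504 * sigma(n, 5)) for n in range(1, M + 1)]
    E4cubed = mul(mul(E4, E4, M), E4, M)
    Delta = [(u - v) / 1728 for u, v in zip(E4cubed, mul(E6, E6, M))]
    # c[m] = coefficient of q^(m-1) in j = E4^3/Delta = q^(-1) + 744 + ...
    c = div(E4cubed, Delta[1:], M - 1)
    # factor q^(-1) out of -q dj/dq and of j - alpha, then divide the series
    num = [c[0], Fraction(0)] + [-(m - 1) * c[m] for m in range(2, N + 1)]
    den = [c[0], c[1] - alpha] + c[2:N + 1]
    return div(num, den, N)
```

With $\alpha = 1728$ this reproduces the expansion $1 + 984q + 574488q^2 + 307081056q^3 + \cdots$ appearing in the examples of Section 4.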
Once $y \approx y_0$ has been approximated to sufficient precision, we substitute $y_0$ into the formula for $c(n)$. Since the sequence $c(n)$ is bounded, by taking real parts in the case where $\vert z \vert > 1$, it suffices to show the claimed formula for $x$ in the case where $\vert z \vert = 1$. We have $$c(n) \approx \cos(w_{n-1} \pm 2\pi x).$$ Now $\vert x \vert \leq \tfrac{1}{2}$ and $w_{n-1} \in [0,\pi]$, thus $w_{n-1} \pm 2\pi x \in [-\pi, 2\pi]$. Note that $$\cos^{-1}(\cos(x_0)) = \begin{cases} -x_0 \quad &x_0 \in [-\pi,0) \\ x_0 \quad &x_0 \in [0,\pi] \\ 2\pi - x_0 \quad &x_0 \in (\pi,2\pi] \end{cases}.$$ We thus have $$\pm w_n \approx w_{n-1} \pm 2\pi x$$ or $$w_n \approx 2\pi - (w_{n-1} \pm 2\pi x).$$ Rearranging gives the formulas claimed in \Cref{mainthm}. \section{Examples} In this section we provide some examples of calculating $z$ using \Cref{mainthm} for selected values of $\alpha$. Throughout this section we let $z = x + iy$ and $q := e^{2\pi i\tau}$. We let $a(n), b(n), c(n), w_n, y_0$ be defined as in \Cref{mainthm}. \begin{ex} ($\alpha = 2 \cdot 30^3, z = \sqrt{3}i$) \noindent We have $$-\frac{1}{2\pi i}\frac{j'(\tau)}{j(\tau) - 2 \cdot 30^3} = 1 + 53256q + 2835807768q^2 + 151013228757024q^3 + \cdots.$$ We find that $b(3) = 1.73205083\ldots$ matches the limiting value up to $7$ decimal places. We see from the size of $y_0$ that $\vert z \vert > 1$. We compute \begin{equation*} \begin{aligned} c(1) &= 1.00007\ldots, \\ c(2) &= 1.00000\ldots. \end{aligned} \end{equation*} We see that $c(n) \to 1$, thus $x = 0$. \end{ex} \begin{ex} ($\alpha = -640320^3, z = \tfrac{-1 + \sqrt{163}i}{2}$) \noindent We have $$-\frac{1}{2\pi i}\frac{j'(\tau)}{j(\tau) + 640320^3} = 1 - 262537412640768744q + \cdots.$$ We find that $b(1) = 6.3835726674\ldots$ matches the limiting value up to $30$ decimal places. We see from the size of $y_0$ that $\vert z \vert > 1$. 
We compute \begin{equation*} \begin{aligned} c(1) &= -1.000000000000000000000000000003\ldots, \\ c(2) &= 1.000000000000000000000000000000\ldots. \end{aligned} \end{equation*} We see that $c(n) \approx (-1)^n$, thus $x = -\tfrac{1}{2}$. \end{ex} \begin{ex} ($\alpha = 1728, z = i$) \noindent We have $$-\frac{1}{2\pi i}\frac{j'(\tau)}{j(\tau) - 1728} = 1 + 984q + 574488q^2 + 307081056q^3 + \cdots.$$ We find that $b(5000) = 1.0000220635600152652\ldots$ matches the limiting value up to $4$ decimal places. We compute \begin{equation*} \begin{aligned} c(1) &= 1.001440\ldots, \\ c(2) &= 0.999503\dots, \end{aligned} \end{equation*} thus $x = 0$. \end{ex} \begin{ex} ($\alpha = 1 + i$) \noindent We have $$-\frac{1}{2\pi i}\frac{j'(\tau)}{j(\tau) - (1 + i)} = 1 - (744 - i)q + (158280 - 1486i)q^2 - (35797022 - 1065494i)q^3 + \cdots.$$ We find that $b(100) = .8882136152\ldots$. Since $\alpha$ is nonreal we must have $\vert z \vert > 1$. We compute \begin{equation*} \begin{aligned} x &\approx -\frac{1}{2\pi}(w_{103} + w_{102}) = -0.477227209285886\ldots, \end{aligned} \end{equation*} thus $$z \approx -.4772 + .8882i.$$ To verify the value of $x$, we check that $$j(-.4772 + .8882i) \approx 1.0042 + 0.9983i.$$ \end{ex} \section{Acknowledgements} This research was carried out in fulfillment of the requirements for the M.S. degree in mathematics at Emory University. The author would like to thank Ken Ono for his mentorship and support. \bibliographystyle{plain}
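The real-part recovery in the Heegner-point example ($\alpha = -640320^3$) can be traced through the formulas of \Cref{mainthm} directly; the following sketch uses only the values of $c(1)$ and $c(2)$ quoted in that example (the clipping of $c(n)$ into $[-1,1]$, needed because rounding can push the values slightly past $\pm 1$, is our own safeguard):

```python
import math

# c(n) ~ cos(2*pi*n*x); in the example above c(1) ~ -1 and c(2) ~ 1
c = {1: -1.000000000000000000000000000003, 2: 1.0}
# w_n = arccos(c(n)), clipped into the domain of arccos
w = {n: math.acos(max(-1.0, min(1.0, v))) for n, v in c.items()}

# one of the candidate formulas of the theorem: x ~ -(w_n + w_{n-1})/(2*pi)
x = -(w[2] + w[1]) / (2 * math.pi)
```

This recovers $x = -\tfrac{1}{2}$, in agreement with $z = \tfrac{-1 + \sqrt{163}i}{2}$.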
\section{Introduction} The $k$-\textsc{Center}\xspace problem is a classical problem in theoretical computer science and was first formulated by Hakimi~\cite{hakimi} in 1964. In this problem, given a metric space $(P,\texttt{dist})$ and an integer $k\leq |P|$, the goal is to select a set $C$ of $k$ centers which minimizes the maximum distance of a point in $P$ from its nearest center, i.e., select a set $C$ which minimizes the quantity $\max_{p\in P} \min_{c\in C} \texttt{dist}(p,c)$. A geometric way to view the $k$-\textsc{Center}\xspace problem is to find the minimum radius $r$ such that $k$ closed balls of radius $r$ located at each of the points in $C$ cover all the points in $P$. In most applications, we require that $C\subseteq P$, and this is known as the discrete version of the problem. As an example, one can consider the set $P$ to be important locations in a city and solving the $k$-\textsc{Center}\xspace problem (where $k$ is upper bounded by budget constraints) establishes the locations of fire stations which minimize the response time in the event of a fire. In addition to other applications in facility location, transportation networks, etc., an important application of $k$-\textsc{Center}\xspace is in clustering. With the advent of massive data sets, the problem of efficiently and effectively summarizing this data is crucial. A standard approach for this is via centroid-based clustering algorithms of which $k$-\textsc{Center}\xspace is a special case. Clustering using $k$-\textsc{Center}\xspace has found applications in text summarization, robotics, bioinformatics, pattern recognition, etc.~\cite{clustering-text,clustering-robotics,hennig2015handbook,jiang2004cluster}. \subsection{Prior work on exact \& approximate algorithms for discrete $k$-\textsc{Center}\xspace} The discrete\footnote{Here we mention the known results only for the discrete version of $k$-\textsc{Center}\xspace. 
A discussion about results for the continuous version of the problem is given in~\autoref{subsec:continuous-k-center-discussion}.} $k$-\textsc{Center}\xspace problem is NP-hard~\cite{vazirani-book}, and admits a $2$-approximation~\cite{DBLP:journals/mor/HochbaumS85,DBLP:journals/tcs/Gonzalez85} in $n^{O(1)}$ time where $n$ is the number of points. This approximation ratio is tight and the $k$-\textsc{Center}\xspace problem is NP-hard to approximate in polynomial time to a factor $(2-\epsilon)$ for any constant $\epsilon>0$~\cite{DBLP:journals/dam/HsuN79,DBLP:journals/tcs/Gonzalez85}. Given this intractability, research was aimed at designing parameterized algorithms~\cite{fpt-book} and parameterized approximation algorithms for $k$-center. The $k$-\textsc{Center}\xspace problem is W[2]-hard to approximate to factor better than $2$ even when allowing running times of the form $f(k)\cdot n^{O(1)}$ for any computable function $f$~\cite{DBLP:journals/algorithmica/Feldmann19,DBLP:journals/talg/DemaineFHT05}. The $k$-\textsc{Center}\xspace problem remains W[2]-hard even if we combine the parameter $k$ with other structural parameters such as size of vertex cover or size of feedback vertex set~\cite{DBLP:journals/dam/KatsikarelisLP19}. Agarwal and Procopiuc~\cite{agarwal-procopiuc} designed an algorithm for discrete $k$-\textsc{Center}\xspace on $n$ points in $d$-dimensional Euclidean space which runs in $n^{O\left(d\cdot k^{1-1/d}\right)}$ time. The paradigm of combining parameterized algorithms \& approximation algorithms has been successful in designing algorithms for $k$-center in special topologies such as $d$-dimensional Euclidean space~\cite{agarwal-procopiuc}, planar graphs~\cite{DBLP:conf/soda/Fox-EpsteinKS19}, metrics of bounded doubling dimensions~\cite{DBLP:journals/algorithmica/FeldmannM20}, graphs of bounded highway dimension~\cite{DBLP:journals/algorithmica/Feldmann19,DBLP:conf/esa/BeckerKS18}, etc. 
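The $2$-approximation mentioned above is Gonzalez's farthest-first traversal~\cite{DBLP:journals/tcs/Gonzalez85}: pick an arbitrary first center, then repeatedly add the point farthest from the centers chosen so far. A short sketch (our own implementation, with the Euclidean metric hard-coded for concreteness; the algorithm works for any metric):

```python
import math

def gonzalez_k_center(points, k):
    # Farthest-first traversal: a 2-approximation for discrete k-Center.
    centers = [points[0]]                          # arbitrary first center
    # distance of every point to its nearest chosen center
    d = [math.dist(p, points[0]) for p in points]
    for _ in range(k - 1):
        i = max(range(len(points)), key=d.__getitem__)  # farthest point
        centers.append(points[i])
        d = [min(dj, math.dist(p, points[i])) for dj, p in zip(d, points)]
    return centers, max(d)  # chosen centers and their covering radius
```

Each of the $k$ iterations scans all $n$ points, so the sketch uses $O(nk)$ distance evaluations, and the returned radius is at most twice the optimum.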
Of particular relevance to this paper is the $(1+\epsilon)$-approximation algorithm\footnote{This is also known as an efficient parameterized approximation scheme (EPAS) as the running time is a function of the type $f(k,\epsilon,d)\cdot n^{O(1)}$.} of Agarwal and Procopiuc~\cite{agarwal-procopiuc} which runs in $O(dn\log k) + \left(\dfrac{k}{\epsilon}\right)^{O\left(k^{1-1/d}\right)}\cdot n^{O(1)}$ time. This was generalized by Feldmann and Marx~\cite{DBLP:journals/algorithmica/FeldmannM20} who designed a $(1+\epsilon)$-approximation algorithm running in $\left(\dfrac{k^k}{\epsilon^{O(kD)}}\right)\cdot n^{O(1)}$ time for discrete $k$-\textsc{Center}\xspace in metric spaces of doubling dimension $D$. \subsection{From 2-dimensions to higher dimensions} \subparagraph*{Square root phenomenon for planar graphs and geometric problems in the plane:} For a wide range of algorithmic problems on planar graphs or geometric problems in the plane, a certain {\em square root phenomenon} is observed: the exponent of the running time can be improved from $O(\ell)$ to $O(\sqrt{\ell})$ where $\ell$ is the parameter, or from $O(n)$ to $O(\sqrt{n})$ where $n$ is the input size, and lower bounds indicate that this improvement is essentially best possible. 
There is an ever increasing list of such problems known for planar graphs~\cite{DBLP:journals/siamcomp/ChitnisFHM20,DBLP:conf/icalp/Marx12,DBLP:conf/icalp/KleinM12,MarxPP-FOCS2018,KleinM14,DemaineFHT05,DBLP:conf/stacs/PilipczukPSL13,DBLP:conf/esa/MarxP15,DBLP:conf/fsttcs/LokshtanovSW12,DBLP:journals/corr/AboulkerBHMT15,FominLMPPS16} and in the plane~\cite{DBLP:conf/esa/MarxP15,DBLP:conf/iwpec/Marx06,FominKLPS16,DBLP:journals/jal/AlberF04,DBLP:conf/focs/SmithW98,DBLP:journals/algorithmica/HwangLC93,DBLP:journals/algorithmica/HwangCL93} \subparagraph*{Bounds for higher dimensional Euclidean spaces:} Unlike the situation on planar graphs and in two-dimensions, the program of obtaining tight bounds for higher dimensions is still quite nascent with relatively fewer results~\cite{bane,DBLP:conf/compgeom/MarxS14,biro-higher-d,de-berg-higher-d,tsp-higher-d}. Marx and Sidiropoulos~\cite{DBLP:conf/compgeom/MarxS14} showed that for some problems there is a \emph{limited blessing of low dimensionality}: \textcolor{black}{that is,} for $d$-dimensions the running time can be improved from $n^{\ell}$ to $n^{\ell^{1-1/d}}$ or from $2^{n}$ to $2^{n^{1-1/d}}$ where $\ell$ is a parameter and $n$ is the input size. In contrast, Cohen-Addad et al.~\cite{bane} showed that the two problems of $k$-\textsc{Median}\xspace and $k$-\textsc{Means}\xspace suffer from the \emph{curse of low dimensionality}: even for $4$-dimensional Euclidean space, assuming the Exponential Time Hypothesis\footnote{Recall that the Exponential Time Hypothesis (ETH) has the consequence that $n$-variable 3-SAT cannot be solved in $2^{o(n)}$ time~\cite{eth,eth-2}.} (ETH), there is no $f(k)\cdot n^{o(k)}$ time algorithm, i.e., the brute force algorithm which runs in $n^{O(k)}$ time is asymptotically optimal. 
\subsection{Motivation \& Our Results} In two-dimensional Euclidean space there is an $n^{O(\sqrt{k})}$ algorithm~\cite{agarwal-procopiuc,DBLP:journals/algorithmica/HwangLC93,DBLP:journals/algorithmica/HwangCL93}, and a matching lower bound of $f(k)\cdot n^{o(\sqrt{k})}$ under Exponential Time Hypothesis (ETH) for any computable function $f$~\cite{DBLP:conf/iwpec/Marx06}. Our motivation in this paper is to investigate what is the \emph{correct} complexity of exact and approximate algorithms for the discrete $k$-\textsc{Center}\xspace for higher dimensional Euclidean spaces. In particular, we aim to answer the following two questions: \begin{table}[ht] \noindent\framebox{\begin{minipage}{\textwidth} \begin{description} \item[(Question 1)] Can the running time of the $(1+\epsilon)$-approximation algorithm of \cite{agarwal-procopiuc} be improved from $O(dn\log k) + \left(\dfrac{k}{\epsilon}\right)^{O\left(k^{1-1/d}\right)}\cdot n^{O(1)}$, or is there a (close to) matching lower bound? \item[(Question 2)] The $n^{O\left(d\cdot k^{1-1/d}\right)}$ algorithm of \cite{agarwal-procopiuc} for $d$-dimensional Euclidean space shows that there is a \emph{limited blessing of low dimensionality} for $k$-\textsc{Center}\xspace. But can the term $k^{1-1/d}$ in the exponent be improved, or is it asymptotically tight? \end{description} \end{minipage}} \end{table} \noindent We make progress towards answering both these questions by showing the following theorem: \begin{restatable}{theorem}{domsetd} \normalfont For any $d\geq 2$, under the Exponential Time Hypothesis (ETH), the discrete $k$-\textsc{Center}\xspace problem in $d$-dimensional Euclidean space \begin{description} \item[\textbf{- (Inapproximability result)}] does not admit an $(1+\epsilon)$-approximation in $f(k)\cdot \left(\frac{1}{\epsilon}\right)^{o\left(k^{1-1/d}\right)}\cdot n^{o\left(k^{1-1/d}\right)}$ time where $f$ is any computable function and $n$ is the number of points. 
\item[\textbf{- (Lower bound for exact algorithm)}] cannot be solved in $f(k)\cdot n^{o\left(k^{1-1/d}\right)}$ time where $f$ is any computable function and $n$ is the number of points. \end{description} \label{thm:dom-set-d-dimensions} \end{restatable} \autoref{thm:dom-set-d-dimensions} answers Question~$1$ by showing that the running time of the $(1+\epsilon)$-approximation algorithm of Agarwal and Procopiuc \cite{agarwal-procopiuc} is essentially tight, i.e., the dependence on $\epsilon$ cannot be improved even if we allow a larger dependence on both $k$ and $n$. \autoref{thm:dom-set-d-dimensions} answers Question~$2$ by showing that the running time of the exact algorithm of Agarwal and Procopiuc \cite{agarwal-procopiuc} is asymptotically tight, i.e., the exponent of $k^{1-1/d}$ cannot be asymptotically improved even if we allow a larger dependence on $k$. \subsection{Discussion of the continuous $k$-\textsc{Center}\xspace problem} \label{subsec:continuous-k-center-discussion} In the continuous version of the $k$-\textsc{Center}\xspace problem, the centers are not required to be picked from the original set of input points. The $n^{O\left(d\cdot k^{1-1/d}\right)}$ algorithm of Agarwal and Procopiuc~\cite{agarwal-procopiuc} also works for this continuous version of the $k$-\textsc{Center}\xspace problem in $\mathbb{R}^d$. Marx \cite{dm-esa-05} showed the W[1]-hardness of $k$-\textsc{Center}\xspace in $(\mathbb{R}^2, \ell_{\infty})$ parameterized by $k$. Cabello et al.~\cite{dm-continuous} studied the complexity of this problem parameterized by the dimension, and showed the W[1]-hardness of $4$-\textsc{Center}\xspace in $(\mathbb{R}^d, \ell_{\infty})$ parameterized by $d$. Additionally, they also obtained the W[1]-hardness of $2$-\textsc{Center}\xspace in $(\mathbb{R}^d, \ell_{2})$ parameterized by $d$\textcolor{black}{;} this reduction also rules out existence of $n^{o(d)}$ algorithms for this problem under the Exponential Time Hypothesis (ETH). 
It is an interesting open question whether the $n^{O\left(d\cdot k^{1-1/d}\right)}$ algorithm of Agarwal and Procopiuc \cite{agarwal-procopiuc} is also asymptotically tight for the continuous version of the problem: one way to possibly prove this would be to extend the W[1]-hardness reduction of Marx \cite{dm-esa-05} for continuous $k$-\textsc{Center}\xspace in $\mathbb{R}^2$ (parameterized by $k$) to higher dimensions using the framework of Marx and Sidiropoulos \cite{DBLP:conf/compgeom/MarxS14}. Our reduction in this paper does not extend to the continuous version. \subsection{Notation} The set $\{1,2,\ldots, n\}$ is denoted by $[n]$. All vectors considered in this paper have length $d$. If $\mathbf{a}$ is a vector then for each $i\in [d]$ its $i$\textcolor{black}{-th} coordinate is denoted by $\mathbf{a}[i]$. Addition and subtraction of vectors are denoted by $\oplus$ and $\ominus$, respectively. The $i$\textcolor{black}{-th} unit vector is denoted by $\mathbf{e}_i$ and has $\mathbf{e}_{i}[i]=1$ and $\mathbf{e}_{i}[j]=0$ for each $j\neq i$. The $d$-dimensional vector \textcolor{black}{whose every} coordinate \textcolor{black}{equals} $1$ is denoted by $\mathbf{1}^d$. If $u$ is a point and $X$ is a set of points, then $\texttt{dist}(u, X) = \min_{x\in X} \texttt{dist}(u,x)$. We will sometimes abuse notation slightly and use $x$ to denote both the name and location of the point $x$. \section{Lower bounds for exact \& approximate $k$-\textsc{Center}\xspace in $d$-dimensional Euclidean space} \label{sec:general-d} The goal of this section is to prove \autoref{thm:dom-set-d-dimensions} which is restated below: \domsetd* \subparagraph*{Roadmap to prove \autoref{thm:dom-set-d-dimensions}:} To prove \autoref{thm:dom-set-d-dimensions}, we design a gap reduction (described in \autoref{subsec:redn-general-d}) from a constraint satisfaction problem (CSP) to the $k$-\textsc{Center}\xspace problem. 
The definition and statement of the lower bound for the CSP due to Marx and Sidiropoulos \cite{DBLP:conf/compgeom/MarxS14} is given in \autoref{subsec:marx-sidiropoulos}. The correctness of the reduction is shown in~\autoref{subsec:k-center-general-d-easy} and~\autoref{subsec:k-center-general-d-hard}. Finally, everything is tied together in \autoref{subsec:finishing-the-proof} which contains the proof of \autoref{thm:dom-set-d-dimensions}. \subsection{Lower bound for $d$-dimensional geometric $\geq$-\text{CSP}\xspace~\cite{DBLP:conf/compgeom/MarxS14}} \label{subsec:marx-sidiropoulos} This section introduces the $d$-dimensional geometric $\geq$-\text{CSP}\xspace problem of Marx and Sidiropoulos \cite{DBLP:conf/compgeom/MarxS14}. First we start with some definitions before stating the formal lower bound (\autoref{thm:marx-sidiropoulos}) that will be used to prove \autoref{thm:dom-set-d-dimensions}. Constraint Satisfaction Problems (CSPs) are a general way to represent several important problems in theoretical computer science. In this paper, we will only need a subclass of CSPs called binary CSPs which we define below. \begin{definition} \normalfont An instance of a binary constraint satisfaction problem (CSP) is a triple $\mathcal{I}=(\mathcal{V}, \mathcal{D}, \mathcal{C})$ where $\mathcal{V}$ is a set of variables, $\mathcal{D}$ is a domain of values and $\mathcal{C}$ is a set of constraints. There are two types of constraints: \begin{itemize} \item \underline{\emph{Unary constraints}}: For some $v\in \mathcal{V}$ there is a unary constraint $\langle v, R_v \rangle$ where $R_v \subseteq \mathcal{D}$. \item \underline{\emph{Binary constraints}}: For some $u,v\in \mathcal{V}$, \textcolor{black}{$u \neq v$}, there is a binary constraint $\big\langle (u,v), R_{u,v} \big\rangle$ where $R_{u,v}\subseteq \mathcal{D} \times \mathcal{D}$. 
\end{itemize} \label{defn:binary-csp} \end{definition} Solving a given CSP instance $\mathcal{I}=(\mathcal{V},\mathcal{D},\mathcal{C})$ means checking whether there exists a satisfying assignment for it, i.e., a function $f:\mathcal{V}\to \mathcal{D}$ such that all the constraints are satisfied. For a binary CSP, a satisfying assignment $f$ has the property that for each unary constraint $\langle v, R_v \rangle$ we have $f(v)\in R_v$ and for each binary constraint $\big\langle (u,v), R_{u,v} \big\rangle$ we have $\left(f(u),f(v)\right)\in R_{u,v}$. The constraint graph of a given CSP instance $\mathcal{I}=(\mathcal{V},\mathcal{D},\mathcal{C})$ is an undirected graph $G_{\mathcal{I}}$ whose vertex set is $\mathcal{V}$ and whose adjacency relation is defined as follows: two vertices $u,v\in \mathcal{V}$ are adjacent in $G_{\mathcal{I}}$ if there is a constraint in $\mathcal{I}$ which contains both $u$ and $v$. Marx and Sidiropoulos~\cite{DBLP:conf/compgeom/MarxS14} observed that binary CSPs whose constraint graph is a subgraph of the $d$-dimensional grid are useful in showing lower bounds for geometric problems in $d$ dimensions. \begin{definition} \normalfont The $d$-dimensional grid $\textup{R}[N, d]$ is an undirected graph with vertex set $[N]^d$ and the adjacency relation is as follows: two vertices $(a_1, a_2, \ldots, a_d)$ and $(b_1, b_2, \ldots, b_d)$ have an edge between them if and only if $\sum_{i=1}^{d} |a_i-b_i|=1$.
\label{defn:d-dimensional-grid} \end{definition} \begin{definition} \normalfont A $d$-dimensional geometric $\geq$-\text{CSP}\xspace $\mathcal{I}=(\mathcal{V},\mathcal{D},\mathcal{C})$ is a binary CSP whose \begin{itemize} \item set of variables $\mathcal{V}$ is a subset of $\textup{R}[N,d]$ for some $N\geq 1$, \item domain is $[\delta]^d$ for some integer $\delta\geq 1$, \item constraint graph $G_{\mathcal{I}}$ is an \emph{induced} subgraph of $\textup{R}[N,d]$, \item unary constraints are arbitrary, \textcolor{black}{and} \item binary constraints are of the following type: if $\mathbf{a},\mathbf{a}' \in \mathcal{V}$ such that $\mathbf{a}'=\mathbf{a}\oplus\mathbf{e}_i$ for some $i\in [d]$ then there is a binary constraint $\big\langle (\mathbf{a},\mathbf{a}'), R_{\mathbf{a},\mathbf{a}'} \big\rangle$ where $R_{\mathbf{a},\mathbf{a}'}=\left\{ \left( \mathbf{x},\mathbf{y}\right)\in R_{\mathbf{a}}\times R_{\mathbf{a}'}\ \mid\ \mathbf{x}[i]\geq \mathbf{y}[i] \right\}$. \end{itemize} \label{defn:d-dimensional-geometric-leqcsp} \end{definition} Observe that the set of unary constraints of a $d$-dimensional geometric $\geq$-\text{CSP}\xspace is sufficient to completely define it. The size $|\mathcal{I}|$ of a binary CSP $\mathcal{I}=(\mathcal{V},\mathcal{D},\mathcal{C})$ is the combined size of the variables, domain and the constraints. With appropriate preprocessing (e.g., combining different constraints on the same variables) we can assume that $|\mathcal{I}|=\left(|\mathcal{V}|+|\mathcal{D}|\right)^{O(1)}$. We now state the result of Marx and Sidiropoulos \cite{DBLP:conf/compgeom/MarxS14} which gives a lower bound on the complexity of checking whether a given $d$-dimensional geometric $\geq$-\text{CSP}\xspace has a satisfying assignment. 
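Before stating it, the following toy instance (our own illustration; the particular constraint sets below are assumptions chosen for exposition, not taken from \cite{DBLP:conf/compgeom/MarxS14}) may help digest \autoref{defn:d-dimensional-geometric-leqcsp}.

```latex
% A toy 2-dimensional geometric >=-CSP with N = 2 and delta = 2.
% Variables: V = {(1,1), (1,2)}, inducing a single edge of R[2,2];
% domain: [2]^2.  Illustrative unary constraints:
%   R_{(1,1)} = {(1,1)}   and   R_{(1,2)} = {(2,1), (2,2)}.
% Since (1,2) = (1,1) + e_2, there is exactly one binary constraint:
\[
  R_{(1,1),(1,2)}
    = \left\{ (\mathbf{x},\mathbf{y}) \in R_{(1,1)} \times R_{(1,2)}
        \ \mid\ \mathbf{x}[2] \geq \mathbf{y}[2] \right\}
    = \left\{ \big( (1,1),\, (2,1) \big) \right\},
\]
% because the pair with y = (2,2) fails x[2] >= y[2] (as 1 < 2).
% Hence f((1,1)) = (1,1), f((1,2)) = (2,1) is the unique satisfying
% assignment of this instance.
```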
\begin{theorem} \normalfont \citep[Theorem 2.10]{DBLP:conf/compgeom/MarxS14} If for some fixed $d \geq 2$, there is an $f(|\mathcal{V}|)\cdot |\mathcal{I}|^{o\left(|\mathcal{V}|^{1-1/d}\right)}$ time algorithm for solving a $d$-dimensional geometric $\geq$-\text{CSP}\xspace $\mathcal{I}$ for some computable function $f$, then the Exponential Time Hypothesis (ETH) fails. \label{thm:marx-sidiropoulos} \end{theorem} \begin{remark} \label{remark:issue-of-geqcsp-versus-leqcsp} \normalfont The problem defined by Marx and Sidiropoulos \cite{DBLP:conf/compgeom/MarxS14} is actually $d$-dimensional geometric $\leq$-\text{CSP}\xspace which has $\leq$-constraints instead of the $\geq$-constraints. However, for each $\mathbf{a}\in \mathcal{V}$, by replacing each element $\mathbf{x}\in R_{\mathbf{a}}$ of the unary constraint by $\mathbf{y}$ such that $\mathbf{y}[i]=\delta+1-\mathbf{x}[i]$ for each $i\in [d]$, it is easy to see that $d$-dimensional geometric $\leq$-\text{CSP}\xspace and $d$-dimensional geometric $\geq$-\text{CSP}\xspace are equivalent. \end{remark} \subsection{Reduction from $d$-dimensional geometric $\geq$-\text{CSP}\xspace to $k$-\textsc{Center}\xspace in $\mathbb{R}^d$} \label{subsec:redn-general-d} We are now ready to describe our reduction from $d$-dimensional geometric $\geq$-\text{CSP}\xspace to $k$-\textsc{Center}\xspace in $\mathbb{R}^d$. Fix any $d\geq 2$. Let $\mathcal{I}=(\mathcal{V},\mathcal{D},\mathcal{C})$ be a $d$-dimensional geometric $\geq$-\text{CSP}\xspace instance on variables $\mathcal{V}$ and domain $[\delta]^d$ for some integer $\delta\geq 1$. We fix\footnote{For simplicity of presentation, we choose $r=1/4$ instead of $r=1$: by scaling, the result also holds for $r=1$.} the following two quantities: \begin{gather}\label{defn:r-epsilon-general-d} r:=\frac{1}{4}\quad \text{and}\quad \textcolor{black}{\epsilon:=\frac{r^2}{(d-1)\delta^{2}} = \frac{1}{16(d-1)\delta^2}}.
\end{gather} \textcolor{black}{Since $d\geq 2$ and $\delta\geq 1$, we obtain the following bounds from} \autoref{defn:r-epsilon-general-d}, \begin{align} 0 < \epsilon\leq \epsilon\delta\leq \epsilon\delta^2\leq \epsilon\delta^2(d-1)= r^2 = \frac{1}{16}. \label{eqn:always-to-be-cited} \end{align} Given an instance $\mathcal{I}=(\mathcal{V}, \mathcal{D}, \mathcal{C})$ of $d$-dimensional geometric $\geq$-\text{CSP}\xspace, we add a set $\mathcal{U}$ of points in $\mathbb{R}^d$ as described in \autoref{table:general-d:construction} \textcolor{black}{and} \autoref{table-special-sets-general-d}. This set of points forms the input to the instance of the $|\mathcal{V}|$-\textsc{Center}\xspace problem. \begin{table}[ht] \noindent\framebox{\begin{minipage}{\textwidth} \begin{enumerate} \item[(1)] \underline{Corresponding to variables}: If $\mathbf{a}\in \mathcal{V}$ then we add the following set of points, which are collectively called $\textsc{Border}[\mathbf{a}]$: \begin{itemize} \item For each $i\in [d]$, the point $B_{\mathbf{a}}^{+i}$ which is located at $\mathbf{a} \oplus \mathbf{e}_{i}\cdot \textcolor{black}{r(1-\epsilon)}\oplus (\mathbf{1}^d-\mathbf{e}_i)\cdot 2\epsilon \delta$. \item For each $i\in [d]$, the point $B_{\mathbf{a}}^{-i}$ which is located at $\mathbf{a} \ominus \mathbf{e}_{i}\cdot \textcolor{black}{r(1-\epsilon)} \ominus (\mathbf{1}^d-\mathbf{e}_i)\cdot 2\epsilon \delta$. \end{itemize} These points are \textcolor{black}{referred to} as \emph{border} points. \item[(2)] \underline{Corresponding to unary constraints}: If $\mathbf{a}\in \mathcal{V}$ and $\big\langle (\mathbf{a}), R_{\mathbf{a}} \big\rangle$ is the unary constraint on $\mathbf{a}$, then we add the following set of points, which are collectively called $\textsc{Core}[\mathbf{a}]$: \begin{itemize} \item for each $\mathbf{x}\in R_{\mathbf{a}}\subseteq [\delta]^d$ we add a point called $C_{\mathbf{a}}^{\mathbf{x}}$ located at $\mathbf{a} \oplus \epsilon\cdot \mathbf{x}$.
\end{itemize} These points are \textcolor{black}{referred to} as \emph{core} points. \item[(3)] \underline{Corresponding to adjacencies in $G_{\mathcal{I}}$}: For every edge $(\mathbf{a}, \mathbf{a'})$ in $G_{\mathcal{I}}$ we add a collection of $\delta$ points denoted by $\mathcal{S}_{\{\mathbf{a},\mathbf{a}'\}}$. Assume, without loss of generality, that $\mathbf{a'} = \mathbf{a}\oplus\mathbf{e}_i$ for some $i\in[d]$. Then the set of points $\mathcal{S}_{\{\mathbf{a},\mathbf{a}'\}}$ is defined as follows: \begin{itemize} \item for each $\ell\in [\delta]$ we add a point $S_{\{\mathbf{a},\mathbf{a}'\}}^{\ell}$ which is located at $\mathbf{a}\oplus\mathbf{e}_{i}\cdot\left( (1-\epsilon)2r+\epsilon\ell\right)$. \end{itemize} These points are \textcolor{black}{referred to} as \emph{secondary} points. \end{enumerate} \end{minipage}} \vspace{2mm} \caption{The set $\mathcal{U}$ of points in $\mathbb{R}^d$ \big(which gives an instance of $k$-\textsc{Center}\xspace\big) constructed from an instance $\mathcal{I}=(\mathcal{V},\mathcal{D},\mathcal{C})$ of $d$-dimensional geometric $\geq$-\text{CSP}\xspace.} \label{table:general-d:construction} \end{table} \textcolor{black}{Note that we add at most $|\mathcal{V}|\cdot 2d$ many border points, at most $|\mathcal{C}|$ many core points, and at most $|\mathcal{V}|^{2}\cdot \delta$ many secondary points}. Hence, the total number of points $n$ in the instance $\mathcal{U}$ is $\leq |\mathcal{V}|\cdot 2d + |\mathcal{C}| +|\mathcal{V}|^{2}\cdot \delta = |\mathcal{I}|^{O(1)}$ where $|\mathcal{I}|=|\mathcal{V}|+|\mathcal{D}|+|\mathcal{C}|$. We now prove some preliminary lemmas to be used later in~\autoref{subsec:k-center-general-d-easy} and~\autoref{subsec:k-center-general-d-hard}. \begin{table}[ht] \noindent\framebox{\begin{minipage}{\textwidth} \begin{gather} \text{For each}\ \mathbf{a}\in \mathcal{V},\ \text{let}\ \mathcal{D}[\mathbf{a}] := \textsc{Core}[\mathbf{a}]\ \bigcup\ \textsc{Border}[\mathbf{a}].
\label{eqn:defn-of-d[a]-general-d} \\ \text{The set of primary points is \textsc{Primary}}:= \bigcup_{\mathbf{a}\in \mathcal{V}} \mathcal{D}[\mathbf{a}]. \label{eqn:defn-of-primary-balls-general-d} \\ \text{The set of secondary points is \textsc{Secondary}} := \bigcup_{\mathbf{a}\ \&\ \mathbf{a'}\ \text{forms an edge in}\ G_\mathcal{I}} \mathcal{S}_{\{\mathbf{a},\mathbf{a}'\}}. \label{eqn:defn-of-secondary-balls-general-d}\\ \text{The final collection of points is}\ \mathcal{U} := \textsc{Primary}\ \bigcup\ \textsc{Secondary}. \label{eqn:defn-of-set-of-ball-general-d} \end{gather} \end{minipage}} \vspace{2mm} \caption{Notation for some special subsets of points from $\mathcal{U}$. Note that a primary point is either a core point or a border point.} \label{table-special-sets-general-d} \end{table} \subsubsection{Preliminary lemmas} \begin{lemma} \normalfont For each $\mathbf{a}\in \mathcal{V}$ and $i\in [d]$, we have $\texttt{dist}\left( B_{\mathbf{a}}^{+i}, B_{\mathbf{a}}^{-i}\right)\geq 2r(1+\epsilon)$. \label{lem:borders-dont-intersect-2d} \end{lemma} \begin{proof} Fix any $\mathbf{a}\in \mathcal{V}$ and $i\in [d]$. By~\autoref{table:general-d:construction}, the points $B_{\mathbf{a}}^{+i}$ and $B_{\mathbf{a}}^{-i}$ are located at $\mathbf{a} \oplus \mathbf{e}_{i}\cdot r(1-\epsilon) \oplus (\mathbf{1}^d-\mathbf{e}_i)\cdot 2\epsilon \delta$ and $\mathbf{a} \ominus \mathbf{e}_{i}\cdot r(1-\epsilon) \ominus (\mathbf{1}^d-\mathbf{e}_i)\cdot 2\epsilon \delta$ respectively. Hence, we have that \begin{align*} \texttt{dist}\left(B_{\mathbf{a}}^{+i}, B_{\mathbf{a}}^{-i}\right)^2 & = (2r(1-\epsilon))^{2} + (d-1)\cdot (4\epsilon \delta)^{2} = (2r(1-\epsilon))^{2} + 16\epsilon \cdot (d-1)\epsilon\delta^{2}, \\ & = (2r(1-\epsilon))^{2} + 16\epsilon \cdot r^2, \tag{by definition of $\epsilon$ in \autoref{defn:r-epsilon-general-d}}\\ & = (2r)^2[(1-\epsilon)^2 + 4\epsilon] = (2r(1+\epsilon))^2. 
\end{align*} \end{proof} \begin{lemma} \normalfont For each $\mathbf{a}\in \mathcal{V}$, the distance between any two points in $\textsc{Core}[\mathbf{a}]$ is $< r$. \label{lem:core-pairwise-intersects-general-d} \end{lemma} \begin{proof} Fix any $\mathbf{a} \in \mathcal{V}$. Consider any two points in $\textsc{Core}[\mathbf{a}]$, say $C_{\mathbf{a}}^{\mathbf{x}}$ and $C_{\mathbf{a}}^{\mathbf{y}}$, for some $\mathbf{x}\neq \mathbf{y}$. By \autoref{table:general-d:construction}, these points are located at $\mathbf{a}\oplus \epsilon\cdot \mathbf{x}$ and $\mathbf{a}\oplus \epsilon\cdot \mathbf{y}$ respectively. Hence, we have \begin{align*} \texttt{dist}\left( C_{\mathbf{a}}^{\mathbf{x}}, C_{\mathbf{a}}^{\mathbf{y}}\right)^2 & = \left(\epsilon\cdot \texttt{dist}(\mathbf{x},\mathbf{y})\right)^{2}, \\ & \leq \epsilon^2\cdot d\cdot (\delta-1)^{2}, \tag{since $\mathbf{x},\mathbf{y} \in R_{\mathbf{a}} \subseteq [\delta]^d$}\\ & = \frac{d(\delta-1)^2}{(d-1)^2\delta^4}\cdot r^4, \tag{by definition of $\epsilon$ in \autoref{defn:r-epsilon-general-d}}\\ & \leq \frac{1}{8}\cdot r^4 < r^2. \tag{since $d\geq 2$ and $\delta \geq 1$} \end{align*} \end{proof} \begin{lemma} \normalfont For each $\mathbf{a}\in \mathcal{V}$, the distance of any point from $\textsc{Core}[\mathbf{a}]$ to any point from $\textsc{Border}[\mathbf{a}]$ is $<2r$. \label{lem:core-intersects-all-border-general-d} \end{lemma} \begin{proof} Fix any $\mathbf{a}\in \mathcal{V}$ and consider any point $C_{\mathbf{a}}^{\mathbf{x}}\in \textsc{Core}[\mathbf{a}]$ where $\mathbf{x}\in R_{\mathbf{a}}\subseteq [\delta]^{d}$. We prove this lemma by showing that, for each $i\in [d]$, the point $C_{\mathbf{a}}^{\mathbf{x}}$ is \textcolor{black}{at} distance $< 2r$ from \textcolor{black}{both} the points $B_{\mathbf{a}}^{+i}$ and $B_{\mathbf{a}}^{-i}$. Fix some $i\in [d]$.
\begin{enumerate} \item[(i)] By \autoref{table:general-d:construction}, the points $C_{\mathbf{a}}^{\mathbf{x}}$ and $B_{\mathbf{a}}^{+i}$ are located at $\mathbf{a}\oplus \epsilon\cdot \mathbf{x}$ and $\mathbf{a} \oplus \mathbf{e}_{i}\cdot r(1-\epsilon) \oplus (\mathbf{1}^d-\mathbf{e}_i)\cdot 2\epsilon \delta$ respectively. Hence, we have \begin{align*} \texttt{dist}\left(C_{\mathbf{a}}^{\mathbf{x}} , B_{\mathbf{a}}^{+i}\right)^2 & = (r(1-\epsilon)-\epsilon\cdot \mathbf{x}[i])^{2} + \sum_{j=1 \colon j\neq i}^{d} (2\epsilon \delta- \epsilon\cdot \mathbf{x}[j] )^2, \\ & \leq (r(1-\epsilon))^2 + (d-1)(2\epsilon\delta)^2, \tag{since $\mathbf{x}[i],\mathbf{x}[j] \geq 1$} \\ & = (r(1-\epsilon))^2 + 4\epsilon r^2, \tag{by definition of $\epsilon$ in \autoref{defn:r-epsilon-general-d}}\\ & = (r(1+\epsilon))^2 < (2r)^2. \tag{since $\epsilon < 1$} \end{align*} \item[(ii)] By~\autoref{table:general-d:construction}, the points $C_{\mathbf{a}}^{\mathbf{x}}$ and $B_{\mathbf{a}}^{-i}$ are located at $\mathbf{a}\oplus \epsilon\cdot \mathbf{x}$ and $\mathbf{a} \ominus \mathbf{e}_{i}\cdot r(1-\epsilon) \ominus (\mathbf{1}^d-\mathbf{e}_i)\cdot 2\epsilon \delta$ respectively. Hence, we have \begin{align*} \texttt{dist}\left(C_{\mathbf{a}}^{\mathbf{x}} , B_{\mathbf{a}}^{-i}\right)^2 & = (r(1-\epsilon)+\epsilon\cdot \mathbf{x}[i])^{2} + \sum_{j=1 \colon j\neq i}^{d} (\epsilon\cdot \mathbf{x}[j] +2\epsilon\delta)^2, \\ & \leq (r(1-\epsilon) + \epsilon\delta)^2 + (d-1)(3\epsilon\delta)^2, \tag{since $\mathbf{x}[i],\mathbf{x}[j] \leq \delta$}\\ & = (r(1-\epsilon) + \epsilon\delta)^2 + 9\epsilon r^2, \tag{by definition of $\epsilon$} \\ & \leq 2r^2(1-\epsilon)^2 + 2\epsilon^2\delta^2 + 9\epsilon r^2, \tag{since $(\alpha+\beta)^2 \leq 2\alpha^2 + 2\beta^2$}\\ & \leq 2r^2(1-\epsilon)^2 + 11\epsilon r^2, \tag{since $\epsilon\delta^2 \leq r^2$}\\ & = 2r^2((1-\epsilon)^2 + 5.5\epsilon) < 2r^2(1+1.75\epsilon)^2 < (2r)^2. 
\tag{since $\epsilon \leq 1/16$} \end{align*} \end{enumerate} \end{proof} \begin{claim} \normalfont For each $\mathbf{a}\in \mathcal{V}$, the distance of $\mathbf{a}$ to any point \textcolor{black}{in} $\textsc{Border}[\mathbf{a}]$ is $r(1+\epsilon)$. \label{lem:D[a]-are-close-to-a} \end{claim} \begin{proof} Let $p$ be any point in $\textsc{Border}[\mathbf{a}]$. Then $p=B_{\mathbf{a}}^{+i}$ or $p=B_{\mathbf{a}}^{-i}$ for some $i\in [d]$, and in both cases we have \begin{align*} \texttt{dist}(p,\mathbf{a})^2 = (r(1-\epsilon))^2 + (d-1)(2\epsilon\delta)^2 = r^2(1-\epsilon)^2 + 4\epsilon r^2 = (r(1+\epsilon))^2, \end{align*} \textcolor{black}{where the second equality is obtained by the definition of $\epsilon$} (\autoref{defn:r-epsilon-general-d}). \end{proof} \begin{lemma} \normalfont For each $\mathbf{a}\in \mathcal{V}$ and each $i\in [d]$, \begin{itemize} \item If $w\in \mathcal{U}$ such that $\texttt{dist}\left( w, B_{\mathbf{a}}^{+i}\right)<2r(1+\epsilon)$ then $w\in \left( \mathcal{D}[\mathbf{a}]\ \bigcup\ \mathcal{S}_{\{\mathbf{a},\mathbf{a}\oplus\mathbf{e}_{i} \}}\right)$. \item If $w\in \mathcal{U}$ such that $\texttt{dist}\left( w, B_{\mathbf{a}}^{-i}\right)<2r(1+\epsilon)$ then $w\in \left( \mathcal{D}[\mathbf{a}]\ \bigcup\ \mathcal{S}_{\{\mathbf{a},\mathbf{a}\ominus\mathbf{e}_{i} \}}\right)$. \end{itemize} \label{lem:border-doesnt-intersect-other-connectors-general-d} \end{lemma} \begin{proof} The proof of this lemma is quite long, and is hence deferred to~\autoref{app:long-lemma-proof} to maintain the flow of the paper. \end{proof} \begin{remark} \normalfont \autoref{lem:border-doesnt-intersect-other-connectors-general-d} gives a necessary but not sufficient condition.
Also, it might be the case that for some $\mathbf{a}\in \mathcal{V}$ and $i\in [d]$ the vector $\mathbf{a}\oplus\mathbf{e}_{i}\notin \mathcal{V} \left(\mbox{resp., } \mathbf{a}\ominus \mathbf{e}_{i}\notin \mathcal{V}\right)$ in which case the set $\mathcal{S}_{\{\mathbf{a},\mathbf{a}\oplus\mathbf{e}_{i} \}} \left(\mbox{resp., } \mathcal{S}_{\{\mathbf{a},\mathbf{a}\ominus \mathbf{e}_{i} \}}\right)$ is empty. \end{remark} \begin{lemma} \normalfont Let $\mathbf{a}\in \mathcal{V}$ and $i\in [d]$ be such that $\mathbf{a}':=(\mathbf{a}\oplus\mathbf{e}_i) \in \mathcal{V}$. For each $\ell\in [\delta]$, \begin{itemize} \item[(1)] If $\mathbf{x}\in R_{\mathbf{a}}$ and $\ell\leq \mathbf{x}[i]$, then $\texttt{dist}\left(C_{\mathbf{a}}^{\mathbf{x}} , S_{\{\mathbf{a},\mathbf{a}'\}}^{\ell}\right)< 2r$. \item[(2)] If $\mathbf{x}\in R_{\mathbf{a}}$ and $\ell> \mathbf{x}[i]$, then $\texttt{dist}\left(C_{\mathbf{a}}^{\mathbf{x}} , S_{\{\mathbf{a},\mathbf{a}'\}}^{\ell}\right)\geq 2r(1+\epsilon)$. \item[(3)] If $\mathbf{y}\in R_{\mathbf{a}'}$ and $\ell> \mathbf{y}[i]$, then $\texttt{dist}\left( C_{\mathbf{a}'}^{\mathbf{y}} , S_{\{\mathbf{a},\mathbf{a}'\}}^{\ell}\right)< 2r$. \item[(4)] If $\mathbf{y}\in R_{\mathbf{a}'}$ and $\ell\leq \mathbf{y}[i]$, then $\texttt{dist}\left( C_{\mathbf{a}'}^{\mathbf{y}} , S_{\{\mathbf{a},\mathbf{a}'\}}^{\ell}\right)\geq 2r(1+\epsilon)$. \end{itemize} \label{lem:how-H-intersects-with-geq-general-d} \end{lemma} \begin{proof} Recall from~\autoref{table:general-d:construction} that the points $C_{\mathbf{a}}^{\mathbf{x}}$ and $S_{\{\mathbf{a},\mathbf{a}'\}}^{\ell}$ are located at $\mathbf{a}\oplus \epsilon\cdot \mathbf{x}$ and $\mathbf{a}\oplus\mathbf{e}_{i}\cdot((1-\epsilon)2r+\epsilon\ell)$ respectively.
\begin{itemize} \item[(1)] If $\ell\leq \mathbf{x}[i]$, then $\texttt{dist}\left( C_{\mathbf{a}}^{\mathbf{x}} , S_{\{\mathbf{a},\mathbf{a}'\}}^{\ell} \right)^2$ \begin{align*} &=(2r(1-\epsilon) + \epsilon(\ell-\mathbf{x}[i]))^2 + \sum_{j=1 \colon j\neq i}^{d} (\epsilon\cdot \mathbf{x}[j])^2, \\ & \leq (2r(1-\epsilon))^2 + (d-1)\epsilon^2\delta^2 = (2r(1-\epsilon))^2 + \epsilon r^2 \tag{since $\ell \leq \mathbf{x}[i]$ and $\mathbf{x}[j] \leq \delta$} \\ & = (2r)^2\left((1-\epsilon)^2 + \frac{\epsilon}{4}\right) < (2r)^2. \tag{since $0< \epsilon < 1$} \end{align*} \item[(2)] If $\ell> \mathbf{x}[i]$, then $\texttt{dist}\left( C_{\mathbf{a}}^{\mathbf{x}} , S_{\{\mathbf{a},\mathbf{a}'\}}^{\ell} \right)^2$ \begin{align*} &=(2r(1-\epsilon) + \epsilon(\ell-\mathbf{x}[i]))^2 + \sum_{j=1 \colon j\neq i}^{d} (\epsilon\cdot \mathbf{x}[j])^2, \\ & \geq (2r(1-\epsilon) + \epsilon)^2 = (2r(1-\epsilon) + 4r\epsilon)^2 = (2r(1+\epsilon))^2. \tag{since $\ell > \mathbf{x}[i]$ and $4r = 1$} \end{align*} \end{itemize} We now show the remaining two claims: recall from~\autoref{table:general-d:construction} that the points $C_{\mathbf{a}'}^{\mathbf{y}}$ and $S_{\{\mathbf{a},\mathbf{a}'\}}^{\ell}$ are located at $(\mathbf{a}'\oplus \epsilon\cdot \mathbf{y}) = \mathbf{a}\oplus \mathbf{e}_i \oplus \epsilon\cdot \mathbf{y}$ and $\mathbf{a}\oplus\mathbf{e}_{i}\cdot((1-\epsilon)2r + \epsilon\ell)$ respectively. 
\begin{itemize} \item[(3)] If $\ell> \mathbf{y}[i]$, then $\texttt{dist}\left( C_{\mathbf{a}'}^{\mathbf{y}} , S_{\{\mathbf{a},\mathbf{a}'\}}^{\ell} \right)^2$ \begin{align*} & = (1+\epsilon\cdot\mathbf{y}[i] - (1-\epsilon)2r -\epsilon\ell)^2 + \sum_{j=1 \colon j\neq i}^{d} (\epsilon\cdot \mathbf{y}[j])^2, \\ & \leq (4r+\epsilon\cdot\mathbf{y}[i] - (1-\epsilon)2r -\epsilon\ell)^2 + (d-1)\epsilon^2\delta^2, \tag{since $4r=1$ and $\mathbf{y}[j]\leq \delta$}\\ & = (2r(1+\epsilon) - \epsilon (\ell-\mathbf{y}[i]))^2 + \epsilon r^2, \tag{since $(d-1)\epsilon\delta^2 = r^2$}\\ & \leq (2r(1+\epsilon) - \epsilon)^2 + \epsilon r^2, \tag{since $\ell > \mathbf{y}[i]$}\\ & = (2r(1-\epsilon))^2 + \epsilon r^2, \tag{since $4r =1$}\\ & = (2r)^2\left((1-\epsilon)^2 + \frac{\epsilon}{4}\right) < (2r)^2. \tag{since $0< \epsilon < 1$} \end{align*} \item[(4)] If $\ell\leq \mathbf{y}[i]$, then $\texttt{dist}\left( C_{\mathbf{a}'}^{\mathbf{y}} , S_{\{\mathbf{a},\mathbf{a}'\}}^{\ell} \right)^2$ \begin{align*} & = (1+\epsilon\cdot\mathbf{y}[i] - (1-\epsilon)2r -\epsilon\ell)^2 + \sum_{j=1 \colon j\neq i}^{d} (\epsilon\cdot \mathbf{y}[j])^2, \\ & \geq (2r(1+\epsilon) + \epsilon (\mathbf{y}[i]-\ell))^2, \tag{since $4r = 1$}\\ & \geq (2r(1+\epsilon))^2. \tag{since $\mathbf{y}[i] \geq \ell$} \end{align*} \end{itemize} \end{proof} \begin{lemma} \normalfont Let $\mathbf{a}\in \mathcal{V}$ and $i\in [d]$ be such that $\mathbf{a}':=(\mathbf{a}\oplus\mathbf{e}_i) \in \mathcal{V}$. If $\mathbf{a}''\notin \{\mathbf{a}, \mathbf{a}'\}$ then the distance between any point \textcolor{black}{in} $\textsc{Core}[\mathbf{a}'']$ \textcolor{black}{and} any point in $\mathcal{S}_{\{\mathbf{a},\mathbf{a}'\}}$ is at least $2r(1+\epsilon)$. \label{lem:connector-far-away-from-other-cores} \end{lemma} \begin{proof} Let $\mathbf{p}$ and $\mathbf{q}$ be two arbitrary points from $\textsc{Core}[\mathbf{a}'']$ and $\mathcal{S}_{\{\mathbf{a},\mathbf{a}'\}}$, respectively.
By \autoref{table:general-d:construction}, $\mathbf{p}$ is located at $\mathbf{a}''\oplus \epsilon\cdot \mathbf{x}$ for some $\mathbf{x}\in R_{\mathbf{a}''}\subseteq [\delta]^d$ and $\mathbf{q}$ is located at $\mathbf{a}\oplus \mathbf{e}_i \cdot ((1-\epsilon)2r + \epsilon\ell)$ for some $\ell\in [\delta]$. Since $\mathbf{a}'=\mathbf{a}\oplus \mathbf{e}_i$ and $\mathbf{a}''\notin \{\mathbf{a},\mathbf{a}'\}$, we have three cases to consider: \begin{itemize} \item \underline{$\mathbf{a}''[j]= \mathbf{a}[j]$ for all $j\neq i$ and $\mathbf{a}''[i]\leq \mathbf{a}[i]-1$}: In this case, we have $\texttt{dist}(\mathbf{p}, \mathbf{q})^2$ \begin{align*} & \geq \left(\left(\mathbf{a}[i]+ (1-\epsilon)2r+\epsilon\ell\right) - \left(\mathbf{a}''[i]+\epsilon\cdot \mathbf{x}[i]\right) \right)^2, \tag{only considering the $i$-th coordinate}\\ & = \left(\mathbf{a}[i]-\mathbf{a}''[i] + (1-\epsilon)2r+\epsilon \ell -\epsilon \mathbf{x}[i] \right)^2,\\ & \geq \left(1 + (1-\epsilon)2r + \epsilon\cdot 4r -\epsilon\delta \right)^2, \tag{since $\mathbf{a}[i]-\mathbf{a}''[i] \geq 1$, $\ell\geq 1 = 4r$ and $\mathbf{x}[i]\leq \delta$}\\ & > (2r(1+\epsilon))^2. \tag{since $1-\epsilon\delta \geq 1- \frac{1}{16}>0$} \end{align*} \item \underline{$\mathbf{a}''[j]= \mathbf{a}[j]$ for all $j\neq i$ and $\mathbf{a}''[i]\geq \mathbf{a}[i]+2$}: In this case, we have $\texttt{dist}(\mathbf{p}, \mathbf{q})^2$ \begin{align*} &\geq \left(\left(\mathbf{a}''[i] +\epsilon\cdot \mathbf{x}[i] \right) - \left(\mathbf{a}[i]+(1-\epsilon)2r+\epsilon\ell \right) \right)^2, \tag{only considering the $i$-th coordinate}\\ & = \left(\mathbf{a}''[i] - \mathbf{a}[i] - (1-\epsilon)2r +\epsilon\cdot \mathbf{x}[i] - \epsilon\ell \right)^2, \\ & \geq (2 - (1-\epsilon)2r +\epsilon -\epsilon\delta)^2, \tag{since $\mathbf{a}''[i]- \mathbf{a}[i] \geq 2$, $\mathbf{x}[i] \geq 1$ and $\ell \leq \delta$}\\ & = (4r - (1-\epsilon)2r + 1 + \epsilon - \epsilon\delta)^2, \tag{since $4r = 1$}\\ & > (2r(1+\epsilon))^2.
\tag{since $1-\epsilon\delta \geq 1- \frac{1}{16}>0$} \end{align*} \item \underline{There exists $j\neq i$ such that $\mathbf{a}''[j]\neq \mathbf{a}[j]$}: In this case, we have $\texttt{dist}(\mathbf{p}, \mathbf{q})$ \begin{align*} &\geq \left|\mathbf{a}[j]-\left(\mathbf{a}''[j]+\epsilon\cdot \mathbf{x}[j]\right)\right|, \tag{only considering the $j$-th coordinate}\\ &\geq \left|\mathbf{a}[j]-\mathbf{a}''[j]\right| - \epsilon\cdot \mathbf{x}[j], \tag{by triangle inequality}\\ &\geq 1-\epsilon\cdot \delta, \tag{since $\mathbf{a}[j]\neq \mathbf{a}''[j]$ and $\mathbf{x}[j]\leq\delta$} \\ & \geq 2r + 2r - r^2 = 2r + 2r \left(1-\frac{r}{2}\right), \tag{since $4r = 1$ and $\epsilon\delta \leq r^2$}\\ & > 2r(1+\epsilon). \tag{since $1-\frac{r}{2}>\frac{1}{16}\geq \epsilon$} \end{align*} \end{itemize} \end{proof} \subsection{{\normalsize $\mathcal{I}$ has a satisfying assignment $\Rightarrow$ \text{OPT}\xspace for the instance $\mathcal{U}$ of $|\mathcal{V}|$-\textsc{Center}\xspace is $< 2r$ }} \label{subsec:k-center-general-d-easy} Suppose that the $d$-dimensional geometric $\geq$-\text{CSP}\xspace $\mathcal{I}=(\mathcal{V},\mathcal{D},\mathcal{C})$ has a satisfying assignment $f:\mathcal{V}\to \mathcal{D}$. Consider the set of points $F$ given by $\Big \{ C_{\mathbf{a}}^{f(\mathbf{a})} : \mathbf{a}\in \mathcal{V}\Big\}$. Since $f:\mathcal{V}\to \mathcal{D}$ is a satisfying assignment for $\mathcal{I}$, it follows that $f(\mathbf{a})\in R_{\mathbf{a}}$ for each $\mathbf{a} \in \mathcal{V}$ and hence the set $F$ is well-defined. Clearly, $|F|=|\mathcal{V}|$. We now show that $$\text{OPT}\xspace(F):=\Big(\max_{u\in \mathcal{U}} \big( \min_{v\in F} \texttt{dist}(u,v) \big) \Big)< 2r$$ This implies that \text{OPT}\xspace for the instance $\mathcal{U}$ of $|\mathcal{V}|$-\textsc{Center}\xspace is $< 2r$. We show $\text{OPT}\xspace(F)<2r$ by showing that $\texttt{dist}(p, F)<2r$ for each $p\in \mathcal{U}$. 
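As a numerical aside of our own (not needed for the proofs), it is worth recording how small the gap created by the reduction is for concrete parameters. For the smallest admissible values $d=2$ and $\delta=2$, \autoref{defn:r-epsilon-general-d} gives:

```latex
% Parameters of the gap for d = 2 and delta = 2:
\[
  r = \frac{1}{4},
  \qquad
  \epsilon = \frac{1}{16(d-1)\delta^{2}} = \frac{1}{64},
\]
% so the two thresholds that the reduction separates are
\[
  2r = \frac{1}{2}
  \qquad\text{and}\qquad
  2r(1+\epsilon) = \frac{1}{2}\cdot\frac{65}{64} = \frac{65}{128}.
\]
% A satisfying assignment forces OPT < 1/2 (this subsection), while
% unsatisfiability forces OPT >= 65/128 (the next subsection); hence
% any algorithm with approximation ratio < 1 + 1/64 distinguishes the
% two cases for these parameters.
```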
From~\autoref{table:general-d:construction} and~\autoref{table-special-sets-general-d}, it is sufficient to consider the two cases depending on whether $p$ is a primary point or a secondary point. \begin{lemma} \normalfont If $p$ is a primary point, then $\texttt{dist}(p, F)<2r$. \label{lem:general-d-udg-dominates-D} \end{lemma} \begin{proof} If $p$ is a primary point, then by~\autoref{table:general-d:construction} and~\autoref{table-special-sets-general-d} it follows that $p$ is either a core point or a border point: \begin{itemize} \item \textbf{$p$ is a core point}: By~\autoref{table:general-d:construction}, $p\in \textsc{Core}[\mathbf{b}]$ for some $\mathbf{b}\in \mathcal{V}$. Then, \autoref{lem:core-pairwise-intersects-general-d} implies that $\texttt{dist}\big(p, C_{\mathbf{b}}^{f(\mathbf{b})} \big) <r$. Since $C_{\mathbf{b}}^{f(\mathbf{b})}\in F$, we have $\texttt{dist}\big( p, F \big) \leq \texttt{dist}\big(p, C_{\mathbf{b}}^{f(\mathbf{b})} \big) <r$. \item \textbf{$p$ is a border point}: By~\autoref{table:general-d:construction}, $p\in \textsc{Border}[\mathbf{b}]$ for some $\mathbf{b}\in \mathcal{V}$. Then, \autoref{lem:core-intersects-all-border-general-d} implies that $\texttt{dist}\big( p, C_{\mathbf{b}}^{f(\mathbf{b})}\big) <2r$. Since $C_{\mathbf{b}}^{f(\mathbf{b})}\in F$, we have $\texttt{dist}\big( p, F \big)\leq \texttt{dist}\big(p, C_{\mathbf{b}}^{f(\mathbf{b})} \big) <2r$. \qedhere \end{itemize} \end{proof} \begin{lemma} \normalfont If $p$ is a secondary point, then $\texttt{dist}(p, F)<2r$. \label{lem:secondary-at-most-2r} \end{lemma} \begin{proof} If $p$ is a secondary point, then by~\autoref{table:general-d:construction} and~\autoref{table-special-sets-general-d} it follows that there exists $\mathbf{a}\in \mathcal{V}, i\in [d]$ and $\ell\in [\delta]$ such that $p = S_{\{\mathbf{a}, \mathbf{a}\oplus \mathbf{e}_i\}}^{\ell}$. 
Note that $C_{\mathbf{a}}^{f(\mathbf{a})}\in F$ and $C_{\mathbf{a}\oplus \mathbf{e}_i}^{f(\mathbf{a}\oplus \mathbf{e}_i)}\in F$. We now prove the lemma by showing that $\min \Big\{ \texttt{dist} \big(p, C_{\mathbf{a}}^{f(\mathbf{a})} \big) ; \texttt{dist} \big(p, C_{\mathbf{a}\oplus \mathbf{e}_i}^{f(\mathbf{a}\oplus \mathbf{e}_i)} \big) \Big\}<2r$. Since $f:\mathcal{V}\to \mathcal{D}$ is a satisfying assignment, the binary constraint on $\mathbf{a}$ and $\mathbf{a}\oplus \mathbf{e}_i$ is satisfied, i.e., $\delta\geq f(\mathbf{a})[i]\geq f(\mathbf{a}\oplus \mathbf{e}_i)[i]\geq 1$. Since $\ell\in [\delta]$ this implies that either $\ell\leq f(\mathbf{a})[i]$ or $\ell> f(\mathbf{a}\oplus \mathbf{e}_i)[i]$. The following implications complete the proof: \begin{itemize} \item If $\ell\leq f(\mathbf{a})[i]$, then~\autoref{lem:how-H-intersects-with-geq-general-d}(1) implies that $\texttt{dist}\big( C_{\mathbf{a}}^{f(\mathbf{a})}, p\big) < 2r$. \item If $\ell> f(\mathbf{a}\oplus \mathbf{e}_i)[i]$, then~\autoref{lem:how-H-intersects-with-geq-general-d}(3) implies that $\texttt{dist}\big( C_{\mathbf{a}\oplus \mathbf{e}_i}^{f(\mathbf{a}\oplus \mathbf{e}_i)}, p\big) < 2r$. \qedhere \end{itemize} \end{proof} \noindent From~\autoref{table-special-sets-general-d},~\autoref{lem:general-d-udg-dominates-D} and~\autoref{lem:secondary-at-most-2r} it follows that \text{OPT}\xspace for the instance $\mathcal{U}$ of $|\mathcal{V}|$-\textsc{Center}\xspace is $<2r$. \subsection{{\normalsize $\mathcal{I}$ does not have a satisfying assignment $\Rightarrow$ \text{OPT}\xspace for the instance $\mathcal{U}$ of $|\mathcal{V}|$-\textsc{Center}\xspace is $\geq 2r(1+\epsilon)$}} \label{subsec:k-center-general-d-hard} Suppose that the instance $\mathcal{I}=(\mathcal{V},\mathcal{D},\mathcal{C})$ of $d$-dimensional geometric $\geq$-\text{CSP}\xspace does not have a satisfying assignment. 
We now want to show that \text{OPT}\xspace for the instance $\mathcal{U}$ of $|\mathcal{V}|$-\textsc{Center}\xspace is $\geq 2r(1+\epsilon)$. Fix any set $Q\subseteq \mathcal{U}$ of size $|\mathcal{V}|$: it is sufficient to show that \begin{equation}\label{eqn:opt-less-2r-eps} \text{OPT}\xspace(Q):=\Big(\max_{u\in \mathcal{U}} \big( \min_{v\in Q} \texttt{dist}(u,v) \big) \Big)\geq 2r(1+\epsilon) \end{equation} We consider two cases: either $\big|Q\cap \textsc{Core}[\mathbf{a}]\big|=1$ for each $\mathbf{a}\in \mathcal{V}$ (\autoref{lem:exactly-one-per-core}) or not (\autoref{lem:not-exactly-one-per-core}). \begin{lemma} \normalfont If $\big|Q\cap \textsc{Core}[\mathbf{a}]\big|=1$ for each $\mathbf{a}\in \mathcal{V}$ then $\text{OPT}\xspace(Q)\geq 2r(1+\epsilon)$. \label{lem:exactly-one-per-core} \end{lemma} \begin{proof} Since $|Q|=|\mathcal{V}|$ and $|Q\cap \textsc{Core}[\mathbf{a}]|=1$ for each $\mathbf{a}\in \mathcal{V}$, it follows that the only points in $Q$ are core points (see~\autoref{table:general-d:construction} for the definition) and moreover $Q$ contains exactly one core point corresponding to each element from $\mathcal{V}$. Let $\phi: \mathcal{V} \to [\delta]^d$ be the function such that $Q\cap \textsc{Core}[\mathbf{a}] = \big\{ C_{\mathbf{a}}^{\phi(\mathbf{a})} \big\}$. By~\autoref{table:general-d:construction}, it follows that $\phi(\mathbf{a})\in R_{\mathbf{a}}$ for each $\mathbf{a}\in \mathcal{V}$. Recall that we are assuming in this section that the instance $\mathcal{I}=(\mathcal{V},\mathcal{D},\mathcal{C})$ of $d$-dimensional geometric $\geq$-\text{CSP}\xspace does not have a satisfying assignment. Hence, in particular, the function $\phi: \mathcal{V} \to [\delta]^d$ is not a satisfying assignment for $\mathcal{I}$. All unary constraints are satisfied since $\phi(\mathbf{a})\in R_{\mathbf{a}}$ for each $\mathbf{a}\in \mathcal{V}$.
Hence, there is some binary constraint which is not satisfied by $\phi$: let this constraint be violated for the pair $\mathbf{a}, \mathbf{a}\oplus \mathbf{e}_i$ for some $\mathbf{a}\in \mathcal{V}$ and $i\in [d]$. Let us denote $\mathbf{a}\oplus \mathbf{e}_i$ by $\mathbf{a}'$. The violation of the binary constraint on $\mathbf{a}$ and $\mathbf{a}\oplus \mathbf{e}_i$ by $\phi$ implies that $1\leq \phi(\mathbf{a})[i]<\phi(\mathbf{a}')[i]\leq \delta$. We now show that $\texttt{dist}\big(Q, S_{\{\mathbf{a},\mathbf{a}'\}}^{\phi(\mathbf{a}')[i]} \big)\geq 2r(1+\epsilon)$ which, in turn, implies that $\text{OPT}\xspace(Q)\geq 2r(1+\epsilon)$. The following implications complete the proof: \begin{itemize} \item \autoref{lem:how-H-intersects-with-geq-general-d}(2) implies that $\texttt{dist}\big( S_{\{\mathbf{a},\mathbf{a}'\}}^{\phi(\mathbf{a}')[i]}, C_{\mathbf{a}}^{\phi(\mathbf{a})} \big) \geq 2r(1+\epsilon)$. \item \autoref{lem:how-H-intersects-with-geq-general-d}(4) implies that $\texttt{dist}\big( S_{\{\mathbf{a},\mathbf{a}'\}}^{\phi(\mathbf{a}')[i]}, C_{\mathbf{a}'}^{\phi(\mathbf{a}')} \big) \geq 2r(1+\epsilon)$. \item Consider any point $s\in Q\setminus \big\{ C_{\mathbf{a}}^{\phi(\mathbf{a})}, C_{\mathbf{a}'}^{\phi(\mathbf{a}')} \big\}$. Then $s\in \textsc{Core}[\mathbf{a}'']$ for some $\mathbf{a}''\notin \big\{ \mathbf{a}, \mathbf{a}' \big\}$. \autoref{lem:connector-far-away-from-other-cores} implies $\texttt{dist}\big( S_{\{\mathbf{a}, \mathbf{a}'\}}^{\phi(\mathbf{a}')[i]}, s \big)\geq 2r(1+\epsilon)$. \qedhere \end{itemize} \end{proof} \begin{lemma} \normalfont If there exists $\mathbf{a}\in \mathcal{V}$ such that $\big|Q\cap \textsc{Core}[\mathbf{a}]\big|\neq 1$ then $\text{OPT}\xspace(Q)\geq 2r(1+\epsilon)$. \label{lem:not-exactly-one-per-core} \end{lemma} \begin{proof} Suppose that $\text{OPT}\xspace(Q)< 2r(1+\epsilon)$. To prove the lemma, we will now show that this implies $|Q\cap \textsc{Core}[\mathbf{a}]|=1$ for each $\mathbf{a}\in \mathcal{V}$. 
This is done via the following two claims, namely~\autoref{clm:exactly-one-D} and~\autoref{clm:exactly-one-core}. \begin{claim} $\big|Q\cap \mathcal{D}[\mathbf{a}]\big|=1$ for each $\mathbf{a}\in \mathcal{V}$. \label{clm:exactly-one-D} \end{claim} \begin{proof} Define three sets $I_0, I_1$ and $I_{\geq 2}$ as follows: \begin{eqnarray} I_0 := \big\{ \mathbf{a}\in \mathcal{V} : \big|Q\cap \mathcal{D}[\mathbf{a}]\big|=0 \big\} \\ I_1 := \big\{ \mathbf{a}\in \mathcal{V} : \big|Q\cap \mathcal{D}[\mathbf{a}]\big|=1 \big\} \\ I_{\geq 2} := \big\{ \mathbf{a}\in \mathcal{V} : \big|Q\cap \mathcal{D}[\mathbf{a}]\big|\geq 2 \big\} \end{eqnarray} By definition, we have \begin{equation} |I_0|+ |I_1| + |I_{\geq 2}| = |\mathcal{V}| \label{eqn:sum-of-indices-is-k^2} \end{equation} Consider a variable $\mathbf{b}\in I_0$. \textcolor{black}{Since $\texttt{dist}\left(Q,B_{\mathbf{b}}^{+i}\right)<2r(1+\epsilon)$ and $\texttt{dist}\left(Q,B_{\mathbf{b}}^{-i}\right)<2r(1+\epsilon)$, and $Q\cap \mathcal{D}[\mathbf{b}]=\emptyset$}, \autoref{lem:border-doesnt-intersect-other-connectors-general-d} implies that for each $i\in [d]$ \begin{enumerate} \item[(i)] $Q$ must contain a point from $\mathcal{S}_{\{\mathbf{b},\mathbf{b}\oplus\mathbf{e}_i\}}$ since $Q\cap \mathcal{D}[\mathbf{b}]=\emptyset$, and \item[(ii)] $Q$ must contain a point from $\mathcal{S}_{\{\mathbf{b},\mathbf{b}\ominus\mathbf{e}_i\}}$ since $Q\cap \mathcal{D}[\mathbf{b}]=\emptyset$ \end{enumerate} Since each secondary point can be \emph{``charged''} to two variables in $\mathcal{V}$ (recall the definition of secondary points from~\autoref{table:general-d:construction}: each secondary point is indexed by a set of two variables $\{\mathbf{b},\mathbf{b}'\}$ such that $\mathbf{b}'=\mathbf{b}\oplus \mathbf{e}_i$ for some $i\in [d]$), it follows that $Q$ contains $\geq \frac{2d}{2}=d\geq 2$ \emph{distinct} secondary points corresponding to each variable in $I_{0}$.
Therefore, we have \begin{align} &|I_0|+|I_1|+|I_{\geq 2}| = \big|\mathcal{V}\big| \tag{from~\autoref{eqn:sum-of-indices-is-k^2}}\\ &= |Q| \tag*{} \\ &\geq |Q\cap \textsc{Primary}| + |Q\cap \textsc{Secondary}| \tag{since $\textsc{Primary}\cap \textsc{Secondary}=\emptyset$ }\\ &\geq \Big(|I_1| + 2|I_{\geq 2}|\Big) + |Q\cap \textsc{Secondary}| \tag{by definition of $I_1$ and $I_{\geq 2}$}\\ &\geq \Big(|I_1| + 2|I_{\geq 2}|\Big) + 2|I_0| \end{align} \textcolor{black}{where the last inequality follows because $Q$ contains at least $2$ secondary points corresponding to each variable in $I_{0}$}. Hence, we have $|I_0|+|I_1|+|I_{\geq 2}| \geq 2|I_0| + |I_1| + 2|I_{\geq 2}|$ which implies $|I_0|=0=|I_{\geq 2}|$. From~\autoref{eqn:sum-of-indices-is-k^2}, we get $|I_1|=\big|\mathcal{V}\big|$, i.e., $\big|Q\cap \mathcal{D}[\mathbf{a}]\big|=1$ for each $\mathbf{a}\in \mathcal{V}$. This concludes the proof of~\autoref{clm:exactly-one-D}. \end{proof} Since $|Q|=\big|\mathcal{V}\big|$ \textcolor{black}{and $\mathcal{D}[\mathbf{a}] \cap \mathcal{D}[\mathbf{b}] = \emptyset$ for distinct $\mathbf{a},\mathbf{b} \in \mathcal{V}$},~\autoref{clm:exactly-one-D} implies that \begin{equation}\label{eqn:Q-has-no-secondary-pts} Q\ \text{contains no secondary points} \end{equation} \textcolor{black}{We now prove that $Q$ doesn't contain border points either}. \begin{claim} $\big|Q\cap \textsc{Core}[\mathbf{a}]\big|=1$ for each $\mathbf{a}\in \mathcal{V}$ \label{clm:exactly-one-core} \end{claim} \begin{proof} Fix any $\mathbf{a}\in \mathcal{V}$. From~\autoref{clm:exactly-one-D}, we know that $\big|Q\cap \mathcal{D}[\mathbf{a}]\big|=1$. Suppose that this unique point in $Q\cap \mathcal{D}[\mathbf{a}]$ is from $\textsc{Border}[\mathbf{a}]$. Without loss of generality, let $Q\cap \mathcal{D}[\mathbf{a}]=\big\{B_{\mathbf{a}}^{+i}\big\}$ for some $i\in [d]$. Since $\text{OPT}\xspace(Q)<2r(1+\epsilon)$, it follows that $\texttt{dist}\big(Q, B_{\mathbf{a}}^{-i}\big)<2r(1+\epsilon)$. 
Hence,~\autoref{lem:border-doesnt-intersect-other-connectors-general-d}(2) implies that $Q\cap \Big( \mathcal{D}[\mathbf{a}]\ \bigcup\ \mathcal{S}_{\{\mathbf{a},\mathbf{a}\ominus\mathbf{e}_{i} \}}\Big) \neq \emptyset$. \textcolor{black}{Since $Q$ contains no secondary points (\autoref{eqn:Q-has-no-secondary-pts}), we have $ Q\cap \left( \mathcal{D}[\mathbf{a}]\ \bigcup\ \mathcal{S}_{\{\mathbf{a},\mathbf{a}\ominus\mathbf{e}_{i} \}}\right) = Q\cap \mathcal{D}[\mathbf{a}] = \left\{B_{\mathbf{a}}^{+i}\right\}$. But from \autoref{lem:borders-dont-intersect-2d} we know $\texttt{dist} \left( B_{\mathbf{a}}^{+i}, B_{\mathbf{a}}^{-i}\right)\geq 2r(1+\epsilon)$. We thus obtain a contradiction.} This concludes the proof of~\autoref{clm:exactly-one-core}. \end{proof} Therefore, we have shown that $\text{OPT}\xspace(Q)< 2r(1+\epsilon)$ implies $\big|Q\cap \textsc{Core}[\mathbf{a}]\big|=1$ for each $\mathbf{a}\in \mathcal{V}$. This concludes the proof of~\autoref{lem:not-exactly-one-per-core}. \end{proof} \subsection{Finishing the proof of~\autoref{thm:dom-set-d-dimensions}} \label{subsec:finishing-the-proof} Finally, we are ready to prove~\autoref{thm:dom-set-d-dimensions} which is restated below: \domsetd* \begin{proof} Given an instance $\mathcal{I}=(\mathcal{V},\mathcal{D},\mathcal{C})$ of a $d$-dimensional geometric $\geq$-\text{CSP}\xspace, we build an instance $\mathcal{U}$ of $|\mathcal{V}|$-\textsc{Center}\xspace in $\mathbb{R}^d$ given by the reduction in~\autoref{subsec:redn-general-d}. 
This reduction has the property that \begin{itemize} \item if $\mathcal{I}$ has a satisfying assignment then \text{OPT}\xspace for the instance $\mathcal{U}$ of $|\mathcal{V}|$-\textsc{Center}\xspace is $< 2r$ (\autoref{subsec:k-center-general-d-easy}), and \item if $\mathcal{I}$ does not have a satisfying assignment then \text{OPT}\xspace for the instance $\mathcal{U}$ of $|\mathcal{V}|$-\textsc{Center}\xspace is $\geq 2r(1+\epsilon^*)$ (\autoref{subsec:k-center-general-d-hard}) \end{itemize} where $r=1/4$ and \textcolor{black}{$\epsilon^* = \dfrac{r^2}{(d-1)\delta^2}\geq \dfrac{1}{16(d-1)|\mathcal{D}|}$, since $|\mathcal{D}|=\left|[\delta]^d\right| \geq \delta^2$}. Hence, any algorithm for the $|\mathcal{V}|$-center problem with an approximation factor $\leq (1+\epsilon^*)$ can solve the $d$-dimensional geometric $\geq$-\text{CSP}\xspace. Note that the instance $\mathcal{U}$ of $k$-\textsc{Center}\xspace in $\mathbb{R}^d$ has $k=|\mathcal{V}|$ and the number of points $n\leq |\mathcal{V}|\cdot 2d + |\mathcal{C}| +|\mathcal{V}|^{2}\cdot \delta = |\mathcal{I}|^{O(1)}$ where $|\mathcal{I}|=|\mathcal{V}|+|\mathcal{D}|+|\mathcal{C}|$. We now derive the two lower bounds claimed in the theorem: \begin{description} \item[\textbf{- (Inapproximability result)}] Suppose that there exists $d\geq 2$ such that $k$-center on $n$ points in $\mathbb{R}^d$ admits a $(1+\epsilon)$-approximation algorithm running in $f(k)\cdot \Big(\frac{1}{\epsilon}\Big)^{o(k^{1-1/d})}\cdot n^{o(k^{1-1/d})}$ time for some computable function $f$. As argued above, a $(1+\epsilon^*)$-approximation for the $k$-center problem with $k=|\mathcal{V}|$ and $n=|\mathcal{I}|^{O(1)}$ points can be used to solve the $d$-dimensional geometric $\geq$-\text{CSP}\xspace problem.
Recall that $16(d-1)\cdot|\mathcal{I}|\geq 16(d-1)\cdot|\mathcal{D}|\geq \frac{1}{\epsilon^*}$ since $|\mathcal{I}|=|\mathcal{V}|+|\mathcal{D}|+|\mathcal{C}|$, and hence we have an algorithm for the $d$-dimensional geometric $\geq$-\text{CSP}\xspace problem which runs in time $f(|\mathcal{V}|)\cdot (16d)^{o(k^{1-1/d})}\cdot |\mathcal{I}|^{o(k^{1-1/d})}$, contradicting~\autoref{thm:marx-sidiropoulos}. \item[\textbf{- (Lower bound for exact algorithm)}] Suppose that there exists $d\geq 2$ such that $k$-center on $n$ points in $\mathbb{R}^d$ admits an exact algorithm running in $f(k)\cdot n^{o(k^{1-1/d})}$ time for some computable function $f$. As argued above\footnote{The argument above is actually stronger: even a $(1+\epsilon^*)$-approximation algorithm for $k$-center can solve $d$-dimensional geometric $\geq$-\text{CSP}\xspace}, solving the $k$-center problem with $k=|\mathcal{V}|$ and $n=|\mathcal{I}|^{O(1)}$ points can solve the $d$-dimensional geometric $\geq$-\text{CSP}\xspace problem. Hence, we have an algorithm for the $d$-dimensional geometric $\geq$-\text{CSP}\xspace problem which runs in time $f(|\mathcal{V}|)\cdot |\mathcal{I}|^{o(k^{1-1/d})}$, which again contradicts~\autoref{thm:marx-sidiropoulos}.
\qedhere \end{description} \end{proof}
\section{Introduction} Three-dimensional or radial grids \cite{PhysRevA.88.023422} are flexible representations of time-dependent wave functions in solutions of time-dependent problems or in time-dependent density functional calculations. Localized basis functions, e.g. Gaussian orbitals, are less flexible for time-dependent Hamiltonians, e.g. for systems interacting with strong laser pulses. In describing ionization, one often needs to represent the wave function up to a few hundred Bohr distances, requiring large spatial grids. The attractive feature of basis functions in comparison to real space grids is reduced dimensionality. The question is how to optimize the basis functions to represent the rapidly changing time-dependent wave function. Due to the experimental advances in attosecond extreme ultraviolet light pulses and intense x-ray sources \cite{RevModPhys.81.163}, many different basis function representations have been developed to solve the time-dependent Schr\"odinger equation for atoms interacting with strong laser pulses \cite{PhysRevA.90.033403,PhysRevA.90.012506,doi:10.1063/1.2358351, PhysRevA.61.053411,PhysRevA.98.023413,PhysRevA.89.033415, PhysRevA.77.033412,PhysRevA.65.063403}. The most frequently used basis functions are discrete variable representations \cite{PhysRevA.55.3417, PhysRevLett.98.073001,PhysRevA.79.012719} and B-splines \cite{Bachau_2001,PhysRevA.74.052702,PhysRevLett.103.063002}. These basis functions have been combined with innovative approaches to solve the time-dependent Schr\"odinger equation \cite{PhysRevE.90.063309,PhysRevA.60.3125,PhysRevA.95.023401,PhysRevA.78.032502, doi:10.1063/1.466058,GHARIBNEJAD2019,PhysRevE.95.053309,Wells2019,kormann_2016, doi:10.1063/1.465362,PhysRevA.99.013404,PhysRevE.95.023310}.
In these works, the proper boundary conditions are enforced by using complex absorbing potentials \cite{PhysRevA.78.032502,Yu_2018,DeGiovannini2015}, exterior complex scaling \cite{PhysRevA.81.053845,WEINMULLER2017199}, or perfectly matched layers \cite{SCRINZI201498}. Gaussian basis functions are the most popular choice for quantum mechanical calculations because their matrix elements can be evaluated analytically \cite{RevModPhys.85.693,svm_book}. Gaussian functions, however, have difficulties in reproducing the characteristic oscillatory behavior of continuum orbitals in the asymptotic region. Gaussians with complex parameters may be better suited to describe the continuum because of their inherent oscillatory nature \cite{PhysRevA.99.012504}. One way to extend Gaussians for problems involving ionization is to augment them with suitable functions such as B-splines \cite{PhysRevA.90.012506}. In this work we will solve the time-dependent Schr\"odinger equation by time propagation using a time-dependent basis. The parameters of the basis will be optimized using the time-dependent variational principle (TDVP) \cite{dirac_1930,doi:10.1080/00268976400100041}. We will consider a hydrogen atom in a laser field. The oscillating field moves the electron density away from the atom and then back towards the atom. The time-dependent basis functions will be optimized to accurately represent the moving density. The time-dependent variational method was introduced by Dirac \cite{dirac_1930}, extended by McLachlan \cite{doi:10.1080/00268976400100041}, and reformulated for Gaussian wave packets in Ref. \cite{doi:10.1063/1.449204}. The time-dependent variational method has been used in various calculations, such as in the description of the dynamical behavior of Bose-Einstein condensates \cite{PhysRevA.82.023611}, and in wave packet dynamics \cite{ZOPPE2005308,PhysRevA.79.043417}.
Furthermore, studies of the dynamics of strongly interacting lattice bosons \cite{Carleo2012} and strongly correlated electrons \cite{PhysRevB.92.245106} reflect the increasing popularity of the time-dependent variational method in other fields. The TDVP is also often used in approximating complex many-body wave functions, e.g. Fermionic Molecular Dynamics \cite{RevModPhys.72.655}, Electron Nuclear Dynamics \cite{RevModPhys.66.917}, and time-dependent multiconfiguration self-consistent-field calculations \cite{C7CP02086D}. In these approaches, the wave function is approximated by Slater determinants of localized single-particle orbitals. The orbitals are parameterized by dynamical variables (wave packet width, average position or momentum) and the TDVP is used to derive equations of motion for these dynamical variables. In this work we use the TDVP to time propagate a wave function by optimizing its linear and nonlinear parameters. In a previous paper we used the imaginary time propagation method combined with the TDVP to accurately describe few-particle systems \cite{PhysRevA.99.012504}. It was shown that the TDVP can be used to obtain basis functions with accuracy comparable to or better than gradient-based Newton-Raphson optimization. This success paves the way for the application of the TDVP to time-dependent problems. We will test this application using the 1D and 3D hydrogen atom with Gaussian and soft Coulomb potentials in strong laser pulses, and then compare the results to finite difference grid calculations. The advantage of the present approach is that only a few basis functions are needed, while in the finite difference calculations millions of grid points are used. Moreover, the present approach can be extended to larger systems, while the finite difference approach is limited to 3D. An additional advantage is that no boundary conditions have to be enforced, and the basis flexibly evolves according to the TDVP.
\section{Formalism} \subsection{Time-dependent variational principle} In general form, the time-dependent wave function can be written as: \begin{equation} \psi(t)=\psi({\bf q}(t)) \end{equation} where ${\bf q}(t)$ is a set of linear and nonlinear variational parameters. The time-dependent Schr\"odinger equation, \begin{equation} i{d\over dt}\psi(t)=H\psi(t) \end{equation} will be solved by the McLachlan variational method \cite{doi:10.1080/00268976400100041}. In this approach, the norm of the deviation between the right-hand and the left-hand side of the time-dependent Schr\"odinger equation is minimized with respect to the trial function. The quantity \begin{equation} I = ||i \phi(t) -H \psi(t)||^2 \rightarrow \min \label{tdv1} \end{equation} is varied with respect to $\phi$ only, and then the equivalence $\dot\psi\equiv \phi$ is enforced. At time $t$ the wave function is known and its time derivative is determined by minimizing $I$. If $I=0$, an exact solution exists; the approximation in the expansion of $\psi(t)$ generally leads to $I>0$. Variation of $I$ with respect to $\phi$ gives the equations of motion: \begin{equation} \left\langle \frac{\partial\psi}{\partial{\bf q}} \Big| i\dot\psi - H \psi \right\rangle = 0 \; . \label{tdv2} \end{equation} This equation can be used to determine the (linear and nonlinear) variational parameters. \subsection{Parameter optimization} We can also write \eqref{tdv2} in matrix form as: \begin{equation} i M\dot {\bf q} = {\bf v} \label{prop} \end{equation} where \begin{equation} M_{ij} = \left\langle\frac{\partial \psi}{\partial {q}_i}\Big| \frac{\partial \psi}{\partial {q}_j}\right\rangle \; , \; \label{mmat} \end{equation} and \begin{equation} {v}_i = \left\langle \frac{\partial \psi}{\partial {q}_i} \Big| H \psi \right\rangle \; . \end{equation} Solving for $\dot{\bf q}$, Eq.
\eqref{prop} becomes \begin{equation} \dot{\bf q} = -iM^{-1}{\bf v} \label{tdv4} \end{equation} There are various established ways to solve such first-order differential equations; higher-order schemes such as Runge-Kutta methods would allow larger time steps, but for simplicity we use the first-order Euler method for time propagation. \subsection{Hamiltonian and basis functions} We will test the approach by using a Hamiltonian describing a particle in a laser field in length gauge \begin{equation} H=-{1\over 2}\left( {d^2\over d x^2}+ {d^2\over d y^2}+ {d^2\over d z^2}\right)+V(x,y,z)+F(t)z, \end{equation} where $F(t)$ is the time-dependent electric field pulse, defined as: \begin{equation} F(t)=E_0 e^{-(t-T)^2/\tau^2}\cos(\omega t). \label{laser} \end{equation} We define two different types of basis functions to represent the time-dependent wave function. The first takes the form: \begin{equation} g_i= c_i z^{n_i} g_{\alpha_i}(x)g_{\alpha_i}(y)g_{\beta_i}(z) =c_i z^{n_i} e^{-\alpha_i(x^2+y^2)-\beta_i z^2}, \label{basis1} \end{equation} where \begin{equation} g_\sigma(x)=e^{-\sigma x^2} \end{equation} is a one-dimensional Gaussian; this basis will be referred to as polynomial times Gaussian (PTG). The second basis is a plane wave times Gaussian (PWG): \begin{equation} g_i= c_i g_{\alpha_i}(x)g_{\alpha_i}(y)g_{\beta_i}(z)e^{kz} =c_i e^{-\alpha_i(x^2+y^2)-\beta_i z^2+kz}. \label{basis2} \end{equation} The parameters of the Gaussians are kept equal in the $x$ and $y$ directions due to the cylindrical symmetry of the potential. In one-dimensional (1D) test calculations $\alpha=0$ is used to reduce the basis to 1D.
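As a concrete illustration of Eqs. \eqref{mmat}--\eqref{tdv4}, the sketch below (our own illustration, not the code used in this work) performs one TDVP step for a free particle in 1D with a single complex Gaussian $\psi(x)=c\,e^{-a x^2}$ and parameters ${\bf q}=(c,a)$; the matrix elements are evaluated by grid quadrature instead of analytically, and the function names are ours.

```python
import numpy as np

# Illustrative sketch only: one TDVP step for a free 1D particle with a single
# complex Gaussian psi(x) = c * exp(-a x^2), parameters q = (c, a).
# Matrix elements are computed by grid quadrature instead of analytically.

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

def tdvp_rhs(c, a):
    """qdot = -i M^{-1} v for H = -(1/2) d^2/dx^2."""
    g = np.exp(-a * x**2)
    # tangent vectors: d(psi)/dc = g and d(psi)/da = -x^2 c g
    D = np.column_stack([g, -x**2 * c * g])
    # H psi, using d^2/dx^2 exp(-a x^2) = (4 a^2 x^2 - 2 a) exp(-a x^2)
    Hpsi = -0.5 * (4.0 * a**2 * x**2 - 2.0 * a) * c * g
    M = dx * (D.conj().T @ D)            # overlap of parameter derivatives
    v = dx * (D.conj().T @ Hpsi)
    return -1j * np.linalg.solve(M, v)   # qdot = (cdot, adot)

def euler_step(c, a, dt):
    """First-order (Euler) update of all parameters."""
    cdot, adot = tdvp_rhs(c, a)
    return c + dt * cdot, a + dt * adot
```

For this exactly solvable case the TDVP reproduces the analytic rates $\dot c=-iac$ and $\dot a=-2ia^2$ obtained by inserting the Gaussian ansatz into the free-particle Schr\"odinger equation, so the sketch can serve as the kind of time-step and matrix-element check discussed below in connection with the free-particle solution.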
The variational parameters form a vector, \begin{equation} {\bf q}(t)=\left( \begin{array}{c} c(t)\\ \alpha(t)\\ \beta(t)\\ \end{array} \right)= \left( \begin{array}{c} c_1(t)\\ {\vdots}\\ c_N(t)\\ \alpha_{1}(t)\\ {\vdots}\\ \alpha_{N}(t)\\ \beta_1(t)\\ {\vdots}\\ \beta_N(t) \end{array} \right), \end{equation} in the case of PTG; a similar vector can be defined for PWG. For PTG, the values of $n_k$ are restricted to integers and are not varied. The variational trial function is \begin{equation} \psi(t)=\psi({\bf q}(t))=\sum_{k=1}^N c_k(t) \phi_k(t)=\sum_{k=1}^N g_k(t). \label{exp} \end{equation} To illustrate the flexibility of the Gaussian basis in time-dependent calculations, we solve the TDVP equation (Eq. \eqref{prop}) analytically for a free particle in Appendix \ref{appA}. This case can be used to test the time step and matrix elements in the numerical calculations. As the example in Appendix \ref{appA} and Eq.\ \eqref{tdv4} show, the parameters of the basis functions become complex during the time propagation. This is completely different from conventional time propagation, in which the wave function is expanded in a fixed basis and only the linear coefficients are time dependent and complex. A Gaussian with a complex parameter can be written as: \begin{equation} e^{-(\alpha_r+i\alpha_i)x^2}= e^{-\alpha_r x^2}\left(\cos(\alpha_i x^2)-i\sin(\alpha_i x^2)\right). \end{equation} This function is oscillatory with a Gaussian envelope, and seems to greatly enhance the flexibility of the basis \cite{PhysRevA.99.012504}. To make the matrix element integrals convergent, $\alpha_r$ should be positive; this is not explicitly guaranteed in the time propagation of Eq. \eqref{tdv4}, but in our numerical examples it was automatically satisfied. We will use two potentials to test the approach.
A single Gaussian potential, \begin{equation} V=-V_0 e^{-\mu (x^2+y^2+z^2)}, \end{equation} with $V_0=1$ and $\mu=0.1$ a.u., and a soft Coulomb potential, \begin{equation} V=-{1\over \sqrt{x^2+y^2+z^2+a^2}}, \end{equation} with $a=1.0$ a.u. We use the soft Coulomb potential because the bare Coulomb potential cannot be easily used in grid calculations and its use is problematic in 1D \cite{doi:10.1139/p06-072}. In the case of the PWG basis, the soft Coulomb potential is expanded into 50 Gaussians to facilitate the analytical calculation of the matrix elements. In 1D test cases, the condition $x=y=0$ is set in the potential and $\alpha=0$ is used in the basis function with the 1D kinetic energy. The matrix elements of these basis functions can be calculated analytically as shown in Appendices \ref{appb}, \ref{appc} and \ref{appd}. \subsection{Time propagation of the wave function} Equation \eqref{tdv4} defines the time propagation of both the linear and nonlinear parameters of the wave function. Unless very small time steps are used, the simple first-order finite difference approximation is not expected to be accurate enough to preserve the norm of the wave function. To alleviate this problem, we only use this equation to time propagate the nonlinear parameters, and we update the linear parameters separately to preserve the norm. One can view this as an optimization of the basis functions by updating the nonlinear parameters using the TDVP; we then time propagate the wave function on the updated basis. We have a set of basis functions at time $t$, $\phi_k(t)$, which is time propagated to time $t+\Delta t$ to become $\phi_k(t+\Delta t)$ using Eq. \eqref{tdv4}. Both of these sets of basis functions can be used to represent the wave function at time $t$: \begin{equation} \psi(t)=\sum_{k=1}^N \hat{c}_k(t,t) \phi_k(t)=\sum_{k=1}^N \hat{c}_k(t,t+\Delta t) \phi_k(t+\Delta t).
\label{texp} \end{equation} In this equation $\hat{c}_k(t,t)$ is known, as we know the wave function at time $t$ (it is not calculated using Eq. \eqref{tdv4}). The unknown $\hat{c}_k(t,t+\Delta t)$ coefficients can be derived by defining the overlap of the basis functions \begin{equation} S_{ij}(t,t')=\langle \phi_i(t)\vert\phi_j(t')\rangle \end{equation} and projecting Eq. \eqref{texp} onto $\phi_i(t+\Delta t)$. The result is: \begin{equation} \hat{c}_i(t,t+\Delta t)=\sum_{j=1}^N S^{-1}_{ij}(t+\Delta t,t+\Delta t)\sum_{k=1}^N S_{jk}(t+\Delta t,t) \hat{c}_k (t,t). \end{equation} Now we know the linear combination coefficients of the wave function $\psi(t)$ at time $t$ on the optimal basis $\phi_k(t+\Delta t)$, so we can time propagate the wave function in the conventional way using \begin{equation} \psi(t+\Delta t)=e^{-iH\Delta t} \psi(t) \end{equation} to calculate $\hat{c}_k(t+\Delta t,t+\Delta t)$. We choose the numerically stable Crank-Nicolson approach to update the coefficients: \begin{eqnarray} &&\hat{C}(t+\Delta t,t+\Delta t) = \\ &&{S(t+\Delta t,t+\Delta t)-{i\Delta t\over 2}H(t+\Delta t,t+\Delta t)\over S(t+\Delta t,t+\Delta t)+{i\Delta t\over 2}H(t+\Delta t,t+\Delta t)} \hat{C}(t,t+\Delta t), \nonumber \end{eqnarray} where $\hat{C}^T=(\hat{c}_1,{\ldots},\hat{c}_N)$ and \begin{equation} H_{ij}(t,t')=\langle \phi_i(t)\vert H \vert \phi_j(t')\rangle. \end{equation} This significantly improves the stability of the method and allows larger time steps. \section{Calculations} \subsection{Ground state} Before the time propagation we need to calculate the ground state (without the laser field); it serves as the initial state at $t=0$. To calculate the ground state, the parameters of the Gaussians are defined by a geometric progression, \begin{equation} {1\over \sqrt{\alpha_i}}=a \nu^{i-1}, \end{equation} with $a=0.5$ and $\nu=1.3$. For the ground state calculation, we will use $n=0$ for the PTG basis and $k=0$ in the PWG basis.
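Returning to the propagation scheme of the previous subsection, the two-step update (re-expansion of $\psi(t)$ on the TDVP-evolved basis, followed by a Crank-Nicolson step in the nonorthogonal basis) can be sketched numerically; this is our own illustration with made-up Gaussian widths, grid quadrature in place of analytic matrix elements, and the time step written explicitly in the Crank-Nicolson propagator.

```python
import numpy as np

# Hedged sketch (ours) of the two-step update: (i) re-expand psi(t) on the
# evolved basis via the overlap matrices S, (ii) advance the coefficients with
# Crank-Nicolson. Basis functions are real 1D Gaussians sampled on a grid.

x = np.linspace(-15.0, 15.0, 3001)
dx = x[1] - x[0]

def gaussians(alphas):
    """Columns phi_k(x) = exp(-alpha_k x^2)."""
    return np.exp(-np.outer(x**2, alphas))

def overlap(Pa, Pb):
    """S_ij = <phi_i|phi_j> by grid quadrature."""
    return dx * (Pa.conj().T @ Pb)

def reexpand(c_old, P_old, P_new):
    """Coefficients of the *same* psi(t) on the evolved basis."""
    return np.linalg.solve(overlap(P_new, P_new), overlap(P_new, P_old) @ c_old)

def cn_step(c, P, HP, dt):
    """Crank-Nicolson in a nonorthogonal basis:
       (S + i dt/2 H) c(t+dt) = (S - i dt/2 H) c(t)."""
    S = overlap(P, P)
    H = overlap(P, HP)   # HP holds the columns H phi_k
    return np.linalg.solve(S + 0.5j * dt * H, (S - 0.5j * dt * H) @ c)
```

For a Hermitian $H$ matrix and positive definite $S$, this update multiplies each generalized eigenvector by the pure phase $(1-i\Delta t E/2)/(1+i\Delta t E/2)$, which is why it conserves the norm $\hat{C}^{\dagger}S\hat{C}$ and tolerates larger time steps than the explicit Euler update of the nonlinear parameters.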
For the 1D grid calculations, $N=5000$ equidistant grid points are used with $h=0.125$ grid spacing, and an $N=61\times 61\times 1200$ grid with $h=0.25$ is used in 3D. While very fine grid spacing can be used in 1D, it must be coarser in 3D due to the increased computational cost. The ground state energies are listed in Table I. {\rev These energies were calculated by diagonalization in the PTG and PWG cases. In the case of the grid calculations, the ground state energy was calculated by the conjugate gradient method using the codes of \cite{varga_book_cn}.} There is excellent agreement in 1D, and a slight difference between the PWG and the grid calculation in 3D. While agreement could be achieved with a finer grid, the computational cost grows rapidly as the grid is refined. We only used the PTG for the Gauss potential, so the PTG ground state energy for the other cases is not shown. \begin{table}[] \begin{tabular}{|l|l|l|l|} \hline Basis & $N$ & Potential & Energy \\\hline\hline 1D PTG & 30 & Gauss & -0.79526702 \\\hline 1D PWG & 20 & Gauss & -0.79526702 \\\hline 1D Grid & 5000 & Gauss & -0.79526702 \\\hline 1D PWG & 20 & Soft Coulomb & -0.66977138 \\\hline 1D Grid & 5000 & Soft Coulomb & -0.66977138 \\\hline 3D PWG & 30 & Soft Coulomb & -0.27489135 \\\hline 3D Grid & 4465200 & Soft Coulomb & -0.27461231 \\\hline \end{tabular} \caption{Ground state energies (in a.u.). {\rev The basis dimension is $N$ for the PWG and PTG, and the number of grid points in the 1D and 3D grid case.}} \end{table} \begin{figure} \includegraphics[width=0.95\linewidth]{figure1.eps} \caption{The laser fields used in the calculation: laser $A$, $E_0=0.25, \tau=20.5, \omega=1.0/2\pi, T=50$ (black dotted line); laser $B$, $E_0=1.0, \tau=20.5, \omega=1.0, T=50$ (red line).} \label{fig1} \end{figure} \subsection{Time propagation} Two different laser pulses are used in the calculation. The first (see Fig.
\ref{fig1}), laser $A$, has only a few cycles and moves the electron in one direction, as will be shown later. The second, laser $B$, has many cycles and moves the electron almost symmetrically left and right. The time step is $\Delta t=0.001$ a.u. in the 1D calculations, and $\Delta t=0.0005$ a.u. in the 3D calculations for both the PWG and the grid. The PTG requires a smaller time step, as we will discuss later. The PTG ground state calculation was restricted to $n=0$; to make a starting PTG basis for time propagation, the basis is doubled by adding $n=1$ states with the same $\beta_i$ parameters as the $n=0$ states. These states are needed because the matrix elements of the laser field operator $F(t)z$ are nonzero only between basis states with even $n_i+n_{i'}+1$. To start the calculation from the ground state, the linear coefficients of the $n_i=1$ basis states are set to zero. States with $n>1$ do not seem to improve the calculation. The PWG basis does not need any modification, and one can start the computation from the ground state wave function. \begin{figure} \includegraphics[angle=270,origin=c,width=0.95\linewidth]{figure2.eps} \caption{Electron densities in the Gaussian potential at $t=100$ a.u. for laser $A$ and laser $B$.} \label{fig2} \end{figure} The electron density, $\vert\psi(x,t)\vert^2$, after time propagation up to $t=100$ a.u. is compared in Fig. \ref{fig2} in the case of the Gaussian potential. The agreement between the grid and the PWG calculations is excellent. In the asymptotic region, where the density becomes smaller than $10^{-4}$, the two approaches do not fully agree. This is partly because of numerical noise, which can be decreased with a smaller time step, and partly due to the grid spacing. Test calculations show that the PTG basis can only be used with smaller time steps ($\Delta t=0.00001$ a.u.) to produce the same results as the grid and PWG.
This is because this basis easily becomes nearly linearly dependent (large overlap between basis functions), especially in the 1D case, which makes the calculation of the inverse of $M$ difficult. The other difficulty is choosing the optimal number of basis states with $n=0$ and $n=1$. It is still useful to consider the PTG basis as an alternative test, especially since in 3D the Coulomb potential can be calculated analytically for this basis (see Appendix \ref{appc}). Figures \ref{fig3} and \ref{fig4} show the energy and the occupation probability of the ground state as a function of time. The occupation probability is defined as: \begin{equation} P(t)=\vert \langle \psi(0)\vert\psi(t)\rangle \vert^2. \end{equation} The energy and the occupation probability are in excellent agreement for the grid, PTG, and PWG basis functions for both laser fields. Laser $A$ strongly ionizes the system, and the ground state occupation becomes about 0.3 after the pulse. This means (see Fig. \ref{fig2}) that the tail of the wave function has large amplitude far from the center of the potential, but the complex Gaussian basis is flexible enough to represent this. The next example is a test for a soft Coulomb potential. Since the PTG requires a much smaller time step, we exclude it from the discussion from now on. Figures \ref{fig5} and \ref{fig6} show that the approach works well for the soft Coulomb potential as well. Comparing Figs. \ref{fig3} and \ref{fig4} to \ref{fig5} and \ref{fig6} shows that the effect of the laser field is very similar for the Gauss and soft Coulomb potentials. The electron is slightly less bound in the soft Coulomb potential, and the laser causes larger excitation and ionization. \begin{figure} \includegraphics[width=0.95\linewidth]{figure3.eps} \caption{Gaussian potential with laser field $A$ in 1D. Top: Energy as a function of time for grid (solid blue line), PWG (red dashed line) and PTG (black dotted line).
Bottom: Ground state occupation probability as a function of time for grid (solid blue line), PWG (red dashed line) and PTG (black dotted line). The three lines are indistinguishable in the resolution of the figure.} \label{fig3} \end{figure} \begin{figure} \includegraphics[width=0.95\linewidth]{figure4.eps} \caption{Gaussian potential with laser field $B$ in 1D. Top: Energy as a function of time for grid (solid blue line), PWG (red dashed line) and PTG (black dotted line). Bottom: Ground state occupation probability as a function of time for grid (solid blue line), PWG (red dashed line) and PTG (black dotted line). The three lines are indistinguishable in the resolution of the figure.} \label{fig4} \end{figure} The last example covers the case of the soft Coulomb potential in 3D for lasers $A$ and $B$, illustrated in Figs. \ref{fig7} and \ref{fig8}. The agreement between the grid and PWG calculations is still very good, although the time step necessary to reach this accuracy is smaller for the PWG than in 1D. The grid calculation would converge with a time step that is 10 times larger, but we used the same time step for both the grid and the PWG for consistency. However, even with a larger time step, the grid calculation is very computationally demanding due to its large grid size. Indeed, its computational time is at least two orders of magnitude longer than that of the PWG for the soft Coulomb potential. We have also tested a restricted PTG basis, constraining the Gaussians to be spherically symmetric by choosing $\alpha=\beta$ in Eq. \eqref{basis2}. Test calculations for shorter, weaker pulses show good agreement between this restricted basis and the grid calculations, but this basis is not flexible enough for accurate calculations in the test examples presented in this work. Despite this, the result is still noteworthy because it may lead to an extension of Gaussian atomic orbitals for weak fields.
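The ground-state occupation probability $P(t)$ compared in the figures reduces, on a grid, to a discretized overlap integral. The sketch below is illustrative only: the grid and wave functions are toy stand-ins, not the production states of this work.

```python
import numpy as np

def occupation(psi0, psit, h):
    """P(t) = |<psi(0)|psi(t)>|^2 via a simple Riemann-sum overlap."""
    return abs(h * np.vdot(psi0, psit)) ** 2

# Toy stand-ins for the production wave functions:
h = 0.125
x = np.arange(-20.0, 20.0, h)
psi0 = np.exp(-x**2 / 2) / np.pi**0.25      # normalized ground-state-like Gaussian
psit = psi0 * np.exp(-0.5j * 10.0)          # evolution by a global phase only
print(occupation(psi0, psit, h))            # stays ~1: no depletion of the ground state
```

A pure phase evolution leaves $P(t)=1$, so any drop of the computed occupation below unity directly measures excitation and ionization out of the ground state.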
\begin{figure} \includegraphics[width=0.95\linewidth]{figure5.eps} \caption{Soft Coulomb potential with laser field $A$ in 1D. Top: Energy as a function of time for grid (solid blue line), PWG (red dashed line). Bottom: Ground state occupation probability as a function of time for grid (solid blue line), PWG (red dashed line).} \label{fig5} \end{figure} \begin{figure} \includegraphics[width=0.95\linewidth]{figure6.eps} \caption{Soft Coulomb potential with laser field $B$ in 1D. Top: Energy as a function of time for grid (solid blue line), PWG (red dashed line). Bottom: Ground state occupation probability as a function of time for grid (solid blue line), PWG (red dashed line).} \label{fig6} \end{figure} \begin{figure} \includegraphics[width=0.95\linewidth]{figure7.eps} \caption{Soft Coulomb potential with laser field $A$ in 3D. Top: Energy as a function of time for grid (solid blue line), PWG (red dashed line). Bottom: Ground state occupation probability as a function of time for grid (solid blue line), PWG (red dashed line).} \label{fig7} \end{figure} \begin{figure} \includegraphics[width=0.95\linewidth]{figure8.eps} \caption{Soft Coulomb potential with laser field $B$ in 3D. Top: Energy as a function of time for grid (solid blue line), PWG (red dashed line). Bottom: Ground state occupation probability as a function of time for grid (solid blue line), PWG (red dashed line).} \label{fig8} \end{figure} \rev{To test the applicability of the approach to larger systems, we have considered a two-electron system in 1D with the Hamiltonian \begin{equation} H=-{1\over 2} {d^2\over dx_1^2} -{1\over 2} {d^2\over dx_2^2}-2V(x_1)-2V(x_2)+V(x_1-x_2)+F(t)(x_1+x_2), \end{equation} with a Gaussian potential, $V(x)=e^{-\mu x^2}$, $\mu=0.1$.
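The field-free part of this two-electron Hamiltonian can be assembled on a product grid with Kronecker products. The sketch below is a toy construction only: it uses a deliberately tiny, coarse box, and it assumes the attractive-well sign convention $V(x)=e^{-\mu x^2}$ entering through the $-2V$ terms. It merely checks that the discretized Hamiltonian is Hermitian and supports a bound state.

```python
import numpy as np

mu = 0.1
def V(x):
    return np.exp(-mu * x**2)        # Gaussian potential (sign convention assumed)

h = 0.5                               # deliberately coarse toy grid
x = np.arange(-6.0, 6.0 + h, h)
n = len(x)
I = np.eye(n)

# 1D kinetic energy -1/2 d^2/dx^2 with a three-point stencil (atomic units)
T = np.diag(np.full(n, 1.0 / h**2))
j = np.arange(n - 1)
T[j, j + 1] = T[j + 1, j] = -0.5 / h**2

# Potential -2V(x1) - 2V(x2) + V(x1 - x2) sampled on the product grid
X1, X2 = np.meshgrid(x, x, indexing="ij")
U = np.diag((-2 * V(X1) - 2 * V(X2) + V(X1 - X2)).ravel())

# H acts on psi(x1, x2) flattened row-major
H = np.kron(T, I) + np.kron(I, T) + U
E0 = np.linalg.eigvalsh(H)[0]
print(np.allclose(H, H.T), E0)        # Hermitian, with a negative ground-state energy
```

A production calculation would of course use the correlated Gaussian basis rather than this grid; the grid form is only a convenient cross-check of the operator construction.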
The basis function is taken in the form \begin{equation} g_i=c_ie^{-\alpha_{1i} x_{1}^2-\alpha_{2i} x_2^2+\beta_i x_1 x_2 +k_{1i} x_1 +k_{2i} x_2} \end{equation} with six variational parameters, $\alpha_{1i},\alpha_{2i},\beta_i,k_{1i},k_{2i}$ and $c_i$ $(i=1,\ldots,N)$. The two particles are assumed to be distinguishable (one electron with spin up and one with spin down). The energy of the two-electron system as a function of time is shown in Fig. \ref{fig9}. The convergence was checked by using different starting basis sets and different basis dimensions. $N=15$ basis functions with $\Delta t=0.0001$ a.u. yield well converged results. Figure \ref{fig10} shows snapshots of the two-electron density. At $t=0$ the electrons are confined to the potential well around the origin. The laser field moves them out of the well in the positive direction ($t=30$ a.u. in Fig. \ref{fig10}), and then back toward the origin. After the peak of the laser field ($t=50$ a.u. in Fig. \ref{fig10}), two peaks appear in the density. This corresponds to a configuration where the first electron's probability distribution has a maximum close to the origin, while the second electron's probability distribution has two maxima, one to the left and one to the right of the origin. \begin{figure} \includegraphics[width=0.95\linewidth]{figure9.eps} \caption{Energy (black line) and laser field (dashed line) of a 2 electron system as a function of time. The laser parameters are $E_0=0.1, \tau=20.5, \omega=1.0, T=25$. The amplitude of the laser in the figure is multiplied by 10 for better visibility.} \label{fig9} \end{figure} \begin{figure} \includegraphics[width=0.95\linewidth]{figure10a.eps} \includegraphics[width=0.95\linewidth]{figure10b.eps} \includegraphics[width=0.95\linewidth]{figure10c.eps} \caption{Snapshots of the two-electron density at $t=0$, $t=30$ and $t=50$ a.u.
The plane axes are $x_1$ and $x_2$, the coordinates of electrons 1 and 2.} \label{fig10} \end{figure} } \section{Summary} We have used the TDVP to time propagate the wave function. The TDVP optimizes the linear and the nonlinear parameters on the same footing. The results are compared to those of grid calculations, and the accuracy of the present approach is demonstrated. We have tested various forms of basis functions, including Gaussians multiplied by polynomials, plane waves, and non-spherical Gaussians. The complex parameters of the Gaussians make the basis functions flexible enough to represent oscillatory wave functions. In addition, several potentials and laser fields were used to test the approach for different degrees of ionization. The approach has several advantages. First, a simple Gaussian basis can be used to solve time-dependent problems, which may be useful in various electronic structure codes. Second, the number of basis functions needed is considerably smaller than the number of grid points required to represent a wave function, which makes the calculation faster. Furthermore, no boundary conditions need to be enforced, and the TDVP automatically generates the Gaussians to represent the wave function in space. As the free Gaussian wave packet example (Appendix \ref{appA}) shows, the wave function can propagate from any given point to any desired distance without artificial reflections. In principle, a complex absorbing potential can also be used, in which case the number of Gaussian basis states may be smaller, because the wave function only needs to be represented in a well-defined region. The approach can be extended to larger systems using Explicitly Correlated Gaussians \cite{RevModPhys.85.693}.
\rev{The example of a 2 electron system presented in this paper shows promising results.} The main disadvantage is that the basis needs to be carefully initialized; otherwise, large overlap between basis functions can make the inversion of the $M$ matrix in Eq. \eqref{mmat} difficult. This can possibly be alleviated by using a singular value decomposition for the calculation of the inverse. It is also somewhat difficult to determine a sufficient number of basis functions and their desired initial parameters to minimize error during time propagation. The approach can be improved in several ways. Chief among them, the simple first order time propagation should be replaced with a more accurate approach. The approach can also benefit from adaptive time steps, using a larger time step for smooth regions of the time dependent potential and smaller time steps where the potential has abrupt changes. Both of these improvements would allow for larger time steps and faster calculation. One can also design a scheme to prune the number of Gaussians and add new Gaussians as needed. Finally, another possibility is to refit the wave function with a completely new set of Gaussians after a certain time interval to exclude ill-behaved basis states. \section{Acknowledgment} This work has been supported by the National Science Foundation (NSF) under Grant No. IRES 1826917.
\section{Introduction} Massive multi-input multi-output (MIMO), as a key technology in communication systems, has great advantages over traditional MIMO systems, such as higher spectral efficiency, higher energy efficiency, and higher spatial resolution \cite{6798744}. Accurate downlink channel state information (CSI) is critical for frequency division duplex (FDD) massive MIMO systems \cite{7811130}. In traditional FDD MIMO systems, downlink CSI is first estimated at the UE and then fed back to the BS. However, this feedback strategy is expensive because the large number of antennas at the BS greatly increases the dimension of the CSI matrix, thereby leading to a large overhead. To address this issue, the CSI matrix should be efficiently compressed, which can be realized by compressive sensing (CS) or deep learning (DL) techniques. Recently, DL has been shown to outperform CS with more satisfactory feedback accuracy. The DL-based image compression technique is first introduced to massive MIMO CSI feedback in \cite{8322184} based on the autoencoder model CSINet. Ye et al. propose DNNet in \cite{9076084} to achieve superior feedback performance at low signal-to-noise ratio. Learning from the inception modules in GoogLeNet, CRNet is proposed in \cite{9149229} to achieve good feedback accuracy with relatively small computational complexity. In \cite{8972904}, CSINet+ is introduced with two main modifications to CSINet: changing the convolutional kernel size and upgrading the refinement process, which improves the performance of the decoder. In these DL-based works \cite{8322184}, \cite{9149229}, \cite{8972904}, the autoencoder models work only for fixed compression rate CSI feedback. However, the communication environment is constantly changing, and relying on fixed-rate (FR) models may lead to superfluous bits and waste resources. In a changing environment, a high compression rate should be used when the CSI matrix is sparse, and vice versa \cite{7727995}.
Therefore, the compression rate should be adjusted according to the sparsity of the CSI matrices while ensuring feedback accuracy. To address this problem, Guo et al. \cite{8972904} propose a switchable structure that enables the autoencoder to handle multi-rate feedback. However, they do not propose a concrete criterion for adapting the compression rate to the changing environment, which makes the scheme hard to put into practical use. In this paper, we introduce a novel feedback framework that automatically adapts the compression rate to the change of the CSI matrices. For the encoder, we design the sparsity analysis network (SAM) to instantaneously classify the compression rate that will be used for channel feedback. We also develop a loss function that discourages SAM from making the worst kind of misclassification while not penalizing correct classifications. For the decoder, we design a {\it Dense-In-Dense} structure to achieve higher feedback accuracy. Simulation demonstrates the superiority of the proposed framework in decreasing the normalized mean square error (NMSE) and in saving feedback bits when the NMSE is kept below a threshold. \section{System Model} \begin{figure*}[t] \centering \centerline{\includegraphics[width=16.5cm]{大图2.png}} \caption{The AMR feedback framework, composed of an encoder and a decoder.} \end{figure*} We consider a single-cell downlink massive MIMO system with $N_{t}$ ($\gg 1$) transmit antennas at the BS and a single receive antenna at the UE.
The system operates with OFDM over $N_{c}$ subcarriers, and the received signal at the $n$-th subcarrier can be expressed as: \begin{equation} \begin{aligned} y_{n}=\tilde{\mathbf{h}}_{n}^{H} \mathbf{v}_{n} s_{n}+z_{n}, \quad n=1,\ldots, N_c, \end{aligned} \end{equation} where $\tilde{\mathbf{h}}_{n}$ and $\mathbf{v}_{n} \in \mathbb{C}^{N_{t} \times 1}$ are the channel frequency response vector and the precoding vector at the $n$-th subcarrier, respectively; $s_{n}$ represents the transmitted data symbol; $z_{n}$ is the additive noise. Thus, the overall CSI in the spatial-frequency domain can be expressed in matrix form as $\tilde{\mathbf{H}}=\left[\tilde{\mathbf{h}}_{1}, \tilde{\mathbf{h}}_{2}, \ldots, \tilde{\mathbf{h}}_{N_{c}}\right]^{H} \in \mathbb{C}^{N_{c} \times N_{t}}$. In FDD systems, $\tilde{\mathbf{H}}$ with $2N_{c}N_{t}$ parameters has to be fed back to the BS, whose overhead is huge with a massive number of antennas \cite{6214417}. In order to reduce the feedback overhead, the CSI matrix can be converted to the angular-delay domain with a 2D discrete Fourier transform (DFT) as: \begin{equation} \begin{aligned} \mathbf{H}=\mathbf{F}_{\mathrm{d}} \tilde{\mathbf{H}} \mathbf{F}_{\mathrm{a}}, \end{aligned} \end{equation} where $\mathbf{F}_{\mathrm{d}}$ and $\mathbf{F}_{\mathrm{a}}$ are the $N_{c} \times N_{c}$ and $N_{t} \times N_{t}$ normalized DFT matrices, respectively. In the delay domain, only the first $N_{c}'$ rows of $\mathbf{H}$ contain non-zero values because the time delay between multipath arrivals lies within a limited period \cite{9149229}, \cite{8322184}. Therefore, we can retain the first $N_{c}'$ rows of $\mathbf{H}$ and remove the remaining rows. By an abuse of notation, we still use $\mathbf{H}$ to denote the $N_{c}' \times N_{t}$ truncated channel matrix. Splitting the real and imaginary parts of each element of $\mathbf{H}$, we treat $\mathbf{H}$ as having $N=2N_{c}'N_{t}$ real-valued elements.
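The angular-delay transform of (2) and the subsequent row truncation can be sketched with a normalized DFT matrix. In the sketch below, the dimensions are scaled down from the paper's $N_c=1024$, $N_t=32$ for speed, and the channel is a random stand-in rather than a QuaDRiGa realization.

```python
import numpy as np

def dft_matrix(n):
    """Normalized (unitary) DFT matrix: F @ F.conj().T = I."""
    return np.fft.fft(np.eye(n)) / np.sqrt(n)

Nc, Nt, Nc_trunc = 64, 8, 8                    # scaled-down stand-ins for 1024, 32, 32
rng = np.random.default_rng(0)
H_sf = rng.standard_normal((Nc, Nt)) + 1j * rng.standard_normal((Nc, Nt))

Fd, Fa = dft_matrix(Nc), dft_matrix(Nt)
H_ad = Fd @ H_sf @ Fa                          # Eq. (2): angular-delay domain
H_trunc = H_ad[:Nc_trunc, :]                   # keep only the first Nc' rows

# Before truncation the transform is lossless, since Fd and Fa are unitary:
H_back = Fd.conj().T @ H_ad @ Fa.conj().T
print(np.allclose(H_back, H_sf), H_trunc.shape)
```

The truncation step is the only lossy operation here; it is justified because a physical delay-domain channel concentrates its energy in the first rows, unlike this random stand-in.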
In this paper, we propose an adaptive multi-rate (AMR) feedback framework, including an \textit{encoder} and a \textit{decoder}. At the UE, the encoder transforms $\mathbf{H}$ into an $M$-dimensional vector $\mathbf{x}$ by an encoding function $f_{\text {en }}$ as: \begin{equation} \mathbf{x}=f_{\text {en }}(\mathbf{H}), \end{equation} where $M<N$ and $M$ can be selected from a pre-determined set $\{M_1, M_2, \cdots, M_K\}$. Note that the value of $M$ requires an additional $\log_2 K$ bits to be fed back, but this overhead is usually negligible when $K$ is small. The feedback compression rate (CR) is then defined as $M_k/N$, which has $K$ options. A key property of the proposed encoder is its capability to predict which CR is most suitable for the given CSI matrix. For instance, in order to reduce feedback overhead while guaranteeing a certain feedback accuracy, the \textit{best CR} is defined as the smallest $M_k/N$ that can ensure the NMSE stays below a threshold $T_H$. The autoencoder then self-adapts to the predicted CR. After receiving $\mathbf{x}$ at the BS, the decoder inversely transforms it back into $\mathbf{\hat{H}}$ as: \begin{equation} \mathbf{\mathbf{\hat{H}}}=f_{\text {de }}(\mathbf{x}). \end{equation} The recovered channel matrix $\mathbf{\hat{\tilde{H}}}$ in the spatial-frequency domain can be obtained by applying zero filling and the inverse DFT to $\mathbf{\hat{H}}$ at the BS. \section{The Proposed Feedback Framework} In this section, we specify the designed encoder and decoder, whose structure is shown in Fig.~1. \subsection{Encoder} We choose an inception network as the structure of the encoder, which is composed of multiple branches with different representational capacity. In each branch, convolutions with multiple kernel sizes are implemented to obtain various receptive fields. In order to decrease the number of parameters, we employ asymmetric convolution with consecutive non-square kernels, \textit{e.g.}, 1$\times$3 and 3$\times$1 kernels \cite{ding2019acnet}.
The outputs of all branches are concatenated and processed by a 5$\times$5 convolution to reduce the number of channels. In this way, spatial information at various scales is aggregated. Since the storage at the UE is relatively small, the encoder convolutional neural network (CNN) is shared across rates while the encoder fully connected (FC) layers can be adjusted, reducing the burden of parameter storage. \subsection{SAM} \begin{table}[t] \begin{center} \label{tab2} \caption{Parameter Numbers Of Encoder And SAM} \begin{tabular}{|c<{\centering}|c<{\centering}|c<{\centering}|} \hline Encoder&SAM&The Ratio of SAM to Encoder \\ \hline 1572864&394232&25.06\%\\ \hline \end{tabular} \end{center} \end{table} At the UE, we design SAM to pre-process $\mathbf{H}$ before the encoder; it is composed of 4 FC layers with softmax as the activation function of the last layer. The input of SAM is $\mathbf{H}$, and the outputs of SAM are $K$ decimal numbers, each of which represents the probability that the corresponding CR is the best. The CR with the largest probability will be chosen for feedback, as shown in Fig.~1. The proposed encoder with SAM is called the AMR encoder, and the traditional encoder without SAM is called the FR encoder. Note that the number of parameters of SAM is relatively small compared to that of the encoder, as shown in Table \uppercase\expandafter{\romannumeral1}. Thus, the additional parameter storage resources that the AMR encoder requires are relatively small. We define true examples (TE) as correct classifications of $\mathbf{H}$, false positive examples (FPE) as misclassifications of $\mathbf{H}$ into larger CRs, and false negative examples (FNE) as misclassifications of $\mathbf{H}$ into smaller CRs.
In the case of TE and FPE, the NMSE is below $T_H$, while in the case of FNE, the NMSE is above $T_H$. In order to increase the probability of maintaining the NMSE below $T_H$, we design the weighted cross-entropy loss function as: \begin{equation} \mathcal{L}_{i}^{w}=\sum_{k=1}^{K}-y_{i k} \log \left(\hat{y}_{i k}\right) -\gamma_{k}\left(1-y_{i k}\right) \log \left(1-\hat{y}_{i k}\right), \end{equation} where $y_{i}$ is the ground-truth one-hot vector for the $i$-th sample and $\hat{y}_{i}$ is the output of SAM for the $i$-th sample. The loss function (5) penalizes misclassification into smaller CRs more aggressively by setting a larger $\gamma_{k}$ for a smaller CR. If all $\gamma_{k}$ are set to 1, then (5) degenerates into the unweighted loss function. Different from the traditional weighted cross-entropy loss, we do not adjust the weights of TE \cite{9277638}. Instead, we only tune the weights of FPE and FNE to increase the probability of maintaining the NMSE below $T_H$. \subsection{Decoder} In the decoder, we design an improved residual dense network (RDNet) \cite{8964437} named RDNet+. \subsubsection{Inner-RDBlock Structure} \textcolor[rgb]{0,0,1}{RDNet+ inherits the basic units of RDNet, named RDBlocks, as shown in Fig.~1. RDNet+ has 4 RDBlocks, each of which consists of 3 densely connected layers (DCL), the squeeze and excitation (SE) module, and the local residual learning (LRL) module \cite{8964437}.} The input of every layer of the DCL has direct access to all the subsequent layers, which passes on information that needs to be preserved and leads to an implicit deep supervision. SE concatenates the outputs of all the preceding layers within the current RDBlock and adopts a channel attention algorithm to exploit the inter-row relationship of the input tensor.
LRL is realized in two steps: 1) a 1$\times$1 convolutional layer is implemented to reduce the number of channels to the same as the input of the RDBlock; 2) the last layer's output is added to the original input of the current RDBlock, which reduces the risk of vanishing gradients. \subsubsection{Inter-RDBlock Structure} \textcolor[rgb]{0,0,1}{The difference between RDNet and RDNet+ is that RDNet+ has an enhanced inter-RDBlock structure.} To improve the information flow among RDBlocks, we introduce direct connections from each RDBlock to all subsequent RDBlocks in RDNet+, \textit{i.e.}, the RDBlocks are also densely connected. Since an inter-RDBlock connection can create a longer skip over layers than an inner-RDBlock dense connection, the inter-RDBlock connections are much more significant than the inner ones. Define $\mathbf{F}_{i}$ as the output of the $i$-th RDBlock, which satisfies the recursive relationship: \begin{equation} \mathbf{F}_{i}=\sigma_{i}(\mathbf{F}_{i-1}, \mathbf{F}_{i-2}, \cdots, \mathbf{F}_{0}), \end{equation} where $\sigma_{i}(\cdot)$ denotes the function of the $i$-th RDBlock. This recursive relationship brings many benefits, which we illustrate from the viewpoint of back propagation in the following steps. Suppose $i_1, i_2 \in \mathbb{Z}$, $0<i_1<i_2$, and define $\mathbf{f}_i\triangleq \vec{\mathbf{F}}_i$.
During back propagation for gradient descent, the partial derivative of $\mathbf{f}_{i_{2}}$ with respect to $\mathbf{f}_{i_{1}}$ is calculated as: \begin{equation} \begin{aligned} &\frac{\partial \mathbf{f}_{i_{2}}}{\partial \mathbf{f}_{i_{1}}} = \sum_{k=1}^{i_{2}-i_{1}} \frac{\partial \sigma_{i_{2}}}{\partial \mathbf{f}_{i_{2}-k}} \frac{\partial \mathbf{f}_{i_{2}-k}}{\partial \mathbf{f}_{i_{1}}}\\ &=\sum_{k=1}^{i_{2}-i_{1}} \sum_{i_{1}<p_{1}<\ldots <p_{k}<i_{2}}\frac{\partial \sigma_{i_{2}}}{\partial \mathbf{f}_{i_{2}-p_{1}}} \frac{\partial \sigma_{i_{2}-p_{1}}}{\partial \mathbf{f}_{i_{2}-p_{2}}} \ldots \frac{\partial \sigma_{i_{2}-p_{k}}}{\partial \mathbf{f}_{i_{1}}}. \\ \end{aligned} \end{equation} In deep networks, the vanishing gradient problem is likely to occur, while in shallow neural networks, neither the learning ability nor the generalization ability can be strengthened. However, the proposed recursive relationship among the $\mathbf{F}_{i}$ ameliorates both problems. No matter how large $i_2-i_1$ may be, there are invariably products of relatively few matrices in (7), creating short connections and effectively preventing vanishing gradients. There are also products of relatively many matrices in (7), creating long connections and enhancing the decoder's ability to learn and generalize \cite{7820046}, like \textit{Res-In-Res} multilevel residual networks. We name this inter- and inner-block dense connectivity Dense-in-Dense (DID). DID encourages information reuse throughout the decoder, which ensures that the hierarchical information of all RDBlocks' outputs is sufficiently utilized. Although the DID structure brings a heavier computational burden, it shows the noticeable advantage of eliminating the negative impact of the decoder network going too deep. After concatenating the outputs of all the preceding RDBlocks, we extract the global feature of the whole decoder by applying a 3$\times$3 convolutional layer, which is named global feature fusion (GFF).
We then add the input of the decoder to the output of GFF, which is named global residual learning (GRL). GFF reduces the number of channels to the same as the original input of the decoder, which makes GRL possible. It should be noted that GRL is essential to accelerating the training of the proposed complicated decoder. \section{Simulation Results and Analysis} In this section, we provide various simulations to verify the effectiveness as well as the superiority of the proposed framework. \subsection{Comparison among Decoders} \begin{table}[t] \begin{center} \label{tab2} \caption{\textcolor[rgb]{0,0,1}{NMSE(dB) Comparison Between Different Networks}} \begin{tabular}{|c<{\centering}|c<{\centering}|c<{\centering}|c<{\centering}| c<{\centering}|} \hline CR&Decoder&FLOPs&NMSE(indoor)&NMSE(outdoor)\\ \hline \makecell[c]{1/4}&\makecell[c]{CSINet+\\CRNet\\DS-NLCSINet\\ACRNet-1x\\RDNet\\RDNet+} &\makecell[c]{24.57M\\5.12M\\11.30M\\\textbf{4.64M}\\5.36M\\5.97M} &\makecell[c]{-27.37\\-26.99\\-24.99\\-27.16\\-25.14\\\textbf{-27.50}} &\makecell[c]{-12.40\\\textbf{-12.70}\\-12.09\\-10.71\\-11.02\\-12.36}\\ \hline \makecell[c]{1/8}&\makecell[c]{CSINet+\\CRNet\\DS-NLCSINet\\ACRNet-1x\\RDNet\\RDNet+} &\makecell[c]{23.52M\\4.07M\\10.25M\\\textbf{3.60M}\\4.26M\\4.75M} &\makecell[c]{-18.29\\-16.01\\-17.00\\-15.34\\-15.98\\\textbf{-18.76}} &\makecell[c]{\textbf{-8.72}\\-8.04\\-7.96\\ -7.85\\-7.24\\-8.14}\\ \hline \makecell[c]{1/16}&\makecell[c]{CSINet+\\CRNet\\DS-NLCSINet\\ACRNet-1x\\RDNet\\RDNet+} &\makecell[c]{23.00M\\3.55M\\9.72M\\\textbf{3.07M}\\3.39M\\3.77M} &\makecell[c]{\textbf{-14.14}\\-11.35\\-12.93\\-10.36\\-12.75\\-14.09} &\makecell[c]{\textbf{-5.73}\\ -5.44\\-4.98\\-5.19\\-4.59\\-5.14}\\ \hline \end{tabular} \end{center} \end{table} \textcolor[rgb]{0,0,1}{We first compare the NMSE performance of the following 6 decoders based on the FR scheme on the dataset in \cite{8972904}: CSINet+, CRNet, DS-NLCSINet, ACRNet-1x, RDNet, and RDNet+ in Table
\uppercase\expandafter{\romannumeral2}. It is seen from Table \uppercase\expandafter{\romannumeral2} that the NMSE increases as the CR decreases, because the smaller the CR is, the fewer bits are fed back and the poorer the performance is. In terms of NMSE, RDNet+ outperforms the traditional decoders DS-NLCSINet, ACRNet-1x and RDNet under all CRs for both indoor and outdoor scenarios. Although the NMSE of RDNet+ is slightly higher than that of CSINet+, RDNet+ requires far fewer FLOPs than CSINet+, which means RDNet+ is feasible in practice. The advantage of the proposed RDNet+ decoder is that it strikes a balance between feedback accuracy and computational overhead.} \subsection{Comparison between Encoders} \subsubsection{Experiment Setting} We consider the FDD massive MIMO system with $N_{t}$ = 32, $N_{c}$ = 1024, $N_{c}'$ = 32, and $N=2N_{c}'N_{t}=2048$, operating in the 2.6 GHz band. Moreover, $\mathbf{H}$ is generated following the default setting in QuaDRiGa \cite{6758357}. To demonstrate the benefits of automatically selecting the CR, we need to generate channels with different sparsity, \textit{i.e.}, we generate $Q$ groups of $\mathbf{H}$ with $Q$ different numbers of clusters, each group having 20000 $\mathbf{H}$, and $Q=15$. The first group is generated with the most clusters, and subsequent groups are generated with increasingly scarce clusters. The change in the number of clusters reflects the change in the UE's environment, which causes the CR to change in real time. A total of 300000 independently generated channels are randomly split into the training, validation, and test datasets with 180000, 60000 and 60000 CSI matrices, respectively, and the generation methods will be elaborated in the next subsection. The batch size is set to 256, and $T_H$ is set to $-10$. Besides, $M_k$ is selected among 4 candidates: 512, 256, 128, and 64, \textit{i.e.}, CR$=M_k/N$ is selected among 4 candidates: $1/4$, $1/8$, $1/16$, and $1/32$. The whole pipeline is implemented in PyTorch.
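The weighted cross-entropy of (5) that SAM is trained with is straightforward to write out. The numpy sketch below is a stand-in for the actual PyTorch training loss; the one-hot label and softmax output are hypothetical values for a single sample, and with all $\gamma_k=1$ the function reduces to the unweighted loss.

```python
import numpy as np

def weighted_ce(y, y_hat, gamma, eps=1e-12):
    """Eq. (5): sum_k [ -y_k log(yhat_k) - gamma_k (1 - y_k) log(1 - yhat_k) ]."""
    y, y_hat, gamma = (np.asarray(a, dtype=float) for a in (y, y_hat, gamma))
    return float(np.sum(-y * np.log(y_hat + eps)
                        - gamma * (1.0 - y) * np.log(1.0 - y_hat + eps)))

y = [0, 1, 0, 0]                       # ground-truth one-hot CR class (hypothetical)
y_hat = [0.1, 0.7, 0.1, 0.1]           # softmax output of SAM (hypothetical)
gammas = [1.00, 1.05, 1.10, 1.15]      # larger weight for smaller CR
loss_w = weighted_ce(y, y_hat, gammas)
loss_u = weighted_ce(y, y_hat, [1, 1, 1, 1])
print(loss_w, loss_u)                  # weighting penalizes probability mass on non-target classes more
```

Since every $\gamma_k \geq 1$ acts only on the $(1-y_k)$ terms, the weighted loss never falls below the unweighted one for the same prediction, which is exactly the asymmetry used to suppress FNE.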
The Xavier initialization is applied to both the convolution layers and the FC layers. We use the Adam optimizer with the default setting ($\beta_{1}$ = 0.9, $\beta_{2}$ = 0.999, $\epsilon$ = 1e-8) and the mean square error (MSE) loss. The network is trained for 500 epochs. In order to evaluate the performance of channel feedback, we measure the distance between the original $\mathbf{H}$ and the reconstructed $\hat{\mathbf{H}}$ with the NMSE defined as: \begin{equation} \mathrm{NMSE}=E\left\{\frac{\left\|\mathbf{H}-\hat{\mathbf{H}}\right\|_{2}^{2}}{\left\|\mathbf{H}\right\|_{2}^{2}}\right\}. \end{equation} We first train the autoencoder with CR$=1/4$ until more epochs bring no improvement. We then freeze the parameters in the encoder's CNN and train the encoder's FC layers as well as the decoders with CR$=1/8, 1/16, 1/32$, respectively. We then utilize the sufficiently trained autoencoder to label each $\mathbf{H}$ with the correct CR class, defined as the smallest CR that ensures the NMSE is below $T_H$. In the loss function (5), $\gamma_{k}$ is set to $1.00, 1.05, 1.10, 1.15$ corresponding to CR$=1/4, 1/8, 1/16, 1/32$. \subsubsection{Experiment Results} \begin{figure}[t] \begin{center} \includegraphics[width=8.7cm]{上帝视角.png} \caption{\textcolor[rgb]{0,0,1}{The difference in feedback bits for each $\mathbf{H}$ between the AMR encoder and the FR encoder under different $T_H$.}} \end{center} \end{figure} \textcolor[rgb]{0,0,1}{We first compare the mean feedback bits of $\mathbf{H}$ between the following two types of encoders: the proposed AMR encoder and the traditional FR encoder in Fig.~2, where the best scheme is the ground-truth scheme that picks exactly the best CR for each batch, and all three schemes are based on the same multiple compression ratio framework.} For each one of the $Q$ groups, the CR is selected in two different ways.
For the AMR encoder, the CR is automatically selected by SAM for each $\mathbf{H}$, while for the FR encoder, the user has to select the smallest CR that ensures the NMSE of each batch in the whole group is below $T_H$. For both types of encoders, each floating-point number is uniformly quantized into 4 bits at the UE, and these 4 bits are uniformly dequantized back into a floating-point number at the BS. It is seen from Fig.~2 that the smaller $T_H$ is, the more bits are fed back for each $\mathbf{H}$. For the AMR encoder, the higher the group index is, the fewer bits are fed back for each $\mathbf{H}$, because the subsequent groups added to the test are generated with increasingly scarce clusters, which makes the subsequent $\mathbf{H}$ increasingly sparse. For the FR encoder, the feedback bits remain unchanged in the beginning and decrease afterwards, because all the beginning groups have to adopt CR=$1/4$. For all $T_H$, the bits fed back by the AMR encoder are roughly $3/4$ of those of the FR encoder for all groups, which demonstrates the superiority of the proposed encoder under dynamic changes of the environment. \begin{figure}[t] \centering \subfigure[Confusion Matrix of SAM with Unweighted Cross-Entropy Loss]{ \includegraphics[width=5.9cm]{simu3.png}} \subfigure[Confusion Matrix of SAM with Weighted Cross-Entropy Loss]{ \includegraphics[width=5.9cm]{simu4.png}} \caption{Comparison of the confusion matrices of SAM with weighted and unweighted cross-entropy loss.} \label{1} \end{figure} Next, we verify the advantages of the designed weighted cross-entropy loss function (5) by comparing it with the unweighted cross-entropy loss function, and the results are shown in Fig.~3 as heatmaps. The column coordinates indicate the correct CR class, and the row coordinates indicate the predicted CR class. The sum of the elements above the diagonal represents the probability of FNE, which is 1.29e-3 and 1.47e-3 for the weighted and unweighted loss functions, respectively.
Since NMSE is above $T_H$ only in case of FNE, the probability of NMSE below $T_H$ is 99.871$\%$ and 99.853$\%$ respectively. Hence, the designed (5) increases the probability that NMSE is below $T_H$, which proves the superiority of the proposed weighted cross-entropy loss function. \section{Conclusion} In this paper, we proposed an AMR CSI feedback framework. In order to automatically select CR at the UE, we designed SAM that is trained with weighted cross-entropy loss. When the channel between the BS and the UE changes dynamically, the AMR framework can reduce feedback overhead while maintaining a certain feedback accuracy. Moreover, we designed the decoder RDNet+ that has Dense-In-Dense structure. Simulation demonstrated that the proposed AMR framework is superior to the traditional FR framework, and the proposed decoder RDNet+ is superior to the traditional RDNet. \small \bibliographystyle{ieeetr} \section{Introduction} Massive multi-input multi-output (MIMO), as a key technology in communication systems, has great advantages over traditional MIMO systems, such as higher spectral efficiency, higher energy efficiency, and higher spatial resolution \cite{6798744}. Accurate downlink channel state information (CSI) is critical for frequency division duplex (FDD) massive MIMO systems \cite{7811130}. In traditional FDD MIMO systems, downlink CSI is first estimated at the UE and then fed back to the BS. However, this feedback strategy is expensive because the substantial antennas at the BS greatly increase the dimension of the CSI matrix, thereby leading to a large overhead. To address this issue, the CSI matrix should be efficiently compressed, which can be realized by compressive sensing (CS) or deep learning (DL) techniques. Recently, DL has been proven better than CS with more satisfactory feedback accuracy. The DL-based image compression technique is first introduced to massive MIMO CSI feedback in \cite{8322184} based on autoencoder models of CSINet. Ye et al. 
propose DNNet in \cite{9076084} to achieve superior feedback performance at low signal-to-noise ratio. Borrowing the inception modules of GoogLeNet, CRNet is proposed in \cite{9149229} to achieve high feedback accuracy with relatively low computational complexity. In \cite{8972904}, CSINet+ is introduced with two main modifications to CSINet, changing the convolutional kernel size and upgrading the refinement process, which improve the performance of the decoder. In these DL-based works \cite{8322184}, \cite{9149229}, \cite{8972904}, the autoencoder models only support CSI feedback at a fixed compression rate. However, the communication environment changes constantly, and relying on fixed-rate (FR) models may produce superfluous bits and waste resources. In a changing environment, a high compression rate should be used when the CSI matrix is sparse, and vice versa \cite{7727995}. Therefore, the compression rate should be adjusted according to the sparsity of the CSI matrices while ensuring feedback accuracy. To address this problem, Guo et al. \cite{8972904} propose a switchable structure that enables the autoencoder to handle multi-rate feedback. However, they do not provide a concrete criterion for adapting the compression rate to the change of the environment, which makes the scheme hard to put into practical use. In this paper, we introduce a novel feedback framework that automatically adapts the compression rate to the change of the CSI matrices. For the encoder, we design the sparsity analysis network (SAM) to instantaneously classify the compression rate that will be used for channel feedback. We also develop a loss function that discourages SAM from making the worst kind of misclassification while avoiding interference with correct classifications. For the decoder, we design a {\it Dense-In-Dense} structure to achieve higher feedback accuracy.
Simulation demonstrates the superiority of the proposed framework in decreasing normalized mean square error (NMSE) and saving feedback bits when NMSE is kept below a threshold. \section{System Model} \begin{figure*}[t] \centering \centerline{\includegraphics[width=16.5cm]{大图2.png}} \caption{The AMR feedback framework is composed of encoder and decoder.} \end{figure*} We consider a single-cell downlink massive MIMO system with $N_{t}$($\gg1$) transmit antennas at the BS and a single receive antenna at the UE. The system operates in OFDM over $N_{c}$ subcarriers, and the received signal at the $n$-th subcarrier can be expressed as: \begin{equation} y_{n}=\tilde{\mathbf{h}}_{n}^{H} \mathbf{v}_{n} s_{n}+z_{n}, \quad n=1,\ldots, N_c, \end{equation} where $\tilde{\mathbf{h}}_{n}$ and $\mathbf{v}_{n} \in \mathbb{C}^{N_{t} \times 1}$ are the channel frequency response vector and the precoding vector at the $n$-th subcarrier, respectively; $s_{n}$ represents the transmitted data symbol; $z_{n}$ is the additive noise. Thus, the overall CSI in the spatial-frequency domain can be expressed in matrix form as $\tilde{\mathbf{H}}=\left[\tilde{\mathbf{h}}_{1}, \tilde{\mathbf{h}}_{2}, \ldots, \tilde{\mathbf{h}}_{N_{c}}\right]^{H} \in \mathbb{C}^{N_{c} \times N_{t}}$. In FDD systems, $\tilde{\mathbf{H}}$ with $2N_{c}N_{t}$ real parameters has to be fed back to the BS, whose overhead is huge with a massive number of antennas \cite{6214417}. In order to reduce the feedback overhead, the CSI matrix can be converted to the angular-delay domain with the 2D discrete Fourier transform (DFT) as: \begin{equation} \mathbf{H}=\mathbf{F}_{\mathrm{d}} \tilde{\mathbf{H}} \mathbf{F}_{\mathrm{a}}, \end{equation} where $\mathbf{F}_{\mathrm{d}}$ and $\mathbf{F}_{\mathrm{a}}$ are the $N_{c} \times N_{c}$ and $N_{t} \times N_{t}$ normalized DFT matrices, respectively.
In the delay domain, only the first $N_{c}'$ rows of $\mathbf{H}$ contain non-zero values, because the time delay between multipath arrivals lies within a limited period \cite{9149229}, \cite{8934725}. Therefore, we can retain the first $N_{c}'$ rows of $\mathbf{H}$ and remove the remaining rows. By an abuse of notation, we still use $\mathbf{H}$ to denote the $N_{c}' \times N_{t}$ truncated channel matrix. Splitting the real and imaginary parts of each element of $\mathbf{H}$, we regard $\mathbf{H}$ as having $N=2N_{c}'N_{t}$ real-valued elements. In this paper, we propose an adaptive multi-rate (AMR) feedback framework, including an \textit{encoder} and a \textit{decoder}. At the UE, the encoder transforms $\mathbf{H}$ into an $M$-dimensional vector $\mathbf{x}$ by an encoding function $f_{\text{en}}$ as: \begin{equation} \mathbf{x}=f_{\text{en}}(\mathbf{H}), \end{equation} where $M<N$ and $M$ can be selected from a pre-determined set $\{M_1, M_2, \cdots, M_K\}$. Note that the selected value of $M$ requires an additional $\log_2 K$ bits to be fed back, but this overhead is usually negligible when $K$ is small. The feedback compression rate (CR) is then defined as $M_k/N$, which has $K$ options. A key property of the proposed encoder is its capability to predict which CR is most suitable according to the CSI matrix. Specifically, in order to reduce the feedback overhead while guaranteeing a certain feedback accuracy, the \textit{best CR} is defined as the smallest $M_k/N$ that can ensure NMSE below a threshold $T_H$. Then, the autoencoder self-adapts to the predicted CR. After receiving $\mathbf{x}$ at the BS, the decoder inversely transforms it back into $\hat{\mathbf{H}}$ as: \begin{equation} \hat{\mathbf{H}}=f_{\text{de}}(\mathbf{x}). \end{equation} The recovered channel matrix $\hat{\tilde{\mathbf{H}}}$ in the spatial-frequency domain can be obtained by applying zero filling and the inverse DFT to $\hat{\mathbf{H}}$ at the BS.
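As a concrete illustration, the angular-delay transform (2) and the row truncation described above can be sketched in NumPy as follows. The dimensions match the simulation setting of Section IV ($N_c=1024$, $N_t=32$, $N_c'=32$); the random Gaussian channel is a stand-in for a QuaDRiGa realization and is an assumption for illustration only.

```python
import numpy as np

# Dimensions from the simulation setting of Section IV.
Nc, Nt, Nc_trunc = 1024, 32, 32

rng = np.random.default_rng(0)
# Placeholder spatial-frequency channel (a real channel would come from QuaDRiGa).
H_sf = rng.standard_normal((Nc, Nt)) + 1j * rng.standard_normal((Nc, Nt))

# Normalized (unitary) DFT matrices F_d (Nc x Nc) and F_a (Nt x Nt).
F_d = np.fft.fft(np.eye(Nc)) / np.sqrt(Nc)
F_a = np.fft.fft(np.eye(Nt)) / np.sqrt(Nt)

H_ad = F_d @ H_sf @ F_a        # angular-delay domain, as in (2)
H_trunc = H_ad[:Nc_trunc, :]   # keep only the first Nc' rows

# Real-valued representation with N = 2 * Nc' * Nt elements, as in Section II.
H_real = np.stack([H_trunc.real, H_trunc.imag])
assert H_real.size == 2 * Nc_trunc * Nt  # N = 2048
```

Because the DFT matrices are unitary, the inverse transform at the BS is simply the conjugate transpose applied after zero filling.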
\section{The Proposed Feedback Framework} In this section, we specify the designed encoder and decoder, whose structures are shown in Fig.~1. \subsection{Encoder} We adopt an inception network as the structure of the encoder, which is composed of multiple branches with different representational capacities. In each branch, convolutions with multiple kernel sizes are implemented to obtain various receptive fields. In order to decrease the number of parameters, we employ asymmetric convolution with consecutive non-square kernels, \textit{e.g.}, 1$\times$3 and 3$\times$1 kernels \cite{ding2019acnet}. The outputs of all branches are concatenated and processed by a 5$\times$5 convolution to reduce the channel number. In this way, spatial information at various scales is aggregated. Considering that the storage at the UE is relatively limited, the encoder convolutional neural network (CNN) is shared while the encoder fully connected (FC) layers can be adjusted, which reduces the burden of parameter storage. At the UE, we design SAM to pre-process $\mathbf{H}$ before the encoder; it is composed of 4 FC layers with the softmax activation in the last layer. The input of SAM is $\mathbf{H}$, and the outputs of SAM are $K$ decimal numbers, each of which represents the probability that the corresponding CR is the best. The CR with the largest probability is chosen for feedback, as shown in Fig.~1. The proposed encoder with SAM is called the AMR encoder, and the traditional encoder without SAM is called the FR encoder. We define true examples (TE) as correct classifications of $\mathbf{H}$, false positive examples (FPE) as misclassifications of $\mathbf{H}$ into larger CRs, and false negative examples (FNE) as misclassifications of $\mathbf{H}$ into smaller CRs.
In case of TE and FPE, NMSE is below $T_H$, while in case of FNE, NMSE is above $T_H$. In order to increase the probability of maintaining NMSE below $T_H$, we design the weighted cross-entropy loss function as: \begin{equation} \mathcal{L}_{i}^{w}=\sum_{k=1}^{K}-y_{i k} \log \left(\hat{y}_{i k}\right) -\gamma_{k}\left(1-y_{i k}\right) \log \left(1-\hat{y}_{i k}\right), \end{equation} where $y_{i}$ is the ground-truth one-hot vector for the $i$-th sample and $\hat{y}_{i}$ is the output of SAM for the $i$-th sample. The loss function (5) penalizes misclassification into smaller CRs more aggressively by setting a larger $\gamma_{k}$ for a smaller CR. If all $\gamma_{k}$ are set to 1, then (5) degenerates into the unweighted loss function. Different from the traditional weighted cross-entropy loss, we do not adjust the weights of TE \cite{9277638}. Instead, we only tune the weights of FPE and FNE to increase the probability of maintaining NMSE below $T_H$. \subsection{Decoder} For the decoder, we design RDNet+, an improved version of the residual dense network (RDNet) \cite{8964437}. \subsubsection{Inner-RDBlock Structure} RDNet+ inherits the basic units of RDNet, named RDBlocks, as shown in Fig.~1. Each RDBlock consists of 3 densely connected layers (DCL), a squeeze-and-excitation (SE) module, and a local residual learning (LRL) module \cite{8964437}. The output of every layer of the DCL has direct access to all subsequent layers, which passes on information that needs to be preserved and leads to an implicit deep supervision. SE concatenates the outputs of all the preceding layers within the current RDBlock and adopts a channel attention algorithm to exploit the inter-row relationship of the input tensor. LRL is realized in two steps: 1) a 1$\times$1 convolutional layer is implemented to reduce the channel number to the same as that of the input of the RDBlock; 2) the last layer's output is added to the original input of the current RDBlock, which reduces the risk of vanishing gradient.
\subsubsection{Inter-RDBlock Structure} The difference between RDNet and RDNet+ is that RDNet+ has an enhanced inter-RDBlock structure. To improve the information flow among RDBlocks, we introduce direct connections from each RDBlock to all subsequent RDBlocks in RDNet+, \textit{i.e.}, the RDBlocks are also densely connected. Since an inter-RDBlock connection can skip over more layers than an inner-RDBlock dense connection, the inter-RDBlock connections are much more significant than the inner ones. Define $\mathbf{F}_{i}$ as the output of the $i$-th RDBlock, which obeys the recursive relationship: \begin{equation} \mathbf{F}_{i}=\sigma_{i}(\mathbf{F}_{i-1}, \mathbf{F}_{i-2}, \cdots, \mathbf{F}_{0}), \end{equation} where $\sigma_{i}(\cdot)$ denotes the function of the $i$-th RDBlock. This recursive relationship brings substantial benefits, which can be illustrated from the viewpoint of back propagation as follows. Suppose $i_1, i_2 \in \mathbb{Z}$, $0<i_1<i_2$, and define $\mathbf{f}_i\triangleq \vec{\mathbf{F}}_i$. During back propagation for gradient descent, the partial derivative of $\mathbf{f}_{i_{2}}$ with respect to $\mathbf{f}_{i_{1}}$ is calculated as: \begin{equation} \begin{aligned} \frac{\partial \mathbf{f}_{i_{2}}}{\partial \mathbf{f}_{i_{1}}} &= \sum_{k=1}^{i_{2}-i_{1}} \frac{\partial \sigma_{i_{2}}}{\partial \mathbf{f}_{i_{2}-k}} \frac{\partial \mathbf{f}_{i_{2}-k}}{\partial \mathbf{f}_{i_{1}}}\\ &=\sum_{k=1}^{i_{2}-i_{1}} \sum_{i_{1}<p_{1}<\ldots <p_{k}<i_{2}}\frac{\partial \sigma_{i_{2}}}{\partial \mathbf{f}_{i_{2}-p_{1}}} \frac{\partial \sigma_{i_{2}-p_{1}}}{\partial \mathbf{f}_{i_{2}-p_{2}}} \ldots \frac{\partial \sigma_{i_{2}-p_{k}}}{\partial \mathbf{f}_{i_{1}}}. \end{aligned} \end{equation} In deep networks, the vanishing gradient problem is likely to occur, while in shallow networks, neither the learning ability nor the generalization ability can be strengthened. However, the proposed recursive relationship among $\mathbf{F}_{i}$ can ameliorate both problems.
No matter how large $i_2-i_1$ may be, (7) always contains products of relatively few matrices, creating short connections and effectively preventing vanishing gradients. It also contains products of relatively many matrices, creating long connections and enhancing the decoder's learning and generalization ability \cite{7820046}, similar to \textit{Res-In-Res} multilevel residual networks. We name this inter- and inner-structure of dense connectivity Dense-in-Dense (DID). DID encourages information reuse throughout the decoder, which ensures that the hierarchical information in all RDBlocks' outputs is sufficiently utilized. Although the DID structure brings a heavier calculation burden, it shows the noticeable advantage of eliminating the negative impact of an overly deep decoder network. After concatenating the outputs of all the preceding RDBlocks, we extract the global feature of the whole decoder by applying a 3$\times$3 convolutional layer, which is named global feature fusion (GFF). We then add the input of the decoder to the output of GFF, which is named global residual learning (GRL). GFF reduces the channel number to the same as that of the original input of the decoder, which makes GRL possible. It should be noted that GRL is essential to accelerating the training of the proposed complicated decoder. \section{Simulation Results and Analysis} In this section, we provide various simulations to verify the effectiveness as well as the superiority of the proposed framework. \subsection{Experiment Setting} We consider an FDD massive MIMO system with $N_{t}=32$, $N_{c}=1024$, $N_{c}'=32$, and $N=2N_{c}'N_{t}=2048$, operating in the 2.6 GHz band. Moreover, $\mathbf{H}$ is generated following the default setting in QuaDRiGa \cite{6758357}.
To demonstrate the benefits of automatically selecting CR, we need to generate channels with different sparsity, \textit{i.e.}, we generate $Q=15$ groups of $\mathbf{H}$ with $Q$ different numbers of clusters, each group containing 20000 $\mathbf{H}$. The first group is generated with the most clusters, and subsequent groups are generated with fewer and fewer clusters. The change of the number of clusters reflects the change of the UE's environment, which causes the CR to change in real time. A total of 300000 independently generated channels are randomly split into the training, validation, and test datasets with 180000, 60000, and 60000 CSI matrices, respectively. The batch size is set to 256, and $T_H$ is set to $-10$ dB. Besides, $M_k$ is selected among 4 candidates: 512, 256, 128, and 64, \textit{i.e.}, CR$=M_k/N$ is selected among 4 candidates: $1/4$, $1/8$, $1/16$, and $1/32$. The whole pipeline is implemented in PyTorch. The Xavier initialization is applied to both the convolution layers and the FC layers. We use the Adam optimizer with the default setting ($\beta_{1}=0.9$, $\beta_{2}=0.999$, $\epsilon=10^{-8}$) and the mean square error (MSE) loss. The network is trained for 500 epochs. In order to evaluate the performance of channel feedback, we measure the distance between the original $\mathbf{H}$ and the reconstructed $\hat{\mathbf{H}}$ with the NMSE defined as: \begin{equation} \mathrm{NMSE}=E\left\{\frac{\left\|\mathbf{H}-\hat{\mathbf{H}}\right\|_{2}^{2}}{\left\|\mathbf{H}\right\|_{2}^{2}}\right\}. \end{equation} We first train the autoencoder with CR$=1/4$ until more epochs bring no improvement. We then freeze the parameters of the encoder's CNN and train the encoder's FC layers as well as the decoders with CR$=1/8, 1/16, 1/32$, respectively. We then utilize the sufficiently trained autoencoder to label each $\mathbf{H}$ with the correct class of CR, defined as the smallest CR that ensures NMSE below $T_H$.
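The NMSE metric in (8), reported in dB throughout the evaluation, can be sketched as follows. The batch shape and the synthetic error of 1\% relative power are assumptions for illustration only.

```python
import numpy as np

def nmse_db(H, H_hat):
    """Batched NMSE of (8) in dB: mean of ||H - H_hat||^2 / ||H||^2 (Frobenius)."""
    axes = tuple(range(1, H.ndim))
    err = np.sum((H - H_hat) ** 2, axis=axes)
    power = np.sum(H ** 2, axis=axes)
    return 10.0 * np.log10(np.mean(err / power))

rng = np.random.default_rng(0)
H = rng.standard_normal((256, 2, 32, 32))        # a batch of real-valued channels
H_hat = H + 0.1 * rng.standard_normal(H.shape)   # noise with 1% relative power

val = nmse_db(H, H_hat)   # about -20 dB for 1% relative error power
```

Under this convention, the threshold $T_H=-10$ dB corresponds to a reconstruction error of at most 10\% of the channel power.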
In the loss function (5), $\gamma_{k}$ is set as $1.00, 1.05, 1.10, 1.15$ corresponding to CR$=1/4, 1/8, 1/16, 1/32$. \subsection{Experiment Results} We first investigate the relationship between the NMSE and the statistical characteristic $s$ that measures the sparsity of $\mathbf{H}$, where $s$ is defined based on the ratio between the $L_1$ norm and the $L_2$ norm as: \begin{equation} s\left(\mathbf{H}\right)=\frac{\sqrt{N}-\left\|\mathbf{H}\right\|_{1}/\left\|\mathbf{H}\right\|_{2}}{\sqrt{N}-1}. \end{equation} The relationship between $s$ and the NMSE when CR=$1/4$ is not significant according to Fig.~\ref{L0}. Thus, we need SAM to uncover the intrinsic relationship between $\mathbf{H}$ and its NMSE. \begin{figure}[t] \begin{center} \includegraphics[width=7cm]{simu2'.png} \caption{The NMSE versus the sparsity statistic $s$ when CR$=1/4$.} \label{L0} \end{center} \end{figure} \subsubsection{Comparison between Encoders} \begin{figure}[t] \begin{center} \includegraphics[width=8.7cm]{fix.jpg} \caption{The difference of feedback bits for each $\mathbf{H}$ between the proposed framework and the traditional framework under different $T_H$.} \end{center} \end{figure} We first compare the mean feedback bits of $\mathbf{H}$ between the following two types of encoders in Fig.~2: the proposed AMR encoder and the traditional FR encoder, both built on the same multi-rate compression framework. For each one of the $Q$ groups, CR is selected in two different ways. For the AMR encoder, CR is automatically selected by SAM for each $\mathbf{H}$; for the FR encoder, the user has to select the smallest CR that ensures the NMSE of every batch in the whole group is below $T_H$. For both types of encoders, each floating-point number is uniformly quantized into 4 bits at the UE, and these 4 bits are uniformly dequantized back into a floating-point number at the BS.
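The 4-bit uniform quantization used in this comparison can be sketched as follows. Since the exact quantizer range is not specified in the text, the min-max scaling and the mid-point dequantization below are assumptions for illustration.

```python
import numpy as np

B = 4  # 4 bits per floating-point entry, i.e., 2^4 = 16 levels

def quantize(x, B):
    """Uniformly quantize x into integer indices in [0, 2^B - 1] (assumed min-max range)."""
    lo, hi = x.min(), x.max()
    step = (hi - lo) / (2 ** B)
    idx = np.clip(np.floor((x - lo) / step), 0, 2 ** B - 1).astype(int)
    return idx, lo, step

def dequantize(idx, lo, step):
    """Reconstruct at bin midpoints, so |x - x_hat| <= step / 2."""
    return lo + (idx + 0.5) * step

rng = np.random.default_rng(0)
x = rng.standard_normal(512)          # codeword for CR = 1/4, i.e., M = 512
idx, lo, step = quantize(x, B)        # transmitted from the UE
x_hat = dequantize(idx, lo, step)     # recovered at the BS

assert idx.min() >= 0 and idx.max() <= 15
assert np.max(np.abs(x - x_hat)) <= step / 2 + 1e-12
```

With this scheme, a codeword of $M_k$ entries costs $4M_k$ feedback bits, which is the quantity compared in Fig.~2.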
It is seen from Fig.~2 that the smaller $T_H$ is, the more bits are fed back for each $\mathbf{H}$. For the AMR encoder, the larger $Q$ is, the fewer bits are fed back for each $\mathbf{H}$, because the subsequent groups added to the test are generated with fewer and fewer clusters, which makes the subsequent $\mathbf{H}$ increasingly sparse. For the FR encoder, the feedback bits remain unchanged in the beginning and decrease afterwards, because all the beginning groups have to adopt CR=$1/4$. For all $T_H$, the bits fed back by the AMR encoder are roughly $3/4$ of those fed back by the FR encoder over all groups, which demonstrates the superiority of the proposed encoder under dynamic changes of the environment. \begin{figure}[t] \centering \subfigure[Confusion Matrix of SAM with Unweighted Cross-Entropy Loss]{ \includegraphics[width=5.9cm]{simu3.png}} \subfigure[Confusion Matrix of SAM with Weighted Cross-Entropy Loss]{ \includegraphics[width=5.9cm]{simu4.png}} \caption{Comparison of Confusion Matrix of SAM with Weighted and Unweighted Cross-Entropy Loss } \label{1} \end{figure} Next, we verify the advantages of the designed weighted cross-entropy loss function (5) by comparing it with the unweighted cross-entropy loss function, and the results are shown in Fig.~3 as heatmaps. The column coordinates denote the correct class of CR, and the row coordinates denote the predicted class of CR. The sum of the elements above the diagonal represents the probability of FNE, which is $1.29\times 10^{-3}$ and $1.47\times 10^{-3}$ for the weighted and unweighted loss functions, respectively. Since NMSE is above $T_H$ only in case of FNE, the probability of NMSE below $T_H$ is 99.871$\%$ and 99.853$\%$, respectively. Hence, the designed loss (5) increases the probability that NMSE is below $T_H$, which proves the superiority of the proposed weighted cross-entropy loss function.
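For reference, the weighted cross-entropy loss (5) compared above admits a minimal NumPy sketch. The $\gamma_k$ values follow the experiment setting; the sample prediction vector is made up for illustration.

```python
import numpy as np

# Per-class weights from the experiment setting: larger gamma_k for smaller CR.
gamma = np.array([1.00, 1.05, 1.10, 1.15])  # CR = 1/4, 1/8, 1/16, 1/32

def weighted_ce(y, y_hat, gamma, eps=1e-12):
    """Loss (5): y is a one-hot ground truth (K,), y_hat a SAM softmax output (K,)."""
    y_hat = np.clip(y_hat, eps, 1.0 - eps)
    return np.sum(-y * np.log(y_hat) - gamma * (1.0 - y) * np.log(1.0 - y_hat))

y = np.array([0.0, 1.0, 0.0, 0.0])        # correct class: CR = 1/8 (illustrative)
y_hat = np.array([0.1, 0.7, 0.15, 0.05])  # SAM prediction (illustrative)

loss = weighted_ce(y, y_hat, gamma)
# With all gamma_k = 1, (5) degenerates into the unweighted cross-entropy.
unweighted = weighted_ce(y, y_hat, np.ones(4))
assert loss > unweighted  # gamma_k > 1 on off-classes penalizes them more
```

Because the off-class weights grow as the CR shrinks, probability mass assigned to smaller CRs is penalized more heavily, which is exactly the mechanism that suppresses FNE.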
\subsubsection{Comparison among Decoders} \begin{table}[t] \begin{center} \caption{NMSE (dB) Comparison Between Different Networks} \label{tab2} \begin{tabular}{|c<{\centering}|c<{\centering}|c<{\centering}|c<{\centering}| c<{\centering}|} \hline \diagbox{Encoder}{Decoder}&CSINet+& CRNet&RDNet&RDNet+ \\ \hline fixed CR$=1/4$&-13.84&-14.96&-14.02&\textbf{-15.17}\\ \hline fixed CR$=1/8$&-10.32&-11.74&-11.01&\textbf{-12.79}\\ \hline fixed CR$=1/16$&-7.95&-9.70&-9.81&\textbf{-9.98}\\ \hline fixed CR$=1/32$&-5.64&-6.19&-6.23&\textbf{-6.87}\\ \hline \end{tabular} \end{center} \end{table} We then compare the NMSE performance of the following 4 decoders on all $Q$ groups under different CR in Table \uppercase\expandafter{\romannumeral1}: CSINet+, CRNet, RDNet, and RDNet+. The proposed decoder RDNet+ outperforms the traditional decoders CSINet+, CRNet, and RDNet under all CR. The NMSE increases as CR decreases, because the smaller CR is, the fewer bits are fed back and the poorer the performance is. \begin{figure}[t] \begin{center} \includegraphics[width=7cm]{simu2'.png} \caption{The NMSE for RDNet and RDNet+ when CR$=1/4$ versus training epochs.} \label{RDNet1} \end{center} \end{figure} Next, we compare the NMSE descent curves of RDNet and RDNet+. The NMSE performance of the two decoders versus training epochs when CR$=1/4$ is illustrated in Fig.~\ref{RDNet1}. In the training process, both curves gradually drop to an error floor. The NMSE of RDNet+ decreases faster in the beginning and reaches a lower value than that of RDNet, and the NMSE curve of RDNet+ is less fluctuant. \section{Conclusion} In this paper, we proposed an AMR CSI feedback framework. In order to automatically select CR at the UE, we designed SAM, which is trained with a weighted cross-entropy loss. When the channel between the BS and the UE changes dynamically, the AMR framework can reduce the feedback overhead while maintaining a certain feedback accuracy.
Moreover, we designed the decoder RDNet+ that has Dense-In-Dense structure. Simulation demonstrated that the proposed AMR framework is superior to the traditional FR framework, and the proposed decoder RDNet+ is superior to the traditional decoders. \small \bibliographystyle{ieeetr} \section{Introduction} Massive multi-input multi-output (MIMO), as a key technology in communication systems, has great advantages over traditional MIMO systems, such as higher spectral efficiency, higher energy efficiency, and higher spatial resolution \cite{6798744}. Accurate downlink channel state information (CSI) is critical for frequency division duplex (FDD) massive MIMO systems \cite{7811130}. In traditional FDD MIMO systems, downlink CSI is first estimated at the UE and then fed back to the BS. However, this feedback strategy is expensive because the substantial antennas at the BS greatly increase the dimension of the CSI matrix, thereby leading to a large overhead. To address this issue, the CSI matrix should be efficiently compressed, which can be realized by compressive sensing (CS) or deep learning (DL) techniques. Recently, DL has been proven better than CS with more satisfactory feedback accuracy. The DL-based image compression technique is first introduced to massive MIMO CSI feedback in \cite{8322184} based on autoencoder models of CSINet. Ye et al. propose DNNet in \cite{9076084} to achieve superior feedback performance at low signal-to-noise ratio. Learning from inception modules in GoogLeNet, CRNet is proposed in \cite{9149229} to achieve great feedback accuracy with relatively small computational complexity. In \cite{8972904}, CSINet+ is introduced with two main modifications on CSINet: changing convolutional kernel size and upgrading refinement process, which improves the performance of the decoder. In these DL-based works \cite{8322184}, \cite{9149229}, \cite{8972904}, the autoencoder models only work for fixed compression rate CSI feedback. 
However, the communication environment is constantly changing, while relying on fixed-rate (FR) models may lead to superfluous bits and waste resources. In a changing environment, a high compression rate should be used when the CSI matrix is sparse, and vice versa \cite{7727995}. Therefore, the compression rate should be adjusted according to the sparsity of CSI matrices while ensuring feedback accuracy. To address this problem, Guo et al. \cite{8972904} propose a switchable structure to enable the autoencoder to handle multi-rate feedback. However, they do not propose a concrete criterion to adapt the compression rate according to the change of the environment, which makes it hard to put into practical use. In this paper, we introduce a novel feedback framework that is able to automatically adapt the compression rate according to the change of CSI matrices. For the encoder, we design the sparsity analysis network (SAM) to instantaneously classify the compression rate that will be used for channel feedback. We also develop a loss function to discourage SAM from making the worst misclassification, and avoid intervening the network on correct classification at the mean time. For the decoder, we design a {\it Dense-In-Dense} structure to achieve higher feedback accuracy. Simulation demonstrates the superiority of the proposed framework in decreasing normalized mean square error (NMSE) and saving feedback bits when NMSE is kept below a threshold. \section{System Model} \begin{figure}[t] \centering \centerline{\includegraphics[width=16.5cm]{大图2.png}} \caption{The AMR feedback framework is composed of encoder and decoder.} \end{figure} We consider a single-cell downlink massive MIMO system with $N_{t}$($\gg1$) transmit antennas at BS and a single receiver antenna at UE. 
The system is operated in OFDM over $N_{c}$ subcarriers, and the received signal at the $n$-th subcarrier can be expressed as: \begin{equation} \begin{aligned} y_{n}=\tilde{\mathbf{h}}_{n}^{H} \mathbf{v}_{n} s_{n}+z_{n}, \quad n=1,\ldots, N_c, \end{aligned} \end{equation} where $\tilde{\mathbf{h}}_{n}$ and $\mathbf{v}_{n} \in \mathbf{C}^{N_{t} \times 1}$ are the channel frequency response vector and the precoding vector at the $n$-th subcarrier, respectively; $s_{n}$ represents the transmitted data symbol; $z_{n}$ is the additive noise. Thus, the overall CSI in the spatial-frequency domain can be expressed in matrix form as $\tilde{\mathbf{H}}=\left[\tilde{\mathbf{h}}_{1}, \tilde{\mathbf{h}_{2}}, \ldots, \tilde{\mathbf{h}}_{N_{c}}\right]^{H} \in \mathbb{C}^{N_{c} \times N_{t}}$. In FDD systems, $\tilde{\mathbf{H}}$ with 2$N_{c}N_{t}$ parameters has to be fed back to BS, whose overhead is huge with massive number of antennas \cite{6214417}. In order to reduce feedback overhead, CSI matrix can be converted to angular-delay domain with 2D discrete Fourier transform (DFT) as: \begin{equation} \begin{aligned} \mathbf{H}=\mathbf{F}_{\mathrm{d}} \tilde{\mathbf{H}} \mathbf{F}_{\mathrm{a}}, \end{aligned} \end{equation} where $\mathbf{F}_{\mathrm{d}} $ and $\mathbf{F}_{\mathrm{a}}$ are the $N_{c}$ × $N_{c}$ and $N_{t}$ × $N_{t}$ normalized DFT matrices respectively. In the delay domain, only the first $N_{c}'$ rows of $\mathbf{H}$ contain non-zero values because the time delay between multipath arrivals lies within a limited period \cite{9149229}, \cite{8934725}. Therefore, we can retain the first $N_{c}'$ rows of $\mathbf{H}$ and remove the remaining rows. By an abuse of notation, we still use $\mathbf{H}$ to denote the $N_{c}'$ × $N_{t}$ truncated channel matrix. Splitting the real and imaginary part of each element of $\mathbf{H}$, we consider $\mathbf{H}$ has $N=2N_{c}'N_{t}$ elements of real numbers. 
In this paper, we propose an adaptive multi-rate (AMR) feedback framework, including \textit{encoder} and \textit{decoder}. At the UE, the encoder transforms $\mathbf{H}$ into an $M$ dimensional vector $\mathbf{x} $ by an agnostic function $f_{\text {en }}$ as: \begin{equation} \mathbf{x}=f_{\text {en }}(\mathbf{H}), \end{equation}where $M<N$ and $M$ can be selected from a pre-determined set $\{M_1, M_2, \cdots, M_K\}$. Note that the value of $M$ requires additional $\log_2 K$ bits to be fed back, but the overhead is usually negligible when $K$ is small. The feedback compression rate (CR) is then defined as $M_k/N$ which has $K$ options. A key property of the proposed encoder is its capability to predict which CR is most suitable according to the CSI matrix. For instance, in order to reduce feedback overhead while guaranteeing a certain feedback accuracy, the \textit{best CR} is defined as the smallest $M_k/N$ that can ensure NMSE below a threshold $T_H$. Then, the autoencoder self-adapts to the predicted CR. After receiving $\mathbf{x}$ at BS, the decoder inversely transforms it back into $\mathbf{\hat{H}}$ as: \begin{equation} \mathbf{\mathbf{\hat{H}}}=f_{\text {de }}(\mathbf{x}). \end{equation} The recovered channel matrix $\mathbf{\hat{\tilde{H}}}$ in the spatial-frequency domain can be obtained by applying zero filling and inverse DFT to $\mathbf{\hat{H}}$ at BS. \section{The Proposed Feedback Framework } In this section, we will specify the designed encoder and decoder, whose structure is shown in Fig.~1. \subsection{Encoder} We choose inception network as the structure of encoder that is composed of multiple branches with different representational capacity. In each branch, convolutions of multiple kernel sizes are implemented to obtain various receptive fields. In order to decrease the parameter number, we employ asymmetric convolution with consecutive non-square kernels, \textit{e.g.}, 1$\times$3 and 3$\times$1 kernels \cite{ding2019acnet}. 
The outputs of all branches are concatenated and processed by a 5$\times$5 convolution to reduce channel numbers. In this way, the spatial information at various scales is aggregated. Considering the storage at UE is relatively small, the encoder convolution neural network (CNN) is shared while the encoder fully connected (FC) layers can be adjusted, to reduce burden of parameter storage. At UE, we design SAM to pre-process $\mathbf{H}$ before the encoder, which is composed of 4 FC layers with the activation function of the last layer as softmax. The input of SAM is $\mathbf{H}$ and the outputs of SAM are $K$ decimal numbers, each of which represents the probability that the corresponding CR being the best. CR with the largest probability will be chosen in feedback as shown in Fig.~1. The proposed encoder with SAM is called AMR encoder, and the traditional encoder without SAM is called FR encoder. We define true examples (TE) as correct classification of $\mathbf{H}$, false positive examples (FPE) as misclassifying $\mathbf{H}$ into larger CRs, and false negative examples (FNE) as misclassifying $\mathbf{H}$ into smaller CRs. In case of TE and FPE, NMSE is below $T_H$, while in case of FNE, NMSE is above $T_H$ In order to enlarge the probability to maintain NMSE below $T_H$, we design the weighted cross-entropy loss function as: \begin{equation} \mathcal{L}_{i}^{w}=\sum_{k=1}^{K}-y_{i k} \log \left(\hat{y}_{i k}\right) -\gamma_{k}\left(1-y_{i k}\right) \log \left(1-\hat{y}_{i k}\right), \end{equation} where $y_{i}$ is a ground-truth one-hot vector for the $i$-th sample and $\hat{y}_{i}$ is the output of SAM for the $i$-th sample. The loss function (5) penalizes the misclassification into smaller CR more aggressively by setting larger $\gamma_{k}$ for smaller CR. If all $\gamma_{k}$ is set to 1, then (5) degenerates into unweighted loss function. Different from traditional weighted cross-entropy loss, we do not adjust weights of TE \cite{9277638}. 
We instead tune only the weights of FPE and FNE to increase the probability of maintaining NMSE below $T_H$. \subsection{Decoder} In the decoder, we design an improved residual dense network (RDNet) \cite{8964437} named RDNet+, whose basic units are RDBlocks, as shown in Fig.~1. \subsubsection{Inner-RDBlock Structure} The RDBlock consists of 3 densely connected layers (DCL), the squeeze and excitation (SE) module, and the local residual learning (LRL) module \cite{8964437}. The input of every layer of the DCL has direct access to all subsequent layers, which passes on information that needs to be preserved and leads to an implicit deep supervision. SE concatenates the outputs of all preceding layers within the current RDBlock and adopts a channel attention algorithm to exploit the inter-row relationship of the input tensor. LRL is realized in two steps: 1) a 1$\times$1 convolutional layer reduces the channel number to that of the RDBlock input; 2) the last layer's output is added to the original input of the current RDBlock, which reduces the risk of vanishing gradients. \subsubsection{Inter-RDBlock Structure} To improve the information flow among RDBlocks, we introduce direct connections from each RDBlock to all subsequent RDBlocks, \textit{i.e.}, the RDBlocks themselves are also densely connected. Since inter-RDBlock connections create longer skips over layers than inner-RDBlock dense connections, the inter-RDBlock connections are much more significant than the inner ones. Define $\mathbf{F}_{i}$ as the output of the $i$-th RDBlock, which satisfies the recursive relationship: \begin{equation} \mathbf{F}_{i}=\sigma_{i}(\mathbf{F}_{i-1}, \mathbf{F}_{i-2}, \cdots, \mathbf{F}_{0}), \end{equation} where $\sigma_{i}(\cdot)$ denotes the function of the $i$-th RDBlock. This recursive relationship brings substantial benefits, which we illustrate from the viewpoint of back-propagation as follows.
Suppose $i_1, i_2 \in \mathbb{Z}$, $0<i_1<i_2$, and define $\mathbf{f}_i\triangleq \operatorname{vec}(\mathbf{F}_i)$. During back-propagation for gradient descent, the partial derivative of $\mathbf{f}_{i_{2}}$ with respect to $\mathbf{f}_{i_{1}}$ is calculated as: \begin{equation} \begin{aligned} &\frac{\partial \mathbf{f}_{i_{2}}}{\partial \mathbf{f}_{i_{1}}} = \sum_{k=1}^{i_{2}-i_{1}} \frac{\partial \sigma_{i_{2}}}{\partial \mathbf{f}_{i_{2}-k}} \frac{\partial \mathbf{f}_{i_{2}-k}}{\partial \mathbf{f}_{i_{1}}}\\ &=\sum_{k=1}^{i_{2}-i_{1}} \sum_{i_{1}<p_{1}<\ldots <p_{k}<i_{2}}\frac{\partial \sigma_{i_{2}}}{\partial \mathbf{f}_{i_{2}-p_{1}}} \frac{\partial \sigma_{i_{2}-p_{1}}}{\partial \mathbf{f}_{i_{2}-p_{2}}} \ldots \frac{\partial \sigma_{i_{2}-p_{k}}}{\partial \mathbf{f}_{i_{1}}}. \end{aligned} \end{equation} In deep networks, the vanishing gradient problem is likely to occur, while shallow networks lack learning and generalization capacity. The proposed recursive relationship among the $\mathbf{F}_{i}$ ameliorates both problems. No matter how large $i_2-i_1$ may be, (7) always contains products of relatively few matrices, creating short connections and effectively preventing vanishing gradients. It also contains products of many matrices, creating long connections and enhancing the decoder's learning and generalization ability \cite{7820046}, similar to \textit{Res-In-Res} multilevel residual networks. We name this combination of inter- and inner-RDBlock dense connectivity Dense-in-Dense (DID). DID encourages information reuse throughout the decoder, which ensures that the hierarchical information of all RDBlocks' outputs is sufficiently utilized. Although the DID structure brings a heavier computational burden, it shows the noticeable advantage of eliminating the negative impact of an overly deep decoder network.
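A structural sketch of the inter-RDBlock dense connectivity in (6), with a simplified stand-in for each RDBlock (widths, depths, and the block internals are illustrative, not the actual RDNet+ configuration):

```python
import torch
import torch.nn as nn

class RDBlockStub(nn.Module):
    """Stand-in for one RDBlock: consumes the concatenation of all
    earlier outputs and emits a fixed-width feature map (sigma_i in (6))."""
    def __init__(self, n_inputs, ch):
        super().__init__()
        self.conv = nn.Conv2d(n_inputs * ch, ch, 3, padding=1)
    def forward(self, feats):
        return torch.relu(self.conv(torch.cat(feats, dim=1)))

class DenseInDense(nn.Module):
    """Inter-RDBlock dense connectivity: block i sees F_0, ..., F_{i-1}."""
    def __init__(self, n_blocks=4, ch=8):
        super().__init__()
        self.blocks = nn.ModuleList(
            [RDBlockStub(i + 1, ch) for i in range(n_blocks)])
    def forward(self, f0):
        feats = [f0]
        for blk in self.blocks:
            feats.append(blk(feats))   # F_i = sigma_i(F_{i-1}, ..., F_0)
        return feats[-1]

out = DenseInDense()(torch.randn(1, 8, 16, 16))
```

Every block's output is kept and re-fed to all later blocks, which is exactly the recursion that produces the short and long gradient paths discussed above.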
After concatenating the outputs of all preceding RDBlocks, we extract the global feature of the whole decoder by applying a 3$\times$3 convolutional layer, which is named global feature fusion (GFF). We then add the input of the decoder to the output of GFF, which is named global residual learning (GRL). GFF reduces the channel number to that of the original decoder input, which makes GRL possible. It should be noted that GRL is essential to accelerating the training of the proposed complicated decoder. \section{Simulation Results and Analysis} In this section, we provide various simulations to verify the effectiveness as well as the superiority of the proposed framework. \subsection{Experiment Setting} We consider an FDD massive MIMO system with $N_{t} = 32$, $N_{c} = 1024$, $N_{c}' = 32$, and $N=2N_{c}'N_{t}=2048$, operating in the 2.6 GHz band. Moreover, $\mathbf{H}$ is generated following the default setting in QuaDRiGa \cite{6758357}. To demonstrate the benefits of automatically selecting the CR, we need channels with different sparsity, \textit{i.e.}, we generate $Q=15$ groups of $\mathbf{H}$ with $Q$ different numbers of clusters, each group containing 20000 matrices. The first group is generated with the most clusters, and subsequent groups are generated with progressively fewer clusters. The change in the number of clusters reflects the change of the UE's environment, which requires the CR to change in real time. A total of 300000 independently generated channels are randomly split into training, validation, and test datasets with 180000, 60000, and 60000 CSI matrices, respectively, and the generation methods will be elaborated in the next subsection. The batch size is set to 256, and $T_H$ is set to $-10$ dB. Besides, $M_k$ is selected among 4 candidates: 512, 256, 128, and 64, \textit{i.e.}, CR$=M_k/N$ is selected among 4 candidates: $1/4$, $1/8$, $1/16$, and $1/32$. The whole pipeline is implemented in PyTorch.
The Xavier initialization is applied to both the convolutional layers and the FC layers. We use the Adam optimizer with its default setting ($\beta_{1} = 0.9$, $\beta_{2} = 0.999$, $\epsilon = 10^{-8}$) and the mean square error (MSE) loss. The network is trained for 500 epochs. In order to evaluate the performance of channel feedback, we measure the distance between the original $\mathbf{H}$ and the reconstructed $\hat{\mathbf{H}}$ with the NMSE defined as: \begin{equation} \mathrm{NMSE}=E\left\{\frac{\left\|\mathbf{H}-\hat{\mathbf{H}}\right\|_{2}^{2}}{\left\|\mathbf{H}\right\|_{2}^{2}}\right\}. \end{equation} We first train the autoencoder with CR$=1/4$ until more epochs bring no improvement. We then freeze the parameters of the encoder's CNN and train the encoder's FC layers as well as the decoders for CR$=1/8$, $1/16$, and $1/32$, respectively. Finally, we utilize the sufficiently trained autoencoders to label each $\mathbf{H}$ with the correct CR class, defined as the smallest CR that keeps the NMSE below $T_H$. In the loss function (5), $\gamma_{k}$ is set to $1.00, 1.05, 1.10, 1.15$ for CR$=1/4, 1/8, 1/16, 1/32$, respectively. \subsection{Experiment Results} \subsubsection{Comparison between Encoders} \begin{figure}[t] \begin{center} \includegraphics[width=11cm]{fix.jpg} \caption{Comparison of feedback bits per $\mathbf{H}$ between the proposed framework and the traditional framework under different $T_H$.} \end{center} \end{figure} We first compare the mean feedback bits per $\mathbf{H}$ between the following two types of encoders in Fig.~2: the proposed AMR encoder and the traditional FR encoder. For each of the $Q$ groups, the CR is selected in two different ways. For the AMR encoder, the CR is automatically selected by SAM for each $\mathbf{H}$; for the FR encoder, the user has to select the smallest CR that keeps the NMSE of every batch in the whole group below $T_H$.
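As a reference, the NMSE metric in (8) can be evaluated as follows (array shapes and the noise level are illustrative); with an error power of $1\%$ of the signal power, the result is close to $-20$ dB:

```python
import numpy as np

def nmse_db(H, H_hat):
    """NMSE of (8) in dB: per-sample error power over signal power,
    averaged across the batch."""
    axes = tuple(range(1, H.ndim))
    err = np.sum(np.abs(H - H_hat) ** 2, axis=axes)
    ref = np.sum(np.abs(H) ** 2, axis=axes)
    return 10.0 * np.log10(np.mean(err / ref))

rng = np.random.default_rng(0)
H = rng.standard_normal((100, 32, 32)) + 1j * rng.standard_normal((100, 32, 32))
noise = 0.1 * (rng.standard_normal(H.shape) + 1j * rng.standard_normal(H.shape))
val = nmse_db(H, H + noise)   # error power is 1% of signal power
print(val)
```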
For both types of encoders, each floating-point number is uniformly quantized to 4 bits at the UE and uniformly dequantized back to a floating-point number at the BS. It is seen from Fig.~2 that the smaller $T_H$ is, the more bits are fed back for each $\mathbf{H}$. For the AMR encoder, later groups feed back fewer bits per $\mathbf{H}$, because the subsequent groups are generated with progressively fewer clusters, which makes the subsequent $\mathbf{H}$ increasingly sparse. For the FR encoder, the feedback bits remain unchanged at first and decrease afterwards, because all the beginning groups have to adopt CR$=1/4$. For all $T_H$, the bits fed back by the AMR encoder are roughly $3/4$ of those of the FR encoder across all groups, which demonstrates the superiority of the proposed encoder under dynamic changes of the environment. \begin{figure}[t] \centering \subfigure[Confusion matrix of SAM with unweighted cross-entropy loss]{ \includegraphics[width=8cm]{simu3.png}} \subfigure[Confusion matrix of SAM with weighted cross-entropy loss]{ \includegraphics[width=8cm]{simu4.png}} \caption{Comparison of confusion matrices of SAM with weighted and unweighted cross-entropy loss.} \label{1} \end{figure} Next, we verify the advantages of the designed weighted cross-entropy loss function (5) by comparing it with the unweighted cross-entropy loss function; the results are shown in Fig.~3 as heatmaps. The column coordinates denote the correct CR class, and the row coordinates denote the predicted CR class. The sum of the elements above the diagonal represents the probability of FNE, which is $1.29\times 10^{-3}$ and $1.47\times 10^{-3}$ for the weighted and unweighted loss functions, respectively. Since the NMSE is above $T_H$ only in the case of FNE, the probability of NMSE below $T_H$ is 99.871$\%$ and 99.853$\%$, respectively.
Hence, the designed loss (5) increases the probability that the NMSE stays below $T_H$, which proves the superiority of the proposed weighted cross-entropy loss function. \subsubsection{Comparison among Decoders} \begin{table}[t] \large \begin{center} \label{tab2} \caption{NMSE (dB) Comparison Between Different Networks} \begin{tabular}{|c<{\centering}|c<{\centering}|c<{\centering}|c<{\centering}| c<{\centering}|} \hline \diagbox{Encoder}{Decoder}&CSINet+& CRNet&RDNet&\makecell[c]{RDNet+} \\ \hline fixed CR$=1/4$&-13.84&-14.96&-14.02&\textbf{-15.17}\\ \hline fixed CR$=1/8$&-10.32&-11.74&-11.01&\textbf{-12.79}\\ \hline fixed CR$=1/16$&-7.95&-9.70&-9.81&\textbf{-9.98}\\ \hline fixed CR$=1/32$&-5.64&-6.19&-6.23&\textbf{-6.87}\\ \hline \end{tabular} \end{center} \end{table} We then compare the NMSE performance of the following four decoders on all $Q$ groups under different CRs in Table \uppercase\expandafter{\romannumeral1}: CSINet+, CRNet, RDNet, and RDNet+. The proposed decoder RDNet+ outperforms the traditional decoders CSINet+, CRNet, and RDNet under all CRs. The NMSE worsens as the CR decreases, since a smaller CR means fewer feedback bits and hence poorer reconstruction. \begin{figure}[t] \begin{center} \includegraphics[width=11cm]{simu2'.png} \caption{The NMSE for RDNet and RDNet+ versus training epochs when CR$=1/4$.} \end{center} \end{figure} Next, we compare the NMSE descent curves of RDNet and RDNet+. The NMSE performance of the two decoders versus training epochs when CR$=1/4$ is illustrated in Fig.~4. During training, both curves gradually drop to an error floor. The NMSE of RDNet+ decreases faster in the beginning, reaches a lower value than that of RDNet, and its curve is less fluctuant. \section{Conclusion} In this paper, we proposed an AMR CSI feedback framework. In order to automatically select the CR at the UE, we designed SAM, which is trained with the weighted cross-entropy loss.
When the channel between the BS and the UE changes dynamically, the AMR framework can reduce the feedback overhead while maintaining a certain feedback accuracy. Moreover, we designed the decoder RDNet+, which has a Dense-in-Dense structure. Simulations demonstrated that the proposed AMR framework is superior to the traditional FR framework, and that the proposed decoder RDNet+ is superior to the traditional decoders. \small \bibliographystyle{ieeetr} \section{Introduction} Recently, the reconfigurable intelligent surface (RIS) has been proposed as a promising technology for future wireless communication networks, since it has the ability to change the properties of the incident electromagnetic wave \cite{1}. RISs consist of a large number of programmable sub-wavelength elements, so-called unit cells or meta atoms, that can change the properties of an impinging electromagnetic wave while reflecting it \cite{RIS}. For instance, a properly designed unit-cell phase distribution across the surface enables the RIS to alter the direction of the wavefront of the reflected wave, thereby realizing the generalized Snell's law \cite{3}. The channel along the path from the base station (BS) to the RIS and then to the user is modeled as the cascaded product of three terms: the BS-to-RIS channel, the diagonal phase shift matrix of the RIS, and the RIS-to-user channel \cite{2}, \cite{via}. Existing works discussing the path loss of RIS-aided communication systems can be classified into two categories: the antenna theory-based model and the electromagnetic theory-based model. For the antenna theory-based model, each RIS element is treated as a small antenna with a specific power radiation pattern, and the Friis transmission equation is used to calculate the path loss \cite{3}, \cite{passive}. In \cite{3}, the authors show that the received power at the target receiver grows quadratically with the number of RIS elements in the far field.
For the electromagnetic theory-based model, a boundary condition problem is formulated by considering the RIS as a holographic surface, on which the reflection coefficient at each point can be designed, and the scattered fields can be modeled using certain theorems \cite{5}, \cite{7}, \cite{France}. Between the two models, the main advantage of the latter is that it is derived from the perspective of the Maxwell equations, which underlines the electromagnetic nature of the RIS more clearly. In most literature, the transmit antenna and the receiver antenna are both considered to be in the far field, and the RIS is designed to convert the incident planar wave into a reflected planar wave with an arbitrary angle. However, when the receiver antenna is in the near field of the RIS, the reflected planar wave may cause serious energy leakage and thus lead to poor channel capacity between the transmitter and the receiver. In this paper, we propose an RIS scheme that converts planar waves into cylindrical waves so that the energy is concentrated on the receiver antenna. Simulations demonstrate that the proposed RIS scheme can reduce energy leakage and thus enlarge the channel capacity compared to the traditional scheme. With cylindrical wave radiation, the power received by a uniform linear array (ULA) antenna is sensitive to its location and attitude, which can be utilized to detect the location and attitude of the antenna in communication.
\section{System Model} \begin{figure}[t] \begin{minipage}[t]{0.45\linewidth} \subfigure[planar wave]{ \includegraphics[width=4cm]{平面波.png}} \end{minipage} \begin{minipage}[t]{0.45\linewidth} \subfigure[cylindrical wave]{ \includegraphics[width=4cm]{柱面波.png}} \end{minipage} \caption{Comparison of a planar reflected wave and a cylindrical reflected wave.} \end{figure} \subsection{Channel Model} Suppose a practical RIS consists of $N$ sub-wavelength-sized elements that scatter the incoming signals with individual reflection coefficients to achieve coherent beamforming in a direction of interest. We consider a line-of-sight (LoS) setup and obtain the received signal \cite{France}, \cite{next} \begin{equation} y=\sqrt{\beta_{\mathrm{RIS}}} \mathbf{h}_{\mathrm{sr}}^{\mathrm{T}} \mathbf{\Phi h}_{\mathrm{rd}} s+n, \end{equation} where $s$ is the transmitted signal, $\beta_{\mathrm{RIS}}$ is the path loss when using the RIS to reflect $s$, and $\mathbf{h}_{\mathrm{sr}}=\left[e^{j \psi_{1}^{\mathrm{sr}}}, \ldots, e^{j \psi_{N}^{\mathrm{sr}}}\right]^{\mathrm{T}}$ and $\mathbf{h}_{\mathrm{rd}}=\left[e^{j \psi_{1}^{\mathrm{rd}}}, \ldots, e^{j \psi_{N}^{\mathrm{rd}}}\right]^{\mathrm{T}}$ are the normalized LoS channels between the source and the RIS and between the RIS and the receiver, respectively. The additive noise is $n \sim \mathcal{N}_{\mathbb{C}}\left(0, \sigma^{2}\right)$, and the phase shifts of the surface elements are stacked in $\boldsymbol{\Phi}=\operatorname{diag}\left(e^{j \phi_{1}}, \ldots, e^{j \phi_{n}}, \ldots, e^{j \phi_{N}}\right)$. This channel model provides a different insight into the proposed idea: the aim of adjusting the reflection coefficients of the RIS is to find the best $\mathbf{\Phi}$ that maximizes the power of $y$ \cite{article}.
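Under this model, the power-maximizing $\boldsymbol{\Phi}$ simply co-phases the two LoS links, $\phi_n = -(\psi_n^{\mathrm{sr}} + \psi_n^{\mathrm{rd}})$, so that all $N$ terms add coherently. A small numerical check with randomly drawn phases (the value of $N$ and the phases are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
psi_sr = rng.uniform(0, 2 * np.pi, N)   # phases of h_sr
psi_rd = rng.uniform(0, 2 * np.pi, N)   # phases of h_rd
h_sr = np.exp(1j * psi_sr)
h_rd = np.exp(1j * psi_rd)

# Co-phasing: phi_n = -(psi_n^sr + psi_n^rd) makes every term add coherently.
phi_best = -(psi_sr + psi_rd)
gain_best = np.abs(np.sum(h_sr * np.exp(1j * phi_best) * h_rd)) ** 2

phi_rand = rng.uniform(0, 2 * np.pi, N)
gain_rand = np.abs(np.sum(h_sr * np.exp(1j * phi_rand) * h_rd)) ** 2

print(gain_best)   # N**2 = 4096, the coherent upper bound
```

The co-phased gain equals the coherent bound $N^2$, consistent with the quadratic power scaling of \cite{3} noted in the introduction.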
However, for an ideal RIS that can realize a continuous change of the reflection coefficient, $N$ tends to infinity and the above channel model is no longer applicable; we therefore establish an electromagnetic model in the next section to seek the best $\mathbf{\Phi}$, interpreted as carrying the information about the reflection coefficient. \subsection{Electromagnetic Model} We consider the RIS as a rectangular, perfectly conducting plate of size $a \times b$, with length $a$ in the y-axis direction and length $b$ in the x-axis direction, located in the horizontal plane. The wave number is $k=2\pi/\lambda$, where $\lambda$ is the wavelength. The transmit antenna is a far-field point source radiating a linearly polarized electromagnetic wave. We assume that the curvature of the incident wavefront over the dimensions of the RIS can be neglected. We further assume the field configuration of the incident wave is transverse magnetic \textit{x}, \textit{i.e.}, the polarization of the source is such that the E-field is parallel to $\mathbf{e}_x$ and the H-field lies in the plane spanned by $\mathbf{e}_y$ and $\mathbf{e}_z$. Let $\theta_i \in [0,\pi/2)$ denote the angle of incidence. The impinging wave field is approximated as a plane wave with the electric and magnetic field distributions: \begin{equation} \begin{aligned} \mathbf{E}_{i} &=E_{0} e^{-j k\left(\sin \left(\theta_{i}\right) y-\cos \left(\theta_{i}\right) z\right)} \boldsymbol{e}_{x}, \\ \mathbf{H}_{i} &=-\frac{E_{0}}{\eta}\left(\cos \left(\theta_{i}\right) \boldsymbol{e}_{y}+\sin \left(\theta_{i}\right) \boldsymbol{e}_{z}\right) e^{-j k\left(\sin \left(\theta_{i}\right) y-\cos \left(\theta_{i}\right) z\right)}, \end{aligned} \end{equation} where $\eta$ is the characteristic impedance of the medium.
We assume that the RIS can realize a continuous reflection coefficient function $\Gamma(x, y) = \tau(x,y) e^{j\beta(x,y)}$, where $\tau(x,y)$ is the amplitude coefficient and $\beta(x, y)$ is the phase shift of the reflection coefficient at the point $(x, y, 0)$ on the RIS. We assume the receiver antenna is in the radiating near-field (Fresnel) region, where the reactive fields are negligible compared with the radiating fields. Let $d$ denote the distance between the antenna and the center of the RIS; this region is commonly given by \cite{soft}: \begin{equation} 0.62 \sqrt{\frac{(a^2+b^2)^{3/2}}{\lambda}}<d<\frac{2 (a^2+b^2)}{\lambda}. \end{equation} Note that depending on the values of $a$, $b$, and $\lambda$, this field region may or may not exist. \section{Modeling the Scattered Fields via the Induction Theorem} According to \cite{textbook}, $\mathbf{E}_{s}$ and $\mathbf{H}_{s}$ satisfy the relations \begin{equation} \begin{gathered} \left.\mathbf{E}_{s}\right|_{z=0}=\left.\Gamma(x, y) \mathbf{E}_{i}\right|_{z=0}, \\ \mathbf{e}_{z} \times\left.\mathbf{H}_{s}\right|_{z=0}=-\Gamma(x, y) \mathbf{e}_{z} \times\left.\mathbf{H}_{i}\right|_{z=0}. \end{gathered} \end{equation} Let us suppose that the RIS can be replaced by an imaginary surface, and that the transmitted fields below this imaginary surface are denoted by $\mathbf{E}_{t}$ and $\mathbf{H}_{t}$. Then, according to the induction theorem, we assume that $\mathbf{E}_{i}$ and $\mathbf{H}_{i}$ above this surface are removed.
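As a quick numerical check of the bound (3), using the parameter values adopted later in the simulation section, the Fresnel region evaluates to roughly $93.26\lambda < d < 1600\lambda$:

```python
import numpy as np

lam = 0.1                 # wavelength [m], as in the simulation section
a = b = 20 * lam          # RIS side lengths
D2 = a ** 2 + b ** 2      # squared diagonal of the aperture

d_min = 0.62 * np.sqrt(D2 ** 1.5 / lam)   # lower Fresnel bound
d_max = 2 * D2 / lam                      # upper Fresnel bound

print(d_min / lam, d_max / lam)   # approx 93.26 and 1600.0
```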
Under this induction-theorem assumption, an equivalent electric current density $\mathbf{J}_{e}$ and a magnetic current density $\mathbf{M}_{e}$ must be imposed on this surface to satisfy the boundary conditions \cite{textbook}, which can be expressed as \begin{equation} \begin{aligned} \mathbf{J}_{e} &=\mathbf{e}_{z} \times\left(\left.\mathbf{H}_{s}\right|_{z=0}-\left.\mathbf{H}_{t}\right|_{z=0}\right), \\ \mathbf{M}_{e} &=-\mathbf{e}_{z} \times\left(\left.\mathbf{E}_{s}\right|_{z=0}-\left.\mathbf{E}_{t}\right|_{z=0}\right). \end{aligned} \end{equation} We then replace the medium below this surface with a perfect magnetic conductor (PMC), so that $\mathbf{E}_{t}$, $\mathbf{H}_{t}$, and $\mathbf{M}_{e}$ become zero and only $\mathbf{J}_{e}$ contributes to the scattered fields. Finally, the image theory is applied to remove the PMC and to obtain an unbounded environment \cite{textbook}. To this end, the final equivalent electric current density $\mathbf{J}_{f}$ is expressed as \begin{equation} \begin{aligned} \mathbf{J}_{f}&=2 \mathbf{J}_{e}=-2 \tau e^{j\beta(x, y)} \mathbf{e}_{z} \times\left.\mathbf{H}_{i}\right|_{z=0}\\ &=-2\frac{E_0}{\eta} \tau e^{j\beta(x, y)} \cos \left(\theta_{i}\right) e^{-j k \sin \left(\theta_{i}\right) y} \mathbf{e}_{x}=J_{x} \mathbf{e}_{x}. \end{aligned} \end{equation} Once $\mathbf{J}_{f}$ is obtained, we can compute the vector potential $\mathbf{A}$ [13, Ch. 6.6] at an arbitrary observation point $(x^{\prime}, y^{\prime}, z^{\prime})$ as \begin{equation} \mathbf{A}=\frac{\mu}{4 \pi} \iint_{\mathcal{S}} \mathbf{J}_{f} \frac{e^{-j k R}}{R}\, d x\, d y, \end{equation} where $R=\sqrt{\left(x^{\prime}-x\right)^{2}+\left(y^{\prime}-y\right)^{2}+\left(z^{\prime}\right)^{2}}$ is the distance from the point $(x, y, 0)$ on the RIS to the observation point, and $\mathcal{S}$ denotes the surface of the RIS.
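The radiation integral (7) can be approximated numerically by a Riemann sum over the RIS surface. The sketch below evaluates the $x$ component of $\mathbf{A}$ at a single observation point for the special case $\tau = 1$, $\beta = 0$ (grid resolution, constants, and the observation point are illustrative):

```python
import numpy as np

lam = 0.1
k = 2 * np.pi / lam
eta, mu, E0 = 377.0, 4e-7 * np.pi, 1.0   # free-space values, illustrative
theta_i = np.pi / 6
a = b = 20 * lam

# Discretize the RIS surface S.
n = 200
xs = np.linspace(-b / 2, b / 2, n)
ys = np.linspace(-a / 2, a / 2, n)
X, Y = np.meshgrid(xs, ys)
dA = (xs[1] - xs[0]) * (ys[1] - ys[0])

# J_x from (6) with tau = 1 and beta = 0 (a plain conducting plate).
Jx = -2 * E0 / eta * np.cos(theta_i) * np.exp(-1j * k * np.sin(theta_i) * Y)

obs = np.array([0.0, 10 * lam, 10 * lam])      # observation point (x', y', z')
R = np.sqrt((obs[0] - X) ** 2 + (obs[1] - Y) ** 2 + obs[2] ** 2)
A_x = mu / (4 * np.pi) * np.sum(Jx * np.exp(-1j * k * R) / R) * dA
```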
Then, we use $\mathbf{A}$ to derive $\mathbf{E}_{s}$ as \begin{equation} \mathbf{E}_{s}=\frac{1}{j k \sqrt{\mu \varepsilon}} \nabla \times(\nabla \times \mathbf{A})=\frac{1}{j k \sqrt{\mu \varepsilon}}\left[\nabla(\nabla \cdot \mathbf{A})-\nabla^2\mathbf{A}\right]. \end{equation} Note that the above results are only applicable to the scattered fields above the xy plane. Consider a receiver antenna that is an infinitely long ULA perpendicular to the yz plane, with coordinates $(f_y,f_z)$ in the yz plane; this location is named the focal point. Since the energy flux density is $S=\frac{\Vert \mathbf{E}_{s} \Vert^2}{2\eta}$ and $\mathbf{E}_{s}$ depends on $\beta(x,y)$, we aim to derive the \textit{best} $\beta(x,y)$ that maximizes $\Vert \mathbf{E}_{s} \Vert^2$: \begin{equation} \begin{aligned} \beta(x,y)_{best}= \mathop{argmax}\limits_{\beta(x,y)}{\Vert \mathbf{E}_{s} \Vert^2}. \end{aligned} \end{equation} \section{Reflection Coefficient Design Criterion of the RIS} \subsection{Traditional Reflection Coefficient Design for Planar Waves} Traditionally, the RIS is designed to redirect the incident planar wave into a reflected planar wave with reflection angle $\theta_s$ and the following field distribution: \begin{equation} \begin{aligned} \mathbf{E}_{s} &=E_{s} e^{-j k\left(\sin \left(\theta_{s}\right) y+\cos \left(\theta_{s}\right) z\right)} \boldsymbol{e}_{x}. \\ \end{aligned} \end{equation} With $\mathbf{E}_{s}$, $\beta(x,y)_{planar}$ is derived by the generalized Snell's law \cite{snell} \begin{equation} \begin{aligned} &\beta(x,y)_{planar}=\angle\left(\frac{\left.\mathbf{E}_{s}\right|_{z=0}\cdot \boldsymbol{e}_{x}}{\left.\mathbf{E}_{i}\right|_{z=0}\cdot \boldsymbol{e}_{x}}\right)\\ &=ky(\sin \left(\theta_{i}\right)-\sin \left(\theta_{s}\right)).
\end{aligned} \end{equation} However, a planar wave cannot focus the incident energy on the focal point in the near field, which indicates that $\beta(x,y)_{planar}$ is far from $\beta(x,y)_{best}$, derived from a cylindrical wavefront in the next section. \begin{figure}[t] \centering \centerline{\includegraphics[width=7.8cm]{锯齿图.png}} \caption{Phase shift on the RIS designed to convert the incident planar wave into a reflected planar wave.} \end{figure} \subsection{Reflection Coefficient Design for Cylindrical Waves} In order to focus the incident energy on the focal point, the optimal scattered beams should converge at the focal point. According to Fermat's principle, all the beams should share the same phase at the focal point. Thus, when we consider the time reverse of the optimal waves, they can be regarded as radiated by a common source with current intensity $I_1$ at the focal point. Since the radiation distribution is invariant along $x$, the source can be seen as an infinite line source parallel to $\boldsymbol{e}_{x}$ passing through $(0,f_y,f_z)$. Therefore, the optimal scattered beams should have cylindrical wavefronts, and the scattered electric field $\mathbf{E}_{s}$ and magnetic field $\mathbf{H}_{s}$ can be expressed as \cite{textbook} \begin{equation} \begin{aligned} \mathbf{E}_{s}=& (\frac{-I_{1} k \eta}{4} H_{0}^{(2)}\left(k R\right))^* \boldsymbol{e}_{x}, \\ \mathbf{H}_{s}=& (\frac{-j I_{1} k}{4} H_{1}^{(2)}\left(k R\right))^* \\ & \times\left(\frac{(z-f_z) \boldsymbol{e}_{y}}{R}-\frac{(y-f_y)\boldsymbol{e}_{z}}{R}\right), \end{aligned} \end{equation} where $R=\sqrt{(y-f_y)^{2}+(z-f_z)^{2}}$, $I_{1} \in \mathbb{C}$ denotes the current intensity of the line source, $H_{n}^{(2)}$ refers to the Hankel function of the second kind of order $n$, and $(\cdot)^*$ inverts the phase for the time-reverse operation. Without loss of generality, $\angle I_{1}$ is assumed to be $0$.
With $\mathbf{E}_{s}$, $\beta(x,y)_{best}$ is derived by the generalized Snell's law \cite{snell} \begin{equation} \begin{aligned} &\beta(x,y)_{best}=\angle\left(\frac{\left.\mathbf{E}_{s}\right|_{z=0}\cdot \boldsymbol{e}_{x}}{\left.\mathbf{E}_{i}\right|_{z=0}\cdot \boldsymbol{e}_{x}}\right)\\ &=\angle\left(\frac{(\frac{-I_{1} k \eta}{4} H_{0}^{(2)}\left(k R\right))^*}{E_0 e^{-j k\left(\sin \left(\theta_{i}\right) y\right)}}\right) \\ &=-\angle\left(-H_{0}^{(2)}\left(k R\right)\right)+k\sin \left(\theta_{i}\right)y. \end{aligned} \end{equation} \begin{figure}[t] \centering \centerline{\includegraphics[width=8.7cm]{柱面波相位变化.png}} \caption{Phase shift on the RIS designed to convert the incident planar wave into a cylindrical wave.} \end{figure} We calculate $\beta(x,y)_{best}$ for $a=b=20\lambda$ and an incident angle of $\pi/6$. As shown in Fig.~4, the tangent slope of $\beta(x,y)_{best}$ changes from positive to negative as the reflection angle decreases from $y=-10\lambda$ to $y=10\lambda$. Where the tangent slope of $\beta(x,y)_{best}$ is zero, the reflection angle equals the incident angle, as in specular mirror reflection. Far away from this point, the change of $\beta(x,y)_{best}$ is nearly linear, which indicates that the reflection direction changes little; in those areas, the change of $\beta(x,y)_{best}$ is analogous to the traditional phase shift designed to convert the planar wave into a planar wave, shown in Fig.~2. When the reflected beams are planar waves, the amplitudes of the scattered radiation at each point of the RIS should be identical, which means the amplitudes of the reflection coefficient at each point of the RIS are identical. However, when the reflected beams are cylindrical waves, the amplitudes of the scattered radiation at each point of the RIS depend on $R$ and $|I_{1}|$; $R$ is known, and $|I_{1}|$ can be calculated from $E_0$.
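The phase profile in (13) can be evaluated numerically. The sketch below uses the large-argument asymptotic $H_0^{(2)}(x)\approx\sqrt{2/(\pi x)}\,e^{-j(x-\pi/4)}$, which is accurate here since $kR \gg 1$ (the exact value is available via \texttt{scipy.special.hankel2}); the geometry values follow the simulation section:

```python
import numpy as np

lam = 0.1
k = 2 * np.pi / lam
theta_i = np.pi / 6
f_y = f_z = 80 * lam          # focal point, as in the simulation section

y = np.linspace(-10 * lam, 10 * lam, 401)   # RIS cross-section, y in [-a/2, a/2]
R = np.sqrt((y - f_y) ** 2 + f_z ** 2)

# Large-argument phase of H0^(2)(kR); its amplitude is irrelevant for beta.
h02 = np.exp(-1j * (k * R - np.pi / 4))
beta_best = -np.angle(-h02) + k * np.sin(theta_i) * y
```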
Under the assumption that the RIS is passive and lossless, power conservation determines the relation between $E_0$ and $|I_{1}|$. The power of the incident wave on the RIS, $P_{\text {incident}}$, and the power of the reflected wave, $P_{\text {reflected}}$, are given by \begin{equation} \begin{aligned} &P_{\text {incident}} =\frac{E_{0}^{2}}{2 \eta} a b \cos(\theta_i), \\ &P_{\text {reflected}} =\frac{\tan ^{-1}((f_y + 0.5a)/ f_z)-\tan ^{-1}((f_y - 0.5a)/ f_z)}{2\pi}\\ &\times\left(\text { power radiated by current } I_{1} \text { with length } b\right) \\ &=\frac{|I_{1}|^{2} k \eta b}{16 \pi} (\tan ^{-1}((f_y + 0.5a)/ f_z)-\tan ^{-1}((f_y - 0.5a)/ f_z)). \end{aligned} \end{equation} Setting $P_{\text {incident}} = P_{\text {reflected}}$ determines $|I_{1}|$ from $E_0$, and the desired amplitude of the reflection coefficient follows from the definition as: \begin{equation} \begin{aligned} \tau(x,y)=\left|\frac{(\frac{-|I_{1}| k \eta}{4} H_{0}^{(2)}\left(k R\right))^*}{E_0 e^{-j k\left(\sin \left(\theta_{i}\right) y\right)}}\right|. \end{aligned} \end{equation} \section{Simulation Results and Analysis} We set $a = b = 20 \lambda$ and $\lambda = 0.1$ m, so the radiating near field requires $93.2630\lambda< d < 1600\lambda$. Additionally, we set $E_0 = 1$ and $\theta_i = \pi/6$. \begin{figure}[t] \centering \centerline{\includegraphics[width=8.7cm]{能量分布比较.png}} \caption{Comparison of normalized power between the planar wave and the cylindrical wave.} \end{figure} In order to examine the degree of energy leakage, we fix the observation points on the circular arc $x=0$, $y=d\cos(\theta)$, $z=d\sin(\theta)$, $\theta \in (0\degree, 90\degree)$, and calculate the normalized power, defined as the ratio of the power per degree to the total power received by the circular arc. Three groups of $(f_y,f_z)$ are set as $(80\lambda, 80\lambda)$, $(180\lambda, 180\lambda)$, and $(280\lambda, 280\lambda)$, respectively.
The normalized power distribution is shown in Fig.~5. For the cylindrical wave, we design the RIS to focus energy on the line $y=f_y$, $z=f_z$. The power peaks at $\theta=45\degree$, and the majority of the power is concentrated around the focal point. The degree of power concentration remains almost unchanged as the focal point moves closer to the center of the RIS. For the planar wave, we design the reflection direction to be parallel to the vector from $(0, 0, 0)$ to $(0, f_y, f_z)$, \textit{i.e.}, the reflection angle is $\pi/4$. When $f_y=f_z=80\lambda$, the main-lobe width is significantly larger than that of the proposed cylindrical wave, which results in serious energy leakage. As the focal point moves farther from the center of the RIS, the degree of power concentration increases, and the power distribution of the planar wave becomes increasingly similar to that of the cylindrical wave. Since the channel capacity is calculated as $C=\log_2(1+S/N)$, where $S$ denotes the power of the received signal and $N$ denotes the power of the additive noise, the proposed cylindrical reflected wave can reduce energy leakage and thus enlarge the channel capacity compared to the planar reflected wave. \subsection{Cylindrical Waves Used for Location Detection} \begin{figure}[t] \centering \centerline{\includegraphics[width=8.7cm]{检测位置.png}} \caption{The received normalized power versus $\psi$.} \end{figure} Suppose a $10 \lambda$ long ULA receiver antenna is located in the near field of the RIS with a fixed attitude parallel to $\mathbf{e}_x$, a known distance $d$, and an unknown angular position $\psi_0$. In order to determine $\psi_0$, we propose the following strategy. For the cylindrical wave, we fix the focal point in the yz plane at distance $d$ from the center of the RIS and adjust the coefficients of the RIS so that the angular position $\psi$ of the focal point moves from $0\degree$ to $90\degree$.
For the planar wave, we adjust the coefficients of the RIS so that the angular position $\psi$ of the reflection direction moves from $0\degree$ to $90\degree$. During this process, the receiver antenna records the resulting change of the received power $P$, and the angle at which the maximum power is received is taken as the estimate of $\psi_0$: \begin{equation} \begin{aligned} \hat{\psi}_0= \mathop{argmax}\limits_{\psi}{{P}}. \end{aligned} \end{equation} The location detection simulation is shown in Fig.~6, where $\psi_0=60\degree$. The cylindrical wave is well qualified to detect the correct $\psi_0$, as the maximum power is received at $\psi_0$ for all $d$. However, the planar wave is incapable of detecting the correct $\psi_0$, as the maximum power is not received at $\psi_0$ when $d=150\lambda$; moreover, the relatively large lobe width lowers the detection accuracy. \subsection{Cylindrical Waves Used for Attitude Detection} \begin{figure}[t] \centering \centerline{\includegraphics[width=8.7cm]{旋转.png}} \caption{The normalized power versus $\phi$.} \end{figure} We further explore the relation between the attitude of the antenna and $P$. Suppose a $10 \lambda$ long ULA receiver antenna is located in the radiating near field of the RIS with its center fixed at $(0, f_y, f_z)$. We assume the receiver antenna is constrained in the plane perpendicular to $\mathbf{e}_y$, with the angle between the antenna and $\mathbf{e}_x$ defined as $\phi \in (0\degree,90\degree)$. The normalized power versus $\phi$ is shown in Fig.~7. Since both the planar wave and the cylindrical wave are designed to focus power on an antenna parallel to $\mathbf{e}_x$, the power is maximum at $\phi=0\degree$ and decreases as $\phi$ increases. When the antenna is closer to the RIS, the variation of the power across different $\phi$ is more significant, which indicates that the antenna attitude has a more noteworthy influence on the power when the antenna is closer to the RIS.
Besides, the variation of the power across different $\phi$ is much more significant for the cylindrical wave than for the planar wave, which indicates that the power for the cylindrical wave is more sensitive to changes in the antenna attitude. The above-mentioned traits can be utilized in antenna attitude detection. Given the position of the antenna, we can design the RIS to reflect a cylindrical wave toward the center of the antenna and acquire information about the antenna attitude through the power it receives. \section{Conclusion} In this paper, we propose an RIS scheme to convert planar waves into cylindrical waves so that the energy is concentrated on the receiver antenna. Simulation demonstrates that the proposed RIS scheme can reduce energy leakage and thus enlarge the channel capacity compared to the traditional scheme. With cylindrical wave radiation, the power received by a near-field ULA antenna is sensitive to its location and attitude, which could be utilized to detect the location and attitude of the antenna in communication. \small \bibliographystyle{ieeetr} \section{Introduction} Recently, the reconfigurable intelligent surface (RIS) has been proposed as a promising technology for future wireless communication networks, due to its ability to change the reflected angle of the incident electromagnetic wave \cite{1}. RISs consist of a large number of programmable sub-wavelength elements, \textit{i.e.}, unit cells or meta atoms \cite{RIS}. For instance, a properly designed unit cell phase distribution across the surface enables the RIS to alter the direction of the wavefront of the reflected wave, thereby realizing the generalized Snell's law \cite{3}. The channel along the path from the base station (BS) to the RIS and then to the user is modeled as the cascade of the BS-to-RIS channel, the diagonal phase shift matrix of the RIS, and the RIS-to-user channel \cite{2}, \cite{via}.
Existing works on the path loss of RIS-aided communication systems can be classified into two categories: the antenna theory-based model and the electromagnetic theory-based model. For the antenna theory-based model, each RIS element is equivalent to a small antenna with a specific power radiation pattern, and the Friis Transmission Equation is used to calculate the path loss \cite{3}, \cite{passive}. In \cite{3}, the authors prove that the received power at the target receiver grows quadratically with the number of RIS elements in the far field. For the electromagnetic theory-based model, a boundary condition problem is formed by considering the RIS as a holographic surface, on which the reflection coefficient at each point can be designed, and the scattered fields can be modeled using electromagnetic theorems \cite{5}, \cite{7}, \cite{France}. The main advantage of the electromagnetic theory-based model is that it is derived from the perspective of Maxwell's equations, which underlines the electromagnetic nature of the RIS more clearly. In most of the literature, the transmit antenna and the receiver antenna are both considered to be in the far field, and the RIS is designed to convert the incident planar wave into a reflected planar wave with an arbitrary angle. However, when the receiver is equipped with a uniform linear array (ULA) and stays in the near field of the RIS, the reflected planar wave may cause serious energy leakage and thus lead to poor channel capacity between the transmitter and receiver. In this paper, we propose an RIS scheme to convert planar waves into cylindrical waves such that the energy is concentrated on the receiver's ULA antenna. Simulation results demonstrate that the proposed scheme can reduce energy leakage and thus enlarge the channel capacity compared to the traditional scheme.
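The quadratic growth of received power with the number of co-phased elements, as proved in \cite{3}, follows from coherent combining and can be illustrated with a short numeric sketch (the function name and setup are illustrative, not from the cited work):

```python
import numpy as np

# When the phase shifts of N RIS elements are aligned toward the receiver,
# the N unit-amplitude contributions add coherently and the received power
# scales as N^2; with random phases the sum grows only like N on average.
def received_power(num_elements, aligned=True):
    rng = np.random.default_rng(0)
    phases = (np.zeros(num_elements) if aligned
              else rng.uniform(0, 2 * np.pi, num_elements))
    field = np.sum(np.exp(1j * phases))  # coherent sum of element contributions
    return np.abs(field) ** 2

p1, p2 = received_power(100), received_power(400)
# Quadrupling the number of elements multiplies the coherent power by 16.
assert np.isclose(p2 / p1, 16.0)
```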
Considering the reflected cylindrical waves as the time reverse of waves radiated by a linear source, we utilize the Hankel function to derive the phase shift on the RIS required to reflect cylindrical waves. With cylindrical wave radiation, the power received by the ULA antenna is highly correlated with its location and attitude, and this property could be utilized to sense the location and attitude of the ULA antenna for communications. \section{System Model} \begin{figure}[t] \begin{minipage}[t]{0.45\linewidth} \subfigure[planar wave]{ \includegraphics[width=4.5cm]{平面波.png}} \end{minipage} \begin{minipage}[t]{0.45\linewidth} \subfigure[cylindrical wave]{ \includegraphics[width=4.5cm]{柱面波.png}} \end{minipage} \caption{Comparison of Planar Reflected Wave and Cylindrical Reflected Wave } \end{figure} We consider the RIS as a rectangular and perfectly conducting plate with length $a$ in the $y$-axis direction and length $b$ in the $x$-axis direction, located in the horizontal plane. Suppose the wave number is $k=2\pi/\lambda$, where $\lambda$ is the wavelength. We assume that the RIS can realize a continuous reflection with coefficient function $\Gamma(x, y) = \tau(x,y) e^{j\beta(x,y)}$, where $\tau(x,y)$ is the amplitude coefficient and $\beta(x, y)$ is the phase shift at the point $(x, y, 0)$ on the RIS. A point source in the far field radiates a linearly polarized electromagnetic wave. We assume that the curvature of the incident wave front, over the dimensions of the RIS, can be neglected. Suppose the incident wave propagates parallel to the $yz$ plane, and the receiver is in the radiating near field (Fresnel) region, where the reactive fields are negligible compared with the radiating fields. Let $d$ denote the distance between the center of the antenna and the center of the RIS; the radiating near field region is commonly given by \cite{soft}: \begin{align} 0.62 \sqrt{\frac{(a^2+b^2)^{3/2}}{\lambda}}<d<\frac{2 (a^2+b^2)}{\lambda}.
\end{align} Note that depending on the values of $a$, $b$, and $\lambda$, the radiating near field region may or may not exist. \subsection{Channel Model} Suppose the receiver is equipped with a ULA of $M$ antennas, and the length of the ULA is $L$. As this is one of the first works studying the electromagnetic model, we assume that only the line-of-sight (LOS) path exists from the source to the RIS and from the RIS to the destination. Since the transmit antenna is in the far field, it can be regarded as a point source. The received signal can be written as \cite{France}, \cite{next} \begin{equation} \mathbf{y}=\mathbf{h} s+\mathbf{n}, \end{equation} where $s$ is the transmitted signal, $\mathbf{n}$ is the additive noise, and $\mathbf{h}$ is the channel vector. According to the Huygens–Fresnel principle, every point on the RIS is a new source of spherical wavelets, and the secondary wavelets emanating from different points mutually interfere. The sum of these spherical wavelets forms the wavefront propagating from the RIS to the receiver. The energy of the wave emitted by a point source falls off as the inverse square of the distance travelled, and thus the amplitude falls off as the inverse of the distance. The complex amplitude of the electric field intensity at the point $(x,y,0)$ is given by \begin{align} E(x,y)&=\frac{A e^{j k (l+y \sin(\theta_{\text{in}}))}}{l+y\sin(\theta_{\text{in}})} \approx\frac{A e^{j k (l+y \sin(\theta_{\text{in}}))}}{l} , \end{align} where $A$ represents the magnitude of the disturbance at the far-field point source, $l$ denotes the distance between the source and the center of the RIS, and the approximation holds because $l\gg y$.
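The Fresnel-region bounds quoted above can be evaluated directly. A minimal numpy sketch, using the RIS dimensions from the simulation section ($a=b=20\lambda$, $\lambda=0.1$ m); names and values are illustrative:

```python
import numpy as np

# Radiating near-field (Fresnel) region bounds for a rectangular RIS,
# following the expression cited in the text.
def fresnel_region(a, b, wavelength):
    d_min = 0.62 * np.sqrt((a**2 + b**2) ** 1.5 / wavelength)
    d_max = 2.0 * (a**2 + b**2) / wavelength
    return d_min, d_max

lam = 0.1                                  # wavelength in meters
d_min, d_max = fresnel_region(20 * lam, 20 * lam, lam)
# Matches the paper's bounds 93.263*lambda < d < 1600*lambda.
assert abs(d_min / lam - 93.263) < 1e-2
assert abs(d_max / lam - 1600.0) < 1e-9
```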
The energy flux density at $(0,0,0)$ can be calculated by electromagnetic theory or the Friis Transmission Equation \cite{3} as \begin{equation} D_0=\frac{\Vert \mathbf{E}(0,0) \Vert^2}{2\eta}=\frac{A^2}{2\eta l^2} =\frac{P_t G_t}{4\pi l^2}, \end{equation} where $P_t$ is the transmit power and $G_t$ is the transmit antenna gain. Thus, we obtain the identity \begin{equation} A^2=\frac{P_t G_t \eta}{2\pi}. \end{equation} Consider $(x,y,0)$, $x\in (-0.5b,0.5b)$, $y\in (-0.5a,0.5a)$ as new sources of spherical wavelets; the contribution of $(x,y,0)$ to the complex amplitude of the reflected wave at the $m$th antenna is given by the Fresnel–Kirchhoff diffraction formula as \begin{align} E_r(x,y,m)=&\frac{A}{j\lambda} \frac{ e^{j k (l+y \sin(\theta_{\text{in}}))}}{l} \frac{ e^{j k d(x,y,m)}}{d(x,y,m)} \nonumber \\ &\times \frac{\cos(\theta_{\text{in}})+\cos(\theta_{\text{out}}(x,y,m))}{2}, \end{align} where $\theta_{\text{in}}$ is the incident angle, $\theta_{\text{out}}(x,y,m)$ denotes the angle between $\mathbf{e}_z$ and the vector from $(x,y,0)$ to the $m$th receiver antenna, and $d(x,y,m)$ denotes the distance from $(x,y,0)$ to the $m$th receiver antenna.
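The identity $A^2 = P_t G_t \eta/(2\pi)$ above can be checked numerically: substituting it into the spherical-wave flux density $A^2/(2\eta l^2)$ must reproduce the Friis-style $P_t G_t/(4\pi l^2)$. Parameter values below are arbitrary placeholders:

```python
import numpy as np

# Numerical check that A^2 = P_t*G_t*eta/(2*pi) makes the field-based and
# Friis-based flux densities at distance l coincide.
P_t, G_t, eta, l = 1.0, 3.2, 377.0, 50.0
A_sq = P_t * G_t * eta / (2 * np.pi)

D0_field = A_sq / (2 * eta * l**2)         # from the spherical-wave amplitude
D0_friis = P_t * G_t / (4 * np.pi * l**2)  # from the Friis Transmission Equation
assert np.isclose(D0_field, D0_friis)
```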
Let $\mathbf{E}_r(x,y)=[E_r(x,y,1),\cdots,E_r(x,y,M)]^T$; then we rewrite (6) as \begin{align} \mathbf{E}_r(x,y)&=A \mathbf{q}(x,y) \circ \mathbf{b}(x,y)\\ \mathbf{q}(x,y)&=\frac{\sqrt{M}}{2j l \lambda} \left[\frac{\cos(\theta_{\text{in}})+\cos(\theta_{\text{out}}(x,y,1))}{d(x,y,1)} \right.\nonumber\\ &\hspace{1.3cm}\left.\cdots,\frac{\cos(\theta_{\text{in}})+\cos(\theta_{\text{out}}(x,y,M))}{d(x,y,M)} \right]^T \\ \mathbf{b}(x,y)&=\frac{1}{\sqrt{M}}[e^{jk(l+y\sin(\theta_{\text{in}})+d(x,y,1))},\nonumber\\ &\hspace{1.3cm}\cdots,e^{jk(l+y\sin(\theta_{\text{in}})+d(x,y,M))}]^T, \end{align} where $\mathbf{q}(x,y) \in \mathbb{C}^M$ denotes the product of the path gain from the transmit antenna to $(x,y,0)$ and the path gain from $(x,y,0)$ to each antenna of the receiver, $\circ$ denotes the Hadamard product, and $\mathbf{b}(x,y) \in \mathbb{C}^M$ represents the steering vector from $(x,y,0)$ to the receiver. Suppose the power reflected by $(x,y,0)$ and received by the $m$th antenna forms the vector $\mathbf{P}_r(x,y) \in \mathbb{R}^M$; the channel gain $\mathbf{g}(x,y) \in \mathbb{C}^M$ is derived as \begin{align} \mathbf{g}(x,y)&=\sqrt{\frac{\mathbf{P}_r(x,y)}{P_t}} =\sqrt{\frac{\left|\mathbf{E}_r(x,y)\right|^2 G_r \lambda^2}{2\eta P_t 4\pi }}\nonumber\\ &=\sqrt{\frac{A^2 G_r \lambda^2 \left|\mathbf{q}(x,y)\right|^2 \circ \left|\mathbf{b}(x,y)\right|^2 }{8\eta \pi P_t}}\nonumber\\ &=\sqrt{\frac{G_t G_r \lambda^2 \left|\mathbf{q}(x,y)\right|^2 }{16\eta \pi^2 M}} =\frac{\lambda \mathbf{q}(x,y)}{4 \pi}\sqrt{\frac{G_t G_r }{\eta M}}, \end{align} where $\left|\cdot\right|$ denotes the modulus of a complex number. Moreover, $\left|\cdot\right|^2$, $\sqrt{\cdot}$, and $(\cdot)^2$ are element-wise operations for vectors. The fourth equality holds because $\left|\mathbf{b}(x,y)\right|^2=\frac{1}{M}[1,\cdots,1]^T$. Moreover, $G_r$ is the antenna gain of each receiver antenna.
Finally, $\mathbf{h}$ is computed by merging the phase and amplitude shift of the RIS, the channel gain, and the steering vector, and integrating over the reflecting surface of the RIS, denoted by $S$, as \begin{align} \mathbf{h}=\iint_{S} \tau(x,y) e^{j\beta(x,y)}\mathbf{g}(x,y) \circ \mathbf{b}(x,y)dxdy. \end{align} The goal is to design the reflection coefficient, \textit{i.e.}, the best $\tau(x,y)$ and $\beta(x,y)$, that maximizes $\|\mathbf{h}\|^2$ such that the received signal has the largest power \cite{article}. \subsection{Electromagnetic Model} We assume the field configuration of the incident wave is transverse magnetic with respect to $\mathbf{e}_x$, \textit{i.e.}, the E-field is parallel to $\mathbf{e}_x$ and the H-field lies in the plane spanned by $\mathbf{e}_y$ and $\mathbf{e}_z$. Since the source is in the far field, the impinging wave field is approximated as a plane wave with the electric and magnetic field distributions: \begin{align} \mathbf{E}_{\text{in}} &=E_{\text{in}} e^{-j k\left(\sin \left(\theta_{\text{in}}\right) y-\cos \left(\theta_{\text{in}}\right) z\right)} \boldsymbol{e}_{x}, \\ \mathbf{H}_{\text{in}} &=-\frac{E_{\text{in}}}{\eta}\left(\cos \left(\theta_{\text{in}}\right) \boldsymbol{e}_{y}+\sin \left(\theta_{\text{in}}\right) \boldsymbol{e}_{z}\right) e^{-j k\left(\sin \left(\theta_{\text{in}}\right) y-\cos \left(\theta_{\text{in}}\right) z\right)}, \end{align} where $\eta$ is the characteristic impedance of the medium. Denote the electric field at the $m$th antenna caused by the reflected wave as $\mathbf{E}_{\text{out},m}$. Since the value of the energy flux density is $D_1=\frac{\Vert \mathbf{E}_{\text{out}} \Vert^2}{2\eta}$, the signal power $P$ at the ULA can be derived by the Friis Transmission Equation \cite{3} as \begin{align} P=\Vert \mathbf{h} s\Vert^2=\sum_{m=1}^M \frac{\Vert \mathbf{E}_{\text{out},m} \Vert^2}{2\eta} \frac{\lambda^2 G_r}{4\pi}, \end{align} where $G_r$ is the antenna gain.
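The surface integral defining $\mathbf{h}$ above can be approximated numerically by discretizing the RIS into a grid and summing $\tau e^{j\beta}\,\mathbf{g}(x,y)\circ\mathbf{b}(x,y)$ over the cells. The sketch below uses illustrative parameter values and a placeholder phase profile $\beta\equiv 0$, $\tau\equiv 1$; it is not the paper's optimized design:

```python
import numpy as np

# Riemann-sum approximation of h = ∬ tau*e^{j*beta} * g(x,y) ∘ b(x,y) dx dy.
lam = 0.1
k = 2 * np.pi / lam
a = b = 20 * lam
theta_in = np.pi / 6
l, G_t, G_r, eta, M = 1000.0, 1.0, 1.0, 377.0, 8

# Receiver ULA parallel to e_x, centered at (0, f_y, f_z).
f_y = f_z = 8.0
ant = np.stack([np.linspace(-0.5, 0.5, M),
                np.full(M, f_y), np.full(M, f_z)], axis=1)

n = 64                                   # grid resolution per axis
xs = np.linspace(-b / 2, b / 2, n)
ys = np.linspace(-a / 2, a / 2, n)
dA = (xs[1] - xs[0]) * (ys[1] - ys[0])   # surface element

h = np.zeros(M, dtype=complex)
for x in xs:
    for y in ys:
        d = np.linalg.norm(ant - np.array([x, y, 0.0]), axis=1)
        cos_out = ant[:, 2] / d          # cosine of the angle to e_z
        q = np.sqrt(M) / (2j * l * lam) * (np.cos(theta_in) + cos_out) / d
        b_vec = np.exp(1j * k * (l + y * np.sin(theta_in) + d)) / np.sqrt(M)
        g = lam * q / (4 * np.pi) * np.sqrt(G_t * G_r / (eta * M))
        h += g * b_vec * dA              # tau = 1, beta = 0 (placeholders)

assert h.shape == (M,) and np.all(np.isfinite(h))
```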
The identical relation confirms the compatibility of the channel model and the electromagnetic model. \section{Derive the Scattered Fields via the Induction Theorem} According to reflection theory \cite{textbook}, $\mathbf{E}_{\text{out}}$ and $\mathbf{H}_{\text{out}}$ satisfy the relations \begin{align} \left.\mathbf{E}_{\text{out}}\right|_{z=0}&=\left.\Gamma(x, y) \mathbf{E}_{\text{in}}\right|_{z=0} \\ \mathbf{e}_{z} \times\left.\mathbf{H}_{\text{out}}\right|_{z=0}&=-\Gamma(x, y) \mathbf{e}_{z} \times\left.\mathbf{H}_{\text{in}}\right|_{z=0}. \end{align} Suppose that the RIS can be replaced by an imaginary surface, and that the transmitted fields below this imaginary surface are denoted by $\mathbf{E}_{\text{tr}}$ and $\mathbf{H}_{\text{tr}}$, respectively. According to the induction theorem, $\mathbf{E}_{\text{in}}$ and $\mathbf{H}_{\text{in}}$ above this surface can be removed. Then, an equivalent electric current density $\mathbf{J}_{e}$ and a magnetic current density $\mathbf{M}_{e}$ must be imposed on this surface to satisfy the boundary conditions \cite{textbook}, which can be separately expressed as \begin{align} \mathbf{J}_{e} &=\mathbf{e}_{z} \times\left(\left.\mathbf{H}_{\text{out}}\right|_{z=0}-\left.\mathbf{H}_{\text{tr}}\right|_{z=0}\right), \\ \mathbf{M}_{e} &=-\mathbf{e}_{z} \times\left(\left.\mathbf{E}_{\text{out}}\right|_{z=0}-\left.\mathbf{E}_{\text{tr}}\right|_{z=0}\right). \end{align} Next, we replace the medium below this surface with a perfect magnetic conductor (PMC) such that $\mathbf{E}_{\text{tr}}$, $\mathbf{H}_{\text{tr}}$, and $\mathbf{M}_{e}$ become zero. Hence, only $\mathbf{J}_{e}$ contributes to the scattered fields. Finally, the image theory is applied to remove the PMC and to obtain an unbounded environment \cite{textbook}.
To achieve this, the final equivalent electric current density $\mathbf{J}_{f}$ is expressed as \begin{align} &\mathbf{J}_{f}=2 \mathbf{J}_{e}=-2 \tau e^{j \beta(x, y)} \mathbf{e}_{z} \times\left.\mathbf{H}_{\text{in}}\right|_{z=0}\nonumber\\ &=-2\frac{E_0}{\eta} \tau e^{j \beta(x, y)} \cos \theta_{\text{in}} e^{-j k \sin \left(\theta_{\text{in}}\right) y} \mathbf{e}_{x}=J_{x} \mathbf{e}_{x}. \end{align} With $\mathbf{J}_{f}$, we can compute the vector potential $\mathbf{A}$ [13, Ch. 6.6] at an arbitrary observation point $(x^{\prime} , y^{\prime} , z^{\prime})$ as \begin{align} \mathbf{A}=\frac{\mu}{4 \pi} \iint_{\mathcal{S}} \mathbf{J}_{f} \frac{e^{-j k R}}{R} d x d y, \end{align} where $R=\sqrt{\left(x^{\prime}-x\right)^{2}+\left(y^{\prime}-y\right)^{2}+\left(z^{\prime}\right)^{2}}$ is the distance from the point $(x, y, 0)$ to this observation point. Then, $\mathbf{E}_{\text{out}}$ can be derived as \begin{align} \mathbf{E}_{\text{out}}=\frac{1}{j k \sqrt{\mu \varepsilon}} \nabla \times(\nabla \times \mathbf{A})=\frac{1}{j k \sqrt{\mu \varepsilon}}\left[\nabla(\nabla \cdot \mathbf{A})-\nabla^2\mathbf{A}\right]. \end{align} Note that this expression is only applicable for the scattered fields above the $xy$ plane. \begin{figure}[t] \centering \centerline{\includegraphics[width=8.7cm]{锯齿图.png}} \caption{Phase shift on the RIS designed to convert the incident planar wave into a reflected planar wave. In this case, $\beta(x,y)$ is irrelevant to $x$.} \end{figure} \begin{figure}[t] \centering \centerline{\includegraphics[width=8.7cm]{柱面波相位变化.png}} \caption{Phase shift on the RIS designed to convert the incident planar wave into a reflected cylindrical wave.
In this case, $\beta(x,y)$ is irrelevant to $x$.} \end{figure} \section{Reflection Coefficient Design Criterion of the RIS} \subsection{Traditional Reflection Coefficient Design For Planar Wave} Traditionally, the RIS is designed to redirect the incident plane wave into a reflected planar wave with reflection angle $\theta_{\text{out}}$ and the following field distribution: \begin{align} \mathbf{E}_{\text{out}} =E_{\text{out}} e^{-j k\left(\sin \left(\theta_{\text{out}}\right) y+\cos \left(\theta_{\text{out}}\right) z\right)} \boldsymbol{e}_{x}. \end{align} Suppose the receiver ULA lies on the focal line at $(0,f_y,f_z)$, perpendicular to the $yz$ plane. In order to concentrate power on the focal line, we set $\theta_{\text{out}}=\arctan(f_y/f_z)$. With $\mathbf{E}_{\text{out}}$, $\beta(x,y)_{\text{planar}}$ is derived by the generalized Snell's law \cite{snell} as \begin{align} \beta(x,y)_{\text{planar}}&=\angle\left(\frac{\left.\mathbf{E}_{\text{out}}\right|_{z=0}\cdot \boldsymbol{e}_{x}}{\left.\mathbf{E}_{\text{in}}\right|_{z=0}\cdot \boldsymbol{e}_{x}}\right)=ky(\sin \left(\theta_{\text{in}}\right)-\sin \left(\theta_{\text{out}}\right)). \end{align} Since the amplitudes of the incident and the reflected wave are identical at each point of the RIS, the amplitudes of the reflection coefficient are identically equal to $1$, \textit{i.e.}, \begin{align} \tau(x,y)_{\text{planar}}=1. \end{align} However, the planar wave cannot focus the incident energy on a ULA in the near field. \subsection{Reflection Coefficient Design For Cylindrical Wave} In order to focus the incident energy on the focal line, the optimal scattered waves should converge on the focal line. To achieve constructive interference, all the waves should share the same phase on the focal line.
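The planar-wave phase profile derived above, $\beta(x,y)_{\text{planar}}=ky(\sin\theta_{\text{in}}-\sin\theta_{\text{out}})$, is linear in $y$ and independent of $x$. A minimal sketch with illustrative values:

```python
import numpy as np

# Planar-wave phase profile from the generalized Snell's law:
# beta_planar(y) = k*y*(sin(theta_in) - sin(theta_out)); no x dependence.
lam = 0.1
k = 2 * np.pi / lam
theta_in = np.pi / 6
f_y = f_z = 8.0
theta_out = np.arctan2(f_y, f_z)         # = pi/4 for f_y = f_z

y = np.linspace(-1.0, 1.0, 201)
beta_planar = k * y * (np.sin(theta_in) - np.sin(theta_out))

# The profile is a straight line with slope k*(sin(theta_in) - sin(theta_out)).
slope = (beta_planar[-1] - beta_planar[0]) / (y[-1] - y[0])
assert np.isclose(slope, k * (np.sin(theta_in) - np.sin(theta_out)))
```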
Thus, when we consider the time reverse of the optimal waves (the backward propagation of the optimal waves), they can be regarded as radiated by a common source on the focal line with current intensity $I_1$. Suppose the radiation distribution is invariant along $x$; then the source of radiation can be seen as an infinitely long line source parallel to $\boldsymbol{e}_{x}$ located at $(0,f_y,f_z)$. Since the time reverse of the optimal scattered waves is axially symmetric with the focal line as the axis, the optimal scattered waves share the same property, \textit{i.e.}, the optimal scattered waves for the ULA should have cylindrical wave fronts, from which the scattered electric field $\mathbf{E}_{\text{out}}$ and magnetic field $\mathbf{H}_{\text{out}}$ can be expressed as \cite{textbook} \begin{align} \mathbf{E}_{\text{out}}=& \left(\frac{-I_{1} k \eta}{4} H_{0}^{(2)}\left(k R\right)\right)^* \boldsymbol{e}_{x}, \\ \mathbf{H}_{\text{out}}=& \left(\frac{-j I_{1} k}{4} H_{1}^{(2)}\left(k R\right)\right)^* \left(\frac{(z-f_z) \boldsymbol{e}_{y}}{R}-\frac{(y-f_y)\boldsymbol{e}_{z}}{R}\right), \end{align} where $R=\sqrt{(y-f_y)^{2}+(z-f_z)^{2}}$, $I_{1} \in \mathbb{C}$ denotes the current intensity of the line source, $H_{n}^{(2)}$ refers to the Hankel function of the second kind of order $n$, and $(\cdot)^*$ is used to invert the phase for the time reverse operation. Without loss of generality, $\angle I_{1}$ is assumed to be $0$.
With $\mathbf{E}_{\text{out}}$, $\beta(x,y)_{\text{cylind}}$ is derived by the generalized Snell's law \cite{snell} as \begin{align} \beta(x,y)_{\text{cylind}}&=\angle\left(\frac{\left.\mathbf{E}_{\text{out}}\right|_{z=0}\cdot \boldsymbol{e}_{x}}{\left.\mathbf{E}_{\text{in}}\right|_{z=0}\cdot \boldsymbol{e}_{x}}\right) =\angle\left(\frac{(\frac{-I_{1} k \eta}{4} H_{0}^{(2)}\left(k R\right))^*}{E_0 e^{-j k\left(\sin \left(\theta_{\text{in}}\right) y\right)}}\right) \nonumber\\ &=-\angle\left(-H_{0}^{(2)}\left(k R\right)\right)+k\sin \left(\theta_{\text{in}}\right)y. \end{align} As an example, Fig.~3 shows $\beta(x,y)_{\text{cylind}}$ when $a=b=20\lambda$ and the incident angle is $\pi/6$; the focal line is parallel to $\mathbf{e}_x$, so $\beta(x,y)_{\text{cylind}}$ is irrelevant to $x$ and depends only on $y$. The tangent slope of $\beta(x,y)_{\text{cylind}}$ changes from positive to negative as the reflection angle decreases from $y=-10\lambda$ to $y=10\lambda$. Where the tangent slope of $\beta(x,y)_{\text{cylind}}$ is zero, the reflection angle equals the incident angle, as in traditional mirror reflection. As $y$ decreases, $\beta(x,y)_{\text{cylind}}$ becomes increasingly close to a linear function, as the reflection angle changes less noticeably. When the reflected waves are plane waves, the amplitudes of the scattered radiation at each point of the RIS should be identical, which means the amplitudes of the reflection coefficient at each point of the RIS are identical too. However, when the reflected waves are cylindrical waves, the amplitudes of the scattered radiation at each point of the RIS depend on $R$ and $|I_{1}|$. Given the location and the attitude of the focal line, $R$ is known and $|I_{1}|$ can be calculated from $E_0$.
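The cylindrical phase profile above can be evaluated without a special-function library: for $kR\gg 1$ (which holds across the RIS for the focal distances considered), the large-argument form $H_0^{(2)}(z)\approx\sqrt{2/(\pi z)}\,e^{-j(z-\pi/4)}$ gives the phase of the Hankel function directly. The sketch below relies on this asymptotic approximation and uses illustrative parameter values:

```python
import numpy as np

# beta_cylind(y) = -angle(-H0^(2)(kR)) + k*sin(theta_in)*y, with the Hankel
# phase taken from its large-argument asymptotic form (valid here: kR ~ 10^2).
lam = 0.1
k = 2 * np.pi / lam
theta_in = np.pi / 6
f_y = f_z = 8.0

def beta_cylind(y):
    R = np.sqrt((y - f_y) ** 2 + f_z ** 2)     # distance to the focal line at z = 0
    angle_h = -(k * R - np.pi / 4)             # asymptotic phase of H0^(2)(kR)
    # angle(-H) differs from angle(H) by pi (mod 2*pi)
    return -np.angle(-np.exp(1j * angle_h)) + k * np.sin(theta_in) * y

y = np.linspace(-1.0, 1.0, 5)
vals = beta_cylind(y)
assert vals.shape == (5,) and np.all(np.isfinite(vals))
```

With `scipy` available, `scipy.special.hankel2(0, k * R)` could replace the asymptotic phase exactly.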
With the assumption that the RIS is passive and lossless, power conservation determines the relation between $E_0$ and $|I_{1}|$. The power of the incident wave on the RIS and the power of the reflected wave from the RIS are \begin{align} P_{\text {incident}} &=\frac{E_{0}^{2}}{2 \eta} a b \cos(\theta_{\text{in}}), \\ P_{\text {reflected}} &=\frac{\tan ^{-1}((f_y + 0.5a)/ f_z)-\tan ^{-1}((f_y - 0.5a)/ f_z)}{2\pi}\nonumber\\ &\hspace{5cm}\times \frac{|I_{1}|^{2} k \eta b}{16 \pi}, \end{align} respectively. When $P_{\text {incident}} = P_{\text {reflected}}$, $|I_{1}|$ is determined by $E_0$, and the desired amplitude of the reflection coefficient is derived by the definition as \begin{align} \tau(x,y)_{\text{cylind}}=\left|\frac{(\frac{-|I_{1}| k \eta}{4} H_{0}^{(2)}\left(k R\right))^*}{E_0 e^{-j k\left(\sin \left(\theta_{\text{in}}\right) y\right)}}\right|. \end{align} \section{Simulation Results and Analysis} In the simulations, we set $a = b =20 \lambda$ and $\lambda = 0.1~\mathrm{m}$, \textit{i.e.}, the frequency is $f=2.998~\mathrm{GHz}$ and the radiating near field requires $93.263\lambda< d < 1600\lambda$. Moreover, we set $E_{\text{in}} = 1~\mathrm{V/m}$, $M=128$, $\eta=377~\Omega$, $G_r=5~\mathrm{dB}$, $L=20 \lambda$, and $\theta_{\text{in}} =30\degree$. \subsection{The Comparison of Received Power Between Planar Waves and Cylindrical Waves} \begin{figure}[t] \centering \centerline{\includegraphics[width=8.7cm,height=7cm]{能量分布比较.png}} \caption{The Comparison of Normalized Power between Planar Wave and Cylindrical Wave} \end{figure} In order to examine the degree of energy leakage, \textit{i.e.}, the energy carried by the reflected waves to undesired areas, we select the observation points on the circular arc $x=0$, $y=d\cos(\theta)$, $z=d\sin(\theta)$, $\theta \in (0\degree, 90\degree)$ and calculate the electric field $\mathbf{E}_{\text{obs}}$ at the observation points.
Three groups of $(f_y,f_z)$ are set in order as $(80\lambda, 80\lambda)$, $(180\lambda, 180\lambda)$, and $(280\lambda, 280\lambda)$, \textit{i.e.}, we want to focus the power at $\theta = 45\degree$ with different $d$. The distribution of the normalized power $P_{n}=\frac{\Vert \mathbf{E}_{\text{obs}} \Vert^2}{2\eta d^2}$ is shown in Fig.~4. For the cylindrical wave, the power is maximum at $\theta=45\degree$, and the majority of the power is concentrated around the focal line. The degree of power concentration remains almost unchanged when the focal line moves closer to the center of the RIS. For the planar wave, we design the reflection direction to be parallel to the vector from $(0, 0, 0)$ to $(0, f_y, f_z)$, \textit{i.e.}, the reflection angle is $45\degree$. When $f_y=f_z=80\lambda$, the main lobe width is significantly larger than that of the proposed cylindrical wave and the maximum power is 8.53 dB smaller, which results in serious energy leakage. As the focal line moves farther from the center of the RIS, the degree of power concentration increases and the power distribution of the planar wave becomes increasingly similar to that of the cylindrical wave. Since the channel capacity is calculated as \begin{align} C=\log_2\left(1+\sum_{m=1}^M \frac{\Vert \mathbf{E}_{\text{out},m} \Vert^2}{2\eta} \frac{\lambda^2 G_r}{4\pi N}\right), \end{align} where $N$ denotes the power of the additive noise, the proposed cylindrical reflected waves can reduce energy leakage and thus enlarge the channel capacity compared to the planar reflected wave. In practice, the location and attitude of the ULA are generally unknown. In order to calculate the optimal reflection coefficient of the RIS, we first need to sense the location and attitude of the ULA.
As shown in Fig.~4, the concentrated power distribution of the cylindrical reflected waves leads to a high correlation between the received power and the location and attitude of the ULA. Thus, the cylindrical reflected waves can be utilized to sense the location and attitude of the ULA, as demonstrated by the simulations in the following subsections. However, the planar reflected waves cannot be utilized to sense the location or the attitude of the ULA in the radiating near field. \subsection{Cylindrical Wave Used For Location Sensing} \begin{figure}[t] \centering \centerline{\includegraphics[width=8.7cm,height=7cm]{检测位置.png}} \caption{The received power versus $\psi$.} \end{figure} Suppose the ULA receiver antenna is located in the near field of the RIS with a fixed attitude parallel to $\mathbf{e}_x$, with an unknown distance $d_0 \in (d_{min},d_{max})$ (where $(d_{min},d_{max})$ is the predetermined search range) and an unknown angle position $\psi_0 \in (0\degree,90\degree)$. In order to determine $d_0$ and $\psi_0$, we propose the following strategy. For the cylindrical waves, we adjust the coefficient of the RIS such that the projection of the focal line in the $yz$ plane traverses the region $d \in (d_{min},d_{max})$, $\psi \in (0\degree,90\degree)$. During the process, the receiver ULA records the simultaneous change of the received power $P_r$ as shown in Fig.~5, and the estimate of $(d_0, \psi_0)$ can be written as \begin{align} \left(\hat{d}_0, \hat{\psi}_0\right)= \arg\max_{(d, \psi)}{P_r}. \end{align} The location sensing simulation is shown in Fig.~5, where $(d_0, \psi_0)=(180\lambda,67\degree)$ and $\arg \max_{(d, \psi)}{P_r}$ is identical to $(d_0, \psi_0)$. Note that the above sensing process is similar to traditional beam scanning, with the difference that we need to sense both the distance and the angle to locate the ULA in the near field.
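The $(d,\psi)$ scan above is a two-dimensional grid search followed by an argmax. The toy sketch below replaces the electromagnetic power map by a hypothetical surrogate peaked at the true location, only to illustrate the search logic; the surrogate is a placeholder, not the paper's simulation:

```python
import numpy as np

# Grid search over (d, psi): scan the focal line through the search region,
# record the received power, and take the argmax as the location estimate.
lam = 0.1
d0, psi0 = 180 * lam, np.deg2rad(67)     # true (unknown) ULA location

def received_power(d, psi):
    # Surrogate power map, sharply peaked at (d0, psi0); illustrative only.
    return np.exp(-((d - d0) / lam) ** 2 - ((psi - psi0) / 0.02) ** 2)

ds = np.linspace(100 * lam, 300 * lam, 201)          # distance grid (step = lam)
psis = np.deg2rad(np.linspace(0, 90, 181))           # angle grid (step = 0.5 deg)
P = received_power(ds[:, None], psis[None, :])
i, j = np.unravel_index(np.argmax(P), P.shape)       # (d_hat, psi_hat)

assert abs(ds[i] - d0) <= ds[1] - ds[0]
assert abs(psis[j] - psi0) <= psis[1] - psis[0]
```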
However, the traditional planar wave is incapable of sensing the correct $(d_0, \psi_0)$, as the received power is nearly flat around $(d_0, \psi_0)$, as shown in Fig.~4. \subsection{Cylindrical Wave Used For Attitude Sensing} \begin{figure}[t] \centering \centerline{\includegraphics[width=8.7cm]{旋转.png}} \caption{The received power versus $\phi$ and $d$.} \end{figure} In order to sense the attitude of the ULA (we assume the receiver antenna is constrained in the plane perpendicular to $\mathbf{e}_y$), we first explore the relationship between the received power $P_r$ and the attitude of the ULA in the following simulation. Suppose the ULA receiver antenna is located in the radiating near field of the RIS with its center fixed at $(0, f_y, f_z)$. The angle between the antenna and $\mathbf{e}_x$ is defined as $\phi \in (0\degree,90\degree)$. The received power $P_r$ versus $\phi$ is shown in Fig.~6. Since both the planar wave and the cylindrical wave are designed to focus power on a ULA parallel to $\mathbf{e}_x$, the power is maximum when $\phi=0\degree$ and decreases as $\phi$ increases. When the antenna is closer to the RIS, the variation of the power across different $\phi$ is more significant, which indicates that the antenna attitude has a more pronounced influence on the power when the antenna is closer to the RIS. Besides, the variation of the power across different $\phi$ is much more significant for the cylindrical wave than for the planar wave, which indicates that the received power for the cylindrical wave is more sensitive to changes in the antenna attitude. Therefore, given the position of the antenna, we can design the RIS to reflect a cylindrical wave toward the center of the antenna and acquire information about the antenna attitude through the power it receives. However, the traditional planar wave is incapable of sensing the attitude, as $P_r$ does not change sharply with the antenna attitude.
\section{Conclusion} In this paper, we propose an RIS scheme to convert planar waves into cylindrical waves such that the energy is concentrated on the ULA antenna. Simulation demonstrates that the proposed RIS scheme can reduce energy leakage and thus enlarge the channel capacity compared to the traditional planar wave scheme. Moreover, with cylindrical wave radiation, the power received by a near-field ULA antenna is a function of its location and attitude, which could be utilized to sense the location and attitude of the antenna. \small \bibliographystyle{ieeetr} \section{Introduction} Massive multi-input multi-output (MIMO), a key technology in 5G communication systems, has great advantages over traditional MIMO systems, such as high spectral efficiency, high energy efficiency, and high spatial resolution \cite{91975982}. To realize these advantages, accurate downlink channel state information (CSI) is critical. Thanks to the development of artificial intelligence, deep learning (DL) based autoencoders have been adopted for CSI feedback in 6G-oriented wireless communication. In an autoencoder, an encoder is used at the terminal to compress the CSI into a codeword, and the terminal feeds the codeword back to the base station, which uses a decoder to restore the codeword to the explicit CSI. In the above-mentioned DL-based work, the existing autoencoder models for CSI feedback are without exception bottleneck structures, which consist of two convolutional neural networks (CNN) and two fully connected layers (FCN), as depicted in Fig.~1 \cite{2019Multi}. The fully connected layers are located at the end of the encoder and the beginning of the decoder to compress and decompress the CSI. The compression rate can be controlled by adjusting the number of neurons in the fully connected layer, so that one autoencoder corresponds to one fixed compression rate. The CSI feedback in massive MIMO systems must be compressed more drastically when the coherence time is short, and vice versa.
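The fixed-rate bottleneck described above can be sketched with plain matrices: the width of the fully connected layer fixes the codeword length $M$, and hence the compression ratio. The random weights below are placeholders for trained parameters, and all sizes are illustrative:

```python
import numpy as np

# Bottleneck autoencoder skeleton: one fully connected compression layer
# (N -> M) and one decompression layer (M -> N). Changing the compression
# ratio gamma changes M, and with it the shapes of both weight matrices.
rng = np.random.default_rng(0)
N, gamma = 2048, 1 / 16
M = int(N * gamma)                       # codeword length fixed by layer width

W_enc = rng.standard_normal((M, N)) / np.sqrt(N)
W_dec = rng.standard_normal((N, M)) / np.sqrt(M)

H = rng.standard_normal(N)               # vectorized (truncated) channel matrix
s = W_enc @ H                            # encoder output: M-dim codeword
H_hat = W_dec @ s                        # decoder output: N-dim reconstruction

assert s.shape == (M,) and H_hat.shape == (N,)
```

This makes the problem concrete: a network trained for one $\gamma$ cannot be reused for another, since the weight shapes themselves change.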
Therefore, the compression rate must be adjusted according to the environment. For different compression rates, both the number and the values of the network parameters change. It is therefore unavoidable to train multiple neural networks for different compression rates, which increases the network training time and the load of storing network parameters. Aiming at the problem of multi-compression-rate CSI feedback, we adopt a parameter prediction model and use a single neural network to adapt to different compression rates, thereby achieving multi-compression-rate feedback and reducing the network training time and parameter scale. Since the compression rate varies according to the degree of sparsity of the image, our network should self-adaptively extract hierarchical features, which is impractical in most networks. To address this problem, Yulun Zhang et al. proposed the residual dense network (RDN) to fully exploit all the hierarchical features from the original low-resolution (LR) image with their proposed residual dense block in image super-resolution. They propose the residual dense block (RDBlock) as the building module for the RDN. An RDBlock consists of densely connected layers and local feature fusion (LFF) with local residual learning (LRL). The output of one RDBlock has direct access to each layer of the next RDBlock, resulting in a contiguous state pass. Each convolutional layer in an RDBlock has access to all the subsequent layers and passes on information that needs to be preserved. LFF extracts local dense features by adaptively preserving the information, which leads to an implicit deep supervision. Although the RDN is originally used in image super-resolution, we draw lessons from its structure to create a novel network, as depicted in Fig.~2. Our network not only uses dense connections within the same RDBlock, but also allows direct connections between any two RDBlocks. This Dense In Dense structure avoids losing any useful hierarchical features from the original codeword.
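The dense connectivity of an RDBlock can be sketched conceptually in a few lines. In the sketch below the convolutions are replaced by simple channel-mixing matrices on a feature vector, which keeps only the connectivity pattern (dense concatenation, local feature fusion, local residual learning); all sizes and weights are illustrative placeholders, not the trained network:

```python
import numpy as np

# Conceptual RDBlock: each layer reads the concatenation of the block input
# and all preceding layer outputs; LFF projects the concatenation back to
# the input width; LRL adds the block input (residual connection).
rng = np.random.default_rng(0)
C, G, L = 16, 8, 3                       # input channels, growth rate, layers

def rd_block(x):
    feats = [x]                          # dense state: input + all layer outputs
    for _ in range(L):
        concat = np.concatenate(feats)   # every layer sees all previous features
        W = rng.standard_normal((G, concat.shape[0])) / np.sqrt(concat.shape[0])
        feats.append(np.maximum(W @ concat, 0.0))   # "conv" + ReLU stand-in
    W_lff = rng.standard_normal((C, C + L * G)) / np.sqrt(C + L * G)
    fused = W_lff @ np.concatenate(feats)           # local feature fusion
    return x + fused                     # local residual learning

x = rng.standard_normal(C)
y = rd_block(x)
assert y.shape == (C,)
```

Because the block's output has the same width as its input, blocks can be chained, and the Dense-In-Dense variant additionally forwards each block's output to all later blocks.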
It extracts and fuses features in a more efficient way. We will detail our RDNet in the next section. In this paper, we introduce a novel autoencoder framework and training strategy, which is especially suitable for CSI feedback with multiple compression rates. \begin{figure*}[htp] \centering \centerline{\includegraphics[width=18cm]{RDnet.png}} \caption{RDNet} \end{figure*} \section{SYSTEM MODEL} We consider a simple single-cell downlink massive MIMO system with $N_{t}$ ($\gg1$) transmit antennas at the BS and a single receive antenna at the UE. The system operates in OFDM over $N_{c}$ subcarriers. The received signal at the $n$-th subcarrier can be expressed as follows: \begin{equation} \begin{aligned} y_{n}=\tilde{\mathbf{h}}_{n}^{H} \mathbf{v}_{n} x_{n}+z_{n} \end{aligned} \end{equation} where $\tilde{\mathbf{h}}_{n}$ and $\mathbf{v}_{n} \in \mathbb{C}^{N_{t} \times 1}$ are the channel frequency response vector and the precoding vector at the $n$-th subcarrier, respectively, $x_{n}$ represents the transmitted data symbol, $z_{n}$ is the additive noise or interference, and $(\cdot)^{H}$ represents the conjugate transpose. The CSI in the spatial-frequency domain can be expressed in matrix form as $\tilde{\mathbf{H}}=\left[\tilde{\mathbf{h}}_{1}, \tilde{\mathbf{h}}_{2}, \ldots, \tilde{\mathbf{h}}_{N_{c}}\right]^{H} \in \mathbb{C}^{N_{c} \times N_{t}}$. In the FDD system, the UE estimates the downlink channel and then feeds this information (CSI) back to the BS. With the downlink CSI, the BS calculates the precoding vector $\mathbf{v}_{n} \in \mathbb{C}^{N_{t} \times 1}$ via singular value decomposition. The number of feedback parameters is $2N_{c}N_{t}$, which is proportional to the number of antennas. Excessive feedback in massive MIMO systems greatly occupies the precious bandwidth. We consider reducing the feedback overhead by exploiting the sparsity of the CSI in the angular-delay domain.
The CSI matrix in the spatial-frequency domain can be converted into the angular-delay domain by a 2D discrete Fourier transform (DFT) as follows: \begin{equation} \begin{aligned} \mathbf{H}=\mathbf{F}_{\mathrm{d}} \tilde{\mathbf{H}} \mathbf{F}_{\mathrm{a}} \end{aligned} \end{equation} where $\mathbf{F}_{\mathrm{d}}$ is an $N_{c} \times N_{c}$ DFT matrix and $\mathbf{F}_{\mathrm{a}}$ is an $N_{t} \times N_{t}$ DFT matrix. Due to the sparsity of the massive MIMO channel in the angular-delay domain, most elements in the delay domain are near zero and only the first $N_{c}'$ ($<N_{c}$) rows exhibit distinct non-zero values, because the time delay among the multiple paths lies in a particularly limited period. Therefore, we directly truncate the channel matrix to its first $N_{c}'$ rows, which are the rows with distinct non-zero values. Meanwhile, the channel matrix is also sparse in the angular domain, obtained by performing a DFT on the spatial-domain channel vectors. In this paper, we regard the 2D channel matrix as an image, and the normalized absolute values of the CSI matrix are regarded as gray-scale values to visualize the sparsity of the retained $N_{c}' \times N_{t}$ channel matrix $\mathbf{H}$ in the angular-delay domain. We are interested in designing the encoder \begin{equation} \mathbf{s}=f_{\text {en }}(\mathbf{H}) \end{equation} which transforms the channel matrix into an $M$-dimensional vector (codeword), where $M < N$. The data compression ratio is $\gamma = M/N$. In addition, we have to design the inverse transformation (decoder) from the codeword to the original channel, that is, \begin{equation} \hat{\mathbf{H}}=f_{\text {de }}(\mathbf{s}) \end{equation} The CSI feedback approach is as follows. Once the channel matrix $\tilde{\mathbf{H}}$ is acquired at the UE side, we perform the 2D DFT in (2) to obtain the truncated matrix $\mathbf{H}$ and then use the encoder (3) to generate a codeword $\mathbf{s}$. Next, $\mathbf{s}$ is returned to the BS, and the BS uses the decoder (4) to obtain $\hat{\mathbf{H}}$.
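As a minimal numerical sketch of the angular-delay transform and row truncation above (the matrix sizes and the unitary DFT normalization are illustrative assumptions, not the paper's simulation setup):

```python
import numpy as np

# Sketch: spatial-frequency CSI -> angular-delay CSI via a 2D DFT,
# followed by truncation to the first N_c' delay rows.
rng = np.random.default_rng(0)
N_c, N_t, N_c_trunc = 32, 8, 4   # subcarriers, antennas, retained delay rows (illustrative)

# Random stand-in for the estimated spatial-frequency channel (N_c x N_t).
H_sf = rng.standard_normal((N_c, N_t)) + 1j * rng.standard_normal((N_c, N_t))

F_d = np.fft.fft(np.eye(N_c)) / np.sqrt(N_c)   # N_c x N_c unitary DFT matrix
F_a = np.fft.fft(np.eye(N_t)) / np.sqrt(N_t)   # N_t x N_t unitary DFT matrix

H_ad = F_d @ H_sf @ F_a                        # angular-delay domain channel
H_trunc = H_ad[:N_c_trunc, :]                  # keep the first N_c' delay rows

# The inverse DFT recovers the spatial-frequency channel (exactly, before truncation).
H_back = np.conj(F_d.T) @ H_ad @ np.conj(F_a.T)
assert np.allclose(H_back, H_sf)
assert H_trunc.shape == (N_c_trunc, N_t)
```

The truncated matrix `H_trunc` plays the role of $\mathbf{H}$ fed to the encoder; a real channel would concentrate its energy in those first rows rather than being random.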
The final channel matrix in the spatial-frequency domain can be obtained by performing the inverse DFT. \section{DESIGN OF RDNET AND TRAINING SCHEME} \subsection{Encoder Design} We choose an inception network as the structure of the encoder. The inception network is made up of multiple inception modules. In each module, a 1$\times$1 convolution is used to gather information, and then feature extraction at different scales is carried out by convolutions of multiple sizes. The outputs of all inception modules are concatenated and processed by a 5$\times$5 convolution to reduce the number of channels. In this way, the encoder can process spatial information at various scales and then aggregate it. The parameters of each layer do not depend upon the compression rate. \subsection{Decoder Design} We choose a residual dense network as the structure of the decoder \cite{8964437}. RDBlocks consist of 3 densely connected layers and local feature fusion (LFF) with local residual learning (LRL) (Fig. 2). In each RDBlock, the input of every layer has direct access to all the subsequent layers, passing on information that needs to be preserved and leading to an implicit deep supervision. Although the output of each layer has only 4 channels, the concatenation of all the preceding layers has as many as 10 channels. Concatenating the states of all the preceding layers within the current RDBlock, the LFF introduces a channel attention module to exploit the inter-channel relationships of the features, and utilizes a 3$\times$3 convolutional layer to reduce the channel number to 2. The result is added to the original input of the RDBlock, which makes it easier to asymptotically approximate the complicated functions that we want. Besides, our RDNet also supports contiguous memory among RDBlocks, which ensures the hierarchical features produced by the RDBlocks are amply used. This is achieved by connecting all these blocks densely, so the local feature information is fully used without loss.
The output of the $x$-th RDBlock can be formulated as: \begin{equation} F_{x}=\sigma_{x}([F_{x-1}, F_{x-2}, \cdots, F_{0}]) \end{equation} where $\sigma_{x}$ denotes the function of the $x$-th RDBlock, and $[F_{x-1}, F_{x-2},\cdots, F_{0}]$ refers to the concatenation of the feature maps produced by the $(x-1)$-th, $\cdots$, $1$-st RDBlocks together with the input $F_{0}$. Suppose the parameters in the Meta-Upscale layer are $w_{1},\cdots,w_{k}$, so the input of the first RDBlock is $F_{0}=\sigma_{0}([F_{-1},w_{1},\cdots,w_{k}])$, where $F_{-1}$ refers to the output of the Meta-Upscale layer. When conducting back propagation, the partial derivative of $F_{x}$ with respect to $F_{0}$ and the partial derivative of the loss function $L$ with respect to $w_{i}$ can be formulated by mathematical induction as: \begin{equation} \begin{aligned} &\frac{\partial F_{x}}{\partial F_{0}} = \sum_{k=1}^{x} \frac{\partial \sigma_{x}}{\partial F_{x-k}} \frac{\partial F_{x-k}}{\partial F_{0}}\\ &=\sum_{k=1}^{x-1} \frac{\partial \sigma_{x}}{\partial F_{x-k}} \frac{\partial F_{x-k}}{\partial F_{0}}+\frac{\partial \sigma_{x}}{\partial F_{0}} \\ &=\sum_{k=1}^{x-1} \sum_{0<p_{1}<p_{2}<\ldots <p_{k}<x}\frac{\partial \sigma_{x}}{\partial F_{x-p_{1}}} \frac{\partial \sigma_{x-p_{1}}}{\partial F_{x-p_{2}}} \ldots \frac{\partial \sigma_{x-p_{k}}}{\partial F_{0}}+\frac{\partial \sigma_{x}}{\partial F_{0}} \\ \end{aligned} \end{equation} \begin{equation} \begin{aligned} \frac{\partial L}{\partial w_{i}} &=\frac{\partial L}{\partial F_{x}} \frac{\partial F_{x}}{\partial F_{0}} \frac{\partial F_{0}}{\partial w_{i}} \end{aligned} \end{equation} In these equations, $F_{x}$ refers to the column vector reshaped from the feature map $F_{x}$, so $\frac{\partial F_{x}}{\partial F_{0}}$ is a Jacobian matrix. The partial derivative of the loss function $L$ is the sum of a series of products of both longer and shorter chains of matrices. During back propagation, the products of shorter chains create short connections and effectively prevent the gradient from vanishing.
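The recursion $F_{x}=\sigma_{x}([F_{x-1},\cdots,F_{0}])$ can be sketched with stand-in blocks (a toy numpy sketch; random linear maps with a nonlinearity play the role of the convolutional RDBlocks):

```python
import numpy as np

# Toy sketch of the dense-in-dense recursion: block x consumes the
# concatenation of ALL earlier outputs [F_{x-1}, ..., F_0].
rng = np.random.default_rng(0)
feat_dim, num_blocks = 2, 4

def make_block(in_dim, out_dim):
    W = rng.standard_normal((out_dim, in_dim)) / np.sqrt(in_dim)
    return lambda z: np.tanh(W @ z)      # stand-in for sigma_x

F = [rng.standard_normal(feat_dim)]      # F_0: processed Meta-Upscale output
for x in range(1, num_blocks + 1):
    concat = np.concatenate(F[::-1])     # [F_{x-1}, F_{x-2}, ..., F_0]
    F.append(make_block(concat.size, feat_dim)(concat))

# Block x sees x * feat_dim input features: the input width grows linearly,
# which is the price of keeping every earlier state directly reachable.
assert all(f.shape == (feat_dim,) for f in F)
assert len(F) == num_blocks + 1
```

The linearly growing concatenation is what creates both the short and the long gradient paths discussed above.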
The products of longer chains of matrices create long connections and enhance the ability to approximate nonlinear functions. This Dense in Dense structure can smooth the gradient descent curve and remarkably improve the network's performance when the RDBlocks are relatively few. However, the Dense in Dense structure brings a heavier computational burden as the number of RDBlocks increases. Concatenating the outputs of all the preceding RDBlocks, global feature fusion (GFF) is achieved by applying a 3$\times$3 convolutional layer. This layer reduces the channel number to 2 at the same time, which makes global residual learning (GRL) possible, as shown in Fig. 4. It should be noted that this long skip connection is essential to fit the ideal optimal mapping for this complicated decoder. \section{MULTIPLE-RATE CSI FEEDBACK} \subsection{Multiple-Rate Compression} \subsection{Meta-Upscale Formulation} Instead of directly upscaling codewords, given a compressed image $I^{c}$ which is downscaled by the encoder, the task of Meta-Upscale is to generate an upscaled image $I^{u}$ whose size is the same as that of the original image. Here, we focus on formulating the Meta-Upscale module, which has three important functions: the location projection, the weight prediction, and the feature mapping \cite{2020Meta}. \begin{figure}[htp] \centering \includegraphics[width=8cm]{projection.png} \caption{Location Projection} \end{figure} Suppose the compression rate in height is $r_{1}$ and the compression rate in width is $r_{2}$. We define the projection point of $(i,j)$ as $(i_{0},j_{0})$, which is equal to $(\lfloor i/r_{1} \rfloor, \lfloor j/r_{2} \rfloor)$, as depicted in Fig. 3. For each pixel $(i,j)$ on the uncompressed image, we suppose that its value is decided by the features around the pixel $(i_{0},j_{0})$ on the compressed image and the weights of the corresponding filter.
From this perspective, the upscale module can be seen as a mapping function between $I^{u}$ and $I^{c}$. First, the upscale module should map the pixel $(i,j)$ to the pixel $(i_{0},j_{0})$. Then, the upscale module needs a specific filter to map the features around the pixel $(i_{0},j_{0})$ to generate the value of the pixel $(i,j)$. Since each pixel on the uncompressed image corresponds to a filter, for different compression rates both the number of the filters and the weights of the filters differ from those of other compression rates. The idea of weight prediction, which comes from meta-learning, helps to solve this problem \cite{2015Metalearning}: the parameters of each kernel are dynamically predicted by another two-layer neural network that takes the four parameters $(i-i_{0}, j-j_{0}, 1/r_{1}, 1/r_{2})$ as input. Therefore, in the process of upscaling the image, a total of $r_{1} \times r_{2}$ convolution kernels are used to process the compressed image, and the parameters of these convolution kernels are generated by another neural network from the four position and compression parameters. Suppose the vector $\beta=(i-i_{0}, j-j_{0}, 1/r_{1}, 1/r_{2})$ is sufficient to determine the transformation performed by a convolutional layer. The objective is to find a nonlinear function $F(\cdot)$ that maps $\beta$ to the kernel weights as $W=F(\beta)$. By training a network that can effectively predict the convolution kernel parameters, we avoid storing a large number of convolution kernel parameters for different compression rates, which reduces the network training time and the load of storing network parameters. \begin{figure}[htp] \centering \includegraphics[width=8cm]{decoder.png} \caption{Weight Prediction and Feature Mapping} \end{figure} At last, the feature mapping function maps the features on the compressed image with the predicted weights back to the uncompressed image to calculate the value of each pixel, as shown in Fig. 4.
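The location projection and weight prediction above can be sketched as follows (a toy numpy sketch; the kernel size, hidden width, and random weights are illustrative placeholders for a trained two-layer predictor):

```python
import numpy as np

# Sketch of Meta-Upscale: project output pixel (i, j) to (i0, j0) on the
# compressed image, then predict that pixel's k x k filter from
# beta = (i - i0, j - j0, 1/r1, 1/r2) with a two-layer network.
rng = np.random.default_rng(0)
r1, r2, k, hidden = 2, 4, 3, 16          # rates, kernel size, hidden width (illustrative)
W1 = rng.standard_normal((hidden, 4))    # untrained placeholder weights
W2 = rng.standard_normal((k * k, hidden))

def predict_kernel(i, j):
    i0, j0 = i // r1, j // r2                                # location projection
    beta = np.array([i - i0, j - j0, 1.0 / r1, 1.0 / r2])
    return (W2 @ np.maximum(W1 @ beta, 0.0)).reshape(k, k)   # weight prediction

# One upscaling pass uses r1 * r2 predicted kernels, one per output offset.
kernels = [predict_kernel(i, j) for i in range(r1) for j in range(r2)]
assert len(kernels) == r1 * r2
assert kernels[0].shape == (k, k)
```

Because the kernels are produced on the fly from $\beta$, only the small predictor's weights need to be stored, independently of how many compression rates are supported.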
We choose the convolution operation as the feature mapping function. In the same way as interpolation in upsampling, the values of unknown pixels are generated from the values of known adjacent pixels. Thanks to meta-learning, we obtain a far more sophisticated means of upsampling than traditional interpolation algorithms. Meanwhile, our feature mapping function varies with the upscaling rate, which enables the autoencoder to perform well regardless of changes in the compression rate. However, our network brings convenience as well as some drawbacks. In our feature mapping layer, the relevant parameters can be formulated as: \begin{equation} \begin{aligned} n_{\text{out}} &=\left[\left\lfloor\frac{n_{\text{in}}+2p-k}{s}\right\rfloor+1\right]\cdot \text{rate} \\ j_{\text{out}} &=j_{\text{in}} \cdot s \cdot \text{rate}^{-1}\\ r_{\text{out}} &=\left[r_{\text{in}}+(k-1) \cdot j_{\text{in}}\right]\cdot \text{rate}^{-1} \\ C_{\text{out}} &=\left[C_{\text{in}}+\left(\frac{k-1}{2}-p\right) \cdot j_{\text{in}}\right]\cdot \text{rate}^{-1} \end{aligned} \end{equation} where $n$ denotes the number of features along a certain dimension, $\text{rate}$ denotes the compression rate of that dimension, $r$ denotes the receptive field size, $j$ denotes the distance between two consecutive features on the axis of the original image, and $C$ denotes the center coordinate of the first feature. The biggest difference between our feature mapping layer and a traditional convolutional layer is that the distance between two consecutive features is greatly reduced by the compression rate. As a consequence, the receptive field size grows only slightly. This is likely the major reason limiting the performance of our network. \section{EXPERIMENT} We jointly train our model with all 4 compression rates together. For different iterations in an epoch, the loss value is calculated and back propagation is performed in turn for each of the 4 compression rates.
For the same dataset in different epochs, the order of the compression rates ought to change to avoid overfitting. This training strategy slows down gradient descent, but ensures universality across compression rates. For a given compression rate, only one fourth of the iterations decrease its loss, while the others may increase it. Fortunately, the decrease of the loss generally offsets the increase, but a small learning rate is required to limit the volatility of the loss during training. In multi-task learning problems, we can use homoscedastic uncertainty as the basis of a weighted loss. Homoscedastic uncertainty depends only on the task, not on the input data. It is not a model output, but a quantity that remains constant for all input data and varies from task to task; it can therefore be described as task-dependent uncertainty. Following \cite{8578879}, a multi-task loss function is derived by maximizing the Gaussian likelihood with homoscedastic uncertainty. Let $\mathbf{f}^{\mathbf{W}}(\mathbf{x})$ be the output of a neural network with weights $\mathbf{W}$ on input $\mathbf{x}$.
\begin{equation} \begin{aligned} \mathcal{L}\left(\mathbf{W}, \sigma_{1}, \sigma_{2}\right) &=-\log p\left(\mathbf{y}_{1}, \mathbf{y}_{2} \mid \mathbf{f}^{\mathbf{W}}(\mathbf{x})\right) \\ & \propto \frac{1}{2 \sigma_{1}^{2}}\left\|\mathbf{y}_{1}-\mathbf{f}^{\mathbf{W}}(\mathbf{x})\right\|^{2}+\frac{1}{2 \sigma_{2}^{2}}\left\|\mathbf{y}_{2}-\mathbf{f}^{\mathbf{W}}(\mathbf{x})\right\|^{2}+\log \sigma_{1}^{2} \sigma_{2}^{2} \\ &=\frac{1}{2 \sigma_{1}^{2}} \mathcal{L}_{1}(\mathbf{W})+\frac{1}{2 \sigma_{2}^{2}} \mathcal{L}_{2}(\mathbf{W})+\log \sigma_{1}^{2} \sigma_{2}^{2} \end{aligned} \end{equation} \section{Conclusion} In summary, our main contributions are twofold: \begin{itemize} \item The CSI feedback with arbitrary compression rates is realized through a single network, thereby reducing the training cost and the network storage cost. The compression rate can be adjusted adaptively according to the channel environment, enhancing the flexibility of the system in dealing with environmental changes. \item We propose an upgraded RDNet. It uses the Dense in Dense structure to enhance its ability to restore images with different degrees of sparsity. Our upgraded RDNet is well qualified for CSI feedback with multiple compression rates.
\end{itemize} \begin{table}[h] \begin{center} \caption{Comparison between different training schemes (height = 16, width = 32)} \label{tab1} \begin{tabular}{|c|r|r|r|r|} \hline \multirow{2}{*}{NMSE at}& \multicolumn{4}{c|}{Trained at compression rate} \\ \cline{2-5} compression rate& $r_1$=2, $r_2$=2& $r_1$=2, $r_2$=4&$r_1$=4, $r_2$=2&$r_1$=4, $r_2$=4 \\ \hline $r_1$=2, $r_2$=2&-24.65&-22.26&-22.13&-21.17\\ \hline $r_1$=2, $r_2$=4&-11.24&-15.01&-14.81&-14.79\\ \hline $r_1$=4, $r_2$=2&-8.34&-13.53&-14.64&-12.81\\ \hline $r_1$=4, $r_2$=4&-0.73&-6.59&-8.23&-10.29\\ \hline \end{tabular} \end{center} \end{table} \begin{table}[h] \begin{center} \caption{NMSE comparison between different networks} \label{tab2} \begin{tabular}{| c |r|r|r|} \hline \multirow{2}{*}{NMSE}& \multicolumn{3}{c|}{Network} \\ \cline{2-4} & CSINet&CRNet&RDNet\\ \hline $r_1$=2, $r_2$=2&-17.36&-26.99&-21.17\\ \hline $r_1$=2, $r_2$=4&-12.70&-16.01&-14.79\\ \hline $r_1$=4, $r_2$=2&-12.70&-16.01&-12.81\\ \hline $r_1$=4, $r_2$=4&-8.65&-11.35&-10.29\\ \hline \end{tabular} \end{center} \end{table} \begin{table}[h] \begin{center} \caption{Training time and parameter storage of different networks} \label{tab3} \begin{tabular}{| c |c|c|c|} \hline & CSINet&CRNet&RDNet\\ \hline training time (h)&31.30&39.72&7.19\\ \hline parameter storage (MB)&4.72&6.10&1.15\\ \hline \end{tabular} \end{center} \end{table} \bibliographystyle{IEEEtran} \section{Introduction} The enhancement of wireless connectivity in the last decades has radically changed the way humans perceive and interact with their surroundings. However, the interfacing network infrastructure has mostly been confined to rooftops and distant serving sites. Recently, the idea of improving wireless networks by means of relays has been revived through the concept of low-cost smart mirrors.
As evidence, today's scientific literature is full of appellations such as intelligent reflecting surfaces (IRS), large intelligent surfaces (LIS), reconfigurable intelligent surfaces (RIS), and passive relaying arrays (PRA) \cite{9384499,9149203,8746155,9145334}. In particular, the RIS has been deemed a promising technology for future wireless communication networks, due to its ability to adjust the channel environment \cite{1}. Practical RISs consist of a large number of discrete programmable sub-wavelength elements, \textit{i.e.}, unit cells or meta-atoms \cite{RIS}, \cite{9343768}. For instance, a properly designed unit cell phase distribution across the surface enables the RIS to alter the direction of the wavefront of the reflected wave, thereby realizing the generalized Snell's law \cite{3}. RISs composed of discrete elements make the wireless environment controllable and programmable, and thus bring unprecedented new opportunities to enhance the performance of wireless communication systems \cite{9221372,9495362,9416239}. The channel along the path from the base station (BS) to the RIS and then to the user is modeled as the cascade of the BS-to-RIS channel, the diagonal phase shift matrix of the RIS, and the RIS-to-user channel \cite{2,via,9464314}. However, the discrete elements of the RIS result in discrete phase shifts, which may influence the desired reflection properties such as the reflection angle, the anomalous reflection, the path loss, and the interference intensity of the scattered waves \cite{be1}, \cite{8466374}. Thus, RISs composed of continuous elements have attracted much research attention recently due to their benefits in realizing more sophisticated anomalous reflection and increasing the achievable rate \cite{9424177,9110889,passive}.
Existing works on the path loss of continuous RIS-aided communication systems can be classified into two categories: the antenna theory-based model and the electromagnetic theory-based model. For the antenna theory-based model, each RIS element is equivalent to a small antenna with a specific power radiation pattern, and the Friis Transmission Equation is used to calculate the path loss \cite{3}. In \cite{3}, the authors prove that the received power at the target receiver grows quadratically with the number of RIS elements in the far field. For the electromagnetic theory-based model, a boundary condition problem is formulated by considering the RIS as a holographic surface, on which the reflection coefficient at each point can be designed, and the scattered fields can be modeled using electromagnetic theorems \cite{5,7,France}. The main advantage of the electromagnetic theory-based model is that it is derived from the Maxwell equations and underlines the electromagnetic nature of the RIS more clearly \cite{9247315}, \cite{9120479}. However, in most literature on the continuous RIS, the transmit antenna and the receive antenna are both considered to be in the far field, and the RIS is designed to convert the incident planar wave into a reflected planar wave with an arbitrary angle \cite{9424177}, \cite{9594786}. For near-field planar arrays, the design of the RIS in [17] and [25] is still feasible because planar waves are still capable of concentrating power on the receiver. However, when the receiver is equipped with a uniform linear array (ULA) or a single antenna and stays in the near field of the RIS, the reflected planar wave cannot converge on the receiver, which may cause serious energy leakage and lead to poor channel capacity between the transmitter and the receiver.
In order to improve the focusing property, the RIS should reflect cylindrical waves or spherical waves, where the axis of the cylinder is the focal line or the centre of the sphere is the focal point. Nevertheless, the price paid is that the position of the focal line or focal point should be known before the near-field RIS design. Therefore, the location of the receiver needs to be sensed in the first place. To the best of the authors' knowledge, there is no previous work exploring how to focus energy on the ULA or the single antenna and how to sense the location of the receiver in the near field of the continuous-aperture RIS. In this paper, we propose a continuous RIS scheme to convert planar waves into cylindrical waves or spherical waves such that the energy is concentrated on the ULA or the single antenna within the near-field region. We derive the analytical expression of the reflection coefficient of the RIS to radiate cylindrical or spherical waves. With cylindrical or spherical wave radiation, the power received by the receiver is a function highly related to its location. At the same time, we propose a method to sense the location and the attitude of the receiver based on the analytical expression of the reflection coefficient. Simulation results demonstrate that the proposed scheme can reduce energy leakage and thus enlarge the channel capacity compared to the conventional scheme. Moreover, the location of the receiver can be accurately sensed by the proposed method. The rest of this paper is organized as follows. Section \uppercase\expandafter{\romannumeral2} presents the system model and the problem formulation. Section \uppercase\expandafter{\romannumeral3} derives the reflected fields via the induction theorem. Section \uppercase\expandafter{\romannumeral4} describes the reflection coefficient design criterion. Section \uppercase\expandafter{\romannumeral5} presents the methods to sense the location and attitude of the receiver.
Section \uppercase\expandafter{\romannumeral6} provides the numerical simulation results, and Section \uppercase\expandafter{\romannumeral7} draws the conclusions. Notations: Boldface denotes a vector or matrix; $j$ denotes the imaginary unit; $(\cdot)^H$, $(\cdot)^T$, and $(\cdot)^*$ represent the Hermitian, transpose, and conjugate, respectively; $\circ$ represents the Hadamard product operator; $\left|\mathbf{a}\right|$ denotes the vector composed of the moduli of the elements of the complex vector $\mathbf{a}$; $\left\Vert\mathbf{a}\right\Vert$ denotes the 2-norm of the vector $\mathbf{a}$. \section{System Model} We consider the RIS as a rectangular plate with length $a$ in the $y$-axis direction and length $b$ in the $x$-axis direction, located in the horizontal plane. Suppose the wave number is $k=2\pi/\lambda$, where $\lambda$ is the wavelength. We consider a continuous-aperture RIS instead of a discrete-aperture RIS for its benefits in achieving more sophisticated anomalous reflection and increasing the transmission rate \cite{9424177}, \cite{9110889}. We assume that the RIS can realize a continuous reflection with coefficient function $\Gamma(x, y) = \tau(x,y) e^{j\beta(x,y)}$, where $\tau(x,y)$ is the amplitude coefficient and $\beta(x, y)$ is the phase shift at the point $(x, y, 0)$ on the RIS. A point source in the far field radiates a linearly polarized electromagnetic wave. We assume that the curvature of the incident wave front over the dimensions of the RIS can be neglected. Suppose the incident wave is parallel to the $yz$ plane, and the receiver is in the radiating near-field (Fresnel) region. Then the reactive fields are negligible compared with the radiating fields. Let $d$ denote the distance between the center of the antenna and the center of the RIS. Then the radiating near-field region (RNFR) is given by \cite{soft}: \begin{align} 0.62 \sqrt{\frac{(a^2+b^2)^{3/2}}{\lambda}}<d<\frac{2 (a^2+b^2)}{\lambda}. 
\end{align} The upper bound of the region, $\frac{2 (a^2+b^2)}{\lambda}$, is called the Rayleigh distance. Note that depending on the values of $a$, $b$, and $\lambda$, the RNFR may or may not exist.\footnote{The proposed scheme can be easily extended to the case where the source is in the near field while the receiver is in the far field, or where both terminals are in the near field.} \subsection{Channel Model} \subsubsection{When The Receiver Is Equipped With ULA} Suppose the receiver is equipped with a ULA of $M$ antennas, and the length of the ULA is $L$. As one of the first works studying the electromagnetic model, we assume there is only the line-of-sight (LOS) path from the source to the RIS and from the RIS to the destination.\footnote{This is also a typical scenario in the case of mmWave transmission.} Since the transmit antenna is in the far field, it can be regarded as a point source. The received signal can be written as \cite{France}, \cite{next} \begin{equation} \mathbf{y}=\mathbf{h} s+\mathbf{n}, \end{equation} where $s$ is the transmitted signal, $\mathbf{n}$ is the additive noise, and $\mathbf{h}$ is the overall channel from the transmitter to the RIS and from the RIS to the receiver. According to the Huygens–Fresnel principle, every point on the RIS is a new source of spherical wavelets, and the secondary wavelets radiating from different points mutually interfere. The sum of these spherical wavelets generates the wavefront propagating from the RIS to the receiver. The energy of the wave emitted by a point source falls off as the inverse square of the distance travelled, and thus the amplitude falls off as the inverse of the distance.
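The near-field bounds above can be evaluated for concrete dimensions (the RIS size and wavelength below are our own illustrative assumptions):

```python
import math

# Evaluate the radiating near-field (Fresnel) region bounds for an
# illustrative RIS aperture and carrier wavelength.
a = b = 0.5            # RIS side lengths (m), illustrative
lam = 0.01             # wavelength (m), i.e., a 30 GHz carrier
D2 = a**2 + b**2       # squared diagonal of the aperture

lower = 0.62 * math.sqrt(D2**1.5 / lam)   # inner edge of the RNFR
rayleigh = 2 * D2 / lam                   # Rayleigh distance (outer edge)

print(round(lower, 2), round(rayleigh, 1))   # 3.69 100.0
```

For this aperture, a receiver a few metres away is well inside the RNFR, which is exactly the regime where a reflected planar wave fails to converge on it.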
The complex amplitude of the electric field intensity at the point $(x,y,0)$ is given by \begin{align} E(x,y)&=\frac{c e^{j k (l+y \sin(\theta_{\text{in}}))}}{l+y\sin(\theta_{\text{in}})} \approx\frac{c e^{j k (l+y \sin(\theta_{\text{in}}))}}{l} , \end{align} where $\theta_{\text{in}}$ is the incident angle, $c$ represents the magnitude of the disturbance of the electric field at the far-field point source, $l$ denotes the distance between the source and the center of the RIS, and the approximation holds because $l\gg y$. The energy flux density at $(0,0,0)$ can be calculated from electromagnetic theory or the Friis Transmission Equation \cite{3} as \begin{equation} D_0=\frac{\Vert \mathbf{E}(0,0) \Vert^2}{2\eta}=\frac{c^2}{2\eta l^2} =\frac{P_t G_t}{4\pi l^2}, \end{equation} where $P_t$ is the transmit power and $G_t$ is the transmit antenna gain. Thus, we derive the following property: \begin{equation} c^2=\frac{P_t G_t \eta}{2\pi}. \end{equation} Consider $(x,y,0)$, $x\in (-0.5b,0.5b)$, $y\in (-0.5a,0.5a)$ as new sources of spherical wavelets; the contribution of $(x,y,0)$ to the complex amplitude of the reflected wave at the $m$th antenna can be derived from the Fresnel–Kirchhoff diffraction formula as \begin{align} E_r(x,y,m)=&\frac{c}{j\lambda} \frac{ e^{j k (l+y \sin(\theta_{\text{in}}))}}{l} \frac{ e^{j k d(x,y,m)}}{d(x,y,m)} \nonumber \\ &\times \frac{\cos(\theta_{\text{in}})+\cos(\theta_{\text{out}}(x,y,m))}{2}, \end{align} where $\theta_{\text{out}}(x,y,m)$ denotes the angle between $\mathbf{e}_z$ and the vector from $(x,y,0)$ to the $m$th receiver antenna, and $d(x,y,m)$ denotes the distance from $(x,y,0)$ to the $m$th receiver antenna. Define $\mathbf{E}_r(x,y)=[E_r(x,y,1),\cdots,E_r(x,y,M)]^T$.
We rewrite (6) as \begin{align} \mathbf{E}_r(x,y)&=c \mathbf{q}(x,y) \circ \mathbf{b}(x,y)\\ \mathbf{q}(x,y)&=\frac{\sqrt{M}}{2j l \lambda} \left[\frac{\cos(\theta_{\text{in}})+\cos(\theta_{\text{out}}(x,y,1))}{d(x,y,1)} \right.\nonumber\\ &\hspace{1.3cm}\left.\cdots,\frac{\cos(\theta_{\text{in}})+\cos(\theta_{\text{out}}(x,y,M))}{d(x,y,M)} \right]^T \\ \mathbf{b}(x,y)&=\frac{1}{\sqrt{M}}[e^{jk(l+y\sin(\theta_{\text{in}})+d(x,y,1))},\nonumber\\ &\hspace{1.3cm}\left.\cdots,e^{jk(l+y\sin(\theta_{\text{in}})+d(x,y,M))}\right]^T, \end{align} where $\mathbf{q}(x,y) \in \mathbb{C}^M$ denotes the scalar multiplication of the path gain from the transmit antenna to $(x,y,0)$ with the path gain from $(x,y,0)$ to each antenna of the receiver, whereas $\mathbf{b}(x,y) \in \mathbb{C}^M$ represents the steering vector from $(x,y,0)$ to the receiver. Suppose the power reflected by $(x,y,0)$ and received by the $m$th antenna forms the vector $\mathbf{p}_r(x,y) \in \mathbb{R}^M$. The channel gain $\mathbf{g}(x,y) \in \mathbb{C}^M$ can be derived as \begin{align} \mathbf{g}(x,y)&=\sqrt{\frac{\mathbf{p}_r(x,y)}{P_t}} =\sqrt{\frac{\left|\mathbf{E}_r(x,y)\right|^2 G_r \lambda^2}{2\eta P_t 4\pi }}\nonumber\\ &=\sqrt{\frac{c^2 G_r \lambda^2 \left|\mathbf{q}(x,y)\right|^2 \circ \left|\mathbf{b}(x,y)\right|^2 }{8\eta \pi P_t}}\nonumber\\ &\overset{(a)}{=}\sqrt{\frac{G_t G_r \lambda^2 \left|\mathbf{q}(x,y)\right|^2 }{16\eta \pi^2 M}} =\frac{\lambda \left|\mathbf{q}(x,y)\right|}{4 \pi}\sqrt{\frac{G_t G_r }{\eta M}}, \end{align} where $\left|\cdot\right|^2$, $\sqrt{\cdot}$, and $(\cdot)^2$ are element-wise operations on vectors, (a) holds because $\left|\mathbf{b}(x,y)\right|^2=\frac{1}{M}[1,\cdots,1]^T$, and $G_r$ is the antenna gain of each receiver antenna.
Moreover, $\mathbf{h}$ is computed by merging the shift of phase and amplitude on the RIS, the channel gain, and the steering vector together as \begin{align} \mathbf{h}=\iint_{S} \tau(x,y) e^{j\beta(x,y)}\mathbf{g}(x,y) \circ \mathbf{b}(x,y)dxdy, \end{align} where $S$ denotes the reflecting surface of the RIS. The goal is to design the reflection coefficients $\tau(x,y)$ and $\beta(x,y)$ that maximize $\|\mathbf{h}\|^2$ such that the received signal has the largest power \cite{article}. \subsubsection{When The Receiver Is Equipped With a Single Antenna} Since both $\mathbf{g}(x,y)$ and $\mathbf{b}(x,y)$ are only relevant to the coordinates of each antenna and are irrelevant to the arrangement of the antenna array, equations (2)-(11) also hold for any other kind of multi-antenna receiver and for the single-antenna receiver. For the single-antenna receiver, we have $M=1$, and $\mathbf{h}$, $\mathbf{g}(x,y)$, and $\mathbf{b}(x,y)$ all degenerate into complex scalars. \subsection{The Received Power} Denote the electric field at the $m$th antenna caused by the reflected wave as $\mathbf{E}_{\text{out},m}$. Since the value of the energy flux density at the $m$th antenna is $D_1=\frac{\Vert \mathbf{E}_{\text{out},m} \Vert^2}{2\eta}$, the received signal power $P$ can be derived by the Friis Transmission Equation \cite{3} as \begin{align} P=\Vert \mathbf{h} s\Vert^2=\sum_{m=1}^M \frac{\Vert \mathbf{E}_{\text{out},m} \Vert^2}{2\eta} \frac{\lambda^2 G_r}{4\pi}, \end{align} where $G_r$ is the antenna gain. This identity confirms the compatibility of the channel model and the electromagnetic model. \section{Reflected Fields of the RIS} We assume the field configuration of the incident wave is transverse magnetic with respect to $\mathbf{e}_x$, \textit{i.e.}, the E-field is parallel to $\mathbf{e}_x$ while the H-field lies in the plane spanned by $\mathbf{e}_y$ and $\mathbf{e}_z$.
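As a rough numerical illustration of the channel model above (not the paper's simulation: the geometry, wavelength, and the focusing profile $\beta(x,y)=-k(y\sin\theta_{\text{in}}+d(x,y))$ are our own illustrative assumptions), the integral defining $\mathbf{h}$ can be discretized for a single near-field antenna, comparing a planar-reflection phase profile with a focusing one:

```python
import numpy as np

# Coarse discretization of h = integral tau * e^{j beta} * g * b dx dy
# for a single receive antenna (M = 1, tau = 1), comparing two phase
# profiles beta(x, y). All dimensions are illustrative assumptions.
lam = 0.01                      # wavelength (m)
k = 2 * np.pi / lam
a = b = 0.5                     # RIS side lengths (m)
l = 100.0                       # far-field source distance (m)
theta_in = np.deg2rad(30)
rx = np.array([0.0, 0.0, 5.0])  # receiver inside the radiating near field

n = 200
xs = np.linspace(-b / 2, b / 2, n)
ys = np.linspace(-a / 2, a / 2, n)
X, Y = np.meshgrid(xs, ys)
d = np.sqrt((X - rx[0])**2 + (Y - rx[1])**2 + rx[2]**2)
cos_out = rx[2] / d                                   # cos(theta_out) per point
gain = (np.cos(theta_in) + cos_out) / (2 * l * d)     # amplitude factor of E_r
phase_in = k * (l + Y * np.sin(theta_in) + d)         # propagation phase per point

def h_mag(beta):
    # |h| for the given RIS phase profile beta(x, y)
    return abs(np.sum(gain * np.exp(1j * (phase_in + beta))) * (a * b / n**2))

beta_focus = -k * (Y * np.sin(theta_in) + d)          # cancels the spherical phase
beta_planar = -k * Y * np.sin(theta_in)               # specular planar reflection
print(h_mag(beta_focus) > h_mag(beta_planar))         # True
```

By the triangle inequality the focusing profile, which makes the total phase constant over the aperture, upper-bounds every other profile, which is the energy-leakage argument made in the introduction.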
Since the source is in the far field, the impinging wave is approximated as a plane wave with the electric and magnetic field distributions \begin{align} \mathbf{E}_{\text{in}} &=E_{0} e^{-j k\left(\sin \left(\theta_{\text{in}}\right) y-\cos \left(\theta_{\text{in}}\right) z\right)} \boldsymbol{e}_{x}, \\ \mathbf{H}_{\text{in}} &=-\frac{E_{0}}{\eta}\left(\cos \left(\theta_{\text{in}}\right) \boldsymbol{e}_{y}+\sin \left(\theta_{\text{in}}\right) \boldsymbol{e}_{z}\right) e^{-j k\left(\sin \left(\theta_{\text{in}}\right) y-\cos \left(\theta_{\text{in}}\right) z\right)}, \end{align} where $\eta$ is the characteristic impedance of the medium. According to reflection theory \cite{textbook}, $\mathbf{E}_{\text{out}}$ and $\mathbf{H}_{\text{out}}$ satisfy the relations \begin{align} \left.\mathbf{E}_{\text{out}}\right|_{z=0}&=\left.\Gamma(x, y) \mathbf{E}_{\text{in}}\right|_{z=0} \\ \mathbf{e}_{z} \times\left.\mathbf{H}_{\text{out}}\right|_{z=0}&=-\Gamma(x, y) \mathbf{e}_{z} \times\left.\mathbf{H}_{\text{in}}\right|_{z=0}. \end{align} Since the RIS is a large reflecting surface, its thickness is negligible compared to its length and width. We suppose that the RIS can be replaced by an imaginary surface, and the transmitted fields below the imaginary surface are denoted by $\mathbf{E}_{\text{tr}}$ and $\mathbf{H}_{\text{tr}}$, respectively. According to the induction theorem \cite{textbook}, $\mathbf{E}_{\text{in}}$ and $\mathbf{H}_{\text{in}}$ above the imaginary surface can be removed.
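As a quick numerical check of the incident fields above, the sketch below samples them on the RIS plane $z=0$ and verifies the plane-wave relation $\Vert\mathbf{H}_{\text{in}}\Vert=\Vert\mathbf{E}_{\text{in}}\Vert/\eta$; the amplitude, wavelength, and incident angle are assumed toy values.

```python
import numpy as np

# Sample the incident plane-wave fields on z = 0 and check |H| = |E| / eta.
# E0, eta, wavelength, and theta_in are assumed toy values.
E0, eta = 1.0, 377.0
k = 2 * np.pi / 0.1
theta_in = np.pi / 6

y = np.linspace(-1.0, 1.0, 101)
phase = np.exp(-1j * k * np.sin(theta_in) * y)     # propagation phase at z = 0
E_x = E0 * phase                                    # E-field component along e_x
H_y = -(E0 / eta) * np.cos(theta_in) * phase        # H-field components in the y-z plane
H_z = -(E0 / eta) * np.sin(theta_in) * phase
H_mag = np.sqrt(np.abs(H_y) ** 2 + np.abs(H_z) ** 2)
```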
Then, an equivalent electric current density $\mathbf{J}_{e}$ and a magnetic current density $\mathbf{M}_{e}$ must be imposed on the imaginary surface to satisfy the boundary conditions \cite{textbook}, which can be expressed as \begin{align} \mathbf{J}_{e} &=\mathbf{e}_{z} \times\left(\left.\mathbf{H}_{\text{out}}\right|_{z=0}-\left.\mathbf{H}_{\text{tr}}\right|_{z=0}\right), \\ \mathbf{M}_{e} &=-\mathbf{e}_{z} \times\left(\left.\mathbf{E}_{\text{out}}\right|_{z=0}-\left.\mathbf{E}_{\text{tr}}\right|_{z=0}\right). \end{align} \begin{figure*}[t] \begin{minipage}[t]{0.32\linewidth} \subfigure[planar wave]{ \includegraphics[width=6cm]{pingmianbo.png}} \end{minipage} \begin{minipage}[t]{0.32\linewidth} \subfigure[cylindrical wave]{ \includegraphics[width=6cm]{zhumianbo.png}} \end{minipage} \begin{minipage}[t]{0.32\linewidth} \subfigure[spherical wave]{ \includegraphics[width=7cm]{dantianxian.png}} \end{minipage} \caption{Comparison of the planar, cylindrical, and spherical reflected waves.} \end{figure*} Conventionally, the RIS can be assumed to be a perfect magnetic conducting (PMC) surface \cite{6}, for which $\mathbf{E}_{\text{tr}}$, $\mathbf{H}_{\text{tr}}$, and $\mathbf{M}_{e}$ become zero. Hence, only $\mathbf{J}_{e}$ contributes to the scattered fields. Finally, the image theory is applied to remove the PMC surface and to obtain an unbounded equivalent environment \cite{textbook}. The final equivalent electric current density $\mathbf{J}_{f}$ is expressed as \begin{align} &\mathbf{J}_{f}=2 \mathbf{J}_{e}=-2 \tau e^{j\beta(x, y)} \mathbf{e}_{z} \times\left.\mathbf{H}_{\text{in}}\right|_{z=0}\nonumber\\ &=-2\frac{E_0}{\eta} \tau e^{j\beta(x, y)} \cos \left(\theta_{\text{in}}\right) e^{-j k \sin \left(\theta_{\text{in}}\right) y} \mathbf{e}_{x}=J_{x} \mathbf{e}_{x}.
\end{align} With $\mathbf{J}_{f}$, we can compute the vector potential $\mathbf{A}$ \cite{textbook} at an arbitrary observation point $(x^{\prime} , y^{\prime} , z^{\prime})$ as \begin{align} \mathbf{A}=\frac{\mu}{4 \pi} \iint_{\mathcal{S}} \mathbf{J}_{f} \frac{e^{-j k R}}{R} d x d y, \end{align} where $R=\sqrt{\left(x^{\prime}-x\right)^{2}+\left(y^{\prime}-y\right)^{2}+\left(z^{\prime}\right)^{2}}$ is the distance from the point $(x, y, 0)$ to this observation point. Then, $\mathbf{E}_{\text{out}}$ can be derived as \begin{align} \mathbf{E}_{\text{out}}=\frac{1}{j k \sqrt{\mu \varepsilon}} \nabla \times(\nabla \times \mathbf{A})=\frac{1}{j k \sqrt{\mu \varepsilon}}\left(\nabla(\nabla \cdot \mathbf{A})-\nabla^2\mathbf{A}\right). \end{align} Note that (12) is only applicable to the scattered fields above the $xy$ plane. \section{Reflection Coefficient Design Criterion of the RIS} \subsection{Conventional Reflection Coefficient Design For Planar Wave} Conventionally, the RIS is designed to reflect the incident plane wave as a planar wave with a common reflection angle $\theta_{\text{out}}$, as shown in Fig. 1(a), where the following field distribution holds: \begin{align} \mathbf{E}_{\text{out}} =E_{\text{out}} e^{-j k\left(\sin \left(\theta_{\text{out}}\right) y+\cos \left(\theta_{\text{out}}\right) z\right)} \boldsymbol{e}_{x}. \end{align} Suppose the center of the receiver is located at $(0,f_y,f_z)$. In order to concentrate the reflected signal power on the receiver, one should design $\theta_{\text{out}}=\arctan(f_y/f_z)$. With $\mathbf{E}_{\text{out}}$, $\beta(x,y)_{\text{planar}}$ can be derived from the generalized Snell's law \cite{snell} as \begin{align} \beta(x,y)_{\text{planar}}&=\angle\left(\frac{\left.\mathbf{E}_{\text{out}}\right|_{z=0}\cdot \boldsymbol{e}_{x}}{\left.\mathbf{E}_{\text{in}}\right|_{z=0}\cdot \boldsymbol{e}_{x}}\right)=ky(\sin \left(\theta_{\text{in}}\right)-\sin \left(\theta_{\text{out}}\right)).
\end{align} To provide a vivid picture of how the reflection coefficient varies with the coordinates on the RIS, we present $\beta(x,y)_{\text{planar}}$ in Fig.~2. Since the amplitudes of the incident and the reflected wave are identical at each point of the RIS, the amplitude of the reflection coefficient is equal to $1$, \textit{i.e.}, \begin{align} \tau(x,y)_{\text{planar}}=1. \end{align} Note that conventional communication designs focus on the beamforming direction and pay little attention to the propagation of the electromagnetic wave itself \cite{9398864}. However, the planar wave cannot focus the incident energy on a ULA or a single antenna in the near field, as will be discussed from EM theory in the next subsection. \subsection{Reflection Coefficient Design For the ULA Receiver} Consider a ULA receiver located in the radiating near field of the RIS, which may be caused by the large scale of the RIS, the short distance to the receiver, or the high frequency of the electromagnetic waves according to (1). Since the LOS paths from different points of the RIS to the receiver cannot be regarded as parallel, it is inappropriate to adopt the conventional identical-reflection-angle scheme. In order to create a focal line at the position of the ULA receiver, we need to design a more sophisticated reflection coefficient scheme that realizes variable reflection angles. Suppose the ULA receiver is located at $(0,f_y,f_z)$ and is perpendicular to the $yz$ plane. In order to focus the incident energy, the optimal scattered waves should converge on the focal line. To achieve constructive interference, all the waves should share the same phase on the focal line. Thus, when we consider the time reverse of the optimal waves (the backward propagation of the optimal waves, a standard operation in \cite{PhysRevX.6.041008}), they can be regarded as radiated by the same source with current intensity $I_1$ on the focal line.
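For later comparison with the near-field designs, the conventional planar profile $\beta(x,y)_{\text{planar}}$ and its unit amplitude can be sketched numerically; the wavelength, incident angle, and receiver position below are assumed toy values.

```python
import numpy as np

# Sketch of the conventional planar-wave design: a phase profile that is linear in y,
# beta_planar(y) = k * y * (sin(theta_in) - sin(theta_out)), with unit amplitude.
# Wavelength, incident angle, and receiver position are assumed toy values.
wavelength = 0.1
k = 2 * np.pi / wavelength
theta_in = np.pi / 6
f_y = f_z = 80 * wavelength
theta_out = np.arctan(f_y / f_z)        # aim the reflected planar wave at the receiver

y = np.linspace(-10 * wavelength, 10 * wavelength, 201)
beta_planar = k * y * (np.sin(theta_in) - np.sin(theta_out))
tau_planar = np.ones_like(y)            # passive lossless, unit amplitude everywhere
```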
Suppose the radiation distribution is invariant along $x$; the source of radiation can then be seen as an infinitely long line source parallel to $\boldsymbol{e}_{x}$ located at $(0,f_y,f_z)$. Since the time reverse of the optimal scattered waves is axially symmetric about the focal line, the optimal scattered waves share the same property, \textit{i.e.}, the optimal scattered waves for the ULA should have cylindrical wavefronts as shown in Fig. 1(b), from which the scattered electric field $\mathbf{E}_{\text{out}}$ and magnetic field $\mathbf{H}_{\text{out}}$ can be expressed as \cite{textbook} \begin{align} \mathbf{E}_{\text{out}}=& \left(\frac{-I_{1} k \eta}{4} H_{0}^{(2)}\left(k R_c\right)\right)^* \boldsymbol{e}_{x}, \\ \mathbf{H}_{\text{out}}=& \left(\frac{-j I_{1} k}{4} H_{1}^{(2)}\left(k R_c\right)\right)^* \left(\frac{(z-f_z) \boldsymbol{e}_{y}}{R_c}-\frac{(y-f_y)\boldsymbol{e}_{z}}{R_c}\right), \end{align} where $R_c=\sqrt{(y-f_y)^{2}+(z-f_z)^{2}}$, $I_{1} \in \mathbb{C}$ denotes the current intensity of the line source, and $H_{n}^{(2)}$ refers to the Hankel function of the second kind of order $n$. Without loss of generality, $\angle I_1$ is assumed to be $0$. With $\mathbf{E}_{\text{out}}$, $\beta(x,y)_{\text{cylind}}$ can be derived as \begin{align} \beta(x,y)_{\text{cylind}}&=\angle\left(\frac{\left.\mathbf{E}_{\text{out}}\right|_{z=0}\cdot \boldsymbol{e}_{x}}{\left.\mathbf{E}_{\text{in}}\right|_{z=0}\cdot \boldsymbol{e}_{x}}\right) =\angle\left(\frac{(\frac{-I_{1} k \eta}{4} H_{0}^{(2)}\left(k R_c\right))^*}{E_0 e^{-j k\sin \left(\theta_{\text{in}}\right) y}}\right) \nonumber\\ &=-\angle\left(-H_{0}^{(2)}\left(k R_c\right)\right)+k\sin \left(\theta_{\text{in}}\right)y.
\end{align} To provide a vivid picture of how the reflection coefficient varies with the coordinates on the RIS, we calculate $\beta(x,y)_{\text{cylind}}$ for $a=b=20\lambda$ and an incident angle of $\pi/6$ as an example in Fig.~3, where the focal line is parallel to $\mathbf{e}_x$ such that $\beta(x,y)_{\text{cylind}}$ is independent of $x$ and depends only on $y$. The tangent slope of $\beta(x,y)_{\text{cylind}}$ changes from positive to negative as the reflection angle decreases from $y=-10\lambda$ to $y=10\lambda$. Where the tangent slope of $\beta(x,y)_{\text{cylind}}$ is zero, the reflection angle equals the incident angle, as in conventional mirror reflection. As $y$ decreases, $\beta(x,y)_{\text{cylind}}$ becomes increasingly close to a linear function, since the reflection angle varies less noticeably. \begin{figure}[t] \centering \centerline{\includegraphics[width=8.7cm]{jvchitu.png}} \caption{Phase shift on the RIS to convert the incident planar wave into a reflected planar wave. In this case, $\beta(x,y)$ is independent of $x$.} \end{figure} When the reflected waves are cylindrical waves, the amplitude of the scattered radiation at each point of the RIS depends on $R_c$ and $|I_{1}|$. Given the location and attitude of the focal line, $R_c$ is known, and $|I_{1}|$ can be calculated from $E_0$. Under the assumption that the RIS is passive and lossless, power conservation determines the relation between $E_0$ and $|I_{1}|$. The power of the incident wave on the RIS and the power of the reflected wave from the RIS are \begin{align} P_{\text {incident}} &=\frac{E_{0}^{2}}{2 \eta} a b \cos(\theta_{\text{in}}), \\ P_{\text {reflected}} &=\frac{\tan ^{-1}((f_y + 0.5a)/ f_z)-\tan ^{-1}((f_y - 0.5a)/ f_z)}{2\pi}\nonumber\\ &\hspace{1cm}\times \frac{|I_{1}|^{2} k \eta b}{16 \pi}, \end{align} respectively.
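These two power expressions can be balanced numerically. A minimal sketch, with assumed RIS dimensions and focal-line position, solves $P_{\text{incident}}=P_{\text{reflected}}$ for $|I_1|$:

```python
import numpy as np

# Solve P_incident = P_reflected for |I_1|; all dimensions are assumed toy values.
wavelength = 0.1
k = 2 * np.pi / wavelength
eta, E0 = 377.0, 1.0
a = b = 20 * wavelength                 # RIS side lengths
theta_in = np.pi / 6
f_y = f_z = 80 * wavelength             # focal-line position

P_inc = E0 ** 2 / (2 * eta) * a * b * np.cos(theta_in)
# Angular fraction of the cylindrical wave intercepted over the RIS extent
arc = (np.arctan((f_y + 0.5 * a) / f_z) - np.arctan((f_y - 0.5 * a) / f_z)) / (2 * np.pi)
# P_ref = arc * |I_1|^2 * k * eta * b / (16 * pi); invert for |I_1|
I1_abs = np.sqrt(P_inc * 16 * np.pi / (arc * k * eta * b))
P_ref = arc * I1_abs ** 2 * k * eta * b / (16 * np.pi)
```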
When $P_{\text {incident}} = P_{\text {reflected}}$, $|I_{1}|$ is determined by $E_0$, and the desired amplitude of the reflection coefficient follows from the definition as \begin{align} \tau(x,y)_{\text{cylind}}=\left|\frac{(\frac{-|I_{1}| k \eta}{4} H_{0}^{(2)}\left(k R_c\right))^*}{E_0 e^{-j k\sin \left(\theta_{\text{in}}\right) y}}\right|. \end{align} \begin{figure}[t] \centering \centerline{\includegraphics[width=8.7cm]{zhumianboxiangweibianhua.png}} \caption{Phase shift on the RIS to convert the planar wave into the cylindrical wave. In this case, $\beta(x,y)$ is independent of $x$.} \end{figure} \begin{figure}[t] \centering \centerline{\includegraphics[width=8.7cm,height=7cm]{nengliangfenbubijiao.png}} \caption{Comparison of the normalized power between the planar wave and the cylindrical wave.} \label{能量分布比较} \end{figure} For the ULA receiver, the power should be concentrated on the focal line, which cannot be achieved by traditional planar waves. To demonstrate the superiority of cylindrical waves over planar waves, we examine the degree of energy leakage, \textit{i.e.}, the energy carried by the reflected waves to undesired areas. We select the observation points on the circular arc $x=0$, $y=d\cos(\theta)$, $z=d\sin(\theta)$, $\theta \in (0\degree, 90\degree)$ and calculate the electric field $\mathbf{E}_{\text{obs}}$ at the observation points. To show how the focusing property changes as the ULA moves from near to far, three groups of $(f_y,f_z)$ are set as $(80\lambda, 80\lambda)$, $(180\lambda, 180\lambda)$, and $(180\lambda, 180\lambda)$, respectively. The distribution of the normalized power $P_{n}=\frac{\Vert \mathbf{E}_{\text{obs}} \Vert^2}{2\eta d^2}$ versus $\theta$ is shown in Fig.~\ref{能量分布比较}.
It is seen that, for the cylindrical wave, the power is maximal at $\theta=45\degree=\theta_{f}$, which is precisely the angular position of the ULA, and the majority of the power is concentrated around the focal line. The degree of power concentration remains almost unchanged when the focal line moves closer to the center of the RIS. For the planar wave, we design the reflection coefficient such that the reflection angle equals $\theta_{f} =45\degree$, which is the optimal reflection angle to concentrate power on the ULA. When $f_y=f_z=80\lambda$, the main lobe width is significantly larger than that of the proposed cylindrical wave and the maximum power is 8.53 dB smaller, which results in serious energy leakage. As the focal line moves farther from the center of the RIS, the degree of power concentration increases and the power distribution of the planar wave becomes increasingly similar to that of the cylindrical wave. \subsection{Reflection Coefficient Design For A Single Antenna Receiver} Consider a receiver equipped with a single antenna whose location $(f_x,f_y,f_z)$ is the focal point of the RIS, as shown in Fig. 1(c). In order to focus the incident energy on this focal point, the optimal scattered waves should converge on the focal point. To achieve constructive interference, all the waves should share the same phase at the focal point. Thus, when we consider the time reverse of the optimal waves (the backward propagation of the optimal waves), they can be regarded as being radiated by the same point source with magnitude of radiation $U_1$ at the focal point.
Therefore, the optimal reflected waves for the single antenna should have spherical wavefronts, and the amplitude of the scattered electric field $E_{\text{out}}$ can be expressed as \cite{textbook} \begin{align} E_{\text{out}}= \frac{U_1 e^{jkR_s}}{R_s}, \end{align} where $R_s=\sqrt{(x-f_x)^{2}+(y-f_y)^{2}+(z-f_z)^{2}}$, and $\angle\left(U_1\right)$ can be regarded as $0$ without loss of generality. We note that the above field distribution is accurate if an ideal drain, \textit{i.e.}, the time-reversed equivalent of the point source, is also positioned at the focal point. In practice, the absence of the ideal drain may reduce the overall performance of the designed scheme. With $\mathbf{E}_{\text{out}}$, $\beta(x,y)_{\text{sphere}}$ is derived as \begin{align} \beta(x,y)_{\text{sphere}}=\angle\left(\frac{\left.E_{\text{out}}\right|_{z=0}}{\left.\mathbf{E}_{\text{in}}\right|_{z=0}\cdot \boldsymbol{e}_{x}}\right) =k R_s+k\sin \left(\theta_{\text{in}}\right)y. \end{align} \begin{figure}[t] \centering \centerline{\includegraphics[width=8.7cm,height=7cm]{qiumianboxiangweibianhua.png}} \caption{Phase shift on the RIS to convert the planar wave into the spherical wave.} \label{球面波相位图} \end{figure} To provide a vivid picture of how the reflection coefficient varies with the coordinates on the RIS, we present $\beta(x,y)_{\text{sphere}}$, which is a hyperboloidal function plus a linear function, for $(f_x,f_y,f_z)=(0,0,20\lambda)$ and an incident angle of $\pi/6$ as an example in Fig.~\ref{球面波相位图}. While the phase shift is essential to focus the radiated field on the focal point, the amplitude shift can be used to reduce the sidelobe level. Indeed, a high level of the secondary lobes around the focal point may degrade measurement accuracy in non-contact sensing applications. Meanwhile, high sidelobes may reduce transmission efficiency in wireless power transfer systems, increase the interference with nearby wireless systems, and raise the personnel exposure to radiation hazards.
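A minimal numerical sketch of the spherical focusing profile follows (assumed toy geometry). The phase $kR_s+k\sin(\theta_{\text{in}})y$ cancels the incident phase and pre-compensates the propagation delay to the focus, so every reflected contribution arrives at the focal point in phase, which the final check verifies.

```python
import numpy as np

# Spherical focusing phase over the RIS aperture; geometry is an assumed toy setup.
wavelength = 0.1
k = 2 * np.pi / wavelength
theta_in = np.pi / 6
f_x, f_y, f_z = 0.0, 0.0, 20 * wavelength      # focal point

n = 41
xs = np.linspace(-10 * wavelength, 10 * wavelength, n)
X, Y = np.meshgrid(xs, xs)
R_s = np.sqrt((X - f_x) ** 2 + (Y - f_y) ** 2 + f_z ** 2)
beta_sphere = k * R_s + k * np.sin(theta_in) * Y

# Phase of each reflected contribution on arrival at the focal point:
# incident phase at (x, y, 0) plus beta, minus k * R_s for the propagation to the focus
arrival = -k * np.sin(theta_in) * Y + beta_sphere - k * R_s
```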
In near-field-focused microwave antennas \cite{7912361}, \cite{6948345}, it has been observed that an amplitude shift that gives lower transverse sidelobes also yields higher forelobes and aftlobes along the aperture axis. The amplitude of the scattered radiation at each point of the RIS depends on $R_s$ and $|U_1|$. Given the location of the focal point, $R_s$ is known, and $|U_{1}|$ can be calculated from $E_0$. Under the assumption that the RIS is passive and lossless, power conservation determines the relation between $E_0$ and $|U_{1}|$. The power of the incident wave on the RIS and the power of the reflected wave from the RIS are \begin{align} P_{\text {incident}} &=\left|\frac{1}{2}\left(\mathbf{E}_{\text{in}}\times\mathbf{H}_{\text{in}}^{*}\right)\cdot\mathbf{S}\right|=\frac{E_{0}^{2}}{2 \eta} a b \cos(\theta_{\text{in}}), \\ P_{\text {reflected}} &=\Omega \frac{|U_{1}|^2}{2\eta}, \end{align} where $\mathbf{S}$ denotes the vector area of the RIS surface $S$, and $\Omega$ denotes the solid angle subtended by the RIS at the focal point. When $P_{\text {incident}} = P_{\text {reflected}}$, $|U_{1}|$ is determined by $E_0$, and the desired amplitude of the reflection coefficient follows from the definition as \begin{align} \tau(x,y)_{\text{sphere}}=\left|\frac{|U_{1}|}{R_s E_0 e^{-j k\sin \left(\theta_{\text{in}}\right) y}}\right|. \end{align} Note that the spherical wave applies beyond the single-antenna receiver model. If the ULA receiver is relatively short or is composed of relatively few antennas, the ULA degenerates into a point. In this case, the RIS reflection coefficient design is the same as (31)-(35).
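The power balance above can also be sketched numerically. In the sketch below the geometry is an assumed example with the focal point on the RIS axis, and the solid angle is evaluated with the standard closed form for an on-axis rectangle; neither choice comes from the paper's setup.

```python
import numpy as np

# Solve P_incident = Omega * |U_1|^2 / (2 * eta) for |U_1|.
# RIS size and focal distance are assumed values; Omega uses the standard
# closed form for the solid angle of a rectangle viewed on-axis.
eta, E0 = 377.0, 1.0
wavelength = 0.1
a = b = 20 * wavelength
theta_in = np.pi / 6
f_z = 20 * wavelength                    # focal point on the RIS axis

P_inc = E0 ** 2 / (2 * eta) * a * b * np.cos(theta_in)
Omega = 4 * np.arctan(a * b / (2 * f_z * np.sqrt(a ** 2 + b ** 2 + 4 * f_z ** 2)))
U1_abs = np.sqrt(2 * eta * P_inc / Omega)
P_ref = Omega * U1_abs ** 2 / (2 * eta)
```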
\begin{figure}[t] \centering \centerline{\includegraphics[width=8.7cm,height=7cm]{qiumianbo_d_100.png}} \caption{The normalized power of the spherical wave for the single antenna.} \label{球面波d=100} \end{figure} \begin{figure}[t] \centering \centerline{\includegraphics[width=8.7cm,height=7cm]{zhumianboxiangweitu.png}} \caption{The normalized power of the cylindrical wave for the single antenna.} \label{柱面波球坐标} \end{figure} In order to demonstrate the distribution of the power in the whole space, we take the following example. For the spherical wave, we set the single antenna as the focal point at $d=180\lambda$, $\phi_s =90\degree$, $\theta_s =45\degree$. For the cylindrical wave, since the focusing property is best when the single antenna is on the focal line, we set the focal line at $d=180\lambda$, $\theta_s =45\degree$. Then, we calculate the electric field $\mathbf{E}_{\text{obs}}$ at the observation points for cylindrical waves and spherical waves, respectively. In order to show the superiority of the spherical waves in the case of a single antenna, we select the observation points on the spherical surface $d=180\lambda$, $\phi_s \in (0\degree,180\degree)$, $\theta_s \in (0\degree, 90\degree)$, where $\phi_s$ is the longitude angle and $\theta_s$ is the latitude angle. The distribution of the normalized power $P_{n}=\frac{\Vert \mathbf{E}_{\text{obs}} \Vert^2}{2\eta d^2}$ is shown in Fig.~\ref{球面波d=100} for spherical waves and Fig.~\ref{柱面波球坐标} for cylindrical waves. It is seen that, for the cylindrical wave, the power is maximal on the line $\theta_s = 45\degree$, $\phi_s \in (0\degree,180\degree)$, and the majority of the power is concentrated around $\phi_s \in (80\degree,100\degree)$ on the focal line. Since the size of the RIS is limited, the ability of the cylindrical wave to focus the power on the focal line gradually decreases as $\phi_s$ moves away from $90\degree$.
For the spherical wave, the power is maximal at the point $\theta_s = 45\degree$, $\phi_s =90\degree$, and the majority of the power is concentrated around $\theta_s \in (40\degree,50\degree)$ near the focal point. The power decreases sharply as the distance between the observation point and the focal point increases. Since the power is concentrated in a smaller area than with the cylindrical wave and the maximum power is 5.72 dB larger, the spherical wave outperforms the cylindrical wave in the capability of focusing power on a single antenna. Moreover, we compare the spherical waves and planar waves in Table~\uppercase\expandafter{\romannumeral1}, from which we can see the superiority of the proposed design over the conventional one. \begin{table}[t] \begin{center} \caption{The superiority of spherical waves over planar waves in Normalized Power (dB)} \label{tab2} \begin{tabular}{|c<{\centering}|c<{\centering}|c<{\centering}|c<{\centering}|} \hline d=100$\lambda$&d=180$\lambda$&d=180$\lambda$&d=380$\lambda$\\ \hline 13.25&9.98&8.15&7.24\\ \hline \end{tabular} \end{center} \end{table} \section{Sensing The Location and Attitude} Different from far-field beamforming, which only needs the knowledge of $\mathbf{h}$ or the direction of the receiver, RIS coefficient design for near-field transmission needs the exact location information of the receiver. Hence, near-field transmission belongs to the location-aware technologies \cite{5230781}. We assume that the source in the far field is fixed (typically the BS) and the receiver in the near field is mobile with uncertain location. Thus, the incident angle $\theta_{\text{in}}$ can be assumed to be a known constant, while the location and the attitude of the receiver are unknown. In general, there are three ways to obtain the position information of the receiver.
\begin{enumerate} \item The first way is similar to conventional channel-estimation-based beamforming, that is, the optimal beamforming coefficient is obtained by estimating the channel and then matching $\mathbf{h}$. However, since the RIS here has a continuous aperture, even if we estimate the discrete channel $\mathbf{h}$, we cannot directly obtain the reflection coefficient of the RIS from it. An alternative is to extract the position and attitude of the receiver through formula (11), and then use these two parameters to calculate the optimal coefficient from (27), (30), (32), and (35). This approach is similar to the traditional angular-domain-based beamforming method for massive MIMO \cite{9398864}, \cite{8753608}, \cite{8333702}. However, for a continuous RIS, it is quite difficult to obtain the physical parameters of the receiver by inverse integration of formula (11). \item The second way resorts to sensors to obtain the position and attitude information of the physical object \cite{9705498,9687468}. For example, one can use passive sensors such as cameras to obtain near-field environmental information around the RIS, and then use computer vision technology to capture the position and attitude information of the receiver from the image \cite{9580233,9129762,9512383}. Another advantage of the camera is that it does not need a radio frequency (RF) link for signal processing, which is especially suitable for RIS-assisted transmission scenarios. \item The third way refers to the conventional beam scanning method, which searches for the position of the receiver in space; the receiver then feeds back the time slot with the strongest received power to the transmitter to determine the optimal beam direction. Different from traditional far-field beam scanning, which only searches over the azimuth and elevation angles, near-field scanning has to scan over the beam azimuth, elevation, and distance at the same time.
Thereby, the scanning here is one dimension higher than conventional schemes, which is a reasonable price for exploiting near-field communications. Nevertheless, the advantage of this approach is that the sensing can be completed by the communication link itself without additional equipment. We name this near-field scanning {\it focal scanning}. \end{enumerate} The choice among the above three methods depends on many factors, such as the computing power of the system, the hardware cost, \textit{etc}. In this paper, we will mainly illustrate focal scanning in the near field.\footnote{If the source is in the near field and the receiver is in the far field, we only need to scan the direction of the receiver, and the process degenerates into the conventional algorithm.} \subsection{Sensing For ULA Receiver Antenna} We establish the cylindrical coordinate system whose $z$-axis is aligned with the $x$-axis of the Cartesian coordinate system and whose polar axis is aligned with the $y$-axis of the Cartesian coordinate system. Suppose the ULA receiver is located in the near field of the RIS with a fixed attitude parallel to $\mathbf{e}_x$, with unknown distance $d \in (d_{min},d_{max})$\footnote{$(d_{min},d_{max})$ is the predetermined search range.} and unknown angular position $\theta \in (0\degree,90\degree)$. To determine $d$ and $\theta$, we propose the following approach. For the cylindrical waves, we adjust the coefficient of the RIS such that the projection of the imaginary focal line onto the $yz$ plane scans through the region $d \in (d_{min},d_{max}), \theta \in (0\degree,90\degree)$. During the process, the receiver ULA records the simultaneous change of the received power $P_r$, and the estimate of $(d, \theta)$ can be written as \begin{align} \left(\hat{d}, \hat{\theta}\right)= \arg\max_{(d, \theta)} P_r. \end{align} The scanning method can be simplified as follows.
First, we can fix $d$ at its upper or lower boundary and then traverse $\theta$ to find the lower and upper boundaries of $\theta$ as \begin{align} \theta_{min}&= \min (\arg \max_{(d=d_{min}, \theta)}{P_r},\arg\max_{(d=d_{max}, \theta)}{P_r}) \\ \theta_{max}&= \max (\arg\max_{(d=d_{min}, \theta)}{P_r},\arg\max_{(d=d_{max}, \theta)}{P_r}). \end{align} With the constraint $\theta \in (\theta_{min},\theta_{max})$, the scanning time can be drastically reduced. \begin{figure}[t] \centering \centerline{\includegraphics[width=8.7cm,height=7cm]{jianceweizhi.png}} \caption{The received power versus $\theta$.} \label{检测位置} \end{figure} As an example, we adjust the coefficient of the RIS such that the projection of the focal line onto the $yz$ plane scans through the region $d \in (d_{min},d_{max}), \theta \in (0\degree,90\degree)$. During the process, the receiver ULA records the simultaneous change of the received power $P_r$, as shown in Fig.~\ref{检测位置}, and $(d_0, \theta_0)$ can be estimated by the methods described in Section \uppercase\expandafter{\romannumeral5}. We set $(d_0, \theta_0)=(180\lambda,67\degree)$ and $(d_{min},d_{max})=(80\lambda,180\lambda)$ for the example shown in Fig.~\ref{检测位置}, and we notice that $\arg \max_{(d, \theta)}{P_r}$ is identical to $(d_0, \theta_0)$. The above sensing process is similar to conventional beam scanning, with the difference that we need to sense both the distance and the angle to locate the ULA in the near field. It should be emphasized that the conventional planar wave is incapable of sensing the correct $(d_0, \theta_0)$, as the power is flat around its maximum, as shown in Fig.~\ref{能量分布比较}. \subsection{Sensing for Single Receiver Antenna} Let us use the spherical coordinate system instead of the Cartesian coordinate system.
Suppose a single-antenna receiver is located in the near field of the RIS with unknown location expressed by $d \in (d_{min},d_{max})$, $\phi_s \in (0\degree, 180\degree)$, $\theta_s \in (0\degree, 90\degree)$, where $\phi_s$ is the longitude angle and $\theta_s$ is the latitude angle. Since three independent parameters determine the location, a direct way of location acquisition is to have the receiver feed back the power in a 3-D focal scan as \begin{align} \left(\hat{d}, \hat{\theta}_s, \hat{\phi}_s\right)= \arg\max_{(d, \theta_s, \phi_s)} P_r. \end{align} However, in practical applications, a 3-D search is too complex to meet the real-time requirements of communication systems. To address this issue, we propose a new approach in which the search in a 3-D space is decoupled into a search in a 2-D space plus a search in a 1-D space, as shown in Fig. 9. \begin{figure}[t] \centering \centerline{\includegraphics[width=8.7cm,height=7cm]{tikz.png}} \caption{The proposed location sensing algorithm can be divided into two steps.} \end{figure} First, we estimate $\hat{d}$ and $\hat{\theta}_s$ by utilizing the cylindrical waves similarly to the previous subsection. Since the power is concentrated on the imaginary focal line when cylindrical waves are reflected, the maximum power is received when the single antenna is located on the focal line, based on which $\hat{d}$ and $\hat{\theta}_s$ can be written as \begin{align} \left(\hat{d}, \hat{\theta}_s\right)= \arg\max_{(d, \theta_s)} P_r. \end{align} Second, we utilize spherical waves and adjust the coefficient of the RIS such that the imaginary focal point scans along the focal line. During the process, the receiver antenna records the simultaneous change of the received power $P_r$.
Since the power is concentrated on the imaginary focal point when spherical waves are reflected, the maximum power is received when the single antenna is located at the focal point, and the estimate of $\phi_s$ can be written as \begin{align} \hat{\phi}_s= \arg\max_{d=\hat{d},\,\theta_s=\hat{\theta}_s,\, \phi_s \in (0\degree,180\degree)} P_r. \end{align} For instance, we set $d=180\lambda$, $\phi_s=90\degree$, $\theta_s =45\degree$. The first step utilizes cylindrical waves to estimate $d$ and $\theta_s$, as shown in Fig.~\ref{第一步检测}, which presents the change of the received power with the change of the RIS's imaginary focal line. \begin{figure}[t] \centering \centerline{\includegraphics[width=8.7cm,height=7cm]{diyibujiance.png}} \caption{The First Step of Location Sensing.} \label{第一步检测} \end{figure} After we estimate $\hat{d}$ and $\hat{\theta}_s$ as the brightest point in Fig.~\ref{第一步检测}, the second step utilizes spherical waves to estimate $\phi_s$, as shown in Fig.~\ref{第二步检测}, which presents the change of the received power with the change of the RIS's imaginary focal point. Note that with the constraint $d=\hat{d}$, $\theta_s=\hat{\theta}_s$, the position of the imaginary focal point can be expressed by $\phi_s$ alone. We estimate $\hat{\phi}_s$ as the value corresponding to the highest point in Fig.~\ref{第二步检测}. \begin{figure}[t] \centering \centerline{\includegraphics[width=8.7cm,height=7cm]{dierbujiance.png}} \caption{The Second Step of Location Sensing.} \label{第二步检测} \end{figure} \section{Simulation Results and Analysis} In the simulations, we set $a = b =20 \lambda$ and $\lambda$ = 0.1 m, \textit{i.e.}, the frequency $f=2.998$ GHz, and thus the radiating near field requires $93.263\lambda< d < 1600\lambda$. Note that this range of $d$ is typical in practice, \textit{e.g.}, in road traffic the distance from vehicles to the nearest RIS may fall in this range.
Moreover, we set $E_{\text{in}} = 1$ V/m, $M=128$, $\eta=377$ Ohm, $G_r=5$ dB, $L=20 \lambda$, and $\theta_{\text{in}} =30\degree$. Although the magnitudes of the electromagnetic fields are related to $E_{\text{in}}$ and $\eta$, which will change in practice, the design of the reflection coefficient is independent of $E_{\text{in}}$ and $\eta$ according to (23)-(35). Moreover, the variation trend of the received power in the sensing process is likewise independent of $E_{\text{in}}$ and $\eta$, which means that the sensing accuracy is not affected by changes of $E_{\text{in}}$ and $\eta$. We define the SNR as the ratio of the power at the receiver transmitted by planar waves to the power of the noise at the receiver. Moreover, $C$ is the mean channel capacity per antenna. Therefore, the following simulations mainly reflect the influence of the wave type on the capacity without the influence of the number of antennas. \subsection{The Comparison of Channel Capacity with ULA} \begin{figure}[t] \centering \centerline{\includegraphics[width=8.7cm,height=7cm]{SNRvsC.png}} \caption{The channel capacity $C$ versus SNR.} \label{SNRvsC} \end{figure} For the ULA receiver, the power should be concentrated on the line, which cannot be achieved by traditional planar waves. To demonstrate that cylindrical waves are superior to planar waves, we examine the loss of channel capacity caused by the energy leakage, \textit{i.e.}, the energy carried by the reflected waves to undesired areas. In the design of the reflection coefficient, we want to focus the power at $\theta_{f} = \arctan (f_z/f_y) =45\degree$ with the distance $d=180 \lambda$. The channel capacity is calculated as \begin{align} C=\log_2\left(1+\sum_{m=1}^M \frac{\Vert \mathbf{E}_{\text{out},m} \Vert^2}{2\eta} \frac{\lambda^2 G_r}{4\pi N}\right), \end{align} where $N$ denotes the power of the additive noise. The channel capacity $C$ versus SNR is shown in Fig.~\ref{SNRvsC}.
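The capacity metric above can be evaluated directly. A minimal sketch, in which the per-antenna field magnitudes and the noise power are assumed toy values rather than simulated fields:

```python
import numpy as np

# Evaluate C = log2(1 + sum_m |E_out,m|^2 / (2 eta) * lam^2 * G_r / (4 pi N)).
# The field magnitudes and the noise power are assumed toy values.
eta = 377.0
lam = 0.1
G_r = 10 ** (5 / 10)                  # 5 dB antenna gain in linear scale
M = 128
E_out = np.full(M, 1e-3)              # assumed |E_out,m| at each antenna [V/m]
N = 1e-12                             # assumed noise power [W]

P_sig = np.sum(np.abs(E_out) ** 2 / (2 * eta)) * lam ** 2 * G_r / (4 * np.pi)
C = np.log2(1 + P_sig / N)
```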
It is seen that $C$ increases with the SNR, because the benefit of focusing becomes more visible as the noise decreases, and the cylindrical wave achieves the best performance. The SNR gain of the cylindrical wave over the spherical wave is 2 dB, which demonstrates that the best effect for the ULA is achieved only by designing cylindrical waves. \begin{figure}[t] \centering \centerline{\includegraphics[width=8.7cm,height=7cm]{ruilijvli.png}} \caption{The channel capacity $C$ versus the distance of the receiver.} \label{瑞丽距离} \end{figure} We further explore the relationship between the channel capacity and the distance of the receiver in Fig.~\ref{瑞丽距离}. To compensate for the power decay caused by the increasing distance, we adjust the SNR at each distance such that $C$ of the planar waves is identical to 1. It is seen that the channel capacities of cylindrical and spherical waves drop as the distance increases toward the Rayleigh distance. When the distance is above $250\lambda$, the channel capacity of spherical waves is close to that of cylindrical waves, because the ULA is not long compared with the distance and can be deemed a point receiver. Beyond the Rayleigh distance, the channel capacities of cylindrical, planar, and spherical waves are indistinguishable, which indicates that cylindrical and spherical waves degenerate into planar waves in the far field. \subsection{The Comparison of Channel Capacity with A Single Antenna} \begin{figure}[t] \centering \centerline{\includegraphics[width=8.7cm,height=7cm]{SNRvsC_single.png}} \caption{The channel capacity $C$ versus SNR.} \label{SNRvsC_single} \end{figure} When the receiver is equipped with a single antenna, spherical waves have the best focusing property. We set the single antenna at the focal point of the spherical waves, with $d=180\lambda$, $\phi_s =90\degree$, and $\theta_s =45\degree$.
For the cylindrical wave, since the focusing property is best when the single antenna lies on the focal line, we set the focal line at $d=180\lambda$, $\theta_s =45\degree$. The channel capacity $C$ versus SNR is shown in Fig.~\ref{SNRvsC_single}. It is seen that $C$ of cylindrical and spherical waves still increases with the SNR, and the spherical wave achieves the best performance. The SNR gain of the spherical wave over the cylindrical wave is 3 dB. \subsection{Sensing The Location of ULA} \begin{figure}[t] \centering \centerline{\includegraphics[width=8.7cm,height=7cm]{sensing_snr.png}} \caption{The RMSE of the sensed location versus SNR for the ULA receiver.} \label{sensing_snr} \end{figure} In Fig.~\ref{sensing_snr}, we investigate the RMSE of the sensed location versus SNR for $d=200\lambda$, $600\lambda$, and $1600\lambda$. For all cases, the RMSE decreases as the SNR increases. The sensing result goes from almost randomly located in the search space to increasingly close to the receiver as the SNR increases. Note that for this simulation, where a single RIS is used for sensing, the smaller $d$ is, the more accurate the sensing, because the near-field effect is more significant and the search region is smaller. At the same time, when the receiver is located at the Rayleigh distance $d=1600\lambda$, the sensing accuracy still improves with the SNR, because when we normalize the power at the receiver the near-field effect decays slowly with the distance. \subsection{Sensing The Location of Single Antenna} In this simulation, we set $d=180\lambda$, $\phi_s=90\degree$, $\theta_s =45\degree$. We adopt the decoupled 2-D and 1-D search in the 3-D space mentioned in Section \uppercase\expandafter{\romannumeral5}. In Fig.~\ref{sensing_snr_single}, we investigate the RMSE of the sensed location versus SNR for $d=200\lambda$, $600\lambda$, and $1600\lambda$.
For all cases, the RMSE decreases as the SNR increases. A phenomenon similar to that in Fig.~\ref{sensing_snr} is observed. At the same distance, the RMSE of locating the single antenna is larger than the RMSE of locating the ULA, because the additional dimension of the single antenna's location brings additional error to the search process. \section{Conclusion} \begin{figure}[t] \centering \centerline{\includegraphics[width=8.7cm,height=7cm]{sensing_snr_single.png}} \caption{The RMSE of the sensed location versus SNR for the single antenna receiver.} \label{sensing_snr_single} \end{figure} In this paper, we show that the conventional planar reflected wave of an RIS may cause serious energy leakage when the receiver is located in the near field and is not equipped with a planar array. We propose an RIS scheme that converts planar waves into cylindrical waves such that the energy is concentrated on the ULA, and an RIS scheme that converts planar waves into spherical waves such that the energy is concentrated on the single antenna. Unlike the conventional scheme, in which the planar array receives planar waves, the position of the focal line or focal point depends on the position of the receiver and should be obtained before the RIS design. With cylindrical or spherical wave radiation, the power received by the receiver is highly dependent on its location, and this property can be exploited to sense the location of the receiver for communications by focal scanning. Simulations demonstrate that the proposed RIS schemes can reduce energy leakage and thus enlarge the channel capacity compared with the conventional planar wave scheme. Moreover, the location of the receiver can be accurately sensed by the proposed method. \small \bibliographystyle{ieeetr}
\section{Introduction} Recently three-dimensional topological insulators (TIs) with magnetically ordered surface states have attracted much attention, both theoretically \cite{Nagaosa-2010,Nagaosa-2010-1,Zang-Nagaosa,Franz-2010,Rosenberg-2010,Belzig-2012,Loss-PRL-2012,Nogueira-Eremin-2012,Cortijo-2012,Schmidt-2012,Qi-2013,Chulkov} and experimentally.\cite{Hor,Wray,Vobornik,Rader-2012,Checkelsky,Moodera-2012,Moodera,kapitulnik-2013} The topological surface states, usually gapless and featuring a low-energy Dirac dispersion protected by time-reversal (TR) symmetry, generally become gapped when a ferromagnetically ordered layered material is grown on top of them. In this case the TR symmetry is broken by the proximity effect to the magnetic material. Interesting effects on the magnetization dynamics arise when the topological surface becomes ferromagnetic.\cite{Nagaosa-2010,Franz-2010,Loss-PRL-2012,Nogueira-Eremin-2012} Quantum fluctuations of the surface Dirac fermions induce an additional Berry phase, which modifies the Landau-Lifshitz (LL) equation for the magnetization dynamics.\cite{Nagaosa-2010,Nogueira-Eremin-2012} Furthermore, when an electric field is present, either external or originating from Coulomb interactions, a topological magnetoelectric torque also appears in the LL equation.\cite{Nogueira-Eremin-2012} Actually both effects arise from a Chern-Simons (CS) term \cite{CS} induced upon integrating out the Dirac fermions.\cite{Nogueira-Eremin-2012,Nogueira-Eremin-2013} It is important to notice that most theoretical calculations in TIs have so far been performed only at zero temperature, although some aspects, like the shift of the Curie temperature in the ferromagnetic insulator/TI heterostructure, were discussed.\cite{Nogueira-Eremin-2012,Moodera} The disregard of temperature effects until now is partially connected to the fact that finite temperature is not expected to modify the topological contribution to the effective electromagnetic Lagrangian of a TI
in the bulk,\cite{Qi-2008} \begin{equation} \label{Eq:L-EM-TI} {\cal L}_{\rm EM}=\frac{1}{8\pi}\left(\epsilon{\bf E}^2-\frac{1}{\mu_0}{\bf B}^2\right) +\frac{\alpha}{4\pi^2}\theta~{\bf E}\cdot{\bf B}, \end{equation} where $\alpha=e^2/(\hbar c)$ is the fine-structure constant and $\theta$ is the so-called axion field.\cite{Axions} While both the dielectric constant, $\epsilon$, and the magnetic permeability, $\mu_0$, receive finite temperature corrections, $\theta$ remains temperature-independent.\cite{Liu-Ni-PRD-1988} This somewhat confirms the naive expectation for a term of topological origin. This, however, is not the full story. For example, unlike the chiral anomaly leading to the axion term in Eq. (\ref{Eq:L-EM-TI}), the parity anomaly in $d=2+1$ dimensions does receive finite temperature corrections.\cite{CS-finite-T} Therefore, integrating out the Dirac fermions in 2+1 dimensions at finite temperature generates an additional temperature-dependent CS term. Another important finite temperature aspect relates to the electromagnetic response and the many-body screening of the Coulomb interaction. In graphene, where the chemical potential is typically zero, there is no screening of the long-range Coulomb interaction at zero temperature, although a renormalization of the dielectric constant occurs.\cite{interac-graphene} However, at finite temperature a thermal screening should occur, a situation reminiscent of massless QED in $2+1$ dimensions.\cite{Dorey-1991} Moreover, in contrast to graphene, most three-dimensional TIs feature surface Dirac fermions at a non-vanishing chemical potential, and we may expect that the screening in TIs \cite{Zang-Nagaosa,Screen-TI} and in doped graphene \cite{Ando-2006} at zero temperature behaves differently.
In recent experiments \cite{Moodera,kapitulnik-2013} thin films of the EuS/Bi$_2$Se$_3$ heterostructure were grown, and it has been shown that the topological surface becomes ferromagnetic due to a proximity-induced symmetry breaking mechanism. The experiments also confirmed the theoretical expectation that the surface Dirac fermions become gapped.\cite{Moodera,kapitulnik-2013} Many measurements in such a heterostructure are made at temperatures close to the Curie temperature. Therefore, it is of paramount importance to study quantum field theory models of Dirac fermions at finite temperature and chemical potential. As we have mentioned above, there are finite temperature corrections to the generated CS term. Furthermore, since the CS term provides a correction to the Berry phase of the proximity-induced ferromagnetism,\cite{Nagaosa-2010,Nogueira-Eremin-2012} the precessional behavior of the magnetization will be modified at finite temperature, which in turn affects the spin-wave excitations. A finite chemical potential influences the physics on the surface dramatically even at zero temperature. For example, it is known that when the chemical potential is larger than the gap, the Hall conductivity is no longer quantized and is given by \cite{Zang-Nagaosa} $\sigma_{xy}=e^2m_0/(2h\mu)$. Here $m_0$ represents the zero temperature gap. This suggests that in such a metallic regime the Hall conductivity cannot always be identified with the coefficient of a {\it local} fluctuation-generated CS term, i.e., the topological mass, which is induced by quantum fluctuations rather than by an external response. Since the relevant situation for the magnetization dynamics is when the chemical potential lies inside the gap, it is important to ask whether the Hall conductivity remains quantized in this case when the temperature is finite.
In this paper we derive an analytical expression for the Hall conductivity at finite temperature and chemical potential in precisely this regime. We show that a plateau persists at finite temperature when the chemical potential is inside the gap. The plan of the paper is as follows. In Section II we introduce the model and discuss some simple thermally induced screening effects following from the vacuum polarization on the TI surface. In Section III we derive the low-energy CS term at finite temperature and chemical potential. We discuss the physical consequences of this result for the magnetization dynamics in Section IV, and in Section V we summarize the main results. \section{Model and vacuum polarization at finite temperature} Our starting point is the following Hamiltonian for the surface Dirac fermions, $H=\psi^\dagger {\cal H}~\psi$, where $\psi=[\psi_\uparrow, \psi_\downarrow]^T$. The $2\times 2$ matrix ${\cal H}$ reads, \begin{equation} \label{Eq:matrix-H} {\cal H}=v_F(-i\hbar{\mbox{\boldmath $\nabla$}}\times\hat {\bf z})\cdot{{\mbox{\boldmath $\sigma$}}}-e\varphi-J(n_x\sigma_x+n_y\sigma_y)-J_\perp n_z\sigma_z, \end{equation} where $\varphi$ is a scalar potential including contributions both from an external electric field and from the Coulomb interaction, and the magnetization ${\bf n}$ satisfies the constraint ${\bf n}^2=1$.
The Lagrangian ${\cal L}=\psi^\dagger i\hbar\partial_t\psi-H$ can, after the rescaling $\varphi\to (J/e)\varphi$, be conveniently written in a QED-like form,\cite{Nogueira-Eremin-2012,Nogueira-Eremin-2013} \begin{equation} \label{Eq:TI-QED} {\cal L}=\bar \psi(i\slashchar{\partial}-J\slashchar{a}-J_\perp n_z)\psi, \end{equation} where the Dirac slash notation is used, $\slashchar{Q}=\gamma^\mu Q_\mu$, with $\gamma^0=\sigma_z$, $\gamma^1=-i\sigma_x$, and $\gamma^2=i\sigma_y$, and $\partial_\mu=\hbar(\partial_t,v_F{\mbox{\boldmath $\nabla$}})$, $\bar \psi=\psi^\dagger\gamma^0$, and the vector field $a^\mu=(a_0,{\bf a})=(\varphi,n_y,-n_x)$. In order to study thermal effects on the magnetization dynamics and the screening of the Coulomb interaction, we perform the calculations in the imaginary time formalism, setting as usual $t=-i\tau$ with $\tau\in[0,\beta]$, and integrating out the fermions with anti-periodic boundary conditions, $\psi(0,{\bf r})=-\psi(\beta,{\bf r})$. Thus, the amplitude $\exp(i\hbar^{-1}\int d^3x{\cal L}_F)$ appearing in the functional integral becomes $\exp(-\hbar^{-1}\int_0^\beta d\tau\int d^2r{\cal L}_F^{\rm eucl})$ in the imaginary time formalism, with the Lagrangian in euclidean spacetime given by, \begin{equation} \label{Eq:TI-QED-eucl} {\cal L}^{\rm eucl}=\bar \psi(\slashchar{\partial}-\mu\gamma_0-iJ\slashchar{a}+J_\perp n_z)\psi, \end{equation} where now the Dirac slash features the euclidean Dirac matrices $\gamma_\mu=(\sigma_z,\sigma_x,-\sigma_y)$ and a chemical potential $\mu$ has been included (note that $\psi^\dagger\psi=\bar \psi\gamma_0\psi$). Integrating out the fermions yields the effective action in the form ${\cal S}_{\rm eff}={\cal S}_F+{\cal S}_{\rm mag}({\bf n})$, with \begin{equation} \label{Eq:Sf} {\cal S}_F=-\frac{N}{V}{\rm Tr}\ln(\slashchar{\partial}-\mu\gamma_0-iJ\slashchar{a}+J_\perp n_z), \end{equation} where $N$ is the number of Dirac fermion species and $V$ is the (infinite) volume.
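As a quick consistency check, the real-time representation $\gamma^0=\sigma_z$, $\gamma^1=-i\sigma_x$, $\gamma^2=i\sigma_y$ used above can be verified to satisfy the $2+1$-dimensional Clifford algebra $\{\gamma^\mu,\gamma^\nu\}=2\eta^{\mu\nu}$ with $\eta={\rm diag}(+,-,-)$:

```python
import numpy as np

# Consistency check: the matrices gamma^0 = sigma_z, gamma^1 = -i sigma_x,
# gamma^2 = i sigma_y satisfy {gamma^mu, gamma^nu} = 2 eta^{mu nu} with
# eta = diag(+1, -1, -1).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
gamma = [sz, -1j*sx, 1j*sy]
eta = np.diag([1.0, -1.0, -1.0])

for mu in range(3):
    for nu in range(3):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2.0*eta[mu, nu]*np.eye(2))
print("Clifford algebra verified")
```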
${\cal S}_{\rm mag}({\bf n})$ is the magnetic action, which has the general form in the imaginary time formalism, \begin{equation} \label{Eq:S-mag} {\cal S}_{\rm mag}=\int d\tau\int d^2r\left[{\bf b}({\bf n})\cdot\partial_\tau{\bf n}+{\cal H}_{\rm mag}\right], \end{equation} featuring a Berry gauge potential satisfying ${\mbox{\boldmath $\nabla$}}_{\bf n}\times{\bf b}={\bf n}$, representing a magnetic monopole in the magnetization space. The magnetic Hamiltonian density ${\cal H}_{\rm mag}$ may contain several contributions, the most important ones being the coupling to external fields and the exchange terms. In TIs the number $N$ of fermion species is odd. In the present context this is essential, otherwise no parity and TR symmetry breaking via the generation of a CS term would occur dynamically.\cite{Nogueira-Eremin-2013} Assuming that the system orders along the $z$-axis, we can write $n_z=\langle n_z\rangle+\tilde n_z$ and expand Eq. (\ref{Eq:Sf}) up to quadratic order in the fields, \begin{eqnarray} &&{\cal S}_F\approx\frac{N}{2}\int_0^\beta d\tau\int d^2r\int_0^\beta d\tau'\int d^2r' \left[\Pi_{\mu\nu}(\tau-\tau',{\bf r}-{\bf r}')\right.\nonumber\\ &\times&\left.a_\mu(\tau,{\bf r})a_{\nu}(\tau',{\bf r}') +\chi_{zz}(\tau-\tau',{\bf r}-{\bf r}')\tilde n_z(\tau,{\bf r})\tilde n_z (\tau',{\bf r}') \right], \end{eqnarray} where $\Pi_{\mu\nu}(\tau-\tau',{\bf r}-{\bf r}')=\delta^2{\cal S}_F/[\delta a_\mu(\tau,{\bf r}) \delta a_\nu(\tau',{\bf r}')]|_{n_z=\langle n_z\rangle}$, is the vacuum polarization tensor at finite temperature encompassing screening effects in the Coulomb potential and the transverse magnetic susceptibility, and $\chi_{zz}(\tau-\tau',{\bf r}-{\bf r}')=\delta^2{\cal S}_F/[\delta \tilde n_z(\tau,{\bf r}) \delta \tilde n_z(\tau',{\bf r}')]|_{n_z=\langle n_z\rangle}$, is the longitudinal magnetic susceptibility. 
The fermionic propagator has the form $G_F=(\slashchar{\partial}-\mu\gamma_0+m)^{-1}$, or in momentum space, $G_F(p)=(i\slashchar{p}+\mu\gamma_0+m)^{-1}=[m-(i\omega_n+\mu)\gamma_0-i{\mbox{\boldmath $\gamma$}}\cdot{\bf p}]/[m^2+{\bf p}^2-(i\omega_n+\mu)^2]$, where $p=(p_\mu)=(p_0,{\bf p})=(\omega_n,v_F{\bf k})$, $m=J_\perp\langle n_z\rangle$, and $\omega_n=(2n+1)\pi/\beta$ is the usual fermionic Matsubara frequency. The units are such that $\hbar=1$. The calculation of the vacuum polarization in $2+1$ dimensions at zero temperature is well known \cite{CS} and has been reviewed by us in detail recently in Ref. \onlinecite{Nogueira-Eremin-2013}. Due to current conservation, it fulfills $p_\mu\Pi_{\mu\nu}=0$. Periodicity in the Matsubara time allows one to choose a rest frame for the heat bath given by the vector $u_\mu=(1,0,0)$.\cite{Kapusta} Therefore, in this case we can write the vacuum polarization in momentum space in the form, \begin{equation} \label{Eq:Pi-decomp} \Pi_{\mu\nu}(p)=A(p)P_{\mu\nu}^T+B(p)P_{\mu\nu}^L+C(p)\epsilon_{\mu\nu\lambda}p_\lambda, \end{equation} where $P_{\mu\nu}^T$ and $P_{\mu\nu}^L$ are both transverse in 2+1 dimensions, with $P_{\mu\nu}^T$ being transverse and $P_{\mu\nu}^L$ longitudinal in the two spatial dimensions. Thus, we have $P_{0i}^T=P_{00}^T=0$, $P_{ij}^T=\delta_{ij}-p_ip_j/{\bf p}^2$, and $P_{\mu\nu}^T+P_{\mu\nu}^L=\delta_{\mu\nu}-p_\mu p_\nu/p^2$, where Latin indices refer to the spatial dimensions. Explicitly, \begin{eqnarray} \label{Eq:v-pol} &&\Pi_{\mu\nu}(\omega_n,{\bf p})=-NJ^2T \nonumber\\ &\times&\sum_{l}\int\frac{d^2k}{(2\pi)^2} {\rm tr}[\gamma_\mu G_F(\nu_l,{\bf k})\gamma_\nu G_F(\nu_l+\omega_n,{\bf k}+{\bf p})], \end{eqnarray} where $\nu_l=\pi(2l+1)T$ and $\omega_n=2\pi nT$, $l,n\in\mathbb{Z}$.
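The closed-form propagator quoted above can be checked numerically: with the euclidean matrices $\gamma_\mu=(\sigma_z,\sigma_x,-\sigma_y)$, the stated expression indeed inverts $i\slashchar{p}+\mu\gamma_0+m$. The numerical values of $m$, $\mu$, $\omega_n$, and ${\bf p}$ below are arbitrary test values (with $v_F=1$):

```python
import numpy as np

# Check that G_F(p) = [m - (i w + mu) g0 - i (g1 k1 + g2 k2)]
#                     / [m^2 + k^2 - (i w + mu)^2]
# inverts i*pslash + mu*g0 + m with euclidean gammas (sz, sx, -sy).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
g0, g1, g2 = sz, sx, -sy

m, mu, w, k1, k2 = 0.7, 0.3, 1.1, 0.4, -0.9   # arbitrary test values
z = 1j*w + mu
G = (m*np.eye(2) - z*g0 - 1j*(k1*g1 + k2*g2))/(m**2 + k1**2 + k2**2 - z**2)
D = 1j*(w*g0 + k1*g1 + k2*g2) + mu*g0 + m*np.eye(2)
assert np.allclose(G @ D, np.eye(2))
assert np.allclose(D @ G, np.eye(2))
```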
At finite temperature, an interesting aspect of the vacuum polarization in relativistic-like fermionic systems is the generation of a thermal mass for the vector field along the time direction.\cite{Kapusta,Dorey-1991} In other words, the Coulomb potential acquires a thermal gap, or thermal screening. In the case of a TI surface, for example, the Coulomb interaction $\phi_C(r)=e^2/(\epsilon r)$ in momentum space is given by $\phi_C({\bf q})=2\pi e^2/(\epsilon|{\bf q}|)$, similarly to interacting graphene.\cite{interac-graphene} The vacuum polarization screens this Coulomb interaction, and we have $\phi_{\rm eff}({\bf q})=J^2/\{J^2[\phi_C({\bf q})]^{-1}+\Pi_{00}(0,{\bf q})\}$, allowing us to define the so-called {\it electric mass},\cite{Kapusta} $m_{\rm el}^2\equiv\Pi_{00}(0,0)$. An explicit calculation assuming $|{\bf q}|\ll \sqrt{2(\mu^2-m^2)}$ yields, \begin{widetext} \begin{equation} \Pi_{00}(0,{\bf q})=\frac{NJ^2}{2\pi v_F^2}\left\{ T\ln[\cosh(|m|/T)+\cosh(\mu/T)] -\frac{{\bf q}^2+2m^2}{\sqrt{{\bf q}^2+4m^2}}\frac{\sinh\left(\frac{\sqrt{{\bf q}^2+4m^2}}{2T}\right)}{\cosh(\mu/T) +\cosh\left(\frac{\sqrt{{\bf q}^2+4m^2}}{2T}\right)}\right\}, \end{equation} \end{widetext} so that the electric mass is given by, \begin{eqnarray} \label{Eq:m-el} m^2_{\rm el}&=&\frac{NJ^2}{2\pi v_F^2}\left\{\vphantom{\frac{1}{2}}T\ln[\cosh(|m|/T)+\cosh(\mu/T)] \right.\nonumber\\ &-&\left.\frac{|m|\sinh(|m|/T)}{\cosh(|m|/T)+\cosh(\mu/T)}\right\}. \end{eqnarray} At zero temperature, $m_{\rm el}^2|_{T=0}=[NJ^2\mu/(2\pi v_F^2)]\theta(\mu-|m_0|)$, where $m_0=\lim_{T\to 0}m(T)$. Thus, the electric mass vanishes for $0\leq \mu<|m_0|$.
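The zero-temperature limit of Eq. (\ref{Eq:m-el}) can be confirmed numerically. The sketch below evaluates the braces of Eq. (\ref{Eq:m-el}) in a numerically stable way (scaling out the largest exponential) and checks that $m^2_{\rm el}\to [NJ^2\mu/(2\pi v_F^2)]\theta(\mu-|m_0|)$ as $T\to 0$; units with $N=J=v_F=1$ are an assumption:

```python
import numpy as np

# Stable evaluation of Eq. (m-el) with N = J = v_F = 1 (an assumption);
# exponentials are scaled by the largest one so that T -> 0 is accessible.
def electric_mass_sq(T, mu, m):
    a, b = abs(m)/T, mu/T
    c = max(a, b)
    S = np.exp(a - c) + np.exp(-a - c) + np.exp(b - c) + np.exp(-b - c)
    log_cc = c + np.log(S/2.0)                    # ln[cosh(|m|/T)+cosh(mu/T)]
    sinh_ratio = (np.exp(a - c) - np.exp(-a - c))/S
    return (T*log_cc - abs(m)*sinh_ratio)/(2*np.pi)

mu_metal, mu_ins, m0 = 0.5, 0.1, 0.2
print(electric_mass_sq(1e-4, mu_metal, m0), mu_metal/(2*np.pi))  # -> mu/(2 pi)
print(electric_mass_sq(1e-4, mu_ins, m0))                        # -> 0
```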
Using the above results, we obtain that in the long wavelength limit the effective Coulomb interaction becomes, \begin{equation} \label{Eq:Veff} \phi_{\rm eff}({\bf q})\approx \frac{J^2}{J^2[\phi_C({\bf q})]^{-1}+\Pi_{00}(0,0)}=\frac{2\pi e^2}{\epsilon(|{\bf q}|+s)}, \end{equation} where $s=2\pi e^2m_{\rm el}^2/(\epsilon J^2)$, featuring in this way a screening of the Thomas-Fermi type. At zero temperature, $s|_{T=0}\equiv s_0=Ne^2\mu\theta(\mu-|m_0|)/(\epsilon v_F^2)$. In real space we have, \begin{equation} \label{Eq:Veff-real} \phi_{\rm eff}(r)=\frac{e^2}{\epsilon r}\{1+(\pi sr/2)[Y_0(sr)-H_0(sr)]\}, \end{equation} where $Y_0(x)$ is a Bessel function of the second kind and $H_0(x)$ is a Struve function. The above results imply that no screening occurs at zero temperature if $\mu<|m_0|=J_\perp|\langle n_z\rangle|$, i.e., the screening takes place only when the chemical potential is located above the gap. This is a reasonable result, since this regime corresponds to a metallic state. For temperatures above the Curie temperature, $T_c$, we have, \begin{equation} \label{Eq:s-large-T} s|_{T\geq T_c}=\frac{Ne^2}{\epsilon v_F^2}T\ln\left[2\cosh^2\left(\frac{\mu}{2T}\right)\right]. \end{equation} The above equation is also valid when no ferromagnetic material is in contact with the TI surface, in which case Eq. (\ref{Eq:s-large-T}) holds for all $T\geq 0$. Interestingly, since we are dealing with interacting Dirac fermions, in the absence of ferromagnetism Eq. (\ref{Eq:s-large-T}) would also apply to doped graphene \cite{Ando-2006} if one replaces $N\to N/2$ to account for the even number of Dirac cones. \section{The Chern-Simons term and the Hall conductivity at finite temperature} Next, we consider the finite temperature behavior of the CS term, which will yield a finite temperature correction to the Berry phase on the TI surface.
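As an aside, the real-space form of the screened potential, Eq. (\ref{Eq:Veff-real}), can be evaluated with standard special functions ($Y_0$ and the Struve function $H_0$ are available in {\tt scipy.special}). The sketch below, in units $e^2/\epsilon=1$, checks that the bare Coulomb law is recovered as $s\to 0$ and that a larger screening parameter $s$ suppresses the potential at fixed $r$:

```python
import numpy as np
from scipy.special import y0, struve   # Y_0 and the Struve function H_0

# Sketch of Eq. (Veff-real) in units e^2/eps = 1:
# phi_eff(r) = (1/r) { 1 + (pi s r / 2) [ Y_0(s r) - H_0(s r) ] }.
def phi_eff(r, s):
    x = s*r
    return (1.0/r)*(1.0 + 0.5*np.pi*x*(y0(x) - struve(0, x)))

r = 5.0
print(phi_eff(r, 1e-6), 1.0/r)           # s -> 0: unscreened Coulomb limit
print(phi_eff(r, 0.5), phi_eff(r, 2.0))  # larger s: stronger screening
```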
The finite temperature behavior of the CS term is generally a highly nontrivial matter,\cite{CS-finite-T,Dunne-1997} depending on the type of background field configuration involved. However, in our case we are mainly interested in the low-energy behavior, so that the calculation simplifies considerably. The calculation parallels the zero temperature one,\cite{Nogueira-Eremin-2013} and thus $C(p)=-2NJ^2mI(p)$, where \begin{eqnarray} \label{Eq:Integral} I(\omega_n,{\bf p})&=& T\sum_l\int\frac{d^2 k}{(2\pi)^2}\frac{1}{(i\nu_l+\mu)^2-v_F^2{\bf k}^2-m^2} \nonumber\\ &\times&\frac{1}{(i\nu_l+i\omega_n+\mu)^2-v_F^2({\bf k}-{\bf p})^2-m^2}. \end{eqnarray} When the chemical potential is inside the gap, i.e., $\mu<|m|$, the surface state is insulating and we can take $I(p)\approx I(0)$ in the low-energy regime, since in this case $m$ yields the dominant energy scale in the problem and there is no Fermi momentum. The effective action is local in this case, following straightforwardly from a derivative expansion. On the other hand, in the metallic regime where $\mu>|m|$, the effective action is inherently non-local. In this case there is a Fermi surface singularity with a Fermi momentum $p_F=\sqrt{\mu^2-m^2}$. Generally, the coefficient of the CS term, the so-called topological mass, yields a measure of the Hall conductivity when expressed in appropriate units. We will now show that in the insulating regime the topological mass can be evaluated analytically at finite temperature and chemical potential. In this regime the CS contribution to the effective action is given approximately by, \begin{eqnarray} \label{Eq:CS-action} {\cal S}_{\rm CS}\approx\frac{\sigma(T,m)}{2}\int dt\int d^2r \epsilon_{\mu\nu\lambda}a^\mu\partial^\nu a^\lambda, \end{eqnarray} where in the above expression we have returned to real time and have defined $\sigma(T,m)=2NJ^2I(0)m$.
Expressing $\sigma(T,m)$ in units of conductivity via $\sigma(T,m)=NJ^2\tilde \sigma(T,m)/(v_F^2e^2)$ and using that $I(0)=[f_-(0)-f_+(0)]/(8\pi|m|)$ (see Appendix A), we obtain, \begin{equation} \label{Eq:sigma} \tilde \sigma(T,m)=\frac{e^2{\rm sgn}(m)\sinh(|m|/T)}{2h[\cosh(|m|/T)+\cosh(\mu/T)]}, \end{equation} where we have reintroduced the Planck constant. The above equation provides an analytical expression for the Hall conductivity in the case where the chemical potential is inside the gap, i.e., in the insulating regime. However, it differs from the actual Hall conductivity $\sigma_{xy}$ when $\mu>|m|$. In order to see this, let us first compare Eq. (\ref{Eq:sigma}) to the Hall conductivity at zero temperature, where it can be calculated exactly for a nonzero chemical potential. \begin{figure}[h!] \centering \subfigure[]{ \includegraphics[scale=0.32]{a}} ~ \subfigure[]{ \includegraphics[scale=0.32]{b}} \subfigure[]{ \includegraphics[scale=0.32]{c}} ~ \subfigure[]{ \includegraphics[scale=0.32]{d}} \caption{Calculated topological mass $\tilde \sigma(T,m)$ and Hall conductivity $\sigma_{xy}$ as functions of the temperature, $T$, and the chemical potential, $\mu$, normalized by the Curie temperature $T_c$. We consider for $m(T)$ a mean-field temperature dependence, $m(T)\approx J_\perp\sqrt{1-T/T_c}$, and use the estimate $J_\perp/T_c\approx 37.4$ [(a) and (b)] based on experimental results from Ref. \onlinecite{Moodera} for the Bi$_2$Se$_3$/EuS interface. Note the plateau regions in both panels (a) and (b), indicating nearly quantized $\tilde \sigma$ and $\sigma_{xy}$, although both $T$ and $\mu$ are finite. There is a large region in the $T-\mu$ plane where the Hall conductivity is finite and not quantized, while the topological mass vanishes. Panels (a) and (b) depict a situation where $J_\perp$ is large compared to $T_c$. In panels (c) and (d) we illustrate a situation with $J_\perp/T_c=0.8$. Observe that the plateau region is smaller and less sharp.
In any situation, the plot of the topological mass represents the Hall conductivity only in the regime $\mu<|m|$.}\label{Fig:sigmaT} \end{figure} Introducing the unit vector $\hat d(k)={\bf d}(k)/|{\bf d}(k)|$ associated with the mean-field Hamiltonian ${\cal H}_{\rm MF}={\bf d}(k)\cdot{\mbox{\boldmath $\sigma$}}=-k_y\sigma_x+k_x\sigma_y+m\sigma_z$, we can write an expression for $\sigma_{xy}(T,m)$ in terms of the topological charge density, \begin{equation} {\cal Q}_{xy}(k)=\frac{1}{2\pi}\hat d(k)\cdot[\partial_{k_x}\hat d(k)\times\partial_{k_y}\hat d(k)], \end{equation} as \begin{equation} \label{Eq:sigma_xy} \sigma_{xy}(T,m)=\frac{e^2}{2h}\int d^2k~{\cal Q}_{xy}(k) [f_-(k)-f_+(k)], \end{equation} where $f_\pm(k)=1/[e^{\beta(\pm|{\bf d}(k)|-\mu)}+1]$ are Fermi-Dirac distributions. Thus, it is straightforward to compute $\sigma_{xy}$ at $T=0$, \cite{Zang-Nagaosa} \begin{equation} \label{Eq:sigma_xy-0} \sigma_{xy}(0,m_0)=\frac{e^2}{2h}\left[\left({\rm sgn}(m_0)-\frac{m_0}{\mu}\right)\theta(|m_0|-\mu)+\frac{m_0}{\mu}\right], \end{equation} and we see that the Hall conductivity at $T=0$ is non-quantized and non-universal in the metallic regime ($\mu>|m_0|$). The topological mass, on the other hand, vanishes in the limit $T\to 0$ when $\mu>|m_0|$, \begin{equation} \label{Eq:spin-conduc-1} \tilde \sigma(0,m_0)=\frac{e^2}{2h}{\rm sgn}(m_0)\theta(|m_0|-\mu). \end{equation} Thus, when the system is in the metallic phase, the topological mass obtained from the low-energy regime of the CS action does not agree with the Hall conductivity. This means that non-local corrections have to be considered in order to make the local effective action approach agree with the expression obtained from linear response. Further insight into this point is obtained by considering how the topological mass relates to the actual Hall conductivity and seeing how it deviates from it in the metallic regime.
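The zero-temperature limit, Eq. (\ref{Eq:spin-conduc-1}), can be checked directly from Eq. (\ref{Eq:sigma}). The sketch below evaluates $\tilde\sigma$ in units of $e^2/(2h)$, scaling out the largest exponential so that very low temperatures are numerically accessible, and confirms the plateau inside the gap and the vanishing in the metallic regime:

```python
import numpy as np

# Eq. (sigma) in units of e^2/(2h):
# sgn(m) sinh(|m|/T) / [cosh(|m|/T) + cosh(mu/T)], evaluated stably.
def sigma_tilde(T, mu, m):
    a, b = abs(m)/T, abs(mu)/T
    c = max(a, b)
    num = np.exp(a - c) - np.exp(-a - c)
    den = np.exp(a - c) + np.exp(-a - c) + np.exp(b - c) + np.exp(-b - c)
    return np.sign(m)*num/den

# T -> 0 limit, Eq. (spin-conduc-1): sgn(m0) theta(|m0| - mu)
print(sigma_tilde(1e-4, 0.1, 0.2))   # inside the gap: -> 1 (plateau)
print(sigma_tilde(1e-4, 0.5, 0.2))   # metallic regime: -> 0
```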
First, we note that we can write, \begin{equation} \label{Eq:spin-conduc} \tilde \sigma(T,m)=\sigma_{xy}(T,m)+\tau_{xy}(T,m), \end{equation} where \begin{equation} \label{Eq:tau_xy} \tau_{xy}(T,m)=-\frac{e^2}{2 h}\int d^2k~{\cal Q}_{xy}(k) |{\bf d}(k)|\frac{\partial}{\partial\mu}[f_+(k)+f_-(k)]. \end{equation} The quantity $\tau_{xy}$ yields the deviation of the coefficient of the local contribution to the CS term from the Hall conductivity when the chemical potential is above the gap. We see that for $|m_0|<\mu$ the contribution $\tau_{xy}$ is responsible for canceling the non-quantized contribution at $T=0$, since, \begin{eqnarray} \tau_{xy}(T=0)&=&-\frac{e^2}{2h}\int d^2k{\cal Q}_{xy}(k)|{\bf d}(k)|\delta(|{\bf d}(k)|-\mu) \nonumber\\ &=&-\frac{e^2m_0}{2h\mu}=-\sigma_{xy}(T=0,|m_0|<\mu), \end{eqnarray} precisely canceling the last term in Eq. (\ref{Eq:sigma_xy-0}). Thus, we see that the chemical potential acts as a cutoff setting the limit of validity of the local effective action. In Fig. \ref{Fig:sigmaT}(a) we show $\tilde \sigma(T,m)$ for a typical mean-field temperature dependence of $m$. In order to plot $\tilde \sigma$ and $\sigma_{xy}$ we have used values for $J_\perp$ and the Curie temperature estimated from the experiment by Wei {\it et al.} \cite{Moodera} performed on Bi$_2$Se$_3$/EuS samples. There the estimated value for $J_\perp$ at the interface is considerably larger ($\sim 150$ meV \cite{private}) than the one obtained in {\it ab initio} calculations for Bi$_2$Se$_3$/MnSe,\cite{Qi-2013,Chulkov} which features an antiferromagnetic material (MnSe) rather than a ferromagnetic one. The numerical calculation of the Hall conductivity is shown in Fig. \ref{Fig:sigmaT}(b). We note a plateau similar to the one arising for $\tilde \sigma$, which indicates that quantization nearly holds for finite $T$ and $\mu$ in a region of the $T-\mu$ plane determined by the inequality $\mu<|m(T)|$.
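The decomposition in Eq. (\ref{Eq:spin-conduc}) can be verified numerically. For the Hamiltonian above one has ${\cal Q}_{xy}(k)=m/[2\pi(k^2+m^2)^{3/2}]$ (with $v_F=1$), so both integrals reduce to radial ones in $E=|{\bf d}(k)|$; the parameter values below are illustrative, and all quantities are in units of $e^2/(2h)$:

```python
import numpy as np

# Numerical check of Eq. (spin-conduc): sigma_tilde = sigma_xy + tau_xy.
# Radial reduction: k dk = E dE and Q_xy d^2k -> m dE / E^2 (v_F = 1).
T, mu, m = 0.15, 0.5, 0.2                       # illustrative values
E = np.linspace(abs(m), 200.0, 400001)          # E = |d(k)|
# g(E) = f_-(E) - f_+(E), written with tanh for numerical stability
g = 0.5*(np.tanh((E + mu)/(2*T)) + np.tanh((E - mu)/(2*T)))
dE = E[1] - E[0]

def trap(y):                                    # trapezoid rule
    return 0.5*np.sum(y[1:] + y[:-1])*dE

sigma_xy = m*trap(g/E**2)
# d(f_+ + f_-)/dmu equals d(f_- - f_+)/dE = g'(E)
tau_xy = -m*trap(np.gradient(g, dE)/E)
sigma_tilde = np.sign(m)*np.sinh(abs(m)/T)/(np.cosh(abs(m)/T) + np.cosh(mu/T))
print(sigma_xy + tau_xy, sigma_tilde)
```

The two printed numbers agree up to the truncation of the radial integral, which is the content of Eq. (\ref{Eq:spin-conduc}).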
However, there is also a region in the $T-\mu$ plane where the Hall conductivity is non-quantized while the topological mass $\tilde \sigma$ vanishes. Note that sharp and large plateau regions up to $T=T_c$ [Fig. 1, panels (a) and (b)] occur only if the gap is larger than $T_c$, which is precisely the case in the experiment from Ref. \onlinecite{Moodera}. In panels (c) and (d) of Fig. 1 we also illustrate the opposite situation, taking for example $J_\perp/T_c=0.8$. Observe that the plateau region here is not as sharp and loses its approximate quantization long before $T_c$ is reached. \section{Magnetization dynamics} Let us now derive the LL equation in the insulating regime where $\mu<|m|$. In this case the CS action (\ref{Eq:CS-action}) contributes to the LL equation in two different ways. This can be seen by decomposing Eq. (\ref{Eq:CS-action}) into two parts, ${\cal S}_{\rm CS}={\cal S}_{\rm Berry}+{\cal S}_{\rm TME}$, where \begin{equation} \label{Eq:Berry} {\cal S}_{\rm Berry}= \frac{\sigma(T,m)}{2} \int dt\int d^2r~({\bf n}\times\hat {\bf z})\cdot \partial_t{\bf n}, \end{equation} is the correction to the Berry phase, and \begin{equation} \label{Eq:TME} {\cal S}_{\rm TME}=-\frac{e\sigma(T,m)}{J} \int dt\int d^2r~{\bf n}\cdot{\bf E}, \end{equation} is the magnetoelectric contribution. Therefore, these two contributions lead to the LL equation, \begin{equation} \label{Eq:LL-main} \left[1-\frac{\sigma(T,m)}{2}({\bf n}\cdot\hat {\bf z})\right]\partial_t{\bf n}= {\bf n}\times{\bf H}_{\rm eff}-\alpha({\bf n}\times\partial_t{\bf n})+{\bf T}_{\rm TME}, \end{equation} where \begin{equation} {\bf T}_{\rm TME}=\frac{e\sigma(T,m)}{J}{\bf n}\times{\bf E} \end{equation} is the magnetoelectric torque. Note that within the one-loop accuracy of our calculations $(\hat {\bf z}\cdot{\bf n})\partial_t{\bf n}\approx\langle n_z\rangle\partial_t{\bf n}=(m/J_\perp)\partial_t{\bf n}$. Thus, due to the magnetoelectric torque, the spin-wave excitations will in general be gapped.
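The modified LL dynamics can be illustrated with a direct time integration of Eq. (\ref{Eq:LL-main}) in the simplest setting: $\alpha=0$, ${\bf H}_{\rm eff}=0$, a constant field ${\bf E}=-E\hat{\bf x}$, and the one-loop replacement ${\bf n}\cdot\hat{\bf z}\to m/J_\perp$. All parameter values below are illustrative; the magnetization then precesses about $\hat{\bf x}$ at the torque-set frequency, renormalized by the Berry-phase factor:

```python
import numpy as np

# Sketch of Eq. (LL-main) with alpha = 0, H_eff = 0 and E = -E xhat;
# within one-loop accuracy n.zhat is replaced by m/J_perp. Values illustrative.
sigma, m, J_perp, J, e, E = 0.4, 0.5, 1.0, 1.0, 1.0, 2.0
c = 1.0 - sigma*m/(2.0*J_perp)       # Berry-phase renormalization factor
omega0 = e*sigma*E/J                 # strength of the magnetoelectric torque

def rhs(n):
    # c dn/dt = T_TME = (e sigma / J) n x E, with E = -E xhat
    return -(omega0/c)*np.cross(n, np.array([1.0, 0.0, 0.0]))

n = np.array([0.0, 0.0, 1.0])
dt, steps = 1e-3, 5000
for _ in range(steps):               # standard RK4 integration
    k1 = rhs(n); k2 = rhs(n + 0.5*dt*k1)
    k3 = rhs(n + 0.5*dt*k2); k4 = rhs(n + dt*k3)
    n = n + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)

t = steps*dt
print(n[2], np.cos(omega0/c*t))      # precession about x at frequency omega0/c
```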
For instance, if ${\bf E}=-E\hat {\bf x}$ with $E={\rm const}$, we have, \begin{equation} \omega_{\rm sw}(k)=\frac{1}{1-\frac{m\sigma}{2J_\perp}}\sqrt{H_{\rm eff}^2(k)+\frac{e^2\sigma^2}{J^2}E^2}, \end{equation} where in the absence of an external magnetic field, $H_{\rm eff}(k)\to 0$ as $k\to 0$. \section{Conclusions} We have discussed the screening of the Coulomb interaction on the surface of a TI at finite temperature and chemical potential. In the presence of proximity-induced ferromagnetism we found a peculiar behavior where the screening length depends on the magnetization via the electronic gap. This implies in particular that no screening occurs at zero temperature if the chemical potential is smaller than the proximity-induced fermionic gap. We have also calculated the coefficient of the CS term (the topological mass) analytically for finite temperature and chemical potential in the regime $\mu<|m|$. This topological mass yields the Hall conductivity in this regime and directly influences the magnetization dynamics at the surface, modifying the spin-wave velocity and inducing a magnetoelectric gap in the spin-wave excitation spectrum. Interestingly, in the insulating regime the topological mass remains nearly quantized even at finite temperature and chemical potential. \acknowledgments We would like to thank the SFB TR 12 program of the Deutsche Forschungsgemeinschaft (DFG) for financial support. The work of IE is also supported by the Ministry of Education and Science of the Russian Federation in the framework of the Increase Competitiveness Program of NUST ``MISiS'' (Nr. K2-2014-015).
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:introduction} The Chung--Diaconis--Graham process \cite{CDG} is the random walk on $\mathbb {F}_p$ (or more generally on $\mathbb {Z}/n\mathbb {Z}$ for composite $n$) defined as \begin{equation}\label{f:CDG} X_{j+1} = a X_j + \varepsilon_{j+1} \,, \end{equation} where $a\in \mathbb {F}_p^*$ is a fixed residue and the random variables $\varepsilon_j$ are independent and identically distributed (in the original paper \cite{CDG} the variables $\varepsilon_j$ were distributed uniformly on $\{-1,0,1\}$ and $a=2$). This process has been studied extensively; see, e.g., \cite{CD}, \cite{CDG}--\cite{Hildebrand}. In this article we are interested in the following characteristic of $X_n$, called the {\it mixing time}: $$ t_{mix}(\varepsilon) := \inf \left\{n ~:~ \max_{A \subseteq \mathbb {F}_p} \left| \mathrm{P} (X_n \in A) - \frac{|A|}{p} \right| \leq \varepsilon\right\} \,. $$ Usually one fixes a concrete value of the parameter $\varepsilon$, e.g., $\varepsilon=1/4$, and below we write $t_{mix} := t_{mix} (1/4)$. The simple random walk on $\mathbb {F}_p$ has mixing time $t_{mix}$ of order $p^2$, see \cite{Peres}, while it was shown in \cite{CDG} (see also the recent paper \cite{EV}) that the mixing time of process \eqref{f:CDG} is at most $O(\log p \cdot \log \log p)$. Hence the Chung--Diaconis--Graham process gives an example of a speedup phenomenon, i.e., a dramatic decrease in the time of convergence. In \cite{He} a more general non--linear version of the Chung--Diaconis--Graham process was studied, defined as \begin{equation}\label{f:f_Markov} X_{j+1} = f(X_j) + \varepsilon_{j+1} \,, \end{equation} where $f$ is a bijection on $\mathbb {F}_p$. In particular, it was proved that for rational functions of bounded degree (defined correctly at poles, see \cite{He}) the mixing time is \begin{equation}\label{f:t_mix_He} t_{mix}(1/4) = O(p^{1+\varepsilon}) \,, \quad \quad \quad \forall \varepsilon>0\,. 
\end{equation} Perhaps the right answer for process \eqref{f:f_Markov} is $t_{mix} = O(\log p)$, but this has been obtained only in the case $f(x) = 1/x$ for $x\neq 0$ and $f(0)=0$, see \cite{HPX}. The proof is based on ${\rm SL}_2 (\mathbb {F}_p)$--action methods from paper \cite{BG}. In \cite{CD} it was asked whether other explicit examples of Markov chains with low mixing time could be provided. Our paper is devoted to a multiplicative form of the Chung--Diaconis--Graham process. Multiplicative variants of the process were studied in \cite{Asci}, \cite{Hildebrand_mult}, \cite{Hildebrand_mult_m}, \cite{Kruglov} and in other papers. Consider the family of functions \begin{equation}\label{def:f_ab} f^{\alpha,\beta}_* (x) = \frac{x}{\alpha x + \beta} \,, \end{equation} where $\alpha, \beta \neq 0$. Most of our results below do not depend on $\alpha, \beta$, so we will not write these parameters in such cases. In Theorems \ref{t:main_intr}, \ref{t:main} we need $f^{\alpha,\beta}_* (x)$ to be a bijection, so we put $f^{\alpha,\beta}_* (-\beta/\alpha):= 1/\alpha$. In turn, Theorems \ref{t:main_intr}, \ref{t:main} do not depend on a particular choice of $(\alpha,\beta)$, so one can consider, say, $\alpha =1$, $\beta=-1$ and write $f_* (x) := f^{1,-1}_* (x)$. Let us formulate a particular case of our main result. \begin{theorem} Let $p$ be a prime number and $\gamma \in \mathbb {F}_p^*$ be a primitive root. Also, let $\varepsilon_{j}$ be random variables distributed uniformly on $\{ \gamma, \gamma^{-1}\}$. Consider the lazy Markov chain $0\neq X_0,X_1,\dots, X_n, \dots$ defined by \[ X_{j+1}=\left\{\begin{array}{ll} f_* \left(X_{j}\right) \cdot \varepsilon_{j+1} & \text { with probability } 1 / 2\,, \\ X_{j} & \text { with probability } 1 / 2 \,. \end{array}\right. \] Then for any $c>0$ and any $n = c \exp(\log p/ \log \log p)$ one has \[ \| P_n - U\| := \frac{1}{2} \max_{A \subseteq \mathbb {F}^*_p} \left| \mathrm{P} (X_n \in A) - \frac{|A|}{p-1} \right| \leqslant e^{-O(c)} \,. 
\] The same is true for the chain $X_{j+1} = f_* \left(X_{j}\right) \cdot \varepsilon_{j+1}$, where the $\varepsilon_j$ are random variables distributed uniformly on $\{ 1, \gamma^{-1}, \gamma\}$. \label{t:main_intr} \end{theorem} In other words, the mixing time of our Markov chain is $\exp(O(\log p/ \log \log p))$. By a similar method we obtain the same bound for another chain with $f_* (x) = \mathrm{ind} (x)$ and for the chain of form \eqref{f:f_Markov} with $f(x)=\exp(x)$, see Theorem \ref{t:ind,exp} and formulae \eqref{f:ind}, \eqref{f:exp} below. As a byproduct we show that in the case $f(x)=x^2$ and $p \equiv 3 \pmod 4$ the mixing time of \eqref{f:f_Markov} is actually $O(p\log p)$, see Remark \ref{r:f(x)=x^2}. Our approach is not analytical, as in \cite{He}, but uses methods from Additive Combinatorics and Incidence Geometry. In particular, we apply some results on growth in the affine group ${\rm Aff}(\mathbb {F}_p)$. The core of our article has much more in common with papers \cite{BG}, \cite{s_Kloosterman} than with \cite{He}, although we extensively use the general line of the proof from the latter paper. From the additive--combinatorial point of view the main innovation is a series of asymptotic formulae for the incidences of points and lines, which were obtained via the action of ${\rm Aff}(\mathbb {F}_p)$, see the beginning of section \ref{sec:proof}. The author hopes that such formulae are interesting in their own right. It is well known, see, e.g., \cite{BG}, \cite{Brendan_rich}, \cite{collinear}, \cite{RS_SL2}, \cite{s_asymptotic}, \cite{s_Kloosterman}, \cite{s_Sidon}, \cite{SdZ}, that Incidence Geometry and the sum--product phenomenon sometimes work better than classical analytical methods, and that is why it is possible to break the square--root barrier, which corresponds to the natural bound \eqref{f:t_mix_He} (for details see Theorem \ref{t:Chung} and the proofs of Theorems \ref{t:main}, \ref{t:ind,exp}). 
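As a purely numerical illustration (not part of the formal argument), the chain of Theorem \ref{t:main_intr} can be examined directly for a small prime. The following Python sketch checks that $f_*(x)=x/(x-1)$ with the convention $f_*(1):=1$ is an involution (hence a bijection) of $\mathbb{F}_p^*$ and then evolves the exact distribution of the lazy chain; the choices $p=101$, $\gamma=2$ and $400$ steps are ours and serve only as an example.

```python
# Illustrative simulation (p = 101, gamma = 2 are our choices): check that
# f_*(x) = x/(x-1) with f_*(1) := 1 is an involution of F_p^*, then evolve
# the exact distribution of the lazy chain X_{j+1} = f_*(X_j) * gamma^{+-1}
# and measure its total variation distance to the uniform distribution.
p = 101          # a small prime
gamma = 2        # 2 is a primitive root mod 101

def f_star(x):   # f_*^{1,-1}(x) = x/(x-1), with the convention f_*(1) := 1
    return 1 if x % p == 1 else (x * pow(x - 1, p - 2, p)) % p

assert all(f_star(f_star(x)) == x for x in range(1, p))  # involution on F_p^*

dist = {x: 0.0 for x in range(1, p)}   # exact distribution, started at X_0 = 1
dist[1] = 1.0
g_inv = pow(gamma, p - 2, p)
for _ in range(400):
    new = {x: 0.5 * m for x, m in dist.items()}   # lazy part: stay put w.p. 1/2
    for x, m in dist.items():
        y = f_star(x)
        new[(y * gamma) % p] += 0.25 * m          # move to f_*(x) * gamma
        new[(y * g_inv) % p] += 0.25 * m          # move to f_*(x) * gamma^{-1}
    dist = new

tv = 0.5 * sum(abs(m - 1.0 / (p - 1)) for m in dist.values())
print("TV distance to uniform after 400 lazy steps:", tv)
```

Of course such an experiment proves nothing about the asymptotics; it merely makes the statement of the theorem concrete for one small instance.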
It turns out that the same method is applicable to a purely additive--combinatorial question on Sidon sets. Sidon sets are a classical subject of Combinatorial Number Theory, see, e.g., survey \cite{Bryant}. Recall that a subset $S$ of an abelian group ${\mathbf G}$ with the group operation $*$ is called a $g$--Sidon set if for any $z\neq 1$ the equation $z=x*y^{-1}$, where $x,y\in S$, has at most $g$ solutions. If $g=1$, then we arrive at the classical definition of Sidon sets \cite{Sidon}. Given an arbitrary set $A\subseteq {\mathbf G}$, we write $\mathsf{Sid}^*(A)$ for the size of a maximal (by cardinality) Sidon subset of the set $A$. It is known \cite{KSS} (also, see \cite{Semchenkov}) that for any subset $A$ of our abelian group ${\mathbf G}$ one has the estimate $$ \mathsf{Sid}^*(A) \gg \sqrt{|A|} $$ and Klurman and Pohoata \cite{Klurman} asked about the possibility of improving the last bound, having {\it two} different operations on a {\it ring} ${\mathbf G}$. In \cite{s_Sidon} the author obtained \begin{theorem} Let $A\subseteq \mathbb {F}$ be a set, where $\mathbb {F} = {\mathbb R}$ or $\mathbb {F} = \mathbb {F}_p$ (in the prime field case suppose, in addition, that $|A|<\sqrt{p}$, say). Then there are some absolute constants $c>0$, $K\geqslant 1$ such that \begin{equation}\label{f:Sidon_intr} \max \{ \mathsf{Sid}^{+}_K (A), \mathsf{Sid}_K^{\times}(A) \} \gg |A|^{1/2+c} \,. \end{equation} \label{t:Sidon_intr} \end{theorem} On upper bounds for \eqref{f:Sidon_intr}, see \cite{R-N_W} and \cite{s_Sidon}. Notice that $\mathsf{Sid}_K^{\times}(A) = \mathsf{Sid}_K^{+}(\log (A))$ and $\mathsf{Sid}_K^{+}(A) = \mathsf{Sid}_K^{\times}(\exp (A))$ for $A\subseteq {\mathbb R}^+$, say. Hence it is possible to rewrite bound \eqref{f:Sidon_intr} in terms of a single operation. We now consider a general question, which was mentioned by A. Warren during the CANT--2021 conference \cite{Warren}. 
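The $g$--Sidon condition above is straightforward to test by brute force. The following Python sketch (our own illustration, with a toy set and a hypothetical greedy helper) verifies the condition in $(\mathbb{Z},+)$ by counting the representation function of differences and extracts a $1$--Sidon subset greedily, in the spirit of the lower bound $\mathsf{Sid}^*(A)\gg\sqrt{|A|}$.

```python
# Check the g-Sidon condition in (Z, +): for every nonzero z, the number of
# ordered pairs (x, y) in S x S with z = x - y must be at most g.
from collections import Counter

def is_g_sidon(S, g):
    diffs = Counter(x - y for x in S for y in S if x != y)
    return all(m <= g for m in diffs.values())

def greedy_sidon_subset(A):
    # Greedily build a 1-Sidon subset of A (a lower bound for Sid(A)).
    S = []
    for a in sorted(A):
        if is_g_sidon(S + [a], 1):
            S.append(a)
    return S

assert is_g_sidon({1, 2, 5, 11}, 1)        # a classical Sidon set
assert not is_g_sidon({1, 2, 3, 4}, 1)     # the difference 1 occurs 3 times
S = greedy_sidon_subset(range(1, 26))
assert is_g_sidon(S, 1) and len(S) >= 5    # consistent with Sid(A) >> sqrt(|A|)
```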
\bigskip {\bf Problem.} {\it Let $f,g$ be some `nice' (say, convex or concave) functions. Is it true that for any set $A\subset {\mathbb R}^+$, say, one has \[ \max \{ \mathsf{Sid}^{+}_K (A), \mathsf{Sid}_K^{+}(f(A)) \} \,, \quad \quad \max \{ \mathsf{Sid}^{\times}_K (A), \mathsf{Sid}_K^{\times}(g(A)) \} \gg |A|^{1/2+c} \,? \] Here $c>0$, $K\geqslant 1$ are some absolute constants. What can be said for $K$ exactly equal to one and for a concrete $c>0$? } \bigskip In this paper we obtain an affirmative answer in the case of $g(x)=x+1$ and $f(x)=\exp(x)$, where in the case of $\mathbb {F}_p$ the latter function is defined as $\exp(x) := g^x$ and $g$ is a fixed primitive root. \begin{theorem} Let $A\subseteq \mathbb {F}$ be a set, where $\mathbb {F} = {\mathbb R}$ or $\mathbb {F} = \mathbb {F}_p$ (in the prime field case suppose, in addition, that $|A|<\sqrt{p}$). Then there are some absolute constants $c>0$, $K\geqslant 1$ such that \begin{equation}\label{f:Sidon_intr_new} \max \{ \mathsf{Sid}^{\times}_K (A), \mathsf{Sid}_K^{\times}(A+1) \} \gg |A|^{1/2+c} \,, \end{equation} and \begin{equation}\label{f:Sidon_intr_new1.5} \max \{ \mathsf{Sid}^{+}_K (A), \mathsf{Sid}_K^{+}(\exp(A)) \} \gg |A|^{1/2+c} \,. \end{equation} On the other hand, for any integer $k\geqslant 1$ there is $A \subseteq \mathbb {F}$ with \begin{equation}\label{f:Sidon_intr_new2} \max \{ \mathsf{Sid}^{\times}_k (A), \mathsf{Sid}^{\times}_k (A+1) \} \ll k^{1/2} |A|^{3/4} \,. \end{equation} \label{t:Sidon_intr_new} \end{theorem} We thank Jimmy He for very useful discussions and valuable suggestions. \section{Definitions and preliminaries} \label{sec:preliminaries} By ${\mathbf G}$ we denote an abelian group. Sometimes we emphasize the group operation by writing $+$ or $\times$ in the considered quantities (such as the energy, the representation function and so on, see below). Let $\mathbb {F}$ be the field ${\mathbb R}$ or $\mathbb {F}=\mathbb {F}_p = \mathbb {Z}/p\mathbb {Z}$ for a prime $p$. 
Let $\mathbb {F}^* = \mathbb {F} \setminus \{0\}$. We use the same capital letter to denote a set $A\subseteq \mathbb {F}$ and its characteristic function $A: \mathbb {F} \to \{0,1 \}$. Given two sets $A,B\subset {\mathbf G}$, define the {\it sumset} of $A$ and $B$ as $$A+B:=\{a+b ~:~ a\in{A},\,b\in{B}\}\,.$$ In a similar way we define the {\it difference sets} and {\it higher sumsets}, e.g., $2A-A$ is $A+A-A$. We write $\dotplus$ for a direct sum, i.e., $|A\dotplus B| = |A| |B|$. For an abelian group ${\mathbf G}$ the Pl\"unnecke--Ruzsa inequality (see, e.g., \cite{TV}) holds, stating that \begin{equation}\label{f:Pl-R} |nA-mA| \leqslant \left( \frac{|A+A|}{|A|} \right)^{n+m} \cdot |A| \,, \end{equation} where $n,m$ are any positive integers. It follows from a more general inequality contained in \cite{Petridis} that for arbitrary sets $A,B,C \subseteq {\mathbf G}$ one has \begin{equation}\label{f:Petridis} |B+C+X| \leqslant \frac{|B+X|}{|X|} \cdot |C+X| \,, \end{equation} where $X\subseteq A$ minimizes the quantity $|B+X|/|X|$. We use representation function notations like $r_{A+B} (x)$ or $r_{A-B} (x)$ and so on, which count the number of ways $x \in {\mathbf G}$ can be expressed as a sum $a+b$ or a difference $a-b$ with $a\in A$, $b\in B$, respectively. For example, $|A| = r_{A-A} (0)$. For any two sets $A,B \subseteq {\mathbf G}$ the {\it additive energy} of $A$ and $B$ is defined by $$ \mathsf{E} (A,B) = \mathsf{E}^{+} (A,B) = |\{ (a_1,a_2,b_1,b_2) \in A\times A \times B \times B ~:~ a_1 - b^{}_1 = a_2 - b^{}_2 \}| \,. $$ If $A=B$, then we simply write $\mathsf{E}^{} (A)$ for $\mathsf{E}^{} (A,A)$. 
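For concreteness, the notions just introduced are easy to compute for small sets. The sketch below (an illustration with our own toy set $A=\{0,1,3,7\}$) evaluates the sumset, the representation function $r_{A-A}$ and the additive energy, and checks the identity $\mathsf{E}(A)=\sum_x r^2_{A-A}(x)$ together with the trivial bounds $|A|^2\leqslant \mathsf{E}(A)\leqslant |A|^3$.

```python
# Sumset, representation function and additive energy over Z, following the
# definitions above; we verify E(A) = sum_x r_{A-A}(x)^2 and the trivial
# bounds |A|^2 <= E(A) <= |A|^3 on a concrete (Sidon) set.
from collections import Counter
from itertools import product

def sumset(A, B):
    return {a + b for a in A for b in B}

def r_diff(A, B):                 # the representation function r_{A-B}
    return Counter(a - b for a in A for b in B)

def energy(A):                    # E(A): quadruples with a1 - b1 = a2 - b2
    return sum(1 for a1, a2, b1, b2 in product(A, repeat=4) if a1 - b1 == a2 - b2)

A = {0, 1, 3, 7}
E = energy(A)
assert E == sum(m * m for m in r_diff(A, A).values())   # E(A) = sum r_{A-A}^2
assert len(A) ** 2 <= E <= len(A) ** 3                  # trivial bounds
print("E(A) =", E, " |A+A| =", len(sumset(A, A)))
```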
More generally, for sets (functions) $A_1,\dots, A_{2k}$ belonging to an arbitrary (noncommutative) group ${\mathbf G}$ and $k\geqslant 2$ define the energy $\mathsf{T}_{k} (A_1,\dots, A_{2k})$ as \[ \mathsf{T}_{k} (A_1,\dots, A_{2k}) = \] \begin{equation}\label{def:T_k} = |\{ (a_1, \dots, a_{2k}) \in A_1 \times \dots \times A_{2k} ~:~ a_1 a^{-1}_2 \dots a_{k-1} a^{-1}_k = a_{k+1} a^{-1}_{k+2} \dots a_{2k-1} a^{-1}_{2k} \}| \,. \end{equation} In the abelian case put for $k\geqslant 2$ \begin{equation}\label{def:E_k} \mathsf{E}^{+}_k (A) = \sum_x r^k_{A-A} (x) = \sum_{\alpha_1, \dots, \alpha_{k-1}} |A\cap (A+\alpha_1) \cap \dots \cap (A+\alpha_{k-1})|^2 \,. \end{equation} Clearly, $|A|^k \leqslant \mathsf{E}^{+}_k (A) \leqslant |A|^{k+1}$. Also, we write $\hat{\mathsf{E}}^{+}_k (A) = \sum_x r^k_{A+A} (x)$. By $\mathrm{ord} (x)$ we denote the multiplicative order of an element $x\in \mathbb {F}^*_p$, and $\mathrm{ind} (x)$ is defined by $x=g^{\mathrm{ind} (x)}$, where $g$ is a fixed primitive root of $\mathbb {F}_p^*$. It is convenient for us to think that the function $\mathrm{ind} (x)$ takes values from $1$ to $p-1$ and hence $\mathrm{ind} (x)$ is defined on $\mathbb {F}_p^*$. In a similar way, we denote by $\exp(x) : \mathbb {F}^*_p \to \mathbb {F}_p^*$ the function $\exp(x) = g^x$, where $x\in \mathbb {F}^*_p$. Let ${\rm Aff} (\mathbb {F})$ be the group of transformations $x\to ax+b$, where $a\in \mathbb {F}^*$, $b\in \mathbb {F}$. Sometimes we write $(a,b)\in {\rm Aff} (\mathbb {F})$ for the map $x\to ax+b$. The signs $\ll$ and $\gg$ are the usual Vinogradov symbols. When the constants in these signs depend on a parameter $M$, we write $\ll_M$ and $\gg_M$. All logarithms are to base $2$. If we have a set $A$, then we will write $a \lesssim b$ or $b \gtrsim a$ if $a = O(b \cdot \log^c |A|)$, $c>0$. Let us denote by $[n]$ the set $\{1,2,\dots, n\}$. \bigskip We now mention several useful results, to which we will appeal in the text. 
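The functions $\mathrm{ind}$ and $\exp$ can likewise be tabulated for a small prime. The sketch below (with our illustrative choices $p=13$, $g=2$) confirms that, with values taken in $\{1,\dots,p-1\}$, they are mutually inverse bijections of $\mathbb{F}_p^*$.

```python
# The maps ind and exp on F_p^* from the definitions above, for a small
# prime with a fixed primitive root g; we check that they are mutually
# inverse bijections of F_p^* and that ind(1) = p - 1 under our convention.
p, g = 13, 2     # 2 is a primitive root mod 13

# ind(x) with values in {1, ..., p-1}, so that ind(1) = p - 1
ind = {pow(g, e, p): e for e in range(1, p)}
exp = {x: pow(g, x, p) for x in range(1, p)}

assert sorted(ind) == list(range(1, p))           # ind is defined on all of F_p^*
assert sorted(ind.values()) == list(range(1, p))  # and ind is a bijection
assert all(ind[exp[x]] == x for x in range(1, p)) # exp and ind are inverse maps
assert all(exp[ind[x]] == x for x in range(1, p))
assert ind[1] == p - 1                            # the convention on the range
```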
We start with a result from \cite{s_Kloosterman}. \begin{lemma} Let $f_1,\dots,f_{2k} : {\mathbf G} \to \mathbb{C}$ be any functions. Then \begin{equation}\label{f:T_2^k} \mathsf{T}^{2k}_{k} (f_1,\dots, f_{2k}) \leqslant \prod_{j=1}^{2k} \mathsf{T}_{k} (f_j) \,, \end{equation} and $\| f \| := \mathsf{T}_k (f)^{1/2k} \geqslant \| f\|_{2k}$, $k\geqslant 2$ is a norm of a function $f: {\mathbf G} \to \mathbb{C}$. \label{l:T_2^k} \end{lemma} The next result on collinear quadruples $\mathsf{Q}(A)$ was proved in \cite{collinear}. We rewrite the asymptotic formula for $\mathsf{Q}(A)$ in the following convenient form. \begin{lemma} Let $A\subseteq \mathbb {F}_p$ be a set and $f_A (x) = A(x)-|A|/p$. Then \[ \sum_{l \in {\rm Aff}(\mathbb {F}_p)} \left| \sum_x f_A(x) f_A(lx) \right|^4 \ll |A|^5 \log |A| \,, \] where the summation over $l$ in the last formula is taken over all affine transformations. \label{l:collinear} \end{lemma} Finally, we need a simplified version of \cite[Theorem 5]{s_Bourgain}. \begin{theorem} Let $A,B\subseteq \mathbb {F}_p$ be sets, $|AB|\leqslant M|A|$, $k\geqslant 2$, and $|B| \gtrsim_k M^{2^{k+1}}$. Then \begin{equation}\label{f:upper_T(AB)} \mathsf{T}^{+}_{2^k} (A) \lesssim_k M^{2^{k+1}} \left( \frac{|A|^{2^{k+1}}}{p} + |A|^{2^{k+1}-1} \cdot |B|^{-\frac{k-1}{2}} \right) \,. \end{equation} \label{t:upper_T(AB)} \end{theorem} \section{The proof of the main result} \label{sec:proof} We start with our counting Proposition \ref{p:counting_collinear}. Let $\mathcal{P}, \mathcal{L} \subseteq \mathbb {F}_p \times \mathbb {F}_p$ be a set of points and a set of lines, respectively. The number of incidences between $\mathcal{P}$ and $\mathcal{L}$ is \begin{equation}\label{def:I(P,L)} \mathcal{I} (\mathcal{P},\mathcal{L}) :=|\{(q,l)\in \mathcal{P} \times \mathcal{L}:\,q\in l\}| \,. \end{equation} \begin{proposition} Let $A,B \subseteq \mathbb {F}_p$ be sets and $\mathcal{L}$ be a set of affine transformations. 
Then for any positive integer $k$ one has \begin{equation}\label{f:counting_collinear} {\cal I} (A\times B, \mathcal{L}) - \frac{|A||B||\mathcal{L}|}{p} \ll \sqrt{|A||B| |\mathcal{L}|} \cdot (\mathsf{T}_{2^k} (\mathcal{L}) |A| \log |A|)^{1/2^{k+2}} \,. \end{equation} \label{p:counting_collinear} \end{proposition} \begin{proof} We have $$ {\cal I} (A\times B, \mathcal{L}) = \frac{|A||B||\mathcal{L}|}{p} + \sum_{x\in B} \sum_{l\in \mathcal{L}} f_A (l x) = \frac{|A||B||\mathcal{L}|}{p} + \sigma \,. $$ To estimate the error term $\sigma$ we use the H\"older inequality several times as in \cite{Brendan_rich}, \cite{RS_SL2} and obtain \[ \sigma^2 \leqslant |B| \sum_{h} r_{\mathcal{L}^{-1}\mathcal{L}} (h) \sum_x f_A (x) f_A (h x) \,, \] and further \[ \sigma^{2^{k}} \leqslant |B|^{2^{k-1}} |A|^{2^{k-1}-1} \sum_{h} r_{(\mathcal{L}^{-1}\mathcal{L})^{2^{k-1}}} (h) \sum_x f_A (x) f_A (h x) \,. \] Finally, applying Lemma \ref{l:collinear} and the H\"older inequality one more time, we derive \[ \sigma^{2^{k+2}} \ll |B|^{2^{k+1}} |A|^{2^{k+1}-4} \left(\sum_{h} r^{4/3}_{(\mathcal{L}^{-1}\mathcal{L})^{2^{k-1}}} (h) \right)^3 \cdot |A|^5 \log |A| \ll \] \[ \ll |B|^{2^{k+1}} |A|^{2^{k+1}-4} \mathsf{T}_{2^k} (\mathcal{L}) |\mathcal{L}|^{2^{k+1}} \cdot |A|^5 \log |A| \] as required. $\hfill\Box$ \end{proof} \bigskip The main advantage of bound \eqref{f:counting_collinear} of Proposition \ref{p:counting_collinear} is that we have an asymptotic formula for the number of incidences ${\cal I} (A\times B, \mathcal{L})$ (and the set $\mathcal{L}$ can be rather small) but not just upper bounds for ${\cal I} (\mathcal{P}, \mathcal{L})$ as in \cite{SdZ}. An asymptotic formula for the quantity ${\cal I} (\mathcal{P}, \mathcal{L})$ was known before in the specific case of large sets (see \cite{Vinh} or estimate \eqref{f:Vinh} below) and in the case of Cartesian products but with large sets of lines, see \cite{s_asymptotic} and \cite{SdZ}. 
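To make the shape of such asymptotic formulae concrete, one can compare the incidence count with the main term on a random instance. The sketch below (a toy experiment with our own parameters, not a verification of the proposition) counts pairs $(x,l)$ with $x\in B$, $l\in\mathcal{L}$ and $lx\in A$, and prints the count next to $|A||B||\mathcal{L}|/p$.

```python
# Counting incidences between the grid A x B and a family of affine maps
# l: x -> a*x + b over F_p, and comparing with the main term |A||B||L|/p
# from the asymptotic formula (a small random instance; our own illustration).
import random

random.seed(5)
p = 101
A = set(random.sample(range(p), 40))
B = set(random.sample(range(p), 40))
L = {(random.randrange(1, p), random.randrange(p)) for _ in range(60)}

# I(A x B, L): pairs (x in B, l in L) with l(x) in A
I = sum(1 for x in B for (a, b) in L if (a * x + b) % p in A)

main_term = len(A) * len(B) * len(L) / p
print(I, main_term)   # the error |I - main_term| is much smaller than I
```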
\bigskip In the next lemma we estimate the energy $\mathsf{T}_k (\mathcal{L})$ for a concrete family of lines which will appear in the proofs of the results of our paper. \begin{lemma} Let $A,B \subseteq \mathbb {F}^*_p$ be sets, and $\mathcal{L} = \{ (a,b) ~:~ a\in A,\, b\in B\} \subseteq {\rm Aff}(\mathbb {F}_p)$. Then for any $k\geqslant 2$ one has \begin{equation}\label{f:T_k-E(L)} \mathsf{T}_k (\mathcal{L}) \leqslant |A|^{2k-1} \mathsf{T}^{+}_k (B) \,. \end{equation} \label{l:T_k-E(L)} \end{lemma} \begin{proof} Let us consider the case of even $k$; for odd $k$ the arguments are similar. One has $\mathcal{L}^{-1}\mathcal{L} = \{ (a/c, (b-d)/c) ~:~ a,c\in A,\, b,d\in B \}$. Considering $\mathsf{T}_{2k} (\mathcal{L})$, we arrive at two equations. The first one is \begin{equation}\label{f:first_eq} \frac{a_1 \dots a_k}{c_1 \dots c_k} = \frac{a'_1 \dots a'_k}{c'_1 \dots c'_k} \,. \end{equation} If we fix all variables $a_1 \dots a_k, a'_1 \dots a'_k$, $c_1 \dots c_k, c'_1 \dots c'_k \in A$, then the number of the solutions to the second equation is $\mathsf{T}^{+}_{2k} (\alpha_1 B, \dots, \alpha_{2k} B)$, where $\alpha_1,\dots, \alpha_{2k} \in \mathbb {F}_p^*$ are some elements of $A$ depending on the fixed variables. The last quantity is at most $\mathsf{T}^{+}_{2k} (B)$ by Lemma \ref{l:T_2^k}. Returning to \eqref{f:first_eq}, we obtain the required inequality. $\hfill\Box$ \end{proof} \bigskip Now we can obtain our first driving result. \begin{theorem} Let $A,B, X, Y_1, Y_2, Z \subseteq \mathbb {F}^*_p$ be sets, $A=X Y_1$, $B=X Y_2$, $|A|=|X||Y_1|/K_*$, $|B|=|X||Y_2|/K_*$, and $|X Z|\leqslant K|X|$, $|ZZ| \leqslant \tilde{K} |Z|$. Suppose that $|Z| \geqslant p^\delta$ for a certain $\delta \gg \log^{-1} \left( \frac{\log p}{\log \tilde{K}} \right)$. 
Then for a certain $k \ll \delta^{-1}$ the following holds \begin{equation}\label{f:sol_Sidon} |\{ (a,b)\in A \times B ~:~ a = f_* (b) \}| - \frac{K^2 K_*^2 |A||B|}{p} \ll K^2 K^2_* \tilde{K} \sqrt{|A||B|} \cdot p^{-\frac{1}{16^{k}}} \,. \end{equation} \label{t:sol_Sidon} \end{theorem} \begin{proof} Let $\sigma$ be the quantity from the left--hand side of \eqref{f:sol_Sidon}. Also, let $Q_1=AZ$, $Q_2 = BZ$. Then $|Q_1|\leqslant |X Z||Y_1| \leqslant K|X| |Y_1|= KK_* |A|$ and similarly for $Q_2$. We have \[ |Z|^2 \sigma \leqslant |\{ (q_1,q_2,z_1,z_2)\in Q_1\times Q_2 \times Z^2 ~:~ q_1/z_1 = f_* (q_2/z_2) \}| \,. \] Using the definition of the function $f_*$, we arrive at the equation \begin{equation}\label{f:lines} \frac{q_1}{z_1} = \frac{q_2}{\alpha q_2 + \beta z_2} \quad \quad \implies \quad \quad \frac{z_1}{q_1} - \frac{\beta z_2}{q_2} = \alpha \,. \end{equation} The last equation can be interpreted as points/lines incidences with the set of lines $\mathcal{L}= Z \times Z$, where any $l\in \mathcal{L}$ has the form $l: z_1X-\beta z_2 Y = \alpha$, and the set of points $\mathcal{P} = Q^{-1}_1 \times Q^{-1}_2$. Applying Proposition \ref{p:counting_collinear}, we obtain for any $k$ \[ \sigma - \frac{|Q_1||Q_2|}{p} \ll |Z|^{-1} \sqrt{|Q_1||Q_2|} \cdot (\mathsf{T}_{2^k} (\mathcal{L}) |Q_1| \log |Q_1|)^{1/2^{k+2}} \,. \] Using our bounds on the sizes of the sets $Q_1,Q_2$ and combining them with Lemma \ref{l:T_k-E(L)} and Theorem \ref{t:upper_T(AB)}, we get \[ \sigma - \frac{K^2 K^2_* |A||B|}{p} \lesssim K K_* \tilde{K} \sqrt{|A||B|} \cdot \left(K K_* |A| |Z|^{-\frac{k+1}{2}} \right)^{1/2^{k+2}} \] provided $|Z| \gtrsim_k \tilde{K}^{2^{k+1}}$ and $|Z|^{k+1} \ll p^2$. Taking $|Z|^{k} \sim p$, we satisfy the second condition and obtain \[ \sigma - \frac{K^2 K^2_* |A||B|}{p} \ll K^2 K^2_* \tilde{K} \sqrt{|A||B|} \cdot p^{-\frac{1}{16^{k}}} \,. 
\] Choosing $k \sim 1/\delta$, we have the condition $|Z|^{k} \sim p$, and the assumption $\delta \gg \log^{-1} \left( \frac{\log p}{\log \tilde{K}} \right)$ implies that the inequality $|Z| \gtrsim_k \tilde{K}^{2^{k+1}}$ takes place. $\hfill\Box$ \end{proof} \begin{remark} One can increase the generality of Theorem \ref{t:sol_Sidon} by considering different sets $X_1, X_2, Z_1,Z_2$ such that $|X_1 Z_1|\leqslant K_1 |X_1|$, $|X_2 Z_2|\leqslant K_2 |X_2|$ and so on. We leave the proof of this generalization to the interested reader. \end{remark} \begin{corollary} Let $g$ be a primitive root and $I,J \subseteq \mathbb {F}^*_p$ be two geometric progressions with the same base $g$ such that \begin{equation}\label{cond:sol_Sidon_cor} \exp(C \log p/ \log \log p) \ll |I| = |J| \leqslant p/2 \,, \end{equation} where $C>0$ is an absolute constant. Then \begin{equation}\label{f:sol_Sidon_cor} |\{ (a,b)\in I \times J ~:~ a = f_* (b) \}| \leqslant (1-\kappa) |I|\,, \end{equation} where $\kappa>0$ is an absolute constant. \label{c:sol_Sidon} \end{corollary} \begin{proof} Let $I = a\cdot \{1,g,\dots, g^n\}$, $J= b\cdot \{1,g,\dots, g^n\}$, where $n=|I|=|J|$. We apply Theorem \ref{t:sol_Sidon} with $A=I$, $B=J$, $Y_1 = \{a\}$, $Y_2 = \{b\}$, $X=\{1,g,\dots, g^n\}$, $K_*= 1$ and $Z = \{1,g,\dots, g^m \}$, where $m=[cn]$, $c=1/4$. Then $K\leqslant 1+c$ and $\tilde{K} <2$. By formula \eqref{f:sol_Sidon}, we obtain $$ |\{ (a,b)\in I \times J ~:~ a = f_* (b) \}| - \frac{(1+c)^2 |I||J|}{p} \ll |I| \cdot p^{-\frac{1}{16^{k}}} \,. $$ We have $\frac{(1+c)^2 |I||J|}{p} \leqslant \frac{25}{32} |I|$ because $n\leqslant p/2$. Recalling that $k\sim 1/\delta$ and $\delta \gg (\log \log p)^{-1}$, we derive estimate \eqref{f:sol_Sidon_cor} thanks to our condition \eqref{cond:sol_Sidon_cor}. This completes the proof. $\hfill\Box$ \end{proof} \bigskip Now we are ready to prove Theorem \ref{t:main_intr} from the introduction, which we formulate in a slightly more general form. 
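The corollary can also be illustrated numerically: for two geometric progressions with a primitive-root base in a small prime field, the number of solutions of $a=f_*(b)$ turns out to be close to the main term $|I||J|/p$ and, in particular, well below $|I|$. The parameters below are our own illustrative choices, and the primitive root is found by brute force.

```python
# A small numerical illustration: for geometric progressions I, J in F_p^*
# with the same primitive-root base g, count the pairs (a, b) in I x J with
# a = f_*(b) and compare with |I| and with the main term |I||J|/p.
p, n = 1009, 200

def order(a):                    # multiplicative order of a mod p
    o, x = 1, a % p
    while x != 1:
        x, o = (x * a) % p, o + 1
    return o

g = next(a for a in range(2, p) if order(a) == p - 1)   # a primitive root

I = [pow(g, e, p) for e in range(n)]
J = list(I)                       # same progression (base point 1)

def f_star(x):                    # f_*(x) = x/(x-1), with f_*(1) := 1
    return 1 if x % p == 1 else (x * pow(x - 1, p - 2, p)) % p

Iset = set(I)
count = sum(1 for b in J if f_star(b) in Iset)
print(count, len(I), len(I) * len(J) / p)   # solutions vs |I| vs main term
```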
In our arguments we use some parts of the proof from \cite{He}. \begin{theorem} Let $p$ be a prime number and $\gamma \in \mathbb {F}_p^*$ be an element of order at least $$ \exp(\Omega(\log p/\log \log p)) \,. $$ Also, let $\varepsilon_{j}$ be the random variables distributed uniformly on $\{\gamma^{-1}, \gamma \}$. Consider the lazy Markov chain $0\neq X_0,X_1,\dots, X_n, \dots $ defined by \[ X_{j+1}=\left\{\begin{array}{ll} f_* \left(X_{j}\right) \cdot \varepsilon_{j+1} & \text { with probability } 1 / 2\,, \\ X_{j} & \text { with probability } 1 / 2 \,. \end{array}\right. \] Then for an arbitrary $c>0$ and for any $n = c \exp(\log p/ \log \log p)$ one has \[ \| P_n - U\| := \frac{1}{2} \max_{A \subseteq \mathbb {F}^*_p} \left| \mathrm{P} (X_n \in A) - \frac{|A|}{p-1} \right| \leqslant e^{-O(c)} \,. \] The same is true for the chain $X_{j+1} = f_* \left(X_{j}\right) \cdot \varepsilon_{j+1}$, where $\varepsilon_j$ denote the random variables distributed uniformly on $\{ 1, \gamma^{-1}, \gamma\}$. \label{t:main} \end{theorem} Let $P$ be an ergodic Markov chain on a $k$--regular directed graph $G=G(V,E)$. Let $h(G)$ be the Cheeger constant \begin{equation}\label{def:Cheeger} h(G) = \min_{|S| \leqslant |V|/2} \frac{e(S,S^c)}{k|S|} \,, \end{equation} where $e(S,S^c)$ is the number of edges between $S$ and the complement of $S$. We need a result from \cite{Chung} (a more compact version is \cite[Theorem 4.1]{He}). \begin{theorem} Let $P$ be an ergodic Markov chain on a graph $G=G(V,E)$. Consider the lazy chain $X_0,X_1,\dots, X_n, \dots $ with transition matrix $(I+P)/2$, and starting from a certain deterministic $X_0$. Then for any $c>0$ and any $n = c h(G)^{-2} \log |V|$ one has \[ \max_{A\subseteq V} \left| \mathrm{P} (X_n \in A) - \frac{|A|}{|V|} \right| \leqslant e^{-O(c)} \,. \] \label{t:Chung} \end{theorem} In our case $G=G(V,E)$ with $V=\mathbb {F}_p^*$ and $x \to y$ iff $y=f_*(x) \gamma^{\pm 1}$. Thus our task is to estimate the Cheeger constant of $G$. 
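For a very small prime the Cheeger constant \eqref{def:Cheeger} of this graph can be computed by exhaustive search over all subsets $S$ with $|S|\leqslant |V|/2$. The sketch below (with our illustrative choices $p=11$, $\gamma=2$) confirms that $h(G)$ is strictly positive for this instance.

```python
# Brute-force evaluation of the Cheeger constant for the 2-regular directed
# graph on F_p^* with edges x -> f_*(x) * gamma^{+-1}, for a tiny prime
# (an illustration of the quantity that the proof bounds from below).
from itertools import combinations

p = 11
gamma = 2                      # 2 is a primitive root mod 11

def f_star(x):                 # f_*(x) = x/(x-1), with f_*(1) := 1
    return 1 if x % p == 1 else (x * pow(x - 1, p - 2, p)) % p

V = list(range(1, p))
g_inv = pow(gamma, p - 2, p)
edges = [(x, (f_star(x) * t) % p) for x in V for t in (gamma, g_inv)]

def cheeger():
    best = 1.0
    for r in range(1, len(V) // 2 + 1):
        for S in combinations(V, r):
            Sset = set(S)
            e_out = sum(1 for (u, v) in edges if u in Sset and v not in Sset)
            best = min(best, e_out / (2 * len(S)))   # out-degree k = 2
    return best

hG = cheeger()
print("h(G) =", hG)            # strictly positive: no invariant proper subset
```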
Take any $S$, $|S|\leqslant p/2$, and write $S$ as the disjoint union $S =\bigsqcup_{j\in J} G_j$, where $G_j$ are geometric progressions with step $\gamma^2$. Here and below we use the fact that $\mathbb {F}_p^*$ is cyclic, isomorphic to $\mathbb {Z}/(p-1)\mathbb {Z}$ and generated by a fixed primitive root $g$. Consider $z,z\gamma, z\gamma^2$, where $z\in S$ is a right endpoint (if it exists) of some $G_j$. Then $z\gamma^2 \in S^c$ and $z,z\gamma^2$ are both connected to $f^{-1}_* (z\gamma)$. The point $f^{-1}_* (z\gamma)$ belongs either to $S$ or to $S^c$, but in any case we have an edge between $S$ and $S^c$. Let $J=J_0 \bigsqcup J_1$, where for $j\in J_0$ the set $G_j$ has no right endpoint and $J_1 = J\setminus J_0$. Clearly, $|J_0| \leqslant 2|S|/\mathrm{ord}(\gamma)$. By the argument above \begin{equation}\label{f:est_h1} 2h(G) \geqslant \frac{|J_1|}{|S|} \geqslant \frac{|J|}{|S|} - \frac{2}{\mathrm{ord}(\gamma)} \,. \end{equation} We want to obtain another lower bound for $h(G)$, which works better in the case when $|J|$ is small. Put $L=|S|/|J|$ and let $\omega \in (0,1)$ be a small parameter, which we will choose later. One has $\sum_{j\in J} |G_j| =|S|$ and hence $\sum_{j ~:~ |G_j|\geqslant \omega L} |G_j| \geqslant (1-\omega) |S|$. Splitting each such $G_j$ up into intervals of length exactly $L_\omega := \omega L/2$, we see that the total size of the leftover pieces is at most $\omega |S|$. Hence we have obtained some geometric progressions $G'_i$, $i\in I$, having lengths $L_\omega$ and step $\gamma^2$ and such that $\sum_{i\in I} |G'_i| \geqslant (1-2\omega) |S|$. Put $S' = \bigsqcup_{i\in I} G'_i$ and let $\Omega = S\setminus S'$, $|\Omega| \leqslant 2\omega |S|$. In other words, we have $S' = XY$, $|S'| = |X||Y|\geqslant (1-2\omega)|S|$, where $X=[1,\gamma^2, \dots, \gamma^{2(L_\omega-1)}]$ and $Y$ is a certain set of multiplicative shifts. 
Clearly, \begin{equation}\label{f:h_2} 2h(G) \geqslant \frac{e(S,S^c)}{|S|} \geqslant 1- \frac{e(S,S)}{|S|} \geqslant 1 - 8 \omega - \frac{e(S',S')}{|S|} \,. \end{equation} Put $Z= [1,\gamma^2, \dots, \gamma^{2(L'_\omega-1)}]$, where $L'_\omega = [cL_\omega]$ with $c=1/4$. We have $|ZZ|< 2|Z|$. Also, by the assumption the element $\gamma$ has order at least $\exp(\Omega(\log p/\log \log p))$. Using Theorem \ref{t:sol_Sidon} with $K=1+c$, $\tilde{K}=2$, $k\sim 1/\delta$ and taking $\delta \geqslant C(\log \log p)^{-1}$ for a sufficiently large constant $C>0$, we get \[ \frac{e(S',S')}{|S|} - \frac{25 |S'|}{16 p} \ll p^{-\frac{1}{16^{k}}} \leqslant \frac{1}{32} \,. \] Recalling that $|S'|\leqslant |S| \leqslant p/2$, we derive \[ \frac{e(S',S')}{|S|} \leqslant \frac{25}{32} + \frac{1}{32} = \frac{13}{16} \,. \] Substituting the last formula into \eqref{f:h_2}, taking $p$ sufficiently large and choosing $\omega = 2^{-8}$, say, we have $h(G) \geqslant 1/32$. It remains to check the only condition of Theorem \ref{t:sol_Sidon}, namely $|Z| \geqslant p^\delta$. If it fails, then $$ |S|/|J| = L \ll L_\omega \ll |Z| < p^\delta \sim \exp (O(\log p /\log \log p)) \,, $$ and hence $|J| \gg |S| \exp (-O(\log p /\log \log p))$. But then by \eqref{f:est_h1} and our assumption $\mathrm{ord}(\gamma) = \exp(\Omega(\log p/\log \log p))$, we see that in any case $h(G) \gg \exp (-O(\log p /\log \log p))$. Combining the last bound for the Cheeger constant with Theorem \ref{t:Chung}, we derive $n \leqslant \exp (O(\log p /\log \log p))$. The last part of Theorem \ref{t:main} follows by the same method, combined with the arguments from \cite{CD} and \cite[Section 4.3]{He}. We need to ensure that the bijection $f_* (f^{-1}_* (\cdot) \gamma) :\mathbb {F}_p^* \to \mathbb {F}_p^*$ has the same form as in \eqref{def:f_ab} (with our usual convention that $f_*(-\beta/\alpha) =1/\alpha$, of course). 
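This direct calculation can also be sketched numerically. For a small prime we check that $x\mapsto f_*(f_*^{-1}(x)\gamma)$ is a bijection of $\mathbb{F}_p^*$ which, away from the two points where the conventions are invoked, coincides with $x/(\alpha x+\beta)$ for $\alpha=(\gamma-1)/\gamma$, $\beta=1/\gamma$; these values of $\alpha,\beta$ are our own computation (via the matrix identity mentioned below), and $p=101$, $\gamma=2$ are illustrative choices.

```python
# Check numerically that f_*(f_*^{-1}(x) * gamma) again has the Moebius form
# x / (alpha*x + beta) of (def:f_ab), with alpha = (gamma-1)/gamma and
# beta = 1/gamma (our computed parameters), away from the two exceptional
# points x = 1 and x = 1/(1 - gamma) where the conventions are invoked.
p, gamma = 101, 2

def inv(t):
    return pow(t % p, p - 2, p)

def f_star(x):                 # f_* = f_*^{1,-1}; it is an involution
    return 1 if x % p == 1 else (x * inv(x - 1)) % p

composed = {x: f_star((f_star(x) * gamma) % p) for x in range(1, p)}
assert sorted(composed.values()) == list(range(1, p))   # a bijection of F_p^*

alpha, beta = ((gamma - 1) * inv(gamma)) % p, inv(gamma)
exceptional = {1, inv(1 - gamma)}       # where the conventions apply
for x in range(1, p):
    if x not in exceptional:
        assert composed[x] == (x * inv((alpha * x + beta) % p)) % p
```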
It can be checked via a direct calculation or thanks to the fact that $f_*$ corresponds to the standard action of a lower--triangular matrix in $\mathrm{GL}_2 (\mathbb {F}_p)$. This completes the proof of Theorem \ref{t:main}. $\hfill\Box$ \begin{remark} Consider the lazy Markov chain \eqref{f:f_Markov} with $f(x)=x^2$ and $p\equiv 3 \pmod 4$. Using the same argument as in the proof of Theorem \ref{t:main}, we need to deal with the equation $y+a = f(x+b) = x^2 +2bx +b^2$, where $a,b$ belong to some arithmetic progression $P$ and $x,y$ are from a disjoint union of $J$ arithmetic progressions, see details in \cite{He} (strictly speaking, now the stationary distribution is not uniform and, moreover, our graph is not regular, which requires a modification of definition \eqref{def:Cheeger}). The last equation can be interpreted as points/lines incidences with the set of lines $\mathcal{L}$ of the form $Y=2bX + (b^2-a)$ and the set of points $\mathcal{P} = \{(x, y-x^2)\}$. Using the main result from \cite{Vinh} (also, see \cite{s_asymptotic}), we obtain \begin{equation}\label{f:Vinh} \left| \mathcal{I} (\mathcal{P}, \mathcal{L}) - \frac{|\mathcal{P}||\mathcal{L}|}{p} \right| \leqslant \sqrt{|\mathcal{P}||\mathcal{L}|p} \,. \end{equation} By formula \eqref{f:Vinh} and the calculations as above (see details in \cite[Section 4.2]{He}) we have an expander if $|S|/J \sim |P| \gg \sqrt{p}$. If the last inequality does not hold, then $J\gg |S|/\sqrt{p}$ and by an analogue of formula \eqref{f:est_h1}, we obtain $h(G) \gg 1/\sqrt{p}$. Hence in view of Theorem \ref{t:Chung}, we see that the mixing time is $O(p\log p)$. 
\label{r:f(x)=x^2} \end{remark} The method of the proof of Theorem \ref{t:main} (see also Remark \ref{r:f(x)=x^2}) allows us to easily produce some lazy Markov chains on $\mathbb {F}^*_p$ with the mixing time $O(p\log p)$, e.g., \begin{equation}\label{f:ind} X_{j+1}=\left\{\begin{array}{ll} \mathrm{ind} \left(X_{j}\right) \cdot \varepsilon_{j+1} & \text { with probability } 1 / 2\,, \\ X_{j} & \text { with probability } 1 / 2 \end{array}\right. \end{equation} ($X_0 \neq 0$) or as in \eqref{f:f_Markov} with $f(x)= \exp(x)$, namely, \begin{equation}\label{f:exp} X_{j+1}=\left\{\begin{array}{ll} \exp \left(X_{j}\right) + \varepsilon_{j+1} & \text { with probability } 1 / 2\,, \\ X_{j} & \text { with probability } 1 / 2 \,. \end{array}\right. \end{equation} Indeed, in the first chain we arrive at the equation $ya=\mathrm{ind} (x) + \mathrm{ind} (b)$ and in the second one at $y+b=\exp(x) \cdot \exp(a)$. Both equations correspond to points/lines incidences. Let us emphasize once more that our functions $\mathrm{ind} (x), \exp(x)$ are defined on $\mathbb {F}_p^*$ but not on $\mathbb {F}_p$. In fact, one has a much better bound for the mixing time of the two Markov chains above. \begin{theorem} Let $p$ be a prime number and $\gamma \in \mathbb {F}^*_p$. Then the mixing time of Markov chain \eqref{f:exp} is $\exp(O(\log p/\log \log p))$. If, in addition, the order of $\gamma$ is $\exp(\Omega(\log p/\log \log p))$, then the mixing time of Markov chain \eqref{f:ind} is $\exp(O(\log p/\log \log p))$. \label{t:ind,exp} \end{theorem} \begin{proof} Our arguments follow the same scheme as the proofs of Theorem \ref{t:sol_Sidon} and Theorem \ref{t:main}. In both cases we need to estimate the energy $\mathsf{T}_{2^k}$ of the set of affine transformations $L$ of the form $x\to gx+r$, where the coefficients $g\in \Gamma$ and $r\in P$ belong to a geometric and an arithmetic progression of size $\sqrt{|L|}$, respectively. 
An application of Lemma \ref{l:T_k-E(L)} is useless because $\mathsf{T}^{+}_{2^k} (P)$ is maximal. Nevertheless, we consider the set $L^{-1}L$ and notice that any element of $L^{-1}L$ has the form $x\to g_2/g_1 x + (r_2-r_1)/g_1$, where $g_1,g_2\in \Gamma$, $r_1,r_2 \in P$. Now, in view of the arguments of Lemma \ref{l:T_k-E(L)}, our task is to estimate $|\Gamma|^{2^{k+1}-1} |P|^{2^{k+1}} \mathsf{T}^{+}_{2^k} (Q/\Gamma)$, where $Q=P-P$. Write $W=Q/\Gamma$ and notice that $|Q|<2|P|$. Taking $X\subseteq \Gamma^{-1}$ as in inequality \eqref{f:Petridis} and applying this inequality with $A=\Gamma^{-1}$, $B=\Gamma^{-1}$ and $C=Q$, we see that $$ |W X| = |Q/\Gamma \cdot X| \leqslant 2|Q/\Gamma| = 2|W| \,. $$ Increasing the constant $2$ to $O(1)$ in the formula above, one can easily assume (see, e.g., \cite{TV}) that the same holds for a certain set $Y$ with $|Y|\geqslant |\Gamma|/2$. Applying Theorem \ref{t:upper_T(AB)} with $A=W$ and $B=Y$, we obtain \[ \mathsf{T}^{+}_{2^k} (W) \lesssim_k \frac{|W|^{2^{k+1}}}{p} + |W|^{2^{k+1}-1} \cdot |\Gamma|^{-\frac{k-1}{2}} \,. \] Here we need to assume that $|\Gamma| \gtrsim_k 1$. Hence, arguing as in Lemma \ref{l:T_k-E(L)} and using the trivial bound $|W| \leqslant |L|$, we get \[ \mathsf{T}_{2^{k+1}} (L) \lesssim_k |\Gamma|^{2^{k+1}-1} |P|^{2^{k+1}} \left( \frac{|L|^{2^{k+1}}}{p} + |L|^{2^{k+1}-1} \cdot |\Gamma|^{-\frac{k-1}{2}} \right) \ll |L|^{2^{k+2}} \cdot |\Gamma|^{-\frac{k+5}{2}} \,, \] provided $|L| \gtrsim_k 1$ and $|\Gamma|^{k+3} \ll p^2$. After that we apply the same argument as in the proof of Theorem \ref{t:main}. $\hfill\Box$ \end{proof} \section{Combinatorial applications} \label{sec:applications} We now obtain an application of the developed technique to Sidon sets, following the arguments from \cite{s_Sidon}. We need Lemma 3, Lemma 7 and Theorem 4 from this paper. \begin{lemma} Let $A\subseteq {\mathbf G}$ be a set.
Then for any $k\geqslant 2$ one has \begin{equation}\label{f:random_Ek} \mathsf{Sid}_{3k-3} (A) \gg \left( \frac{|A|^{2k}}{\mathsf{E}_k (A)} \right)^{1/(2k-1)} \,, \quad \quad \mbox{ and } \quad \quad \mathsf{Sid}_{2k-2} (A) \gg \left( \frac{|A|^{2k}}{\hat{\mathsf{E}}_k (A)} \right)^{1/(2k-1)} \,. \end{equation} \label{l:random_Ek} \end{lemma} \begin{lemma} Let $A\subseteq {\mathbf G}$ be a set, $A=B+C$, and $k\geqslant 1$ be an integer. Then $$ \mathsf{Sid}^{}_k (A) \leqslant \min \{ |C| \sqrt{k|B|} + |B|, |B| \sqrt{k|C|} + |C| \} \,. $$ \label{l:L_in_sumsets} \end{lemma} \begin{theorem} Let $A\subseteq {\mathbf G}$ be a set, $\delta, \varepsilon \in (0,1]$ be parameters, $\varepsilon \leqslant \delta$.\\ $1)~$ Then there is $k=k(\delta, \varepsilon) = \exp(O(\varepsilon^{-1} \log (1/\delta)))$ such that either $\mathsf{E}^{}_k (A) \leqslant |A|^{k+\delta}$ or there is $H\subseteq {\mathbf G}$, $|H| \gtrsim |A|^{\delta(1-\varepsilon)}$, $|H+H| \ll |A|^\varepsilon |H|$ and there exists $Z\subseteq {\mathbf G}$, $|Z| |H| \ll |A|^{1+\varepsilon}$ with $$|(H\dotplus Z) \cap A| \gg |A|^{1-\varepsilon} \,.$$ $2)~$ Similarly, either there is a set $A'\subseteq A$, $|A'| \gg |A|^{1-\varepsilon}$ and $P\subseteq {\mathbf G}$, $|P| \gtrsim |A|^\delta$ such that for all $x\in A'$ one has $r_{A-P}(x) \gg |P| |A|^{-\varepsilon}$ or $\mathsf{E}_k (A) \leqslant |A|^{k+\delta}$ with $k \ll 1/\varepsilon$. \label{t:Ek} \end{theorem} To deal with the real setting we need the famous Szemer\'edi--Trotter Theorem \cite{ST}. \begin{theorem} Let $\mathcal{P}$, $\mathcal{L}$ be finite sets of points and lines in ${\mathbb R}^2$. Then $$ \mathcal{I} (\mathcal{P}, \mathcal{L}) \ll (|\mathcal{P}||\mathcal{L}|)^{2/3} + |\mathcal{P}| + |\mathcal{L}| \,. $$ \label{t:SzT} \end{theorem} Now we are ready to prove Theorem \ref{t:Sidon_intr}. Take any $\delta<1/2$, e.g., $\delta=1/4$, and let $\varepsilon \leqslant \delta/4$ be a parameter, which we will choose later.
In view of Lemma \ref{l:random_Ek} we see that $\mathsf{E}^{\times}_k (A) \leqslant |A|^{k+\delta}$ implies \begin{equation}\label{tmp:08.03_2} \mathsf{Sid}^{\times}_{3k-3} (A) \gg |A|^{\frac{1}{2} + \frac{1-2\delta}{2(2k-1)}} = |A|^{\frac{1}{2} + \frac{1}{4(2k-1)}} \end{equation} and we are done. Here $k=k(\varepsilon)$. Otherwise there is $H\subseteq \mathbb {F}$, $|H| \gtrsim |A|^{\delta(1-\varepsilon)} \geqslant |A|^{\delta/2}$, $|HH| \ll |A|^{\varepsilon} |H|$ and there exists $Z\subseteq \mathbb {F}$, $|Z| |H| \ll |A|^{1+\varepsilon}$ with $|(H\cdot Z) \cap A| \gg |A|^{1-\varepsilon} \,.$ Here the product of $H$ and $Z$ is direct. Put $A_* = (H\cdot Z) \cap A$, $|A_*| \gg |A|^{1-\varepsilon}$; we want to estimate $\mathsf{E}^{\times}_{l+1} (A_* +1)$ or $\hat{\mathsf{E}}_{l+1}^{\times} (A_* +1)$ for large $l$. Then, having a good upper bound for $\mathsf{E}^{\times}_{l+1} (A_* +1)$ or $\hat{\mathsf{E}}_{l+1}^{\times} (A_* +1)$, we apply Lemma \ref{l:random_Ek} again to find a large multiplicative Sidon subset of $A_*$. First of all, notice that in view of \eqref{f:Pl-R}, one has $$ |H A^{-1}_*| \leqslant |HH^{-1}||Z| \ll |A|^{2\varepsilon} |H||Z| \ll |A|^{1+3\varepsilon} \,. $$ In other words, the set $A^{-1}_*$ almost does not grow under multiplication by $H$. Let $Q = H A^{-1}_*$, $|Q| \ll |A|^{1+3\varepsilon}$ and also let $M=|A|^{\varepsilon}$. Secondly, fix any $\lambda \neq 0, 1$. The number of solutions to the equation $a_1 / a_2 = \lambda$, where $a_1,a_2 \in A_*+1$, does not exceed $$ \sigma_\lambda := |H|^{-2} |\{ h_1,h_2\in H,\, q_1,q_2\in Q ~:~ (h_1/q_1 + 1) / (h_2/q_2 +1) = \lambda \}| \,. $$ The last equation has the form \eqref{f:lines}, namely, $$ \frac{h_1}{q_1} - \frac{\lambda h_2}{q_2} = \lambda - 1 $$ and can be interpreted as a question about the number of incidences between points and lines.
For each $\lambda \neq 0,1$ the quantity $\sigma_\lambda$ can be estimated as \begin{equation}\label{f:sigma_la} \sigma_\lambda \ll |H|^{-2} \cdot |Q| |H|^{2-\kappa} \ll |A|^{1+3\varepsilon} |H|^{-\kappa} \end{equation} similarly to the proof of Theorem \ref{t:sol_Sidon} above (in the case $\mathbb {F}={\mathbb R}$ the same is true thanks to Theorem \ref{t:SzT}). Here $\kappa = \kappa(\delta)>0$. Indeed, by our assumption $|A|<\sqrt{p}$, Theorem \ref{t:upper_T(AB)}, Proposition \ref{p:counting_collinear} and Lemma \ref{l:T_k-E(L)}, we have \begin{equation}\label{f:sigma_la'} \sigma_\lambda - \frac{|Q|^2}{p} \lesssim |Q| |H|^{-1/2} (|Q| \mathsf{T}^{+}_{2^r} (H))^{1/2^{r+2}} \lesssim |Q| \sqrt{M} \left( M^3 |A| |H|^{-\frac{r+1}{2}} \right)^{1/2^{r+2}} \end{equation} provided $|H| \gtrsim_r M^{2^{r+1}}$ and $|H|^{r+1} \ll p$. Here $r$ is a parameter and we take $r\sim 1/\delta$ to satisfy the second condition. To satisfy the first condition, just take $\varepsilon\, 2^{r+1} \ll \delta$ (in other words, $\varepsilon \leqslant \exp(-\Omega(1/\delta))$) and we are done because $|H| \gg |A|^{\delta/2}$. Further, using $|H| \gg |A|^{\delta/2}$, $|A_*| \gg |A|^{1-\varepsilon}$ and choosing any $\varepsilon \leqslant \delta \kappa/100$, we obtain after some calculations, via formula \eqref{f:sigma_la}, that $\sigma_\lambda \ll |A_*|^{1-\delta \kappa/4}$. Hence, taking sufficiently large $l \gg (\delta \kappa)^{-1}$, we derive \[ \hat{\mathsf{E}}_{l+1}^{\times} (A_*) = \sum_{\lambda} r^{l+1}_{A_* A_*} (\lambda) \ll |A_*|^{l+1} + (|A_*|^{1-\delta \kappa/2})^l |A_*|^2 \ll |A_*|^{l+1} + |A|^{l+2-\delta\kappa l/2} \ll |A_*|^{l+1}\,.
\] Applying Lemma \ref{l:random_Ek} and choosing $\varepsilon \ll l^{-1}$, we see that $$ \mathsf{Sid}^\times_{2l} (A) \geqslant \mathsf{Sid}^\times_{2l} (A_*) \gg |A_*|^{\frac{l+1}{2l+1}} \gg |A|^{\frac{(1-\varepsilon)(l+1)}{2l+1}} = |A|^{\frac{1}{2} + \frac{1-2\varepsilon(l+1)}{2(2l+1)}} \gg |A|^{\frac{1}{2}+c} \,, $$ where $c = c(\delta) >0$ is a constant. We have obtained bound \eqref{f:Sidon_intr} of Theorem \ref{t:Sidon_intr}. As for estimate \eqref{f:Sidon_intr_new1.5}, we use the same argument as above, but now the analogue of the quantity $\sigma_\lambda$ counts the solutions to the equation $\exp(q_1) \exp(h_1) - \exp(q_2) \exp(h_2) = \lambda$, where $q_1,q_2 \in Q=A_*+H$, $h_1, h_2\in H$ (we use the notation above). The last equation can be treated as points/lines incidences with the set of lines $x \exp(h_1) - y \exp(h_2) = \lambda$, $|\mathcal{L}| = |H|^2$, and the corresponding set of points $\mathcal{P}$ of size $|Q|^2$. Then analogues of bounds \eqref{f:sigma_la}, \eqref{f:sigma_la'} take place and we are done. \bigskip It remains to obtain estimate \eqref{f:Sidon_intr_new2} of the theorem. For any sets $X_1,X_2,X_3$ consider the set $$ R[X_1,X_2,X_3] = \left\{ \frac{x_1-x_3}{x_2-x_3} ~:~ x_1 \in X_1,\, x_2 \in X_2,\, x_3 \in X_3,\, x_2 \neq x_3 \right\} \,. $$ If $X_1=X_2=X_3=X$, then we write $R[X]$ for $R[X_1,X_2,X_3]$. One can check that $1-R[X_1,X_2,X_3] = R[X_1,X_3,X_2]$. For $\mathbb {F} = {\mathbb R}$ or $\mathbb {F}=\mathbb {F}_p$ we put $X=P$, $A=R[X]$, where $P = \{1,\dots, n \}$, $\bar{P} = \{-n, \dots, n \}$ and let $n<\sqrt{p}$ in the case of $\mathbb {F}_p$. Then $A$ is contained in $\bar{P} / \bar{P} := B \cdot C$ and, in view of Lemma \ref{l:L_in_sumsets}, any multiplicative $k$--Sidon subset of $A$ has size at most $O (\sqrt{k} |A|^{3/4})$ because, as one can check, $|A| \gg |P|^2$. Further, $1-A=A$ and hence the same argument is applicable to the set $1-A$. It remains to notice that $\mathsf{Sid}^\times (X) = \mathsf{Sid}^\times (-X)$ for any set $X$.
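The basic properties of $R[X]$ used above are easy to verify numerically. The sketch below (a minimal check over $\mathbb{Q}$ with exact rational arithmetic; the concrete value $n=30$ is our own choice for illustration) computes $R[X]$ for $X = \{1,\dots,n\}$ and confirms both the symmetry $1 - R[X] = R[X]$ and the quadratic growth $|R[X]| \gg |X|^2$.

```python
from fractions import Fraction
from itertools import product

def ratio_set(X):
    """R[X] = {(x1 - x3)/(x2 - x3) : x1, x2, x3 in X, x2 != x3}, computed exactly."""
    return {
        Fraction(x1 - x3, x2 - x3)
        for x1, x2, x3 in product(X, repeat=3)
        if x2 != x3
    }

n = 30
R = ratio_set(range(1, n + 1))
# 1 - R[X] = R[X] when X1 = X2 = X3 = X, since 1 - (x1-x3)/(x2-x3) = (x2-x1)/(x2-x3)
symmetric = R == {1 - r for r in R}
```

The set $R$ here has on the order of $n^2$ elements (distinct reduced fractions with numerator and denominator bounded by $n-1$), matching the lower bound $|A| \gg |P|^2$ invoked in the proof.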
Finally, let us remark that there is an alternative (though perhaps slightly harder) way to obtain estimate \eqref{f:Sidon_intr_new2}. Indeed, consider $R[\Gamma]$, where $\Gamma\subseteq \mathbb {F}_p^*$, $|\Gamma|< \sqrt{p}$, is a multiplicative subgroup (we consider the case $\mathbb {F}=\mathbb {F}_p$, say). One can notice that $R[\Gamma] = (\Gamma-1)/(\Gamma-1)$ and repeat the argument above.
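To close this section, we note that chains such as \eqref{f:exp} are easy to explore numerically. The sketch below is a minimal sanity check, not a proof: it assumes $\exp(x) = \gamma^x \bmod p$ for a primitive root $\gamma$ and steps $\varepsilon_j$ uniform on $\{-1,+1\}$ (choices made for this illustration only), and estimates the total-variation distance between the empirical law of the lazy chain and the uniform distribution for a small prime.

```python
import random
from collections import Counter

def lazy_exp_chain(p, gamma, steps, x0=1, rng=random):
    """One trajectory of: X <- gamma^X + eps (mod p) with prob. 1/2, else X stays."""
    x = x0
    for _ in range(steps):
        if rng.random() < 0.5:
            eps = rng.choice((-1, 1))
            x = (pow(gamma, x, p) + eps) % p
    return x

def tv_to_uniform(p, gamma, steps, runs, seed=0):
    """Total-variation distance between the empirical law and uniform on F_p."""
    rng = random.Random(seed)
    counts = Counter(lazy_exp_chain(p, gamma, steps, rng=rng) for _ in range(runs))
    return 0.5 * sum(abs(counts.get(x, 0) / runs - 1 / p) for x in range(p))

tv = tv_to_uniform(p=101, gamma=2, steps=60, runs=20000)  # 2 is a primitive root mod 101
```

For $p=101$ the distance drops close to the sampling-noise floor after a few dozen lazy steps, consistent with the fast mixing asserted by Theorem \ref{t:ind,exp}.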
\section{Introduction} Imaging through diffusive media is a notoriously difficult task in optics. It spans topics from imaging through atmospheric clouds, to in-vivo microscopy, to imaging in dense atomic gases. Several techniques are currently under intensive investigation, including speckle correlations ~\cite{katz2014non} and transmission matrix reconstruction ~\cite{popoff2010measuring}. In bio-imaging, light-sheet or selective plane illumination microscopy allows selective illumination of tissues and fast 3D imaging of live organisms at the cellular scale, far more precisely than conventional confocal or multi-photon imaging ~\cite{keller2008reconstruction,huisken2004optical}. However, the main limitation on the field-of-view of light-sheet microscopy is the penetration depth of the illumination through the tissue ~\cite{fahrbach2010microscopy}. This fundamental limitation makes light-sheet imaging of deep tissues in living animals a challenging task, as it is complicated to illuminate deep structures effectively. In this paper we propose and implement a general method to overcome this strong limitation. We design a non-diffracting Bessel beam whose central-spot intensity rises exponentially with the propagation distance, in order to exactly compensate for the losses in the tissue (or in any lossy medium). Indeed, the exponential attenuation of a beam due to scattering is ubiquitous in optics: as soon as a beam propagates through a medium, scattering inevitably degrades the signal. Our technique is of major interest not only for imaging through biological samples, but could also be used to study disordered media or light propagation in dilute atomic clouds and non-linear crystals. In all these systems the exponential attenuation of a beam due to linear losses is a fundamental limitation ~\cite{mccormick2008strong,glorieux2010double,jasperse2011relative,glorieux2012generation}.
Introduced by Durnin \emph{et al.} in 1987 ~\cite{Durnin:1987}, zero-order Bessel beams are, together with Airy (or parabolic) beams ~\cite{Bandres:2004}, among the most representative "non-diffracting" solutions of the Helmholtz equation. These optical fields result from the interference of an infinite number of plane waves whose wave-vectors constitute the generating lines of the so-called Bessel cone. The radial intensity profile of zero-order Bessel beams is described by the zero-order Bessel function of the first kind; a high-intensity central peak is surrounded by an infinity of concentric rings of decreasing intensity. Although perfect Bessel beams are only mathematical objects, as they would carry infinite energy, spatially limited quasi-Bessel beams can be realized experimentally. Those beams have found various applications -- in optical trapping ~\cite{Cizmar:2005, Chavez:2002}, laser machining ~\cite{Courvoisier:2013}, nonlinear optics \cite{Johannisson:2003, Porras:2004, Polesana:2008} and imaging ~\cite{Dufour:2006, Zhao:2014}, for example -- as their central cores stay collimated over a distance that is orders of magnitude longer than the Rayleigh length. The modification of the intensity profile in the propagation direction has recently been studied experimentally in the context of counterbalancing the intensity decay induced by light absorption in weakly absorbing dye solutions \cite{Dorrah:2016}. The general idea is to tailor the on-axis intensity of a Bessel beam. The use of an exicon (exponential-intensity axicon) \cite{Golub:2012,Golub:09} and the generation of attenuation-resistant beams with computer-generated holograms ~\cite{Dorrah:2016,Zamboni-Rached:2006} have been reported.
This last approach, known as frozen waves, results from the superposition of equal-frequency Bessel beams produced by modulating only the amplitude of an incident plane wave with a Spatial Light Modulator (SLM) ~\cite{Zamboni-Rached:04,Zanarella:18,PhysRevA.92.043839,Vieira:12,Vieira:14}. In this paper, we report on a more general and versatile method based on both phase and amplitude shaping of an incident Gaussian beam. It allows compensation of any absorption coefficient up to $\alpha \! = \! 200$~m$^{-1}$, independently of the refractive index and of the loss mechanism, by using real-space shaping with a reflective phase-only SLM. We demonstrate experimentally the accuracy of this method in two very different media: a scattering sample based on a dilute solution of milk in water (index $n=1.33$) and an absorptive sample of near-resonance atomic vapor ($n\simeq 1$). In the first section, we present the theoretical background of the method, including a general approach to compensate for the refractive index of the medium. A detailed description of our experimental setup follows, as well as the measurement of the on-axis intensity profile of the central peak in air. In the next part, we present our results on compensated absorption in two different media and show that our procedure is efficient independently of the refractive index and for a wide range of loss coefficients. Finally, we describe the potential improvement of two orders of magnitude in the field of view of light-sheet microscopy using this approach.
\section{Shaping Bessel beams on-axis intensity} At a given position $z$ on the optical axis (we assume $z = 0$ in the following), the electric field in the transverse plane $E(r,z=0)$ of a radially symmetric laser beam can be expressed, under the paraxial and the scalar approximation, as an infinite superposition of zero order Bessel functions of the first kind $J_0$: \begin{equation} \label{ElectricField} E(r,z=0) = \frac{1}{2 \pi} \int_{0}^{\infty} S(k_{\perp},z=0) J_0(r k_{\perp}) k_{\perp} \mathrm{d} k_{\perp}, \end{equation} \noindent where $r$ and $k_{\perp}$ stand respectively for the transverse radial coordinate and the associated spatial angular frequency. The spatial spectrum $S(k_{\perp},z=0)$ is the Hankel transform of the electric field amplitude $E(r,z=0)$. The on-axis electric field $E(r=0,z)$ is obtained by taking the inverse Fourier transform of Eq.~\eqref{ElectricField} ~\cite{Cizmar:09}: \begin{multline} \label{OnAxisField} E(r=0,z) = \frac{1}{\pi} \int_{0}^{\infty} S(\sqrt{k_{0}^2 - k_{z}^2},z=0) \\ \times \exp{\left( i k_{z} z\right)} \, k_{z} \, \mathrm{d} k_{z}. \end{multline} where $k_{0} = 2\pi/\lambda$ is the laser wave-vector ($\lambda$ its wavelength) and $k_{z} = \sqrt{k_{0}^{2}-k_{\perp}^{2}}$ the longitudinal spatial frequency of a given Bessel mode. This formula gives physical insight into the engineering process we use to overcome attenuation. Each Bessel mode appearing in the spectral decomposition Eq.~\eqref{ElectricField} propagates in free space with a slightly different longitudinal wave-vector $k_{z}$, and thus converges with a different cone angle at a distinct position along the optical axis. The on-axis electric field then results from the interference between the individual modes.
Consequently, to design a Bessel beam with a given on-axis intensity profile $I(z) = \vert E(r=0,z) \vert^{2}$ along the optical axis, the spatial spectrum $S$ must be engineered according to the following formula: \begin{equation} \label{Spectrum} S(k_{\perp}, z=0) \! = \! \frac{1}{k_{z}} \int_{0}^{\infty} \! \sqrt{I(z)} \, \exp{\left[i(k_{z0}-k_{z})z\right]} \, \mathrm{d}z. \end{equation} The spectrum $S$ is centered around the axial wave-vector of the target Bessel beam $k_{z0} = k_{0} \cos(\theta_{0})$. The cone angle $\theta_{0}$ sets the spot size (the full width at half-maximum (FWHM) of the central peak in the transverse intensity profile), which is equal to $2.27/\left(k_{0} \sin (\theta_{0})\right)$ for a perfect zero order Bessel beam. For non--evanescent modes, $k_{z}$ should lie in the interval $\left[k_{\min} \, , \,k_{0}\right]$, where $k_{\min}$ is defined by the numerical aperture (NA) of the imaging system as $k_{\min} = k_{0} \sqrt{1-\mathrm{NA}^{2}}$. As explained in Ref. ~\cite{Ouadghiri-Idrissi:16}, the target intensity profile should not vary on a length scale smaller than $\Delta z = 4 \pi / \left(k_{0}-k_{\min}\right)$, to avoid significant frequency truncations in the associated spectrum, which would lead to undesirable oscillations in the measured on--axis intensity profile. The initial electric field $E(r,z=0)$ that will produce a Bessel beam with a given cone angle $\theta_{0}$ and an on-axis intensity profile $I(z)$ can then be evaluated using Eq.~\eqref{ElectricField} and Eq.~\eqref{Spectrum}. In the following, we show how to generate the target beam by real-space shaping of an incident Gaussian beam on a Spatial Light Modulator (SLM). Fourier-space shaping may also be considered ~\cite{Cizmar:09}. However, as the intensity distribution of a Bessel beam in Fourier space is a thin ring, the small overlap between the latter and the incident Gaussian profile would filter out most of the incident energy.
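The pair of relations Eq.~\eqref{OnAxisField} and Eq.~\eqref{Spectrum} can be checked numerically. The sketch below is a minimal illustration with parameters of our own choosing (wavelength, cone angle, band half-width $B$ and a smooth test profile; these are not the experimental values): it discretizes Eq.~\eqref{Spectrum} to build the spectrum from a target on-axis intensity, then re-synthesizes the on-axis field from Eq.~\eqref{OnAxisField} and verifies that $|E(0,z)|^2$ reproduces the target up to a global factor.

```python
import numpy as np

# Illustrative parameters (not the experimental ones)
lam = 780e-9                      # wavelength
k0 = 2 * np.pi / lam
theta0 = 8.5e-3                   # Bessel cone angle
kz0 = k0 * np.cos(theta0)         # central longitudinal wave-vector
L = 0.075                         # length of the target profile (m)
B = 600.0                         # half-width of the kz band around kz0 (rad/m)

z = np.linspace(0.0, L, 2000)
dz = z[1] - z[0]
I_target = np.sin(np.pi * z / L) ** 2        # smooth test on-axis profile

# Eq. (Spectrum): S(kz) = (1/kz) * integral sqrt(I(z)) exp(i (kz0 - kz) z) dz
delta = np.linspace(-B, B, 1201)             # kz = kz0 + delta
kz = kz0 + delta
phase = np.exp(1j * np.outer(-delta, z))     # (kz0 - kz) z = -delta * z
S = (phase @ np.sqrt(I_target)) * dz / kz

# Eq. (OnAxisField): E(0,z) = (1/pi) * integral S(kz) exp(i kz z) kz dkz;
# the unit-modulus factor exp(i kz0 z) is dropped to keep the sampling resolved
dk = delta[1] - delta[0]
E = (np.exp(1j * np.outer(z, delta)) @ (S * kz)) * dk / np.pi

I_rec = np.abs(E) ** 2
I_rec /= I_rec.max()
core = (z > 0.2 * L) & (z < 0.8 * L)         # stay away from edge ringing
err = np.max(np.abs(I_rec[core] - I_target[core]))
```

The residual error in the central region is at the few-percent level and is dominated by the finite $k_z$ band, mirroring the resolution limit $\Delta z = 4\pi/(k_0 - k_{\min})$ discussed above.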
Higher efficiency can then be obtained using real-space shaping. \\ We define $z=0$ to be the SLM plane position along the optical axis. Discretizing the electric field according to the SLM matrix ($N_x\times N_y$), the target electric field $E(i,j,z=0^{+})$ right after the SLM can be decomposed into amplitude $A(i,j)$ and phase $\Phi(i,j)$ (where $0 \leq i \leq N_{x}$ and $0 \leq j \leq N_y$ stand for the pixel coordinates). As suggested by Davis \emph{et al.} ~\cite{Davis:99}, locally reducing the phase-wrapping contrast allows for a modulation of the amount of light scattered into the first diffraction order, using a single hologram. We apply this technique with a phase-only SLM. The expression of the SLM phase mask $\Psi$ is given by ~\cite{Bolduc:13, Ouadghiri-Idrissi:16}: \begin{equation} \label{PhaseMask} \Psi(i,j) = M(i,j) \, \mathrm{mod} \left[ F(i,j) + \Phi_{\mathrm{g}}(i,j), \, 2 \pi \right]. \end{equation} The function $F$ contains the phase information of the target electric field and $\Phi_{\mathrm{g}}$ stands for the grating linear phase ramp, used to separate the different diffraction orders in Fourier space. The modulo operation provides the phase wrapping. The diffraction efficiency is locally tuned by the modulation function $M$ ($0 \leq M(i,j) \leq 1$). The complex amplitude of the field diffracted into the first order can be expressed as follows ~\cite{Bolduc:13, Ouadghiri-Idrissi:16}: \begin{multline} \label{FirstDiffraction} E_{1}(i,j,z=0^{+}) = A_{\mathrm{in}}(i,j) \, \mathrm{sinc} \left(\pi M(i,j) -\pi \right) \\ \times \exp{\left[ i \left( F(i,j) + \pi M(i,j) \right) \right]}, \end{multline} where $A_{\mathrm{in}}$ is the amplitude of the incident laser beam on the SLM.
By identifying $E_{1}$ with the target electric field, one can obtain the functions $F$ and $M$ by solving the following system: \begin{align} \label{System} M(i,j) &= 1 + \frac{1}{\pi} \, \mathrm{sinc}^{-1} \left( \frac{A(i,j)}{A_{\mathrm{in}}(i,j)} \right) \\ F(i,j) &= \Phi(i,j) - \pi M(i,j) \end{align} The inverse sinc function ($\mathrm{sinc}^{-1}$) is defined on $[-\pi,0]$. Computing it for each point of the hologram is usually demanding ($N_{x} \times N_{y}$ operations). However, if both the incident and the first-order diffracted beams are radially symmetric, we only need to determine the radial profile of the modulation function. For beams centered in the SLM matrix, Eq.~\eqref{System} simplifies to: \begin{equation} \label{Eq1System} m(i) = 1 + \frac{1}{\pi} \, \mathrm{sinc}^{-1} \left( \frac{A(i,N_y/2)}{A_{\mathrm{in}}(i,N_y/2)} \right) , \end{equation} where $i$ is an integer running from 0 to $N_{x}/2$ (for $N_{x} \geq N_{y}$). Using a circular interpolation, $M$ can be entirely reconstructed from $m$, computing the inverse $\mathrm{sinc}$ function for $N_{x}/2$ points only instead of $N_{x}\times N_{y}$. In practice, we start with the clean-up of the incident laser beam, filtering out in Fourier space its high k-vector components with a small pinhole aperture. Afterwards, the input Gaussian beam is radially symmetric in the SLM plane, with waist $\omega_{x,y} \! \simeq \! 3.3 \pm 0.1$ mm. \\ In principle, arbitrary on-axis intensity profiles can be generated using the method described above. In the following section, we introduce the target profile $I(z)$ we use to maintain the central-peak intensity constant along the propagation in a uniform and linear lossy medium. Our approach is independent of the loss origin; we demonstrate its validity for both absorbing and scattering types of losses. Let $L$ and $\alpha$ stand respectively for the propagation length and the linear attenuation coefficient of the medium.
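Before moving on, note that the hologram construction of Eqs.~\eqref{PhaseMask}--\eqref{Eq1System} is straightforward to implement. The sketch below is a minimal illustration (the test amplitudes and phases are arbitrary choices of ours, not the experimental fields): it evaluates $\mathrm{sinc}^{-1}$ on $[-\pi,0]$ by tabulated interpolation, builds $M$ and $F$, and verifies through Eq.~\eqref{FirstDiffraction} that the first-order diffracted field reproduces the target.

```python
import numpy as np

# Unnormalized sinc(x) = sin(x)/x increases monotonically from 0 to 1 on [-pi, 0]
_xs = np.linspace(-np.pi, 0.0, 20001)
_ys = np.sinc(_xs / np.pi)          # np.sinc(t) = sin(pi t)/(pi t), so this is sin(x)/x

def inv_sinc(ratio):
    """Inverse of sin(x)/x restricted to [-pi, 0]; ratio must lie in [0, 1]."""
    return np.interp(ratio, _ys, _xs)

def hologram(A, Phi, A_in):
    """Modulation M and phase F encoding amplitude A and phase Phi (Eq. System)."""
    M = 1.0 + inv_sinc(A / A_in) / np.pi
    F = Phi - np.pi * M
    return M, F

# Arbitrary smooth target on a small 1D grid (illustration only)
x = np.linspace(-1.0, 1.0, 257)
A_in = np.exp(-x**2)                        # incident Gaussian amplitude
A = 0.8 * A_in * np.abs(np.cos(2 * x))      # target amplitude (<= A_in everywhere)
Phi = 0.5 * np.sin(3 * x)                   # target phase

M, F = hologram(A, Phi, A_in)
# Eq. (FirstDiffraction): note sinc(pi M - pi) = np.sinc(M - 1) in NumPy's convention
E1 = A_in * np.sinc(M - 1.0) * np.exp(1j * (F + np.pi * M))
```

Since $\pi M - \pi = \mathrm{sinc}^{-1}(A/A_{\mathrm{in}})$ and $F + \pi M = \Phi$ by construction, $E_1 = A\,e^{i\Phi}$ exactly, up to the interpolation error of the tabulated inverse.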
According to the Beer-Lambert law, the transmittance $T$ of the medium decays exponentially with the propagation distance: $T = \exp(-\alpha z)$. Therefore, to compensate for these losses, the on-axis intensity should increase exponentially along the propagation, as $I(z)\sim \exp(\alpha z)$. We ramp the on-axis intensity up (from 0 to $I(z_1)=I_{0}$) until the entrance-plane position $z_1$, then increase it exponentially over the length $L$, and finally bring it smoothly back to zero. The full on-axis target profile we designed reads: \begin{align} \label{SystemProfile} I(z) = \begin{cases} I_{0} \left( \frac{\sin(C_{1} z/z_{1})}{\sin(C_{1})} \right) ^{2} &\mathrm{if} \;\; 0 \leq z \leq z_{1} \\ I_{0} \exp \left[ \alpha (z-z_{1}) \right] &\mathrm{if} \;\; z_{1} \leq z \leq z_{2} \\ I_{\max} \sin^{2} \left[ C_{2} + (\frac{\pi}{2} - C_{2}) \frac{z-z_{2}}{z_{3}-z_{2}} \right] \; &\mathrm{if} \;\; z_{2} \leq z \leq z_{3} \\ I_{\max}\sin^{2} \left[\frac{\pi}{2} \left(1- \frac{z-z_{3}}{z_{4}-z_{3}} \right) \right] &\mathrm{if} \;\; z_{3} \leq z \leq z_{4} . \end{cases} \end{align} For all the measurements we performed, we set $z_{1} \times G^2 = 1.5$ cm, $z_{2} = z_{1} + \frac{L}{G^2}$ (with $L = 7.5$ cm) and $z_{4} = 3 \, z_{1} + \frac{L}{G^2}$, where $G = 0.5$ stands for the telescope demagnification factor which optically conjugates the SLM and the $z=0$ planes. $C_{1,2}$ and $z_{3}$ are constants chosen in order to make the profile continuous and differentiable; they are defined in the supplementary materials. In the following, the profile has been normalized to 1 by dividing $I(z)$ by the maximum intensity $I_{\max} = I_{0} \left( 1 + \exp(\alpha L) \right)$. The spatial spectrum associated with this on-axis profile is derived analytically in the appendix.
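The free constants in Eq.~\eqref{SystemProfile} can be fixed numerically. The sketch below is our own reconstruction of the matching conditions (the actual constants are defined in the supplementary materials): working directly in target-space coordinates with illustrative values $\alpha = 20$~m$^{-1}$, $L = 7.5$~cm and $z_1 = 6$~cm, it chooses $C_1$ from slope matching at $z_1$, $C_2$ from value matching at $z_2$ and $z_3$ from slope matching at $z_2$, then checks that the resulting piecewise profile is continuous. For a medium of refractive index $n$, one would substitute $L \to L/n$ and $\alpha \to \alpha n$ in the exponential segment, as discussed below.

```python
import math

# Illustrative parameters (our choices for this sketch, in target-space coordinates)
alpha, L, I0 = 20.0, 0.075, 1.0
z1 = 0.06                      # entrance-plane position
z2 = z1 + L
z4 = 3 * z1 + L
Imax = I0 * (1 + math.exp(alpha * L))

# C1 from slope matching at z1: 2*C1/tan(C1) = alpha*z1, solved by bisection
def slope_gap(c):
    return 2 * c / math.tan(c) - alpha * z1

lo, hi = 1e-9, math.pi / 2 - 1e-9
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if slope_gap(mid) > 0:
        lo = mid
    else:
        hi = mid
C1 = 0.5 * (lo + hi)

# C2 from value matching at z2, z3 from slope matching at z2
C2 = math.asin(math.sqrt(I0 * math.exp(alpha * L) / Imax))
z3 = z2 + Imax * math.sin(2 * C2) * (math.pi / 2 - C2) / (I0 * alpha * math.exp(alpha * L))

def profile(z):
    """Piecewise on-axis target intensity, segment by segment."""
    if z <= z1:
        return I0 * (math.sin(C1 * z / z1) / math.sin(C1)) ** 2
    if z <= z2:
        return I0 * math.exp(alpha * (z - z1))
    if z <= z3:
        return Imax * math.sin(C2 + (math.pi / 2 - C2) * (z - z2) / (z3 - z2)) ** 2
    return Imax * math.sin(math.pi / 2 * (1 - (z - z3) / (z4 - z3))) ** 2
```

With these choices the profile rises from zero, grows exponentially over the compensation window, peaks at $I_{\max}$ at $z_3$ and returns smoothly to zero at $z_4$.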
We obtain the target electric field by computing the inverse Hankel transform using Eq.~\eqref{ElectricField}.\\ \noindent When the linear refractive index $n$ of the medium is not equal to 1 (as implicitly assumed so far), the target Bessel beam undergoes refraction at both the entrance and the output plane of the medium. Applying Snell's law at the entrance (resp. the output) plane, we get: $\sin(\theta_{i}) = n \sin(\theta_{r})$, where $\theta_{i}$ and $\theta_{r}$ stand respectively for the incident and refracted cone angle of a given Bessel mode. Using the transverse spatial angular frequency $k_{\perp} = n k_{0} \sin(\theta)$, we find that $k_{\perp}^{(i)} = k_{\perp}^{(r)}$. Therefore, according to Eq.~\eqref{ElectricField}, the transverse shape of the target Bessel beam is not modified by successive refractions ~\cite{Mugnai:2009}. Nevertheless, the cone angle is modified when the Bessel beam enters the medium. As $n > 1$, the internal cone angle $\theta_{r}$ is smaller than the external one, and the Bessel beam lengthens more than in air. This stretching of the beam inside the medium necessarily reduces the compensation coefficient by a factor $n$. To counteract this effect, we compress the exponentially rising part of the target on-axis profile beforehand by a factor $n$ (as suggested in ~\cite{Golub:12}). In other words, we replace $L$ by $L/n$ and $\alpha$ by $\alpha n$ in the second line of Eq.~\eqref{SystemProfile}. By doing so, the stretching of the beam is accounted for, and the profile compensates exactly for the exponential attenuation in the medium, as shown in Fig. \ref{fig:Stretching}. This compensation procedure generalizes easily to more complex situations where several layers of materials (with different attenuation coefficients and refractive indices) are involved.
\begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{Stretching.pdf} \caption{Numerical simulation of the longitudinal refractive stretching of the on-axis profile. Blue, red and grey circles are obtained by numerically solving the evolution of the Bessel beam in air (blue), in a non-lossy (red) and in a lossy (grey) material of refractive index $n=1.33$. The simulation data obtained with the refractive medium can be deduced from the air data by stretching the z axis by a factor $n$ between $z_{1}$ and $z_{2}$. We adjust the exponentially growing section of the profile (blue dotted line) such that the stretched Bessel beam ends up compensating the attenuation. When losses are added, the on-axis intensity remains constant along the propagation (grey circles), as expected.} \label{fig:Stretching} \end{figure} \section{Experiment} \label{sec:examples} \subsection{Experimental setup} \noindent Our experimental setup is shown in Fig. \ref{fig:ExperimentalSetup}. A continuous-wave laser beam, produced by a tapered-amplifier laser system, is magnified 4 times by a 4-f telescope system (lenses $L_{1}$ and $L_{2}$) and is spatially filtered in the Fourier space of $L_1$. The resulting $6.6\,$mm diameter, radially symmetric Gaussian beam reaches the center of the SLM chip at normal incidence. The SLM used for the experiment is a liquid-crystal-on-silicon (LCOS) phase-only modulator, with an effective area of $1272 \times 1024$ pixels and a pitch of 12.5 $\micro$m. After shaping, a $50:50$ non-polarizing beam splitter separates the diffracted beam from the incoming one. Another 4-f telescope ($L_{3}$ and $L_{4}$, $\mathrm{NA} = 0.017$) optically conjugates the SLM and the $z=0$ planes with a demagnification factor $G=0.5$. The choice of the telescope lenses $L_{3}$ and $L_{4}$, as well as of the demagnification factor $G$, is conditioned by the length of the lossy medium we are dealing with.
For biological applications, $G$ should be divided by at least 10, as pointed out in section \ref{sec:Result}. The first-order diffracted beam is then selected by masking the zeroth and higher orders in the Fourier plane of $L_{3}$. The Bessel beam finally starts forming from the focal plane of $L_{4}$ (at $z=0$) and propagates through the lossy medium. The output plane of the medium is imaged by a third 4-f arrangement (lenses $L_{5}$ and $L_{6}$, $\mathrm{NA}'= 0.042$) onto a microscope objective mounted on a computer-controlled translation stage. By moving the objective along the optical axis, we can monitor the Bessel beam evolution along $z$. The last lens ($L_{7}$) images the selected plane onto the CMOS camera. The magnification factor $G'$ of the whole imaging system is $13.6 \pm 0.1$. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{Setup.pdf} \caption{Experimental setup. $L_{1-7}$ label the different lenses. PH is a pinhole used to clean up the beam. An iris and a mask (thin metallic dot on a glass window) cut all the diffraction orders (except the first one) in the Fourier space of $L_{3}$. The mirror $M$ is mounted on a translation stage to adjust the $L_{4}$ focal-plane position, where the Bessel beam starts forming. Insets: (a) Phase mask applied on the SLM. No grating was added on top. (b) Transmission measurement setup. A wide (non-saturating) Gaussian beam is split by a (10:90) beamsplitter; the most intense part propagates through the lossy medium. The beams are focused on two photodiodes (PD) in order to monitor both the stability of the laser intensity and the material transmission. (c) Target on-axis intensity profile. The shadowed region shows where the lossy material should be positioned.
(d) Transverse profile of the generated Bessel beam.} \label{fig:ExperimentalSetup} \end{figure} \subsection{Results and discussion} \label{sec:Result} The first step in verifying the compensation of the beam attenuation is to compare the transverse and longitudinal intensity profiles of the experimentally measured beam with the target ones from simulations, in air. As shown in Fig. \ref{fig:ProfileAll}, we design the Bessel beam to overcome $96\%$ attenuation over a lossy, $7.5$ cm long medium. The 2D map in Fig. \ref{fig:ProfileAll} (a) is obtained by slowly scanning ($v = 2$ mm.s$^{-1}$) the microscope objective along the z axis. Both the transverse, Fig. \ref{fig:ProfileAll} (b), and the longitudinal, Fig. \ref{fig:ProfileAll} (c), measured intensity distributions of the tailored beam (blue circles and line) are in excellent agreement with the simulation (dashed black lines). The target profiles (dashed black lines) are obtained by numerically solving the evolution of the transverse electric field from $z=0$ to $L$ with a second-order split-step method. We take as initial condition a field with the SLM-imprinted phase $\Psi$ and the radially symmetric Gaussian envelope of the SLM input beam. To accurately determine the central-peak intensity along $z$, as presented in Fig. \ref{fig:ProfileAll} (c), we fit the region delimited by the two white dashed lines on both sides of the central peak with a Gaussian profile, as illustrated in Fig. \ref{fig:ProfileAll} (b) (red line). The width of the peak along the propagation is found to be constant ($\pm 5\%$), as shown in the inset of Fig. \ref{fig:ProfileAll} (b); we are therefore able to control the longitudinal intensity profile without altering the non-diffracting behavior of the Bessel beam. Nevertheless, small-amplitude oscillations can be observed at the beginning of the measured on-axis profile, Fig. \ref{fig:ProfileAll} (c).
They are due to high longitudinal-frequency truncation, as $k_{z}$ is upper bounded by the laser wave-vector $k_{0}$. We can reduce the oscillation amplitude by increasing the Bessel cone angle $\theta_{0}$. In our setup we are limited by the mirror size, as the beam starts to clip and loses its radial symmetry. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{ProfileAll7.pdf} \caption{Experimental characterization of the reconstructed Bessel beam. The Bessel cone angle $\theta_0$ was set to $(1/G) \times 8.5$ mrad. The 2D map in (a) is obtained by slowly scanning ($v = 2$ mm.s$^{-1}$) the microscope objective along the z axis. The white dotted lines on both sides of the central peak define the region where the Gaussian fit is performed. The blue curves in (b) and (c) are obtained by cutting the 2D map along $z = 0.083$ (at the maximum on-axis intensity position) and $x=0$, respectively.} \label{fig:ProfileAll} \end{figure} \noindent The lossy medium is then positioned in the beam path. By fitting the on-axis intensity profile with the function of Eq.~\eqref{SystemProfile}, we find the position $z_{2}$ where the medium output plane should sit. We then move the lossy-medium cell along the optical axis until this plane is imaged on the camera. The $1$ mm depth of field of the imaging system and the standard deviation of the fit parameters translate into an uncertainty of $\pm 2$~mm on the medium output-plane position. \\ \noindent Three different media (contained in three different glass cells) have been used to check our ability to compensate for the attenuation of the Bessel peak intensity along propagation. Two cells are filled with isotopically pure Rubidium vapor (the first ($7.5$ cm long) with $^{87}$Rb only and the second ($2.5$ cm long) with $^{85}$Rb only); the third one ($2.5$ cm long) contains a diffusive water-milk mixture. The Rubidium cells are heated to 140$\degree$C.
At this temperature, the atomic density is large ($n_{a} \simeq 2-5 \times 10^{13}$ atoms/cm$^{3}$). By tuning the laser frequency $\nu_{0}$ over the $D_{2}$ Rubidium absorption lines, we can change the transmission over several orders of magnitude, without significantly affecting the refractive index $n_{\mathrm{Rb}}$. The latter is estimated theoretically taking the Rubidium hyperfine structure and the Doppler broadening into account ~\cite{Siddons:2008}: $n_{\mathrm{Rb}}(\nu_{0}) \simeq 1.00 \pm 0.02$ (scanning $\nu_{0}$ over the whole absorption spectrum). The transmission of the water-milk mixture can be tuned by changing the milk concentration. In the highly diluted regime, the medium refractive index stays close to that of water, $n_w \simeq 1.33$. As explained above, we should in this case balance the stretching of the Bessel beam along the optical axis caused by the change of refractive index, replacing beforehand in the target profile $L$ and $\alpha$ with $L/n$ and $\alpha \, n$ respectively. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{Transmission7.pdf} \caption{Transmission T as a function of the attenuation coefficient $\alpha$. The transmission of the Bessel beam through the Rubidium vapors is plotted in blue stars ($^{87}$Rb) and orange circles ($^{85}$Rb). Data obtained with the water-milk mixture are plotted in grey diamonds. The two red lines show the transmission expected from the Beer-Lambert law for 2.5 cm and 7.5 cm long lossy materials.} \label{fig:Transmission} \end{figure} We design the target profile to overcome attenuation over 7.5 cm long materials, whatever the length of the cell we use. The overall Bessel power is reduced to keep the input peak intensity lower than the Rubidium saturation intensity ($I_{\mathrm{sat}} \simeq 2.5$ mW/cm$^{2}$ for a linearly polarized laser beam).
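The red reference curves in Fig.~\ref{fig:Transmission} follow the Beer-Lambert law $T = \exp(-\alpha L)$. As a quick numerical illustration (a sketch, not part of the experiment; the attenuation coefficient is inferred here from the $96\%$ attenuation over $7.5$ cm quoted above, which is an assumption for illustration):

```python
import math

def beer_lambert(alpha, length):
    """Transmission of a collimated, non-saturating beam (alpha in 1/cm, length in cm)."""
    return math.exp(-alpha * length)

# alpha inferred from 96% attenuation (T = 0.04) over the 7.5 cm cell
alpha = -math.log(0.04) / 7.5          # ~0.43 cm^-1
t_long = beer_lambert(alpha, 7.5)      # recovers T = 0.04
t_short = beer_lambert(alpha, 2.5)     # same alpha over the 2.5 cm cell, ~0.34
print(alpha, t_long, t_short)
```

At fixed $\alpha$, the two red curves of the figure differ only through the cell length entering the exponent.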
We finally measure both the peak intensity in the entrance plane (without cell) and in the output plane (with cell) to evaluate the transmission through the medium. We perform the fitting procedure on five different images of the central spot with a 2D-Gaussian function as detailed before. The measured transmission is shown in Fig. \ref{fig:Transmission}. The experimental data obtained with the $^{87}$Rb$\,$vapor cell, the $^{85}$Rb one and the water-milk mixture are respectively plotted in blue stars, orange circles and grey diamonds. The $4 \%$ reflectivity of the cell windows has been taken into account. The black dashed line represents a perfect on-axis compensation ($T=1$); most of the experimental points lie right under it. The small discrepancy comes from the input plane intensity measurements rather than from the output plane ones. The oscillations of the intensity at the medium input plane, visible on Fig. \ref{fig:ProfileAll} (c), together with a positioning uncertainty of $4$ mm for the input plane, lead to the error bars reported in Fig. \ref{fig:Transmission}. The red lines represent the transmission of a non-saturating collimated Gaussian beam with respect to the attenuation coefficient $\alpha$ for a 2.5 cm and a 7.5 cm long lossy medium. \\ \section{Applications} \noindent Compared to previous experiments, in which compensation for $30 \, \%$ and $10 \, \%$ attenuation has been achieved using an exponential intensity axicon ~\cite{Golub:12} and attenuation-resistant frozen waves ~\cite{Dorrah:16} respectively, we manage to maintain the on-axis intensity of the Bessel beam quasi-constant along its propagation in media with an attenuation up to $99.995 \%$. This is a crucial advantage for biological applications, as the diffusive coefficients of biological tissues observed in light-sheet microscopy typically range from $50$ to $200$ cm$^{-1}$ ~\cite{Johns:05, Nylk:2018}.
For comparison, let us assume that the sample transmission is $6 \times 10^{-3}$ under Gaussian illumination (as for the last orange circle of Fig. \ref{fig:Transmission}) for a diffusive coefficient of $200$ cm$^{-1}$. The length of the sample is then equal to $ -\ln{(6 \times 10^{-3})}/200 \simeq 250 \, \micro$m. By increasing the demagnification of the 4-f telescope $\{L_{3}, L_{4}\}$ by 10, we could easily decrease the length of the Bessel zone by a factor 100 (as the length varies with $G^{2}$) and, using the same target profile, uniformly illuminate biological tissues over hundreds of microns up to a diffusive coefficient $\alpha$ of $200$ cm$^{-1}$. For such an attenuation coefficient, the field of view would then be more than 100 $\micro$m longer than the best one obtained in the literature so far ~\cite{Nylk:2018} (or even more if partial attenuation-compensation is considered).\\ On the other hand, any light-matter interface that relies on maximising interactions over a long distance along the propagation axis, such as EIT-based quantum memory or gradient echo memory ~\cite{glorieux2012temporally}, will benefit from this technique to significantly improve the effective optical depth. Since the quantum memory efficiency (for Raman schemes) is proportional to the Rabi frequency of the coupling field ~\cite{hetet2008photon}, compensating for losses will immediately increase the storage efficiency, especially in the case of an ultra-high optical depth medium ~\cite{sparkes2013gradient}. \noindent Finally, attenuation compensated Bessel beams are needed for the generation of stationary non-diffracting potentials in fluid of light experiments in the propagation configuration ~\cite{Carusotto:2014}, where superfluidity has recently been observed ~\cite{Michel:2018,Fontaine:18}. So far, exponential attenuation of the defect was the main limitation of these experiments in hot atomic media.
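The back-of-the-envelope estimate above can be checked directly; a minimal sketch using the values quoted in the text (transmission $6\times10^{-3}$, $\alpha = 200$ cm$^{-1}$, and the $G^{2}$ scaling of the Bessel-zone length):

```python
import math

# Sample length from the Beer-Lambert law: T = exp(-alpha * L)  =>  L = -ln(T) / alpha
alpha = 200.0                   # diffusive coefficient in cm^-1
T = 6e-3                        # sample transmission under Gaussian illumination
L_cm = -math.log(T) / alpha     # ~0.026 cm
L_um = L_cm * 1e4               # ~250 micrometers

# The Bessel zone scales with the telescope magnification as G^2:
# increasing the demagnification by 10 shortens the zone by a factor 100.
shrink = 10 ** 2
print(L_um, shrink)
```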
With the approach described in this paper, future experiments can be envisioned where a Bessel beam pumps a vapor close to resonance and modifies the medium refractive index locally in the transverse plane and uniformly along the beam axis. \section{Conclusions} We have reported the shaping of the longitudinal intensity profile of Bessel beams using a phase-only SLM. We have shown that this method can be used to compensate for the Beer-Lambert attenuation and generate a constant intensity profile along the propagation direction in a lossy medium. We verified that our approach is robust, independently of the loss mechanism at play in the medium, and for a wide range of conditions (various refractive indices and attenuation coefficients). The results are in agreement with numerical simulations and the most crucial limitations are clearly identified. We identified two applications where this method can be advantageously used: in bio-imaging and in quantum optics. Finally, this method can be easily generalized to tailor any kind of on-axis intensity profile. \section*{Funding Information} This work has been supported by the C-FLigHT ANR project, the PhoQus Quantum Flagship and the ECNU scholarship program for graduate students. H.H. is supported by the Cai Yuanpei Program. Q.G. and A.B. are members of the Institut Universitaire de France (IUF). \section*{Supplementary materials} \subsection{On-axis intensity profile and spatial spectrum} The target profile we designed to compensate on the optical axis the attenuation of the central peak intensity is given by Eq.~\eqref{SystemProfile}. The SLM mask is optically conjugated with the $z=0$ plane by the 4-f telescope formed with the lenses $L_3$ and $L_4$. For clarity, we have ignored the telescope magnification factor $G = 0.5$ in the analytical derivations above, but we need to consider it in practice when we compute the target intensity profile.
For all the measurements we performed, we set $z_{1} \times G^2 = 1.5$ cm, $z_{2} = z_{1} + \frac{L}{G^2}$ (with $L = 7.5$ cm) and $z_{4} = 3 \, z_{1} + \frac{L}{G^2}$. We choose $C_{1}$ and $C_{2}$ in order to make the target profile continuous and differentiable at $z_{1}$ and $z_{2}$: the constant $C_{1}$ is obtained by solving the equation $\tan(x) = \frac{2 x}{\alpha L}$ (deriving from the differentiability condition at $z_{1}$) and $C_{2} = \sin^{-1}{\left(\sqrt{I_0/I_{\mathrm{max}}} \exp{\left( \frac{\alpha L}{2} \right)}\right)}$. The maximum intensity $I_{\mathrm{max}}$ is obtained for $z_{3} = z_{2} + \frac{2 C_{2}}{\alpha \tan(C_{2})}$. \vspace{4pt} \newline The spatial spectrum associated with the target profile can be derived analytically using Eq.~\eqref{Spectrum}. As all the parts composing the target profile can either be expressed by an exponentially rising function or a sine square function, computing the spatial spectrum associated with the following generic functions is sufficient: $I_{\mathrm{sin}}^{(i, j)}(z) = I \, \sin^{2} \left( a_{i} \, \frac{z-z_{i}}{z_{j}-z_{i}}+b \right)$ and $I_{\mathrm{exp}}^{(i, j)}(z) = I \, \exp \left[\alpha (z-z_{i}) \right]$. The derivation of the associated spectra $S_{\mathrm{sin}}^{(i, j)}$ and $S_{\mathrm{exp}}^{(i, j)}$ is straightforward; we only give the final result: \begin{align} \begin{split} S_{\mathrm{sin}}^{(i, j)} = {}& \sqrt{I} \, \frac{l}{k_{z}} \left[a_{i} \, \frac{\cos(a_{i}) - \cos(a_{i}+b_{j})}{a_{i}^{2}-(\delta k l)^{2}} \right. \\ & \hspace{-0.2cm} - \left. i \delta k \frac{\sin(a_{i}) \, e^{\,i \delta k z_{i}} - \sin(a_{i}+b_{j}) \, e^{\,i \delta k z_{j}}}{a_{i}^{2}-(\delta k l)^{2}} \right], \end{split} \end{align} \vspace{-15pt} \begin{equation} S_{\mathrm{exp}}^{(i, j)} = - \sqrt{I} \, \frac{2}{k_{z}} \frac{e^{\,i \delta k z_{i}}- \exp{\left(\alpha l/2\right)} \, e^{\,i \delta k z_{j}}}{\alpha+2i\delta k}, \end{equation} \noindent where $l = z_{j}-z_{i}$ and $\delta k = k_{z0}-k_{z}$.
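The transcendental equation for $C_{1}$ above can be solved numerically; the sketch below uses plain bisection on the first nontrivial branch above $\pi$. Both the choice of branch and the value $\alpha L = -\ln(0.04) \simeq 3.22$ (inferred from the quoted $96\%$ attenuation) are assumptions for illustration:

```python
import math

alpha_L = -math.log(0.04)          # ~3.22, illustrative value of alpha * L

def f(x):
    """Residual of tan(x) = 2x / (alpha * L)."""
    return math.tan(x) - 2.0 * x / alpha_L

# Bisection on (pi, 3*pi/2), where tan runs from 0 to +infinity,
# so exactly one nontrivial crossing exists on this branch.
a, b = math.pi + 1e-6, 1.5 * math.pi - 1e-6
while b - a > 1e-14:
    m = 0.5 * (a + b)
    if f(a) * f(m) <= 0:
        b = m
    else:
        a = m
C1 = 0.5 * (a + b)
print(C1)
```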
We finally obtain the spectrum by summing the spectral contributions coming from the different parts of the profile: $S = S_{\mathrm{sin}}^{(0, 1)} + S_{\mathrm{exp}}^{(1, 2)} + S_{\mathrm{sin}}^{(2, 3)} + S_{\mathrm{sin}}^{(3, 4)}$. \subsection{Compensation of the refractive stretching} \vspace{-10pt} Let us assume that the target Bessel beam enters at $z_{1}$ a material with an attenuation coefficient $\alpha$ and a refractive index $n$. In order to counteract the refractive stretching of the beam, let us also replace $L$ with $L/n$ and $\alpha$ with $\alpha \, n$ in the second line of Eq.~\eqref{Spectrum12}. Using Eq.~\eqref{Spectrum} and the change of variable $z \rightarrow \Tilde{z} = n(z - z_{1})$, we can derive the spectrum $S_{1,2}$ associated with the exponentially rising part of the on-axis profile (between $z_{1}$ and $z_{2}$): \begin{align} \label{Spectrum12} S_{1,2} &= \sqrt{I_{0}} \, \frac{e^{\, i(k_{z0}-k_{z}) \, z_{1}}}{n \, k_{z}} \int_{0}^{L} \exp{\left( \frac{\alpha \Tilde{z}}{2} \right)} \, e^{\, i(k_{z0}-k_{z})\frac{\Tilde{z}}{n}} \, \mathrm{d}\Tilde{z} \nonumber \\ & = - \frac{i}{{n \, k_{z}}}\frac{\sqrt{I_{0}} \, e^{\,i(k_{z0}-k_{z}) \, z_{1}}}{\left(\frac{k_{z}-k_{z0}}{n}\right) + i\frac{\alpha}{2}} \left(1- e^{\,-i\left[\left(\frac{k_{z}-k_{z0}}{n}\right)+ i \frac{\alpha}{2}\right]L} \right) \end{align} \noindent The on-axis electric field $E(r=0,z)$ is related to the spatial spectrum $S$ by the Fourier transform Eq.~\eqref{OnAxisField}. In practice, $k_{\mathrm{min}}$ and $k_{0}$ set respectively the lower and upper bounds of the integral appearing in this equation (in air).
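The closed form in Eq.~\eqref{Spectrum12} can be checked numerically: with $a = \alpha/2$ and $b = (k_{z0}-k_{z})/n$, the integral reduces to $\int_{0}^{L} e^{(a+ib)\tilde{z}}\,\mathrm{d}\tilde{z} = \left(e^{(a+ib)L}-1\right)/(a+ib)$. A minimal sketch (the numerical values are arbitrary illustrations, not fit parameters of the experiment):

```python
import cmath

alpha, n, L = 0.43, 1.33, 7.5        # illustrative values (cm^-1, dimensionless, cm)
dk = 3.0                             # k_z0 - k_z, arbitrary illustration
s = alpha / 2 + 1j * dk / n          # exponent a + i b

# Closed form of the integral appearing in S_{1,2}
closed = (cmath.exp(s * L) - 1.0) / s

# Composite Simpson's rule on the same integrand exp(s * z) over [0, L]
N = 20000                            # even number of sub-intervals
h = L / N
acc = cmath.exp(0) + cmath.exp(s * L)
for k in range(1, N):
    acc += (4 if k % 2 else 2) * cmath.exp(s * k * h)
numeric = acc * h / 3.0
print(abs(numeric - closed))
```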
Using Eq.~\eqref{Spectrum12} and Eq.~\eqref{OnAxisField} and the change of variable $\Bar{k}_{z} = (k_{z} - k_{z0})/n$, we can derive the on-axis electric field $E_{1,2} (r=0,z)$ associated with $S_{1,2}$: \begin{multline} \label{OnAxisExp} E_{1,2}(r=0, \delta z) = \sqrt{I_{0}} \, e^{\,i \, k_{z0} (z_{1}+\delta z/n)} \\ \times \left[ \frac{-i}{\pi} \int_{0}^{\infty} \frac{1-e^{\,-i\left[\Bar{k}_{z}+ i \frac{\alpha}{2}\right]L}}{\Bar{k}_{z} + i\frac{\alpha}{2}} e^{\,i \Bar{k}_{z} \delta z} \, \mathrm{d}\Bar{k}_{z} \right]. \end{multline} \noindent As $z$ lies in the interval $[z_{1},z_{2}]$ and $z_{2} = z_{1} +L/n$, $\delta z = n(z-z_{1})$ varies from 0 to $L$. The phase $\Phi_{l} = k_{z0} \, (z_{1} + \delta z/n)$ is the phase accumulated by the Bessel beam along its propagation until $z$. The medium is supposed to be linear; this phase term is therefore the only one expected. The bracketed term in Eq.~\eqref{OnAxisExp} should then be real. Let us divide the integral into two parts $I_{1}$ and $I_{2}$ as follows: \begin{align} I_{1}(\delta z) &= \frac{-i}{\pi} \int_{0}^{\infty} \frac{\Bar{k}_{z} - i\frac{\alpha}{2}}{\Bar{k}_{z}^{2} + \left(\frac{\alpha}{2}\right)^{2}} \, e^{\,i \Bar{k}_{z} \delta z} \, \mathrm{d}\Bar{k}_{z} \label{I1} \\ \label{I2} I_{2}(\delta z) &= \frac{i}{\pi} \, \exp{\left(\frac{\alpha L}{2}\right)} \int_{0}^{\infty} \frac{\Bar{k}_{z} - i\frac{\alpha}{2}}{\Bar{k}_{z}^{2} + \left(\frac{\alpha}{2}\right)^{2}} \, e^{\,-i \Bar{k}_{z} (L-\delta z)} \, \mathrm{d}\Bar{k}_{z} \end{align} \noindent From Eq.~\eqref{I1} and Eq.~\eqref{I2}, we derive the real parts of $I_{1}$ and $I_{2}$: $\mathrm{Re}\left(I_{1}\right) = 0 $ and $\mathrm{Re}\left(I_{2}\right) = \exp{\left( \frac{\alpha \delta z}{2} \right)}$.
The on-axis electric field $E_{1,2} (r=0,\delta z)$ is finally given by: \begin{align} \begin{split} E_{1,2}\,(r=0,\delta z) ={}& \sqrt{I_{0}} \, e^{\,i \, k_{z0} (z_{1}+\delta z/n)} \\ & \hspace{1.8cm} \times \left[ \mathrm{Re}\left(I_{1}\right) + \mathrm{Re}\left(I_{2}\right) \right] \nonumber \end{split}\\ \begin{split} \phantom{E_{1,2}\,(r=0,\delta z)} ={}& - \sqrt{I_{0}} \, e^{\,i \, k_{z0} (z_{1}+\delta z/n)} \exp{\left( \frac{\alpha \delta z}{2} \right)}. \end{split} \end{align} \noindent By replacing $L$ with $L/n$ and $\alpha$ with $\alpha \, n$ in the expression of the target on-axis intensity profile Eq.~\eqref{SystemProfile}, we manage to overcome the refractive stretching of the Bessel beam and compensate for the correct attenuation coefficient $\alpha$.
\section{Introduction} In the past years, many independent cosmological observations, such as supernovae (SN) Ia at high redshift \cite{Riess:1998cb, Perlmutter:1998np}, the cosmic microwave background (CMB) anisotropy \cite{Spergel:2003cb, Ade:2013zuv}, and large-scale structure \cite{Tegmark:2003ud}, have confirmed that the Universe is undergoing an accelerated expansion. In the framework of general relativity, an unknown energy component, usually called dark energy, has to be introduced to explain this phenomenon. The simplest and most theoretically appealing scenario of dark energy is the vacuum energy, whose density is about $\rho_{\rm ovac}\sim (10^{-3} \rm{eV})^4=10^{-8} \rm{ergs/cm}^3$ when matched to observational data. However, this model is confronted with a very difficult problem, the cosmological constant problem \cite{Weinberg:1988cp, Carroll:2000fy, Martin:2012bt, Padilla:2015aaa, Padmanabhan:2002ji} (it may suffer from an age problem as well \cite{Yang:2009ae}). To briefly illustrate this issue, we consider, for example, the vacuum energy density of a scalar field. It is well known that the total vacuum energy density of a scalar field with mass $m$ is quartically divergent in the ultraviolet (UV) \begin{eqnarray} \label{vac} \rho_{\rm tvac}=\langle0|\hat{\rho}_{\rm tvac}|0\rangle=\int dk \frac{k^2\hbar}{4\pi^2c^2}\sqrt{k^2c^2+m^2c^4/\hbar^2}. \end{eqnarray} A commonly used regularisation for this divergence is to artificially impose a UV cutoff. But if we take different UV cutoffs, such as the electroweak scale, the grand unification scale, or the Planck scale, we get different values of the vacuum energy density. Furthermore, the differences between these values are huge: for example, taking the electroweak scale, we get $\rho_{\rm tvac}\sim (10^{11} \rm{eV})^4=10^{48} \rm{ergs/cm}^3$; taking the Planck scale, we have $\rho_{\rm tvac}\sim (10^{27} \rm{eV})^4=10^{112} \rm{ergs/cm}^3$.
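The mismatch between these estimates follows from the quartic scaling of the energy density with the cutoff; a one-line numeric check (order-unity prefactors ignored, scales in eV as quoted above):

```python
import math

observed = 1e-3      # eV, scale matched to the observed vacuum energy density
electroweak = 1e11   # eV
planck = 1e27        # eV

# The vacuum energy density scales as (energy scale)^4, so the discrepancy
# with observation is the fourth power of the ratio of scales.
ratio_ew = (electroweak / observed) ** 4   # 1e56
ratio_pl = (planck / observed) ** 4        # 1e120
print(ratio_ew, ratio_pl)
```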
The ratio of the theoretical to the observational value of the vacuum energy ranges from $10^{56}$ to $10^{120}$. This is the well-known cosmological constant problem \cite{Weinberg:1988cp, Carroll:2000fy, Martin:2012bt, Padilla:2015aaa, Padmanabhan:2002ji}. Which scale we should take is still an open problem. Can we find a UV cutoff from fundamental laws of physics? This is the major issue we will consider in this letter. Here, combining quantum and black hole physics, we find an upper bound for the wave number of a quantum particle, which gives a natural cutoff for the vacuum energy of a scalar field. The rest of the paper is organized as follows. In the next section, we will present the upper limit of the wave number from quantum and black hole physics and consider a cutoff of the vacuum energy of a scalar field. Finally, we will briefly summarize and discuss our results in section III. \section{Upper bound for wave number and a natural cutoff for vacuum energy} For a quantum particle with mass $m$, the de Broglie relations read $E=\hbar \omega,~~~~\vec{p}=\hbar\vec{k}$. According to the mass-energy relation in special relativity, the total energy of a particle is $E^2=p^2c^2+m^2c^4$. Combining the de Broglie relations and the mass-energy relation, we have \begin{eqnarray} \label{medeb} E^2=\hbar^2k^2c^2+m^2c^4. \end{eqnarray} This equation indicates that $E\longrightarrow \infty$ as $k\longrightarrow \infty$. A natural question arises: is this result reasonable? In other words, because $\omega=\sqrt{k^2c^2+m^2c^4/\hbar^2}$, the question can also be stated as: can a particle oscillate arbitrarily fast (or, can the de Broglie wavelength of a particle be arbitrarily small)? If we take into account the effect of gravitation, the answer may be no.
In black hole physics, a system with total energy $E$ has an effective mass $E/c^2$, so it is characterized by a Schwarzschild radius which is given by \begin{eqnarray} \label{sr} r_{\rm c}=\frac{2G}{c^3}\sqrt{\hbar^2k^2+m^2c^2}. \end{eqnarray} The hoop conjecture in black hole physics states: if matter is enclosed in a sufficiently small region, then the system should collapse to a black hole \cite{1972Thorne,1991Flanagan}. Similar assumptions were also suggested in \cite{Hong:2004rq,Aste:2004ba,Japaridze:2015tva}: for example, it was argued that the energy of a system of size $L$ must have an upper bound so as not to collapse into a black hole \cite{Hong:2004rq}. Here we generalize the hoop conjecture to the quantum case: the de Broglie wavelength of a quantum system cannot be arbitrarily small; it should be larger than the characteristic Schwarzschild radius of the quantum system. This can be called the quantum hoop conjecture. This quantum hoop conjecture is supported by earlier works in the literature. A possible connection between gravitation and a fundamental length was discussed in \cite{Mead:1964zz}. From quantum mechanics and classical general relativity, it was shown in \cite{Calmet:2004mp, Calmet:2005mh} that any primitive probe or target used in an experiment must be larger than the Planck length, which implies a device independent limit on possible position measurements. Research in string theory, black hole physics, and quantum gravity also predicts that there exists a minimum measurable length scale which is approximately equivalent to the Planck length $l_{\rm p}$ \cite{Konishi:1989wk, Maggiore:1993rv, Hossenfelder:2012jw, Garay:1994en, Yang:2009vf}. Based on these results, we can conclude that the de Broglie wavelength of any quantum system must not be less than the minimum length scale.
This conclusion is consistent with the quantum hoop conjecture proposed here: the de Broglie wavelength of a quantum system should be larger than its characteristic Schwarzschild radius. In \cite{Casadio:2013uga}, a quantum hoop conjecture was also suggested by constructing the horizon wave-function for quantum mechanical states representing two highly boosted non-interacting particles, which is different from the conjecture we propose here. The quantum hoop conjecture suggested here requires $\lambda>r_{\rm c}$, which gives an upper bound for the wave number \begin{eqnarray} \label{wn} k=\frac{2\pi}{\lambda}<\frac{2\pi}{r_{\rm c}}=\frac{\pi c^3}{G}\left[\hbar^2k^2+m^2c^2\right]^{-\frac{1}{2}}< \frac{\pi c^3}{Gk\hbar}. \end{eqnarray} It is easy to get \begin{eqnarray} \label{kb} k<\sqrt{\pi}l^{-1}_{\rm p}, \end{eqnarray} where $l_{\rm p}=\sqrt{G\hbar/c^3}$ is the Planck length. This bound only holds in the observer's reference frame. Bound (\ref{kb}) also gives an upper limit for the momentum of the particle: $p<\sqrt{\pi}\hbar l^{-1}_{\rm p}$. Obviously, the wave number of a massive particle is less than that of a massless particle. As an application, we apply the bound for the wave number (\ref{kb}) to the vacuum energy of a scalar field. For a quantum particle of a scalar field, there are three degrees of freedom for oscillation: $k=\sqrt{k^2_{\rm x}+k^2_{\rm y}+k^2_{\rm z}}$. So we have $k<\frac{2\sqrt{3}\pi}{r_{\rm c}}<\sqrt{\sqrt{3}\pi}l^{-1}_{\rm p}$, which offers a natural cutoff for the vacuum energy of a scalar field (\ref{vac}) \begin{eqnarray} \label{vac1} \rho_{\rm tvac}=\langle0|\hat{\rho}_{\rm tvac}|0\rangle=\int_0^{k_{\rm max}} dk \frac{k^2\hbar}{4\pi^2c^2}\sqrt{k^2c^2+m^2c^4/\hbar^2}. \end{eqnarray} For $k\gg m$, the integral (\ref{vac1}) is approximately equal to $3\hbar/(16cl^{4}_{\rm p})$, which is close to the value obtained by taking the Planck scale cutoff.
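Both the saturation of the bound and the quoted value of the regularised integral can be verified numerically. The sketch below works in Planck units ($\hbar=c=l_{\rm p}=1$) and in the massless limit, where the claimed density $3\hbar/(16cl_{\rm p}^{4})$ is simply $3/16$:

```python
import math

# Planck units: hbar = c = l_p = 1, massless limit.
# At the bound k = sqrt(pi) the de Broglie wavelength 2*pi/k equals the
# Schwarzschild radius r_c = 2*G*hbar*k/c^3 = 2*k exactly.
k_bound = math.sqrt(math.pi)
lam = 2 * math.pi / k_bound
r_c = 2 * k_bound

# Cutoff for the scalar field (three oscillation degrees of freedom)
k_max = math.sqrt(math.sqrt(3.0) * math.pi)

# rho = int_0^{k_max} (k^2 / (4 pi^2)) * k dk, evaluated by the midpoint rule
N = 200000
h = k_max / N
rho = sum(((i + 0.5) * h) ** 3 for i in range(N)) * h / (4 * math.pi ** 2)
print(lam, r_c, rho)   # rho approaches 3/16
```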
Also based on black hole physics, a cutoff for the vacuum energy of a scalar field was found in \cite{Culetu:2004ta}. \section{Conclusions and discussions} In this letter, we suggested a quantum hoop conjecture: the de Broglie wavelength of a quantum system cannot be arbitrarily small; it must be larger than the characteristic Schwarzschild radius of the quantum system. This conjecture gives an upper bound for the wave number or the momentum of the quantum system. As an application, we found a natural cutoff for the vacuum energy of a scalar field. \begin{acknowledgments} This study is supported in part by the National Natural Science Foundation of China (Grant Nos. 11147028 and 11273010), the Hebei Provincial Natural Science Foundation of China (Grant No. A2014201068), the Outstanding Youth Fund of Hebei University (No. 2012JQ02), the Open Project Program of the State Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, China (No. Y4KF101CJ1), and the Midwest universities comprehensive strength promotion project. \end{acknowledgments} \bibliographystyle{elsarticle-num}
\section{Introduction} The central object of study in this article is the following conjecture. \begin{conj}[Andr\'e-Pink-Zannier] \label{APZ} Let $S$ be a Shimura variety and $\Sigma$ a subset of a generalised Hecke orbit in $S$ (as in~\cite[\S3]{RY}). Then the irreducible components of the Zariski closure of $\Sigma$ are weakly special subvarieties. \end{conj} This conjecture is an important special case of the Zilber-Pink conjectures for Shimura varieties, which has recently been and continues to be a subject of active research. A special case of Conjecture~\ref{APZ} was first formulated in 1989 by Y.~André in~\cite[\S{}X 4.5, p.\,216 (Problem 3)]{Andre}. Conjecture~\ref{APZ} was then stated in the introduction to the second author's 2000 PhD thesis~\cite{Y}\footnote{The statement there uses the terminology 'totally geodesic subvarieties' instead of 'weakly special', but Moonen had proved in~\cite{MoMo} that the two notions are equivalent.}, following discussions with Bas Edixhoven. Both statements refer to classical Hecke orbits, rather than \emph{generalised} Hecke orbits (cf.~\cite[\S3.4.1]{RY}). Zannier has considered questions of this type in the context of abelian schemes and tori. Richard Pink, in his 2005 paper~\cite{Pink}, has formulated and studied this question; he used a generalised notion of Hecke orbit, defined using auxiliary linear representations (cf.~\cite[\S3.4.2]{RY}). Pink proves it for "Galois generic" points of Shimura varieties\footnote{Roughly, the image of the corresponding Galois representation intersects the derived subgroup of the ambient group in an adélically open subgroup. This is too strong to hold in general. See the first author's 2009 PhD thesis~\cite[III.\S7, p.\,59]{R-PhDfull} for a weaker hypothesis that is sufficient and expected to hold.}: this implies in particular that such points are Hodge generic in their connected component. Pink uses equidistribution of Hecke points proved in~\cite{COU} (or in~\cite{EO}).
We refer to the introduction of~\cite{RY} for further background on Conjecture~\ref{APZ}. In the Pila-Zannier approach and most other approaches to Zilber-Pink conjectures, one of the major difficulties is to obtain suitable lower bounds for Galois orbits of points in the ``unlikely locus'' (see~\cite{DR}). In~\cite{RY}, we develop a general approach to Conjecture~\ref{APZ} based on the Pila-Zannier strategy (o-minimality and functional transcendence). In~\cite{RY}, we define generalised Hecke orbits and a natural height function on these orbits, and we prove precise lower bounds for Galois orbits~\cite[Th.~7.4]{RY} under the ``weakly adélic Mumford-Tate conjecture''~\cite[\S7.1]{RY}. Let~$(G,X)$ be a Shimura datum, let~$K\leq G({\mathbb A}_f)$ be a compact open subgroup, and let~$S=Sh_K(G,X)$ be the associated Shimura variety. The main result of \cite{RY} is as follows. \begin{theorem}[Theorem 2.4 of \cite{RY}]\label{main theorem RY} Let $x_0 \in X$. Assume that~$x_0$ satisfies the weakly adélic Mumford-Tate conjecture. Then the conclusion of Conjecture~\ref{APZ} holds for any subset of the generalised Hecke orbit of $[x_0,1]$. \end{theorem} In the present article we prove the conclusions of this theorem \emph{unconditionally} for all Shimura varieties \emph{of abelian type}. This completely generalises the main result of~\cite{Orr} by M.~Orr. Our main result is as follows. \begin{theorem} \label{main theorem} Let~$s_0$ be a point in a Shimura variety~$Sh_K(G,X)$ of abelian type. Let~$Z$ be a subvariety whose intersection with the generalised Hecke orbit of~$s_0$ is Zariski dense in~$Z$. Then~$Z$ is a finite union of weakly special subvarieties of~$S$. \end{theorem} We actually prove the more general statement below, which we believe to be of independent interest. Its assumption is weaker than the `weakly adélic Mumford-Tate conjecture' of~Th.~\ref{main theorem RY}. It is the `uniform integral Tate conjecture' assumption explained in~\S\ref{sec:Def:Tate}.
We refer to~\cite[Def.~3.1]{RY} for the notion of geometric Hecke orbit. By~\cite[Th.~3.2]{RY}, a generalised Hecke orbit is a finite union of geometric Hecke orbits. \begin{theorem} \label{main theorem 2} Let~$s_0=[x_0,1]$ be a point in a Shimura variety~$Sh_K(G,X)$, and assume the uniform integral Tate conjecture for~$x_0$ in~$X$ in the sense of~Definition~\ref{defi:Tate bis}. Let~$Z$ be a subvariety whose intersection with the geometric Hecke orbit of~$s_0$ is Zariski dense in~$Z$. Then~$Z$ is a finite union of weakly special subvarieties of~$S$. \end{theorem} Using Faltings' theorems, we prove in~\S\ref{Tate:abelian type} that points on Shimura varieties of adjoint type and abelian type satisfy this `uniform integral Tate' assumption. Thus Theorem~\ref{main theorem}, in the adjoint type case, is a special case of Theorem~\ref{main theorem 2}. Because Conjecture~\ref{APZ} can be reduced to the adjoint case, we deduce Theorem~\ref{main theorem} for any Shimura variety of abelian type. At the heart of this article is obtaining polynomial lower bounds~\cite[Th.~7.4]{RY} which are unconditional for Shimura varieties of abelian type, or hold in general under the Tate hypothesis. We emphasize that Shimura varieties of abelian type constitute the most important class of Shimura varieties. The Tate hypothesis is used to compare the sizes of Galois orbits with those of the adélic orbits of~\cite[App.~B]{RY}. In our setting, we can easily recover former results of~\cite{Orr}, which were only concerned with~$S$-Hecke orbits (involving a finite set~$S$ of primes). In order to work with whole Hecke orbits, and even geometric Hecke orbits, we use an ``integral and uniform'' refined version of the Tate conjecture. Using generalised Hecke orbits is important for our strategy to work, in particular for the reduction steps in~\cite[\S8]{RY}. Some of the new ideas in this article relate the notion of ``stability'' in Mumford's sense to the Tate hypothesis.
The fine estimates we need use stability not only over the complex numbers, but in a broader context, over~$\mathbb{Z}_p$ and~$\mathbb{Z}$. This is where the ``uniformity and integrality'' in our Tate hypothesis is essential. These ideas originate from~\cite{R-PhD}, part of the first author's 2009 PhD thesis. This article also develops several results of independent interest. Theorem~\ref{pKN} is a~$p$-adic version of a theorem of Kempf-Ness~\cite{KN}. We expect it to be useful in other contexts, and prove it in more generality than needed here. Theorem~\ref{thm:compare reductive} gives a precise and uniform comparison of norms along two closed orbits of reductive groups. \subsection*{Outline of the paper} We define the uniform integral Tate hypothesis in section~\ref{sec:Def:Tate}. In section~\ref{sec:proof}, we reduce Th.~\ref{main theorem 2} to the bounds on Galois orbits established in the rest of the paper, and to the functorial invariance properties of the Tate hypothesis of section~\ref{sec:functoriality}. Since the formal strategy is almost identical to that of \cite[\S8]{RY}, we only give a sketch indicating the necessary adjustments and provide precise references to~\cite{RY}. In section~\ref{sec:functoriality} we also derive the refined version of Faltings' theorems that we use, using arguments of Serre and Noot. We deduce that the uniform integral Tate hypothesis holds in Shimura varieties which are of abelian type and also of adjoint type. The central and technically hardest part of the paper is \S\S\ref{sec:bounds}--\ref{sec:pKN}. There we establish the lower bounds for the Galois orbits of points in geometric Hecke orbits as in \cite{RY}, under the assumptions of Th.~\ref{main theorem 2}. The main result~Th.~\ref{thm:compare reductive} of section~\ref{sec:reductive} is essential to the proofs in section~\ref{sec:bounds}. We derive it in section~\ref{sec:reductive} from the results of sections~\ref{sec:pKN} and~\ref{sec:slopes}.
Section~\ref{sec:pKN} gives a~$p$-adic analogue, Th.~\ref{pKN}, of a theorem of Kempf-Ness. We prove it in greater generality than required for Th.~\ref{thm:compare reductive}, as we believe it will be useful in other contexts. It involves good reduction properties of homogeneous spaces of reductive groups over ${\mathbb Z}_p$, and of closed orbits in linear representations over ${\mathbb Z}_p$. The ideas behind the convexity and slope estimates in \S\ref{sec:slopes} can be better understood in the context of Bruhat-Tits buildings, as in~\cite{R-PhD}. The height functions which are central to our implementation of the Pila-Zannier strategy give examples of the type of functions studied in~\S\ref{sec:slopes}. \subsubsection*{Acknowledgements} We would like to express our greatest gratitude to Laurent Moret-Bailly for discussions and suggestions regarding the content of section~\ref{sec:pKN}. Both authors were supported by Leverhulme Trust Grant RPG-2019-180. The support of the Leverhulme Trust is gratefully acknowledged. The first author is grateful to the IHÉS for its invitation during the preparation of this article. \section{Uniform integral Tate conjecture}\label{sec:Def:Tate} In this section, we define in~Def.~\ref{defi:Tate bis} our main assumption in this paper, the `uniform integral Tate conjecture' property. This is an extension of the conclusions of Faltings' theorem, in the form given in Th.~\ref{Faltings}, to all Shimura varieties. \subsection{Uniform integral Tate conjecture} In~\S\ref{defTate1} and~\S\ref{defTate2} we consider an abstract setting. In~\S\ref{Shimura applied def} we specialise it to the context of Shimura varieties. \subsubsection{}\label{defTate1} Let~$M\leq G$ be (connected) reductive algebraic groups over~${\mathbb Q}$. We identify~$G$ with its image by a faithful representation \[ \rho:G\to GL(d). \] Def.~\ref{defi:Tate} and Theorem~\ref{main theorem 2} will not depend on this choice.
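As a toy illustration of the objects entering the definitions below (not the general setting of the paper), one can compute a centralizer after reduction modulo $p$ by brute force. Here $d=2$, $p=5$ and the matrix $\mathrm{diag}(1,2)$ stands in for the reduction of a group element; its centralizer in $GL_2(\mathbb{F}_5)$ is the diagonal torus, of order $(p-1)^2 = 16$:

```python
from itertools import product

p = 5
A = ((1, 0), (0, 2))  # reduction mod p of a diagonal element with distinct entries

def mat_mul(X, Y):
    """2x2 matrix product over F_p."""
    return tuple(
        tuple(sum(X[i][k] * Y[k][j] for k in range(2)) % p for j in range(2))
        for i in range(2)
    )

centralizer = []
for a, b, c, d in product(range(p), repeat=4):
    M = ((a, b), (c, d))
    invertible = (a * d - b * c) % p != 0
    if invertible and mat_mul(A, M) == mat_mul(M, A):
        centralizer.append(M)
print(len(centralizer))  # 16 = (p-1)^2, the diagonal torus in GL_2(F_5)
```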
The Zariski closures in~$GL(d)_{\mathbb{Z}}$ of the algebraic groups~$M$,~$G$ and~$Z_{G}(M)$ define models over~$\mathbb{Z}$. We write~$G_{\mathbb{F}_p}$ for the special fibre\footnote{For almost all primes~$p$ the group~$G({\mathbb Z}_p)$ is hyperspecial and ~$G_{{\mathbb F}_p}$ is a connected reductive algebraic group over~${\mathbb F}_p$.} and \[ G({\mathbb Z}_p)=G({\mathbb Q}_p)\cap GL(d,{\mathbb Z}_p)\text{ and }G(\widehat{{\mathbb Z}})=\prod_p G({\mathbb Z}_p)=G({\mathbb A}_f)\cap GL(d,\widehat{{\mathbb Z}}). \] We also have a reduction map~$G({\mathbb Z}_p)\to G_{{\mathbb F}_p}({\mathbb F}_p)$. These constructions apply to~$M$ and~$Z_G(M)$ as well. \subsubsection{} \label{defTate2} Let~$U\leq M({\mathbb A}_f)$ be a compact subgroup. For every prime~$p$, we define~$U_p=M({\mathbb Z}_p)\cap U$. We denote by~$U(p)$ the image of~$U_p$ in~$G({\mathbb F}_p)$. We define~${U_p}^0=U_p\cap {H_p}^0({\mathbb Q}_p)$, where~$H_p=\overline{U_p}^{Zar}\leq G_{{\mathbb Q}_p}$ is the Zariski closure, viewed as a~${\mathbb Q}_p$-algebraic subgroup, and~${H_p}^0$ is its neutral Zariski connected component. \begin{definition}[{\bf Uniform integral Tate property}]\label{defi:Tate} We say that a compact subgroup~$U\leq M({\mathbb A}_f)$ \emph{``satisfies the uniform integral Tate'' property with respect to~$M$,~$G$ and~$\rho$} if: \begin{enumerate} \item \label{defi:Tate1} For every~$p$, \begin{subequations} \begin{equation}\label{defi:tate eq1} Z_{G_{\mathbb{Q}_p}}(U_p)= Z_{G_{\mathbb{Q}_p}}({U_p}^0)= Z_{G}(M)_{\mathbb{Q}_p} \end{equation} and \begin{equation}\label{defi:tate eq 1.2} \text{the action of $U_p$ on ${\mathbb{Q}_p}^d$ is semisimple.} \end{equation} (This~\eqref{defi:tate eq 1.2} is equivalent to: $H_p$ is reductive.)
\end{subequations} \item \label{defi:Tate2} For every~$D$, there exists an integer~$M(D)$ such that for every~$p \geq M(D)$ and every~$U'\leq U_p$ of index~$[U_p:U']\leq D$, we have \begin{subequations} \begin{equation}\label{defi:tate eq 2} Z_{G_{\mathbb{F}_p}}(U'(p))=Z_{G_{\mathbb{F}_p}}(M_{\mathbb{F}_p}) \end{equation} and \begin{equation}\label{defi:tate eq 2.2} \text{the action of $U'(p)$ on $\overline{\mathbb{F}_p}^d$ is semisimple.} \end{equation} \end{subequations} (When~$p>d$,~\eqref{defi:tate eq 2.2} is equivalent to: the Nori group, defined below, of~$U'(p)$ is semisimple.) \end{enumerate} \end{definition} In our terminology, \emph{integrality} refers to the second property, over~$\mathbb{F}_p$, on~$U(p)$, and \emph{uniformity} to the fact that the integer~$M(D)$ depends on~$D$ only. \subsubsection{Remarks} \label{rem:Tate} We collect here some facts that will be used throughout this article. \begin{enumerate} \item \label{rem2} For~$p$ large enough, in terms of~$d$, we can use Nori theory~\cite{N}. For a subgroup~$U'(p)\leq G(\mathbb{F}_p)$, the group ${U'(p)}^{\dagger}$ defined in~\eqref{defi daggers} is of the form~$H({\mathbb F}_p)^\dagger$ for an algebraic subgroup~$H\leq G_{{\mathbb F}_p}$ over ${\mathbb F}_p$. We call this~$H$ the \textbf{Nori group} of~$U'(p)$. The property~\eqref{defi:tate eq 2.2} is then equivalent to the fact that~$H$ is a \textbf{reductive} group (see~\cite[Th.~5.3]{SCR}). We also note that~$[H({\mathbb F}_p):H({\mathbb F}_p)^\dagger]$ can be bounded in terms of~$\dim(G)$ (see~\cite[3.6(v)]{N}). \item \label{rem2.2} If~$U'\leq U$ has index~$[U:U']\leq p$, then~$U'(p)^\dagger= U(p)^\dagger$. \item \label{rem3} This ``uniform integral Tate'' property does not depend\,\footnote{Indeed, Def.~\ref{defi:Tate} does not involve~$\rho$ itself, but only the induced models of~$G$ and~$M$.
The algebraic groups~$G_{\mathbb{Q}_p}$ and~$M_{\mathbb{Q}_p}$ do not depend on the integral models, and two models, for almost all~$p$, induce the same local models~$G_{\mathbb{Z}_p}$ and~$M_{\mathbb{Z}_p}$.} on the choice of a faithful representation~$\rho$. \item \label{rem4} The semisimplicity of the action over~$\overline{\mathbb{F}_p}$ is equivalent to the semisimplicity over~$\mathbb{F}_p$. \item \label{passage aux Up} The group~$U\leq M({{\mathbb A}_f})$ ``satisfies the uniform integral Tate'' property with respect to~$M$,~$G$ and~$\rho$ if and only if the subgroup~$\prod_p {U_p}\leq U$ does so. \item Part~\eqref{defi:Tate1} of Def.~\ref{defi:Tate} is satisfied for~$U$ if and only if it is satisfied for some subgroup of finite index in~$U$. \item Property~\eqref{defi:tate eq1} of part~\eqref{defi:Tate1} of Def.~\ref{defi:Tate} is satisfied for~$U_p$ if and only if it is satisfied for a subgroup~$U'$ of~$U_p$: we will have~$U'^0\leq {U_p}^0\leq U \leq M$, and~$Z_G(M)=Z_G(U'^0)\geq Z_G({U_p}^0)\geq Z_G({U_p})\geq Z_G(M)$. \end{enumerate} In view of Remark~\ref{rem:Tate}~(\ref{rem3}) we will, from now on, just say ``satisfies the uniform integral Tate conjecture'' without referring to a particular faithful representation~$\rho$. We deduce from the above facts the following. \begin{lemma}\label{lem U ast} Let~$U''\leq U\leq M(\widehat{\mathbb{Z}})$ be such that~$U''$ satisfies the uniform integral Tate property with respect to~$M$,~$G$ and~$\rho$. Then~$U$ satisfies the uniform integral Tate property with respect to~$M$,~$G$ and~$\rho$. \end{lemma} \subsubsection{}\label{Shimura applied def} We denote by~$(G,X)$ a Shimura datum, by~$K\leq G(\mathbb{A}_f)$ a compact open subgroup, and by~$S=Sh_K(G,X)$ the associated Shimura variety. Fix~$x_0\in X$ and let~$M\leq G$ be the Mumford-Tate group of~$x_0$. Let~$E$ be a field of finite type over $\mathbb{Q}$ such that~$s_0=[x_0,1]\in S(E)$ (such an~$E$ always exists).
We denote by~$\rho_{x_0}:{\rm Gal}(\overline{E}/E)\to M(\mathbb{A}_f)$ the representation associated to $x_0$ (see~\cite[\S4]{RY}), and by~$U\leq M(\mathbb{A}_f)\cap K$ its image. The main hypothesis in Theorem~\ref{main theorem 2} is the following. \begin{definition}\label{defi:Tate bis}We say that~$x_0$ ``satisfies the uniform integral Tate conjecture'' if~$U=\rho_{x_0}({\rm Gal}(\overline{E}/E))$ ``satisfies the uniform integral Tate'' property with respect to~$M$ and~$G$ in the sense of Def.~\ref{defi:Tate}. \end{definition} \subsubsection{} We will make use of the following terminology. \begin{definition}\label{defi:indep} We say that a subgroup~$U\leq M({\mathbb A}_f)$ satisfies the~$\ell$-independence property if it is of the form \[ U=\prod_p U_p \] with~$U_p\leq M({\mathbb Q}_p)$ for every prime~$p$. \end{definition} \section{Proof of the main result}\label{sec:proof} \addtocontents{toc}{\protect\setcounter{tocdepth}{1}} The structure of the proof of Th.~\ref{main theorem 2} is essentially the same as in~\cite{RY}. The main difference is that our hypothesis is the uniform integral Tate property instead of the ``weakly adélic Mumford-Tate conjecture''. Using the results of \S\S\ref{sec:functoriality},\ref{sec:bounds}, we may follow the same proof as~\cite[\S8]{RY}, making the following changes. \subsection{} In the step ``reduction to the Hodge generic case''~\cite[8.1.1]{RY} we make the following changes. Since we work with geometric Hecke orbits~$\mathcal{H}^{g}(x_0)$ instead of generalised Hecke orbits~$\mathcal{H}(x_0)$, we use~\cite[Cor.~3.5]{RY} to remark that, with~$\Sigma^g=\mathcal{H}^g([x_0,1])\cap Z$, the following set is a finite union \[ \Sigma'^g:=\stackrel{-1}{\Psi}(\Sigma^g)=\mathcal{H}^g([x'_1,1])\cup\ldots\cup \mathcal{H}^g([x'_k,1]) \] of geometric Hecke orbits in~$Sh_{K\cap G'({\mathbb A}_f)}(G',X')$.
We replace~``On the other hand, the Mumford-Tate hypothesis [...]'' by the observation that if the geometric Hecke orbit~$\mathcal{H}^g([x_0,1])$ in $Sh(G,X)$ satisfies the Tate conjecture (relative to~$M$ and~$G$), then, by Prop.~\ref{Tate:subdatum}, each of the geometric Hecke orbits~$\mathcal{H}^g([x'_1,1]),\ldots,\mathcal{H}^g([x'_k,1])$ satisfies the Tate conjecture (relative to~$M$ and~$G'$). \subsection{} In the step ``reduction to the adjoint datum''~\cite[8.1.2]{RY} we make the following changes. Instead of~``Using § 4, the Mumford-Tate hypothesis will still be valid even [...]'', we use~Prop.~\ref{Tate:invariance}. Instead of~``In view of § 7, the Mumford-Tate hypothesis [...]'' we use Prop.~\ref{Tate:subdatum}. \subsection{} In~\cite[8.1.3]{RY}, ``Induction argument for factorable subvarieties'', we make the following changes. Instead of~``As explained in § 7, the Mumford-Tate hypothesis [...]'' we use Prop.~\ref{Tate:products}. \subsection{} The last change from~\cite[\S 8]{RY} is in~\cite[8.2.3]{RY}, where we use our Th.~\ref{Galois bounds} instead of the lower bound on the size of Galois orbits~\cite[Th.~7.4]{RY}. We may apply Th.~\ref{Galois bounds} to the Galois image~$U$, because the hypothesis on~$M^{ab}$ is satisfied for Galois images (cf.~\cite[Lem.~7.11]{RY}), and the other hypotheses are satisfied by assumption. (In the case of Shimura data of abelian type, see~\S\ref{Tate:abelian type}.) \section{Functoriality of the Tate condition and independence condition}\label{sec:functoriality} \addtocontents{toc}{\protect\setcounter{tocdepth}{1}} In this section, we verify that the conditions in Definitions~\ref{defi:Tate} and~\ref{defi:indep} are preserved by various natural operations. This is necessary to make simplifying assumptions in the proof of the main theorems (cf.~\cite[\S8.1]{RY}). We also show that the conditions of Def.~\ref{defi:Tate} and~\ref{defi:indep} hold for all Shimura varieties of abelian type.
According to~Remark~\ref{rem:Tate}, Def.~\ref{defi:Tate} does not depend on~$\rho$. It follows from Def.~\ref{defi:indep} that the property that the Galois image satisfies the~$\ell$-independence property does not depend on~$G$, nor on~$\rho$. \subsection{Invariance on the geometric Hecke orbit} \begin{proposition}\label{Tate:invariance} Let~$x_\phi\in \mathcal{H}^g(x_0)$. If~$x_0$ ``satisfies the uniform integral Tate conjecture'', then~$x_\phi$ ``satisfies the uniform integral Tate conjecture''. If the image~$U$ of~$\rho_{x_0}$ satisfies the~$\ell$-independence property in the sense of~Def.~\ref{defi:indep}, then the image~$U'$ of~$\rho_{x_\phi}$ satisfies the~$\ell$-independence property in the sense of~Def.~\ref{defi:indep}. \end{proposition} \begin{proof}Let~$g\in G(\overline{{\mathbb Q}})$ be such that~$\phi=g\phi_0g^{-1}$, let~$L$ be a number field such that~$g\in G(L)$, and let~${\mathbb A}_{f,L}={\mathbb A}_f\otimes_{\mathbb Q} L$ be the ring of ad\`eles of the number field~$L$. Denote by~$U'$ the image of~$\rho_{x_\phi}$. According to~\cite[Prop.~4.3]{RY} we have \[ U'=\phi(U). \] We first prove the last assertion. Assume~$U=\prod_p U_p$. Since $\phi$ is defined over ${\mathbb Q}$, we have \[ U'=\phi(U)=\prod_p \phi(U_p), \] which proves the last assertion. We next treat the semisimplicity over~$\mathbb{Q}_p$ in Def.~\ref{defi:Tate}. Assume that the action of~$U_p$ is semisimple; equivalently, the Zariski closure~$\overline{U_p}^{Zar}$ is reductive. As~$\phi$ is defined over~${\mathbb Q}$, the algebraic group~$\overline{\phi(U_p)}^{Zar}=\phi(\overline{U_p}^{Zar})$ is reductive, or equivalently, the action of~$U'_p=\phi(U_p)$ is semisimple. We now treat the centraliser property of part~\ref{defi:Tate1} of Def.~\ref{defi:Tate}.
For every prime~$p$, we have \begin{multline*} Z_{G_{\mathbb{Q}_p}}(U'_p)=gZ_{G_{\mathbb{Q}_p}}(U_p)g^{-1}\\ =gZ_{G_{\mathbb{Q}_p}}({U_p}^0)g^{-1}= gZ_{G}(M)_{\mathbb{Q}_p}g^{-1}=Z_{G}(gMg^{-1})_{\mathbb{Q}_p}=Z_{G}(\phi(M))_{\mathbb{Q}_p}. \end{multline*} As~$\phi(M)$ is the Mumford-Tate group of~$x_\phi$, we have proved part~\eqref{defi:Tate1} of~Def.~\ref{defi:Tate} for~$x_\phi$. We now treat part \ref{defi:Tate2} of Def.~\ref{defi:Tate}. Note that the component~$g_p$ of~$g$ as an adélic element is in~$G(O_{L\otimes\mathbb{Q}_p})$ for~$p$ large enough. For~$p$ large enough, the group~$\phi(M)({\mathbb Z}_p)=gM({\mathbb Z}_p) g^{-1}$ is hyperspecial and the reduction map \[ g\mapsto \overline{g}:\phi(M)(\overline{{\mathbb Z}_p})\to \phi(M)(\overline{{\mathbb F}_p}) \] is well defined. Let~$m_0\in\mathbb{Z}_{\geq1}$ be such that the above applies for~$p\geq m_0$. Let~$D$ and~$M(D)$ be as in part~\eqref{defi:Tate2} of Def.~\ref{defi:Tate}. Then, for~$p\geq M'(D):=\max\{m_0;M(D)\}$ we have \[ U'(p)=\overline{g_p}U(p)\overline{g_p}^{-1} \] with~$\overline{g_p}$ the reduction of~$g_p$ in~$G(\kappa_L)\leq G(\overline{{\mathbb F}_p})$, where~$\kappa_L$ is the residue field of~$L$ at a prime above~$p$. The semisimplicity follows. For~$p\geq M'(D)$, we also have \begin{multline*} Z_{G_{{\mathbb F}_p}}(U'(p))=\overline{g_p}Z_{G_{{\mathbb F}_p}}(U(p))\overline{g_p}^{-1} =\overline{g_p}Z_{G_{{\mathbb F}_p}}(M_{{\mathbb F}_p})\overline{g_p}^{-1}\\ =Z_{G_{{\mathbb F}_p}}(\overline{g_p}M_{{\mathbb F}_p}\overline{g_p}^{-1})=Z_{G_{{\mathbb F}_p}}(\phi(M)_{{\mathbb F}_p}).\qedhere \end{multline*} \end{proof} \subsection{Passage to a subdatum}\label{passage to sub} \begin{proposition}\label{Tate:subdatum} Let~$\Psi:(G',X')\to (G,X)$ be an injective morphism of Shimura data. Let~$x_\phi\in \mathcal{H}^g(x_0)$ be such that there exists~$x'_\phi$ with~$x_\phi=\Psi\circ x'_\phi$.
If~$x_0$ ``satisfies the uniform integral Tate conjecture'', then~$x'_\phi$ ``satisfies the uniform integral Tate conjecture''. If the image~$U$ of~$\rho_{x_0}$ satisfies the~$\ell$-independence property in the sense of~Def.~\ref{defi:indep}, then the image~$U'$ of~$\rho_{x'_\phi}$ satisfies the~$\ell$-independence property in the sense of~Def.~\ref{defi:indep}. \end{proposition} \begin{proof} By Prop.~\ref{Tate:invariance} we may assume~$x_\phi=x_0$. We identify~$G'$ with its image in~$G$. By~\cite[Prop. 3.5]{RY} we have \[ U=\Psi(U')=U' \] where we denote by~$U'$ the image of~$\rho_{x'_0}$ in $G'(\mathbb{A}_f)$. The semisimplicity of the action of $U'$ is automatic. It follows readily from the definitions and the remark that \[ Z_{G'}(U'_p)=Z_{G}(U'_p)\cap G'=Z_{G}(M)\cap G'=Z_{G'}(M), \] and similarly over~${\mathbb F}_p$ for~$p$ big enough so that~$G'({\mathbb Z}_p)$ is hyperspecial. The last statement follows from \[ U'=\Psi(U')=U=\prod_p U_p.\qedhere \] \end{proof} \subsection{Passage to quotients by central subgroups}\label{passage to quotients} \begin{proposition} Let $F \subset Z(G)$ be a subgroup of the centre~$Z(G)$ of~$G$, and let $G'$ be the quotient $G/F$. Let~$\Psi:(G,X)\to (G',X')$ be the morphism of Shimura data induced by the quotient $G\longrightarrow G'$. If~$x_0$ ``satisfies the uniform integral Tate conjecture'', then~$x'_0=\Psi\circ x_0$ ``satisfies the uniform integral Tate conjecture''. If the image~$U$ of~$\rho_{x_0}$ satisfies the~$\ell$-independence property in the sense of~Def.~\ref{defi:indep}, then the image~$U'$ of~$\rho_{x_0'}$ satisfies the~$\ell$-independence property in the sense of~Def.~\ref{defi:indep}. \end{proposition} \begin{proof} The arguments are similar. Firstly, by~\cite[Prop.~4.3]{RY} we have \[ U'=\Psi(U). \] We use, remarking that~$\Psi(M)$ is the Mumford-Tate group of~$x'_0$, \[ Z_{G'}(U'_p)=Z_{G}(U_p)/F=Z_{G}(M)/F=Z_{G'}(\Psi(M)).
\] (For~$p$ big enough,~$\Psi$ is compatible with the integral models.) Finally, if~$U=\prod_p U_p$, then~$U'=\prod_p \Psi(U_p)$. This proves the assertion about~$\ell$-independence. For the semisimplicity property over~$\mathbb{F}_p$ we can use, for~$p$ large enough, part~\eqref{rem2} of Remark~\ref{rem:Tate} in order to apply Nori theory, together with the remark below. \end{proof} \begin{rem} The subgroup~$U(p)^\dagger$ used by Nori is generated by the elements of order~$p$ of~$U(p)$. From~Lemma~\ref{Sylow} one can deduce that the Nori group of~$\Psi(U(p))$ is~$\Psi(H)$, where~$H$ is the Nori group of~$U(p)$. \end{rem} \begin{rem} This proposition in particular shows that we can restrict ourselves to the case of Shimura varieties where $G$ is semisimple of adjoint type (by taking $F = Z(G)$ in this proposition). \end{rem} \subsection{Compatibility with products} \begin{proposition}\label{Tate:products} Assume $G$ to be of adjoint type and not simple. Let $$ (G,X) = (G_1, X_1) \times (G_2, X_2) $$ be a decomposition of $(G,X)$ as a product. We denote by~$\pi_1:G\to G_1$ and~$\pi_2:G\to G_2$ the projection maps. If~$x_0$ ``satisfies the uniform integral Tate conjecture'', then~$x_i=\pi_i\circ x_0$ ``satisfies the uniform integral Tate conjecture''. If the image~$U$ of~$\rho_{x_0}$ satisfies the~$\ell$-independence property in the sense of~Def.~\ref{defi:indep}, then the image~$U_i$ of~$\rho_{x_i}$ satisfies the~$\ell$-independence property in the sense of~Def.~\ref{defi:indep}. \end{proposition} The proof is the same as above. We recall that~$\rho_{x_i}=\pi_i\circ \rho_{x_0}$ by~\cite[Prop.~4.3]{RY}. We also recall that the Mumford-Tate group of~$x_i$ is~$M_i=\pi_i(M)$, and that \[ G_i\cap Z_{G}(M)=Z_{G_i}(M_i). \] \subsection{Shimura varieties of abelian type}\label{Tate:abelian type} \begin{proposition} All Shimura varieties of abelian type and adjoint type satisfy the conditions of Def.~\ref{defi:Tate} and~\ref{defi:indep}.
More precisely, let~$(G,X)$ be a Shimura datum of abelian type with~$G$ of adjoint type, and~$S$ be an associated Shimura variety. Then for every point~$s_0=[x_0,1]$ of~$S$, \begin{itemize} \item the point~$x_0$ satisfies the uniform integral Tate conjecture, \item and there exists a field~$E$ of finite type over~${\mathbb Q}$ such that~$U=\rho_{x_0}(Gal(\overline{E}/E))$ satisfies the~$\ell$-independence condition. \end{itemize} \end{proposition} \begin{proof} By definition of abelian type Shimura data (\cite[\S3.2]{Upr}, \cite[Prop.~2.3.10]{Deligne}), there exists an isomorphism of Shimura data \[ (G,X)\simeq({G'}^{ad},{X'}^{ad}) \] with~$(G',X')$ of Hodge type. Using Prop.~\ref{Tate:invariance} we may replace~$s_0=[x_0,1]$ by any point of its geometric Hecke orbit, and assume~$x_0$ belongs to the image of~$X'$ in~$X\simeq {X'}^{ad}$: there exists~$x_0'\in X'$ such that~$x_0={x'_0}^{ad}$. According to~\S\ref{passage to quotients}, we may substitute~$(G,X)$ with~$(G',X')$ and~$x_0$ by~$x_0'$. By definition of Hodge type data, there exists an injective morphism of Shimura data~$(G,X)\to (\mathfrak{H}_{g},GSp(2g))$, the latter being the Shimura datum of the moduli space~$\mathcal{A}_g$. Let~$\tau_0$ be the image of~$x_0$ in~$\mathfrak{H}_{g}$. According to~\S\ref{passage to sub}, we may assume~$(G,X)=(\mathfrak{H}_{g},GSp(2g))$ and~$x_0=\tau_0$. The proposition then follows from Th.~\ref{Faltings} and Cor.~\ref{coro Faltings}. \end{proof} \subsection{Uniform integral Faltings' theorem over fields of finite type} \begin{theorem}[Faltings]\label{Faltings} Let~$K$ be a field of finite type over~${\mathbb Q}$, and let~$A/K$ be an abelian variety. Fix an algebraic closure~$\overline{K}$ of~$K$, and denote by \[ T_A\approx \widehat{\mathbb{Z}}^{2\dim(A)} \] the~$\widehat{{\mathbb Z}}$-linear Tate module, on which we have a continuous~$\widehat{\mathbb{Z}}$-linear representation \begin{equation}\label{Faltings rep} \rho=\rho_A:Gal(\overline{K}/K)\to GL_{\widehat{\mathbb{Z}}}(T_A).
\end{equation} We assume that~${\rm End}_K(A)={\rm End}_{\overline{K}}(A)$ and we let~${\rm End}(A/K)$ act on~$T_A$ and denote by \[ Z:=\{b\in{\rm End}_{\widehat{{\mathbb Z}}}(T_A)|\forall a\in{\rm End}(A/K),[b,a]=0\} \] the $\widehat{{\mathbb Z}}$-algebra which is the centraliser of~${\rm End}(A/K)$ in~${\rm End}_{\widehat{\mathbb{Z}}}(T_A)\approx Mat({2\dim(A)},\widehat{{\mathbb Z}})$. We denote the image of~$\rho$ by \begin{equation}\label{U in Falt} U:=\rho(Gal(\overline{K}/K)). \end{equation} Then, for every~$d\in\mathbb{Z}_{\geq1}$, there exists some~$M(A,K,d)\in \mathbb{Z}_{\geq1}$ such that: for every open subgroup~$U'\leq U$ of index at most~$d$, the subalgebra \[ \widehat{{\mathbb Z}}[U']\leq Z \] is open, of index at most~$M(A,K,d)$. \end{theorem} This statement follows from Faltings' theorems. A reference in the case where~$K$ is a number field is~\cite[Th.~1, Cor.~1.1]{MW}. Because we lack a reference, we give a specialisation argument which reduces the theorem for general fields of finite type over~${\mathbb Q}$ to the case of number fields. In view of~Def.~\ref{defi:Tate} we will prove the following refinement of Th.~\ref{Faltings}. \begin{proposition}\label{prop Falt refined} The same conclusion holds if we replace~\eqref{U in Falt} by \begin{equation}\label{U in Falt prod} U:=\prod_p {U_p}^0 \end{equation} with~$U_p:=\rho({\rm Gal}(\overline{K}/K))\cap GL_{\mathbb{Z}_p}(T_A\otimes\mathbb{Z}_p)$. \end{proposition} The refinement will use the following results. \begin{theorem}[Serre]\label{Serre} In the same situation, assume moreover that~$K$ is a number field. Then there exists a finite extension~$L/K$ such that the Galois image~$U:=\rho({\rm Gal}(\overline{L}/L))$ \begin{enumerate} \item is such that~$U$ satisfies the~$\ell$-independence condition in the sense of Def.~\ref{defi:indep}, (\cite[136.
Th.~1, p.34]{S4}, \cite[\S3.1]{SCrit}) \item and such that the~$U_p$ are Zariski connected, (\cite[133.~p.\,15 ;135.~2.2.3 p.\,31]{S4}, \cite[6.14, p\,623]{LP}) \item and such that~$U\leq M(\widehat{\mathbb{Z}})$ where~$M$ is the Mumford-Tate group of~$A$. (cf.~\cite[\S4]{RY} and~\S\ref{Tate:abelian type}.) \end{enumerate} \end{theorem} \begin{proof}Let~$\eta={\rm Spec}(K)$ and~$\overline{\eta}={\rm Spec}(\overline{K})$. Following~\cite[\S1.2 and Cor.~1.5]{Noot}, there exists a number field~$F\leq K$, and an abelian variety~$A_F$ over~$F$ such that \begin{itemize} \item We have an identification of Tate modules~$T:=T_A\simeq T_{A_F}$, \item We have an identity (cf.~\cite[Cor.~1.5]{Noot}) \begin{equation}\label{Noot End} {{\rm End}}_K(A)\simeq {{\rm End}}_F(A_F) \end{equation} as subalgebras of~$B:={\rm End}_{\widehat{{\mathbb Z}}}(T)$, \item We have a commutative diagram \[ \begin{tikzcd} Gal(\overline{K}/K)\arrow{d}{\rho}\arrow[hookleftarrow]{r} & D_F \arrow[twoheadrightarrow]{r} & Gal(\overline{F}/F)\arrow{d}{\rho'}\\ {\rm End}(T_A)&\arrow[equals]{l}{\rm End}(T)\arrow[equals]{r}& {\rm End}(T_{A_F}). \end{tikzcd} \] \end{itemize} The commutativity implies that~$U_F:=\rho'(Gal(\overline{F}/F))$ satisfies \[ U_F=\rho(D_F) \] and \[ U_F=\rho(D_F)\leq U:=\rho(Gal(\overline{K}/K)). \] By~Th.~\ref{Serre}, after possibly passing to a finite extension of~$F$ and the corresponding finite extension of~$K$, we may assume \[ U_F=\prod_p {(U_F)_p}^0. \] We note that~$(U_F)_p\leq U_p$ and thus~${(U_F)_p}^0\leq {U_p}^0$. We deduce \[ U_F\leq \widetilde{U}:=\prod_p {U_p}^0. \] We will prove the refinement Prop.~\ref{prop Falt refined} of Th.~\ref{Faltings} with \[ M(A,K,d)=M(A_F,F,d). \] Fix~$d$ and an open subgroup~$U'\leq \widetilde{U}$ of index at most~$d$. We denote \[ U'_F=U'\cap U_F. \] We first note that~$U'_F\leq U_F$ is a subgroup of index at most~$d$.
We have, as~$\widehat{\mathbb{Z}}$-subalgebras of~$B:=Mat({2\dim(A)},\widehat{{\mathbb Z}})$, \[ \widehat{{\mathbb Z}}[U'_F]\leq \widehat{{\mathbb Z}}[U']. \] From~\eqref{Noot End} we have \[ Z=Z_F:= \{b\in{\rm End}_{\widehat{{\mathbb Z}}}(T_A)|\forall a\in{\rm End}(A_F/F),[b,a]=0\}. \] We use the number field case of the theorem (see~\cite[Th.~1, Cor.~1.1]{MW}) for~$A_F$ and~$d$ and~$U'_F$ and get \[ \left[Z_F:\widehat{{\mathbb Z}}[U'_F]\right]\leq M(A_F,F,d). \] We note that~$\widehat{{\mathbb Z}}[U'_F]\leq Z$ because~$U'_F\leq U$ and~$U$ commutes with the action of~${\rm End}(A)$ (all the endomorphisms are rational over~$K$). Finally, \[ \widehat{{\mathbb Z}}[U'_F]\leq \widehat{{\mathbb Z}}[U']\leq Z, \] hence \[ \left[Z:\widehat{{\mathbb Z}}[U']\right]\leq \left[Z:\widehat{{\mathbb Z}}[U'_F]\right] = \left[Z_F:\widehat{{\mathbb Z}}[U'_F]\right] \leq M(A_F,F,d). \]\end{proof} \begin{corollary}\label{coro Faltings} Choose an isomorphism~$H^1(A;\mathbb{Z})\simeq \mathbb{Z}^{2\dim(A)}$ and denote by~$\rho:GL(H^1(A;\mathbb{Z}))\to GL(2\dim(A))$ the corresponding isomorphism. There exists~$c(A)$ such that the following holds. The subgroup~$U\leq M(\widehat{\mathbb{Z}})$ satisfies the uniform integral Tate property with respect to~$M$,~$GL(2g)$ and~$\rho$ in the sense of Def.~\ref{defi:Tate}, with~$M(D):=\max\{c(A);M(A,K,D)\}$. The subgroup~$U\leq M(\widehat{\mathbb{Z}})$ satisfies the uniform integral Tate property with respect to~$M$,~$GSp(2g)$ and~$\rho$ in the sense of Def.~\ref{defi:Tate} with~$M(D):=\max\{c(A);M(A,K,D)\}$. \end{corollary} We only treat the case of~$GL(2g)$, as the case of~$GSp(2g)$ follows directly. \begin{proof}Thanks to Lemma~\ref{lem U ast}, we may assume~$U=\prod_p {U_p}^0$. We then have \[ \widehat{\mathbb{Z}}[U]=\prod_p\mathbb{Z}_p[{U_p}^0]. \] We use Prop.~\ref{prop Falt refined}.
Then \begin{itemize} \item for every~$p$, the algebra~${\mathbb{Q}}_p[U_p]={\mathbb{Q}}_p[{U_p}^0]$ is the commutant of~${\rm End}(A/K)\otimes\mathbb{Q}_p=Z_{{\rm End}_{\mathbb{Q}_p}(H^1(A;\mathbb{Q}_p))}(M)$. This implies~\eqref{defi:tate eq1}. Because~${\rm End}(A/K)\otimes\mathbb{Q}_p$ is a semisimple algebra, so is its commutant in~${\rm End}_{\mathbb{Q}_p}(H^1(A;\mathbb{Q}_p))$, and thus the action of~$U_p$ is semisimple. \item for every~$D$, and every~$p\geq M(A,K,D)$, and every~$U'\leq U(p)$ of index at most~$D$, the algebra~${\mathbb{F}}_p[U']$ is the commutant of~${\rm End}(A/K)\otimes\mathbb{F}_p$, and is equal to~${\mathbb{F}}_p[M(\mathbb{F}_p)]$. For~$p\gg 0$, depending only on~${\rm End}(A/K)$ and~$M$, the action of~${\mathbb{F}}_p[M(\mathbb{F}_p)]$ is semisimple.\qedhere \end{itemize} \end{proof} We deduce the following, using the well-known relation between Galois representations on the Tate module, and Galois action on isogeny classes in the Siegel modular variety~$\mathcal{A}_g$ (see~\cite{UY}). \begin{corollary} Let~$s_0$ be a point in~$\mathcal{A}_g=Sh_{GL(2g,\widehat{\mathbb{Z}})}(\mathfrak{H}_g,GSp(2g))$. Then~$s_0$ satisfies the uniform integral Tate conjecture in the sense of Def.~\ref{defi:Tate bis}. \end{corollary} \section{Polynomial Galois bounds}\label{sec:bounds} This section is at the heart of this paper. We obtain suitable lower bounds for Galois orbits of points in generalised Hecke orbits under much weaker assumptions than those made in \cite{RY} (in particular, as seen above, they are satisfied by all Shimura varieties of abelian type). \subsection{Statement} We use the notations~$\succcurlyeq$ and~$\approx$ of~\cite[Def.~6.1]{RY} for polynomial domination and polynomial equivalence of functions. For the definition of~$H_f(\phi)$ we refer to~\cite[App.~B]{RY}. For the MT property we refer to~\cite[\S7, Def.~7.1]{RY}, and we refer to~\cite[\S7.4]{RY} for the fact that hypothesis~\eqref{Galois bound 1} below is satisfied for Galois images.
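For the reader's convenience, we spell out the shape in which polynomial domination will be used (our paraphrase, in the form instantiated by the displays later in this section): for functions~$f,g\colon W({\mathbb A}_f)\to{\mathbb R}_{\geq1}$,

```latex
% Our paraphrase of polynomial domination, as instantiated below.
\[
f\succcurlyeq g
\quad\Longleftrightarrow\quad
\exists\, a,c\in{\mathbb R}_{>0},\ \forall w,\quad f(w)\geq a\cdot g(w)^{c},
\]
```

and~$f\approx g$ means that both~$f\succcurlyeq g$ and~$g\succcurlyeq f$ hold.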
\begin{theorem}\label{Galois bounds} Let~$M \leq G$ be connected reductive $\mathbb{Q}$-groups. Let~$U\leq M({\mathbb A}_f)$ be a subgroup satisfying the following. \begin{enumerate} \item \label{Galois bound 1} The image of~$U$ in~$M^{ab}$ is MT in $M^{ab}$. \label{thm:galois bounds H1} \item \label{Galois bound 2} The group~$U$ satisfies the uniform integral Tate and~$\ell$-independence properties as in Def.~\ref{defi:Tate} and~\ref{defi:indep}.\label{thm:galois bounds H2} \item \label{Galois bound 3} For every~prime $p$, $U_p$ is Zariski connected.\label{thm:galois bounds H3} \end{enumerate} Denote by~$\phi_0:M\to G$ the identity homomorphism and by~$W=G\cdot \phi_0$ its conjugacy class. Then, as~$\phi$ varies in~$W(\mathbb{A}_f)$, for any compact open subgroup~$K\leq G(\mathbb{A}_f)$, with~$K_M=K\cap M(\mathbb{A}_f)$, we have \[ [\phi(U):\phi(U)\cap K]\approx [\phi(K_M):\phi(K_M)\cap K]\succcurlyeq H_f(\phi) \] as functions~$W(\mathbb{A}_f)\to \mathbb{Z}_{\geq1}$. \end{theorem} \subsubsection{Reduction to a local problem} From~\cite[Th.~B.1]{RY} we already have \[ [\phi(K_M):\phi(K_M)\cap K]\succcurlyeq H_f(\phi), \] and, because~$U\leq K_M$, we have \[ [\phi(U):\phi(U)\cap K]\leq [\phi(K_M):\phi(K_M)\cap K]. \] Thus it will be enough to prove \begin{equation}\label{to prove} [\phi(U):\phi(U)\cap K]\succcurlyeq [\phi(M(\widehat{\mathbb{Z}})):\phi(M(\widehat{\mathbb{Z}}))\cap K]. \end{equation} Since~$K$ and~$G(\widehat{\mathbb{Z}})$ are commensurable, we may replace~$K$ by~$G(\widehat{\mathbb{Z}})$. In view of Remark~\ref{rem:Tate}~\eqref{passage aux Up}, we may replace~$U$ by~$\prod_pU_p$, which is smaller. Then the required inequality~\eqref{to prove} can be rewritten in the product form \[ \prod_p[\phi(U_p):\phi(U_p)\cap G(\mathbb{Z}_p)]\succcurlyeq \prod_p[\phi(M(\mathbb{Z}_p)):\phi(M(\mathbb{Z}_p))\cap G(\mathbb{Z}_p)], \] and thus the problem can be studied prime by prime.
More precisely, it will be enough to prove \begin{itemize} \item that there exists~$c\in\mathbb{R}_{>0}$ such that, for almost all primes, \begin{multline}\label{precise-bound} \forall \phi\in W(\mathbb{Q}_p), [\phi(U_p):\phi(U_p)\cap G(\mathbb{Z}_p)]\geq\\ [\phi(M(\mathbb{Z}_p)):\phi(M(\mathbb{Z}_p))\cap G(\mathbb{Z}_p)]^{c}, \end{multline} \item and, for the finitely many remaining primes, that we have the polynomial domination, as functions~$W(\mathbb{Q}_p)\to \mathbb{R}_{\geq 0}$, \[ [\phi(U_p):\phi(U_p)\cap G(\mathbb{Z}_p)]\succcurlyeq [\phi(M(\mathbb{Z}_p)):\phi(M(\mathbb{Z}_p))\cap G(\mathbb{Z}_p)]. \] Namely, that there exist~$a(p),c(p)\in\mathbb{R}_{>0}$ such that \begin{multline}\label{imprecise-bound} \forall \phi\in W(\mathbb{Q}_p), [\phi(U_p):\phi(U_p)\cap G(\mathbb{Z}_p)]\geq\\ a(p)\cdot [\phi(M(\mathbb{Z}_p)):\phi(M(\mathbb{Z}_p))\cap G(\mathbb{Z}_p)]^{c(p)}. \end{multline} \end{itemize} By the argument from~\cite[Proof of Cor.~B.2]{RY}, it will be sufficient, instead of~\eqref{precise-bound}, to prove: there exist~$a,c\in\mathbb{R}_{>0}$ such that, for almost all primes, \begin{multline}\label{precise with a} \forall \phi\in W(\mathbb{Q}_p), [\phi(U_p):\phi(U_p)\cap G(\mathbb{Z}_p)]\geq\\ a\cdot [\phi(M(\mathbb{Z}_p)):\phi(M(\mathbb{Z}_p))\cap G(\mathbb{Z}_p)]^{c}. \end{multline} This is the content of the following statement. \begin{theorem}[Local Galois bounds]\label{thm:local Galois} In the setting of Th.~\ref{Galois bounds}, there exist~$a,c\in{\mathbb R}_{>0}$, and for each~$p$, there exists~$b(p)\in{\mathbb R}_{>0}$, such that \[ [\phi(U_p):\phi(U_p)\cap K_p]\geq b(p)\cdot [\phi(M(\mathbb{Z}_p)):\phi(M(\mathbb{Z}_p))\cap K_p]^{c} \] and such that~$b(p)\geq a$ for almost all~$p$. \end{theorem} We prove~\eqref{imprecise-bound} in~\S\ref{every prime}; it is deduced from the functoriality of heights. We prove~\eqref{precise-bound}, for almost all primes, in~\S\ref{almost all primes}; it requires the new tools developed in this article.
For reference, we rephrase ``The image of~$U$ in~$M^{ab}$ is MT in $M^{ab}$'' as follows. We denote by~$ab_M:M\to M^{ab}:=M/M^{der}$ the abelianisation map. Then there exists~$C_{MT}\in \mathbb{Z}_{\geq 1}$ such that \begin{equation}\label{defi CMT} \forall p, [M^{ab}(\mathbb{Z}_p):ab_M(U_p)]\leq C_{MT}. \end{equation} Because~$\exp(p\mathfrak{m}^{ab}_{\mathbb{Z}_p})$ is a~$p$-group, its action on~$M^{ab}(\mathbb{Z}_p)/ab_M(U_p)$ is trivial when~$p>C_{MT}$: we have \begin{equation}\label{defi CMT 2} \forall p>C_{MT},\exp(p\mathfrak{m}^{ab}_{\mathbb{Z}_p}) \leq ab_M(U_p). \end{equation} \subsection{For every prime}\label{every prime} We fix a prime~$p$. Let~$f_1,\ldots,f_{k}$ be a basis of the~$\mathbb{Q}_p$-Lie algebra~$\mathfrak{u}_p$ of~$U_p$. Replacing each~$f_i$ by a sufficiently small scalar multiple, we may assume that each~$u_i=\exp(f_i)$ converges and belongs to~$U_p$. By~\eqref{Galois bound 2} of~Th.~\ref{Galois bounds} and~\eqref{defi:tate eq1} of~Def.~\ref{defi:Tate}, we have \[ Z_{G_{\mathbb{Q}_p}}({U_p})=Z_{G_{\mathbb{Q}_p}}({U_p}^0)=Z_{G_{\mathbb{Q}_p}}(\mathfrak{u}_p)=Z_{G_{\mathbb{Q}_p}}(\{f_1,\ldots,f_k\}). \] We define \[ v=(f_1,\ldots,f_k)\in \mathfrak{g}^k\stackrel{d\rho}{\hookrightarrow} E:={\rm End}(V_{\mathbb{Q}_p})^k. \] For the induced action of~$G$ on~$E$ we have \[ Z_{G_{\mathbb{Q}_p}}(U_p)=Stab_{G_{\mathbb{Q}_p}}(v). \] By our assumption, \[ Z_{G_{\mathbb{Q}_p}}(U_p)=Stab_{G_{\mathbb{Q}_p}}(\phi_0)=Z_G(M)_{\mathbb{Q}_p}. \] As a consequence, we have a well-defined isomorphism~$W\to G\cdot v$ of homogeneous varieties, defined over~$\mathbb{Q}_p$ and given by \[ g\cdot Z_G(M)_{\mathbb{Q}_p}\mapsto g\cdot v. \] From part~\eqref{defi:Tate1} of Def.~\ref{defi:Tate}, the Zariski closure~$\overline{U_p}^{Zar}$ is reductive. We may thus apply~\cite{Ri-Conj}, and deduce that the induced map \[ \iota:W\to E \] is a closed affine embedding. We use the standard norm on~$E\simeq {\mathbb{Q}_p}^{\dim(V)^2\cdot k}$.
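As a minimal numerical illustration of this norm (ours, not used in the proofs): writing~$\norm{(x_1,\ldots,x_N)}=\max_i|x_i|_p$ for the standard norm on~${\mathbb Q}_p^N$, we have, for a unit~$u\in{\mathbb Z}_p^\times$,

```latex
% Illustrative computation with the standard p-adic sup-norm (u a p-adic unit).
\[
\norm{(p^{-2},\,u,\,p)}=\max\{p^{2},\,1,\,p^{-1}\}=p^{2},
\qquad
\max\{1;\norm{(p,\,p^{3})}\}=\max\{1;\,p^{-1}\}=1.
\]
```

In particular, the quantity~$\max\{1;\norm{\cdot}\}$ appearing just below equals~$1$ on integral vectors and records only denominators.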
We denote by~$H_\iota$ the local Weil height associated to this embedding, which is given by \begin{multline} H_\iota:\phi\mapsto H_p(g\cdot v):=\max\{1;\norm{g\cdot v}\}\\=\max\{1;\norm{g\cdot f_1};\ldots;\norm{g\cdot f_k}\}. \end{multline} By functoriality properties of height functions, the functions~$H_\iota$ and~$\phi\mapsto H_p(\phi)$ are polynomially equivalent. Namely, there are~$a(p)$ and~$c(p)$ such that \[ H_\iota\geq a(p)\cdot H_p(\phi)^{c(p)}. \] We denote by~$U'\leq U_p$ the~$p$-adic Lie subgroup generated by \[\{\exp(f_1);\ldots;\exp(f_k)\}.\] We have \[ [\phi(U_p):\phi(U_p)\cap G(\mathbb{Z}_p)]\geq [\phi(U'):\phi(U')\cap G(\mathbb{Z}_p)]. \] Using~\cite[Th.~A3]{RY}, we also have \[ [\phi(U'): \phi(U')\cap K_p]\geq H_\iota(\phi). \] \subsubsection{Remark} Note that the above bound already implies the Andr\'e-Pink-Zannier conjecture for $S$-Hecke orbits. This is more general than the result of Orr (unpublished) for Shimura varieties of abelian type, and less precise than \cite{RY2}, which proves a strong topological form under a weaker hypothesis. The method of Orr relies on Masser-W\"ustholz bounds, and~\cite{RY2} relies ultimately on $S$-adic Ratner theorems through the work of~\cite{RZ}. \subsection{For almost all primes}\label{almost all primes} \subsubsection{Construction of tuples} We denote by \[ Y\mapsto \overline{Y}:\mathfrak{m}_{\mathbb{Z}_p}\to\mathfrak{m}_{\mathbb{F}_p}\text{ and }\pi_p: U_p\to M_{\mathbb{F}_p}(\mathbb{F}_p) \] the reduction modulo~$p$ maps, and define \[ U(p):=\pi_p(U_p) \] the image of~$U_p$ in~$M_{\mathbb{F}_p}(\mathbb{F}_p)$. We denote the subgroup of~$U(p)$ generated by its unipotent elements by \begin{equation}\label{defi daggers} U(p)^\dagger\text{ and by }U_p^\dagger:=\pi_p^{-1}(U(p)^\dagger) \end{equation} its inverse image in~$U_p$. We define \begin{equation}\label{defi nu} \nu=\mathfrak{m}^{der}_{\mathbb{Z}_p}+p\cdot \mathfrak{z(m)}_{\mathbb{Z}_p}.
\end{equation} \begin{proposition}\label{prop X Y} We consider the setting of Theorems~\ref{thm:local Galois} and~\ref{Galois bounds}. For almost all~$p$, there exist \[ X_1,\ldots,X_k,Y_1,\ldots,Y_l\in\mathfrak{m}_{{\mathbb Z}_p} \] satisfying the following. \begin{enumerate} \item \label{XY1} The exponentials~$\exp(X_1),\ldots,\exp(X_k)$ converge and topologically generate~$U_p^\dagger$. \item \label{XY2} We have \[ Y_1,\ldots,Y_l\in \mathfrak{u}:=\mathbb{Z}_p\cdot X_1+\ldots+\mathbb{Z}_p\cdot X_k. \] \item \label{XY3} We have \[ \frac{1}{p}\cdot Y_1\cdot {\mathbb Z}_p+\ldots+\frac{1}{p}\cdot Y_l\cdot {\mathbb Z}_p = \mathfrak{z(m)}_{\mathbb{Z}_p}\pmod{p\cdot\mathfrak{m}_{\mathbb{Z}_p}}. \] \item \label{XY4} We have \begin{equation}\label{PropNori:Centalisateurs} Z_{G_{{\mathbb F}_p}}(M_{{\mathbb F}_p})= Z_{G_{{\mathbb F}_p}}\left(\left\{\pi_p(X_1),\ldots,\pi_p(X_k),\overline{\frac{1}{p}Y_1},\ldots,\overline{\frac{1}{p}Y_l}\right\}\right). \end{equation} \end{enumerate} \end{proposition} \begin{proof} Let~$u_1,\ldots,u_i$ be unipotent generators of~$U(p)^\dagger$. Because~$U(p)^\dagger\leq U(p)=\pi_p(U_p)$, we may write \[ u_1=\pi_p(x_1),\ldots,u_i=\pi_p(x_i) \] with~$x_1,\ldots,x_i\in U_p$. By definition of~$U_p^\dagger$, we have~$x_1,\ldots,x_i\in U^\dagger_p$. The compact group~$\ker(\pi_p)\leq U_p\leq M(\mathbb{Z}_p)$ is topologically of finite type. We choose a topologically generating family~$x_{i+1},\ldots,x_k$ of~$\ker(\pi_p)$. By construction, \begin{equation}\label{defi xs} x_1,\ldots,x_k\text{ generate~$U_p^\dagger$.} \end{equation} Moreover, the~$\pi_p(x_i)$ are unipotent. By~\cite[Prop.~A.1]{RY}, the series~$X_1=\log(x_1),\ldots,X_k=\log(x_k)$ converge, and, for~$p>d$, we have~$X_1,\ldots,X_k\in \mathfrak{m}_{\mathbb{Z}_p}$. By~\cite[Lem.~A.2]{RY} we have \begin{equation}\label{defi X} x_1=\exp(X_1),\ldots,x_k=\exp(X_k). \end{equation} We define \begin{equation}\label{defi u} \mathfrak{u}=\mathbb{Z}_p\cdot X_1+\ldots+\mathbb{Z}_p\cdot X_k\qquad u=(X_1,\ldots,X_k). \end{equation} For~$X\in \mathfrak{m}_{\mathbb{Z}_p}\smallsetminus\nu$, its reduction in~$\mathfrak{m}_{\mathbb{F}_p}$ is not nilpotent. Thus, by~\cite[Prop.~A.1]{RY}, the series~$\exp(X)$ does not converge. Consequently we have \[ X_1,\ldots,X_k\in\nu\text{ and thus }\mathfrak{u}\leq \nu. \] We define \[ \pi_\nu:\nu\to \overline{\nu}:=\nu\otimes\mathbb{F}_p, \] and denote the image of~$\mathfrak{u}$ by \[ \overline{\mathfrak{u}}\leq \overline{\nu}. \] From~\eqref{defi nu}, we have \begin{equation}\label{defi nubar} \overline{\nu}=\mathfrak{m}^{der}~(\bmod~p\nu)+p\cdot \mathfrak{z(m)}~(\bmod~{p\nu}). \end{equation} We notice that \begin{equation}\label{same rep} \mathfrak{m}_{\mathbb{F}_p}=\mathfrak{m}^{der}_{\mathbb{F}_p}+\mathfrak{z(m)}_{\mathbb{F}_p}\text{ and }\overline{\nu}\simeq \mathfrak{m}^{der}_{\mathbb{F}_p}+\frac{p\cdot \mathfrak{z(m)}}{p^2\cdot\mathfrak{z(m)}} \end{equation} are isomorphic $\mathbb{F}_p$-linear representations of~$M(\mathbb{Z}_p)$ and~$M(\mathbb{F}_p)$, and thus of~$U_p$ and~$U(p)$ as well. We consider \[ ab_{\mathfrak{m}}:\mathfrak{m}\to \mathfrak{m}^{ab}:=\mathfrak{m}/\mathfrak{m}^{der}. \] Let us prove the claim \begin{equation}\label{reciprocite p p2} p\cdot\mathfrak{m}^{ab}\leq ab_{\mathfrak{m}}(\mathfrak{u}). \end{equation} \begin{proof} Let~$Z\in p\cdot\mathfrak{m}^{ab}$. Let~$z=\exp(Z)\in M^{ab}(\mathbb{Z}_p)$. From~\eqref{defi CMT 2}, when~$p>C_{MT}$, there exists~$y\in U_p$ with~${\rm ab}_M(y)=z$. Assume~$p\gg0$, so that the algebraic tori~$Z(M)_{\mathbb{Z}_p}$ and~$M^{ab}$ have good reduction, and assume furthermore~$p>\#\ker(Z(M)\to M^{ab})$ so that the differential of the isogeny~$Z(M)\to M^{ab}$ induces a $\mathbb{Z}_p$-isomorphism~$\mathfrak{z(m)}\to\mathfrak{m}^{ab}$.
Thus, there exists~$Z'\in p\cdot\mathfrak{z(m)}$ with~$ab_{\mathfrak{m}}(Z')=Z$. Let~$z'=\exp(Z')$. We have~$\overline{z'}\in Z(M)(\mathbb{F}_p)^\dagger$ and, because~$Z(M)_{\mathbb{Z}_p}$ has good reduction, we have~$Z(M)(\mathbb{F}_p)^\dagger=\{1\}$. We also have~$y\in M^{der}(\mathbb{Z}_p)\cdot z'$. Thus \[ \overline{y}\in U(p)\cap M^{der}(\mathbb{F}_p). \] Let~$\gamma=c(\dim(G))$ from Lem.~\ref{lem:4.1}. Then~$\overline{y}^{\gamma}\in U(p)^\dagger$, and thus~$y^{\gamma}\in U_p^\dagger$. Assume~$p>\gamma$, so that~$\gamma\in\mathbb{Z}_p^\times$. Because~$Z$ is arbitrary, we have \[ ab_M(U_p^\dagger)\geq\exp(p\cdot\mathfrak{m}_{\mathbb{Z}_p}^{ab})^\gamma=\exp(\gamma\cdot p\cdot\mathfrak{m}_{\mathbb{Z}_p}^{ab}) = \exp(p\cdot\mathfrak{m}^{ab}). \] Conversely~$ab_M(U_p^\dagger)\leq \exp(p\cdot\mathfrak{m}_{\mathbb{Z}_p}^{ab})=\ker(M^{ab}(\mathbb{Z}_p)\to M^{ab}(\mathbb{F}_p))$ because~$ab_{M_{\mathbb{F}_p}}(U(p)^\dagger)\leq ab_{M_{\mathbb{F}_p}}(M^{der}(\mathbb{F}_p)^\dagger)=\{1\}$. The group~$U_p^\dagger$ is topologically generated by~$\exp(X_1),\ldots,\exp(X_k)$, and thus $ab_M(U_p^\dagger)$ is topologically generated by~$z_1=\exp(Z_1),\ldots,z_k=\exp(Z_k)$ with~$Z_i=ab_{\mathfrak{m}}(X_i)$. Thus the logarithms \[ Z_i=\log(z_i)=ab_{\mathfrak{m}}(X_i) \] topologically generate~$\log(\exp(p\cdot\mathfrak{m}^{ab}))= p\cdot\mathfrak{m}^{ab}$. The conclusion follows. \end{proof} We let \begin{equation}\label{defi Zs} Z_1,\ldots,Z_l\text{ be a basis of } \frac{p\mathfrak{m}^{ab}}{p^2\mathfrak{m}^{ab}}\simeq \nu/\mathfrak{m}^{der}_{\mathbb{F}_p}. \end{equation} Pick an arbitrary~$Z\in\{Z_1;\ldots;Z_l\}$, and define \[ A=\{\overline{Y}\in\overline{\mathfrak{u}}| \overline{Y}\equiv Z\pmod{\mathfrak{m}^{der}_{\mathbb{F}_p}}\}. \] From~\eqref{reciprocite p p2}, this~$A$ is nonempty.
It is thus an affine subspace of~$\overline{\nu}$, and, for any~$\overline{Y}_0\in A$, we have \[ A=\overline{Y}_0+V\text{ where }V=\overline{\mathfrak{u}}\cap \mathfrak{m}^{der}_{\mathbb{F}_p} \] is the ``direction'' of~$A$. The $\mathbb{F}_p$-linear vector subspace~$V\leq \overline{\nu}$ is invariant under~$U(p)$, and because the action of~$U(p)$ is semisimple on~$\mathfrak{m}_{\mathbb{F}_p}$, and thus, by~\eqref{same rep}, on~$\overline{\nu}$, there exists a supplementary $U(p)$-invariant $\mathbb{F}_p$-linear subspace \[ W\leq \overline{\nu}. \] The following intersection is an affine space of dimension~$0$, hence it is a singleton \[ A\cap W=\{\overline{Y}\}. \] It is also invariant under~$U(p)$. Thus the line \[ \mathbb{F}_p\cdot \overline{Y} \] is fixed by~$U(p)$. But the centralisers of~$U(p)$ and of~$M_{\mathbb{F}_p}$ in~$M_{\mathbb{F}_p}$ are the same. For~$p\gg0$, these centralisers are smooth as group schemes (cf. Lem.~\ref{conj orbit lemma}), and thus have the same Lie algebra \[ \mathfrak{z}_{\mathfrak{m}_{\mathbb{F}_p}}(U(p)) = \mathfrak{z(m)}_{\mathbb{F}_p}. \] Thus \begin{equation}\label{Ybar in Z} \overline{Y}\in{p\cdot \mathfrak{z(m)}}~(\bmod~p\nu). \end{equation} We finally choose a representative~$Y\in \mathfrak{u}$ of~$\overline{Y}\in\overline{\mathfrak{u}}$. Thus~$Y\in p\cdot \mathfrak{z(m)}+p\nu=p\mathfrak{m}_{\mathbb{Z}_p}$ and~$Y\in\mathfrak{u}$. We define \[ \widetilde{\mathfrak{m}}:=\frac{p\mathfrak{m}_{\mathbb{Z}_p}}{p^2\mathfrak{m}_{\mathbb{Z}_p}}=(p\mathfrak{m}_{\mathbb{Z}_p})\otimes \mathbb{F}_p, \] and denote the image of~$Y\in \mathfrak{u}\cap p\mathfrak{m}\leq p\mathfrak{m}$ by \[ \widetilde{Y}\in\widetilde{\mathfrak{u}}\leq \widetilde{\mathfrak{m}}. \] Again~$\widetilde{\mathfrak{m}}\simeq\mathfrak{m}_{\mathbb{F}_p}$ as a representation. We define \[ A=\{\overline{Y}\in\widetilde{\mathfrak{u}}| \overline{Y}\equiv \widetilde{Y}\pmod{\widetilde{\mathfrak{m}^{der}}}\}.
\] and similarly, there exists~$\overline{Y}\in A$ which is fixed by~$U(p)$ and thus is in~$\widetilde{\mathfrak{z(m)}}$. We choose a lift~$Y$ of~$\widetilde{Y}$ in~$\mathfrak{u}$. Repeating the process for each~$Z\in\{Z_1;\ldots;Z_l\}$ we define \begin{equation}\label{Ys in u} Y_1,\ldots,Y_l\in\mathfrak{u}. \end{equation} The assertion~\ref{XY1} follows from~\eqref{defi xs}, \eqref{defi X}, \eqref{defi u}. The assertion~\ref{XY2} follows from~\eqref{defi u}, \eqref{Ys in u}. The assertion~\ref{XY3} follows from~\eqref{Ybar in Z}, \eqref{defi Zs}, \eqref{defi nu}, \eqref{defi nubar}. We will now prove the assertion~\ref{XY4}. We define \[ Z:=Z_{G_{{\mathbb F}_p}}\left(\left\{\pi_p(X_1),\ldots,\pi_p(X_k),\overline{\frac{1}{p}Y_1},\ldots,\overline{\frac{1}{p}Y_l}\right\}\right) \] and \[ U':=(U(p)\cap Z(M)^0_{\mathbb{F}_p}(\mathbb{F}_p)) \text{ and } U'':=U(p)^\dagger\cdot U'. \] We first note that~$\pi_p(X_1),\ldots,\pi_p(X_k)$ generate the group~$U(p)^\dagger$ and that~$\overline{Y_1/p},\ldots,\overline{Y_l/p}$ generate the Lie algebra~$\mathfrak{z(m)}_{\mathbb{F}_p}$. Thus \begin{equation} Z=Z_{G_{{\mathbb F}_p}}(U(p)^\dagger)\cap Z_{G_{{\mathbb F}_p}}(\mathfrak{z(m)}_{\mathbb{F}_p}). \end{equation} We have\footnote{We use that~$Z(M_{\mathbb{F}_p})$ is connected and, for~$p\gg 0$, smooth as a group scheme.} \[ Z_{G_{{\mathbb F}_p}}(\mathfrak{z(m)}_{\mathbb{F}_p})=Z_{G_{{\mathbb F}_p}}(Z(M)^0). \] Applying Cor.~\ref{coro big dans Z} with~$\delta=C_{MT}$ from~\eqref{defi CMT}, we have, with~$U'$ as defined above, \[ [U(p): U(p)^\dagger\cdot U']\leq D:=C_{MT}\cdot \gamma(\dim(G)). \] For~$p>M(D)$, with~$M(D)$ as in Def.~\ref{defi:Tate}, we have \[ Z_{G_{\mathbb{F}_p}}(M_{\mathbb{F}_p})= Z_{G_{\mathbb{F}_p}}(U(p)^\dagger\cdot U') = Z_{G_{{\mathbb F}_p}}(U(p)^\dagger)\cap Z_{G_{{\mathbb F}_p}}(U').
\] We may thus apply Lemma~\ref{Lemma bounded and centraliser}, and deduce \[ \forall p\gg 0, Z_{G_{\mathbb{F}_p}}(U')=Z_{G_{\mathbb{F}_p}}(Z(M)^0_{\mathbb{F}_p}). \] From~$U'':=U(p)^\dagger\cdot U'\leq U(p)^\dagger\cdot Z(M)^0_{\mathbb{F}_p}\leq M_{\mathbb{F}_p}$ we get \[ Z_{G_{\mathbb{F}_p}}(M_{\mathbb{F}_p}) = Z_{G_{\mathbb{F}_p}}(U'')\leq Z_{G_{\mathbb{F}_p}}(U(p)^\dagger)\cap Z_{G_{\mathbb{F}_p}}(Z(M)^0_{\mathbb{F}_p}) \leq Z_{G_{\mathbb{F}_p}}(M_{\mathbb{F}_p}). \] Finally \[ Z=Z_{G_{\mathbb{F}_p}}(U(p)^\dagger)\cap Z_{G_{\mathbb{F}_p}}(Z(M)^0_{\mathbb{F}_p})= Z_{G_{\mathbb{F}_p}}(M_{\mathbb{F}_p}).\qedhere \] \end{proof} \subsubsection{Conjugacy classes of tuples} The following will be used to check, for almost all primes, one of the hypotheses of Th.~\ref{thm:compare reductive}. \begin{lemma}\label{conj orbit lemma} Let~$p$ be a prime, let~$G\leq GL(n)_{\mathbb{F}_p}$ be a reductive algebraic subgroup, and consider~$v_1,\ldots,v_k\in G(\mathbb{F}_p)$. Denote by~$U$ the group generated by~$\{v_1;\ldots;v_k\}$, and define~$v=(v_1,\ldots,v_k)$. Assume that \begin{equation}\label{U ss g} \text{the action of~$U$ on~$\mathfrak{g}_{\mathbb{F}_p}$ is semisimple.} \end{equation} If~$p>2\cdot \dim(G)$ then the simultaneous conjugacy class~$G\cdot v$ is Zariski closed in~$G^k$. If~$p>c_3(\dim(G))$ then the centraliser of~$v$ in~$G$, as a group scheme over~$\mathbb{F}_p$, is smooth. \end{lemma} The quantity~$c_3$ is from~\cite[\S4. Th.~E]{N}. \begin{proof} From~\cite[\S5.1]{SCR}, we have~$h(G)\leq \dim(G)$. From~\cite[Cor. 5.5]{SCR}, if~$p>2h(G)-2$, the assumption~\eqref{U ss g} implies that~$U$ is~$G$-cr, or ``strongly reductive'' in~$G$ in the sense of Richardson. The first assertion follows from~\cite[Th.~3.7]{SCR} (cf. \cite[\S16]{Ri-Conj}). Thanks to~\eqref{U ss g} and the condition~$p>c_3(\dim(G))$ we may apply~\cite[\S4. Th.~E]{N} (cf. also~\cite[137. p.\,40]{S4}). Thus the hypothesis of~\cite[II, \S5.2, 2.8, p.\,240]{DG} is satisfied and we conclude. (cf.
also~\cite{BMR10} and~\cite{H13} on the subject, beyond the semi-simplicity assumption.) \end{proof} \setcounter{secnumdepth}{3} \subsubsection{Consequences for height bounds}We denote by~$\norm{~}:\mathfrak{g}_{\mathbb{Q}_p}\to\mathbb{R}_{\geq0}$ the~$p$-adic norm associated to the~$\mathbb{Z}_p$-structure~$\mathfrak{g}_{\mathbb{Z}_p}$. We denote~$\norm{\Sigma}=\max\{\norm{s}:s\in \Sigma\}$ for a bounded subset~$\Sigma\subseteq \mathfrak{g}_{\mathbb{Q}_p}$. We recall that~$H_f(\phi)=\prod_p H_p(\phi)$ with~$H_p(\phi)$ given for instance by \[ H_p(\phi)=\max\{1;\norm{\phi(\mathfrak{m}_{{\mathbb Z}_p})}\}. \] We also define \[ H'(\phi)=\max\{1;\norm{\phi(\nu)}\},\qquad H''(\phi)=\max\{1;\norm{\phi(\mathfrak{m}^{der}_{\mathbb{Z}_p})}\}. \] We then have~$p\cdot \mathfrak{m}_{\mathbb{Z}_p}\leq \nu\leq \mathfrak{m}_{\mathbb{Z}_p}$ and thus \[ \frac{1}{p}\cdot H'(\phi)\leq H_p(\phi)\leq H'(\phi). \] Using the tools of~\cite{RZ} we deduce the following. \begin{proposition} Define \[ v=(X_1,\ldots,X_k,Y_1,\ldots,Y_l)\text{ and } v'=(X_1,\ldots,X_k,\frac{1}{p}Y_1,\ldots,\frac{1}{p}Y_l) \] and \[ H_v(\phi)= \max\{1;\norm{g\cdot v}\}=H_p(g\cdot v)\text{ and }H_{v'}(\phi)=H_p(g\cdot v'). \] Then we have \begin{equation}\label{galois exp bound} [\phi(U):\phi(U)\cap G({\mathbb Z}_p)]\geq H_v(\phi)\geq H_{v'}(\phi)/p \end{equation} and \begin{equation}\label{eq} H_{v'}(\phi)\geq H_p(\phi)^{c(\rho)}. \end{equation} \end{proposition} \begin{proof} The inequality \[ [\phi(U):\phi(U)\cap G({\mathbb Z}_p)]\geq H_v(\phi)= \max\{1;\norm{g\cdot v}\}=H_p(g\cdot v) \] follows from the Lemma of the exponential and the subgroup principle (\cite[Th. A.3, \S B.0.1]{RY}). The inequality~$H_v(\phi)\geq H_{v'}(\phi)/p$ follows from the definitions. We prove~\eqref{eq}. Let~$m_1,\ldots,m_d$ be a generating set for~$\mathfrak{m}_{{\mathbb Z}_p}$ and define~$w=(m_1,\ldots,m_d)$. We recall that by construction, we have \[ H_p(\phi)=\max\{1;\norm{g\cdot m_1},\ldots,\norm{g\cdot m_d}\}.
\] Using~\eqref{PropNori:Centalisateurs} and Lem.~\ref{conj orbit lemma} for~$v'$, we may apply Theorem~\ref{thm:compare reductive}, and we deduce \[ H_p(g\cdot v')\geq H_p(\phi)^{C(\Sigma(\rho))}, \] where~$\Sigma(\rho)$ is the set of roots of~$G$ and does not depend on~$p$; this proves~\eqref{eq} with~$c(\rho):=C(\Sigma(\rho))$. \end{proof} \begin{corollary}In particular, if~$H_{v'}(\phi)\notin\{1;p\}$ we have \[ [\phi(U):\phi(U)\cap G({\mathbb Z}_p)]\geq H_p(\phi)^{c(\rho)/2}. \] \end{corollary} \begin{proof} We recall that, because~$H_{v'}(\phi)\in p^{{\mathbb Z}}$, we have~$H_{v'}(\phi)\geq p^2$ as soon as~$H_{v'}(\phi)\notin\{1;p\}$. It follows \[ H_v(\phi)\geq H_{v'}(\phi)/p\geq H_{v'}(\phi)^{1/2}\geq H_p(\phi)^{c(\rho)/2}.\qedhere \] \end{proof} In proving Th.~\ref{thm:local Galois} we may now assume that~$H_{v'}(\phi)\leq p$. We define \[ H_X(\phi)=\max\{1;\norm{\phi(X_1)};\ldots;\norm{\phi(X_k)}\} \] and \[ H_Y(\phi)=\max\{1/p;\norm{\phi(Y_1)};\ldots;\norm{\phi(Y_l)}\}, \] so that \[ H_{v'}(\phi)=\max\{H_X(\phi);p\cdot H_Y(\phi)\} \] and \[ H_{v}(\phi)=\max\{H_X(\phi); H_Y(\phi)\}. \] If~$H_X(\phi)=p$, we have, by~\eqref{galois exp bound}, \[ [\phi(U):\phi(U)\cap G({\mathbb Z}_p)]\geq H_v(\phi)=H_X(\phi)= H_{v'}(\phi)\geq H_p(\phi)^{c(\rho)}. \] We now assume~$H_X(\phi)=1$. We have~$p\cdot H_Y(\phi)=H_{v'}(\phi)\in\{1;p\}$. \subsubsection{The case~$p\cdot H_Y(\phi)=H_X(\phi)=1$} In this case we have~$H_{v'}(\phi)=1$, and by~\eqref{eq}, \[ H_p(\phi)=1. \] Obviously \[ [\phi(U_p):\phi(U_p)\cap G({\mathbb Z}_p)]\geq H_p(\phi). \] \subsubsection{The case~$p\cdot H_Y(\phi)=H_{v'}(\phi)=p$} \label{last case} From~\eqref{XY3} of Prop.~\ref{prop X Y}, for every~$Y_i$, there exists~$Z_i\in \mathfrak{z(m)}_{\mathbb{Z}_p}$ such that \[ Y_i\equiv Z_i\pmod{p\cdot\mathfrak{m}_{\mathbb{Z}_p}}. \] Define~$v''=(X_1,\ldots,X_k,Z_1,\ldots,Z_l)$. Then the reductions modulo~$p$ are equal \[ \overline{v'}=\overline{v''}\text{ in }{\mathfrak{m}_{\mathbb{F}_p}}^{k+l}\leq{\mathfrak{g}_{\mathbb{F}_p}}^{k+l}.
\] Thus \begin{itemize} \item The orbit~$G_{\mathbb{F}_p}\cdot \overline{v'}$ is equal to~$G_{\mathbb{F}_p}\cdot \overline{v''}$ and is closed; \item and~$Stab_{G_{\mathbb{F}_p}}(\overline{v'})=Stab_{G_{\mathbb{F}_p}}(\overline{v''})=Z_{G_{{\mathbb F}_p}}(M_{{\mathbb F}_p})$ (cf.~\eqref{XY4} of Prop.~\ref{prop X Y}). \end{itemize} Applying Th.~\ref{pKN}, we have \[ \phi(v')\in{\mathbb{Z}_p}^{k+l}\text{ if and only if } \phi(v'')\in{\mathbb{Z}_p}^{k+l}. \] We have \[ H_{v''}=\max\{H_X;H_Z\}\text{ with }H_Z(\phi):=\max\{1;\norm{\phi(Z_1)};\ldots;\norm{\phi(Z_l)}\}. \] Because~$H_X(\phi)=1$ and~$H_{v'}(\phi)=p\neq 1$ in~\S\ref{last case}, we have \[ H_Z(\phi)\neq 1. \] For~$p$ big enough, we can apply~\cite[4.3.9]{EdYa} with the torus~$Z(M)^0$: we have, for some~$c_1\in\mathbb{R}_{>0}$ that does not depend on~$p$, \[ [\phi(Z(M)(\mathbb{Z}_p)):\phi(Z(M)(\mathbb{Z}_p))\cap G(\mathbb{Z}_p)]\geq p/c_1. \] Using Prop.~\ref{prop4.7} we deduce \[ [\phi(U_p):\phi(U_p)\cap G(\mathbb{Z}_p)]\geq p/(\gamma(n)\cdot c_1)=H_{v'}(\phi)/(\gamma(n)\cdot c_1). \] Using~\eqref{eq} we conclude \[ [\phi(U_p):\phi(U_p)\cap G(\mathbb{Z}_p)]\geq H_{p}(\phi)^{c(\rho)}/(\gamma(n)\cdot c_1). \] This proves~\eqref{precise with a} with~$c=c(\rho)$ and~$a=1/(\gamma(n)\cdot c_1)$. We have proven Th.~\ref{thm:local Galois} and Th.~\ref{Galois bounds}. \subsection{Some Structure Lemmas} We consider the situation of Theorem~\ref{thm:local Galois}. We identify~$G$ with its image by a faithful representation in~$GL(n)$ such that~$G(\mathbb{Z}_p)=GL(n,\mathbb{Z}_p)\cap G$, and we denote by~$U(p)$ the image of~$U_p$ in~$G(\mathbb{F}_p)\leq GL(n,\mathbb{F}_p)$. We denote by~$\overline{M}=M_{\mathbb{F}_p}$ the~$\mathbb{F}_p$-algebraic group from the model of~$M$ over~$\mathbb{Z}_p$ induced by~$M\leq GL(n)$, and we denote by~$Z(M_{\mathbb{F}_p})$ the centre of~$M_{\mathbb{F}_p}$.
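Goursat's Lemma (Lem.~\ref{Goursat}) underlies several of the index computations in this subsection. A minimal finite sanity check, on an ad hoc subgroup of $\mathbb{Z}/4\times\mathbb{Z}/2$ chosen purely for illustration:

```python
# Goursat's Lemma checked on a small finite example (illustration only).
# Ambient group G1 x G2 with G1 = Z/4 and G2 = Z/2, written additively.
U = {(a, a % 2) for a in range(4)}        # graph of Z/4 ->> Z/2, a subgroup

U1 = {a for a, _ in U}                    # projection to G1
U2 = {b for _, b in U}                    # projection to G2
U1p = {(a, b) for a, b in U if b == 0}    # U'_1 = U ∩ (G1 x {0})
U2p = {(a, b) for a, b in U if a == 0}    # U'_2 = U ∩ ({0} x G2)

# Goursat: (U1 x {0})/U'_1 and ({0} x U2)/U'_2 have the same order.
assert len(U1) // len(U1p) == len(U2) // len(U2p) == 2
```

Here both quotients have order $2$, as the lemma predicts; the same equality of indices is what is invoked for the factors $\overline{M}^{ad}\times\overline{M}^{ab}$ and $S^{ad}\times Z_M(S)^0/Z(S)$ below.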
In Lem.~\ref{lem:4.1}, Prop.~\ref{prop4.7} and Cor.~\ref{coro big dans Z}, the quantities depending on~$n$ also depend implicitly on the function~$D\mapsto M(D)$ in~\ref{defi:Tate2} of~Def.~\ref{defi:Tate}. \begin{proposition}\label{prop4.7} There exists~$\gamma(n)$ such that \[ [Z(M_{{\mathbb F}_p})({\mathbb F}_p):Z(M_{{\mathbb F}_p})({\mathbb F}_p)\cap U(p)]\leq \gamma(n)\cdot [M_{{\mathbb F}_p}^{ab}({\mathbb F}_p):ab_{M_{{\mathbb F}_p}}(U(p))]. \] \end{proposition} By Hypothesis~\ref{thm:galois bounds H1} of Th.~\ref{Galois bounds} we may use~\eqref{defi CMT} and deduce the following. \begin{corollary}\label{coro big dans Z} With~$C_{MT}$ as in~\eqref{defi CMT}, we have \[ [Z(M_{{\mathbb F}_p})({\mathbb F}_p):Z(M_{{\mathbb F}_p})({\mathbb F}_p)\cap U(p)]\leq \gamma(n)\cdot C_{MT}. \] \end{corollary} We prove Proposition~\ref{prop4.7}. \begin{proof}We define~$\overline{M}:=M_{\mathbb{F}_p}$. We consider the isogeny \[ (ad_{\overline{M}},ab_{\overline{M}}):\overline{M}\to \overline{M}^{ad}\times \overline{M}^{ab}. \] We write~$U=U(p)$ and denote by~$\widetilde{U}$ its image in~$\overline{M}^{ad}({\mathbb F}_p)\times \overline{M}^{ab}({\mathbb F}_p)$. We denote by~$\widetilde{U}_1$ and~$\widetilde{U}_2$ its images under the projections onto the two factors. From Lemma~\ref{lem:4.1}, we have \[ [\widetilde{U}_1:\widetilde{U}\cap \overline{M}^{ad}({\mathbb F}_p)]\leq [\widetilde{U}_1:\widetilde{U}^\dagger]\leq c(n). \] By Goursat's Lemma~\ref{Goursat}, \[ [\widetilde{U}_2:\widetilde{U}\cap \overline{M}^{ab}({\mathbb F}_p)]=[\widetilde{U}_1:\widetilde{U}\cap \overline{M}^{ad}({\mathbb F}_p)]\leq c(n). \] Let~$U'$, resp.~$U''$, be the inverse image in~$U$ of~$\widetilde{U}\cap \overline{M}^{ab}({\mathbb F}_p)$, resp. of~$\overline{M}^{ab}({\mathbb F}_p)$. Because~$Z(\overline{M})$ is the $(ad_{\overline{M}},ab_{\overline{M}})$-inverse image of~$\overline{M}^{ab}$ in~$\overline{M}$, we have \[ U'\leq U''\leq Z(\overline{M})({\mathbb F}_p).
\] Define~$F:=Z(\overline{M})\cap \overline{M}^{der}$, which is a finite~${\mathbb F}_p$-algebraic group of degree at most~$c_2(\dim(M))\leq c_2(n)$. As we have~$U''\leq F(\overline{{\mathbb F}_p})\cdot U'$, we have \[ [U'':U']\leq \# F\leq c_2(n). \] On the other hand, we have \[ [Z(\overline{M})({\mathbb F}_p):U'']\leq [ \overline{M}^{ab}({\mathbb F}_p):\widetilde{U}\cap \overline{M}^{ab}({\mathbb F}_p)]. \] It follows \begin{multline} [Z(\overline{M})({\mathbb F}_p):U']\leq c_2(n)\cdot[ \overline{M}^{ab}({\mathbb F}_p):\widetilde{U}\cap \overline{M}^{ab}({\mathbb F}_p)]\\ \leq c(n)\cdot c_2(n)\cdot[ \overline{M}^{ab}({\mathbb F}_p):\widetilde{U}_2].\qedhere \end{multline} \end{proof} \begin{lemma}\label{lem:4.1} Let~$\hat{U}:=ad_{M_{{\mathbb F}_p}}(U(p))$~be the image of~$U(p)$ in~$M_{{\mathbb F}_p}^{ad}({\mathbb F}_p)$. Then~$\hat{U}^\dagger=ad_{M_{{\mathbb F}_p}}(U(p)^\dagger)$, and there exists~$c(n)$ such that \[ [\hat{U}:\hat{U}^\dagger]\leq c(n). \] \end{lemma} \begin{proof}The equality~$\hat{U}^\dagger=ad_{\overline{M}}(U(p)^\dagger)$ follows from Lemma~\ref{Sylow}. By construction, \[ Z_{\overline{M}^{ad}}(\hat{U})=Z_{\overline{M}}(U)/Z(\overline{M})\leq \overline{M}^{ad}. \] We let~$j(N)$ be the Jordan constant (cf.~\cite[\S5]{SCrit}). From Jordan's theorem there exists~$\hat{U}^\dagger\leq \hat{U}'\leq \hat{U}$ of index~$[\hat{U}:\hat{U}']\leq C(n):=j(d(n))$ such that \[ \hat{U}'/\hat{U}^\dagger \] is abelian, where~$d(n)$ is as in~\cite[134. \S7 p.25 about~$GL_d$]{S4}. We assume~$p\geq M(C(n))$ as in~\ref{defi:Tate2} of the Tate hypothesis Def.~\ref{defi:Tate}. We use~$c(n)$ and~$m(n)$ from Lemma~\ref{lem:4.2}, and assume~$p>m(n)$ and~$p>M(C(n)\cdot c(n))$. Then, by~\ref{defi:Tate2} of the Tate hypothesis (Def.~\ref{defi:Tate}, the uniform integral Tate conjecture), Lemma~\ref{lem:4.2} applies to~$\hat{U}'\leq \overline{M}^{ad}({\mathbb F}_p)$. The conclusion follows.
\end{proof} \begin{lemma}\label{lem:4.2} For every~$n\in\mathbb{Z}_{\geq1}$, there exist~$c(n)$, $c'(n)$, $m(n)$ such that the following holds. Let~$p>m(n)$ be a prime, let~$M\leq GL(n)$ be adjoint over~${\mathbb F}_p$, and let \[ \hat{U}\leq M({\mathbb F}_p) \] be a subgroup \begin{itemize} \item such that~$\hat{U}/\hat{U}^\dagger$ is abelian \item and such that for every~$U'\leq \hat{U}$ of index at most $c(n)$: \begin{enumerate} \item \label{cond 1} we have~$Z_M(U')=1$; \item \label{cond 3} the action of~$U'$ is semisimple. \end{enumerate} \end{itemize} Then we have~$[\hat{U}:\hat{U}^\dagger]\leq c'(n)$. \end{lemma} \subsubsection{Remark}\label{Remark Lemma} In proving the Lemma, we may substitute~$\hat{U}$ with~$U'$ if~$\hat{U}^\dagger\leq U'\leq \hat{U}$ and~$[\hat{U}:U']\leq f(n)$, where~$f(n)$ depends only on~$n$. We then have to change~$c(n)$ into~$c(n)\cdot f(n)$ accordingly. \begin{proof}We assume~$p>n+1$, so that we can apply Nori theory~\cite{N}. We denote by~$S\leq M$ the~$\mathbb{F}_p$-algebraic subgroup associated by Nori to~$\hat{U}^\dagger$, and denote by~$N=N_M(S)$ the normaliser of~$S$ in~$M$. We deduce from~\eqref{cond 3} that~$S$ is semisimple, and thus~$N^0=S\cdot Z_M(S)^0$. We recall that the semisimple Lie subalgebras~$\mathfrak{m}\leq \mathfrak{gl}(n)_{\overline{\mathbb{F}_p}}$ can assume finitely many types, independently from~$p$. We deduce uniform bounds \begin{equation} \#N/N^0\leq c_1(n)\text{ and }\#Z(S)\leq c_2(n). \end{equation} We have~$\hat{U}\leq N$, and~$\hat{U}^\dagger\leq N^\dagger\leq N^0$. If~$U'=\hat{U}\cap N^0$ we have~$[\hat{U}:U']\leq c_1(n)$ and~$\hat{U}^\dagger\leq U'$. Using the Remark~\ref{Remark Lemma}, we may replace~$\hat{U}$ by~$U'=\hat{U}\cap N^0$. We denote~$Z(S)=Z_M(S)\cap S$ and we consider \[ N^0\to S^{ad}\times Z_M(S)^0/Z(S).
\] We denote by~$\widetilde{U}$ the image of~$\hat{U}$, by \[ \widetilde{U}_1\leq S^{ad}(\mathbb{F}_p)\text{ and~}\widetilde{U}_2\leq (Z_M(S)^0/Z(S))(\mathbb{F}_p) \] the projections of~$\widetilde{U}$, and define \[U'_1:=\widetilde{U}\cap (S^{ad}({\mathbb F}_p)\times\{1\}) \text{ and } U'_2:=\widetilde{U}\cap(\{1\}\times Z_M(S)^0/Z(S)).\] From Lemma~\ref{Sylow}, the image of~$S(\mathbb{F}_p)^\dagger\leq \hat{U}$ is~$S^{ad}({\mathbb F}_p)^\dagger$. Thus \[ S^{ad}({\mathbb F}_p)^\dagger\times\{1\} \leq U'_1\leq \widetilde{U}_1\times\{1\}\leq S^{ad}({\mathbb F}_p)\times\{1\}. \] With~$r(n)$ given by~\cite[3.6(iv-v) and p.\,270]{N}, we have \begin{equation}\label{rN} [\widetilde{U}_1\times\{1\}:U'_1]\leq [S^{ad}({\mathbb F}_p):S^{ad}({\mathbb F}_p)^\dagger]\leq r(n)=2^{n-1}. \end{equation} By Goursat's Lemma~\ref{Goursat} and~\eqref{rN} we have \[ [\widetilde{U}_1\times\{1\}:U'_1]=[\{1\}\times\widetilde{U}_2:U'_2]\leq r(n). \] Thus, with~$U':=U'_1\cdot U'_2\simeq U'_1\times U'_2$, we have \[ [\widetilde{U}:U']\leq [\widetilde{U}_1\times\widetilde{U}_2:U']\leq r(n)^2. \] Because~$\widetilde{U}^\dagger=S(\mathbb{F}_p)^\dagger$ is sent to~$S^{ad}(\mathbb{F}_p)^\dagger\times\{1\}$ (cf. Lemma~\ref{Sylow}) and because~$S^{ad}(\mathbb{F}_p)^\dagger\times\{1\}\leq U'_1\leq U'$ we may use the Remark~\ref{Remark Lemma}, and replace~$\hat{U}$ by the inverse image of~$U'$. We denote by \[ \hat{U}_1,\quad\hat{U}_2 \] the inverse images of~$U'_1$ and~$U'_2$. Because~$\hat{U}_1\leq S$ and~$\hat{U}_2\leq Z_M(S)$, the groups~$\hat{U}_1$ and~$\hat{U}_2$ commute with each other. We reduce the situation to the case where~$\hat{U}_2$ is abelian. We know that~$\hat{U}/\hat{U}^\dagger$ is abelian and that~$\hat{U}^\dagger\leq S(\mathbb{F}_p)$. It follows that~$\hat{U}_2/(\hat{U}_2\cap S(\mathbb{F}_p))$ is abelian.
We have~$F:=\hat{U}_2\cap S(\mathbb{F}_p)\leq Z_M(S)\cap S=Z(S)$, and thus~$\abs{F}\leq c_2(n)$, and~$\hat{U}_2$ is an extension of the abelian group~$U'_2=\hat{U}_2/(\hat{U}_2\cap S)=\widetilde{U}\cap Z_M(S)/Z(S)$ by a finite group~$F$ of order at most~$c_2(n)$. Moreover,~$U'_2$ is of order prime to~$p$, and thus is diagonalisable in~$(Z_M(S)/Z(S))(\overline{\mathbb{F}_p})$. It follows that we can find a monomorphism~$U_2'\hookrightarrow (\overline{\mathbb{F}_p}^\times)^r$, where~$r=r(Z_M(S))$ is the rank of~$Z_M(S)$. We have~$r(Z_M(S))\leq r(M)\leq n$. From Cor.~\ref{coro extension}, there exists an abelian subgroup~$U''_2\leq \hat{U}_2$ of index at most \[ c_3(n)=e(c_2(n))^n. \] Using the remark we may replace~$\hat{U}=\hat{U}_1\cdot \hat{U}_2$ by~$U'=\hat{U}_1\cdot U''_2$. Then~$U''_2$ commutes with~$\hat{U}_1$ and with itself:~$U''_2$ is in the centre of~$\hat{U}$. By Hypothesis~\eqref{cond 1}, we have~$U''_2=1$. Thus~$\hat{U}=\hat{U}_1\leq S({\mathbb F}_p)$ and \[ [\hat{U}:\hat{U}^\dagger]\leq [S^{ad}({\mathbb F}_p):S^{ad}({\mathbb F}_p)^\dagger]\leq r(n).\qedhere \] \end{proof} \subsubsection{Other lemmas} \begin{lemma}\label{Lemma bounded and centraliser} Let~$H\leq G\leq GL(d)$ be algebraic groups over~$\mathbb{Q}$ with~$H$ Zariski connected. There exist~$\lambda\in\mathbb{Z}_{\geq 1}$ and~$N\in\mathbb{Z}_{\geq0}$ such that: for all primes~$p\geq N$, and for all subgroups~$U\leq H_{\mathbb{F}_p}(\mathbb{F}_p)$ such that \[ [H_{\mathbb{F}_p}(\mathbb{F}_p):U]\leq \lambda \] we have \[ Z_{G_{\mathbb{F}_p}}(U)=Z_{G_{\mathbb{F}_p}}(H_{\mathbb{F}_p}). \] \end{lemma} \begin{proof}Without loss of generality we may assume~$G=GL(d)$. We define a scheme~$X\leq Y\times Y$, with~$Y=GL(d)_{\mathbb{Z}}$ by \[ X=\{(g,h)\in GL(d)_{\mathbb{Z}}\times GL(d)_{\mathbb{Z}}|[g,h]=1, h\in H\} \] and denote by~$\phi: X\to Y$ the first projection.
According to Lemma~\ref{lemma schemes}, for every prime~$p$, and every~$g\in G(\overline{\mathbb{F}_p})$, \[ \#\pi_0(H\cap Z_{G_{\overline{\mathbb{F}_p}}}(\{g\}))\leq \gamma. \] For~$p\gg0$, the~$\mathbb{F}_p$-group~$H_{\mathbb{F}_p}$ will be Zariski connected (\citestacks[Lem. 37.26.5.]{055H}). From~\cite[Lem.~3.5]{N}, for any~$\mathbb{F}_p$-algebraic subgroup~$H'\leq H_{\mathbb{F}_p}$ with at most~$\gamma$ geometric connected components, we have \[ \#H'(\mathbb{F}_p)\leq (p+1)^{\dim(H')}\cdot \gamma \] and \[ \#H(\mathbb{F}_p)\geq (p-1)^{\dim(H)}. \] Let~$\lambda=\gamma\cdot (\frac{p-1}{p+1})^{\dim(G)}$ and let \[ U\leq H(\mathbb{F}_p) \] be such that \[ Z_{G_{\mathbb{F}_p}}(U)\neq Z_{G_{\mathbb{F}_p}}(H_{\mathbb{F}_p}). \] We have~$Z_{G_{\mathbb{F}_p}}(U)\geq Z_{G_{\mathbb{F}_p}}(H_{\mathbb{F}_p})$. We remark that~$Z_{G_{\mathbb{F}_p}}(U)$ and~$Z_{G_{\mathbb{F}_p}}(H_{\mathbb{F}_p})$ are Zariski connected because they are nonempty Zariski open subsets of the form~$A\cap GL(d)$ for a subalgebra~$A\leq End({\mathbb{F}_p}^d)$. For~$p\gg0$ (cf.~\cite[Lem.~3.5]{N}), we will have \[ \#Z_{G_{\mathbb{F}_p}}(U)({\mathbb{F}_p})> \#Z_{G_{\mathbb{F}_p}}(H_{\mathbb{F}_p})({\mathbb{F}_p}) \] and there exists \[ g\in Z_{G_{\mathbb{F}_p}}(U)({\mathbb{F}_p})\smallsetminus Z_{G_{\mathbb{F}_p}}(H_{\mathbb{F}_p})({\mathbb{F}_p}). \] We have then, with~$H'=X_g=H\cap Z_{G_{\mathbb{F}_p}}(\{g\})$, which is defined over~$\mathbb{F}_p$, \[ H'<H. \] Because~$H$ is connected, we have~$\dim(H')<\dim(H)$ and \[ \#U\leq \#H'(\mathbb{F}_p)\leq \frac{1}{\lambda}\cdot \#H(\mathbb{F}_p). \] The Lemma follows. \end{proof} \begin{lemma}\label{lemma schemes} Let~$\phi:X\to Y$ be a morphism of schemes of finite type over~$\mathbb{Z}$. Then there exists~$\gamma$ such that: for every field~$K$, and every~$y\in Y(K)$, the number of geometric connected components~$\#\pi_0(X_y)$ of the fibre~$X_y$ satisfies \[ \#\pi_0(X_y)\leq \gamma. \] If~$\phi$ is flat, then~$y\mapsto \dim(X_y)$ is lower semicontinuous on~$Y$.
\end{lemma} \begin{proof}If~$Y$ is non-empty, there exists a proper closed subset outside of which the function~$y\mapsto\#\pi_0(X_y)$ is constant, according to\footnote{Applied to the generic point of an irreducible component of~$Y$. The latter exist because~$Y$ is noetherian.} \citestacks[Lemma 37.26.5.]{055H}. We conclude by noetherian induction. The second assertion, on dimensions, is~\citestacks[Lemma 37.28.4.]{0D4H} (compare \cite[15.5.1]{EGA42}). \end{proof} \begin{lemma}\label{Sylow} Let~$\phi:U\to U'$ be an epimorphism of finite groups and, for some prime~$p$, denote by~$U^\dagger$, resp.~$U'^\dagger$, the subgroups generated by elements of order a power of~$p$. Then \[ \phi(U^\dagger)=U'^{\dagger}. \] \end{lemma} \begin{proof} If~$u$ is of order a power of~$p$, then so is~$\phi(u)$. Thus \[ \phi(U^\dagger)\leq U'^{\dagger}. \] The subgroup~$U'^\dagger$ is normal and is the smallest normal subgroup of~$U'$ such that~$U'/U'^\dagger$ does not contain a nontrivial element of order a power of~$p$; equivalently, such that~$\#(U'/U'^\dagger)$ is prime to~$p$. Because~$\phi$ is an epimorphism,~$\phi(U^\dagger)$ is normal in~$U'$ and we have a group epimorphism \[ U/U^\dagger\to U'/\phi(U^\dagger). \] Thus~$\#(U'/\phi(U^\dagger))$ is prime to~$p$, and the mentioned minimality property implies \[ \phi(U^\dagger)\geq U'^{\dagger}.\qedhere \] \end{proof} \begin{lemma}\label{Lemma extension} Let \[ 1\to N\to G \xrightarrow{\pi} H\to 1 \] be a short exact sequence of finite groups such that~$H$ is abelian. There exist~$e(\#N)\in\mathbb{Z}_{\geq1}$, $H'\leq H$ and~$H''\leq G$ such that~$\pi|_{H''}$ is an isomorphism onto~$H'$ and \[ H'\geq e(\#N)\cdot H. \] \end{lemma} \begin{proof} Let~$\rho:G\to\operatorname{Aut}(N)$ be the conjugation action of~$G$ on its normal subgroup~$N$. Then~$G'=\ker \rho$ has index at most~$\#\operatorname{Aut}(N)\leq (\#N)!$ and we may replace~$G$ with~$G'$ and~$H$ with~$\pi(G')$, that is: we assume the extension of~$H$ by~$N$ is a central extension.
Let~$\gamma=\#N$, and, for~$h=\gamma\cdot h'$ in~$H':=\gamma\cdot H$, choose~$g'$ such that~$\pi(g')=h'$. We claim that \[ h\mapsto \sigma(h):=g'^\gamma \] is a well-defined section of~$\pi$ on~$H'$. This would prove the Lemma for~$e(\#N)=(\#N)!\cdot \#N$ and~$H''=\sigma(H')$. We prove that~$\sigma(h)$ does not depend on the choice of~$g'$. Let~$g''=n\cdot g'$ with~$n\in N$. Then, because~$N$ is central and~$n^\gamma=n^{\#N}=1$, we have \[ g''^\gamma=(g'\cdot n)\cdot\ldots\cdot (g'\cdot n)=g'^{\gamma}\cdot n^\gamma=g'^\gamma. \] We prove that~$\sigma(h_1\cdot h_2)=\sigma(h_1)\cdot \sigma(h_2)$. We pick lifts~$g_1,g_2$ of~$h'_1,h'_2$. Because~$H$ is commutative, we have \[ [g_1,g_2]\in N, \] that is,~$g_1\cdot g_2=n\cdot g_2\cdot g_1$ for some~$n\in N$. We have, for~$i,j\in\mathbb{Z}_{\geq0}$, \[ g_1^{i+1}\cdot g_2^{j+1}=g_1^{i}\cdot(n\cdot g_2\cdot g_1)\cdot g_2^{j}= (g_1^{i}\cdot g_2\cdot g_1\cdot g_2^{j})\cdot n \] and by induction \[ g_1^{i+1}\cdot g_2^{j+1}=g_2\cdot g_1^{i+1}\cdot g_2^{j}\cdot n^{i+1}. \] We deduce, by a further induction, for~$i=j=\gamma$, \[ g_1^{\gamma}g_2^{\gamma}=g_2^{\gamma}g_1^{\gamma}n^{\gamma^2} =g_2^{\gamma}g_1^{\gamma}. \] The Lemma is proved. \end{proof} \begin{corollary}\label{coro extension} If~$H$ is generated by~$k$ elements, we have \[[H:H']\leq e(\#N)^k.\] \end{corollary} We used the following form of Goursat's Lemma. \begin{lemma}[Goursat's Lemma]\label{Goursat} Let~$U\leq G_1\times G_2$ be a subgroup, let~$U_1$, $U_2$ be its projections, and define~$U'_1=U\cap(G_1\times\{1\})$ and~$U'_2=U\cap(\{1\}\times G_2)$. Then~$(U_1\times\{1\})/U'_1$ and~$(\{1\}\times U_2)/U'_2$ are isomorphic, and hence \[ \abs{(U_1\times\{1\})/U'_1}=\abs{(\{1\}\times U_2)/U'_2}. \] \end{lemma} \section{Reductive norm estimates from residual stability}\label{sec:reductive} \subsection{Standing hypotheses}\label{standing hyp} Let~$F\leq G\leq GL(d)$ be reductive groups over~$\mathbb{Q}_p$.
The ultrametric absolute value is denoted by~$\abs{~}:\mathbb{C}_p\to \mathbb{R}_{\geq0}$ and the norm on~${\mathbb{C}_p}^d$ is denoted by \[ \norm{(v_i)_{i=1}^d}=\max\{\abs{v_1};\ldots;\abs{v_d}\}. \] The $\mathbb{Q}_p$-algebraic group~$GL(d)$ has a model~$GL(d)_{\mathbb{Z}_p}$, which induces models~$F_{\mathbb{Z}_p}$ and~$G_{\mathbb{Z}_p}$ over~$\mathbb{Z}_p$. We denote by~$F_{\mathbb{F}_p}$ and~$G_{\mathbb{F}_p}$ their special fibres, which are algebraic groups over~$\mathbb{F}_p$. We assume that, in the sense\footnote{An equivalent property is that~$F_{\mathbb{F}_p}$ and~$G_{\mathbb{F}_p}$ are connected reductive algebraic groups.} of~\cite[\S3.8]{Tits}, \begin{equation}\label{hyp:hyp} \text{$F_{\mathbb{Z}_p}$ and~$G_{\mathbb{Z}_p}$ are ``hyperspecial''.} \end{equation} \setcounter{secnumdepth}{4} \subsubsection{Some consequences} We review some constructions and properties that hold under hypothesis~\eqref{hyp:hyp} and will be needed later. \paragraph{} We consider a maximal split torus~$T\leq G$ and an identification~$\mathbb{Z}^{\dim(T)}\simeq X(T)$ of its character group. We denote the set of weights of the representation~$\rho:T\to G\to GL(d)$ by \[ \Sigma(\rho)\subseteq X(T) \] and the weight decomposition of~$V={\mathbb{Q}_p}^d$ under the action of~$T$ by \begin{equation}\label{eigen decomp} {\mathbb{Q}_p}^d=\bigoplus_{\chi\in\Sigma(\rho)} V_\chi\text{ where }V_\chi:=\{v\in{\mathbb{Q}_p}^d|\forall t\in T(\mathbb{Q}_p), t\cdot v=\chi(t)\cdot v\}. \end{equation} \paragraph{Remark}\label{rem set weights} For any other maximal torus~$T'$ there is a conjugation~$t\mapsto gtg^{-1}:T\to T'$ in~$G(\mathbb{C}_p)$. We deduce a set~$\Sigma(T')$ corresponding to~$\Sigma(T)$. The resulting set~$\Sigma(T')$ does not depend on the choice of the conjugating element. The weight spaces~$V_\chi$ in the decomposition~\eqref{eigen decomp}, however, depend on~$T$.
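To fix ideas, the decomposition~\eqref{eigen decomp} can be made explicit in the simplest case; the following display is an illustration on our part and is not used in the sequel. Take~$G=GL(2)$ with~$T$ its diagonal maximal torus and~$\rho$ the standard representation:

```latex
% Illustration: G = GL(2), T the diagonal torus, rho the standard
% representation on Q_p^2. Identifying X(T) with Z^2 via the characters
% chi_1(diag(t_1,t_2)) = t_1 and chi_2(diag(t_1,t_2)) = t_2,
% the set of weights is Sigma(rho) = {chi_1, chi_2} and
\[
{\mathbb{Q}_p}^2=V_{\chi_1}\oplus V_{\chi_2},
\qquad
V_{\chi_i}=\mathbb{Q}_p\cdot e_i,
\qquad
\operatorname{diag}(t_1,t_2)\cdot e_i=\chi_i(t)\cdot e_i.
\]
```

Here~$e_1,e_2$ is the standard basis of~${\mathbb{Q}_p}^2$; in this case the integral weight decomposition recalled below is simply~$\Lambda_{\chi_i}=\mathbb{Z}_p\cdot e_i$.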
\paragraph{}\label{Good torus} From\footnote{With~$\Omega=\{x_0\}$ if~$x_0\in\mathcal{BT}(G/L)$ is the fixed point of~$G_{\mathbb{Z}_p}(\mathbb{Z}_p)$.}~\cite[\S3.5]{Tits} we know that the induced model~$T_{\mathbb{Z}_p}$ has good reduction, i.e.~$T_{\mathbb{F}_p}$ is a torus, and that we have \[ X(T)\simeq X(T_{\mathbb{F}_p}). \] This also implies (cf. e.g.~\cite[Prop.\,5]{Sesh}) that~\eqref{eigen decomp} is compatible with the integral structures: \[ {\mathbb{Z}_p}^d=\bigoplus \Lambda_{\chi}\text{ where }\Lambda_{\chi}:={\mathbb{Z}_p}^d\cap V_\chi; \] and that we have a corresponding weight decomposition \[ {\mathbb{F}_p}^d=\bigoplus \overline{V}_{\chi}\text{ where }\overline{V}_{\chi}:=\Lambda_{\chi}\otimes\mathbb{F}_p. \] \paragraph{} There is a Cartan decomposition~\cite[\S4.6(i), 4.4.3]{BT72} (see also\footnote{See~\cite[\S3 and \S3.3]{Tits} for the assumptions of~\cite[3.3.3]{Tits}.}~\cite[3.3.3]{Tits}), for~$L/\mathbb{Q}_p$ a finite extension and~$T_L$ a maximally split torus of~$G/L$, \begin{equation}\label{Cartan} G(L)=G_{\mathbb{Z}_p}(O_L)\,T_L(L)\,G_{\mathbb{Z}_p}(O_L), \end{equation} and consequently, over~$\overline{\mathbb{Q}_p}$, when~$T$ is a maximal torus, \begin{equation}\label{Cartanbar} G(\overline{\mathbb{Q}_p})=G_{\mathbb{Z}_p}(\overline{\mathbb{Z}_p})T(\overline{\mathbb{Q}_p})G_{\mathbb{Z}_p}(\overline{\mathbb{Z}_p}). \end{equation} \subsection{Main statement} The following theorem can be seen as a refined, more precise version of the functoriality of local heights used in~\S\ref{every prime}.
\begin{theorem}[Local relative stability estimates]\label{thm:compare reductive} Under the hypotheses of~\S\ref{standing hyp}, let~$v,v'\in{\mathbb{Z}_p}^d$ be non-zero vectors, denote by~$\overline{v},\overline{v'}\in {{\mathbb F}_p}^d$ their reductions, and assume that \begin{enumerate} \item \label{test} the orbits~$G_{\mathbb{Q}_p}\cdot v,G_{\mathbb{Q}_p}\cdot v'\subseteq {\mathbb A}_{{\mathbb Q}_p}^d$ are closed subvarieties; \item the stabiliser groups~$F_v:=\Stab_G(v),F_{v'}:=\Stab_G(v')$ satisfy \[ F_v=F_{v'}=F; \] \end{enumerate} and that \begin{enumerate} \item the orbits~$G_{{\mathbb F}_p}\cdot \overline{v},G_{{\mathbb F}_p}\cdot \overline{v'}\subseteq {\mathbb A}_{{\mathbb F}_p}^d$ are closed subvarieties; \item the stabiliser groups~$F_{\overline{v}}:=\Stab_{G_{{\mathbb F}_p}}(\overline{v}),F_{\overline{v'}}:=\Stab_{G_{{\mathbb F}_p}}(\overline{v'})$ satisfy, as group schemes\footnote{It amounts to the property that~$F_{\overline{v}}$ and~$F_{\overline{v'}}$ are smooth.}, \begin{equation}\label{Hyp 51} F_{\overline{v}}=F_{\overline{v'}}=F_{{\mathbb F}_p}. \end{equation} \end{enumerate} We define two functions~$G(\mathbb{C}_p)\to \mathbb{R}$ given by \[ H_{v}:g\mapsto \max\{1;\norm{g\cdot v}\}\text{ and } H_{v'}:g\mapsto \max\{1;\norm{g\cdot v'}\}. \] Then the functions~$h_v=\log H_v$ and~$h_{v'}=\log H_{v'}$ satisfy \begin{equation}\label{log equiv on reductive}\label{reductive theorem final estimate} h_v\leq C\cdot h_{v'}\text{ and }h_{v'}\leq C\cdot h_v, \end{equation} in which~$C=C(\Sigma(\rho))$ depends only on the set of weights of~$\rho$ (cf.~\ref{rem set weights}). \end{theorem} \noindent In our proof, the quantity~$C(\Sigma)$ will depend upon the choice of an invariant euclidean metric ``in the root system'' of~$G$, and there are canonical choices of such metrics. The hypothesis~\eqref{Hyp 51} can be replaced by the weaker hypothesis in~\eqref{pKN flat}. Several features of this statement are important to our strategy. \begin{itemize} \item The quantity~$C$ only depends on the weights of~$\rho$.
Thus, when~$\rho$ comes from a representation defined over~${\mathbb Q}$, this~$C$ does not depend on the prime~$p$. \item The inequality does not require an additive constant: we have \[ H_v\leq A\cdot {H_{v'}}^C \] with~$A=1$. Thus, when we multiply the inequalities over infinitely many primes, we do not accumulate an uncontrolled multiplicative factor~$\prod_p A(p)$. \item The estimate~\eqref{reductive theorem final estimate} depends upon~$v$ only through its stabiliser group~$F$. This is precisely the information about the stabilisers that we deduce from the Tate conjecture. \end{itemize} \subsection{Proof} Because~$G_{\mathbb{Z}_p}(\overline{\mathbb{Z}_p})\leq GL(d,\overline{\mathbb{Z}_p})$ acts isometrically on~$\overline{\mathbb{Z}_p}^d$, the functions~$h_v$ and~$h_{v'}$ are left~$G_{\mathbb{Z}_p}(\overline{\mathbb{Z}_p})$-invariant. We denote the quotient functions by \[ h'_v,h'_{v'}: G_{\mathbb{Z}_p}(\overline{\mathbb{Z}_p})\backslash G(\overline{\mathbb{Q}_p})\to \mathbb{R}. \] Choose an arbitrary~$g\in G(\overline{\mathbb{Q}_p})$. It is sufficient to prove~\eqref{CCL proof KN} with this element~$g$, as the other inequality in~\eqref{log equiv on reductive} can be deduced after swapping~$v$ and~$v'$. Let~$T\leq G$ be a maximal torus defined over~$\overline{\mathbb{Q}_p}$. We endow~$A_T$, defined in~\eqref{defi appartment}, with a canonical euclidean distance~$d(~,~)=d_G(~,~)$, invariant under~$N_G(T)$ and depending only on~$G$ (using e.g.~\cite[LIE VI.12]{BBK}). We denote by~$\Sigma(\rho)$ the set of weights of the action~$T\to G\xrightarrow{\rho} GL(n)$. We denote by~$\gamma(\Sigma(\rho))$ the quantity from Prop.~\ref{coro slopes}; it does not depend on the maximal torus~$T$, which is unique up to conjugation, but only on the weights of~$\rho$. Because~$G_{\mathbb{Z}_p}$ is hyperspecial, there is a Cartan decomposition~\eqref{Cartan}.
Thus there are~$t'\in T(\overline{\mathbb{Q}_p})$ and~$k\in G_{\mathbb{Z}_p}(\overline{\mathbb{Z}_p})$ such that \[ G_{\mathbb{Z}_p}(\overline{\mathbb{Z}_p})\cdot g=G_{\mathbb{Z}_p}(\overline{\mathbb{Z}_p})\cdot t \] with~$t=k t' k^{-1}$. We may thus assume~$g=t$. We may write, as in~\eqref{hv to hmu}, \[ h_v\restriction_{T(\mathbb{C}_p)}=h_{\mu}\circ a_T\text{ and }h_{v'}\restriction_{T(\mathbb{C}_p)}=h_{\mu'}\circ a_T. \] According to Proposition~\ref{prop slopes comparison}, we have \[ c(\Sigma(v))\cdot d(a,C_\mu)\leq h_{\mu}(a)\text{ and }h_{\mu'}(a)\leq c'(\Sigma(v'))\cdot d(a,C_{\mu'}). \] Thanks to the hypotheses of~Theorem~\ref{thm:compare reductive} we may apply\footnote{Here~\eqref{universal integral} would suffice to prove~$C_\mu=C_{\mu'}$.} Theorem~\ref{pKN}. Thus \begin{multline*} \set{t\in T(\overline{\mathbb{Q}_p})}{h_v(t)=0} \\=\set{t\in T(\overline{\mathbb{Q}_p})}{tF\in (G/F)(\overline{\mathbb{Z}_p})} \\=T(\overline{\mathbb{Q}_p})\cap (G(\overline{\mathbb{Z}_p})\cdot F(\overline{\mathbb{Q}_p})) \\=\set{t\in T(\overline{\mathbb{Q}_p})}{h_{v'}(t)=0}. \end{multline*} As the valuation group~$\Gamma(\overline{\mathbb{Q}_p})$ is~$\mathbb{Q}$, we deduce~$C^\mathbb{Q}_{\mu}=C^\mathbb{Q}_{\mu'}$, and, by Lemma~\ref{C Q density}, we deduce \[ C:=C_\mu=C_{\mu'}. \] Applying Corollary~\ref{coro slopes}, we conclude \begin{equation}\label{CCL proof KN} h_{v'}(g)=h_{v'}(t)=h_{\mu'}\circ a_T(t)\leq \gamma(\Sigma(\rho))^2\cdot h_{\mu}\circ a_T(t)=h_{v}(t)=h_{v}(g). \end{equation} \subsection{Norms on toric orbits and the apartment}\label{appartments} For a torus~$T$ over an ultrametric extension~$L/\mathbb{Q}_p$, the associated ``apartment'' is defined as \begin{equation}\label{defi appartment} A_T=A_{T/L}=Y(T/L)\otimes\mathbb{R}\simeq {\rm Hom}(X(T),\mathbb{R}) \end{equation} where~$Y(T)=Y(T/L):={\rm Hom}(GL(1)_L,T)$ and~$X(T)=X(T/L):={\rm Hom}(T,GL(1)_L)$ are the groups of cocharacters and of characters, which are~$\mathbb{Z}$-linear dual to each other.
Then the pairing \[ (t,\chi)\mapsto \log_p\abs{\chi(t)}:T(L)\times X(T)\to \mathbb{R} \] induces a map \begin{equation}\label{T to A} a_T:T(L)\to A_T. \end{equation} Denote by~$\mathbb{Z}\leq \Gamma_L:=\log_p\abs{L^\times}\leq \mathbb{R}$ the valuation group of~$L$. When~$T$ has a model over~$O_L$ which is a torus~$T_{O_L}$, the map~$a_T$ factors as \[ T(L)\twoheadrightarrow \frac{T(L)}{T_{O_L}(O_L)}\xrightarrow{\sim} Y(T)\otimes\Gamma_L\hookrightarrow A_T. \] For a character~$\chi\in X(T)$ the function \[ \log_p\abs{\chi}:T(L)\xrightarrow{\chi} L^\times\xrightarrow{\abs{~}} \mathbb{R}_{>0}\xrightarrow{\log_p} \mathbb{R} \] passes to the quotient~$\frac{T(L)}{T_{O_L}(O_L)}$ and extends to an~$\mathbb{R}$-linear form which we denote by \[ \omega_\chi:A_T\to \mathbb{R}, \] which is also the one deduced from~$A_T\simeq {\rm Hom}(X(T),\mathbb{R})$. Assume~$T\leq GL(n)$ is a torus over~$L$ with good reduction: denoting the eigenspace decomposition of~$L^n$ for the action of~$T$ by \[ L^n=\bigoplus_{\chi \in X(T)} V_\chi, \] we have (\ref{Good torus}, \cite[Prop.\,5]{Sesh}) \begin{equation}\label{integral eigen} {O_L}^n=\bigoplus_{\chi \in X(T)} V_\chi\cap {O_L}^n. \end{equation} It follows, denoting by~$\norm{~}$ the standard norm on~$L^n$, that, for~$v\in L^n$, \begin{equation}\label{norm tore} \norm{v}=\max\{0\}\cup\{\norm{v_\chi}~|~{\chi\in X(T)}\}. \end{equation} We denote by~$\Sigma(T)\subseteq X(T)$ the set of weights for the action of~$T$, we set \[ \Sigma(v)=\{\chi\in X(T)~|~v_\chi\neq 0\}\subseteq \Sigma(T), \] and, if~$v\in{O_L}^n$, we define \[ \overline{\Sigma}(v)=\{\chi\in X(T)~|~\norm{v_\chi}=1\}\subseteq \Sigma(v) \] and a function~$\mu:\Sigma(v)\to \mathbb{R}_{\leq0}$ given by \[ \mu(\chi)=\log_p \norm{v_\chi}.
\] The functions~$H_v:T(\overline{\mathbb{Q}_p})\to\mathbb{R}_{\geq 1}$ and~$h_v=\log_p(H_v)$, defined by \[ H_v(t)=\max\{1;\norm{t\cdot v}\}, \] can be computed from the formula \begin{equation}\label{hv to hmu} h_v=h_\mu\circ a_T\text{ with }h_\mu(a):=\max\{0\}\cup\set{\omega_\chi(a)+\mu(\chi)}{\chi \in \Sigma(v)}. \end{equation} \begin{lemma}\label{C Q density} Define \[ C_\mu=\set{a\in A_T}{h_{\mu}(a)=0}\text{ and }A_T^\mathbb{Q}=Y(T)\otimes\mathbb{Q}\subseteq A_T. \] Then \[ C_\mu=\overline{C_\mu\cap A_T^\mathbb{Q}}. \] \end{lemma} We omit the proof of Lemma~\ref{C Q density}. The main point is that the convex set~$C_\mu$ is constructed from affine forms~$\omega_{\chi}+\mu(\chi)$ on~$A_T$ which are \emph{defined over~${\mathbb Q}$} with respect to the~${\mathbb Q}$-structure~$A_T^{\mathbb Q}$. \section{Residual stability and {$p$-adic} Kempf-Ness Theorem} \label{sec:pKN} The estimates of Th.~\ref{thm:compare reductive} rely on the following result, which is of independent interest. It is an analogue of~\cite[Th. 0.1 b)]{KN} in the context of~$p$-adic Mumford stability~\cite{B92}. It relies on a careful analysis of the reduction of models of homogeneous spaces given by invariant theory~\cite{Sesh}, or of closed orbits in a linear representation. \begin{theorem}[$p$-adic Kempf-Ness Theorem]\label{pKN} Let~$F_{\mathbb{Z}_p}\leq G_{\mathbb{Z}_p}\leq GL(n)_{\mathbb{Z}_p}$ be smooth reductive group schemes, such that~$F_{\mathbb{Z}_p}\to G_{\mathbb{Z}_p}\to GL(n)_{\mathbb{Z}_p}$ are closed immersions, and~$G_{\mathbb{Z}_p}$ is connected.
Let~$v\in{\mathbb{Z}_p}^n$, denote by~$\overline{v}\in{\mathbb{F}_p}^n$ its reduction, and assume that \begin{equation}\label{pKN flat} \Stab_{G_{\mathbb{Q}_p}}(v)=F_{\mathbb{Q}_p}\text{ and } \dim (\Stab_{G_{\mathbb{F}_p}}(\overline{v}))=\dim (F_{\mathbb{F}_p}) \end{equation} (using Krull dimensions), and assume that the orbits \begin{equation}\label{pKN stab} G_{\mathbb{Q}_p}\cdot v\subseteq \mathbb{A}^n_{\mathbb{Q}_p}\text{ and } G_{\mathbb{F}_p}\cdot\overline{v}\subseteq \mathbb{A}^n_{\mathbb{F}_p} \end{equation} are closed. Then, for all~$g\in G(\overline{\mathbb{Q}_p})$, denoting by~$\mathbb{Z}_p[G/F]:=\mathbb{Z}_p[G]\cap \mathbb{Q}_p[G]^{F}$ the algebra of $F$-invariant functions~$G\to\mathbb{A}^1$ defined over~$\mathbb{Z}_p$, we have \begin{equation}\label{universal integral} g\cdot v \in \overline{\mathbb{Z}_p}^n~\text{ if and only if }~\forall f\in \mathbb{Z}_p[G/F], f(g)\in\overline{\mathbb{Z}_p}. \end{equation} Moreover,~$ \mbox{Spec}(\mathbb{Z}_p[G/F])$ is smooth over~$\mathbb{Z}_p$, and we have \begin{equation}\label{KN CCL} (G(\overline{\mathbb{Q}_p})\cdot v)\cap\overline{\mathbb{Z}_p}^n=G(\overline{\mathbb{Z}_p})\cdot v. \end{equation} \end{theorem} A reformulation of~\eqref{universal integral} is the following: denoting by~$gF\in(G/F)(\overline{\mathbb{Q}_p})$ the image of~$g\in G(\overline{\mathbb{Q}_p})$, we have \begin{equation}\label{universal integral bis} g\cdot v \in \overline{\mathbb{Z}_p}^n~\text{ if and only if }~gF\in(G/F)(\overline{\mathbb{Z}_p}). \end{equation} \subsubsection*{Remarks} Some of the hypotheses can be rephrased as follows. The~$\mathbb{Q}_p$-algebraic groups~$F$ and~$G$ are reductive, the compact subgroups~$F_{\mathbb{Z}_p}(\mathbb{Z}_p)\leq F(\mathbb{Q}_p)$ and~$G_{\mathbb{Z}_p}(\mathbb{Z}_p)\leq G(\mathbb{Q}_p)$ are hyperspecial subgroups, and we have~$F_{\mathbb{Z}_p}(\mathbb{Z}_p)=F(\mathbb{Q}_p)\cap GL(n,\mathbb{Z}_p)$ and~$G_{\mathbb{Z}_p}(\mathbb{Z}_p)=G(\mathbb{Q}_p)\cap GL(n,\mathbb{Z}_p)$.
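As a basic example of this setting (an illustration on our part, not needed in what follows), one may take for~$G$ the general linear group itself:

```latex
% Illustration: the standard model of GL(n) is hyperspecial.
% G = GL(n) over Q_p, with model G_{Z_p} = GL(n)_{Z_p}; then
%   G_{Z_p}(Z_p) = GL(n, Z_p)
% is a hyperspecial maximal compact subgroup of G(Q_p) = GL(n, Q_p),
% and the special fibre is the connected reductive group GL(n)_{F_p}.
\[
G=GL(n)_{\mathbb{Q}_p},\qquad
G_{\mathbb{Z}_p}(\mathbb{Z}_p)=GL(n,\mathbb{Z}_p),\qquad
G_{\mathbb{F}_p}=GL(n)_{\mathbb{F}_p}.
\]
```

In this case every maximal compact subgroup of~$GL(n,\mathbb{Q}_p)$ is conjugate to~$GL(n,\mathbb{Z}_p)$, as it is the stabiliser of a lattice in~${\mathbb{Q}_p}^n$.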
The property~\eqref{pKN stab} is related to semi-stability and residual semi-stability of the vector~$v$ in the sense of~\cite{B92}. In~\eqref{pKN flat}, the hypothesis on dimensions means that~$\Stab_{G_{\mathbb{F}_p}}(\overline{v})^{0,red}$ (the reduced subgroup of the neutral component) is equal to~$(F_{\mathbb{F}_p})^0$. Equivalently,~$\Stab_{G_{\mathbb{F}_p}}(\overline{v})^{0}(\overline{\mathbb{F}_p})=F^0(\overline{\mathbb{F}_p})$. This is implied by the stronger condition \begin{equation}\label{red scheme identity} \Stab_{G(\overline{\mathbb{F}_p})}(\overline{v})=F(\overline{\mathbb{F}_p}) \end{equation} and by the yet stronger one \begin{equation}\label{Scheme stab identity} \Stab_{G_{\mathbb{F}_p}}(\overline{v})=F_{\mathbb{F}_p}\text{ as group schemes.} \end{equation} \subsubsection*{Proof of Theorem~\ref{pKN}} We first prove~\eqref{universal integral}. \begin{proof}Let~$S$ be the closure of~$G_{\mathbb{Q}_p}\cdot v$ in~$\mathbb{A}^n_{\mathbb{Z}_p}$ as in~\eqref{defi schematic closure S}: it is flat over~$\mathbb{Z}_p$ and we have \begin{equation}\label{flat S} S(\overline{\mathbb{Z}_p})=G(\overline{\mathbb{Q}_p})\cdot v \cap \overline{\mathbb{Z}_p}^n \qquad\text{and}\qquad S(\overline{\mathbb{F}_p})=\left\{\overline{x'}\,\middle|\,x'\in S(\overline{\mathbb{Z}_p})\right\}. \end{equation} Let~$a_1,\ldots,a_n\in A:=\mathbb{Z}_p[S]$ be the restrictions to~$S$ of the coordinate functions on~$\mathbb{A}^n_{\mathbb{Z}_p}$: by definition we have, for~$x=g\cdot v\in S(\overline{\mathbb{Q}_p})=G(\overline{\mathbb{Q}_p})\cdot v$, \[ x=g\cdot v\in\overline{\mathbb{Z}_p}^n\text{ if and only if }\forall i\in \{1;\ldots;n\}, a_i(x)\in\overline{\mathbb{Z}_p}. \] Because~$S$ is closed in~$\mathbb{A}^n_{\mathbb{Z}_p}$, the family~$(a_i)_{i\in\{1;\ldots;n\}}$ generates~$\mathbb{Z}_p[S]$. Denote by~$x':=g F$ the image of~$g$ in~$(G/F)(\overline{\mathbb{Q}_p})\simeq G(\overline{\mathbb{Q}_p})/F(\overline{\mathbb{Q}_p})$, and define~$B:=\mathbb{Z}_p[G/F]$.
Applying Cor.~\ref{coro integral extension}, we may use Lem.~\ref{lemma integral extension}. We deduce~\eqref{universal integral}. \end{proof} We now prove~\eqref{KN CCL}. \begin{proof}Consider~$x=gF\in (G/F)(\overline{\mathbb{Z}_p})$ corresponding to \[ \xi:\mbox{Spec}(\overline{\mathbb{Z}_p})\to G/F. \] The reduction~$\overline{x}$ of~$x$ is the point~$\overline{x}=\overline{gF}\in (G/F)(\overline{\mathbb{F}_p})$ obtained by composing~$\xi$ with~$\mbox{Spec}(\overline{\mathbb{F}_p})\to\mbox{Spec}(\overline{\mathbb{Z}_p})$. Because~$(G/F)(\overline{\mathbb{F}_p})\simeq G(\overline{\mathbb{F}_p})/F(\overline{\mathbb{F}_p})$, there exists~$\overline{g}\in G(\overline{\mathbb{F}_p})$ lying in the coset~$\overline{x}=\overline{gF}(\overline{\mathbb{F}_p})$. From Lemma~\ref{platitude}, the map~$\omega:G\to G/F$ is flat. Flatness is a property which is stable under base change. Thus the base change~$\omega\times_{G/F} \xi$ of~$\omega$ along~$\xi$ is a flat map which we denote by \[ T_\xi\to \mbox{Spec}(\overline{\mathbb{Z}_p}). \] By construction~$T_\xi(\overline{\mathbb{Q}_p})$ is the transporter~$\set{g\in G(\overline{\mathbb{Q}_p})}{g\cdot v=x}$, and~$T_\xi(\overline{\mathbb{F}_p})$ is the transporter~$\set{\overline{g}\in G(\overline{\mathbb{F}_p})}{\overline{g}\cdot \overline{v}=\overline{x}}$. We have~$\overline{g}\in T_\xi(\overline{\mathbb{F}_p})$ by construction, and, by flatness of~$T_\xi$, there exists~$g$ in~$T_\xi(\overline{\mathbb{Z}_p})=T_\xi(\overline{\mathbb{Q}_p})\cap G(\overline{\mathbb{Z}_p})$. By definition \[ g\cdot v=x\text{ and }g\in G(\overline{\mathbb{Z}_p}). \] Because~$x$ is arbitrary, we have~$(G/F)(\overline{\mathbb{Z}_p})=\omega(G(\overline{\mathbb{Z}_p}))={G(\overline{\mathbb{Z}_p})}F/F$. The equation~\eqref{KN CCL} follows, using~\eqref{universal integral}. \end{proof} \subsubsection*{The Smooth case} The authors thank L. Moret-Bailly for helpful discussions, for the following addition and its proof, and for useful references.
We denote by~$\underline{G}/\underline{F}$ the quotient of~$G$ by~$F$ as an ``algebraic space'' in the sense of Artin. Some references are~\cite{A},~\cite{Knu},~\cite{Ana},~\cite{Ray}. \begin{proposition}\label{LMB Prop} In the situation of Th.~\ref{pKN}, assume moreover that~\eqref{Scheme stab identity} holds. Then the morphism \begin{equation}\label{LMB map} \underline{G}/\underline{F}\to S \end{equation} is an isomorphism of schemes. In particular~$S$ is smooth over~$\mathbb{Z}_p$ and its special fibre is regular. \end{proposition} \begin{proof}Under assumption~\eqref{Scheme stab identity}, the morphism~\eqref{LMB map} is a monomorphism (e.g.~\cite[V Th.~10.1.2]{SGA31}). Since the morphism~\eqref{LMB map} is also finite (by Lemma\footnote{We may use the Lemma for~$\underline{G}/\underline{F}$ instead of~$G/F$ with the following changes in its proof. We replace Zariski's main theorem by the version~\citestacks{082K} for algebraic spaces. Using the monomorphism~$\underline{G}/\underline{F}\to G/F$ from~\cite[V Th.~10.1.2]{SGA31}, we can deduce from Lem.~\ref{Seshadri} the corresponding version for~$\underline{G}/\underline{F}$.}~\ref{Lemma integral}), and thus proper, we may invoke~\cite[18.12.6]{EGA44} (or~\citestacks{04XV}). We deduce that the morphism~\eqref{LMB map} is a closed immersion, that is, an isomorphism onto a closed subscheme. This image is closed and contains the generic fibre, and thus contains~$S^{red}$. Because the scheme~$S$ is reduced, as can be checked on the generic fibre (or see~\citestacks{01J2}), the morphism~\eqref{LMB map} is an isomorphism. We also know that~$\underline{G}/\underline{F}$ is smooth from~\cite[$\text{VI}_\text{B}$ Prop.~9.2 (xii) (and~V Th.~10.1.2)]{SGA31}. \end{proof} \begin{corollary}\label{LMB vs Sesh} The natural morphism~$\underline{G}/\underline{F}\to G/F$ is an isomorphism onto the quotient defined by invariant theory.
\end{corollary} \begin{proof}This will follow from Prop.~\ref{LMB Prop} once we realise~$G/F$ as an instance of~$S\subseteq \mathbb{A}^n$ as in Prop.~\ref{LMB Prop}. According to~\cite[Th.~2]{Sesh}, the algebra~$\mathbb{Z}_p[G/F]$ admits a finite generating family~$f_1,\ldots,f_k\in \mathbb{Z}_p[G]\subseteq \mathbb{Q}_p[G]$. According to~\cite[Prop.~3]{Sesh}, there exists a finite-dimensional subrepresentation~$V\subseteq \mathbb{Q}_p[G]$ of~$G$ containing~$\{f_1;\ldots;f_k\}$. We endow~$V$ with the integral structure~$V_{\mathbb{Z}_p}$ induced by~$\mathbb{Z}_p[G]$. We pick~$v=(f_1,\ldots,f_k)\in V_{\mathbb{Z}_p}^k\simeq {\mathbb{Z}_p}^{n}$, with~$n:=\dim(V)\cdot k$. By construction~$G/F\to \mathbb{A}_{\mathbb{Z}_p}^n$ is a closed immersion. We only need to check~\eqref{pKN flat} and~\eqref{pKN stab}. Using Lemma~\ref{Seshadri} we get the following. \begin{itemize} \item We have~$\dim (G/F)_{\mathbb{F}_p}=\dim G_{\mathbb{F}_p}-\dim F_{\mathbb{F}_p}$, and~\eqref{pKN flat} follows. \item The morphism~$G_{\mathbb{F}_p}/F_{\mathbb{F}_p}\to (G/F)_{\mathbb{F}_p}$ is a closed immersion. Thus the images of~$f_1,\ldots,f_k$ generate~$\mathbb{F}_p[G_{\mathbb{F}_p}/F_{\mathbb{F}_p}]$, and~$G_{\mathbb{F}_p}/F_{\mathbb{F}_p}\to G_{\mathbb{F}_p}\cdot \overline{v}\subseteq\mathbb{A}^n_{\mathbb{F}_p}$ is a closed immersion, proving~\eqref{pKN stab}.\qedhere \end{itemize} \end{proof} \subsection{Connectedness and Special fibre} \begin{lemma}\label{lemma KN connected} Let~$G\leq GL(n)$ be a hyperspecial group over~$\mathbb{Q}_p$, let~$v\in \overline{\mathbb{Z}_p}^n$ and let \begin{equation}\label{defi schematic closure S} S=\overline{G\cdot v}^{Zar(\mathbb{A}^n_{\overline{\mathbb{Z}_p}})} \end{equation} be the schematic closure of~${G\cdot v}\subseteq \mathbb{A}^n_{\overline{\mathbb{Q}_p}}$ in~$\mathbb{A}^n_{\overline{\mathbb{Z}_p}}$. Then~$S_{\mathbb{F}_p}$ is connected. \end{lemma} We first treat the case~$T=G= GL(1)$.
\begin{proof}If~$S_{\mathbb{F}_p}$ is a closed orbit of~$T$, then it is connected, as it is the image of~$GL(1)_{\mathbb{F}_p}$, which is connected. We will show that, otherwise, we can decompose~$S_{\mathbb{F}_p}$ under the form \begin{equation}\label{decompo tore en + 0 -} S_{\mathbb{F}_p}(\overline{\mathbb{F}_p})=S^-\cup\{\overline{v_0}\}\cup S^+ \end{equation} where each of~$S^-$ and~$S^+$ is either empty or of the form~$X=T(\overline{\mathbb{F}_p})\cdot\overline{w}$ with~$\overline{v_0}\in \overline{X}^{Zar}$. For every~$\overline{w}\in\overline{\mathbb{F}_p}^n$, because~$T=GL(1)$ is connected, so is~$T\cdot \overline{w}$, and so is its Zariski closure. It follows that~$S^-$ and~$S^+$ are contained in the connected component of~$\overline{v_0}$, and finally that~$S_{\mathbb{F}_p}$ coincides with the connected component of~$\overline{v_0}$. From~\eqref{flat S}, a point in~$S(\overline{\mathbb{F}_p})$ is of the form \[ \overline{x}\text{ with }x=t\cdot v\in\overline{\mathbb{Z}_p}^n\text{ and }t\in T(\overline{\mathbb{Q}_p}). \] We identify~$X(T):={\rm Hom}(GL(1),GL(1))$ with~$\mathbb{Z}$ and denote by \[ v=\sum_{k\in\mathbb{Z}} v_k \] the eigendecomposition of~$v$ for the action of~$T$. Then~$x=t\cdot v=\sum t^k\cdot v_k$, and, by~\eqref{norm tore}, \begin{equation}\label{eq t x entier} \norm{x}=\max_k\{\abs{t}^k\cdot \norm{v_k}\}\leq 1. \end{equation} Define \[ c=\max_{k<0,\,v_k\neq0} \norm{v_k}^{-1/k}\in[0;1]\text{ and } c'=\min_{k>0,\,v_k\neq0} \norm{v_k}^{-1/k}\in[1;+\infty], \] with the conventions~$c=0$ and~$c'=+\infty$ when the corresponding index set is empty. For~$t\in T(\overline{\mathbb{Q}_p})$ we have~$t\cdot v\in\overline{\mathbb{Z}_p}^n$ if and only if~$c\leq \abs{t}\leq c'$. We define \begin{eqnarray*} T^-&=&\set{t\in T(\overline{\mathbb{Q}_p})}{\abs{t}=c},\\ T^0&=&\set{t\in T(\overline{\mathbb{Q}_p})}{c<\abs{t}<c'},\\ T^+&=&\set{t\in T(\overline{\mathbb{Q}_p})}{\abs{t}=c'}, \end{eqnarray*} and \[ S^-=\set{\overline{t\cdot v}}{t\in T^-}\text{ and } S^+=\set{\overline{t\cdot v}}{t\in T^+}, \] and \[ v_-=\sum_{k<0} v_k\text{ and }v_+=\sum_{k>0} v_k. \] If~$c=0$, then~$T^-=S^-=\emptyset$.
Otherwise, let us pick~$u\in T^-$. Assume first~$c\neq c'$. We then have~$\overline{u\cdot v_+}=0$ and \[ \overline{w}:=\overline{u\cdot v}=\overline{u\cdot v_-+v_0}. \] We then have \[ S^-=T(\overline{\mathbb{F}_p})\cdot \overline{w}. \] Because the weights of~$\overline{u\cdot v_-}$ are negative, we have \[ \lim_{\overline{t}\to +\infty} \overline{t}\cdot(\overline{u\cdot v_-}+\overline{v_0})= \overline{v_0}, \] where limits are understood in the sense of the Hilbert-Mumford criterion, as in~\cite[Lem.\,1.3]{Kempf}. Thus~$\overline{v_0}\in \overline{S^-}^{Zar}$. The case of~$S^+$ is treated similarly, and we have obtained~\eqref{decompo tore en + 0 -} with the desired properties. We now treat the remaining case~$c=c'$. We then have \[ S_{\mathbb{F}_p}(\overline{\mathbb{F}_p})=S^+=S^-=T(\overline{\mathbb{F}_p})\cdot\overline{v}. \] (This is then a closed orbit of~$T_{\mathbb{F}_p}$, as~$S_{\mathbb{F}_p}$ is closed.) \end{proof} We now reduce the Lemma to the case of a torus~$GL(1)\simeq T\leq G$. \begin{proof}It is enough to prove that, for an arbitrary~$\overline{x}\in S(\overline{\mathbb{F}_p})$, this~$\overline{x}$ and~$\overline{v}$ belong to the same connected component of~$S_{\mathbb{F}_p}$. We may find~$x\in S(\overline{\mathbb{Q}_p})\cap\overline{\mathbb{Z}_p}^n$ with reduction~$\overline{x}$. There exists~$g\in G(\overline{\mathbb{Q}_p})$ with~$g\cdot v=x$. From the Cartan decomposition~\eqref{Cartanbar}, there exist~$k\in G(\overline{\mathbb{Z}_p})$, a maximal torus~$T\leq G$, and~$t\in T(\overline{\mathbb{Q}_p})$ with~$k\cdot t=g$. The torus~$T$ has good reduction by~\ref{Good torus}. There exist~$y:GL(1)\to T$ defined over~$\overline{\mathbb{Q}_p}$,~$u\in T(\overline{\mathbb{Z}_p})$, and~$t'\in \overline{\mathbb{Q}_p}^\times$ with~$y(t')=u\cdot t$.
Because~$G_{\mathbb{F}_p}$ is connected and~${S}_{\mathbb{F}_p}$ is~$G_{\mathbb{F}_p}$-invariant, the orbit~$G_{\mathbb{F}_p}(\overline{\mathbb{F}_p})\cdot\overline{x}$ is connected and contained in~$S_{\mathbb{F}_p}$. Thus~$\overline{x}$ and~$\overline{x'}=\overline{(k\cdot u)}^{-1}\cdot \overline{x}$ lie in the same connected component of~$S_{\mathbb{F}_p}$. We may thus replace~$\overline{x}$ by~$\overline{x'}$,~$x$ by~$(k\cdot u)^{-1}\cdot x$, and~$g$ by~$y(t')$. We have~$x\in GL(1)(\overline{\mathbb{Q}_p})\cdot v\cap \overline{\mathbb{Z}_p}^n$ and thus \[ \overline{v},\overline{x}\in S_T(\overline{\mathbb{F}_p})\text{ with }S_T:=\overline{T\cdot v}^{Zar(\mathbb{A}^n_{\overline{\mathbb{Z}_p}})}\subseteq S. \] From the previous case~$G=GL(1)$, the special fibre~$(S_T)_{\mathbb{F}_p}$ is connected. Thus~$\overline{x}$ and~$\overline{v}$ lie in the same connected component. \end{proof} \begin{lemma}\label{Lemma S S'} In the situation of Lemma~\ref{lemma KN connected}, we assume that \[ \text{ the orbit $G_{\mathbb{F}_p}\cdot \overline{v}$ is Zariski closed in~$\mathbb{A}^n_{\mathbb{F}_p}$} \] and denote by~$S'$ the corresponding reduced subscheme of~$\mathbb{A}^n_{\mathbb{F}_p}$. We assume furthermore that, using Krull dimensions, \[ \dim \Stab_G(v)=\dim \Stab_{G_{\mathbb{F}_p}}(\overline{v}). \] Then \[ S'=(S_{\mathbb{F}_p})^{red}. \] \end{lemma} \begin{proof}By construction~$S$ is flat over~$\mathbb{Z}_p$. According to Lemma~\ref{lemma schemes} we have \begin{equation}\label{semi dim S} \dim S_{\mathbb{F}_p}\leq \dim S_{\mathbb{Q}_p}. \end{equation} From~\eqref{dim formula}, we have \begin{eqnarray} \dim S_{\mathbb{Q}_p} &= &\dim G_{\mathbb{Q}_p} - \dim \Stab_{G_{\mathbb{Q}_p}}(v),\\ \dim S'_{\phantom{\mathbb{Q}_p}} &= &\dim G_{\mathbb{F}_p} - \dim \Stab_{G_{\mathbb{F}_p}}(\overline{v}). \end{eqnarray} We deduce~$\dim(S')\geq \dim(S_{\mathbb{F}_p})$. Because~$S'\subseteq S_{\mathbb{F}_p}$, we actually have \[ \dim(S')=\dim(S_{\mathbb{F}_p}).
\] Thus, at the level of topological spaces,~$S'$ contains a generic point of one irreducible component of~$S_{\mathbb{F}_p}$, and thus\footnote{This~$S'$ is of finite type over~$S$ and its image is constructible.} contains a non-empty open subset of~$S_{\mathbb{F}_p}$. Because~$S'$ is closed in~$\mathbb{A}^n_{\mathbb{F}_p}$, it is closed in~$S_{\mathbb{F}_p}$. Thus~$S'$ contains a connected component of~$S_{\mathbb{F}_p}$, and, because~$S_{\mathbb{F}_p}$ is connected and~$S'$ is reduced, \[ S'=(S_{\mathbb{F}_p})^{red}.\qedhere \] \end{proof} We define, following~\cite{Sesh}, \begin{equation}\label{defi G sur F} \mathbb{Z}_p[G/F]=\mathbb{Z}_p[G]\cap \mathbb{Q}_p[G]^F\text{ and }G/F:= \mbox{Spec}(\mathbb{Z}_p[G/F]). \end{equation} By~\cite[\S{II}.4, Th.~2]{Sesh} (cf. also~\citestacks[Prop. 10.162.16.]{0335}), \[ \text{ $\mathbb{Z}_p[G/F]$ is of finite type over~$\mathbb{Z}_p$. } \] Since~$G$ is flat we have an isomorphism~$\mathbb{Z}_p[G]\otimes\mathbb{F}_p\simeq\mathbb{F}_p[G_{\mathbb{F}_p}]$. Thus there exists a homomorphism~$\mathbb{Z}_p[G]^{F}\otimes\mathbb{F}_p\to\mathbb{F}_p[G_{\mathbb{F}_p}]^{F_{\mathbb{F}_p}}$, and hence a morphism \[G_{\mathbb{F}_p}/F_{\mathbb{F}_p}\to G/F.\] \begin{lemma}\label{Seshadri} The map \[ G_{\mathbb{F}_p}/F_{\mathbb{F}_p}\to G/F \] induces an isomorphism with the reduced subscheme of the special fibre \[ G_{\mathbb{F}_p}/F_{\mathbb{F}_p}\simeq ((G/F)_{\mathbb{F}_p})^{red}. \] \end{lemma} \begin{proof}We apply~\cite[Prop. 6, \S{II.1}]{Sesh} to the closed affine embedding onto an~$F$-invariant subscheme \[ X:=G\subseteq GL(n)\subseteq SL(n+1)\subseteq V:=\mathbb{A}^{(n+1)^2}, \] where the acting group (Seshadri's~$G$) is our~$F$, which is flat over~$\mathbb{Z}_p$ and acts on~$X$ on the right. In our instance, every geometric point~$x=g\cdot F$ of~$X$ is ``stable'' in the sense of~\cite[\S{II}.1, Def.~1]{Sesh}: every geometric orbit is closed, and the intersection of the closures of two distinct orbits is empty.
With~$B\subseteq \mathbb{Z}_p[X]^F$ the image of~$\mathbb{Z}_p[V]^F$ in~$\mathbb{Z}_p[X]$, we have, at the level of schemes, \[ \mbox{Spec}(\mathbb{Z}_p[X])\to X/F:= \mbox{Spec}(\mathbb{Z}_p[X]^F)\to T:= \mbox{Spec}(B). \] From~\cite[Prop. 6, \S{II.1}]{Sesh} we have, on geometric points in an algebraically closed field~$k$ over~$\mathbb{Z}_p$, \[ X(k)\to X(k)/F(k)\simeq T(k). \] In our terms, this implies that we have bijections \[ (G/F)(\overline{\mathbb{F}_p})\to G(\overline{\mathbb{F}_p})/F(\overline{\mathbb{F}_p})\to T(\overline{\mathbb{F}_p}). \] It follows that the~$G_{\mathbb{F}_p}$-equivariant map \[ S'\simeq (G_{\mathbb{F}_p}/F_{\mathbb{F}_p})\to (G/F)_{\mathbb{F}_p} \] induces bijections \[ (G_{\mathbb{F}_p}/F_{\mathbb{F}_p})(\overline{\mathbb{F}_p})\simeq G(\overline{\mathbb{F}_p})/F(\overline{\mathbb{F}_p})\simeq (G/F)_{\mathbb{F}_p}(\overline{\mathbb{F}_p}). \] Because~$S':=G_{\mathbb{F}_p}/F_{\mathbb{F}_p}$ is reduced, we have a factorisation \begin{equation}\label{S' to Sesh} S'\to ((G/F)_{\mathbb{F}_p})^{red}\to (G/F)_{\mathbb{F}_p}. \end{equation} Let~$x$ be a geometric generic point of~$((G/F)_{\mathbb{F}_p})^{red}$ and let~$k$ be its residue field. Then there is a unique inverse image~$x'\in S'(k)$, and~$S'\to ((G/F)_{\mathbb{F}_p})^{red}$ is an isomorphism from a neighbourhood~$U'$ of~$x'$ onto a neighbourhood~$U$ of~$x$ (cf. e.g.~\citestacks{0BXP}). Using the action of~$G_{\mathbb{F}_p}$ we may assume that~$U$ and~$U'$ are~$G_{\mathbb{F}_p}$-invariant. We have necessarily~$U'=S'$, and~\eqref{S' to Sesh} is an open immersion. It is also surjective on~$\overline{\mathbb{F}_p}$-points, hence surjective. \end{proof} \subsection{Normalisation and Integrality} \begin{lemma}\label{Lemma integral} We keep the situation of Lemma~\ref{Lemma S S'} and the notations of Lemma~\ref{Seshadri}. The morphism~$G/F\to S$ is integral, and finite. \end{lemma} \begin{proof} We claim that the map \[ G/F\to S \] is bijective on geometric points.
It is bijective on geometric points of characteristic~$0$. Indeed, on the generic fibres we have the isomorphisms~$G_{\mathbb{Q}_p}\cdot v\simeq G_{\mathbb{Q}_p}/F_{\mathbb{Q}_p} \simeq (G/F)_{\mathbb{Q}_p}= \mbox{Spec}(\mathbb{Q}_p[G]^{F_{\mathbb{Q}_p}})$. It also is bijective on geometric points of characteristic~$p$. Indeed, from Lem.~\ref{Lemma S S'} and~\ref{Seshadri}, we have \[ S'\simeq (S_{\mathbb{F}_p})^{red}\simeq ((G/F)_{\mathbb{F}_p})^{red}. \] We proved the claim and it follows that~$G/F\to S$ is quasi-finite and bijective. We define~$\overline{S}= \mbox{Spec}(\overline{\mathbb{Z}_p[S]})$ where we denote by~$\overline{\mathbb{Z}_p[S]}$ the integral closure of~$\mathbb{Z}_p[S]$ in~$\mathbb{Z}_p[G/F]$. According to Zariski's Main Theorem in the form~\citestacks[Th. 10.123.12 (Zariski's Main Theorem)]{00Q9}, for any point~$x$, say of characteristic~$p$, in~$G/F$, there is an open subset~$U\subseteq \overline{S}$ containing its image in~$\overline{S}$ such that the map \[ \overline{\pi}:G/F\to \overline{S} \] induces an isomorphism~$\stackrel{-1}{\overline{\pi}}(U)\to U$ above~$U$. Let~$Z=G/F\smallsetminus \stackrel{-1}{\pi}(U)$ and~$Z'=\pi(Z)$ and~$U'=S\smallsetminus Z'$. This is a non-empty subset of~$S$ containing the image~$s:=\pi(x)\in S_{\mathbb{F}_p}$ and such that \[ \pi:G/F\to S \] is integral above~$U'$. By homogeneity, for every~$g\in G(\overline{\mathbb{Z}_p})$, the morphism~$\pi':(G/F)_{\overline{\mathbb{Z}_p}}\to S_{\overline{\mathbb{Z}_p}}$ will be integral above~$gU'\subseteq S_{\overline{\mathbb{Z}_p}}$. Because integrality is a local property, the morphism~$\pi'$ will be integral over the neighbourhood~$U''=G(\overline{\mathbb{Z}_p})\cdot U'$ of the closed fibre~$S_{\overline{\mathbb{F}_p}}$. We also know~$\pi'$ is an isomorphism, hence integral, over the open subset~$U'''=S_{\overline{\mathbb{Q}_p}}$ (the generic fibre). It is then integral over~$U''\cup U'''=S_{\overline{\mathbb{Z}_p}}$. We use~\citestacks[Lem.
10.36.5.]{02JJ} to deduce finiteness from integrality. \end{proof} The following is a reformulation. \begin{corollary}\label{coro integral extension} The normalisations of~$S$ and~$G/F$ in~$G_{\mathbb{Q}_p}$ are the same. The ring~$\mathbb{Z}_p[G/F]$ is integral over~$\mathbb{Z}_p[S]$, and finite. \end{corollary} \begin{lemma}\label{lemma integral extension} Let~$A\subseteq B$ be~$\mathbb{Z}_p$-algebras with~$B$ integral over~$A$, and let~$(a_i)_{i\in I}$ be a generating set of~$A$. For~$x\in \mbox{Spec}(B)(\overline{\mathbb{Q}_p})$ the following are equivalent. \begin{eqnarray} \forall i\in{I},& a_i(x)&\in\overline{\mathbb{Z}_p}.\label{integral lemma generators}\\ \forall a\in{A},& a(x)&\in\overline{\mathbb{Z}_p}.\\ \forall b\in{B},& b(x)&\in\overline{\mathbb{Z}_p}.\label{integral lemma extension} \end{eqnarray} \end{lemma}\label{integral sur Zpbar} \begin{proof}It suffices to prove that for an arbitrary~$b$, assuming~\eqref{integral lemma generators}, we have \begin{equation}\label{eq integral sur Zpbar} b (x)\in\overline{\mathbb{Z}_p}. \end{equation} Because~$b$ is integral over~$A=\mathbb{Z}_p[(a_i)_{i\in I}]$, its image~$b(x)\in\overline{\mathbb{Q}_p}$ is integral over \[ \mathbb{Z}_p[(a_i(x))_{i\in I}] \] (If~$b^{d+1}=a_{(0)}+\ldots+a_{(d)}\cdot b^d$, then~$b(x)^{d+1}=a_{(0)}(x)+\ldots+a_{(d)}(x)\cdot b(x)^d$.) By assumption~$\mathbb{Z}_p[(a_i(x))_{i\in I}]\subseteq\overline{\mathbb{Z}_p}$. But~$\overline{\mathbb{Z}_p}$ is integrally closed in~$\overline{\mathbb{Q}_p}$. We deduce~\eqref{eq integral sur Zpbar}. \end{proof} The above is sufficient for proving~\eqref{universal integral} and for the proof of our main result Th.~\ref{main theorem 2}. Below we address the smoothness of~$G/F$, which we use for~\eqref{KN CCL}. \subsection{Flatness and Smoothness} \begin{lemma}\label{platitude} Assume that~$(G/F)_{\mathbb{F}_p}$ is reduced, for instance that~$(G/F)_{\mathbb{F}_p}$ is smooth over~$\mathbb{F}_p$. If~$F$ is flat, resp.
smooth, then the maps \begin{equation}\label{univ orbit map} \omega:G\to G/F\text{ and }G/F\to \mbox{Spec}(\mathbb{Z}_p) \end{equation} are flat, resp. smooth. \end{lemma} \begin{proof} We know that~$G_{\mathbb{Z}_p}$ is flat and smooth over~$\mathbb{Z}_p$ by hypothesis. If~$F$ is smooth, so are~$F_{\mathbb{F}_p}$ and~$F_{\mathbb{Q}_p}$. From Lemma~\ref{Flat orbit lemma}, we know that \[ G_{\overline{\mathbb{Q}_p}}\to (G/F)_{\overline{\mathbb{Q}_p}}\cdot v=S_{\overline{\mathbb{Q}_p}}\text{ and~}G_{\overline{\mathbb{F}_p}}\to G_{\overline{\mathbb{F}_p}}/F_{\overline{\mathbb{F}_p}} \] are flat, resp. smooth morphisms of algebraic varieties. Because~$(G/F)_{\mathbb{F}_p}$ is reduced we have, by Lem.~\ref{Seshadri}, an isomorphism~$G_{\overline{\mathbb{F}_p}}/F_{\overline{\mathbb{F}_p}}\simeq (G/F)_{\overline{\mathbb{F}_p}}$. We may conclude with~\cite[Part 2, \S5.6, Lem.~5.21, p.\,132]{FGA} or~\citestacks[Lem. 37.16.3.]{039D} that~\eqref{univ orbit map} is flat, resp. with~\cite[17.11.1 d)]{EGA44} that~\eqref{univ orbit map} is smooth. (See also~\cite[$\text{VI}_\text{B}$ Prop.~9.2 (xii) (and~V Th.~10.1.2)]{SGA31}.) \end{proof} \begin{lemma}\label{Flat orbit lemma}Over a field~$\kappa$ let~$G\leq GL(n)_\kappa$ be an algebraic subgroup (smooth closed group subscheme), and choose~$v\in\kappa^n$. Then the ``orbit through~$v$'' map \begin{equation}\label{omega smooth?} \omega:G\to G\cdot v \end{equation} is flat, where~$G\cdot v\simeq G/\Stab_G(v)$ is locally closed and given the reduced scheme structure. We have, using Krull dimension, \begin{equation}\label{dim formula} \dim(G\cdot v)=\dim(G)-\dim(\Stab_G(v)). \end{equation} If~$\Stab_G(v)$ is smooth as a group scheme\footnote{In practice~$\dim \Stab_{\mathfrak{g}}(v)=\dim \Stab_{G}(v)$.}, then~$\omega$ is a smooth map and~$G\cdot v$ is smooth (regular). \end{lemma} \begin{proof} According to the Orbit Lemma~\cite[\S{I} 1.8]{BorelLAG}, the orbit~$G\cdot v$ is locally closed.
Because~$G\cdot v$ is reduced, by~\citestacks[Prop. 29.27.2]{052B} the flatness locus of the map~$\omega$ is a non-empty (dense open) subset of~$G\cdot v$. But this subset is~$G$-invariant. Thus the map is flat everywhere. We deduce~\eqref{dim formula} from the flat case of~\citestacks[Lem. 29.28.2.]{02JS} and~\citestacks[Lem. 29.29.3.]{02NL} (using Krull dimension, cf.~\citestacks[Def. 5.10.1.]{0055}). (We can also find~\eqref{dim formula} in~\cite[p.\,7]{GIT}.) Concerning the smoothness, see for instance~Prop.~\ref{LMB Prop} and~\ref{LMB vs Sesh} or~\cite[$\text{VI}_\text{B}$ Prop.~9.2 (xii) (and~V Th.~10.1.2)]{SGA31}. \end{proof} \section{Slopes weights estimates}\label{sec:slopes} We consider an integer~$n\in\mathbb{Z}_{\geq0}$ and a Euclidean distance~$d(~,~)$ on~$\mathbb{R}^n$. The quantities~$c,c'$ and~$\gamma$ will also implicitly depend on~$d(~,~)$. \begin{lemma}\label{lemma cvx 1} Let~$\Sigma$ be a finite set of linear forms on~$\mathbb{R}^n$, and let the function~$h_{\Sigma}:\mathbb{R}^n\to\mathbb{R}_{\geq0}$ be given by \[ h_{\Sigma}(x)=\max_{\lambda\in\{0\}\cup\Sigma}\lambda(x), \] and define~$C=C(\Sigma):=\{x\in\mathbb{R}^n|h_{\Sigma}(x)=0\}$. Then there exist~$c(\Sigma),c'(\Sigma)\in\mathbb{R}_{>0}$ such that: for all~$x\in\mathbb{R}^n$ satisfying \begin{equation}\label{mini to C} d(x,C)=d(x,0) \end{equation} we have \begin{equation}\label{cvx c c'} c(\Sigma)\cdot d(0,x)\leq h_{\Sigma}(x)\leq c'(\Sigma)\cdot d(0,x). \end{equation} \end{lemma} \begin{proof}We may assume that~$x\neq 0$ and, by homogeneity of~\eqref{cvx c c'}, that \begin{equation}\label{sphere} d(0,x)=1. \end{equation} We can rewrite the condition~\eqref{mini to C} as \begin{equation}\label{mini to c} \forall c\in C,~d(0,x)\leq d(c,x). \end{equation} The set~$C^\bot:=\set{x\in\mathbb{R}^n}{d(0,x)=d(C,x)}$ is an intersection of affine half-spaces, and is a closed set~$C^\bot\subseteq \mathbb{R}^{n}$ (it is the \emph{polar dual cone} to~$C$).
The intersection~$K$ of~$C^\bot$ with the unit sphere~$\set{x\in\mathbb{R}^n}{d(0,x)=1}$ is thus a compact set. We have~$x\in K$, by~\eqref{mini to C} and~\eqref{sphere}. The continuous function~$h_{\Sigma}$ has a minimum value and maximum value on the compact~$K$, which we denote by \[ c(\Sigma):=\min_{k\in K}h_{\Sigma}(k)\text{ and }c'(\Sigma):=\max_{k\in K}h_{\Sigma}(k). \] By definition,~\eqref{cvx c c'} is satisfied and we have~$0\leq c(\Sigma)\leq c'(\Sigma)<+\infty$. It will be enough to prove~$0<c(\Sigma)$. Assume by contradiction that~$c(\Sigma)= 0$ and choose~$k\in K$ such that~$h_{\Sigma}(k)=0$. Then~$k\in C$. From~\eqref{mini to c} for~$x=c=k$, we deduce~$d(0,k)\leq d(k,k)=0$, contradicting~\eqref{sphere}. \end{proof} \begin{proposition}\label{prop slopes comparison} We keep the setting of Lemma~\ref{lemma cvx 1}. Let us fix a map~$\mu:\Sigma\to\mathbb{R}_{\leq 0}$, let~$h_{\mu}:\mathbb{R}^n\to \mathbb{R}_{\geq0}$ be defined by \[ h_{\mu}(x)=\max\{0;\max_{\lambda\in\Sigma}\lambda(x)+\mu(\lambda)\}. \] Define~$C=C_{\mu}=\{x\in\mathbb{R}^n|h_{\mu}(x)=0\}$. Then \begin{equation}\label{cvx affine} \forall x\in \mathbb{R}^n, c(\Sigma)\cdot d(C,x)\leq h_{\mu}(x)\leq c'(\Sigma)\cdot d(C,x). \end{equation} \end{proposition} \begin{proof} Define~$\overline{\Sigma}:=\{\lambda\in\Sigma|\mu(\lambda)=0\}$ and~$\overline{C}=\{x\in\mathbb{R}^n|h_{\overline{\Sigma}}(x)=0\}$. We have~$h_{\overline{\Sigma}}\leq h_{\mu}$ and thus \[ C\subseteq \overline{C}. \] In a first step we prove~\eqref{cvx affine} with the extra condition \begin{equation}\label{cvx extra} d(x,C)=d(x,0)\text{ (that is:~$\forall c\in C, d(x,c)\geq d(x,0)$)}. \end{equation} Let~$\norm{~}$ denote the Euclidean norms induced by~$d(~,~)$ on~$\mathbb{R}^n$ and its dual. For~$a\in \mathbb{R}^n$ we have \[ \max_{\sigma\in\Sigma}\sigma(a)\leq \norm{a}\cdot \max_{\sigma\in\Sigma}\norm{\sigma}. \] Define~$\mu_0:=\max\set{\mu(\sigma)}{\sigma\in\Sigma\smallsetminus\overline{\Sigma}}<0$.
Then, if~$a\in\mathbb{R}^n$ satisfies \begin{equation}\label{petit a} \norm{a}\cdot \max_{\sigma\in\Sigma}\norm{\sigma}\leq -\mu_0, \end{equation} we have \begin{equation}\label{petit a et negatif} h_{\mu}(a)-h_{\overline{\Sigma}}(a)\leq 0,\text{ and thus }h_{\mu}(a)=h_{\overline{\Sigma}}(a). \end{equation} Let us prove that~$d(x,\overline{C})=d(x,0)$. \begin{proof}We want to prove that for an arbitrary~$b\in \overline{C}$ we have \begin{equation}\label{cvx claim 1} d(x,b)\geq d(x,0). \end{equation} Let~$\lambda\in\mathbb{R}_{>0}$ be sufficiently small so that~$a:=\lambda\cdot b$ satisfies~\eqref{petit a}. We deduce from~\eqref{petit a et negatif} that~$a\in C$, and from~\eqref{cvx extra} that \[ d(x,a)\geq d(x,0). \] Equivalently, denoting by~$(~,~)$ the Euclidean scalar product,~$(a,x)\leq 0$. It follows~$(b,x)=\lambda^{-1}\cdot (a,x)\leq 0$ and, equivalently,~\eqref{cvx claim 1}. \end{proof} Applying Lemma~\ref{lemma cvx 1}, we deduce~\eqref{cvx affine} under the assumption~\eqref{cvx extra}. We now reduce the general case to the first step, by a translation of the origin of~$\mathbb{R}^n$. Let~$x_0\in C$ be such that \[ d(x,C)=d(x,x_0) \] and define \[ \mu'(\lambda)=\lambda(x_0)+\mu(\lambda), \] so that \[ h_{\mu'}(y)=h_\mu(y+x_0) \] and~$C_{\mu'}=C_{\mu}-x_0$. Thus \[ d(x-x_0,C_{\mu'})=d(x-x_0,C_{\mu}-x_0)=d(x,C_{\mu})=d(x,x_0)=d(x-x_0,0). \] From~$x_0\in C_\mu$, we deduce~$h_{\mu'}(0)=h_\mu(x_0)=0$ and~$\forall\sigma\in\Sigma,\mu'(\sigma)\leq 0$. Then~\eqref{cvx affine} for~$x$ follows from the first step applied to~$x-x_0$. \end{proof} Defining \[ \gamma(\Sigma_0)= \frac{\max\set{c'(\Sigma)}{\Sigma\subseteq \Sigma_0}} {\hspace{2.2pt}\min\set{\phantom{{}'}c(\Sigma)}{\Sigma\subseteq \Sigma_0}} \] we deduce the following. \begin{corollary}\label{coro slopes} Let~$\Sigma_0$ be a finite set of linear forms on~$\mathbb{R}^n$.
There exists~$\gamma(\Sigma_0)\in\mathbb{R}_{>0}$ such that for~$\Sigma,\Sigma'\subseteq \Sigma_0$ and~$\mu:\Sigma\to\mathbb{R}_{\leq0}$ and~$\mu':\Sigma'\to\mathbb{R}_{\leq0}$ such that~$C_{\mu}=C_{\mu'}$, we have \[ \forall x\in \mathbb{R}^n, h_{\mu}(x)\leq \gamma(\Sigma_0)\cdot h_{\mu'}(x). \] \end{corollary}
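For a concrete illustration of the constants in~\eqref{cvx c c'} and~\eqref{cvx affine} (an example of ours, not needed in the sequel), take~$n=2$ and~$\Sigma=\{x\mapsto x_1,\ x\mapsto x_2\}$ with~$\mu\equiv 0$. Then~$h_{\Sigma}(x)=\max(0,x_1,x_2)$ and~$C$ is the quadrant~$\{x_1\leq 0,\ x_2\leq 0\}$, whose metric projection is~$x\mapsto(\min(x_1,0),\min(x_2,0))$, so that~$d(x,C)=\norm{(x_1^+,x_2^+)}$ with~$x_i^+:=\max(x_i,0)$. From~$\max(a,b)\leq\sqrt{a^2+b^2}\leq\sqrt{2}\max(a,b)$ for~$a,b\geq 0$ we get \[ \tfrac{1}{\sqrt{2}}\, d(x,C)\ \leq\ h_{\Sigma}(x)\ \leq\ d(x,C) \qquad\text{for all }x\in\mathbb{R}^2, \] so one may take~$c(\Sigma)=1/\sqrt{2}$ and~$c'(\Sigma)=1$.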
\section{Conclusion} We present a simple non-parametric model for clustering short documents (such as tweets) into storylines, which are conceptually coherent and temporally focused. Future work may consider learning more flexible temporal distance functions, which could potentially represent temporal periodicity or parametric models of content popularity. \section{TREC Evaluation} To test the efficacy of this approach, we evaluate on the Twitter Timeline Generation (TTG) task in the Microblog track of TREC 2014. It involves taking tweets based on a query $Q$ at time $T$ and returning a summary that captures relevant information. We perform the task on 55 queries with different timestamps and compare our results with 13 groups that submitted 50 runs for this task in 2014. We consider the following systems: \begin{description} \setlength\parskip{0pt} \setlength\parsep{0pt} \item[Baseline] We replace the distance-dependent prior with a standard Dirichlet prior. The number of clusters is heuristically set to 20. Annealed Gibbs sampling is employed for inference. \item[Offline inference] The dd-CRP model with offline inference procedure (described in \autoref{sec:inference}). \item[Online inference] The dd-CRP model with online inference procedure (described in \autoref{sec:online}). \end{description} For the online inference implementation, we set the size of window and number of iterations to five days and 500 respectively. For the baseline, the parameter of the Dirichlet prior was set to a vector of $0.5$ for each cluster. These values were chosen through 10-fold cross validation. To measure the quality of the clusterings obtained by these models, we compare the average weighted and unweighted F-measures for 55 TREC topics, using the evaluation scripts from the TREC TTG task. Overall results are shown in \autoref{tab:sim}. 
The \textsc{online model} has the best weighted $F_1$ score, outperforming the offline version of the same model, even though its inference procedure is an approximation to the \textsc{offline model}. It may be that its approximate inference procedure discourages long-range linkages, thus placing a greater emphasis on the temporal dimension. Both models were trained over 500 iterations, and the \textsc{online model} was 30\% faster to train than the offline model. Compared to the other 2014 TREC TTG systems, our dd-CRP models are competitive. Both models outperform all but one of the fourteen submissions on the unweighted $F_1$ metric, and would have placed fourth on the weighted $F^{w}_1$ metric. Note that the TREC evaluation scores both clustering quality and retrieval. We use only the baseline retrieval model, which achieved a mean average precision (MAP) of 0.31. The competing systems shown in \autoref{tab:sim} all use retrieval models that are far superior: the retrieval model of the top-ranked PKUICST team (line 4) achieved a MAP of 0.59~\cite{lv2014pkuicst}, and the QCRI~\cite{magdy2014qcri} and hltcoe~\cite{xu2014hltcoe} teams (lines 5 and 6) used retrieval models with MAP scores of at least 0.5. Bayesian dd-CRP storyline clustering was competitive with these timeline generation systems despite employing a far worse retrieval model, so improving the retrieval model to achieve parity with these alternative systems seems the most straightforward path towards better overall performance. \begin{table*} \centering \begin{tabular}{l l l l l l} \toprule Model & Rec. & Rec.$^{w}$ & Prec. & $F_1$ & $F_1^{w}$ \\ \midrule \textit{dd-CRP clustering models} \\ 1. \textsc{baseline} & 0.14 & 0.27 & 0.33 & 0.20 & 0.30 \\ 2. \textsc{offline} & 0.32 & 0.47 & 0.27 & 0.29 & 0.34 \\ 3. \textsc{online} & 0.34 & 0.55 & 0.26 & 0.29 & 0.35 \\[8pt] \textit{Top systems from TREC-2014 TTG}\\ 4.
\texttt{TTGPKUICST2}~\cite{lv2014pkuicst} & 0.37 & 0.58 & 0.46 & 0.35 & 0.46 \\ 5. \texttt{EM50}~\cite{magdy2014qcri} & 0.29 & 0.48 & 0.42 & 0.25 & 0.38 \\ 6. \texttt{hltcoeTTG1}~\cite{xu2014hltcoe} & 0.40 & 0.59 & 0.34 & 0.28 & 0.37 \\ \bottomrule \end{tabular} \caption{Performance of Models in the TREC 2014 TTG Task. Weighted recall and $F_1$ are indicated as Rec.$^{w}$ and $F_1^w$.} \label{tab:sim} \end{table*} \section{Inference} \label{sec:inference} The key sampling equation for the dd-CRP is the posterior likelihood, \begin{align*} \Pr(c_i = j \mid \vec{c}_{-i}, \vw) \propto & \Pr(c_i = j) P(\vw \mid \vec{c}). \end{align*} The prior is defined in Equation~\ref{eq:prior}. Let $\ell$ represent the likelihood under the partitioning induced when the link $c_i$ is cut. Now, the likelihood term has two cases: in the first case, $j$ is already in the same connected component as $i$ (even after cutting the link $c_i$), so no components are merged by setting $c_i = j$. In this case, the likelihood $P(\vw \mid c_i = j)$ is exactly equal to $\ell$. In the second case, setting $c_i = j$ causes two clusters to be merged. This gives the likelihood, \begin{align*} P&(\vw \mid c_i = j, \vec{c}_{-i}) \\ & \propto \frac{P(\{\vw_k : \zc_k = \zc_j \vee \zc_k = \zc_i\})} {P(\{\vw_k : \zc_k = \zc_i\}) P(\{\vw_k : \zc_k = \zc_j\}) }, \label{eq:likelihood} \end{align*} where the constant of proportionality is exactly equal to $\ell$. Each of the terms in the likelihood ratio is a Dirichlet Compound Multinomial likelihood. This likelihood function is itself a ratio of gamma functions; by eliminating constant terms and exploiting the identity $\Gamma(x+1) = x \Gamma(x)$, we can reduce the number of Gamma function evaluations required to compute this ratio to the number of words which appear in \emph{both} clusters $\zc_i$ and $\zc_j$. 
Words that occur in neither cluster can safely be ignored, and the gamma functions for words which occur in exactly one of the two clusters cancel in the numerator and denominator of the ratio. Note also that we only need compute the likelihood for $c_i$ with respect to each cluster, not for every possible follower link. \subsection{Online inference} \label{sec:online} While we make every effort to accelerate the computation of individual Gibbs samples, the complexity of the basic algorithm is superlinear in the number of instances. This is due to the fact that each sample requires computing the probability of instance $i$ joining every possible cluster, while the number of clusters itself grows with the number of instances (this growth is logarithmic in the Chinese Restaurant Process). Scalability to the streaming setting therefore requires more aggressive optimizations. To get back to linear time complexity, we employ a fixed-lag sampling procedure~\cite{doucet2000sequential}. After receiving instance $i$, we perform Gibbs sampling only within the fixed window $[t_i - \tau, t_i]$, leaving $c_j$ fixed if $t_j < t_i - \tau$. This approximate sampling procedure implicitly changes the underlying model, because there is no possibility of linking $i$ to a later message $j$ if the time gap $t_j - t_i > \tau$. Since we are only interested in obtaining a single storyline clustering --- rather than a full Bayesian distribution over clusterings --- we perform annealing for samples towards the end of the sampling window. Specifically, we set the temperature to $\gamma = 2.0$ and exponentiate the sampling likelihood by the inverse temperature~\cite{geman1984stochastic}. This has the effect of interpolating between probabilistically-correct Gibbs sampling and a hard coordinate-ascent procedure. 
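To make the fixed-lag procedure concrete, the following Python sketch (our illustration, not the system's actual code) resamples follower links only inside the window $[t_i - \tau, t_i]$; the function \texttt{link\_logprobs} is a hypothetical stand-in for the unnormalized log posterior over candidate links, and \texttt{inv\_temp} implements the annealing by exponentiating the sampling distribution:

```python
import math
import random

def fixed_lag_sweep(times, links, i, tau, link_logprobs, inv_temp=1.0, rng=None):
    """One fixed-lag Gibbs sweep after observing document i: resample the
    follower links c_j only for documents j whose timestamp lies in the
    window [t_i - tau, t_i]; links of earlier documents stay fixed."""
    rng = rng or random.Random(0)
    window = [j for j in range(i + 1) if times[i] - tau <= times[j] <= times[i]]
    for j in window:
        cands, logps = link_logprobs(j, links)
        m = max(logps)
        # Annealing: exponentiate the sampling distribution by the inverse
        # temperature; inv_temp = 1 is standard Gibbs sampling, larger values
        # sharpen the distribution toward hard coordinate ascent.
        weights = [math.exp(inv_temp * (lp - m)) for lp in logps]
        links[j] = rng.choices(cands, weights=weights)[0]
    return links
```

In a full implementation, `link_logprobs` would combine the dd-CRP prior with the likelihood ratio of the induced clusterings; we leave it abstract here.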
\subsection{Hyperparameter estimation} \label{sec:hyper} The model has three parameters to estimate: \begin{itemize} \setlength\parskip{0pt} \setlength\parsep{0pt} \setlength\itemsep{0pt} \item $\alpha$, the concentration parameter of the dd-CRP \item $a$, the offset of the distance function \item $\eta$, the scale of the symmetric Dirichlet prior. \end{itemize} We interleave maximization-based updates to these parameters with sampling, in a procedure inspired by Monte Carlo Expectation Maximization~\cite{wei1990monte}. Specifically, we compute gradients on the likelihood $P(\vc)$ with respect to $\alpha$ and $a$, and take gradient steps after every fixed number of samples. For the symmetric Dirichlet parameter $\eta$, we employ the heuristic from~\newcite{minka2000estimating} by setting the parameter to $\eta = \frac{(K-1)/2}{\sum_k\log p_k}$, where $K$ is the number of words that appear exactly once, and $p_k$ is the probability of choosing the $k^{th}$ word from the vocabulary under the unigram distribution for the entire corpus. \section{Introduction} A long-standing goal for information retrieval and extraction is to identify and group textual references to ongoing events in the world~\cite{allan2002topic}. Success on this task would have applications in personalized news portals~\cite{gabrilovich2004newsjunkie}, intelligence analysis, disaster relief~\cite{vieweg2010microblogging}, and in understanding the properties of the news cycle~\cite{leskovec2009meme}. This task attains a new importance in the era of social media, where citizen journalists can document events as they unfold~\cite{lotan2011arab}, but where repetition and untrustworthy information can make the reader's task especially challenging~\cite{becker2011beyond,marcus2011twitinfo,petrovic2010streaming}. A major technical challenge is in fusing information from two heterogeneous data sources: textual content and time. 
Two different documents about a single event might use very different vocabulary, particularly in sparse social media data such as microblogs; conversely, two different sporting events might be described in nearly identical language, with differences only in the numerical outcome. Temporal information is therefore critical: in the first case, to find the commonalities across disparate writing styles, and in the second case, to identify the differences. A further challenge is that unlike in standard document clustering tasks, the number of events in a data stream is typically unknown in advance. Finally, there is a high premium on scalability, since online text is produced at a high rate. Due to these challenges, existing approaches for combining these modalities have been somewhat heuristic, relying on tunable parameters to control the tradeoff between textual and temporal similarity. In contrast, the Bayesian setting provides elegant formalisms for reasoning about latent structures (e.g., events) and their stochastically-generated realizations across text and time. In this paper, we describe one such model, based on the distance-dependent Chinese Restaurant Process (dd-CRP; Blei and Frazier, 2011)\nocite{blei2011distance}. This model is distinguished by the neat separation that it draws between textual content, which is treated as a stochastic emission from an unknown Multinomial distribution, and time, which is modeled as a prior on graphs over documents, through an arbitrary \emph{distance function}. However, straightforward implementations of the dd-CRP are insufficiently scalable, and so the model has been relatively underutilized in the NLP literature~\cite{titov2011bayesian,kim2011accounting,sirts2014pos}. We describe improvements to Bayesian inference that make the application of this model feasible, and present encouraging empirical results on the Tweet Timeline Generation task from TREC 2014~\cite{lin2014overview}. 
\section{Model} The basic task that we address is to group short text documents into an unknown number of storylines, based on their textual content and their temporal signature. The textual content may be extremely sparse --- the typical Tweet is on the order of ten words long --- so leveraging temporal information is crucial. Moreover, the temporal signal is multiscale: in the 24-hour news cycle, some storylines last for less than an hour, while others, like the disappearance of the Malaysian Airlines 370 plane in 2014, continue for weeks or months. In some cases, the temporal distribution of references to a storyline will be unimodal and well-described by a parametric model~\cite{marcus2011twitinfo}; in other cases, it may be irregular, with bursts of activity followed by periods of silence~\cite{he2007analyzing}. Finally, it will be crucial to produce an implementation that scales to large corpora. The distance-dependent Chinese Restaurant Process (dd-CRP) meets many of these criteria~\cite{blei2011distance}. In this model, the key idea is that each instance (document) $i$ ``follows'' another instance $c_i$ (where it is possible that $c_i = i$), inducing a graph. We can compute a partitioning over instances by considering the connected components in the undirected version of the follower graph; these partitions correspond to ``tables'' in the conventional ``Chinese Restaurant'' analogy~\cite{aldous1985exchangeability}, or to clusters. The advantage of this approach is that it is fundamentally non-parametric, and it introduces a clean separation between the textual data and the covariates: the text is generated by a distribution associated with the partition, while the covariates are associated with the following links, which are conditioned on a distance function. 
The distribution over follower links for document $i$ has the following form, \begin{equation} \Pr(c_i = j) \propto \begin{cases} f(d_{i,j}), & i \neq j\\ \alpha, & i = j,\\ \end{cases} \label{eq:prior} \end{equation} where $d_{i,j}$ is the distance between units $i$ and $j$, and $\alpha > 0$ is a parameter of the model. Large values of $\alpha$ induce more self-links and therefore more fine-grained partitionings. Since we are concerned with temporal covariates, we define the distance function as follows: \begin{equation} f(d_{i,j}) = e^{\frac{-|t_i - t_j|}{a}}. \end{equation} Thus, the likelihood of document $i$ following document $j$ decreases exponentially as the time gap $|t_i - t_j|$ increases. The text of each document $i$ is represented by a vector of word counts $\vw_i$. The likelihood distribution is multinomial, conditioned on a parameter $\theta$ associated with the partition to which document $i$ belongs. By placing a Dirichlet prior on $\theta$, we can analytically integrate it out. Writing $\vzc$ for the cluster membership induced by the follower graph $\vc$, we have: \begin{small} \begin{align} P(\vw \mid \vc; \eta) = & \prod_k P(\{ \vw_i : \zci = k \}; \eta)\\ = &\prod_k \int_\theta P(\{ \vw_i : \zci = k \} \mid \theta) P(\theta ; \eta) d\theta \label{eq:likelihood} \end{align} \end{small} Given a multinomial likelihood $P(\vw \mid \theta)$ and a (symmetric) Dirichlet prior $P(\theta \mid \eta)$, this integral has a closed-form solution as the Dirichlet-Multinomial distribution (also known as the multivariate Polya distribution). The joint probability is therefore equal to the product of \autoref{eq:prior} and \autoref{eq:likelihood}, \begin{align} P(\vw, \vc) = \prod_i P(c_i ; \alpha, a) \prod_k P(\{\vw_i : \zci = k\}; \eta). 
\label{eq:joint} \end{align} The model has three hyperparameters: $\alpha$, which controls the likelihood of self-linking, and therefore affects the number of clusters; $a$, which controls the time scale of the distance function, and therefore affects the importance of the temporal dimension to the resulting clusters; and $\eta$, which controls the precision of the Dirichlet prior, and therefore the importance of rare words in the textual likelihood function. Estimation of these hyperparameters is described in \autoref{sec:hyper}. \section{Related work} Topic tracking and first-story detection are very well-studied tasks; space does not permit a complete analysis of the related work, but see~\cite{allan2002topic} for a summary of ``first generation'' research. More recent non-Bayesian approaches have focused on string overlap~\cite{suen2013nifty}, submodular optimization~\cite{shahaf2012trains}, and locality-sensitive hashing~\cite{petrovic2010streaming}. In Bayesian storyline analysis, the seminal models are Topics-Over-Time~\cite{wang2006topics}, which associates a parametric distribution over time with each topic~\cite{ihler2006adaptive}, and the Dynamic Topic Model~\cite{blei2006dynamic}, which models topic evolution as a linear dynamical system~\cite{nallapati2007multiscale}. Later work by \newcite{diao2012finding} offers a model for identifying ``bursty'' topics, with inference requiring dynamic programming. All these approaches require the number of topics to be identified in advance. \newcite{kim2011accounting} apply a distance-dependent Chinese Restaurant \emph{Franchise} for temporal topic modeling; they evaluate using predictive likelihood rather than comparing against ground truth, and do not consider online inference. The Infinite Topic-Cluster model~\cite{ahmed2011unified} is non-parametric over the number of storylines, through the use of the recurrent Chinese Restaurant Process (rCRP). The model is substantially more complex than our approach. 
Unlike the dd-CRP, the rCRP is Markovian in nature, so that the topic distribution at each point in time is conditioned on the previous epoch (or, at best, the previous $K$ epochs, with complexity of inference increasing with $K$). This Markovian assumption creates probabilistic dependencies between the topic assignment for a given document and the documents that follow in subsequent epochs, necessitating an inference procedure that combines sequential Monte Carlo and Metropolis Hastings, and a custom data structure; this inference procedure was complex enough to warrant a companion paper~\cite{ahmed2011online}. The rCRP is also employed by Diao and Jiang (2013, 2014)\nocite{diao2013unified,diao2014recurrent}. In contrast, the dd-CRP makes no Markovian assumptions, and efficient inference is possible through relatively straightforward Gibbs sampling in a fixed window.
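As a minimal illustration of the generative story (a sketch of ours, not the system's implementation), the temporal dd-CRP prior of \autoref{eq:prior} and the reduction from follower links to clusters can be written as:

```python
import math
import random

def ddcrp_prior_links(times, alpha, a, seed=0):
    """Sample follower links from the temporal dd-CRP prior:
    P(c_i = j) is proportional to exp(-|t_i - t_j| / a) for j != i,
    and to alpha for the self-link j = i."""
    rng = random.Random(seed)
    links = []
    for i, t_i in enumerate(times):
        weights = [math.exp(-abs(t_i - t_j) / a) for t_j in times]
        weights[i] = alpha  # self-link mass; large alpha -> finer partitions
        links.append(rng.choices(range(len(times)), weights=weights)[0])
    return links

def clusters_from_links(links):
    """Clusters ('tables') are the connected components of the undirected
    follower graph, computed here with union-find."""
    parent = list(range(len(links)))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    for i, j in enumerate(links):
        parent[find(i)] = find(j)
    return [find(i) for i in range(len(links))]
```

Each cluster would then emit its text from a Dirichlet-multinomial; inference inverts this generative process.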
\section{Methodology for Continual Learning} \label{sec:cl} \subsection{Definition of Continual Learning} In continual learning \cite{schlimmer1986case, sutton1993online, ring1997child}, the goal is to train a model on a sequence of datasets $\{\mathcal D_1, \mathcal D_2, \dots, \mathcal D_T\}$, where each dataset corresponds to a (new) \textit{task}. According to the standard continual learning setup, when training the model for task $t$, the data of past tasks and future tasks are not available. That is, when training for task $t$, we are only allowed to use the dataset $\mathcal D_t$. The objective is to learn a single model which is able to predict well on data from all tasks $1, \dots, T$, despite training in a sequential manner. This is challenging in neural network models as training on the current task without incorporating data from earlier tasks typically results in forgetting the existing knowledge. This phenomenon is referred to as \emph{Catastrophic Forgetting} \cite{Thrun1995LifelongRL,french}. Namely, when training for task $t$, the model forgets the knowledge related to tasks with index $<t$, if no measures are taken to mitigate forgetting. In the following two subsections, we describe two strategies to combat catastrophic forgetting.\footnote{ In addition to avoiding catastrophic forgetting, another goal of continual learning is to improve/speed-up learning on future and past tasks. This is referred to as \emph{forward transfer} and \emph{positive backward transfer} \cite{LopezPaz2017GradientEM}.
This is a very interesting research direction for continual learning, but in this paper we focus more on combating catastrophic forgetting.} \subsection{Naive Rehearsal} \label{sec:reh} A simple method to combat catastrophic forgetting is to keep a buffer of random samples to remember the past tasks. The buffer contains examples from earlier tasks to reinforce the knowledge from earlier tasks, when training on the current task. This method is referred to as naive rehearsal, or simply \emph{rehearsal}, the term we use in the rest of this paper. Although simple, this method is surprisingly effective, and has been shown to perform very comparably to state-of-the-art continual learning methods on various standard continual learning experiments \cite{contlearn_scenarios}. For this reason we use rehearsal as a baseline method. When training for task $t$, we keep a buffer $\mathcal M = \bigcup_{k=1}^{t-1} \mathcal M_k$, where $\mathcal M_k$ contains randomly selected examples from task $k$, such that $k \leq t-1$. The cost function associated with rehearsal is: \begin{align} \mathcal L^t_\text{naivereplay} = \frac{1}{|\mathcal D_t|}\sum_{(x, y) \in \mathcal D_t} &\text{cost}(y, f_\theta(x)) + \notag \\ & {\color{black} \sum_{k=1}^{t-1} \frac{1}{|\mathcal M_k|} \sum_{(x', y') \in \mathcal M_k } \text{cost}(y', f_\theta(x')),} \end{align} where the first term accounts for the loss on the current task (current loss), and the second term accounts for the rehearsal loss. The input features are denoted with $x$, the target values are denoted with $y$, the continually trained classifier is denoted with $f_\theta(.)$, and $\text{cost}(.)$ denotes a classification loss, which is typically chosen as the cross-entropy loss. In Figure \ref{fig:basic_rh} we illustrate the schematics of the loss function. \begin{figure}[h!]
\begin{center} \begin{tikzpicture}[node distance=2cm,auto,>=latex'] \node (2) [draw=black, solid, node distance=1cm]{$f^t$}; \node (1) [left of=2]{$\mathcal X^t$}; \node (3) [right of=2]{$\hat{\mathcal Y}^t$}; \node (4) [right of=3]{current loss}; \node (a) [below of=1, node distance=1cm]{$\mathcal X^{1:t-1}_\text{buffer}$}; \node (b) [below of=3, node distance=1cm]{$\hat{\mathcal Y}^{1:t-1}$}; \node (c) [right of=b]{rh. loss}; \draw[->] (1) edge (2); \draw[->] (2) edge (3); \draw[->] (3) edge node {$\mathcal Y^t$} (4); \draw[->] (b) edge node {$\mathcal Y^{1:t-1}_\text{buffer}$} (c); \draw[->] (a) edge (2); \draw[->] (2) edge (b); \end{tikzpicture} \end{center} \caption{The diagram for the loss computation in the naive rehearsal method at task $t$. We compute two separate losses: a current loss on the current task's data, and a rehearsal (rh.) loss on the stored buffer. } \label{fig:basic_rh} \end{figure} Even though rehearsal combats forgetting, it requires the storage of data in the form of a rehearsal buffer. In the next subsection, we introduce another method, which mitigates forgetting by continually learning a generative model and hence does not require storing past data items. \subsection{Generative Replay} An effective alternative to rehearsal is generative replay \cite{Shin2017}. This method continually trains a generative model in addition to the classifier to \emph{replay} the data from earlier tasks. By virtue of having a generative model, in lieu of storing examples from earlier tasks, we generate data and use this generated data as training data to avoid forgetting.
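To make this workflow concrete, one task round of generative replay can be outlined as below. This is an illustrative sketch with hypothetical helper names (\texttt{train\_clf}, \texttt{train\_gen}, \texttt{clf\_prev}, \texttt{gen\_prev}), not the implementation used in our experiments; as in the loss functions that follow, targets for the replayed inputs come from the frozen previous classifier.

```python
def generative_replay_round(train_clf, train_gen, clf_prev, gen_prev,
                            current_data, n_replay=100):
    """One task of generative replay (illustrative sketch).

    clf_prev / gen_prev: classifier and generator after task t-1 (frozen).
    train_clf / train_gen: training routines returning updated models.
    current_data: list of (input, target) pairs for the current task.
    """
    # Simulate data standing in for tasks 1..t-1 from the frozen generator.
    replay_x = [gen_prev() for _ in range(n_replay)] if gen_prev else []
    # Targets for replayed inputs are produced by the previous classifier.
    replay = [(x, clf_prev(x)) for x in replay_x]
    # Train the classifier on current data plus replayed (input, target) pairs.
    clf = train_clf(current_data + replay)
    # Train the generator on current inputs plus replayed inputs.
    gen = train_gen([x for x, _ in current_data] + replay_x)
    return clf, gen
```

The same loop is repeated for every task, with the freshly trained classifier and generator becoming the frozen models of the next round.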
The cost function for continual classifier training is therefore written as follows: \begin{align} {\mathcal L}^t_\text{genreplay} = \sum_{(x, y) \in \mathcal D_t} \text{cost}( y, f^t(x) ) + \sum_{x_g \in \mathcal D_g} \text{cost}(f^{t-1}(x_g), f^t(x_g)), \label{eq:genreplay} \end{align} where the first term is the loss associated with the current task, and the second term is the rehearsal loss. Here, $\mathcal D_g$ is data simulated from the generative model after it has been trained up to task $t-1$, and it is used to rehearse the datasets $\{ \mathcal D_1, \dots, \mathcal D_{t-1}\}$. The schematic illustration of this loss function is shown in Figure \ref{fig:genreplay_classifier}. Similarly, the generator $G^t$ is trained using the examples from the current dataset $\mathcal D_t$ and the simulated examples from the generator $G^{t-1}$: \begin{align} {\mathcal L}^t_{\text{gen}} = \sum_{x \in \mathcal D_t} \text{gencost}( x ) + \sum_{x_g \in \mathcal D_g} \text{gencost}( x_g ), \label{eq:contgen} \end{align} where again the loss function is composed of the current loss term (the first term) and the rehearsal loss (the second term). We illustrate the workflow of the method in Figure \ref{fig:genreplay_gen}. \label{sec:genrep} \begin{figure} [h!]
\centering \begin{tikzpicture}[node distance=2cm,auto,>=latex'] \node [draw=black, dashed] (a) {$G^{t-1}$}; \node (b) [right of=a] {$\mathcal X^{1:{t-1}}_\text{replay}$}; \node (c) [draw=black, solid, right of=b] {$G^t$}; \node (d) [right of=c] {replay loss}; \node (e) [below of=b, node distance=1cm] {$\mathcal X^t$}; \node (f) [below of=d, node distance=1cm] {current loss}; \draw[->] (a) edge (b); \draw[->] (b) edge (c); \draw[->] (c) edge (d); \draw[->] (e) edge (c); \draw[->] (c) edge (f); \end{tikzpicture} \caption{Diagram for continually training a generative model using generative replay: At task $t$, the data is \emph{replayed} from the generative model $G^{t-1}$, and its likelihood is evaluated on the generative model $G^t$ that we currently train. Dashed blocks indicate that the parameters are frozen (not updated), while solid blocks indicate that the parameters are optimized. } \label{fig:genreplay_gen} \end{figure} \begin{figure}[h!] \centering \begin{tikzpicture}[node distance=2cm,auto,>=latex'] \node (a) [] {$\mathcal X^{1:t-1}_\text{replay}$}; \node (b) [draw=black, dashed, right of=a] {$f^{t-1}$}; \node (c) [right of=b] {$\mathcal Y^{1:t-1}_\text{target}$}; \node (d) [right of=c] {replay loss}; \node (e) [draw=black, solid, below of=b, node distance=1cm] {$f^t$}; \node (f) [below of=c, node distance=1cm] {$\hat{\mathcal Y}^{1:t-1}$}; \node [draw=black, dashed, below of=a, node distance=1cm] (h) {$G^{t-1}$}; \node (2) [below of=e, node distance=1cm]{}; \node (1) [left of=2]{$\mathcal X^t$}; \node (3) [right of=2]{$\hat{\mathcal Y}^t$}; \node (4) [right of=3]{current loss}; \draw[->] (h) edge (a); \draw[->] (a) edge (b); \draw[->] (b) edge (c); \draw[->] (c) edge (d); \draw[->] (a) edge (e); \draw[->] (e) edge (f); \draw[->] (f) edge (d); \draw[->] (1) edge (e); \draw[->] (e) edge (3); \draw[->] (3) edge node {$\mathcal Y^t$} (4); \end{tikzpicture} \caption{Training the classifier using the generative replay at task $t$: The data for the
earlier tasks is generated from $G^{t-1}$. The outputs of the current classifier $f^t$ and the earlier classifier $f^{t-1}$ are matched to compute a replay loss.} \label{fig:genreplay_classifier} \end{figure} Note that in our application the generator $G$ generates spectrogram segments, since our goal is to classify segments of audio data. Next, we describe the details of the architecture of the generator $G$. \subsection{The Generative Model Architecture} \label{sec:2-step-lrn} In this paper, we use maximum-likelihood based generative modeling as opposed to Generative Adversarial Networks (GANs) \cite{NIPS2014_5423}, as the former is significantly easier to train \cite{Lucic2018}. In our generative models, we use a convolutional autoencoder to compute embeddings for spectrogram sequences. The architecture of our autoencoder is shown in Figure \ref{fig:AE_arc}; it uses convolutional layers across the time axis to model the temporal structure, and then reduces and re-expands the feature dimensionality using fully connected layers. After learning the embeddings $h$, we learn the generative model by fitting a Gaussian mixture model (GMM) on the latent embeddings, as described in the 2-step learning method in \cite{subakan2018}. The advantages of using GMMs in the latent space are advocated by multiple papers in the literature \cite{hoffman2016elbosurgery, subakan2018, Jiang2016, Dilokthanakul2016, Tomczak2017}. In our experiments we have observed that separating the learning of the parameters of the prior distribution on the latent variables from the learning of the autoencoder resulted in a more accurately learned generative model (which we refer to as 2-step training). We have also observed that jointly training the GMM and the autoencoder often yielded slightly worse results than the 2-step approach, and we have therefore chosen 2-step training over jointly training the prior and the autoencoder.
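A minimal numeric illustration of the 2-step recipe follows: first train the autoencoder, then fit a mixture model to the latent codes, and sample by decoding draws from the mixture. The scalar ``autoencoder'' and the crude equal-split mixture fit below are stand-ins for our convolutional autoencoder and a proper EM-fitted GMM; all names are illustrative.

```python
import random
import statistics

def fit_gmm_1d(latents, k=2):
    # Crude 1-D "GMM" fit: split the sorted latents into k equal groups and
    # use each group's mean/stdev as a Gaussian component (uniform weights).
    # A stand-in for a proper EM fit; illustration only.
    latents = sorted(latents)
    size = len(latents) // k
    comps = []
    for i in range(k):
        chunk = latents[i * size:(i + 1) * size]
        comps.append((statistics.mean(chunk), statistics.pstdev(chunk)))
    return comps

def sample_generative_model(comps, decode):
    # Step-2 sampling: pick a mixture component, draw a latent h, decode it.
    mu, sigma = random.choice(comps)
    h = random.gauss(mu, sigma)
    return decode(h)

# Step 1 (stand-in): an "autoencoder" that maps data to latents by scaling.
encode = lambda x: x / 10.0
decode = lambda h: h * 10.0

data = [1.0, 1.2, 0.8, 9.0, 9.2, 8.8]            # two well-separated modes
comps = fit_gmm_1d([encode(x) for x in data])    # step 2: fit GMM on latents
```

Samples drawn via `sample_generative_model(comps, decode)` then land near one of the two data modes, mirroring how decoding GMM samples replays the training distribution.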
We also compare the proposed generative modeling scheme with VAEs with a standard Gaussian prior \cite{Kingma2013}, and observe that the proposed scheme yields considerably better generations, which in turn result in better classification. \begin{figure}[h!] \centering \includegraphics[width=0.44\textwidth]{assets/replay_generator.pdf} \caption{The autoencoder architecture used to model the spectra. The convolutional encoder maps the spectra into the latent space $h$, which is then transformed by the decoder into a reconstruction. We apply ReLU after each of the first two convolutional layers in both the encoder and the decoder.} \label{fig:AE_arc} \end{figure} \section{Conclusion} \label{sec:conc} We showed that generative replay is an effective continual learning method for audio classification tasks. Using a generative model whose size is less than $4$\% of the size of the training data, we obtain a test accuracy comparable to a buffer-based rehearsal scheme which needs to store $20$\% of all used training data. These results highlight the potential of using generative models instead of keeping previously seen training data when there are storage constraints. We see these aspects as crucial for sound recognition systems which cannot afford to keep prior training data, but often need (to learn) to perform new tasks on the fly. \section{Experimental Setup} \label{sec:exp} In this section we introduce our continual learning setup for audio classification. The experiments simulate scenarios where the model incrementally learns new sound classes without having full access to the previously-encountered sound classes. The model observes ten sound classes in a sequence of five tasks, where in each task two new classes are presented. This is similar to the setup in~\cite{closed-loop-gan}. \subsection{Data} \label{ssec:data} We select the publicly available ESC-10 \cite{esc50} dataset for our experiments.
The ESC-10 dataset consists of $400$ five-second recordings, sampled at $44$ kHz, of acoustic events from $10$ classes, namely: \textit{chainsaw}, \textit{clock ticking}, \textit{crackling fire}, \textit{crying baby}, \textit{dog barking}, \textit{helicopter}, \textit{rain}, \textit{rooster}, \textit{seawaves}, and \textit{sneezing}. For each recording we extract a Time-Frequency (TF) spectrogram representation using a $2048$-sample window and a $512$-sample hop size. Next, we compute the square root of the mel-scaled spectrogram using $128$ mel-features for each spectrogram. We further segment our data into snippets that correspond to $\approx 220$ ms, so that each input data sample has a size of $128 \times 16$. We ignore low-energy spectra whose Frobenius norm is less than 1e-4. Finally, we normalize each spectrogram by the maximum energy of the corresponding mel-spectrogram, so that each value lies in $[0,1]$. Our initial experimental results demonstrate that normalized mel-spectrograms are more discriminative under the chosen classifier architecture and can be more easily reconstructed by the generator. In total there are $9500$ mel-spectrograms, which we further split into training, validation, and test sets with a ratio of $7:2:1$. To set up the experiment in the continual learning setting, we partition the dataset into five subsets/tasks whose classes are mutually exclusive. We group the classes based on their label indices, so that the two sound classes within a group are more similar to each other than to the classes from the other groups. \subsection{Generative Replay Setup} \label{ssec:gr_setup} We next discuss the setup for generative replay, including the architectures of the classifier and the generator. \subsubsection{Classifier Architecture} \label{sssec:clf_arch} The classifier contains two 1-D convolutional layers with $64$ and $128$ filters, respectively, one average pooling layer, and two fully-connected layers with $50$ and $10$ hidden nodes, respectively.
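The segmentation and normalization steps described in the data preparation above can be sketched as follows. This is a simplified stand-in operating on a plain nested-list spectrogram; the function name is hypothetical, and the per-snippet scaling is one reading of the per-spectrogram normalization described in the text.

```python
import math

def preprocess(mel, seg_len=16, energy_floor=1e-4):
    """Split a square-rooted mel-spectrogram (a list of 128 mel rows, each a
    list of frames) into 128 x seg_len snippets, drop near-silent snippets,
    and scale each remaining snippet to [0, 1] by its own maximum value."""
    n_frames = len(mel[0])
    snippets = []
    for start in range(0, n_frames - seg_len + 1, seg_len):
        seg = [row[start:start + seg_len] for row in mel]
        # Drop low-energy spectra (Frobenius norm below the floor).
        fro = math.sqrt(sum(v * v for row in seg for v in row))
        if fro < energy_floor:
            continue
        # Normalize by the maximum value so all entries lie in [0, 1].
        peak = max(v for row in seg for v in row)
        snippets.append([[v / peak for v in row] for row in seg])
    return snippets
```

A real pipeline would first compute the mel-spectrogram itself (e.g., with an audio library); only the snippet logic is sketched here.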
For the convolutional layers, we use a filter of length $3$ and perform same-padding on the input. We use a rectified linear unit (ReLU) as the nonlinearity after each convolutional layer and after the first fully-connected layer. The output of the second fully-connected layer is passed into a softmax layer for $10$-class classification. \subsubsection{Generator Architecture} \label{sssec:gen_arch} We experiment with both the autoencoder and the variational autoencoder architectures as the generator. The encoder consists of three 1-D convolutional layers followed by a fully-connected layer with $50$ hidden units. Each of the convolutional layers uses $128$ filters, with lengths $6$, $4$, and $3$ and strides of $1$, $2$, and $2$, respectively. The decoder consists of three 1-D transposed convolutional layers, each with $128$ filters, with lengths $4$, $4$, and $7$ and strides of $2$, $2$, and $1$, respectively. We do not perform zero-padding, and we apply ReLU after each of the first two convolutional layers in both the encoder and the decoder, as shown in Figure \ref{fig:AE_arc}. The variational autoencoder architecture contains an additional $50$-dimensional linear layer on top of the convolutional encoder, with the reparameterization trick enabling sampling from the latent space. \subsection{Rehearsal Setup} \label{ssec:rhs} We compare the proposed generative replay mechanism with rehearsal based methods. We set up the rehearsal data by storing $p\%$ of the training data of each task in a buffer. This buffer is available to the models throughout all tasks. In our setting, the size of the buffer increases linearly with the number of tasks. We adjust the proportions of the rehearsal data such that the data from each task have equal probability of being drawn. The rehearsal percentage is chosen from $p \in \{5, 10, 20, 100\}$. \subsection{Training Setup} \label{ssec:training} For all experiments, we optimize our models using Adam \cite{adam}.
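As a sanity check on the generator dimensions above, standard length arithmetic for unpadded (transposed) convolutions confirms that the decoder exactly restores the 16-frame input; only the temporal lengths are tracked here, with filter counts omitted.

```python
def conv_out(length, kernel, stride):
    # Output length of an unpadded 1-D convolution.
    return (length - kernel) // stride + 1

def tconv_out(length, kernel, stride):
    # Output length of an unpadded 1-D transposed convolution.
    return (length - 1) * stride + kernel

enc_len = 16                                   # 16-frame input snippet
for k, s in [(6, 1), (4, 2), (3, 2)]:          # encoder kernel lengths/strides
    enc_len = conv_out(enc_len, k, s)          # 16 -> 11 -> 4 -> 1

dec_len = enc_len
for k, s in [(4, 2), (4, 2), (7, 1)]:          # decoder kernel lengths/strides
    dec_len = tconv_out(dec_len, k, s)         # 1 -> 4 -> 10 -> 16
```

The encoder thus compresses each snippet to a single time step before the fully-connected layer, and the decoder recovers all 16 frames.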
The batch size is set to $100$, and there are $10$--$15$ batches per epoch for each task. To train the classifier, we use an initial learning rate of 5e-4 and train it for 300 epochs by minimizing the cross-entropy loss for each task. To train the generator, we use an initial learning rate of 1e-3 and train it for 1700 epochs for each task. The autoencoder loss is the binary cross-entropy between the original spectrogram and the reconstruction, computed for each time-frequency bin. The loss for the variational autoencoder is the sum of the binary cross-entropy and the KL-divergence between the modeled distribution and a unit Gaussian. \section{Introduction} \label{sec:intro} The standard supervised machine learning setup posits that the full training dataset is available to the model at once. This is a simplistic assumption. In the wild, the training data may arrive in (non-iid) batches, and new classes may appear throughout the learning process. This is typical of human learning, where new concepts (classes) are learned throughout life. \emph{Continual learning} proposes a more realistic sequential learning paradigm composed of training episodes \cite{schlimmer1986case, sutton1993online, ring1997child}. At each episode, the model is only trained on data from a single new task and does not have access to data from earlier tasks. Continual learning is also useful for devices with constrained access to data (either due to storage limitations or privacy constraints). In such cases, classifiers need to be continually trained to learn new classes while minimizing storage, which limits the amount of possible retraining on previous tasks. Continual learning is particularly challenging for neural networks because of \emph{catastrophic forgetting}: at each episode the network will ``forget'' the knowledge it has learned in earlier tasks \cite{Thrun1995LifelongRL,french}.
While a flurry of methods have been recently proposed for continual learning \cite{LopezPaz2017GradientEM, rebuffi2017icarl, Shin2017, li2018learning, parisi2018continual}, much work remains before continual learning becomes a practical technique. In this paper, we explore a continual learning setup for training a classifier on environmental sound classes. This is a challenging task because it necessitates learning a classifier on time-series data, as opposed to typical applications in the continual learning literature that focus on static data (e.g., images) \cite{rebuffi2017icarl}. To alleviate catastrophic forgetting, we utilize the generative replay technique \cite{Shin2017}, which provides very competitive continually-learned classifiers. A generator is trained simultaneously with the classifier, and for each task the generator is used to simulate earlier-task examples for the classifier. Further, we propose a convolutional autoencoder architecture to embed time-series data, and we make use of the two-step learning framework introduced in \cite{subakan2018} to learn the generative model that replays earlier tasks. We experiment with the ESC-10 (Environmental Sound Classification) dataset \cite{esc50}. Namely, we compare our proposed generative replay based method with \emph{rehearsal}, which consists of storing a fixed percentage of the data associated with earlier tasks to combat forgetting. The stored data is used as training data in each of the subsequent episodes. This method has been shown to be a very strong baseline \cite{contlearn_scenarios}. We show that by using a generative model with size approximately equal to 4\% of the whole training set, we are able to match the classification accuracy obtained with a rehearsal method which stores 20\% of the training dataset. \section{Results and Discussions} \label{sec:res} We report the performance of various replay strategies under the sound classification setup.
For each experiment we report the performance obtained by the models using five different permutations of the order of the tasks. In each task, we report the mean accuracy on the test set, which contains all sound classes that the model has seen up to the current task. \begin{figure}[htb!] \centering \includegraphics[width=0.48\textwidth]{assets/full.pdf} \caption{Test accuracy on the ESC-10 dataset using generative replay and rehearsal methods with various buffer sizes. The x-axis denotes the task index and the y-axis the accuracy of the model's predictions. Each point represents the mean accuracy over five runs using different permutations of the tasks.} \label{fig:acc} \end{figure} \subsection{Overall Results} Figure~\ref{fig:acc} shows the test accuracy of different generative replay strategies and rehearsal methods for various buffer sizes. ``AE+GMM'' refers to the proposed generative replay setting with an autoencoder and a Gaussian mixture learned in two steps, as described in Section~\ref{sec:2-step-lrn}. ``VAE'' corresponds to the variational autoencoder mentioned in Section~\ref{sssec:gen_arch}. ``RHS X\%'' denotes a rehearsal based method with X\% of the training data stored in the buffer. ``RHS $100$\%'' serves as an upper-bound estimate of the performance of any replay strategy, since it corresponds to the ideal case where all the training data is available at all future stages. Overall, RHS $100$\% attains the highest mean accuracy, consistent with its role as an upper bound for any replay strategy. The performance of the rehearsal methods increases as the proportion of the data stored in the buffer increases. We also notice that for all methods the variance of the test accuracy tends to decrease as the number of tasks increases. Initially, the variance is large because randomly drawn binary classification tasks can deviate considerably from one another in difficulty.
However, towards the end, the models have seen all sound classes regardless of the permutation, and therefore the mean accuracy tends to stabilize. \subsection{Comparison Between AE+GMM and VAE} AE+GMM significantly outperforms VAE as a replay strategy. The mean accuracy of AE+GMM is similar to that of RHS 20\%, while VAE performs significantly worse than RHS 5\%. We analyze this notable difference by looking at the samples generated by both models, as illustrated in Figure~\ref{fig:gen_samples}. We show three examples from the training set and the respective generations using AE+GMM and VAE. Note that VAE smooths out the temporal structure of the generated mel-spectrograms and lacks diversity between classes. On the other hand, AE+GMM generates mel-spectrograms with much more diverse temporal structure, exhibiting much closer resemblance to the examples from the training set. \begin{figure}[htb!] \centering \includegraphics[width=0.48\textwidth]{assets/all_samples.pdf} \caption{Mel-spectrograms from the training set (column a), generated by AE+GMM (column b), and generated by a VAE (column c). Notice how the VAE generated data do not reproduce salient class features, whereas the proposed AE+GMM generator does so better. } \label{fig:gen_samples} \end{figure} \subsection{Comparison Between AE+GMM and Rehearsal Based Methods} We observe that AE+GMM performs significantly better than rehearsal schemes with buffer proportions $p = 5\%, 10\%$. The accuracy of AE+GMM is almost identical to RHS 20\% at the last task and marginally higher in all previous tasks. The total number of trainable parameters in AE+GMM is less than $480{,}000$. The size of the network is equivalent to $\frac{480000 / (128\times 16) }{9500 \times 0.7} \approx 3.5\%$ of the training data. In other words, using a generator whose size is less than $4$\% of the training data, we are capable of reaching an accuracy comparable to storing $20$\% of the data.
This result demonstrates the effectiveness of the AE+GMM generative replay strategy when limited storage space is available.
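The storage comparison above follows from simple arithmetic, measuring the generator in ``sample equivalents'' (parameters divided by the number of values per snippet):

```python
params = 480_000                 # trainable parameters in AE+GMM (upper bound)
values_per_snippet = 128 * 16    # entries in one 128 x 16 mel-spectrogram snippet
train_snippets = 9500 * 0.7      # 70% training split of the 9500 snippets

# Generator size relative to the training set, in sample equivalents.
ratio = (params / values_per_snippet) / train_snippets   # ~0.035, i.e., ~3.5%
```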
\section{Introduction}\label{sec:intro} Driven by recent advances in computing, communication, and networking technologies, modern engineering systems (e.g., \cite{tac2020wwsc,2014Aunmanned,Lv2020Event}) have gradually shifted their computing and control workload to the cloud, and even to the edge, with data transmitted over wired or wireless networks. Despite their flexibility, such network-based control systems (a.k.a. networked control systems) are known to be vulnerable to cyber threats~\cite{2013Attack,Alvaro2009Research}. In fact, existing works have shown that malicious attacks can severely disrupt the control performance and even render the system unstable \cite{Jang2014survey}. Examples of such failures in widely used safety- and security-critical control systems could put our lives and even national infrastructure at risk \cite{2011Stuxnet}. Several types of cyberattacks have been studied, including replay attacks \cite{Zhu2017replay,replay2018}, false-data injection attacks \cite{FP-RC-FB:10p ,wu2019Switching}, and Denial-of-Service (DoS) attacks \cite{shi2015jamming,Cetinkaya2019overview,hu2020dos}. Relative to the others, DoS attacks can jam communication channels with little knowledge of the system dynamics. They are easy to launch and have received considerable attention \cite{L2010Protection}. For instance, the work \cite{PersisInput} developed a general DoS framework, under which closed-loop system stability can be preserved via state-feedback control, provided certain DoS attack frequency and duration conditions are met. This result has been extended in several directions, e.g., via output-feedback control in \cite{FengResilient}, as well as by considering multiple output channels in \cite{LuInput}. All the aforementioned works assumed that the communication channels have an infinite data rate. Clearly, for real-world engineering systems, this condition is difficult to meet; systems with digital communication channels offer a basic paradigm.
The problem of limited bandwidth has been studied by accounting for the effect of quantization. A great deal of research indicates that, even without attacks, quantization can compromise system performance \cite{bullo1576851}; this is often addressed by designing suitable encoding schemes and providing enough quantization levels. To name a few, for stabilization of systems with quantized measurements, \cite{Liberzon2000Quantized} first introduced the so-called ``zooming-in'' and ``zooming-out'' method. Following this work, a number of stabilizing encoding schemes have been designed for systems with quantized output feedback in \cite{Sharon2008Input, WakaikiObserver}, and for switched systems in \cite{WakaikiStability, Liberzon2014Finite, YangFeedback, Wakaiki2017Stabilization}. Recently, a few works have considered these two factors (i.e., quantization and DoS attacks) simultaneously; see \cite{chen2018Event, feng2020datarate,liu2020datarate, Feng2020multi, 8880482}. The trade-off between system resilience against DoS attacks and data rate was analyzed in \cite{feng2020datarate}. The minimum data rates for stabilizing a centralized system and a multi-agent system were derived in \cite{liu2020datarate} and \cite{Feng2020multi}, respectively. Capitalizing on the zooming-in and -out method, the work \cite{8880482} designed a resilient output encoding scheme for systems whose output channel is subject to DoS attacks and limited data rate. The goal of this paper is to stabilize systems with both input (controller-to-plant) and output (plant-to-controller) channels subject to DoS attacks and limited bandwidth. To this aim, the quantizer encoding schemes must be carefully designed. In the absence of DoS attacks, the work \cite{WakaikiObserver} developed encoding schemes for signals transmitted through both input and output channels.
However, their schemes cannot be applied here, due to the coupling between the encoding strategies for different signals in the presence of DoS attacks. To overcome this challenge, we put forth a delicate structure comprising a deadbeat controller and a transmission protocol. Our protocol requires signals to be transmitted through the input channel at a higher rate than through the output channel; precisely, their transmission rate ratio is exactly the controllability index of the system. Its efficacy is corroborated by the possibility of decoupling the design of the different encoding schemes, as well as of establishing closed-loop stability conditions. We further apply this structure to stabilize systems in which only the output channel has network imperfections. In this scenario, it is proved that the proposed structure can secure synchronization between encoder and decoder even without acknowledgments (ACKs), which are required by existing works, e.g., \cite{8880482,feng2020datarate}. In a nutshell, the main contributions of the present work are summarized as follows. \begin{itemize} \item[\textbf{c1)}] To cope with the coupling and synchronization issues, a structure consisting of a deadbeat controller and a transmission protocol for the input and output channels, co-designed in terms of the controllability index, is advocated. \item[\textbf{c2)}] Under this structure, the input, output, and estimated-output encoding schemes can be designed separately to achieve closed-loop stability when both input and output channels are subject to DoS attacks and quantization; and, \item[\textbf{c3)}] When such network phenomena appear only in the output channel, an encoding scheme is designed such that the system can be stabilized through an ACK-free protocol, in sharp contrast to existing ACK-based results. \end{itemize} \emph{Notation:} Denote the set of integers (real numbers) by $\mathbb{Z}$ ($\mathbb{R}$).
Given $\alpha \in \mathbb{R}$ or $\alpha \in \mathbb{Z}$, let $\mathbb{R}_{>\alpha}$ ($\mathbb{R}_{\ge \alpha}$) or $\mathbb{Z}_{>\alpha}$ ($\mathbb{Z}_{\ge \alpha}$) denote the set of real numbers or integers greater than (greater than or equal to) $\alpha$. Let $\mathbb{N}$ denote the set of natural numbers and $\mathbb{N}_0 := \mathbb{N} \cup \{0\}$. For a vector $v = [v_1, v_2, \cdots\!, v_n]^T \in \mathbb{R}^n$, denote its maximum norm by $|v| := \max\{|v_1|, \cdots\!, |v_n|\}$ and the corresponding induced norm of a matrix $M \in \mathbb{R}^{m \times n}$ by $\Vert M\Vert := \sup\{|Mv|:v \in \mathbb{R^n}, |v| = 1\}$. \section{Preliminaries and Problem Formulation} \label{problem_Formulation} \subsection{Problem formulation}\label{systemdefination} In this paper, we study the networked control architecture in Fig. \ref{networkfig}, where a plant is to be stabilized by a remote digital controller over a network subject to DoS attacks. The plant is described by the following dynamics \begin{subequations}\label{continuoussystem} \begin{align} &\dot{x}(t) = Ax(t) + Bu(t) \label{continuoussysteG_1}\\ &y(t) = Cx(t) \label{continuoussystem_2} \end{align} \end{subequations} where $x(t) \in \mathbb{R}^{n_x}, u(t) \in \mathbb{R}^{n_u}$, and $y(t) \in \mathbb{R}^{n_y}$ are the state, the control input, and the output, respectively. Here, output signals and control inputs are transmitted through different channels over a shared network, referred to as the output channel and the input channel, respectively. Specifically, data transmissions over the output channel occur periodically with interval $\Delta > 0$; that is, the output encoder samples $y(t)$ and sends its quantized version to the controller every $\Delta$ time units. Likewise, the digital controller generates control signals and transmits their quantized values to the plant periodically with interval $\delta>0$.
At the plant side, the quantized control inputs are first decoded and then passed through a zero-order hold (ZOH) before entering the plant. To maintain synchronization between input and output transmissions, we choose $\delta = \Delta/b$ for some $b \in \mathbb{N}$. For future reference, let \begin{equation*} x_{q,k} := x(q\Delta + k\delta),\qquad y_{q,k} := y(q\Delta + k\delta) \end{equation*} for every $q \in \mathbb{Z}_{\ge 0}$ and $k = 0, \cdots\!, b$, and \begin{equation}\label{eq:adbd} A_d := e^{A\delta}, \qquad B_d := \int_{0}^{\delta}{e^{As}B\, ds}. \end{equation} Moreover, we use $x_q$ to denote $x_{q,0}$ for simplicity. \begin{figure}[t] \centering \includegraphics[width=8cm]{NetworkedStructure.jpg}\\ \caption{Networked control system with both input (the blue line) and output (the green line) channels subject to DoS attacks.}\label{networkfig} \end{figure} We make the following assumptions on system \eqref{continuoussystem}. \begin{assumption}[Controllability and observability]\label{as:abca} The pair $(A,B)$ is controllable, and the pair $(C,A)$ is observable. \end{assumption} \begin{assumption}[Initial state bound]\label{x0bound} An upper bound on the initial state $|x_0|$ is known. \end{assumption} \begin{remark} Thanks to As. \ref{as:abca}, it has been shown in \cite{Kreisselmeier1999On} that if $\delta$ is non-pathological, then $(A_d, B_d)$ in \eqref{eq:adbd} is controllable. Let $\eta$ denote its controllability index, i.e., the smallest integer such that ${\rm rank}\, [B_d, A_dB_d, \cdots\!, A_d^{\eta-1} B_d] = n_x$. Similarly, $(C, A_d^{\eta})$ is observable. An upper bound on the initial state in As. \ref{x0bound} can be derived via the zooming-out method; see \cite[Sec. 4]{8880482}. \end{remark} \subsection{Denial-of-Service attack} In Fig. \ref{networkfig}, since both input and output signals are transmitted periodically, we adopt the discrete-time DoS attack model in \cite{8880482}.
Under this model, attacks are launched only at output transmission instants, and each lasts for one output transmission period $\Delta$. This model is general in that it only imposes requirements on the frequency and duration of DoS attacks. Here, DoS frequency is the number of DoS \emph{off/on} switches over a fixed time interval, while DoS duration is the total number of attacked periods. \begin{assumption}[DoS frequency]\label{DoS_frequencyassumption} There exist constants $\kappa_f \in \mathbb{R}_{\ge 0}$ and $\nu_f \in \mathbb{R}_{\ge 2}$ such that the DoS frequency satisfies \begin{equation}\label{dosfre} \Phi_f(q) \le \kappa_f + \frac{q}{\nu_f} \end{equation} over the time interval $[0, q\Delta)$, where $q \in \mathbb{Z}_{\ge 0}$. \end{assumption} \begin{assumption}[DoS duration]\label{DoS_durationassumption} There exist constants $\kappa_d \in \mathbb{R}_{\ge 0}$ and $\nu_d \in \mathbb{Z}_{\ge 1}$ such that the DoS duration satisfies \begin{equation}\label{dosdur} \Phi_d(q) \le \kappa_d + \frac{q}{\nu_d} \end{equation} over the time interval $[0, q\Delta)$, where $q \in \mathbb{Z}_{\ge 0}$. \end{assumption} \begin{remark} Given its generality, this attack model has been widely used in, e.g., \cite{8880482,Feng2020multi,FengResilient,LuInput,feng2020datarate,PersisInput}. As pointed out in \cite{Hespanha1999STABILITY}, $\nu_f\Delta$ in As. \ref{DoS_frequencyassumption} can be regarded as the average dwell-time between two consecutive DoS \emph{off/on} switches. On the other hand, As. \ref{DoS_durationassumption} indicates that the average duration of DoS attacks does not exceed a proportion $1/\nu_d$ of the time interval. The constants $\kappa_f$ and $\kappa_d$ are also known as chatter bounds. The conditions $\nu_d \ge 1$ and $\nu_f \ge 2$ suggest that DoS attacks are not strong enough to prevent all packets from being transmitted, thus rendering it possible for the system to be stabilized by suitable control strategies.
\end{remark} \section{Networked Phenomena at Both Input and Output Channels}\label{inputoutputsection} This section aims to design resilient encoding schemes for the stabilization of system (\ref{continuoussystem}) via a remote observer-based digital controller over communication channels subject to limited bandwidth and DoS attacks; see Fig. \ref{networkfig}. To this end, three signals need to be quantized, namely the estimated output of the observer $\hat{y}_q$, the control input $u_{q,k}$, and the plant output $y_q$, with their quantized values denoted by $Q_1(\hat{y}_q)$, $Q_2(u_{q,k})$, and $Q_3(y_q)$, respectively. In addition, since the input and output channels share a communication network, we assume for simplicity that, once there is a DoS attack, neither the input nor the output signal is received, and both are set to the default value zero. In this manner, the decoders and encoders at both the input and output sides can infer whether there is an attack. Further, their quantization ranges and centers are identical at every transmission instant. As a result, they can be synchronized even with an ACK-free protocol. \subsection{Controller architecture} To stabilize system (\ref{continuoussystem}), we put forth a two-stage observer-based controller, depending on whether there is an attack or not. Specifically, in the absence of DoS attacks, both $Q_3(y_q)$ and $Q_1(\hat{y}_{q-1,\eta})$ are available at the observer side, so we construct the following controller \begin{subequations}\label{abdoscontroller} \begin{align} &\hat{x}_{q, k+1} \!=\! A_d\hat{x}_{q,k} \!+\! B_du_{q,k},\!\! &k &\le \eta - 1 \label{abdoscontroller_1}\\ &\hat{x}_{q}\!=\! \hat{x}_{q-1,k} \!+\! M\big[Q_3(y_{q}) \!-\! Q_1(\hat{y}_{q-1,k})\big],\!\! &k &= \eta \label{abdoscontroller_2}\\ &\hat{y}_{q,k} \!=\! C \hat{x}_{q,k} \label{abdoscontroller_3}\\ &u_{q,k} \!=\!
K \hat{x}_{q,k} \label{abdoscontroller_4} \end{align} \end{subequations} where the initial condition is $\hat{x}_0 = 0$, and $\delta$ is chosen such that \begin{equation}\label{delta} \delta = \frac{\Delta}{\eta}. \end{equation} Matrix $M \in \mathbb{R}^{n_x \times n_y}$ can be regarded as an observer gain such that $R := A_d^{\eta}(I - MC)$ is Schur stable, which always exists since $(C, A_d^{\eta})$ is observable. Moreover, since $(A_d, B_d)$ is controllable, a controller gain matrix $K \in \mathbb{R}^{n_u \times n_x}$ can be designed such that \begin{equation}\label{dbdb} \bar{R}^{\eta} = (A_d + B_dK)^{\eta} = 0. \end{equation} \begin{remark}\label{remark:dbgain} A matrix $K$ obeying (\ref{dbdb}) is also known as a deadbeat controller gain, since it assigns all the eigenvalues of $A_d + B_d K$ to the origin. The solution of this eigenstructure assignment problem is non-unique and can be obtained through several approaches, e.g., \cite{FahmyDead}. \end{remark} On the other hand, when there is a DoS attack, none of $Q_1(\hat{y}_q)$, $Q_2(u_{q,k})$, or $Q_3(y_q)$ can be received, so we simply employ the open-loop controller \begin{subequations}\label{predoscontroller} \begin{align} &\hat{x}_{q,k+1} = A_d \hat{x}_{q,k}\\ &\hat{y}_{q,k} = C \hat{x}_{q,k}\\ &u_{q,k} = 0 \end{align} \end{subequations} with the initial estimated state $\hat{x}_0 = 0$. In addition, to apply the discrete-time signal $Q_2(u_{q,k})$ to the continuous-time system (\ref{continuoussystem_1}), a ZOH is used, and the control input is given by \begin{equation*} u(t) = Q_2(u_{q,k}), \qquad q\Delta + k\delta \le t < q\Delta + (k+1)\delta \end{equation*} where $k = 0, \cdots\!, \eta - 1$. \subsection{Quantizer} We first design the quantizers at the input channel.
According to (\ref{abdoscontroller_1}) and (\ref{abdoscontroller_2}), $u_{q,k}$ is needed for feedback control, whereas $\hat{y}_{q,\eta}$, used to reset the estimated state, is required at each successful transmission instant. Therefore, the controller sends $u_{q,k}$ and $\hat{y}_{q,\eta}$ to the quantizers periodically at different rates. More precisely, the periods of the former and the latter are $\delta$ and $\eta \delta$, respectively. Let $E_{1,q} \ge 0$ and $E_{2,q,k} \ge 0$ satisfy \begin{equation}\label{2E12inequality} |\hat{y}_{q - 1,\eta}| \le E_{1,q},\ \ \ |u_{q,k}| \le E_{2,q,k}. \end{equation} Suppose there are $N_1$ ($N_2$) levels for the quantization of $\hat{y}_{q,\eta}$ ($u_{q,k}$). Partition the hypercubes at the encoders \begin{equation*} \begin{split} \{\hat{y} \in \mathbb{R}^{n_y}: |\hat{y}| \le E_{1,q}\},~~ \{u \in \mathbb{R}^{n_u}: |u| \le E_{2,q,k}\} \end{split} \end{equation*} into $N_1^{n_y}$ and $N_2^{n_u}$ equal-sized boxes, respectively. In addition, each box is represented by a value in $\{1, \cdots\!, N_1^{n_y}\}$, or $\{1, \cdots\!, N_2^{n_u}\}$, following a bijective mapping. The indices of the partitioned boxes containing $\hat{y}_{q, \eta}$ and $u_{q, k}$ are then sent to the decoders. If $\hat{y}_{q, \eta}$ or $u_{q, k}$ lies on the boundary of several boxes, any one of them can be chosen. At the decoder side, $Q_1(\hat{y}_{q, \eta})$ and $Q_2(u_{q, k})$ are recovered from the indices. This implies that each encoder and its corresponding decoder should share the same quantization ranges and centers. Since DoS attacks block both input and output signals from transmitting, the encoders and decoders of both the input and output channels are naturally synchronized.
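To make the index-based scheme concrete, here is a minimal numerical sketch (hypothetical helper names; a NumPy implementation chosen purely for illustration): the encoder transmits only integer indices, and a decoder holding the same range $E$ and center recovers the box center with per-coordinate error at most $E/N$, matching the error bounds used in this section.

```python
import numpy as np

def encode(v, center, E, N):
    """Map each coordinate of v (assuming |v - center|_inf <= E) to one of
    N equal bins of [center - E, center + E]; only indices are transmitted."""
    idx = np.floor((v - center + E) * N / (2 * E)).astype(int)
    return np.clip(idx, 0, N - 1)  # boundary points pick an adjacent box

def decode(idx, center, E, N):
    """Recover the box center from the indices, using the shared range/center."""
    return center - E + (idx + 0.5) * (2 * E) / N

rng = np.random.default_rng(0)
E, N = 2.0, 8
center = np.zeros(3)
y = rng.uniform(-E, E, size=3)
q = decode(encode(y, center, E, N), center, E, N)
assert np.max(np.abs(y - q)) <= E / N  # per-coordinate quantization error bound
```

Since each bin has width $2E/N$ and the decoder returns bin centers, the round-trip error is at most half a bin, i.e., $E/N$, which is exactly the form of the bounds on $|\hat{y} - Q_1(\hat{y})|$ and $|u - Q_2(u)|$ below.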
The quantization errors of the aforementioned encoding schemes obey \begin{equation}\label{2E1inequality} |\hat{y}_{q - 1,\eta} - Q_1(\hat{y}_{q - 1,\eta})| \le \frac{E_{1,q}}{N_1}, \end{equation} \begin{equation}\label{2E2inequality} |u_{q,k} - Q_2(u_{q,k})| \le \frac{E_{2,q,k}}{N_2}. \end{equation} Since $\hat{x}_0 = 0$, we deduce that $\hat{y}_0 = u_0 = 0$. Therefore, the initial bounds $E_{1,0}$ and $E_{2,0,0}$ can be set to \begin{equation*} E_{1,0} = 0,\ \ \ E_{2,0,0} = 0. \end{equation*} Moreover, as for the output $y_q$, choose $E_{3,q} \ge 0$ such that \begin{equation}\label{2E3inequality} |y_q - Q_1(\hat{y}_{q - 1,\eta})| \le E_{3,q}. \end{equation} Let $N_3$ be the number of quantization levels of $y_q$. The hypercube \begin{equation}\label{eq:hypercubeQ3} \{y \in \mathbb{R}^{n_y} : |y - Q_1(\hat{y}_{q - 1,\eta})| \le E_{3, q}\} \end{equation} is partitioned into $N_3^{n_y}$ equal-sized boxes centered at $Q_1(\hat{y}_{q - 1, \eta})$. Then, following the same procedure as for the above two quantizers, $Q_3(y_q)$ is transmitted to the controller every $\Delta$ units of time. The quantization error satisfies \begin{equation*} |y_q - Q_3(y_q)| \le \frac{E_{3, q}}{N_3}. \end{equation*} Define the estimation error $e_{q, k} := x_{q,k} - \hat{x}_{q,k}$. Combining $\hat{x}_0 = 0$ with As. \ref{x0bound}, we deduce that the initial error obeys $|e_0| = |x_0|$. Thus it suffices to set $E_{3,0} := \Vert C\Vert |x_0|$. \subsection{Stability analysis}\label{Main_results} In this subsection, we start by presenting the encoding schemes $\{E_{p,q,k}\}$ $(p = 1, 2, 3)$, followed by formal stability conditions. Design $\{E_{1,q}: q \in \mathbb{Z}_{\ge 1}\}$ such that \begin{equation}\label{2E1q} E_{1,q} = E_{1,0},\qquad \forall q \in \mathbb{Z}_{\ge 1} \end{equation} and let $\{E_{2,q,k}: q \in \mathbb{Z}_{\ge 1}, k = 0, \cdots\!, \eta - 1\}$ be updated by \begin{equation}\label{2E2qk} E_{2,q,k} = \frac{N_3 - 1}{N_3}\left\Vert K\bar{R}^kM\right\Vert E_{3,q}, \qquad {\rm{~if~}}q\Delta = s_r.
\end{equation} Moreover, the sequence $\{E_{3,q}: q \in \mathbb{Z}_{\ge 1}\}$ is given by \begin{equation}\label{2E3q} E_{3,q + 1} := \left\{ \begin{aligned} & \hat{\theta}_a E_{3,q}, & q\Delta \ne s_r\\ & \hat{\theta}_{0} E_{3,q}, & (q-1)\Delta \ne s_r, q\Delta = s_r\\ & \hat{\theta}_{na}E_{3,q}, & (q-1)\Delta = s_r, q\Delta = s_r \end{aligned} \right. \end{equation} where \begin{align*} \hat{\theta}_a & := \Vert A_d^{\eta}\Vert\\ \hat{\theta}_{0} & := a_0\rho + \frac{\Vert C\Vert a_1}{N_3} + \Vert C\Vert a_2\frac{N_3 - 1}{N_2N_3}\\ \hat{\theta}_{na} & := \rho + \frac{\Vert C\Vert a_1}{N_3} + \Vert C\Vert a_2\frac{N_3 - 1}{N_2N_3} \end{align*} with positive constants $a_0, a_1, a_2$, and $0 < \rho < 1$ validating the following for all $\ell \ge 1$ \begin{equation}\label{2rho} \begin{aligned} \big\Vert R^{\ell}\big\Vert \le a_0\rho^{\ell},~~~~ \big\Vert R^{\ell}A_d^{\eta}M\big\Vert \le a_1\rho^{\ell}\\ \sum_{i = 0}^{\eta - 1}{\big\Vert R^{\ell}A_d^{\eta - i - 1}B_d \big\Vert\big\Vert K\bar{R}^iM\big\Vert}\le a_2\rho^{\ell}.\\ \end{aligned} \end{equation} Since $R$ is Schur stable, such constants always exist. Next, we show that the designed schemes above are resilient to DoS attacks, which constitutes one of our main results. \begin{theorem}\label{2convergetheorem} Consider system (\ref{continuoussystem}) with the observer-based controller in (\ref{abdoscontroller}) and (\ref{predoscontroller}), with $K$ obeying (\ref{dbdb}) and $M$ chosen such that $R$ is Schur stable. Let As. \ref{as:abca}--\ref{DoS_durationassumption} hold. If i) the input and output transmission periods adhere to (\ref{delta}), ii) the number of quantization levels $N_1 \ge 1$ is odd, \begin{equation}\label{2Ncondition} \begin{split} N_2\! >\! \max\!{\left\{\frac{a_2\Vert C\Vert}{1 - \rho},\, \frac{a_2}{a_1}\right\}},\,{\rm{~and~}} N_3\!
> \!\frac{\Vert C\Vert a_1 - \frac{\Vert C\Vert a_2}{N_2}}{1 - \rho -\frac{\Vert C\Vert a_2}{N_2}} \end{split} \end{equation} and, iii) DoS attacks satisfy \begin{equation}\label{2doscondition} \frac{1}{\nu_d} \le \frac{\log{(1/\hat{\theta}_{na})}}{\log{(\hat{\theta}_a/\hat{\theta}_{na})}} - \frac{\log{(\hat{\theta}_0/\hat{\theta}_{na})}}{\log{(\hat{\theta}_a/\hat{\theta}_{na})}}\frac{1}{\nu_f} \end{equation} then the system is exponentially stable under the encoding scheme with error bounds $\{E_{p,q,k}:q \in \mathbb{Z}_{\ge 1}, k = 0, \cdots\!, \eta - 1\}$ $(p = 1, 2, 3)$ constructed by the update rules in (\ref{2E1q})-(\ref{2E3q}). \end{theorem} We begin the proof of Thm. \ref{2convergetheorem} with a lemma demonstrating that the update rules in \eqref{2E1q}-\eqref{2E3q} satisfy (\ref{2E12inequality}) and (\ref{2E3inequality}). \begin{lemma}\label{lem:lem1} Consider system (\ref{continuoussystem}) with the controller in (\ref{abdoscontroller}) and (\ref{predoscontroller}), where $K$ obeys (\ref{dbdb}). Let As. \ref{as:abca}--\ref{DoS_durationassumption} hold. If $\{E_{p,q,k}:q \in \mathbb{Z}_{\ge 1}, k = 0, \cdots\!, \eta - 1\}$ $(p = 1, 2, 3)$ obey (\ref{2E1q})-(\ref{2E3q}), then (\ref{2E12inequality}) and (\ref{2E3inequality}) hold true for all $q \in \mathbb{Z}_{\ge 1}$. \end{lemma} \begin{proof} Encoding schemes for systems with quantized inputs and outputs in the absence of DoS attacks have been discussed in \cite{WakaikiObserver}. However, those methods cannot be directly applied here due to the DoS-induced coupling between the schemes. This challenge is addressed through our carefully designed controller structure in (\ref{abdoscontroller})-(\ref{dbdb}).
According to (\ref{abdoscontroller}) and (\ref{dbdb}), \begin{equation*} \hat{y}_{q - 1, \eta} = C\hat{x}_{q - 1, \eta} = C\bar{R}^{\eta}\hat{x}_{q - 1} = 0 \end{equation*} holds true irrespective of DoS attacks, which implies $|\hat{y}_{q - 1, \eta}| = 0 = E_{1,0}$ for all $q \in \mathbb{Z}_{\ge 1}$, so $E_{1,q}$ remains unchanged. This result further indicates that $Q_1(\hat{y}_{q - 1, \eta}) = 0$. Hence, it follows from (\ref{eq:hypercubeQ3}) that the quantization center of $Q_3(y_q)$ is at the origin. On the other hand, if no DoS attacks occur within $[q_1\Delta, (q_1 + 1)\Delta)$, then \begin{equation}\label{2hatxqk} \begin{aligned} \hat{x}_{q_1, k} =&\ (A_d + B_dK)^{k }\hat{x}_{q_1} \\ = &\ \bar{R}^{k}(\hat{x}_{q_1 - 1, \eta} + M[Q_3(y_{q_1}) - Q_1(\hat{y}_{q_1 - 1, \eta})])\\ = &\ \bar{R}^k M[Q_3(y_{q_1}) - Q_1(\hat{y}_{q_1 - 1, \eta})] \end{aligned} \end{equation} hence $u_{q_1,k}$ can be expressed in terms of $Q_3(y_{q_1}) - Q_1(\hat{y}_{q_1 - 1, \eta})$. In addition, since \begin{equation*} |Q_3(y_q) - Q_1(\hat{y}_{q-1, \eta})| \le \frac{N_3 - 1}{N_3}E_{3,q} \end{equation*} it follows that, in the absence of DoS attacks, $u_{q,k}$ satisfies \begin{equation}\label{2uqk} |u_{q,k}| \le \frac{N_3 - 1}{N_3}\big\Vert K\bar{R}^kM\big\Vert E_{3,q} =: E_{2,q,k} \end{equation} for all $q \ge 1, k = 0, \cdots\!, \eta -1$. When DoS attacks occur, the plant cannot receive inputs from the controller. In other words, $E_{2,q,k}$ only depends on the latest $E_{3,q}$, so $E_{2,q,k}$ can remain unchanged during DoS attacks. Following the definitions of $E_{1,q}$ and $E_{2,q,k}$, we are now able to design the sequence $\{E_{3,q}: q\in \mathbb{Z}_{\ge 1}\}$.
First, in the absence of DoS attacks, the error just before each transmission instant, $e_{q - 1, \eta} = x_q - \hat{x}_{q - 1,\eta}$, satisfies \begin{align*} e_{q - 1, \eta} =&\ A_d^{\eta}(I - MC)e_{q - 1} - A_d^{\eta}M[Q_3(y_q)- y_q]\\ & + \sum_{i = 0}^{\eta - 1}{A_d^{\eta - i - 1}B_d[Q_2(u_{q-1,i}) - u_{q- 1,i}]}\\ & - A_d^{\eta}M\big[\hat{y}_{q - 1, \eta} - Q_1(\hat{y}_{q - 1, \eta})\big] \end{align*} which implies that $e_{q - 1, \eta}$ generally depends on $\hat{y}_{q-1, \eta}$, $u_{q,k}$, and itself, thus introducing coupling into the design of $E_{3,q}$. Here, this issue is addressed by (\ref{dbdb}). To see this, recalling (\ref{2uqk}), $\hat{y}_{q - 1, \eta} = 0$, and $Q_1(\hat{y}_{q - 1, \eta}) = 0$, we have that \begin{align}\label{2error} &e_{q + {\ell} - 1, \eta} =R^{\ell} e_{q - 1} + \sum_{j = 0}^{{\ell} - 1}R^jA_d^{\eta}M\big(Q_3(y_{q + \ell - j - 1})- y_{q + \ell - j - 1}\big)\nonumber\\ &+ \sum_{j = 0}^{{\ell} - 1}R^j\sum_{i = 0}^{\eta \!-\! 1}A_d^{\eta \!-\! i - 1}B_d \big[Q_2(u_{q+\ell \!-\!j \!-\!1,i}) \!-\! u_{q+{\ell}-j-1,i}\big]. \end{align} Define $E_{3,q+\ell}$ as follows \begin{align*} E_{3,q+{\ell}} :=&\ a_0\rho^{\ell}E_{3,q} \\ & + \sum_{i = 0}^{{\ell} - 1}\left(\frac{(N_3 - 1)a_2\Vert C\Vert }{N_2N_3} + \frac{\Vert C\Vert a_1}{N_3}\right)\rho^{i}E_{3,q+\ell-i-1}. \end{align*} Hence, combining (\ref{2uqk}) with (\ref{2error}) yields \begin{align}\label{eq:yqy} |y_{q + 1} - Q_1(\hat{y}_{q, \eta})| & \le |y_{q + 1} - \hat{y}_{q, \eta}| + |\hat{y}_{q, \eta} - Q_1(\hat{y}_{q, \eta})|\nonumber\\ & \le \Vert C\Vert |x_{q + 1} - \hat{x}_{q, \eta}|\nonumber\\ & \le \hat{\theta}_{na} E_{3,q} \le E_{3,q + 1}. \end{align} Moreover, since both the input and output channels are blocked in the presence of DoS attacks, and $\hat{y}_{q,\eta} = 0$ due to the property of $\bar{R}$, it follows that \begin{equation*} |y_{q + 1} - Q_1(\hat{y}_{q, \eta})| \le \hat{\theta}_{a} E_{3,q} \le E_{3,q + 1} \end{equation*} which completes the proof.
\end{proof} Next, we establish upper bounds on the sequences $\{E_{p,q,k}:q \in \mathbb{Z}_{\ge 1}, k = 0, \cdots\!, \eta - 1\}$ $(p = 1, 2, 3)$, whose existence will imply the boundedness of the state trajectory. \begin{lemma}\label{lem:eqbound} Consider system (\ref{continuoussystem}) with the controller in (\ref{abdoscontroller}) and (\ref{predoscontroller}), where $K$ satisfies (\ref{dbdb}) and $M$ is chosen such that $R$ is Schur stable. Let the assumptions and conditions in Thm. \ref{2convergetheorem} hold. If further $\{E_{p,q,k}:q \in \mathbb{Z}_{\ge 1}, k = 0, \cdots\!, \eta - 1\}$ $(p = 1, 2, 3)$ obey (\ref{2E1q})-(\ref{2E3q}), there exist $\Omega_1 \ge 1$, $\Omega_2 > 0$, and $\gamma \in (0,1)$ such that \begin{equation}\label{Eqomegagamma} E_{3, q} \le \Omega_1 \gamma^q |x_0| ,\qquad \forall q \in \mathbb{Z}_{\ge 1} \end{equation} as well as $E_{1, q} = E_{1, 0}$ and $E_{2, q, k} \le \Omega_2\gamma^q|x_0|$. \end{lemma} \begin{proof} By (\ref{2E1q}), $E_{1, q}$ remains unchanged within the considered interval; therefore, $E_{1, q} = E_{1, 0}$ holds for all $q \in \mathbb{Z}_{\ge 1}$. The proof of $E_{3, q} \le \Omega_1 \gamma^q |x_0|, \forall q \in \mathbb{Z}_{\ge 1}$ follows directly from that of Lemma 3.9 in \cite{8880482}, where $\Omega_1 := \frac{\hat{\theta}_0^{\Pi_f + 1}\hat{\theta}_a^{\Pi_d}}{\hat{\theta}_{na}^{\Pi_f + \Pi_d + 1}} \big(\hat{\theta}_{na}\big(\frac{\hat{\theta}_0}{\hat{\theta}_{na}}\big)^{\nu_f}\big(\frac{\hat{\theta}_a}{\hat{\theta}_{na}}\big)^{\nu_d}\big)$. Moreover, applying (\ref{2uqk}), $E_{2, q, k} \le \Omega_2\gamma^q |x_0|$ can be verified with $\Omega_2 := \frac{N_3 - 1}{N_3}\Vert K\bar{R}^{\eta - 1}\Vert\Omega_1$. \end{proof} We are now in a position to prove Thm. \ref{2convergetheorem}. \begin{proof} [Proof of Theorem \ref{2convergetheorem}] We first establish a bound on the state $x$ at the transmission instants, i.e., $|x(q\Delta)|$, and then derive its bound at the sampling instants, i.e., $|x(q\Delta + k\delta)|$.
Finally, these two bounds are combined to bound $|x(t)|$ over the considered horizon. First, according to (\ref{continuoussystem}), (\ref{abdoscontroller}), and (\ref{predoscontroller}), one has \begin{align}\label{1xqeta2} x_{q, \eta} =&~ \bar{R}^{\eta}x_{q} + \sum_{i = 0}^{\eta - 1}\bar{R}^iB_dK(x_{q, \eta - i - 1} - \hat{x}_{q, \eta - i - 1}) \nonumber\\ & + \sum_{i = 0}^{\eta - 1}\bar{R}^iB_d(Q_2(u_{q, \eta - i - 1}) - K \hat{x}_{q, \eta - i - 1}) \end{align} and \begin{align}\label{eq:vertx} \vert x_{q, \eta}\vert \le&~ \Vert \bar{R}^{\eta}\Vert |x_{q}| + \sum_{i = 0}^{\eta - 1}\Vert\bar{R}^iB_dK\Vert |(x_{q, \eta - i - 1} - \hat{x}_{q, \eta - i - 1})|\nonumber\\ & + \sum_{i = 0}^{\eta - 1}\Vert \bar{R}^iB_d\Vert|(Q_2(u_{q, \eta - i - 1}) - K \hat{x}_{q, \eta - i - 1})|. \end{align} By (\ref{eq:yqy}), it follows that \begin{equation}\label{eq:xqx} \vert x_{q} - \hat{x}_{q - 1, \eta}\vert \le \frac{E_{3,q}}{\Vert C\Vert}. \end{equation} Noticing that $\bar{R}^{\eta} = 0$ and substituting (\ref{2E2qk}) and (\ref{eq:xqx}) into (\ref{eq:vertx}), \begin{align} \vert x_{q, \eta}\vert\le&\ \sum_{i = 0}^{\eta - 1} \frac{N_3 - 1}{N_2 N_3}\Vert \bar{R}^i B_d\Vert\Vert K\bar{R}^{\eta - i - 1}M\Vert E_{3,q}\nonumber\\ & + \sum_{i = 0}^{\eta - 1} \frac{\Vert \bar{R}^i B_d K A_d^{\eta - i - 1}\Vert}{\Vert C\Vert} E_{3,q}\nonumber\\ \le&\ \Omega_x \Omega_1 \gamma^q |x_0| \end{align} where $\Omega_x := \sum_{i = 0}^{\eta - 1}\big\{ \frac{N_3 - 1}{N_2 N_3}\Vert \bar{R}^i B_d\Vert\Vert K\bar{R}^{\eta - i - 1}M\Vert + \frac{\Vert \bar{R}^i B_d K A_d^{\eta - i - 1}\Vert}{\Vert C\Vert}\big\}$, and the last inequality holds due to (\ref{Eqomegagamma}).
Since $x_{q, k+1} = A_dx_{q,k} + B_d Q_2(u_{q,k})$, we have that \begin{align}\label{eq:xql} \vert x_{q, {\ell}}\vert\le &\ \Vert\bar{R}^{{\ell}}\Vert |x_{q}| + \sum_{i = 0}^{{\ell} - 1} \frac{\Vert \bar{R}^i B_d K A_d^{{\ell} - i - 1}\Vert}{\Vert C\Vert}E_{3,q}\nonumber\\ &+ \sum_{i = 0}^{{\ell} - 1} \frac{N_3 - 1}{N_2 N_3}\Vert \bar{R}^i B_d\Vert\Vert K\bar{R}^{{\ell} - i - 1}M\Vert E_{3,q}\\ \le&\ \Omega_x \Omega_1 \gamma^q |x_0| + \Omega_3 \gamma^q |x_0|\nonumber \le \bar{\Omega}_x \gamma^q |x_0| \end{align} where $\Omega_3 := \Omega_1\max_{{\ell}} \sum_{i = 0}^{{\ell}}\big\{ \frac{N_3 - 1}{N_2 N_3}\Vert \bar{R}^i B_d\Vert\Vert K\bar{R}^{{\ell} - 1}M\Vert + \frac{\Vert \bar{R}^i B_d K A_d^{\eta - i - 1}\Vert}{\Vert C\Vert}\big\}, {\ell} \in \{1, \cdots\!, \eta - 1\}$, and $\bar{\Omega}_x := \Omega_3 + \Vert\bar{R}^{{\ell}}\Vert\Omega_x$. Finally, according to (\ref{continuoussystem}), $x(t)$ satisfies \begin{align*} x(t) = e^{A(t - q\Delta - k\delta)}x_{q,k} + \int_{q\Delta + k\delta}^te^{A(t - s)}BQ_2(u_{q, k}) \,ds \end{align*} for all $t \in [q\Delta + k\delta, q\Delta + (k+1)\delta)$. Combining Lem. \ref{lem:eqbound} and (\ref{eq:xql}), it follows that \begin{align*} \vert x(t) \vert \le \big(\Vert A_d\Vert \bar{\Omega}_x + \frac{N_2 + 1}{N_2}\Vert B_d\Vert\Omega_2\big)\gamma^q|x_0|\le \tilde{\Omega}_x e^{-\sigma t}|x_0| \end{align*} where $\sigma := \frac{1}{\eta \delta}\log \frac{1}{\gamma}$ and $\tilde{\Omega}_x := \Vert A_d\Vert \bar{\Omega}_x + \frac{N_2 + 1}{N_2}\Vert B_d\Vert\Omega_2$. This implies exponential convergence of the state. \end{proof} \begin{remark}\label{2dbdbremark} Leveraging the same technique as in Rmk. \ref{remark:dbgain}, one can also design $M$ such that $R^{\mu} = 0$, where $\mu$ is the observability index of $(C, A_d^{\eta})$. A direct benefit of using a deadbeat observer gain is that the encoding schemes can be simplified, since $R^{\ell} = 0$ holds for all $\ell \ge \mu$.
However, the results in \cite{WakaikiObserver} indicate that, despite exhibiting faster convergence and requiring fewer quantization levels, the deadbeat property of matrices $R$ and $\bar{R}$ results in a large quantization step size $E_{p,q}/N_p$, which leads to large quantization errors. Moreover, it was shown in \cite{8880482} that if the quantization step size $E_{p,q}/N_p$ grows more slowly during DoS attacks, then the overshoot caused by an attack is smaller and the system is more robust. Therefore, instead of a deadbeat observer gain, a general one that renders $R$ Schur stable is employed in the present work. \end{remark} \section{Network Phenomena at Output Channel}\label{outputsection} In this section, we consider stabilizing linear systems over a communication network where only the output channel is subject to DoS attacks, i.e., the input channel is assumed ideal; see Fig. \ref{siglenetworkfig}. The transmission policy of the previous section is considered here; that is, the digital controller receives the quantized output $Q(y_q)$ from the plant with period $\Delta$ and generates the control input $u_{q,k}$ with period $\delta$. Notice that the decoder can recover the correct quantized value from the index sent by the encoder only if they share the same quantization ranges and centers. It is thus necessary to ensure that the encoder and the decoder are \emph{synchronized} before designing encoding schemes. A direct way to maintain synchronization is to use an ACK-based protocol; see Fig. \ref{tcpfig}, which has been adopted in previous studies, such as \cite{feng2020datarate, 8880482}. Nevertheless, in real-time applications, protocols without ACKs, e.g., UDP, are often preferred since the resulting implementation is simpler and saves the additional energy required for sending ACKs \cite{HongUDP}. Hence, in the following, we first show that methods for stabilizing systems under ACK-based protocols can no longer be used under ACK-free protocols.
Then, we demonstrate that our proposed method can inform the encoder of DoS attacks through zero inputs, so the decoder and the encoder can be synchronized even without ACKs. \begin{figure} \centering \includegraphics[width=6cm]{NetworkedStructure0.jpg}\\ \caption{Closed-loop system with an ACK-free protocol.}\label{siglenetworkfig} \centering \end{figure} \begin{figure} \centering \includegraphics[width=6cm]{TCP.jpg}\\ \caption{Closed-loop system with an ACK-based protocol. The black dashed line represents the ACKs sent from the decoder to the encoder.}\label{tcpfig} \centering \end{figure} \subsection{Controller under an acknowledgment-based protocol} Recall that $\{s_r\}_{r\in \mathbb{N}_0}$ collects the sequence of successful transmission instants. Let $\delta = \Delta$, and choose $K$ such that $\bar{R} = A_d + B_dK$ is Schur stable. We consider an observer-based controller described by \begin{subequations}\label{eq:tcpcontroller} \begin{align} &\hat{x}_{q+1} = A_d\hat{x}_q + B_du_q + L(Q(y_q) - \hat{y}_q), & q\Delta = s_r \label{eq:tcpcontroller_1}\\ &\hat{x}_{q+1} = A_d\hat{x}_q + B_du_q, & q\Delta \ne s_r \label{eq:tcpcontroller_2}\\ &\hat{y}_q = C\hat{x}_q \label{eq:tcpcontroller_3}\\ &u_q = K\hat{x}_q \label{eq:tcpcontroller_4} \end{align} \end{subequations} where $\hat{x}_{q}\! \in\! \mathbb{R}^{n_x}, \hat{y}_{q} \!\in\! \mathbb{R}^{n_y}$, and $Q(y_{q}) \!\in\! \mathbb{R}^{n_y}$ are the estimated state, the estimated output, and the quantized output, respectively. The initial condition is set to $\hat{x}_0 = 0$. Since the input channel is ideal, it follows that \begin{equation*} u(t) = u_q, \quad q \Delta \le t < (q + 1)\Delta, \quad q \in \mathbb{Z}_{\ge 0}. \end{equation*} To design an encoding scheme such that the output $y_q$ can be quantized without saturation, a bound on the error between the actual state and the estimated state, i.e., $| e_q| := |x_q - \hat{x}_q| \le E_q$, should be derived.
Based on (\ref{continuoussystem_2}) and (\ref{eq:tcpcontroller_2}), it can be deduced that \begin{equation}\label{eq:tcperror} |y_q - \hat{y}_q| = |C(x_q - \hat{x}_q)| = |Ce_q| \le \Vert C \Vert E_q. \end{equation} Let $N$ denote the number of quantization levels of $y_q$. Similar to the previous section, we partition the hypercube $ \{ y \in \mathbb{R} ^{n_y} : | y - \hat{y}_q| \le \Vert C \Vert E_q\}$ into $N^{n_y}$ equal-sized boxes. The quantization error obeys $ |Q(y_q) - y_q| \le \frac{\Vert C\Vert}{N}E_q$. According to As. \ref{x0bound}, the initial value $E_0$ is given by \begin{equation}\label{eq:tcpinitial} |e_0| = |x_0| \stackrel{\triangle}{=} E_0. \end{equation} The sequence $\{E_q, q \in \mathbb{Z}_{\ge 1}\}$ will be specified later. Notice that the hypercube center is $\hat{y}_q$, which is generated by the predictor-based observer in (\ref{eq:tcpcontroller}). Therefore, a copy of this predictor should also be implemented at the encoder side. Under an ACK-based protocol, the decoder sends ACKs to the encoder without delay at successful transmission instants; when the encoder does not receive an ACK, it infers that there is a DoS attack. In this manner, synchronization between the two predictors is ensured, which consequently implies that the quantization ranges and centers at the encoder are identical to those of the decoder. Before giving the stability condition for the ACK-based case, we present an output encoding scheme. Let \begin{equation}\label{eq:tcperrorbound} E_{q + 1} := \left\{ \begin{aligned} & \theta_a E_{q}, & q\Delta \ne s_r\\ & \theta_0 E_{q}, & (q-1)\Delta \ne s_r, q\Delta = s_r\\ & \theta_{na}E_{q}, & (q-1)\Delta = s_r, q\Delta = s_r \end{aligned} \right.
\end{equation} with \begin{subequations}\label{eq:tcpencode} \begin{align} \theta_{a} &:=\left\|A_d \right\| \label{eq:tcpencode1}\\ \theta_{0} &:=H_{0} \rho+\frac{H_1\left\|C \right\|}{N} \label{eq:tcpencode2}\\ \theta_{na} &:=\rho+\frac{H_1 \left\|C \right\|}{N} \label{eq:tcpencode3} \end{align} \end{subequations} where constants $H_0$, $H_1$, and $0 < \rho < 1$ satisfy \begin{equation*} \Vert (A_d - LC)^\ell \Vert \le H_0 \rho^{\ell},\quad \Vert (A_d - LC)^\ell L\Vert \le H_1 \rho^{\ell}. \end{equation*} \begin{theorem}\label{th:outputtheorem1} Consider system (\ref{continuoussystem}) with controller (\ref{eq:tcpcontroller}), where $L$ and $K$ are chosen such that $A_d - LC$ and $A_d + B_d K$ are Schur stable. Under As. \ref{as:abca}--\ref{DoS_durationassumption}, if i) the number of quantization levels satisfies \begin{equation}\label{2Ntcpcondition} \begin{split} N > \frac{H_1\Vert C\Vert}{1 - \rho} \end{split} \end{equation} and, ii) DoS attacks satisfy \begin{equation}\label{2dostcpcondition} \frac{1}{\nu_d} \le \frac{\log{(1/{\theta}_{na})}}{\log{({\theta}_a/{\theta}_{na})}} - \frac{\log{({\theta}_0/{\theta}_{na})}}{\log{(\theta_a/{\theta}_{na})}}\frac{1}{\nu_f} \end{equation} then the system is exponentially stable under the encoding scheme with error bounds $\{E_{q}:q \in \mathbb{Z}_{\ge 1}\}$ constructed by the update rule (\ref{eq:tcperrorbound}). \end{theorem} The proof is similar to that of \cite[Thm. 3.4]{8880482} and is thus omitted due to space limitations. \subsection{Controller under an acknowledgment-free protocol}\label{1Acontrollersection} In this subsection, we show that the aforementioned controller and encoding scheme cannot stabilize the system when the ACK-based protocol is replaced by an ACK-free protocol. This is because synchronization between the encoder and decoder is no longer guaranteed. To see this, consider controller (\ref{eq:tcpcontroller}) with the encoding scheme in (\ref{eq:tcperrorbound}) employing an ACK-free protocol.
In this setting, the predictors at the encoder and decoder sides may lose synchronization, since the decoder never sends ACKs to the encoder, regardless of whether DoS attacks happen. When a DoS attack occurs, the predictor at the controller side switches to (\ref{eq:tcpcontroller_2}), whereas the predictor at the encoder side keeps using (\ref{eq:tcpcontroller_1}). Moreover, the update rule of the sequence $E_q$ at the decoder switches to (\ref{eq:tcpencode1}), while adhering to (\ref{eq:tcpencode2})-(\ref{eq:tcpencode3}) at the encoder. As a result, their quantization ranges and centers may deviate, and the correct output value cannot be recovered by the decoder. We prove that even if only one DoS attack occurs (i.e., the decoder and encoder are out of synchronization for a single transmission period), the state may eventually diverge. To distinguish between the predictors at the encoder and decoder, let $\hat{x}_q$, $\hat{y}_q$, and $\hat{Q}(y_q)$ denote the estimated state, estimated output, and quantized output at the controller side, and $\tilde{x}_q, \tilde{y}_q$, and $Q(y_q)$ denote their counterparts at the encoder side. In addition, let $u_q$ stand for the input sent by the controller, and $\tilde{u}_q$ the estimated input generated by the predictor at the encoder side.
The predictor at the controller side can be expressed as \begin{subequations}\label{eq:tcpdecoder} \begin{align} &\hat{x}_{q+1} = A_d\hat{x}_q + B_du_q + L(\hat{Q}(y_q) - \hat{y}_q), & q\Delta = s_r \label{eq:tcpdecoder_1}\\ &\hat{x}_{q+1} = A_d\hat{x}_q + B_du_q, & q\Delta \ne s_r \label{eq:tcpdecoder_2}\\ &\hat{y}_q = C\hat{x}_q \label{eq:tcpdecoder_3}\\ &u_q = K\hat{x}_q \label{eq:tcpdecoder_4} \end{align} \end{subequations} and the predictor at the encoder side is described by \begin{subequations}\label{eq:tcpencoder} \begin{align} &\tilde{x}_{q+1} = A_d\tilde{x}_q + B_d\tilde{u}_q + L({Q}(y_q) - \tilde{y}_q)\label{eq:tcpencoder_1}\\ &\tilde{y}_q = C\tilde{x}_q \label{eq:tcpencoder_2}\\ &\tilde{u}_q = K\tilde{x}_q \end{align} \end{subequations} where $q \in \mathbb{Z}_{\ge 0}$. Similarly, let $E_{d,q}$ and $E_{e,q}$ denote the error bounds at the decoder and the encoder side, respectively \begin{equation*} \begin{aligned} &E_{d,q + 1} := \left\{ \begin{aligned} & \theta_a E_{d,q}, & q\Delta \ne s_r\\ & \theta_0 E_{d,q}, & (q-1)\Delta \ne s_r, q\Delta = s_r\\ & \theta_{na}E_{d,q}, & (q-1)\Delta = s_r, q\Delta = s_r \end{aligned} \right.\\ &E_{e,q + 1} := \left\{ \begin{aligned} & \theta_0 E_{e,q}, & q\Delta = 0\\ & \theta_{na}E_{e,q}, & q\Delta > 0 \end{aligned} \right. \end{aligned} \end{equation*} where $\theta_{0}$, $\theta_{a}$, and $\theta_{na}$ are defined in (\ref{eq:tcpencode}). Accordingly, the errors at the encoder and decoder sides are $e_{e,q} := x_q - \tilde{x}_q$ and $e_{d,q} := x_q - \hat{x}_q$. Moreover, the quantized outputs in (\ref{eq:tcpdecoder_1}) and (\ref{eq:tcpencoder_1}) are \begin{align}\label{hatQ} \hat{Q}(y_q) = \hat{y}_q + Q^{i}_q\frac{\Vert C\Vert E_{d,q}}{N}\\ Q(y_q) = \tilde{y}_q + Q^{i}_q\frac{\Vert C\Vert E_{e,q}}{N} \end{align} where $Q_q^{i}$ denotes the quantization index transmitted from the encoder to the decoder. Suppose that a DoS attack is launched at $q_a\Delta$ and no attacks happen before or after $q_a\Delta$.
It follows that $\hat{Q}(y_q) = Q(y_q)$ for all $q \le q_a$, and \begin{subequations}\label{tildex-x} \begin{align} &\hat{x}_{q_a} = \tilde{x}_{q_a} \\ &\hat{x}_{q_a\!+\!1} = (A_d \!+\! B_d K)\hat{x}_{q_a} \\ &\tilde{x}_{q_a \!+\! 1} = (A_d \!+\! B_d K) \tilde{x}_{q_a} \!+\! L Q^{i}_{q_a}\frac{\Vert C\Vert E_{e,q_a}}{N} \\ &\hat{x}_{q_a+2} = (A_d \!+\! B_d K)^2 \hat{x}_{q_a} \!+\! L Q^{i}_{q_a\!+\!1}\frac{\Vert C\Vert E_{d,q_a \!+\! 1}}{N}\\ \begin{split} &\tilde{x}_{q_a + 2} = (A_d \!+\! B_d K)^2 \tilde{x}_{q_a} \!+\! L Q^{i}_{q_a+1}\frac{\Vert C\Vert E_{e,q_a + 1}}{N}\\ &~~~~~~~~~~ \!+\! (A_d \!+\! B_d K)L Q^{i}_{q_a}\frac{\Vert C\Vert E_{e,q_a}}{N} \\ \end{split}\\ & \cdots \nonumber \end{align} \end{subequations} Notice that the quantizer operates normally without saturation only if $E_{e,q} \ge |e_{e,q}| = |x_{q} - \tilde{x}_q|$ and $E_{d,q} \ge |e_{d,q}| = |x_{q} - \hat{x}_q|$ hold for all $q \in \mathbb{Z}_{\ge 1}$. If the quantizer saturates, the error between the actual output and the quantized output may be large, which consequently renders the system unstable. In the following, we assume that the quantizer is not saturated, that is, $E_{e,q} \ge |e_{e,q}|$ and $E_{d,q} \ge |e_{d,q}|$ for all $q \in \mathbb{Z}_{\ge 1}$, and reach a contradiction. Since $E_{e,q + 1} = \theta_{na}E_{e,q}$ for $q > 0$ and $\theta_{na} <1$, the sequence $\{|x_q - \tilde{x}_q|\}$ should be decreasing. Let $\Pi_{L} := A_d - LC$ and $\Pi_{K} := A_d + B_dK$. Combining (\ref{hatQ}) and (\ref{tildex-x}) yields \begin{align*} |x_{q_a + 1} - \tilde{x}_{q_a + 1}| & = |\Pi_{L} (x_{q_a} - \tilde{x}_{q_a}) - L(Q(y_{q_a}) - y_{q_a})|\\ & \le E_{e, q_a + 1} \stackrel{\triangle}{=} \tilde{E}_{e,q_a + 1}.
\end{align*} Likewise, \begin{align*} &~|x_{q_a + 2}- \tilde{x}_{q_a + 2}| \\ =&\ \big|\Pi_{L} (x_{q_a + 1} - \tilde{x}_{q_a + 1}) - L(Q(y_{q_a + 1}) - y_{q_a + 1})\\ & - B_dKLQ^{i}_{q_a}\frac{\Vert C\Vert E_{e,q_a}}{N}\big|\\ \le &\ E_{e,q_a + 2} + \frac{1}{\theta_{na}^2}\frac{\Vert B_dKLQ^{i}_{q_a}\Vert\Vert C\Vert}{N}E_{e,q_a+2}\\ \stackrel{\triangle}{=} &\ \tilde{E}_{e,q_a + 2}. \end{align*} Iteratively, for $\ell \ge 3$, it follows that \begin{align*} &~ |x_{q_a + \ell} - \tilde{x}_{q_a + \ell}|\\ \le&\ E_{e, q_a + \ell} + \frac{1}{\theta_{na}^\ell}\frac{\Vert B_dK\Pi_{K}^{\ell - 1}LQ^{i}_{q_a}\Vert\Vert C\Vert E_{e, q_a}}{N}\\ & + \frac{1}{\theta_{na}^\ell}\frac{\Vert B_dK\Pi_{K}^{\ell - 2}LQ^{i}_{q_a + 1}\Vert\Vert C\Vert(\theta_{a} - \theta_{na}) E_{e, q_a}}{N}\\ & + \sum_{i = 0}^{\ell - 3}\frac{1}{\theta_{na}^{i + 3}}\frac{\Vert B_dK\Pi_{K}^{i}LQ^{i}_{q_a + \ell - i - 1}\Vert\Vert C\Vert}{N} \\ &~~ \times (\theta_0\theta_a - \theta_{na}^2) E_{e, q_a}\\ \stackrel{\triangle}{=}& ~\tilde{E}_{e,q_a + \ell}. \end{align*} Since ${1}/{\theta_{na}}>1$, $\{\tilde{E}_{e,q}\}$ is an increasing sequence, which contradicts the assumption that $\{|x_q - \tilde{x}_q|\}$ is a decreasing sequence. Therefore, it can be concluded that without ACKs, the predictors at the encoder and controller sides may become desynchronized even if there is a single DoS attack. This causes mismatches between their quantization centers and ranges, and there exists $\hat{q} \ge q_a$ such that $E_{e, q} < |x_{q} - \tilde{x}_{q}|$ holds for all $q \ge \hat{q}$, and the state eventually diverges. We have just shown that synchronization between the decoder and encoder is essential. However, an ACK-based protocol is not the only way to achieve this goal. In the absence of ACKs, this challenge can be overcome by using a deadbeat controller, and the proof is given below. Let the number of quantization levels $N$ be even.
We adopt the same quantizer as in (\ref{eq:tcperror})-(\ref{eq:tcpinitial}), with $\hat{x}_q$, $e_q$, and $\hat{y}_q$ replaced by $\hat{x}_{q-1, \eta}$, $e_{q-1, \eta}$, and $\hat{y}_{q - 1, \eta}$, respectively. The observer-based controller is employed only at the decoder side: \begin{subequations}\label{controller1} \begin{align} &\hat{x}_{q, k+1} = A_d\hat{x}_{q,k} + B_du_{q,k}, & k&\le \eta - 1 \label{controller1_1}\\ &\hat{x}_{q} = \hat{x}_{q-1, \eta} + M_q[Q(y_{q}) - \hat{y}_{q-1, \eta}], & k &= \eta \label{controller1_2}\\ &\hat{y}_{q,k} = C \hat{x}_{q,k} \label{controller1_3}\\ &u_{q,k} = K \hat{x}_{q,k} \label{controller1_4} \end{align} \end{subequations} Thanks to the ideal input channel, \begin{equation*}\label{ut=uk} u(t) = u_{q,k}, \qquad q\Delta + k\delta \le t < q\Delta +(k+1)\delta \end{equation*} for every $q \in \mathbb{Z}_{\ge 0}$, and $k = 0, \cdots\!, \eta - 1$. Consider an arbitrary transmission interval $[q \Delta, (q + 1)\Delta)$. From the property (\ref{dbdb}), one gets that $\hat{x}_{q,\eta} = (A_d + B_d K)^{\eta}\hat{x}_{q} = 0$, and $\hat{y}_{q, \eta} = C\hat{x}_{q, \eta} = 0$. It is thus sufficient to choose the quantization center to be the origin, and predictor (\ref{controller1}) is not needed at the encoder side. This saves computational resources. If an attack is launched at $(q + 1)\Delta$, the decoder will not receive the quantized output $Q(y_{q + 1})$ and will instead use a default value of zero. Then, it follows from (\ref{controller1_1})-(\ref{controller1_2}) that $\hat{x}_{q + 1} = \hat{x}_{q,\eta} = 0$, and $u_{q+1} = K\hat{x}_{q + 1} = 0$. On the other hand, in the absence of DoS attacks, since the quantization center is zero and $N$ is even, the quantized value is nonzero. Therefore, the decoder receives a quantized output $Q(y_{q + 1}) \ne 0$. As a result, $\hat{x}_{q + 1} = \hat{x}_{q, \eta} + M(Q(y_{q+1}) - \hat{y}_{q, \eta})= MQ(y_{q + 1}) \ne 0$, and $u_{q+1} = K\hat{x}_{q + 1} \ne 0$.
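This zero/nonzero dichotomy of the control input can be sketched in a toy scalar simulation. The gains below are hypothetical placeholders (not the batch-reactor example later in the paper); the point is only the logic: under the deadbeat controller the quantization center is the origin, so a dropped packet forces a zero input, while a delivered nonzero quantized output forces a nonzero one.

```python
# Toy scalar illustration (hypothetical gains) of the ACK-free detection logic:
# under the deadbeat controller the predicted estimate and output are zero at the
# transmission instant, so the decoder update reduces to x_hat = M * Qy, or 0 on
# a packet drop.

K = 0.7   # hypothetical feedback gain
M = 0.9   # hypothetical observer gain

def decoder_update(Qy):
    """Return (x_hat, u) at a transmission instant.

    Qy is the received quantized output, or None if a DoS attack dropped
    the packet (the decoder then falls back to the default value zero).
    """
    x_hat = 0.0 if Qy is None else M * Qy
    return x_hat, K * x_hat

def encoder_infers_attack(u):
    # Without attacks the quantized output is nonzero (N even, center at the
    # origin), hence u != 0; observing u == 0 therefore reveals a dropped packet.
    return u == 0.0

_, u_attack = decoder_update(None)   # packet lost to DoS -> zero input
_, u_normal = decoder_update(0.35)   # nonzero quantized output received
```

With these placeholder values, `u_normal = K * M * 0.35`, while `u_attack` is exactly zero, which is the signal the encoder exploits.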
This suggests that the encoder can infer whether there is an attack or not from the input signals, thus its quantization ranges can be updated following the same scheme as the decoder. Synchronization between the encoder and decoder is thus secured; what remains is the stability analysis. Since $A_d^{\eta}(I - MC)$ is Schur stable, there exist constants $G_0$, $G_1$, and $0 <\rho <1$ such that \begin{equation}\label{m0m1} \begin{aligned} \big\Vert R^{\ell}\big\Vert \le G_0 \rho^{\ell},\quad \big\Vert {R}^{\ell}A_d^{\eta}M\big\Vert \le G_1 \rho^{\ell}. \end{aligned} \end{equation} Define constants \begin{align*} \tilde{\theta}_{a} := \Vert A_d^{\eta}\Vert,~~ \tilde{\theta}_{0} := G_0\rho + \frac{G_1\Vert C\Vert}{N},~~ \tilde{\theta}_{na} := \rho + \frac{G_1\Vert C\Vert}{N} \end{align*} and the error bound $\{E_q:q \in \mathbb{Z}_{\ge 1}\}$ is updated by \begin{equation}\label{errorboundEq} E_{q + 1} := \left\{ \begin{aligned} & \tilde{\theta}_a E_q, & q\Delta \ne s_r\\ & \tilde{\theta}_0 E_q, & (q-1)\Delta \ne s_r, q\Delta = s_r\\ & \tilde{\theta}_{na}E_q, & (q-1)\Delta = s_r, q\Delta = s_r \end{aligned} \right.. \end{equation} The following result is an extension of Thm. \ref{th:outputtheorem1} under an ACK-free protocol, whose proof follows from that of Thm. \ref{th:outputtheorem1}. \begin{theorem}\label{1convergetheorem} Consider system (\ref{continuoussystem}) equipped with the controller in (\ref{controller1}), where $M$ and $K$ are chosen such that $R$ is Schur stable and (\ref{dbdb}) is met. Let As. \ref{as:abca}--\ref{DoS_durationassumption} hold.
If i) the output and input transmission periods satisfy (\ref{delta}), ii) the number of quantization levels $N$ is even and obeys \begin{equation}\label{Ncondition} N > \frac{G_1\Vert C\Vert}{1 - \rho} \end{equation} and iii) DoS attacks satisfy \begin{equation}\label{1doscondition} \frac{1}{\nu_d} \le \frac{\log{(1/\tilde{\theta}_{na})}}{\log{(\tilde{\theta}_a/\tilde{\theta}_{na})}} - \frac{\log{(\tilde{\theta}_0/\tilde{\theta}_{na})}}{\log{(\tilde{\theta}_a/\tilde{\theta}_{na})}}\frac{1}{\nu_f} \end{equation} then the system is exponentially stable under the encoding scheme with error bound $\{E_q:q \in \mathbb{Z}_{\ge 1}\}$ constructed by (\ref{errorboundEq}). \end{theorem} \begin{figure}[b] \centering \includegraphics[width=9cm]{doubledblqnormxbd.pdf}\\ \caption{Maximum norm of state $x$ and its estimate $\hat{x}$ with controller (\ref{abdoscontroller}).}\label{figdoubledblqnormx} \centering \end{figure} \begin{figure} \centering \includegraphics[width=9cm]{doubledblqnormE3bd.pdf}\\ \caption{Relationship between normalized quantization range $E_{3,k}/N_3$ and actual error $|y_q - Q_1(\hat{y}_q)|$ with controller (\ref{abdoscontroller}).}\label{figdoubledblqnormE} \centering \end{figure} \begin{figure} \centering \includegraphics[width=9cm]{doubledblqnormE2bd.pdf}\\ \caption{Normalized quantization range $E_{2,q,k}/N_2$ with controller (\ref{abdoscontroller}).}\label{figdoubledblqnormE2} \centering \end{figure} \begin{figure} \centering \includegraphics[width=9cm]{logdblq.pdf}\\ \caption{Normalized quantization ranges $E_{3,q}/N_3$ from using general observer gain and deadbeat observer gain in log space.}\label{figlogdblq} \centering \end{figure} \begin{figure} \centering \includegraphics[width=9cm]{doubledbdbnormxbd.pdf}\\ \caption{Maximum norm of state $x$ and its estimate $\hat{x}$ with controller (\ref{abdoscontroller}) and deadbeat observer gain.}\label{figdoubledbdbnormx} \centering \end{figure} \section{Numerical Example} \label{simulation} A linearized model of
the unstable batch reactor in \cite{8880482} is given by $\dot{x}(t) = A x(t) + B u(t)$ and $y = C x(t)$, where \begin{align*} A := \left[ \begin{matrix} 1.38 & -0.2077 & 6.715 & -5.676\\ -0.5814 & -4.29 & 0 & 0.675\\ 1.067 & 4.273 & -6.654 & 5.893\\ 0.048 & 4.273 & -1.343 & -2.104 \end{matrix} \right],\\ B := \left[ \begin{matrix} 0 & 0\\ 5.679 & 0\\ 1.136 & -3.146\\ 1.136 & 0 \end{matrix} \right], C := \left[ \begin{matrix} 1 & 0 & 1 & -1\\ 0 & 1 & 0 & 0 \end{matrix} \right]. \end{align*} This system $(A, B, C)$ is observable and controllable with $\eta =\mu = 2$. Let the output transmission period be $\Delta = 0.2$, so $\delta = \Delta/\eta = 0.1$. We choose the matrix $K$ such that (\ref{dbdb}) is met, i.e., \begin{align*} & K := \left[ \begin{matrix} 1.0106 & -1.5661 & 0.0385 & -4.0366\\ 8.1074 & -0.0347 & 4.3337 &- 3.6241 \end{matrix} \right]. \end{align*} The gain of the steady-state Kalman filter is calculated as \begin{align*} M := \left[ \begin{matrix} 0.5534 & -0.0249\\ -0.0287 & 0.0396\\ 0.1489 & 0.0892\\ 0.0810 & 0.0931 \end{matrix} \right]. \end{align*} We first present the time responses when both input and output channels suffer from the network phenomena. Applying Thm. \ref{2convergetheorem}, when both the quantization levels $N_2$ and $N_3$ go to infinity, the duration bound $1/\nu_d$ and the frequency bound $1/\nu_f$ of DoS attacks approach the line $\frac{1}{\nu_d} \approx -0.5544\frac{1}{\nu_f} + 0.2707$. According to (\ref{2doscondition}), if $\frac{1}{\nu_d} < -2.0380\frac{1}{\nu_f} + 0.2269$, then the closed-loop system with encoding schemes (\ref{2E1q})-(\ref{2E3q}) is stabilized. Over a simulation horizon of $160$s ($800$ time steps), DoS attacks (the gray shaded regions) are generated randomly with $\Phi_d = 47$ and $\Phi_f = 44$. Setting $\kappa_d = 3, \nu_d = 18, \kappa_f = 2, \nu_f = 19$, condition (\ref{2doscondition}) holds, i.e., $1/\nu_d = 0.056 < 0.119$. Figs.
\ref{figdoubledblqnormx} and \ref{figdoubledblqnormE} illustrate the time response in this situation. Since the condition in Thm. \ref{2convergetheorem} is satisfied, the maximum norm of the state converges, and the bound $E_{3,q}$ decreases exponentially. Fig. \ref{figdoubledblqnormE} shows that $E_{3,q}$ shares the same trend as $|y_q - Q_1(\hat{y}_{q})|$, and Fig. \ref{figdoubledblqnormE2} demonstrates the evolution of the quantization step size $E_{2,q,k}/N_2$, which jumps up and down within an output transmission period, and decreases in general. The difference between the trends of $E_{3,q}/N_3$ and $E_{2,q,k}/N_2$ lies in the properties of $\Vert R \Vert$ and $\Vert \bar{R}\Vert$. Fig. \ref{figlogdblq} compares the quantization step size $E_{3,q}/N_3$ of a general observer gain (blue line), such that $R$ is Schur stable, and the deadbeat observer gain (dot-marked green line), namely $R^{\mu} = 0$. This panel illustrates that although $E_{3,q}$ responds faster under the deadbeat observer, the large quantization step size results in a large overshoot of the state; see Fig. \ref{figdoubledbdbnormx}, which confirms Rmk. \ref{2dbdbremark}. \begin{figure} \centering \includegraphics[width=9cm]{siglenormx.pdf}\\ \caption{Maximum norm of state $x$ and its estimate $\hat{x}$ with controller (\ref{controller1}).}\label{figsiglenormx} \centering \end{figure} \begin{figure} \centering \includegraphics[width=9cm]{sigleerrorE.pdf}\\ \caption{Relationship between normalized quantization range $E_q/N$ and actual error $|e_q|$ with controller (\ref{controller1}).}\label{figsigleerrorE} \centering \end{figure} \begin{figure} \centering \includegraphics[width=9cm]{sigleu.pdf}\\ \caption{Input signal $u_{q,k}$ with controller (\ref{controller1}).}\label{figsigleu} \centering \end{figure} Next, consider network phenomena only at the output channel. From (\ref{Ncondition}), the number of quantization levels satisfies $N > 6.957$; since $N$ must also be even, we set $N = 100$.
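The admissible DoS duty bound on the right-hand side of (\ref{1doscondition}) is straightforward to evaluate once the constants $\tilde{\theta}_a$, $\tilde{\theta}_0$, and $\tilde{\theta}_{na}$ are known. The sketch below uses hypothetical placeholder values (not the constants of the batch-reactor example) purely to illustrate the computation:

```python
import math

def dos_duration_bound(theta_a, theta_0, theta_na, inv_nu_f):
    """Right-hand side of the DoS condition: an upper bound on 1/nu_d.

    theta_a > 1 is the growth factor without updates, theta_0 and
    theta_na < 1 the contraction factors, and inv_nu_f the attack
    frequency ratio 1/nu_f. All values here are placeholders, not the
    paper's computed constants.
    """
    denom = math.log(theta_a / theta_na)
    return (math.log(1.0 / theta_na) / denom
            - math.log(theta_0 / theta_na) / denom * inv_nu_f)

# Hypothetical constants: growth 1.2 per step, contraction factors 0.9 and 0.8.
bound = dos_duration_bound(theta_a=1.2, theta_0=0.9, theta_na=0.8, inv_nu_f=0.05)
# An attack duty ratio 1/nu_d below `bound` satisfies the stability condition.
```

As expected from the formula, the bound shrinks as the attack frequency $1/\nu_f$ grows, since more frequent attacks leave fewer consecutive successful transmissions to contract the error.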
Over a simulation horizon of $60$s ($300$ time steps), DoS attacks are generated randomly with $\Phi_d = 27$ and $\Phi_f = 25$. Setting $\kappa_d = 1, \nu_d = 11, \kappa_f = 1, \nu_f = 11$, condition (\ref{1doscondition}) is met with $1/\nu_d = 0.01 < 0.198$, and convergence of the state is presented in Figs. \ref{figsiglenormx} and \ref{figsigleerrorE}. Further, Fig. \ref{figsigleu} shows that when a DoS attack happens, the control input is set to zero immediately, which verifies the effectiveness of our method. \section{Conclusions}\label{conclusion} This paper considered the problem of stabilizing networked control systems in the presence of DoS attacks and limited data rates. To overcome the network-induced challenges, a structure consisting of a deadbeat controller and a transmission protocol, carefully co-designed based on the system controllability index, was proposed. Specifically, when both input and output channels are subject to the network phenomena, it was shown that the proposed structure can decouple and thus allow for separate design of encoding schemes for the input, output, and estimated output signals. Furthermore, easy-to-check conditions were derived such that exponential stability of the closed-loop system under this structure is ensured. On the other hand, when only the output channel is subject to the network phenomena, the proposed structure was shown to guarantee synchronization between the encoder and decoder under an ACK-free protocol. Finally, a numerical example was presented to verify the effectiveness of our approach as well as the correctness of our theory. Future developments will focus on generalizing the results to broader classes of systems and controllers under ACK-free protocols. \bibliographystyle{IEEEtran}
\section{Introduction} Scientific literature is the entry point to the understanding of any research topic. While identifying key ideas in a text is useful when reading articles to spot important words and phrases, much information is usually scattered across many different articles and often can only be derived by mental connections. Reading several articles about one topic is an essential and yet time-consuming activity aimed at making mental connections and coming up with new hypotheses which are not yet explicitly reported in the literature itself. Automatic text analyses can facilitate the researcher's mental process by speeding up tasks such as keyword and key-phrase identification, as well as by making cross-connections amongst a large number of papers. Text mining (TM) is the process of extracting information from documents by identifying text patterns via computational and statistical approaches. Some TM tools such as SciLite \cite{Venkatesan} aim at keyword highlighting. In contrast, to support the process of extracting essential information, I developed PubSqueezer (http://www.pubsqueezer.com), a tool which analyses and integrates multiple articles in order to extract information that is not explicitly written. PubSqueezer aims to transform unstructured collections of documents into a structured format which can be used for literature exploration and Data Science analyses. \section{Results} PubSqueezer is a text-mining web engine available at http://www.pubsqueezer.com which directly queries PubMed (https://pubmed.ncbi.nlm.nih.gov/). By downloading publications from PubMed, PubSqueezer analyses the latest 2,000 publications (those with abstracts available) about a given topic, mines gene names, key phrases and words, and ranks them according to their relevance. Due to the limited computational power of the hosting server, the analysis takes some time, but each analysis is saved and listed on the homepage. The homepage (Figure \ref{figure1}) provides two lists.
The one on the left is a list of selected queries, while the list on the right contains users' queries. The more PubSqueezer is used, the more analyses will be publicly available. These two lists can save time for any user searching for something already queried in the past. \begin{figure*} \includegraphics[width=\textwidth]{F1.jpg} \caption{PubSqueezer Homepage} \label{figure1} \end{figure*} The result page (Figure \ref{figure2}) is divided into three tabs. The first tab contains key terms and phrases. The second and third tabs are lists of biological pathways and processes (obtained through GO enrichment \cite{go}) which are relevant to the query. Each result is directly linked to PubMed, Google, Google Scholar and Bing to allow the user to retrieve external information directly. Finally, all the lists shown in the three tabs can be downloaded as a zip file for further computational analyses. \begin{figure*} \includegraphics[width=\textwidth]{F2.jpg} \caption{PubSqueezer Results page. You can pick genes, key phrases or terms ranked according to significance score. The three tabs on top allow you to explore more details.} \label{figure2} \end{figure*} PubSqueezer also has a section dedicated to Rare Diseases (Figure \ref{figure3}). This section shows similarities among rare diseases according to features varying from symptoms to genes. The network is interactive: clicking on nodes shows details about a disease, while clicking on links shows what two conditions have in common. \begin{figure*} \includegraphics[width=\textwidth]{F3.jpg} \caption{One of the Rare Disease Similarity Networks available in PubSqueezer. Clicking on nodes and edges provides extra information: clicking on diseases shows symptoms, genes, etc., while clicking on edges shows what two pathologies have in common, i.e., common pathways, symptoms, etc.} \label{figure3} \end{figure*} \section{Methods} All PubSqueezer results are obtained through statistical hypothesis testing.
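A minimal sketch of this kind of randomization test is given below. The counts, background layout, and helper names are hypothetical illustrations; PubSqueezer's actual implementation may differ in its details:

```python
import random

def randomization_pvalue(term_count_query, query_size, background,
                         n_samples=2500, seed=0):
    """Empirical p-value that a term is enriched in a query set.

    `background` is a list of 0/1 flags marking, for each background
    abstract, whether it mentions the term. We draw `n_samples` random
    samples of `query_size` abstracts (without replacement) and count how
    often the sampled term frequency reaches the observed one.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        sample_count = sum(rng.sample(background, query_size))
        if sample_count >= term_count_query:
            hits += 1
    # Add-one correction keeps the empirical p-value strictly positive.
    return (hits + 1) / (n_samples + 1)

# Hypothetical example: a term appears in 40 of 200 query abstracts,
# but only in ~2% of a 10,000-abstract background.
background = [1] * 200 + [0] * 9800
p = randomization_pvalue(term_count_query=40, query_size=200,
                         background=background)
```

Terms whose empirical p-value falls below the chosen significance threshold would then be ranked by that p-value, as described below.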
All texts are preprocessed to remove unnecessary terms, then lemmatized and tokenized. In order to perform a statistical test, a large heterogeneous background set was built. PubSqueezer contains a large background of publications built by downloading 2,000 publications for each one of the following keywords: proteomic, proteomics, gene, genetic, genetics, genomic, genomics, DNA, RNA, pathology, syndrome, disease, metabolism, metabolic. These terms were chosen to make the background as varied and as large as possible, thus allowing relevant terms to ``emerge'' through statistical randomization testing. Upon a user's request, PubSqueezer downloads a set of publications from PubMed and compares its content against the background. As an example, every gene name and keyword which has a significant p-value against the background is considered relevant to the topic. The test is done by comparing the query against 2,500 random samples of the same size as the user's set. The number 2,500 was calculated considering an upper bound to obtain a precision level of 0.01 \cite{pvalue}. All terms considered relevant to the queried topic are ranked according to their p-values. Key phrases are extracted by scanning abstracts with the RAKE algorithm \cite{rake}. Finally, pathways and biological processes are derived from KEGG \cite{kegg} using significant gene names. Each result is directly linked to the PubMed, Google, Google Scholar and Bing websites to allow further investigation. \begin{table*} \begin{tabular}{cc} \includegraphics[height=1.5in]{egfr.png} & \includegraphics[height=1.5in]{mtor.png}\\ \includegraphics[height=1.5in]{notch.png} & \includegraphics[height=1.5in]{alz.png}\\ \end{tabular} \caption{PubSqueezer ranked lists vs. TF-IDF. Comparison of the p-values obtained for some processes using gene names filtered at different thresholds.
Overall, PubSqueezer's term ranking appears better than TF-IDF.} \label{table1} \end{table*} To assess the quality of PubSqueezer results, I tested it on four different processes: three pathways and one pathology. Using the results obtained, I compared the scores assigned by PubSqueezer with those obtained with the classic TF-IDF \cite{tfidf} algorithm. Overall, it appears that PubSqueezer's ranking works better than TF-IDF. Taking the PubSqueezer and TF-IDF lists and iteratively cutting them at different thresholds, one can select the top-X results. Using these genes with GO enrichment \cite{go}, it is possible to see whether the expected results obtain a significant p-value. As shown in the plot-table (Table \ref{table1}), PubSqueezer - with the exception of EGFR - tends to assign better scores (higher in the ranking) to the more important genes. Finally, to test whether it is possible to derive facts not explicitly reported in papers using PubSqueezer, I analysed Alzheimer's disease and Parkinson's disease which, other than having disease-specific pathways, share common processes that cannot be detected by simple disease-specific gene ontology enrichment analysis \cite{calderone}. I performed this analysis removing possible hints for the algorithm, i.e., papers which potentially mention these common processes. \textbf{Query 1}: Alzheimer NOT Glucose NOT Phosphate NOT Metabolism NOT DNA NOT damage NOT Apoptosis NOT (Cell AND Cycle) NOT Protein NOT Localization NOT Vesicles NOT Trafficking NOT RNA NOT regulation NOT transcription \textbf{Query 2}: Parkinson NOT Glucose NOT Phosphate NOT Metabolism NOT DNA NOT damage NOT Apoptosis NOT (Cell AND Cycle) NOT Protein NOT Localization NOT Vesicles NOT Trafficking NOT RNA NOT regulation NOT transcription The following table shows the biological processes implicitly recoverable through GO enrichment \cite{go} from terms not explicitly reported, both from the two single queries and from their intersection.
In other words, the articles never mentioned any of those topics and yet, using the gene names ranked by PubSqueezer, it is actually possible to recover those processes. \end{multicols} \begin{table}[h] \centering \begin{tabular}{|l|l|l|l|} \hline \textbf{Common processes} & \textbf{Alzheimer} & \textbf{Parkinson} & \textbf{Intersection}\\ \hline Glucose Metabolism & NO & YES & NO\\ \hline Phosphate Metabolism & YES & YES & YES\\ \hline DNA Damage & NO & NO & NO\\ \hline Apoptosis & YES & YES & YES\\ \hline Cell Cycle & YES & NO & NO\\ \hline Protein Localization Vesicles Trafficking & YES & YES & YES\\ \hline RNA Metabolism & YES & YES & NO\\ \hline Regulation of Transcription & YES & YES & YES\\ \hline \end{tabular} \end{table} \begin{multicols}{2} \section{Structured Data Exploitation through Data Science} Other than using the web interface to explore scientific literature by going directly to facts which are statistically relevant to the query, it is also possible to download results as comma-separated values (CSV) files so that they can be processed with Data Science strategies. In this section, I show two examples of how to use such data to perform more complex analyses using computational methods. \subsection{Rare Diseases: discovering connections despite the lack of direct publications} The first example aims to help rare disease research by projecting knowledge (literature) from one disease to another. Most rare diseases have little literature, which hampers research and the understanding of the conditions themselves. One possibility is to look at similar pathologies for which literature is available and use them to speculate on some other less studied but connected condition. \end{multicols} \begin{figure}[H] \centering \includegraphics[width=400px]{covid.jpg} \caption{SARS-CoV-2 heat-map comparisons against other conditions.
Note also the similarities among mental conditions and forms of cancer.} \label{figure4} \end{figure} \begin{multicols}{2} This first strategy aims at building a network of similarities so that the information known about one rare disease can be used to explore similarities with other rare diseases. I used PubSqueezer to query PubMed on thousands of rare diseases. Each query results in CSV files which can be interpreted as ``features'' of each condition. Considering these features, one can calculate similarities, e.g., which genes two pathologies have in common, and link two conditions according to the magnitude of the similarity. To compute these similarities I used the classic cosine similarity, which tolerates missing components and heterogeneous feature magnitudes. The final result is shown in Figure \ref{figure3}. In the PubSqueezer home page interface you can find several maps to explore similarities on different levels. \subsection{SARS-CoV-2: using data to get an overview} A second use case is the exploration of literature at a glance. When one starts studying a new topic, it is mandatory to dig into the literature to find out what is known so far. Usually one relies on scientific reviews, which are built upon many other scientific publications. PubSqueezer, in a way, produces automatic reviews of a topic. In particular, I show what the SARS-CoV-2 query extracts and how we can use this information to make sense of it, or at least understand what is known so far. To go beyond the results page, I explored SARS-CoV-2 (Figure \ref{figure4}) using cosine similarity against other conditions. I built similarity heat-maps to highlight the cross-similarities with SARS-CoV-2. This analysis recovers known facts: SARS-CoV-2 is similar to pneumonia \cite{pneumonia} and influenza, and it also has some correlation with diabetes \cite{diab}, especially at the level of the biological pathways involved.
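The cosine similarity used for these comparisons can be sketched over sparse feature dictionaries, where missing features are simply treated as zeros. The disease names and feature weights below are made up for illustration, not PubSqueezer's actual data:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two sparse feature vectors.

    `a` and `b` map feature names (genes, key phrases, pathways, ...) to
    weights, e.g. significance scores. Features absent from a dictionary
    count as zero, so missing components are tolerated naturally, and the
    normalization handles heterogeneous feature magnitudes.
    """
    shared = set(a) & set(b)
    dot = sum(a[k] * b[k] for k in shared)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Made-up feature weights for two hypothetical conditions:
disease_x = {"TP53": 3.0, "EGFR": 2.0, "apoptosis": 1.5}
disease_y = {"TP53": 2.5, "apoptosis": 1.0, "MTOR": 2.0}
sim = cosine_similarity(disease_x, disease_y)  # in [0, 1] for nonnegative weights
```

Linking pairs of conditions whose similarity exceeds a threshold yields exactly the kind of network shown in Figure \ref{figure3}, and stacking all pairwise values produces heat-maps like Figure \ref{figure4}.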
\section{Conclusions} While this web tool is still at a very preliminary stage, I have shown in this article that it can be useful to squeeze multiple articles into structured data that can be further manipulated to derive new information. The hosting server is very limited but, hopefully, in the future I will migrate the tool to a better server. The results obtained with PubSqueezer are not perfect, but they are already valid, as shown in the Methods; they can help provide a quick overview of a topic and, most importantly, transform unstructured data into structured data to promote Data Science analyses. Possibly, the ideas in this draft article can be exploited by others. \section{Acknowledgements} I would like to thank Dr. Elisa Micarelli for supporting my work, listening to my crazy talking, and for her suggestions on the preliminary interface. I would also like to thank Dr. Andrea Cerquone Perpetuini for listening to my work, for helping me try different queries and cases, and for trying this version of the web interface. Finally, I would like to express my sincere gratitude to Dr. Elena Santonico for her constant support, for reading this draft, and for trying this version of the web interface.
\section{Introduction} \begin{figure*}[t] \center \includegraphics[width=1\textwidth,height=.3\textheight,angle=0]{f1} \caption{The X-ray spectrum of H~2356-309, taken from the observation \#10498, plotted in the observer's frame. The left panel shows the entire spectrum between 1 and 40 \AA. The red line is a simple fit with a power law and the Galactic absorption. The 22.05 \AA\ feature (indicated by a red arrow) is clearly visible. The right panel shows the spectrum between 21 and 22.5 \AA.\ The red line is the model with three absorption features, one at 21.6 \AA\ (local Galactic \ion{O}{7} $K_{\alpha}$ absorption), one at 22.05 \AA\ (\ion{O}{8} $K_{\alpha}$ absorption intrinsic to the BL Lac), and one at 22.3 \AA\ (redshifted \ion{O}{7} $K_{\alpha}$ absorption associated with the WHIM in the Sculptor Wall). We fixed the Galactic and WHIM lines at the values measured in F10. Also shown in the inset is the stacked spectrum of H~2356-309 from the other ten Chandra observations (see F10). The two red absorption lines are the local and the Sculptor Wall (WHIM) \ion{O}{7} $K_{\alpha}$ lines (see F10). The green arrow indicates the wavelength of the transient feature seen in the observation \#10498.} \label{f:abs} \end{figure*} Blazars, characterized by their highly polarized emission in the optical band and strong variability at almost all frequencies, are often interpreted as active galactic nuclei (AGNs) with relativistic jets beamed toward us (see, e.g., \citealp{ang80}). BL Lac objects, which are a sub-class of blazars, typically exhibit weak or no spectral features in emission or absorption at all wavelengths (e.g., \citealp{urr95}). In particular, the very few weak absorption features detected in the optical band are believed to originate in the interstellar medium of the host galaxy (e.g., see \citealp{sba05,plo10}) and have been used to determine the redshift of the BL Lac object.
Therefore, unlike the typical warm absorbers seen in AGNs, optical absorption lines in BL Lac objects offer no information about the immediate environment of the central black holes. However, in the X-ray band, \citet{can84} reported the first detection of an absorption feature in the spectrum of the BL Lac object PKS~2155-304, using the objective grating spectrometer on the {\sl Einstein Observatory}. Since then a number of X-ray absorption features have been reported (see, e.g., \citealp{urr86,mad91,gra97,sam97}), leading to the conclusion that such X-ray absorption features are quite common in the spectra of BL Lac objects. These features were typically broad (with a width of a few tens of eV up to a few hundred eV) in the soft X-ray band, and were often interpreted as resonant absorption from highly ionized oxygen originating in a high-velocity outflow (up to a few 10,000 $\rm km\ s^{-1}$) intrinsic to the BL Lac object (e.g., \citealp{kro85}). These discoveries demonstrate that X-ray absorption features can provide an extremely valuable probe of the central region of BL Lac objects. Since the launch of the {\sl Chandra}~and {\sl XMM}-Newton~X-ray telescopes, a number of BL Lac objects have been observed with unprecedentedly high spectral resolution. However, so far no {\it intrinsic} X-ray absorption lines have been detected. {\it Non-intrinsic} X-ray absorption features have been reported in these BL Lac observations. But unlike previously detected features, when observed with high resolution these features are typically narrower (width of a few eV or less) and often attributed to foreground Galactic (e.g., \citealp{nic02,fan03,ras03}) or intergalactic origins (e.g., \citealp{fan02,nic05b}; \citealp{buo09} -- B09 hereafter; \citealp{fan10} -- F10 hereafter).
They did not detect any broad features and argued that the previous detections were affected by poor spectral quality, calibration uncertainties, as well as the simplification of the continuum model. Although in \citet{blu04} they found a few highly significant features (more than expected from statistical fluctuations), they were not able to find a plausible identification of them, casting doubt on the existence of any absorption lines intrinsic to BL Lac objects. In this paper, we report the serendipitous detection of a transient absorption feature during our multiple observations of the BL Lac object H~2356-309 with gratings on board the {\sl Chandra}~and {\sl XMM}-Newton~X-ray telescopes. The primary science goal was to study the narrow absorption features produced by the warm-hot intergalactic medium (WHIM) along the sight line toward the BL Lac object. We clearly detected an \ion{O}{7} absorption line produced by the WHIM in the Sculptor Wall, a superstructure along the sight line at $z\sim0.03$ (B09, F10). During one of the exposures (observation \#10498), a strong absorption feature was identified at $\sim 22.05$ \AA.\ None of the other 11 {\sl Chandra}~and {\sl XMM}-Newton~observations showed this feature. In this paper we discuss several possible origins of this transient feature and conclude that it is unlikely to be instrumental. The most likely explanation is an intrinsic, transient feature produced by hydrogen-like oxygen. We also discuss the constraints on the temperature and ionization structure.
With {\sl XMM}-Newton~it was observed in 2007 for approximately 130 ksec (ObsID 0504370701; see B09). With {\sl Chandra}~it was observed first in 2007 during cycle 8 for 100 ksec, and then again in 2008 during cycle 10 in ten separate exposures totaling 500 ksec. The {\sl Chandra}~exposures range from $\sim$ 15 to 100 ksec (see Table 1 of F10). Observation \#10498 was performed on September 22nd, 2008, for 80 ksec. As in B09 and F10, we followed the standard procedures to extract the spectra. We used the software package CIAO (Version 4.0\footnote{see http://asc.havard.edu/ciao}) and calibration database CALDB (Version 3.5\footnote{see http://asc.havard.edu/caldb}) developed by the {\sl Chandra} X-ray Center. We refer readers to B09 and F10 for details of data extraction, and only want to emphasize a few issues here. First, we have generated our own type II pha file, rather than using the file produced by the standard pipeline (Reprocessing III), to take advantage of an improved background filter not yet available in the standard processing (Wargelin et al. 2009)\footnote{see http://cxc.harvard.edu/contrib/letg/GainFilter/software.html.}. Secondly, to account for the high-order contributions of the LETG-HRC, we built a combined response matrix to include the first to the sixth-order contributions (see B09 and F10 for details). Finally, we rebinned the spectrum so that we have at least 40 counts per bin to enhance the spectral signal-to-noise ratio. We fitted the continuum with a model that includes a power law and the Galactic neutral hydrogen absorption and found this simple model is adequate in describing the overall broadband spectrum. For the observation \#10498, we found a power law photon index of $\Gamma=1.784\pm0.027$, and a 0.5 --- 2 keV flux of $1.94 \times 10^{-11}\rm\ ergs\ cm^{-2}s^{-1}$ (see F10 for details). 
Unless otherwise noted, errors are quoted at the 90\% confidence level throughout the paper.\\ \section{Modeling} \subsection{Intrinsic Absorption} In the left panel of Figure~\ref{f:abs} we plot the X-ray spectrum of H~2356-309 between 1 and 40 \AA\ for observation \#10498, in the observer's frame. In the right panel we show the enlarged portion between 21 and 22.5 \AA.\ An absorption feature is prominently located at $\sim 22.05$ \AA.\ In the inset we show the stacked spectrum of the remaining nine {\sl Chandra}~observations, and indicate the wavelength of this feature, which was not detected, with a green arrow. In this inset the absorption feature seen at $\sim 22.3$ \AA\ is an \ion{O}{7} K$\alpha$ absorption line produced by the WHIM gas in the Sculptor Wall (see B09 and F10). There is no known instrumental feature at this wavelength ({\sl Chandra}~Proposers' Observatory Guide, or POG\footnote{See http://cxc.harvard.edu/proposer/POG/}). We examined both the plus and minus orders, and this feature is present on both sides with similar strength. The total exposure time of this observation is $\sim$ 77 ksec. We also checked the consistency by splitting the exposure into two 38 ksec exposures, and found that this feature is present in both. We also checked the background spectrum and did not find any anomaly at this location that could have caused such an absorption feature. Considering also the transient nature of this feature, we conclude that it is not instrumental in origin. With the assumption that this feature is intrinsic to H~2356-309, we examine the possible ion species based on a combination of chemical abundance and line strength (the oscillator strength $f$).
Given the detected wavelength, and assuming a very generous velocity range ($\pm30,000$ $\rm km\ s^{-1}$), the likely line transitions are \ion{O}{7} $K_{\beta}$ at $\lambda_{rest}=18.63$ \AA,\ \ion{Ca}{18} at $\lambda_{rest}=18.70$ \AA,\ \ion{Ar}{15} at $\lambda_{rest}=18.82$ \AA,\ \ion{O}{8} at $\lambda_{rest}=18.97$ \AA,\ and \ion{Ca}{17} at $\lambda_{rest}=19.56$ \AA.\ Here we select only ion species with $f>0.1$. Considering that both calcium and argon are orders of magnitude less abundant than oxygen, and that we did not detect the corresponding \ion{O}{7} $K_{\alpha}$ transition, the most likely candidate is an intrinsic \ion{O}{8} $K_{\alpha}$ absorber in an outflow. We fitted the spectrum of observation \#10498 with a model that includes the following components: (1) Galactic neutral hydrogen absorption with a fixed column density of $N_H = 1.33\times 10^{20}$ ${\rm cm^{-2}}$~\citep{dic90}; (2) a power law; (3) the WHIM and Galactic \ion{O}{7} $K_{\alpha}$ absorption lines at 21.6 and 22.3 \AA, respectively (B09 and F10);\ and (4) an intrinsic absorption line at $\sim$ 22.05 \AA.\ We fixed component (3) - the Galactic and the WHIM absorption lines - at the values obtained in F10. We chose the Voigt-profile based model that was described in B09 and F10 to fit the absorption feature (4); however, the exact form of the absorption line model is not important here, as long as the model can provide an adequate description of this feature. We limited the redshift range of this feature to account for an outflow velocity within $10,000$ $\rm km\ s^{-1}$,\ since most observed outflows from AGNs have velocities in the range of a few hundred to a few thousand $\rm km\ s^{-1}$~\citep{cre03}. We performed the fit by minimizing the $C$-statistic, which is identical to maximizing the Poisson likelihood function \citep{cas79}, and yields less biased best-fitting parameters than the standard $\chi^2$ implementation (see \citealp{hum09} for details).
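The identification argument above can be checked arithmetically: each candidate rest wavelength implies a redshift for the 22.05 \AA\ feature, which can be compared with the systemic redshift $z=0.165$. A minimal Python sketch (wavelengths as quoted above; the plain-text candidate labels are stand-ins for the ion notation):

```python
# Implied redshift of the 22.05 A feature for each candidate transition,
# compared with the systemic redshift of H 2356-309 (z = 0.165).
lam_obs = 22.05   # observed wavelength (Angstrom)
z_sys = 0.165     # systemic redshift (Falomo et al. 1991)

candidates = {    # rest wavelengths quoted in the text (Angstrom)
    "O VII K-beta": 18.63,
    "Ca XVIII": 18.70,
    "Ar XV": 18.82,
    "O VIII K-alpha": 18.97,
    "Ca XVII": 19.56,
}

implied = {name: lam_obs / lam_rest - 1.0 for name, lam_rest in candidates.items()}
for name, z in implied.items():
    print(f"{name:15s}  z = {z:.4f}  |z - z_sys| = {abs(z - z_sys):.4f}")

# O VIII K-alpha gives the redshift closest to the systemic value
best = min(implied, key=lambda name: abs(implied[name] - z_sys))
```

Only the \ion{O}{8} identification lands within a few thousandths of the systemic redshift; the other candidates would require substantially larger velocity offsets on top of their lower abundances.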
Figure~\ref{f:abs} shows the \#10498 spectrum (black) and the fitted model (red). The 22.05 \AA\ line has an equivalent width (EW) of $70.5\pm20.5$ m\AA.\ We are not able to constrain the upper limit on the \ion{O}{8} $K_{\alpha}$ column density, and obtain a best-fit value of $7.6 \times 10^{17}\rm\ cm^{-2}$, with a lower limit of $6.1 \times 10^{16}\rm\ cm^{-2}$. The absorption feature is at a redshift of $z=0.163\pm0.001$ in the observer's frame. We also found a Doppler-$b$ parameter of $278.8^{+584.8}_{-159.6}$ $\rm km\ s^{-1}$. Thermal broadening can provide a $b$ parameter of at most $100$ $\rm km\ s^{-1}$~(for hot gas with temperatures up to a few $10^7$ K), indicating that a velocity gradient along the sight line plays a significant role in line broadening. If left free, both the Galactic line at 21.6 \AA~and the WHIM line at 22.3 \AA~are statistically insignificant due to low photon counts. This is consistent with what we found in the other {\sl Chandra}~observations (see Figure~2 of F10): both lines are weak in each individual spectrum and can only be detected when all the observations are analyzed simultaneously. An accurate measurement of the BL Lac redshift is necessary to determine the outflow velocity. Based on the measurement of the optical absorption lines produced by the interstellar medium in the host galaxy, \citet{fal91} determined that the redshift of H~2356-309 is $z=0.165\pm0.002$. We find the measured redshift of the \ion{O}{8} $K_{\alpha}$ absorption line is consistent with that of H~2356-309, with an outflow velocity of at most $1500$ $\rm km\ s^{-1}$. We used Monte-Carlo simulations to assess the significance of this transient line. We fitted the \#10498 spectrum with a model that does not include the intrinsic line. Compared with the fit that includes the intrinsic line, we found a decrease in the $C$-statistic of $\Delta C_{obs} = 21.3$.
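The two consistency checks quoted above, the thermal-broadening ceiling on $b$ and the outflow velocity implied by the redshifts, reduce to one-line estimates. A sketch in Python (cgs constants; the oxygen atomic mass is the only input not stated explicitly in the text):

```python
import math

k_B = 1.380649e-16     # Boltzmann constant (erg/K)
m_O = 16 * 1.6726e-24  # mass of an oxygen atom (g)

# Thermal Doppler parameter b = sqrt(2 k T / m) for oxygen at T = 10^7 K
T = 1e7                                          # K
b_thermal = math.sqrt(2 * k_B * T / m_O) / 1e5   # km/s, ~100
# far below the measured b ~ 280 km/s, so bulk motion must dominate

# Outflow velocity from the line redshift vs. the systemic redshift
c = 2.998e5                                      # km/s
z_abs, z_sys = 0.163, 0.165
v_out = c * (z_abs - z_sys) / (1 + z_sys)        # km/s, ~ -500 (blueshifted)
```

Propagating the quoted uncertainties on both redshifts pushes the maximum blueshift to roughly the $1500$ $\rm km\ s^{-1}$ bound stated above.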
This intrinsic line was discovered when we studied the 21 to 22.5 \AA~spectral region of 12 observations (11 {\sl Chandra}~and one {\sl XMM}-Newton). For the Monte-Carlo simulation, in each trial we made 12 mock spectra in this spectral region to mimic the 12 observations. Specifically, each mock spectrum was made with the model obtained from the real observation (see F10 for model parameters) but without any intrinsic line, i.e., we used the same power law index, normalization, exposure time, and two absorption lines (the Galactic and the WHIM) for that observation. We then searched each mock spectrum between 21 and 22.5 \AA~to identify any negative feature that could give a decrease $\Delta C$ equal to or larger than $\Delta C_{obs}$. We ran a total of 40,000 trials, and found that in 38 trials there was at least one mock spectrum with a change in $\Delta C$ equal to or greater than what was observed. This indicates a detection significance of $3.3\sigma$, or the 99.9\% confidence level, accounting for the number of ``trials''. When evaluating the detection significance we also fixed the Galactic and the WHIM absorption lines at the values obtained in F10. In principle, these two line parameters should be determined by a joint fit of all 12 observations. However, such a joint fit in our Monte-Carlo simulation is extremely computationally intensive, and our estimate indicated that the resulting change in $\Delta C$ is negligible. Therefore, we decided to fix these two line parameters in our calculation. We reiterate that the 21---22.5 \AA\ range is the appropriate wavelength region over which to perform the random trials to assess the statistical significance of the 22.05 \AA\ line, because it was only from examining this limited wavelength range that, by chance, we discovered this transient line while studying the Sculptor WHIM in F10.
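The conversion from the trial fraction to the quoted significance can be reproduced with the standard library; this sketch assumes the $3.3\sigma$ figure is the two-sided Gaussian equivalent, which the numbers are consistent with:

```python
from statistics import NormalDist

n_exceed, n_trials = 38, 40000
p = n_exceed / n_trials                  # chance probability, ~9.5e-4
confidence = 1.0 - p                     # ~99.9%

# two-sided Gaussian-equivalent significance, ~3.3 sigma
sigma = NormalDist().inv_cdf(1.0 - p / 2.0)
```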
However, for illustrative purposes only, we also computed the significance of this line by performing random trials over the entire 1---40 \AA\ range and obtained a significance of 2.5$\sigma$, or 99.0\% confidence. For comparison, it is worth noting that if we search the entire 1---40 \AA\ range for other features in the 12 spectra, the strongest features we find are 5 candidate lines where the decrease in the C-statistic is greater than 10, with a maximum change of 15 ($\sim 1.7\sigma$). These candidates are even less significant than the 22.05 \AA\ line and, importantly, none of them are associated with ion species (with strong oscillator strength) appropriate for the Milky Way, the blazar, or the Sculptor Wall WHIM absorber. \begin{figure}[t] \centerline{\includegraphics[scale=0.35,angle=180]{f2}} \caption{Thermal equilibrium curve. Green parts indicate stable states, while red parts indicate unstable states. The grey area shows the allowed region given by constraints on the ionization fractions.} \label{ionpara_t} \end{figure} \begin{figure*}[t] \center \includegraphics[scale=0.32,angle=180]{f3} \includegraphics[scale=0.32,angle=180]{f4} \caption{Top panel: the ionization fraction of \ion{O}{7} (black line) and \ion{O}{8} (red line) as a function of the ionization parameter $\Xi$; bottom panel: the ionization fraction of \ion{O}{7} (black line) and \ion{O}{8} (red line) as a function of temperature.} \label{frac} \end{figure*} \subsection{Physical Properties} Considering the redshift of the \ion{O}{8} $K_{\alpha}$ absorption line, photoionization by the central black hole of the blazar H~2356-309 likely plays a major role in ionizing the absorber. Therefore, we have used the photoionization code CLOUDY to determine its physical conditions. Calculations were performed with version 06.02 of CLOUDY, last described by \citet{fer98}.
In general, photo-ionized gas achieves thermal equilibrium by balancing heating with cooling, where the major heating source is the ionizing photons from the central black hole, and the major cooling mechanism is collisionally excited atomic and ionic line emission. At high temperatures, heating by Compton scattering and cooling by thermal bremsstrahlung radiation and inverse Compton scattering become important. Taking all these processes into consideration, we calculated the thermal equilibrium temperature as a function of the ionization parameter $\Xi$, using CLOUDY (see Figure~\ref{ionpara_t}). Following \citet{kro81}, this ionization parameter is defined as \begin{equation} \Xi \equiv \frac{L_{i}}{4\pi R^2 n_H ckT} \end{equation} where $L_{i}$ is the luminosity of ionizing photons, $R$ is the distance of the absorber to the central source, $n_H$ is the gas density, $k$ is the Boltzmann constant, and $T$ is the gas temperature \footnote{The other commonly used definition of the ionization parameter is $\xi \equiv \left(L_i/n_HR^2\right)$. The conversion between these two definitions (in c.g.s.\ units) is: $\left(\xi/\Xi\right) \approx 52\ T_6$, where $T_6$ is the temperature in units of $10^6$ K.}. For simplicity, we adopted a power law spectrum with a photon index of $\Gamma = 1.784$, obtained from our {\sl Chandra}~spectrum, and also solar metallicity. We will discuss the impact of these choices later. In Figure~\ref{ionpara_t}, cooling dominates over heating above the thermal equilibrium curve, and heating exceeds cooling below the curve. Along the equilibrium curve, the gas is thermally stable in the green parts, and unstable in the red parts where the gradient becomes negative. In the unstable states a small increase in temperature moves the gas into a region where heating exceeds cooling, driving it further from equilibrium. The stable states include one ``cold'' ($T\leq 10^5$ K), one ``hot'' ($T > 10^7$ K), and one intermediate state ($T \sim 10^6$ K).
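The conversion factor between the two ionization-parameter definitions quoted in the footnote follows directly from their ratio, $\xi/\Xi = 4\pi c k T$; a quick numerical check:

```python
import math

c = 2.998e10        # speed of light (cm/s)
k_B = 1.380649e-16  # Boltzmann constant (erg/K)

# xi = L_i/(n_H R^2) and Xi = L_i/(4 pi R^2 n_H c k T), so xi/Xi = 4 pi c k T.
# Evaluating at T = 10^6 K gives the coefficient of T_6 quoted in the footnote:
coeff = 4.0 * math.pi * c * k_B * 1e6   # ~52
```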
We also calculated the ionization fractions of both \ion{O}{7} and \ion{O}{8}, following \citet{kro85}. The top panel of Figure~\ref{frac} shows the ionization fraction of \ion{O}{7} (black line) and \ion{O}{8} (red line) as a function of the ionization parameter $\Xi$; and the bottom panel of Figure~\ref{frac} shows the ionization fraction as a function of temperature. We do not detect the intrinsic \ion{O}{7} $K_{\alpha}$ line, and estimate a 3$\sigma$ upper limit of the line equivalent width of $24$ m\AA.\ This puts a tight lower limit of $\Xi \gtrsim 25$, and $T \gtrsim 10^5$ K. On the other hand, the derived \ion{O}{8} column density is about a few $\times 10^{17}$ ${\rm cm^{-2}}$.~It is therefore highly unlikely that the ionization fraction of \ion{O}{8} is much smaller than $10^{-4}$, since this would imply a hydrogen column density much higher than $10^{24}$ ${\rm cm^{-2}}$~even for solar abundance. This puts a tight upper limit of $\Xi \lesssim 40$, and $T \lesssim 2.5\times 10^7$ K. Figure~\ref{ionpara_t} shows this allowed region in grey. The exact shape of the thermal equilibrium curve depends on assumptions such as the photon index of the incident spectrum and the metal abundance (see, e.g., \citealp{rey95}). A steeper spectrum (e.g., $\Gamma > 3$) will lower the Compton temperature at which Compton heating and cooling balance each other, thereby lowering the temperature of the ``hot'' stable state in Figure~\ref{ionpara_t}. On the other hand, a change in the metal abundance will also shift the peak positions of the thermal equilibrium curve because of metal-line cooling. However, our estimates indicate that, unless these assumptions change dramatically, they do not have a significant impact on the estimated parameters (ionization parameter, temperature, ionization fractions, etc.) derived here.
\subsection{Transient Nature of the Absorber} Observation \#10497 was taken immediately before this observation and ended on September 20th, 2008, at about 10 AM; observation \#10762 was taken immediately after it and started on September 25th, 2008, at about 2 AM. This suggests the transient feature lasted at most $t_{max} \approx 4\times 10^5$ seconds, and at least $t_{min} = 8\times 10^4$ seconds. Line variability is fairly common in the soft X-ray spectra of AGNs. In particular, recent observations of AGNs with high resolution spectroscopy indicate that narrow absorption lines can appear and vanish on timescales of less than a few hundred ksec (e.g., \citealp{gib07}). There are two likely scenarios in which an absorption line can become transient: (1) the ionization structure of the absorber changes (see, e.g., \citealp{hal84}); or (2) the absorbing material changes, e.g., moving in and out of the sight line (see, e.g., \citealp{fab94}). We consider both scenarios in the following discussion. To change the physical state of the absorber during such a short period, either the ionizing source varies rapidly, or the absorber is in a physically unstable state. The source flux was extremely stable during our 500 ksec {\sl Chandra}~observations that span about four months (it varied by at most about 30\%; see F10). Furthermore, one {\sl Chandra}~and one {\sl XMM}-Newton~observation performed about one year before these {\sl Chandra}~observations showed variations by a factor of less than 2 (B09). Hence, source variation is unlikely to be the cause of this transient feature. If, instead, the absorber becomes thermally unstable (the red parts in Figure~\ref{ionpara_t}), this intrinsic instability can produce the transient behavior of the absorber. In this case, the ionization structure can change rapidly if the photoionization timescale is longer than the time interval $t_{min}$.
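The bracketing timescales follow from the observation times quoted above; a sketch with Python's datetime (the clock times are approximate, as stated in the text):

```python
from datetime import datetime

# Observation #10497 ended ~10 AM on September 20, 2008;
# observation #10762 started ~2 AM on September 25, 2008.
end_prev = datetime(2008, 9, 20, 10, 0)
start_next = datetime(2008, 9, 25, 2, 0)

# full gap between the bracketing observations, ~4e5 s
t_max = (start_next - end_prev).total_seconds()
# the feature persists through the ~80 ksec exposure itself
t_min = 80e3
```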
This photoionization timescale can be estimated as \begin{equation} t_{ion} = \left[\int_{\nu_{th}}^{\infty} \frac{L_{\nu}\sigma(\nu)}{4\pi R^2 h\nu} d\nu\right]^{-1}. \end{equation} Here $L_{\nu}$ ($\propto \nu^{-\alpha}$ where $\alpha=\Gamma-1$ is the spectral index) is the ionizing photon flux, $\sigma$ is the photoionization cross section, which scales as $\left(\nu_{th}/\nu\right)^3$, $\nu_{th}$ is the photoionization threshold frequency, and $h$ is the Planck constant. Adopting the numbers for \ion{O}{8} \citep{ver96}, we found $t_{ion} \approx 2\times 10^4 R_{pc}^{2}L_{46}^{-1}\ s$. Here $R_{pc}$ is the distance to the absorber in units of pc, and $L_{46}$ is the ionizing luminosity in units of $10^{46}\rm\ ergs\ s^{-1}$. For H~2356-309, $L_i$, the luminosity of the ionizing photons, is $\sim 5\times10^{45}\rm\ ergs\ s^{-1}$. Requiring $t_{ion} \gtrsim t_{min}$, we find that the distance of the absorber must be $R \gtrsim 3$ pc. This distance would put the absorbing material somewhere between the typical broad line region (BLR, sub-pc) and narrow line region (NLR, 10 pc -- 1 kpc) of an AGN. The density of the absorber then is \begin{equation} n_H \approx 7 \times 10^5 R_{pc}^{-2} T_{6}^{-1} \Xi_{30}^{-1} L_{46}\ \rm cm^{-3}, \end{equation} where $T_6$ is the temperature in units of $10^6$ K, and $\Xi_{30}$ is the ionization parameter in units of 30. Taking the typical values for H~2356-309, the density is $n_H \approx 4\times 10^5\rm\ cm^{-3}$. With this density, the typical recombination timescale, $t_{rec} \approx 4 \times 10^6\ T_6^{1/2}(n_H/10^5\rm\ cm^{-3})^{-1} \approx 10^7$ seconds, is also longer than $t_{min}$. If instead the absorber is stable but moves in and out of the sight line between observations, then all the timescales must be shorter than $t_{max}$. The absorber then has an upper limit on the distance of $R\lesssim 6$ pc (from $t_{ion} < t_{max}$), and a lower limit on the density of $n_H \gtrsim 10^6\rm\ cm^{-3}$ (from $t_{rec} < t_{max}$).
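Taking the scaling relations above at face value, their consistency with the bracketing timescales can be checked numerically. A sketch (the fiducial value $L_{46}=0.5$ corresponds to the ionizing luminosity quoted in the text; the functions simply encode the order-of-magnitude formulas, not a full calculation):

```python
def t_ion(R_pc, L46):
    """Photoionization timescale (s): t_ion ~ 2e4 * R_pc^2 / L46."""
    return 2e4 * R_pc**2 / L46

def n_H(R_pc, T6, Xi30, L46):
    """Absorber density (cm^-3): n_H ~ 7e5 * R_pc^-2 * T6^-1 * Xi30^-1 * L46."""
    return 7e5 * L46 / (R_pc**2 * T6 * Xi30)

def t_rec(T6, nH):
    """Recombination timescale (s): t_rec ~ 4e6 * sqrt(T6) / (nH / 1e5)."""
    return 4e6 * T6**0.5 / (nH / 1e5)

t_min, t_max = 8e4, 4e5
L46 = 0.5  # L_i ~ 5e45 erg/s

# At R ~ 3 pc the photoionization time already exceeds t_min,
# consistent with the lower limit on the distance quoted above.
slow_response = t_ion(3.0, L46) > t_min
```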
The \ion{O}{8} column density is $\sim 10^{17}\rm\ cm^{-2}$. Assuming solar metallicity and an ionization fraction of 0.5, the size of the absorber along the sight line is $\sim 3 \times 10^{13}\rm\ cm$. If the absorber has a similar size in the perpendicular direction, it implies a crossing time of $\sim 3\times10^5$ seconds if the velocity in that direction is $\sim 1,000\rm\ km\ s^{-1}$. Since this time is less than $t_{max}$, this scenario is also consistent with the data. We can also estimate the mass loss rate assuming a uniform spherically symmetric outflow \citep{kro85}: \begin{eqnarray} \dot{M} & = & f\frac{m_H \upsilon L_{i}}{ckT\Xi} \nonumber \\ & \approx & 15 f_{0.1} \ L_{45}T_6^{-1}\upsilon_{1000}\Xi_{30}^{-1}\rm\ M_{\odot}yr^{-1}. \end{eqnarray} Here $m_H$ is the hydrogen mass, $\upsilon_{1000}$ is the outflow velocity in units of 1,000 $\rm km\ s^{-1}$, and $f$ is the covering fraction, roughly $0.1$ --- the fraction of the observing time showing the line. This value is higher than those of Seyfert galaxies, which typically have weak outflows ($\sim 1 \rm M_{\odot}yr^{-1}$, e.g., \citealp{ste05}), but comparable to those of AGNs with energetic jets (e.g., \citealp{ste09}). The mass loss rate is likely smaller if the outflow is beamed. Such a radial outflow can produce a P Cygni-type line profile typically seen in a stellar wind \citep{kro85}. Interestingly, we do obtain a marginal detection of an emission feature on the long-wavelength side of the absorption feature, as expected from a P Cygni profile (see the discussion below). \section{Discussion} \begin{figure}[t] \center \centerline{\includegraphics[scale=0.33,angle=180]{f5}} \caption{Spectral fitting with a P Cygni-type profile, which includes one absorption line model on the short-wavelength side, and one emission line on the long-wavelength side.
The wavelength is plotted in the observer's frame.} \label{pc} \end{figure} \subsection{A Possible P Cygni Profile?} The P Cygni-type line profile was originally discovered in the optical and ultraviolet spectra of stellar objects, and is often attributed to the stellar wind (e.g., \citealp{lam87}). With {\sl Chandra}, the X-ray P Cygni line was first detected in the spectrum of Circinus X-1, a Galactic X-ray binary (\citealp{bra00,sch02}) and, subsequently, it was also found in the X-ray spectra of active galactic nuclei (e.g., \citealp{kas01}). A P Cygni-type line profile provides an important diagnostic of whether the outflow is beamed with a jet-like structure or is more spherically extended. In observation \#10498, we find a possible emission feature right next to the absorption feature on the long-wavelength side, resembling a P Cygni-type profile. We fit this emission feature with a Gaussian profile, along with an absorption profile on the short-wavelength side (see Figure~\ref{pc}, plotted in the observer's frame). We find this is sufficient for fitting the absorption-emission feature at $22.05$ \AA.\ Using Monte-Carlo simulations as described in \S~3.1, we found that this feature is detected at the $2.3\sigma$ level. If this transient emission feature is \ion{O}{8} $K\alpha$ and can be associated with the absorber, we measure an \ion{O}{8} line flux of $8.2 \times 10^{-5}\rm\ photons\ cm^{-2}s^{-1}$. The \ion{O}{8} line emissivity peaks at about $3\times10^6$ K. Assuming peak emissivity, we obtained an upper limit on the emission measure of $EM = \int n_e^2 dV \approx 5\times 10^{10}\ \rm cm^{-6} pc^3 $ at the distance of the BL Lac. Here $n_e$ is the electron density and the integration is over the emission volume. However, we consider the P Cygni scenario unlikely, since if $n_e \sim 10^6\rm\ cm^{-3}$, as we estimated for the absorber, the linear size of the emitting material ($\sim$ 0.4 pc) would be far greater than that of the absorber.
Clearly, more sophisticated modeling is necessary to fully understand the structure and physical properties of this material as revealed by the emission/absorption profile. \subsection{Host galaxy, Intervening or Local Absorption?} The transient nature of this absorption feature makes it very unlikely to be produced by the ISM in the host galaxy, an intervening absorber, or a local absorber. We do notice that the observed wavelength of this feature is very close to the rest wavelength of the \ion{O}{6} $K_{\alpha}$ inner shell transition ($\lambda = 22.02$ \AA, see \citealp{pra03,sch04}). This \ion{O}{6} $K_{\alpha}$ inner shell transition was first reported by \citet{lee01} in the X-ray spectrum of MCG-6-30-15. \subsection{Summary} X-ray observations of narrow absorption features offer a unique opportunity to probe the inner region of BL Lac objects. In this paper we report the detection of a transient absorption line during our H~2356-309 campaign with the {\sl Chandra}~X-ray Telescope. This line is most likely produced by \ion{O}{8} in a photo-ionized outflow intrinsic to the BL Lac object H~2356-309. Considering the transient nature of the absorber, we obtain constraints on the absorber's ionization parameter, $25 \lesssim \Xi \lesssim 40$, temperature, $10^5 < T < 2.5\times10^7$ K, and density, a few $\times 10^5\ \rm cm^{-3}$. Our detection is quite different from the X-ray absorption features detected in BL Lac objects before {\sl Chandra}~and {\sl XMM}-Newton~(e.g., \citealp{can84,mad91}). Those absorption features typically have a velocity width of up to a few $\times 10^4$ $\rm km\ s^{-1}$,\ while in H~2356-309, the velocity is at most $1-2 \times 10^3$ $\rm km\ s^{-1}$.\ However, even in our case, the line width is much larger than that expected from thermal broadening, suggesting an outflow as a likely cause of the broadening.
\citet{blu04} studied the {\sl XMM}-Newton~RGS spectra of four BL Lac objects with previously known, broad X-ray absorption lines and found none. \citet{per05} also analyzed the X-ray spectra of 13 bright BL Lac objects observed with {\sl XMM}-Newton. They did not detect any broad, intrinsic features either, but they found strong evidence for intrinsic curvature of the spectral index for most of the targets. In both studies, they concluded that the previously reported features were due to a combination of calibration uncertainties and the use of an overly simplified, single power-law model. At low resolution and low S/N, spectral curvature can mimic a broad absorption feature if the spectrum is fitted with a single power law \citep{per05}. However, our observation, which suffers from none of these problems, along with the several unexplained absorption features detected in \citet{blu04}, raises again the question of whether or not such absorption is common in BL Lac objects. The high variability of the BL Lac object and its environment makes it a challenge to address this issue. We detect this line in one ($\sim 80$ ksec) observation out of a total of 12 ($\sim 600$ ksec for {\sl Chandra}~and $\sim$ 130 ksec for {\sl XMM}-Newton). Taking this probability at face value, a long-term monitoring program focusing on several bright BL Lac objects would be a feasible approach to unveiling the nature of these transient absorption lines. \\ {\it Acknowledgments:} We thank Brad Wargelin for assistance with observation set-up, Peter Ratzlaff for helping implement the new filtering procedure, and Vinay Kashyap for assistance with the {\sl Chandra} observation \#10498. We also thank Aaron Barth and H\'el\`ene Flohic for helpful discussions. T.F., D.A.B., and P.J.H.
gratefully acknowledge partial support from NASA through Chandra Award Numbers GO7-8140X and G09-0154X issued by the Chandra X-Ray Observatory Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of NASA under contract NAS8-03060. We also are grateful for partial support from NASA-XMM grant NNX07AT24G. C.R.C. acknowledges NASA through Smithsonian Astrophysical Observatory contract SV1-61010.
\section{Introduction } The last decades have seen an increasing interest in the study of ``Manifolds with Density'', that is, manifolds where both perimeter and volume are weighted by the same density. To have an idea of the possible applications of this subject one can consult, for instance, \cite{Mo}, \cite{Mo1} and the references therein. In particular, much attention has been devoted to finding, for a given manifold with density, its isoperimetric set (see, e.g., \cite{BCMR}, \cite{BBCLT}, \cite{BCM, BCM2, BCM3, BMP, XR}, \cite{CMV}, \cite{CJQW}, \cite{Cham}, \cite{DDNT}, \cite{Howe}, \cite{KZ}, \cite{MadernaSalsa}, \cite{Mo1}, \cite{Mo2}). On the other hand, many authors have studied isoperimetric problems when volume and perimeter carry two different weights. A remarkable example is obtained when the manifold is $\mathbb R^{N}$ and the two weights are two different powers of the distance from the origin. More precisely, given two real numbers $k$ and $l$, the problem is to find the set $G$ in $\mathbb R^{N}$ which minimizes the weighted perimeter $\displaystyle\int_{\partial G } |x|^k \, {\mathcal H}_{N-1} (dx) $ once the weighted volume $\displaystyle\int_{ G } |x|^l \, dx $ is prescribed. Such a problem is far from being artificial, since its solution allows one to compute, for instance, the best constants in the well-known Caffarelli-Kohn-Nirenberg inequalities, as well as to establish the radiality of the corresponding minimizers. Several partial results have been obtained on this issue (see, e.g., \cite{ABCMP}, \cite{BBMP2}, \cite{C}, \cite{diGiosia_etal}, \cite{DHHT}, \cite{Howe}, \cite{Mo}) and a complete solution is contained in the recent paper \cite{diGiosia_etal}. There the authors find the full range of the parameters $k$ and $l$ for which the isoperimetric set is the ball centered at the origin.
The first step of their proof consists of reducing the problem to a two-dimensional one by means of spherical symmetrization (also known as foliated Schwarz symmetrization). \\ Let $\mathbb{R}^{N} _{+} := \{ x \in \mathbb{R}^N :\, x_N >0\} $. The problem that we address here is the following: Given $k,l \in \mathbb{R}$, $\alpha > 0$, \medskip \noindent {\sl Minimize $\displaystyle\int_{\partial \Omega } |x|^k x_N^\alpha \, {\mathcal H}_{N-1} (dx) $ among all smooth sets $\Omega \subset \mathbb{R} ^{N}_{+}$ satisfying $\displaystyle\int_{\Omega } |x|^lx_N^\alpha \, dx =1$.} \medskip Let $B_R$ denote the ball of $\mathbb R^N$ of radius $R$ centered at the origin and let $B$ and $\Gamma$ denote the Beta and the Gamma function, respectively. Our main result, contained in Section 5, is the following. \begin{theorem} \label{maintheorem} Let $N\in \mathbb{N} $, $N\geq 2$, $k,l \in \mathbb{R} $, $\alpha > 0$ and $l+N+\alpha >0$. Further, assume that one of the following conditions holds: \\ {\bf (i)} $l+1\leq k $; \\ {\bf (ii)} $k\leq l+1$ and $ l\frac{N+\alpha-1}{N+\alpha} \leq k\leq 0$; \\ {\bf (iii)} $N\geq 2$, $ 0\leq k\leq l+1$ and \begin{equation}\label{l_1N3} l\le l_1 (k,N,\alpha ) := \frac{(k+N+\alpha-1)^3 }{(k+N+\alpha-1)^2 - \frac{(N+\alpha-1)^2 }{N+\alpha} } -N -\alpha\,.
\end{equation} \\ Then \begin{equation} \label{mainineq} \displaystyle\int_{\partial \Omega } |x|^k x_N^\alpha\, {\mathcal H}_{N-1} (dx) \geq C_{k,l,N, \alpha} ^{rad} \left( \displaystyle\int_{\Omega } |x|^lx_N^\alpha\, dx \right) ^{(k+N+\alpha-1)/(l+N+\alpha) } , \end{equation} for all smooth sets $\Omega $ in $\mathbb{R}^N _+ $, where \begin{eqnarray} \label{defCkl} C_{k,l,N, \alpha} ^{rad} & := & \frac{\displaystyle\int_{\partial B_1 } |x|^k x_N^\alpha\, {\mathcal H}_{N-1} (dx)} {\left( \displaystyle\int_{B_1 \cap \mathbb{R}^N _+} |x|^l x_N^\alpha\, dx \right ) ^{(k+N+\alpha-1)/(l+N+\alpha) } } \\ &= & \nonumber \left( l+\alpha +N\right) ^{\frac{k+N+\alpha -1}{l+N+\alpha }}\left( B\left( \frac{N-1}{2},\frac{\alpha +1}{2}\right) \frac{\pi ^{\frac{N-1}{2}}}{ \Gamma \left( \frac{N-1}{2}\right) }\right) ^{\frac{l-k+1}{l+N+\alpha }}. \end{eqnarray} Equality in (\ref{mainineq}) holds if $\Omega =B_R\cap\mathbb{R}_+^N$. \end{theorem} \noindent Note that the weights we consider are not radial, so it seems nontrivial to use spherical symmetrization. We therefore did not try to adapt the techniques of \cite{diGiosia_etal}; instead, depending on the region where the three parameters lie, we use different methods. The proof in the case {\bf (i)} is given in \cite{ABCMP_atti}. It is based on Gauss's Divergence Theorem. In the case {\bf (ii)} (see Theorem \ref{th1bis}) the proof uses an appropriate change of variables, which was introduced in \cite{H} and \cite{HK}, together with the isoperimetric inequality with respect to the weight $x_{N}^{\alpha} $. The case {\bf (iii)} (see Theorem \ref{th1ter}) is the most delicate and it requires several different arguments: again a suitable change of variables, then an interpolation argument, introduced for the first time in our previous paper \cite{ABCMP}, and, finally, the so-called starshaped rearrangement.
In Section 4 we provide some necessary conditions on $k$, $l$ and $\alpha$ such that the half-ball centered at the origin is an isoperimetric set. In the proof we first evaluate the second variation of the perimeter functional. The claim then follows from the fact that such a variation at a minimizing set must be nonnegative, together with a nontrivial weighted Poincar\'e inequality on the sphere derived in \cite{BCM2}. \noindent Part of these results have been announced in \cite{ABCMP_atti}. \section{Notation and preliminary results } Throughout this article $N$ will denote a natural number with $N\geq 2$, $k$ and $l $ are real numbers, while $\alpha$ is a nonnegative number and \begin{equation} \label{ass1} l+N+\alpha>0 . \end{equation} \noindent Let us introduce some notation. \begin{eqnarray*} \mathbb{R}^{N}_{+} & := & \left\{ x \in \mathbb{R}^{N}: x_N >0 \right\}, \\ \mathbb{S}^{N-1}_{+} & := & \left\{ x \in \mathbb{S}^{N-1} : x_N >0 \right\}, \\ B_{R}(x_0 ) & := & \left\{ x\in \mathbb{R}^{N}:\left\vert x-x_0 \right\vert <R\right\} , \quad (x_0 \in \mathbb{R}^N ), \\ B_{R} & := & B_{R}(0), \quad (R>0), \\ B_{R}^{+} & := & B_{R} \cap \mathbb{R}^{N}_{+}. \\ \end{eqnarray*} Furthermore, ${\mathcal L} ^m $ will denote the $m$-dimensional Lebesgue measure, ($1\leq m\leq N$), and \begin{eqnarray*} \omega _N & := & {\mathcal L}^N (B_1 ), \\ \kappa (N, \alpha ) & := & \displaystyle\int_{\mathbb{S} ^{N-1} _+ } x_N^{\alpha} \, {\mathcal H}_{N-1}(dx). \end{eqnarray*} Note that \begin{equation} \label{measSN-1+} \kappa (N, \alpha ) = B\left( \frac{ N-1}{2},\frac{\alpha +1}{2}\right) \frac{\pi ^{\frac{N-1}{2}}}{\Gamma \left( \frac{N-1}{2}\right) }, \end{equation} where $B$ and $\Gamma$ are the Beta function and the Gamma function, respectively (see \cite{BCM3}).
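The closed form (\ref{measSN-1+}) can be sanity-checked numerically: for $\alpha=0$ it must reduce to half the surface area of the unit sphere, $\pi^{N/2}/\Gamma(N/2)$, and for $N=2$ it reduces to $\int_0^{\pi}\sin^{\alpha}\theta\,d\theta$. A sketch using the Euler Beta and Gamma functions (the helper names are ours):

```python
import math

def beta(a, b):
    """Euler Beta function, B(a, b) = Gamma(a) * Gamma(b) / Gamma(a + b)."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def kappa(N, alpha):
    """kappa(N, alpha) = B((N-1)/2, (alpha+1)/2) * pi^((N-1)/2) / Gamma((N-1)/2)."""
    return (beta((N - 1) / 2, (alpha + 1) / 2)
            * math.pi ** ((N - 1) / 2) / math.gamma((N - 1) / 2))

def half_sphere_area(N):
    """Half the surface measure of the unit sphere S^(N-1) in R^N."""
    return math.pi ** (N / 2) / math.gamma(N / 2)
```

For example, kappa(3, 0) returns $2\pi$, half the area of the unit sphere in $\mathbb{R}^3$, and kappa(2, 2) returns $\pi/2 = \int_0^\pi \sin^2\theta\, d\theta$.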
\\ We will frequently use $N$-dimensional spherical coordinates $(r, \theta)$ in $\mathbb{R} ^N$: $$ \mathbb{R}^N \ni x = r\theta , \quad \mbox{where $r=|x|$, and $\theta = x|x|^{-1} \in \mathbb{S}^{N-1} $.} $$ If $M$ is any set in $\mathbb{R}^N _{+}$, then $\chi _M $ will denote its characteristic function. \noindent Next, let $k$ and $l$ be real numbers satisfying (\ref{ass1}). We define a measure $\mu _{l, \alpha}$ by \begin{equation} d\mu _{l, \alpha}(x)=|x|^{l} x_N^\alpha\,dx. \label{dmu} \end{equation} If $M \subset $ ${\mathbb R}^{N}_{+}$ is a measurable set with finite $\mu _{l, \alpha} $-measure, then we define $M^{\star }$, the \\ $\mu_{l,\alpha }$-symmetrization of $M$, as follows: \begin{equation} M^{\star } := B_{R}^{+} \hspace{.2 cm} \text{with } R: \mu_{l, \alpha} \left( B_{R}^{+} \right) = \mu _{l, \alpha} \left( M\right) = \int_M d\mu _{l, \alpha} (x) . \label{mu_(M)} \end{equation} If $u: \mathbb{R}^{N}_{+} \rightarrow \mathbb{R} $ is a measurable function such that $$ \mu_{l, \alpha} \left( \left\{ |u(x)|>t\right\} \right) <\infty \qquad \forall t>0, $$ then let $u^{ \star }$ denote the weighted Schwarz symmetrization of $u$, or, in short, the \\ $\mu_{l, \alpha} -$symmetrization of $u$, which is given by \begin{equation} u^{ \star }(x)=\sup \left\{ t\geq 0:\mu_{l, \alpha} \left( \left\{ |u(x)| >t\right\} \right) > \mu _{l, \alpha} \left( B_{\left\vert x\right\vert }^{+} \right) \right\} . \label{u_star} \end{equation} Note that $u^{\star }$ is radial and radially non-increasing, and if $M$ is a measurable set with finite $\mu _{l, \alpha} $-measure, then $$ \left( \chi _M \right) ^{\star} = \chi _{M^{ \star }} . $$ The {\sl $\mu _{k, \alpha}$--perimeter\/} of a measurable set $M $ is given by \begin{equation} P_{\mu _{k, \alpha}}(M ):=\sup \left\{ \int_{M }\mbox{div}\,\left(x_N^\alpha |x|^{k} \mathbf{v}\right) \,dx:\,\mathbf{v}\in C_{0}^{1}(\mathbb{R}^N ,\mathbb{R}^{N}),\,| \mathbf{v}|\leq 1\mbox{ in }\, M \right\} .
\end{equation} \noindent It is well-known that the above \textsl{distributional definition} of weighted perimeter is equivalent to the following \begin{equation} P_{\mu_{k, \alpha} }(M ) = \left\{ \begin{array}{ccc} \displaystyle\int_{\partial M }x_N^\alpha |x|^{k} \, {\mathcal H} _{N-1}(dx) & \mbox{ if } & \partial M \mbox{ is } (N-1)-\mbox{rectifiable } \\ & & \\ + \infty \qquad & \mbox{ otherwise,} & \end{array} \right. \end{equation} where, here and throughout, ${\mathcal H} _{N-1} $ will denote the $(N-1)$-dimensional Hausdorff measure. We will call a set $\Omega \subset \mathbb{R}^N _+ $ {\sl smooth}, if for every $x_0 \in \partial \Omega \cap \mathbb{R}^N _+ $, there is a number $r >0$ such that $B_r (x_0 )\subset \mathbb{R}^N _+ $, $B_r (x_0 ) \cap \Omega $ has exactly one connected component and $B_r (x_0 ) \cap \partial \Omega $ is the graph of a $C^1 $--function on an open set in $\mathbb{R} ^{N-1} $. Let $\Omega \subset \mathbb{R} ^{N}_{+}$ and $p\in \left[ 1,+\infty \right) $. We will denote by $L^{p}(\Omega ,d\mu _{l, \alpha})$ the space of all Lebesgue measurable real-valued functions $u$ such that \begin{equation} \left\Vert u \right\Vert _{L^{p}(\Omega ,d\mu _{l, \alpha})} :=\left( \int_{\Omega } \left\vert u\right\vert ^{p}d\mu _{l, \alpha} (x) \right) ^{1/p} <+\infty . \label{Norm_Lp} \end{equation} \\ By $W^{1,p}(\Omega ,d\mu _{l, \alpha})$ we denote the weighted Sobolev space consisting of all functions which together with their weak derivatives $u_{x_{i}}$, ($i=1,...,N$), belong to $L^{p}(\Omega ,d\mu _{l, \alpha})$. This space will be equipped with the norm \begin{equation} \left\Vert u\right\Vert _{W^{1,p}(\Omega ,d\mu _{l, \alpha})}:=\left\Vert u\right\Vert _{L^{p}(\Omega ,d\mu _{l, \alpha})}+\left\Vert \nabla u\right\Vert _{L^{p}(\Omega ,d\mu _{l, \alpha})}.
\label{Norm_Wp} \end{equation} Finally, ${\mathcal D} ^{1,p}( \Omega ,d\mu _{k, \alpha})$ will stand for the closure of $C_{0}^{\infty }(\mathbb{R}^N )$ under the norm $$ \left( \int_{\Omega } |\nabla u|^p \, d\mu _{k,\alpha } (x) \right) ^{1/p}. $$ We will often use the following well-known {\sl Hardy-Littlewood inequality} \begin{equation} \label{hardylitt1} \int_{\mathbb{R}^{N}_{+} } uv \, d\mu _{l, \alpha}(x) \leq \int_{\mathbb{R}^{N}_{+} } u^{ \star} v^{\star} \, d\mu _{l, \alpha} (x) , \end{equation} which holds for any pair of functions $u,v\in L^2 (\mathbb{R}^{N}_{+} , d\mu _{l, \alpha} )$. Now let us recall the so-called starshaped rearrangement (see \cite{Kaw}), which we will use in Section 5. For later convenience, we will write $y$ for points in $\mathbb{R}^{N}_{+} $ and $(z, \theta )$ for corresponding $N$-dimensional spherical coordinates ($z= |y|$, $\theta = y|y|^{-1} $). \\ We call a measurable set $M\subset \mathbb{R}^{N}_{+} $ {\sl starshaped\/} if the set $$ M\cap \{ z\theta : \, z\geq 0 \} $$ is either empty or a segment $\{ z\theta : \, 0\leq z< m(\theta ) \} $ for some number $m(\theta ) >0 $, for almost every $\theta \in {\mathbb S} ^{N-1} $. \\ If $M$ is a bounded measurable set in $\mathbb{R}^{N}_{+} $, and $\theta \in {\mathbb S}^{N-1}_{+} ,$ then let $$ M(\theta ) := M\cap \{ z\theta :\, z\geq 0\}. $$ There is a unique number $m(\theta )\in [0,+\infty )$ such that $$ \int_0 ^{m(\theta )} z^{N-1}\, dz = \int_{M(\theta )} z^{N-1} \, dz. $$ We define $$ \widetilde{M}(\theta ) := \{ z\theta : \, 0\leq z\leq m(\theta ) \} , \quad (\theta \in {\mathbb S} ^{N-1}_{+} ), $$ and $$ \widetilde{M} := \{ z\theta : \, 0\leq z\leq m(\theta ) , \; \theta \in {\mathbb S} ^{N-1}_{+} \} . $$ We call the set $\widetilde{M}$ the {\sl starshaped rearrangement of $M$\/}. \\ Note that $\widetilde{M} $ is Lebesgue measurable and starshaped, and we have \begin{equation} \label{starsh1} {\mathcal L} ^N (M) = {\mathcal L} ^N (\widetilde{M}).
\end{equation} If $v:\mathbb{R}^{N}_{+} \to \mathbb{R} $ is a measurable function with compact support, and $t\geq 0$, then let $ E_t $ be the super-level set $\{ y: \, |v(y)| \geq t\} $. We define $$ \widetilde{v} (y) := \sup \{ t\geq 0 :\, y \in \widetilde{E_t } \} . $$ We call $\widetilde{v} $ the {\sl starshaped rearrangement of $v$ \/}. It is easy to verify that $\widetilde{v}$ is equimeasurable with $v$, that is, the following properties hold: \begin{eqnarray} \label{starsh2} & & \widetilde{E_t} = \{ y:\, \widetilde{v} (y)\geq t\} , \\ \label{starsh3} & & {\mathcal L} ^N (E_t ) = {\mathcal L} ^N (\widetilde{E_t} ) \quad \forall t\geq 0. \end{eqnarray} This also implies Cavalieri's principle: If $F\in C ([0, +\infty ))$ with $F(0)=0$ and if $F(v) \in L^1 ( \mathbb{R}^N ) $, then \begin{equation} \label{caval1} \int_{\mathbb{R} ^N } F(v)\, dy = \int_{\mathbb{R} ^N } F(\widetilde{v} )\, dy \end{equation} and if $F$ is non-decreasing, then \begin{equation} \label{monrearr} \widetilde{F(v)} = F(\widetilde{v}). \end{equation} Note that the mapping $$ z\longmapsto \widetilde{v} (z\theta ) , \quad (z\geq 0), $$ is non-increasing for all $\theta \in {\mathbb S} ^{N-1} $. \\ If $v,w\in L^2 (\mathbb{R}^{N}_{+} )$ are functions with compact support, then there holds Hardy-Littlewood's inequality: \begin{equation} \label{harlit} \int_{\mathbb{R}^{N}_{+} } vw \, dy \leq \int_{\mathbb{R}^{N}_{+} } \widetilde{v} \widetilde{w} \, dy. \end{equation} If $f:(0,+\infty) \to \mathbb{R} $ is a measurable function with compact support, then its (equimeasurable) {\sl non-increasing rearrangement }, $\widehat{f} : (0,+\infty )\to [0,+\infty )$, is the monotone non-increasing function such that $$ {\mathcal L} ^1 \{ t \in [0,+\infty ) :\, |f(t )| > c\} = {\mathcal L}^1 \{ t \in [0,+\infty ) :\, \widehat{f}(t ) > c \} \quad \forall c\geq 0, $$ see \cite{Kaw}, Chapter 2. A general P\'{o}lya-Szeg\"o principle for non-increasing rearrangement has been given in \cite{Lan}, Theorem 2.1. 
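\noindent For instance (an illustration added for the reader's convenience), if $f=\chi _{(a,b)} $ with $0<a<b$, then ${\mathcal L}^1 \{ t \in [0,+\infty ):\, |f(t)| > c\} = b-a$ for every $c\in [0,1)$, while this set is empty for $c\geq 1$; hence
\begin{equation*}
\widehat{f} = \chi _{[0, b-a)} ,
\end{equation*}
that is, the non-increasing rearrangement pushes the mass of $f$ towards the origin, in the same way as the starshaped rearrangement pushes each slice $M(\theta )$ towards the origin along the ray $\{ z\theta :\, z\geq 0 \} $.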
For later reference we will only need a special case: \begin{lemma} \label{Landes} Let $\delta \geq 0$, and let $f:(0,+\infty ) \to \mathbb{R} $ be a bounded, locally Lipschitz continuous function with bounded support, such that $$ \int_0 ^{+\infty } t ^{\delta } |f' (t ) |\, dt <+\infty . $$ Then $\widehat{f} $ is locally Lipschitz continuous and \begin{equation} \label{landes1} \int_0 ^{+\infty } t^{\delta } |\widehat{f}' (t ) |\, dt \leq \int_0 ^{+\infty } t^{\delta } |f' (t ) |\, d t. \end{equation} \end{lemma} \bigskip \section{The functionals ${\mathcal R}_{k,l,N,\alpha}$ and ${\mathcal Q}_{k,l,N,\alpha}$ } Throughout this section we assume (\ref{ass1}), i.e. \begin{equation*} k+N+\alpha-1 >0 \ \mbox{ and } \ l+N+\alpha>0 . \end{equation*} If $M $ is any measurable subset of $\mathbb R^{N}_{+}$, with $0<\mu _{l,\alpha} (M)<+\infty $, we set \begin{equation} \label{rayl1} {\mathcal R}_{k,l,N, \alpha} (M) := \frac {P_{ \mu_{k , \alpha}} (M) } { \left( \mu_{l,\alpha} (M) \right)^{(k+N+\alpha-1)/(l+N+\alpha)} }. \end{equation} Note that \begin{equation} \label{Rklsmooth} {\mathcal R} _{k,l,N,\alpha} (M ) = \frac{ \displaystyle\int_{\partial M }x_N^\alpha |x|^k \, {\mathcal H}_{N-1} (dx) }{ \left( \displaystyle\int_{M }x_N^\alpha |x|^l \, dx \right) ^{(k+N+\alpha-1)/(l+N+\alpha)} } \end{equation} if the set $M$ is smooth. If $u\in C_0 ^1 (\mathbb{R} ^N_+ )\setminus \{ 0\} $, we set \begin{equation} \label{rayl2} {\mathcal Q}_{k,l,N, \alpha} (u ) := \frac{\displaystyle\int_{\mathbb{R} ^N_+ }x_N^\alpha |x|^k |\nabla u| \, dx}{ \left( \displaystyle\int_{\mathbb{R} ^N_+ } x_N^\alpha|x|^l |u| ^{(l+N+\alpha)/(k+N+\alpha-1)} \, dx \right) ^{(k+N+\alpha-1)/(l+N+\alpha)}}. \end{equation} Finally, we define \begin{equation} \label{isopco} C_{k,l,N, \alpha}^{rad} := {\mathcal R}_{k,l,N, \alpha}(B_1 \cap {\mathbb{R} ^N_+ }). 
\end{equation} We study the following isoperimetric problem: \\[0.5cm] {\sl Find the constant $C_{k,l,N, \alpha} \in [0, + \infty )$, such that} \begin{equation} \label{isopproblem} C _{k,l,N, \alpha} := \inf \{ {\mathcal R}_{k,l,N, \alpha} (M):\, \mbox{{\sl $M$ is measurable with $0<\mu _{l,\alpha} (M) <+\infty $.}} \} \end{equation} Moreover, we are interested in conditions on $k$, $l$ and $\alpha$ such that \begin{equation} \label{isoradial} {\mathcal R}_{k,l,N, \alpha} (M) \geq {\mathcal R}_{k,l,N, \alpha} (M^{ \star} ) \end{equation} holds for all measurable sets $M\subset {\mathbb{R} ^N_+ }$ with $ 0<\mu _{l,\alpha}(M)<+\infty $. \\[0.1cm] Let us begin with some immediate observations. \\ If $M$ is a measurable subset of $\mathbb R^{N}_{+}$ with finite $\mu _{l,\alpha}$-measure and $\mu_{k,\alpha} $-perimeter, then there exists a sequence of smooth sets $\{ M_n \} $ such that $$\lim_{n\to \infty } \mu _{l,\alpha} (M_n \Delta M) =0 \,\,\,\, \text{and} \,\, \lim_{n\to \infty } P_{\mu _{k,\alpha} } (M_n ) = P_{\mu _{k,\alpha}} (M) . $$ This property is well-known for Lebesgue measure (see for instance \cite{G}, Theorem 1.24) and its proof carries over to the weighted case. This implies that we also have \begin{equation} \label{CklNsmooth} C_{k,l,N, \alpha} = \inf \{ {\mathcal R}_{k,l,N, \alpha} (\Omega ):\, \Omega \subset \mathbb{R} ^{N}_+, \, \Omega \mbox{ smooth} \} . 
\end{equation} The functionals ${\mathcal R}_{k,l,N, \alpha } $ and ${\mathcal Q}_{k,l,N,\alpha } $ have the following homogeneity properties, \begin{eqnarray} \label{hom1} {\mathcal R}_{k,l,N, \alpha } (M ) & = & {\mathcal R}_{k,l,N, \alpha } (tM ) , \\ {\mathcal Q}_{k,l,N, \alpha } (u) & = & {\mathcal Q}_{k,l,N,\alpha } (u^t ), \end{eqnarray} where $t>0$, $M $ is a measurable set with $0<\mu_{l, \alpha} (M)<+\infty $, $u\in C_0 ^1 (\mathbb{R}^N_+ )\setminus \{ 0\}$, \\ $tM := \{tx:\, x\in M \} $ and $u^t (x):= u(tx) $, ($x\in \mathbb{R} ^N_+ $). In particular, \begin{equation} \label{isopconst2} C_{k,l,N, \alpha} ^{rad} = {\mathcal R}_{k,l,N, \alpha} (B_R^{+} ) \quad \mbox{for every } R>0. \end{equation} Hence we have that \begin{equation} \label{relCC} C_{k,l,N, \alpha} \leq C_{k,l,N, \alpha} ^{rad} , \end{equation} and (\ref{isoradial}) holds if and only if $$ C_{k,l,N,\alpha } = C_{k,l,N,\alpha } ^{rad} . $$ Finally, we recall the following weighted isoperimetric inequality proved, for example, in \cite{BCM2} (see also \cite{XR} and \cite{ MadernaSalsa}). \begin{proposition}\label{BCM2} For all measurable sets $M\subset \mathbb{R} ^N_+$, with $0< \mu _{0, \alpha} (M)<+\infty $, the following inequality holds true \begin{equation}\label{isopclass} {\mathcal R} _{0,0,N, \alpha} (M) := \frac {P_{ \mu_{0, \alpha}} (M) } { \left( \mu_{0,\alpha} (M) \right)^{(N+\alpha-1)/(N+\alpha)} } \geq C_{0,0,N, \alpha} ^{rad}:= \frac {P_{ \mu_{0, \alpha}} (M^{ \star}) } { \left( \mu_{0,\alpha} (M^{ \star}) \right)^{(N+\alpha-1)/(N+\alpha)} } \,, \end{equation} where $M^{\star}=B_{R}^{+} $ with $R$ such that $\mu_{0, \alpha}(M)=\mu_{0, \alpha}(M^{ \star})$. \end{proposition} We recall that the isoperimetric constant $C_{0,0,N, \alpha} ^{rad}$ is explicitly computed in \cite{BCM2}, see also \cite{MadernaSalsa} for the case $N=2$. \begin{lemma} \label{hardylitt} Let $l>l' >-N -\alpha$.
Then \begin{equation} \label{hardylitt2} \frac{\left( \mu _{l, \alpha} (M) \right) ^{1/(l+N+\alpha)} }{ \left( \mu _{l', \alpha} (M) \right) ^{1/(l'+N+\alpha)} } \geq \frac{\left( \mu _{l, \alpha} (M^{ \star}) \right) ^{1/(l+N+\alpha)} }{ \left( \mu _{l', \alpha} (M^{ \star}) \right) ^{1/(l'+N+\alpha)} } \end{equation} for all measurable sets $M\subset \mathbb{R} ^N_+ $ with $0<\mu_{l, \alpha}(M)<+\infty $. Equality holds only for half-balls $B_{R}^{+} $, ($R>0$). \end{lemma} {\sl Proof: } Let $M^{ \star} $ be the $\mu_{l, \alpha} $-symmetrization of $M$. Then we obtain, using the Hardy-Littlewood inequality, \begin{eqnarray*} \mu _{l' , \alpha} (M) =\int_Mx_N^\alpha |x| ^{l'} \, dx & = & \int_{\mathbb{R} ^N_+ } |x|^{l'-l} \chi _M (x)\, d\mu _{l, \alpha} (x) \\ & \leq & \int_{\mathbb{R} ^N _+} \left( |x|^{l'-l} \right) ^{ \star} \left( \chi _M \right) ^{ \star} (x)\, d\mu _{l, \alpha} (x) \\ & = & \int_{\mathbb{R} ^N_+ } |x|^{l'-l} \chi _{M^{ \star} } (x)\, d\mu _{l, \alpha} (x) \\ & = & \int_{M^{ \star} } x_N^\alpha |x|^{l' }\, dx =\mu _{l', \alpha } (M^{ \star} ). \end{eqnarray*} This implies (\ref{hardylitt2}), since $\mu _{l, \alpha} (M)=\mu _{l, \alpha} (M^{ \star} )$ by the definition of the $\mu _{l, \alpha}$-symmetrization. \noindent Next assume that equality holds in (\ref{hardylitt2}). Then we must have $$ \int_M |x|^{l'-l} \, d\mu _{l, \alpha} (x) = \int_{M^{ \star} } |x|^{l'-l} d\mu _{l, \alpha} (x) , $$ that is, $$ \int_{M\setminus M^{ \star} } |x|^{l'-l} \, d\mu _{l, \alpha} (x) = \int_{M^{ \star} \setminus M} |x|^{l'-l} d\mu _{l, \alpha} (x) . $$ Writing $M^{ \star} =B_{R}^{+}$, we have $|x|^{l'-l}\leq R^{l'-l}$ on $M\setminus M^{ \star} $ and $|x|^{l'-l}\geq R^{l'-l}$ on $M^{ \star} \setminus M$, since $l'-l<0$; as $\mu _{l, \alpha} (M\setminus M^{ \star} )=\mu _{l, \alpha} (M^{ \star} \setminus M)$, the last identity forces $ \mu _{l, \alpha} ( M\Delta M^{ \star} )=0$. The Lemma is proved. $\hfill \Box $ \begin{lemma} \label{rangekl1} Let $k,l, \alpha$ satisfy (\ref{ass1}). Assume that $l>l' >-N-\alpha$ and $C_{k,l,N, \alpha} = C_{k,l,N, \alpha} ^{rad} $. Then we also have $C_{k,l',N, \alpha} = C_{k,l',N, \alpha} ^{rad} $.
Moreover, if $ {\mathcal R}_{k,l',N, \alpha} (M ) = C_{k,l',N, \alpha} ^{rad} $ for some measurable set $M\subset \mathbb{R} ^N_+ $, with $0< \mu _{l' , \alpha} (M) <+\infty $, then $M = B_{R}^{+}$ for some $R>0$. \end{lemma} {\sl Proof:} By our assumptions and Lemma \ref{hardylitt} we have for every measurable set $M$ with $0<\mu _{l, \alpha}(M) <+\infty $, \begin{eqnarray*} {\mathcal R}_{k,l',N, \alpha} (M ) & = & {\mathcal R}_{k,l,N, \alpha} (M ) \cdot \left[ \frac{ \left( \mu _{l, \alpha} (M) \right) ^{1/(l+N+\alpha)} }{ \left( \mu_{l', \alpha} (M) \right) ^{1/(l'+N+\alpha)} } \right] ^{k+N+\alpha-1} \\ & \geq & C_{k,l',N, \alpha}^{rad}, \end{eqnarray*} with equality only if $M = B^{+}_{R}$ for some $R>0$. $\hfill \Box $ \begin{lemma} \label{R2} Assume that $k \leq l+1$. Then \begin{equation} \label{ineqQR} C_{k,l,N, \alpha} = \inf \left\{ {\mathcal Q}_{k,l,N, \alpha} (u) :\, u\in C_0 ^1 (\mathbb{R}_+ ^N )\setminus \{ 0\} \right\} . \end{equation} \end{lemma} {\sl Proof: } The proof uses classical arguments (see, e.g. \cite{FleRi}). We may restrict ourselves to nonnegative functions $u$. By (\ref{isopproblem}) and the coarea formula we obtain, \begin{eqnarray} \label{coarea1} \int_{\mathbb{R} ^N_+ }x_N^\alpha |x|^k |\nabla u| \, dx & = & \int _0 ^{\infty } \int\limits_{u=t } x_N^\alpha |x|^k \, {\mathcal H} _{N-1 } (dx) \, dt \\ \nonumber & \geq & C_{k,l,N, \alpha} \int_0 ^{\infty } \left( \int_{u>t } x_N^\alpha |x|^l \, dx \right) ^{(k+N+\alpha-1)/(l+N+\alpha)} \, dt. \end{eqnarray} Further, Cavalieri's principle gives \begin{equation} \label{cavalieri} u(x)= \int_0 ^{\infty } \chi _{\{ u>t\} } (x)\, dt , \quad (x\in \mathbb{R} ^N ). 
\end{equation} Hence (\ref{cavalieri}) and Minkowski's inequality for integrals (see \cite{Stein}) lead to \begin{eqnarray} \label{ineqmeas} & & \\ \nonumber &&\int_{\mathbb{R} ^N_+ }x_N^\alpha |x|^l |u|^{(l+N+\alpha)/(k+N+\alpha-1)} \, dx \qquad \qquad \\ \nonumber &=& \int_{\mathbb{R} ^N _+}x_N^\alpha |x|^l \left| \int_0 ^{\infty} \chi_{\{ u>t\} } (x)\, dt \right| ^{(l+N+\alpha)/(k+N+\alpha-1)} \, dx \\ \nonumber & \leq & \left( \int_0 ^{\infty } \left( \int_{\mathbb{R}^N_+ }x_N^\alpha |x|^l \chi _{\{ u>t \} } (x) \, dx \right) ^{(k+N+\alpha-1)/(l+N+\alpha)} \, dt \right) ^{(l+N+\alpha)/(k+N+\alpha-1)} \\ \nonumber & = & \left( \int_0 ^{\infty } \left( \int_{ u>t }x_N^\alpha |x|^l \, dx \right) ^{(k+N+\alpha-1)/(l+N+\alpha)} dt \right) ^{(l+N+\alpha)/(k+N+\alpha-1)} . \end{eqnarray} Note that Minkowski's inequality is applicable here because the assumption $k\leq l+1$ guarantees that the exponent $(l+N+\alpha)/(k+N+\alpha-1)$ is at least $1$. Now (\ref{coarea1}) and (\ref{ineqmeas}) yield \begin{equation} \label{ineqQ1} {\mathcal Q}_{k,l,N, \alpha} (u) \geq C_{k,l,N, \alpha} \quad \forall u\in C_0 ^1 (\mathbb{R}_+ ^N )\setminus \{ 0\} . \end{equation} To show (\ref{ineqQR}), let $\varepsilon >0$, and choose a smooth set $\Omega $ such that \begin{equation} \label{ineqR1} {\mathcal R}_{k,l,N,\alpha} (\Omega ) \leq C_{k,l,N,\alpha } +\varepsilon . \end{equation} It is well-known that there exists a sequence $\{ u_n \} \subset C_0 ^{\infty } (\mathbb{R} ^N )\setminus \{ 0\} $ such that \begin{eqnarray} \label{limperim} \lim_{n\to \infty } \int_{\mathbb{R}^N _+} x_N^\alpha |x|^k |\nabla u_n | \, dx = \int_{\partial \Omega }x_N^\alpha |x|^k \, {\mathcal H} _{N-1} (dx) , \\ \label{limmeas} \lim_{n\to \infty } \int_{\mathbb{R}_+ ^N } x_N^\alpha |x|^l |u_n |^{(l+N+\alpha)/(k+N+\alpha-1)} \, dx = \int_{ \Omega } x_N^\alpha|x|^l \, dx. \end{eqnarray} To do this, one may choose mollifications of $\chi _{\Omega } $ as $u_n $ (see e.g. \cite{Talenti1}). Hence, for large enough $n$ we have \begin{equation} \label{ineqQ2} {\mathcal Q}_{k,l,N,\alpha } (u_n ) \leq C_{k,l,N,\alpha } + 2\varepsilon .
\end{equation} Since $\varepsilon $ was arbitrary, (\ref{ineqQR}) now follows from (\ref{ineqQ1}) and (\ref{ineqQ2}). $\hfill \Box $ \section{Necessary conditions} In this section we assume that \begin{equation*} k+N+\alpha-1 >0 \ \mbox{ and } \ l+N+\alpha>0 . \end{equation*} The main result is Theorem \ref{R4}, which highlights the phenomenon of symmetry breaking. \noindent The following result holds true. \begin{lemma} \label{R3} A necessary condition for \begin{equation} \label{C>0} C_{k,l,N, \alpha} >0 \end{equation} is \begin{equation} \label{k_l_ineq1} l \frac{N+\alpha-1}{N+\alpha} \leq k . \end{equation} \end{lemma} {\sl Proof:} Assume that $k<l(N+\alpha-1 )/(N+\alpha)$, and let $te_1 = (t, 0, \ldots , 0)$, ($t>2$). Since $t-1\le |x|\le t+1$ for any $x\in B_1 (te_1 )$, we have $$ {\mathcal R}_{k,l,N, \alpha} (B_1 (te_1 )\cap \mathbb{R}_+^{N} ) \leq D \frac{ (t +1)^k}{ (t-1) ^{l (k+N+\alpha-1)/(l+N+\alpha)} } , $$ where the positive constant $D= D(k,l, N, \alpha) $ is given by $$ D=\frac{ \displaystyle\int_{\partial (B_1 (te_1 )\cap \mathbb{R}_+^{N})}x_N^\alpha \, {\mathcal H}_{N-1} (dx) }{ \left( \displaystyle\int_{B_1 (te_1 )\cap \mathbb{R}_+^{N} }x_N^\alpha \, dx \right) ^{(k+N+\alpha-1)/(l+N+\alpha)} } $$ and does not depend on $t$, since the weight $x_N^{\alpha }$ is invariant under translations in the $e_1$-direction. Since $k-l(k+N+\alpha-1)/(l+N+\alpha) <0$, it follows that $$ \lim_{t\to \infty } {\mathcal R}_{k,l,N,\alpha } (B_1 (te_1 )\cap \mathbb{R}_+^{N} ) =0. $$ $\hfill \Box $ \begin{theorem} \label{R4} A necessary condition for \begin{equation} \label{isop1} C_{k,l,N, \alpha} = C_{k,l,N, \alpha} ^{rad} \end{equation} is \begin{equation} \label{k_l_ineq2} l+1 \leq k + \frac{N+\alpha-1}{ k+N+\alpha-1} . \end{equation} \end{theorem} \medskip \begin{remark} \rm Theorem \ref{R4} means that if $l+1 > k + \frac{N+\alpha-1}{ k+N+\alpha-1}$, then symmetry breaking occurs, that is $C_{k,l,N, \alpha} < C_{k,l,N, \alpha} ^{rad} $.
Our proof relies on the fact that the second variation of the perimeter for smooth volume-preserving perturbations from the ball $B_{1}^{+} $ is non-negative if and only if (\ref{k_l_ineq2}) holds. Note that this also follows from a general second variation formula with volume and perimeter densities, see \cite{Mo2}. \end{remark} {\sl Proof:} First we assume $N\geq 2$. Let $(r, \theta )$ denote $N$--dimensional spherical coordinates, such that $$ \theta _1 = \arccos \frac{x_N}{|x|} , \quad\theta_1 \in [0, \pi /2], $$ and $u \in C^2 (\mathbb{S}_{+}^{N-1} )$, $s\in C^2 (\mathbb{R})$ with $s(0)=0$, and define $$ U(t ) := \{ x=r\theta \in \mathbb{R}_+ ^N : \, 0\leq r < 1+ t u(\theta ) + s(t) \} , \quad (t\in \mathbb{R} ). $$ Note that $U(0)= B_{1}^{+} $. By the Implicit Function Theorem, we may choose $s$ in such a way that \begin{equation} \label{intid1} \int_{U(t)}x_N^\alpha |x|^l \, dx = \int _{B_{1}^{+} } x_N^\alpha |x|^l \, dx \quad \mbox{for $|t|<t_0$}, \end{equation} for some number $t_0 >0$. We set $s_1 := s'(0) $ and $s_2 := s^{\prime \prime} (0)$. Let $d \Theta$ be the surface element on the sphere and \begin{equation} \label{h} h:= h(\theta_1) = \cos^{\alpha} \theta_1 = \left( \frac{x_N}{|x|} \right)^{\alpha}. \end{equation} Since $$ \int_{U(t)} x_N^\alpha|x|^l \, dx = \int_{\mathbb{S}_+ ^{N-1 } } h \int_0 ^{1+ t u(\theta ) + s(t)} \rho ^{l+N+\alpha-1} \, d\rho \, d\Theta, $$ a differentiation at $t=0$ of (\ref{intid1}) leads to \begin{eqnarray} \label{intid2} 0 & = & \int_{\mathbb{S}_+ ^{N-1} } (u+ s_1 )\, h d\Theta \quad \mbox{and } \\ \label{intid3} 0 & = & (l+N+\alpha-1) \int_{\mathbb{S}_+^{N-1} } (u+ s_1 )^2 h \, d\Theta + s_2 \int_{\mathbb{S}_+^{N-1} } h \, d\Theta . 
\end{eqnarray} Next we consider the perimeter functional \begin{eqnarray} \label{perim} J(t) & := & \int_{\partial U(t)} x_N^\alpha |x|^k \, {\mathcal H}_{N-1} (dx) \\ \nonumber & = & \int_{\mathbb{S}_+ ^{N-1} } (1+tu + s(t) )^{k+N+\alpha-2} \sqrt{ (1+ tu+s(t) )^2 + t^2 |\nabla _{\theta } u|^2 } \, h \, d\Theta , \end{eqnarray} where $\nabla _{\theta }$ denotes the gradient on the sphere. Differentiation at $t=0$ of (\ref{perim}) leads to \begin{eqnarray*} J'(0) & = & (k+N+\alpha-1) \int_{\mathbb{S} _+^{N-1} } (u+ s_1 ) \, h \, d\Theta , \quad \mbox{and } \\ J^{\prime \prime } (0) & = & (k+N+\alpha-2) (k+N+\alpha-1) \int_{\mathbb{S} _+^{N-1} } (u+s_1 )^2 \, h \, d\Theta + \\ & & + (k+N+\alpha-1) s_2 \int_{\mathbb{S} _+^{N-1} } \, h \, d\Theta + \int_{\mathbb{S}_+ ^{N-1} } |\nabla _{\theta } u|^2 \, h \, d\Theta . \end{eqnarray*} By (\ref{intid2}) and (\ref{intid3}) this implies \begin{equation} \label{Jprime} J'(0) = 0, \end{equation} and \begin{equation} \label{Jprimeprime} J^{\prime \prime } (0) =(k+N+\alpha-1) (k-l-1) \int_{\mathbb{S}_+^{N-1} } (u+s_1 )^2 \, h \, d\Theta + \int_{\mathbb{S}_+^{N-1} } |\nabla _{\theta } u|^2 \, h \, d\Theta . \end{equation} Now assume that (\ref{isop1}) holds. Then we have ${\mathcal R}_{k,l,N,\alpha} (U(t)) \geq {\mathcal R}_{k,l,N,\alpha} (B_{1}^{+} )$ for all $t$ with $|t|<t_0 $. In view of (\ref{intid1}) this means that $J(t) \geq J(0) $ for $|t|<t_0 $, that is, \begin{equation} \label{Jderiv} J^{\prime \prime } (0) \geq 0 = J'(0). \end{equation} The second condition is (\ref{Jprime}), and the first condition implies, in view of (\ref{intid2}) and (\ref{Jprimeprime}), that \begin{eqnarray} \label{intineq1} 0 & \leq & (k+N+\alpha-1)(k-l-1) \int_{\mathbb{S}_+^{N-1} } v^2 \, h \,d\Theta + \int_{\mathbb{S} _+^{N-1} } |\nabla _{\theta } v|^2 \, h \, d\Theta \\ \nonumber & & \forall v\in C^2 (\mathbb{S} _+^{N-1} ) \ \mbox{ with } \ \int_{\mathbb{S}_+^{N-1} } v \, h \, d\Theta =0. 
\end{eqnarray} Applying Proposition 2.1 in \cite{BCM2}, we get $$ \int_{\mathbb{S} _+^{N-1} } |\nabla _{\theta } v|^2 \, h \, d\Theta \ge (N+\alpha-1) \int_{\mathbb{S}_+^{N-1} } v^2 \, h \, d\Theta $$ for any $ v\in C^2 (\mathbb{S} _+^{N-1} ) $ with $ \displaystyle\int_{\mathbb{S}_+^{N-1} }h v \, d\Theta =0$. Since the constant $N+\alpha-1$ in this inequality is optimal, (\ref{intineq1}) can hold only if $(k+N+\alpha-1)(l+1-k) \leq N+\alpha-1$, which is precisely (\ref{k_l_ineq2}). $\hfill \Box $ \section{The case of negative $\alpha$} \noindent In this section we first show that the relative isoperimetric problem in $\mathbb{R}_{+}^{2}$ for $\alpha \in \left( -1,0\right) $ and $k=l=0$ has no solution. Nevertheless, in Theorem \ref{St_Not_Iso}, we prove that the second variation of the perimeter w.r.t. volume-preserving smooth perturbations at the half-circle is nonnegative for such values of the parameters. \noindent Throughout this section the points in $\mathbb{R}_{+}^{2}$ will simply be denoted by $(x,y)$. \bigskip \begin{theorem} \label{Not_Ex} Let \begin{equation} N=2,\text{ }\alpha \in \left( -1,0\right) \,\, \text{and } k=l=0 . \label{H_NE} \end{equation} Then there is no constant $C\in \left( 0,+\infty \right) $ such that \begin{equation*} \int_{\partial \Omega \backslash \left\{ y=0\right\} }y^{\alpha }dl\geq C\left( \displaystyle\int\limits_{\Omega }y^{\alpha }dxdy\right) ^{\frac{ \alpha +1}{\alpha +2}},\text{ for any set }\Omega \subset \mathbb{R}_{+}^{2}. \end{equation*} \end{theorem} \noindent {\sl Proof:} \ \ Let $0<a<b$ and \begin{equation*} \Omega _{a,b}:=\left\{ (x,y)\in \mathbb{R}_{+}^{2}:0<x<1,\text{ }a<y<b\right\} . \end{equation*} We have \begin{equation*} A_{\alpha }\left( \Omega _{a,b}\right) :=\displaystyle\int\limits_{\Omega _{a,b}}y^{\alpha }dxdy=\int_{a}^{b}t^{\alpha }dt=\frac{b^{\alpha +1}-a^{\alpha +1}}{\alpha +1}, \end{equation*} while \begin{equation*} P_{\alpha }\left( \Omega _{a,b}\right) :=\int_{\partial \Omega _{a,b}}y^{\alpha }dl=2\int_{a}^{b}t^{\alpha }dt+a^{\alpha }+b^{\alpha }= \frac{2}{\alpha +1}\left( b^{\alpha +1}-a^{\alpha +1}\right) +a^{\alpha }+b^{\alpha }.
\end{equation*} Setting \begin{equation*} U:=a^{\alpha +1},\text{ \ }V:=b^{\alpha +1}-a^{\alpha +1}\hspace{0.5cm} (U,V>0) \end{equation*} we have \begin{equation*} A_{\alpha }\left( \Omega _{a,b}\right) =\frac{V}{\alpha +1}\ \text{\ and \ } P_{\alpha }\left( \Omega _{a,b}\right) =\frac{2}{\alpha +1}V+U^{\frac{\alpha }{\alpha +1}}+\left( U+V\right) ^{\frac{\alpha }{\alpha +1}}. \end{equation*} In order to conclude the proof we claim that $\forall \epsilon >0$ $\exists $ $0<a<b$ such that \begin{equation*} R_{\alpha }\left( \Omega _{a,b}\right) \equiv \frac{P_{\alpha }\left( \Omega _{a,b}\right) }{\left[ A_{\alpha }\left( \Omega _{a,b}\right) \right] ^{\frac{\alpha +1}{\alpha +2}}}<\epsilon . \end{equation*} First choose $V$ small enough to have \begin{equation*} 2\left( \alpha +1\right) ^{-\frac{1}{\alpha +2}}\text{ }V^{\frac{1}{\alpha +2 }}<\frac{\epsilon }{2} \end{equation*} and then $U$ large enough to have \begin{equation*} \frac{U^{\frac{\alpha }{\alpha +1}}+(U+V)^{\frac{\alpha }{\alpha +1}}}{ \left( \frac{1}{\alpha +1}\right) ^{\frac{\alpha +1}{\alpha +2}}V^{\frac{ \alpha +1}{\alpha +2}}}<\frac{\epsilon }{2}. \end{equation*} Then \begin{equation*} R_{\alpha }\left( \Omega _{a,b}\right) =2\left( \alpha +1\right) ^{-\frac{1}{ \alpha +2}}\text{ }V^{\frac{1}{\alpha +2}}+\frac{U^{\frac{\alpha }{\alpha +1} }+(U+V)^{\frac{\alpha }{\alpha +1}}}{\left( \frac{1}{\alpha +1}\right) ^{ \frac{\alpha +1}{\alpha +2}}V^{\frac{\alpha +1}{\alpha +2}}}<\frac{\epsilon }{2}+\frac{\epsilon }{2}=\epsilon . \end{equation*} \hfill $\Box $ \bigskip \noindent Now let $\alpha \in \left( -1,0\right) $ and consider the measure $d\nu =\cos ^{\alpha }t \, dt $.
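\noindent Note that $\nu $ is a finite measure (a fact used in the proof of Lemma \ref{embedd} below): by concavity, $\cos t=\sin \left( \frac{\pi }{2}-t\right) \geq \frac{2}{\pi }\left( \frac{\pi }{2}-t\right) $ on $\left[ 0,\frac{\pi }{2}\right] $, and since $\alpha <0$, raising this bound to the power $\alpha $ reverses the inequality, so that
\begin{equation*}
\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}\cos ^{\alpha }t\,dt
=2\int_{0}^{\frac{\pi }{2}}\cos ^{\alpha }t\,dt
\leq 2\left( \frac{2}{\pi }\right) ^{\alpha }\int_{0}^{\frac{\pi }{2}}\left( \frac{\pi }{2}-t\right) ^{\alpha }dt
=\frac{\pi }{\alpha +1}<+\infty ,
\end{equation*}
because $\alpha >-1$.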
We introduce the weighted Sobolev space $H^{1}\left( \left( -\frac{\pi }{2},\frac{\pi }{2}\right) ;d\nu \right) $, which consists of all functions $\phi :\left( -\frac{\pi }{2} ,\frac{\pi }{2}\right) \rightarrow \mathbb{R}$ such that \begin{eqnarray*} \left\Vert \phi \right\Vert _{H^{1}\left( \left( -\frac{\pi }{2},\frac{\pi }{ 2}\right) ; \,d\nu \right) }^{2} &=& \left\Vert \phi \right\Vert_{L^{2}\left( \left( -\frac{\pi }{2},\frac{\pi }{2} \right) ; \, d\nu \right) }^{2}+\left\Vert \phi ^{\prime }\right\Vert _{L^{2}\left( \left( -\frac{\pi }{2}, \frac{\pi }{2} \right) ; \, d\nu \right) }^{2} \\ &=& \int_{-\frac{\pi }{2}}^{\frac{\pi }{2}} \phi (t)^{2} \, d\nu + \int_{-\frac{\pi }{2}}^{\frac{\pi }{2}} \phi ^{\prime }(t)^{2} \, d\nu <\infty . \end{eqnarray*} Finally let \begin{equation*} V :=\left\{ \phi \in H^{1}\left( \left( -\frac{\pi }{2},\frac{\pi }{2} \right) ; \, d\nu \right) :\int_{-\frac{\pi }{2}}^{\frac{\pi }{2} }\phi \, d\nu =0\right\} . \end{equation*} In the following lemma we prove that $ V $ is compactly embedded in $L^{2}\left( \left( -\frac{\pi }{2},\frac{\pi }{2} \right) ; \, d\nu \right) $. \begin{lemma} \label{embedd} If $\left\{ w_{n}\right\} _{n\in \mathbb{N}}\subset V $ is such that \begin{equation*} \int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}w_{n}^{\prime }(t)^{2} \, d\nu \leq C\text{ \ }\forall n \in \mathbb{N} , \end{equation*} then there exists $ w\in V $ such that \begin{equation*} \lim_{n\rightarrow \infty }\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}\left\vert w_{n}(t)-w(t)\right\vert ^{2}\, d\nu =0\text{.} \end{equation*} \end{lemma} \noindent {\sl Proof:} \ Note that \begin{equation*} \int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}w_{n}^{\prime }(t)^{2}dt\leq \int_{- \frac{\pi }{2}}^{\frac{\pi }{2}}w_{n}^{\prime }(t)^{2}\cos ^{\alpha }tdt\leq C\text{ \ }\forall n \in \mathbb{N} .
\end{equation*} Since each $w_{n}$ is continuous and satisfies $\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}w_{n}\, d\nu =0$, for each $n\in \mathbb{N}$ there exists $ t_{n}\in (-\frac{\pi }{2},\frac{\pi }{2})$ such that $w_{n}(t_{n})=0.$ So we have \begin{equation*} w_{n}(t)=\int_{t_{n}}^{t}w_{n}^{\prime }(\sigma )d\sigma \end{equation*} and therefore \begin{equation*} \left\vert w_{n}(t)\right\vert ^{2}\leq \left( \int_{- \frac{\pi }{2}}^{\frac{\pi }{2}}\left\vert w_{n}^{\prime }(\sigma )\right\vert d\sigma \right) ^{2}\leq \pi \int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}\left\vert w_{n}^{\prime }(\sigma )\right\vert ^{2}d\sigma \leq C\text{ \ }\forall n \in \mathbb{N}. \end{equation*} So $w_{n}$ is bounded in $H^{1} \left( -\frac{\pi }{2},\frac{\pi }{2} \right)$ and, therefore, there exists $w\in C^{0}\left( \left[ -\frac{\pi }{2},\frac{\pi }{2 }\right] \right) \cap H^{1} \left( -\frac{\pi }{2},\frac{\pi }{2} \right) $ such that, up to a subsequence, \begin{equation*} w_{n}(t)\rightarrow w(t)\text{ uniformly in }\left[ -\frac{\pi }{2},\frac{ \pi }{2}\right] . \end{equation*} The assertion easily follows, since \begin{equation*} \cos ^{\alpha }t\in L^{1}\left( -\frac{\pi }{2},\frac{\pi }{2}\right) \text{ \ }\forall \alpha \in (-1,0) . \end{equation*} \hfill $\Box $ \bigskip \noindent Now define the Rayleigh quotient \begin{equation*} Q(v):=\frac{\displaystyle\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}v^{\prime }(t)^{2}\cos ^{\alpha }tdt}{\displaystyle\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}v(t)^{2}\cos ^{\alpha }tdt}, \,\,\,\ \text{with} \,\,\,\, v \in V . \end{equation*} \begin{lemma} \label{W_Wirt} There holds \begin{equation*} \mu :=\min_{v \in V}Q(v)=1+\alpha . \end{equation*} \end{lemma} \noindent {\sl Proof:} \ \ Note that $\sin t\in V $.
An integration by parts gives \begin{equation} Q(\sin t)=\frac{\displaystyle\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}\cos ^{\alpha +2}tdt}{ \displaystyle\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}\sin ^{2}t\cos ^{\alpha }tdt} = \frac{ \left( \alpha +1\right) \displaystyle\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}\sin ^{2}t\cos ^{\alpha }tdt}{\displaystyle\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}\sin ^{2}t\cos ^{\alpha }tdt}=\alpha +1, \label{sint} \end{equation} and, therefore \begin{equation*} \mu \leq \alpha +1. \end{equation*} Now, by contradiction, assume that \begin{equation*} \mu <1+\alpha . \end{equation*} By Lemma \ref{embedd} there exists a function $u\in V$ such that $Q(u)=\mu $ which satisfies the Euler equation \begin{equation} -\left( u^{\prime }\cos ^{\alpha }(t)\right) ^{\prime }=\mu u\cos ^{\alpha }(t)\text{ \ on \ }\left( -\frac{\pi }{2},\frac{\pi }{2}\right) . \label{eig_eq} \end{equation} We set \begin{equation*} R(v):=\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}v^{\prime }(t)^{2}\, d \nu -\mu \int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}v(t)^{2} \, d \nu , \text{ \ }v\in V, \end{equation*} and \begin{equation*} u_{1}(t)=\frac{u(t)-u(-t)}{2},\text{ \ }u_{2}(t)=\frac{u(t)+u(-t)}{2}. \end{equation*} We have \begin{equation*} R(u)=R(u_{1})+R(u_{2})=0. \end{equation*} Hence at least one of the following statements must be true \begin{equation} R(u_{1})\leq 0, \tag{i} \label{i} \end{equation} or \begin{equation} \label{ii} R(u_{2})\leq 0. \tag{ii} \end{equation} \noindent Our aim is to reach a contradiction by showing that (\ref{i}) and (\ref{ii}) are both false. 
\vspace{.5cm} \noindent \textbf{Case (i)}: Assume $R(u_{1})\leq 0.$ \noindent Since $u_{1}$ is odd we have \begin{equation*} v_{1}:=\frac{u_{1}(t)}{\sin t}\in C^{1}\left( \left[ -\frac{\pi }{2},\frac{ \pi }{2}\right] \right) \end{equation*} and \begin{equation*} R(u_{1})=\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}\left( v_{1}^{\prime }\sin t+v_{1}\cos t\right) ^{2}\cos ^{\alpha }tdt-\mu \int_{-\frac{\pi }{2}}^{ \frac{\pi }{2}}v_{1}^{2}\sin ^{2}t\cos ^{\alpha }tdt= \end{equation*} \begin{equation*} =\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}2v_{1}^{\prime }v_{1}\sin t\cos ^{\alpha +1}tdt+\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}\left( v_{1}^{\prime }\right) ^{2}\sin ^{2}t\cos ^{\alpha }tdt + \end{equation*} \begin{equation*} +\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}v_{1}^{2}\cos ^{\alpha +2}tdt-\mu \int_{-\frac{\pi }{2}}^{\frac{\pi }{ 2 }}v_{1}^{2}\sin ^{2}t\cos ^{\alpha }tdt \end{equation*} \begin{eqnarray*} &=& (\alpha +1)\int_{-\frac{\pi }{2}}^{ \frac{\pi }{2}}v_{1}^{2}\sin ^{2}t\cos ^{\alpha }tdt-\int_{-\frac{\pi }{2} }^{ \frac{\pi }{2}}v_{1}^{2}\cos ^{\alpha +2}tdt+ \\ &&\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}\left( v_{1}^{\prime }\right) ^{2}\sin ^{2}t\cos ^{\alpha }tdt+\int_{-\frac{\pi }{2}}^{\frac{\pi }{2} }v_{1}^{2}\cos ^{\alpha +2}tdt-\mu \int_{-\frac{\pi }{2}}^{\frac{\pi }{2} }v_{1}^{2}\sin ^{2}t\cos ^{\alpha }tdt , \end{eqnarray*} where we integrated by parts in the first term. Recalling the assumption $\alpha +1-\mu >0$, we have \begin{eqnarray*} R(u_{1}) &=&(\alpha +1)\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}v_{1}^{2}\sin ^{2}t\cos ^{\alpha }tdt+\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}\left( v_{1}^{\prime }\right) ^{2}\sin ^{2}t\cos ^{\alpha }tdt-\mu \int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}v_{1}^{2}\sin ^{2}t\cos ^{\alpha }tdt \\ &=&(\alpha +1-\mu )\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}v_{1}^{2}\sin ^{2}t\cos ^{\alpha }tdt+\int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}\left( v_{1}^{\prime }\right) ^{2}\sin ^{2}t\cos ^{\alpha }tdt\geq 0, \end{eqnarray*} where equality holds if and only if $\ \mu =\alpha +1$ and $v_{1}$
is a constant. This contradicts our assumption. \vspace{.5cm} \noindent \textbf{Case (ii)}: Assume $R(u_{2})\leq 0.$ \noindent Since $u_{2}$ is an even function belonging to $V$, we have \begin{equation*} 0 = \int_{-\frac{\pi }{2}}^{\frac{\pi }{2}}u_{2}\cos ^{\alpha }tdt = 2 \int_{0}^{\frac{\pi }{2}}u_{2}\cos ^{\alpha }tdt. \end{equation*} Then there exists $c\in \left( 0,\frac{\pi }{2}\right) $ such that \begin{equation*} u_{2}(c)=u_{2}(-c)=0. \end{equation*} From \eqref{eig_eq} we deduce that \begin{equation} \int_{-c}^{c}\left( u_{2}^{\prime }\right) ^{2}\cos ^{\alpha }tdt= -\int_{-c}^{c}u_{2}\left( u_{2}^{\prime }\cos ^{\alpha }t\right) ^{\prime }dt=\mu \int_{-c}^{c}u_{2}^{2}\cos ^{\alpha }tdt. \label{-c_+c} \end{equation} On the other hand, setting \begin{equation*} v_{2}:=u_{2}\cos ^{\frac{\alpha }{2}}t , \end{equation*} we obtain \begin{eqnarray} \int_{-c}^{c}\left( u_{2}^{\prime }\right) ^{2}\cos ^{\alpha }tdt &=&\int_{-c}^{c}\left( v_{2}^{\prime }\cos ^{-\frac{\alpha }{2}}t+\frac{ \alpha }{2}v_{2}\cos ^{-\frac{\alpha }{2}-1}t\sin t\right) ^{2}\cos ^{\alpha }tdt \label{v2} \\ &=&\int_{-c}^{c}\left( v_{2}^{\prime }\right) ^{2}dt+\alpha \int_{-c}^{c}v_{2}v_{2}^{\prime }\tan tdt+\frac{\alpha ^{2}}{4} \int_{-c}^{c}v_{2}^{2}\tan ^{2}tdt.
\notag \end{eqnarray} Since $v_{2}\left( \pm c\right) =0$ and $v_{2}\in C^{1}\left[ -c,c\right] $, the classical one-dimensional Wirtinger inequality implies that \begin{equation} \int_{-c}^{c}\left( v_{2}^{\prime }\right) ^{2}dt\geq \left( \frac{\pi }{2c} \right) ^{2}\int_{-c}^{c}v_{2}^{2}dt, \label{W_1d} \end{equation} where equality holds if and only if $v_{2}$ is proportional to $ \sin \left( \dfrac{\pi t}{2c} \right) $. Inequalities (\ref{v2}) and (\ref{W_1d}), together with the integration by parts $\alpha \int_{-c}^{c}v_{2}v_{2}^{\prime }\tan t\,dt=-\frac{\alpha }{2}\int_{-c}^{c}v_{2}^{2}\left( 1+\tan ^{2}t\right) dt$ (recall that $v_{2}(\pm c)=0$), ensure \begin{eqnarray} \int_{-c}^{c}\left( u_{2}^{\prime }\right) ^{2}\cos ^{\alpha }tdt &\geq &\left( \frac{\pi }{2c}\right) ^{2}\int_{-c}^{c}v_{2}^{2}dt \\ &&-\frac{\alpha }{2}\int_{-c}^{c}v_{2}^{2}\left( 1+\tan ^{2}t\right) dt+ \frac{\alpha ^{2}}{4}\int_{-c}^{c}v_{2}^{2}\tan ^{2}tdt \notag \\ &=&\left( \frac{\pi ^{2}}{4c^{2}}-\frac{\alpha }{2}\right) \int_{-c}^{c}v_{2}^{2}dt+\left( \frac{\alpha ^{2}}{4}-\frac{\alpha }{2} \right) \int_{-c}^{c}v_{2}^{2}\tan ^{2}tdt \notag \\ &>&\left( \frac{\pi ^{2}}{4c^{2}}-\frac{\alpha }{2}\right) \int_{-c}^{c}v_{2}^{2}dt \notag \\ &=&\left( \frac{\pi ^{2}}{4c^{2}}-\frac{\alpha }{2}\right) \int_{-c}^{c}u_{2}^{2}\cos ^{\alpha }tdt. \notag \end{eqnarray} Finally, \eqref{-c_+c} and the assumption $\mu <1+\alpha $ imply \begin{equation*} 1+\alpha >\mu >\frac{\pi ^{2}}{4c^{2}}-\frac{\alpha }{2}\geq 1-\frac{\alpha }{2} \end{equation*} and therefore $\frac{3}{2}\alpha >0$, which contradicts $\alpha <0$. \hfill $\Box $ \begin{theorem} \label{St_Not_Iso} Let $N=2,\text{ }\alpha \in \left( -1,0\right) $ and $k=l=0$. Then the functional $J$ defined in \eqref{perim} satisfies $J^{\prime \prime}(0) \geq 0$. \end{theorem} \noindent {\sl Proof:} The assertion follows from Lemma \ref{W_Wirt}, taking \eqref{Jprimeprime} into account.
\hfill $\Box $ \vspace{.5cm} \section{Main results} This section is devoted to the proof of Theorem \ref{maintheorem}, that is, we obtain sufficient conditions on $k,l,N$ and $\alpha$ such that $ C_{k,l,N, \alpha} = C_{k,l,N, \alpha} ^{rad}$ holds, or equivalently, \begin{equation} \label{ineqrad} {\mathcal R}_{k,l,N, \alpha} (M) \geq C_{k,l,N, \alpha}^{rad} \quad \mbox{for all measurable sets $M \subset \mathbb R^{N}_{+}$ with $0< \mu _{l, \alpha} (M) <+\infty $.} \end{equation} The proof of (\ref{ineqrad}) is split into several subsections, each of which addresses one of the cases of Theorem \ref{maintheorem}. First let us recall that the proof of case (i) of Theorem \ref{maintheorem} has been given in \cite{ABCMP_atti}. \begin{remark} \label{sufficiency} Condition (\ref{k_l_ineq1}), i.e. $l\frac{N+\alpha-1}{N+\alpha}\le k$, is a necessary and sufficient condition for $C_{k,l,N, \alpha} >0$. \end{remark} {\sl Proof:\/} The necessity follows from Lemma \ref{R3}, and the sufficiency in the case $l+1\leq k$ follows from case (i) in Theorem \ref{maintheorem}. Finally, assume that $k< l+1$. Then (\ref{isopproblem}) is equivalent to (\ref{ineqQR}), by Lemma \ref{R2}. Now the main theorem of \cite{CKN} tells us that condition (\ref{k_l_ineq1}) is also sufficient for $C_{k,l,N, \alpha} >0$. $\hfill \Box $ \subsection{Proof of Theorem \ref{maintheorem}, case (ii).} The case $k \leq 0$ and $\alpha = 0$ has been addressed in \cite{ChiHo}, Theorem 1.3. We significantly extend that result by considering all nonnegative values of $\alpha$ and treating, at least for some values of the parameters, the equality case in (\ref{isop1}). \begin{theorem} \label{th1bis} Let $k,l$ satisfy \begin{equation} \label{lk} l \frac{N+\alpha-1}{N+\alpha} \leq k \leq \min\{0, l+1\}. \end{equation} Then (\ref{isop1}) holds.
Moreover, if $l \frac{N+\alpha-1}{N+\alpha} < k$ and \begin{equation} \label{M=BR} {\mathcal R}_{k,l,N, \alpha} (M) = C_{k,l,N, \alpha} ^{rad} \ \mbox{ for some measurable set $M$ with $0<\mu _{l,\alpha } (M)< +\infty $}, \end{equation} then $M= B_{R}^{+}$ for some $R>0$. \end{theorem} {\sl Proof:\/} Let $u\in C^{\infty }_0(\mathbb{R}_+ ^N)\setminus \{ 0 \} $. We set $$ y:=x|x|^\frac{k}{N+\alpha-1}\, , \quad v(y):=u(x)\, , \quad s:=r^\frac{k+N+\alpha-1}{N+\alpha -1}\, . $$ Using $N$-dimensional spherical coordinates, denoting by $\nabla_\theta$ the tangential part of the gradient on ${\mathbb S^{N-1}}$, we obtain \begin{eqnarray} \label{cambio1} & & \int_{\mathbb{R}_+ ^N} x_N^\alpha |x|^l |u|^{(l+N+\alpha)/(k+N+\alpha-1)} \, dx \\ \nonumber & = & \int_{\mathbb{S}^{N-1}_+} \int_0^{\infty} r^{l+N+\alpha-1} |u|^{(l+N+\alpha)/(k+N+\alpha-1) }\, h dr\, d\Theta \\ \nonumber & = & \frac{N+\alpha-1}{k+N+\alpha-1} \int_{\mathbb{S}^{N-1}_+} \int_0^{\infty} s^{\frac{l+N+\alpha}{k+N+\alpha-1}(N+\alpha-1)-1 } |v|^{(l+N+\alpha)/(k+N+\alpha-1)}\, h ds \, d\Theta \\ \nonumber & = & \frac{N+\alpha-1}{k+N+\alpha-1} \int_{\mathbb{R} _+^N} y_N^\alpha |y|^{\frac{l+N+\alpha}{k+N+\alpha-1}(N+\alpha-1)-N-\alpha}|v|^{(l+N+\alpha)/(k+N+\alpha-1)}\, dy \\ \nonumber & = & \frac{N+\alpha-1}{k+N+\alpha-1} \int_{\mathbb{R} _+^N} y_N^\alpha |y|^{(l(N+\alpha-1)-k(N+\alpha))/(k+N+\alpha-1)} |v|^{(l+N+\alpha)/(k+N+\alpha-1)}\, dy \, .
\end{eqnarray} Further we calculate \begin{eqnarray} \label{cambio2} & & \int_{\mathbb{R} _+^N} x_N^\alpha |x|^k |\nabla_x u| \, dx \\ \nonumber & = & \int_{\mathbb{S}^{N-1} _+} \int_0^{\infty} r^{k+N+\alpha-1} \left( u_r ^2 +\frac{|\nabla_\theta u|^2}{r^2} \right) ^{1/2}h \, dr \, d\Theta \\ \nonumber & = & \int_{\mathbb{S}^{N-1}_+ } \int_0^{\infty} s^{N+\alpha-1} \left( v_s ^2+\frac{|\nabla_\theta v|^2}{s^2} \left( \frac{N+\alpha-1}{k+N+\alpha-1} \right) ^2 \right) ^{1/2} \, h \, ds \, d\Theta \\ \nonumber & \geq & \int_{\mathbb{S}^{N-1}_+ } \int_0^{\infty} s^{N+\alpha-1} \left( v_s ^2 +\frac{|\nabla_\theta v|^2}{s^2} \right) ^{1/2} \, h \, ds \, d\Theta \\ \nonumber & = & \int_{\mathbb{R}_+^N}y_N^\alpha |\nabla_y v| \, dy \, , \end{eqnarray} where we have used (\ref{lk}). By \eqref{cambio1} and \eqref{cambio2} we deduce \begin{eqnarray} \label{Q2} & & \hspace {1cm} {\mathcal Q}_{k,l,N, \alpha}(u) \\ \nonumber & \geq & \frac{\displaystyle \int_{\mathbb R ^N_+} y_N^\alpha |\nabla_y v| \, dy}{\displaystyle \left( \int_{\mathbb R ^N_+} y_N^\alpha |y|^{l' }|v|^{(l+N+\alpha)/(k+N+\alpha-1)}\, dy \right) ^{(k+N+\alpha-1)/(l+N+\alpha)} } \left( \frac{k+N+\alpha-1}{N+\alpha-1} \right) ^{(k+N+\alpha-1)/(l+N+\alpha)} \\ \nonumber & = & \left( \frac{k+N+\alpha-1}{N+\alpha-1} \right) ^{(k+N+\alpha-1)/(l+N+\alpha)} {\mathcal Q}_{0,l' ,N, \alpha }(v)\, , \end{eqnarray} where we have set $l' :=\frac{l(N+\alpha-1)-k(N+\alpha)}{k+N+\alpha-1}$. Note that we have $-1 \leq l' \leq 0$ by the assumptions (\ref{lk}). \\ Hence we may apply Lemma \ref{R2} to both sides of (\ref{Q2}). This yields \begin{equation} \label{relationCC} C_{k,l,N, \alpha} \geq \left( \frac{k+N+\alpha-1}{N+\alpha-1} \right) ^{(k+N+\alpha-1)/(l+N+\alpha)} C_{0,l', N, \alpha} . \end{equation} Furthermore, Lemma \ref{rangekl1} tells us that \begin{equation} \label{CCrad} C_{0,l',N, \alpha} = C_{0,l', N, \alpha} ^{rad} .
\end{equation} Since also $$ \left( \frac{k+N+\alpha-1}{N+\alpha-1} \right) ^{(k+N+\alpha-1)/(l+N+\alpha)} C_{0,l',N, \alpha} ^{rad} =C_{k,l,N, \alpha}^{rad} \, , $$ we deduce from (\ref{relationCC}) and (\ref{CCrad}) that $C_{k,l,N, \alpha}\ge C_{k,l,N, \alpha}^{rad}$. Since $C_{k,l,N, \alpha}\le C_{k,l,N, \alpha}^{rad}$ by definition, (\ref{isop1}) follows. \\ Next assume that ${\mathcal{R}}_{k,l,N, \alpha} (M) = C_{k,l,N, \alpha} ^{rad}$ for some measurable set $M \subset \mathbb R^{N}_{+}$ with $0<\mu _{l,\alpha } (M)< +\infty $. If $l(N+\alpha-1)/(N+\alpha) <k$, then Lemma \ref{rangekl1} tells us that we must have $M=B_{R}^{+} $ for some $R>0$. $\hfill \Box $ \begin{remark} \rm $\text{}$ \\ {\bf (a)} A well-known special case of Theorem \ref{th1bis} is $k=0 = l $, see \cite{MadernaSalsa}, \cite{BCM} and \cite{XR}. \\ {\bf (b)} The idea to use spherical coordinates, and in particular the inequality (\ref{cambio2}) in our last proof, already appeared in work of T. Horiuchi, see \cite{H} and \cite{HK}. \end{remark} \subsection{Proof of Theorem \ref{maintheorem}, case (iii).} Now we treat the case when $k$ assumes non-negative values. Throughout this subsection we assume $k\leq l+1$. The main result is Theorem \ref{th1ter}. Its proof is long and requires some auxiliary results, but the crucial idea is an interpolation argument that occurs in the proof of the following Lemma \ref{4.3}, formula (\ref{ineq2}). \begin{lemma} \label{4.3} Assume $l(N+\alpha-1)/(N+\alpha)\leq k$ and $k\geq 0$. Let $u\in C_0 ^1 (\mathbb{R}^N_+)\setminus \{ 0 \} $, $u\geq 0$, and define $y,z$ and $v$ by \begin{equation} \label{transf1} y:= x|x| ^{\frac{k}{N+\alpha-1}} , \ z:= |y| \ \mbox{ and }\ v(y) := u(x), \qquad x\in \mathbb{R}_+ ^N .
\end{equation} Then for every $A\in \left[ 0, \frac{(N+\alpha-1) ^2}{ (k+N+\alpha-1 )^2 } \right] $, \begin{equation} \label{ineq1} {\mathcal Q}_{k,l,N, \alpha} (u) \geq \left( \frac{k+N+\alpha-1}{N+\alpha-1} \right) ^{ \frac{k+N+\alpha-1}{l+N+\alpha} } \cdot \frac{ \left( \displaystyle\int_{\mathbb{R}_+ ^N } y_N^\alpha|\nabla _y v| \, \, dy \right) ^A \cdot \left( \displaystyle\int_{\mathbb{R}_+ ^N } y_N^\alpha | v_z| \, dy \right) ^{1-A} }{ \left( \displaystyle\int_{\mathbb{R}_+ ^N } y_N^\alpha |y| ^{ \frac{l(N+\alpha-1)-k(N+\alpha)}{k+N+\alpha-1} } v ^{ \frac{l+N+\alpha}{k+N+\alpha-1} } \, dy \right) ^{ \frac{k+N+\alpha-1}{l+N+\alpha} } } . \end{equation} \end{lemma} {\sl Proof:} We calculate as in the proof of Theorem \ref{th1bis}, $$ \int_{\mathbb{R}_+^N }x_N^\alpha |x| ^k |\nabla _x u | \, dx = \int_{\mathbb{S}_+^{N-1} } \int_0^{\infty} z^{N+\alpha-1} \left( v_z ^2+\frac{|\nabla_\theta v|^2}{z^2} \left( \frac{N+\alpha-1}{k+N+\alpha-1} \right) ^2 \right) ^{1/2} \, h \, dz \, d\Theta \, . $$ Since the mapping $$ t\longmapsto \log F(t):= \log \left( \int _{{\mathbb S}_+^{N-1} } \int_0^{+\infty } z^{N+\alpha-1} \sqrt{ v_z ^2 + t\frac{|\nabla _{\theta } v|^2 }{z^2 } } \, h \, dz\, d\Theta \right) $$ is concave and $F$ is nondecreasing, we have $F(t_0)\geq F(A)\geq F(1)^{A}F(0)^{1-A}$ for every $A\in [0,t_0]$, where $t_0:=\left( \frac{N+\alpha-1}{k+N+\alpha-1} \right) ^2$. We deduce that for every $A\in \left[ 0, \frac{(N+\alpha-1) ^2}{ (k+N+\alpha-1 )^2 } \right] $, \begin{eqnarray} \label{ineq2} & & \int_{\mathbb{R}_+^N }x_N^\alpha |x| ^k |\nabla _x u | \, dx \\ \nonumber & \geq & \left( \int_{\mathbb{S}_+^{N-1} } \int_0 ^{+\infty } z^{N+\alpha-1} \sqrt{ v_z ^2 + \frac{|\nabla _{\theta } v|^2 }{z^2 } } \, h \, dz \, d\Theta \right) ^A \cdot \left( \int_{\mathbb{S}_+^{N-1}} \int_0^{+\infty } z^{N+\alpha-1} |v_z | \, h \, dz\, d\Theta \right) ^{1-A} \\ \nonumber & = & \left( \int_{\mathbb{R}^N_+ } y_N^\alpha|\nabla _y v| \, dy \right) ^A \cdot \left( \int_{\mathbb{R}^N_+ } y_N^\alpha|v_z | \, dy \right) ^{1-A} .
\end{eqnarray} Finally, we have \begin{equation} \label{equaldenom} \int_{\mathbb{R}^N_+ }x_N^\alpha |x| ^l u ^{ \frac{l+N+\alpha}{k+N+\alpha-1} } \, dx = \frac{N+\alpha -1}{k+N+\alpha -1} \int_{\mathbb{R}_+^N} y_N^\alpha |y| ^{ \frac{l(N+\alpha -1)-k(N+\alpha )}{k+N+\alpha -1} } v ^{ \frac{l+N+\alpha }{k+N+\alpha -1} } \, dy . \end{equation} Now (\ref{ineq1}) follows from (\ref{ineq2}) and (\ref{equaldenom}). $\hfill \Box $ \\[0.1cm] Next we want to estimate the right-hand-side of (\ref{ineq1}) from below. We will need a few more properties of the starshaped rearrangement. \begin{lemma} \label{4.2} Assume $l(N+\alpha-1)/(N+\alpha)\leq k$. Then we have for any function $v\in C_0 ^1 (\mathbb{R}^N_+ )\setminus \{ 0 \}$ with $v\geq 0$, \begin{eqnarray} \label{starsh5} & & \int_{\mathbb{R}_+^N } y_N^\alpha |y| ^{ \frac{l(N+\alpha-1)-k(N+\alpha)}{k+N+\alpha-1} } v ^{ \frac{l+N+\alpha}{k+N+\alpha-1} } \, dy \leq \int_{\mathbb{R}_+^N } y_N^\alpha |y| ^{ \frac{l(N+\alpha-1)-k(N+\alpha)}{k+N+\alpha-1} } \widetilde{v} ^{ \frac{l+N+\alpha}{k+N+\alpha-1} } \, dy, \\ \label{vL1} & & \frac{ y\cdot \nabla \widetilde{v} }{|y|} \equiv \frac{ \partial \widetilde{v} }{ \partial z } \in L^1 (\mathbb{R}_+ ^N ) \quad \mbox{and } \\ \label{starsh6} & & \int_{\mathbb{R}^N_+ } y_N^\alpha\left| \frac{ \partial v}{\partial z} \right| \, dy \geq \int_{\mathbb{R}^N_+ } y_N^\alpha\left|\frac{ \partial \widetilde{v} }{\partial z} \right| \, dy. \end{eqnarray} \end{lemma} {\sl Proof:} Let us prove (\ref{starsh5}). Set $$ w(y):= |y| ^{ \frac{l(N+\alpha-1)-k(N+\alpha)}{l+N+\alpha} } . $$ Since $l(N+\alpha-1)-k(N+\alpha)\leq 0$, we have $w= \widetilde{w} $. Hence (\ref{starsh5}) follows from (\ref{harlit}) and (\ref{monrearr}). \\ Next let $\zeta := z^N $ and define $V$ and $\hat{V} $ by $V(\zeta ,\theta) := v(z\theta )$, and $\widehat{V} (\zeta ,\theta ) := \widetilde{v} (z\theta )$. 
Observe that for each $\theta \in \mathbb{S}_+^{N-1} $, $\widehat{V} (\cdot , \theta ) $ is the equimeasurable non-increasing rearrangement of $V (\cdot ,\theta )$. Further we have $$ \frac{ \partial v }{ \partial z } = N\zeta ^{ \frac{N-1}{N} } \frac{ \partial V}{\partial \zeta } \ \mbox{ and } \ \frac{ \partial \widetilde{v} }{ \partial z } = N\zeta ^{ \frac{N-1}{N} } \frac{ \partial \widehat{V} }{ \partial \zeta } . $$ Since $\frac{\partial v}{\partial z} \in L^{\infty } (\mathbb{R}^N_+ )$, Lemma \ref{Landes} tells us that for every $\theta \in {\mathbb S}_+ ^{N-1} $, \begin{eqnarray*} \int_0^{+\infty } z^{N+\alpha-1} \left| \frac{\partial v}{\partial z} (z\theta ) \right| \, dz & = & \int_0 ^{+\infty } \zeta ^{ \frac{N+\alpha-1}{N} } \left| \frac{\partial V}{\partial \zeta } (\zeta ,\theta ) \right| \, d \zeta \\ & \geq & \int_0 ^{+\infty } \zeta ^{ \frac{N+\alpha-1}{N} } \left| \frac{\partial \widehat{V} }{\partial \zeta } (\zeta ,\theta )\right| \, d\zeta \\ & = & \int_0^{+\infty } z^{N+\alpha-1} \left| \frac{ \partial \widetilde{v} }{\partial z} (z\theta )\right| \, dz . \end{eqnarray*} Integrating this over ${\mathbb S}_+ ^{N-1}$, we obtain (\ref{starsh6}). $\hfill \Box$ \\[0.1cm] A final ingredient is \begin{lemma} \label{4.1} Assume that $l(N+\alpha-1)/(N+\alpha)\leq k$, and let $M \subset \mathbb R^{N}_{+}$ be a bounded starshaped set. Then \begin{eqnarray} \label{holder1} & & \left( \int_M y_N^\alpha |y| ^{ \frac{l(N+\alpha-1) -k(N+\alpha)}{k+N+\alpha-1} } \, dy \right) ^{ \frac{k+N+\alpha-1}{l+N+\alpha} } \\ \nonumber & \leq & d_1 \left( \int_M y_N^\alpha\, dy \right) ^{ \frac{(N+\alpha-1)(l-k+1) }{ l+N+\alpha} } \cdot \left( \int_M y_N^\alpha|y|^{-1} \, dy \right) ^{ \frac{k(N+\alpha)-l(N+\alpha-1) }{ l+N+\alpha} } , \quad \mbox{ where} \\ & & \label{d1} d_1 = \left( \frac{k+N+\alpha-1}{l+N+\alpha} \right) ^{ \frac{k+N+\alpha-1}{l+N+\alpha} } \cdot \left( \frac{N+\alpha}{N+\alpha-1} \right) ^{ \frac{(N+\alpha-1)(l-k+1)}{l+N+\alpha} } .
\end{eqnarray} Moreover, if $k<l+1$ and $l(N+\alpha-1)/(N+\alpha) <k$, then equality in (\ref{holder1}) holds only if $M=B_{R}^{+} $ for some $R>0$. \end{lemma} {\sl Proof:} Since $M$ is starshaped, there is a bounded measurable function $m : \mathbb{S} ^{N-1}_{+} \to [0, +\infty )$, such that \begin{equation} \label{Mrepresent} M= \{ z\theta :\, 0\leq z < m (\theta ), \ \theta \in \mathbb{S} ^{N-1}_{+} \} . \end{equation} Using H\"older's inequality we obtain \begin{eqnarray} \label{chain} & & \hspace{1cm} \int_M y_N^\alpha|y| ^{ \frac{l(N+\alpha-1) -k(N+\alpha)}{k+N+\alpha-1} } \, dy \\ \nonumber & = & \frac{k+N+\alpha-1}{(l+N+\alpha)(N+\alpha-1)} \int_{{\mathbb S}_+ ^{N-1} } m (\theta ) ^{ \frac{(l+N+\alpha)(N+\alpha-1)}{k+N+\alpha-1} } \, h \, d\Theta \\ \nonumber & = & \frac{k+N+\alpha-1}{(l+N+\alpha)(N+\alpha-1)} \int_{{\mathbb S}_+ ^{N-1} } m (\theta ) ^{ \frac{k(N+\alpha)-l(N+\alpha-1)}{k+N+\alpha-1}(N+\alpha-1) } m (\theta ) ^{ \frac{(N+\alpha-1)(l-k+1)}{k+N+\alpha-1} (N+\alpha)} \, h \, d\Theta \\ \nonumber & \leq & \frac{k+N+\alpha-1}{(l+N+\alpha)(N+\alpha-1)} \left( \int_{{\mathbb S}_+ ^{N-1} } m (\theta ) ^{N+\alpha} \, h \, d\Theta \right) ^{ \frac{(N+\alpha-1)(l-k+1)}{k+N+\alpha-1} } \\ \nonumber \qquad & \times & \left( \int_{{\mathbb S}_+ ^{N-1} } m (\theta ) ^{N+\alpha-1} \, h \, d\Theta \right) ^{ \frac{k(N+\alpha)- l(N+\alpha-1)}{k+N+\alpha-1} } \\ \nonumber & = & \frac{k+N+\alpha-1}{(l+N+\alpha)(N+\alpha-1)} \left( (N+\alpha) \int_M y_N^\alpha dy \right) ^{ \frac{(N+\alpha-1)(l-k+1)}{k+N+\alpha-1} } \times \\ \nonumber \qquad & \times & \left( (N+\alpha-1) \int_M |y| ^{-1} y_N^\alpha\, dy \right) ^{ \frac{k(N+\alpha)- l(N+\alpha-1)}{k+N+\alpha-1} } , \end{eqnarray} and (\ref{holder1}) follows. If $k<l+1$ and $l(N+\alpha -1)/(N+\alpha ) < k$, then (\ref{chain}) holds with equality only if $m (\theta )=\mbox{const }$. $\hfill \Box $ \\[0.1cm] Now we are ready to prove our main result. 
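Before doing so, we record a consistency check on Lemma \ref{4.1}: the two exponents appearing in the application of H\"older's inequality in (\ref{chain}) are indeed conjugate, since
\begin{equation*}
\frac{(N+\alpha-1)(l-k+1)}{k+N+\alpha-1} + \frac{k(N+\alpha)-l(N+\alpha-1)}{k+N+\alpha-1} = \frac{k+N+\alpha-1}{k+N+\alpha-1} = 1 ,
\end{equation*}
as one sees from the elementary identity $(N+\alpha-1)(l-k+1)+k(N+\alpha)-l(N+\alpha-1)=k+N+\alpha-1$.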
\begin{theorem} \label{th1ter} Assume $0\le k\leq l+1$ and \begin{equation} \label{crucial} l\leq \frac{(k+N+\alpha-1)^3 }{(k+N+\alpha-1)^2 - \frac{(N+\alpha-1)^2 }{ N+\alpha}} -N -\alpha. \end{equation} Then (\ref{isop1}) holds. Furthermore, if inequality (\ref{crucial}) is strict, then (\ref{M=BR}) holds only if $M=B_{R}^{+}$ for some $R>0$. \end{theorem} {\sl Proof: } First observe that the conditions $k\geq 0$ and (\ref{crucial}) also imply $l(N+\alpha-1)/(N+\alpha) \leq k$. Let $u \in C_0 ^{\infty } (\mathbb{R}_+^N)\setminus \{ 0\} $, $u\geq 0$, and let $v$ be given by (\ref{transf1}). In view of (\ref{crucial}), we may choose $$ A=\frac{(N+\alpha)(l-k+1)}{l+N+\alpha} $$ in Lemma \ref{4.3} (indeed, (\ref{crucial}) is equivalent to $A\leq \frac{(N+\alpha-1)^2}{(k+N+\alpha-1)^2}$) to obtain \begin{eqnarray} \label{ineq5bis} {\mathcal Q}_{k,l,N, \alpha} (u) & \geq & \left( \frac{k+N+\alpha-1}{N+\alpha-1} \right)^{ \frac{k+N+\alpha-1 }{ l+N+\alpha} } \times \\ \nonumber & \times & \frac{ \left( \displaystyle\int_{\mathbb{R} ^N_+ } y_N^\alpha |\nabla _y v| \, dy \right) ^{ \frac{(N+\alpha)(l-k+1) }{ l+N+\alpha} } \cdot \left( \displaystyle\int_{\mathbb{R}_+ ^N } y_N^\alpha |v_z | \, dy \right) ^{ \frac{k(N+\alpha)-l(N+\alpha-1) }{ l+N+\alpha} } }{ \left( \displaystyle\int_{\mathbb{R}_+^N } y_N^\alpha |y| ^{ \frac{l(N+\alpha-1)-k(N+\alpha ) }{ k+N+\alpha-1} } v ^{ \frac{l+N+\alpha}{k+N+\alpha-1} } \, dy \right) ^{ \frac{k+N+\alpha-1}{l+N+\alpha} } } \, . \end{eqnarray} Further, (\ref{starsh6}) and Hardy's inequality yield \begin{equation} \label{ineq4} \int_{\mathbb{R}^N_+ } y_N^\alpha |v_z| \, dy \geq \int_{\mathbb{R}^{N}_{+} } y_N^\alpha |\widetilde{v}_z| \, dy \geq (N+\alpha -1) \int_{\mathbb{R}^{N}_{+} } y_N^\alpha\frac{\widetilde{v}}{|y|} \, dy\,, \end{equation} where $\widetilde{v}$ denotes the starshaped rearrangement of $v$.
Together with (\ref{ineq5bis}) and (\ref{starsh5}) this leads to \begin{eqnarray} \label{ineq5final} {\mathcal Q}_{k,l,N, \alpha} (u) & \geq & (N+\alpha-1) ^{ \frac{k(N+\alpha)-l(N+\alpha-1)}{l+N+\alpha} } \left( \frac{k+N+\alpha-1}{N+\alpha-1} \right) ^{ \frac{k+N+\alpha-1 }{ l+N+\alpha} } \cdot \\ \nonumber & & \cdot \frac{ \left( \displaystyle\int_{\mathbb{R}_+ ^N } y_N^\alpha|\nabla _y v| \, dy \right) ^{ \frac{(N+\alpha)(l-k+1) }{ l+N+\alpha} } \cdot \left( \displaystyle\int_{\mathbb{R}_+^N } y_N^\alpha\frac{\widetilde{v} }{|y|} \, dy \right) ^{ \frac{k(N+\alpha)-l(N+\alpha-1) }{ l+N+\alpha} } }{ \left( \displaystyle\int_{\mathbb{R}_+^N } y_N^\alpha |y| ^{ \frac{l(N+\alpha-1)-k(N+\alpha) }{ k+N+\alpha-1} } \widetilde{v} ^{ \frac{l+N+\alpha}{k+N+\alpha-1} } \, dy \right) ^{ \frac{k+N+\alpha-1}{l+N+\alpha} } } . \end{eqnarray} Now let $M $ be a bounded measurable subset of $\mathbb R^{N}_{+}$. Then combining (\ref{limperim}), (\ref{limmeas}) and the argument leading to (\ref{CklNsmooth}) we deduce that there exists a sequence of non-negative functions $\{ u_n \} \subset C_0 ^1 (\mathbb{R}^N_+ )$ such that \begin{equation} \label{lim1} \lim_{n\to \infty } \int_{\mathbb{R}_+^N } x_N^\alpha|x| ^k |\nabla u_n | \, dx = P_{\mu _k, \alpha } (M) \end{equation} and \begin{equation} \label{lim2} u_n \longrightarrow \chi_{M } \quad \mbox{ in $L^p (\mathbb{R}^{N}_{+} ) $ for every $p\geq 1 $.} \end{equation} We define $M ':= \{ y= x|x|^{\frac{k}{N+\alpha-1}} : \, x \in M \} $ and $v_n (y) := u_n (x) $. Let $\widetilde{v_n } $ and $\widetilde{M '} $ be the starshaped rearrangements of $v_n $ and $M ' $ respectively. 
Then (\ref{lim1}) and (\ref{lim2}) also imply \begin{eqnarray} \label{lim3} & & \lim_{n\to \infty } \int_{\mathbb{R}^N_+ } y_N^\alpha |\nabla _y v_n | \, dy = P_{\mu _0, \alpha } (M' ), \quad \mbox{and} \\ \label{lim4} & & \widetilde{v_n }\longrightarrow \chi _{\widetilde{M ' } } \ \mbox{ in $L^p (\mathbb{R}^{N}_{+} ) $ for every $p\geq 1 $.} \end{eqnarray} Choosing $u=u_n $ in (\ref{ineq5final}) and passing to the limit $n\to \infty $, we obtain, using (\ref{lim1}), (\ref{lim2}), (\ref{lim3}), (\ref{lim4}) and Proposition \ref{BCM2} \begin{eqnarray} \label{ineq6} & & {\mathcal R}_{k,l,N, \alpha} (M ) \\ \nonumber & \geq & (N+\alpha-1) ^{ \frac{k(N+\alpha)-l(N+\alpha-1)}{l+N+\alpha} } \left( \frac{k+N+\alpha-1}{N+\alpha-1} \right) ^{ \frac{k+N+\alpha-1}{l+N+\alpha} } \cdot \\ \nonumber & & \cdot \frac{ \left( P_{\mu _0, \alpha } (\widetilde{M'}) \right) ^{ \frac{(N+\alpha)(l-k+1)}{l+N+\alpha} } \cdot \left( \displaystyle\int_{\widetilde{M' } } \frac{y_N^\alpha dy}{|y| } \right) ^{ \frac{k(N+\alpha)-l(N+\alpha-1)}{l+N+\alpha} } }{ \left( \displaystyle\int_{\widetilde{M '} } y_N^\alpha |y| ^{ \frac{l(N+\alpha-1)-k(N+\alpha)}{k+N+\alpha-1} } \, dy \right) ^{ \frac{k+N+\alpha-1}{l+N+\alpha} } } \\ \nonumber & \geq & (N+\alpha-1) ^{ \frac{k(N+\alpha)-l(N+\alpha-1)}{l+N+\alpha} } \left( \frac{k+N+\alpha-1}{N+\alpha-1} \right) ^{ \frac{k+N+\alpha-1}{l+N+\alpha} } \left( C_{0,0,N, \alpha} ^{rad} \right) ^{ \frac{(N+\alpha)(l-k+1)}{l+N+\alpha} } \times \\ \nonumber & & \times \frac{ \left( \mu_{0,\alpha} (\widetilde{M'}) \right) ^{ \frac{(N+\alpha-1)(l-k+1)}{l+N+\alpha} } \cdot \left( \displaystyle\int_{\widetilde{M' } } \frac{ y_N^\alpha dy}{|y| } \right) ^{ \frac{k(N+\alpha)-l(N+\alpha-1)}{l+N+\alpha} } }{ \left( \displaystyle\int_{\widetilde{M '} } y_N^\alpha |y| ^{ \frac{l(N+\alpha-1)-k(N+\alpha)}{k+N+\alpha-1} } \, dy \right) ^{ \frac{k+N+\alpha-1}{l+N+\alpha} } } . 
\end{eqnarray} In view of (\ref{holder1}) and since $\mu _{0,\alpha } (M') = \mu_{0,\alpha } (\widetilde{M'})$ we finally get from this \begin{eqnarray} \label{ineq7} & & {\mathcal R}_{k,l,N, \alpha} (M) \\ &\geq& \nonumber (N+\alpha-1) ^{ \frac{k(N+\alpha)-l(N+\alpha-1)}{l+N+\alpha} } \left( \frac{k+N+\alpha-1}{N+\alpha-1} \right) ^{ \frac{k+N+\alpha-1}{l+N+\alpha} } \left( C_{0,0,N, \alpha} ^{rad} \right) ^{ \frac{(N+\alpha)(l-k+1)}{l+N+\alpha} } \frac{1}{d_1} \\ \nonumber & = & \left( \displaystyle\int_{\mathbb S^{N-1}_{+}} \,h \,d \Theta \right) ^{\frac{l-k+1}{l+N+\alpha} } \cdot (l+N+\alpha)^{\frac{k+N+\alpha-1}{l+N+\alpha} } = C_{k,l,N, \alpha} ^{rad} , \end{eqnarray} and (\ref{isop1}) follows by (\ref{CklNsmooth}). \\ Now assume that (\ref{M=BR}) holds. If inequality (\ref{crucial}) is strict, then Lemma \ref{rangekl1} tells us that we must have $M= B_{R}^{+} $ for some $R>0$. $\hfill \Box $ \begin{remark} \rm Note that if $N+\alpha \geq 3$, then (\ref{crucial}) covers the important range $$ l=0\leq k\leq 1. $$ However, we emphasize that this is not true when $2\leq N+\alpha <3$. \end{remark} \medskip \noindent \section{Applications} In this section we provide some applications of our results. \subsection{P\'{o}lya-Szeg\"o principle} First we obtain a P\'{o}lya-Szeg\"o principle related to our isoperimetric inequality (\ref{isop1}) (cf. \cite{Talenti2}). Assume that the numbers $k, l$ and $\alpha$ satisfy (\ref{ass1}) and one of the conditions {\bf (i)}-{\bf (iii)} of Theorem \ref{maintheorem}. Then (\ref{mainineq}) implies \begin{equation} \label{Isop_klalpha} \int_{\partial \Omega } |x|^k x_N ^{\alpha } {\mathcal H}_{N-1}(dx) \geq \int_{\partial \Omega ^{ \star }} |x|^k x_N ^{\alpha } {\mathcal H} _{N-1}(dx) \end{equation} for every smooth set $\Omega \subset \mathbb{R} ^N_+ $, where $\Omega ^{\star}$ is the $\mu_{l,\alpha } $-symmetrization of $\Omega $.
We will use (\ref{Isop_klalpha}) to prove the following \begin{theorem} \label{ps} (P\'{o}lya-Szeg\"o principle) Let the numbers $k,l$ and $\alpha $ satisfy one of the conditions {\bf (i)}-{\bf (iii)} of Theorem \ref{maintheorem}. Further, let $p\in [1, +\infty)$ and $m:= pk+(1-p) l $. Then there holds \begin{equation} \int_{\mathbb{R}^N _+ } \left\vert \nabla u\right\vert ^p d \mu _{m,\alpha } (x) \geq \int_{\mathbb{R}^N _+ }\left\vert \nabla u^{ \star }\right\vert ^{p} d\mu_{m,\alpha } (x) \quad \forall u\in {\mathcal D} ^{1,p} (\mathbb{R} ^N _+ , d\mu _{m, \alpha } ), \label{PS_k_l} \end{equation} where $u^{\star } $ denotes the $\mu _{l,\alpha } $-symmetrization of $u$. \end{theorem} {\sl Proof:} It is sufficient to consider the case that $u$ is non-negative. Further, by an approximation argument we may assume that $u \in C^{\infty}_{0}(\mathbb{R}^{N} ) $. Let \begin{eqnarray*} I & := & \int_{\mathbb{R}^{N}_+ } | \nabla u| ^{p} |x| ^{pk+(1-p)l} x_N ^{\alpha }\, dx \quad \mbox{and}\\ I ^{\star } & := & \int_{\mathbb{R}^{N}_+ } | \nabla u^{\star} | ^{p} |x| ^{pk+(1-p)l} x_N ^{\alpha }\, dx . \end{eqnarray*} The coarea formula yields \begin{eqnarray} \label{1coarea} I & = & \int_{0}^{\infty }\int_{u=t} |\nabla u| ^{p-1} |x| ^{pk+(1-p)l} x_N ^{\alpha }\, {\mathcal H}_{N-1}(dx)\, dt \quad \mbox{and} \\ \label{coarea2} I^{\star} & = & \int_{0}^{\infty }\int_{u^{\star } =t} |\nabla u^{\star} | ^{p-1} |x| ^{pk+(1-p)l} x_N ^{\alpha }\, {\mathcal H}_{N-1}(dx)\, dt . \end{eqnarray} Further, H\"older's inequality gives \begin{equation} \label{1holder} \int_{ u=t} |x|^k x_N ^{\alpha } \, {\mathcal H} _{N-1} (dx) \leq \left( \int_{ u=t} |x|^{kp +l(1-p)} |\nabla u| ^{p-1} x_N ^{\alpha } \, {\mathcal H} _{N-1} (dx) \right) ^{\frac{1}{p} } \cdot \left( \int_{ u=t} \frac{|x|^l x_N ^{\alpha }}{|\nabla u|} \, {\mathcal H}_{N-1} (dx) \right) ^{\frac{p-1}{p} } , \end{equation} for a.e. $t\in [0, +\infty )$. 
Hence (\ref{1coarea}) together with (\ref{1holder}) tells us that \begin{equation} \label{coarea3} I \geq \int_{0}^{\infty } \left( \int_{u=t} |x| ^{k} x_N ^{\alpha }\, {\mathcal H} _{N-1}(dx) \right) ^{p} \cdot \left( \int_{u=t}\frac{ |x| ^{l}x_N ^{\alpha }}{ | \nabla u| } \, {\mathcal H}_{N-1}(dx) \right) ^{1-p} \, dt. \end{equation} Since $u^{\star} $ is a radial function, we obtain in an analogous manner, \begin{equation} \label{coarea4} I^{\star} = \int_{0}^{\infty } \left( \int_{u^{\star} =t} |x| ^{k} x_N ^{\alpha } \, {\mathcal H} _{N-1}(dx) \right) ^{p} \cdot \left( \int_{u^{\star} =t}\frac{ |x| ^{l}x_N ^{\alpha } }{ | \nabla u^{\star} | } \, {\mathcal H}_{N-1}(dx) \right) ^{1-p} \, dt. \end{equation} Observing that \begin{equation} \label{meas_u>t} \int_{u>t} |x|^{l} x_N ^{\alpha } \, dx = \int_{u^{\star }>t} |x|^{l} x_N ^{\alpha } \, dx \quad \forall t\in [0, +\infty ), \end{equation} Fleming-Rishel's formula yields \begin{equation} \label{flemingrishel} \int_{u=t } \frac{|x|^l x_N ^{\alpha }}{|\nabla u|} \, {\mathcal H}_{N-1} (dx) = \int_{u^{\star} =t } \frac{|x|^l x_N ^{\alpha }}{|\nabla u^{\star} |} \, {\mathcal H}_{N-1} (dx) \end{equation} for a.e. $t\in [0, +\infty )$. Hence (\ref{flemingrishel}) and (\ref{Isop_klalpha}) give \begin{eqnarray*} & & \int_{0}^{\infty } \left( \int_{u=t} |x|^k x_N ^{\alpha } \, {\mathcal H} _{N-1}(dx) \right) ^{p} \cdot \left( \int_{u=t}\frac{| x| ^{l} x_N ^{\alpha } }{ | \nabla u| } \, {\mathcal H}_{N-1}(dx) \right) ^{1-p} \, dt \\ & \geq & \int_{0}^{\infty }\left( \int_{u^{\star} =t} |x| ^{k} x_N ^{\alpha } \, {\mathcal H}_{N-1}(dx) \right) ^{p} \cdot \left( \int_{u^{\star}=t} \frac{|x|^{l} x_N ^{\alpha } }{|\nabla u^{\star} | } \, {\mathcal H} _{N-1}(dx)\right) ^{1-p} \, dt. \end{eqnarray*} Now (\ref{PS_k_l}) follows from this, (\ref{coarea3}) and (\ref{coarea4}).
$\hfill \Box$ \\[0.1cm] An important particular case of Theorem \ref{ps} is \begin{corollary} \label{specialcasePS} Let $p\in [1, +\infty )$, $N+\alpha \geq 3 $, $a\geq 0 $, $u\in {\mathcal D} ^{1,p} (\mathbb{R}^N _+ , d\mu _{ap ,\alpha }) $, and let $u^{\star } $ be the $\mu_{0,\alpha } $-symmetrization of $u$. Then \begin{equation} \label{PSspecial} \int_{\mathbb{R}^N _+ } \left| \nabla u\right|^p \, d\mu _{ap, \alpha } (x) \geq \int_{\mathbb{R}^N _+ } \left| \nabla u^{\star} \right|^p \, d\mu _{ap, \alpha } (x) . \end{equation} \end{corollary} {\sl Proof: } We choose $k:= a $ and $l:= 0$. If $a\in [0,1]$ then $k,l$ satisfy either one of the conditions {\bf (ii)} or {\bf (iii)}, see also Remark 5.2. If $a\geq 1 $, then $k,l$ satisfy condition {\bf (i)} of Theorem \ref{maintheorem}. Hence (\ref{PSspecial}) follows from Theorem \ref{ps}. $\hfill \Box $ \subsection{Caffarelli-Kohn-Nirenberg-type inequalities} Next we will use Theorem \ref{ps} to obtain best constants in some inequalities of Caffarelli-Kohn-Nirenberg-type. Let $p,q, a, b$ be real numbers such that \begin{eqnarray} \label{CKNassump1} & & 1\leq p \leq q \left\{ \begin{array}{ll} \leq \frac{(N+\alpha )p}{N+\alpha -p} & \mbox{ if } \ p< N+\alpha \\ < +\infty & \mbox{ if } \ p\geq N + \alpha \end{array} \right. , \\ \nonumber & & a> 1-\frac{N+\alpha }{p}, \quad \mbox{and } \\ \nonumber & & b= b(a,p,q,N, \alpha ) = (N+\alpha ) \left( \frac{1}{p} -\frac{1}{q} \right) + a-1 . \end{eqnarray} We define \begin{eqnarray} \label{p*} p^* & := & \left\{ \begin{array}{ll} \frac{(N+\alpha )p}{N+\alpha -p} & \mbox{ if } p<N+\alpha \\ +\infty & \mbox{ if } p\geq N+\alpha \end{array} \right. 
, \\ & &\nonumber \\ \label{fctalE} E_{a,p,q,N, \alpha } (v) & := & \frac{\displaystyle\int_{\mathbb{R} ^N _+ } |x|^{ap} |\nabla v|^p x_N ^{\alpha }\, dx }{ \left( \displaystyle\int_{\mathbb{R}^N _+ } |x|^{bq} |v|^q x_N ^{\alpha } \, dx \right) ^{p/q} }, \quad v\in C_0 ^{\infty } (\mathbb{R}^N )\setminus \{ 0\} , \\ \label{SapqN} S_{a,p,q,N, \alpha } & := & \inf \{ E_{a,p,q,N,\alpha } (v): \, v\in C_0 ^{\infty } (\mathbb{R}^N ) \setminus \{ 0\} \}, \quad \mbox{and} \\ \label{SapqNrad} S_{a,p,q,N,\alpha } ^{rad} & := & \inf \{ E_{a,p,q,N,\alpha } (v): \, v\in C_0 ^{\infty } (\mathbb{R}^N )\setminus \{ 0\} , \ v \mbox{ radial }\}. \end{eqnarray} Note that with this new notation we have \begin{eqnarray} \label{E=Q} E_{k,1,\frac{l+N+\alpha }{k+N+\alpha -1} ,N, \alpha } (v) & = & {\mathcal Q}_{k,l,N,\alpha } (v) \quad \forall v\in C_0 ^{\infty } (\mathbb{R}^N )\setminus \{ 0\} , \\ \label{S=C} S_{k,1,\frac{l+N+\alpha }{k+N+\alpha -1} ,N, \alpha } & = & C_{k,l,N,\alpha } \quad \mbox{and} \\ \label{Srad=Crad} S_{k,1,\frac{l+N+\alpha }{k+N+\alpha -1} ,N,\alpha } ^{rad} & = & C_{k,l,N,\alpha } ^{rad} . \end{eqnarray} \\ We are interested in the range of values $a$ (depending on $p,q,N$ and $\alpha $) for which \begin{equation} \label{S=S_rad} S_{a,p,q,N,\alpha } = S_{a,p,q,N,\alpha } ^{rad} \end{equation} holds. \\ First observe that the case $1<p=q$ (which is equivalent to $a-b=1$) corresponds to a weighted Hardy-Sobolev-type inequality. Note that inequality \eqref{eq:theorem:Hardy with weight} below was already known when $\alpha=0$ (see, for example, \cite{HK} and references therein). We have: \begin{theorem} \label{hardysobolev} \label{theorem:Hardy with weight} Let $p\geq 1$, $\alpha\geq 0$ and $k\in\mathbb{R}$ be such that $N-p+\alpha +k>0$.
Then we have \begin{equation} \label{eq:theorem:Hardy with weight} \int_{\mathbb{R}^N_+} |\nabla u(x)|^p \, d\mu _{k,\alpha } (x) \geq \left(\frac{N-p+k+\alpha }{p}\right)^p \int_{\mathbb{R}^N_+ } \frac{| u(x)|^p }{|x|^p } \, d\mu_{k,\alpha } (x) \end{equation} for all $u\in {\mathcal D} ^{1,p} (\mathbb{R}^N_+ , d\mu_{k,\alpha }) $ and \begin{equation} \label{constant} S_{a,p,p,N,\alpha } ^{rad} = S_{a,p,p,N,\alpha } =\left(\frac{N-p+k+\alpha }{p}\right)^p . \end{equation} Moreover there is no function $u\in {\mathcal D} ^{1,p}(\mathbb{R}^N_+,d\mu_{k,\alpha } )$ satisfying equality in \eqref{eq:theorem:Hardy with weight} and such that\\ $\int _{\mathbb{R}^N_+ } |\nabla u|^p d\mu_{k,\alpha } \neq 0.$ \end{theorem} {\sl Proof:} The first two steps follow the lines of the proof of \cite{GaPe}, Lemma 2.1. \\ \textit{Step 1.} Assume first that $u\in C_0^{\infty}(\mathbb{R}^N)$. Then we have for every $x\in \mathbb{R}^N _+ $, $$ |u(x)|^p= - \int_1^{\infty}\frac{d}{dt}|u(tx)|^p\, dt= - \int_1^{\infty} p|u(tx)|^{p-2}u(tx)\langle x,\nabla u(tx)\rangle \, dt . $$ Multiplying this with $x_N ^{\alpha } |x|^{k-p} $ and integrating over $\mathbb{R}^N _+$ we find \begin{eqnarray} \nonumber \int_{\mathbb{R}^N_+ }|u(x)|^p x_N ^{\alpha } |x|^{k-p} \, dx & = & - p\int_{1}^{\infty}\left[ \int_{\mathbb{R}^N_+ } |u(tx)|^{p-2} u(tx) \langle x, \nabla u(tx)\rangle x_N ^{\alpha } |x|^{k-p} \, dx \right] \, dt \\ \nonumber & = & - p\int_{1}^{\infty}\frac{1}{t^{N-p+\alpha +k+1 }}\left[ \int_{\mathbb{R}^N_+ } \frac{|u(y)|^{p-2} u(y) }{|y|^{p}}\langle y, \nabla u(y)\rangle y_N ^{\alpha } |y|^k \, dy \right] \, dt \\ \label{identityhardy} & =& - \frac{p}{N-p+\alpha +k } \int_{\mathbb{R}^N_+} \frac{|u(x)|^{p-2} u(x) }{|x|^{p}}\langle x, \nabla u(x)\rangle x_N ^{\alpha } |x|^k \, dx . \end{eqnarray} Note that by a density argument (\ref{identityhardy}) still holds for functions $u\in {\mathcal D}^{1,p} (\mathbb{R}^N_+ , d\mu _{k,\alpha } )$.
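For the reader's convenience, the scaling bookkeeping behind the substitution $y=tx$ in the second equality above can be spelled out as follows:
\begin{equation*}
dx = t^{-N}\, dy, \qquad x_N ^{\alpha } = t^{-\alpha } y_N ^{\alpha }, \qquad |x|^{k-p} = t^{p-k} |y|^{k-p}, \qquad \langle x, \nabla u(tx)\rangle = t^{-1} \langle y, \nabla u(y)\rangle ,
\end{equation*}
so that the inner integral picks up the total factor $t^{-(N-p+\alpha +k+1)}$, and
\begin{equation*}
\int_1^{\infty } t^{-(N-p+\alpha +k+1)} \, dt = \frac{1}{N-p+\alpha +k }
\end{equation*}
since $N-p+\alpha +k>0$; this produces the constant in (\ref{identityhardy}).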
In view of the inequality \begin{equation} \label{eq:estimate nabla u by Cauch-Sch} - u(x) \langle x,\nabla u(x)\rangle \leq |u(x)||x| |\nabla u(x)| \end{equation} this leads to \begin{equation} \label{ineq1hardy} \int_{\mathbb{R}^N_+ }|u(x)|^p x_N ^{\alpha }|x|^{k-p} \, dx \leq \frac{p}{N-p+k+\alpha } \int_{\mathbb{R}^N_+ } \frac{|u(x)|^{p-1} }{|x|^{p-1}}|\nabla u(x)| x_N ^{\alpha } |x|^k \, dx . \end{equation} Using H\"older's inequality, with $p'$ being the conjugate exponent of $p$, we obtain that (this step is not necessary if $p=1$) \begin{eqnarray} \nonumber & & \int_{\mathbb{R}^N_+ } \frac{|u(x)|^{p-1}}{|x|^{p-1}}|\nabla u(x)|x_N ^{\alpha } |x|^k \, dx \\ \nonumber & = & \int_{\mathbb{R}^N_+}\left\{ \frac{|u(x)|^{p-1}}{|x|^{p-1}}\left[ x_N ^{\alpha } |x|^k \right] ^{1/p'} \right\} \left\{ |\nabla u(x)|\left[ x_N ^{\alpha } |x|^k \right] ^{1/p} \right\} \, dx \\ \label{ineq2hardy} & \leq & \left( \int_{\mathbb{R}^N_+ }|u(x)|^p x_N ^{\alpha } |x|^{k-p}\, dx \right)^{1/p'} \cdot \left( \int_{\mathbb{R}^N_+ }|\nabla u(x)|^p x_N ^{\alpha } |x|^k \, dx \right)^{1 /p} . \end{eqnarray} Plugging this estimate into (\ref{ineq1hardy}) concludes the first statement of the theorem. \smallskip \textit{Step 2.} Next we show (\ref{constant}). Let $\varepsilon >0$ and define $$ M_{\epsilon}=\frac{N-p+k+\alpha +\epsilon}{p},\qquad u_{\epsilon}(x)=\left\{\begin{array}{rl} 1&\text{ if }|x|\leq 1 \smallskip \\ |x|^{-M_{\epsilon}}&\text{ if }|x|>1. \end{array}\right. $$ Note that $$ \int_{\mathbb{R}^N_+}|\nabla u_{\epsilon}|^p x_N ^{\alpha } |x|^k \, dx ={M_{\epsilon}}^p\int_{\mathbb{R}^N_+ \backslash B_1}x_N ^{\alpha }|x|^{k-(M_{\epsilon}+1) p}\, dx. 
$$ Hence, by Lemma \ref{lemma:integrability w times power} (ii) below we obtain for any $\epsilon >0$ that $u_{\epsilon}\in {\mathcal D}^{1,p}(\mathbb{R}^N_+, d\mu_{k,\alpha } ).$ On the other hand, we have that $$ \int_{\mathbb{R}^N_+}|u_{\epsilon}(x)|^p x_N ^{\alpha }|x|^{k-p}\, dx= \int_{\mathbb{R}^N_+ \backslash B_1} x_N ^{\alpha } |x|^{k-(M_{\epsilon}+1)p}\, dx +\beta, $$ where, by Lemma \ref{lemma:integrability w times power} (i), $$ \beta=\int_{B_1^+ } x_N ^{\alpha }|x|^{k-p}\, dx<\infty. $$ Now set \begin{displaymath} \displaystyle{ Q_{\epsilon} = \frac{ \int_{\mathbb{R}^N_+ } |\nabla u_{\epsilon}|^p x_N ^{\alpha } |x|^{k} \, dx }{ \int_{\mathbb{R}^N_+} |u_{\epsilon}|^p x_N ^{\alpha } |x|^{k-p } \, dx }= \frac{ {M_{\epsilon}}^p \int_{\mathbb{R}^N_+ \backslash B_1} x_N ^{\alpha } |x|^{k- (M_{\epsilon}+1)p}\, dx }{ \beta +\int_{\mathbb{R}^N_+\backslash B_1} x_N ^{\alpha } |x|^{k-(M_{\epsilon}+1)p}\, dx } .} \end{displaymath} Note also that $(M_{\epsilon}+1)p=N+k+\alpha +\epsilon$. Therefore we obtain from Lemma \ref{lemma:integrability w times power} (iii) that $$ \lim_{\epsilon\to 0}Q_{\epsilon}=(M_0)^p=\left(\frac{N-p+k+\alpha }{p}\right)^p. $$ This proves the second equality in (\ref{constant}). The first equality in (\ref{constant}) follows from the fact that the approximating functions $u_{\epsilon}$ are radial. \textit{Step 3.} Let us now show that there is no nontrivial function satisfying equality in \eqref{eq:theorem:Hardy with weight}. \\ Assume that equality holds in (\ref{eq:theorem:Hardy with weight}). Then there holds equality in (\ref{ineq1hardy}) and (\ref{ineq2hardy}). Hence we must have \begin{eqnarray} \label{identity3hardy} & & -u(x) \langle x, \nabla u(x)\rangle =|u(x)||x|\, |\nabla u(x)| \quad \mbox{and} \\ \label{identity4hardy} & & \frac{|u(x)|}{|x|} = \frac{p}{N-p+k+\alpha } \, |\nabla u(x)| \quad \mbox{for a.e.
$x\in \mathbb{R}^N _+ .$} \end{eqnarray} An integration of this leads to \begin{equation} \label{u=} u(x) = |x|^{-(N-p+k+\alpha )/p} h\left( x|x|^{-1} \right) , \end{equation} with a measurable function $h: \mathbb{S} ^{N-1} _+ \to \mathbb{R}$. Since $|x|^{-1} u\in L^p (\mathbb{R}^N _+, d\mu_{k,\alpha }) $, this implies that $h=0$ a.e. on $\mathbb{S}^{N-1} _+ $. The claim is proved. $\hfill \Box$ \noindent \begin{lemma} \label{lemma:integrability w times power} Let $\delta >0$. Then \begin{eqnarray*} \mbox{(i)} & & \int_{B_1 ^+ }x_N ^{\alpha } |x|^{-N -\alpha +\delta }\, dx <\infty, \\ \mbox{(ii)} & & \int_{\mathbb{R}^N_+ \backslash B_1} x_N ^{\alpha } |x|^{-N -\alpha -\delta }\, dx <\infty, \quad \mbox{ and } \\ \mbox{(iii)} & & \lim_{\delta \to 0+}\int_{\mathbb{R}^N_+ \backslash B_1} x_N ^{\alpha }|x|^{-N -\alpha -\delta }\, dx =\infty. \end{eqnarray*} \end{lemma} {\sl Proof: } We use $N$-dimensional spherical coordinates to show that \begin{align*} \int_{B_1 ^+ } x_N ^{\alpha } |x|^{-N -\alpha +\delta }\, dx =&\int_{\mathbb{S}^{N-1}_+ }\left( \int_0^1 \left( \frac{x_N}{|x|}\right) ^{\alpha } r^{-1+\delta } dr \right)d\mathcal{H}^{N-1}(x) \smallskip \\ =&\int_{\mathbb{S}^{N-1}_+ } \left(\frac{x_N}{|x|}\right) ^{\alpha }d\mathcal{H}^{N-1}(x)\left(\int_{0}^1 r^{-1+\delta }dr\right), \end{align*} and $\int_{0}^1 r^{-1+\delta }\, dr = \delta ^{-1} <\infty $. From this (i) follows. (ii) and (iii) follow similarly. $\hfill \Box$ \\[0.1cm] \hspace*{1cm} From now on let us assume that \begin{equation} \label{maincase} 1<p<q \left\{ \begin{array}{ll} \leq p^* & \mbox{ if }\ p<N+\alpha \\ <+\infty & \mbox{ if } \ p\geq N+\alpha \end{array} \right. . \end{equation} We begin with the following \begin{lemma} \label{CKN} Assume that $a, b, p,q,N$ and $ \alpha $ satisfy the conditions (\ref{CKNassump1}) and (\ref{maincase}).
Further, assume that there exist real numbers $k$ and $l$ which satisfy $l+N+\alpha >0$ and one of the conditions {\bf (i)}-{\bf (iii)} of Theorem \ref{maintheorem}, and such that \begin{eqnarray} \label{akl} & & ap = kp + l(1-p) \ \mbox{ and } \\ \label{bq<l} & & bq \leq l. \end{eqnarray} Then (\ref{S=S_rad}) holds. \end{lemma} {\sl Proof:} Let $u\in {\mathcal D} ^{1,p} (\mathbb{R} ^N _+, d\mu_{ap, \alpha } )\setminus \{ 0\} $, and let $u^{\star} $ be the $\mu_{l,\alpha} $-symmetrization of $u$. Then we have by Theorem \ref{ps} and (\ref{akl}), \begin{equation} \label{ps1} \int_{\mathbb{R} ^N _+ } |x|^{ap} |\nabla u| ^p x_N ^{\alpha } \, dx \geq \int_{\mathbb{R} ^N _+ } |x|^{ap} |\nabla u^{\star}| ^p x_N ^{\alpha } \, dx. \end{equation} Further, it follows from (\ref{hardylitt1}) and (\ref{bq<l}) that \begin{equation} \label{bqint} \int_{\mathbb{R} ^N _+ } |x|^{bq} | u| ^q x_N ^{\alpha } \, dx \leq \int_{\mathbb{R} ^N _+ } |x|^{bq} | u^{\star}| ^q x_N ^{\alpha } \, dx. \end{equation} Finally, (\ref{ps1}) together with (\ref{bqint}) yield \begin{equation} \label{E>E*} E_{a,p,q,N,\alpha } (u) \geq E_{a,p,q,N,\alpha } (u^{\star} ), \end{equation} and the assertion follows. $\hfill \Box $ \\[0.1cm] Now we define \begin{eqnarray} \label{def_a1} a_1 & := & \frac{N+\alpha -1}{q-\frac{q}{p} +1} +1 -\frac{N+\alpha }{p}, \ \ \mbox{ and } \\ \label{def_a2} a_2 & := & \frac{N+\alpha -1}{(q- \frac{q}{p} +1 )\sqrt{ (N+\alpha )( \frac{1}{p} -\frac{1}{q})}} +1 -\frac{N+\alpha }{p} . \end{eqnarray} Observe that the conditions (\ref{maincase}) imply that \begin{equation} \label{a2>a1>0} a_2\geq a_1 \geq 0, \end{equation} and equality in the two inequalities holds iff $p<N+\alpha $ and $q=p^* $.
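As a quick numerical sanity check of \eqref{a2>a1>0} (our own, purely illustrative; the parameter values are arbitrary), one can evaluate $a_1 $ and $a_2 $ from \eqref{def_a1} and \eqref{def_a2} directly:

```python
import math

def a_1(p, q, N, alpha):
    # a_1 as in (def_a1)
    return (N + alpha - 1) / (q - q / p + 1) + 1 - (N + alpha) / p

def a_2(p, q, N, alpha):
    # a_2 as in (def_a2)
    root = math.sqrt((N + alpha) * (1 / p - 1 / q))
    return (N + alpha - 1) / ((q - q / p + 1) * root) + 1 - (N + alpha) / p

# Example: p = 2, N = 3, alpha = 0, so p* = 2N/(N-2) = 6.
print(a_1(2, 4, 3, 0), a_2(2, 4, 3, 0))  # subcritical q = 4: a_2 > a_1 > 0
print(a_1(2, 6, 3, 0), a_2(2, 6, 3, 0))  # critical q = p* = 6: both vanish
```

For subcritical exponents the two thresholds are strictly ordered and positive, while at $q=p^* $ they coincide at $0$, in agreement with the equality case stated above.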
\\ Moreover, an elementary calculation shows that \begin{eqnarray} \label{a1cond} a_1 & = & \max \Big\{ a: \, a= k + l\left( \frac{1}{p} -1\right) , \ bq\leq l , \\ \nonumber & & \qquad \qquad -N-\alpha < l \leq k \frac{N+\alpha }{N+\alpha -1 } \leq 0 \Big\} \quad \mbox{ and } \\ \label{a2cond} a_2 & = & \max \Big\{ a: \, a= k + l\left( \frac{1}{p} -1\right) , \ bq\leq l , \ k\geq 0, \\ \nonumber & & \qquad \qquad 0< l+ N+\alpha \leq \frac{ (k+N+\alpha -1)^3}{(k+N+\alpha -1)^2 - \frac{(N+\alpha -1)^2 }{N+\alpha } } \Big\} . \end{eqnarray} The main result of this section is the following \begin{theorem} \label{best_a} Assume that (\ref{maincase}) holds. Then we have \begin{equation} \label{s=s*} S_{a,p,q,N,\alpha } = S_{a,p,q,N,\alpha } ^{rad} \qquad \forall a\in \Big( 1-\frac{N+\alpha }{p} ,a_2 \Big]. \end{equation} \end{theorem} {\sl Proof: } Let $a \in \Big( 1-\frac{N+\alpha }{p} ,a_2 \Big]$. We define \begin{eqnarray} \label{l} l & := & q \left( a+ \frac{N+\alpha }{p} -1 \right) -N -\alpha , \quad \mbox{and } \\ \label{k} k & := & \left( 1+ q-\frac{q}{p} \right) \left( a+ \frac{N+\alpha }{p} -1 \right) -N-\alpha +1 . \end{eqnarray} This implies \begin{eqnarray*} a & = & k+l \left(\frac{1}{p} -1 \right) , \\ bq & = & l \quad \mbox{and} \\ l+ N+\alpha & = & \frac{k+ N+\alpha -1 }{ \frac{1}{q} -\frac{1}{p} +1} >0 . \end{eqnarray*} Now we split into two cases: \\[0.1cm] {\bf 1.} Let $a\leq a_1 $. \\ Then $$ k\leq 0, $$ and since $q\leq p^* $ if $p< N+\alpha $ and $q<+\infty $ otherwise, we have \begin{eqnarray*} l\frac{N+\alpha -1}{N+\alpha } -k & = & (k+N+\alpha -1) \frac{ -\frac{1}{N+\alpha } -\frac{1}{q} +\frac {1}{p} }{ \frac{1}{q}-\frac{1}{p} +1} \\ & \leq & 0. \end{eqnarray*} Hence we are in case {\bf (ii)} of Theorem \ref{maintheorem}, so that the assertion follows by Lemma \ref{CKN}, for $a \leq a_1 $. \\[0.1cm] {\bf 2.} Next let $a_1 \leq a\leq a_2 $. 
\\ This implies \begin{eqnarray} \nonumber k & \geq & 0 \quad \mbox{and } \\ \label{estk} k+ N+\alpha -1 & \leq & \frac{N+\alpha -1}{\sqrt{ (N+\alpha ) \left( \frac{1}{p}-\frac{1}{q} \right) } } . \end{eqnarray} Now, from (\ref{estk}) we deduce \begin{eqnarray*} & & l+N+\alpha - \frac{ (k+N+\alpha -1 ) ^3 }{ (k+N+\alpha -1 )^2 - \frac{(N+\alpha -1)^2 }{N+\alpha } } \\ & = & \frac{ (k+N+\alpha -1) \left( (k+N+\alpha -1)^2 \left( \frac{1}{p}-\frac{1}{q} \right) - \frac{(N+\alpha -1)^2 }{N+\alpha } \right) }{ \left( \frac{1}{q} -\frac{1}{p}+1 \right) \left( (k+N+\alpha -1)^2- \frac{(N+\alpha -1)^2 }{N+\alpha } \right) } \\ & \leq & 0. \end{eqnarray*} Hence we are in case {\bf (iii)} of Theorem \ref{maintheorem}, so that the assertion follows again by Lemma \ref{CKN}. $ \hfill \Box$ \\[0.1cm] {\bf Remark 6.1:} The characterizations (\ref{a1cond}) and (\ref{a2cond}) and the inequalities (\ref{a2>a1>0}) show that the bound $a_2 $ cannot be improved using our method. \\[0.1cm] Finally we evaluate the constants $S_{a,p,q,N,\alpha } ^{rad} $ and the corresponding radial minimizers. \\ For any radial function $v\in C_0 ^{\infty } (\mathbb{R}^N ) \setminus \{ 0\} $, it is easy to check the following equality: $$ E_{a,p,q,N, \alpha } (v) = \left[B\left( \frac{ N-1}{2},\frac{\alpha +1}{2}\right) \right]^{1-\frac pq} \frac{ \pi ^{\frac{N-1}{2}\frac{q-p}{q}}}{ \left[\Gamma \left( \frac{N-1}{2}\right)\right]^\frac{q-p}{q} } \frac{\displaystyle\int_{\mathbb{R} ^N _+ } |x|^{ap+\alpha} |\nabla v|^p \, dx }{ \left( \displaystyle\int_{\mathbb{R}^N _+ } |x|^{bq+\alpha} |v|^q \, dx \right) ^{p/q} }. $$ Therefore, by Theorem 1.4 in \cite{Musina}, we deduce that the function $$ U(x)=\left(1+|x|^\frac{(N-p+ap+\alpha)(q-p)}{p(p-1)}\right)^\frac{p}{p-q} $$ achieves the infimum of $E_{a,p,q,N, \alpha }$, that is, $S_{a,p,q,N,\alpha } ^{rad}=E_{a,p,q,N, \alpha } (U)$.
\subsection{Problems in an orthant} Among the possible extensions of our isoperimetric results we would like to address a problem in an orthant with monomial weights. Let $O_+ $ denote the orthant $$ O_+ := \{ x\in \mathbb{R} ^N :\, x_i >0 , \, i=1,\ldots , N \} , $$ and let $a _1 , \ldots , a _N $ be positive numbers. Using multi-index notation we have \begin{eqnarray*} {\bf a } & := & (a _1 , \ldots , a _N ), \\ | {\bf a } | & := & a _1 + \ldots + a _N , \\ x^{{\bf a}} & := & x_1 ^{a_1 } \cdots x_N ^{a_N } , \quad (x\in \mathbb{R}^N ). \end{eqnarray*} Following the lines of proof of Theorem 1.1 we obtain the following isoperimetric result. We leave the details to the reader. \begin{theorem} \label{secondmaintheorem} Let $N\in \mathbb{N} $, $N\geq 2$, $k,l \in \mathbb{R} $, ${\bf a} = (a _1 , \ldots , a _N ) $ where $a _i >0 $, ($i=1, \ldots ,N$), and $l+N+|{\bf a }| >0$. Further, assume that one of the following conditions holds: \\ {\bf (i)} $l+1\leq k $; \\ {\bf (ii)} $k\leq l+1$ and $ l\frac{N+|{\bf a}| -1}{N+|{\bf a}| } \leq k\leq 0$; \\ {\bf (iii)} $N\geq 2$, $ 0\leq k\leq l+1$ and \begin{equation}\label{l_1N3new} l\leq \frac{(k+N+|{\bf a}| -1)^3 }{(k+N+|{\bf a}|-1)^2 - \frac{(N+|{\bf a}|-1)^2 }{N+|{\bf a}| } } -N -|{\bf a}| \,. \end{equation} \\ Then \begin{equation} \label{mainineqnew} \displaystyle\int_{\partial \Omega } |x|^k x^{{\bf a}}\, {\mathcal H}_{N-1} (dx) \geq D \left( \displaystyle\int_{\Omega } |x|^l x^{{\bf a}}\, dx \right) ^{(k+N+|{\bf a}|-1)/(l+N+|{\bf a}|) } , \end{equation} for all smooth sets $\Omega $ in $O_+ $, where \begin{eqnarray} \label{defCklnew} D= D(k,l,N, {\bf a} ) & := & \frac{\displaystyle\int_{\partial B_1 } |x|^k x^{{\bf a}}\, {\mathcal H}_{N-1} (dx)} {\left( \displaystyle\int_{B_1 \cap O_+ } |x|^l x^{{\bf a}} \, dx \right ) ^{(k+N+|{\bf a}|-1)/(l+N+|{\bf a}|) } } . \end{eqnarray} Equality in (\ref{mainineqnew}) holds if $\Omega =B_R\cap O_+ $. 
\end{theorem} \section*{Acknowledgements} The authors are grateful to Gyula Csat\' o, who kindly communicated to us a proof of a general Hardy-type inequality, a particular case of which is Theorem \ref{hardysobolev}. The authors would like to thank the University of Naples Federico II and the South China University of Technology of Guangzhou for supporting their visiting appointments, and their colleagues for their kind hospitality. \bigskip
\subsection{Classifier Transferability: Additional experimental results} \label{sec:additional_classifier_transfer} In Section 5.2 %
of our paper we demonstrated that we can transfer a classification head to multiple different backbones. In that experiment, we froze the feature extractor in the fine-tuning phase to maintain compatibility. Here, we also explore an alternative version. In particular, during the initial training phase (Fig. 1c), %
we add a (MobileNet V2) RP{} head. During fine-tuning, we freeze this RP{} head and use it to encourage maintaining compatibility, while we allow the feature extractor to update its weights. Results of the original experiment (feature extractor frozen) and the variant just described (RP{} frozen) are shown in Fig.~\ref{fig:multiple_backbones_variant}. \begin{figure}[bt] \centering \includegraphics[width=1\linewidth]{plots/multiple_backbones_with_variant.pdf} \caption{\textbf{Accuracy when transferring classifier to compatible feature extractors.} } \label{fig:multiple_backbones_variant} \end{figure} We see that both variants achieve strong accuracy for all network component combinations. However, there is a trade-off: freezing RP{} allows the reference feature extractor to change, resulting in a higher accuracy for the reference combination. In contrast, freezing the reference feature extractor{} ensures better compatibility with the other extractors, resulting in slightly higher accuracy for the non-reference recombinations. Nevertheless, this shows that our method works well for maintaining compatibility, even when the parameters of the feature extractor{} are allowed to change. \section{Choice of self-supervision task}\label{sec:additional_discussion} As discussed in Sec. 3.1 of the main paper, we use rotation prediction~\cite{gidaris18iclr} as a self-supervised auxiliary task{}.
We selected this task due to its simplicity and because it has been shown to work well for feature learning~\cite{gidaris18iclr,zhai19iccv,sun19arxiv_a,zhai19arxiv}. Our method could however also be used with other standard self-supervised tasks, such as solving jigsaw puzzles~\cite{noroozi16eccv}, colorization~\cite{zhang2016colorful} or exemplar classification~\cite{dosovitskiy14nips}. Importantly, self-supervised learning is continuously improving~\cite{he19arxiv, chen20arxiv, yan20arxiv,jing20pami}, where contrastive prediction has recently gained popularity~\cite{hadsell06cvpr,dosovitskiy14nips,oord18arxiv,bachman19nips,he19arxiv,chen20arxiv}. We hypothesize that these improvements will also translate to stronger compatibility when adopted in our method. Generally, to induce high compatibility, a self-supervised task should: (i) require the same features and (ii) be as related to the target task{} as possible. This ensures that the features that are important for the target task{} are made compatible and avoids conflicting objectives. A counterexample would be to use a reconstruction objective, which requires accurate localization, together with a classification objective, which is invariant to the exact location and instead requires semantic reasoning. \subsection{Using batch statistics at test time and other variants} \label{sec:bn} As noted in Sec. 4 of the main paper, our network components use Batch Normalization (BN)~\cite{ioffe15icml}. By default, BN uses the statistics aggregated at training time for normalizing examples at test time. This, however, leads to incorrect normalization when recombining components. In our main paper we therefore normalize features based on batch statistics of the test examples, in all our experiments. We explore variants here. First of all, we tried the standard Batch Normalization which aggregates BN statistics over the training set.
Next, we tried recomputing BN statistics after recombination, by aggregating them over the complete test set, which primarily affects the BN statistics of the target task head. Note that updating BN statistics requires only images, not labels. Additionally, there exist alternative approaches to normalize feature statistics, which do not require any aggregated training statistics. In particular, we experimented with Layer Norm~\cite{ba16arxiv} and Instance Norm~\cite{ulyanov16arxiv}. Finally, another way to make the feature representations of the feature extractors compatible is to ensure they have a certain magnitude. We experimented with adding an L2 normalization layer after the feature extractors, which makes the features unit length. Additionally, we tried adding a loss that encourages these features to become unit length. Results are in Fig.~\ref{fig:batchnorm}. \begin{figure*} \centering \includegraphics[width=\linewidth]{plots/joint_training_batch_norm.pdf} \caption{\textbf{Recombination accuracy for BatchNorm alternatives.} We explore different strategies for fixing the problem with unreliable BN statistics. We do this for IIW+DCC{}. Using batch-statistics at test time works best.} \label{fig:batchnorm} \end{figure*} First of all, we observe that using the standard aggregated training statistics works significantly worse than using batch statistics: we only get 32.4\% recombination accuracy. This shows that it is important to use accurate statistics when recombining network components. Next, using the aggregated statistics over the whole test set and simply using statistics per batch perform best, outperforming alternatives by a significant margin. Therefore we use single batch statistics at test time throughout our paper. \subsection{Reaching the compatibility upper bound} \label{sec:upper_bound} To study our compatibility methods in a controlled setting, we repeat the analysis of the main paper (Sec.
4 %
), this time training both networks on the CIFAR-10 dataset. As the tasks are identical in this setting, the components of the two networks could, in principle, become \emph{perfectly compatible}. In contrast, when the networks are trained on two different datasets, a classification head optimized for one is not expected to yield top accuracy on the other. We show results in Fig.~\ref{fig:cifar10_cifar10}. In this controlled setting, any combination of two or three of our compatibility methods allows recombining network components into a new network $n_{ab}(\cdot)$ or $n_{ba}(\cdot)$ without a significant loss of accuracy: they all perform within 1.1\% of the upper bound of using the networks $n_{aa}(\cdot)$ and $n_{bb}(\cdot)$ directly. This shows that our methods are strong enough to achieve perfect compatibility, when the data allows it. In addition, advances in self-supervised learning could be used in our method to further improve compatibility (see Sec.~\ref{sec:additional_discussion} above). \begin{figure*}[th] \centering \includegraphics[width=1\linewidth]{plots/joint_training_cifar10_cifar10_40k.pdf} \caption{ \textbf{Recombination accuracy for different methods when training both networks on CIFAR-10.} The numbers are an average over 3 runs. } \label{fig:cifar10_cifar10} \end{figure*} \subsection{Best layer for making features compatible} \label{sec:layer_compatibility} We study where to split the network into a feature extractor and a classification head in order to obtain the best compatibility for IIW{}+RP{} (Fig.~\ref{fig:layer_compatibility}). The results show that splitting at later layers leads to slightly lower recombination accuracy{} until the first block of stage 3. Splitting even later results in considerable drops in recombination accuracy. This suggests that early features are the most compatible and re-usable across datasets, while mid-level features can also be made compatible quite well.
Instead, we hypothesize that late features are already highly specific to the trained network, and therefore harder to make compatible. Similar observations were made in analysis papers~\cite{li16iclr,mehrer18ccn,kornblith19icml}, where a recurrent result is that early network layers consistently learn the same features, while later layers learn increasingly specialized and different features, even if networks are trained on the same dataset. \begin{figure*}[th] \centering \includegraphics[width=\linewidth]{plots/architecture_swipe_recombination_accuracy.pdf} \caption{ \textbf{Effects of where to split the network.} We plot recombination accuracy averaged over 5 runs for IIW{}+RP{}, while varying the layer at which we split the network into a feature extractor and target task head. } \label{fig:layer_compatibility} \end{figure*} \subsection{Varying the number of common classes} \label{sec:dcc_varying_classes} When we use DCC{} to produce compatible components, we use all 9 classes that STL-10 and CIFAR-10 have in common. Here, we investigate what effect the number of common classes in the DCC{} head has on performance, where we vary the classes used from 2 to 9. Fig.~\ref{fig:varying_classes} shows results. We find that the number of classes in the DCC{} head has a major effect on compatibility. Going from 9 to 8 classes has only a minimal effect on performance. Instead, reducing to 6 classes or fewer has a strong negative effect on the recombination accuracy{} ($\leq 59.4\%$). In this regime, DCC{} is outperformed by RP{} (61.8\%). Note that while DCC{} trains only on the images with common classes, RP{} always trains on all images. It is unclear whether the drop in accuracy is caused by a lack of diversity in classes, or by using less training data. We plan to investigate this in future work.
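To make the data selection behind DCC{} concrete, the following sketch (our own; the function and variable names are illustrative, not from any released code) keeps only the examples whose class is shared between the two datasets and remaps the surviving labels to a contiguous range for the shared head:

```python
import numpy as np

def common_class_subset(images, labels, common_classes):
    # Keep only examples whose label both datasets share, and remap the
    # surviving labels to 0..C-1 for the shared DCC classification head.
    common = sorted(common_classes)
    remap = {c: i for i, c in enumerate(common)}
    mask = np.isin(labels, common)
    new_labels = np.array([remap[c] for c in labels[mask]])
    return images[mask], new_labels
```

Training RP{} instead uses every image, which is one candidate explanation for the accuracy gap discussed above.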
\begin{figure*}[t] \centering \includegraphics[width=1\linewidth]{plots/num_classes_sweep.pdf} \caption{ \textbf{Recombination accuracy when varying the number of common classes.} The numbers are an average over 3 runs for IIW+DCC{}, where we vary the number of classes on which the shared DCC{} head is trained. } \label{fig:varying_classes} \end{figure*} \subsection{Per-dataset results for our analysis} \label{sec:per_dataset_results} In our main paper, we reported averages over the CIFAR-10 and STL-10 datasets (Fig.~2%
). For completeness and for better reproducibility, we also report the per-dataset results in Fig.~\ref{fig:analysis_detailed}. \begin{figure*}[t] \centering \begin{subfigure}{\linewidth} \includegraphics[width=\linewidth]{plots/joint_training_results_cifar10.pdf} \caption{\textbf{Recombination accuracy, results on CIFAR-10 only.}} \end{subfigure} \begin{subfigure}{\linewidth} \includegraphics[width=\linewidth]{plots/joint_training_results_stl10.pdf} \caption{\textbf{Recombination accuracy, results on STL-10 only.}} \end{subfigure} \caption{\textbf{Recombination accuracy.} In this figure we show the individual results of CIFAR-10 and STL-10 (averages are in Fig.~2 of the main paper). %
All numbers are averages over 10 runs, while the bars represent standard deviations.} \label{fig:analysis_detailed} \end{figure*} \subsection{Computational complexity \& Scalability} Our methods require limited additional computation time, allowing us to apply them to make many different networks compatible. Specifically: (i) IIW does not require additional computation time. (ii) RP doubles the computational cost per task since each image needs to be processed twice (original and rotated version). (iii) DCC is cheaper, as it only requires the features to be passed through an additional head. Our method scales linearly in the number of tasks, as does training individual networks.
Hence, our method scales well computationally, allowing us to use it on several datasets (Sec. 5.3) or networks (Sec. 5.2 \& 5.3) at once. \subsection{Unsupervised domain adaptation} \label{sec:uda} \para{Application.} We transfer knowledge from a source domain with labeled data to a target domain with unlabeled data. \para{Experimental setup (Fig.~\ref{fig:setups}b).} \new{We first train a model on the source training set, where the model consists of a} feature extractor{}, a classification head{} and an auxiliary rotation prediction head RP{} (initialized by training for rotation prediction{} on ILSVRC-12~\cite{russakovsky15ijcv}). We then want to adapt the feature extractor{} of this source model to the target domain while preserving compatibility with the original classification head. We do this by freezing the RP{} head while fine-tuning the feature extractor{} on the unlabeled target training set. For this we minimize the self-supervised RP{} loss for \att{1000} steps. Finally, we recombine this updated feature extractor{} with the source domain classification head{} to predict classes on the target domain. We report average class accuracy on the target test set. We evaluate adapting between CIFAR-10 and STL-10, as is common in this area~\cite{ghifary16eccv,shu18iclr,sun19arxiv_a}. We use here a larger WRN-28~\cite{zagoruyko16bmvc} architecture as in~\cite{sun19arxiv_a} (see Appendix~\ref{sec:implementation_details}). \para{Results (Tab.~\ref{tab:uda}).} We compare our method to previous approaches and two baselines based on our source model. One baseline uses the model as is. The other updates BN statistics at test time, which performs significantly better. This confirms their importance, as discussed in Sec.~\ref{sec:analysis} and observed by~\cite{li16icrlw}. Our method improves performance further and matches the state-of-the-art on adapting from CIFAR-10 to STL-10~\cite{lee19iccv} (\att{82.6\%}). 
The methods \cite{kumar18neurips,sun19arxiv_a} perform best for adapting from STL-10 to CIFAR-10. On average over both adaptation directions, our method is competitive (\att{78.8\%} for~\cite{kumar18neurips} \textit{vs.} \att{77.9\%} for us). Importantly, our method is simpler and faster than competing methods. % The state-of-the-art~\cite{kumar18neurips} combines multiple models, includes a domain discriminator~\cite{ganin15icml,ganin16jmlr}, employs a custom network architecture~\cite{shu18iclr}, and trains for 80000 steps on the joint source and target training sets. Instead, we use a single ResNet model and fine-tune only for 1000 steps on the target domain, which makes our method computationally faster. Finally,~\cite{sun19arxiv_a} gets significant gains by combining multiple self-supervised objectives, which we could potentially include as well. \input{domain_adaptation_table_v2.tex} \subsection{Compatibility across feature extractors with different architectures} \begin{figure}[t] \includegraphics[width=1\linewidth,trim=0 5pt 10pt 0, clip]{plots/multiple_backbones.pdf} \caption{\textbf{Accuracy when transferring a classification head to compatible feature extractors}. 
} \label{fig:multiple_backbones} \end{figure} \label{sec:backbones} \begin{figure*}[t] \centering \includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=0.96\linewidth]{plots/xp5_legend_v2.pdf} \begin{subfigure}[t]{0.3\textwidth} \includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1\linewidth]{plots/xp5_STL-10.pdf} \caption{STL-10} \end{subfigure} \hspace{8pt} \begin{subfigure}[t]{0.3\textwidth} \includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1\linewidth]{plots/xp5_CIFAR-10.pdf} \caption{CIFAR-10} \end{subfigure} \hspace{8pt} \begin{subfigure}[t]{0.3\textwidth} \includegraphics[trim={0cm 0cm 0cm 0cm},clip,width=1\linewidth]{plots/xp5_CIFAR-100.pdf} \caption{CIFAR-100} \end{subfigure} \caption{ \textbf{Average class accuracy of recombined components as a function of fine-tuning steps (log scale, up to 5 epochs).} We overlay accuracy values directly after recombination and after 1 epoch. Through training with RP{}, components are compatible, enabling direct recombination. } \label{fig:xp5} \end{figure*} \para{Application.} We want to achieve compatibility between feature extractors having different architectures, thus enabling transferring task heads across them. As a practical application we consider a single classification task which runs on many devices, each with a hardware-tailored network architecture (e.g. a powerful server, a standard desktop, a mobile phone). Normally, every time the set of classes to be recognized changes, all networks need to be retrained. Instead, if their feature extractors are compatible, only one extractor and its corresponding classification head need to be retrained. We can then transfer that classification head to all other models. This greatly facilitates deployment of the updated classifier to all client devices, especially if different people are responsible for maintaining the different models. 
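Conceptually, deployment then reduces to attaching the single retrained head to every compatible extractor. A minimal sketch with toy linear extractors standing in for the real backbones (all names, shapes, and weights here are our own illustration, not the actual architectures):

```python
import numpy as np

def recombine(extractor, head):
    # Build a full network n(x) = head(extractor(x)) from compatible components.
    return lambda x: head(extractor(x))

rng = np.random.default_rng(0)
# Toy extractors with different weights but a shared 64-d feature interface,
# mimicking the 1x1 projection used to fit every backbone to one DCC head.
W_a = rng.standard_normal((8, 64))
W_b = rng.standard_normal((8, 64))
extractor_a = lambda x: np.maximum(x @ W_a, 0.0)
extractor_b = lambda x: np.maximum(x @ W_b, 0.0)

W_head = rng.standard_normal((64, 10))  # stands in for the updated 10-class head
head = lambda z: z @ W_head

x = rng.standard_normal((5, 8))
for net in (recombine(extractor_a, head), recombine(extractor_b, head)):
    print(net(x).shape)  # (5, 10) for every backbone
```

Only the head is retrained when the class set changes; each client then receives the new head and plugs it into its own extractor.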
\para{Experimental setup (Fig.~\ref{fig:setups}c).} We consider three feature extractor architectures: ResNet-56~\cite{he16cvpr}, Wide ResNet-56~\cite{zagoruyko16bmvc}, MobileNet V2~\cite{sandler18cvpr}. We combine these with a DCC{} head based on MobileNet V2. In this application, DCC{} not only encourages compatibility but also directly solves the target task (as there is just one task). We split MobileNet V2 into components after the 11-th inverted ResNet block (out of 17). To fit all extractors to a single DCC{} head, we add to each extractor a 1x1 convolution layer with 64 output channels. Differences in spatial resolution are resolved by the average pooling in the penultimate layer of the MobileNet V2 DCC{}. At first, we assume that we only have data for the first 5 classes of CIFAR-10. We use these to jointly train the three feature extractors with the DCC{} head. At this point, each `feature extractor plus DCC{}' network addresses the target task for a particular device. Next, suppose we obtain labeled data for 5 new classes (resulting in the full CIFAR-10 training set). Instead of re-training everything, we only want to update the DCC{} head. To do so, we first extend the classification layer of the DCC{} head to handle 10 classes. Then, we choose the trained Wide ResNet-56 as the \emph{reference feature extractor}. We freeze it, attach the DCC{} head, and fine-tune this combination on the CIFAR-10 training set. Finally, we attach the updated DCC{} to each individual extractor and evaluate on the CIFAR-10 test set. Note that in this process we updated none of the feature extractors after the initial training phase (updating it is investigated in Appendix~\ref{sec:additional_classifier_transfer}). \para{Results (Fig.~\ref{fig:multiple_backbones}).} As an upper bound we train the individual networks on CIFAR-10 (also with a rotation prediction head). 
As an optimistic lower bound we consider perfectly discriminating the first five classes, leading to 50\% accuracy. As Fig.~\ref{fig:multiple_backbones} shows, recombining either ResNet-56 or MobileNet V2 with the updated DCC{} head leads to excellent accuracies of 91.7\% and 91.6\% respectively. While the upper bounds are even higher at 94.6\% and 93.3\%, our method requires much less computation and greatly facilitates deployment. Part of the gap to the upper bound can be attributed to changes in architectures: we added 1x1 convolutions and use mixed architectures with a simple MobileNet head. If we redo the upper bound using these changed architectures, we get accuracies between 92.7\% and 92.8\%. This suggests that optimizing architectures would lead to even better results. \subsection{Faster transfer learning} \label{sec:faster_transfer} \para{Application.} In transfer learning, the goal is to improve results on a target task by reusing knowledge derived from a related source task. In the deep learning era, the standard approach is to reuse the feature extractor{} of a model trained on the source training set. This source feature extractor{} and a randomly initialized task-specific head are combined into a new model, which is then fine-tuned on the target training set. When there are many possible source tasks, this process is computationally expensive,\new{~\textit{e.g.} ~\cite{zamir18cvpr} reports consuming 50'000 GPU hours.} Instead, we propose to train an initial target task head{} and \emph{reuse} it when exploring different source tasks to transfer from (Fig.~\ref{fig:setups}d). For this, we recombine the source feature extractor{} and the initial target task head{} into a new model. When these components are compatible, the benefits of transferring from a potential source can be evaluated and capitalized on with little or no fine-tuning on the target training set.
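The source-exploration loop we describe can be sketched as follows; `extractors` maps candidate source models to their feature functions, and all names here are illustrative. Because the target task head{} is reused, each candidate is scored without any fine-tuning:

```python
import numpy as np

def rank_sources(extractors, target_head, x_val, y_val):
    # Score each candidate source feature extractor by the accuracy of the
    # frozen target-task head directly after recombination.
    scores = {}
    for name, f in extractors.items():
        preds = target_head(f(x_val)).argmax(axis=1)
        scores[name] = float((preds == y_val).mean())
    return scores

# Toy check: an extractor whose features line up with the head scores 1.0,
# while one with permuted features scores poorly.
x_val, y_val = np.eye(3), np.array([0, 1, 2])
head = lambda z: z
extractors = {"aligned": lambda x: x, "shuffled": lambda x: x[:, ::-1]}
print(rank_sources(extractors, head, x_val, y_val))
```

The highest-scoring source can then be fine-tuned briefly, avoiding a full fine-tuning run per candidate.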
\para{Experimental setup (Fig.~\ref{fig:setups}d).} We study transferring from a model trained on ILSVRC-12 as the source task. For this, we simply replace the feature extractor{} of the target task{} model with the source one, while keeping the target task{} head. We make these components compatible by training with rotation prediction{} (RP{}) and an incremental training scheme (Sec.~\ref{sec:training_schemes}). In this scheme, we first need to set the weights of RP{} ($\vec{\Theta}_s$), which we obtain by training a model on CIFAR-100~\cite{krizhevsky09}. Then, the source and target models are trained with this frozen rotation prediction{} head, forcing their feature extractor{} to produce features that work with that same rotation prediction{} head. We compare our method against re-initializing the target task head or recombining independently trained components. For these baselines, we also use rotation prediction as an auxiliary task for fair comparison, but initialize the weights of its head randomly for each network. We evaluate transferring a feature extractor{} trained on ILSVRC-12~\cite{russakovsky15ijcv} to different target tasks, here CIFAR-10~\cite{krizhevsky09}, STL-10~\cite{coates11aistats}, or CIFAR-100~\cite{krizhevsky09}. We measure transfer efficiency as the accuracy directly after recombination, and after a few epochs of fine-tuning on the target task{}. \para{Results (Fig.~\ref{fig:xp5}).} Our method achieves strong results in terms of accuracy on the target task, even without any fine-tuning. Here, the networks are trained separately and only made compatible via RP{} and optionally IIW{}. Nonetheless, our method achieves 40.0\%-66.4\% recombination accuracy{}, despite the differences in the datasets and their class vocabularies. In contrast, the baselines yield random performance before fine-tuning, as expected. After 1 epoch of fine-tuning our methods are still significantly better than the baselines.
They converge only after fine-tuning for several epochs (Fig~\ref{fig:xp5}). In summary, our method reduces the need for fine-tuning when transferring components. As this is a core part of existing transfer learning methods ~\cite{zamir18cvpr, dwivedi19cvpr, achille19iccv, yan20arxiv_b}, our method can help speed these up. \section{Introduction} \label{sec:introduction} \input{introduction.tex} \vspace{4pt} \section{Related Work} \label{sec:related_work} \input{related_work.tex} \vspace{4pt} \section{Method} \label{sec:method} \input{method.tex} \vspace{4pt} \section{Analysis of compatibility} \label{sec:experiments} \label{sec:analysis} \input{experiments.tex} \vspace{4pt} \section{Applications} \label{sec:applications} \input{applications.tex} \vspace{4pt} \section{Conclusion} \label{sec:conclusion} \input{conclusion.tex} {\small \bibliographystyle{aaai21} \subsection{Compatibility through Self-Supervision (RP{})} \label{sec:ss_alignment} We propose to make components compatible via a generally applicable auxiliary task{}, based on a self-supervised objective. Self-supervision relies on supervised learning techniques, but the labels are created from the unlabelled input data itself. We adopt the approach of previous methods like~\cite{noroozi16eccv,doersch17iccv,gidaris18iclr}. First, we transform an image $\vec{x}$ with $g\left(\vec{x},\vec{s}\right)$, a function which applies a transformation $\vec{s}$. Then, the task of the network is to predict what transformation was applied (its label). To achieve compatibility, this auxiliary task{} has its own head $s$, but operates on the features produced by the extractors of the respective target tasks (Fig.~\ref{fig:setups}a). Specifically, its prediction function is $s(f(\vec{x};\vec{\Phi}_t);\vec{\Theta}_{s})$, where $\vec{\Theta}_s$ are the parameters of the auxiliary task head{}. 
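As a concrete illustration for the rotation-prediction instantiation used below, generating the transformed inputs and their self-supervised labels takes only a few lines (a toy sketch on a numpy array; `rotation_batch` is our own illustrative helper, not from the paper):

```python
import numpy as np

def g(x, s):
    """Transformation function g(x, s): rotate image x by s * 90 degrees."""
    return np.rot90(x, k=s)

def rotation_batch(x):
    """All four rotated copies of x together with their labels s."""
    S = [0, 1, 2, 3]  # encodes the angles {0, 90, 180, 270} degrees
    return np.stack([g(x, s) for s in S]), np.array(S)

x = np.arange(16.0).reshape(4, 4)      # a toy 4x4 "image"
imgs, labels = rotation_batch(x)
assert imgs.shape == (4, 4, 4) and list(labels) == [0, 1, 2, 3]
assert np.allclose(g(g(x, 2), 2), x)   # two 180-degree rotations = identity
```

The shared head $\vec{\Theta}_s$ would then classify $\vec{s}$ from $f(g(\vec{x},\vec{s});\vec{\Phi}_t)$ for every task $t$.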
During training, we minimize the target task{} losses and the auxiliary task{} loss for both tasks: \begin{equation} \begin{split} \sum_{{t \in \{a,b\}}} \sum_{\substack{(\vec{x}_i, \vec{y}_i) \in \mathcal{D}_t}} \Big[ \!\ell_t \left(h\left(f\left(\vec{x}_i; \vec{\Phi}_t\right); \vec{\Theta}_t\right), \vec{y}_i\right) \\ + \! \frac{1}{|\mathcal{S}|} \sum_{{\vec{s} \in \mathcal{S}}} \ell_s \left(h\left(f\left(g\left(\vec{x}_i,\vec{s}\right); \vec{\Phi}_t\right); \vec{\Theta}_s\right), \vec{s}\right) \Big] \end{split} \label{eq:rp} \end{equation} where $\mathcal{S}$ is the set of possible transformations, $\vec{\Theta}_s$ are the parameters of the auxiliary task head, and $\ell_s$ its associated loss. While there are target task{} parameters $\vec{\Phi}_t$ and $\vec{\Theta}_t$ specific to each task, we tie the auxiliary task{} parameters $\vec{\Theta}_{s}$ across tasks. This forces the feature extractors $f(\vec{x}_i;\vec{\Phi}_t)$ of each task $t$ to produce features that are compatible with the same auxiliary task head{}. As we show in Sec.~\ref{sec:analysis}, this leads to feature extractors that are compatible more generally, allowing us to recombine the feature extractor of one with the target task head{} of the other. \para{Choice of self-supervision task.} Throughout this work we use rotation prediction~\cite{gidaris18iclr}. The input image is rotated by an angle in $\mathcal{S} = \{0\degree,90\degree,180\degree,270\degree\}$ and the task is to classify which rotation angle was applied. For simplicity we refer to this method as \textit{compatibility through rotation prediction} (RP), but any other self-supervised objective can be used here. We discuss considerations for choosing a suitable self-supervised task in Appendix~\ref{sec:additional_discussion}. \para{Trade-offs.} This compatibility method is very general.
It only requires the shared self-supervised task to be both meaningful and non-trivial~\cite{sun19arxiv_a,tschannen19arxiv}. While such a task can be defined on almost any dataset, the quality of the induced compatibility depends on how much the target task{} and the auxiliary task{} rely on the same features. In theory, a weakly related or orthogonal self-supervised auxiliary task{} could negatively affect the performance of the network on the target task{}. \new{In practice though, it often improves performance~\cite{zhai19iccv,henaff19arxiv}. Similarly, in our experiments we only observe positive effects on performance when adding rotation prediction{}.} \subsection{Compatibility through Discriminating Common Classes (DCC{})} \label{sec:class_alignment} When tasks $a,b$ have common classes, we can directly use these to achieve compatibility, rather than resorting to a self-supervised loss. Hence, we propose an auxiliary task head{} $c$, which discriminates among these common classes. Specifically, we minimize the following loss: \begin{equation} \begin{split} \sum_{{t \in \{a,b\}}} \sum_{\substack{(\vec{x}_i, \vec{y}_i) \in \mathcal{D}_t}} \Big[ \ell_t \left(h\left(f\left(\vec{x}_i; \vec{\Phi}_t\right); \vec{\Theta}_t\right), \vec{y}_i\right) \\ + \! \ell_c \left(h\left(f\left(\vec{x}_i; \vec{\Phi}_t\right); \vec{\Theta}_c\right), \vec{y}_i\right) \cdot\vec{1}\left[\vec{y}_i\!\in\!\mathcal{C}\right] \Big] \end{split} \label{eq:dcc} \end{equation} where $\ell_c$ is the auxiliary task{} loss. It is computed only over examples in the set of common classes $\mathcal{C}$ ($\vec{1}$ is an indicator function returning 1 if its argument is true and 0 otherwise). \para{Trade-offs.} While this method is expected to achieve high compatibility, it requires the target tasks to have common classes. Depending on the scenario, the target tasks might actually have few or even no common classes. 
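The masking in Eq.~(\ref{eq:dcc}) can be sketched in a few lines (toy labels and per-example loss values, purely illustrative):

```python
import numpy as np

C = {0, 1}                                    # common classes shared by the tasks
y = np.array([0, 2, 1, 3, 0])                 # labels of one task's minibatch
loss_c = np.array([0.5, 1.2, 0.3, 0.9, 0.7])  # stand-in per-example values of l_c

# The indicator 1[y_i in C]: only examples from common classes contribute.
mask = np.isin(y, list(C)).astype(float)
aux_loss = (loss_c * mask).sum()
print(aux_loss)  # 0.5 + 0.3 + 0.7, close to 1.5
```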
\subsection{Compatibility through Identical Initial Weights (IIW{})} \label{sec:iiw} \cite{zhang19icmlw} demonstrated that for many layers in a trained network, resetting the weights of that layer to their initial values leads to a limited loss in accuracy. This suggests that the initialization defines a set of random projections which strongly shape the trained feature space. Hence, we propose to encourage compatibility simply by starting the loss minimization of both tasks from identical initial weights{} (IIW). For this method, we initialize using either identical \emph{random} weights or identical \emph{pre-trained} weights (Sec.~\ref{sec:analysis}). \para{Trade-offs.} This method only works when both tasks have identical network architectures. Moreover, it only acts at the start of training, where it makes networks identical and thus perfectly compatible. \subsection{Training schemes} \label{sec:training_schemes} For RP{} and DCC{} we consider two training schemes: \emph{joint training{}} and \emph{incremental training{}}. In \emph{joint training{}}, we minimize~\eqref{eq:rp} (or~\eqref{eq:dcc}) by alternating between tasks $a$ and $b$, each time minimizing the loss over a single minibatch. This resembles multi-task training~\cite{doersch17iccv,maninis19cvpr}, but here each task has its own network, rather than having a single network with shared computation. By training jointly, both target tasks ${a, b}$ influence the auxiliary task head{} parameters and use that head to solve the auxiliary task{}. In \emph{incremental training{}}, we first train the network $n_{aa}$ by minimizing~\eqref{eq:rp} (or~\eqref{eq:dcc}) over task $a$ only. This also learns the parameters of the auxiliary task head{}. Later, we train the network $n_{bb}$ on task $b$, but use the auxiliary task head{} with its parameters frozen. This encourages compatibility between $n_{aa}$ and $n_{bb}$, without requiring both of them to be trained at the same time.
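The control flow of the incremental scheme can be sketched as follows (the parameter blocks are scalar stand-ins and `train` is a mock optimizer, both purely illustrative):

```python
# Scalar stand-ins for the parameter blocks of the two networks.
params = {"Phi_a": 1.0, "Theta_a": 1.0,   # network n_aa (task a)
          "Phi_b": 1.0, "Theta_b": 1.0,   # network n_bb (task b)
          "Theta_s": 1.0}                 # shared auxiliary task head

def train(names, params, steps=3):
    """Mock optimizer: update exactly the listed parameter blocks."""
    for _ in range(steps):
        for n in names:
            params[n] *= 0.9
    return params

# Phase 1: train n_aa on task a, learning the auxiliary head as well.
train(["Phi_a", "Theta_a", "Theta_s"], params)
theta_s_frozen = params["Theta_s"]

# Phase 2: train n_bb on task b with the auxiliary head frozen.
train(["Phi_b", "Theta_b"], params)
assert params["Theta_s"] == theta_s_frozen  # the frozen head is untouched
```

Joint training would instead alternate minibatches of tasks $a$ and $b$, with $\vec{\Theta}_s$ updated by both.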
\section{Introduction} In 1973, Bekenstein \cite{bk73} and Hawking \cite{bcw73} made the remarkable prediction, for the first time in the history of science, that the entropy of a dark star (a so-called black hole, BH) is proportional to a ``geometric quantity'', namely the area of its event horizon (EH). This immediately suggests that the outer BH entropy \footnote{We know that quantum mechanics reduces to classical mechanics when $\hslash= 0$. If we take this limit in ${\cal S}_{+}$, the outer entropy diverges. Therefore we can say that there is no classical BH entropy: ${\cal S}_{+}$ is a purely \emph{quantum BH entropy} for ${\cal H}^{+}$. Similarly, ${\cal S}_{-}$ is a purely \emph{quantum inner BH entropy}.} is given by \begin{eqnarray} {\cal S}_{+} &=& \frac{k_{B}c^3}{\hslash}\frac{{\cal A}_{+}}{4G} ~\label{cd1} \end{eqnarray} where $k_{B}$ is the Boltzmann constant from statistical mechanics, $c$ is the speed of light in free space, which comes from the special theory of relativity, $\hslash$ is the reduced Planck constant, which comes from quantum mechanics, ${\cal A}_{+}$ is the area of ${\cal H}^{+}$, which comes purely from the geometry of the spacetime, and $G$ is a universal constant that comes from gravity. If a BH has another horizon, the so-called inner or Cauchy horizon (${\cal H}^{-}$), there must exist an \emph{inner BH entropy}, which can be defined as \begin{eqnarray} {\cal S}_{-} &=& \frac{k_{B}c^3}{\hslash}\frac{{\cal A}_{-}}{4G} ~\label{cd2} \end{eqnarray} where ${\cal A}_{-}$ is the area of ${\cal H}^{-}$, which likewise comes from the inner geometry. The product of the outer and inner BH entropies then reads \begin{eqnarray} {\cal S}_{+} {\cal S}_{-} &=& \left(\frac{k_{B}c^3}{\hslash G}\right)^2 \frac{{\cal A}_{+}{\cal A}_{-}}{16} ~\label{cd3} \end{eqnarray} which implies that it is proportional to the product of the geometric quantities of ${\cal H}^{\pm}$.
Now if we define the fundamental length scale, the so-called Planck length, i.e., \begin{eqnarray} \ell_{Pl} &=& \sqrt{\frac{G \hslash}{c^3}} ~\label{cd4} \end{eqnarray} then the product of BH entropies becomes \begin{eqnarray} {\cal S}_{+} {\cal S}_{-} &=& \frac{{\cal A}_{+}{\cal A}_{-}}{16 \ell_{Pl}^4} ~\label{cd5} \end{eqnarray} where we have set $k_{B}=1$. This area (or entropy) product formula for ${\cal H}^{\pm}$ has been examined so far for a wide class of BHs \cite{ah09,cgp11,castro12,sd12,mv13,val13,pp14,jh,horava15,grg16,grg1} without taking into account any logarithmic corrections. Without logarithmic corrections, the product of the areas (or entropies) of ${\cal H}^{\pm}$ is universal in some cases and fails to be universal in others. Interestingly, once the logarithmic corrections to this product are taken into account, the product is generally no longer universal. Our aim here is to derive the logarithmic correction to the BH entropy of ${\cal H}^{\pm}$ and to its product \emph{via the Cardy prescription} \cite{cardy,cardy1,vafa,ajak,carlip,carlip1,carlip2,carlip3,carlip4,carlip5,carlip6,solo,km,km1,skm,psm,suneeta,kk}. On the other hand, in the framework of string theory, for the BPS (Bogomol'ni-Prasad-Sommerfield) class of BHs there has been a proposal that the product of the inner and outer BH entropies is quantized \cite{cgp11}, namely \begin{eqnarray} {\cal S}_{+} {\cal S}_{-} &=& \left(2\pi \ell_{Pl}^2\right)^2 N , \,\, N \in {\mathbb{N}} ~.\label{cd6} \end{eqnarray} It should also be noted that Larsen \cite{finn} proposed that both the outer and the inner BH horizons are quantized in Planck units; that is, the product of the inner and outer areas (or entropies) of ${\cal H}^{\pm}$ is quantized in terms of Planck units. In the next section, we calculate the logarithmic corrections to the BH entropy product formula by using the \emph{Cardy formula}.
\section{Logarithmic Corrections to BH Entropy Product Formula via Cardy method} In order to compute the logarithmic corrections to the density of states of ${\cal H}^{\pm}$, we begin with an arbitrary 2D CFT with central charge $c$, using the Virasoro algebra of ${\cal H}^{\pm}$ \cite{brown,fran,ralph} $$ \left[L_{m, \pm}, L_{n, \pm} \right] = \left(m-n\right) L_{m+n, \pm}+\frac{c}{12} m \left(m^2-1\right) \delta_{m+n, 0} $$ $$ \left[\tilde{L}_{m, \pm}, \tilde{L}_{n, \pm} \right] = \left(m-n\right) \tilde{L}_{m+n, \pm}+\frac{c}{12} m \left(m^2-1\right) \delta_{m+n, 0}\\ $$ \begin{eqnarray} \left[L_{m, \pm}, \tilde{L}_{n, \pm} \right] &=& 0 ~ \label{eq1} \end{eqnarray} where the generators $L_{n, \pm}$ and $\tilde{L}_{n, \pm}$ correspond to ``holomorphic'' and ``anti-holomorphic'' diffeomorphisms, respectively. The partition function of ${\cal H}^{\pm}$ on the 2-torus of modulus $\tau=\tau_{1}+i\tau_{2}$ is defined as \begin{eqnarray} {\cal Z}_{\pm} (\tau, \tilde{\tau}) &=& Tr \, e^{2\pi i\tau L_{0, \pm}} e^{-2\pi i \tilde{\tau} \tilde{L}_{0,\pm}} \nonumber\\ &=& \sum \rho_{\pm} \left(\Delta_{\pm}, \tilde{\Delta}_{\pm} \right) e^{2\pi i\tau \Delta_{\pm}} e^{-2\pi i \tilde{\tau} \tilde{\Delta}_{\pm}} ~\label{eq2} \end{eqnarray} where $\rho_{\pm}$ is the number of states with eigenvalues $L_{0, \pm}=\Delta_{\pm}$, $\tilde{L}_{0,\pm}=\tilde{\Delta}_{\pm}$. Once the partition function ${\cal Z}_{\pm}$ is known, the density of states $\rho_{\pm}$ can be extracted via contour integration. To this end, we set $q=e^{2\pi i\tau}$ and $\tilde{q}=e^{2\pi i\tilde{\tau}}$.
The density of states of ${\cal H}^{\pm}$ is then given by the contour integral \begin{eqnarray} \rho_{\pm} \left(\Delta_{\pm}, \tilde{\Delta}_{\pm} \right) &=& \frac{1}{(2\pi i)^2} \int \frac{dq}{q^{\Delta_{\pm}+1}} \frac{d\tilde{q}}{\tilde{q}^{\tilde{\Delta}_{\pm}+1}} {\cal Z}_{\pm} (q, \tilde{q}) ~\label{eq3} \end{eqnarray} where the contours are taken around $q=0$ and $\tilde{q}=0$. Cardy \cite{cardy,cardy1} found that the partition function of ${\cal H}^{\pm}$ is given by \begin{eqnarray} {\cal Z}_{\pm} (\tau, \tilde{\tau}) &=& \frac{Tr \, e^{2\pi i \left(L_{0, \pm}-\frac{c}{24}\right)\tau} e^{-2\pi i \left(\tilde{L}_{0, \pm}-\frac{c}{24}\right)\tilde{\tau}}}{e^{\frac{\pi c}{6}\tau_{2}}} ~. \label{eq4} \end{eqnarray} Interestingly, this quantity is ``modular-invariant''; it is also universal within CFT. Using this result, we can evaluate the above integral by the steepest-descent method. Now let $\Delta_{0, \pm}$ be the lowest eigenvalue of $L_{0, \pm}$ and define \begin{eqnarray} \bar{{\cal Z}}_{\pm} (\tau) &=& \sum \rho_{\pm} \left(\Delta_{\pm} \right) e^{2\pi i \left(\Delta_{\pm}-\Delta_{0, \pm}\right)\tau} \nonumber\\ &=& \rho_{\pm} \left(\Delta_{0, \pm}\right)+ \rho_{\pm} \left(\Delta_{1, \pm}\right) e^{2\pi i \left(\Delta_{1, \pm}-\Delta_{0, \pm} \right)\tau} +... ~. \nonumber\\ \label{eq5} \end{eqnarray} For convenience, we have omitted the $\tilde{\tau}$ dependence. Then it can easily be shown that \begin{eqnarray} \rho_{\pm} \left(\Delta_{\pm}\right)= \int e^{\frac{2\pi i}{\tau}\left(\frac{c}{24}-\Delta_{0, \pm} \right)} e^{2\pi i\tau \left(\frac{c}{24}-\Delta_{\pm}\right)} \bar{{\cal Z}}_{\pm} \left(-\frac{1}{\tau}\right) d\tau ~.\label{eq6} \end{eqnarray} For large values of $\tau_{2}$, $\bar{{\cal Z}}_{\pm} \left(-\frac{1}{\tau}\right)$ approaches the constant value $\rho_{\pm} \left(\Delta_{0, \pm}\right)$.
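For completeness, the steepest-descent step can be sketched explicitly. Writing the exponent of the integrand in Eq. (\ref{eq6}) as (setting $\Delta_{0, \pm}=0$) \begin{eqnarray} f(\tau) &=& 2\pi i\tau \left(\frac{c}{24}-\Delta_{\pm}\right)+\frac{2\pi i}{\tau}\frac{c}{24} ~,\nonumber \end{eqnarray} the saddle-point condition $f'(\tau_{*})=0$ gives $\tau_{*}\approx i\sqrt{\frac{c}{24\Delta_{\pm}}}$ for $\Delta_{\pm}\gg c$, so that \begin{eqnarray} f(\tau_{*}) &\approx& 2\pi\sqrt{\frac{c\Delta_{\pm}}{24}}+2\pi\sqrt{\frac{c\Delta_{\pm}}{24}} \;=\; 2\pi\sqrt{\frac{c\Delta_{\pm}}{6}} ~,\nonumber \end{eqnarray} while the Gaussian fluctuations around $\tau_{*}$ supply the power-law prefactor.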
Therefore the above integral becomes \begin{eqnarray} \rho_{\pm} \left(\Delta_{\pm}\right) &\approx& \left(\frac{c}{96 \Delta_{\pm}^3} \right)^{\frac{1}{4}} e^{2\pi \sqrt{\frac{c\Delta_{\pm}}{6}}} ~.\label{eq7} \end{eqnarray} The exponential part of Eq. (\ref{eq7}) is precisely the Cardy formula. One can now apply this formula to compute the entropy of ${\cal H}^{\pm}$ for the rotating BTZ BH and compare it with the result obtained by Strominger \cite{strom}. The event horizon and the Cauchy horizon of the rotating BTZ BH \cite{btz92,jetpl} are given by \begin{equation} r_{\pm}= \sqrt{4G_{3}M \ell^2\left(1\pm \sqrt{1-\frac{J^2}{M^2 \ell^2}} \right)} ~\label{eq8} \end{equation} where $G_{3}$ is the 3D Newtonian constant. One can then easily derive the ADM mass and angular momentum parameters \begin{equation} M = \frac{r_{+}^2+r_{-}^2}{8G_{3}\ell^2}, \,\,\, J=\frac{r_{+}r_{-}}{4G_{3}\ell} ~\label{eq9} \end{equation} where $\ell^2=-\frac{1}{\Lambda}$ and $\Lambda$ is the cosmological constant. The central charges, derived by Brown and Henneaux \cite{brown} from the asymptotic symmetries of 3D gravity with negative cosmological constant, which are governed by a pair of Virasoro algebras, are \begin{eqnarray} c &=& \tilde{c}=\frac{3\ell}{2G_{3}} ~.\label{eq10} \end{eqnarray} The eigenvalues of the generators $L_{0,\pm}$ and $\tilde{L}_{0,\pm}$ of the Brown-Henneaux Virasoro algebras, derived in \cite{bana}, are \begin{eqnarray} \Delta_{\pm} &=& \frac{\left(r_{\pm}+r_{\mp} \right)^2}{16 G_{3} \ell}, \,\,\, \tilde{\Delta}_{\pm} = \frac{\left(r_{\pm}-r_{\mp} \right)^2}{16 G_{3} \ell}~.\label{eq11} \end{eqnarray} Using Eq. (\ref{eq10}) and Eq. (\ref{eq11}), one can compute the exponential part in Eq.
(\ref{eq7}) as \begin{eqnarray} 2\pi \sqrt{\frac{c \Delta_{\pm}}{6}}+2\pi \sqrt{\frac{\tilde{c} \tilde{\Delta}_{\pm}}{6}} &=& \frac{2\pi r_{\pm}}{4G_{3}} ~\label{eq12} \end{eqnarray} which reproduces the standard Bekenstein-Hawking entropy for the 3D BH; this was first observed by Strominger \cite{strom} for ${\cal H}^{+}$ in 1998. We have verified here that this entropy calculation is valid for both ${\cal H}^{\pm}$. Using Eq. (\ref{eq7}), one can easily compute the density of states of ${\cal H}^{\pm}$ as \begin{eqnarray} \rho_{\pm} \left(\Delta_{\pm}, \tilde{\Delta}_{\pm} \right) &\approx& \frac{8G_{3}\ell^2}{\left(r_{\pm}^2-r_{\mp}^2\right)^ {\frac{3}{2}}} e^{\frac{2\pi r_{\pm}}{4G_{3}}} ~.\label{eq13} \end{eqnarray} Therefore, the logarithmically corrected entropy of ${\cal H}^{\pm}$ reads \begin{eqnarray} {\cal S}_{\pm} &\sim& \frac{2\pi r_{\pm}}{4G_{3}}-\frac{3}{2} \ln \left|\frac{r_{\pm}^2-r_{\mp}^2}{ G_{3}^2} \right|+const.\\ &=& \frac{2\pi r_{\pm}}{4G_{3}}-\frac{3}{2} \ln \left|\frac{2\pi r_{\pm}}{G_{3}} \right|- \frac{3}{2} \ln \left|\kappa_{\pm} \ell\right|+const.~ \label{eq14} \end{eqnarray} where the surface gravity is defined as \begin{eqnarray} \kappa_{\pm} &=& \frac{r_{\pm}^2-r_{\mp}^2} {\ell^2 r_{\pm}} ~.\label{eq15} \end{eqnarray} Therefore, the logarithmic terms in Eq. (\ref{eq14}) have exactly the same form as those obtained by Kaul and Majumdar \cite{km1} for ${\cal H}^{+}$ of a spherically symmetric BH in 4D. It should be noted that this is also valid for ${\cal H}^{-}$.
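These relations are easy to check numerically; the following sketch (with illustrative numerical values, in units where $c=\hslash=1$) verifies Eqs. (\ref{eq8})--(\ref{eq12}):

```python
import numpy as np

G3, ell = 1.0, 2.0            # 3D Newton constant and AdS radius (illustrative)
r_plus, r_minus = 3.0, 1.0    # chosen horizon radii

# Eq. (9): ADM mass and angular momentum from the horizon radii.
M = (r_plus**2 + r_minus**2) / (8 * G3 * ell**2)
J = r_plus * r_minus / (4 * G3 * ell)

# Eq. (8): the horizon radii are recovered from (M, J).
disc = np.sqrt(1 - J**2 / (M**2 * ell**2))
assert np.isclose(np.sqrt(4 * G3 * M * ell**2 * (1 + disc)), r_plus)
assert np.isclose(np.sqrt(4 * G3 * M * ell**2 * (1 - disc)), r_minus)

# Eqs. (10)-(12): the Cardy exponent equals the Bekenstein-Hawking entropy.
c = 3 * ell / (2 * G3)
Delta = (r_plus + r_minus)**2 / (16 * G3 * ell)
Delta_t = (r_plus - r_minus)**2 / (16 * G3 * ell)
cardy = 2 * np.pi * (np.sqrt(c * Delta / 6) + np.sqrt(c * Delta_t / 6))
assert np.isclose(cardy, 2 * np.pi * r_plus / (4 * G3))
```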
Thus one can compute their product, which reads $$ {\cal S}_{+} {\cal S}_{-}=\frac{\pi^2}{4G_{3}^2} r_{+}r_{-} -\frac{3\pi}{4G_{3}}\left[r_{+}\ln \left|\frac{2\pi r_{-}}{G_{3}} \right|+r_{-}\ln \left|\frac{2\pi r_{+}}{G_{3}} \right|\right] $$ $$ -\frac{3\pi}{4G_{3}}\left[r_{+}\ln \left| \kappa_{-} \ell\right|+r_{-}\ln \left|\kappa_{+} \ell \right|\right] $$ $$ +\frac{9}{4}\left[ \ln \left|\frac{2\pi r_{+}}{G_{3}} \right| \ln \left| \kappa_{-} \ell\right|+ \ln \left|\frac{2\pi r_{-}}{G_{3}} \right| \ln \left| \kappa_{+} \ell\right|\right] $$ \begin{eqnarray} +\frac{9}{4} \ln \left|\frac{2\pi r_{+}}{G_{3}} \right| \ln \left|\frac{2\pi r_{-}}{G_{3}} \right| +\frac{9}{4} \ln \left| \kappa_{+} \ell\right| \ln \left| \kappa_{-} \ell\right|+const. ~. \label{eq16} \end{eqnarray} It follows from the above analysis that, without the logarithmic correction, the entropy product is always mass-independent (universal); however, once the logarithmic correction is taken into account, the product for ${\cal H}^{\pm}$ always depends on the mass parameter, i.e., it is neither universal nor quantized. This is the key result of this work. So far we have examined the logarithmic corrections to the entropy product formula of ${\cal H}^{\pm}$ using the Cardy formula; we shall now calculate the entropy using a slightly different CFT, described by the universal Virasoro algebra at the horizon \cite{carlip,carlip1,carlip2,carlip3,solo}, with central charge \begin{eqnarray} c &=& \frac{3 {\cal A}_{\pm}\beta_{\pm}}{2\pi G \kappa_{\pm}} ~\label{eq17} \end{eqnarray} and $L_{0, \pm}$ eigenvalue \begin{eqnarray} \Delta_{\pm} &=& \frac{{\cal A}_{\pm}\kappa_{\pm}}{16\pi G \beta_{\pm}} ~.\label{eq18} \end{eqnarray} where ${\cal A}_{\pm}$ is the horizon area of ${\cal H}^{\pm}$, $\kappa_{\pm}$ is the surface gravity of ${\cal H}^{\pm}$ and $\beta_{\pm}$ is the periodicity of ${\cal H}^{\pm}$. Substituting these values back into Eq.
(\ref{eq7}), one obtains the density of states of ${\cal H}^{\pm}$ as \begin{eqnarray} \rho_{\pm} \left(\Delta_{\pm}\right) &\approx& \frac{c}{12} \frac{e^\frac{{\cal A}_{\pm}}{4G}}{\left(\frac{{\cal A}_{\pm}}{8\pi G} \right)^{\frac{3}{2}}} ~.\label{eq19} \end{eqnarray} For $c$ to be a universal constant, one must choose the value of $\beta_{\pm}$ such that $c$ is independent of ${\cal A}_{\pm}$; one then obtains the entropy of ${\cal H}^{\pm}$: \begin{eqnarray} {\cal S}_{\pm} &\sim& \frac{{\cal A}_{\pm}}{4G}-\frac{3}{2}\ln\left(\frac{{\cal A}_{\pm}}{4G}\right) +const.+ ... ~\label{eq20} \end{eqnarray} where we have set $\ell_{Pl}^2=1$. Interestingly, the above Eq. (\ref{eq20}) is completely in agreement with the result that Kaul and Majumdar obtained for ${\cal H}^{+}$ only \cite{km1}. We suggest here that this entropy expression is valid for both ${\cal H}^{\pm}$. To summarize, we have computed the logarithmic corrections to the BH entropy of the inner and outer horizons, and to their product, by using the Cardy formula. Considering in particular the rotating BTZ BH, we have shown that once the logarithmic corrections are taken into account, the entropy product is no longer independent of the ADM mass parameter and hence is not quantized. \bibliographystyle{model1-num-names}
\section*{Abstract} Many causal models of interest in epidemiology involve longitudinal exposures, confounders and mediators. However, in practice, repeated measurements are not always available. Then, practitioners tend to overlook the time-varying nature of exposures and work under over-simplified causal models. Our objective here was to assess whether - and how - the causal effect identified under such misspecified causal models relates to true causal effects of interest. We focus on two situations regarding the type of available data for exposures: when they correspond to $(i)$ ``instantaneous'' levels measured at inclusion in the study or $(ii)$ summary measures of their levels up to inclusion in the study. In each of these two situations, we derive sufficient conditions ensuring that the quantities estimated in practice under over-simplified causal models can be expressed as true longitudinal causal effects of interest, or some weighted averages thereof. Unsurprisingly, these sufficient conditions are very restrictive, and our results state that inference based on either ``instantaneous'' levels or summary measures usually returns quantities that do not directly relate to any causal effect of interest and should be interpreted with caution. They raise the need for repeated measurements and/or the development of sensitivity analyses when such data is not available. \\ \textbf{Keywords:} Causal inference, longitudinal model, identifiability, structural causal model. \section{Introduction} Etiologic epidemiology is concerned with the study of potential causes of chronic diseases based on observational data. Over the years, it has notably been successful in the identification of links between lifestyle exposures and the risk of developing cancer. Remarkable examples are tobacco smoke, alcohol and obesity that are now established risk factors for the development of a number of site-specific cancers \cite{AGU_BON, BAG_ROT, LAUBY}. 
Moreover, an accumulating body of biomarker measurements and -omics data provides important opportunities for investigating biological mechanisms potentially involved in cancer development. For example, cancer epidemiology is increasingly concerned with the study of the carcinogenic role of inflammation, insulin resistance and sex steroid hormones \cite{bradbury2019circulating, chan2011inflammatory, dossus2013hormonal}. The causal validity of such analyses relies on strong assumptions though, which have been formally described in the causal inference literature \cite{H_R, pearl2009statsurvey, pearl_book, rosenbaum1983, robins1986}. The very first assumption underlying most causal analyses is that the causal model is correctly specified. Most often, {\it e.g.}, when studying lifestyle exposures such as tobacco smoke, alcohol and obesity, but also biomarkers, the true causal model involves time-varying risk factors. Valid causal inference under such longitudinal causal models usually requires repeated measurements for these time-varying variables \cite{G_Formula, VDW_Book, VDW_TCH}. However, such repeated measurements are rarely available in large observational studies, and simplified models that involve time-invariant variables only are usually considered instead. In particular, most studies on biomarkers have been conducted using information collected at recruitment only \cite{bradbury2019circulating, chan2011inflammatory, dossus2013hormonal}, since blood samples are usually collected only once, at recruitment, in large cohort studies such as the European Prospective Investigation into Cancer and Nutrition (EPIC) cohort study \cite{riboli2002european}, and the UK Biobank \cite{sudlow2015uk}. These studies were conducted under the implicit assumption that past levels of biomarkers are independent of the risk of future cancer given current levels of biomarkers; see Figure \ref{Fig:Total_Effect_X} (\textit{L-a}) for a simple illustration, in the absence of confounders.
If past levels of biomarkers may influence the outcome not only through their current levels (see, {\it e.g.}, Figure \ref{Fig:Total_Effect_X} (\textit{L-b})), the model considered in these analyses was oversimplified, and hence misspecified. Issues arising when working under oversimplified longitudinal causal models have already been described in the statistical literature \cite{AALEN, article2, article1}. Moreover, general results on the identifiability of causal effects in the presence of unobserved variables can be used to study the identifiability of the causal effect of interest when ignoring the time-varying nature of exposures or, equivalently, when past levels of exposures are unobserved \cite{ShpitserPearl2006a, TianPearl2002, HuangValtorta2006, TianPearl2003}. However, little is known about the relationship between estimates derived under oversimplified longitudinal causal models and causal quantities of interest under the true longitudinal causal model. Filling this gap is the main objective of the present work. More precisely, we will derive sufficient conditions that guarantee that the quantity estimated in practice when working under misspecified models can be expressed as a particular weighted average of the longitudinal causal effects of interest. We will consider the most ``standard'' discrete longitudinal causal models \cite{G_Formula}, where the causal effect of interest is that of one exposure varying over some predefined discrete time interval, say $\llbracket 1, t_0 \rrbracket:=\{1, \ldots, t_0\}$, on one outcome $Y$ measured at some later time point $T>t_0$. Two situations will be considered regarding the available information for the exposures, which will include the exposure of interest and possibly additional factors such as mediators and confounders. First, we consider the situation where available data for the exposures correspond to their ``instantaneous'' levels at the time $t_0$ of recruitment in the study.
Considering models depicted in Figure \ref{Fig:Total_Effect_X} (\textit{L-a}) and (\textit{L-b}), only data on $X_{t_0}$ would be available, while data on $\bar X_{t_0-1}$ would not. This can be regarded as the most common case, but also the worst one since information at one single point in time is available for the full exposure profile. Then, we will turn our attention to a more general and seemingly more favorable situation, where the available information for each exposure corresponds to a summary measure of its levels up to inclusion in the study. Considering exposures such as alcohol intake or dietary exposure, epidemiologists generally not only collect instantaneous levels (through 24-hour recall questionnaires), but also summary measures of past levels of exposure through food frequency questionnaires, which summarize levels of exposures over the last 6 months, 12 months or even 5 years \cite{slimani2002european}. Summary measures are also sometimes constructed from repeated measurements of exposures, when available \cite{ARNO, KUNZ}. This is increasingly common for exposures such as Body Mass Index (BMI) or alcohol intake, whose levels are sometimes available for each participant at different points in time (at recruitment, at 20 years-old, etc.). Cluster analysis can be performed to summarize the repeated measures into a categorical variable, whose categories correspond to certain ``shapes'' for the exposure profile, such as constantly low, constantly high, etc. Alternatively, the exposure profile can be summarized, e.g., by computing the number of years over a certain threshold, etc. \cite{ARNO, Arnold19}. In any case, the obtained summary measure is then regarded as the exposure of interest, and the underlying time-varying nature of the genuine exposure is not further considered. 
In other words, these summary measures are supposed to capture everything that matters with respect to the effect of the whole exposure profile on the outcome; see Figure \ref{Fig:Total_Effect_X} (\textit{L-c}) for a simple illustration. \begin{figure}[t] \begin{minipage}[c]{0.28\linewidth} \begin{center} \begin{tikzpicture}[scale=0.73, auto,swap] \node[var] (Xt)at(4.5,0){$\bar X_{t_0-1}$}; \node[var] (X)at(6.8,0){$X_{t_0}$}; \node[var] (Y)at(8.8,0){$Y$}; \draw[edge] (Xt)--(X); \draw[edge] (X)--(Y); \end{tikzpicture} \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.28\linewidth} \begin{center} \begin{tikzpicture}[scale=0.73, auto,swap] \node[var] (Xt)at(4.5,0){$\bar X_{t_0-1}$}; \node[var] (X)at(6.8,0){$X_{t_0}$}; \node[var] (Y)at(8.8,0){$Y$}; \node[var] (blank)at(8.8,-1){}; \draw[edge] (Xt)--(X); \draw[edge] (X)--(Y); \draw[edge] (Xt).. controls (6.5,0.8) ..(Y); \end{tikzpicture} \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.37\linewidth} \begin{center} \begin{tikzpicture}[scale=0.73, auto,swap] \node[var] (Xtt)at(4,0){$\bar X_{t_0-1}$}; \node[var] (Xt)at(6,0){$X_{t_0}$}; \node[var] (X)at(8,0){$\mathcall{X}$}; \draw[edge] (Xtt)--(Xt); \node[var] (Y)at(10,0){$Y$}; \draw[edge] (X)--(Y); \draw[edge] (Xt)--(X); \draw[edge] (Xtt).. controls (6.8,-0.8) ..(X); \end{tikzpicture} \end{center} \end{minipage} \vspace{0.2cm} \begin{minipage}[c]{0.28\linewidth} \begin{center} (\textit{L-a}) \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.28\linewidth} \begin{center} (\textit{L-b}) \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.37\linewidth} \begin{center} (\textit{L-c}) \end{center} \end{minipage} \caption{Examples of simple discrete longitudinal causal models with a time-varying exposure $(X_t)_{t\geq 1}$ and an outcome $Y$, in the absence of confounding. (\textit{L-a}) Past levels of exposures $\bar X_{t_0-1}$ have no effect on $Y$, except through current level of exposure $X_{t_0}$. 
(\textit{L-b}) Past levels of exposures $\bar X_{t_0-1}$ affect $Y$ not only through $X_{t_0}$. (\textit{L-c}) The exposure process is assumed to affect the outcome only through some summary variable, $\mathcall X$.}\label{Fig:Total_Effect_X} \end{figure} The rest of the article is organized as follows. Section \ref{Sec:Notation} presents the notation that will be used throughout the article. In Sections \ref{Sec:Instantaneous} and \ref{Sec:SummaryMeasures}, we will then present our results, in the situation where instantaneous levels of exposures are available (Section \ref{Sec:Instantaneous}), or summary variables of past levels of exposures are available (Section \ref{Sec:SummaryMeasures}). We will present concluding remarks and recommendations in Section \ref{sec:Discu}. Most technical derivations are presented in the Appendix accompanying this article. \section{Notation}\label{Sec:Notation} For any positive integer $i$, we use the notation $\textbf{0}_i$ and $\textbf{1}_i$ for vectors $(0,\dots,0)\in\mathbb{R}^i$ and $(1,\dots,1)\in\mathbb{R}^i$ respectively. As mentioned above, we consider the setting that is classically adopted when working with time-varying predictors in causal inference \cite{G_Formula, VDW_Book}. More precisely, we assume that time-varying exposures, including the exposure of interest as well as potential mediators and confounders, are observable at discrete times over the time-window $\llbracket 1 ; T \rrbracket := \{1, \ldots, T\}$ for some $T>1$. For any $t\in\llbracket 1 ; T \rrbracket$, we let $X_t$ denote the exposure of interest at time $t$. Adopting the notation of VanderWeele \cite{VDW_Book}, we further denote the exposure profile until time $t$ by $\bar X_t = (X_1, X_2, \dots , X_t)$, while $\bar x_t$ stands for a specific (fixed) profile for the exposure of interest. The full exposure profile is denoted by $\bar X = \bar X_T= (X_1,X_2, \dots, X_{T})$. 
When needed, we will use similar notation for auxiliary factors $(Z_t)_{t\geq 1}$, which may include pure mediator processes $(M_t)_{t\geq 1}$, as well as confounder processes $(W_t)_{t\geq 1}$ possibly affected by the exposure of interest. Unless otherwise stated, we assume that all the variables are binary to simplify the notation. We further denote by $t_0\in\llbracket 2 ; T \rrbracket$ the inclusion time in the study. While causal inference should generally rely on the observations of the full profile of exposures ($\bar X, \bar Z$), or at least their full profile prior to inclusion ($\bar X_{t_0}, \bar Z_{t_0}$), we assume in Section \ref{Sec:Instantaneous} that the available information at time $t_0$ consists of ($X_{t_0}, Z_{t_0}$) only. Next, Section \ref{Sec:SummaryMeasures} will be devoted to the case where we have access to some summary measures of $\bar X_{t_0}$ and $\bar Z_{t_0}$, which will be denoted by ${\mathcall X}$ and $\mathcall{Z}$, respectively. These summary measures are typically defined as deterministic functions of the exposure profiles. Considering, {\it e.g.}, summary measures of $\bar X_{t_0}$, typical examples include functions of the form $\mathcall{X} = \sum _{t=t'} ^{t_0} X_t$, and $\mathcall{X} = \mathbb{1} \lbrace \sum _{t=t'} ^{t_0} X_t \geq \tau \rbrace$ for some $1\leq t' \leq t_0$ and some threshold $\tau \in \mathbb{R}$. More simply, we can even have ${\mathcall X}=X_{t_0}$, which emphasizes the fact that the situation where summary measures are available encompasses the situation where instantaneous levels are available as a special case. For any pair of variables $(V ,U)$ and any potential value $u$ of $U$, we denote by $V^{U = u}$ the counterfactual variable corresponding to variable $V$ that would have been observed in the counterfactual world following the hypothetical intervention $do(U = u)$. 
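For concreteness, the typical summary measures above can be sketched in a few lines of code. The following is a minimal illustration only; the function names and the example profile are ours, and the starting time $t'$ and threshold $\tau$ correspond to the quantities defined in the text.

```python
# Minimal sketch (hypothetical function names) of the typical summary
# measures discussed above, for a binary exposure profile x_1, ..., x_{t_0}.

def cumulative_exposure(profile, t_start=1):
    """Summary measure of the form sum_{t=t'}^{t_0} X_t (1-indexed t')."""
    return sum(profile[t_start - 1:])

def threshold_indicator(profile, tau, t_start=1):
    """Summary measure of the form 1{ sum_{t=t'}^{t_0} X_t >= tau }."""
    return int(cumulative_exposure(profile, t_start) >= tau)

x_bar = [0, 1, 1, 1, 0]                   # exposure profile up to inclusion
print(cumulative_exposure(x_bar))         # number of exposed periods: 3
print(threshold_indicator(x_bar, tau=2))  # exposed for at least 2 periods: 1
print(x_bar[-1])                          # special case: instantaneous level X_{t_0}
```

Once computed, such a summary measure is treated as the exposure of interest, exactly as described above.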
We work under the setting of Structural Causal Models \cite{pearl2009statsurvey}, which in particular entails that consistency conditions hold: for instance, $U=u$ implies $V = V^{U = u}$. In addition, we assume that positivity conditions hold \cite{rosenbaum1983}. For any possibly counterfactual random variables $V$ and $U$, and any causal model $(Mod)$, we will use the notation $(V \indep U)_{Mod}$ to denote independence between variables $V$ and $U$ under the causal model ($Mod$). We will further let $\mathbb{E}_{Mod}\left(V^{U=u}\right)$ be the expectation of variable $V^{U=u}$ under causal model (\textit{Mod}). We will mostly consider such expectations for \textit{Mod} set to either the true causal longitudinal model (we will use indices $L$ and $LS$ for these longitudinal models when considering models involving instantaneous levels only, and summary variables, respectively) or the over-simplified model used for the analysis (we will use indices $CS$, standing for cross-sectional, and $SV$, standing for summary variables, for these models). In particular, a key quantity in our work is $ATE_L\left( \bar x_{t} ; \bar x^{*}_t\right) = \mathbb{E}_L( Y^{\bar X_{t} = \bar x_{t} } - Y^{\bar X_{t} = \bar x^{*}_{t} })$, for any two given profiles $\bar x_t$ and $\bar x^*_t$ for the exposure of interest, and some time $t$. This quantity is one measure of the total effect \cite{G_Formula, VDW_Book} of exposure up to time $t$ on the outcome variable $Y$ under a given longitudinal causal model $(L)$, as for instance the one given in Figure \ref{Fig:Total_Effect_X} (\textit{L-b}). More details will be given in Section \ref{Sec:Instantaneous}. 
Because this quantity generally depends on the particular values for $\bar x_t$ and $\bar x^*_t$, averaged total effects can be defined for appropriate weights $\omega(\bar x_{t}, \bar x^*_{t})$ as $\sum_{\bar x_{t}}\sum_{\bar x^*_{t}} ATE_L\left( \bar x_{t} ; \bar x^{*}_t\right) \omega(\bar x_{t}, \bar x^*_{t}) $, with the two sums over $\{0,1\}^t$. We will also consider stratum-specific causal effects \cite{H_R}, with strata defined according to the levels of some possibly multivariate variable $U$ \begin{eqnarray} ATE_{L_{\mid U = u}} \left( \bar x_{t} ; \bar x^{*}_{t}\right) :=\mathbb{E}_L\left( Y^{\bar X_{t} = \bar x_{t} } - Y^{\bar X_{t} = \bar x^{*}_{t} } \mid U = u\right), \label{eq:stratum_sp_effect} \end{eqnarray} and weighted averages of the form $\sum_{u}\sum_{\bar x_{t}}\sum_{\bar x^*_{t}} ATE_{L_{\mid U = u}}\left( \bar x_{t} ; \bar x^{*}_t\right) \omega(\bar x_{t}, \bar x^*_{t}, u) $, for appropriate weights $\omega(\bar x_{t}, \bar x^*_{t}, u)$. Then, we need to introduce a specific symbol, $\Bumpeq$, to relate a causal effect defined under some over-simplified model to the quantity that is actually estimated in practice, and which is usually expressed under the true longitudinal causal model. Consider, {\it e.g.}, an over-simplified causal model $(CS)$, under which the causal effect $ATE_{CS}:=\mathbb{E}_{CS}\left( Y^{X_{t_0}=1} - Y^{X_{t_0}=0}\right)$ can be identified through the formula $\mathbb{E}_{CS}\left( Y \mid X_{t_0}= 1\right) - \mathbb{E}_{CS}\big( Y \mid X_{t_0} =0\big)$. Because this quantity will actually be estimated using data generated under the true longitudinal model, say $(L)$, the quantity estimated in practice turns out to be $\mathbb{E}_{L}\left( Y \mid X_{t_0}= 1\right) - \mathbb{E}_L\left( Y \mid X_{t_0}=0\right)$. We would then write $ATE_{CS} \ \Bumpeq \ \mathbb{E}_L\left( Y \mid X_{t_0}= 1\right) - \mathbb{E}_L\left( Y \mid X_{t_0}=0\right)$. 
We shall stress that $ATE_{CS} \ \Bumpeq \ \mathbb{E}_L\left( Y \mid X_{t_0}= 1\right) - \mathbb{E}_L\left( Y \mid X_{t_0}=0\right)$ generally does not imply $ATE_{CS} = \mathbb{E}_L\left( Y \mid X_{t_0}= 1\right) - \mathbb{E}_L\left( Y \mid X_{t_0}=0\right)$, unless, {\it e.g.}, $(CS)$ is correctly specified. For the sake of legibility, we will use $ATE_{CS}$ interchangeably for both the causal effect and the quantity estimated in practice throughout the text. In addition, expectations and probabilities involving observed variables only will from now on be computed under the true longitudinal causal model, and so we will simply use notation like $\mathbb{E}(V)$ and $\mathbb{P}(V=v)$ for any observable variable $V$. Going back to the example above, we would therefore simply write $ATE_{CS} \ \Bumpeq \ \mathbb{E}\left( Y \mid X_{t_0}= 1\right) - \mathbb{E}\left( Y \mid X_{t_0}=0\right)$, which means that the quantity estimated in practice when working under the over-simplified causal model $(CS)$ is actually $\mathbb{E}_L\left( Y \mid X_{t_0}= 1\right) - \mathbb{E}_L\left( Y \mid X_{t_0}=0\right)$. See, {\it e.g.}, the proof of Theorem \ref{Theo_Insta_Strong} in Appendix \ref{Proof:Theo_Insta_Strong} for more details. Finally, in our causal diagrams, we will use as usual simple solid arrows $U \longrightarrow V$ to denote that $U$ is a potential cause of $V$, for any possibly multivariate random variables $U$ and $V$. In addition, double dashed arrows $V \dashrightarrow \ \ \raisebox{0.8ex}{\kern-2em$\dashleftarrow$}\ \ U$ will be used when ($i$) components of $U$ may cause components of $V$, $(ii)$ components of $U$ may be caused by components of $V$, but $(iii)$ any univariate component $\tilde U\subset U$ causing a univariate component $\tilde V \subset V$ cannot be caused by $\tilde V$. See Figure \ref{Fig:ATE_general_configuration} for a simple example of a causal diagram involving such double dashed arrows. 
We shall stress that our double dashed arrows have a different meaning than the usual dashed double-headed arrow $V \dashleftarrow {\kern-1.5em}\dashrightarrow U$ used in the literature \cite{TianPearl2002, TianPearl2003, ShpitserPearl2006a} when the $(U-V)$ relationship may be confounded by unmeasured variables. Moreover, point $(iii)$ ensures that the subgraph $V \dashrightarrow \ \ \raisebox{0.8ex}{\kern-2em$\dashleftarrow$}\ \ U$ is still a directed acyclic graph (DAG). \section{The case when exposure variables are measured at inclusion in the study only} \label{Sec:Instantaneous} \subsection{General model and results}\label{Sec:Results_Insta} \begin{figure} \begin{center} \begin{tikzpicture}[scale=0.9, auto,swap] \node[var] (Xmm)at(6.5,1.25){$\bar X_{t_0 - 1}$}; \node[var] (X)at(8.8,1.25){$X_{t_0}$}; \node[var] (Zmm)at(5.8,2.5){$\bar Z_{t_0 - 1}$}; \node[var] (Z)at(8.2,2.5){$Z_{t_0}$}; \node[var] (Y)at(10,1.625){$Y$}; \draw[edge2] (Zmm).. controls (6.2,2) ..(Xmm); \draw[edge2] (Xmm).. controls (6,1.8) ..(Zmm); \draw[edge2] (Z).. controls (8.6,2) ..(X); \draw[edge2] (X).. controls (8.4,1.8) ..(Z); \draw[edge] (Zmm)--(Z); \draw[edge] (Zmm)--(X); \draw[edge] (Xmm)--(X); \draw[edge] (Xmm)--(Z); \draw[edge] (Z)--(Y); \draw[edge] (X)--(Y); \draw[edge] (Zmm).. controls (8.5,3.05) ..(Y); \draw[edge] (Xmm).. controls (8.95,0.75) ..(Y); \end{tikzpicture} (\textit{L}) \end{center} \begin{minipage}[c]{0.5\linewidth} \begin{center} \begin{tikzpicture}[scale=0.78, auto,swap] \node[var] (X)at(4,0){$X_{t_0-1}$}; \node[var] (Xt)at(6,0){$X_{t_0}$}; \draw[edge] (X)--(Xt); \node[var] (Y)at(8,0){$Y$}; \draw[edge] (X).. 
controls (6,-0.5) ..(Y); \draw[edge] (Xt)--(Y); \end{tikzpicture} \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.5\linewidth} \begin{center} \begin{tikzpicture}[scale=1, auto,swap] \node[var] (X)at(2,0){$X_{t_0}$}; \node[var] (Y)at(4,0){$Y$}; \draw[edge] (X)--(Y); \end{tikzpicture} \end{center} \end{minipage} \vspace{0.4cm} \begin{minipage}[c]{0.5\linewidth} \begin{center} (\textit{L.ex1}) \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.5\linewidth} \begin{center} (\textit{CS.ex1}) \end{center} \end{minipage} \vspace{0.5cm} \begin{minipage}[c]{0.5\linewidth} \begin{center} \begin{tikzpicture}[scale=0.9, auto,swap] \node[var] (U)at(8,3.5){ }; \node[var] (Xmm)at(7,1.25){$\bar X_{t_0 - 1}$}; \node[var] (X)at(8.8,1.25){$X_{t_0}$}; \node[var] (Zmm)at(6.5,2.5){$\bar W_{t_0 - 1}$}; \node[var] (Z)at(8.4,2.5){$W_{t_0}$}; \node[var] (Y)at(10,1.625){$Y$}; \draw[edge] (Zmm)--(Xmm); \draw[edge] (Z)--(X); \draw[edge] (Zmm)--(Z); \draw[edge] (Zmm)--(X); \draw[edge] (Xmm)--(X); \draw[edge] (Z)--(Y); \draw[edge] (X)--(Y); \draw[edge] (Xmm).. controls (8.95,0.75) ..(Y); \draw[edge] (Zmm).. 
controls (8.7,3.1) ..(Y); \end{tikzpicture} \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.5\linewidth} \begin{center} \begin{tikzpicture}[scale=1.2, auto,swap] \node[var] (X)at(2,0){$X_{t_0}$}; \node[var] (W)at(3,1){$W_{t_0}$}; \node[var] (Y)at(4,0){$Y$}; \draw[edge] (X)--(Y); \draw[edge] (W)--(Y); \draw[edge] (W)--(X); \end{tikzpicture} \end{center} \end{minipage} \vspace{0.2cm} \begin{minipage}[c]{0.5\linewidth} \begin{center} (\textit{L.ex2}) \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.5\linewidth} \begin{center} (\textit{CS.ex2}) \end{center} \end{minipage} \vspace{0.5cm} \begin{minipage}[c]{0.5\linewidth} \begin{center} \begin{tikzpicture}[scale=0.9, auto,swap] \node[var] (U)at(8,3.5){ }; \node[var] (Xmm)at(7,1.25){$ X_{1}$}; \node[var] (X)at(8.8,1.25){$X_{2}$}; \node[var] (Zmm)at(6.5,2.5){$W_{1}$}; \node[var] (Z)at(8.4,2.5){$W_{2}$}; \node[var] (Y)at(10,1.625){$Y$}; \draw[edge] (Zmm)--(Xmm); \draw[edge] (Z)--(X); \draw[edge] (Zmm)--(Z); \draw[edge] (Zmm)--(X); \draw[edge] (Xmm)--(X); \draw[edge] (Xmm)--(Z); \draw[edge] (Z)--(Y); \draw[edge] (X)--(Y); \end{tikzpicture} \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.5\linewidth} \begin{center} \begin{tikzpicture}[scale=1.2, auto,swap] \node[var] (X)at(2,0){$X_{t_0}$}; \node[var] (W)at(3,1){$W_{t_0}$}; \node[var] (Y)at(4,0){$Y$}; \draw[edge] (X)--(Y); \draw[edge] (W)--(Y); \draw[edge] (W)--(X); \end{tikzpicture} \end{center} \end{minipage} \vspace{0.2cm} \begin{minipage}[c]{0.5\linewidth} \begin{center} (\textit{L.ex3}) \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.5\linewidth} \begin{center} (\textit{CS.ex3}) \end{center} \end{minipage} \iffalse \vspace{0.2cm} \begin{minipage}[c]{0.5\linewidth} \begin{center} \begin{tikzpicture}[scale=0.9, auto,swap] \node[var] (U)at(8,3.5){ }; \node[var] (Xmm)at(7,1.25){$\bar X_{t_0 - 1}$}; \node[var] (X)at(8.8,1.25){$X_{t_0}$}; \node[var] (Zmm)at(6.5,2.5){$\bar W_{t_0 - 1}$}; \node[var] (Z)at(8.4,2.5){$W_{t_0}$}; \node[var] 
(Y)at(10,1.625){$Y$}; \draw[edge] (Zmm)--(Xmm); \draw[edge] (Z)--(X); \draw[edge] (Zmm)--(Z); \draw[edge] (Zmm)--(X); \draw[edge] (Xmm)--(X); \draw[edge] (Z)--(Y); \draw[edge] (Zmm).. controls (8.7,3.1) ..(Y); \end{tikzpicture} \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.5\linewidth} \begin{center} \begin{tikzpicture}[scale=1.2, auto,swap] \node[var] (X)at(2,0){$X_{t_0}$}; \node[var] (W)at(3,1){$W_{t_0}$}; \node[var] (Y)at(4,0){$Y$}; \draw[edge] (W)--(Y); \draw[edge] (W)--(X); \end{tikzpicture} \end{center} \end{minipage} \vspace{0.2cm} \begin{minipage}[c]{0.5\linewidth} \begin{center} (\textit{L.ex2'}) \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.5\linewidth} \begin{center} (\textit{CS.ex2'}) \end{center} \end{minipage} \vspace{0.2cm} \begin{minipage}[c]{0.5\linewidth} \begin{center} \begin{tikzpicture}[scale=0.9, auto,swap] \node[var] (U)at(8,3.5){ }; \node[var] (Xmm)at(7,1.25){$\bar X_{t_0 - 1}$}; \node[var] (X)at(8.8,1.25){$X_{t_0}$}; \node[var] (Zmm)at(6.5,2.5){$\bar W_{t_0 - 1}$}; \node[var] (Z)at(8.4,2.5){$W_{t_0}$}; \node[var] (Y)at(10,1.625){$Y$}; \draw[edge] (Zmm)--(Xmm); \draw[edge] (Z)--(X); \draw[edge] (Zmm)--(Z); \draw[edge] (Zmm)--(X); \draw[edge] (Xmm)--(X); \draw[edge] (Z)--(Y); \draw[edge] (X)--(Y); \end{tikzpicture} \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.5\linewidth} \begin{center} \begin{tikzpicture}[scale=1.2, auto,swap] \node[var] (X)at(2,0){$X_{t_0}$}; \node[var] (W)at(3,1){$W_{t_0}$}; \node[var] (Y)at(4,0){$Y$}; \draw[edge] (X)--(Y); \draw[edge] (W)--(Y); \draw[edge] (W)--(X); \end{tikzpicture} \end{center} \end{minipage} \vspace{0.2cm} \begin{minipage}[c]{0.5\linewidth} \begin{center} (\textit{L.ex2"}) \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.5\linewidth} \begin{center} (\textit{CS.ex2"}) \end{center} \end{minipage} \fi \vspace{0.5cm} \begin{minipage}[c]{0.4\linewidth} \begin{center} \begin{tikzpicture}[scale=0.9, auto,swap] \node[var] (Mmm)at(6.5,0){$M_{1}$}; \node[var] 
(Xmm)at(6.8,1.25){$X_{1}$}; \node[var] (M)at(8.5,0){$M_{2}$}; \node[var] (X)at(8.8,1.25){$X_{2}$}; \node[var] (Wmm)at(5.8,2.5){$ W_{1}$}; \node[var] (W)at(8.2,2.5){$W_{2}$}; \node[var] (Y)at(10,0.625){$Y$}; \draw[edge] (Wmm)--(Xmm); \draw[edge] (Wmm)--(W); \draw[edge] (Wmm)--(X); \draw[edge] (Xmm)--(X); \draw[edge] (Mmm)--(M); \draw[edge] (Xmm)--(Mmm); \draw[edge] (Xmm)--(W); \draw[edge] (Xmm)--(M); \draw[edge] (W)--(X); \draw[edge] (X)--(M); \draw[edge] (M)--(Y); \draw[edge] (X)--(Y); \draw[edge] (W).. controls (9.3,1.75) ..(Y); \draw[edge] (W).. controls (7.9,1.75) ..(M); \draw[edge] (Wmm).. controls (5.3,1.75) ..(Mmm); \draw[edge] (Wmm).. controls (7.3,1.75) ..(M); \draw[edge] (Wmm).. controls (8.9,3.15) ..(Y); \draw[edge] (Mmm).. controls (8.8,-0.55) ..(Y); \draw[edge] (Xmm).. controls (4.5,-0.75)and(8.25,-1.75) ..(Y); \end{tikzpicture} \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.3\linewidth} \begin{center} \begin{tikzpicture}[scale=0.95, auto,swap] \node[var] (Y)at(4,-0.5){$Y$}; \node[var] (X)at(2,0){$X_{t_0}$}; \node[var] (M)at(3.3,0.8){$M_{t_0}$}; \node[var] (W)at(2.5,1.8){$W_{t_0}$}; \draw[edge] (X)--(Y); \draw[edge] (X)--(M); \draw[edge] (M)--(Y); \draw[edge] (W).. controls (3.9,1.4) ..(Y); \draw[edge] (W)--(X); \draw[edge] (W)--(M); \end{tikzpicture} \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.3\linewidth} \begin{center} \begin{tikzpicture}[scale=0.95, auto,swap] \node[var] (Y)at(4,-0.5){$Y$}; \node[var] (X)at(2,0){$X_{t_0}$}; \node[var] (M)at(3.3,0.8){$M_{t_0}$}; \node[var] (W)at(2.5,1.8){$W_{t_0}$}; \draw[edge] (X)--(Y); \draw[edge] (X)--(M); \draw[edge] (M)--(Y); \draw[edge] (W).. 
controls (3.9,1.4) ..(Y); \draw[edge] (X)--(W); \draw[edge] (W)--(M); \end{tikzpicture} \end{center} \end{minipage} \vspace{-0.3cm} \begin{minipage}[c]{0.4\linewidth} \begin{center} (\textit{L.ex4}) \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.3\linewidth} \begin{center} (\textit{CS.Conf.ex4}) \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.3\linewidth} \begin{center} (\textit{CS.Med.ex4}) \end{center} \end{minipage} \captionof{figure}{(\textit{L}) General longitudinal causal model with time-varying exposure of interest $(X_t)_{t\geq 1}$, and additional time-varying process $(Z_t)_{t\geq 1}$. Particular cases are presented in (\textit{L.ex1}), (\textit{L.ex2}), (\textit{L.ex3}) and (\textit{L.ex4}), along with their over-simplified counterparts in (\textit{CS.ex1}), (\textit{CS.ex2}), (\textit{CS.ex3}), (\textit{CS.Conf.ex4}), and (\textit{CS.Med.ex4}). When the true longitudinal model is (\textit{L.ex4}), with a time-varying confounder $(W_t)_{t\geq 1}$ affected by the exposure, two possible over-simplified counterparts can be considered, depending on whether $(W_t)_{t\geq 1}$ is mainly considered as a confounder or a mediator. }\label{Fig:ATE_general_configuration} \end{figure} A general causal model where a time-varying exposure $(X_t)_{t\geq 1}$ potentially causes an outcome $Y$ can be compactly represented as in Figure \ref{Fig:ATE_general_configuration} ($L$). Here, variables $(Z_t)_{t\geq 1}$ are possibly multivariate, in which case their components may consist of pure mediators, pure confounders, as well as confounders influenced by the exposure of interest. Moreover, some components of $Z_{t_0}$ may be unobserved in practice. At each time $t\in {\llbracket 1 ; T \rrbracket}$, $X_t$ is a potential cause of $Y$ and is potentially caused by all components of $\bar X_{t-1}$, and by some or all components of $\bar Z_{t-1}$ and $Z_{t}$. 
At each time $t\in {\llbracket 1 ; T \rrbracket}$, $Z_t$ is a potential cause of $Y$, whose components are potentially caused by $\bar X_{t-1}$ and $\bar Z_{t-1}$. Components of $Z_t$ that are not causes of $X_t$ may further be caused by $X_t$. This general model could depict the case where the exposure of interest $(X_t)_{t\geq 1}$ stands for BMI at different ages, while the auxiliary variable $(Z_t)_{t\geq 1}$ would include measures of alcohol intake, physical activity and diet at different ages. Model ($L.ex4$) in Figure \ref{Fig:ATE_general_configuration} provides a less compact representation of a particular example of this general model, with $t_0 = 2$, and $Z_t = (M_t, W_t)$, where $(W_t)_{t\geq 1}$ is a confounder affected by the exposure, and $(M_t)_{t\geq 1}$ a pure mediator. Under such models, causal effects can be defined by considering hypothetical interventions on the full exposure profile $do(\bar X=\bar x)$. However, epidemiologists are often interested in the assessment of the predictive role of the exposure of interest, so a more natural measure of the causal effect of exposure on the outcome is \begin{eqnarray} ATE_L\left( \bar x_{t_0} ; \bar x^{*}_{t_0}\right) :=\mathbb{E}_L\left( Y^{\bar X_{t_0} = \bar x_{t_0} } - Y^{\bar X_{t_0} = \bar x^{*}_{t_0} }\right),\label{eq1} \end{eqnarray} for any given exposure profiles up to time $t_0$, $\bar x_{t_0}$ and $\bar x^{*}_{t_0}$ in $\left\lbrace 0, 1 \right\rbrace ^{t_0}$. Under some well known sets of assumptions on the causal model, including the consistency and sequential ignorability conditions \cite{pearl2009statsurvey, robins1986, rosenbaum1983}, the causal effect in Equation \eqref{eq1} can be expressed in terms of observable variables only. It can then be estimated if data on the full history of the variables up to time $t_0$ is available, assuming that some positivity conditions hold \cite{rosenbaum1983}. We recall that such positivity conditions will be assumed to hold throughout this article. 
However, when data on exposures are available at time $t_0$ only, $ATE_L\left( \bar x_{t_0} ; \bar x^{*}_{t_0}\right) $ can generally not be estimated. As mentioned in the Introduction, it is then common practice to implicitly $(i)$ overlook the time-varying nature of the exposures, $(ii)$ work under an over-simplified causal model (\textit{CS}), and $(iii)$ consider the causal effect $ATE_{CS}:=\mathbb{E}_{CS}\left( Y^{X_{t_0}=1} - Y^{X_{t_0}=0}\right)$ as the causal measure of interest. For example, if the true causal longitudinal model is model ($L.ex4$) of Figure \ref{Fig:ATE_general_configuration}, but only information on $Y$, $X_{t_0}$, $M_{t_0}$ and $W_{t_0}$ is available, most practitioners would implicitly work under the over-simplified model ($CS.Conf.ex4$) given in Figure \ref{Fig:ATE_general_configuration}. Then, because $(Y^{X_{t_0}=x_{t_0}} \indep X_{t_0} | W_{t_0})_{CS.Conf.ex4}$, the quantity of interest would be identified through \begin{eqnarray*} ATE_{CS.Conf.ex4} & \Bumpeq & \sum_{w_{t_0}} \left[ \mathbb{E}\left( Y \mid W_{t_0} = w_{t_0}, X_{t_0}=1\right) - \mathbb{E}\left( Y \mid W_{t_0} = w_{t_0}, X_{t_0}=0\right) \right] \\[-0.3cm] && \hspace{0.5cm} \times \mathbb{P}(W_{t_0} = w_{t_0}). \end{eqnarray*} It is noteworthy that, for some true longitudinal causal models, several over-simplified cross-sectional models may be considered. When the true causal longitudinal model is that of Figure \ref{Fig:ATE_general_configuration} ($L.ex4$), practitioners may consider $(W_t)_t$ mainly as a confounder and work with the over-simplified model ($CS.Conf.ex4$), but they may also consider $(W_t)_t$ mainly as a mediator and work with model ($CS.Med.ex4$). Because $(Y^{X_{t_0}=x_{t_0}} \indep X_{t_0})_{CS.Med.ex4}$, the quantity estimated in practice in the latter case would be $ATE_{CS.Med.ex4} \Bumpeq \mathbb{E}\left( Y \mid X_{t_0}=1\right) - \mathbb{E}\left( Y \mid X_{t_0}=0\right)$. 
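The gap between these quantities estimated in practice and the longitudinal causal effect can be made concrete with a small numerical sketch. All parameter values below are hypothetical and chosen for illustration only: in this toy model, the past exposure $X_1$ confounds the $(X_{t_0}-Y)$ relationship but is unobserved at inclusion, so both the unadjusted contrast and the $W_{t_0}$-adjusted contrast differ from $ATE_L(1;0)$.

```python
from itertools import product

# Toy longitudinal data-generating process (all numbers hypothetical):
# past exposure X1 (unobserved at inclusion), observed confounder W,
# current exposure X2 = X_{t_0}, and an outcome Y with mean m_y(x1, w, x2).
p_x1 = {0: 0.5, 1: 0.5}

def p_w_given_x1(x1):          # P(W = 1 | X1 = x1)
    return 0.3 + 0.4 * x1

def p_x2_given(x1, w):         # P(X2 = 1 | X1 = x1, W = w)
    return 0.1 + 0.4 * x1 + 0.3 * w

def m_y(x1, w, x2):            # E(Y | X1 = x1, W = w, X2 = x2)
    return 0.1 + 0.2 * x1 + 0.2 * w + 0.3 * x2

def p_joint(x1, w, x2):
    """Joint probability P(X1 = x1, W = w, X2 = x2)."""
    pw = p_w_given_x1(x1) if w == 1 else 1 - p_w_given_x1(x1)
    px2 = p_x2_given(x1, w) if x2 == 1 else 1 - p_x2_given(x1, w)
    return p_x1[x1] * pw * px2

def cond_mean_y(x2, w=None):
    """E(Y | X2 = x2), or E(Y | W = w, X2 = x2) when w is given."""
    cells = [(a, b) for a, b in product((0, 1), (0, 1)) if w is None or b == w]
    num = sum(m_y(a, b, x2) * p_joint(a, b, x2) for a, b in cells)
    return num / sum(p_joint(a, b, x2) for a, b in cells)

# True causal effect ATE_L(1;0): do(X2 = x2) leaves (X1, W) unchanged
ate_l = sum((m_y(a, b, 1) - m_y(a, b, 0))
            * p_x1[a] * (p_w_given_x1(a) if b == 1 else 1 - p_w_given_x1(a))
            for a, b in product((0, 1), (0, 1)))

# Quantities estimated in practice under the two over-simplified models
unadjusted = cond_mean_y(1) - cond_mean_y(0)
p_w = {w: sum(p_joint(a, w, c) for a in (0, 1) for c in (0, 1)) for w in (0, 1)}
adjusted = sum((cond_mean_y(1, w) - cond_mean_y(0, w)) * p_w[w] for w in (0, 1))

print(ate_l)        # 0.3 by construction (the coefficient of x2 in m_y)
print(unadjusted)   # larger than 0.3: X1 and W confound the X2-Y relationship
print(adjusted)     # still larger than 0.3: X1 remains unmeasured
```

Adjusting for $W_{t_0}$ reduces, but does not remove, the discrepancy here, precisely because the past exposure acts as an unmeasured confounder under the over-simplified models.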
Then, a natural question is whether - and how - the quantity estimated in practice when working under over-simplified causal models $(CS)$ relates to the longitudinal causal effects under the true model $(L)$, or to another causal effect of interest. Theorem \ref{Theo_Insta_Strong} below presents a sufficient condition under which the quantity estimated in practice actually equals $ATE_L\left( 1 ; 0 \right) :=\mathbb{E}_{L}\left( Y^{X_{t_0}=1} - Y^{X_{t_0}=0}\right)$, the causal effect of $X_{t_0}$ (which of course usually differs from that of $\bar X_{t_0}$, but can still be seen as a causal effect of interest). Theorem \ref{Theo_Insta_Weak} then presents a weaker sufficient condition under which $ATE_{CS}$ expresses as a weighted average of stratum specific longitudinal total effects \eqref{eq:stratum_sp_effect}. Detailed proofs of these results are given in Appendix \ref{Proof:Theo_Insta_Strong} for Theorem \ref{Theo_Insta_Strong}, and Appendix \ref{Proof:Theo_Insta_Weak} for Theorem \ref{Theo_Insta_Weak}. In Section \ref{sec:Illustr_Insta} below, we illustrate their implications by focusing on a few simple examples. \begin{Theorem}\label{Theo_Insta_Strong} If condition $(T1.Cond)$ below holds \begin{itemize} \item[] $(T1.Cond)$ \quad There exists some observed $W_{t_0} \subset Z_{t_0}$ taking values in $\Omega_{W_{t_0}}$, such that $(Y^{X_{t_0} =x_{t_0}} \indep X_{t_0} | W_{t_0})_{CS}$ and $(Y^{X_{t_0} =x_{t_0}} \indep X_{t_0} | W_{t_0})_{L}$ \end{itemize} then the quantity estimated in practice equals $ATE_L\left( 1 ; 0 \right)=\mathbb{E}_{L}\left( Y^{X_{t_0}=1} - Y^{X_{t_0}=0}\right)$: \begin{eqnarray} ATE_{CS} & \Bumpeq & \sum_{w_{t_0}\in{\Omega_{W_{t_0}}}} \left[ \mathbb{E}\left( Y \mid W_{t_0} = w_{t_0}, X_{t_0}= 1\right) - \mathbb{E}\left( Y \mid W_{t_0} = w_{t_0}, X_{t_0}=0\right) \right] \nonumber\\[-0.4cm] && \hspace{1.5cm} \times \mathbb{P}(W_{t_0} = w_{t_0}), \label{Eq:ATE_CS} \\ & = & ATE_L\left( 1 ; 0 \right). 
\end{eqnarray} In particular, if condition $(T1.Uncond)$ below holds \begin{itemize} \item[] $(T1.Uncond)$ \quad $(Y^{X_{t_0} =x_{t_0}} \indep X_{t_0})_{CS}$ and $(Y^{X_{t_0} =x_{t_0}} \indep X_{t_0})_L$ \end{itemize} then \begin{eqnarray*} ATE_{CS} \Bumpeq \mathbb{E}\left( Y \mid X_{t_0}= 1\right) - \mathbb{E}\left( Y \mid X_{t_0}=0\right) = ATE_L\left( 1 ; 0 \right). \end{eqnarray*} \end{Theorem} \begin{Theorem}\label{Theo_Insta_Weak} If condition $(T2.Cond)$ below holds \begin{itemize} \item[] $(T2.Cond)$ \quad There exists some observed $W_{t_0} \subset Z_{t_0}$ taking values in $\Omega_{W_{t_0}}$, such that $(Y^{X_{t_0} =x_{t_0}} \indep X_{t_0} | W_{t_0})_{CS}$ and $(Y^{\bar X_{t_0} = \bar x_{t_0}} \indep \bar X_{t_0}\mid W_{t_0})_{L}$ \end{itemize} then the quantity estimated in practice \begin{eqnarray} ATE_{CS} & \Bumpeq & \sum_{w_{t_0}\in{\Omega_{W_{t_0}}}} \left[ \mathbb{E}\left( Y \mid W_{t_0} = w_{t_0}, X_{t_0}= 1\right) - \mathbb{E}\left( Y \mid W_{t_0} = w_{t_0}, X_{t_0}= 0 \right) \right] \nonumber\\[-0.45cm] & &\hspace{1.9cm} \times \mathbb{P}(W_{t_0} = w_{t_0}),\nonumber \\[0.1cm] &=& \sum_{w_{t_0} \in {\Omega_{W_{t_0}}}} \sum_{\substack{\bar x_{t_0-1}\in \lbrace 0,1 \rbrace^{t_0-1} \\ \bar x^{*}_{t_0-1} \in \lbrace 0,1 \rbrace^{t_0-1}}} \big\{ATE_{L_{\mid W_{t_0} = w_{t_0}}}\left( (\bar x_{t_0-1},1) ; (\bar x^{*}_{t_0-1}, 0)\right) \nonumber \\[-0.9cm] && \hspace{3.8cm} \times \mathbb{P}(\bar X_{t_0-1} = \bar x_{t_0-1} \mid X_{t_0}=1, W_{t_0} = w_{t_0}) \nonumber \\ && \hspace{3.8cm} \times \mathbb{P}(\bar X_{t_0-1} = \bar x_{t_0-1}^{*} \mid X_{t_0}=0, W_{t_0} = w_{t_0}) \nonumber \\ && \hspace{3.8cm} \times \mathbb{P}(W_{t_0} = w_{t_0})\big\}.\label{Eq:Sufficient_cond} \end{eqnarray} In particular, if condition $(T2.Uncond)$ below holds \begin{itemize} \item[] $(T2.Uncond)$ \quad $(Y^{X_{t_0} =x_{t_0}} \indep X_{t_0})_{CS}$ and $(Y^{\bar X_{t_0} = \bar x_{t_0}} \indep \bar X_{t_0})_{L}$ \end{itemize} then \begin{eqnarray} ATE_{CS} & \Bumpeq & 
\mathbb{E}\left( Y \mid X_{t_0}= 1\right) - \mathbb{E}\left( Y \mid X_{t_0}=0\right) \nonumber\\ & = & \sum_{\substack{\bar x_{t_0-1}\in \lbrace 0,1 \rbrace^{t_0-1} \\ \bar x^{*}_{t_0-1} \in \lbrace 0,1 \rbrace^{t_0-1}}} \big\{ ATE_{L}\left( (\bar x_{t_0-1},1) ; (\bar x^{*}_{t_0-1}, 0)\right) \nonumber \\[-0.9cm] && \hspace{2.5cm} \times \mathbb{P}(\bar X_{t_0-1} = \bar x_{t_0-1} \mid X_{t_0}=1) \nonumber \\ && \hspace{2.5cm} \times \mathbb{P}(\bar X_{t_0-1} = \bar x_{t_0-1}^{*} \mid X_{t_0}=0 )\big\}.\label{Eq:Sufficient_uncond} \end{eqnarray} \end{Theorem} Theorem \ref{Theo_Insta_Strong} states that whenever there exists a set of observed variables that satisfies the ignorability condition for the exposure at time $t_0$, $X_{t_0}$, and the outcome under both the true and over-simplified causal models, then the quantity estimated in practice equals the longitudinal total effect $ATE_L\left( 1 ; 0 \right)$. In the same way, Theorem \ref{Theo_Insta_Weak} states that whenever there exists a set of observed variables that satisfies $(i)$ the ignorability condition for the whole time-varying exposure profile, $\bar X_{t_0}$, and the outcome under the true longitudinal model, and $(ii)$ the ignorability condition for the exposure at time $t_0$, $X_{t_0}$, and the outcome under the over-simplified causal model, then the quantity estimated in practice can be written in terms of stratum-specific longitudinal total effects. \subsection{Examples and illustration of the general results}\label{sec:Illustr_Insta} When the conditions of Theorem \ref{Theo_Insta_Strong} and Theorem \ref{Theo_Insta_Weak} are not satisfied, the quantity estimated in practice has to be interpreted with caution, as its relationship with causal effects of interest usually remains unclear. See for example Web Supplementary Material \ref{Web_Supp_Mat_TV_Conf}, where the case of the model ($L.ex2$) given in Figure \ref{Fig:ATE_general_configuration} is described in detail. 
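The unconditional decomposition \eqref{Eq:Sufficient_uncond} of Theorem \ref{Theo_Insta_Weak} can be checked numerically on a toy model of the form (\textit{L.ex1}) with $t_0 = 2$. The sketch below uses hypothetical parameter values throughout; absent confounding, each longitudinal total effect reduces to a difference of conditional outcome means.

```python
from itertools import product

# Hypothetical parameterization of a model of the form (L.ex1) with t0 = 2:
# X1 -> X2, X1 -> Y, X2 -> Y, all variables binary, no confounding.
p_x1 = {0: 0.5, 1: 0.5}

def p_x2_given_x1(x1):         # P(X2 = 1 | X1 = x1)
    return 0.2 + 0.6 * x1

def m_y(x1, x2):               # E(Y | X1 = x1, X2 = x2)
    return 0.1 + 0.3 * x1 + 0.4 * x2

def p_x1_given_x2(x1, x2):
    """P(X1 = x1 | X2 = x2) under the toy model."""
    def joint(a, b):
        pb = p_x2_given_x1(a) if b == 1 else 1 - p_x2_given_x1(a)
        return p_x1[a] * pb
    return joint(x1, x2) / sum(joint(a, x2) for a in (0, 1))

# Quantity estimated in practice under (CS.ex1): E(Y | X2 = 1) - E(Y | X2 = 0)
def cond_mean(x2):
    return sum(m_y(a, x2) * p_x1_given_x2(a, x2) for a in (0, 1))

ate_cs = cond_mean(1) - cond_mean(0)

# Right-hand side of the unconditional decomposition: absent confounding,
# ATE_L((x1, 1); (x1*, 0)) = E(Y | X1 = x1, X2 = 1) - E(Y | X1 = x1*, X2 = 0)
weighted = sum((m_y(a, 1) - m_y(b, 0))
               * p_x1_given_x2(a, 1) * p_x1_given_x2(b, 0)
               for a, b in product((0, 1), (0, 1)))

print(ate_cs, weighted)   # the two quantities coincide
```

The two quantities agree for any choice of the parameters above, as the decomposition follows algebraically once the ignorability conditions hold.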
However, the conditions of our theorems being sufficient conditions only, there are a few cases where they are not satisfied but $ATE_{CS}$ is still an informative measure of the exposure effect. For example, denote by $(L.ex2')$ and $(CS.ex2')$ the versions of $(L.ex2)$ and $(CS.ex2)$, respectively, after removing the arrow from $X_{t_0}$ to $Y$. In this particular case where $X_{t_0}$ has no causal effect on $Y$ and only a pure time-varying confounder is present, we have $(Y \indep \bar X_{t_0} \mid \bar{W_{t_0}})_{L.ex2'}$, but we do not have $(Y \indep \bar X_{t_0} \mid {W_{t_0}})_{L.ex2'}$, so the conditions of our Theorems are not satisfied. Nevertheless, we still have $ATE_{CS.ex2'} = 0$, and the inference under the over-simplified model is valid. However, we shall stress that $ATE_{CS}$ can also be null in other situations where the exposure does affect the outcome, even when the condition of Theorem \ref{Theo_Insta_Weak} is satisfied (we will get back to this point below). When the conditions of Theorem \ref{Theo_Insta_Strong} are satisfied, the interpretation of $ATE_{CS}$ is straightforward as it simply equals $ATE_L(1;0)$. However, unsurprisingly, these conditions are very restrictive. For example, the condition $(Y^{X_{t_0} = x_{t_0}} \indep X_{t_0}\mid W_{t_0})_{L}$ is generally not satisfied under models ($L.ex1$), ($L.ex2$) and ($L.ex4$) given in Figure \ref{Fig:ATE_general_configuration}, because $\bar X_{t_0-1}$, and possibly $\bar W_{t_0-1}$, act as unmeasured confounders for the $(X_{t_0}-Y)$ relationship, but are ignored in the over-simplified models. On the other hand, the conditions of Theorem \ref{Theo_Insta_Strong} are verified under the very simple models ($L-a$) of Figure \ref{Fig:Total_Effect_X} and ($L.ex3$) of Figure \ref{Fig:ATE_general_configuration}, as well as under particular cases of model $(L.ex2)$, {\it e.g.}, when there is no arrow from $\bar X_{t_0-1}$ to $Y$ nor from $\bar W_{t_0-1}$ to $Y$. 
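For instance, under a hypothetical parameterization of the simple model (\textit{L-a}) of Figure \ref{Fig:Total_Effect_X}, condition $(T1.Uncond)$ holds and the equality $ATE_{CS} = ATE_L(1;0)$ of Theorem \ref{Theo_Insta_Strong} can be checked directly (all numbers below are ours, for illustration only):

```python
# Hypothetical parameterization of model (L-a): X1 -> X2 -> Y, with X1 affecting
# Y through X2 only, so that condition (T1.Uncond) holds.
p_x1 = {0: 0.5, 1: 0.5}

def p_x2_given_x1(x1):         # P(X2 = 1 | X1 = x1)
    return 0.2 + 0.6 * x1

def m_y(x2):                   # E(Y | X2 = x2): no direct effect of X1
    return 0.1 + 0.4 * x2

def p_joint(x1, x2):
    """Joint probability P(X1 = x1, X2 = x2)."""
    pb = p_x2_given_x1(x1) if x2 == 1 else 1 - p_x2_given_x1(x1)
    return p_x1[x1] * pb

# True causal effect ATE_L(1;0): do(X2 = x2) leaves X1 unchanged
ate_l = sum((m_y(1) - m_y(0)) * p_x1[a] for a in (0, 1))

# Quantity estimated in practice under the over-simplified model
def cond_mean(x2):
    den = sum(p_joint(a, x2) for a in (0, 1))
    return sum(m_y(x2) * p_joint(a, x2) for a in (0, 1)) / den

ate_cs = cond_mean(1) - cond_mean(0)
print(ate_l, ate_cs)   # both equal 0.4, as Theorem 1 predicts
```

Here the agreement is immediate because, under (\textit{L-a}), past exposure opens no backdoor path between $X_{t_0}$ and $Y$.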
Before discussing the interpretation of $ATE_{CS}$ under the conditions of Theorem \ref{Theo_Insta_Weak}, we shall stress that these conditions are quite restrictive too. In particular, they are not satisfied under model ($L.ex4$) of Figure \ref{Fig:ATE_general_configuration}, where $(W_t)_{t > 1}$ is a confounder affected by the exposure. Under this model, sequential ignorability holds: $(Y^{\bar X_{t_0} = \bar x_{t_0}} \indep X_{1}\mid W_{1})_{L.ex4}$ and $\big( Y^{\bar X_{t_0} = \bar x_{t_0}} \indep \bar X_{t}\mid \lbrace \bar W_{t}, \bar X_{t-1} \rbrace \big)_{L.ex4}$ for any $t \in \llbracket 2 ; t_0 \rrbracket$ \cite{robins1986, G_Formula, H_R}. But the conditions of Theorem \ref{Theo_Insta_Weak} are not satisfied: we have neither $(Y^{\bar X_{t_0} = \bar x_{t_0}} \indep \bar X_{t_0}\mid \bar W_{t_0})_{L.ex4}$ nor $(Y^{\bar X_{t_0} = \bar x_{t_0}} \indep \bar X_{t_0}\mid W_{t_0})_{L.ex4}$, because $(W_t)_{t > 1}$ acts as both a confounder and a mediator in the $(\bar X_{t_0}-Y)$ relationship. The conditions of Theorem \ref{Theo_Insta_Weak} are generally not satisfied either under model ($L.ex2$) of Figure \ref{Fig:ATE_general_configuration}, because $\bar W_{t_0-1}$ affects $Y$ not only through $W_{t_0}$ and $X_{t_0}$, and therefore acts as an unmeasured confounder in the $(\bar X_{t_0}-Y)$ relationship, which is ignored in model ($CS.ex2$). We will now discuss the interpretability of $ATE_{CS}$ when the conditions of Theorem \ref{Theo_Insta_Weak} are satisfied by focusing on the simple example of model (\textit{L.ex1}) and its simplified counterpart $(CS.ex1)$ in Figure \ref{Fig:ATE_general_configuration}. 
Here, we have $(Y^{\bar X_{t_0} = \bar x_{t_0}} \indep \bar X_{t_0})_{{L.ex1}}$ and $(Y^{X_{t_0}=x_{t_0}}\indep X_{t_0})_{{CS.ex1}}$, so that Theorem \ref{Theo_Insta_Weak} ensures that $ATE_{CS.ex1}$ is a weighted sum of the longitudinal total effects that compare any possible pairs of exposure profiles up to time $t_0$, one of which terminates with $X_{t_0}=0$ and the other with $X_{t_0}=1$ (see Equation \eqref{Eq:Sufficient_uncond}). However, the relevance of this particular weighted average is generally questionable. Indeed, terms like $ATE_{{L.ex1}} \left( (\textbf{0}_{t_0-1}, 1) ; (\textbf{1}_{t_0-1}, 0)\right)$ enter the sum with non-negative weights; such terms can be negative when each $X_t$, for $t=1, \ldots, t_0$, has a, say, positive effect on $Y$, and may then offset the positive terms, so that $ATE_{{CS.ex1}}$ can be null even though every $X_t$ affects $Y$. This particular case illustrates that $ATE_{{CS}}$ generally has to be interpreted with caution even when the conditions of Theorem \ref{Theo_Insta_Weak} are satisfied. The interpretation of the weighted average in Equation \eqref{Eq:Sufficient_uncond} is more straightforward if profiles $\bar x_{t_0-1}$ associated with large weights $\mathbb{P}( \bar X_{t_0 - 1} = \bar x_{t_0 - 1} \mid X_{t_0} = 1)$ correspond to globally more exposed profiles than the profiles $\bar x^{*}_{t_0-1}$ associated with large weights $\mathbb{P}( \bar X_{t_0 - 1} = \bar x^{*}_{t_0 - 1} \mid X_{t_0} = 0)$. In particular, this is the case when the exposure is ``stable'', more precisely when $X_t = 1 \Rightarrow X_{t'}=1$ for all $t' \geq t$. This stability assumption can be seen as a reasonable assumption (or approximation) for exposures such as obesity, for instance.
When it is satisfied, the only exposure profile that terminates with $x_{t_0} = 0$ is $\bar x_{t_0} = \textbf{0}_{t_0}$, and, under model (\textit{L.ex1}), $ATE_{CS}$ then reduces to \begin{eqnarray} \sum _{i=0} ^{t_0-1} ATE_L \left( ( {\bf 0}_i , {\bf 1}_{t_0 -i}) ; {\bf 0}_{t_0} \right) \times \mathbb{P}\left( \bar X_{t_0-1}=( {\bf 0}_i , {\bf 1}_{t_0 -i-1}) \mid X_{t_0}=1\right) . \label{Eq:ATE_stable} \end{eqnarray} The stability assumption then guarantees that $ATE_{CS}$ is a weighted sum of all the longitudinal causal effects comparing the ever-exposed profiles to the single never-exposed profile. The weights in the equation above are sensible, as they correspond to the actual proportions of subjects with exposure profiles $( {\bf 0}_i , {\bf 1}_{t_0 -i})_{i \in \llbracket 0, t_0 - 1 \rrbracket}$ among the subpopulation of exposed individuals at time $t_0$. Therefore, $ATE_{CS}$ can be regarded as a meaningful quantity under model (\textit{L.ex1}) if the stability assumption further holds. The fact that $ATE_{CS}$ is a meaningful quantity under the stability assumption extends to the situation where a time-invariant observed confounder $W$ is added to model (\textit{L.ex1}). However, we recall that if the confounder is time-varying, as in Figure \ref{Fig:ATE_general_configuration} ($L.ex2$), the conditions of Theorem \ref{Theo_Insta_Weak} are not satisfied, and $ATE_{CS}$ usually has no clear meaning, even when both the exposure and confounder processes are stable. We refer to Web Supplementary Material \ref{Web_Supp_Mat_TV_Conf} for more details on this particular case. To recap, when only instantaneous levels of exposures at inclusion are available, the quantity estimated in practice when working under over-simplified models generally has to be interpreted with caution, even when the conditions of Theorem \ref{Theo_Insta_Weak} are satisfied.
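To make the weighted sum in Equation \eqref{Eq:ATE_stable} concrete, the following Python sketch evaluates it for user-supplied longitudinal effects and profile probabilities. This is purely illustrative: the function name and all numerical values are made up and do not correspond to any analysis in the paper.

```python
def ate_cs_stable(ate_L, profile_probs, t0):
    """Evaluate the weighted sum of Equation (ATE_stable):
    sum_i ATE_L((0_i, 1_{t0-i}); 0_{t0}) * P(Xbar_{t0-1} = (0_i, 1_{t0-i-1}) | X_{t0} = 1).

    ate_L: dict mapping each ever-exposed profile (a 0/1 tuple of length t0,
        of the form 0^i 1^{t0-i}) to its longitudinal effect versus 0_{t0}.
    profile_probs: dict mapping the corresponding past profile (length t0-1)
        to P(Xbar_{t0-1} = profile | X_{t0} = 1); these must sum to 1.
    """
    total = 0.0
    for i in range(t0):
        full = (0,) * i + (1,) * (t0 - i)      # profile compared to the never-exposed one
        past = (0,) * i + (1,) * (t0 - i - 1)  # observed past, given X_{t0} = 1
        total += ate_L[full] * profile_probs[past]
    return total

# Hypothetical values for t0 = 3 (made up for illustration):
t0 = 3
ate_L = {(1, 1, 1): 3.0, (0, 1, 1): 2.0, (0, 0, 1): 1.0}
probs = {(1, 1): 0.5, (0, 1): 0.3, (0, 0): 0.2}
print(ate_cs_stable(ate_L, probs, t0))  # 0.5*3 + 0.3*2 + 0.2*1, i.e. about 2.3
```

Under the stability assumption, only the $t_0$ ever-exposed profiles contribute, which is why a single loop over $i$ suffices.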
Apart from a few exceptions, and unsurprisingly, the quantity estimated in practice can only be unambiguously related to causal effects of interest when the conditions of Theorem \ref{Theo_Insta_Strong} are satisfied. We have shown that this is notably the case under model (\textit{L-a}) of Figure \ref{Fig:Total_Effect_X}, where the effect of $\bar X_{t_0}$ on the outcome is entirely mediated by $X_{t_0}$. Interestingly, this situation arises as a particular case of the model presented in Figure \ref{Fig:Total_Effect_X} (\textit{L-c}), where a summary variable ${\mathcall X}$ is assumed to mediate the whole effect of $\bar X_{t_0}$ on the outcome. In the following section, we consider more general situations where data collected at time $t_0$ correspond to such summary measures of past levels of exposures, as is sometimes assumed, or implicitly assumed, in epidemiological studies. \section{The case when summaries of past levels of exposures are available}\label{Sec:SummaryMeasures} \subsection{General models and results}\label{Sec:SV_general_results} We will now turn our attention to the situation where data collected at time $t_0$ concern summary measures of past levels of exposures, and where the whole effect of exposures on the outcome $Y$ is captured by these summary measures \cite{ARNO, KUNZ, DeRubeis19, Yang19, Zheng18, Fan08, Platt10, Arnold19}. A general representation of such models is given in Figure \ref{Fig:ATE_SV_general_configuration} ($LS$), where, as in the previous section, $(Z_t)_{t\geq 1}$ can be multivariate, and so can $\mathcall{Z}$. Moreover, some components of $\mathcall{Z}$ may be unobserved. Again, $(X_t)_{t\geq 1}$ could stand for BMI at different ages, and $(Z_t)_{t\geq 1}$ could include measures of alcohol intake, physical activity and diet at different ages, while ${\mathcall X}$ and ${\mathcall Z}$ would be any appropriate summary measures of $\bar X_{t_0}$ and $\bar Z_{t_0}$, respectively.
The simplest model of this form is the one given in Figure \ref{Fig:Total_Effect_X} (\textit{L-c}), and corresponds to the absence of any confounding process. Other examples are given in Figure \ref{Fig:ATE_SV_general_configuration}; we will present them in more detail below. \begin{figure} \begin{center} \begin{tikzpicture}[scale=0.9, auto,swap] \node[var] (Xmm)at(6.6,1.25){$\bar X_{t_0}$}; \node[var] (X)at(8.3,1.25){$\mathcall{X}$}; \node[var] (Wmm)at(6.0,2.75){$\bar Z_{t_0}$}; \node[var] (W)at(7.7,2.75){$\mathcall{Z}$}; \node[var] (Y)at(9.3,2.2){$Y$}; \draw[edge2] (Wmm).. controls (6.35,2.2) ..(Xmm); \draw[edge2] (Xmm).. controls (6.2,1.8) ..(Wmm); \draw[edge] (Wmm)--(W); \draw[edge] (Xmm)--(X); \draw[edge] (X)--(Y); \draw[edge] (W)--(Y); \end{tikzpicture} \end{center} \vspace{-1cm} \begin{center} (\textit{LS}) \end{center} \vspace{-0.4cm} \begin{minipage}[c]{0.45\linewidth} \begin{center} \begin{tikzpicture}[scale=0.72, auto,swap] \node[var] (X0)at(0,0){$X_1$}; \node[var] (X1)at(1.7,0){$X_2$}; \node[var] (dotX)at(3.4,0){$\dots$}; \node[var] (Xt)at(5.2,0){$X_{t_0}$}; \node[var] (X)at(6.8,0){$\mathcall{X}$}; \node[var] (W0)at(-0.6,1.5){$W_1$}; \node[var] (W1)at(1.2,1.5){$W_2$}; \node[var] (Wt)at(4.7,1.5){$W_{t_0}$}; \node[var] (W)at(6.6,1.5){$\mathcall{W}$}; \node[var] (dotW)at(3,1.5){$\dots$}; \node[var] (Y)at(8.2,0.75){$Y$}; \draw[edge] (X0)--(X1); \draw[edge] (W0)--(W1); \draw[edge] (W1)--(X1); \draw[edge] (W0)--(X0); \draw[edge] (Wt)--(Xt); \draw[edge] (X1)--(dotX); \draw[edge] (W1)--(dotW); \draw[edge] (dotX)--(Xt); \draw[edge] (dotW)--(Wt); \draw[edge] (Xt)--(X); \draw[edge] (Wt)--(W); \draw[edge] (X)--(Y); \draw[edge] (W)--(Y); \draw[edge] (X0).. controls (5.4,-1) ..(X); \draw[edge] (X1).. controls (5.7,-0.7) ..(X); \draw[edge] (W0).. controls (4.2,2.7) ..(W); \draw[edge] (W1).. controls (5,2.2) ..(W); \draw[edge] (W0).. controls (2.8,2.2) ..(Wt); \draw[edge] (W1).. controls (3.4,1.9) ..(Wt); \draw[edge] (X0).. controls (2.7,-1) ..(Xt); \draw[edge] (X1)..
controls (2.9,-0.5) ..(Xt); \end{tikzpicture} \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.09\linewidth} \end{minipage}\hfill \begin{minipage}[c]{0.27\linewidth} \begin{center} \begin{tikzpicture}[scale=0.8, auto,swap] \node[var] (Xt)at(5,0){$\bar X_{t_0}$}; \node[var] (X)at(6.4,0){$\mathcall{X}$}; \node[var] (Wt)at(4.5,1.5){$\bar W_{t_0}$}; \node[var] (W)at(6,1.5){$\mathcall{W}$}; \node[var] (Y)at(7.7,0.75){$Y$}; \draw[edge] (Wt)--(Xt); \draw[edge] (Xt)--(X); \draw[edge] (Wt)--(W); \draw[edge] (X)--(Y); \draw[edge] (W)--(Y); \end{tikzpicture} \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.21\linewidth} \begin{center} \begin{tikzpicture}[scale=0.78, auto,swap] \node[var] (X)at(6,0){$\mathcall{X}$}; \node[var] (Y)at(9.4,0){$Y$}; \node[var] (W)at(7.7,1){$\mathcall{W}$}; \draw[edge] (W)--(Y); \draw[edge] (W)--(X); \draw[edge] (X)--(Y); \end{tikzpicture} \end{center} \end{minipage} \vspace{0.1cm} \begin{minipage}[c]{0.45\linewidth} \begin{center} (\textit{LS.ex1}) \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.09\linewidth} \end{minipage}\hfill \begin{minipage}[c]{0.27\linewidth} \begin{center} (\textit{LS.compact.ex1}) \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.21\linewidth} \begin{center} (\textit{SV.ex1}) \end{center} \end{minipage} \vspace{0.6cm} \begin{minipage}[c]{0.48\linewidth} \begin{center} \begin{tikzpicture}[scale=0.8, auto,swap] \node[var] (X1)at(2,0){${ X}_{1}$}; \node[var] (X2)at(3.5,0){${ X}_{2}$}; \node[var] (M1)at(2.3,1.25){${M}_{1}$}; \node[var] (M2)at(3.8,1.25){${M}_{2}$}; \node[var] (dotX)at(5,0){$\dots$}; \node[var] (dotM)at(5.3,1.25){$\dots$}; \node[var] (Xmm)at(6.5,0){$X_{t_0}$}; \node[var] (Mmm)at(6.8,1.25){$M_{t_0}$}; \node[var] (X)at(8.2,0){$\mathcall{X}$}; \node[var] (M)at(8.5,1.25){$\mathcall{M}$}; \node[var] (Y)at(9.7,0.625){$Y$}; \draw[edge] (X)--(Y); \draw[edge] (M)--(Y); \draw[edge] (M1)--(M2); \draw[edge] (M2)--(dotM); \draw[edge] (X1)--(X2); \draw[edge] (X1)--(Mmm); \draw[edge] (X2)--(Mmm); 
\draw[edge] (X1)--(M2); \draw[edge] (dotX)--(Xmm); \draw[edge] (dotM)--(Mmm); \draw[edge] (X2)--(dotX); \draw[edge] (Xmm)--(X); \draw[edge] (Mmm)--(M); \draw[edge] (X1)--(M1); \draw[edge] (X2)--(M2); \draw[edge] (Xmm)--(Mmm); \draw[edge] (X1).. controls (6.5,-0.95) ..(X); \draw[edge] (X2).. controls (6.5,-0.85) ..(X); \draw[edge] (X1).. controls (4.7,-0.95) ..(Xmm); \draw[edge] (X2).. controls (5.2,-0.35) ..(Xmm); \draw[edge] (M1).. controls (6,2.25) ..(M); \draw[edge] (M2).. controls (6,2.1) ..(M); \draw[edge] (M1).. controls (5,2.25) ..(Mmm); \draw[edge] (M2).. controls (5.5,1.65) ..(Mmm); \end{tikzpicture} \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.05\linewidth} \end{minipage}\hfill \begin{minipage}[c]{0.22\linewidth} \begin{center} \begin{tikzpicture}[scale=0.72, auto,swap] \node[var] (Xt)at(4.5,0){$\bar X_{t_0}$}; \node[var] (X)at(6.5,0){$\mathcall{X}$}; \node[var] (Mt)at(5,1.5){$\bar M_{t_0}$}; \node[var] (M)at(7,1.5){$\mathcall{M}$}; \node[var] (Y)at(8.5,0.75){$Y$}; \draw[edge] (Xt)--(Mt); \draw[edge] (Xt)--(X); \draw[edge] (Mt)--(M); \draw[edge] (X)--(Y); \draw[edge] (M)--(Y); \end{tikzpicture} \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.05\linewidth} \end{minipage}\hfill \begin{minipage}[c]{0.2\linewidth} \begin{center} \begin{tikzpicture}[scale=0.9, auto,swap] \node[var] (Y)at(4,-0.5){$Y$}; \node[var] (X)at(2,0){$\mathcall{X}$}; \node[var] (M)at(3.5,1){$\mathcall{M}$}; \draw[edge] (X)--(Y); \draw[edge] (X)--(M); \draw[edge] (M)--(Y); \end{tikzpicture} \end{center} \end{minipage} \vspace{0.1cm} \begin{minipage}[c]{0.48\linewidth} \begin{center} (\textit{LS.ex2}) \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.05\linewidth} \end{minipage}\hfill \begin{minipage}[c]{0.22\linewidth} \begin{center} (\textit{LS.compact.ex2}) \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.05\linewidth} \end{minipage}\hfill \begin{minipage}[c]{0.2\linewidth} \begin{center} (\textit{SV.ex2}) \end{center} \end{minipage} \vspace{0.6cm} 
\begin{minipage}[c]{0.48\linewidth} \begin{center} \begin{tikzpicture}[scale=0.8, auto,swap] \node[var] (Xmm)at(6.5,0){$\bar X_{t_0}$}; \node[var] (Mmm)at(7.2,1.25){$\bar M_{t_0}$}; \node[var] (X)at(8.5,0){$\mathcall{X}$}; \node[var] (M)at(8.8,1.25){$\mathcall{M}$}; \node[var] (Wmm)at(5.8,2.5){$\bar W_{t_0}$}; \node[var] (W)at(8.2,2.5){$\mathcall{W}$}; \node[var] (Y)at(10,0.625){$Y$}; \draw[edge] (Wmm)--(W); \draw[edge] (Xmm)--(X); \draw[edge] (Mmm)--(M); \draw[edge] (Xmm)--(Mmm); \draw[edge] (Xmm)--(Mmm); \draw[edge] (Wmm)--(Xmm); \draw[edge] (Wmm)--(Mmm); \draw[edge] (W)--(Y); \draw[edge] (M)--(Y); \draw[edge] (X)--(Y); \end{tikzpicture} \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.48\linewidth} \begin{center} \begin{tikzpicture}[scale=0.9, auto,swap] \node[var] (Y)at(4,-0.5){$Y$}; \node[var] (X)at(2,0){$\mathcall{X}$}; \node[var] (M)at(3.2,0.9){$\mathcall{M}$}; \node[var] (W)at(2.5,1.8){$\mathcall{W}$}; \draw[edge] (X)--(Y); \draw[edge] (X)--(M); \draw[edge] (M)--(Y); \draw[edge] (W).. controls (3.9,1.4) ..(Y); \draw[edge] (W)--(X); \draw[edge] (W)--(M); \end{tikzpicture} \end{center} \end{minipage} \vspace{0.2cm} \begin{minipage}[c]{0.48\linewidth} \begin{center} (\textit{LS.ex3}) \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.48\linewidth} \begin{center} (\textit{SV.ex3}) \end{center} \end{minipage} \vspace{0.6cm} \begin{minipage}[c]{0.48\linewidth} \begin{center} \begin{tikzpicture}[scale=0.78, auto,swap] \node[var] (W2)at(2,0){$W_1$}; \node[var] (W3)at(4,0){$W_2$}; \node[var] (Wdot)at(5.5,0){$\dots$}; \node[var] (Wt)at(7,0){$W_{t_0}$}; \node[var] (W)at(8.5,0){$\mathcall{W}$}; \node[var] (X2)at(2,-1.5){$X_1$}; \node[var] (X3)at(4,-1.5){$X_2$}; \node[var] (Xdot)at(5.5,-1.5){$\dots$}; \node[var] (Xt)at(7,-1.5){$X_{t_0}$}; \node[var] (X)at(8.5,-1.5){$\mathcall{X}$}; \node[var] (Y)at(9.75,-0.75){$Y$}; \draw[edge] (X2)--(X3); \draw[edge] (X2)--(W3); \draw[edge] (X2).. 
controls (3.5, -0.85)..(Wt); \draw[edge] (X3)--(Wt); \draw[edge] (W2)--(W3); \draw[edge] (W2)--(X2); \draw[edge] (W2)--(X3); \draw[edge] (W2).. controls (3.5, -0.75)..(Xt); \draw[edge] (W3)--(Xt); \draw[edge] (W3)--(X3); \draw[edge] (W3)--(Wdot); \draw[edge] (Wdot)--(Wt); \draw[edge] (Wt)--(W); \draw[edge] (Wt)--(Xt); \draw[edge] (X3)--(Xdot); \draw[edge] (Xdot)--(Xt); \draw[edge] (Xt)--(X); \draw[edge] (X)--(Y); \draw[edge] (W)--(Y); \draw[edge] (W2).. controls (5.1,0.75) ..(Wt); \draw[edge] (W2).. controls (5,1.15) ..(W); \draw[edge] (W3).. controls (5.1,0.5) ..(Wt); \draw[edge] (W3).. controls (6.2,1.15) ..(W); \draw[edge] (X2).. controls (5.1,-2.2) ..(Xt); \draw[edge] (X2).. controls (5,-2.5) ..(X); \draw[edge] (X3).. controls (5.1,-1.9) ..(Xt); \draw[edge] (X3).. controls (6.2,-2.5) ..(X); \end{tikzpicture} \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.26\linewidth} \begin{center} \begin{tikzpicture}[scale=0.65, auto,swap] \node[var] (X)at(6,0){$\mathcall{X}$}; \node[var] (Y)at(10,0){$Y$}; \node[var] (W)at(8,1){$\mathcall{W}$}; \draw[edge] (W)--(Y); \draw[edge] (W)--(X); \draw[edge] (X)--(Y); \end{tikzpicture} \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.26\linewidth} \begin{center} \begin{tikzpicture}[scale=0.65, auto,swap] \node[var] (X)at(6,0){$\mathcall{X}$}; \node[var] (Y)at(10,0){$Y$}; \node[var] (W)at(8,1){$\mathcall{W}$}; \draw[edge] (W)--(Y); \draw[edge] (X)--(W); \draw[edge] (X)--(Y); \end{tikzpicture} \end{center} \end{minipage} \vspace{0.2cm} \begin{minipage}[c]{0.48\linewidth} \begin{center} (\textit{LS.ex4}) \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.26\linewidth} \begin{center} (\textit{SV.Conf.ex4}) \end{center} \end{minipage}\hfill \begin{minipage}[c]{0.26\linewidth} \begin{center} (\textit{SV.Med.ex4}) \end{center} \end{minipage} \captionof{figure}{(\textit{LS}) General longitudinal causal model with time-varying exposure of interest $(X_t)_{t\geq 1}$, and an additional time-varying process $(Z_t)_{t\geq 1}$,
in the situation where exposure profiles only affect the outcome through some summary variables $\mathcall{X}$ and $\mathcall{Z}$. Particular cases are presented in (\textit{LS.ex1}) (or more compactly in (\textit{LS.compact.ex1})), (\textit{LS.ex2}) (or more compactly in (\textit{LS.compact.ex2})), (\textit{LS.ex3}) and (\textit{LS.ex4}), along with their over-simplified counterparts in (\textit{SV.ex1}), (\textit{SV.ex2}), (\textit{SV.ex3}), (\textit{SV.Conf.ex4}), and (\textit{SV.Med.ex4}). When the true longitudinal model is (\textit{LS.ex4}), with a time-varying confounder $(W_t)_{t\geq 1}$ affected by the exposure, two possible over-simplified counterparts can be considered, depending on whether $(W_t)_{t\geq 1}$ is mainly considered as a confounder or a mediator.} \label{Fig:ATE_SV_general_configuration} \end{figure} Let us first discuss the causal effects of interest in this setting. Distinct exposure profiles $\bar x_{t_0}$ leading to ${\mathcall X =\mathcall x}$, for any potential value ${\mathcall x}$ of ${\mathcall X}$, can be seen as distinct versions of the ``compound treatment'' ${\mathcall x}$ \cite{VDW_H, VDW_H_2}, in the particular case where versions precede what we will refer to as treatment ${\mathcall X}$, or ${\mathcall x}$, below. Moreover, because summary variables are deterministic functions of exposure profiles, interventions on the latter, but not on the former, can be implemented in practice. As a result, $Y^{{\mathcall X}={\mathcall x}}$, although mathematically grounded, may not have a clear practical meaning. Then, and as we will now describe, causal effects of natural interest under models involving summary variables actually depend on whether or not the versions of ${\mathcall X}$ are relevant \cite{VDW_H}.
Adopting the same terminology as in \cite{VDW_H}, we will say that versions of treatment ${\mathcall X}$ are irrelevant when all versions $\bar x_{t_0}$ leading to ${\mathcall X =\mathcall x}$ also lead to the same effect on the outcome, or, more precisely, when condition $(Irrel)$ below holds: \begin{itemize} \item[] $(Irrel)$ \quad $Y^{\bar X_{t_0}=\bar x_{t_0}} = Y^{{\mathcall X}={\mathcall x}}$ for any $\bar x_{t_0}$ such that $\bar X_{t_0}=\bar x_{t_0} \Rightarrow {\mathcall X}={\mathcall x}$. \end{itemize} When the versions are irrelevant, as in model (\textit{L-c}) of Figure \ref{Fig:Total_Effect_X} for example, we have $ATE_{LS}(\bar x_{t_0} ; \bar x_{t_0}^{*}) = ATE_{LS}({\mathcall x} ; {\mathcall x}^{*}) = \mathbb{E}_{LS}\big( Y^{\mathcall{X} = \mathcall{x}} $ $- Y^{\mathcall{X} = \mathcall{x}^{*}}\big)$, for any $\bar x_{t_0}$ and $\bar x_{t_0}^{*}$ leading to $\mathcall{X} = \mathcall{x}$ and $\mathcall{X} = \mathcall{x}^{*}$, respectively. As a result, $\mathbb{E}_{LS}\left( Y^{\mathcall{X} = \mathcall{x}} - Y^{\mathcall{X} = \mathcall{x}^{*}}\right)$ is well-defined and constitutes a causal effect of interest. On the other hand, we will say that versions of the treatment are relevant when $Y^{\bar X_{t_0}=\bar x_{t_0}}$ and $Y^{\bar X_{t_0}=\bar x'_{t_0}}$ may be different even though $\bar x_{t_0}$ and $\bar x_{t_0}^{'}$ are two exposure profiles leading to the same value $\mathcall{x}$ for $\mathcall{X}$. For example, this is typically the case under model (\textit{LS}) of Figure \ref{Fig:ATE_SV_general_configuration}, since $\bar X_{t_0}$ affects $Y$ not only through $\mathcall{X}$ but also through some components of ${\mathcall Z}$. Indeed, we can have ${\mathcall Z}^{\bar X_{t_0}=\bar x_{t_0}}\neq {\mathcall Z}^{\bar X_{t_0}=\bar x'_{t_0}}$, and, in turn, $Y^{\bar X_{t_0}=\bar x_{t_0}}\neq Y^{\bar X_{t_0}=\bar x'_{t_0}}$, for some exposure profiles $\bar x_{t_0}$ and $\bar x_{t_0}^{'}$ leading to the same value $\mathcall{X} = \mathcall{x}$.
Then, when versions are relevant, we typically have $ATE_{LS}(\bar x_{t_0} ; \bar x_{t_0}^{*}) \neq ATE_{LS}(\bar x'_{t_0} ; \bar x_{t_0}^{'*})$, even if both $\bar x_{t_0}$ and $\bar x'_{t_0}$ lead to $\mathcall{X} = \mathcall{x}$ and both $\bar x_{t_0}^{*}$ and $\bar x_{t_0}^{'*}$ lead to $\mathcall{X} = \mathcall{x}^{*}$. Therefore, although still mathematically grounded, the quantity $ATE_{LS}({\mathcall x} ; {\mathcall x}^{*}) = \mathbb{E}_{LS}\left( Y^{\mathcall{X} = \mathcall{x}} - Y^{\mathcall{X} = \mathcall{x}^{*}}\right)$ is not well defined from a ``practical point of view'', and cannot be considered as a quantity of interest. Among other possibilities, the quantity \begin{eqnarray} && \hspace{-0.7cm} \sum_{\bar x_{t_0}} \lbrace \mathbb{E}_{LS} \big(Y^{\bar x_{t_0}} \big) \times \mathbb{P}( \bar X_{t_0} = \bar x_{t_0} \mid \mathcall{X} = \mathcall{x})\rbrace -\sum_{\bar x_{t_0}^*} \lbrace\mathbb{E}_{LS} \big(Y^{\bar x_{t_0}^*} \big) \times \mathbb{P}( \bar X_{t_0} = \bar x_{t_0}^* \mid \mathcall{X} = \mathcall{x}^*)\rbrace\nonumber \\ && \hspace{-0.2cm} = \sum_{\bar x_{t_0}} \sum_{\bar x_{t_0}^{*}} \lbrace ATE_{LS}(\bar x_{t_0} ; \bar x_{t_0}^{*}) \times \mathbb{P}(\bar X_{t_0} = \bar x_{t_0} \mid \mathcall{X} = \mathcall{x}) \times \mathbb{P}(\bar X_{t_0} = \bar x_{t_0}^{*} \mid \mathcall{X} = \mathcall{x}^{*})\rbrace,\label{Average} \end{eqnarray} can be regarded as a causal effect of interest. It corresponds to the difference between the expectations of the outcome in the following two counterfactual populations. In the first one, for any profile $\bar x_{t_0} $ leading to ${\mathcall X} = {\mathcall x}$, a proportion $\mathbb{P}(\bar X_{t_0} = \bar x_{t_0} \mid \mathcall{X} = \mathcall{x})$ of the individuals undergo the intervention $do(\bar X_{t_0} = \bar x_{t_0} )$. This can be regarded as a natural way to ``implement'' $do({\mathcall X} = {\mathcall x})$ in the population.
In the second counterfactual population, for any profile $\bar x^*_{t_0} $ leading to ${\mathcall x}^*$, a proportion $\mathbb{P}(\bar X_{t_0} = \bar x^*_{t_0} \mid \mathcall{X} = \mathcall{x}^*)$ of the individuals undergo the intervention $do(\bar X_{t_0} = \bar x^*_{t_0} )$, which is again a natural way to ``implement'' $do({\mathcall X} = {\mathcall x}^*)$ in the population. Other averages could be considered, such as weighted averages of longitudinal stratum-specific causal effects. In addition, we shall stress that the interpretability of such a quantity is not always straightforward, as was already the case for the weighted averages in Theorem \ref{Theo_Insta_Weak} of Section \ref{Sec:Instantaneous}; we will get back to this point later. Irrespective of the relevance of the treatment, when only data on $\mathcall{X}$ and $\mathcall{Z}$ are considered or available, practitioners generally $(i)$ overlook the time-varying nature of the exposures, $(ii)$ work under an over-simplified causal model (\textit{SV}), and $(iii)$ consider the causal effect $ATE_{SV} \left(\mathcall{x} ; \mathcall{x}^{*}\right) =\mathbb{E}_{SV}\left( Y^{\mathcall{X} = \mathcall{x}} - Y^{\mathcall{X} = \mathcall{x}^{*}}\right)$, for any $\mathcall{x} \neq \mathcall{x}^{*}$, as the causal effect of interest. For example, when the true longitudinal model is model ($LS.ex1$) given in Figure \ref{Fig:ATE_SV_general_configuration}, they would implicitly work under model ($SV.ex1$), while they would typically work under the simplified model ($SV.ex2$) if the true model is ($LS.ex2$). Again, there are true longitudinal models under which distinct over-simplified models may be considered in practice. Depending on whether $(W_t)_{t\geq 1}$ is considered to mainly act as a confounder or a mediator under the model ($LS.ex4$) of Figure \ref{Fig:ATE_SV_general_configuration}, practitioners would work under either model ($SV.Conf.ex4$) or model ($SV.Med.ex4$).
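As a concrete illustration of the contrast in Equation \eqref{Average}, the Python sketch below computes it from profile-level counterfactual means and an observational profile distribution. The summary function, profile means, and probabilities are hypothetical placeholders chosen for illustration only.

```python
import itertools

def weighted_ate(ey_profile, summary, x, x_star, profile_dist):
    """Evaluate the contrast of Equation (Average): the difference between the
    outcome means of two counterfactual populations, in which the profiles
    compatible with a given summary value are assigned in their observational
    proportions.

    ey_profile: dict profile -> E[Y^{profile}] (counterfactual means).
    summary: function mapping an exposure profile to its summary value.
    profile_dist: dict profile -> P(Xbar_{t0} = profile) (observational law).
    """
    def counterfactual_mean(value):
        mass = {p: q for p, q in profile_dist.items() if summary(p) == value}
        z = sum(mass.values())  # P(summary = value)
        return sum(ey_profile[p] * q / z for p, q in mass.items())
    return counterfactual_mean(x) - counterfactual_mean(x_star)

# Hypothetical inputs for t0 = 2, with summary X = 1{X1 + X2 >= 1}:
ey = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 1.5, (1, 1): 2.0}
dist = dict(zip(itertools.product([0, 1], repeat=2), [0.4, 0.2, 0.2, 0.2]))
effect = weighted_ate(ey, lambda p: int(sum(p) >= 1), 1, 0, dist)
```

With these placeholder numbers, the three profiles compatible with the summary value $1$ receive equal weight $1/3$, so the contrast is about $1.5$.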
In any case, given an over-simplified model ($SV$), the causal measure of interest $ATE_{SV} \left(\mathcall{x} ; \mathcall{x}^{*}\right)$ would be estimated in practice and, again, a natural question is whether, and how, this quantity relates to the longitudinal causal effects under the true longitudinal model $(LS)$. Here again, in the text we will use $ATE_{SV}$ when referring to either the causal effect or the quantity estimated in practice. Theorem \ref{Theo_SV_Weak} presents a sufficient condition under which the quantity estimated in practice expresses as a weighted average of stratum-specific longitudinal total effects. It is the analogue of Theorem \ref{Theo_Insta_Weak} in Section \ref{Sec:Results_Insta}. \begin{Theorem}\label{Theo_SV_Weak} If condition $(T3.Cond)$ below holds \begin{itemize} \item[] (T3.Cond) \quad There exists some observed $\mathcall{W} \subset \mathcall{Z}$ taking its values in $\Omega_{\mathcall{W}}$, such that $(Y^{\mathcall{X} = \mathcall{x}} \indep \mathcall{X} \mid \mathcall{W})_{SV}$ and $(Y^{\bar X_{t_0} = \bar x_{t_0}} \indep \bar X_{t_0} \mid \mathcall{W})_{LS}$ \end{itemize} then the quantity estimated in practice \begin{eqnarray*} ATE_{SV} \left(\mathcall{x} ; \mathcall{x}^{*}\right) & \Bumpeq & \sum_{\mathcall{w} \in \Omega_{\mathcall{W}}} \left[ \mathbb{E}\left( Y \mid \mathcall{W} = \mathcall{w}, \mathcall{X} = \mathcall{x} \right) - \mathbb{E}\left( Y \mid \mathcall{W} = \mathcall{w}, \mathcall{X} = \mathcall{x}^{*} \right) \right]\\[-0.3cm] && \hspace{1.2cm} \times \mathbb{P}(\mathcall{W}=\mathcall{w}), \end{eqnarray*} equals \begin{eqnarray} && \hspace{-0.2cm} \sum_{\mathcall{w} \in \Omega_{\mathcall{W}}} \sum_{\substack{\bar x_{t_0} \in \lbrace 0,1 \rbrace^{t_0} \\ \bar x^{*}_{t_0} \in \lbrace 0,1 \rbrace^{t_0}}} \lbrace ATE_{LS_{\mid \mathcall{W} = \mathcall{w}}}\left( \bar x_{t_0} ; \bar x^{*}_{t_0}\right) \times \mathbb{P}(\bar X_{t_0} = \bar x_{t_0} \mid \mathcall{X} = \mathcall{x}, \mathcall{W} =
\mathcall{w}) \nonumber \\[-0.9cm] && \hspace{6.05cm} \times \mathbb{P}(\bar X_{t_0} = \bar x_{t_0}^{*} \mid \mathcall{X} = \mathcall{x}^{*}, \mathcall{W} = \mathcall{w}) \nonumber \\ && \hspace{6.05cm} \times \mathbb{P}(\mathcall{W} = \mathcall{w})\rbrace. \label{Eq:SV_Sufficient_cond} \end{eqnarray} In particular, if condition $(T3.Uncond)$ below holds \begin{itemize} \item[] $(T3.Uncond)$ \quad $(Y^{\mathcall{X} = \mathcall{x}} \indep \mathcall{X})_{SV}$ and $(Y^{\bar X_{t_0} = \bar x_{t_0}} \indep \bar X_{t_0})_{LS}$ \end{itemize} then \begin{eqnarray} ATE_{SV}(\mathcall{x} ; \mathcall{x}^*) & \Bumpeq & \mathbb{E}\left( Y \mid\mathcall{X} = \mathcall{x} \right) - \mathbb{E}\left( Y \mid \mathcall{X} = \mathcall{x}^{*} \right), \nonumber \\ & = & \sum_{\substack{\bar x_{t_0} \in \lbrace 0,1 \rbrace^{t_0} \\ \bar x^{*}_{t_0} \in \lbrace 0,1 \rbrace^{t_0}}} \lbrace ATE_{LS}\left( \bar x_{t_0} ; \bar x^{*}_{t_0}\right) \times \mathbb{P}(\bar X_{t_0} = \bar x_{t_0} \mid \mathcall{X} = \mathcall{x}) \nonumber \\[-0.9cm] && \hspace{4.4cm} \times \mathbb{P}(\bar X_{t_0} = \bar x_{t_0}^{*} \mid \mathcall{X} = \mathcall{x}^{*}) \rbrace.\label{Eq:SV_Sufficient_uncond} \end{eqnarray} \end{Theorem} An analogue of Theorem \ref{Theo_Insta_Strong} could be given too: if there exists some observed $\mathcall{W} \subset \mathcall{Z}$ taking values in $\Omega_{\mathcall{W}}$, such that $(Y^{\mathcall{X} = \mathcall{x}} \indep \mathcall{X} | \mathcall{W})_{SV}$ and $(Y^{\mathcall{X} = \mathcall{x}} \indep \mathcall{X} | \mathcall{W})_{LS}$, then the quantity estimated in practice equals $ATE_{LS}({\mathcall x} ; {\mathcall x}^{*})$. However, the latter quantity being generally not well defined from a practical point of view unless condition $(Irrel)$ holds, we consider a slightly stronger sufficient condition in Theorem \ref{Theo_SV_Weak_Irrelevance} below. \begin{Theorem}\label{Theo_SV_Weak_Irrelevance} Assume that condition $(Irrel)$ holds.
If, in addition, either condition $(T3.Cond)$ or $(T3.Uncond)$ holds, then \begin{eqnarray*} ATE_{SV}(\mathcall{x}; \mathcall{x}^*) \Bumpeq ATE_{LS}(\mathcall{x}; \mathcall{x}^*) = ATE_{LS}\left( \bar x_{t_0} ; \bar x^{*}_{t_0}\right), \end{eqnarray*} for any $\bar x_{t_0}$ and $\bar x_{t_0}^{*}$ leading to $\mathcall{X} = \mathcall{x}$ and $\mathcall{X} = \mathcall{x}^{*}$, respectively. \end{Theorem} Detailed proofs of Theorems \ref{Theo_SV_Weak} and \ref{Theo_SV_Weak_Irrelevance} are given in Appendices \ref{Proof:Theo_SV_Weak} and \ref{Proof:Theo_SV_Weak_Irrelevance}, respectively. In Section \ref{sec:Illustr_SV}, we illustrate their implications by focusing on a few simple examples. \subsection{Examples and illustration of the general results}\label{sec:Illustr_SV} When the conditions of Theorem \ref{Theo_SV_Weak_Irrelevance} are satisfied, the interpretation of the quantity estimated in practice, $ATE_{SV} \left(\mathcall{x} ; \mathcall{x}^{*}\right)$, is straightforward, as it equals $ATE_{LS}(\bar x_{t_0} ; \bar x_{t_0}^{*})$ for any $\bar x_{t_0}$ and $\bar x_{t_0}^{*}$ leading to $\mathcall{X} = \mathcall{x}$ and $\mathcall{X} = \mathcall{x}^{*}$, respectively. However, these conditions are very restrictive. Among the examples presented in Figure \ref{Fig:ATE_SV_general_configuration}, they are only satisfied under model (\textit{LS.ex1}), for which the over-simplified counterpart is model (\textit{SV.ex1}). As made clearer below, condition $(Irrel)$ is not satisfied for models (\textit{LS.ex2}), (\textit{LS.ex3}), and (\textit{LS.ex4}) of Figure \ref{Fig:ATE_SV_general_configuration}. On the other hand, under model (\textit{LS.ex1}), versions are irrelevant, and we further have $\big(Y^{\mathcall{X}= \mathcall{x}}\indep \mathcall{X} \mid \mathcall{W}\big)_{SV.ex1}$ and $\big( Y^{\bar X_{t_0} = \bar x_{t_0}} \indep \bar X_{t_0} \mid \mathcall{W}\big)_{LS.ex1}$.
Therefore, the conditions of Theorem \ref{Theo_SV_Weak_Irrelevance} are satisfied, and even if model (\textit{SV.ex1}) is misspecified (${\mathcall W}$ is not a confounder for the $({\mathcall X}-Y)$ relationship under the true model (\textit{LS.ex1})), the parameter estimated under this over-simplified model coincides with the parameter of interest $ATE_{LS}({\mathcall x} ; {\mathcall x}^{*})$. In other words, observing $\mathcall{X}$ and $\mathcall{W}$ is sufficient to infer the causal effect of $\bar X_{t_0}$ on $Y$ under model $(LS.ex1)$. Below, we will discuss the interpretability of the weighted averages in Equations \eqref{Eq:SV_Sufficient_cond} and \eqref{Eq:SV_Sufficient_uncond} when the conditions of Theorem \ref{Theo_SV_Weak} are satisfied. Before that, we shall stress that these conditions are also quite restrictive. They are satisfied in the pure mediation setting, in the absence of further confounding, as depicted in model (\textit{LS.ex2}) of Figure \ref{Fig:ATE_SV_general_configuration}; see model (\textit{SV.ex2}) for its over-simplified counterpart. First note that, because ${\bar X_{t_0}}$ has an effect on the outcome not only through ${\mathcall X}$ but also through ${\mathcall M}$ under this model, treatment versions are relevant, as mentioned above. Nevertheless, $\big( Y^{\mathcall{X} = \mathcall{x}} \indep \mathcall{X}\big)_{SV.ex2}$ and $\big( Y^{\bar X_{t_0} = \bar x_{t_0}} \indep \bar X_{t_0}\big)_{LS.ex2}$, so that Theorem \ref{Theo_SV_Weak} ensures that $ATE_{SV}(\mathcall{x}; \mathcall{x}^*)$ expresses as the weighted average of longitudinal total effects given in Equation \eqref{Eq:SV_Sufficient_uncond}. The conditions of Theorem \ref{Theo_SV_Weak} are still satisfied in the presence of an additional time-invariant pure confounder.
However, they are no longer satisfied if the additional confounder is time-varying, as in model ($LS.ex3$): because of the presence of a time-varying mediator and of a time-varying confounder, $\mathcall{W}$ is no longer sufficient to block all back-door paths between $\bar X_{t_0}$ and $Y$ (unless $\bar W_{t_0}$ has no direct effect on $\bar M_{t_0}$). Consequently, if the true model is ($LS.ex3$), the quantity estimated in practice generally has to be interpreted with caution. See Web Supplementary Material \ref{Web_Supp_Mat_Total_Effect_Time_Varying_Mediator_TV_Confounder} for more details. Interestingly, this is in sharp contrast with the scenario of model $(LS.ex1)$, where only a time-varying pure confounder, and no time-varying pure mediator, was present, in which case we have already explained that Theorem \ref{Theo_SV_Weak_Irrelevance} guarantees that $ATE_{SV}$ has a clear interpretation. In other words, in the presence of time-varying confounding, the existence of a time-varying mediator is crucial, although it is generally overlooked when the focus is on the estimation of total effects: if there exists a time-varying mediator on top of the time-varying confounder, the conditions of Theorem \ref{Theo_SV_Weak} are not satisfied, and information on summary variables is generally not enough to derive interpretable causal effects. Another simple example where the conditions of Theorem \ref{Theo_SV_Weak}, and {\it a fortiori}, those of Theorem \ref{Theo_SV_Weak_Irrelevance}, are not satisfied arises when a time-varying confounder is affected by the exposure of interest, as in Figure \ref{Fig:ATE_SV_general_configuration} (\textit{LS.ex4}). First, treatment versions are relevant in this case, since ${\bar X_{t_0}}$ has an effect on the outcome not only through ${\mathcall X}$, but also through ${\mathcall W}$.
Moreover, we recall that in this case, two over-simplified models, (\textit{SV.Conf.ex4}) and (\textit{SV.Med.ex4}), may be considered, depending on whether $(W_t)_{t\geq 1}$ is mainly regarded as a confounder or a mediator. Irrespective of the considered over-simplified model, the conditions of Theorem \ref{Theo_SV_Weak} are not satisfied. Indeed, while sequential ignorability holds (more precisely, $(Y^{\bar X_{t_0} = \bar x_{t_0}} \indep X_{1}\mid W_{1})_{LS.ex4}$ and $\big( Y^{\bar X_{t_0} = \bar x_{t_0}} \indep \bar X_{t}\mid \lbrace \bar W_{t}, \bar X_{t-1} \rbrace \big)_{LS.ex4}$, for any $t \in \llbracket 2 ; t_0 \rrbracket$), we do not have $(Y^{\bar X_{t_0} = \bar x_{t_0}} \indep \bar X_{t_0}\mid \mathcall{W})_{LS.ex4}$, and we do not have $(Y^{\bar X_{t_0} = \bar x_{t_0}} \indep \bar X_{t_0})_{LS.ex4}$ either, because $(W_t)_{t > 1}$ acts as both a confounder and a mediator in the $(\bar X_{t_0}-Y)$ relationship. Therefore, and as detailed in Web Supplementary Material \ref{Web_Supp_Mat_Total_Effect_Time_Varying_confounder_affected}, the quantity estimated under an over-simplified model has to be interpreted with caution if the true longitudinal model is (\textit{LS.ex4}), as it generally differs from the causal effects of interest. We will now provide numerical examples to illustrate the magnitude of these differences. We consider a causal model of the same form as $(LS.ex4)$ in Figure \ref{Fig:ATE_SV_general_configuration}, with $t_0=5$, binary variables $X_t$ and $W_t$ for all $t=1, \ldots, 5$, and a continuous outcome $Y$. For any variable $U$ involved in this model, denote the exogenous variable and structural function corresponding to $U$ by $\xi_U$ and $f_U$, respectively.
We consider the causal model where $\xi_Y\sim {\mathcall N}(0, 1)$ while all other exogenous variables are univariate random variables uniformly distributed on $[0, 1]$, and \begin{eqnarray} f_{W_1} \left( \xi_{W_1} \right) \hspace{-0.2cm} &=& \hspace{-0.2cm} {\rm 1\mskip-4.4mu l} \left\lbrace \xi_{W_1} \leq 0.1 \right\rbrace, \label{Eq:Structural_Functions_Confounder_Affected} \\ f_{X_1} \left( W_1, \xi_{X_1} \right) \hspace{-0.2cm} &=& \hspace{-0.2cm} {\rm 1\mskip-4.4mu l} \left\lbrace \xi_{X_1} \leq {\rm expit} \left( \alpha W_1 + c_{X_1} \right) \right\rbrace \nonumber, \\ f_{W_t} \left( \bar W_{t-1}, \bar X_{t-1}, \xi_{W_t} \right) \hspace{-0.2cm} &=& \hspace{-0.2cm} {\rm 1\mskip-4.4mu l} \Big\lbrace \xi_{W_t} \leq {\rm expit}\big( \gamma \sum_{t' < t} W_{t'} + \rho \alpha X_{t-1} + c_{W_t} \big)\Big\rbrace, \forall t \in \llbracket 2 ; t_0 \rrbracket \nonumber,\\ f_{X_t} \left( \bar W_{t}, \bar X_{t-1}, \xi_{X_t} \right) \hspace{-0.2cm} &=& \hspace{-0.2cm} {\rm 1\mskip-4.4mu l} \Big\lbrace \xi_{X_t} \leq {\rm expit} \big( \alpha \sum_{t' \leq t} W_{t'} + \beta X_{t-1} + c_{X_t} \big) \Big\rbrace, \forall t \in \llbracket 2 ; t_0 \rrbracket \nonumber, \\ f_Y(\mathcall{X}, \mathcall{W}, \xi_Y) \hspace{-0.2cm} &=& \hspace{-0.2cm} \mu_0 + \mu_X \mathcall{X} - \mu_W \mathcall{W} + \xi_Y\nonumber. \end{eqnarray} Here ${\rm expit}(\cdot)$ denotes the sigmoid function, ${\rm 1\mskip-4.4mu l}\{\cdot\}$ denotes the indicator function, $\mathcall{X} = {\rm 1\mskip-4.4mu l}\left\lbrace \sum_{t=1} ^{t_0} X_t \geq 3\right\rbrace$ and $\mathcall{W} = {\rm 1\mskip-4.4mu l}\left\lbrace \sum_{t=1} ^{t_0} W_t \geq 3\right\rbrace$. Constant terms $c_{X_1}$, $c_{W_t}$, and $c_{X_t}$ were chosen so that the prevalences of $X_t$ and $W_t$ are about $0.1$ for all $t$ and any combination of the parameters $\alpha$, $\beta$, $\gamma$ and $\rho$. For instance, we set $c_{X_1} = {\rm logit}(0.1) - 0.1\,\alpha$, with ${\rm logit}(p)=\log[p/(1-p)]$, for $p\in(0,1)$.
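For readers who prefer code, the data-generating mechanism of Equation \eqref{Eq:Structural_Functions_Confounder_Affected} can be sketched as a short simulation. This is our own illustrative sketch, not the authors' code: the function name \texttt{simulate\_one} is ours, and, for simplicity, the per-period intercepts $c_{X_1}$, $c_{W_t}$ and $c_{X_t}$ are collapsed into three time-constant arguments.

```python
import math
import random

def expit(u):
    # sigmoid, as in the structural functions above
    return 1.0 / (1.0 + math.exp(-u))

def simulate_one(alpha, beta, gamma, rho, mu0, mu_x, mu_w,
                 c_x1, c_w, c_x, t0=5, rng=random):
    """Draw one realization of (X_1..X_t0, W_1..W_t0, summaries, Y).

    Assumption: the intercepts c_{W_t} and c_{X_t} are taken constant in t
    (arguments c_w and c_x); c_x1 plays the role of c_{X_1}.
    """
    W, X = [], []
    # t = 1: W_1 ~ Bernoulli(0.1); X_1 ~ Bernoulli(expit(alpha*W_1 + c_{X_1}))
    W.append(1 if rng.random() <= 0.1 else 0)
    X.append(1 if rng.random() <= expit(alpha * W[0] + c_x1) else 0)
    for t in range(1, t0):
        # W_t depends on W_1..W_{t-1} and X_{t-1}
        W.append(1 if rng.random() <=
                 expit(gamma * sum(W) + rho * alpha * X[t - 1] + c_w) else 0)
        # X_t depends on W_1..W_t (sum(W) now includes W_t) and X_{t-1}
        X.append(1 if rng.random() <=
                 expit(alpha * sum(W) + beta * X[t - 1] + c_x) else 0)
    sX = 1 if sum(X) >= 3 else 0   # summary exposure
    sW = 1 if sum(W) >= 3 else 0   # summary "confounder"
    Y = mu0 + mu_x * sX - mu_w * sW + rng.gauss(0.0, 1.0)
    return X, W, sX, sW, Y
```

Averaging such draws over many replications gives Monte Carlo approximations of the prevalences and of the observational contrasts discussed below.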
In this model, parameter $\alpha$ governs the strength of the effect of $W_t$ on $X_{t'}$ for $t'\geq t$, while the strength of the effect of $X_t$ on $W_{t+1}$ is governed by the product $\rho\alpha$. The special case $\rho=0$ corresponds to the scenario where the confounder is not affected by the exposure of interest (pure confounding), while $\alpha=0$ corresponds to the case where the exposure of interest and the confounder are not causally related (no mediation, no confounding). On the other hand, as parameter $\rho$ increases, we get closer to the pure mediation setting as the effect of the ``confounder'' on the exposure of interest becomes increasingly negligible compared to the effect of the exposure on the ``confounder''. For negative values of parameter $\alpha$, this simple causal model could be regarded as a simplified version of the causal model describing obesity on the age interval, say, [20-30] (process $X_t$), physical activity on the same age interval [20-30] (process $W_t$) and blood pressure at, say, 35 years old ($Y$).
\begin{figure} \begin{center} \includegraphics[scale=0.45]{Confounder_Affected_Exposure.pdf} \end{center} \caption{Analytic values of $ATE_{SV.Conf}\left(1;0 \right)$ (in black), $ATE_{SV.Med}\left(1;0\right)$ (in green), $ATE_L(\bar x_{t_0} ; \bar x_{t_0}^{*})$ (in grey) for each pair of exposure profiles leading to $\mathcall{X}= 1$ and $\mathcall{X} = 0$ and the weighted average \eqref{Average} of all these possible comparisons (in blue) under the causal model described in Equation \eqref{Eq:Structural_Functions_Confounder_Affected}.} \label{Illu_Confounder_Affected} \end{figure} Under this model, we can derive the analytic expressions of $(i)$ $ATE_{LS.ex4}(\bar x_{t_0};$ $\bar x_{t_0}^{*})$, for any pair of exposure profiles $(\bar x_{t_0}; \bar x_{t_0}^{*})$ leading to $\mathcall{X} = 1$ and $\mathcall{X} = 0$, $(ii)$ the weighted average given in Equation \eqref{Average}, but also $(iii)$ $ATE_{SV.Conf.ex4}\left(1;0\right)$, and $(iv)$ $ATE_{SV.Med.ex4}\left(1;0\right)$. Figure \ref{Illu_Confounder_Affected} presents the values of these four quantities for $\alpha\in[-3,3]$, $\rho\in \left\lbrace 0, 0.1, 0.5, 1, 2, 5, 10 \right\rbrace$ and $\mu_W \in \left\lbrace 0.5, 1, 2\right\rbrace$. The other parameters were set to $\gamma = \beta = 1$, $\mu_0 = 1$ and $\mu_X = 1$. In the pure confounding case (when $\rho = 0$), $ATE_{SV.Conf.ex4} \left(\mathcall{x} ; \mathcall{x}^{*}\right)$ equals $ATE_{LS.ex4}\left( \bar x_{t_0} ; \bar x_{t_0}^{*}\right)$ for any $\bar x_{t_0}$ and $\bar x_{t_0}^{*}$ leading to $\mathcall{X} = \mathcall{x}$ and $\mathcall{X} = \mathcall{x}^{*}$, as expected, and thus equals the quantity of interest given in Equation \eqref{Average} as well.
This also happens when $\alpha = 0$, which corresponds to the ``no mediation and no confounding'' scenario, in which case $ATE_{SV.Conf} = ATE_{SV.Med} = ATE_L\left( \bar x_{t_0} ; \bar x_{t_0}^{*}\right)$ for any $\bar x_{t_0}$ and $\bar x_{t_0}^{*}$ leading to $\mathcall{X} = \mathcall{x}$ and $\mathcall{X} = \mathcall{x}^{*}$, and so the weighted average given in Equation \eqref{Average} is also equal to $ATE_{SV.Conf}$. For all other combinations of parameters, both $ATE_{SV.Conf}$ and $ATE_{SV.Med}$ differ from the weighted average of Equation \eqref{Average}. When $\rho \in \{0.1, 0.5\}$, $(W_t)_{t\geq 1}$ mostly acts as a confounder (and not so much as a mediator), and the difference between $ATE_{SV.Conf}$ and the weighted average is generally limited. As $\rho$ increases, the difference between $ATE_{SV.Conf}$ and the weighted average increases too. Moreover, because the effect of ${\mathcall W}$ on $Y$ is $-\mu_W$, the indirect effect of the exposure process through $(W_t)_t$ is negative for positive $\alpha$, so that the weighted average can be negative, while $ATE_{SV.Conf}$ suggests a positive association, for some combinations of large values for $\rho$, $\alpha$ and $\mu_W$. On the other hand, when $\rho$ is large, $(W_t)_{t\geq 1}$ mostly acts as a mediator, and the difference between $ATE_{SV.Med}$ and the weighted average is typically small. It is also noteworthy that the weighted average \eqref{Average} happens to lie between $ATE_{SV.Conf.ex4}\left(1;0 \right)$ and $ATE_{SV.Med.ex4}\left(1;0\right)$ in all the settings presented in Figure \ref{Illu_Confounder_Affected}, although this need not hold true in general.
Finally, let us discuss the interpretability of the weighted average of Equation \eqref{Average} (in blue in Figure \ref{Illu_Confounder_Affected}), which may or may not equal the quantity estimated in practice (basically, it equals $ATE_{SV.Conf}$ if $\alpha=0$ or $\rho=0$, while it is equal to $ATE_{SV.Med}$ if $\alpha=0$, and approximately equal to $ATE_{SV.Med}$ if $\alpha \ll \rho$). Figure \ref{Illu_Confounder_Affected} nicely illustrates that the values of the ``individual'' causal effects $ATE_{LS}\left( \bar x_{t_0} ; \bar x_{t_0}^{*}\right)$, for different pairs of profiles $\bar x_{t_0} $ and $\bar x^*_{t_0} $ leading respectively to ${\mathcall X}=1$ and ${\mathcall X}=0$, may be quite heterogeneous for some combinations of the parameters, while they are more homogeneous for others. For instance, for negative values of $\alpha$ and $\rho\leq 2$, the values of the individual causal effects $ATE_{LS}\left( \bar x_{t_0} ; \bar x_{t_0}^{*}\right)$ are quite homogeneous. In particular, versions are actually irrelevant for $\rho=0$ or $\alpha=0$, and all the individual causal effects $ATE_{LS}\left( \bar x_{t_0} ; \bar x_{t_0}^{*}\right)$ are equal. In all these cases, the weighted average is then straightforward to interpret. However, the values of the individual causal effects are quite heterogeneous for other combinations of the parameters, especially when both $\rho$ and $\alpha$ are large: in these situations, the weighted average of Equation \eqref{Average}, and consequently $ATE_{SV.Med}$, have to be interpreted with caution. This echoes our discussion at the end of Section \ref{sec:Illustr_Insta} where possibly substantially different individual causal effects, such as $ATE_{L}\big( {\bf 1}_{t_0} ; {\bf 0}_{t_0} \big)$ and $ATE_{L}\big( ({\bf 0}_{t_0}, 1) ; ({\bf 1}_{t_0-1}, 0) \big)$, could contribute to the weighted average given in Equation \eqref{Eq:Sufficient_uncond} unless, {\it e.g.}, some stability assumption held.
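Concretely, the weighted average of Equation \eqref{Average} is a finite sum over pairs of binary exposure profiles and can be spelled out by direct enumeration. The sketch below is ours, not the authors' code: \texttt{ate\_long} and \texttt{p\_given\_summary} are hypothetical callables standing in for the analytic longitudinal effects and the conditional profile probabilities, and the threshold $3$ matches the definition of $\mathcall{X}$ above.

```python
from itertools import product

def weighted_average_ate(ate_long, p_given_summary, t0=5, threshold=3):
    """Enumerate the weighted average of longitudinal effects:
    sum over xbar (leading to summary X = 1) and xbar* (leading to X = 0) of
    ATE_L(xbar; xbar*) * P(xbar | X = 1) * P(xbar* | X = 0)."""
    exposed = [p for p in product((0, 1), repeat=t0) if sum(p) >= threshold]
    unexposed = [p for p in product((0, 1), repeat=t0) if sum(p) < threshold]
    total = 0.0
    for xbar in exposed:
        for xbar_star in unexposed:
            total += (ate_long(xbar, xbar_star)
                      * p_given_summary(xbar, 1)
                      * p_given_summary(xbar_star, 0))
    return total
```

As a sanity check, if every individual longitudinal effect equals the same constant $c$ and the conditional profile probabilities are proper (they sum to one within each summary class), the weighted average reduces to $c$, as in the version-irrelevance case.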
Actually, a closer inspection of the values of the weights $\mathbb{P}(\bar X_{t_0} = \bar x_{t_0} \mid \mathcall{X} = \mathcall{x}) \times \mathbb{P}(\bar X_{t_0} = \bar x_{t_0}^{*} \mid \mathcall{X} = \mathcall{x}^{*})$ is instructive. For example, consider the case where $\alpha = 3$, $\rho = 10$ and $\mu_W = 2$. In this case, the weighted average in Equation \eqref{Average} is mostly a weighted sum of the following three terms, whose cumulative weight is more than 82\%: $ATE_{LS}\big( {\bf 1}_{5} ; {\bf 0}_{5} \big) = -1$, $ATE_{LS}\big( ( {\bf 0}_1 , {\bf 1}_{4}) ; {\bf 0}_{5} \big)= -1$ and $ATE_{LS}\big( ( {\bf 0}_2 , {\bf 1}_{3}) ; {\bf 0}_{5} \big)= 0.8$. Interestingly, although these three causal effects compare the single never-exposed profile to three ever-exposed types of profiles, their values are substantially different. This is again due to the negative indirect effect of the exposure through the $(W_t)_{t\geq 1}$ process. In this particular example, the weighted average of longitudinal causal effects mostly compares ever-exposed profiles to the single never-exposed profile, and can therefore be seen as a causal quantity of interest, even if it is the average of quite different ``individual'' causal effects. This situation can of course arise as well for the weighted average of Equation \eqref{Eq:ATE_stable} under the stability assumption in Section \ref{sec:Illustr_Insta}. \section{Discussion}\label{sec:Discu} The longitudinal nature of risk factors is most often overlooked in epidemiology. In this article, we investigated whether causal effects derived when working under simplified, hence generally misspecified, models could still be related to causal effects of potential interest.
We focused on two situations regarding exposures: when inference is based on $(i)$ their ``instantaneous'' levels measured at inclusion in the study, and $(ii)$ some summary measures of their levels up to inclusion in the study, assuming that these summary measures capture the whole effect of the exposure processes on the outcome. Unsurprisingly, our results are mostly negative, in the sense that the quantity estimated in practice when working under over-simplified causal models generally has no clear interpretation in terms of longitudinal causal effects of interest, except under very simple longitudinal causal models. Under the conditions of Theorem \ref{Theo_Insta_Strong} or Theorem \ref{Theo_SV_Weak_Irrelevance}, the quantity estimated in practice has a clear interpretation, as it coincides with longitudinal total effects. But, these conditions are very restrictive. Under slightly less restrictive conditions, Theorem \ref{Theo_Insta_Weak} and Theorem \ref{Theo_SV_Weak} ensure that the quantity estimated in practice can be expressed as a weighted average of longitudinal causal effects. But, these conditions are still quite restrictive, and the interpretability of these weighted averages is not always straightforward. When inference is based on instantaneous levels of exposures measured at inclusion, practitioners should be extremely cautious when interpreting their results as the quantity of interest generally cannot be related to any causal effects of interest. A noticeable exception is when a stability assumption holds for the exposure profile and no time-varying confounder is present. In the situation where summary measures are available and capture the whole effect of past levels of exposures, the quantity estimated in practice can be related to causal effects of interest under a few simple causal models.
This is the case when the versions of the treatment are irrelevant, and either condition (\textit{T3.Cond}) or (\textit{T3.Uncond}) is verified, as for example in the presence of a time-varying pure confounder only; see model ($LS.ex1$) of Figure \ref{Fig:ATE_SV_general_configuration}. When the versions are relevant and condition (\textit{T3.Cond}) or (\textit{T3.Uncond}) is verified, the quantity of interest can be expressed as a weighted average of causal effects of interest: this is notably the case in the presence of a time-varying pure mediator only; see model ($LS.ex2$) of Figure \ref{Fig:ATE_SV_general_configuration}. However, as soon as a time-varying confounder affected by the exposure is present, and/or both time-varying pure mediators and confounders are present, the quantity estimated in practice has to be interpreted with caution since it generally cannot be related to any causal effect of interest. We shall stress that even if time-varying pure mediators are generally overlooked when the focus is on total effects, they are likely to exist in most cases. As soon as time-varying confounders exist too, summary variables are no longer sufficient to derive meaningful estimates for total causal effects. Overall, our results are in line with, and complement, those of previous works, which established the necessity of applying appropriate statistical methods on repeated measurements of exposures when the true causal model is longitudinal \cite{G_Formula, article2, article1}. Even if measurements of exposures are available at baseline only, it would be good practice to still consider the true longitudinal causal model, rather than its over-simplified counterpart.
General results on the identifiability of causal effects in the presence of unobserved variables could then be applied \cite{TianPearl2002, ShpitserPearl2006a, HuangValtorta2006} to check whether some longitudinal causal effects of interest can be identified from the available data, even if this will only be the case under very particular and simple causal models. The development of sensitivity analyses may be required for more general models. But, above all, we believe that forthcoming observational studies should plan the collection of repeated measurements, as a few studies already did, including for biomarkers \cite{koges2017}. Following these recommendations is likely even more critical when considering time-varying outcomes as in survival analysis, and when targeting causal effects defined on multiplicative scales such as relative risks and odds-ratios. \section*{Acknowledgments} The authors are grateful to Stijn Vansteelandt for insightful comments on preliminary versions of this article. \section*{Disclaimers} Where authors are identified as personnel of the International Agency for Research on Cancer / World Health Organization, the authors alone are responsible for the views expressed in this article and they do not necessarily represent the decisions, policy or views of the International Agency for Research on Cancer / World Health Organization. \vspace{0.8cm} \begin{appendices} {\noindent\Large{\bf Appendices}} \section{Technical details in the situation where instantaneous levels at inclusion in the study are available} \subsection{Proof of Theorem \ref{Theo_Insta_Strong}}\label{Proof:Theo_Insta_Strong} Consider a longitudinal model ($L$) as depicted in Figure \ref{Fig:ATE_general_configuration}, and assume that there exists $W_{t_0} \subset Z_{t_0}$ taking values in some space $\Omega_{W_{t_0}}$ such that the conditional ignorability condition $Y^{X_{t_0} = x_{t_0} } \indep X_{t_0} \mid W_{t_0}$ holds. 
Then for any $x_{t_0}$ and $x_{t_0}^{*}$ in $\lbrace0,1\rbrace$, usual arguments of causal inference (that is, the application of the ignorability condition, and the consistency and positivity conditions) \cite{pearl2009statsurvey, robins1986, rosenbaum1983} yield \begin{eqnarray*} ATE_L \left( x_{t_0} ; x^{*}_{t_0} \right) &:=& \mathbb{E}_L\left( Y^{X_{t_0} = x_{t_0} } - Y^{X_{t_0} = x^{*}_{t_0} } \right), \\ &=& \hspace{-0.5cm} \sum_{w_{t_0} \in \Omega_{W_{t_0}}} \mathbb{E}_L\left( Y^{X_{t_0} = x_{t_0} } - Y^{X_{t_0} = x^{*}_{t_0} } \mid W_{t_0} = w_{t_0} \right) \times \mathbb{P}( W_{t_0} = w_{t_0}), \\ &=& \hspace{-0.5cm} \sum_{w_{t_0} \in \Omega_{W_{t_0}}} \hspace{-0.2cm} \Big[ \mathbb{E}_{L} \Big( Y^{X_{t_0} = x_{t_0} } \mid W_{t_0} = w_{t_0}, {X_{t_0} = x_{t_0} } \Big) \\[-0.4cm] && \hspace{1cm} - \mathbb{E}_{L} \Big( Y^{X_{t_0} = x^{*}_{t_0} } \mid W_{t_0} = w_{t_0}, {X_{t_0} = x^{*}_{t_0} } \Big) \Big] \times \mathbb{P}( W_{t_0} = w_{t_0}),\\ &=& \hspace{-0.5cm} \sum_{w_{t_0} \in \Omega_{W_{t_0}}} \hspace{-0.2cm} \Big[ \mathbb{E} \Big( Y \mid W_{t_0} = w_{t_0}, {X_{t_0} = x_{t_0} } \Big) - \mathbb{E} \Big( Y \mid W_{t_0} = w_{t_0}, {X_{t_0} = x^{*}_{t_0} } \Big) \Big]\\[-0.3cm] && \hspace{1cm} \times \mathbb{P}( W_{t_0} = w_{t_0}). \end{eqnarray*} Now, consider an over-simplified model ($CS$) under which $Y^{X_{t_0} = x_{t_0} } \indep X_{t_0} \mid W_{t_0}$.
Then the quantity estimated in practice when working under this over-simplified model is, for any $x_{t_0}$ and $x_{t_0}^{*}$ in $\lbrace0,1\rbrace$, \begin{eqnarray*} ATE_{CS}\left(x_{t_0} ;x^{*}_{t_0}\right) &:=& \mathbb{E}_{CS}\left( Y^{X_{t_0}=x_{t_0}} - Y^{X_{t_0}=x_{t_0}^{*}}\right), \hspace{15cm}\\ \end{eqnarray*} \begin{eqnarray*} &=& \hspace{-0.5cm} \sum_{w_{t_0} \in \Omega_{W_{t_0}}}\hspace{-0.2cm} \left[ \mathbb{E}_{CS} \left( Y^{X_{t_0}=x_{t_0}} \mid W_{t_0} = w_{t_0}\right) - \mathbb{E}_{CS}\left( Y^{X_{t_0}=x_{t_0}^*} \mid W_{t_0} = w_{t_0} \right) \right] \\[-0.3cm] && \hspace{0.9cm} \times \mathbb{P}(W_{t_0} = w_{t_0}),\\ &=& \hspace{-0.5cm} \sum_{w_{t_0} \in \Omega_{W_{t_0}}}\hspace{-0.2cm} \Big[ \mathbb{E}_{CS} \left( Y^{X_{t_0}=x_{t_0}} \mid W_{t_0} = w_{t_0}, X_{t_0}=x_{t_0}\right) \\[-0.4cm] && \hspace{1cm} - \mathbb{E}_{CS} \left( Y^{X_{t_0}=x_{t_0}^*} \mid W_{t_0} = w_{t_0}, X_{t_0}=x^{*}_{t_0}\right) \Big] \times \mathbb{P}(W_{t_0} = w_{t_0}),\\ &\Bumpeq & \hspace{-0.5cm} \sum_{w_{t_0} \in \Omega_{W_{t_0}}}\hspace{-0.2cm} \left[ \mathbb{E} \left( Y \mid W_{t_0} = w_{t_0}, X_{t_0}=x_{t_0}\right) - \mathbb{E}\left( Y \mid W_{t_0} = w_{t_0}, X_{t_0}=x^{*}_{t_0}\right) \right] \\[-0.3cm] && \hspace{0.9cm} \times \mathbb{P}(W_{t_0} = w_{t_0})\\ &=& ATE_L \left( x_{t_0} ; x^{*}_{t_0} \right). \end{eqnarray*} Using similar arguments, if $(Y^{X_{t_0} = x_{t_0}} \indep X_{t_0})_{L}$ and $(Y^{X_{t_0} = x_{t_0}} \indep X_{t_0})_{CS}$, it follows that \begin{eqnarray*} ATE_{CS}\left(x_{t_0} ;x^{*}_{t_0}\right) & \Bumpeq & \mathbb{E} \Big( Y \mid { X_{t_0} = x_{t_0} } \Big) - \mathbb{E} \Big( Y \mid {X_{t_0} = x^{*}_{t_0} } \Big)\\ &=&ATE_L \left( x_{t_0} ; x^{*}_{t_0} \right), \hspace{10cm} \end{eqnarray*} which completes the proof of Theorem \ref{Theo_Insta_Strong}. 
\subsection{Proof of Theorem \ref{Theo_Insta_Weak}}\label{Proof:Theo_Insta_Weak} Consider again a longitudinal model ($L$) as depicted in Figure \ref{Fig:ATE_general_configuration}, and assume that there exists $W_{t_0} \subset Z_{t_0}$ taking values in some space $\Omega_{W_{t_0}}$ such that $Y^{\bar X_{t_0} = \bar x_{t_0}} \indep \bar X_{t_0} \mid W_{t_0}$. Then for any $\bar x_{t_0}$ and $\bar x^{*}_{t_0}$ in $\left\lbrace 0, 1 \right\rbrace ^{t_0}$, usual arguments of causal inference \cite{pearl2009statsurvey, robins1986, rosenbaum1983} yield \begin{eqnarray*} ATE_L\left( \bar x_{t_0} ; \bar x^{*}_{t_0}\right) &:=& \mathbb{E}_L\left( Y^{\bar X_{t_0} = \bar x_{t_0} } - Y^{\bar X_{t_0} = \bar x^{*}_{t_0} } \right), \\ &=& \hspace{-0.5cm} \sum_{w_{t_0} \in \Omega_{W_{t_0}}} \hspace{-0.2cm} \Big[ \mathbb{E} \Big( Y \mid W_{t_0} = w_{t_0}, {\bar X_{t_0} = \bar x_{t_0} } \Big) - \mathbb{E} \Big( Y \mid W_{t_0} = w_{t_0}, {\bar X_{t_0} = \bar x^{*}_{t_0} } \Big) \Big]\\[-0.3cm] && \hspace{0.95cm} \times \mathbb{P}( W_{t_0} = w_{t_0}). \end{eqnarray*} Now, consider an over-simplified model ($CS$) under which $Y^{X_{t_0} = x_{t_0}} \indep X_{t_0} \mid W_{t_0}$. Then the quantity estimated in practice when working under this over-simplified model is, for any $x_{t_0}$ and $x_{t_0}^{*}$ in $\lbrace0,1\rbrace$, \begin{eqnarray*} ATE_{CS}\left(x_{t_0} ;x^{*}_{t_0}\right) &:=& \mathbb{E}_{CS}\left( Y^{X_{t_0}=x_{t_0}} - Y^{X_{t_0}=x_{t_0}^{*}}\right),\\ & \Bumpeq & \hspace{-0.5cm} \sum_{w_{t_0} \in \Omega_{W_{t_0}}} \hspace{-0.2cm} \left[ \mathbb{E}\left( Y \mid W_{t_0} = w_{t_0}, X_{t_0}=x_{t_0}\right) - \mathbb{E}\left( Y \mid W_{t_0} = w_{t_0}, X_{t_0}=x^{*}_{t_0}\right) \right] \\[-0.3cm] && \hspace{0.95cm} \times \mathbb{P}(W_{t_0} = w_{t_0}). 
\end{eqnarray*} But, under model ($L$), we have, for any $x_{t_0}$ in $\lbrace0, 1\rbrace$ and $w_{t_0} \in \Omega_{W_{t_0}}$, \begin{eqnarray*} \mathbb{E}\left( Y \mid W_{t_0} = w_{t_0}, X_{t_0}=x_{t_0}\right) &=& \sum_{\bar x_{t_0-1}} \mathbb{E} \Big( Y \mid W_{t_0} = w_{t_0}, {\bar X_{t_0} = \bar x_{t_0} } \Big) \\[-0.4cm] && \hspace{0.75cm} \times \mathbb{P}(\bar X_{t_0-1} = \bar x_{t_0-1} \mid X_{t_0}=x_{t_0}, W_{t_0} = w_{t_0}) , \\ &=& \sum_{\bar x_{t_0-1}} \mathbb{E}_L \Big( Y^{\bar X_{t_0} = \bar x_{t_0} } \mid W_{t_0} = w_{t_0} \Big) \\[-0.4cm] && \hspace{0.75cm} \times \mathbb{P}(\bar X_{t_0-1} = \bar x_{t_0-1} \mid X_{t_0}=x_{t_0}, W_{t_0} = w_{t_0}), \end{eqnarray*} where the sum is over all possible values of $\bar X_{t_0 -1}$ in $\lbrace 0,1 \rbrace^{t_0-1}$. Therefore, \begin{eqnarray*} ATE_{CS}\left(x_{t_0} ;x^{*}_{t_0}\right) &\Bumpeq& \sum_{w_{t_0} \in \Omega_{W_{t_0}}} \sum_{\substack{\bar x_{t_0-1} \\ \bar x^{*}_{t_0-1}}} \lbrace ATE_{L_{\mid W_{t_0} = w_{t_0}}}\left( \bar x_{t_0} ; \bar x^{*}_{t_0}\right) \\[-0.8cm] && \hspace{2.8cm} \times \mathbb{P}(\bar X_{t_0-1} = \bar x_{t_0-1} \mid X_{t_0}=x_{t_0}, W_{t_0} = w_{t_0}) \nonumber \\ && \hspace{2.8cm} \times \mathbb{P}(\bar X_{t_0-1} = \bar x_{t_0-1}^{*} \mid X_{t_0}=x_{t_0}^{*}, W_{t_0} = w_{t_0}) \nonumber \\ && \hspace{2.8cm} \times \mathbb{P}(W_{t_0} = w_{t_0}) \rbrace, \end{eqnarray*} which establishes the result under condition ($T2.Cond$).\\ The proof of the result under condition ($T2.Uncond$) follows from similar, but simpler, arguments and is therefore omitted.
\section{Technical details in the situation where summary measures of past exposures are available} \subsection{Proof of Theorem \ref{Theo_SV_Weak}}\label{Proof:Theo_SV_Weak} Consider a longitudinal model ($LS$) as depicted in Figure \ref{Fig:ATE_SV_general_configuration}, and assume that there exists $\mathcall{W} \subset \mathcall{Z}$ taking its values in some space $\Omega_{\mathcall{W}}$ such that $Y^{\bar X_{t_0} = \bar x_{t_0} } \indep \bar X_{t_0} \mid \mathcall{W}$. Then, for any $\bar x_{t_0}$ and $\bar x^{*}_{t_0}$ in $\left\lbrace 0, 1 \right\rbrace ^{t_0}$, usual arguments of causal inference \cite{pearl2009statsurvey, robins1986, rosenbaum1983} yield \begin{eqnarray*} ATE_{LS}\left( \bar x_{t_0} ; \bar x^{*}_{t_0}\right) &=& \hspace{-0.2cm} \sum_{\mathcall{w} \in \Omega_{\mathcall{W}}} ATE_{LS_{\mid \mathcall{W} = \mathcall{w}}} \left( \bar x_{t_0} ; \bar x^{*}_{t_0}\right) \times \mathbb{P}( \mathcall{W} = \mathcall{w} ) , \\ &=& \hspace{-0.2cm} \sum_{\mathcall{w} \in \Omega_{\mathcall{W}}} \mathbb{E}_{LS}\left( Y^{\bar X_{t_0} = \bar x_{t_0} } - Y^{\bar X_{t_0} = \bar x^{*}_{t_0} } \mid \mathcall{W} = \mathcall{w} \right) \times \mathbb{P}( \mathcall{W} = \mathcall{w}), \\ \end{eqnarray*} \begin{eqnarray*} \hspace{3cm} &=& \hspace{-0.2cm} \sum_{\mathcall{w} \in \Omega_{\mathcall{W}}} \Big[ \mathbb{E}_{LS} \Big( Y^{\bar X_{t_0} = \bar x_{t_0} } \mid \mathcall{W} = \mathcall{w}, {\bar X_{t_0} = \bar x_{t_0} } \Big) \\[-0.3cm] && \hspace{1.05cm} - \mathbb{E}_{LS} \Big( Y^{\bar X_{t_0} = \bar x^{*}_{t_0} } \mid \mathcall{W} = \mathcall{w}, {\bar X_{t_0} = \bar x^{*}_{t_0} } \Big) \Big] \times \mathbb{P}( \mathcall{W} = \mathcall{w}),\\ &=& \hspace{-0.2cm} \sum_{\mathcall{w} \in \Omega_{\mathcall{W}}} \Big[ \mathbb{E} \Big( Y \mid \mathcall{W} = \mathcall{w}, {\bar X_{t_0} = \bar x_{t_0} } \Big) - \mathbb{E} \Big( Y \mid \mathcall{W} = \mathcall{w}, {\bar X_{t_0} = \bar x^{*}_{t_0} } \Big) \Big]\\[-0.2cm] && \hspace{0.95cm} \times \mathbb{P}(\mathcall{W} = \mathcall{w}). \end{eqnarray*} Now, consider an over-simplified model ($SV$) under which $Y^{\mathcall{X} = \mathcall{x}} \indep \mathcall{X} \mid \mathcall{W}$. Then, the quantity estimated in practice when working under this over-simplified model is, for any given $\mathcall{x},\mathcall{x}^{*}$, \begin{eqnarray*} ATE_{SV} \left(\mathcall{x} ; \mathcall{x}^{*}\right) &:=&\mathbb{E}_{SV}\left( Y^{\mathcall{X} = \mathcall{x}} - Y^{\mathcall{X} = \mathcall{x}^{*}}\right),\\ &\Bumpeq & \hspace{-0.2cm} \sum_{\mathcall{w} \in \Omega_{\mathcall{W}}} \left[ \mathbb{E}\left( Y \mid \mathcall{W} = \mathcall{w}, \mathcall{X} = \mathcall{x} \right) - \mathbb{E}\left( Y \mid \mathcall{W} = \mathcall{w}, \mathcall{X} = \mathcall{x}^{*} \right) \right] \nonumber \\[-0.3cm] && \hspace{0.95cm} \times \mathbb{P}(\mathcall{W}=\mathcall{w}). \end{eqnarray*} But, because $\bar X_{t_0}$ \textit{d}-separates $\mathcall{X}$ and $\mathcall{W}$ under model ($LS$) \cite{pearl_book, Verma_Pearl88}, we have, for any $\bar x_{t_0}$ in $\lbrace 0 , 1\rbrace ^{t_0}$ and any $\mathcall{w}$ in $\Omega_{\mathcall{W}}$, \begin{eqnarray*} \mathbb{E} \Big( Y \mid \mathcall{W} = \mathcall{w}, {\bar X_{t_0} = \bar x_{t_0} } \Big) &=& \sum_{\mathcall{x}} \mathbb{E} \Big( Y \mid \mathcall{W} = \mathcall{w}, \mathcall{X} = \mathcall{x}, {\bar X_{t_0} = \bar x_{t_0} } \Big) \\[-0.2cm] && \hspace{0.6cm} \times \mathbb{P}(\mathcall{X} = \mathcall{x} \mid \mathcall{W} = \mathcall{w}, {\bar X_{t_0} = \bar x_{t_0} } ),\\ &=& \sum_{\mathcall{x}} \mathbb{E} \Big( Y \mid \mathcall{W} = \mathcall{w}, \mathcall{X} = \mathcall{x}, {\bar X_{t_0} = \bar x_{t_0} } \Big) \\[-0.2cm] && \hspace{0.6cm} \times \mathbb{P}(\mathcall{X} = \mathcall{x} \mid {\bar X_{t_0} = \bar x_{t_0} } ), \\ &=& \mathbb{E} \Big( Y \mid \mathcall{W} = \mathcall{w}, \mathcall{X} = \mathcall{x}, {\bar X_{t_0} = \bar x_{t_0} } \Big), \end{eqnarray*} with $\mathcall{x}$ corresponding to the value taken by $\mathcall{X}$ when $\bar X_{t_0} = \bar x_{t_0}$. Furthermore, for any $\mathcall{x}$, we have \begin{eqnarray*} \mathbb{E}\left( Y \mid \mathcall{W} = \mathcall{w}, \mathcall{X} = \mathcall{x} \right) &=& \sum_{\bar x_{t_0}} \mathbb{E}\left( Y \mid \mathcall{W} = \mathcall{w}, \mathcall{X} = \mathcall{x} , {\bar X_{t_0} = \bar x_{t_0} } \right) \\[-0.4cm] && \hspace{0.75cm} \times \mathbb{P}(\bar X_{t_0} = \bar x_{t_0} \mid \mathcall{W} = \mathcall{w}, \mathcall{X} = \mathcall{x} ) , \\ &=& \sum_{\bar x_{t_0}} \mathbb{E}\left( Y \mid \mathcall{W} = \mathcall{w}, {\bar X_{t_0} = \bar x_{t_0} } \right) \\[-0.4cm] && \hspace{0.75cm} \times \mathbb{P}(\bar X_{t_0} = \bar x_{t_0} \mid \mathcall{W} = \mathcall{w}, \mathcall{X} = \mathcall{x} ) , \\ &=& \sum_{\bar x_{t_0}} \mathbb{E}_{LS} \Big( Y^{\bar X_{t_0} = \bar x_{t_0} } \mid \mathcall{W} = \mathcall{w} \Big) \\[-0.4cm] && \hspace{0.75cm} \times \mathbb{P}(\bar X_{t_0} = \bar x_{t_0} \mid \mathcall{W} = \mathcall{w}, \mathcall{X} = \mathcall{x} ), \end{eqnarray*} where the second equality comes from the fact that $\bar X_{t_0} = \bar x_{t_0} \Rightarrow \mathcall{X} = \mathcall{x}$ for any $\bar x_{t_0}$ such that $\mathbb{P}(\bar X_{t_0} = \bar x_{t_0} \mid \mathcall{W} = \mathcall{w}, \mathcall{X} = \mathcall{x})\neq 0$.
This finally yields \begin{eqnarray*} ATE_{SV}(\mathcall{x}; \mathcall{x}^*) \hspace{-0.1cm} & \Bumpeq & \hspace{-0.3cm} \sum_{\mathcall{w} \in \Omega_{\mathcall{W}}} \sum_{\substack{\bar x_{t_0} \\ \bar x^{*}_{t_0}}} \lbrace ATE_{LS_{\mid \mathcall{W} = \mathcall{w}}}\left( \bar x_{t_0} ; \bar x^{*}_{t_0}\right) \times \mathbb{P}(\bar X_{t_0} = \bar x_{t_0} \mid \mathcall{X} = \mathcall{x}, \mathcall{W} = \mathcall{w}) \nonumber \\[-0.9cm] && \hspace{4.85cm} \times \mathbb{P}(\bar X_{t_0} = \bar x_{t_0}^{*} \mid \mathcall{X} = \mathcall{x}^{*}, \mathcall{W} = \mathcall{w})\\ && \hspace{4.85cm} \times \mathbb{P}(\mathcall{W} = \mathcall{w})\rbrace, \end{eqnarray*} where the sums are over all $\bar x_{t_0}$ and $\bar x_{t_0}^*$ in $\lbrace 0 , 1 \rbrace^{t_0}$ such that $\mathbb{P}(\bar X_{t_0} = \bar x_{t_0} \mid \mathcall{W} = \mathcall{w}, \mathcall{X} = \mathcall{x})$ and $\mathbb{P}(\bar X_{t_0} = \bar x_{t_0}^* \mid \mathcall{W} = \mathcall{w}, \mathcall{X} = \mathcall{x}^*)$, respectively, are not null.\\ The proof of the result under condition ($T3.Uncond$) follows from similar, but simpler, arguments and is therefore omitted. \subsection{Proof of Theorem \ref{Theo_SV_Weak_Irrelevance}}\label{Proof:Theo_SV_Weak_Irrelevance} First consider a model (\textit{LS}) as depicted in Figure \ref{Fig:ATE_SV_general_configuration}, and assume that the versions of the treatment are irrelevant, and that there exists $\mathcall{W} \subset \mathcall{Z}$ such that $\big( Y^{\bar X_{t_0} = \bar x_{t_0} } \indep \bar X_{t_0} \mid \mathcall{W} \big)_{LS}$ and $\big( Y^{\mathcall{X} = \mathcall{x}} \indep \mathcall{X} \mid \mathcall{W} \big)_{SV}$. Consider any given $\mathcall x \neq \mathcall x^{*}$; for any $\bar x_{t_0}$ such that $\bar X_{t_0}=\bar x_{t_0} \Rightarrow {\mathcall X}={\mathcall x}$, we have $Y^{\bar X_{t_0}=\bar x_{t_0}} = Y^{{\mathcall X}={\mathcall x}}$. 
Therefore, for such $\bar x_{t_0}$, we {\it a fortiori} have $\mathbb{E} \big( Y^{\bar X_{t_0}=\bar x_{t_0}} \mid \mathcall{W} = \mathcall{w} \big)= \mathbb{E} \big( Y^{{\mathcall X}={\mathcall x}} \mid \mathcall{W} = \mathcall{w} \big)$ for any $\mathcall{w}$ in $\Omega_{\mathcall{W}}$. As a result, for any $\mathcall{w}$ in $\Omega_{\mathcall{W}}$ and any $\bar x_{t_0}$ and $\bar x_{t_0}^{*}$ leading respectively to $\mathcall{X} = \mathcall{x}$ and $\mathcall{X} = \mathcall{x}^{*}$, $ATE_{LS_{\mid \mathcall{W} = \mathcall{w}}}(\bar x_{t_0} ; \bar x_{t_0}^{*}) = ATE_{LS_{\mid \mathcall{W} = \mathcall{w}}}({\mathcall x} ; {\mathcall x}^{*})$. According to the result of Theorem \ref{Theo_SV_Weak}, we finally have \begin{eqnarray*} ATE_{SV}(\mathcall{x}; \mathcall{x}^*) \hspace{-0.1cm} &{\Bumpeq}& \hspace{-0.3cm}\sum_{\mathcall{w} \in \Omega_{\mathcall{W}}} \sum_{\substack{\bar x_{t_0} \\ \bar x^{*}_{t_0}}} \lbrace ATE_{LS_{\mid \mathcall{W} = \mathcall{w}}}({\mathcall x} ; {\mathcall x}^{*}) \times \mathbb{P}(\bar X_{t_0} = \bar x_{t_0} \mid \mathcall{X} = \mathcall{x}, \mathcall{W} = \mathcall{w}) \nonumber \\[-0.9cm] && \hspace{4.7cm} \times \mathbb{P}(\bar X_{t_0} = \bar x_{t_0}^{*} \mid \mathcall{X} = \mathcall{x}^{*}, \mathcall{W} = \mathcall{w})\\ && \hspace{4.7cm} \times \mathbb{P}(\mathcall{W} = \mathcall{w}) \rbrace,\\ &=& \hspace{-0.3cm} \sum_{\mathcall{w} \in \Omega_{\mathcall{W}}} ATE_{LS_{\mid \mathcall{W} = \mathcall{w}}}({\mathcall x} ; {\mathcall x}^{*}) \times \mathbb{P}(\mathcall{W} = \mathcall{w}),\\ &=& ATE_{LS}(\mathcall{x}; \mathcall{x}^*)\\ &=& ATE_{LS}(\bar x_{t_0} ; \bar x_{t_0}^{*}). \end{eqnarray*} In the last equality, $\bar x_{t_0}$ and $\bar x_{t_0}^{*}$ are two profiles leading to $\mathcall{X} = \mathcall{x}$ and $\mathcall{X} = \mathcall{x}^{*}$, respectively. The proof of the result under condition ($T3.Uncond$) follows from similar, but simpler, arguments and is therefore omitted. \end{appendices} \newpage \bibliographystyle{plain}
\section{Introduction} Quantum chromodynamics (QCD) is currently understood as the fundamental theory of strong interactions, but its strong coupling and non-Abelian nature make a complete understanding difficult. The QCD vacuum undergoes significant changes at high temperatures, as realized in the early universe and in the quark-gluon plasma created in ultrarelativistic heavy-ion collisions. Even at high temperatures, the interaction remains strong near the QCD transition temperature $T_c$, and the vacuum structure becomes complex due to spontaneous symmetry breaking (SSB) specific to non-Abelian gauge systems. In Yang-Mills theory at finite temperature, the deconfinement phase transition is described by the Polyakov loop $L$, the expectation value of which is related to the free energy $E_q$ of a static single quark at temperature $T$: $\langle L \rangle \propto e^{-E_q/T}$. From a symmetry perspective, the deconfinement phase is described as SSB of the $\mathbb{Z}_N$ center symmetry of the SU($N$) gauge group \cite{Yaffe}. In this case, the Polyakov loop expectation value $\langle L \rangle$ serves as the order parameter associated with $\mathbb{Z}_N$ symmetry during the phase transition \cite{Wilson74, Polyakov77, Debbio97}. When the thermal expectation value is zero, i.e., $\langle L \rangle = 0$, quarks are confined and $\mathbb{Z}_N$ symmetry is manifest. On the other hand, $\langle L \rangle \neq 0$ signifies quark deconfinement and SSB of $\mathbb{Z}_N$ symmetry, even though the action of the Yang-Mills theory is $\mathbb{Z}_N$-invariant. Reflecting the broken $\mathbb{Z}_N$ symmetry, there are $N$ different vacua in the SU($N$) Yang-Mills theory at high temperatures in a homogeneous deconfined system. After one of these vacua is randomly chosen, quantum and thermal effects cause local fluctuations around the selected vacuum.
For finite $N$, the $\mathbb{Z}_N$ discrete symmetry does not allow for a Nambu-Goldstone mode to appear in the spontaneously $\mathbb{Z}_N$-broken phase. However, in the large-$N$ limit, which is often used in the context of the $1/N$ expansion, we speculate that the global center symmetry $\mathbb{Z}_N$ becomes an approximately continuous U(1)-like group, allowing for the appearance of quasi-Nambu-Goldstone modes as local fluctuations in a $\mathbb{Z}_N$-broken vacuum. In fact, the mass of the fluctuation along a particular direction becomes zero, as we will show in Sec. II. The large-$N$ limit not only provides a deeper understanding of QCD, but is also of theoretical interest through its connections to the AdS/CFT correspondence, the accuracy of saddle-point approximations, the dominance of planar diagrams \cite{tHooft74}, and the quarkyonic phase in large-$N$ high-density QCD \cite{McLerran:2007qj}. \begin{figure}[htbp] \includegraphics[width=86mm]{Domain_wall.png} \caption{The domain structure of quark-gluon plasma in the experimental situation. The arrows in the figure represent the phases of the vacuum expectation values of the Polyakov loops. Each domain, separated by domain walls, is characterized by its phase, and the domains are stabilized as different vacua.} \label{domainwall} \end{figure} In reality, quark-gluon plasma in experiments is confined to a finite volume, allowing for transitions between different vacua in the entire system. In a high-energy heavy-ion collision, which is one potential way to create quark-gluon plasma, the system immediately after the phase transition is expected to be divided into many small-volume domains, each of which has a homogeneous configuration and a randomly chosen vacuum out of $N$, as shown in Fig.~\ref{domainwall}. In other words, domain walls separate the system, with different domains taking on different vacuum configurations.
Such a domain structure may be important to explain experimental signals of large opacity and near-ideal-fluid properties of the quark-gluon plasma \cite{Asakawa:2012yv}. Since the volume of each domain is finite, the global vacuum-to-vacuum transition rate or the lifetime of a vacuum will also be finite. Furthermore, reflecting the emergence of Nambu-Goldstone modes in the large-$N$ limit, the lifetime of a vacuum in a system of relatively small volume is expected to vanish in that limit. The instability and vacuum-to-vacuum transition can be quantitatively evaluated as functions of the volume of the system and $N$. In Sec. II, we examine local fluctuations around a specific $\mathbb{Z}_N$ vacuum and demonstrate that some fluctuations become massless modes in the large-$N$ limit in both a toy model and the pure Yang-Mills theory. In Sec. III, we consider vacuum-to-vacuum transitions in the deconfinement phase and convert this problem into a thermal penetration problem in a one-dimensional quantum mechanical system. In Sec. IV, we examine the $\mathbb{Z}_N$-broken vacuum with finite volume $V$ and evaluate its lifetime and stability as a function of $V$ and $N$. Finally, in Sec. V, we summarize and conclude. \section{Symmetry conversion} \begin{figure*}[htbp] \includegraphics[width=129mm]{summary.png} \caption{It is possible for the discrete symmetry to be converted into a continuous one. In the large-$N$ limit, a $\mathbb{Z}_N$-symmetric potential is expected to become U(1)-symmetric, leading to the conversion of SSB of $\mathbb{Z}_N$ symmetry into that of a continuous symmetry.
This produces a massless mode in the angle direction.} \label{summary} \end{figure*} To begin with, consider an arbitrary action $S[\varphi]$ involving a quantum field $\varphi$ with global $\mathbb{Z}_N$ symmetry: $S[\varphi] = S[z \varphi]$ $(z \in \mathbb{Z}_N)$, and assume that the symmetry is spontaneously broken in terms of the vacuum expectation value $\langle \varphi \rangle \neq z \langle \varphi \rangle$. In other words, the theory possesses $N$ degenerate vacua under specific conditions such as low temperature. The question is whether the large-$N$ limit can bring about some qualitative changes in the theoretical structure. It is important to keep in mind that $\mathbb{Z}_N$ is a discrete group, not a continuous one. The Nambu-Goldstone theorem \cite{Nambu60, Goldstone62} states that the SSB of a global continuous symmetry leads to the emergence of massless modes, the number of which corresponds to the number of broken degrees of freedom. In this sense, the SSB of $\mathbb{Z}_N$ symmetry does not necessarily result in the emergence of Nambu-Goldstone modes, unlike that of a continuous symmetry. However, massless modes will emerge in the large-$N$ limit because, in this limit, $\mathbb{Z}_N$ symmetry is expected to become an approximately continuous U(1)-like symmetry, as shown in Fig.~\ref{summary}. This conversion can play a crucial role in the analysis of large-$N$ systems. \subsection{Toy model: \\Ising-like $\mathbb{Z}_N$ spin system} We will now describe the above conversion in statistical mechanics using a simple toy model with the Hamiltonian: \begin{eqnarray} \mathcal{H} &=& - J \sum_{\braket{i, j}} (S_i^* S_j + \mathrm{c.c.}) \cr &=&-\sum_{i, j} S_i^* \hat J_{ij} S_j =- S^\dagger \hat J S, \end{eqnarray} where $ S_i = e^{i \frac{2 \pi n}{N}} \quad (n = 0, 1, ... , N-1) $ are spin variables, $J>0$ is an interaction parameter, and $\braket{i,j}$ denotes an adjacent pair of the spin variables.
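As a minimal numerical sketch (our own, with an arbitrary chain length, clock order $N$, and coupling), one can verify the global $\mathbb{Z}_N$ invariance of this clock-model Hamiltonian directly:

```python
import numpy as np

# Z_N clock model on a small periodic 1D chain.
# N = 5, L_lat = 8 sites, and J = 1.0 are arbitrary test choices.
rng = np.random.default_rng(0)
N, L_lat, J = 5, 8, 1.0

def energy(spins):
    # H = -J * sum_<i,j> (S_i^* S_j + c.c.) over nearest-neighbor pairs,
    # each pair counted once via a shift by one lattice site.
    nn = np.roll(spins, -1)
    return (-J * np.sum(spins.conj() * nn + spins * nn.conj())).real

# Random Z_N spin configuration: S_i = exp(2*pi*i*n_i/N).
n = rng.integers(0, N, size=L_lat)
S = np.exp(2j * np.pi * n / N)

# Global Z_N rotation S_i -> exp(2*pi*i*m/N) * S_i leaves H invariant.
E0 = energy(S)
invariant = all(np.isclose(energy(np.exp(2j * np.pi * m / N) * S), E0)
                for m in range(N))
```

The fully aligned configuration reproduces the expected minimum energy $-2JN_{lat}$ on a periodic chain, consistent with $J>0$ favoring ferromagnetic order.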
We define a Hermitian interaction coefficient matrix: \begin{eqnarray} \hat J_{ij} = \left\{ \begin{array}{ll} J \quad \text{($i$ and $j$ are nearest neighbors)}\\ 0 \quad \text{(otherwise)} \end{array} \right. \end{eqnarray} for later convenience. It is clear that the Hamiltonian above is invariant under the global $\mathbb{Z}_N$ transformation $S_i \rightarrow e^{i \frac{2\pi n}{N}}S_i$. In the following, we will show that this symmetry is spontaneously broken in the low-temperature region, and that the $N \rightarrow \infty$ limit generates a novel long-range correlation. The partition function for this system reads \begin{eqnarray} Z = \mathrm{Tr} \exp \left[\beta S^{\dagger} \hat{J} S\right], \end{eqnarray} where $\beta$ represents the inverse temperature of the system and we write the spin variables in the form of a vector $S \equiv (S_1, S_2, ... , S_{N_{lat}})$, with $N_{lat}$ the total number of spin variables. To derive the free energy with spatial dependence, we introduce a complex-valued auxiliary field $\phi = (\phi_1, ..., \phi_{N_{lat}})$, where $\phi_i = l_i e^{i \theta_i}$. We express the partition function as a function of this auxiliary field through the Hubbard-Stratonovich transformation. Namely, we insert the trivial Gaussian identity (with a suitably normalized measure): \begin{eqnarray} \int \mathcal{D}\phi \exp \left[ -\beta (\phi^{\dagger} - S^{\dagger} \hat{J} ) \hat{J}^{-1} (\phi - \hat{J} S)\right] =1 \end{eqnarray} into $Z$ to obtain \begin{widetext} \begin{eqnarray} Z &=& \mathrm{Tr} \int \mathcal{D}\phi \exp \left[ \beta S^{\dagger} \hat{J} S -\beta (\phi^{\dagger} - S^{\dagger} \hat{J} ) \hat{J}^{-1} (\phi - \hat{J} S) \right] \nonumber \\ &=& \int \mathcal{D}\phi \exp \left[ - \beta \phi^{\dagger} \hat{J}^{-1} \phi + \sum_i \ln \left\{ \sum_{n=0}^{N-1} \exp \left[ 2 \beta l_i \cos \left( \frac{2 \pi n}{N} - \theta_i\right) \right]\right\}\right].
\end{eqnarray} \end{widetext} After performing the Fourier transform $\phi_i = \frac{1}{\sqrt{N_{lat}}}\sum_{\boldsymbol{k}} \tilde{\phi}_{\boldsymbol{k}} e^{i \boldsymbol{k} \cdot \boldsymbol{x}_i}$ and taking the thermodynamic limit $N_{lat} \rightarrow \infty$, we can approximately reduce the sum over the discrete variables $\phi_i$ to an integration over continuous ones\footnote{$N_{lat}$ denotes the number of all the spin variables}. By defining $\tilde{\phi}_{\boldsymbol{k}} = \frac{1}{a^d \sqrt{N_{lat}}} \tilde{\phi}(\boldsymbol{k})$ and $\phi_i = \phi(\boldsymbol{x})$, the first and second terms in the exponential function become \begin{widetext} \begin{eqnarray} - \beta\sum_{i, j} \phi_i^* \hat{J}_{ij}^{-1} \phi_j &\longrightarrow& - \beta \int \frac{\mathrm{d}^d \boldsymbol{k}}{(2 \pi)^d} \tilde{\phi}^*(\boldsymbol{k}) \tilde{J}^{-1}(\boldsymbol{k}) \tilde{\phi}(\boldsymbol{k}) \nonumber \\ &\simeq& - \beta \int \frac{\mathrm{d}^d \boldsymbol{k}}{(2 \pi)^d} \tilde{\phi}^*(\boldsymbol{k}) \left\{ \tilde{J}^{-1}(0) + \left( \frac{- \tilde{J}''(0)}{2 \tilde{J}^2(0)}\right) \boldsymbol{k}^2 \right\} \tilde{\phi}(\boldsymbol{k}) \nonumber \\ &=& - \beta \int \mathrm{d}^d \boldsymbol{r} \left[ \tilde{J}^{-1}(0) l(\boldsymbol{r})^2 + \left( \frac{- \tilde{J}''(0)}{2 \tilde{J}^2(0)}\right) (\nabla \phi(\boldsymbol{r}) )^{\dagger} (\nabla \phi(\boldsymbol{r}) )\right]. \end{eqnarray} \end{widetext} In the second line, we expand $\tilde{J}^{-1}(\boldsymbol{k}) = \tilde{J}^{-1}(k)$ (by rotational symmetry of the interaction) in terms of $k$ and ignore the higher-order terms\footnote{This approximation, known as the gradient expansion in statistical mechanics, is justified when the spatial variation of $l(\boldsymbol{r})$ is slow compared to the microscopic scale}.
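The truncation above can be checked numerically; the following sketch (our own, for a one-dimensional nearest-neighbor coupling, where $\tilde{J}(k) = 2J\cos(ka)$, with test values $J=a=1$) compares $\tilde{J}^{-1}(k)$ with its gradient expansion:

```python
import numpy as np

# Gradient expansion check:
#   J~^{-1}(k) ~= J~^{-1}(0) + [ -J~''(0) / (2 J~(0)^2) ] * k^2  for small k.
# Nearest-neighbor coupling in d = 1: J~(k) = 2*J*cos(k*a).
J, a = 1.0, 1.0
Jt  = lambda k: 2.0 * J * np.cos(k * a)           # Fourier transform of J_ij
Jpp = lambda k: -2.0 * J * a**2 * np.cos(k * a)   # second derivative J~''(k)

k = 0.05                                          # small momentum (units 1/a)
exact  = 1.0 / Jt(k)
approx = 1.0 / Jt(0.0) + (-Jpp(0.0) / (2.0 * Jt(0.0) ** 2)) * k**2
rel_err = abs(exact - approx) / abs(exact)        # residual is O(k^4)
```

The coefficient $-\tilde{J}''(0)/(2\tilde{J}^2(0))$ evaluates to $a^2/(4J) = 0.25$ here, and the residual of the truncation scales like $k^4$, as expected.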
Additionally, the third term in the exponential function becomes \begin{widetext} \begin{eqnarray} && \sum_i\ln \left\{ N + \sum_{n=0}^{N-1} \frac{1}{2} (2 \beta l_i)^2 \cos^2 \left( \frac{2 \pi n}{N} - \theta_i\right) + \sum_{n=0}^{N-1} \frac{1}{4!} (2 \beta l_i)^4 \cos^4 \left( \frac{2 \pi n}{N} - \theta_i\right) + \mathcal{O}(l_i^6) \right\} \nonumber \\ &&\quad \equiv \sum_i \ln N + \sum_i \ln \left( 1 + \beta^2 P_i l_i^2 + \beta^4 Q_i l_i^4 + \mathcal{O}(l_i^6) \right) \nonumber \\ && \quad \sim \sum_i \left[\beta^2 P_i l_i^2 - \beta^4 \left( \frac{P_i^2}{2} - Q_i \right) l_i^4 + \mathcal{O}(l_i^6) \right]\nonumber \\ && \quad \longrightarrow -\beta \int \mathrm{d}^d \boldsymbol{r} \left[ - \frac{\beta}{a^d}P(\theta(\boldsymbol{r})) l(\boldsymbol{r})^2 +\frac{\beta^3}{a^d} \left(\frac{P(\theta(\boldsymbol{r}))^2}{2} - Q(\theta(\boldsymbol{r})) \right) l(\boldsymbol{r})^4 + \mathcal{O}(l(\boldsymbol{r})^6)\right]. \end{eqnarray} \end{widetext} In the third line, we expand the logarithmic function and drop an irrelevant constant. This results in \begin{eqnarray} Z = \int \mathcal{D}\phi \exp \left[ -\beta \int \mathrm{d}^d \boldsymbol{r} g(l, \theta)\right]. \\ \nonumber \end{eqnarray} $g(l, \theta)$ is a \textit{Ginzburg-Landau functional} of this system, which corresponds to the Lagrangian density in the imaginary-time formulation of quantum field theory.
It is defined as follows: \begin{widetext} \begin{eqnarray} g(l, \theta) &=&\left( \frac{- \tilde{J}''(0)}{2 \tilde{J}^2(0)}\right) (\nabla \phi(\boldsymbol{r}) )^{\dagger} (\nabla \phi(\boldsymbol{r}) ) + \left(\tilde{J}^{-1}(0) - \frac{\beta}{a^d}P(\theta(\boldsymbol{r})) \right) l(\boldsymbol{r})^2 +\frac{\beta^3}{a^d} \left(\frac{P(\theta(\boldsymbol{r}))^2}{2} -Q(\theta(\boldsymbol{r})) \right) l(\boldsymbol{r})^4 \nonumber \\ &\equiv& D (\nabla l(\boldsymbol{r}) )^2 +D l(\boldsymbol{r})^2 (\nabla \theta (\boldsymbol{r}) )^2 + A(\theta(\boldsymbol{r})) l(\boldsymbol{r})^2 + B (\theta(\boldsymbol{r})) l(\boldsymbol{r})^4, \end{eqnarray} where \begin{eqnarray} A(\theta(\boldsymbol{r})) = \tilde{J}^{-1}(0) - \frac{2\beta}{a^d} \left[ \frac{1}{N} \sum_{n=0}^{N-1} \cos^2 \left( \frac{2 \pi n}{N} - \theta(\boldsymbol{r}) \right) \right] \\ B(\theta(\boldsymbol{r})) = \frac{2\beta^3}{3 a^d} \left[ 3 \left(\frac{1}{N} \sum_{n=0}^{N-1} \cos^2 \left( \frac{2 \pi n}{N} - \theta(\boldsymbol{r}) \right) \right)^2 - \frac{1}{N} \sum_{n=0}^{N-1} \cos^4 \left( \frac{2 \pi n}{N} - \theta(\boldsymbol{r}) \right)\right]. \end{eqnarray} \end{widetext} For large $\beta$, one can assume that $A(\theta(\boldsymbol{r})) < 0$ for arbitrary $\theta(\boldsymbol{r})$, i.e. the $\mathbb{Z}_N$ symmetry of a vacuum is spontaneously broken. Since this auxiliary field can be interpreted as a spontaneous magnetization, the SSB yields a finite magnetic moment, which is a simple description of ferromagnetism. To obtain the correlation length of fluctuation along the $\theta$ direction around a vacuum $\phi(\boldsymbol{r}) = l_0(\boldsymbol{r})$, we can freeze the fluctuation in the $l$ direction and substitute $\phi(\boldsymbol{r}) = l_0(\boldsymbol{r})e^{i \theta(\boldsymbol{r})}$ into the Ginzburg-Landau functional $g(l, \theta)$. 
This gives us \begin{widetext} \begin{eqnarray} g(l, \theta) \simeq D l_0(\boldsymbol{r})^2 (\nabla \theta (\boldsymbol{r}) )^2 + \left( l_0(\boldsymbol{r})^2 \frac{\partial^2 A(\theta(\boldsymbol{r}))}{\partial \theta^2} \bigg|_{\theta=0} + l_0(\boldsymbol{r})^4 \frac{\partial^2 B(\theta(\boldsymbol{r}))}{\partial \theta^2} \bigg|_{\theta=0} \right) \theta(\boldsymbol{r})^2 && \nonumber \\ \qquad + \text{(the higher order terms of $l$ and $\theta$)}.&& \end{eqnarray} \end{widetext} The resulting correlation length is given by \begin{eqnarray} \xi^2 &=& \left( \frac{1}{D} \frac{\partial^2 A(\theta(\boldsymbol{r}))}{\partial \theta^2} \bigg|_{\theta=0} + \frac{l_0(\boldsymbol{r})^2}{D} \frac{\partial^2 B(\theta(\boldsymbol{r}))}{\partial \theta^2} \bigg|_{\theta=0} \right)^{-1}. ~~~~~~~ \end{eqnarray} This expression represents the typical distance over which the mode can propagate. Considering \begin{eqnarray} \frac{\partial^2}{\partial \theta^2}\frac{1}{N} \sum_{n=0}^{N-1} \cos^2 \left( \frac{2 \pi n}{N} - \theta(\boldsymbol{r}) \right) \Bigg|_{\theta=0} \nonumber \\ \quad \longrightarrow \frac{\partial^2}{\partial \theta^2} \int_0^1 dt \cos^2(2\pi t - \theta(\boldsymbol{r})) \Bigg|_{\theta=0} = 0 \end{eqnarray} and \begin{eqnarray} \frac{\partial^2}{\partial \theta^2}\frac{1}{N} \sum_{n=0}^{N-1} \cos^4 \left( \frac{2 \pi n}{N} - \theta(\boldsymbol{r}) \right) \Bigg|_{\theta=0} \nonumber \\ \quad \longrightarrow \frac{\partial^2}{\partial \theta^2} \int_0^1 dt \cos^4(2\pi t - \theta(\boldsymbol{r})) \Bigg|_{\theta=0} = 0, \end{eqnarray} the discussion above leads to the conclusion \begin{eqnarray} \lim_{N \rightarrow \infty}\xi =\infty . \end{eqnarray} This divergent correlation length clearly indicates the emergence of a Nambu-Goldstone mode. The massless mode corresponds to the one along the $\theta$ direction in the U(1)-symmetric Heisenberg model.
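Both the $(P, Q)$ expansion of the third term and the $\theta$-independence of the discrete cosine averages, which makes the second derivatives above vanish, can be confirmed numerically (our own check; the parameter values are arbitrary test choices):

```python
import numpy as np

# (i) Check the small-l expansion
#     sum_n exp[2*beta*l*cos(2*pi*n/N - theta)]
#       = N * (1 + beta^2 P l^2 + beta^4 Q l^4 + O(l^6)),
#     with P = (2/N) sum_n cos^2 and Q = (2/(3N)) sum_n cos^4.
N, beta, theta, l = 7, 0.9, 0.3, 1e-2   # arbitrary test values
n = np.arange(N)
c = np.cos(2 * np.pi * n / N - theta)
P = 2.0 / N * np.sum(c**2)
Q = 2.0 / (3.0 * N) * np.sum(c**4)

exact  = np.sum(np.exp(2 * beta * l * c))
series = N * (1 + beta**2 * P * l**2 + beta**4 * Q * l**4)
expansion_err = abs(exact - series) / exact       # residual is O(l^6)

# (ii) The averages (1/N) sum_n cos^p(2*pi*n/N - theta) are theta-independent
#      already at finite N (p = 2 for N >= 3, p = 4 for N >= 5): 1/2 and 3/8.
ts = np.linspace(0.0, 2 * np.pi, 50)
avg2 = [np.mean(np.cos(2 * np.pi * n / N - t) ** 2) for t in ts]
avg4 = [np.mean(np.cos(2 * np.pi * n / N - t) ** 4) for t in ts]
spread2, spread4 = np.ptp(avg2), np.ptp(avg4)     # both vanish for N = 7
```

Point (ii) shows that the vanishing of the second derivatives is in fact exact already at finite $N$ (for $N \geq 3$ and $N \geq 5$, respectively); the $N \to \infty$ limit in the text is a sufficient, conservative statement.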
\subsection{Polyakov loop effective action\\ for SU($N$) Yang-Mills theory} Next, let us move to the SU($N$) thermal Yang-Mills theory on the lattice \cite{Lange10, Fukushima11, Fromm12, Diakonov04}. In the quenched approximation, the partition function for Wilson's action is given by \begin{equation} Z = \int \mathcal{D} U \ \exp \left[\frac{1}{2g^2} \sum_{\square} \mathrm{Re} \ \mathrm{tr}\square_{\mu \nu} \right], \end{equation} where $\mathcal{D}U = \prod_x \mathrm{d}U(x)$ is the product of the Haar measure on SU($N$), $U_{\mu}(x)= \exp[-i g a A_{\mu}(x)] \in {\rm SU}(N)$ are the link variables, and $\square_{\mu \nu}(x) = U_{\mu}(x) U_{\nu}(x+\hat{\mu}) \ U^{\dagger}_{\mu}(x + \hat{\nu}) \ U^{\dagger}_{\nu}(x)$ are the plaquettes, which are the smallest Wilson loops. The sum $\sum_{\square}$ is taken over all possible plaquettes. Since the temperature is finite in the system, we impose the periodic boundary condition with the period of inverse temperature $\beta = a N_{\tau}$ along the imaginary-time $\tau$ direction: \begin{eqnarray} A_{\mu}(\tau, \boldsymbol{x}) = A_{\mu}(\tau+\beta, \boldsymbol{x}) \end{eqnarray} on the gauge field. $N_{\tau}$ is the number of links along the $\tau$ direction. An effective action involving the Polyakov loops on the lattice: \begin{eqnarray} L_{\boldsymbol{s}} = \prod_{i_{\tau}=0}^{N_{\tau}-1} U_{\tau} (i_{\tau} a, \boldsymbol{s}a) \in {\rm SU}(N) \end{eqnarray} is formulated as follows: \begin{eqnarray} Z &=& \int \mathcal{D}U(x) e^{-S[U]} \nonumber \\ &=& \int \mathcal{D}U(x) e^{-S[U]} \int \mathcal{D}\phi_{\boldsymbol{s}} \delta \left( \phi_{\boldsymbol{s}} - \frac{1}{N}\mathrm{tr}(L_{\boldsymbol{s}})\right)\nonumber \\ &\equiv& \int \mathcal{D}\phi_{\boldsymbol{s}} e^{-S_{\mathrm{eff}}[\phi]} \end{eqnarray} where $\phi_{\boldsymbol{s}} \equiv \frac{1}{N} \mathrm{tr} L_{\boldsymbol{s}}$ is a normalized, traced Polyakov loop and the index $\boldsymbol{s} = (s_x, s_y, s_z)$ represents a spatial coordinate on the lattice.
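As a small illustration of the center symmetry acting on the Polyakov loop (our own sketch; the link configuration below is random rather than importance-sampled), multiplying a single temporal link by $z \in \mathbb{Z}_N$ rotates the traced loop by the same phase:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_su(N):
    # Random SU(N) matrix: QR-decompose a complex Gaussian matrix,
    # fix the column phases, and divide out the determinant.
    z = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    q = q * (d / np.abs(d))                 # remove residual phase ambiguity
    return q / np.linalg.det(q) ** (1.0 / N)

N, Ntau = 3, 4
links = [random_su(N) for _ in range(Ntau)]  # temporal links U_tau
L = np.linalg.multi_dot(links)               # Polyakov loop at one site
phi = np.trace(L) / N                        # normalized traced loop

# Center transformation: multiply one temporal link by z = exp(2*pi*i/N);
# the traced Polyakov loop picks up exactly this phase, phi -> z*phi.
z = np.exp(2j * np.pi / N)
phi_z = np.trace(np.linalg.multi_dot([z * links[0]] + links[1:])) / N
```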
\begin{figure}[htbp] \includegraphics[width=86mm]{Polyakov_loop.png} \caption{A Polyakov loop and a plaquette on the lattice are shown in the figure. At finite temperature, the Euclidean spacetime is periodic in the imaginary time $\tau$ direction. After integrating over all spatial coordinates, the action depends only on the Polyakov loops under the strong coupling approximation. The thermal expectation value of the loop serves as an order parameter for the quark/hadron transition.} \label{PL} \end{figure} In this derivation, it is easy to see the structure of the symmetry through the effective action $S_{\mathrm{eff}}[\phi]$. However, it is not clear how to calculate path integrals with such constraint conditions in general. Therefore, we need to find another method, such as the Hubbard-Stratonovich transformation in the toy model above, to obtain a concrete form of the effective action. \subsubsection{Formulation by strong coupling expansion} The effective action can be obtained through a strong coupling expansion. After the integration of all the link variables in the spatial directions, following the general methodology of quantum field theory on the lattice~\cite{Rothe12, Fukushima17}, we obtain \begin{eqnarray} Z \simeq \int \mathcal{D} L_{\boldsymbol{s}} \exp \left[\left(\frac{1}{g^2 N}\right) ^{N_{\tau}} \sum_{<\boldsymbol{s}, \boldsymbol{t}>} \mathrm{tr}( L_{\boldsymbol{s}} ^{\dagger}) \ \mathrm{tr}( L_{\boldsymbol{t}} )\right].~~~~~~ \end{eqnarray} Although it is not trivial that the effective action depends exclusively on the Polyakov loops (i.e. $\int \mathcal{D} U_4(x) = \int \mathcal{D} L_{\boldsymbol{s}}$), this can be justified at the lowest order of the strong coupling expansion. Let us consider the $N=3$ case first.
Using the string tension at zero temperature $\sigma \equiv a^{-2} \ln (g^2 N)$, we can write down a Polyakov loop partition function: \begin{eqnarray} Z &\simeq& \int \mathcal{D}L_{\boldsymbol{s}} \exp \left[ e^{- \beta \sigma a} \sum_{<\boldsymbol{s}, \boldsymbol{t}>} \mathrm{tr}( L_{\boldsymbol{s}} ^{\dagger}) \ \mathrm{tr}( L_{\boldsymbol{t}} ) \right] \nonumber \\ &=& \int \mathcal{D}\phi_{\boldsymbol{s}} \exp \left[J \! \sum_{<\boldsymbol{s}, \boldsymbol{t}>} \phi_{\boldsymbol{s}}^* \phi_{\boldsymbol{t}} \! + \! \sum_{\boldsymbol{s}} \ln H^{N=3}_{\boldsymbol{s}}\right],~~~~~~ \label{phi_ac} \end{eqnarray} where $H^{N=3}_{\boldsymbol{s}}$ is the Jacobian required when the invariant integration on SU(3) is translated into one over the traced SU(3) element $\phi_{\boldsymbol{s}}$: \begin{eqnarray} \mathcal{D}L_{\boldsymbol{s}} = \mathcal{D} \phi_{\boldsymbol{s}} \prod_{\boldsymbol{s}} H^{N=3}_{\boldsymbol{s}}. \end{eqnarray} $\phi_{\boldsymbol{s}}$ takes a complex value and obviously satisfies $|\phi_{\boldsymbol{s}}|\leq 1$. In this model, the adjacent variables interact with each other in exactly the same way as in the toy model of Subsec.~A. One can simplify the shape of this Jacobian by taking the Polyakov gauge, in which all the Polyakov loops are diagonal: $L_{\boldsymbol{s}} = \mathrm{diag}(e^{i\theta_1(\boldsymbol{s})}, e^{i\theta_2(\boldsymbol{s})}, e^{i\theta_3(\boldsymbol{s})})$. In specific cases such as $N=3$, the form of $H$ has been calculated exactly as an explicit function of $\phi$: \begin{eqnarray} H^{N=3}(\phi) &=& 1 - 6|\phi|^2 -3|\phi|^4 + 8 \mathrm{Re}\ \phi^3, \label{N=3} \end{eqnarray} as shown in Fig.~\ref{Haar_N=2,3} \cite{Fukushima17, Uhlman07}.
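Equation (\ref{N=3}) can be cross-checked numerically against the squared Vandermonde determinant of the Polyakov-gauge angles; with the normalization $H^{N=3}(0)=1$, the proportionality constant comes out as $27$ (this is our own consistency check, not a statement taken from the references):

```python
import numpy as np

rng = np.random.default_rng(2)

def H3(phi):
    # H^{N=3}(phi) = 1 - 6|phi|^2 - 3|phi|^4 + 8 Re(phi^3)
    return 1 - 6 * abs(phi)**2 - 3 * abs(phi)**4 + 8 * (phi**3).real

# In the Polyakov gauge the SU(3) angles satisfy theta1+theta2+theta3 = 0.
# Compare 27 * H3(phi) with prod_{i<j} |e^{i theta_i} - e^{i theta_j}|^2.
max_dev = 0.0
for _ in range(200):
    th = rng.uniform(-np.pi, np.pi, size=2)
    t = np.array([th[0], th[1], -th[0] - th[1]])
    e = np.exp(1j * t)
    phi = np.mean(e)                                   # traced Polyakov loop
    vdm = (abs(e[0]-e[1]) * abs(e[0]-e[2]) * abs(e[1]-e[2]))**2
    max_dev = max(max_dev, abs(27 * H3(phi) - vdm))
```

The check also confirms the zeros at the three center elements, $H^{N=3}(e^{2\pi i n/3}) = 0$, where all three angles coincide and the Vandermonde factor vanishes.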
Thus, we obtain an effective action: \begin{eqnarray} -S^{N=3}_{\mathrm{eff}}[\phi] &=& J\sum_{<\boldsymbol{s}, \boldsymbol{t}>} \phi_{\boldsymbol{s}}^* \phi_{\boldsymbol{t}} \\ \nonumber &&+\sum_{\boldsymbol{s}} \ln(1 - 6|\phi_{\boldsymbol{s}}|^2 -3|\phi_{\boldsymbol{s}}|^4 + 8 \mathrm{Re}\ \phi_{\boldsymbol{s}}^3 ), \end{eqnarray} which respects $\mathbb{Z}_3$ symmetry in terms of $\phi$ as expected. Additionally, the $N=2$ case can be dealt with in parallel using the Haar measure involving $\phi \in \mathbb{R}$: \begin{eqnarray} H^{N=2}(\phi) &=& 1 - \phi^2 \label{N=2} \end{eqnarray} as shown in Fig.~\ref{Haar_N=2,3}. \begin{figure}[htbp] \centering \includegraphics[width=86mm]{Haar2_graph.png}\\ \centering \includegraphics[width=86mm]{Haar3_graph.png} \centering \caption{\label{Haar_N=2,3} The figures show the SU($N$) Haar measure for $N=2$ (upper panel) and $N=3$ (lower panel) as a function of the traced Polyakov loop $\phi$. In the case of $N=2$, $H(\phi)$ is a real function, which is a unique property of this case. In the case of $N=3$, $H(\phi)$ is a function of a complex variable, with support within the region where $|\phi|\leq 1$.} \end{figure} In the case of SU($N$) with $N \geq 4$, the Haar measure $H$ is not a single-valued function of only $\phi$ and $\phi^*$, in contrast to the case for $N=2$ and $N=3$. This is because $\phi$ and $\phi^*$ do not exhaust the degrees of freedom of $L$ when $N>3$. Specifically, $L$ includes $N-1$ independent variables ($\theta_1, ..., \theta_N$ with one constraint condition), which exceeds the two variables ($\phi$ and $\phi^*$) when $N>3$. Therefore, in general, $H$ is a function with $N-1$ arguments: for even $N = 2m$, the arguments are $\phi^{(k)} = \frac{1}{N} \sum_i e^ {i k\theta_i} = \frac{1}{N} \mathrm{tr}(L^k)$ ($k = 1, 2, ...,m-1$), their complex conjugates, and the self-dual variable $\phi^{(m)}$; for odd $N=2m+1$, the arguments are $\phi^{(k)}$ ($k = 1, 2, ...,m$) and their complex conjugates.
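A concrete example of this insufficiency for $N=4$ (our own illustration): two Polyakov-gauge configurations can share the same $\phi$ while differing in $\phi^{(2)}$, so $H$ cannot be a function of $\phi$ and $\phi^*$ alone:

```python
import numpy as np

# Two SU(4) Polyakov-gauge configurations (angles summing to zero) with the
# SAME phi = (1/N) tr L but DIFFERENT phi^(2) = (1/N) tr L^2.
def phis(theta):
    e = np.exp(1j * np.asarray(theta))
    return np.mean(e), np.mean(e**2)

conf_a = [0.0, 0.0, np.pi / 2, -np.pi / 2]               # phi = 1/2, phi2 = 0
conf_b = [np.pi / 3, -np.pi / 3, np.pi / 3, -np.pi / 3]  # phi = 1/2, phi2 = -1/2
phi_a, phi2_a = phis(conf_a)
phi_b, phi2_b = phis(conf_b)
```

Since the two eigenvalue sets differ, the Vandermonde factor (and hence $H$) differs between the two configurations even though $\phi$ is identical.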
Under the transformation $L \longrightarrow z_{N,n} L$ (where $z_{N,n} = e^{i \frac{2 \pi n}{N}} \in \mathbb{Z}_N$), each argument transforms as follows: \begin{eqnarray} \phi^{(k)} \longrightarrow z_{N,n}^k\phi^{(k)}, \ \phi^{(k)*} \longrightarrow (z_{N,n}^k)^* \phi^{(k)*}. \end{eqnarray} The general form of $H$ reads $ H = H(\phi, \phi^*, \phi^{(2)}, \phi^{(2)*}, ...)$ and respects $\mathbb{Z}_N$ symmetry; in particular, $H^{N=2} = H^{N=2} (\phi = \phi^*)$ and $H^{N=3} = H^{N=3} (\phi, \phi^*)$, as shown in Eqs.~(\ref{N=3}) and (\ref{N=2}). The exact form of the Haar measure of SU($N$)~\cite{Conrey10, Uhlman07} enables us to write the measure as follows: \begin{eqnarray} \mathcal{D}L_{\boldsymbol{s}} &=& \mathcal{D} \phi_{\boldsymbol{s}} \mathcal{D} \phi^{(2)}_{\boldsymbol{s}}... \prod_{\boldsymbol{s}} H_{\boldsymbol{s}} \nonumber \\ &=& \mathcal{D} \phi_{\boldsymbol{s}}\mathcal{D} \phi^{(2)}_{\boldsymbol{s}}... \prod_{{\boldsymbol{s}}} \prod_{i<j} |e^{i\theta_i} - e^{i \theta_j}|^2. \end{eqnarray} In fact, for larger values of $N$, the contribution from $\phi^{(k)}$ $(k \geq 2)$ is much smaller than that from $\phi$ around a specific vacuum, and these small terms can be ignored as an approximation. Similar to the toy model in Subsec.~A and the SU(3) analysis above, it is expected that the configuration will localize in one of the $N$ different vacua. Let us consider one of the vacua, that with $\mathrm{Arg}\,\phi=0$. Since $\phi = 1$ when $\theta_i=0$ for all $i$, each $\theta_i$ is distributed around $0$ in the vicinity of the $\phi = 1$, i.e., $\mathrm{Arg}\,\phi = 0$, vacuum. Hence we can assume that each $\theta_i$ localizes according to a probability distribution: \begin{eqnarray} P(\theta_i) = \exp \left[ - \frac{\theta_i^2 }{2 \alpha} \right] \end{eqnarray} with a parameter $\alpha$.
Under this assumption, the expectation values of $\phi$ and $\phi^{(k)}$ are \begin{eqnarray} \langle \phi \rangle \simeq \frac{1}{N}\sum_{i=1}^N \cfrac{\int_{- \infty}^{\infty}d\theta P(\theta) e^{i\theta}}{\int_{- \infty}^{\infty} d\theta P(\theta)} =e^{- \frac{\alpha}{2}} \end{eqnarray} and \begin{eqnarray} \langle \phi^{(k)} \rangle \simeq \frac{1}{N}\sum_{i=1}^N \cfrac{\int_{- \infty}^{\infty}d\theta P(\theta) e^{i k \theta}}{\int_{- \infty}^{\infty} d\theta P(\theta)} =e^{- \frac{k^2\alpha}{2}}. \end{eqnarray} Here we have extended the integration interval to infinity in view of the localization of $\theta$. Therefore, $\langle \phi \rangle \gg \langle \phi^{(k)} \rangle$ follows, and the dominant contribution comes from the $\phi$ term and its complex conjugate in the Haar measure $H$, as long as our discussion is limited to the vicinity of the vacuum with $\mathrm{Arg}\,\phi = 0$. Due to the $\mathbb{Z}_N$ symmetry, we can conclude that $H$ depends predominantly on $\phi^* \phi$, $(\phi^* \phi)^2$ and $\mathrm{Re} \ \phi^N$ at the lowest order. Furthermore, one more constraint is placed on the coefficients of each term in $H$. That is, $H(\phi = 1)$ must vanish because the measure $\prod_{i<j} |e^{i\theta_i} - e^{i \theta_j}|^2$ vanishes when $\theta_i = 0$ for all $i$, i.e., when $\phi = \frac{1}{N} \sum_i e^{i \theta_i} = 1$. Taking the $\mathbb{Z}_N$ symmetry into consideration, we obtain \begin{eqnarray} H(\phi = e^{i\frac{2 \pi n}{N}}) =0 \quad (n = 0, ... , N-1). \end{eqnarray} In the end, the Haar measure with the dominant terms takes the following simple form: \begin{eqnarray} H(\phi) = 1- \lambda_2 |\phi|^2 - \lambda_4 |\phi|^4 + \lambda_N \mathrm{Re}\ \phi^N. \nonumber \\ (1 -\lambda_2 - \lambda_4 + \lambda_N =0) \end{eqnarray} This approximate form satisfies $\mathbb{Z}_N$ symmetry and the condition $H(\phi=e^{i\frac{2 \pi n}{N}})=0$.
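The Gaussian moments above, $\langle e^{ik\theta} \rangle = e^{-k^2\alpha/2}$, and the resulting hierarchy $\langle \phi \rangle \gg \langle \phi^{(k)} \rangle$ can be checked by direct numerical integration (our own check; the width $\alpha$ below is an arbitrary test value):

```python
import numpy as np

# Moments of P(theta) ∝ exp(-theta^2 / (2*alpha)):
#   <e^{i k theta}> = exp(-k^2 * alpha / 2).
alpha = 0.5                                     # arbitrary localization width
theta = np.linspace(-30.0, 30.0, 400001)        # effectively infinite interval
P = np.exp(-theta**2 / (2 * alpha))

def moment(k):
    # <e^{i k theta}> evaluated on a fine uniform grid (imaginary part
    # vanishes by symmetry, so only the real part is kept)
    return (np.sum(P * np.exp(1j * k * theta)) / np.sum(P)).real

m1, m2, m3 = moment(1), moment(2), moment(3)
```

The strong $k^2$ suppression in the exponent is what justifies dropping the $\phi^{(k)}$ ($k \geq 2$) arguments of $H$ near the chosen vacuum.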
$\lambda_2$, $\lambda_4$ and $\lambda_N$ are constants depending only on $N$, and $\lambda_N$ is expected to vanish in the large-$N$ limit because this term violates the U(1) symmetry, which the large-$N$ theory is expected to respect. Under this approximation, the partition function reads \begin{widetext} \begin{eqnarray} Z &=& \int \mathcal{D}\phi_{\boldsymbol{s}} \exp\left[J \sum_{<{\boldsymbol{s}}, {\boldsymbol{t}}>} \phi_{\boldsymbol{s}}^* \phi_{\boldsymbol{t}} +\sum_{\boldsymbol{s}} \ln \left(1- \lambda_2 |\phi_{\boldsymbol{s}}|^2 - \lambda_4 |\phi_{\boldsymbol{s}}|^4 + \lambda_N \mathrm{Re} \ \phi_{\boldsymbol{s}}^N \right)\right] \nonumber \\ &=& \int \mathcal{D}\phi_{\boldsymbol{s}} \exp \left[a^3 \sum_{\boldsymbol{s}} \left\{J \phi^*_{\boldsymbol{s}} \nabla^2 \phi_{\boldsymbol{s}} \ \frac{1}{a} + \frac{6J}{a^3} |\phi_{\boldsymbol{s}}|^2 + \frac{1}{a^3} \ln\left(1- \lambda_2 |\phi_{\boldsymbol{s}}|^2 - \lambda_4 |\phi_{\boldsymbol{s}}|^4 + \lambda_N \mathrm{Re}\ \phi_{\boldsymbol{s}}^N \right) \right\} \right] \nonumber \\ &\equiv & \int \mathcal{D}\phi(x) \exp \left[ - \int \mathrm{d^3} x \ \mathcal{L}_{\mathrm{eff}} [\phi(x)]\right], \end{eqnarray} \end{widetext} where \begin{eqnarray} \nabla^2 \phi_{\boldsymbol{s}} \equiv \sum_{j=1}^{3} \frac{\phi_{{\boldsymbol{s}}+\hat{j}} + \phi_{{\boldsymbol{s}}-\hat{j}} - 2 \phi_{\boldsymbol{s}}}{a^2} \end{eqnarray} and $\phi(x) \equiv \frac{\phi_{\boldsymbol{s}}}{\sqrt{a}}$. This rewriting of the action allows us to deal with the system as one described by the following 3-dimensional Euclidean effective Lagrangian: \begin{eqnarray} \mathcal{L}_{\mathrm{eff}} [\phi (x)] &\equiv& J|\nabla \phi|^2 +V_{\mathrm{eff}}(\phi) \nonumber \\ &=& J|\nabla\phi|^2 - \frac{6J}{a^2} |\phi|^2 \nonumber \\ && -\frac{1}{a^3} \ln \Big(1 - a \lambda_2 |\phi|^2 - a^2 \lambda_4 |\phi|^4 \nonumber \\ && + a^{N/2} \lambda_N \mathrm{Re} \ \phi^N \Big).
\label{Action_1} \end{eqnarray} In the large-$N$ limit, it is not appropriate to expand the potential and ignore higher-order terms as is commonly done in SU(3) analyses. This is because the potential has a non-analytic property at $|\phi(x)| = \frac{1}{\sqrt{a}}$, which becomes significant in the large-$N$ limit. If the higher-order terms were ignored, a vacuum could protrude from the region $|\phi(x)|< \frac{1}{\sqrt{a}}$, whereas in the full potential the logarithmic divergence prevents this from occurring. It is important to take this non-analytic behavior into account when considering the large-$N$ limit of SU($N$) Yang-Mills theory.\\ \subsubsection{Emergence of a quasi-Nambu-Goldstone mode} In this subsection, we show that $\mathcal{L}_{\mathrm{eff}}$ yields a massless mode in the large-$N$ limit, as in the toy model in Subsec.~A. Let us consider the fluctuation around a vacuum: \begin{eqnarray} \phi(x) = \frac{v}{\sqrt{2}} \left( \equiv \frac{\bar{v}}{\sqrt{2a}} \right) \end{eqnarray} given as one of the solutions of $\frac{\partial V_{\mathrm{eff}}(\phi)}{\partial \phi} = 0$. Substituting the non-linear representation of the fluctuation: \begin{eqnarray} \phi(x) = \frac{1}{\sqrt{2}}(v + \rho(x)) e^{i\frac{\chi(x)}{v}} \end{eqnarray} into $\mathcal{L}_{\mathrm{eff}}$, we get a reduced Lagrangian: \begin{eqnarray} \mathcal{L}_{\mathrm{eff}} [\phi (x)] = J \left[\frac{1}{2}(\nabla \chi(x))^2 + \frac{m_{\chi}^2}{2}\chi(x)^2 \right]\nonumber \\ + (\text{the other terms}) \end{eqnarray} after a somewhat lengthy calculation presented in Appendix \ref{massderive}. Here the mass of the eigenmode along the angle direction is \begin{widetext} \begin{eqnarray} m^2_{\chi} = \frac{ e^{\beta \sigma a}}{2 a^2} \frac{\lambda_N \left( v \sqrt{\cfrac{a}{2}} \right)^{N-2}}{ 1 - \lambda_2 \left( v \sqrt{\cfrac{a}{2}} \right)^2 - \lambda_4 \left( v \sqrt{\cfrac{a}{2}} \right)^4 + \lambda_N \left( v \sqrt{\cfrac{a}{2}} \right)^N} . \end{eqnarray} \end{widetext} The upper bound condition for $|\phi(x)|$, i.e.
$\frac{v}{\sqrt{2}} < \frac{1}{\sqrt{a}} $, suggests \begin{eqnarray} \lim_{N \rightarrow \infty} m_{\chi} = 0. \end{eqnarray} We thus conclude that a quasi-Nambu-Goldstone mode emerges in this limit. It is important to note that the Nambu-Goldstone mode in this model does not propagate in spacetime, but rather represents a static and spatial long-range correlation. This is because the imaginary-time dependence has already been integrated out and real-time evolution is not included in the model. Additionally, while the mode remains massive for finite values of $N$, it becomes massless only in the limit. This is the reason for the use of the term ``quasi.'' However, for simplicity, we will refer to it as the Nambu-Goldstone mode in the following discussion. The physical significance of this mode will be further addressed in the next subsection. Since the mode originates from the fluctuation along the angle direction, it indeed corresponds to a Nambu-Goldstone mode of a U(1)-symmetric quantum field theory. In QCD, with three colors, the mass of this mode is approximately $m_{\chi} \simeq 4.8 \ \mathrm{GeV}$. This mass is calculated using a set of parameters: $a=0.4 \ \mathrm{fm}$, $\beta^{-1}=400 \ \mathrm{MeV}$, $\sigma = 1 \ \mathrm{GeV/fm}$, and $v\sqrt{\frac{a}{2}}=0.7$, as well as the $\lambda$ values in Eq. (\ref{N=3}). While the mass of this mode is large compared to the typical mass scale of QCD, it becomes massless in the ideal large-$N$ limit. \subsubsection{Physical meaning of the quasi-Nambu-Goldstone mode} In this subsection, we consider the physical meaning of the quasi-Nambu-Goldstone mode $\chi$, which can be understood by considering its time independence and its spatial correlation.
At the mean-field level, the mode is described by the following integral in the Euclidean formalism: \begin{eqnarray} Z_{\chi}=\int \mathcal{D}\chi \exp \left[-\int \mathrm{d}^3x \left(\frac{1}{2} (\nabla \chi)^2+\frac{1}{2} m_\chi^2 \chi^2 \right) \right] \end{eqnarray} This leads to a spatial correlation of the form: \begin{eqnarray} \langle \chi(\vec x) \chi(\vec y) \rangle = \frac{1}{4\pi r} e^{-m_\chi r}\\ \nonumber \end{eqnarray} where $r = |\vec x - \vec y|$. This mode therefore gives a Yukawa-type spatial correlation with a range of $1/m_\chi$. In the large-$N$ limit, the mass $m_\chi$ becomes zero and the mode gives a Coulomb-type spatial correlation with an infinite range, affecting the entire system. \section{Quantum mechanical description} In this section, we evaluate the vacuum-to-vacuum transition qualitatively in a system with finite volume. In order to estimate the transition, we need to consider the penetration through all possible paths in the $\phi$-plane. However, for large $N$, we can assume that the dominant path is along the circumference $|\phi| = \frac{v}{\sqrt{2}}$ and reduce the problem to one dimension. To do this, we need to separate the effective Lagrangian $\mathcal{L}_{\mathrm{eff}}$ in Eq. (\ref{Action_1}) involving $\phi(x) = l(x) e^{i\theta(x)}$ into two different functionals involving $l$ and $\theta$ respectively: \begin{widetext} \begin{eqnarray} \mathcal{L}_{\mathrm{eff}} [\phi(x)] &\equiv& J \left\{ (\nabla l)^2 + l^2 (\nabla \theta)^2 \right \} - \frac{6J}{a^2} l^2 -\frac{1}{a^3} \ln \left(1 - a \lambda_2 l^2 - a^2 \lambda_4 l^4 + a^{N/2} \lambda_N l^N \cos(N \theta) \right) \nonumber \\ &\simeq& \left[ J (\nabla l)^2 - \frac{6J}{a^2} l^2 + \frac{1}{a^3} (a\lambda_2 l^2 + a^2 \lambda_4 l^4) \right] + \left[ J l^2 (\nabla \theta)^2 - a^{N/2-3} \lambda_N l^N \cos(N \theta) \right] \nonumber \\ &\equiv& \mathcal{L}_l[l(x)] + \mathcal{L}_{\theta}[\theta(x)]. 
\end{eqnarray} \end{widetext} The approximation in the second line can be justified considering that $l\sqrt{a} < 1$ and that $N$ is large enough. Using the fact that $N$ different vacua are distributed along the circumference of radius $\frac{v}{\sqrt{2}}$, we get the action involving only $\theta$ by freezing $l$: \begin{eqnarray} S_{\theta} = \int_V \mathrm{d}^3 \boldsymbol{x} \left[\frac{J v^2}{2} (\nabla \theta)^2 + V_0(\theta) \right], \end{eqnarray} where \begin{eqnarray} V_0(\theta) \equiv \frac{\lambda_N}{a^3}\left( v \sqrt{\cfrac{a}{2}} \right)^N (1-\cos(N \theta)). \end{eqnarray} Here the potential is shifted so that its lower bound is zero. Before examining the properties of this action in detail, we need to ensure that quantum corrections do not violate important symmetries as in the Coleman-Weinberg potential \cite{Coleman73}, but rather preserve the qualitative behavior. To do this, we will compute the corrected potential including the effect of quantum fluctuations from the partition function with a source term: \begin{eqnarray} Z_{\theta}[\lambda] = \int \mathcal{D}\theta \exp \left[ -S_{\theta} (\theta) + \theta \cdot \lambda \right], \end{eqnarray} where $\theta \cdot \lambda$ is an abbreviation for $\int_V \mathrm{d}^3 \boldsymbol{x} \ \theta(\boldsymbol{x}) \ \lambda(\boldsymbol{x})$. According to a general methodology in the theory of quantum scalar fields, the generating functional of connected Green's functions: \begin{eqnarray} W_{\theta}[\lambda] &\equiv& -\ln Z_{\theta} [\lambda] \nonumber \\ &=& S_{\theta}(\theta_{\lambda}) - \theta_{\lambda} \cdot \lambda + \frac{1}{2} \ln \det S^{(2)}_{\theta}(\theta_{\lambda}) \end{eqnarray} can be defined by using the saddle point configuration $\theta_{\lambda}$, which satisfies $\frac{\partial S_{\theta}}{\partial \theta} - \lambda = 0$. $S^{(2)}_{\theta}(\theta)$ represents the second functional derivative of $S_{\theta}$.
By using the Legendre transformation with respect to the variable $\lambda$, we are able to obtain an effective action whose argument is the vacuum expectation value: \begin{eqnarray} \Gamma _{\theta} (\langle \theta \rangle) &\equiv& W_{\theta} [\lambda] + \langle \theta \rangle \cdot \lambda \nonumber \\ &=& S(\langle \theta \rangle) + \frac{1}{2} \ln \det S^{(2)}_{\theta}(\langle \theta \rangle) + \mathcal{O}(\hbar^2) \nonumber \\ &=& S(\langle \theta \rangle) + \frac{1}{2} \mathrm{Tr} \ln \Bigg( -J v^2 \nabla^2 \nonumber \\ && + \frac{\lambda_N N^2}{a^3}\left( v \sqrt{\cfrac{a}{2}} \right)^N \cos( N \langle \theta \rangle) \Bigg) + \mathcal{O}(\hbar^2) \nonumber \\ & \sim & S(\langle \theta \rangle) + \frac{1}{2} V \int \frac{\mathrm{d}^3 \boldsymbol{k}}{(2 \pi)^3} \ln(k^2 + m^2). \end{eqnarray} In the last line, we extract a constant and define \begin{eqnarray} m^2 \equiv \cfrac{\lambda_N e^{\beta \sigma a}}{a^3 v^2}\left( v \sqrt{\cfrac{a}{2}} \right)^N \cos( N \langle \theta \rangle). \end{eqnarray} Fortunately, the correction terms do not exhibit any divergences due to the three-dimensional nature of the space. Upon performing the integration, we obtain the effective potential to within a one-loop approximation: \begin{eqnarray} V _{\mathrm{eff}} (\langle \theta \rangle) &=& V_0(\langle \theta \rangle) - \frac{1}{2} \ \frac{\Gamma(-3/2)}{(4 \pi)^{3/2}} (m^2)^{3/2} + \mathcal{O}(\hbar^2) \nonumber \\ &\simeq& V_0(\langle \theta \rangle) - \frac{1}{12\pi} m^3. \end{eqnarray} \begin{figure}[htbp] \includegraphics[width=86mm]{effective_potential.png} \caption{The tree level potential and its one-loop quantum correction are shown. 
It can be seen that the location of the minimum points and the symmetry are preserved.} \label{potential} \end{figure} The results shown in Fig.\ref{potential} clarify two points: that quantum corrections do not violate the vital symmetry of the action, and that the contribution from quantum corrections vanishes faster than the tree-level term in the large-$N$ limit. As long as our discussion is limited to systems with large enough values of $N$, we can be confident that quantum corrections do not play a significant role. Therefore, we can safely ignore quantum corrections and consider this problem at the tree level going forward. In the following, we will assume that the field $\theta(\boldsymbol{x})$ has an imaginary-time dependence in the theory under consideration and treat $\theta$ as a real scalar field on a 1+3 dimensional Euclidean spacetime. We introduce a second time-derivative with a parameter $Z$ and redefine $S_{\theta}$ as follows: \begin{eqnarray} S_{\theta} &\equiv& \int \mathrm{d}\tau \int_V \mathrm{d}^3 \boldsymbol{x} \Bigg[\frac{J v^2}{2 a}\left\{ Z(\partial_{\tau} \theta)^2 + (\nabla \theta)^2 \right \} \nonumber \\ &&+ \frac{\lambda_N}{a^4}\left( v \sqrt{\cfrac{a}{2}} \right)^N (1-\cos(N \theta)) + \mathcal{O}(\hbar^2)\Bigg]. ~~~~ \end{eqnarray} The imaginary time formalism allows $Z$ to be a non-trivial constant, as space and time are no longer related by Lorentz symmetry in this framework. However, for the purpose of qualitatively evaluating the nature of the action, we assume $Z=1$. In order to analyze the homogeneous configuration in a finite domain, we assume that $\theta(\tau, \boldsymbol{x})$ is homogeneous with respect to the spatial coordinate $\boldsymbol{x}$: \begin{eqnarray} S_{\theta} &=& \int \mathrm{d}\tau \Bigg[ \frac{V J v^2}{2 a} \left( \partial_{\tau} \theta \right)^2 \nonumber \\ &&+ \frac{\lambda_N V}{a^4}\left( v \sqrt{\cfrac{a}{2}} \right)^N (1 - \cos(N\theta))\Bigg] .
\label{onedirection} \end{eqnarray} This action is that of a quantum-mechanical particle, and we can interpret the vacuum-to-vacuum transition as the dynamics of a particle governed by this action. \section{ Vacuum instability} \begin{figure*}[htbp] \begin{minipage}[b]{0.48\linewidth} \centering \includegraphics[width=86mm]{rate_NV.png} \end{minipage} % \begin{minipage}[b]{0.48\linewidth} \centering \includegraphics[width=86mm]{rate_N3.png} \end{minipage} \centering \caption{The figures show the total transition rate of the vacuum as a function of $V^{1/3}$ and $N$ (left panel) and for $N=3$ (right panel). These rates were calculated using the following set of parameters: lattice spacing $a = 0.4 \ \mathrm{fm}$, temperature $T = 400 \ \mathrm{MeV}$, string tension at zero temperature $\sigma = 1.0 \ \mathrm{GeV/fm}$, $\lambda_N = 1$ and the vacuum expectation value $v\sqrt{\frac{a}{2}} = 0.7$. The saturation of the rate in the large-$N$ region indicates that the transition is primarily driven by thermal processes, rather than quantum tunnelling.} \label{grapf_V,N} \end{figure*} \begin{figure*}[htbp] \begin{minipage}[b]{0.48\linewidth}% \centering \includegraphics[width=86mm]{lifetime_NV.png} \label{lifetime} \end{minipage}% \begin{minipage}[b]{0.48\linewidth}% \centering \includegraphics[width=86mm]{lifetime_N3.png} \end{minipage} \centering \caption{The figures show the lifetime of the vacuum as a function of $V^{1/3}$ and $N$ (left panel) and for $N=3$ (right panel). These lifetimes were calculated using the same set of parameters as in Fig.\ref{grapf_V,N}. There is a specific line on which the lifetime sharply increases, dividing the $(V^{1/3}, N)$ plane into two distinct regions.
In the region of large $N$ and small $V$, the vacuum is unstable and can easily transition to another vacuum, while in the region of small $N$ and large $V$, the vacuum is stable.} \label{grapf2_V,N} \end{figure*} Finally, let us move to the estimation of the lifetime of a specific vacuum based on Eq. (\ref{onedirection}): \begin{eqnarray} S_{\theta} &\equiv& \int \mathrm{d}\tau \Bigg[\frac{M(V, N)}{2} \left( \partial_4 \theta\right)^2 \nonumber \\ && + \frac{V_0(V, N)}{2} (1 - \cos(N\theta))\Bigg]. \end{eqnarray} Consider a particle in a system whose energy is distributed according to the canonical ensemble. The transition between two adjacent wells can occur in two ways: \begin{enumerate} \item A particle with energy $E \geq V_0(V, N)$ can surmount the potential barrier, which is a purely classical phenomenon. \item A particle with energy $E < V_0(V, N)$ can tunnel through the potential barrier, which is a purely quantum, non-perturbative phenomenon. \end{enumerate} We define the transition rate from one well to an adjacent one per unit of imaginary time as the thermal transition rate $\Gamma_{\mathrm{th}}(V, N)$ for case 1 and the tunneling transition rate $\Gamma_{\mathrm{tun}} (V, N; E)$ for case 2. The total transition rate per unit time, $\Gamma_{\mathrm{tot}}(V, N)$, is given by the sum of these two rates: \begin{eqnarray} \Gamma_{\mathrm{tot}}(V, N) =\Gamma_{\mathrm{th}} (V, N) + \langle \Gamma_{\mathrm{tun}}(V, N; E) \rangle_{\beta}, \end{eqnarray} where $\langle \bullet \rangle_{\beta}$ denotes a thermal expectation value. \subsection{On the ($V^{1/3}, N$)-plane} In this subsection, we analytically calculate the transition rates $\Gamma_{\mathrm{tot}}(V,N)$ and estimate the lifetime of a vacuum $\tau_v(V,N) = 1/\Gamma_{\mathrm{tot}}(V,N)$.
$\Gamma_{\mathrm{th}} (V, N)$ can be evaluated easily: \begin{eqnarray} \Gamma_{\mathrm{th}}(V, N) &=& \frac{1}{\beta} \ \frac{\displaystyle \int_{V_0}^{\infty} d E e^{-\beta E}}{\displaystyle \int_{0}^{\infty} d E e^{-\beta E}} \nonumber \\ &=& \frac{1}{\beta} \ e^{-\beta V_0(V, N)}. \end{eqnarray} Here $1/\beta$ is a typical frequency of thermal fluctuation, which is usually called the \textit{attempt frequency} in solid state physics. $\Gamma_{\mathrm{tun}}(V, N; E)$ can be evaluated using the penetration rate per collision $P(V, N; E)$, or the \textit{Gamow factor}: \begin{eqnarray} \Gamma_{\mathrm{tun}}(V, N; E) = 2E \cdot P(V, N; E). \end{eqnarray} While it is possible to obtain $P(V, N; E)$ by solving the Schrödinger equation analytically, a semi-classical estimate using the WKB approximation is sufficient in this case: \begin{eqnarray} &&P(V, N; E) \nonumber \\ && \quad = \exp \left[ -2 \int^{\frac{2\pi}{N} - \theta_0}_{\theta_0} d\theta \sqrt{2 M (V(\theta) - E)} \right], \end{eqnarray} where $V(\theta_0) = E \left(0 \leq \theta_0 \leq \cfrac{\pi}{N} \right)$. Using this expression, we obtain \begin{eqnarray} &&\langle \Gamma_{\mathrm{tun}}(V, N; E) \rangle_{\beta} \nonumber \\ && \quad = \int _0^{V_0} d E \ 2E \cdot P(V, N; E) e^{-\beta E}. \end{eqnarray} The $(V, N)$-dependence of the total transition rate $\Gamma_{\mathrm{tot}}$ is shown in Fig.\ref{grapf_V,N}, where it can be seen that the rate steeply decreases on a specific line. It is worth noting that the crossover lines also depend on the lattice spacing, which should be considered as a parameter of the model. In order to balance the strong coupling approximation, on which the model is based, with the asymptotic freedom of the Yang-Mills theory, the dynamics of the model must be restricted to long-range interactions. Therefore, the lattice spacing is set to 0.4 $\mathrm{fm}$, which is relatively large compared to the typical length scale of hadrons.
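The two transition rates above can be evaluated numerically. The following sketch implements $\Gamma_{\mathrm{th}} = \beta^{-1} e^{-\beta V_0}$ and the thermally averaged WKB rate for the periodic barrier $V(\theta) = \frac{V_0}{2}(1 - \cos N\theta)$ of Eq. (\ref{onedirection}); the parameter values of $\beta$, $M$, $V_0$, and $N$ are placeholders chosen for illustration, not the physical values used in the figures:

```python
import math

# Placeholder parameter values for illustration (not the paper's physical values)
beta = 2.0   # inverse temperature
M = 20.0     # effective mass M(V, N) of the angular variable
V0 = 3.0     # barrier height V_0(V, N)
N = 7        # number of colors

def V(theta):
    # Periodic barrier between adjacent vacua
    return 0.5 * V0 * (1.0 - math.cos(N * theta))

def gamma_th():
    # Classical over-barrier rate: (1/beta) exp(-beta V_0)
    return math.exp(-beta * V0) / beta

def gamow(E, steps=2000):
    # WKB penetration factor P(E) = exp[-2 * integral sqrt(2M (V - E)) dtheta]
    # between the turning points where V(theta_0) = E
    theta0 = math.acos(1.0 - 2.0 * E / V0) / N
    a, b = theta0, 2.0 * math.pi / N - theta0
    h = (b - a) / steps
    s = 0.0
    for i in range(steps + 1):
        w = 0.5 if i in (0, steps) else 1.0
        s += w * math.sqrt(2.0 * M * max(V(a + i * h) - E, 0.0))
    return math.exp(-2.0 * h * s)

def gamma_tun_avg(steps=200):
    # Thermal average: integral_0^{V0} of 2E P(E) exp(-beta E) dE
    h = V0 / steps
    s = 0.0
    for i in range(steps + 1):
        E = i * h
        w = 0.5 if i in (0, steps) else 1.0
        s += w * 2.0 * E * gamow(E) * math.exp(-beta * E)
    return h * s

gamma_tot = gamma_th() + gamma_tun_avg()
```

With the barrier growing in $V$ and shrinking in $N$, this reproduces the qualitative competition between the thermal and tunnelling channels discussed in the text.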
Fig.\ref{grapf2_V,N} shows the lifetime as a function of $(V, N)$. This figure reveals that the $(V, N)$-plane is divided into two different phases: the stable vacuum phase and the unstable vacuum phase. In other words, the vacuum configuration of systems in the unstable vacuum phase is not stable, while if the volume is large enough for the system to be in the stable vacuum phase, then the vacuum is unlikely to decay. In conclusion, the boundary between these two phases is a fuzzy crossover, characterized by the steep increase in the lifetime. This "critical line," on which the lifetime equals $1 \ \mathrm{fm}$, is shown in Fig.\ref{intercept} and depends on the temperature of the system. Our results provide insight into the structure of quark-gluon plasma domains and domain walls in the deconfinement phase. In high-energy heavy ion collision experiments, it is expected that the system will be divided into thousands of small domains, each with a different vacuum configuration. Given that the vacuum configuration of the system will change to another vacuum and the adjacent vacuum domains will become homogeneous if the volume does not reach the threshold, the typical lower bound of quark-gluon plasma domain volumes can be characterized by the critical lines. \begin{figure}[htbp] \includegraphics[width=86mm]{tau_NV.png} \caption{The crossover transition from the unstable vacuum phase to the stable vacuum phase is shown for different temperatures: $T = 400 \ \mathrm{MeV}, 800 \ \mathrm{MeV}, 1200 \ \mathrm{MeV}, 1600 \ \mathrm{MeV}$. The lifetime of the vacuum lasts for $1.0 \ \mathrm{fm}$ on the transition lines, which corresponds to the typical length scale for a hadron.
Domains smaller than the transition thresholds, or unstable domains, would shrink and disappear, while those larger than the thresholds would be stabilized.} \label{intercept} \end{figure} \subsection{In extreme cases} Additionally, let us consider the behavior of the vacuum stability in the extreme cases $V \rightarrow \infty$ and large $N$. In the $V \rightarrow \infty$ limit, since both $V_0(V, N)$ and $M(V, N)$ diverge, \begin{eqnarray} \lim_{V \rightarrow \infty}\Gamma_{\mathrm{th}}(V,N) = 0. \end{eqnarray} Using $\theta_0 = 0$, which follows from $V_0 \gg E$ so that $V_0(1- \cos(N\theta_0)) - E \simeq V_0(1- \cos(N\theta_0)) = 0$, the Gamow factor in the WKB approximation can be calculated: \begin{eqnarray} P(V, N; E) &\simeq& P(V,N;0) \nonumber \\ &=&\exp \left[ -2 \int^{\frac{2\pi}{N}}_0 d\theta \sqrt{2 M V(\theta)} \right] \nonumber \\ &=& \exp \left[ - \frac{16 \sqrt{M V_0}}{N}\right]. \end{eqnarray} Thus, \begin{eqnarray} &&\lim_{V\rightarrow \infty}\langle \Gamma_{\mathrm{tun}}(V, N; E) \rangle_{\beta} \nonumber\\ &&\qquad \simeq \lim_{V\rightarrow \infty} 2\beta \ P(V, N; 0)\int ^{\infty}_0 d E \ E e^{-\beta E} \nonumber \\ && \qquad = 0. \end{eqnarray} Therefore, transitions between vacua are completely suppressed as $V \rightarrow \infty$, i.e. $\Gamma_{\mathrm{tot}} \rightarrow 0$. In other words, the vacuum becomes stable: \begin{eqnarray} \lim_{V \rightarrow \infty}\tau_v(V,N) = \infty. \end{eqnarray} In the large-$N$ limit, $V_0(V, N) \rightarrow 0$ and $M(V, N) \rightarrow \infty$ imply \begin{eqnarray} \lim_{N \rightarrow \infty}\Gamma_{\mathrm{th}}(V,N) = \frac{1}{\beta}. \end{eqnarray} and \begin{eqnarray} \lim_{N \rightarrow \infty} \langle \Gamma_{\mathrm{tun}}(V, N; E) \rangle_{\beta} &=& \Gamma_{\mathrm{tun}} (E=0)\nonumber \\ &=&0 \end{eqnarray} considering $0 < E \leq V_0$. Therefore, the dominant transition in the large-$N$ limit is thermal rather than a tunnelling effect.
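The closed form of the $E \rightarrow 0$ exponent quoted above can be checked numerically: with $V(\theta) = V_0(1 - \cos N\theta)$, one has $\sqrt{1 - \cos N\theta} = \sqrt{2}\,\sin(N\theta/2)$ on $[0, 2\pi/N]$, so the integral evaluates to $8\sqrt{M V_0}/N$ and $P = e^{-16\sqrt{M V_0}/N}$. A quick verification with arbitrary placeholder values of $M$, $V_0$, and $N$:

```python
import math

# Verify: 2 * int_0^{2 pi/N} sqrt(2 M V0 (1 - cos(N theta))) dtheta
#         = 16 sqrt(M V0) / N
# Parameter values below are arbitrary placeholders for the check.
M, V0, N = 2.0, 3.0, 5

steps = 50000
b = 2.0 * math.pi / N
h = b / steps
s = 0.0
for i in range(steps + 1):
    w = 0.5 if i in (0, steps) else 1.0  # trapezoid weights
    s += w * math.sqrt(2.0 * M * V0 * (1.0 - math.cos(N * i * h)))
exponent_numeric = 2.0 * h * s
exponent_closed = 16.0 * math.sqrt(M * V0) / N
```

The agreement confirms the $1/N$ suppression of the exponent, which is what drives $P \rightarrow 1$ and the thermal dominance in the large-$N$ limit.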
In this case, a particle is completely free from the potential, and the lifetime of a certain vacuum is \begin{eqnarray} \lim_{N \rightarrow \infty}\tau_v(V,N) =\beta, \end{eqnarray} comparable to the time scale of the thermal fluctuation. \section{Summary and Conclusion} In this paper, we have used the strong coupling expansion in lattice gauge theory to construct an effective model for the SU($N$) pure Yang-Mills theory at finite temperature. By employing the Polyakov loop as the confinement order parameter, we have extended the known SU($3$) case to SU($N$). Our model includes the SU($N$) invariant Haar measure, which gives a nontrivial contribution to the potential of the traced Polyakov loop. This allows us to describe the spontaneous symmetry breaking (SSB) of the $\mathbb{Z}_N$ symmetry in the deconfinement phase in terms of a complex scalar field theory with $\mathbb{Z}_N$ symmetry. In the first part of the paper, we have focused on the large-$N$ limit of SU($N$) and analyzed the overall structure of the theory in this limit. We have calculated the quantum fluctuations around one vacuum and found that the fluctuations of the traced Polyakov loop along the angle direction become a Nambu-Goldstone mode in the large-$N$ limit. This indicates that the $\mathbb{Z}_N$-symmetric theory is transformed into a U(1)-symmetric theory in the large-$N$ limit. In the second part of the paper, we have studied the $\mathbb{Z}_N$ structure of the quark-gluon plasma with finite volume in the context of high-energy heavy-ion collisions. We have used the large-$N$ and WKB semi-classical approximations to compute transition probabilities between different vacua and calculated the lifetime of a domain until it transitions to another vacuum. In particular, we have shown that in the infinite volume limit the lifetime diverges and in the large-$N$ limit it becomes equal to the value calculated only from thermal fluctuations.
This suggests that the discrete symmetry of the theory becomes continuous in the large-$N$ limit. In the last part of the paper, we have examined the lifetime of the vacuum as a function of the volume and the number of colors. We have found that the $(V^{1/3}, N)$-plane is divided into stable and unstable vacuum regions, with a crossover transition at the boundary. This implies that a quark-gluon plasma domain with a volume below the threshold is unstable and quickly transitions to another vacuum, while a domain is stable if its volume surpasses it. We have also predicted that in experimental situations, the quark-gluon plasma exists as domains above a certain length scale, which is determined by the threshold and the temperature. In conclusion, we have provided a successful description of the vacuum in the deconfinement phase of SU($N$) pure Yang-Mills theory as an effective model of the traced Polyakov loop with $\mathbb{Z}_N$ symmetry. Future work includes improving the strong coupling expansion and large-$N$ approximations on which our model is based, as well as extending the model to include fermions and finite baryon densities. It will also be interesting to further explore the physical role of the phase of the Polyakov loop. \begin{acknowledgments} H.S. is supported in part by the Grants-in-Aid for Scientific Research [19K03869] from Japan Society for the Promotion of Science. \end{acknowledgments}
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} \label{sec:introduction} In this paper, we initiate the study of the global stability properties of anisotropic systems of wave equations. With fixed parameters $\lambda_1$ and $\lambda_2$, a good example to keep in mind is \begin{equation} \label{eq:anisotropicintro} \begin{aligned} \Box \psi = -\partial_t^2 \psi + \partial_x^2 \psi + \partial_y^2 \psi = (\partial_t \psi)^2 (\partial_t \phi) \\ \Box' \phi = - \partial_t^2 \phi + \lambda_1^{-2} \partial_x^2 \phi + \lambda_2^{-2} \partial_y^2 \phi = (\partial_t \phi)^2 (\partial_t \psi) \end{aligned} \end{equation} in $2 + 1$ dimensions (see Theorem~\ref{thm:mainthm} and the discussion thereafter for a precise description of the kinds of admissible nonlinearities). We shall specifically be concerned with proving stability of the trivial solution. Because the problem is in $2 + 1$ dimensions and waves decay at a rate of $t^{-{1 \over 2}}$, this system is critical in terms of decay. We show that, nonetheless, the trivial solution is globally stable assuming that a kind of \emph{null condition} is present. This null condition will require that $\lambda_1 \ne 1$ and $\lambda_2 \ne 1$ in the above, guaranteeing that the light cones for the equations intersect transversally. Interactions involving at least one factor of each wave will then satisfy a kind of null condition (see the discussion after Theorem~\ref{thm:mainthm}). This problem is motivated by trying to understand phenomena related to \emph{birefringence}, which is associated with different waves moving at different speeds in different directions. The study of anisotropic systems of wave equations was introduced to the author by Sergiu Klainerman in the larger context of studying hyperbolic equations whose characteristics have multiple sheets (see Sections~\ref{sec:anisotropicintro} and \ref{sec:multiplecharacteristics}).
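A quick way to see the role of $\lambda_1 \ne 1$ and $\lambda_2 \ne 1$ is through direction-dependent phase speeds. For plane waves $e^{i(\xi \cdot x - \tau t)}$, the operator $\Box'$ gives $\tau^2 = \lambda_1^{-2} \xi_x^2 + \lambda_2^{-2} \xi_y^2$, while $\Box$ has unit speed in every direction. The following sketch (with illustrative values $\lambda_1 = 2$, $\lambda_2 = 1/2$, not taken from the paper) counts the directions in which the two characteristic speeds coincide:

```python
import math

# Illustrative anisotropy parameters (hypothetical), both != 1
lambda1, lambda2 = 2.0, 0.5

def speed(a):
    # Phase speed of Box' in the direction (cos a, sin a);
    # Box has phase speed 1 in every direction.
    return math.sqrt(math.cos(a)**2 / lambda1**2 + math.sin(a)**2 / lambda2**2)

# Count sign changes of speed(a) - 1 over a full circle: each sign change
# marks a direction in which the two characteristic cones cross.
samples = 100000
crossings = 0
prev = speed(0.0) - 1.0
for i in range(1, samples + 1):
    cur = speed(2.0 * math.pi * i / samples) - 1.0
    if prev * cur < 0.0:
        crossings += 1
    prev = cur
# The cones meet only in a finite set of directions (four for these values),
# i.e. they cross rather than coincide along an open set of directions.
```

When either parameter equals $1$, the speeds agree along an entire axis direction tangentially, which is the degeneracy the null condition rules out.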
The anisotropic system we study here is closely related to a system of equations that arises from applying a symmetry reduction to the equations governing the propagation of light in a \emph{biaxial crystal} (see Section~\ref{sec:multiplecharacteristics}). In order to prove global nonlinear stability, we shall introduce a strategy for proving estimates based on bilinear energy estimates and a duality argument. This paper also aims to describe this strategy. One of the main difficulties in this problem is that the equations do not share many symmetries, and even though both equations are in reality the $\Box$ operator associated to the flat metric, the Lorentz operators associated to both equations are different. One can check that the only weighted commutator (see \eqref{eq:VFs}) from the vector field method that can effectively be used is the scaling vector field $S = t \partial_t + r \partial_r$. Commuting with other weighted vector fields creates bad terms because, for example, the commutator adapted to $\psi$ could hit $\phi$, introducing growing weights in $t$. We must still use the fact that $S$ can effectively be used as a commutator in a fundamental way. We now briefly describe the role of bilinear energy estimates. If $\psi$ and $f$ are such that $\Box \psi = \Box f = 0$, then $\int_{\Sigma_t} \partial_t \psi \partial_t f + \partial^i \psi \partial_i f d x$ is independent of $t$. Thus, solutions of the homogeneous wave equation have infinitely many conserved quantities because their inner products with other solutions are constant in $t$. A more geometric way of saying this is that pairs of solutions to the wave equation obey bilinear spacetime integration by parts formulas, and these formulas are in fact available on general globally hyperbolic Lorentzian manifolds (see Section~\ref{sec:bilinearestimates}).
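The conservation of the bilinear pairing $\int_{\Sigma_t} \partial_t \psi \partial_t f + \partial^i \psi \partial_i f \, dx$ can be illustrated numerically in $1+1$ dimensions using explicit d'Alembert solutions. The Gaussian profiles below are arbitrary localized choices for the sketch, not taken from the paper:

```python
import math

# Two explicit solutions of the 1+1 dimensional wave equation:
#   psi(t, x) = G(x - t) + G(x + t),   f(t, x) = H(x - t),
# with G(u) = exp(-u^2) and H(u) = exp(-2 (u - 1)^2).
def Gp(u):
    # derivative of G(u) = exp(-u^2)
    return -2.0 * u * math.exp(-u * u)

def Hp(u):
    # derivative of H(u) = exp(-2 (u - 1)^2)
    return -4.0 * (u - 1.0) * math.exp(-2.0 * (u - 1.0) ** 2)

def energy_pairing(t, xmin=-30.0, xmax=30.0, n=6000):
    # E(t) = integral over Sigma_t of psi_t f_t + psi_x f_x dx (trapezoid rule)
    h = (xmax - xmin) / n
    s = 0.0
    for i in range(n + 1):
        x = xmin + i * h
        psi_t = -Gp(x - t) + Gp(x + t)
        psi_x = Gp(x - t) + Gp(x + t)
        f_t = -Hp(x - t)
        f_x = Hp(x - t)
        w = 0.5 if i in (0, n) else 1.0
        s += w * (psi_t * f_t + psi_x * f_x)
    return h * s

# energy_pairing(t) takes the same nonzero value for every t:
# a conserved bilinear quantity obtained by pairing psi against f.
```

Each choice of the auxiliary solution $f$ produces a different conserved quantity, which is the "infinitely many conserved quantities" point made above.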
Our strategy is, then, to prove estimates for $\psi$ by using these bilinear estimates and making a good choice for the data for the other solution $f$ (see Section~\ref{sec:decayestimates} where this is described in more detail). We believe that this idea can be used in other settings, but we focus on using this to prove decay and stability statements in this paper. The class of data chosen for $f$ can be thought of as testing $\psi$ against smoothed out fundamental solutions. We require three ingredients in order to apply this strategy to global stability. The first tool consists of bilinear integration by parts formulas like the bilinear energy estimates (see Section~\ref{sec:bilinearestimates}). The second is decay rates for solutions to the homogeneous wave equation with a sufficiently large class of data. The third is a duality argument. We now elaborate. \begin{enumerate} \item Everything is purely physical space, relying on bilinear energy estimates. Bilinear energy estimates are well known, but we shall use them to track how much energy of the solution is present in various scale $1$ balls (see Section~\ref{sec:decayintro}). These estimates and their counterparts in general globally hyperbolic Lorentzian manifolds are discussed in Section~\ref{sec:bilinearestimates}. \item The method uses decay for solutions to the homogeneous wave equation as a black box. Decay for the homogeneous wave equation may be established using whatever method one prefers. The rates must only be sufficiently strong for the application at hand. \item Pointwise decay follows from the bilinear energy estimates and pointwise decay for solutions to the homogeneous wave equation arising from a sufficiently large class of test functions. Indeed, controlling the integral of $\psi$ against a large class of test functions gives estimates on norms of $\psi$ using a duality argument. We will also use the fact that we can freely commute with translation vector fields. 
\end{enumerate} The proof of global stability for anisotropic systems of wave equations will require us to also use the scaling vector field $S = t \partial_t + r \partial_r$ in a fundamental way. This follows the philosophy introduced by Klainerman in \cite{Kl85} showing the importance of taking advantage of operators that commute with the equation (see Sections~\ref{sec:history}, \ref{sec:anisotropicintro}, and \ref{sec:multiplecharacteristics}). In practice, most of the work goes into controlling the nonlinear errors (see Section~\ref{sec:decayestimates} to see how these terms appear). In order to better describe the method, we shall prove two easier stability results before turning to study anisotropic systems of wave equations. We believe that these examples will be instructive in how bilinear energy estimates can be effectively used. The rest of the introduction consists of a discussion of existing work and how it relates to this paper (Section~\ref{sec:history}), a more detailed description of the strategy used to prove decay (Section~\ref{sec:decayintro}), a description of the first simple applications (Section~\ref{sec:simpleintro}), a description of anisotropic systems of wave equations (Section~\ref{sec:anisotropicintro}), and a discussion of some related directions (Section~\ref{sec:relateddirections}). We end this introductory part with a discussion in Section~\ref{sec:multiplecharacteristics} specifically about how the study of anisotropic wave equations fits into a larger research program concerning hyperbolic equations with multiple characteristics. Then, Section~\ref{sec:decayestimates} uses the bilinear estimates we have described and a duality argument to prove the decay estimates we shall use in the rest of the paper. Section~\ref{sec:simpleapps} uses the decay estimates in two simple nonlinear settings.
Finally, Section~\ref{sec:anisotropic} proves global stability of the trivial solution for an anisotropic system of wave equations satisfying the null condition. This final section is the most substantial, and contains the main Theorem in the paper (Theorem~\ref{thm:mainthm}). \subsection{Nonlinear wave equations and related results} \label{sec:history} We shall now provide a short history of stability results for nonlinear wave equations, and we shall also discuss some previous results using techniques which are similar in spirit to some of the strategies used in this paper. The reader may wish to look at this in conjunction with Section~\ref{sec:decayintro} in order to better understand the connections with previous work. The study of global stability for nonlinear wave equations started with the work of Klainerman in \cite{Kla80}. The stability mechanism comes from the fact that solutions to the wave equation arising from localized initial data decay at quantitative rates as $t \rightarrow \infty$. This dispersive behavior allows one to control the contribution of the nonlinearity and treat it perturbatively. In this pioneering work, Klainerman used the dispersive estimate to take advantage of decay, and this allowed him to prove global stability for a large class of nonlinear wave equations. The proof was later simplified by Klainerman-Ponce in \cite{KlaPon83}. Then, in the groundbreaking work \cite{Kl85}, Klainerman introduced the \emph{vector field method}, allowing him to prove decay for solutions to nonlinear equations by using a collection of weighted vector fields that satisfy good commutation properties with $\Box$. He was able to use this method to show global stability of the trivial solution for cubic nonlinearities in $3 + 1$ dimensions, and for quadratic nonlinearities in $n + 1$ dimensions for $n \ge 4$.
More specifically, the family of weighted vector fields consisted of the Lorentz and scaling vector fields, which can be represented by \begin{equation} \label{eq:VFs} \begin{aligned} S = t \partial_t + r \partial_r \hspace{5 mm} \text{and} \hspace{5 mm} \Omega_{\mu \nu} = x_\mu \partial_\nu - x_\nu \partial_\mu, \end{aligned} \end{equation} where $x^\mu$ represent coordinates on $n + 1$ dimensional Minkowski space. Using these weighted vector fields leads to a weighted Sobolev inequality called the \emph{Klainerman-Sobolev inequality}, which reads \begin{equation} \label{eq:KlaSob} \begin{aligned} |f| (t,r,\omega) \le {C \over (1 + t + r)^{{n - 1 \over 2}} (1 + |t - r|^{{1 \over 2}})} \sum_{|\alpha| \le \lceil n / 2 \rceil} \Vert \Gamma^\alpha f \Vert_{L^2 (\Sigma_t)}, \end{aligned} \end{equation} where $\Sigma_t$ denotes the constant $t$ hypersurface, and where $\Gamma^\alpha$ represents strings of the weighted vector fields introduced above along with the translation vector fields. As was described in Section~\ref{sec:introduction}, we will not have access to all of the weighted commutators in \eqref{eq:VFs}. Nonetheless, we do have access to the scaling vector field, and this plays a key role in the proof of the main Theorem, Theorem~\ref{thm:mainthm}, in Section~\ref{sec:anisotropic}. This restriction in $3 + 1$ dimensions is fundamental, as was shown by John in \cite{Joh81}. In this work, John showed that general quadratic nonlinearities can lead to blow up in finite time, even when the data are arbitrarily small (but nonzero) and compactly supported. An example of an equation that experiences this blow up is \[ \Box \phi = -(\partial_t \phi)^2. \] The work of Klainerman \cite{Kl85} was preceded by work of John-Klainerman in \cite{JohKla84} where they were able to establish almost global existence results for nonlinear wave equations having quadratic nonlinearities in $3 + 1$ dimensions. 
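The usefulness of the vector fields in \eqref{eq:VFs} as commutators rests on identities such as $[\Box, \Omega_{\mu\nu}] = 0$ and $[\Box, S] = 2\Box$. A minimal finite-difference check of the scaling identity in $1+1$ dimensions, on an arbitrary smooth test function chosen for illustration:

```python
import math

h = 1e-3  # finite-difference step

def psi(t, x):
    # Arbitrary smooth, localized test function
    return math.exp(-t * t - x * x)

def d_t(f):
    return lambda t, x: (f(t + h, x) - f(t - h, x)) / (2.0 * h)

def d_x(f):
    return lambda t, x: (f(t, x + h) - f(t, x - h)) / (2.0 * h)

def box(f):
    # 1+1 dimensional d'Alembertian -d_t^2 + d_x^2 via second differences
    def g(t, x):
        dtt = (f(t + h, x) - 2.0 * f(t, x) + f(t - h, x)) / (h * h)
        dxx = (f(t, x + h) - 2.0 * f(t, x) + f(t, x - h)) / (h * h)
        return -dtt + dxx
    return g

def S(f):
    # Scaling vector field S = t d_t + x d_x (r d_r reduces to x d_x in 1d)
    return lambda t, x: t * d_t(f)(t, x) + x * d_x(f)(t, x)

t0, x0 = 0.3, 0.7
lhs = box(S(psi))(t0, x0) - S(box(psi))(t0, x0)  # [Box, S] psi
rhs = 2.0 * box(psi)(t0, x0)                     # 2 Box psi
```

The commutator being a multiple of $\Box$ itself (rather than a new operator) is what makes $S$ usable as a commutator even when the Lorentz boosts are unavailable.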
This work involved using the fundamental solution for spherically symmetric solutions to the wave equation. The error from being spherically symmetric was controlled using the rotation vector fields, and this is one of the first papers where the usefulness of weighted vector fields for global problems was apparent. As was described above, the class of data we allow for the auxiliary multipliers makes them behave like smoothed out fundamental solutions, so the error integrals we must control have similarities with the integrals encountered by John-Klainerman in this work. Several physical systems in $3 + 1$ dimensions are modeled by hyperbolic equations having quadratic nonlinearities, meaning that any global stability result for these systems would have to have some mechanism that avoids the examples studied by John in \cite{Joh81}. These equations often have nonlinearities which exhibit some kind of \emph{null condition}, originally introduced by Klainerman in \cite{Kla82}. In the context of nonlinear wave equations in $3 + 1$ dimensions, quadratic nonlinearities obeying the null condition are better behaved and satisfy improved estimates. These improved estimates lead to global stability of the trivial solution (see \cite{Kla86} and \cite{Chr86}), as opposed to the examples studied by John in \cite{Joh81}. Then, in the monumental work \cite{ChrKl93}, Christodoulou-Klainerman were able to show that Minkowski space is globally nonlinearly stable with respect to sufficiently small and localized perturbations as a solution to the Einstein vacuum equations. Their work heavily used the vector field method, and it also required identifying a kind of null condition present in the Einstein vacuum equations. These themes have continued to be the subject of intense study ever since. One prominent direction among much of this recent work is that it involves situations in which we do not have access to all of the vector fields in \eqref{eq:VFs} for one reason or another. 
The first such work was by Klainerman-Sideris in \cite{KlaSid96} in which they developed a version of the vector field method to study hyperbolic equations without the use of boosts, motivated by studying physical systems without this symmetry. We note that the scaling vector field played a fundamental role in that work, similar to this one. In addition, we mention the exciting recent work concerning black hole stability (see, for example, \cite{DafRodShl16}, \cite{DafHolRod19}, and \cite{KlaSze20} and the references therein). These works required extending the philosophy of using weighted vector fields as commutators and multipliers to nontrivial backgrounds. New manifestations of some kind of null condition have also been found in various equations. In the context of nonlinear wave equations, Lindblad-Rodnianski identified a weaker form of the null condition called the \emph{weak null condition} which is present in the Einstein vacuum equations in wave gauge. This allowed them to provide a different proof of the stability of Minkowski space based on wave gauge in the works \cite{LinRod03}, \cite{LinRod05}, and \cite{LinRod10} (see also recent generalizations by Keir in \cite{Kei18}). In addition to these purely physical space methods, there have been other approaches to study wave equations and other equations with a dispersive mechanism. One approach, called the method of \emph{spacetime resonances}, involves using the Duhamel formula for the propagator to study the nonlinear interactions in a very precise way. This method was introduced in \cite{GerMasSha09}, and it takes advantage of both oscillations in the nonlinearity and differences in group velocity. The main difficulty is then in understanding what happens in regions where the oscillations match with the oscillations of the linear flow (time resonances) and in regions where wave packets interact for a long time (space resonances). 
Among the vast literature concerning the spacetime resonances method, we mention in particular the hyperbolic applications found in the works \cite{PusSha13} and \cite{DenPus20}. The first concerns the long time dynamics of solutions to equations satisfying the null condition, while the second concerns the long time dynamics of solutions to equations satisfying the weak null condition. We mention also the wave packet approach to studying wave equations. Using phase space decompositions such as wave packets has proven to be very useful in the study of wave equations (we note in particular the celebrated result \cite{SmiTat05} of Smith-Tataru which provides a sharp local well posedness result in $H^s$ spaces for general quasilinear wave equations). Wave packets are often constructed by tilings in phase space which saturate the uncertainty principle, and the tilings are chosen such that the wave equation approximately becomes a transport equation on each piece for some time scale. Other constructions of wave packets are given in the work \cite{KlaRodTao02} of Klainerman-Rodnianski-Tao. This paper provides novel proofs of bilinear estimates which were motivated by applications to low regularity results, and it also contains two wave packet constructions different from the one described above. One of these wave packet constructions involves ``two time cutoffs" in which a solution to the wave equation is truncated at both ends of a tube in order to construct a wave packet adapted to that tube. This construction provided inspiration for the strategy used in this paper. The work of Ifrim-Tataru in \cite{IfrTat15} describes how one can use wave packets to study global problems for nonlinear dispersive equations. They derive effective ODEs for inner products with wave packets, and this allows them to prove modified scattering for the solution. This philosophy of testing against other functions is also used in this paper (see Section~\ref{sec:decay}). 
Another use of testing against wave packets is found in the paper \cite{JeoOh19} by Jeong-Oh. Unlike the other examples described above, this paper proves ill posedness for certain regimes of the Hall and electron magnetohydrodynamic equations, so this may seem out of place. However, the proof of ill posedness involves tracking inner products with wave packets solving the adjoint equation to a certain linearized equation, providing another example of the philosophy of studying an equation by testing its solutions against other functions (see Section~\ref{sec:decay}). As will be described in Section~\ref{sec:anisotropicdescription}, a careful analysis of both the geometry of interaction (in particular the volume of interaction) and frame decompositions is important when studying anisotropic systems of wave equations (the geometry in general is important throughout the paper). We note that the author has previously worked on other problems where the same kinds of ideas have been useful. The work \cite{AndPas19} joint with Pasqualotto studied the stability of the trivial solution to nonlinear wave equations with respect to perturbations localized around several points, while the work \cite{AndZba20} joint with Zbarsky studied the stability and instability of plane wave solutions. \subsection{Proving decay using bilinear energy estimates} \label{sec:decayintro} Let us now describe the strategy we will use to study the solutions to the equations in question. We shall use the notation $\Sigma_t$ to denote the usual constant $t$ spacelike hypersurfaces, and we shall denote by $r$ the usual ``spatial" radial coordinate. For this schematic discussion, we shall assume that we want to study the solutions to the equation \[ \Box \psi = F \] in $n + 1$ dimensions. Because we are interested in proving decay for localized data, we shall assume that the data for $\psi$ is smooth and compactly supported in the unit ball in $\Sigma_0$.
Moreover, because $F$ is a nonlinearity involving $\psi$ in practice, we shall take $F$ to be an arbitrary smooth function supported where $t - r \ge -1$. We shall use test functions and a duality argument (along with commuting the equation for $\psi$ with translation vector fields) in order to prove statements about the solution to the equation. The goal is to show that $\psi$ behaves like a solution to the homogeneous equation. Solutions to the homogeneous wave equation arising from sufficiently regular data supported in the unit ball (i.e., the case of $F = 0$ above) follow a specific profile. The solution is concentrated along the light cone $t = r$ in a spherically symmetric way that is consistent with conservation of $\partial_t$ energy, where we recall that the $\partial_t$ energy of $\psi$ through $\Sigma_t$ is given by \[ {1 \over 2} \int_{\Sigma_t} (\partial_t \psi)^2 + \sum_{i = 1}^n (\partial_i \psi)^2 d x. \] An annular region $A_t \subset \Sigma_t$ of thickness $2$ and inner radius $t - 1$ has volume comparable to $t^{n - 1}$. This is precisely the region along the light cone $t = r$ in $\Sigma_t$ where we expect all of the energy of the solution to be concentrated if we apply the heuristic principle that waves propagate at the speed of light. In order to be consistent with energy conservation and being almost spherically symmetric, the solution must be of size $r^{-{n - 1 \over 2}} \approx t^{-{n - 1 \over 2}}$ along the light cone. A spacelike ball $B \subset \Sigma_t$ of radius comparable to $1$ whose center lies in $A_t$ should, therefore, contain energy comparable to $t^{-(n - 1)}$. This is because the ball has volume comparable to $1$, and it would take roughly $t^{n - 1}$ such balls to cover the annular region. Thus, because the energy in each ball should be roughly the same, each ball should have energy roughly comparable to $t^{-(n - 1)}$. See Figure~\ref{fig:energyannulus} for a picture of this configuration.
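This bookkeeping can be recorded in a single computation: writing $E$ for the conserved $\partial_t$ energy and using that the annulus $A_t$ has volume comparable to $t^{n - 1}$, we have \[ E \approx \int_{A_t} |\partial \psi|^2 \, d x \approx t^{n - 1} \sup_{A_t} |\partial \psi|^2, \hspace{5 mm} \text{so} \hspace{5 mm} |\partial \psi| \approx E^{{1 \over 2}} t^{-{n - 1 \over 2}} \text{ on } A_t, \] and a unit ball $B \subset A_t$ then carries energy comparable to $|B| \sup_B |\partial \psi|^2 \approx t^{-(n - 1)}$. We emphasize that this is only the heuristic guiding the strategy, not an estimate we assume.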
The fact that each such ball has energy comparable to $t^{-(n - 1)}$ is usually a consequence of commuting the equation with rotation vector fields. \begin{figure} \centering \begin{tikzpicture} \draw[very thick] (0,0) circle (4); \draw[very thick] (0,0) circle (3.7); \draw (3.8,0) circle (0.2); \draw (3.85,0.3) circle (0.2); \draw (3.8,0.55) circle (0.2); \draw (3.78,0.8) circle (0.2); \end{tikzpicture} \caption{This annulus depicts the region containing most of the energy of a solution to the homogeneous wave equation in $2 + 1$ dimensions arising from smooth data supported in the unit disk in $\Sigma_0$. This annulus is a subset of $\Sigma_t$, has an outer radius of $t + 1$, and has an inner radius of $t - 1$. Thus, the annulus has area roughly $t$. We have covered a small portion of the annulus with disks of radius comparable to the thickness of the annulus, which is comparable to $1$. It would take roughly $t$ such disks to cover this annulus.} \label{fig:energyannulus} \end{figure} The motivation for the strategy used in this paper comes from trying to directly show that the energy will equidistribute among the balls in this way (every ball should roughly have energy $t^{-(n - 1)}$). Because this ball is of radius comparable to $1$, a unit scale Sobolev embedding will then imply a pointwise decay rate of $t^{-{n - 1 \over 2}}$, which is the expected rate. For applications to anisotropic systems of wave equations as in Section~\ref{sec:anisotropic}, this is a suitable way to show decay because we are still able to commute with translation vector fields. This procedure of showing that the energy equidistributes in this way among the balls can be thought of as replacing commuting with rotation vector fields. In reality, the energy of the solution will not be located only along the light cone $t = r$, but it will also be present within the light cone. However, it is well known that the energy should decay away from the light cone.
Thus, in addition to showing directly that the energy roughly equidistributes as if the solution was spherically symmetric, we shall also show that the energy in a unit sized ball decays as the center of the ball moves away from the light cone $t = r$. This replaces showing decay in $u$ by commuting with weighted vector fields (although we shall have to figure out how to combine this method with commuting with the scaling vector field to study anisotropic wave equations in Section~\ref{sec:anisotropic}). We now describe how bilinear estimates allow us to show that the energy of the solution behaves in this way. Suppose we are given a scale $1$ ball $B \subset \Sigma_s$. We let $\tau$ denote the $u$ coordinate of the center of this ball. Thus, the spacetime coordinates of the center of the ball $B$ are $(s,x_0^1,\dots,x_0^n)$ where $s - \sqrt{(x_0^1)^2 + \dots + (x_0^n)^2} = \tau$. Now, suppose that we wish to show that the energy of the solution present in this ball is of the expected size. This means that we wish to show that \begin{equation} \label{eq:introendec} \begin{aligned} \Vert \partial \psi \Vert_{L^2 (B)}^2 \approx {C \over (1 + s)^{n - 1} (1 + |\tau|)^{2 p}}, \end{aligned} \end{equation} where $p$ measures the decay away from the light cone of the solution and depends on the application. Now, let $f$ be an arbitrary solution to the homogeneous wave equation $\Box f = 0$. Because $\psi$ solves the equation $\Box \psi = F$, we note that $\Box \partial^\alpha \psi = \partial^\alpha F$. The estimates we shall use are of the form \begin{equation} \label{eq:testest} \begin{aligned} \int_{\Sigma_s} (\partial_t \partial^\alpha \psi) (\partial_t f) + \sum_{i = 1}^n (\partial_i \partial^\alpha \psi) (\partial_i f) d x \\ = \int_{\Sigma_0} (\partial_t \partial^\alpha \psi) (\partial_t f) + \sum_{i = 1}^n (\partial_i \partial^\alpha \psi) (\partial_i f) d x - \int_0^s \int_{\Sigma_t} (\partial^\alpha F) (\partial_t f) d x d t. 
\end{aligned} \end{equation} These integration by parts identities are simply bilinear energy estimates, and we note that versions of this hold in general globally hyperbolic Lorentzian manifolds (see Section~\ref{sec:decay}). When $F = 0$, we note that this identity reduces to the fact that the inner product in $\dot{H}^1$ is a conserved quantity for pairs of solutions to the homogeneous wave equation. This identity contains a lot of freedom. Because we are interested in getting estimates for $\psi$, the data for $\psi$ and $F$ are both rigid. However, we have not specified the data for $f$, and this identity holds regardless of how we pick data for $f$. Because we are interested in proving estimates for $\psi$ in $B$, it is natural to allow the data for $f$ to be supported in, say, $2 B$, which is the ball of twice the radius of $B$ that is centered at the same point. Let us denote the trace of $f$ in $\Sigma_s$ by $f_0$, and let us denote the trace of its time derivative by $f_1$. The functions $f_0$ and $f_1$ can be freely specified to give a unique solution $f$ to the homogeneous wave equation. We are thus restricting $f_0$ and $f_1$ to be supported in $2 B$. In this case, the above identity becomes \begin{equation} \label{eq:introlocen} \begin{aligned} \int_{2 B} (\partial_t \partial^\alpha \psi) (f_1) + \sum_{i = 1}^n (\partial_i \partial^\alpha \psi) (\partial_i f_0) d x \\ = \int_{\Sigma_0} (\partial_t \partial^\alpha \psi) (\partial_t f) + \sum_{i = 1}^n (\partial_i \partial^\alpha \psi) (\partial_i f) d x - \int_0^s \int_{\Sigma_t} (\partial^\alpha F) (\partial_t f) d x d t. \end{aligned} \end{equation} By this choice of data for the auxiliary multiplier $f$, we note that we can use a bilinear energy estimate to bound averages of derivatives of $\psi$ against the functions $f_0$ and $f_1$.
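We briefly indicate where these identities come from. Taking the convention $\Box = -\partial_t^2 + \Delta$ (which is consistent with the sign of the last term in \eqref{eq:testest}), differentiating the bilinear energy in time and integrating by parts in space gives \[ {d \over d t} \int_{\Sigma_t} (\partial_t u) (\partial_t f) + \sum_{i = 1}^n (\partial_i u) (\partial_i f) \, d x = -\int_{\Sigma_t} (\Box u) (\partial_t f) + (\partial_t u) (\Box f) \, d x. \] Taking $u = \partial^\alpha \psi$ so that $\Box u = \partial^\alpha F$, using that $\Box f = 0$, and integrating from $0$ to $s$ recovers \eqref{eq:testest}.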
If we allow $f_0$ and $f_1$ to vary among a suitable class of functions which are supported in $2 B$, a duality argument will result in estimates for $\psi$ and its derivatives localized to $B$ (see Section~\ref{sec:dualityargument}). These localized estimates for $\psi$ are what we set out to prove. We now briefly discuss one way in which \eqref{eq:introlocen} can be used to show \eqref{eq:introendec}. If we allow for $f_0$ and $f_1$ to vary among all $C^k$ and $C^{k - 1}$ functions, respectively, which are compactly supported in $2 B$, then sufficient commutation with translation vector fields and a duality argument can lead to \[ \Vert \partial \psi \Vert_{L^2 (B)} \le C \sup_{f_0, f_1, |\alpha| \le K(k)} \int_{2 B} (\partial_t \partial^\alpha \psi) f_1 + \sum_{i = 1}^n (\partial_i \partial^\alpha \psi) (\partial_i f_0) d x. \] Thus, in order to show \eqref{eq:introendec}, it suffices to show that the right hand side of \eqref{eq:introlocen} is bounded by ${C \over (1 + s)^{{n - 1} \over 2} (1 + |\tau|)^p}$ for all admissible $f_0$, $f_1$, and $\alpha$. Let us now assume that $k$ is chosen sufficiently large so that we have pointwise decay rates for solutions to the homogeneous wave equation in that class. More precisely, we choose $k$ so large such that we have that \[ |\partial f| \le {C \over (1 + s - t)^{{n - 1 \over 2}}} {1 \over (1 + |s - t - \sqrt{(x^1 - x_0^1)^2 + \dots + (x^n - x_0^n)^2}|)^p} \] for $0 \le t \le s$. Such a rate can be proven in several different ways, and we recall that the sharp value of $p$ depends on the dimension. For example, the Klainerman-Sobolev inequality implies this with $p = {1 \over 2}$ in arbitrary dimensions. As another example, in the case of $n = 2$, the sharp value of $p$ is ${3 \over 2}$, as can be seen using the fundamental solution. After choosing $k$ such that this is true, we can now examine the right hand side of \eqref{eq:introlocen}.
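Schematically, the duality argument rests on the elementary identity \[ \Vert g \Vert_{L^2 (B)} = \sup \left \{ \int_B g h \, d x : h \in C_c^\infty (B), \hspace{2 mm} \Vert h \Vert_{L^2 (B)} \le 1 \right \}, \] applied to the components of $\partial \partial^\alpha \psi$; roughly speaking, the additional commutations with translation vector fields are what allow us to pass from the $C^k$ classes of test data appearing above to this $L^2$ pairing (see Section~\ref{sec:dualityargument} for the precise argument).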
The integral over $\Sigma_0$ is controlled in terms of $f$, which we have information on, and the data for $\psi$. Because the data for $\psi$ is compactly supported in the unit ball, an examination of the geometry involved tells us that \[ {1 \over (1 + s - t)^{{n - 1} \over 2} (1 + |s - t - \sqrt{(x^1 - x_0^1)^2 + \dots + (x^n - x_0^n)^2}|)^p} \] is comparable to \[ {1 \over (1 + s)^{{n - 1} \over 2} (1 + |\tau|)^p} \] in the support of the data for $\psi$ (see Figure~\ref{fig:lightconesaux} in Section~\ref{sec:decay}). Thus, we see that \[ \int_{\Sigma_0} (\partial_t \partial^\alpha \psi) (\partial_t f) + \sum_{i = 1}^n (\partial_i \partial^\alpha \psi) (\partial_i f) d x \le {C \over (1 + s)^{{n - 1} \over 2} (1 + |\tau|)^p} \Vert \psi \Vert_{C^{|\alpha| + 1} (\Sigma_0)}. \] Thus, assuming that we can write \begin{equation} \label{eq:errorreqboundintro} \begin{aligned} \int_0^s \int_{\Sigma_t} (\partial^\alpha F) (\partial_t f) d x d t \le {C \over (1 + s)^{{n - 1} \over 2} (1 + |\tau|)^p} \Vert \psi \Vert_{C^{|\alpha| + 1} (\Sigma_0)}, \end{aligned} \end{equation} we will have shown \eqref{eq:introendec}, which is the decay estimate we set out to prove. These considerations are all carried out in detail in Section~\ref{sec:decay}. In practice, the most difficult thing to do is of course to control the error integrals involving $F$. This is done in a few simple examples in Section~\ref{sec:simpleapps}, and it is done for a class of anisotropic systems of wave equations in Section~\ref{sec:anisotropic}. We note that several aspects of this strategy can be modified as needed (see Section~\ref{sec:relateddirections} for a discussion of some of these potential modifications). \subsection{Simple applications} \label{sec:simpleintro} Before applying these ideas to anisotropic systems of wave equations, we shall discuss two simple applications which we believe to be instructive. The first consists of the following problem.
Let $\chi$ be a smooth cutoff function equal to $1$ for $|x| \le 1$ and equal to $0$ for $|x| \ge 2$. Then, working in $3 + 1$ dimensions, we take $\gamma$ to be the null curve $(t,t,0,0)$ in the $t$, $x$ plane, and we define $r_\gamma^2 (t,x,y,z) = (x - t)^2 + y^2 + z^2$. We then take the semilinear wave equation \begin{equation} \label{eq:simpappintro1} \begin{aligned} \Box \phi = -\chi (r_\gamma) (\partial_t \phi)^2. \end{aligned} \end{equation} We have, thus, taken one of the examples studied by John in \cite{Joh81} (see Section~\ref{sec:history} for more of a discussion on this) and have multiplied the nonlinearity by a function which localizes it to a tubular neighborhood of the null geodesic $\gamma$. The asymptotic system (see \cite{Hor97}) for this equation blows up in finite time, and yet we shall show in Section~\ref{sec:simpleapps} that the trivial solution of this equation is globally nonlinearly stable. In addition to showing how the error integrals that arise when using this method can be controlled, this example shows the effect that angular localization can have when studying wave equations, a phenomenon which is present in anisotropic systems of wave equations (see Section~\ref{sec:anisotropicintro}). Indeed, we note that the cutoff function means that the nonlinearity only takes effect in a single angular direction from the origin. In fact, an analysis of the proof in Section~\ref{sec:simpleapps} shows that we still have global stability for the equation \[ \Box \phi = -\chi \left (r_\gamma \left (t,x,{y \over (1 + t)^\omega},{z \over (1 + t)^\omega} \right ) \right ) (\partial_t \phi)^2 \] for any $\omega < {1 \over 2}$. The next simple application will be proving global stability for a cubic nonlinear wave equation in $3 + 1$ dimensions.
The proof will actually give global stability and decay rates for an equation of the form \begin{equation} \label{eq:simpappintro2} \begin{aligned} \Box \phi = h (\partial_t \phi)^3, \end{aligned} \end{equation} where $h$ is any smooth function with bounded $C^k$ norm for $k$ sufficiently large. This example will show how an analysis of null geometry is necessary for controlling the error integrals in a simpler setting. In both cases, the proofs use a bootstrap argument. The bootstrap argument involves decay estimates and energy estimates. The decay estimates are shown using the bilinear energy estimates as described in Section~\ref{sec:decayintro}. There is an energy level corresponding to $N$ derivatives in $L^2$, and we prove pointwise decay for only, say, the first ${3 N \over 4}$ derivatives in order to close. The energy estimates are interpolated with the pointwise decay estimates in order to close the pointwise estimates. Closing the pointwise estimates requires an analysis of the geometry of the interaction between the nonlinearity and the auxiliary multipliers $f$ described in Section~\ref{sec:decayintro}. This is carried out in Section~\ref{sec:simpleapps}. \subsection{Applications to anisotropic systems of wave equations} \label{sec:anisotropicintro} Anisotropic systems of wave equations arise as the simplest model problem before studying more complicated equations having characteristics with multiple sheets. Notable physical examples of such hyperbolic equations include the equations of crystal optics (see \cite{CouHil62}), compressible magnetohydrodynamics (see \cite{CouHil62}), and crystalline materials (see \cite{Chr98}). Characteristics with multiple sheets are related to the physical phenomenon of birefringence. 
If we wish to understand the solutions of these equations, it is beneficial to try to study the effects of having multiple characteristics in a setting which does not require us to understand the other novel phenomena they exhibit (such as \emph{conical refraction} in the case of a biaxial crystal in crystal optics). Thus, we study anisotropic wave equations in order to try to isolate the effects of birefringence in these equations from the other phenomena they exhibit (see Section~\ref{sec:multiplecharacteristics} for more discussion on the role of anisotropic wave equations in the larger context of studying hyperbolic equations with multiple characteristics). In fact, anisotropic systems of wave equations arise naturally in the study of \emph{uniaxial crystals}, and also in the study of biaxial crystals under a certain symmetry reduction. We postpone a more thorough discussion of these issues and more generally of hyperbolic equations having multiply sheeted characteristics to Section~\ref{sec:multiplecharacteristics}. For a nonlinear equation such as \eqref{eq:anisotropicintro}, one mathematical difficulty arises from the fact that the two wave equations only share the scaling symmetry. The proof of global stability will combine the ideas described in Section~\ref{sec:decayintro} with the vector field method. We shall only have access to the scaling vector field $S$. A rough version of the main Theorem is given below (see Theorem~\ref{thm:mainthm} for the precise statement): \begin{theorem}[rough version of the main Theorem] The trivial solution to nonlinear wave equations of the form \eqref{eq:anisotropicintro} is asymptotically stable as long as $\lambda_1 \ne 1$ and $\lambda_2 \ne 1$. \end{theorem} The main Theorem will actually cover equations more general than \eqref{eq:anisotropicintro}, and will allow for higher order nonlinearities, quasilinear terms, and other cubic nonlinearities (see Section~\ref{sec:anisotropic}).
The null condition can be summarized as requiring that no cubic interaction involves the same wave in all three factors, and that $\lambda_1 \ne 1$ and $\lambda_2 \ne 1$. This restriction on $\lambda_1$ and $\lambda_2$ guarantees that the light cones intersect transversally, and it will allow us to control the nonlinear interaction. We shall now provide only a very brief description of the proof (see Section~\ref{sec:anisotropicdescription} for a more thorough description). The proof will once again follow from using a bootstrap argument. Pointwise estimates and $L^2$ based energy estimates will be propagated. There are two key ideas that must be used. \begin{enumerate} \item Controlling the error integrals will require a fine analysis of the geometry. This requires understanding the geometry of the intersections of various cones. The condition that $\lambda_1 \ne 1$ and $\lambda_2 \ne 1$ will be required to prove estimates on the geometry which allow us to effectively control the nonlinear interactions. \item The scaling vector field $S$ must be used. It will help in getting good powers of $\tau$ when controlling the error integrals (see \eqref{eq:errorreqboundintro}). The use of this operator is fundamental to making the proof work, and this follows in the philosophy introduced by Klainerman in \cite{Kl85}. \end{enumerate} The details of all of these considerations are carried out in Section~\ref{sec:anisotropic}. \subsection{Related directions} \label{sec:relateddirections} We now briefly discuss some potential directions for further study related to this paper. For the anisotropic system of wave equations studied in Section~\ref{sec:anisotropic}, there are several questions and improvements one may wish to pursue. \begin{enumerate} \item One natural improvement is to remove the compact support assumption and to instead allow data decaying away from the origin.
The estimates within the light cone from this paper should still be applicable in this case, but they would have to be supplemented by an analysis of the geometry of the region outside of the light cones. Specifically, it seems as though the main new ingredient would be bounds on how the scaling vector field interacts with the geometry of the various cones in question in this region. We believe that the analysis in Section~\ref{sec:SGeometry} could be useful in understanding the behavior of the scaling vector field in this region. We discuss this further in the remarks following the statement of Theorem~\ref{thm:mainthm}. \item Another natural direction is to investigate the problem in $3 + 1$ dimensions with a quadratic nonlinearity satisfying the analogous null condition. The analysis of the geometry becomes more complicated because of the increase in dimension. However, we believe that the techniques in this paper could be applicable, and we hope that this will be the topic of future work. \item It is also natural to consider the problem consisting of a system of more than two wave equations with more than two different light cones. When the analogous null condition is satisfied, we believe that the techniques in this paper would be applicable. In fact, having a product of three waves with different light cones seems to be even better behaved, particularly when the three light cones do not intersect in the same four lines. \item The fact that the null condition we consider suppresses most parallel interactions could lead to improved low regularity local well posedness results. This would be analogous to results concerning nonlinear wave equations satisfying the classical null condition (see, for example, \cite{KlaMac93} for the improved estimates that null forms satisfy in a low regularity setting, and for a description of how this can lead to improved local well posedness results).
\end{enumerate} The strategy of controlling the equation using bilinear estimates also has further flexibility that has not been used in this paper. An example of this flexibility is that we may control integrals over different regions using a similar strategy. Performing a bilinear energy estimate between data on $\Sigma_0$ and, say, a null hypersurface is one instance of this. The data for the auxiliary multiplier can be posed on this null hypersurface in an appropriate way. We also believe that this could be useful for understanding solutions to wave equations in other contexts. Also, this kind of strategy could be used for other equations having some kind of bilinear energy structure. Another possible use is that we can use the bilinear estimates in Section~\ref{sec:bilinearestimates} in a more refined way in order to provide a finer analysis of the phase space behavior of solutions to wave equations (in order to show, for example, improved decay of good derivatives). We can also further interface these ideas with existing techniques, such as more fully using weighted vector fields. We hope that exploiting this finer phase space analysis will be the topic of forthcoming work with Samuel Zbarsky. \subsection{Hyperbolic equations with characteristics having multiple sheets} \label{sec:multiplecharacteristics} The problem of studying the stability of systems of anisotropic wave equations was posed to the author by Sergiu Klainerman in the larger context of studying hyperbolic equations with multiple characteristics. In \cite{AndKla21}, Klainerman and the author discuss several interesting directions related to this program. In addition to the equations of crystal optics mentioned above, other physically relevant examples include compressible magnetohydrodynamics and the equations governing elasticity (see the work of Christodoulou \cite{Chr98} where he presents a general framework for these kinds of systems). 
In this introduction, we focus mostly on the role of anisotropic systems of wave equations in the context of studying crystal optics, but we encourage the reader to look at \cite{AndKla21} for a much more thorough discussion. In terms of stability, we run into obstacles when applying the vector field method in the usual way to hyperbolic systems of equations with multiple characteristics because different components in the system can have different natural operators associated to them. However, there are still some operators present for anisotropic systems of wave equations, namely the translation vector fields and the scaling vector field. The strategy of using bilinear energy estimates can interface with commuting operators from the vector field method, and we in fact have to use the scaling vector field in a fundamental way in the proof of the main Theorem in Section~\ref{sec:anisotropic}. This more broadly follows the philosophy introduced by Klainerman in \cite{Kl85} which emphasizes the importance of being able to take advantage of good commutators (and in particular weighted commutators for decay). Moreover, as is discussed in \cite{AndKla21}, physically significant hyperbolic systems with multiple characteristics (such as crystal optics) still often exhibit a compatible scaling symmetry. Thus, the fact that anisotropic systems of wave equations still have access to the scaling vector field is representative of interesting physical situations. As is also discussed in \cite{AndKla21}, it is natural to look for other operators that satisfy good commutation properties with equations exhibiting this birefringent behavior. This is an important part of the larger goal of robustly extending the vector field method to these kinds of equations. Similar to vector fields, we believe that these operators could be very valuable in this more exotic context. 
These operators require much study themselves, and are a key part of providing a fine tuned analysis of the linear equations of, say, crystal optics. These operators may no longer be vector fields, but may instead be, for example, second order operators. We note that second order operators were used by Andersson--Blue in \cite{AndBlu15} in the study of the wave equation on black hole spacetimes. The system \eqref{eq:anisotropicintro} admits such an operator. We now describe Klainerman's procedure in this particular case. If we look at the symbols of $\Box$ and $\Box'$ by taking the spacetime Fourier transform, they are given by $\tau^2 - |\xi|^2$ and $\tau^2 - \lambda_1^{-2} \xi_1^2 - \lambda_2^{-2} \xi_2^2$. Level sets of these two functions are hyperboloids in frequency space. These hyperboloids are two dimensional in a three dimensional space, and they intersect transversally along families of curves. Because the symbol acts by multiplication, vector fields in frequency space which are tangent to level sets of the symbol annihilate the symbol, and thus commute with the equation. We can thus find an operator that commutes with both equations by looking at the vector field generated by the family of curves formed by intersecting these level sets. This vector field in frequency space can essentially be found by taking the cross product between the normal vector fields of the two families of hyperboloids. One can check that this results in the operator \[ -\Lambda_1 \tau \xi_1 \partial_{\xi_2} - \Lambda_2 \tau \xi_2 \partial_{\xi_1} - \Lambda_3 \xi_1 \xi_2 \partial_\tau, \] subject to the constraints that $\Lambda_1 + \Lambda_2 - \Lambda_3 = 0$ and $\lambda_2^{-2} \Lambda_1 + \lambda_1^{-2} \Lambda_2 - \Lambda_3 = 0$ (of course, multiplying this operator by any constant still commutes with both equations as well).
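As a consistency check, applying this frequency space vector field to the two symbols gives \[ \begin{aligned} &\left (-\Lambda_1 \tau \xi_1 \partial_{\xi_2} - \Lambda_2 \tau \xi_2 \partial_{\xi_1} - \Lambda_3 \xi_1 \xi_2 \partial_\tau \right ) (\tau^2 - |\xi|^2) = 2 \tau \xi_1 \xi_2 (\Lambda_1 + \Lambda_2 - \Lambda_3), \\ &\left (-\Lambda_1 \tau \xi_1 \partial_{\xi_2} - \Lambda_2 \tau \xi_2 \partial_{\xi_1} - \Lambda_3 \xi_1 \xi_2 \partial_\tau \right ) (\tau^2 - \lambda_1^{-2} \xi_1^2 - \lambda_2^{-2} \xi_2^2) = 2 \tau \xi_1 \xi_2 (\lambda_2^{-2} \Lambda_1 + \lambda_1^{-2} \Lambda_2 - \Lambda_3), \end{aligned} \] and both expressions vanish precisely when the two constraints above hold.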
This vector field in frequency space can be seen to correspond to the second order differential operator \[ \Lambda_1 y \partial_t \partial_x + \Lambda_2 x \partial_t \partial_y + \Lambda_3 t \partial_x \partial_y \] in physical space. We can then hope to use operators of this kind to further study solutions to anisotropic systems of wave equations, and we can look for similar kinds of operators in other systems. For example, the weights in these operators lead to weighted Sobolev inequalities, similar to how the weights in the Lorentz and scaling vector fields lead to the weighted Klainerman--Sobolev inequality (see \eqref{eq:VFs} and \eqref{eq:KlaSob}). We refer the reader to \cite{AndKla21} for a much more thorough discussion of this circle of ideas. For now, we just note that we hope to further explore uses of these operators together with Klainerman. We end this Section with a more precise description of the relation between anisotropic systems of wave equations and crystal optics. Anisotropic systems of wave equations are a simpler counterpart to biaxial crystals. While they still exhibit birefringent behavior, they do not display other complicated phenomena, such as conical refraction. In the general case, it may be that the electrical and magnetic properties of the media themselves change as a function of the electric and magnetic fields. This corresponds to allowing the \emph{dielectric tensor} $\epsilon$ and the \emph{magnetic permeability} $\mu$ to depend on the electric and magnetic fields. This leads to a nonlinear system of equations. For this discussion, we shall greatly simplify the situation by considering a constant, diagonal matrix $\epsilon$ and a constant $\mu$. More precisely, we take the dielectric tensor $\epsilon$ to be given by \[ \epsilon = \begin{pmatrix} \epsilon_1 & 0 & 0 \\ 0 & \epsilon_2 & 0 \\ 0 & 0 & \epsilon_3 \end{pmatrix}. \] The three constants $\epsilon_i$ in this matrix are called the \emph{dielectric constants}. 
When they are all equal, we are left with the usual Maxwell equations governing the propagation of light in an isotropic medium. When two of the constants are equal to each other but distinct from the third, the resulting equations then govern the propagation of light in a uniaxial crystal. Finally, when all three of the constants are distinct from each other, the resulting equations govern the propagation of light in a biaxial crystal. The magnetic permeability $\mu$ is taken to be a multiple of the identity, corresponding to a magnetically isotropic material. It can be taken to be $1$ for simplicity (see \cite{CouHil62}, \cite{Lie91}, \cite{Lie89}, and \cite{AndKla21} for a more thorough discussion). Let us now write down the equations in the most general form. We shall denote by $E$ the electric field and $B$ the magnetic field. Thus, in $3 + 1$ dimensions, both the electric and magnetic fields have $3$ components. The Maxwell equations are then given by \begin{equation} \label{eq:dielectricMaxwell} \begin{aligned} -\partial_t (\epsilon E) + \curl(B) = 0 \\ -\partial_t (\mu B) - \curl(E) = 0 \\ \divr(\epsilon E) = \divr(\mu B) = 0. \end{aligned} \end{equation} It is easy to see that the last set of equations can be thought of as constraints: they are propagated by the evolutionary equations as long as they hold initially. 
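The constraint-propagation remark rests on the vector calculus identity $\divr \circ \curl = 0$: the evolutionary equations give $\partial_t \divr(\epsilon E) = \divr(\curl(B)) = 0$, and likewise for $\mu B$. This identity can be checked symbolically; the following is a minimal sympy sketch, not part of the original text.

```python
# Verify div(curl F) = 0 for an arbitrary smooth vector field F. Combined with
# the evolutionary Maxwell equations, this shows the divergence constraints
# are propagated in time as long as they hold initially.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
F1, F2, F3 = [sp.Function(f'F{i}')(t, x, y, z) for i in (1, 2, 3)]

def curl(F):
    f1, f2, f3 = F
    return (sp.diff(f3, y) - sp.diff(f2, z),
            sp.diff(f1, z) - sp.diff(f3, x),
            sp.diff(f2, x) - sp.diff(f1, y))

def div(F):
    return sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

# All mixed partials cancel in pairs by symmetry of second derivatives.
assert sp.simplify(div(curl((F1, F2, F3)))) == 0
```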
Commuting the evolutionary equations in \eqref{eq:dielectricMaxwell} with $\partial_t$, expanding them out in components, and using the equations to get rid of the magnetic field $B$ gives us the system of equations \begin{equation} \begin{aligned} \epsilon_1 \partial_t^2 E^1 = \partial_y^2 E^1 + \partial_z^2 E^1 - \partial_x \partial_y E^2 - \partial_x \partial_z E^3 \\ \epsilon_2 \partial_t^2 E^2 = \partial_z^2 E^2 + \partial_x^2 E^2 - \partial_y \partial_z E^3 - \partial_y \partial_x E^1 \\ \epsilon_3 \partial_t^2 E^3 = \partial_x^2 E^3 + \partial_y^2 E^3 - \partial_z \partial_x E^1 - \partial_z \partial_y E^2 \\ \epsilon_1 \partial_x E^1 + \epsilon_2 \partial_y E^2 + \epsilon_3 \partial_z E^3 = 0. \end{aligned} \end{equation} Let us now make the symmetry assumption that the electric field does not depend on $z$, meaning that all $\partial_z$ derivatives vanish. This results in the system of equations \begin{equation} \begin{aligned} \epsilon_1 \partial_t^2 E^1 = \partial_y^2 E^1 - \partial_x \partial_y E^2 \\ \epsilon_2 \partial_t^2 E^2 = \partial_x^2 E^2 - \partial_y \partial_x E^1 \\ \epsilon_3 \partial_t^2 E^3 = \partial_x^2 E^3 + \partial_y^2 E^3 \\ \epsilon_1 \partial_x E^1 + \epsilon_2 \partial_y E^2 = 0. \end{aligned} \end{equation} We can now use the divergence free condition, which we know propagates in the evolution as long as it holds initially, to write the equations as \begin{equation} \begin{aligned} -\partial_t^2 E^1 + \epsilon_2^{-1} \partial_x^2 E^1 + \epsilon_1^{-1} \partial_y^2 E^1 = 0 \\ -\partial_t^2 E^2 + \epsilon_2^{-1} \partial_x^2 E^2 + \epsilon_1^{-1} \partial_y^2 E^2 = 0 \\ -\partial_t^2 E^3 + \epsilon_3^{-1} \partial_x^2 E^3 + \epsilon_3^{-1} \partial_y^2 E^3 = 0 \\ \epsilon_1 \partial_x E^1 + \epsilon_2 \partial_y E^2 = 0. \end{aligned} \end{equation} The evolutionary equations listed here can be compared with \eqref{eq:anisotropicintro}, and we see that they are of a similar form. Indeed, we have a system of three equations. 
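This elimination step can be checked symbolically. In the following sympy sketch (not from the original text), the potential $u$ is an auxiliary device: it solves the divergence constraint identically, and substituting into the coupled equations for $E^1$ and $E^2$ recovers derivatives of the decoupled anisotropic wave operator.

```python
# Check: solving the constraint eps1 dx E1 + eps2 dy E2 = 0 with a potential u
# and substituting into the coupled evolutionary equations yields (derivatives
# of) the decoupled anisotropic wave operator acting on u.
import sympy as sp

t, x, y = sp.symbols('t x y')
e1, e2 = sp.symbols('epsilon1 epsilon2', positive=True)
u = sp.Function('u')(t, x, y)

E1 = sp.diff(u, y) / e1    # these solve the divergence constraint identically
E2 = -sp.diff(u, x) / e2
assert sp.simplify(e1 * sp.diff(E1, x) + e2 * sp.diff(E2, y)) == 0

# Coupled evolutionary equations after the symmetry reduction, moved to one side:
eq1 = e1 * sp.diff(E1, t, 2) - sp.diff(E1, y, 2) + sp.diff(E2, x, y)
eq2 = e2 * sp.diff(E2, t, 2) - sp.diff(E2, x, 2) + sp.diff(E1, x, y)

# Decoupled anisotropic wave operator applied to the potential u:
wave_u = -sp.diff(u, t, 2) + sp.diff(u, x, 2) / e2 + sp.diff(u, y, 2) / e1

# eq1 = -d_y(wave_u) and eq2 = d_x(wave_u), so both coupled equations reduce
# to the stated decoupled wave equations once the constraint is imposed.
assert sp.simplify(eq1 + sp.diff(wave_u, y)) == 0
assert sp.simplify(eq2 - sp.diff(wave_u, x)) == 0
```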
One equation (the one for $E^3$) has circular light cones while the other two equations (the ones for $E^1$ and $E^2$) have elliptical light cones. Thus, the equations we study in Section~\ref{sec:anisotropic} are related to this symmetry reduction for biaxial crystals. Moreover, we can easily see that the null condition that $\lambda_1 \ne 1$ and $\lambda_2 \ne 1$ is analogous to the condition that the $\epsilon_i$ all be distinct, meaning that the crystal is biaxial.\footnote{This is the first half of the null condition we need to prove global stability. It would be interesting to see if the second half, that the same wave not be cubed, corresponds to some physical property of the material.} When the crystal is uniaxial, we see that there are two possibilities. If $\epsilon_1 = \epsilon_2$, the equations become an isotropic system of wave equations with multiple speeds. These kinds of equations can be treated using the techniques introduced by Klainerman--Sideris in \cite{KlaSid96} (see also \cite{Sogge08}). If $\epsilon_1 = \epsilon_3$ or $\epsilon_2 = \epsilon_3$, the equations become anisotropic, but the situation is like the one in which one of $\lambda_1$ or $\lambda_2$ is equal to $1$. Although the results in Section~\ref{sec:anisotropic} thus seem to be relevant for the study of biaxial crystals under this symmetry reduction, we emphasize that the problem outside of symmetry is much more difficult and requires a much finer analysis. This symmetry reduction has made the equations for each component of the electric field decouple, which does not occur in the general case of a biaxial crystal. This decoupling removes many of the interesting difficulties present in biaxial crystals, such as conical refraction. The first step in analyzing these more complicated phenomena was undertaken by Liess in \cite{Lie91}, with a global stability statement following in \cite{Lie89} using machinery similar to that introduced by Klainerman--Ponce in \cite{KlaPon83}. 
The linear analysis provided by Liess in \cite{Lie91} shows that the situation is much more difficult for these kinds of systems than for the usual wave equation. The stability result in \cite{Lie89} does not cover the class of nonlinearities considered in this paper, and in particular, it does not exploit any kind of null condition (it treats fourth order and higher nonlinearities). To our knowledge, this is the only global stability result for a biaxial crystal outside of symmetry assumptions. Providing sharper global stability statements requires a finer analysis. We believe this to be an extremely interesting direction for future work, and we refer the reader once again to \cite{AndKla21} for a more thorough discussion. \subsection{Acknowledgements} The author is extremely indebted to his advisor, Sergiu Klainerman, for making the author aware of this problem. The author is grateful to him both for his extremely valuable suggestions, especially concerning the importance of scaling, and for discussions with him about the role of this problem in a larger research program for studying hyperbolic equations with multiple characteristics. The author is also very thankful for several useful conversations with Mihalis Dafermos, Ross Granowski, Christoph Kehle, Jonathan Luk, Sung-Jin Oh, Federico Pasqualotto, Igor Rodnianski, Yakov Shlapentokh-Rothman, and Samuel Zbarsky. \section{Decay estimates} \label{sec:decayestimates} In this section, we shall prove the estimates that will lead to decay. Decay comes from bilinear integration by parts formulas which show that certain quantities are conserved at the linear level. The energy estimate is a special case of these estimates. These bilinear estimates are simple, but they are quite general, with versions being true on general Lorentzian manifolds. We record these geometric estimates in Section~\ref{sec:bilinearestimates}. 
In this paper, when we are interested in controlling $\psi$ where $\Box \psi = F$, the estimates are applied by taking an auxiliary solution of the homogeneous wave equation $\Box f = 0$ and using this as a multiplier for $\psi$. To go from these bilinear estimates to pointwise decay statements, we use a duality argument. Controlling integral averages of a function $\psi$ against a sufficiently large class of test functions results in estimates on various norms for $\psi$. Thus, by allowing the data for $f$ to vary appropriately, we prove that $\psi$ decays pointwise through a duality argument. The duality argument is described in Section~\ref{sec:dualityargument}, and the duality argument and bilinear estimates are combined to prove pointwise estimates in Section~\ref{sec:decay}. The decay rates this gives depend on decay rates for the homogeneous wave equation. Decay rates for solutions to the homogeneous wave equation can be taken as a black box, and can be proven in whatever way one prefers. \subsection{Basic bilinear estimates} \label{sec:bilinearestimates} We begin with the estimates on Minkowski space $\mathbb{R}^{n + 1}$. The first is a bilinear energy estimate and the second comes from integrating $\phi \Box \psi$ by parts. In this paper, we shall only use estimates between two time slices $\Sigma_{t_1}$ and $\Sigma_{t_2}$, but we note that we are of course free to do estimates in regions with different geometry (such as, for example, null hypersurfaces). This is expressed in the more geometric versions of these integration by parts formulas below, which simply show that we can integrate an appropriate divergence and use Stokes' Theorem. \begin{lemma} \label{lem:bilinenM} Let $\psi$ and $\phi$ be smooth functions decaying sufficiently rapidly at infinity with $\Box \psi = F$ and with $\Box \phi = G$. 
Then, we have that \begin{equation} \begin{aligned} \int_{\Sigma_{t_2}} \partial_t \psi \partial_t \phi + \partial^i \psi \partial_i \phi d x + \int_{t_1}^{t_2} \int_{\Sigma_s} F \partial_t \phi d x d s \\ = \int_{\Sigma_{t_1}} \partial_t \psi \partial_t \phi + \partial^i \psi \partial_i \phi d x - \int_{t_1}^{t_2} \int_{\Sigma_s} G \partial_t \psi d x d s. \end{aligned} \end{equation} More generally, we have that \begin{equation} \begin{aligned} \int_{\Sigma_{t_2}} \partial_t \partial^\alpha \psi \partial_t \phi + \partial^i \partial^\alpha \psi \partial_i \phi d x + \int_{t_1}^{t_2} \int_{\Sigma_s} \partial^\alpha F \partial_t \phi d x d s \\ = \int_{\Sigma_{t_1}} \partial_t \partial^\alpha \psi \partial_t \phi + \partial^i \partial^\alpha \psi \partial_i \phi d x - \int_{t_1}^{t_2} \int_{\Sigma_s} G \partial^\alpha \partial_t \psi d x d s. \end{aligned} \end{equation} \end{lemma} \begin{proof} We compute the following: \begin{equation} \begin{aligned} \Box \psi \partial_t \phi = - \partial_t^2 \psi \partial_t \phi + \partial_i \partial^i \psi \partial_t \phi = - \partial_t (\partial_t \psi \partial_t \phi) + \partial_t \psi \partial_t^2 \phi + \partial_i (\partial^i \psi \partial_t \phi) - \partial^i \psi \partial_i \partial_t \phi \\ = \partial_t (-\partial_t \psi \partial_t \phi) + \partial_t \psi \partial_t^2 \phi + \partial_i (\partial^i \psi \partial_t \phi) - \partial_t (\partial^i \psi \partial_i \phi) + \partial_t \partial^i \psi \partial_i \phi \\ = \partial_t (-\partial_t \psi \partial_t \phi) + \partial_t \psi \partial_t^2 \phi + \partial_i (\partial^i \psi \partial_t \phi) - \partial_t (\partial^i \psi \partial_i \phi) + \partial^i (\partial_t \psi \partial_i \phi) - \partial_t \psi \partial^i \partial_i \phi \\ = \partial_t (-\partial_t \psi \partial_t \phi) + \partial_t (-\partial^i \psi \partial_i \phi) + \partial_i (\partial^i \psi \partial_t \phi) + \partial^i (\partial_t \psi \partial_i \phi) - \partial_t \psi \Box \phi. 
\end{aligned} \end{equation} Integrating between $\Sigma_{t_1}$ and $\Sigma_{t_2}$ gives us that \begin{equation} \begin{aligned} \int_{t_1}^{t_2} \int_{\Sigma_s} \Box \psi \partial_t \phi d x d s \\ = \int_{\Sigma_{t_2}} - \partial_t \psi \partial_t \phi - \partial^i \psi \partial_i \phi d x - \int_{\Sigma_{t_1}} - \partial_t \psi \partial_t \phi - \partial^i \psi \partial_i \phi d x - \int_{t_1}^{t_2} \int_{\Sigma_s} \partial_t \psi \Box \phi d x d s. \end{aligned} \end{equation} Using the fact that $\Box \psi = F$ and that $\Box \phi = G$ gives us the first estimate. The second estimate comes from commuting the equation for $\psi$ with $\partial^\alpha$. \end{proof} \begin{lemma} \label{lem:bilinintM} Let $\psi$ and $\phi$ be smooth functions decaying sufficiently rapidly at infinity with $\Box \psi = F$ and with $\Box \phi = G$. Then, we have that \begin{equation} \begin{aligned} \int_{\Sigma_{t_2}} \partial_t \psi \phi - \psi \partial_t \phi d x + \int_{t_1}^{t_2} \int_{\Sigma_s} F \phi d x d s \\ = \int_{\Sigma_{t_1}} \partial_t \psi \phi - \psi \partial_t \phi d x + \int_{t_1}^{t_2} \int_{\Sigma_s} G \psi d x d s. \end{aligned} \end{equation} More generally, we have that \begin{equation} \begin{aligned} \int_{\Sigma_{t_2}} \partial_t \partial^\alpha \psi \phi - \partial^\alpha \psi \partial_t \phi d x + \int_{t_1}^{t_2} \int_{\Sigma_s} \partial^\alpha F \phi d x d s \\ = \int_{\Sigma_{t_1}} \partial_t \partial^\alpha \psi \phi - \partial^\alpha \psi \partial_t \phi d x + \int_{t_1}^{t_2} \int_{\Sigma_s} G \partial^\alpha \psi d x d s. 
\end{aligned} \end{equation} \end{lemma} \begin{proof} We compute the following: \begin{equation} \begin{aligned} \Box \psi \phi = - \partial_t^2 \psi \phi + \partial_i \partial^i \psi \phi = - \partial_t (\partial_t \psi \phi) + \partial_t \psi \partial_t \phi + \partial_i (\partial^i \psi \phi) - \partial^i \psi \partial_i \phi \\ = - \partial_t (\partial_t \psi \phi) + \partial_t (\psi \partial_t \phi) - \psi \partial_t^2 \phi + \partial_i (\partial^i \psi \phi) - \partial^i (\psi \partial_i \phi) + \psi \partial^i \partial_i \phi \\ = - \partial_t (\partial_t \psi \phi) + \partial_t (\psi \partial_t \phi) + \psi \Box \phi + \partial_i (\partial^i \psi \phi) - \partial^i (\psi \partial_i \phi). \end{aligned} \end{equation} Integrating between $\Sigma_{t_1}$ and $\Sigma_{t_2}$ gives us that \begin{equation} \begin{aligned} \int_{t_1}^{t_2} \int_{\Sigma_s} \Box \psi \phi d x d s = \int_{\Sigma_{t_2}} - \partial_t \psi \phi + \psi \partial_t \phi d x - \int_{\Sigma_{t_1}} - \partial_t \psi \phi + \psi \partial_t \phi d x + \int_{t_1}^{t_2} \int_{\Sigma_s} \psi \Box \phi d x d s. \end{aligned} \end{equation} Using the fact that $\Box \psi = F$ and $\Box \phi = G$ gives us the desired result. \end{proof} In order to more accurately show the geometric character of these identities, we provide versions of these formulas which are available on arbitrary globally hyperbolic Lorentzian manifolds $(M,g)$. \begin{lemma} \label{lem:bilinint} Let $D$ be a domain in $M$. Moreover, let $\Box_g \psi = F$ and let $\Box_g \phi = G$. We have that \[ \int_D div(\nabla \psi \phi) - div(\nabla \phi \psi) d vol(g) = \int_D \phi F - \psi G d vol(g). \] \end{lemma} \begin{proof} We can write the wave equation $\Box_g \psi = F$ as $div(\nabla \psi) = tr (\nabla^2 \psi) = F$, where $\nabla$ is the covariant derivative operator associated with the Lorentzian metric $g$. 
Now, we have that \begin{equation} \begin{aligned} div(\nabla \psi \phi) = \phi div (\nabla \psi) + g(d \psi,d \phi) = \phi \Box_g \psi + g(d \psi,d \phi). \end{aligned} \end{equation} Similarly, we have that \begin{equation} \begin{aligned} div(\nabla \phi \psi) = \psi div (\nabla \phi) + g(d \psi,d \phi) = \psi \Box_g \phi + g(d \psi,d \phi). \end{aligned} \end{equation} Subtracting the second equation from the first gives us that \begin{equation} \begin{aligned} \phi \Box_g \psi - \psi \Box_g \phi = div(\nabla \psi \phi) - div(\nabla \phi \psi). \end{aligned} \end{equation} This gives us the desired result. \end{proof} Integrating this identity and using the divergence theorem can give us fluxes analogous to Lemma~\ref{lem:bilinintM} on a general Lorentzian manifold. \begin{lemma} Let $T [\psi,\phi]$ denote the bilinear energy momentum tensor given by \begin{equation} \begin{aligned} T [\psi,\phi] = d \psi \otimes d \phi + d \phi \otimes d \psi - g (d \psi,d \phi) g. \end{aligned} \end{equation} Then, we have that \[ div(T [\psi,\phi]) = (\Box \psi) d \phi + (\Box \phi) d \psi. \] \end{lemma} \begin{proof} We begin by noting that $T [\psi,\phi]$ is symmetric, so the divergence is well defined. In components, this can be written as \begin{equation} \begin{aligned} T_{\mu \nu} [\psi,\phi] = \partial_\mu \psi \partial_\nu \phi + \partial_\nu \psi \partial_\mu \phi - g_{\alpha \beta} \partial^\alpha \psi \partial^\beta \phi g_{\mu \nu}. \end{aligned} \end{equation} Computing the divergence means contracting with $\nabla^\mu$, where $\nabla$ is the connection associated with $g$. 
We have that \begin{equation} \begin{aligned} \nabla^\mu T_{\mu \nu} [\psi,\phi] = (\nabla^\mu \partial_\mu \psi) \partial_\nu \phi + \partial_\mu \psi \nabla^\mu \partial_\nu \phi + \nabla^\mu \partial_\nu \psi \partial_\mu \phi + \partial_\nu \psi (\nabla^\mu \partial_\mu \phi) \\ - g_{\alpha \beta} \nabla^\mu \partial^\alpha \psi \partial^\beta \phi g_{\mu \nu} - g_{\alpha \beta} \partial^\alpha \psi \nabla^\mu \partial^\beta \phi g_{\mu \nu}. \end{aligned} \end{equation} Now, we note that $\partial_\mu \psi \nabla^\mu \partial_\nu \phi = g_{\alpha \beta} \partial^\alpha \psi \nabla^\mu \partial^\beta \phi g_{\mu \nu} = \nabla^2_{\mu \nu} \phi \partial^\mu \psi = (\nabla^2 \phi) (\nabla \psi,\cdot)$, where we have used the fact that the Hessian of a function is symmetric (we have also used the musical isomorphism induced by the metric between covariant and contravariant tensors). We have an analogous equality for the other terms with $\psi$ and $\phi$ interchanged. Putting everything together gives us that \[ \nabla^\mu T_{\mu \nu} [\psi,\phi] = (\nabla^\mu \partial_\mu \psi) \partial_\nu \phi + (\nabla^\mu \partial_\mu \phi) \partial_\nu \psi. \] This is exactly the statement that \[ div(T [\psi,\phi]) = (\Box \psi) d \phi + (\Box \phi) d \psi, \] as desired. \end{proof} Because the divergence of $T [\psi,\phi]$ can be written in terms of $\Box_g \psi$ and $\Box_g \phi$, it can be contracted with appropriate vector fields to find useful identities after integrating over a spacetime domain and applying Stokes' Theorem, just as is the case with the usual energy momentum tensor ${1 \over 2} T [\psi,\psi]$. The resulting identities will then also depend on the deformation tensors of the vector fields. For example, the identities in Lemma~\ref{lem:bilinenM} which are specific to Minkowski space correspond to contracting this bilinear energy momentum tensor with the vector field $\partial_t$, which is a Killing field (i.e., it has a vanishing deformation tensor). 
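To illustrate the last remark concretely, contracting $T [\psi,\phi]$ with $\partial_t$ in Minkowski space produces exactly the densities appearing in Lemma~\ref{lem:bilinenM}. The resulting pointwise divergence identity can be checked symbolically; the following sympy sketch (not part of the original text) does this in $2 + 1$ dimensions, though the computation is the same in any dimension.

```python
# Check the pointwise identity behind Lemma bilinenM in 2+1 Minkowski space:
# Box(psi) dt(phi) + dt(psi) Box(phi) equals the spacetime divergence of the
# current obtained by contracting T[psi, phi] with the Killing field d_t.
import sympy as sp

t, x, y = sp.symbols('t x y')
psi = sp.Function('psi')(t, x, y)
phi = sp.Function('phi')(t, x, y)

def box(f):  # wave operator with signature (-, +, +)
    return -sp.diff(f, t, 2) + sp.diff(f, x, 2) + sp.diff(f, y, 2)

# Time component (energy density) and spatial fluxes of T[psi, phi](d_t, .):
energy = -(sp.diff(psi, t) * sp.diff(phi, t)
           + sp.diff(psi, x) * sp.diff(phi, x)
           + sp.diff(psi, y) * sp.diff(phi, y))
flux_x = sp.diff(psi, x) * sp.diff(phi, t) + sp.diff(psi, t) * sp.diff(phi, x)
flux_y = sp.diff(psi, y) * sp.diff(phi, t) + sp.diff(psi, t) * sp.diff(phi, y)

lhs = box(psi) * sp.diff(phi, t) + sp.diff(psi, t) * box(phi)
rhs = sp.diff(energy, t) + sp.diff(flux_x, x) + sp.diff(flux_y, y)
assert sp.expand(lhs - rhs) == 0
```

Integrating this identity between two time slices recovers the first estimate of Lemma~\ref{lem:bilinenM}.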
We refer the reader to, for example, \cite{Ali10} for a more thorough description of integrating the contraction of energy momentum tensors with vector fields in order to derive useful quantities. \subsection{Duality argument} \label{sec:dualityargument} The estimates from Section~\ref{sec:bilinearestimates} lead us to think that we should be able to control the averages of $\psi$ and its derivatives weighted by functions $f$ as long as we can control the error integrals involving $F$ in \eqref{eq:testest}. Moreover, because we can commute with unit derivatives in practice, the same is true of $\partial^\alpha \psi$. If $f$ is allowed to vary in a sufficiently large class and if $|\alpha|$ is allowed to be sufficiently large, it is well known that control over these averages will imply bounds for $\psi$ and some of its derivatives. Because we require pointwise decay, we have chosen to immediately prove pointwise estimates using these averages. Of course, these averages can be used to prove estimates for other $L^p$ spaces, including the energy space. Thus, we could modify the following argument to first prove an energy bound, and then prove pointwise decay using the usual Sobolev embedding. We shall now provide a proof of this duality argument for completeness. Although this result is far from sharp, it suffices for our applications. We assume that we are on some $\Sigma_t$, and that we have that $\left |\int_{\Sigma_t} \partial^\alpha \psi f d x \right | \le M$ where $M$ is some explicit constant. Moreover, we assume that $\psi$ is some smooth function, and that $f$ is allowed to be any function with $C^k$ norm bounded by another constant $D$. Our goal is to show that we, in fact, have control over $\Vert \psi \Vert_{L^\infty}$ explicitly in terms of the constants $M$ and $D$ as long as we can take all $\alpha$ with $|\alpha|$ sufficiently large depending on $k$. 
\begin{proposition} Let $\psi$ be some smooth function on $\mathbb{R}^n$ such that, for any smooth function $f$ supported in any ball of radius $1$ having $C^k$ norm at most $D$, we have that \begin{equation} \begin{aligned} \left |\int \partial^\alpha \psi(x) f(x) d x \right | \le M \end{aligned} \end{equation} for all $|\alpha| \le k + n + 1$. Then, there exists some universal constant $C$ depending only on $k$ and $n$ such that \begin{equation} \begin{aligned} \Vert \psi \Vert_{L^\infty} \le {C M \over D}. \end{aligned} \end{equation} \end{proposition} \begin{proof} It suffices to get control over $\psi$ near the origin, as control over any other location follows in the same way. We fix a smooth, radially symmetric cutoff function $\chi$ equal to $1$ in the ball of radius ${1 \over 2}$, decaying monotonically to $0$. The function is identically equal to $0$ outside of the ball of radius $1$. Let $\Vert \chi \Vert_{C^{2 k + n + 1}} = E$. Then, we have that ${1 \over E} \chi$ has $C^{2 k + n + 1}$ norm controlled by $1$. Now, we take the cube centered at the origin with side lengths equal to $10$. We note that this cube contains the support of $\chi$. By identifying the faces of this cube in the usual way, we can think of the function $\chi \psi$ as living on the flat torus. We take the functions $e^{{1 \over 5} \pi i l \cdot x}$ for $l \in \mathbb{Z}^n$. Integrating against these functions gives us the Fourier coefficients of a given function. The bounds on integrals against test functions are not directly useful when integrating against the functions $e^{{1 \over 5} \pi i l \cdot x}$ because they have growing $C^k$ norm as $\Vert l \Vert_{\ell^\infty}$ increases. Thus, we divide by $\Vert l \Vert_{\ell^\infty}^k$, giving us the functions ${1 \over \Vert l \Vert_{\ell^\infty}^k} e^{{1 \over 5} \pi i l \cdot x}$. These functions have $C^k$ norm controlled by some constant $A$ independent of $l$. 
Thus, multiplying by $D$ and dividing by $A$, we get that the functions $f_l (x) := {D \over A} {1 \over \Vert l \Vert_{\ell^\infty}^k} e^{{1 \over 5} \pi i l \cdot x}$ have $C^k$ norm controlled by $D$. We now consider the Fourier coefficients of the function $\chi \psi$. We shall use the regularity of this function to show that its Fourier coefficients decay by integrating against the functions $f_l$. By the assumptions given above, we have that \begin{equation} \begin{aligned} {D \over E A \Vert l \Vert_{\ell^\infty}^k} |c_l| = \left |\int {1 \over E} \chi(x) \psi(x) f_l (x) d x \right | \le M C_1, \end{aligned} \end{equation} where $c_l$ is the Fourier coefficient of $\chi \psi$ associated to $l$, and where $C_1$ depends on $k$ (this constant arises from controlling the $C^k$ norm of ${1 \over E} \chi f_l$ in terms of the $C^k$ norm of $\chi$ and the $C^k$ norm of $f_l$). Now, we have that \begin{equation} \begin{aligned} \left ({\pi \over 5} \right )^{|\alpha|} {\Vert l \Vert_{\ell^\infty}^{|\alpha|} D \over E A \Vert l \Vert_{\ell^\infty}^k} |c_l| = \left |\int {1 \over E} \partial^\alpha (\chi(x) \psi(x)) f_l (x) d x \right | \le M C_2, \end{aligned} \end{equation} where $C_2$ depends on $|\alpha|$ and $k$, and where the partial derivatives in $\partial^\alpha$ are all taken in the direction corresponding to the largest component of $l$. Taking $|\alpha| = k + n + 1$ gives us that \begin{equation} \begin{aligned} \left ({\pi \over 5} \right )^{|\alpha|} \Vert l \Vert_{\ell^\infty}^{n + 1} {D \over E A} |c_l| = \left |\int {1 \over E} \partial^\alpha (\chi(x) \psi(x)) f_l (x) d x \right | \le M C_3. \end{aligned} \end{equation} This gives us that \begin{equation} \begin{aligned} |c_l| \le C {M E A \over D \Vert l \Vert_{\ell^\infty}^{n + 1}}, \end{aligned} \end{equation} where the constant $C$ depends on the $C^k$ norms of the $f_l$ (we recall that this is $A$, which is independent of $l$), and the numbers $k$ and $|\alpha| = k + n + 1$. 
Now, we have that \begin{equation} \begin{aligned} \chi(x) \psi(x) = \sum_{l \in \mathbb{Z}^n} c_l e^{{1 \over 5} \pi i l \cdot x}. \end{aligned} \end{equation} Plugging in the above estimates gives us the quantitative estimate \begin{equation} \begin{aligned} |\chi(x) \psi(x)| \le \sum_{l \in \mathbb{Z}^n} |c_l| \le {C M \over D}, \end{aligned} \end{equation} where $C$ is some absolute constant depending only on $n$, $k$, and the functions $\chi$ and $f_l$. This is the desired result. \end{proof} \subsection{Decay estimates} \label{sec:decay} We now proceed to one specific application of the above considerations. We shall use the bilinear estimates in Minkowski space from Section~\ref{sec:bilinearestimates} and the duality argument from Section~\ref{sec:dualityargument} to prove pointwise decay for solutions to wave equations. We shall include inhomogeneities in the estimates. In our applications, these inhomogeneities are nonlinearities. Let $\Box \psi = F$, and suppose that $\psi$ has compactly supported data in the unit ball. Suppose that we want to show pointwise decay for $\psi$. We then take auxiliary solutions of the homogeneous wave equation $\Box f = 0$, and decay will be a result of applying the bilinear estimates from Section~\ref{sec:bilinearestimates} to $\psi$ and $f$, allowing the data for $f$ to vary in an appropriate class, and then applying the duality argument from Section~\ref{sec:dualityargument}. The data for $f$ is chosen as follows. Suppose we want to show decay for $\psi$ at some point $(s,x_0^i)$. We then consider the ball $B$ of radius $1$ centered at $x_0^i$ in $\Sigma_s$, and we allow the data $f_0 (x) = f(s,x)$ and $f_1 (x) = -\partial_t f(s,x)$ for $f$ to vary among all smooth functions whose $C^k$ norm (where $k$ will be specified shortly) is of size at most $10$ and which are supported in $B \subset \Sigma_s$. 
More specifically, we have chosen the nonlinearities to only involve $\partial_t$ derivatives for simplicity, so we shall always set $f_0 = 0$. If other derivatives are involved instead, this argument can be modified to treat those cases. The estimate from Lemma~\ref{lem:bilinenM} says that \[ \int_{\Sigma_s} \partial_t \psi f_1 d x = \int_{\Sigma_0} \partial_t \psi \partial_t f + \partial^i \psi \partial_i f d x - \int_0^s \int_{\Sigma_t} F \partial_t f d x d t. \] Thus, we have that \[ \left |\int_{\Sigma_s} \partial_t \psi f_1 d x \right | \le \left |\int_{\Sigma_0} \partial_t \psi \partial_t f + \partial^i \psi \partial_i f d x \right | + \left |\int_0^s \int_{\Sigma_t} F \partial_t f d x d t \right |. \] We denote by $\tau$ the $u$ coordinate of $(s,x_0^i)$ (i.e., $\tau = s - \sqrt{\sum_{i = 1}^n (x_0^i)^2}$). In practice, we know that solutions to the homogeneous wave equation with compactly supported and sufficiently regular data (like $f$) decay in a certain way. Let us thus assume that $k$ is chosen large enough that \[ |\partial f| \le {C \over (1 + s - t)^{{n - 1 \over 2}}} {1 \over 1 + |s - t - \sqrt{(x^1 - x_0^1)^2 + \dots + (x^n - x_0^n)^2}|^p} \] for $0 \le t \le s$. For example, when $n = 2$, we can take $k = 2$ and $p = {3 \over 2}$ using the fundamental solution (this is of course not sharp). As a consequence of this, we have that \begin{equation} \label{eq:auxdec} \begin{aligned} \left |\int_{\Sigma_0} \partial_t \psi \partial_t f + \partial^i \psi \partial_i f d x \right | \le {C \over (1 + s)^{{n - 1 \over 2}} (1 + |\tau|^p)} \Vert \partial \psi \Vert_{L^1 (\Sigma_0)}, \end{aligned} \end{equation} where $p > 0$ measures the decay away from the light cone. Here, we are using the fact that the data for $\psi$ is compactly supported in the unit ball in $\Sigma_0$. 
Indeed, the symmetry of the configuration guarantees that the distance from the center of the unit ball in $\Sigma_0$ to the light cone associated with $f$ is comparable to $\tau$, and clearly the height of this cone is given by $s$. This can be seen in Figure~\ref{fig:lightconesaux}. \begin{figure} \centering \begin{tikzpicture} \draw[dashed] (0,0) -- (4,4); \draw[ultra thick] (4,4) -- (5,5); \draw[dashed] (0,0) -- (-1,1); \draw[ultra thick] (-1,1) -- (-5,5); \draw[ultra thick] (3,5) -- (2.7,4.7); \draw[dashed] (2.7,4.7) -- (-1,1); \draw[ultra thick] (-1,1) -- (-2,0); \draw[ultra thick] (3,5) -- (3.27,4.73); \draw[ultra thick] (4,4) -- (8,0); \draw[dashed] (3.27,4.73) -- (4,4); \draw[dotted] (3,5) -- (5,5); \node (tau) at (4,5.2) {$\tau$}; \draw[dotted] (3,0) -- (3,5); \node (s) at (3.2,2) {$s$}; \draw[dotted] (0,0) -- (3,0); \node (a) at (1.5,0.2) {$s - \tau$}; \draw[thick,domain=-180:-85] plot ({1.75*cos(\x)},{1.75+0.3*sin(\x)}); \draw[dashed,domain=-85:0] plot ({1.75*cos(\x)},{1.75+0.3*sin(\x)}); \draw[dashed,domain=0:180] plot ({1.75*cos(\x)},{1.75+0.3*sin(\x)}); \draw[dashed,domain=-180:-105] plot ({3+1.75*cos(\x)},{3.25+0.3*sin(\x)}); \draw[thick,domain=-105:0] plot ({3+1.75*cos(\x)},{3.25+0.3*sin(\x))}); \draw[dashed,domain=0:180] plot ({3+1.75*cos(\x)},{3.25+0.3*sin(\x))}); \draw[thick] (0,5) ellipse (5 and 0.35); \draw[thick] (-2,0) arc(-180:0:5 and 0.35); \draw[dashed] (-2,0) arc(-180:-360:5 and 0.35); \draw[color=red,rotate around={32:(1.5,2.5)}] (-1.36,2.5) arc(-180:0:2.88 and 0.2); \draw[dashed,color=red,rotate around={32:(1.5,2.5)}] (-1.36,2.5) arc(-180:-360:2.88 and 0.2); \end{tikzpicture} \caption{A configuration of light cones for $\psi$ and $f$. The light cone for $\psi$ is the upward opening cone, and the light cone for $f$ is the downward opening cone. The vertical distance between the tips of the cones is $s$, and the horizontal distance from the tip of the downward opening cone and the upward opening cone is $\tau$. 
The intersection of the two cones is a tilted spacelike ellipse whose major axis is comparable to $s$ and whose minor axis is comparable to $\sqrt{\tau} \sqrt{s}$ (this follows from the results in Sections~\ref{sec:geometry} and \ref{sec:SGeometry}). This ellipse appears in red in the above figure. We also refer the reader to Figure~\ref{fig:psifSigma_t} in Section~\ref{sec:geometry} to see what this looks like in each $\Sigma_t$.} \label{fig:lightconesaux} \end{figure} Thus, we have the estimate \begin{equation} \label{eq:decayest} \begin{aligned} \left |\int_{\Sigma_s} \partial_t \psi f_1 d x \right | \le {C \over (1 + s)^{{n - 1 \over 2}} (1 + |\tau|^p)} \Vert \partial \psi \Vert_{L^1 (\Sigma_0)} + \left |\int_0^s \int_{\Sigma_t} F \partial_t f d x d t \right |. \end{aligned} \end{equation} This is the form of the estimate we shall use. When combined with commutation with translation vector fields and the duality argument from Section~\ref{sec:dualityargument}, we get pointwise decay for $\partial_t \psi$ assuming that \begin{equation} \label{eq:reqerrorbounds} \begin{aligned} \left |\int_0^s \int_{\Sigma_t} F \partial_t f d x d t \right | \le {C \over (1 + s)^{{n - 1 \over 2}} (1 + |\tau|^p)}. \end{aligned} \end{equation} In this paper, we note that $\psi$ is of size $\epsilon$ and $F$ is schematically of size $\epsilon^q$ for some $q > 1$ in practice. We will be able to recover these estimates in a bootstrap argument. The decay estimate resulting from \eqref{eq:decayest} and the duality argument in Section~\ref{sec:dualityargument} is recorded in the following proposition. 
\begin{proposition} \label{prop:decay} Let \[M [\psi] (s,x_0^i) = \sup_{|\alpha| \le k + n + 1} \sup_{f_1} \left ({C \over (1 + s)^{{n - 1 \over 2}} (1 + |\tau|^p)} \Vert \partial \partial^\alpha \psi \Vert_{L^1 (\Sigma_0)} + \left |\int_0^s \int_{\Sigma_t} (\partial^\alpha F) \partial_t f d x d t \right | \right ), \] where $(s,x_0^i)$ denotes the center of the unit ball in $\Sigma_s$ where $f_1$ is supported, where $\tau = s - \sqrt{\sum_{i = 1}^n (x_0^i)^2}$, and where the $C^k$ norm of $f_1$ is bounded by $10$. Then, we have that \[ |\partial_t \psi| (s,x_0^i) \le C M [\psi] (s,x_0^i). \] \end{proposition} The value of $p$ will be specified in each application. We end this Section by remarking that while Proposition~\ref{prop:decay} is based on using duality and the bilinear energy estimate from Lemma~\ref{lem:bilinenM}, there is a similar estimate coming from Lemma~\ref{lem:bilinint}. Proposition~\ref{prop:decay} is used in Section~\ref{sec:anisotropic} to study anisotropic wave equations. In Section~\ref{sec:simpleapps}, we shall actually use a version of Proposition~\ref{prop:decay} adapted to Lemma~\ref{lem:bilinint} instead. The statement and proof are analogous. \section{Simple applications} \label{sec:simpleapps} In this Section, we shall use the strategy described in Section~\ref{sec:decayestimates} in order to prove global stability of the trivial solution $\psi = 0$ in a few specific cases. The problems are artificial, but they show how to use Section~\ref{sec:decayestimates} in simplified settings, and they help us examine some of the behavior of solutions to nonlinear wave equations. Because the main purpose of these applications is to give an idea of how to handle anisotropic systems of wave equations, less detail will be given than is found in Section~\ref{sec:anisotropic}. The first result of this Section, Proposition~\ref{prop:localizednonlinearity}, deals with a localized nonlinearity. 
The second result, Proposition~\ref{prop:cubic}, deals with a general cubic nonlinearity in $3 + 1$ dimensions. Both of these problems were described in Section~\ref{sec:simpleintro}. Both of the following results are in $3 + 1$ dimensions. We shall use Proposition~\ref{prop:decay} for the application in Section~\ref{sec:localizednonlinearity}, and we shall use the version of Proposition~\ref{prop:decay} adapted to Lemma~\ref{lem:bilinint} in Section~\ref{sec:cubic} (this is described after Proposition~\ref{prop:decay}). In order to do so, we need to assume pointwise decay of solutions to the homogeneous wave equation for a suitable class of initial data. We shall use the rates given by the fundamental solution, meaning that we shall assume that solutions arising from, say, $C^2$ initial data supported on the ball of radius $R$ are supported where $|u| \le R$ and decay like ${1 \over (1 + t)}$. This is like taking $n = 3$ and, schematically, $p = \infty$ in \eqref{eq:auxdec}. We do not require decay this strong in $u$ for solutions to the homogeneous equation, but we shall assume it for simplicity. Before turning to these applications, we note that these examples require similar analysis to that of anisotropic systems of wave equations, which are considered in Section~\ref{sec:anisotropic}. Both examples require an analysis of the geometry involved, which is the most technical part of Section~\ref{sec:anisotropic}. More precisely, the example in Section~\ref{sec:localizednonlinearity} exhibits how angular localization can help control nonlinear effects, and the example in Section~\ref{sec:cubic} exhibits how it can be important to understand the geometry of intersecting cones. There are analogous considerations that go into analyzing anisotropic systems of wave equations in Section~\ref{sec:anisotropic}. \subsection{Global stability with a localized nonlinearity} \label{sec:localizednonlinearity} We recall the setting of \eqref{eq:simpappintro1}.
The function $r_\gamma$, defined by $r_\gamma^2 (t,x,y,z) = (x - t)^2 + y^2 + z^2$, measures the radial distance from the null geodesic in the $t$, $x$ plane emanating from the spacetime origin. Then, for $\chi$ a smooth cutoff function equal to $1$ for $|x| \le 1$ and equal to $0$ for $|x| \ge 2$, we are interested in solving \eqref{eq:simpappintro1}, which we recall is given by \begin{equation} \label{eq:beameq} \begin{aligned} \Box \psi = -\chi(r_\gamma) (\partial_t \psi)^2 \end{aligned} \end{equation} for small data compactly supported in the unit ball. We have the following Proposition. \begin{proposition} \label{prop:localizednonlinearity} There exist $\epsilon_0 > 0$ and $N \in \mathbb{N}$ such that, for all $\epsilon \le \epsilon_0$, the trivial solution to \eqref{eq:beameq} is globally nonlinearly stable with respect to perturbations $\psi(0,x) = \psi_0 (x)$ and $\partial_t \psi(0,x) = \psi_1 (x)$ which are compactly supported in the unit ball and have that $\Vert \partial \psi_0 \Vert_{H^N} + \Vert \psi_1 \Vert_{H^N} \le \epsilon$. Moreover, the solutions to this equation satisfy the decay estimate \[ |\partial_t \partial^i \psi| \le {C \epsilon \over 1 + t} \] for all $|i| \le {3 N \over 4}$. \end{proposition} \begin{proof} We shall take as bootstrap assumptions an energy bound on $\psi$ as well as a pointwise bound. We let $T$ be the maximal real number such that \begin{enumerate} \item We have that \begin{equation} \label{eq:bootstrapenergylocalized} \begin{aligned} \sup_{0 \le t \le T} \Vert \partial \psi \Vert_{H^N (\Sigma_t)} \le C \epsilon^{{3 \over 4}} (1 + t)^{\sqrt{\epsilon}}. \end{aligned} \end{equation} \item We have that \begin{equation} \label{eq:bootstrappointwiselocalized} \begin{aligned} |\partial_t \partial^i \psi| (t,x) \le {C \epsilon^{{3 \over 4}} \over 1 + t} \end{aligned} \end{equation} for all $0 \le t \le T$ and $|i| \le {3 N \over 4}$. \end{enumerate} We shall now recover both of these bootstrap assumptions.
The energy will be recovered using an energy estimate, Gronwall's inequality, and the pointwise bootstrap assumptions, while the pointwise bootstrap assumptions will be recovered using Proposition~\ref{prop:decay}. We first recover the bootstrap assumptions for the energy. Let $0 \le s \le T$. We commute the equation \eqref{eq:beameq} with $\partial^i$ for $|i| \le N$. Then, a $\partial_t$ energy estimate gives us that \begin{equation} \label{eq:enestbeam} \begin{aligned} \Vert \partial \partial^i \psi \Vert_{L^2 (\Sigma_s)}^2 = \Vert \partial \partial^i \psi \Vert_{L^2 (\Sigma_0)}^2 + 2 \int_0^s \int_{\Sigma_t} -\partial^i (\chi (r_\gamma) (\partial_t \psi)^2) (\partial_t \partial^i \psi) d x d t. \end{aligned} \end{equation} We note that at least one factor of $\partial_t \psi$ will have fewer than ${3 N \over 4}$ derivatives on it after applying the product rule to $\partial^i$. Thus, using the pointwise bootstrap assumptions in \eqref{eq:enestbeam} gives us that \[ E(s) \le E(0) + C \epsilon^{{3 \over 4}} \int_0^s {1 \over 1 + t} E(t) d t, \] where \[ E(t) = \sum_{j \le i} \int_{\Sigma_t} (\partial_t \partial^j \psi)^2 + (\partial_x \partial^j \psi)^2 + (\partial_y \partial^j \psi)^2 + (\partial_z \partial^j \psi)^2 d x. \] An application of Gronwall's inequality then gives us that $E(s) \le 2 \epsilon^2 (1 + s)^{\sqrt{\epsilon}}$ for $\epsilon$ sufficiently small, as desired. We shall now recover the pointwise bootstrap assumptions. By Proposition~\ref{prop:decay}, it suffices to show that \[ M [\partial^i \psi] (s,x) \le {C \epsilon \over 1 + s} \] for all $0 \le s \le T$ and $|i| \le {3 N \over 4}$, where $M$ is as in Proposition~\ref{prop:decay}.
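For the reader's convenience, we record the Gronwall computation used to recover the energy bound above: the integral inequality yields
\[
E(s) \le E(0) \exp \left (C \epsilon^{{3 \over 4}} \int_0^s {1 \over 1 + t} d t \right ) = E(0) (1 + s)^{C \epsilon^{{3 \over 4}}},
\]
and, because $E(0)$ is of size $\epsilon^2$, taking $\epsilon$ sufficiently small that $C \epsilon^{{3 \over 4}} \le \sqrt{\epsilon}$ gives the desired bound $E(s) \le 2 \epsilon^2 (1 + s)^{\sqrt{\epsilon}}$.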
Now, because the auxiliary functions $f$ satisfy the pointwise decay we have assumed for solutions of the homogeneous wave equation, we see that it suffices to control \[ \int_0^s \int_{\Sigma_t} -\partial^j \partial^i (\chi (r_\gamma) (\partial_t \psi)^2) (\partial_t f) d x d t \] because the terms involving initial data in $M$ in Proposition~\ref{prop:decay} are controlled by ${C \epsilon \over 1 + t}$. Now, after interpolating between the pointwise bootstrap assumptions and the energy bootstrap assumptions, we have that \[ |\partial^j \partial^i (\chi (r_\gamma) (\partial_t \psi)^2)| (t,x) \le {C \epsilon^{3 \over 2} \over (1 + t)^{2 - \delta}} \] for any $\delta > 0$ as long as $N$ is taken sufficiently large as a function of $\delta$. Thus, taking $\tilde{\chi} (x)$ to be a smooth, positive function equal to $1$ for $|x| \le 5$ and equal to $0$ for $|x| \ge 10$, we have that \begin{equation} \label{eq:errorintlocalized} \begin{aligned} \left |\int_0^s \int_{\Sigma_t} -\partial^j \partial^i (\chi (r_\gamma) (\partial_t \psi)^2) (\partial_t f) d x d t \right | \le C \epsilon^{{3 \over 2}} \int_0^s \int_{\Sigma_t} \tilde{\chi} (r_\gamma) {1 \over (1 + t)^{2 - \delta}} {1 \over 1 + s - t} d x d t \\ \le C \epsilon^{{3 \over 2}} \Vert \tilde{\chi} \Vert_{L^1} \int_0^s {1 \over (1 + t)^{2 - \delta}} {1 \over 1 + s - t} d t \le C \epsilon^{{3 \over 2}} {1 \over 1 + s}, \end{aligned} \end{equation} as desired. \end{proof} Examining this proof, we see that the localization given by $\chi$ is fundamental. It is interesting to note that we can allow the support of $\chi$ to expand in $t$ in the angular directions as long as the rate is smaller than $t^{{1 \over 2}}$, which is the wave packet scaling. An examination of these terms in the absence of the cutoff shows that the wave packet scaling also appears naturally when analyzing the example \[ \Box \phi = -(\partial_t \phi)^2 \] studied by John in \cite{Joh81}, which leads to blow up.
This gain in localization is similar to what will happen for anisotropic systems of wave equations in Section~\ref{sec:anisotropic}. We recall that this corresponds to an equation of the form \[ \Box \phi = -\chi \left (r_\gamma \left (x,{y \over (1 + t)^\omega},{z \over (1 + t)^\omega},t \right ) \right ) (\partial_t \phi)^2, \] where $\omega$ denotes the rate at which the support of $\chi$ expands. If we allow the support of $\chi$ to expand at a rate of $t^{{1 \over 2}}$ in the angular directions (i.e., if we take $\omega = {1 \over 2}$), it still appears as though there is a substantial gain in the volume of interaction. Indeed, the total volume of worst interaction (i.e., the volume of intersection of the sphere with the cutoff) will be $t$ instead of $t^2$, which is what it usually is. However, wave packets that propagate along the null geodesic at the center of the cutoff will still experience amplification from interactions on a set of maximal measure. This can be seen by examining the integrals in \eqref{eq:errorintlocalized} that come from picking auxiliary multipliers whose data are posed along this null geodesic. Thus, because these wave packets behave as if the nonlinearity was just $-(\partial_t \phi)^2$, the example that John studied, we cannot close the argument, and we believe that this may in fact still lead to blow up. We note that we have allowed the support of $\chi$ to expand only in the angular directions in the sense that the $y$ and $z$ directions are roughly angular in the region of the support of $\chi$. If we allow the support to expand in the radial direction as well (this corresponds roughly to $x$), the analysis must be slightly different, as the solution should decay faster away from the light cone. \subsection{Global stability with a general cubic nonlinearity} \label{sec:cubic} We now turn to the second of the simple applications. We have the following proposition. 
\begin{proposition} \label{prop:cubic} Let $\psi$ satisfy the nonlinear wave equation \begin{equation} \begin{aligned} \Box \psi = (\partial_t \psi)^3 \end{aligned} \end{equation} in $\mathbb{R}^{3 + 1}$ with initial data given by $\psi(0,x) = \psi_0 (x)$ and $\partial_t \psi(0,x) = \psi_1 (x)$. We assume that $\psi_0 (x)$ and $\psi_1 (x)$ are smooth functions supported in the unit ball in $\Sigma_0$ with $H^{N + 1}$ norm of size $\epsilon$, where the minimum size of $N$ is determined by the proof. As long as $\epsilon$ is sufficiently small, the above perturbations converge back to the trivial solution pointwise at a rate of ${1 \over 1 + t}$. \end{proposition} We note that the same proof works for a nonlinearity of $h (\partial_t \psi)^3$ where $h$ is an arbitrary function whose $C^N$ norm is bounded. \begin{proof} We shall take as bootstrap assumptions energy and pointwise bounds on $\psi$. Let $\delta > 0$ be some sufficiently small real number. We let $T$ be the maximal real number such that, for all $0 \le t \le T$, \begin{enumerate} \item we have that \begin{equation} \label{eq:cubicpointwisebootstrap} \begin{aligned} |\partial^\alpha \psi| (t,x) \le {C \epsilon^{{3 \over 4}} \over (1 + t) (1 + |u|^{1 - \delta})} \end{aligned} \end{equation} for $|\alpha| \le {N \over 2} + 1$ and where $\partial^\alpha$ contains at most a single $t$ derivative, \item we have that \begin{equation} \label{eq:cubicenbootstrap} \begin{aligned} \Vert \partial^{\alpha} \partial \psi \Vert_{L^2 (\Sigma_t)} \le C \epsilon^{{3 \over 4}} \end{aligned} \end{equation} for all $|\alpha| \le N$. \end{enumerate} We shall recover that, for all $t$ for which the above estimates hold true, we in fact have the stronger estimates coming from replacing $\epsilon^{{3 \over 4}}$ in the bootstrap assumptions with $C \epsilon$. This will close the bootstrap argument, which will complete the proof of Proposition~\ref{prop:cubic}.
In order to effectively analyze this problem, it will be useful to introduce some notation. We shall denote by $(t,r,\theta)$ usual polar coordinates on $\mathbb{R}^{3 + 1}$ where $t$ is the usual time coordinate, $r^2 = x^2 + y^2 + z^2$, and $\theta$ denotes coordinates on $S^2$. There are of course null coordinates $(v,u,\theta)$ which are defined by $v = t + r$ and $u = t - r$. Because we will use auxiliary multipliers as is described in Section~\ref{sec:decay}, it will also be useful to introduce coordinates adapted to these auxiliary multipliers. We take some point $(s,x_0,y_0,z_0)$ as the center of the ball in $\Sigma_s$ where the data for the auxiliary multiplier will be supported. Following the notation in Section~\ref{sec:decayestimates}, we denote by $\tau$ the $u$ coordinate of this point, meaning that $\tau = s - \sqrt{x_0^2 + y_0^2 + z_0^2}$. We shall denote by $(t,r',\theta')$ polar coordinates adapted to this point (meaning that $(r')^2 = (x - x_0)^2 + (y - y_0)^2 + (z - z_0)^2$), and we shall denote by $(v',u',\theta')$ null coordinates adapted to this point. Because we are thinking of solving for $f$ backwards in time, we are actually setting $v' = s - t + r'$ and $u' = s - t - r'$. Thus, for $0 \le t \le s$, we have that $f$ is compactly supported in $u'$ if we use the strong Huygens principle. Because $\psi$ decays in $u$ and $f$ decays in $u'$, it is additionally useful to introduce coordinates which are well adapted to how both of these functions decay. Thus, it makes sense to use coordinates given by $(t,r,r',\phi)$ and $(t,u,u',\phi)$. Given level sets of $r$ and $r'$, we have that the intersection of these two spheres is either empty, a circle, or a point. When it is a circle, $\phi$ denotes an angular coordinate on this circle. The coordinates degenerate when the intersection is a point, but this is a set of measure $0$. 
The volume form of $\mathbb{R}^{3 + 1}$ is comparable to \begin{equation} \label{eq:dtdrdr'dphi} \begin{aligned} {r r' \over R} d t \wedge d r \wedge d r' \wedge d \phi, \end{aligned} \end{equation} and also to \begin{equation} \label{eq:dtdudu'dphi} \begin{aligned} {r r' \over R} d t \wedge d u \wedge d u' \wedge d \phi, \end{aligned} \end{equation} where $R^2 = (x_0)^2 + (y_0)^2 + (z_0)^2$. We refer the reader to \cite{AndPas19} where this fact is proved and where these coordinates are described in more detail. With these considerations in hand, we can now turn to recovering the bootstrap assumptions. It is clear that the assumed bootstrap assumptions \eqref{eq:cubicenbootstrap} and \eqref{eq:cubicpointwisebootstrap} directly recover the energy bootstrap assumptions. Indeed, if we commute the equation $N$ times with unit partial derivatives in order to propagate $\Vert \partial^\alpha \partial \psi \Vert_{L^2 (\Sigma_t)}$ for all $|\alpha| \le N$, then, in the nonlinearity, at most ${N \over 2} + 1$ derivatives can fall on the term with the second most derivatives. We can, thus, put the term with the most derivatives in $L^2$, and the pointwise decay for the two remaining terms is enough to give us a convergent integral. We must now recover the pointwise bootstrap assumptions; we shall do so on some $\Sigma_s$ with $0 \le s \le T$. We shall use a version of Proposition~\ref{prop:decay} in order to do so. In particular, we shall use one adapted to Lemma~\ref{lem:bilinintM} instead of Lemma~\ref{lem:bilinenM}. This results in pointwise decay for $\psi$ itself and not just $\partial_t \psi$. The necessary result analogous to Proposition~\ref{prop:decay} and the duality argument this requires follow in similar fashions to the ones proved in Section~\ref{sec:decay}. We are, thus, using $f$ as a multiplier for the wave equation as opposed to $\partial_t f$.
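For the reader's convenience, we sketch why the spatial part of the volume form in \eqref{eq:dtdrdr'dphi} arises. Working in cylindrical coordinates $(\zeta,\rho,\phi)$ adapted to the axis through the origin and the point $(x_0,y_0,z_0)$, we have that $r^2 = \zeta^2 + \rho^2$ and $(r')^2 = (\zeta - R)^2 + \rho^2$, so a direct computation gives
\[
\left |\det {\partial (r,r') \over \partial (\zeta,\rho)} \right | = {\rho R \over r r'}.
\]
Because $d x \wedge d y \wedge d z = \rho \, d \zeta \wedge d \rho \wedge d \phi$, inverting this Jacobian factor gives $d x \wedge d y \wedge d z = {r r' \over R} d r \wedge d r' \wedge d \phi$ wherever the change of coordinates is nondegenerate.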
In order to implement this strategy, and following the notation above, we take some point $(s,x_0,y_0,z_0)$ in $\Sigma_s$. We recall that $\tau$ is the $u$ coordinate of this point. We consider an auxiliary solution $f$ to the wave equation whose initial data are supported in a unit ball centered at this point in $\Sigma_s$. Now, we must recover pointwise bounds up to order ${N \over 2} + 1$. More precisely, we must show that \begin{equation} \begin{aligned} |\partial^\alpha \psi| \le {C \epsilon \over (1 + s) (1 + |\tau|^{1 - \delta})}, \end{aligned} \end{equation} where $|\alpha| \le {N \over 2} + 1$. We must control error integrals of the kind found in \eqref{eq:reqerrorbounds} with $f$ instead of $\partial_t f$. This provides pointwise bounds for $\partial^\alpha \psi$ after commuting with $\partial^\alpha$, and then further commuting by $\partial^\beta$ for all $|\beta| \le k + n + 1 = 6$. Thus, in order to show decay for $|\partial^\alpha \psi|$, we must commute the equation by $\partial^\beta \partial^\alpha$ for all $|\beta| \le 6$ where $\beta$ consists of only spatial derivatives. We consider the worst term in this commutation, which is when all of the derivatives fall on the same term in the nonlinearity. We have that \begin{equation} \begin{aligned} \Box \partial^\beta \partial^\alpha \psi = N + (\partial_t \psi)^2 (\partial^\beta \partial^\alpha \partial_t \psi), \end{aligned} \end{equation} where the other terms in $N$ can be treated in either the same way as we shall handle this remaining term, or in even easier ways. Now, we know that we can write \begin{equation} \begin{aligned} \partial^\beta \partial^\alpha \partial_t \psi = \partial^\beta \partial^{\alpha'} \partial_i \partial_t \psi, \end{aligned} \end{equation} where $\partial_i$ is some spatial derivative.
If $\partial^{\alpha'}$ contains a time derivative (of which it can have at most one), we can rewrite this as \begin{equation} \begin{aligned} \partial^\beta \partial_i \partial^{\alpha''} \psi + N, \end{aligned} \end{equation} where $N$ consists of higher order nonlinearities after using the equation. These terms are all better, so we disregard them. If it does not contain a time derivative, we can incorporate the $\partial_t$, giving us \begin{equation} \begin{aligned} \partial^\beta \partial_i \partial^{\alpha''} \psi. \end{aligned} \end{equation} In either case, we note that $\partial^{\alpha''} \psi$ is something we can assume pointwise bounds for using the bootstrap assumptions. This means, of course, that taking any ball $B$ of radius $2$ in some $\Sigma_t$ with $t \le s$ whose center has $u$ coordinate $u$, we have that \begin{equation} \label{eq:cubicenball1} \begin{aligned} \Vert \partial^{\alpha''} \psi \Vert_{L^2 (B)} \le {C \epsilon^{{3 \over 4}} \over (1 + t) (1 + |u|^{1 - \delta})}. \end{aligned} \end{equation} Similarly, using the bootstrap assumptions on the energy, we have that \begin{equation} \label{eq:cubicenball2} \begin{aligned} \Vert \partial^\gamma \partial^\beta \partial_i \partial^{\alpha''} \psi \Vert_{L^2 (B)} \le C \epsilon^{{3 \over 4}}, \end{aligned} \end{equation} for all $\gamma$ such that $|\gamma| + |\beta| + 1 + |\alpha''| \le N$, where $\partial^\gamma$ consists of only spatial derivatives. Now, using Sobolev embedding in $\mathbb{R}^3$, we know that \begin{equation} \begin{aligned} \Vert \partial^\beta \partial_i (\chi \partial^{\alpha''} \psi) \Vert_{L^\infty} \le C \Vert \partial^\beta \partial_i (\chi \partial^{\alpha''} \psi) \Vert_{H^2 (\Sigma_t)}, \end{aligned} \end{equation} where $\chi$ is a smooth cutoff function equal to $1$ in the ball of radius $1$ concentric with $B$, decaying monotonically to $0$, and equal to $0$ outside of $B$.
Moreover, we have that \begin{equation} \begin{aligned} \Vert \partial^\beta \partial_i (\chi \partial^{\alpha''} \psi) \Vert_{H^2 (\Sigma_t)} \le \Vert \chi \partial^{\alpha''} \psi \Vert_{H^9 (\Sigma_t)}, \end{aligned} \end{equation} where we are using the fact that $|\beta| \le 6$. Now, we have that $\Vert \chi \partial^{\alpha''} \psi \Vert_{L^2 (\Sigma_t)} \le {C \epsilon^{{3 \over 4}} \over (1 + t) \left (1 + |u|^{1 - \delta} \right )}$ by \eqref{eq:cubicenball1} above, and we additionally have that $\Vert \chi \partial^{\alpha''} \psi \Vert_{H^{{N \over 2} - 1}} \le C \epsilon^{{3 \over 4}}$ by \eqref{eq:cubicenball2} above. Interpolating and with $N$ sufficiently large in terms of $\delta$ gives us that \begin{equation} \begin{aligned} \Vert \chi \partial^{\alpha''} \psi \Vert_{H^9 (\Sigma_t)} \le {C \epsilon^{{3 \over 4}} \over (1 + t)^{1 - \delta} \left (1 + |u|^{(1 - \delta)^2} \right )}, \end{aligned} \end{equation} meaning that we have that \begin{equation} \begin{aligned} |\partial^\beta \partial_i (\chi \partial^{\alpha''} \psi)| \le {C \epsilon^{{3 \over 4}} \over (1 + t)^{1 - \delta} \left (1 + |u|^{(1 - \delta)^2} \right )}. \end{aligned} \end{equation} Now, because $\chi = 1$ in a ball of radius $1$ concentric with $B$, this implies that \begin{equation} \begin{aligned} |\partial^\beta \partial_i \partial^{\alpha''} \psi| \le {C \epsilon^{{3 \over 4}} \over (1 + t)^{1 - \delta} \left (1 + |u|^{(1 - \delta)^2} \right )} \end{aligned} \end{equation} on that same ball. Because this can be done for any such ball, we see that we can use the pointwise estimates for $\partial^\beta \partial^\alpha \psi$ up to a loss of $t^\delta$ and $u^\delta$.
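Schematically, the interpolation used above is the standard Sobolev interpolation
\[
\Vert \chi \partial^{\alpha''} \psi \Vert_{H^9 (\Sigma_t)} \le C \Vert \chi \partial^{\alpha''} \psi \Vert_{L^2 (\Sigma_t)}^{1 - \theta} \Vert \chi \partial^{\alpha''} \psi \Vert_{H^{{N \over 2} - 1} (\Sigma_t)}^{\theta}, \qquad \theta = {9 \over {N \over 2} - 1},
\]
so that the decay in $t$ and $u$ of the $L^2$ norm is inherited up to the power $1 - \theta$. Taking $N$ sufficiently large makes $\theta$ smaller than any given $\delta > 0$, which is the source of the losses of $t^\delta$ and $u^\delta$.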
Thus, after commuting the equation by $\partial^\beta \partial^\alpha$ in order to apply Proposition~\ref{prop:decay}, we see that the error integral we must control is of the form \begin{equation} \begin{aligned} \left |\int_0^s \int_{\Sigma_t} N' f d x d t \right | + \left |\int_0^s \int_{\Sigma_t} (\partial_t \psi)^2 (\partial^\beta \partial^\alpha \partial_t \psi) f d x d t \right |, \end{aligned} \end{equation} where, once again, the term $N'$ can be controlled in the same way as the other. We can now plug in the pointwise bounds for $\partial_t \psi$ and $\partial^\beta \partial^\alpha \psi = \partial^\beta \partial_i \partial^{\alpha''} \psi$ from the above to get the desired result. Now, we note that $|f| \le {C \over 1 + s - t}$ by the asymptotics we have for the linear wave equation, and we note that it is additionally compactly supported in the $u'$ coordinate. Thus, going into $(t,r,r',\phi)$ coordinates, we have that the error integral is controlled by \begin{equation} \begin{aligned} C \epsilon^{{9 \over 4}} \int_{{\tau \over 10}}^{s - {\tau \over 10}} \int_0^{s + 2} \int_{s - t - 5}^{s - t + 5} \int_0^{2 \pi} {1 \over (1 + t)^{3 - \delta} (1 + |u|^{{5 \over 2}})} {1 \over 1 + s - t} {(1 + s - t) (1 + t) \over s + 1} d \phi d r' d r d t \end{aligned} \end{equation} as long as $\tau \le {9 s \over 10}$ (we are using that, when this is satisfied, we have that $R$ is comparable to $s$ in \eqref{eq:dtdrdr'dphi}). We are also assuming that $\delta$ is sufficiently small in order to bound the losses in decay in $u$, which depend on $\delta$, by ${1 \over 2}$. The integral bounds on $r'$ come from the fact that $f$ is compactly supported with respect to $u'$. The fact that the lower bound on the $t$ integral is ${\tau \over 10}$ and the upper bound is $s - {\tau \over 10}$ deserves explanation. We are trying to show that $|\partial^\alpha \psi|$ is controlled on $\Sigma_s$ at a point whose $u$ coordinate is given by this $\tau$.
Thus, the initial data for $f$ is prescribed in the ball of radius $1$ centered at this point. Because of this, and using the strong Huygens principle for $f$ and a domain of dependence argument for $\psi$, the support of $f$ and the support of $\psi$ do not intersect in $\Sigma_t$ for $t \le {\tau \over 10}$. These considerations hold only when $\tau \ge 20$. Otherwise, we simply take $0$ as the lower bound for this integral. These considerations will be worked out more rigorously in an analogous setting in Lemma~\ref{lem:rrbar}. Using the fact that \begin{equation} \begin{aligned} \int_0^{s + 2} {1 \over (1 + |u|^{{5 \over 2}})} d r \le C, \end{aligned} \end{equation} we can control the above integral by \begin{equation} \begin{aligned} {C \epsilon^{{9 \over 4}} \over 1 + s} \int_{{\tau \over 10}}^{s - {\tau \over 10}} {1 \over (1 + t)^{2 - \delta}} d t \le {C \epsilon^{{9 \over 4}} \over (1 + s) (1 + |\tau|^{1 - \delta})}. \end{aligned} \end{equation} We now consider the region $\tau \ge {9 s \over 10}$. For this, we further decompose into two regions. Without loss of generality, we have that the data for $f$ is supported in a ball whose center has $y$ and $z$ coordinates equal to $0$, and we have that $(r')^2 = (x - a)^2 + y^2 + z^2$. We first consider the case where $a \ge 100$. Using the $(t,u,u',\phi)$ coordinate system, we have that the error integral is controlled by \begin{equation} \begin{aligned} C \epsilon^{{9 \over 4}} \int \int \int \int {1 \over (1 + t)^{3 - \delta} (1 + |u|^{{5 \over 2}})} {1 \over 1 + s - t} {(1 + s - t) (1 + t) \over a} d \phi d u d u' d t, \end{aligned} \end{equation} where we shall establish the bounds of integration shortly. We first note that the integral in $u'$ can always be taken from $-5$ to $5$ because $f$ is compactly supported in $u'$. Then, we shall break up the integral in $u$ into pieces of length $10$.
More specifically, we can control the above integral by \begin{equation} \begin{aligned} C \epsilon^{{9 \over 4}} \sum_{i = 0}^\infty \int_{5 (i - 1)}^{5 (i + 1)} \int \int \int_{-5}^5 {1 \over (1 + t)^{3 - \delta} (1 + |5 i|^{{5 \over 2}})} {1 \over 1 + s - t} {(1 + s - t) (1 + t) \over a} d u' d t d \phi d u. \end{aligned} \end{equation} Now, we note that the bounds for $\phi$ are of course given by $0$ and $2 \pi$. The final range we must establish is that of $t$. We take the line $t = 5 (i + 1) + x$ and $t = s + a - x + 5$ in the plane where $y = z = 0$, and we calculate the $t$ coordinate of the intersection. This is given by $t = {s + a + 5 (i + 2) \over 2}$. Similarly, we take the line $t = 5 (i - 1) - x$ and the line $t = s - a + x - 5$ and calculate the $t$ coordinate of the intersection. This is given by $t = {s - a + 5 (i - 2) \over 2}$. These give us the bounds of integration, as we know that the lowest and highest $t$ coordinates must occur on the plane where $y = z = 0$ (this is considered more carefully in an analogous situation in Lemma~\ref{lem:rtr's-t}). Thus, the integral is controlled by \begin{equation} \begin{aligned} C \epsilon^{{9 \over 4}} \sum_{i = 0}^\infty \int_{5 (i - 1)}^{5 (i + 1)} \int_{{s - a + 5 (i - 2) \over 2}}^{{s + a + 5 (i + 2) \over 2}} \int_0^{2 \pi} \int_{-5}^5 {1 \over (1 + t)^{3 - \delta} \left (1 + |5 i|^{{5 \over 2}} \right )} {1 \over 1 + s - t} {(1 + s - t) (1 + t) \over a} d u' d t d \phi d u. \end{aligned} \end{equation} Now, we note that the length of the $t$ integration is always comparable to $a$ by the above bounds (note that $a \ge 100$). Moreover, in this region, both $1 + t$ and $1 + s - t$ are comparable to $s$. Thus, we can control the above integral by \begin{equation} \begin{aligned} C \epsilon^{{9 \over 4}} \sum_{i = 0}^\infty {1 \over 1 + |5 i|^{{5 \over 2}}} {1 \over (1 + s)^{2 - \delta}}, \end{aligned} \end{equation} giving us the desired result.
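For the reader's convenience, we note that the intersection computations above are elementary. Adding the equations $t = 5 (i + 1) + x$ and $t = s + a - x + 5$ eliminates $x$ and gives
\[
2 t = s + a + 5 (i + 2),
\]
while adding $t = 5 (i - 1) - x$ and $t = s - a + x - 5$ gives $2 t = s - a + 5 (i - 2)$. In particular, the length of the interval of $t$ integration is $a + 10$, which is comparable to $a$ because $a \ge 100$.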
The final region where $a \le 100$ can be controlled by comparing with a downward pointing light cone of thickness $200$ where the tip lies along the $t$ axis. \end{proof} \section{Anisotropic systems of wave equations} \label{sec:anisotropic} We now precisely describe the setting of the main application for the techniques introduced in this paper. We recall that \[ \Box = -\partial_t^2 + \partial_x^2 + \partial_y^2, \] and we introduce the notation \[ \Box' = -\partial_t^2 + \lambda_1^{-2} \partial_x^2 + \lambda_2^{-2} \partial_y^2. \] We assume that ${1 \over 4} \le \lambda_1^2 \le {1 \over 2}$ and $2 \le \lambda_2^2 \le 4$. We shall consider the following anisotropic system of wave equations in $2 + 1$ dimensions: \begin{equation} \label{eq:anisotropic} \begin{aligned} \Box \psi + (\partial_t \phi)^2 (\partial_t \psi) (\partial_t^2 \psi) = (\partial_t \phi) (\partial_t \psi)^2 + (\partial_t \psi) (\partial_t \phi)^2 + (\partial_t \psi)^4 \\ \Box' \phi + (\partial_t \psi)^2 (\partial_t \phi) (\partial_t^2 \phi) = (\partial_t \psi) (\partial_t \phi)^2 + (\partial_t \phi) (\partial_t \psi)^2 + (\partial_t \phi)^4. \end{aligned} \end{equation} We take smooth data for these equations $\psi(0,x) = \psi_0 (x)$, $\partial_t \psi(0,x) = \psi_1 (x)$, $\phi(0,x) = \phi_0 (x)$, and $\partial_t \phi(0,x) = \phi_1 (x)$ supported in the ball of radius ${1 \over 10}$. Moreover, we assume that the data are arbitrary functions with $\Vert \psi_0 \Vert_{H^{N + 1}} + \Vert \psi_1 \Vert_{H^N} + \Vert \phi_0 \Vert_{H^{N + 1}} + \Vert \phi_1 \Vert_{H^N} \le \epsilon$ for some $\epsilon$. With this, we are ready to state the main Theorem of this paper. \begin{theorem} \label{thm:mainthm} There exist $\epsilon_0 > 0$ sufficiently small and $N_0$ sufficiently large such that the trivial solution of \eqref{eq:anisotropic} is globally nonlinearly asymptotically stable with respect to the perturbations described above whenever $\epsilon \le \epsilon_0$ and $N \ge N_0$.
\end{theorem} Before proceeding to the proof of this result, we shall make some brief remarks. \begin{enumerate} \item \textbf{The fundamental restriction on $\lambda_1$ and $\lambda_2$ is that $\lambda_1 \ne 1$ and $\lambda_2 \ne 1$}. Indeed, the semilinear terms are cubic, which is critical in $2 + 1$ dimensions because solutions to the wave equation only decay like $t^{-{1 \over 2}}$. The condition that $\lambda_1 \ne 1$ and $\lambda_2 \ne 1$ produces a kind of null condition on interactions between these waves. When $\lambda_1 = 1$ or $\lambda_2 = 1$, the light cones are tangent instead of intersecting transversally, and parallel interactions in the tangential direction are not effectively suppressed. We are unsure what happens in this case, but would not be surprised if it led to blow up. In our setting, however, the null condition is thus that $\lambda_1 \ne 1$, $\lambda_2 \ne 1$, and that the same wave not appear cubed. \item The other restrictions on $\lambda_1$ and $\lambda_2$ are purely for convenience. We restricted to the case $\lambda_1 < 1 < \lambda_2$ to force the light cones to intersect. If $\lambda_1$ and $\lambda_2$ are both smaller than $1$ or both greater than $1$, the argument seems to still work, and the integrals appear to be easier to control. We further restricted to the case of ${1 \over 4} \le \lambda_1^2 \le {1 \over 2}$ and $2 \le \lambda_2^2 \le 4$ simply to make the equations symmetric under the change of coordinates described in Section~\ref{sec:coordinates}. Indeed, an appropriate rescaling of the coordinates makes $\Box'$ become $\Box$ (see Section~\ref{sec:coordinates}), and this restriction ensures that the form of the equation is preserved under this change of coordinates. We do, however, note that the estimates are not uniform in $\lambda_1$ and $\lambda_2$, and a variety of constants may degenerate as $\lambda_1$ or $\lambda_2$ approach $0$, $1$, or $\infty$.
When $\lambda_1$ and $\lambda_2$ are uniformly away from these quantities, we believe the estimates can be made uniform. \item The quasilinear terms in \eqref{eq:anisotropic} are present to show that the method is applicable to quasilinear equations. They could be taken to be more general. \item We have included the fourth order nonlinearity just to show that it can be handled. We can, of course, introduce higher order nonlinearities and higher order mixed nonlinearities and deal with them in the same way. We note, however, that the equation for $\psi$ does not have a nonlinear term depending only on $\phi$, and similarly for the equation for $\phi$. While we believe the techniques used in this paper are applicable to study this problem, they would require a new analysis. In fact, because the light cones for $\psi$ and $\phi$ have different causal structures, introducing these kinds of nonlinearities is related to removing the assumption of compact support, which we discuss below. Also, the derivatives in the nonlinearity are seen to all be $\partial_t$, and Proposition~\ref{prop:decay} is specifically adapted to showing decay for this derivative. Treating other derivatives in the nonlinearity requires relatively minor modifications to the considerations in Section~\ref{sec:decay}. \item We have not tried to optimize the results in this paper, and they are far from sharp in terms of, for example, regularity. Also, it seems as though the same strategy can work after weakening the assumptions. For example, it seems as though we can assume less decay for solutions to the homogeneous wave equation and a very similar argument will work, at least until we assume less than integrable decay in $u$ for first derivatives. With this assumption, the analysis may have to be changed. A very natural way to try to strengthen the result is to remove the assumption that the data are compactly supported (and to replace this with the assumption that the data are localized), which we now discuss.
\item The proof uses that the data are compactly supported in the unit ball. However, we believe that the strategy followed in the proof of Theorem~\ref{thm:mainthm} can be used in the more general setting of data that decay sufficiently rapidly away from the origin. This would, however, require an analysis of the different geometry arising from this situation (see Section~\ref{sec:closingpointwise} to see where the geometry is necessary, and see Sections~\ref{sec:geometry} and \ref{sec:SGeometry} to see where the geometry involved in the proof of Theorem~\ref{thm:mainthm} is studied). More precisely, we note that it seems as though all of the nonlinear terms except for $(\partial_t \psi) (\partial_t \phi)^2$ in the equation for $\psi$ (and the analogous term in the equation for $\phi$) can be treated using the considerations below along with the observation that, if $\tau < -100$, we in fact have that $|u|$ is comparable to $\tau$ in the entire support of the error integral (see Sections~\ref{sec:decayestimates} and \ref{sec:coordinates} where $u$ and $\tau$ are described). For the remaining term, it seems as though an analysis of how the geometry of the scaling vector field interacts with the light cone of $f$ when $\tau < 0$ is necessary. \item The estimates on the error integrals involving cubic interactions below are far from optimal. In Section~\ref{sec:anisotropicdescription}, some of the additional gains from studying the geometry more carefully will be described. \end{enumerate} \subsection{Description of the proof} \label{sec:anisotropicdescription} The proof follows the same strategy as the proofs in Section~\ref{sec:simpleapps}. Indeed, we use the strategy outlined in Section~\ref{sec:decayestimates} to prove decay. Thus, we have to control error integrals of the form \[ \int_0^s \int_{\Sigma_t} F(\psi,\phi) \partial_t f d x d t, \] where $f$ is the auxiliary solution to the wave equation (see Proposition~\ref{prop:decay}). 
Once again, we shall interpolate between the energy and pointwise estimates in order to effectively take advantage of the transversal intersection of the cones. The main differences arise from the fact that we shall combine this method with commuting with weighted vector fields from the vector field method. We shall commute with the scaling vector field $S = t \partial_t + r \partial_r$. Because we need a norm that allows us to interpolate with scaling as well, we must use a Morawetz estimate. To avoid discussing this in detail, we shall actually use a simpler estimate that is very similar to a Morawetz estimate (see Lemma~\ref{lem:spacetimeL2}). The important fact is that this estimate, like the Morawetz estimate, is sharp in terms of decay (up to a loss of $t^\delta$). Controlling the error integrals will require more geometric information than was needed for the simple applications in Section~\ref{sec:simpleapps}. However, as was noted in the remarks after Theorem~\ref{thm:mainthm}, we are still not bounding at least some of the terms optimally. One example of this stems from the fact that tubes formed by considering $u$ and $\overline{u}$ neighborhoods of thickness comparable to $1$ intersect the downward pointing light cone of $f$ in a set of small measure in $\tau$ (see Section~\ref{sec:coordinates} for a description of $\overline{u}$). Facts such as these, which can be seen by examining the intersections of the three cones more carefully, will not be needed in this argument. However, it may be useful to keep in mind for other applications that it seems to be possible to get stronger estimates. We shall now describe the proof in a bit more detail. We shall first focus on how the scaling vector field is used in the context of the strategy described in Section~\ref{sec:decay}. In order to apply Proposition~\ref{prop:decay} in a bootstrap argument, we must show that the nonlinear error integral has good powers of $\tau$. 
The vector field $S$ will help us in getting good powers of $\tau$. The main observation is that the downward opening light cone associated to the auxiliary multiplier $f$ is everywhere pierced by $S$ (see Figure~\ref{fig:lightconesaux}). This means that good derivatives of $f$ and $S$ span the tangent space of $\mathbb{R}^{2 + 1}$. We may, thus, write any vector field in terms of the frame consisting of $S$ and good derivatives of $f$. In particular, we may schematically write that $\partial_t f = \overline{\partial}_f f + \gamma S f$ where $\overline{\partial}_f$ denotes good derivatives of $f$. Thus, we may write that \[ \int_0^s \int_{\Sigma_t} F(\psi,\phi) \partial_t f d x d t = \int_0^s \int_{\Sigma_t} F(\psi,\phi) (\overline{\partial}_f f + \gamma S (f)) d x d t. \] The term involving good derivatives of $f$ is better than the one involving $S$, so to a first approximation, we can assume that this term satisfies improved estimates. For the term involving $S$, we can integrate by parts in $S$, giving us that, schematically, \[ \int_0^s \int_{\Sigma_t} F(\psi,\phi) \gamma S(f) d x d t = \int_0^s \int_{\Sigma_t} S(F(\psi,\phi)) \gamma f + S(\gamma) F(\psi,\phi) f + \gamma F(\psi,\phi) f d x d t. \] Now, because we may commute the equations for $\psi$ and $\phi$ with $S$, we note that $S(F(\psi,\phi)) \approx F(\psi,\phi)$ in the sense that any estimates that $F(\psi,\phi)$ satisfies are, to a first approximation, satisfied by $S(F(\psi,\phi))$. Thus, the gain comes from noting that $\gamma$ and $S(\gamma)$ satisfy good estimates in certain regions. More specifically, they can be shown to be of size ${1 \over \tau}$. Thus, we can use $S$ to gain a power of ${1 \over \tau}$, which helps in showing that the nonlinear error integrals are of the necessary size. Proving these estimates on $\gamma$ and $S(\gamma)$ involves studying the geometry of $S$ in relation to the various cones in question. This is carried out in Section~\ref{sec:SGeometry}. 
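As a model for the size of $\gamma$, we record the following schematic computation (ours, collapsed to the $(t,r)$ plane, with $\partial_t + \partial_r$ standing in for the good derivatives; the actual decomposition in the proof uses the frame adapted to $f$, but the mechanism is the same).

```latex
% Expand \partial_t in the frame \{\partial_t + \partial_r,\, S\},
% where S = t \partial_t + r \partial_r.  Writing
% \partial_t = \alpha (\partial_t + \partial_r) + \gamma S
% and matching coefficients of \partial_t and \partial_r gives
\[
\alpha + \gamma t = 1, \qquad \alpha + \gamma r = 0,
\]
% so that
\[
\gamma = {1 \over t - r} = {1 \over u}, \qquad \alpha = -{r \over u}.
\]
% In regions where |u| is comparable to \tau, the coefficient of S is
% therefore of size 1 / \tau, which is the gain described above.
```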
We now describe how we estimate the geometry involved. To control quartic and higher order nonlinearities, we must estimate the geometry coming from the intersection of a forward opening and downward opening cone. This is because the error integrals are of the form \[ \int_0^s \int_{\Sigma_t} (\partial_t \psi)^4 (\partial_t f) d x d t, \] and because $\psi$ decays away from a forward opening cone while $f$ decays away from a downward opening cone. To a first approximation, $\psi$ is supported in a forward pointing cone of thickness $1$ and $f$ is supported in a downward pointing cone of thickness $1$. If we take the intersection of this configuration with $\Sigma_t$, we are left with two annuli of thickness $1$, and it is natural to compute the area of intersection in order to control the interaction. Of course, these kinds of estimates depend on the parameters $s$ and $\tau$. These estimates are one example of the kind of analysis that must be done in order to control the nonlinearity. Such estimates are carried out in Section~\ref{sec:geometry}. The interactions between $\psi$ and $\phi$ can be controlled using the condition that $\lambda_1 \ne 1$ and $\lambda_2 \ne 1$. Geometrically, this means that either one cone is contained within the other, or that the two cones intersect transversally. This paper focuses on the case where the two light cones intersect transversally as this case is more interesting, although we note that a similar proof can be used in the other case. Thus, we restrict ourselves to this case. There are now two forward opening cones to consider, the one associated to $\psi$ and the one associated to $\phi$. If we take two such cones of thickness $1$ and look in $\Sigma_t$, we are left with two annuli of thickness $1$. One annulus is circular and the other is elliptical. When the cones intersect transversally, we note that the area of intersection of these annular regions is comparable to $1$.
Thus, decay away from the light cone gives localization to a small set, and the mechanism of improvement becomes similar to the case of the localized nonlinearity which was studied in Section~\ref{sec:localizednonlinearity} above. Indeed, the total area of the set of interaction in $\Sigma_t$ becomes $1$ instead of $t$, which is what it is between waves having the same light cone. Figure~\ref{fig:crosssections} shows the cross sections of the cones for different values of $\lambda_1$ and $\lambda_2$. The left configuration is representative of the case considered in this paper (i.e., $\lambda_1 < 1$ and $\lambda_2 > 1$). The second corresponds to $\lambda_1 = 1$, which we have just discussed. The third is a configuration where $\lambda_1 > 1$ and $\lambda_2 > 1$, and although it is not treated in this paper, we believe that similar ideas can be used in this case. These considerations also reveal why having $\lambda_1 \ne 1$ and $\lambda_2 \ne 1$ is important. If we consider the case where, say, $\lambda_1 = 1$, we can see that the area of intersection of the two annuli in $\Sigma_t$ is comparable to $\sqrt{t}$. Given that cubic nonlinearities are critical in $2 + 1$ dimensions, one may ask why the gain over $t$ is not enough. However, in this case, there exist wave packets which still experience amplification caused by interactions on sets with maximal measure. Indeed, when $\lambda_1 = 1$, we note that the two light cones intersect in two null lines. Wave packets propagating along those null lines will interact in essentially the same way as if the light cones were the same. This configuration looks very much like a $2 + 1$ dimensional version of the localized nonlinearity studied in Section~\ref{sec:localizednonlinearity} above, where we take $\omega = {1 \over 2}$. Because of this behavior, we believe that this configuration is likely to lead to blow up in finite time. 
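The two area claims above (intersection area comparable to $1$ in the transversal case, comparable to $\sqrt{t}$ in the tangent case) can be sanity checked numerically. The following sketch is ours and plays no role in the proof: it Monte Carlo samples the unit-thickness band around the circle $r = t$ in $\Sigma_t$ and measures how much of it also lies in the unit-thickness band around the ellipse $\overline{r} = t$, at the sample values $\lambda_1^2 = 1/4$, $\lambda_2^2 = 4$ (transversal) and $\lambda_1 = 1$, $\lambda_2 = 2$ (tangent).

```python
# Monte Carlo estimate (ours, for intuition only) of the area of the set
# {|r - t| <= 1/2} intersected with {|rbar - t| <= 1/2} in Sigma_t,
# where rbar = sqrt(lam1^2 x^2 + lam2^2 y^2).
import numpy as np

def band_intersection_area(t, lam1, lam2, n=400_000, seed=0):
    rng = np.random.default_rng(seed)
    # Sample (r, theta) uniformly over the circular band |r - t| <= 1/2;
    # the area element in polar coordinates is r dr dtheta.
    r = t + rng.uniform(-0.5, 0.5, n)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    x, y = r * np.cos(theta), r * np.sin(theta)
    rbar = np.sqrt(lam1**2 * x**2 + lam2**2 * y**2)
    hit = np.abs(rbar - t) <= 0.5
    # (measure of the (r, theta) box) * mean of r * indicator
    return 2.0 * np.pi * np.mean(r * hit)

# Transversal cones (lam1 < 1 < lam2): the area stays O(1) as t grows.
transversal = [band_intersection_area(t, 0.5, 2.0) for t in (100.0, 400.0)]
# Tangent cones (lam1 = 1): the area grows like sqrt(t).
tangent = [band_intersection_area(t, 1.0, 2.0) for t in (100.0, 400.0)]
```

With these parameters, the transversal areas stay essentially constant as $t$ quadruples, while the tangent areas roughly double, consistent with $\sqrt{t}$ growth.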
\begin{figure} \centering \begin{tikzpicture} \draw[color=blue,ultra thick] (-2,0) circle (1.5); \draw[color=red,ultra thick] (-2,0) ellipse (2 and 1); \draw[color=blue,ultra thick] (3.7,0) circle (1.5); \draw[color=red,ultra thick] (3.7,0) ellipse(1.5 and 1); \draw[color=blue,ultra thick] (9.4,0) circle (1.5); \draw[color=red,ultra thick] (9.4,0) ellipse(1 and 0.5); \end{tikzpicture} \caption{Cross sections of the light cones for $\phi$ and $\psi$ for different $\lambda_1$ and $\lambda_2$. More specifically, the blue circles represent intersections of the light cone of $\psi$ with $\Sigma_t$, and the red ellipses represent intersections of the light cone of $\phi$ with $\Sigma_t$.} \label{fig:crosssections} \end{figure} We hope it is clear from the above considerations that the geometry of the cones must be effectively analyzed. This was already present in the example considered in Section~\ref{sec:cubic}, and we in fact use some of the same observations that are found therein. However, the geometry requires more substantial analysis in the case of anisotropic systems of wave equations which we are now considering. \subsection{Notation, coordinates, and the parameters} \label{sec:coordinates} We take $\mathbb{R}^{2 + 1}$ with the usual $(t,x,y)$ coordinates. In these coordinates, we recall that the equations in question (see \eqref{eq:anisotropic}) are of the form \begin{equation} \begin{aligned} \Box \psi = -\partial_t^2 \psi + \partial_x^2 \psi + \partial_y^2 \psi = F(\partial_t \psi,\partial_t \phi,\partial_t^2 \psi), \\ \Box' \phi = -\partial_t^2 \phi + \lambda_1^{-2} \partial_x^2 \phi + \lambda_2^{-2} \partial_y^2 \phi = G(\partial_t \psi,\partial_t \phi,\partial_t^2 \phi). \end{aligned} \end{equation} We shall also use the coordinates $(t,r,\theta)$ and $(t,\overline{r},\overline{\theta})$. These coordinates are defined as follows.
The functions $r$ and $\theta$ are the usual polar coordinates on $\mathbb{R}^2$, meaning that $r = \sqrt{x^2 + y^2}$ and $\theta = \arctan \left ({y \over x} \right )$. We now consider the coordinates $(t,\overline{x},\overline{y})$ given by $\overline{x} = \lambda_1 x$ and $\overline{y} = \lambda_2 y$. We then have that $\partial_{\overline{x}} = \lambda_1^{-1} \partial_x$ and that $\partial_{\overline{y}} = \lambda_2^{-1} \partial_y$. Thus, in the $(t,\overline{x},\overline{y})$ coordinates, the equations can be written in the form \begin{equation} \begin{aligned} \Box \psi = -\partial_t^2 \psi + \lambda_1^2 \partial_{\overline{x}}^2 \psi + \lambda_2^2 \partial_{\overline{y}}^2 \psi &= F(\partial_t \psi,\partial_t \phi,\partial_t^2 \psi), \\ \Box' \phi = -\partial_t^2 \phi + \partial_{\overline{x}}^2 \phi + \partial_{\overline{y}}^2 \phi &= G(\partial_t \psi,\partial_t \phi,\partial_t^2 \phi). \end{aligned} \end{equation} We define $\overline{r} = \sqrt{\overline{x}^2 + \overline{y}^2} = \sqrt{\lambda_1^2 x^2 + \lambda_2^2 y^2}$, and we define $\overline{\theta} = \arctan \left ({\overline{y} \over \overline{x}} \right ) = \arctan \left ({\lambda_2 y \over \lambda_1 x} \right )$. We also introduce the functions $u = t - r$ and $\overline{u} = t - \overline{r}$. We have that \[ S = t \partial_t + x \partial_x + y \partial_y = t \partial_t + r \partial_r = t \partial_t + \overline{r} \partial_{\overline{r}}. \] This is a consequence of the fact that \[ r \partial_r = x \partial_x + y \partial_y = \lambda_1 x \lambda_1^{-1} \partial_x + \lambda_2 y \lambda_2^{-1} \partial_y = \overline{x} \partial_{\overline{x}} + \overline{y} \partial_{\overline{y}} = \overline{r} \partial_{\overline{r}}. \] More geometrically, there is a metric $g$ adapted to $\Box$ and a metric $\overline{g}$ adapted to $\Box'$. In $(t,x,y)$ coordinates, the metric $g$ is given by the constant coefficient diagonal matrix whose entries are $-1$, $1$, and $1$ along the diagonal.
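The identity $r \partial_r = \overline{r} \partial_{\overline{r}}$ above is Euler's identity for the $1$-homogeneous functions $r$ and $\overline{r}$. As a quick symbolic sanity check (ours, not needed for the proof), one can verify with a computer algebra system that $S$ fixes $r$, $\overline{r}$, and hence also $u = t - r$ and $\overline{u} = t - \overline{r}$:

```python
# Symbolic check (sympy) that S = t dt + x dx + y dy satisfies
# S(r) = r and S(rbar) = rbar (Euler's identity), and therefore
# S(u) = u and S(ubar) = ubar for u = t - r, ubar = t - rbar.
import sympy as sp

t, x, y, l1, l2 = sp.symbols('t x y lambda1 lambda2', positive=True)

def S(expr):
    # the scaling vector field applied to an expression
    return t * sp.diff(expr, t) + x * sp.diff(expr, x) + y * sp.diff(expr, y)

r = sp.sqrt(x**2 + y**2)
rbar = sp.sqrt(l1**2 * x**2 + l2**2 * y**2)

assert sp.simplify(S(r) - r) == 0
assert sp.simplify(S(rbar) - rbar) == 0
assert sp.simplify(S(t - r) - (t - r)) == 0        # S u = u
assert sp.simplify(S(t - rbar) - (t - rbar)) == 0  # S ubar = ubar
```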
The metric $\overline{g}$ is also constant coefficient and diagonal, but its entries are given by $-1$, $\lambda_1^2$, and $\lambda_2^2$. In $(t,\overline{x},\overline{y})$ coordinates, the metric $\overline{g}$ is constant coefficient and diagonal with entries $-1$, $1$, and $1$, while the metric $g$ is constant coefficient and diagonal with entries $-1$, $\lambda_1^{-2}$, and $\lambda_2^{-2}$. Because it will be necessary to consider the geometry involving the level sets of $u$ and $\overline{u}$ (these are cones opening to the future with circular and elliptical cross sections, respectively), we note that the level set $u = 0$ and the level set $\overline{u} = 0$ intersect in four straight lines emanating from the spacetime origin. These four straight lines each have a constant $\theta$ coordinate depending only on $\lambda_1$ and $\lambda_2$. These coordinates can be found by solving for $\tan(\theta) = {y \over x}$ in the equations \[ t^2 = x^2 + y^2 = \lambda_1^2 x^2 + \lambda_2^2 y^2. \] We shall denote the $\theta$ coordinates of these lines by $\Theta_1$, $\Theta_2$, $\Theta_3$, and $\Theta_4$, taking $0 < \Theta_1 < {\pi \over 2}$, ${\pi \over 2} < \Theta_2 < \pi$, $\pi < \Theta_3 < {3 \pi \over 2}$, and ${3 \pi \over 2} < \Theta_4 < 2 \pi$. We note that all four of these values are bounded uniformly away from half integer multiples of $\pi$ given the restrictions we have placed on $\lambda_1$ and $\lambda_2$. We similarly use $\overline{\Theta}$ and $\overline{\Theta}_i$ for $1 \le i \le 4$ to denote the $\overline{\theta}$ coordinates of these quantities. These quantities are related to the $\Theta_i$ because \[ \tan(\overline{\theta}) = {\overline{y} \over \overline{x}} = {\lambda_2 y \over \lambda_1 x} = {\lambda_2 \over \lambda_1} \tan(\theta). \] Once again, the function $f$ will always denote an auxiliary solution to the homogeneous wave equation $\Box f = 0$ or $\Box' f = 0$. Which equation $f$ solves will be clear from the context.
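The angles $\Theta_i$ above can be computed in closed form. The following worked example is ours, evaluated at the endpoint values $\lambda_1^2 = {1 \over 4}$ and $\lambda_2^2 = 4$ allowed by our restrictions.

```latex
% From t^2 = x^2 + y^2 = \lambda_1^2 x^2 + \lambda_2^2 y^2 we get
% (1 - \lambda_1^2) x^2 = (\lambda_2^2 - 1) y^2, so
\[
\tan^2 (\Theta_i) = {1 - \lambda_1^2 \over \lambda_2^2 - 1},
\]
% which is positive and finite precisely when \lambda_1 < 1 < \lambda_2.
% At the sample values \lambda_1^2 = 1/4 and \lambda_2^2 = 4, this gives
\[
\tan^2 (\Theta_i) = {3/4 \over 3} = {1 \over 4}, \qquad
\Theta_1 = \arctan \left ({1 \over 2} \right ),
\]
% and the relation \tan(\overline{\theta}) = (\lambda_2 / \lambda_1) \tan(\theta)
% then gives \overline{\Theta}_1 = \arctan(2).
```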
Because $f$ is the test function we use as a multiplier, it is natural to also introduce coordinates adapted to $f$. We introduce $r'$, $\theta'$, and $u'$ for $f$ when $f$ solves the equation $\Box f = 0$. When it solves $\Box' f = 0$, we introduce $\overline{r}'$, $\overline{\theta}'$, and $\overline{u}'$ instead, and these quantities are defined analogously (see below). Now, after rescaling $x$ and $y$ to $\overline{x}$ and $\overline{y}$, we note that we can always assume that $f$ solves $\Box f = 0$. Thus, in the following discussion, we shall make this assumption. We denote by $s$ the time slice in which data for $f$ is posed. We then denote by $\tau$ the $u$ coordinate of the center of the ball in $\Sigma_s$ where the data for $f$ is supported, and by $\Theta$ the $\theta$ coordinate of this point. If the center of this ball is given by $(s,a,b)$, we denote by $r'$ the radial coordinate away from this point, that is, $(r')^2 = (x - a)^2 + (y - b)^2$, and we denote by $\theta'$ the angular coordinate for this point normalized such that $\theta' = 0$ along the line $\theta = \Theta$, meaning that $\theta' = \arctan \left ({y - b \over x - a} \right ) - \Theta$. When proving geometric estimates that depend on $\tau$ and $s$, it will often be possible to assume that $b = 0$. When $b \ne 0$, we may of course redefine coordinates to make $b = 0$. We must be cautious, however, about circumstances where the geometry of the other light cone must be taken into account (rotating an ellipse does not produce the same ellipse because it changes the axes). Because we shall often not have to take the geometry of all three cones into account at the same time, this is still a useful observation. The only time we must consider the geometry of all three light cones for a geometric estimate (i.e., the geometry of the light cones for $\psi$, $\phi$, and $f$) is in Lemma~\ref{lem:dr'drbar}. Assuming that $b = 0$ is the same as assuming that $\Theta = 0$. 
The reader may also wish to consult Figure~\ref{fig:psifSigma_t} where a diagram of some of these quantities is given. We shall also introduce null coordinates adapted to $\psi$, $\phi$, and $f$. For $\psi$, the null coordinates are given by $(u,v,\theta)$ where $u = t - r$, $v = t + r$, and $\theta = \theta$. The null coordinate systems $(\overline{u},\overline{v},\overline{\theta})$ and $(u',v',\theta')$ are defined analogously. More precisely, we note that $\overline{v} = t + \overline{r}$, $\overline{u} = t - \overline{r}$, $v' = s - t + r'$, and $u' = s - t - r'$. We note that \[ S = t \partial_t + x \partial_x + y \partial_y = v \partial_v + u \partial_u = \overline{v} \partial_{\overline{v}} + \overline{u} \partial_{\overline{u}}. \] Because we often have to consider the functions $\psi$, $\phi$, and $f$, we shall use $\overline{\partial}_\psi$, $\overline{\partial}_\phi$, and $\overline{\partial}_f$ to denote good derivatives for $\psi$, $\phi$, and $f$, respectively. These are precisely the derivatives tangent to the light cones adapted to these functions. We shall also introduce the null frames $L = \partial_t + \partial_r$, $\underline{L} = \partial_t - \partial_r$, and $\partial_\theta = \partial_\theta$. The null frames $\overline{\underline{L}}$, $\overline{L}$, $\partial_{\overline{\theta}}$ and $\underline{L}'$, $L'$, $\partial_{\theta'}$ are defined analogously. We note that these are simply rescaled versions of the null coordinate vector fields (for example, $L$ is a constant rescaling of $\partial_v$ and $\underline{L}$ is a constant rescaling of $\partial_u$). The good derivatives $\overline{\partial}_\psi$ of $\psi$ consist of $L$ and $\partial_\theta$, and similarly for the good derivatives of $\phi$ and $f$. When doing estimates on the geometry in Sections~\ref{sec:geometry} and \ref{sec:SGeometry}, we can freely assume that $s$ is larger than some fixed constant. 
In the proof of Theorem~\ref{thm:mainthm}, we use $\epsilon$ to absorb all nonlinear contributions for $t \le {1 \over \delta_0^{10}}$. Thus, when analyzing the geometry in those sections in order to recover the pointwise estimates, we can assume that $s \ge {1 \over \delta_0^{10}}$. We shall sometimes use $O$ notation, in which a quantity $h$ is said to be $O(g)$ if $|h| \le C g$ for some constant $C$. There are some instances in which we will have expressions which are $O(\delta_0)$ (see below where the parameter $\delta_0$ is described). In these cases, the constant $C$ is understood to be independent of $\delta_0$. We shall have to commute with several vector fields. In the simple applications studied in Section~\ref{sec:simpleapps}, we only commuted with translation vector fields. However, we shall also have to use the scaling vector field. Thus, in this section, the admissible commutation fields will be strings of translation vector fields and scaling vector fields. We shall denote by $\Gamma^\alpha$ such a string of scaling vector fields and translation vector fields. This collection of vector fields is closed under Lie brackets because the commutator of two commutation fields is another commutation field. Indeed, translation vector fields commute with each other and the scaling vector field commutes with itself. We also have that the commutator between $S$ and a translation vector field is the negative of that translation vector field. Thus, for example, we have that \[ [S,\partial_t] = - \partial_t. \] This shows that the commutator of any two of our commutation fields is another commutation field. Thus, by induction, we have that \[ \Gamma^\alpha (\partial_t h) = (\partial_t \Gamma^\alpha h) + \sum_\gamma c_\gamma (\partial_t \Gamma^\gamma h) \] for an appropriate collection of $\gamma$ and where $c_\gamma = \pm 1$, and we moreover have that $|\gamma| \le |\alpha| - 1$. These facts will be used in the proof.
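The bracket computation can also be checked symbolically; the following snippet (ours, using sympy) verifies $[S,\partial_t] = -\partial_t$ acting on an arbitrary smooth function.

```python
# Symbolic verification (sympy) that [S, dt] = -dt, where
# S = t dt + x dx + y dy is the scaling vector field.
import sympy as sp

t, x, y = sp.symbols('t x y')
h = sp.Function('h')(t, x, y)

def S(expr):
    # the scaling vector field applied to an expression
    return t * sp.diff(expr, t) + x * sp.diff(expr, x) + y * sp.diff(expr, y)

bracket = S(sp.diff(h, t)) - sp.diff(S(h), t)  # [S, dt] applied to h
assert sp.simplify(bracket + sp.diff(h, t)) == 0  # equals -dt h
```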
Now, given data supported in the ball of radius $1$ whose $C^2$ norm is bounded by $10$, we note that the solution to the linear wave equation $\Box f = 0$ has that \begin{equation} \label{eq:assumeddecay0} \begin{aligned} |f| \le {C \over \sqrt{1 + s - t} (1 + |u'|)^{{1 \over 2}}}. \end{aligned} \end{equation} This follows from the fundamental solution. Then, by commuting the homogeneous equation with Lorentz fields, we have that \begin{equation} \label{eq:assumeddecay1} \begin{aligned} |\partial f| \le {C \over \sqrt{1 + s - t} (1 + |u'|)^{{3 \over 2}}}, \end{aligned} \end{equation} and we have that \begin{equation} \label{eq:assumeddecay2} \begin{aligned} |\overline{\partial}_f f| \le {C \over (1 + s - t)^{{3 \over 2}} (1 + |u'|^{{1 \over 2}})}. \end{aligned} \end{equation} Thus, we shall take $n = 2$ and $p = {3 \over 2}$ in Proposition~\ref{prop:decay}. The improved decay of good derivatives is important because we shall decompose $\partial_t$ in terms of $S$ and $\overline{\partial}_f$, as was described in Section~\ref{sec:anisotropicdescription}. Similarly, we note that the solution to the linear wave equation $\Box' f = 0$ with the same kind of data has that \begin{equation} \label{eq:assumeddecay3} \begin{aligned} |\partial_t f| \le {C \over \sqrt{1 + s - t} (1 + |\overline{u}'|)^{{3 \over 2}}}, \end{aligned} \end{equation} and that \begin{equation} \label{eq:assumeddecay4} \begin{aligned} |\overline{\partial}_f f| \le {C \over (1 + s - t)^{{3 \over 2}} (1 + |\overline{u}'|^{{1 \over 2}})}. \end{aligned} \end{equation} We shall use these decay rates, but an examination of the proof below reveals that less decay would be enough. The main result that would require modification is Lemma~\ref{lem:annuliu'dec}, but even there, we believe the modification is not substantial. Moreover, we emphasize once again that it suffices to take decay rates as a black box.
Indeed, all that is required is a proof of decay for the homogeneous equation with a sufficiently large class of test functions as data. The test functions we are taking are all smooth functions supported in a ball of radius $1$ with $C^2$ norm bounded by $10$. We finally note that we make use of three smallness parameters in the following proof. They are $\epsilon$, $\delta$, and $\delta_0$. The parameter $\epsilon$ can be chosen to depend on $\delta$, $\delta_0$, and the constants involved in the problem, and it controls the size of the initial data. It must be chosen sufficiently small for the bootstrap assumptions to be recovered. Because we are allowing $\epsilon$ to depend on $\delta$ and $\delta_0$, we also allow the expressions $C$ and $c$ to depend on $\delta$ and $\delta_0$ as well. The expressions $C$ and $c$ represent positive numbers which may be very large or small depending on some universal constants and on $\delta$ and $\delta_0$. More specifically, $C$ will represent something that may be very large (such as, for example, ${1 \over \delta_0}$), while $c$ will represent something that may be very small (such as $\delta_0$). There are a few places where it will be important to have constants that do not depend on $\delta_0$ so that we can get smallness relative to them by making $\delta_0$ smaller (see, for example, Lemma~\ref{lem:dr'drbar}). Whenever this is necessary, we shall explicitly say so. Now, the parameters $\delta \le {1 \over 2}$ and $\delta_0 \le {1 \over 2}$ must be chosen sufficiently small to make the following proof work. The parameter $\delta$ controls a loss in decay that comes from interpolation (see Lemma~\ref{lem:interpolation}), and it is also the parameter we have used to denote the (necessary unless restricting to a dyadic region) loss in decay from a modified Morawetz estimate (see Lemma~\ref{lem:spacetimeL2}). Meanwhile, the parameter $\delta_0$ is our universal smallness parameter for geometric estimates. 
For example, it will be beneficial to consider two regimes, one where $\tau$ is small compared to $s$ and another where it is comparable to $s$. We shall thus consider the cases $\tau \le \delta_0 s$ and $\tau \ge \delta_0 s$. There will be several other cases where we must compare quantities, and $\delta_0$ is the parameter we shall use to make this comparison. There exist bounds for $\delta$ and $\delta_0$ in the form of universal constants that make the following proof work, but we have not computed them. They can, however, be computed in principle. We finally note the regularity parameter $N$. This denotes the order of the $L^2$-based Sobolev space for the initial data. This parameter is allowed to depend on $\delta$, and we note that $N \rightarrow \infty$ as $\delta \rightarrow 0$. This is required in order to effectively interpolate (see Lemma~\ref{lem:interpolation}). However, for every fixed $\delta > 0$, we note that $N$ is a finite positive integer. The dependence of $N$ on $\delta$ can be explicitly computed (meaning that a lower bound for the regularity we require on the initial data can be computed), but we have not done so. \subsection{Additional linear estimate} \label{sec:Morawetzreplacement} We only require one additional linear estimate for this problem. In order to get a sharper result, it would be beneficial to use a Morawetz estimate, which schematically says that $\Vert r^{-{1 \over 2}} \partial \psi \Vert_{L_t^2 L_x^2}$ is controlled by the energy (the real estimate either requires a slight loss in the power of $r$ or a restriction to a dyadic region). We shall not discuss this estimate here, but will rather use a very easy estimate which suffices for this problem.
\begin{lemma} \label{lem:spacetimeL2} We have that \begin{equation} \begin{aligned} \int_{\Sigma_s} (1 + t)^{-\delta} \left [(\partial_t \psi)^2 + (\partial_x \psi)^2 + (\partial_y \psi)^2 \right ] d x + \int_0^s \int_{\Sigma_t} \delta (1 + t)^{-1 - \delta} \left [(\partial_t \psi)^2 + (\partial_x \psi)^2 + (\partial_y \psi)^2 \right ] d x d t \\ = \int_{\Sigma_0} (1 + t)^{-\delta} \left [(\partial_t \psi)^2 + (\partial_x \psi)^2 + (\partial_y \psi)^2 \right ] d x - 2 \int_0^s \int_{\Sigma_t} (\Box \psi) (1 + t)^{-\delta} (\partial_t \psi) d x d t. \end{aligned} \end{equation} \end{lemma} \begin{proof} This follows from simply using $(1 + t)^{-\delta} \partial_t \psi$ as a multiplier for the wave equation and integrating by parts. \end{proof} This estimate is useful because it is sharp in terms of decay up to a power of $t^{{\delta \over 2}}$ (the Morawetz estimate on dyadic regions is sharp). Moreover, it is a spacetime $L^2$ norm, meaning that it allows for interpolation with the scaling vector field $S$. \subsection{Estimates on geometry} \label{sec:geometry} We first record several observations which shall be used in Section~\ref{sec:closingpointwise}. These geometric estimates shall be needed when using $\partial_t f$ as a multiplier for the equation for $\psi$. In order to apply Proposition~\ref{prop:decay}, we must show that the error integrals are sufficiently small in terms of $s$ and $\tau$. The estimates recorded here will be used to control these error integrals. Thus, several of the following estimates depend on the parameters $s$ and $\tau$. We recall that we take data for $f$ supported in a unit disk whose center has coordinates $(s,a,b)$. The data consist of the trace of $f$ and $\partial_t f$ in $\Sigma_s$. Moreover, we recall that $\tau = s - \sqrt{a^2 + b^2}$. Except in Lemma~\ref{lem:dr'drbar}, we shall assume that $b = 0$, meaning that $a + \tau = s$.
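The divergence identity behind weighted multiplier computations of the kind used in Lemma~\ref{lem:spacetimeL2} can be verified symbolically. The following snippet (ours, using sympy, with a generic weight $w(t)$) checks the pointwise identity which, upon integration over $[0,s] \times \mathbb{R}^2$, produces the weighted energy estimate.

```python
# Symbolic check (sympy) of the divergence identity behind the weighted
# multiplier w(t) * dt(psi): integrating it over a time slab (with no
# flux at spatial infinity) yields the weighted energy estimate.
import sympy as sp

t, x, y = sp.symbols('t x y')
psi = sp.Function('psi')(t, x, y)
w = sp.Function('w')(t)

e = psi.diff(t)**2 + psi.diff(x)**2 + psi.diff(y)**2     # energy density
box = -psi.diff(t, 2) + psi.diff(x, 2) + psi.diff(y, 2)  # Box psi

lhs = (sp.diff(w * e, t)
       - 2 * sp.diff(w * psi.diff(t) * psi.diff(x), x)
       - 2 * sp.diff(w * psi.diff(t) * psi.diff(y), y))
rhs = sp.diff(w, t) * e - 2 * w * psi.diff(t) * box

assert sp.simplify(lhs - rhs) == 0
```

Choosing $w(t)$ to be a negative power of $1 + t$ makes $-w'(t)$ a positive multiple of a weight of the form $(1 + t)^{-1 - \cdot}$, which is the positive bulk term in the lemma.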
Lemma~\ref{lem:dr'drbar} is the only result which requires us to simultaneously compare the geometry of light cones adapted to $\psi$, $\phi$, and $f$ at the same time. In order to translate these results to the general case where $b$ is potentially nonzero, we can simply replace instances of $a$ with instances of $s - \tau$. Many of the following lemmas will deal with a configuration arising from looking at the geometry of the light cone for $\psi$ and the light cone for $f$ in a single $\Sigma_t$ slice. Figure~\ref{fig:psifSigma_t} shows what this looks like. The reader may wish to keep this figure and also Figure~\ref{fig:lightconesaux} in mind throughout this Section, as they both provide valuable intuition for why the results are true. Throughout this Section, we recall that we can take $s \ge {1 \over \delta_0^{10}}$. \begin{figure} \centering \begin{tikzpicture} \draw[ultra thick] (0,0) -- (7.4,0); \draw[very thick] (0,0) circle (3); \draw[very thick] (7.4,0) circle (5); \draw (0,0) -- (2.62,1.46); \node (r) at (1.21,0.83) {$r$}; \draw (7.4,0) -- (2.62,1.46); \node (r') at (5.06,0.95) {$r'$}; \node (theta) at (0.75,0.23) {$\theta$}; \node (theta') at (5.8,0.23) {$\vartheta'$}; \end{tikzpicture} \caption{The circles denote the intersection of light cones adapted to $\psi$ and $f$ in some fixed $\Sigma_t$. The line segment connects the centers of the two resulting circles. The length of this line segment is $a = s - \tau$, where we note that we are assuming that $b = 0$. When $b \ne 0$, there is an appropriate modification for this formula. We note that $r = t - u$ and $r' = s - t - u'$. We also recall that $\vartheta = \pi - \theta$ and that $\vartheta' = \pi - \theta'$. For the spacetime picture, we refer the reader to Figure~\ref{fig:lightconesaux}. The two points at which the circles intersect correspond to the two points of intersection of the red ellipse with some $\Sigma_t$. 
Of course, it is possible for this intersection to be a single point when the circles are tangent to each other, or to be empty when the light cones are no longer intersecting.} \label{fig:psifSigma_t} \end{figure} The first lemma says that $r$ is comparable to $t$ and $r'$ is comparable to $s - t$ when we are close to the light cones for $\psi$ and $f$. \begin{lemma} \label{lem:rtr's-t} Let $\tau \ge 100$. \begin{enumerate} \item We have that $r + r' - a \le 2 r$, and similarly that $r + r' - a \le 2 r'$. \item We have that $r \ge {\tau - u - u' \over 2} \ge 0$, and similarly that $r' \ge {\tau - u - u' \over 2} \ge 0$. \item In the region where $u \le \delta_0 \tau$ and $u' \le \delta_0 \tau$, we have that $r' \ge {99 \over 100} (s - t) \ge {\tau \over 10}$ and that $r \ge {99 \over 100} t \ge {\tau \over 10}$. \item In the region where $u' \le \delta_0 \tau$ and $r \le t + 1$, we have that $t \ge {\tau \over 10}$. Similarly, in the region where $u \le \delta_0 \tau$ and $r' \le s - t + 1$, we have that $s - t \ge {\tau \over 10}$. \end{enumerate} Moreover, in the region where $-1 \le u \le \delta_0 \tau$ and $-1 \le u' \le \delta_0 \tau$, we have that $r' \ge {99 \over 100} (s - t) \ge {\tau \over 10}$ and that $r \ge {99 \over 100} t \ge {\tau \over 10}$. \end{lemma} \begin{proof} We begin by using the triangle inequality to note that \[ a - r' \le r \le a + r'. \] Thus, we have that \[ 0 \le r + r' - a \le 2 r'. \] Because the argument is symmetric in $r$ and $r'$, this proves the first assertions. Now, we note that \begin{equation} \label{eq:tauuu'rr'ar} \begin{aligned} \tau - u - u' = \tau - (t - r) - (s - t - r') = \tau - s + r + r' = r + r' - a \le 2 r. \end{aligned} \end{equation} Similarly, we have that \begin{equation} \label{eq:tauuu'rr'ar'} \begin{aligned} \tau - u - u' = \tau - (t - r) - (s - t - r') = \tau - s + r + r' = r + r' - a \le 2 r'. \end{aligned} \end{equation} This implies the lower bounds on $r$ and $r'$ in terms of $\tau - u - u'$. 
We now assume the further restrictions on $u$ and $u'$, which imply that \[ {\tau \over 2} \le \tau - u - u' = r + r' - a \le 2 r. \] From this, it follows that \[ r \ge {\tau \over 4}. \] Now, because $u \le \delta_0 \tau$, we have that \[ {r \over t} = {r \over r + u} \ge {r \over r + \delta_0 \tau} = {1 \over 1 + {\delta_0 \tau \over r}} \ge {99 \over 100} \] for $\delta_0$ sufficiently small. This gives us the desired result for $r$ and $t$, and by symmetry, the same argument gives us the desired result for $r'$ and $s - t$. The fourth and final assertion follows from the second assertion, noting that $t = r + u$, and using that $u' \le \delta_0 \tau$. This implies the lower bound on $t$, and the lower bound on $s - t$ follows in an analogous way. \end{proof} The next lemma allows us to effectively use the fact that the light cones intersect transversally. It is here that the assumption that $\lambda_1 \ne 1$ and $\lambda_2 \ne 1$ is fundamental. In the region along both light cones, we expect that the Jacobian of the $(r,\overline{r})$ coordinate system in $\Sigma_t$ is comparable to $1$ because the light cones intersect transversally (see Figure~\ref{fig:crosssections}). This will allow us to effectively take advantage of decay in $u$ and $\overline{u}$. The necessary properties of the $(r,\overline{r})$ coordinate system within each $\Sigma_t$ along both light cones are established in the following lemma. \begin{lemma} \label{lem:rrbar} In the region where $t \ge 100$, $r \le t + 1$, $\overline{r} \le t + 1$, $|u| \le {1 \over 100} t$, and $|\overline{u}| \le {1 \over 100} t$, we have that $(r,\overline{r})$ is a well-defined coordinate system on each $\Sigma_t$ with uniformly bounded Jacobian.
More precisely, we have that \begin{equation} \label{eq:rrbar} \begin{aligned} d x \wedge d y = {r \overline{r} \over (\lambda_2^2 - \lambda_1^2) x y} d r \wedge d \overline{r}, \end{aligned} \end{equation} and we have that \[ \left |{r \overline{r} \over (\lambda_2^2 - \lambda_1^2) x y} \right | \le C, \] where $C$ depends only on $\lambda_1$ and $\lambda_2$, and is uniform given our restrictions on $\lambda_1$ and $\lambda_2$. \end{lemma} \begin{proof} We begin by noting that $r$ and $\overline{r}$ are comparable to $t$ in this region. Thus, we have that $r$ and $\overline{r}$ are comparable to each other. More precisely, we have that \begin{equation} \label{eq:r/rbar} \begin{aligned} {9 \over 10} \le {r \over \overline{r}} \le {10 \over 9} \end{aligned} \end{equation} because the conditions on $u$ and $\overline{u}$ imply that \[ {99 \over 100} \le {r \over t} \le {101 \over 100}, \] and similarly for ${\overline{r} \over t}$. We shall now establish the formula \eqref{eq:rrbar}. We shall then show that $x$ and $y$ are also comparable to $r$ and $\overline{r}$ in this region, from which the desired result will follow. We have that \[ r d r = x d x + y d y, \] and we have that \[ \overline{r} d \overline{r} = \lambda_1^2 x d x + \lambda_2^2 y d y. \] Thus, we have that \[ r \overline{r} d r \wedge d \overline{r} = (\lambda_2^2 - \lambda_1^2) x y d x \wedge d y. \] This implies \eqref{eq:rrbar}, as desired. Now, in order to show that $x$ and $y$ are comparable to $r$ and $\overline{r}$ in this region, we go into polar coordinates $(r,\theta)$. We shall show that $\theta$ is bounded uniformly away from $0$, ${\pi \over 2}$, $\pi$, and ${3 \pi \over 2}$. This will imply the desired result. Writing $\overline{r}$ in polar coordinates, we have that \[ \overline{r}^2 = \lambda_1^2 x^2 + \lambda_2^2 y^2 = \lambda_1^2 r^2 + (\lambda_2^2 - \lambda_1^2) y^2 = \lambda_1^2 r^2 + (\lambda_2^2 - \lambda_1^2) r^2 \sin^2 (\theta).
\] Now, for $\theta$ sufficiently close to $0$ or to $\pi$, we have that \[ \overline{r}^2 \le {11 \over 10} \lambda_1^2 r^2. \] Thus, using the bounds on $\lambda_1^2$, we have that \[ {\overline{r} \over r} \le {3 \over 4}. \] This contradicts the bounds \eqref{eq:r/rbar}. Similarly, for $\theta$ sufficiently close to ${\pi \over 2}$ or ${3 \pi \over 2}$, we have that \[ {\overline{r} \over r} \ge {3 \over 2}, \] once again contradicting \eqref{eq:r/rbar}. \end{proof} The next lemma writes $x$, $x - a$, and $y$ in terms of $r$ and $r'$ when $\tau$ is small in the region along the light cones for $\psi$ and $f$. The fact that $\tau$ is small and that we are along the light cones for $\psi$ and $f$ gives us smallness in ${1 \over a}$ and ${\tau \over a}$, and it lets us treat $u$ and $u'$ as errors. \begin{lemma} \label{lem:xyrr'} Let $\tau \le \delta_0 s$. \begin{enumerate} \item We have that \begin{equation} \begin{aligned} x = r \left (1 - {\tau - u - u' \over r} + {\tau - u - u' \over a} - {(\tau - u - u')^2 \over 2 r a} \right ). \end{aligned} \end{equation} \item We have that \begin{equation} \begin{aligned} x - a = -r' \left (1 - {\tau - u - u' \over r'} + {\tau - u - u' \over a} - {(\tau - u - u')^2 \over 2 r' a} \right ). \end{aligned} \end{equation} \item We have that \begin{equation} \begin{aligned} y^2 = r^2 - r^2 \left (1 + {\tau - u - u' \over a} - {\tau - u - u' \over r} - {(\tau - u - u')^2 \over 2 r a} \right )^2 \le C r (\tau - u - u'). \end{aligned} \end{equation} \item We have that \begin{equation} \begin{aligned} y^2 = (r')^2 - (r')^2 \left (1 + {\tau - u - u' \over a} - {\tau - u - u' \over r'} - {(\tau - u - u')^2 \over 2 r' a} \right )^2 \le C r' (\tau - u - u'). \end{aligned} \end{equation} \end{enumerate} Moreover, we have that $a \ge (1 - \delta_0) s$. \end{lemma} \begin{proof} We have that $x^2 + y^2 = r^2$, and that $(x - a)^2 + y^2 = (r')^2$. This second expression expands out to $x^2 - 2 x a + a^2 + y^2 = (r')^2$. 
Now, subtracting this from the first expression gives \begin{equation} \label{eq:xrr'} \begin{aligned} 2 x a = r^2 - (r')^2 + a^2. \end{aligned} \end{equation} Now, we write $r = t - u$ and $r' = s - t - u'$. From this, it follows that \begin{equation} \label{eq:rr'atau} \begin{aligned} r + r' = s - u - u' = a + \tau - u - u'. \end{aligned} \end{equation} Solving for $r'$ and plugging this in to \eqref{eq:xrr'} gives us \begin{equation} \begin{aligned} 2 x a = r^2 - (a + \tau - r - u - u')^2 + a^2 \\ = r^2 - r^2 + 2 a r + 2 r (\tau - u - u') - a^2 - 2 a (\tau - u - u') - (\tau - u - u')^2 + a^2 \\ = 2 a r + 2 r (\tau - u - u') - 2 a (\tau - u - u') - (\tau - u - u')^2. \end{aligned} \end{equation} Thus, we have that \begin{equation} \begin{aligned} x = r + {r (\tau - u - u') \over a} - (\tau - u - u') - {(\tau - u - u')^2 \over 2 a} \\ = r \left (1 + {\tau - u - u' \over a} - {\tau - u - u' \over r} - {(\tau - u - u')^2 \over 2 r a} \right ), \end{aligned} \end{equation} giving us the first equality. For $x - a$, we note that \eqref{eq:xrr'} implies that \[ x - a = {r^2 - (r')^2 - a^2 \over 2 a}. \] Thus, using \eqref{eq:rr'atau} to solve for $r$ in terms of $r'$, $a$, $\tau$, $u$, and $u'$ gives us that \[ x - a = {-2 a r' + 2 a (\tau - u - u') - 2 r' (\tau - u - u') + (\tau - u - u')^2 \over 2 a}. \] Factoring out $-r'$ gives us that \[ x - a = -r' \left (1 - {\tau - u - u' \over r'} + {\tau - u - u' \over a} - {(\tau - u - u')^2 \over 2 a r'} \right ), \] as desired. The expressions for $y^2$ now follow simply from using the equation relating $r$, $x$, and $y$, and the equation relating $r'$, $x - a$, and $y$. We have that \[ y^2 = r^2 - x^2 = r^2 - r^2 \left (1 + {\tau - u - u' \over a} - {\tau - u - u' \over r} - {(\tau - u - u')^2 \over 2 r a} \right )^2, \] and we have that \[ y^2 = (r')^2 - (x - a)^2 = (r')^2 - (r')^2 \left (1 + {\tau - u - u' \over a} - {\tau - u - u' \over r'} - {(\tau - u - u')^2 \over 2 r' a} \right )^2. 
\] Now, we note that \[ \tau - u - u' = r + r' - a \ge 0 \] by \eqref{eq:tauuu'rr'ar}. Moreover, by Lemma~\ref{lem:rtr's-t}, we have that \[ {\tau - u - u' \over r} \le C, \] and we have that \[ {\tau - u - u' \over r'} \le C. \] The desired bounds on the expression for $y$ then follow after noting that ${r \over a} \le C$ and ${r' \over a} \le C$ because $\tau \le \delta_0 s$. The final point that $a \ge (1 - \delta_0) s$ follows immediately because $a = s - \tau$. \end{proof} In order to control higher order nonlinearities such as $(\partial_t \psi)^4$, we must analyze the $(r,r')$ coordinate system more carefully. We have the following lemma. \begin{lemma} \label{lem:annuliu'dec} We have that \begin{equation} \label{eq:rr'drdr'} \begin{aligned} r r' d r \wedge d r' = a y d x \wedge d y. \end{aligned} \end{equation} Moreover, let $\tau \ge 100$ and let $\tau \le \delta_0 s$. We then fix any $\Sigma_t$ with $0 \le t \le s$, and we take the annular region of thickness $1$ within $\Sigma_t$ given by $b \le u \le b + 1$ subject to the constraint that $-1 \le b \le \delta_0 \tau - 1$. Similarly, we take the annular region of thickness approximately $\delta_0 \tau$ within $\Sigma_t$ given by $-1 \le u' \le \delta_0 \tau$. Then, we have that \begin{equation} \label{eq:u'decest1} \begin{aligned} \int_{\Sigma_t} \chi_{b \le u \le b + 1} \chi_{-1 \le u' \le \delta_0 \tau} {1 \over 1 + |u'|^{{1 \over 2}}} d x \le C \min(\sqrt{1 + t} \log(1 + t),\sqrt{1 + s - t}). \end{aligned} \end{equation} \end{lemma} Before turning to the proof, we mention that this integral estimate arises from trying to control interactions where $|u|$ and $|u'|$ are small. We are able to assume that we have compact support in $|u|$ because of the strong decay in $|u|$. 
However, an integration by parts in the scaling vector field will result in only $|u'|^{{1 \over 2}}$ decay in $|u'|$ on the auxiliary multiplier $f$, so we must understand these integrals as opposed to simply understanding the intersections of two annular regions of thickness $1$. Moreover, the conditions we have imposed on the regions may seem very arbitrary. The estimates are used to control quartic interactions in the nonlinearity. The motivation can be summarized by saying that we are most concerned with the region along the light cones for $\psi$ and $f$, and this determines the restrictions above. \begin{proof} We have that \[ r d r = x d x + y d y, \] and that \[ r' d r' = (x - a) d x + y d y. \] From this, it immediately follows that \[ r r' d r \wedge d r' = a y d x \wedge d y, \] as desired. There are now $6$ separate regions to consider. We first consider two cases depending on the size of $t$. When $t \le {3 s \over 4}$, we consider $3$ regions adapted to $r$, and when $t \ge {s \over 4}$, we consider $3$ regions adapted to $r'$. We first consider the regions adapted to $r$ when $t \le {3 s \over 4}$. We also note that the integrand is $0$ unless ${\tau \over 10} \le t \le s - {\tau \over 10}$ by Lemma~\ref{lem:rtr's-t}, and we note that $a \ge {s \over 2}$ because $\tau \le \delta_0 s$. The first region is where $|\theta| > \delta_0$ and $|\pi - \theta| > \delta_0$. Figure~\ref{fig:thetavartheta'small} shows schematically what this region looks like.
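The relation \eqref{eq:rr'drdr'} says precisely that the Jacobian determinant of the map $(x,y) \mapsto (r,r')$ is ${a y \over r r'}$. As a quick sanity check (a sketch in Python; the value of $a$, the base point, and the helper names are arbitrary choices of ours), one can compare a finite difference approximation of this Jacobian with the closed form:

```python
import math

def r_map(x, y, a):
    # Radii measured from the two centers (0, 0) and (a, 0) in a fixed Sigma_t slice.
    return math.hypot(x, y), math.hypot(x - a, y)

def jacobian_det(x, y, a, h=1e-6):
    # Central finite differences for the Jacobian of (x, y) -> (r, r').
    rx = (r_map(x + h, y, a)[0] - r_map(x - h, y, a)[0]) / (2 * h)
    ry = (r_map(x, y + h, a)[0] - r_map(x, y - h, a)[0]) / (2 * h)
    rpx = (r_map(x + h, y, a)[1] - r_map(x - h, y, a)[1]) / (2 * h)
    rpy = (r_map(x, y + h, a)[1] - r_map(x, y - h, a)[1]) / (2 * h)
    return rx * rpy - ry * rpx

a, x, y = 2.0, 0.7, 1.1
r, rp = r_map(x, y, a)
# The relation predicts dr ^ dr' = (a y / (r r')) dx ^ dy.
assert abs(jacobian_det(x, y, a) - a * y / (r * rp)) < 1e-8
```

The closed form degenerates as $y \to 0$, where the two families of circles become tangent, which is why the near-tangent regions require separate treatment in the proof.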
\begin{figure} \centering \begin{tikzpicture} \draw[very thick] (0,0) circle (4); \draw[very thick] (0,0) circle (3.8); \draw[dashed] (0,0) -- (4.5,0); \draw[very thick] (6,0) circle (5); \draw[very thick] (6,0) circle (3); \draw[dashed] (6,0) -- (1.5,0); \draw (0,0) -- (2.85,2.65); \draw (6,0) -- (2.85,2.65); \node (theta) at (0.5,0.2) {$\theta$}; \node (vartheta') at (5.3,0.2) {$\vartheta'$}; \end{tikzpicture} \caption{One possible configuration that schematically describes the regions where $t \le {3 s \over 4}$ and $|\vartheta| \ge \delta_0$, and also where $t \ge {s \over 4}$ and $|\theta'| \ge \delta_0$. The thin annulus should be thought of as being the annulus where $b \le u \le b + 1$. The thick annulus should be thought of as being the annulus where $-1 \le u' \le \delta_0 \tau$. The points in question are those which are in both annuli. The integral we are computing involves the function ${1 \over 1 + |u'|^{{1 \over 2}}}$, which decays as we move away from the outer circle of the thick annulus towards the inner circle. In these regions, the important fact we are using is that $y \theta$ and $y \vartheta'$ satisfy good lower bounds (see Lemma~\ref{lem:yvartheta}), meaning that we have good bounds on the Jacobian \eqref{eq:rr'drdr'}. We are thus using that $u \le \delta_0 \tau$ and $u' \le \delta_0 \tau$.} \label{fig:thetavartheta'small} \end{figure} In this region, we have that $y = r \sin(\theta)$ is comparable to $r$, meaning that \[ {r r' \over a y} \le C. \] Thus, the integral is controlled by \[ \int_{t - b - 1}^{t - b} \int_{s - t - \delta_0 \tau}^{s - t + 1} {1 \over 1 + |u'|^{{1 \over 2}}} d r' d r \le C (1 + \sqrt{\tau}) \le C (1 + \sqrt{t}). \] The second region is where $|\theta| \le \delta_0$. This region also looks roughly like the one shown in Figure~\ref{fig:thetavartheta'small}. We begin by noting that we must have that $|\theta| \ge c {\sqrt{\tau} \over \sqrt{s}}$ because $t \le {3 \over 4} s$ by Lemma~\ref{lem:tvartheta}. 
We thus also have that $y^2 \ge c r \tau$, meaning that $y \ge c \sqrt{r} \sqrt{\tau}$. Indeed, by Lemma~\ref{lem:yvartheta}, we have that $y \theta \ge {\tau \over 10}$ in this region, meaning that $y^2 \ge c r y \theta \ge c r \tau$, where we are using that $r \theta$ is comparable to $y$ because $|\theta| \le \delta_0$. Thus, we have that \[ {r r' \over a y} \le C {\sqrt{r} \over \sqrt{\tau}}. \] Thus, the integral is controlled by \[ \int_{t - b - 1}^{t - b} \int_{s - t - \delta_0 \tau}^{s - t + 1} {\sqrt{r} \over \sqrt{\tau} \left (1 + |u'|^{{1 \over 2}} \right )} d r' d r \le C \sqrt{1 + t}. \] We finally consider the region where $|\vartheta| = |\pi - \theta| \le \delta_0$. Figure~\ref{fig:varthetasmall} shows schematically what this region looks like. \begin{figure} \centering \begin{tikzpicture} \draw[very thick] (-4,0) circle (3); \draw[very thick] (-4,0) circle (2.8); \draw (-4,0) -- (-6.5,1.5); \node (vartheta1) at (-4.7,0.2) {$\vartheta$}; \draw[dashed] (-4,0) -- (-7,0); \draw[very thick,domain=-10:10] plot ({18-25*cos(\x)},{25*sin(\x)}); \draw[very thick,domain=-10:10] plot ({18-24*cos(\x)},{24*sin(\x)}); \draw[very thick] (4,0) circle (3); \draw[very thick] (4,0) circle (2.8); \draw (4,0) -- (2,2.15); \node (vartheta2) at (3.55,0.2) {$\vartheta$}; \draw[dashed] (4,0) -- (1,0); \draw[very thick,domain=-10:10] plot ({26.3-25*cos(\x)},{25*sin(\x)}); \draw[very thick,domain=-10:10] plot ({26.3-24*cos(\x)},{24*sin(\x)}); \end{tikzpicture} \caption{Two configurations giving a rough idea of the regime where $|\vartheta| = |\pi - \theta| \le \delta_0$. The thin annulus should be thought of as being the annulus where $b \le u \le b + 1$. The long outer arc should be thought of as being the outer edge given by $r' = s - t + 1$. The distance between the long outer arc and the long inner arc is comparable to $\delta_0 \tau + 1$ and can be much larger than $1$ in practice, which is why this thickness is much larger than the thickness of the annulus. 
The points in question are the points in the annulus, to the right of the outer long arc, and to the left of the inner long arc. Of course, the picture is a bit misleading because these points should also have very small $|\vartheta|$ value. The integral we are computing involves the function ${1 \over 1 + |u'|^{{1 \over 2}}}$, which decays as we move away from the outer long arc towards the inner long arc.} \label{fig:varthetasmall} \end{figure} We now go into polar coordinates adapted to $r$. We note the relation \[ r' \cos(\vartheta') = a - x = r \cos(\vartheta) + a. \] We want to express $r'$ and $\vartheta'$ in terms of $r$ and $\vartheta$. We take a derivative of this expression in $\vartheta$, and we get \begin{equation} \label{eq:dr'dvartheta1} \begin{aligned} {\partial r' \over \partial \vartheta} \cos(\vartheta') - r' \sin(\vartheta') {\partial \vartheta' \over \partial \vartheta} = -r \sin(\vartheta). \end{aligned} \end{equation} Thus, we have that \[ {\partial r' \over \partial \vartheta} \cos(\vartheta') = -y \left (1 - {\partial \vartheta' \over \partial \vartheta} \right ), \] meaning that \begin{equation} \label{eq:dr'dvartheta2} \begin{aligned} \left |{\partial r' \over \partial \vartheta} \right | \ge c |y| \left (1 - {\partial \vartheta' \over\partial \vartheta} \right ), \end{aligned} \end{equation} where we have used the fact that $|\vartheta| \le \delta_0$ implies that $|\vartheta'| \le 10 \delta_0$ by Lemma~\ref{lem:yvartheta} (in fact, we have that $|\vartheta'| \le |\vartheta|$ in this region) along with the fact that $y = r \sin(\vartheta) = r' \sin(\vartheta')$. We shall show that this derivative is comparable to $-y$. We shall then be able to integrate ${\partial u' \over \partial \vartheta}$ in order to control the size of ${1 \over 1 + |u'|^{{1 \over 2}}}$ in the integrand. We shall now bound ${\partial \vartheta' \over \partial \vartheta}$ in this region.
We recall that \[ \vartheta' = \arctan \left ({y \over a - x} \right ) = \arctan \left ({r \sin(\vartheta) \over r \cos(\vartheta) + a} \right ). \] Thus, we have that \[ {\partial \vartheta' \over \partial \vartheta} = {1 \over 1 + {r^2 \sin^2 (\vartheta) \over (r \cos(\vartheta) + a)^2}} \left ({r \cos(\vartheta) \over r \cos(\vartheta) + a} + {r^2 \sin^2(\vartheta) \over (r \cos(\vartheta) + a)^2} \right ) \] Now, we have that \[ r \cos(\vartheta) + a = a \left ({r \over a} \cos(\vartheta) + 1 \right ) \ge {a \over 2}, \] where we have used the fact that \begin{equation} \label{eq:rtaucomp} \begin{aligned} {r \over a} \le C {\tau \over a} \end{aligned} \end{equation} for some $C$ independent of $\delta_0$ in the region in question, and also that $\tau \le \delta_0 s \le 2 \delta_0 a$. We shall now pause briefly to show that ${r \over a} \le C {\tau \over a}$ for some $C$ independent of $\delta_0$ indeed holds. Because $|\vartheta| \le \delta_0$, we have that $x = -r \cos(\vartheta) \le -{r \over 2}$. Thus, using Lemma~\ref{lem:xyrr'}, we have that \[ r \left (1 - {\tau - u - u' \over r} + {\tau - u - u' \over a} - {(\tau - u - u')^2 \over 2 r a} \right ) \le -{r \over 2}. \] Moreover, using that \[ {\tau - u - u' \over a} \le 100 \delta_0, \] we have that \[ -(\tau - u - u') \left (1 - {r \over a} + O(\delta_0) \right ) \le -{3 \over 2} r. \] Thus, because ${r \over a} \le 10$, we have that \[ r \le 100 \tau, \] for $\delta_0$ sufficiently small. This allows us to say that, in fact, we have that \[ {r \over a} \le C {\tau \over a} \] for some $C$ independent of $\delta_0$ (for example, we can take $C = 100$), as desired. Using this, we have that \begin{equation} \label{eq:dtheta'dthetaRpsi} \begin{aligned} \left |{\partial \vartheta' \over \partial \vartheta} \right | \le C {r \over a} \le C {\tau \over a}. 
\end{aligned} \end{equation} Because the constant $C$ in this expression does not depend on $\delta_0$ and because ${\tau \over a} \le 2 \delta_0$, this expression can be made arbitrarily small by picking $\delta_0$ sufficiently small. From this and from \eqref{eq:dr'dvartheta2}, it follows that \begin{equation} \label{eq:dr'dvarthetaRpsi} \begin{aligned} \left |{\partial r' \over \partial \vartheta} \right | \ge c |y| = c r |\sin(\vartheta)| \ge c r |\vartheta|. \end{aligned} \end{equation} This allows us to argue as follows. We have that the integral in \eqref{eq:u'decest1} is controlled by \[ \int_{t - b - 1}^{t - b} \int_{-\delta_0}^{\delta_0} \chi_{-1 \le u' \le \delta_0 \tau} {1 \over 1 + |u'|^{{1 \over 2}}} r d \vartheta d r. \] Because the integrand is symmetric with respect to reflections over the $x$ axis and because the region of integration is as well, it suffices to control the integral for $\vartheta$ from $0$ to $\delta_0$ instead. Now, for fixed $r$, we pick $\vartheta_0 (r)$ such that $u'$ is minimized under the constraint that $u' \ge -1$ (otherwise, the integrand is $0$). We note then that the integrand is supported where $\vartheta \ge \vartheta_0 (r)$ because we in fact have that ${\partial r' \over \partial \vartheta} < 0$ for $\vartheta$ small and positive by \eqref{eq:dr'dvartheta1}. Thus, the integral we must control is given by \[ \int_{t - b - 1}^{t - b} \int_{\vartheta_0 (r)}^{\delta_0} \chi_{-1 \le u' \le \delta_0 \tau} {1 \over 1 + |u'|^{{1 \over 2}}} r d \vartheta d r. \] Using \eqref{eq:dr'dvarthetaRpsi}, we then have that \[ u'(r,\vartheta) = u'(r,\vartheta_0) + \int_{\vartheta_0}^{\vartheta} {\partial u' \over \partial \vartheta} d \vartheta_1 \ge -1 + c \int_{\vartheta_0}^{\vartheta} r \vartheta_1 d \vartheta_1 \ge -1 + c r (\vartheta_0 + \vartheta) (\vartheta - \vartheta_0), \] where we have used that $u'(r,\vartheta_0) \ge -1$ by the presence of $\chi_{u' \ge -1}$. 
We now excise a neighborhood of thickness ${10 \over \sqrt{t}}$ around $\vartheta_0$. When $\vartheta - \vartheta_0 \ge {10 \over \sqrt{t}}$, we note that \begin{equation} \label{eq:u'estRpsi} \begin{aligned} 1 + |u'(r,\vartheta)| \ge c r (\vartheta - \vartheta_0)^2 \ge c. \end{aligned} \end{equation} Now, we have that \[ \int_{t - b - 1}^{t - b} \int_{\vartheta_0 (r)}^{\delta_0} {1 \over 1 + |u'|^{{1 \over 2}}} r d \vartheta d r \le \int_{t - b - 1}^{t - b} \int_{\vartheta_0 (r)}^{\vartheta_0 (r) + {10 \over \sqrt{t}}} {1 \over 1 + |u'|^{{1 \over 2}}} r d \vartheta d r + \int_{t - b - 1}^{t - b} \int_{\vartheta_0 (r) + {10 \over \sqrt{t}}}^{\delta_0} {1 \over 1 + |u'|^{{1 \over 2}}} r d \vartheta d r, \] where the second integral is taken to be $0$ when $\vartheta_0 (r) + {10 \over \sqrt{t}} \ge \delta_0$. We now examine each of these integrals. For the first integral, we note that the inner integral is controlled by \[ C \int_{\vartheta_0 (r)}^{\vartheta_0 (r) + {10 \over \sqrt{t}}} d \vartheta \le {C \over \sqrt{t}}, \] meaning that the first integral is controlled by $C(1 + \sqrt{t})$, as desired. For the second integral, we use \eqref{eq:u'estRpsi} to show that $|u'|$ is relatively large, giving us that \[ \int_{\vartheta_0 (r) + {10 \over \sqrt{t}}}^{\delta_0} {1 \over 1 + |u'|^{{1 \over 2}}} d \vartheta \le C \int_{\vartheta_0 (r) + {10 \over \sqrt{t}}}^{\delta_0} {1 \over \sqrt{r} (\vartheta - \vartheta_0 (r))} d \vartheta \le {C (1 + \log(1 + t)) \over \sqrt{r}}. \] From this, it follows that the second integral is controlled by $C(1 + \log(1 + t) \sqrt{t})$. This proves the correct upper bound in the region where $t \le {3 \over 4} s$. We now turn to the regions where $t \ge {s \over 4}$, which are adapted to $r'$. The first region is where $|\theta'| > \delta_0$ and $|\pi - \theta'| > \delta_0$, the second region is where $|\pi - \theta'| \le \delta_0$, and the third region is where $|\theta'| \le \delta_0$. 
The bounds in the first two regions follow from the same argument used to control the corresponding regions adapted to $r$. They both also roughly look like the configuration found in Figure~\ref{fig:thetavartheta'small}. We thus turn to the final region where $|\theta'| \le \delta_0$. Figure~\ref{fig:theta'small} gives an idea of what this region looks like. \begin{figure} \centering \begin{tikzpicture} \draw[very thick] (-4,0) circle (3); \draw[very thick] (-4,0) circle (1.5); \draw (-4,0) -- (-1.2,0.9); \node (theta'1) at (-2.8,0.2) {$\theta'$}; \draw[dashed] (-4,0) -- (-1,0); \draw[very thick,domain=-10:10] plot ({25*cos(\x)-26},{25*sin(\x)}); \draw[very thick,domain=-10:10] plot ({24.7*cos(\x)-26},{24.7*sin(\x)}); \draw[very thick] (4,0) circle (3); \draw[very thick] (4,0) circle (1.5); \draw (4,0) -- (5.7,2.3); \node (theta'2) at (4.35,0.2) {$\theta'$}; \draw[dashed] (4,0) -- (7,0); \draw[very thick,domain=-10:10] plot ({25*cos(\x)-19},{25*sin(\x)}); \draw[very thick,domain=-10:10] plot ({24.7*cos(\x)-19},{24.7*sin(\x)}); \end{tikzpicture} \caption{Two configurations giving a rough idea of the regime where $|\theta'| \le \delta_0$. The two large arcs correspond to the region where $b \le u \le b + 1$. The thick annulus corresponds to $-1 \le u' \le \delta_0 \tau$, and it can in general be rather large. The points in question are the points in the annulus, to the left of the outer long arc, and to the right of the inner long arc. Just like in Figure~\ref{fig:varthetasmall}, the picture is a bit misleading because these points should also have very small $|\theta'|$ value. The integral we are computing involves the function ${1 \over 1 + |u'|^{{1 \over 2}}}$, which decays as we move away from the outer circle in the annulus towards the inner circle.} \label{fig:theta'small} \end{figure} We use the relation \[ r \cos(\theta) = r' \cos(\theta') + a. \] We shall analyze this in the $(r,\theta)$ coordinate system.
Taking a derivative with respect to $\theta$ gives us that \[ -r \sin(\theta) = {\partial r' \over \partial \theta} \cos(\theta') - r' \sin(\theta') {\partial \theta' \over \partial \theta}. \] Thus, we have that \begin{equation} \label{eq:dr'dthetaRf} \begin{aligned} {\partial r' \over \partial \theta} \cos(\theta') = y \left ({\partial \theta' \over \partial \theta} - 1 \right ). \end{aligned} \end{equation} Now, we have that \[ \theta' = \arctan \left ({y \over x - a} \right ) = \arctan \left ({r \sin(\theta) \over r \cos(\theta) - a} \right ). \] From this, it follows that \[ {\partial \theta' \over \partial \theta} = {1 \over 1 + {r^2 \sin^2 (\theta) \over (r \cos(\theta) - a)^2}} \left ({r \cos(\theta) \over r \cos(\theta) - a} + {r^2 \sin^2(\theta) \over (r \cos(\theta) - a)^2} \right ). \] We now use the relations \[ r \cos(\theta) - a = r' \cos(\theta') \] and \[ r \sin(\theta) = r' \sin(\theta') \] in order to conclude that \[ {\partial \theta' \over \partial \theta} = {1 \over 1 + {\sin^2 (\theta') \over \cos^2 (\theta')}} \left ({r \cos(\theta) \over r' \cos(\theta')} + {\sin^2(\theta') \over \cos^2(\theta')} \right ). \] Thus, we have that \begin{equation} \label{eq:dtheta'dthetaRf} \begin{aligned} {1 \over 1 + \delta_0} {r \cos(\theta) \over r' \cos(\theta')} \le {\partial \theta' \over \partial \theta} \le (1 + \delta_0) {r \cos(\theta) \over r' \cos(\theta')}. \end{aligned} \end{equation} Now, we have that the integral in \eqref{eq:u'decest1} is controlled by \begin{equation} \label{eq:u'decest2} \begin{aligned} \int_{t - b - 1}^{t - b} \int_{-\theta_1 (r)}^{\theta_1 (r)} {1 \over 1 + |u'|^{{1 \over 2}}} r d \theta d r \end{aligned} \end{equation} for some $\theta_1 (r)$ (note that we have used that this region is symmetric with respect to reflections over the $x$ axis). The angle $\theta_1 (r)$ is determined by the restrictions that $r' \le s - t + 1$ and $-\delta_0 \le \theta' \le \delta_0$.
We shall now bound the inner integral \[ \int_{-\theta_1 (r)}^{\theta_1 (r)} {1 \over 1 + |u'|^{{1 \over 2}}} d \theta \] by $C {\sqrt{s - t} \over t}$. By symmetry, it suffices to control the integral from $0$ to $\theta_1 (r)$ instead. Now, we note that ${\partial \theta' \over \partial \theta}$ is bounded from below by ${1 \over 2} {r \over r'}$ by \eqref{eq:dtheta'dthetaRf} (note that ${r \over r'} \ge {1 \over 10 \delta_0}$ in this region by an argument similar to the one used to show \eqref{eq:rtaucomp}). Thus, using this, \eqref{eq:dr'dthetaRf}, and Lemma~\ref{lem:rtr's-t}, we have that \[ {\partial r' \over \partial \theta} \ge c {r^2 \over r'} \theta \ge c {t^2 \over s - t} \theta \] for some $c$ in this region. Now, at $(r,\theta_1 (r))$, we note that we have that $u' = -1$. Thus, for $0 \le \theta \le \theta_1 (r)$, we have that \[ |u'(r,\theta)| = \left |u'(r,\theta_1 (r)) - \int_\theta^{\theta_1 (r)} {\partial u' \over \partial \theta} d \theta_0 \right | \ge c {t^2 \over s - t} \theta_1 (r) (\theta_1 (r) - \theta) - 1. \] Thus, in this region, we have that \[ {1 \over 1 + |u'|^{{1 \over 2}}} \le {C \over 1 + {t \sqrt{\theta_1 (r)} \over \sqrt{s - t}} \sqrt{\theta_1 (r) - \theta}}. \] Now, we have that \begin{equation} \begin{aligned} \int_0^{\theta_1 (r)} {1 \over 1 + {t \sqrt{\theta_1 (r)} \over \sqrt{s - t}} \sqrt{\theta_1 (r) - \theta}} d \theta \le {\sqrt{s - t} \over t \sqrt{\theta_1 (r)}} \int_0^{\theta_1 (r)} {1 \over \sqrt{\theta_1 (r) - \theta}} d \theta = {\sqrt{s - t} \over t \sqrt{\theta_1 (r)}} \int_0^{\theta_1 (r)} {1 \over \sqrt{x}} d x \\ \le C {\sqrt{s - t} \over t}. \end{aligned} \end{equation} Plugging this into \eqref{eq:u'decest2} and integrating in $r$ then gives us the desired estimate in this region. \end{proof} Before proceeding, we make a brief remark about the proof of Lemma~\ref{lem:annuliu'dec}. It is interesting to note the asymmetry in the logarithmic factor that occurs in $t$ but not in $s - t$. 
This can be explained by comparing Figure~\ref{fig:varthetasmall} and Figure~\ref{fig:theta'small}. As can be seen by examining \eqref{eq:rr'drdr'}, the worst contributions from the volume form come when the level sets of $r$ and $r'$ become tangent. Moreover, we note that we have a decaying factor in $u'$. In Figure~\ref{fig:varthetasmall}, $|u'|$ is essentially minimized when these level sets are tangent, while in Figure~\ref{fig:theta'small}, $|u'|$ is maximized when these level sets are tangent. Thus, the region of tangency and worst decay coincide in the region corresponding to Figure~\ref{fig:varthetasmall} (which occurs when $t$ is relatively small), but not in the region corresponding to Figure~\ref{fig:theta'small}. We shall also need a simpler estimate involving the intersection of an annulus of thickness one and an annulus of thickness comparable to $\tau$. \begin{lemma} \label{lem:annuliarea} Let $100 \le \tau \le \delta_0 s$. Moreover, in $\Sigma_t$, let $-1 \le b \le 10 \tau$ and let $-1 \le b' \le \tau$. Then, the area of intersection of an annulus of thickness $1$ and radius $r = t - b \le t + 1$ adapted to $r$ and another annulus of thickness $100 \tau$ and radius $r' = s - t - b' \le s - t + 1$ adapted to $r'$ is at most $C (1 + \sqrt{t}) \sqrt{\tau}$. In other words, we have that \begin{equation} \begin{aligned} \int_{\Sigma_t} \chi_{b \le u \le b + 1} \chi_{b' \le u' \le b' + 100 \tau} d x \le C (1 + \sqrt{t}) \sqrt{\tau}. \end{aligned} \end{equation} Similarly, the intersection of an annulus of thickness $1$ and radius $r = t - b \le t + 1$ and another annulus of thickness $1$ and radius $r' = s - t - b' \le s - t + 1$ has area at most $C \min(1 + \sqrt{t},1 + \sqrt{s - t})$. In other words, we have that \[ \int_{\Sigma_t} \chi_{b \le u \le b + 1} \chi_{b' \le u' \le b' + 1} d x \le C \min(1 + \sqrt{t},1 + \sqrt{s - t}). \] \end{lemma} \begin{proof} The proof follows from similar considerations to those in the proof of Lemma~\ref{lem:annuliu'dec}.
We shall now describe the geometric considerations. We consider several different regimes depending on the relative size of $t$, $s - t$, and $\tau$. We begin by noting that the result is clearly true when $t \le {1 \over 100 \delta_0} \tau$ because $C (1 + \sqrt{t}) \sqrt{\tau}$ controls the area of the annulus of radius $t$ and thickness $1$. We thus assume that $t \ge {1 \over 100 \delta_0} \tau$. This implies that $r = t - u \ge {1 \over 100 \delta_0} \tau - 10 \tau - 1 \ge 1000 \tau$, which implies that $|\vartheta| \ge \delta_0$ (see \eqref{eq:rtaucomp}). We then consider two additional cases, one where $|\theta| \ge \delta_0$ and one where $|\theta| \le \delta_0$. When $|\theta| \ge \delta_0$, the Jacobian in the $(r,r')$ coordinate system is well behaved (note that $\tau \le \delta_0 s$), meaning that we may argue as in the proof of Lemma~\ref{lem:annuliu'dec} (see Figure~\ref{fig:thetavartheta'small}). We may thus restrict ourselves to the case where $|\theta| \le \delta_0$. Now, we first assume that $s - t \le {1 \over 100 \delta_0} \tau$. We then have that the diameter of the annulus adapted to $r'$ is controlled by $C \tau$, and we have that $t \ge {9 \over 10} s$ in this region. Thus, using Lemma~\ref{lem:largecirclearc}, we get that the area of intersection is at most $C \tau \le C (1 + \sqrt{t}) \sqrt{\tau}$. Thus, we may assume that $s - t \ge {1 \over 100 \delta_0} \tau$. This forces us to have $|\theta'| \ge \delta_0$ (this is analogous to how having $t \ge {1 \over 100 \delta_0} \tau$ forces $|\vartheta| \ge \delta_0$). We now further consider two cases, one where $|\vartheta'| \ge \delta_0$ and another where $|\vartheta'| \le \delta_0$. In the first case, the Jacobian in $(r,r')$ coordinates is well behaved, and we can argue as in the proof of Lemma~\ref{lem:annuliu'dec} (see Figure~\ref{fig:thetavartheta'small}). We are thus left with the case where $|\theta| \le \delta_0$ and $|\vartheta'| \le \delta_0$. 
In this region, we consider the natural foliation of the $r$ adapted annulus by circles. For any circle in this foliation, we look at its intersection with the $r'$ annulus, pick the point with the largest $x$ coordinate in this intersection, and call this value $x_0$. Because the region is symmetric under reflection across the $x$ axis, we can assume that $y \ge 0$. We then note that the smallest possible $x$ coordinate that is still in the intersection is greater than or equal to $x_0 - 100 \tau$. We can simply compute that increasing $\theta$ by $1000 {\sqrt{\tau} \over \sqrt{r}}$ will always decrease the $x$ coordinate by at least $100 \tau$ (note that $r \ge 1000 \tau$ for $\delta_0$ sufficiently small because $t \ge {1 \over 100 \delta_0} \tau$). This means that the arc length of the circle contained in this annulus is at most $1000 r {\sqrt{\tau} \over \sqrt{r}} \le C (1 + \sqrt{t}) \sqrt{\tau}$. This gives us the first desired result. The second result follows from similar considerations. The most substantially different cases are when $|\vartheta| \le \delta_0$ or when $|\theta'| \le \delta_0$, so we shall now describe how to treat these regions. Because the integrals are now symmetric in $r$ and $r'$ after sending $t$ to $s - t$ (this is because both annuli have thickness $1$ now), we can assume that we are considering the case where $|\vartheta| \le \delta_0$. Because the region is once again symmetric with respect to reflections over the $x$ axis, we can restrict ourselves to $\vartheta \ge 0$. We look at the natural foliation of the $r$ annulus by circles. Taking any of these circles, we look at the minimum value of $\vartheta \ge 0$ for which the circle intersects the $r'$ annulus. 
We can then simply compute that increasing the value of $\vartheta$ by ${C \over \sqrt{r}}$ will cause the value of $r'$ to decrease by at least $1$ (see \eqref{eq:dr'dvarthetaRpsi}), meaning that after this change in $\vartheta$, the circle must exit the support of the $r'$ annulus, which has thickness $1$. Thus, the arc length of the circle contained in the $r'$ annulus is controlled by $C (1 + \sqrt{t})$, giving us the desired result. \end{proof} We must now get estimates for the $(\theta,u,u')$ coordinate system when $\tau$ is large (meaning that $\tau \ge \delta_0 s$). This will allow us to effectively integrate in this region. The following result is effectively a consequence of the fact that an upward opening cone and a downward opening cone intersect uniformly transversally when $\tau$ is comparable to $s$ (see Figure~\ref{fig:lightconesaux}). \begin{lemma} \label{lem:thetauu'} We have that \[ -r d \theta \wedge d u \wedge d u' = (1 + \sin(\theta) \sin(\theta') + \cos(\theta) \cos(\theta')) d t \wedge d x \wedge d y. \] Moreover, let $\tau \ge \delta_0 s$. Then, in the region where $-1 \le u \le \delta_0 \tau$ and $-1 \le u' \le \delta_0 \tau$, we have that \[ |1 + \sin(\theta) \sin(\theta') + \cos(\theta) \cos(\theta')| \ge c > 0, \] meaning that \[ d t \wedge d x \wedge d y = J_{\theta,u,u'} r d \theta \wedge d u \wedge d u' \] with $|J_{\theta,u,u'}| \le C$. \end{lemma} \begin{proof} We have that \[ -r d \theta = {y \over r} d x - {x \over r} d y. \] Moreover, we have that $d \theta \wedge d r \wedge d r'$ vanishes because the one-forms involved are all intrinsic to the $\Sigma_t$ hypersurfaces, which are two dimensional. Thus, we have that \[ -r d \theta \wedge d u \wedge d u' = r d \theta \wedge d t \wedge (d r + d r') = d t \wedge (-r d \theta) \wedge (d r + d r'). \] We shall now estimate $-r d \theta \wedge (d r + d r')$. 
We have that \[ d r + d r' = \left ({x \over r} + {x - a \over r'} \right ) d x + \left ({y \over r} + {y \over r'} \right ) d y. \] Thus, we have that \begin{equation} \begin{aligned} -r d \theta \wedge (d r + d r') = \left [{y \over r} \left ({y \over r} + {y \over r'} \right ) + {x \over r} \left ({x \over r} + {x - a \over r'} \right ) \right ] d x \wedge d y \\ = \left (1 + \sin(\theta) \sin(\theta') + \cos(\theta) \cos(\theta') \right ) d x \wedge d y, \end{aligned} \end{equation} as desired. We shall now prove a lower bound on \[ \left |1 + \sin(\theta) \sin(\theta') + \cos(\theta) \cos(\theta') \right |, \] from which the desired result will follow. We begin by noting that $\sin(\theta)$ and $\sin(\theta')$ always have the same sign, meaning that $\sin(\theta) \sin(\theta')$ is always nonnegative. Thus, when $x \le 0$ or $x \ge a$, we have a lower bound of $1$ because $\cos(\theta)$ and $\cos(\theta')$ also have the same sign, meaning that $\cos(\theta) \cos(\theta')$ is also nonnegative. We may thus restrict ourselves to the region where $0 \le x \le a$. Now, because $\tau \ge \delta_0 s$ and because of the bounds on $|u|$ and $|u'|$, we note that $|\theta| \ge c$ and $|\theta'| \ge c$ when $0 \le x \le a$ for some constant $c$ depending only on $\delta_0$ by Lemma~\ref{lem:xyrr'}. Indeed, when $0 \le x \le a$, $y^2$ is minimized when $x = 0$ and $x = a$. The only other critical point in this region corresponds to a maximum for $y^2$ (this behavior can be seen geometrically by examining the red ellipse in Figure~\ref{fig:lightconesaux}, and it can be explicitly computed by calculating the intersection of the cones of constant $u$ and constant $u'$ subject to the constraint that $u \le \delta_0 \tau$ and $u' \le \delta_0 \tau$). Now, at $x = 0$, we still have that $r$ is comparable to $s$ by Lemma~\ref{lem:rtr's-t}. 
This means that $y^2$ must still be comparable to $s^2$ at this point (we can in fact explicitly compute that it is equal to $(\tau - u - u')^2 \left (1 - {\tau - u - u' \over s} + {(\tau - u - u')^2 \over s^2} \right ) \ge c s^2$ for $\tau \ge \delta_0 s$ when $x = 0$ and $x = a$, see Lemma~\ref{lem:xyrr'}). By symmetry, $y^2$ must also be comparable to $s^2$ at $x = a$, meaning that $y^2$ is uniformly comparable to $s^2$ whenever $0 \le x \le a$, proving the lower bound on $|\theta|$ and $|\theta'|$ because $y^2 = r^2 \sin^2 (\theta) = (r')^2 \sin^2 (\theta')$. Thus, we have that $\sin(\theta) \sin(\theta') \ge c^2$ for some $c > 0$. Because \[ |\cos(\theta) \cos(\theta')| \le \sqrt{(1 - \sin^2 (\theta))(1 - \sin^2 (\theta'))} \le 1 - {\sin^2 (\theta) + \sin^2 (\theta') \over 2} \le 1 - \sin(\theta) \sin(\theta') \] by the AM--GM inequality, this gives us that $1 + \sin(\theta) \sin(\theta') + \cos(\theta) \cos(\theta') \ge 2 \sin(\theta) \sin(\theta') \ge 2 c^2$, and this implies the desired result. \end{proof} Finally, we must get estimates for the $(\overline{r},r')$ coordinate system. We shall only need these estimates when $\tau \le \delta_0 s$, $r' \ge 10$, $t \ge (1 - 20 \delta_0) s$, and $|\vartheta'| \le \delta_0$. As a consequence of these assumptions, the level sets of $r'$ and $r$ approximately coincide, so the following result morally follows from Lemma~\ref{lem:rrbar}. \begin{lemma} \label{lem:dr'drbar} Let $\tau \le \delta_0 s$. Then, in the region where $t \ge (1 - 20 \delta_0) s$, $|\vartheta'| \le \delta_0$, $-1 \le u$, $-1 \le \overline{u} \le \delta_0 \tau$, and $-1 \le u' \le \delta_0 \tau$, we have that \[ d x \wedge d y = J_{\overline{r},r'} d \overline{r} \wedge d r', \] where \[ |J_{\overline{r},r'}| \le C. \] \end{lemma} \begin{proof} For this proof only, we shall denote by $C$ a constant which is independent of $\delta_0$. Moreover, because we need to analyze the geometry of all three light cones in question at the same time, we shall not assume $b = 0$. Thus, the downward opening light cone has its tip at the point $(s,a,b)$ (see Section~\ref{sec:coordinates}). We recall that $\Theta$ denotes the $\theta$ coordinate of the points where $r' = 0$. 
We also recall that $\Theta_i$ for $i = 1, 2, 3, 4$ denote the four lines of intersection of the cones $u = 0$ and $\overline{u} = 0$, and we recall the analogous five quantities in terms of $\overline{\theta}$. We shall first show that $\Theta$ must be very close to one of the $\Theta_i$. Without loss of generality, we can assume that $0 \le \Theta \le {\pi \over 2}$. We shall show that $\Theta$ must be very close to $\Theta_1$ in this case, and this will give us the desired estimates. The arguments for the other cases follow in an analogous way by symmetry. The first observation is that, in the region in question, we have that $\Theta_1 - C \delta_0 \le \theta \le \Theta_1 + C \delta_0$ where $C$ depends only on $\lambda_1$ and $\lambda_2$. Indeed, outside of this region, we have that $\overline{u}$ will no longer be between $-1$ and $\delta_0 \tau$. This is because $\left |{\partial \overline{u} \over \partial \theta} (r,\theta) \right | \ge c r$ for some $c$ depending only on $\lambda_1$ and $\lambda_2$. To see this, we recall that $\partial_\theta = x \partial_y - y \partial_x$, and we note that \[ 2 \overline{r} {\partial \overline{r} \over \partial \theta} (r,\theta) = (x \partial_y - y \partial_x) (\lambda_1^2 x^2 + \lambda_2^2 y^2) = 2 (\lambda_2^2 - \lambda_1^2) x y. \] The desired result then follows because $x$ and $y$ are both comparable to $r$ and $\overline{r}$ in this region. A similar argument works to show that $\overline{\Theta}_1 - C \delta_0 \le \overline{\theta} \le \overline{\Theta}_1 + C \delta_0$ because we have that $u \le 100 \delta_0 s$ in the region in question. The second observation is that we must have that $|\Theta - \Theta_1| \le C \delta_0$ where $C$ depends only on $\lambda_1$ and $\lambda_2$ in order for the region to be nonempty. Indeed, given that we are restricting to the region where $t \ge (1 - 20 \delta_0) s$, we note that the region in question requires that $r' \le 20 \delta_0 s + 1$. 
This restricts us to the region where $|\theta - \Theta| \le 100 \delta_0$ (see also Lemma~\ref{lem:largecirclearc}). Because $\theta$ is already restricted to being close to $\Theta_1$, we thus have that $|\Theta - \Theta_1| \le C \delta_0$, as desired. See Figure~\ref{fig:vartheta'smalltlarge} for a geometric description of what is going on. These considerations will allow us to compare $\overline{\theta}$ and $\theta'$ with $\overline{\Theta}_1$ and $\Theta_1$, respectively. \begin{figure} \centering \begin{tikzpicture} \draw[very thick] (0,0) circle (4.5); \draw[very thick] (0,0) circle (3.8); \draw[very thick] (0,0) ellipse (6 and 3); \draw[very thick] (0,0) ellipse (5.7 and 2.7); \draw[red,fill=red] (3.1,2.6) circle (0.15); \end{tikzpicture} \caption{A figure showing why $\theta$ must be close to $\Theta$ and $\Theta_1$, and why $\overline{\theta}$ must be close to $\overline{\Theta}_1$, where close means within some constant multiplied by $\delta_0$. The red disk represents $r' \le 20 \delta_0 s + 1$, which is the largest $r'$ can be when $t \ge (1 - 20 \delta_0) s$. The line $\theta = \Theta$ is the line going from the center of the circular annulus to the center of the red disk. The circular annulus is $(1 - 50 \delta_0) s \le r \le s + 1$ and the elliptical annulus is $(1 - 30 \delta_0) s \le \overline{r} \le s + 1$. We are interested in points contained in the intersection of both annuli and the red disk. Because $\tau \le \delta_0 s$, we can see that we are restricted to a small range of $\theta$ and $\overline{\theta}$ values. Of course, we additionally have the restriction that $|\vartheta'| \le \delta_0$, but we have not drawn this restriction.} \label{fig:vartheta'smalltlarge} \end{figure} Because we have that $|\vartheta'| \le \delta_0$ and that $|\overline{\theta} - \overline{\Theta}_1| \le C \delta_0$, it is natural to get expressions for $d \overline{r}$ and $d r'$ in terms of $\overline{\theta}$ and $\vartheta'$. 
We have that \[ \overline{r} d \overline{r} = \lambda_1^2 x d x + \lambda_2^2 y d y. \] Using that $x = \lambda_1^{-1} \overline{x} = \lambda_1^{-1} \overline{r} \cos(\overline{\theta})$ and that $y = \lambda_2^{-1} \overline{y} = \lambda_2^{-1} \overline{r} \sin(\overline{\theta})$, this gives us that \[ d \overline{r} = \lambda_1 \cos(\overline{\theta}) d x + \lambda_2 \sin(\overline{\theta}) d y. \] Now, for $d r'$, we first note that \[ \theta' = \arctan \left ({y - b \over x - a} \right ) - \Theta. \] Moreover, we have that \[ d r' = {x - a \over r'} d x + {y - b \over r'} d y. \] Using that ${x - a \over r'} = \cos \left (\arctan \left ({y - b \over x - a} \right ) \right )$ and that ${y - b \over r'} = \sin \left (\arctan \left ({y - b \over x - a} \right ) \right )$ then gives us that \[ d r' = \cos(\theta' + \Theta) d x + \sin(\theta' + \Theta) d y. \] Thus, we have that \[ d \overline{r} \wedge d r' = (\lambda_1 \cos(\overline{\theta}) \sin(\theta' + \Theta) - \lambda_2 \sin(\overline{\theta}) \cos(\theta' + \Theta)) d x \wedge d y. \] Now, we have that \[ \vartheta' = \pi - \theta' = \pi - \arctan \left ({y - b \over x - a} \right ) + \Theta. \] From the bounds on $\vartheta'$, it follows that \[ \left |\arctan \left ({y - b \over x - a} \right ) - \pi - \Theta \right | \le \delta_0. \] Thus, we have that $\theta' + \Theta = \pi + \Theta + O(\delta_0) = \pi + \Theta_1 + O(\delta_0)$. From this, it follows that \[ d \overline{r} \wedge d r' = (\lambda_2 \sin(\overline{\Theta}_1) \cos(\Theta_1) - \lambda_1 \cos(\overline{\Theta}_1) \sin(\Theta_1) + O(\delta_0)) d x \wedge d y. \] This implies the desired result for $\delta_0$ sufficiently small because the quantity is nonzero when $\delta_0 = 0$. 
Indeed, dividing by $\cos(\Theta_1)$ and $\cos(\overline{\Theta}_1)$ (which we can do because $\Theta_1$ and $\overline{\Theta}_1$ are uniformly bounded away from half integer multiples of $\pi$ given our restrictions on $\lambda_1$ and $\lambda_2$), we get that \begin{equation} \begin{aligned} {1 \over \cos(\Theta_1) \cos(\overline{\Theta}_1)} d \overline{r} \wedge d r' = (\lambda_2 \tan(\overline{\Theta}_1) - \lambda_1 \tan(\Theta_1) + O(\delta_0)) d x \wedge d y \\ = \left ({\lambda_2 \overline{y} \over \overline{x}} - {\lambda_1 y \over x} + O(\delta_0) \right ) d x \wedge d y = \left ({(\lambda_2^2 - \lambda_1^2) \sqrt{1 - \lambda_1^2} \over \lambda_1 \sqrt{\lambda_2^2 - 1}} + O(\delta_0) \right ) d x \wedge d y, \end{aligned} \end{equation} which is bounded away from $0$ uniformly by the restrictions we have placed on $\lambda_1$ and $\lambda_2$ for $\delta_0$ sufficiently small. We note that we have simply solved for ${y \over x}$ in the equations \[ t^2 = x^2 + y^2 = \lambda_1^2 x^2 + \lambda_2^2 y^2, \] which gives the different values of $\tan(\theta)$ and $\tan(\overline{\theta})$ corresponding to the four lines of intersection of the two forward opening light cones. This gives us the desired result. \end{proof} The final geometric result we shall need concerns the intersections of circles with sets having small diameter relative to the radius of the circle. \begin{lemma} \label{lem:largecirclearc} Let $S$ be a circle of radius $r$, and let $D$ be a region with diameter $d \le {1 \over 2} r$. Then, we have that the length of the portion of $S$ contained in $D$ is controlled by $C d$. \end{lemma} \begin{proof} This follows immediately by going into polar coordinates adapted to $S$ and noting that the set of all $\theta$ for which $S \cap D$ is nonempty must be contained in an interval of length at most $C {d \over r}$. 
Indeed, if this were not true, it would contradict that $D$ has diameter $d$ because the arc length along $S$ is comparable to the Euclidean distance at scales which are small compared to the radius of the circle. \end{proof} We note that an analogous statement holds for ellipses. \subsection{Scaling vector field geometry} \label{sec:SGeometry} Throughout this section, we recall that we can take $s \ge {1 \over \delta_0^{10}}$. Moreover, we shall once again be assuming without loss of generality that $b = 0$, meaning that $a = s - \tau$ (see Section~\ref{sec:coordinates}). We shall use the above ideas in conjunction with the scaling vector field $S = t \partial_t + r \partial_r = t \partial_t + x \partial_x + y \partial_y$. We first note that $S$ satisfies good commutation properties with both $\Box$ and $\Box'$. \begin{lemma} The vector field $S$ satisfies $[\Box,S] = 2 \Box$. Similarly, we have that $[\Box',S] = 2 \Box'$. \end{lemma} \begin{proof} This follows easily from a computation. Indeed, because $[\partial_\alpha,S] = \partial_\alpha$ for every coordinate derivative $\partial_\alpha$, any constant coefficient operator which is homogeneous of second order in the coordinate derivatives satisfies this commutation relation. \end{proof} Because of this, $S$ can effectively be used as a commutator for this system of equations. In order to use $S$ in the way described in Section~\ref{sec:anisotropicdescription}, we must find a way to write general vectors in the frame consisting of $S$ and $\overline{\partial}_f$, the good derivatives of $f$. This is the content of the following lemma, which writes $\underline{L}'$ in terms of $S$ and $\overline{\partial}_f$. We introduce the expression \begin{equation} \begin{aligned} \gamma = -{1 \over t + r' + {a (x - a) \over r'}}. \end{aligned} \end{equation} This expression arises as a coefficient in frame decompositions, and it shall be very important in the following. \begin{lemma} \label{lem:du'S} Let $\tau \ge 100$. 
In the region where $u' \le \delta_0 \tau$, we have that \begin{equation} \label{eq:du'S} \begin{aligned} \underline{L}' h = 2 \gamma S(h) + \gamma \left (t - r' - {a (x - a) \over r'} \right ) L' (h) + \gamma {2 a y \over (r')^2} \partial_{\theta'} (h) \end{aligned} \end{equation} where $h$ is any smooth function. \end{lemma} \begin{proof} We have that \[ S = t \partial_t + r \partial_r = -{1 \over 2} t (L' + \underline{L}') + r' \partial_{r'} + a \partial_x = -{1 \over 2} t (L' + \underline{L}') + {1 \over 2} r' (L' - \underline{L}') + a \partial_x. \] Moreover, we have that \[ \partial_x = {\partial r' \over \partial x} \partial_{r'} + {\partial \theta' \over \partial x} \partial_{\theta'} = {x - a \over r'} \partial_{r'} - {y \over (r')^2} \partial_{\theta'} = {x - a \over 2 r'} (L' - \underline{L}') - {y \over (r')^2} \partial_{\theta'}. \] Thus, we have that \[ 2 S = -t (L' + \underline{L}') + r' (L' - \underline{L}') + {a (x - a) \over r'} (L' - \underline{L}') - {2 a y \over (r')^2} \partial_{\theta'}. \] Thus, we have that \[ -\left (t + r' + {a (x - a) \over r'} \right ) \underline{L}' = 2 S + \left (t - r' - {a (x - a) \over r'} \right ) L' + {2 a y \over (r')^2} \partial_{\theta'}. \] If we recall that $\gamma^{-1} = -\left (t + r' + {a (x - a) \over r'} \right ) = -{t r' + (r')^2 + a (x - a) \over r'}$, we thus have that \[ \underline{L}' = 2 \gamma S + \gamma \left (t - r' - {a (x - a) \over r'} \right ) L' + \gamma {2 a y \over (r')^2} \partial_{\theta'}, \] as desired. \end{proof} Terms not involving $S$ and having the factor of $\gamma$ will be improved because $\gamma$ will be shown to be small. Moreover, they contain good derivatives of $f$, meaning that they gain a weight of ${1 \over r'}$, which is comparable to ${1 \over s - t}$ in the region in question. The term with $S$ is dangerous because $S f$ can be large for the auxiliary multiplier. However, we can integrate it by parts and use the fact that we can commute the equation with $S$. 
The gain is then that the factor of $\gamma$ remains. However, when integrating by parts, it is possible for $S$ to hit $\gamma$, meaning that we must control $S(\gamma)$ as well. Geometrically, the function $\gamma$ is well behaved because $S$ everywhere pierces the downward opening light cone of $f$ and because $S$ has length comparable to $t + r$. The angle between $S$ and the normal to the tangent space of the downward pointing light cone for $f$ may, however, be close to ${\pi \over 2}$, meaning that $S$ may be almost tangent to the light cone for $f$. The reader may wish to keep Figure~\ref{fig:lightconesaux} in mind in order to get an idea of the geometric interplay between the scaling vector field and the light cones in question. The smallness of $\gamma$ and the control of $S(\gamma)$ are the content of the following lemma. \begin{lemma} \label{lem:gammaSgammabound} Let $\tau \ge 100$, let $-1 \le u' \le \delta_0 \tau$, and let $|u| \le 100 \tau$. \begin{enumerate} \item We have that \begin{equation} \label{eq:gammaest} \begin{aligned} |\gamma| \le {C \over \tau}. \end{aligned} \end{equation} \item In the region where $|\vartheta'| = |\pi - \theta'| \ge \delta_0 / 2$ and $r' \ge 10$, we have that \begin{equation} \label{eq:Sgammaesttheta'} \begin{aligned} |S(\gamma)| \le {C \over r'}. \end{aligned} \end{equation} \item In the region where in fact $-1 \le u \le \delta_0 \tau$, we have that \begin{equation} \label{eq:Sgammaestusmall} \begin{aligned} |S(\gamma)| \le {C \over \tau}. \end{aligned} \end{equation} \item In the region where $t \le (1 - 10 \delta_0) s$, we have that \begin{equation} \label{eq:Sgammaesttlarge} \begin{aligned} |S(\gamma)| \le {C \over \tau}. \end{aligned} \end{equation} \item In the region where $\tau \ge \delta_0 s$ and $r' \ge 10$, we have that \begin{equation} \label{eq:Sgammaesttaularge} \begin{aligned} |S(\gamma)| \le {C \over r'}. 
\end{aligned} \end{equation} \end{enumerate} \end{lemma} Before proceeding to the proof, let us try to briefly motivate these different regions. The choice of the regions is motivated by the fact that we have to take advantage of different things in different areas. We can then decompose the error integrals in Section~\ref{sec:closingpointwise} into sums of integrals over these localized regions, control each one individually, and then add together the (finitely many) resulting bounds. When $\tau \ge \delta_0 s$, the scaling vector field uniformly pierces the light cone for $f$, so we expect to have good estimates in this region. When $s - t$ is comparable to $s$, we can use the fact that all of the expressions will end up having good powers of $r'$. Because $r'$ is then comparable to $s - t$ along the light cone for $f$, we can hope to take advantage of this. When $u \le \delta_0 \tau$ and $u' \le \delta_0 \tau$, we can control how tangent the two circles from Figure~\ref{fig:psifSigma_t} are. This is important for controlling $\gamma$ and $S(\gamma)$ because the coordinate $y$ naturally appears, and preventing the circles from becoming too tangent is the same thing as getting a lower bound on the $y$ coordinate, which is the vertical distance in Figure~\ref{fig:psifSigma_t}. When $|\vartheta'| \ge \delta_0$, we can get strong estimates for $\gamma$, allowing us to control everything. \begin{proof} We shall do several computations and specialize as necessary. The case where $\tau \le \delta_0 s$ will require the most work. We have that \begin{equation} \label{eq:gammaest1} \begin{aligned} \gamma = -{r' \over t r' + (r')^2 + a (x - a)} = -{1 \over t + r' + a \cos(\theta')} = -{1 \over t + s - t - u' + (s - \tau) \cos(\theta')} \\ = -{1 \over s - u' + (s - \tau) \cos(\theta')}. \end{aligned} \end{equation} Now, we note that \[ s - u' + (s - \tau) \cos(\theta') \ge {\tau \over 2} \] because $u' \le \delta_0 \tau$. 
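Indeed, to spell out this elementary step, using only that $\cos(\theta') \ge -1$, we have that \[ s - u' + (s - \tau) \cos(\theta') \ge (s - u') - (s - \tau) = \tau - u' \ge (1 - \delta_0) \tau \ge {\tau \over 2}. \] 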
This proves the first assertion \eqref{eq:gammaest} about $\gamma$. Before proceeding, we shall make a few remarks. When $\tau \ge \delta_0 s$, the denominator in \eqref{eq:gammaest1} is very large, and this makes the proof easier. Moreover, we note that it will be natural to consider $\vartheta' = \pi - \theta'$. This is because the most delicate region is where $\theta'$ is close to $\pi$. As we shall see below, we shall break up the expressions into regions where $\vartheta'$ is of different sizes depending on $\tau$ and $s$. We note that \begin{equation} \label{eq:Sgammaest1} \begin{aligned} S (\gamma) = -S \left ({1 \over t + r' + a \cos(\theta')} \right ) = {t \over (t + r' + a \cos(\theta'))^2} + {r \partial_r r' + a r \partial_r \cos(\theta') \over (t + r' + a \cos(\theta'))^2} \\ = (t + r \partial_r r' + a r \partial_r \cos(\theta')) \gamma^2. \end{aligned} \end{equation} The way in which these terms can be controlled is geometrically motivated. We begin by noting that $\partial_r r' \approx -1$ on a large set when $\tau$ is small relative to $s$ (when $\tau \ge \delta_0 s$, the proof is more immediate, as we shall see). Thus, we think that $\gamma^2 (t + r \partial_r r') \approx \gamma^2 u$. Moreover $y$ should be small relative to other quantities on a large set by Lemma~\ref{lem:xyrr'}. Thus, we expect to be able to control expressions with $y$ in the numerator and other quantities in the denominator. More precisely, we note that $t + r \partial_r r' + a r \partial_r \cos(\theta')$ vanishes in the limit as $\tau \rightarrow 0$. Of course, when $\tau \rightarrow 0$, we have that $\gamma$ blows up because the downward opening and upward opening light cones become tangent (see Figure~\ref{fig:lightconesaux}). However, this gives us hope that this term can be controlled when $\tau$ is small relative to $s$, and that \eqref{eq:Sgammaest1} is small. We now make these ideas precise. 
We shall first calculate the quantity $t + r \partial_r r' + a r \partial_r \cos(\theta')$. We have that \[ r \partial_r r' = x \partial_x (r') + y \partial_y (r') = {x (x - a) \over r'} + {y^2 \over r'}, \] and that \[ r \partial_r \cos{\theta'} = x \partial_x \left ({x - a \over r'} \right ) + y \partial_y \left ({x - a \over r'} \right ) = {x \over r'} - {x (x - a)^2 \over (r')^3} - {y^2 (x - a) \over (r')^3}. \] Thus, we have that \begin{equation} \label{eq:Sr'theta'} \begin{aligned} S(\gamma) = \gamma^2 \left (t + r \partial_r r' + a r \partial_r \cos(\theta') \right ) = \gamma^2 \left [t + {x (x - a) \over r'} + {y^2 \over r'} + {a x \over r'} - {a x (x - a)^2 \over (r')^3} - {a y^2 (x - a) \over (r')^3} \right ]. \end{aligned} \end{equation} This already suffices to control $S(\gamma)$ when $\tau \ge \delta_0 s$ (we note that $\tau \le s$ always holds). Indeed, in this region, we have that $\gamma^2 \le {C \over s^2}$. Thus, every term in \eqref{eq:Sr'theta'} multiplied by $\gamma^2$ is bounded by ${C \over r'}$. This implies \eqref{eq:Sgammaesttaularge}, and it also implies \eqref{eq:Sgammaesttheta'}, \eqref{eq:Sgammaestusmall}, and \eqref{eq:Sgammaesttlarge} in the region where $\tau \ge \delta_0 s$ (this requires comparing $r'$ with $\tau$ using Lemma~\ref{lem:rtr's-t} for \eqref{eq:Sgammaestusmall} and Lemma~\ref{lem:tvartheta} for \eqref{eq:Sgammaesttlarge}). Thus, for the remainder of the proof, we shall assume that $\tau \le \delta_0 s$. Now, before proceeding, we shall explicitly group the terms in \eqref{eq:Sr'theta'} into three different categories because they will be dealt with differently. Following the discussion at the beginning of the proof, we shall bound the terms in \eqref{eq:Sr'theta'} with powers of $y$ in the numerator directly. We shall treat $t + {x (x - a) \over r'}$ as a single term and extract a power of $u$ plus other terms which are well behaved. 
Then, we shall treat ${a x \over r'} - {a x (x - a)^2 \over (r')^3}$ as a single term, noting that it is equal to ${a x y^2 \over (r')^3}$. Thus, the first group consists of \[ \gamma^2 \left [t + {x (x - a) \over r'} \right ]. \] The second group consists of \[ \gamma^2 \left [{a x \over r'} - {a x (x - a)^2 \over (r')^3} \right ]. \] The third group consists of \[ \gamma^2 \left [{y^2 \over r'} - {a y^2 (x - a) \over (r')^3} \right ]. \] The second group of terms is the hardest to control. Using Lemma~\ref{lem:xyrr'}, we have that \begin{equation} \begin{aligned} {x (x - a) \over r'} = -r \left (1 - {\tau - u - u' \over r} + {\tau - u - u' \over a} - {(\tau - u - u')^2 \over 2 r a} \right ) \\ \times \left (1 - {\tau - u - u' \over r'} + {\tau - u - u' \over a} - {(\tau - u - u')^2 \over 2 r' a} \right ). \end{aligned} \end{equation} Expanding and grouping this with $t$ in \eqref{eq:Sr'theta'} as described above, we are able to extract a factor of $u = t - r$. Indeed, using \eqref{eq:tauuu'rr'ar}, we have that \[ t + {x (x - a) \over r'} = t - r + {r \over r'} (\tau - u - u') + O(\tau - u - u'), \] where we have used that $0 \le \tau - u - u' \le C r'$ (see Lemma~\ref{lem:rtr's-t}). We have that $\gamma^2 |t - r| \le C |\gamma|$ because $|u| \le C \tau$. Moreover, we have that $\gamma^2 |O(\tau - u - u')| \le C |\gamma|$ because $|\gamma| \le {C \over \tau}$. The only remaining term is ${r \over r'} (\tau - u - u')$. 
Because $a \ge {s \over 2}$ when $\tau \le \delta_0 s$, we have that \[ {r \over r'} (\tau - u - u') \le C {a r (\tau - u - u') \over (r')^2}, \] and this term (see \eqref{eq:Sgammaworstterm}) is controlled below. Altogether, we have so far shown that \begin{equation} \label{eq:Sgammat-r} \begin{aligned} \left |\gamma^2 \left [t + {x (x - a) \over r'} \right ] \right | \le C |\gamma| + C \gamma^2 {a r (\tau - u - u') \over (r')^2}. \end{aligned} \end{equation} This reduces controlling the first group of terms to controlling \eqref{eq:Sgammaworstterm}, which comes up in the second group of terms. Now, by Lemma~\ref{lem:xyrr'}, we have that \[ y^2 = (r')^2 - (r')^2 \left (1 - {\tau - u - u' \over r'} + {\tau - u - u' \over a} - {(\tau - u - u')^2 \over 2 r' a} \right )^2 \le C r'(\tau - u - u') \] because we are now assuming that $\tau \le \delta_0 s$. Thus, we have that \begin{equation} \label{eq:Sgammaworstterm} \begin{aligned} \left |{a x \over r'} - {a x (x - a)^2 \over (r')^3} \right | = \left |{a x ((r')^2 - (x - a)^2) \over (r')^3} \right | = \left |{a x y^2 \over (r')^3} \right | \le C {a r (\tau - u - u') \over (r')^2}. \end{aligned} \end{equation} Multiplying by $\gamma^2$, we have that \[ \left |\gamma^2 \left ({a x \over r'} - {a x (x - a)^2 \over (r')^3} \right ) \right | \le C {a r (\tau - u - u') \over (r')^2 (t + r' + a \cos(\theta'))^2}. \] We now have everything in place to establish \eqref{eq:Sgammaesttheta'}. We first recall that \[ t + r' + a \cos(\theta') = t + s - t - u' + (s - \tau) \cos(\theta') = s - u' + (s - \tau) \cos(\theta'). 
\] Now, for $|\vartheta'| = |\theta' - \pi| \ge \delta_0 / 2$, we have that \begin{equation} \label{eq:gammavarthetalarge} \begin{aligned} |s - u' + (s - \tau) \cos(\theta')| \ge c s \end{aligned} \end{equation} for some $c > 0$ (indeed, writing $\cos(\theta') = -\cos(\vartheta')$, this quantity is equal to $(\tau - u') + (s - \tau)(1 - \cos(\vartheta')) \ge c s$ because $u' \le \delta_0 \tau$ and $\tau \le \delta_0 s$), resulting in the bound \[ |\gamma| \le {C \over s}, \] and also the bound \[ \left |\gamma^2 \left ({a x \over r'} - {a x (x - a)^2 \over (r')^3} \right ) \right | \le C {a r (\tau - u - u') \over s^2 (r')^2} \le {C \over r'}, \] where we have used Lemma~\ref{lem:rtr's-t} to say that ${|\tau - u - u'| \over r'} \le C$. This controls the second group of terms in this region. We also note that \[ \gamma^2 {y^2 \over r'} \le {C \over r'}, \] and that \[ \gamma^2 {a y^2 (x - a) \over (r')^3} \le {C \over s}, \] which controls the third group of terms in this region. Along with \eqref{eq:Sgammat-r}, these considerations prove \eqref{eq:Sgammaesttheta'}, and they also establish \eqref{eq:Sgammaestusmall} in the region where $|\vartheta'| \ge \delta_0 / 2$ after using Lemma~\ref{lem:rtr's-t} to say that $r' \ge c \tau$ when $u \le \delta_0 \tau$, which is an assumption for the estimate \eqref{eq:Sgammaestusmall}. All that remains is to establish \eqref{eq:Sgammaesttlarge} and \eqref{eq:Sgammaestusmall}. More precisely, we must only consider the case of $\tau \le \delta_0 s$ in the region where $|\vartheta'| \le \delta_0$ by what we have done above. We shall focus first on the most difficult term given by \[ \gamma^2 {a r (\tau - u - u') \over (r')^2}. \] In the grouping we defined earlier, this corresponds to controlling the second group of terms. The third group of terms will be controlled after. 
Now, when $|\vartheta'| \le \delta_0$, we compute the Taylor expansion around $\vartheta' = \pi - \theta' = 0$ for $t + r' + a \cos(\theta') = s - u' + (s - \tau) \cos(\theta')$, giving us that \begin{equation} \begin{aligned} s - u' + (s - \tau) \cos(\theta') = s - u' - (s - \tau) + {1 \over 2} (s - \tau) (\vartheta')^2 + (s - \tau) O((\vartheta')^4) \\ = (\tau - u') + {1 \over 2} (s - \tau) (\vartheta')^2 + (s - \tau) O((\vartheta')^4). \end{aligned} \end{equation} Thus, it suffices to control \begin{equation} \label{eq:Sgammaworstterm1} \begin{aligned} {a r \tau \over (r')^2 \left (\tau - u' + {1 \over 2} (s - \tau) (\vartheta')^2 + (s - \tau) O((\vartheta')^4) \right )^2}. \end{aligned} \end{equation} Let us first assume that $t \le (1 - 10 \delta_0) s$. Then, the quantity \eqref{eq:Sgammaworstterm1} is controlled by \[ C {a r \tau \over (r')^2 \tau^2}. \] By Lemma~\ref{lem:tvartheta}, we have that $r' \ge c s$ in this region, meaning that this quantity is controlled by \[ {C \over \tau}. \] This controls this term in the region where $t \le (1 - 10 \delta_0) s$. More generally, there is a transition of dominant terms in the denominator between $\tau - u'$ and ${1 \over 2} (s - \tau) (\vartheta')^2$ when they are equal. This occurs when $(\vartheta')^2 = 2{\tau - u' \over s - \tau}$. Now, we have that $\tau - u' \ge (1 - \delta_0) \tau$ because $u' \le \delta_0 \tau$. Moreover, we are assuming that $\tau \le \delta_0 s$. This means that $(\vartheta')^2$ must be comparable to ${\tau \over s}$. Thus, it is natural to consider two regions, one where $(\vartheta')^2 \le C {\tau \over s}$ and the other where $(\vartheta')^2 > C {\tau \over s}$ for some constant $C$. However, for technical reasons, it is easier to work instead with the regions where $t \le c s$ and where $t \ge c s$ for some $0 < c < 1$. 
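To make the comparability of $(\vartheta')^2$ and ${\tau \over s}$ at the transition concrete, we record the elementary bracketing (a routine computation under the assumptions above). At the balance point, we have that $(\vartheta')^2 = 2 {\tau - u' \over s - \tau}$, and the bounds $(1 - \delta_0) \tau \le \tau - u' \le \tau + 1$ (recall that $u' \ge -1$) and $(1 - \delta_0) s \le s - \tau \le s$ give \[ 2 (1 - \delta_0) {\tau \over s} \le (\vartheta')^2 \le {2 (\tau + 1) \over (1 - \delta_0) s}, \] so $(\vartheta')^2$ is indeed pinched between constant multiples of ${\tau \over s}$ once $\tau \ge 1$. 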
We prove in Lemma~\ref{lem:tvartheta} that $t \ge (1 - 10 \delta_0) s$, $u' \le \delta_0 \tau$, and $u \le \delta_0 \tau$ imply that $|\vartheta'| > c {\sqrt{\tau} \over \sqrt{s}}$ when $\tau \le \delta_0 s$. We note that we can now freely assume all of these conditions because all that remains to be shown for the second group of terms is that they are controlled by ${C \over \tau}$ with these assumptions. Now, in this region, we have that the quantity \eqref{eq:Sgammaworstterm1} is controlled by \begin{equation} \label{eq:gammabound1} \begin{aligned} C {a r \tau \over (r')^2 s^2 (\vartheta')^4}. \end{aligned} \end{equation} Moreover, we note that $r' \vartheta'$ is comparable to $y$ (note that we are assuming that $|\vartheta'| \le \delta_0$). Then, because $y \vartheta'$ is comparable to $\tau$ by Lemma~\ref{lem:yvartheta} below, we have that \[ {a r \tau \over (r')^2 s^2 (\vartheta')^4} \le C {\tau \over (r')^2 (\vartheta')^4} \le C {\tau \over y^2 (\vartheta')^2} \le {C \over \tau}. \] This completes the control of the second group of terms in every region. We now proceed to the third and final group of terms. We have that \[ \gamma^2 {a y^2 (x - a) \over (r')^3} \le C \gamma^2 {a \tau \over r'}, \] where we have used Lemma~\ref{lem:xyrr'} to control $y^2$. Now, for $t \le (1 - 10 \delta_0) s$, we have that $r'$ is comparable to $s$ (see Lemma~\ref{lem:tvartheta}), meaning that this quantity is controlled by \[ C \gamma^2 \tau \le {C \over \tau}. \] When $t \ge (1 - 10 \delta_0) s$, we have that ${r \over r'} \ge 1$ because $\tau \le \delta_0 s$, meaning that \[ C \gamma^2 {a \tau \over r'} \le C \gamma^2 {a r \tau \over (r')^2}, \] and the bound follows in the same way as for the term \eqref{eq:Sgammaworstterm} above. Finally, for the last term in this third group of terms, we use Lemma~\ref{lem:xyrr'} to control $y^2$, giving us that \[ \gamma^2 {y^2 \over r'} \le C \gamma^2 \tau \le {C \over \tau}, \] as desired. 
\end{proof} We now control the other coefficients from the expansion in \eqref{eq:du'S}, which will allow us to finally control the error integrals. Now, because the $L'$ derivative gains a weight of ${1 \over r'}$ when applied to $f$ (see \eqref{eq:assumeddecay2}), it is natural to consider instead the expression \[ 2 \gamma S(h) + \gamma {1 \over r'} \left (t - r' - {a (x - a) \over r'} \right ) r' L' (h) + \gamma {2 a y \over (r')^2} \partial_{\theta'} (h). \] Note that $\partial_{\theta'}$ is already correctly normalized. We then have the following lemma which controls the coefficients of $L'$ and $\partial_{\theta'}$. \begin{lemma} \label{lem:du'dv'dtheta'} Let $u \le 100 \tau$, and let $-1 \le u' \le \delta_0 \tau$. \begin{enumerate} \item In the region where $u \le \delta_0 \tau$, we have that \[ \left |\gamma {1 \over r'} \left (t - r' - {a (x - a) \over r'} \right ) \right | \le {C \over \tau}, \] and we have that \[ \left |\gamma {2 a y \over (r')^2} \right | \le {C \over \tau}. \] \item In the region where $r' \ge 10$ and $|\vartheta'| = |\pi - \theta'| \ge \delta_0 / 2$, we have that \[ \left |\gamma {1 \over r'} \left (t - r' - {a (x - a) \over r'} \right ) \right | \le {C \over r'}, \] and we have that \[ \left |\gamma {2 a y \over (r')^2} \right | \le {C \over r'}. \] \item In the region where $t \le (1 - 10 \delta_0) s$ and $r' \ge 10$, we have that \[ \left |\gamma {1 \over r'} \left (t - r' - {a (x - a) \over r'} \right ) \right | \le {C \over \tau} + {C \over r'}, \] and we have that \[ \left |\gamma {2 a y \over (r')^2} \right | \le {C \over \tau} + {C \over r'}. \] \item In the case where $\tau \ge \delta_0 s$, we have that \[ \left |\gamma {1 \over r'} \left (t - r' - {a (x - a) \over r'} \right ) \right | \le {C \over r'}, \] and we have that \[ \left |\gamma {2 a y \over (r')^2} \right | \le {C \over r'}. \] \end{enumerate} \end{lemma} \begin{proof} We first consider the relative size of $\tau$ and $s$. 
When $\tau \ge \delta_0 s$, the result is immediate because $\gamma$ is controlled by ${C \over s}$ by Lemma~\ref{lem:gammaSgammabound}. We may thus assume that $\tau \le \delta_0 s$. Moreover, when $t \le (1 - 10 \delta_0) s$, we note that $r'$ is comparable to $s$ (see Lemma~\ref{lem:tvartheta}). Thus, in this region, we can bound the coefficients by $C \gamma$, as desired. We now consider the region where $t \ge (1 - 20 \delta_0) s$ and $|\vartheta'| \le \delta_0$. With these conditions, we are only interested in the case where we additionally have that $u \le \delta_0 \tau$. We note that the coefficient of $r' L'$ is bounded by \[ C \gamma + {C \gamma a \over r'}. \] The first of these terms is bounded by ${C \over \tau}$ by Lemma~\ref{lem:gammaSgammabound}. For the second term, we note that \[ \gamma {a \over r'} \le C {a \over r' s (\vartheta')^2} \le C {1 \over y \vartheta'}, \] where we have bounded $\gamma$ in the same way as in \eqref{eq:gammabound1}. This quantity is then controlled by ${C \over \tau}$ by Lemma~\ref{lem:yvartheta}. Then, we have that \[ \left |\gamma {2 a y \over (r')^2} \right | \le C \gamma {a \over r'}, \] meaning that the same argument works here. Finally, in the region where $|\vartheta'| \ge \delta_0 / 2$ and $r' \ge 10$, we note that $\gamma$ is comparable to ${1 \over s}$. Thus, we have that \[ \gamma {a \over r'} \le {C \over r'}, \] giving us the desired result. We note that we are using Lemma~\ref{lem:rtr's-t} to say that $r' \ge c \tau$ in the region where $u \le \delta_0 \tau$. \end{proof} \begin{lemma} \label{lem:tvartheta} Let $\tau \le \delta_0 s$. \begin{enumerate} \item In the region where $t \ge {1 \over 4} s$, $u' \le \delta_0 \tau$, and $u \le \delta_0 \tau$, we have that $|\vartheta'| \ge c {\sqrt{\tau} \over \sqrt{s}}$, and we have that $|\vartheta'| \ge {1 \over 10} |\theta|$. An analogous statement holds when $t \le {3 \over 4} s$ with the roles of $\vartheta'$ and $\theta$ interchanged. 
\item In the region where $t \le (1 - 10 \delta_0) s$ and $u' \le \delta_0 \tau$, we have that $r' \ge c s$. Similarly, in the region where $t \ge 10 \delta_0 s$ and $u \le \delta_0 \tau$, we have that $r \ge c s$. \end{enumerate} \end{lemma} \begin{proof} We first assume that $t \le (1 - 10 \delta_0) s$ and that $u' \le \delta_0 \tau$. We then have that $s - t \ge 10 \delta_0 s$. Moreover, because $u' \le \delta_0 \tau$ and $\tau \le s$, we have that $u' \le \delta_0 s$. Thus, we have that \[ r' = s - t - u' \ge 9 \delta_0 s. \] An analogous argument shows a similar lower bound for $r$ instead of $r'$. This proves the second assertion. We shall now prove the first assertion. We shall perform the argument to find a lower bound for $|\vartheta'|$. Finding a lower bound for $|\theta|$ follows in an analogous way. We assume that $t \ge {1 \over 4} s$, $u' \le \delta_0 \tau$, and that $u \le \delta_0 \tau$. We then have that $r = t - u \ge {1 \over 5} s$, and we have that \begin{equation} \label{eq:r'boundtlarge} \begin{aligned} r' \le s - t + 1 \le {3 \over 4} s + 1. \end{aligned} \end{equation} Then, because $y = r \sin(\theta) = r' \sin(\theta') = r' \sin(\vartheta')$, we must have that $|\vartheta'| \ge {1 \over 10} |\theta|$. This can be seen by consulting Figure~\ref{fig:psifSigma_t}. More precisely, we have that \[ {\sin(\vartheta') \over \sin(\theta)} = {r \over r'} \ge {1 \over 5}. \] Moreover, without loss of generality, we have that $0 \le \theta \le {\pi \over 2}$ (we cannot have ${\pi \over 2} \le \theta \le \pi$ because we have that $|x - a| \le r' \le {3 \over 4} s + 1$). We thus have that ${2 \theta \over \pi} \le \sin(\theta) \le \theta$. Without loss of generality, we may also assume that $0 \le \vartheta' \le \pi$. In the region where $\vartheta' \ge {\pi \over 2}$, the inequality is then obvious, so we may also assume that $0 \le \vartheta' \le {\pi \over 2}$. 
In this region, we once again have that ${2 \vartheta' \over \pi} \le \sin(\vartheta') \le \vartheta'$. Thus, we have that \[ \vartheta' \ge \sin(\vartheta') \ge {1 \over 5} \sin(\theta) \ge {1 \over 5} {2 \over \pi} \theta \ge {1 \over 8} \theta. \] This implies that $|\vartheta'| \ge {1 \over 8} |\theta|$ in this region, giving us the desired result. We finally turn to showing that $|\vartheta'| \ge c {\sqrt{\tau} \over \sqrt{s}}$. By symmetry, we can take $\vartheta' > 0$, and without loss of generality, we may assume that $\vartheta' \le \delta_0$ (otherwise, there is nothing left to show). By Lemma~\ref{lem:xyrr'}, we have that \begin{equation} \begin{aligned} a - x = r' \left (1 - {\tau - u - u' \over r'} + {\tau - u - u' \over a} - {(\tau - u - u')^2 \over 2 r' a} \right ) \\ = r' - r' {\tau - u - u' \over r'} \left (1 - {r' \over a} + {\tau - u - u' \over 2 a} \right ). \end{aligned} \end{equation} Because $r' \le {3 \over 4} s + 1$ and because $a \ge (1 - \delta_0) s$ (see Lemma~\ref{lem:xyrr'}), we have that \begin{equation} \label{eq:factolowerbound} \begin{aligned} 1 - {r' \over a} + {\tau - u - u' \over 2 a} \ge {1 \over 10}. \end{aligned} \end{equation} Then, because $\vartheta' \le \delta_0$, we have that $y \le r' \delta_0$, meaning that the triangle inequality gives us that \[ r' - r' \delta_0 \le a - x = r' - r' {\tau - u - u' \over r'} \left (1 - {r' \over a} + {\tau - u - u' \over 2 a} \right ) \le r' - r' {\tau - u - u' \over 10 r'}. \] From this, it follows that \[ {\tau - u - u' \over r'} \le 10 \delta_0. \] Now, we have that \[ y^2 = (r')^2 - (r')^2 \left (1 + {\tau - u - u' \over a} - {\tau - u - u' \over r'} - {(\tau - u - u')^2 \over 2 r' a} \right )^2. 
\] Because $\vartheta' \le \delta_0$ and $\sin(\vartheta')$ is comparable to $\vartheta'$ for $\vartheta'$ small, we can write that \[ (r')^2 (\vartheta')^2 + (r')^2 O((\vartheta')^4) = (r')^2 - (r')^2 \left (1 + {\tau - u - u' \over a} - {\tau - u - u' \over r'} - {(\tau - u - u')^2 \over 2 r' a} \right )^2. \] Thus, we have that \begin{equation} \begin{aligned} (r')^2 (\vartheta')^2 + (r')^2 O((\vartheta')^4) = (r')^2 - (r')^2 \left [1 + {\tau - u - u' \over r'} \left ({r' \over a} - 1 - {\tau - u - u' \over 2 a} \right ) \right ]^2 \\ = (r')^2 - (r')^2 \\ \times \left [1 + {2 (\tau - u - u') \over r'} \left ({r' \over a} - 1 - {\tau - u - u' \over 2 a} \right ) + {(\tau - u - u')^2 \over (r')^2} \left ({r' \over a} - 1 - {\tau - u - u' \over 2 a} \right )^2 \right ] \\ = 2 r' (\tau - u - u') \left (1 - {r' \over a} + {\tau - u - u' \over 2 a} + O(\delta_0) \right ). \end{aligned} \end{equation} Thus, using \eqref{eq:factolowerbound}, we have that \[ (r')^2 (\vartheta')^2 + (r')^2 O((\vartheta')^4) \ge {1 \over 20} r' (\tau - u - u'). \] From this, it follows that \[ (\vartheta')^2 \ge c {\tau - u - u' \over r'}, \] giving us the desired result. \end{proof} \begin{lemma} \label{lem:yvartheta} Let $\tau \le \delta_0 s$. In the region where $t \ge {1 \over 4} s$, $|\vartheta'| \le \delta_0$, $u \le \delta_0 \tau$ and $u' \le \delta_0 \tau$, we have that $y \vartheta' \ge {1 \over 10} \tau$. Similarly, in the region where $t \le {3 \over 4} s$, $|\theta| \le \delta_0$, $u \le \delta_0 \tau$, and $u' \le \delta_0 \tau$, we have that $y \theta \ge {1 \over 10} \tau$. \end{lemma} \begin{proof} We shall prove the result for $\vartheta'$. The result for $\theta$ follows in an analogous way. We begin by noting that we must have that $0 \le x \le a = s - \tau$ in this region. 
Thus, we have that $r \cos(\theta) + r' \cos(\vartheta') = a = s - \tau$, where we recall that $\vartheta' = \pi - \theta'$ (see Figure~\ref{fig:psifSigma_t} to see that this is true because $r \cos(\theta) + r' \cos(\theta')$ gives the horizontal distance between the center adapted to $r$ and the center adapted to $r'$, and the line segment connecting these two points has length $s - \tau$). By symmetry, we can consider the region where $y \ge 0$. We have that $y = r \sin(\theta) = r' \sin(\theta') = r' \sin(\vartheta')$. Moreover, by Lemma~\ref{lem:tvartheta}, we have that $0 \le \theta \le 10 \vartheta'$. Thus, we have that \[ r \sqrt{1 - \sin^2 (\theta)} + r' \sqrt{1 - \sin^2 (\vartheta')} = s - \tau. \] Taylor expanding, this gives us that \[ r + r' - {1 \over 2} \sin^2 (\theta) r - {1 \over 2} \sin^2 (\vartheta') r' + r O(\sin^4 (\theta)) + r' O(\sin^4 (\vartheta')) = s - \tau. \] Thus, we have that \[ -{1 \over 2} y \sin(\theta) - {1 \over 2} y \sin(\vartheta') + y O(\theta^3 + (\vartheta')^3) = s - \tau - r - r'. \] We note that $r = t - u \ge t - \delta_0 \tau$ and $r' = s - t - u' \ge s - t - \delta_0 \tau$. Thus, we have that \[ s - \tau - r - r' \le s - \tau - (t - u) - (s - t - u') = -\tau + u + u' \le -{9 \tau \over 10}. \] Because $\theta \le 10 \vartheta'$ and $\vartheta' \le \delta_0$, we have that $\sin(\theta) \le 11 \sin(\vartheta')$ for $\delta_0$ sufficiently small, meaning that $-6 y \sin(\vartheta') + y O((\vartheta')^3) \le -{9 \tau \over 10}$, and thus \[ -y \sin(\vartheta') + y O((\vartheta')^3) \le -{\tau \over 7}. \] From this, it follows that $y \vartheta' \ge {\tau \over 10}$ when $\delta_0$ is sufficiently small, as desired. \end{proof} With these estimates, we can now control all of the nonlinear error integrals effectively. \subsection{Bootstrap assumptions} \label{sec:bootstrap} We now list the bootstrap assumptions. 
For $N$ a sufficiently large integer which will be chosen to make the proof work, we let $T$ be the largest real number such that \begin{equation} \begin{aligned} \sum_{|\alpha| \le N} \Vert (1 + t)^{-\epsilon} \partial \Gamma^\alpha \psi \Vert_{L_t^\infty ([0,T]) L_x^2} \le \epsilon^{{3 \over 4}}, \\ \sum_{|\alpha| \le N} \Vert (1 + t)^{-{1 \over 2} - {\delta \over 4}} \partial \Gamma^\alpha \psi \Vert_{L_t^2 ([0,T]) L_x^2} \le \epsilon^{{3 \over 4}}, \\ \sum_{|\alpha| \le {3 N \over 4}} \Vert (1 + t)^{{1 \over 2}} (1 + |u|)^{{3 \over 2} - \delta} \partial \Gamma^\alpha \psi \Vert_{L_t^\infty ([0,T]) L_x^\infty} \le \epsilon^{{3 \over 4}}. \end{aligned} \end{equation} We shall show that assuming these estimates allows us to improve the constants on the right hand side of these norms to $C \epsilon$. This will prove global stability of the trivial solution. In the remainder of this section, we shall recover the energy bootstrap assumptions, which follow easily. We shall also interpolate between the pointwise estimates and the energy estimates. The next section will recover the pointwise bootstrap assumptions. We shall now recover the bootstrap assumption for the energy. For every $\alpha$ with $|\alpha| \le N$, it suffices to show that $\Vert (1 + t)^{-\epsilon} \partial \Gamma^\alpha \psi \Vert_{L^2 (\Sigma_t)} \le C \epsilon$. Thus, we commute the equation with $\Gamma^\alpha$. We have that \[ \Box \Gamma^\alpha \psi + (\partial_t \phi)^2 (\partial_t \psi) (\partial_t^2 \Gamma^\alpha \psi) = F' \] for an appropriate function $F'$. We shall now explicitly do the energy estimate at this level. 
If we use $\partial_t \Gamma^\alpha \psi$ as a multiplier, we see that \begin{equation} \label{eq:enest} \begin{aligned} \int_{\Sigma_s} (1 - (\partial_t \phi)^2 (\partial_t \psi)) (\partial_t \Gamma^\alpha \psi)^2 + (\partial_x \Gamma^\alpha \psi)^2 + (\partial_y \Gamma^\alpha \psi)^2 d x \\ \le \int_{\Sigma_0} (1 - (\partial_t \phi)^2 (\partial_t \psi)) (\partial_t \Gamma^\alpha \psi)^2 + (\partial_x \Gamma^\alpha \psi)^2 + (\partial_y \Gamma^\alpha \psi)^2 d x + \int_0^s \int_{\Sigma_t} |F| |\partial_t \Gamma^\alpha \psi| d x d t, \end{aligned} \end{equation} where $F$ is such that \begin{equation} \label{eq:enesterror} \begin{aligned} |F| \le C \sum_{\beta \le \alpha} |\Gamma^\beta ((\partial_t \phi) (\partial_t \psi)^2)| + C \sum_{\beta \le \alpha} |\Gamma^\beta ((\partial_t \phi)^2 (\partial_t \psi))| + C \sum_{\beta \le \alpha} |\Gamma^\beta ((\partial_t \psi)^4)| = F_1 + F_2 + F_3. \end{aligned} \end{equation} Now, at least two of the factors in $F_1$ and $F_2$ must have fewer than ${3 N \over 4}$ derivatives on them after applying the product rule with the $\Gamma^\beta$. Similarly, at least three of the factors in $F_3$ must have fewer than ${3 N \over 4}$ derivatives on them. Moreover, we note that \[ |\Gamma^\beta (\partial_t \psi)| \le \sum_{\beta_1 \le \beta} |\partial_t \Gamma^{\beta_1} \psi| \] because $[\partial_t,\partial] = 0$ and $[\partial_t,S] = \partial_t$. Using the pointwise bootstrap assumptions on these factors with fewer derivatives thus tells us that \begin{equation} \label{eq:topordererror} \begin{aligned} |F| \le {C \epsilon^{{6 \over 4}} \over (1 + t)} \sum_{\beta \le \alpha} (|\partial \Gamma^\beta \psi| + |\partial \Gamma^\beta \phi|). \end{aligned} \end{equation} Repeating the same argument for $\phi$ gives us an expression analogous to \eqref{eq:enest} that satisfies the same estimate. 
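To spell out this counting, note that schematically (with combinatorial constants suppressed) \[ \Gamma^\beta \left ((\partial_t \phi) (\partial_t \psi)^2 \right ) = \sum_{\beta_1 + \beta_2 + \beta_3 = \beta} (\Gamma^{\beta_1} \partial_t \phi) (\Gamma^{\beta_2} \partial_t \psi) (\Gamma^{\beta_3} \partial_t \psi). \] Because $|\beta_1| + |\beta_2| + |\beta_3| \le N$, at most one of the $|\beta_i|$ can exceed ${N \over 2}$, so at least two of the factors carry at most ${N \over 2} \le {3 N \over 4}$ derivatives, and the pointwise bootstrap assumptions apply to those factors. The same pigeonhole argument applies to $F_2$ and $F_3$. 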
We now set \[ E(t) = \sum_{|\alpha| \le N} \int_{\Sigma_t} (\partial_t \Gamma^\alpha \psi)^2 + (\partial_x \Gamma^\alpha \psi)^2 + (\partial_y \Gamma^\alpha \psi)^2 d x + \sum_{|\alpha| \le N} \int_{\Sigma_t} (\partial_t \Gamma^\alpha \phi)^2 + (\partial_x \Gamma^\alpha \phi)^2 + (\partial_y \Gamma^\alpha \phi)^2 d x. \] By the bootstrap assumptions and \eqref{eq:topordererror}, we have that \eqref{eq:enest} implies that \[ E(s) \le 2 E(0) + C \epsilon^{{6 \over 4}} \int_0^s \int_{\Sigma_t} {1 \over (1 + t)} \sum_{|\beta| \le N} (|\partial \Gamma^\beta \psi| + |\partial \Gamma^\beta \phi|) |\partial_t \Gamma^\alpha \psi| d x d t \le 2 E(0) + C \epsilon^{{6 \over 4}} \int_0^s {1 \over (1 + t)} E(t) d t. \] Thus, by Gronwall's inequality, we have that \[ E(t) \le 2 E(0) (1 + t)^{2 \epsilon} \le 2 \epsilon^2 (1 + t)^{2 \epsilon}. \] This recovers the bootstrap assumption for the energy. Now, taking $(1 + t)^{-{\delta \over 4}} \partial_t \Gamma^\gamma \psi$ as a multiplier and using the bootstrap assumptions, we see that the same argument shows that \[ \int_0^s \int_{\Sigma_t} (1 + t)^{-1 - {\delta \over 4}} \left [(\partial_t \Gamma^\gamma \psi)^2 + (\partial_x \Gamma^\gamma \psi)^2 + (\partial_y \Gamma^\gamma \psi)^2 \right ] d x d t \le C \epsilon^2 + C \epsilon^3 \int_0^s (1 + t)^{-1 + 2 \epsilon - {\delta \over 4}} d t. \] This last integral converges when $\delta > 8 \epsilon$, giving us the desired result. All that remains is to recover the pointwise bootstrap assumptions. In order to recover these, we shall first interpolate between the pointwise bootstrap assumptions and the energy bootstrap assumptions. The next section uses these interpolated estimates in order to recover the pointwise bootstrap assumptions, which will complete the proof of the main theorem. The interpolated estimates are the content of the following lemma. \begin{lemma} \label{lem:interpolation} Let $|\gamma| \le {3 N \over 4}$. 
For $N$ sufficiently large as a function of $\delta$ and for $|\beta| \le 10$, we have that \[ |\Gamma^\beta \partial \Gamma^\gamma \psi| (t,r,\theta) \le {C \epsilon^{{3 \over 4}} \over (1 + t)^{{1 \over 2} - {\delta \over 2}} (1 + |u|)^{{3 \over 2} - 2 \delta}}. \] \end{lemma} \begin{proof} We must only consider the region where $t \ge 10$ because the weights do not matter for small $t$. Let $h = \partial \Gamma^\gamma \psi$. We define a coordinate system on $t > 0$ in $\mathbb{R}^{2 + 1}$ as follows. We associate the coordinates $(\rho,\tilde{X},\tilde{Y})$ to the point $(t,x,y)$ in the usual flat coordinates by $\rho = t$, $\tilde{X} = {x \over t}$, and $\tilde{Y} = {y \over t}$. These coordinates are motivated simply by the fact that the scaling vector field $S = t \partial_t + r \partial_r$ points in the same direction as one of the coordinate vector fields (specifically, $\partial_\rho$, as is shown in \eqref{eq:Spartial_rho}), which is precisely what allows us to interpolate effectively with $S$. We shall use Sobolev embedding plus an interpolation result in these coordinates in order to prove the desired result. We must first examine the coordinate system a bit more carefully. We have that \[ d \rho = d t, \hspace{5 mm} d \tilde{X} = {1 \over \rho} d x - {x \over t^2} d t, \hspace{5 mm} d \tilde{Y} = {1 \over \rho} d y - {y \over t^2} d t. \] Thus, we have that \[ d vol = d t \wedge d x \wedge d y = d \rho \wedge \left (\rho d \tilde{X} + {\tilde{X} \over \rho} d \rho \right ) \wedge \left (\rho d \tilde{Y} + {\tilde{Y} \over \rho} d \rho \right ) = \rho^2 d \rho \wedge d \tilde{X} \wedge d \tilde{Y}. 
\] Similarly, we have that \begin{equation} \label{eq:Spartial_rho} \begin{aligned} \partial_\rho = {\partial t \over \partial \rho} \partial_t + {\partial x \over \partial \rho} \partial_x + {\partial y \over \partial \rho} \partial_y = \partial_t + \tilde{X} \partial_x + \tilde{Y} \partial_y. \end{aligned} \end{equation} Thus, we have that \[ \rho \partial_\rho = \rho \partial_t + \rho \tilde{X} \partial_x + \rho \tilde{Y} \partial_y = t \partial_t + x \partial_x + y \partial_y = S. \] We also have that \[ \partial_{\tilde{X}} = {\partial t \over \partial \tilde{X}} \partial_t + {\partial x \over \partial \tilde{X}} \partial_x + {\partial y \over \partial \tilde{X}} \partial_y = \rho \partial_x, \] and similarly that \[ \partial_{\tilde{Y}} = \rho \partial_y. \] Let $p$ be a point where we want to show the desired estimate. We shall let $(t_0,x_0,y_0)$ denote its usual coordinates and $(\rho_0,\tilde{X}_0,\tilde{Y}_0)$ denote its coordinates in this modified coordinate system. We shall also denote by $u_0$ its $u$ coordinate. Now, we take two smooth cutoff functions. The first $\chi_\rho (s)$ is defined to be $1$ for $s \ge {3 \over 4}$ and $0$ for $s \le {1 \over 2}$. The second $\chi_{x,y} (s)$ is defined to be $1$ for $|s| \le {1 \over 4}$ and $0$ for $|s| \ge {1 \over 2}$. We then consider the function $\tilde{h} = \chi_\rho \left ({\rho \over \rho_0} \right ) \chi_{x,y} (\rho_0 \sqrt{(\tilde{X} - \tilde{X}_0)^2 + (\tilde{Y} - \tilde{Y}_0)^2}) h$. This has localized $h$ around the point $(\rho_0,\tilde{X}_0,\tilde{Y}_0)$ at scale $1$ in the $\tilde{X}$ and $\tilde{Y}$ directions and at scale $\rho_0$ in the $\rho$ direction (by scale, we mean relative to the $(t,x,y)$ coordinate system). Setting $X = \rho_0 \tilde{X}$ and $Y = \rho_0 \tilde{Y}$ renormalizes the length of partial derivatives to have length comparable to $1$. 
Indeed, in the support of $\tilde{h}$, we note that the volume form in $(\rho,X,Y)$ coordinates is equal to \[ {\rho^2 \over \rho_0^2} d \rho \wedge d X \wedge d Y. \] This is comparable to $d \rho \wedge d X \wedge d Y$. We also have that $\partial_X = {\rho \over \rho_0} \partial_x$ is comparable to $\partial_x$, and similarly with $\partial_Y$ and $\partial_y$. Now, the function $\tilde{h}$ can be thought of as being defined in $(\rho,X,Y)$ in the region where $\rho \le \rho_0$. Moreover, the pointwise bootstrap assumptions tell us that \[ \int_{\rho \le \rho_0} \rho_0^{-1 - {\delta \over 2}} \tilde{h}^2 {\rho^2 \over \rho_0^2} d \rho d X d Y \le C \epsilon^{{3 \over 2}} {1 \over (1 + t_0)^{1 + {\delta \over 2}} (1 + |u_0|)^{3 - 2 \delta}}, \] where we have used the fact that $t$ and $1 + |u|$ are comparable to $t_0$ and $1 + |u_0|$, respectively, in the support of $\tilde{h}$, and where we have used the fact that the volume of the support of $\tilde{h}$ is comparable to $t_0$. Moreover, the energy bootstrap assumptions tell us that \[ \sum_{|\beta| \le N - |\gamma|} \int_{\rho \le \rho_0} \rho_0^{-1 - {\delta \over 2}} (\Gamma^\beta \tilde{h})^2 {\rho^2 \over \rho_0^2} d \rho d X d Y \le C \sum_{|\beta| \le N - |\gamma|} \int_0^{t_0} \int_{\Sigma_t} (1 + t)^{-1 - {\delta \over 2}} (\Gamma^\beta \partial \Gamma^\gamma \psi)^2 d x d t \le C \epsilon^{{3 \over 2}}. \] Let us consider the function \[ H = \tilde{h}^2 {\rho^2 \over \rho_0^2}. \] We shall show that \[ \int_{\rho \le \rho_0} (\Gamma_{\rho_0}^\alpha H)^2 d \rho d X d Y \le C \sum_{|\beta| \le |\alpha|} \int_{\rho \le \rho_0} (\Gamma^\beta \tilde{h})^2 {\rho^2 \over \rho_0^2} d \rho d X d Y, \] where $\Gamma_{\rho_0}^\alpha$ corresponds to strings of the operators $\rho_0 \partial_\rho$, $\partial_X$, and $\partial_Y$. Indeed, we recall that $\Gamma$ consists of translation vector fields along with $S = t \partial_t + r \partial_r$. 
This means that $\partial_X = {t \over \rho_0} \partial_x$, $\partial_Y = {t \over \rho_0} \partial_y$, and $\rho_0 \partial_\rho = {\rho_0 \over t} S = \rho_0 \partial_t + {r \rho_0 \over t} \partial_r$. Thus, $\Gamma_{\rho_0} = Q \Gamma$ for an appropriate coefficient $Q$. Because $\rho_0$ is comparable to $t$ in the support of $H$, we see that \[ |\Gamma_{\rho_0} H| \le C |\Gamma H|, \] i.e., these coefficients are bounded. Moreover, we have that \[ \Gamma^\beta \left ({t \over \rho_0} \right ) \le C, \] and that \[ \Gamma^\beta \left ({\rho_0 \over t} \right ) \le C. \] These facts imply the desired result by expanding the $\Gamma_{\rho_0}$ operators in terms of the $\Gamma$ operators. Indeed, in the case of two operators, we schematically have that \[ |\Gamma_{\rho_0} (\Gamma_{\rho_0} H)| = |Q \Gamma ( Q \Gamma H)| \le |Q| |\Gamma(Q)| |\Gamma(H)| + |Q| |Q| |\Gamma (\Gamma (H))|, \] and the bounds on the coefficients above imply the desired result. Thus, we have in fact shown that \[ |\Gamma_{\rho_0}^\alpha H| \le C \sum_{|\beta| \le |\alpha|} |\Gamma^\beta H|. \] A similar argument shows us that \begin{equation} \label{eq:commutatorsrescaled} \begin{aligned} |\Gamma^\alpha H| \le C \sum_{|\beta| \le |\alpha|} |\Gamma_{\rho_0}^\beta H|. \end{aligned} \end{equation} Thus, interpolating between these $L^2$ based bounds (we can use an interpolation result in the half spaces $\rho \le \rho_0$ in $(\rho,X,Y)$ coordinate space because they have a nice boundary, see \cite{AdaFou03}) tells us that, for $N$ sufficiently large in terms of $\delta$, we have that \[ \sum_{|\beta| \le 10} \int_{\rho \le \rho_0} \rho_0^{-1 - {\delta \over 2}} (\Gamma_{\rho_0}^\beta H)^2 d \rho d X d Y \le C \epsilon^{{3 \over 2}} {1 \over (1 + t_0) (1 + |u_0|)^{3 - 3 \delta}}. 
\] Then, applying a Sobolev inequality that is weighted in the $\rho$ direction by $\rho_0$ in the $(\rho,X,Y)$ coordinate system tells us that \[ \sum_{|\beta| \le 8} |\rho_0^{-{1 \over 2} - {\delta \over 4}} \Gamma_{\rho_0}^\beta \tilde{h}| \le C \epsilon^{{3 \over 4}} {1 \over (1 + t_0) (1 + |u_0|)^{{3 \over 2} - {3 \over 2} \delta}}. \] Thus, we altogether have that \[ \sum_{|\beta| \le 8} |\Gamma_{\rho_0}^\beta \tilde{h}| \le C \epsilon^{{3 \over 4}} {1 \over (1 + t_0)^{{1 \over 2} - {\delta \over 2}} (1 + |u_0|)^{{3 \over 2} - 2 \delta}}. \] Using \eqref{eq:commutatorsrescaled} above then gives us the desired result. \end{proof} We shall also use the fact that $\psi$ is supported where $u \ge -1$ and $\phi$ is supported where $\overline{u} \ge -1$. As discussed in Section~\ref{sec:relateddirections}, we do not believe that having compact support is fundamental to the argument working. We also note that this is trivial when the equation is semilinear instead of quasilinear. \begin{proposition} \label{prop:compactsupport} Let $0 \le t \le T$. The bootstrap assumptions imply that $\psi$ is supported where $u \ge -1$ and that $\phi$ is supported where $\overline{u} \ge -1$. \end{proposition} \begin{proof} Let $0 \le s \le T$. We take a point $(s,x_0,y_0)$ in $\Sigma_s$ with $s - \sqrt{x_0^2 + y_0^2} \le -1$. We then take the ball of radius $\delta$ around this point in $\Sigma_s$. We look at the domain of influence of this ball intersected with $\Sigma_t$ for $0 \le t \le s$. In every $\Sigma_t$, this is a ball whose radius is equal to $s - t + \delta$ and whose center is $(x_0,y_0)$. By construction, at $t = 0$, this does not intersect the support of the data for $\psi$ for $\delta$ sufficiently small. We set \[ E(t) = \int_{B_{s - t + \delta} (x_0,y_0)} (\partial_t \psi)^2 + (\partial_x \psi)^2 + (\partial_y \psi)^2 d x. \] This is the energy in the intersection of the domain of influence of the small ball chosen above and the time slice $\Sigma_t$. 
We now think of the equation for $\psi$ as a wave equation whose principal part is $\Box$, putting both the quasilinear and semilinear terms on the right hand side. We do a $\partial_t$ energy estimate on the truncated cone determined by this construction and ignore the positive terms on the side of the cone, giving us that \[ E(s) \le E(0) + \int_0^s {C \epsilon^{{3 \over 2}} \over 1 + t} E(t) d t, \] where we have used the pointwise bootstrap assumptions. Gronwall's inequality then gives us that $E(s) = 0$ because $E(0) = 0$. An analogous argument works for $\phi$. \end{proof} \subsection{Closing the pointwise estimates} \label{sec:closingpointwise} In order to recover the bootstrap assumptions for the pointwise estimates, we must show that \[ |\partial_t \Gamma^{\sigma_1} \psi| (t,x) \le {C \epsilon \over (1 + t)^{{1 \over 2}} (1 + |u|)^{{3 \over 2} - \delta}} \] for all $|\sigma_1| \le {3 N \over 4}$. Commuting the equation with $\Gamma^\alpha$, we get the equation \[ \Box \Gamma^\alpha \psi + (\partial_t \phi)^2 (\partial_t \psi) (\partial_t^2 \Gamma^\alpha \psi) = F^\alpha \] for an appropriate $F^\alpha$ which contains only semilinear terms. Now, if we want to apply Proposition~\ref{prop:decay} in order to show pointwise decay for this quantity, we must additionally commute with the operators $\partial^{\sigma_2}$ tangent to $\Sigma_t$ for all $|\sigma_2| \le 6$. Thus, it suffices to commute with $\Gamma^{\sigma_2}$ for all $|\sigma_2| \le 6$. We thus commute the equation with $\Gamma^{\sigma_2} \Gamma^{\sigma_1}$. Thus, in order to recover the pointwise bootstrap assumptions, it suffices to show that \[ M [\Gamma^\sigma \psi] (s,x_0^i) \le {C \epsilon \over (1 + s)^{{1 \over 2}} (1 + |\tau|)^{{3 \over 2} - \delta}} \] for all $|\sigma| \le {3 N \over 4} + 6$ and for all $s \le T$ (see Proposition~\ref{prop:decay} for a description of $M$). 
Now, when controlling $M$, we note that the estimate on the term from data follows immediately from assuming linear decay for the auxiliary multipliers $f$. Thus, for any admissible auxiliary multiplier $f$, we must simply control the error integrals \begin{equation} \label{eq:errorint} \begin{aligned} \int_0^s \int_{\Sigma_t} F^\sigma (\partial_t f) d x d t - \int_0^s \int_{\Sigma_t} (\partial_t \phi)^2 (\partial_t \psi) (\partial_t^2 \Gamma^\sigma \psi) (\partial_t f) d x d t. \end{aligned} \end{equation} Now, by examining the equations \eqref{eq:anisotropic}, we note that $F^\sigma$ arises from applying vector fields to one of three different semilinear terms. The first is \[ (\partial_t \psi)^2 (\partial_t \phi), \] the second is \[ (\partial_t \psi) (\partial_t \phi)^2, \] and the third is \[ (\partial_t \psi)^4. \] Thus, in addition to the quasilinear term in \eqref{eq:errorint}, we must control the error integrals arising from these semilinear terms. We discussed in the remarks following the statement of Theorem~\ref{thm:mainthm} that the same proof works for other nonlinearities. The other nonlinear terms can all be controlled in essentially the same way as one of these terms. 
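As a rough heuristic for why these error integrals can be controlled, applying the pointwise bootstrap assumptions to two of the factors in the first term (together with the analogous assumed decay for $\partial_t \phi$, weighted in $\overline{u}$) gives schematically \[ |(\partial_t \psi)^2 (\partial_t \phi)| \le {C \epsilon^{{9 \over 4}} \over (1 + t)^{{3 \over 2}} (1 + |u|)^{3 - 2 \delta} (1 + |\overline{u}|)^{{3 \over 2} - \delta}}, \] so the integrand decays integrably in $t$ once the weights in $u$ and $\overline{u}$ are exploited; the work below makes this precise region by region. 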
The error integral \eqref{eq:errorint} can thus be controlled by \begin{equation} \label{eq:ErrorIntegral} \begin{aligned} \sum_{\sigma' \le \sigma} \left |\int_0^s \int_{\Sigma_t} \Gamma^{\sigma'} ((\partial_t \psi) (\partial_t \phi)^2) (\partial_t f) d x d t \right | + \sum_{\sigma' \le \sigma} \left |\int_0^s \int_{\Sigma_t} \Gamma^{\sigma'} ((\partial_t \psi)^4) (\partial_t f) d x d t \right | \\ + \sum_{\sigma' \le \sigma} \left |\int_0^s \int_{\Sigma_t} \Gamma^{\sigma'} ((\partial_t \psi)^2 (\partial_t \phi)) (\partial_t f) d x d t \right | + \left |\int_0^s \int_{\Sigma_t} (\partial_t \phi)^2 (\partial_t \psi) (\partial_t^2 \Gamma^\sigma \psi) (\partial_t f) d x d t \right |. \end{aligned} \end{equation} We recall that the data for $f$ is contained in the ball of radius $1$ in $\Sigma_s$ whose center is $(s,x_0,y_0)$. Moreover, we recall that the $u$ coordinate of the center of this ball is given by $\tau$, meaning that $\tau = s - \sqrt{x_0^2 + y_0^2}$. In the following, we shall always use the pointwise estimates for every term up to a ${\delta \over 2}$ loss. These interpolation arguments, which allow the use of pointwise estimates (with small losses) on all factors in the nonlinearity when recovering estimates that do not involve the largest number of derivatives, are standard, but we describe them here for completeness. We note that this argument is far from optimal in terms of the number of derivatives required. We recall that using Proposition~\ref{prop:decay} to recover the pointwise bootstrap assumptions requires commuting with a total of ${3 N \over 4} + 6$ derivatives. This means that cubic nonlinearities will have at least two factors having no more than ${3 N \over 8} + 5$ derivatives on them, while quartic nonlinearities will have at least three factors with this property. Thus, since ${3 N \over 8} + 5 \le {3 N \over 4}$ for $N$ sufficiently large, we can directly apply the bootstrap assumptions to use pointwise estimates on these factors. 
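More precisely, the counting is as follows (we include it only for completeness). If the at most ${3 N \over 4} + 6$ commuted vector fields are distributed over the three factors of a cubic term as $\Gamma^{\sigma_1}$, $\Gamma^{\sigma_2}$, and $\Gamma^{\sigma_3}$ with $|\sigma_1| + |\sigma_2| + |\sigma_3| \le {3 N \over 4} + 6$, then at most one of the $|\sigma_i|$ can exceed ${3 N \over 8} + 5$, since two such factors would give \[ |\sigma_1| + |\sigma_2| + |\sigma_3| \ge 2 \left ({3 N \over 8} + 6 \right ) = {3 N \over 4} + 12 > {3 N \over 4} + 6. \] The same computation shows that at most one factor in a quartic term can have more than ${3 N \over 8} + 5$ vector fields on it, leaving at least three factors to which the bootstrap assumptions apply directly. 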
Moreover, we note that there can be at most one factor in every nonlinearity having more than ${3 N \over 8} + 5$ derivatives on it, and this factor has at most ${3 N \over 4} + 8$ derivatives on it. For this factor, we can use the interpolation result in Lemma~\ref{lem:interpolation} to obtain pointwise decay up to ${\delta \over 2}$ losses. We see that the resulting pointwise bounds are worst for one of the three semilinear terms described above. Thus, it suffices to control the error integrals involving these terms in \eqref{eq:ErrorIntegral}. We shall now consider each kind of term separately. Moreover, we shall assume throughout that $\tau \ge 100$ and that $s \ge {1 \over \delta_0^{10}}$. When $s \le {1 \over \delta_0^{10}}$, we may control terms simply by choosing $\epsilon$ sufficiently small (the decay does not really matter at this point for controlling the nonlinearity because we can use the higher power of $\epsilon$ and the fact that $s$ is uniformly bounded to absorb everything). Similarly, when $\tau \le 100$, the dependence on $\tau$ no longer matters, as it can be absorbed by choosing $\epsilon$ smaller. The estimates when $\tau \le 100$ can be recovered from similar considerations as in the following (and by taking $\epsilon$ sufficiently small). We begin with terms of the form \begin{equation} \label{eq:psi2phi1} \begin{aligned} \left |\int_0^s \int_{\Sigma_t} \Gamma^\sigma ((\partial_t \phi) (\partial_t \psi)^2) (\partial_t f) d x d t \right |. \end{aligned} \end{equation} The error integrals must be broken up into several regions. Broadly speaking, there are three main regions. The first and most delicate region is the region close to the light cones for both $\psi$ and $f$, where ``close'' is measured with respect to $\tau$. In this region, we integrate by parts using the scaling vector field $S$ and use the results of Section~\ref{sec:SGeometry}. The second region is the region away from the light cone for $\psi$, and the third region is away from the light cone for $f$. 
These regions are better because of decay in $u$ and $u'$. All three of these main regions must be further decomposed, mainly because of the technical fact that, by Lemma~\ref{lem:rrbar}, the Jacobian in $(t,r,\overline{r})$ coordinates on $\mathbb{R}^{2 + 1}$ is bounded only when $r$ and $\overline{r}$ are comparable to $t$, that is, close to the light cones for both $\psi$ and $\phi$. Thus, we must for example decompose depending on how far away we are from the light cone associated to $\phi$. We must introduce several cutoff functions in the proof, so let us now describe the notation. These cutoff functions will allow us to effectively use the geometric estimates from Sections~\ref{sec:geometry} and \ref{sec:SGeometry}. There will be times when these cutoff functions will be hit by derivatives, and it will be appropriate to bound them in terms of cutoff functions adapted to slightly larger regions. In these cases, we shall abuse notation and simply use the same cutoff functions. These cutoff functions will be pullbacks of cutoff functions on $\mathbb{R}$. Thus, for example, a cutoff function may be of the form $\chi \left ({u \over t} \right )$ where $\chi$ is a smooth function from $\mathbb{R} \rightarrow \mathbb{R}$ and $u$ and $t$ are the usual coordinates. Thus, given such a cutoff function $\chi$, a vector field $V$ hitting $\chi$ will always result in something of the form $h \chi'$ for some smooth function $h$. In the example above, if $V = S = t \partial_t + r \partial_r$, we have that \[ S \chi \left ({u \over t} \right ) = {r \over t} \chi' \left ({u \over t} \right ) - {r \over t} \chi' \left ({u \over t} \right ) = 0, \] since ${u \over t}$ is homogeneous of degree $0$, while for a cutoff of the form $\chi \left ({u \over \tau} \right )$ we instead have $S \chi \left ({u \over \tau} \right ) = {u \over \tau} \chi' \left ({u \over \tau} \right )$, and ${u \over \tau}$ is bounded on the support of $\chi' \left ({u \over \tau} \right )$. Let $\chi : \mathbb{R} \rightarrow \mathbb{R}$ be a smooth cutoff function equal to $1$ for $x \le {\delta_0 \over 2}$ and equal to $0$ for $x \ge \delta_0$. 
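For concreteness, one standard choice of such a cutoff (any smooth decreasing function with these properties works equally well) is \[ \chi(x) = {g(\delta_0 - x) \over g(\delta_0 - x) + g \left (x - {\delta_0 \over 2} \right )}, \qquad g(y) = \begin{cases} e^{-1/y} & y > 0, \\ 0 & y \le 0, \end{cases} \] which is smooth because the denominator never vanishes, is equal to $1$ for $x \le {\delta_0 \over 2}$ since then $g \left (x - {\delta_0 \over 2} \right ) = 0$, and is equal to $0$ for $x \ge \delta_0$ since then $g(\delta_0 - x) = 0$. 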
Then, we take the functions $\chi_{\phi,C} = \chi \left ({\overline{u} \over t} \right )$, $\chi_\psi = \chi \left ({u \over \tau} \right )$, $\chi_{\psi,C} = \chi \left ({u \over t} \right )$, and $\chi_f = \chi \left ({u' \over \tau} \right )$. We also define the cutoff functions $\chi_\psi^c = 1 - \chi_\psi$, and similarly for the others. We shall sometimes need other analogous cutoffs, such as $\chi_\phi = \chi \left ({\overline{u} \over \tau} \right )$, and we shall introduce them following the notational conventions set above. These cutoff functions will be used to localize the error integral to the regions described above. The most delicate regions are those along the light cones for both $\psi$ and $f$ because they require decomposing $\partial_{u'}$ in terms of $S$ and $\overline{\partial}_f$. The other regions can be bounded directly using the bootstrap assumptions. We note that the region along all three light cones will be the only one that returns ${1 \over (1 + s)^{{1 \over 2}} \tau^{{3 \over 2} - \delta}}$ up to a $\delta$ loss in the exponents. The other regions are all better. We first describe the region away from the light cone for $f$. This is given by the integral \begin{equation} \label{eq:psi2phi1fc} \begin{aligned} \left |\int_0^s \int_{\Sigma_t} \chi_f^c \Gamma^\sigma ((\partial_t \phi) (\partial_t \psi)^2) (\partial_t f) d x d t \right |. 
\end{aligned} \end{equation} When also along the light cones for $\psi$ and $\phi$, we use the bootstrap assumptions, the interpolation result Lemma~\ref{lem:interpolation}, and the assumed decay rates for $f$ (see \eqref{eq:assumeddecay0} and \eqref{eq:assumeddecay1}) to give us that this integral is controlled by \begin{equation} \begin{aligned} \left |\int_0^s \int_{\Sigma_t} \chi_{\psi,C} \chi_{\phi,C} \chi_f^c \Gamma^\sigma ((\partial_t \phi) (\partial_t \psi)^2) (\partial_t f) d x d t \right | \\ \le {C \epsilon^{{9 \over 4}} \over \tau^{{3 \over 2}}} \int_0^s \int_{\Sigma_t} \chi_{r \le t + 1} \chi_{\psi,C} \chi_{\phi,C} {1 \over (1 + t)^{{3 \over 2} - \delta}} {1 \over 1 + |u|^{3 - 3 \delta}} {1 \over 1 + |\overline{u}|^{{3 \over 2} - 2 \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} d x d t. \end{aligned} \end{equation} We ignore the power of $\epsilon$ because it is already large enough to recover the bootstrap assumption. Going into $r \overline{r}$ coordinates as in Lemma~\ref{lem:rrbar} and integrating in each $\Sigma_t$, the remaining quantity is controlled by \[ {C \over \tau^{{3 \over 2}}} \int_0^s {1 \over (1 + t)^{{3 \over 2} - \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} d t \le {C \over (1 + s)^{{1 \over 2}} \tau^{{3 \over 2}}}. \] Now, in the region which is instead away from the light cone for $\phi$, we have that \begin{equation} \begin{aligned} \left |\int_0^s \int_{\Sigma_t} \chi_{\psi,C} \chi_{\phi,C}^c \chi_f^c \Gamma^\sigma ((\partial_t \phi) (\partial_t \psi)^2) (\partial_t f) d x d t \right | \\ \le {C \over \tau^{{3 \over 2}}} \int_0^s \int_{\Sigma_t} \chi_{r \le t + 1} \chi_{\psi,C} \chi_{\phi,C}^c {1 \over (1 + t)^{3 - 3 \delta}} {1 \over 1 + |u|^{3 - 3 \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} d x d t \\ \le {C \over \tau^{{3 \over 2}}} \int_0^s {1 \over (1 + t)^{2 - 3 \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} d t \le {C \over (1 + s)^{{1 \over 2}} \tau^{{3 \over 2}}}. 
\end{aligned} \end{equation} Then, in the region which is instead away from the light cone for $\psi$, we have that \begin{equation} \label{eq:psiIc} \begin{aligned} \left |\int_0^s \int_{\Sigma_t} \chi_{\psi,C}^c \chi_f^c \Gamma^\sigma ((\partial_t \phi) (\partial_t \psi)^2) (\partial_t f) d x d t \right | \\ \le {C \epsilon^{{9 \over 4}} \over \tau^{{3 \over 2}}} \int_0^s \int_{\Sigma_t} \chi_{\psi,C}^c {1 \over (1 + t)^{{9 \over 2} - 3 \delta}} {1 \over 1 + |\overline{u}|^{{3 \over 2} - \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} d x d t \\ \le {C \epsilon^{{9 \over 4}} \over \tau^{{3 \over 2}}} \int_0^s {1 \over (1 + t)^{{7 \over 2} - 3 \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} d t \le {C \epsilon^{{9 \over 4}} \over (1 + s)^{{1 \over 2}} \tau^{{3 \over 2}}}. \end{aligned} \end{equation} This has completely treated the integrals in the region away from the light cone for $f$. We now consider the region away from the light cone for $\psi$. We must in fact decompose this into two regions depending on how far away we are. We first consider the regions that are far from the light cone for $\psi$, but not too far. This corresponds to using the cutoffs $\chi_\psi^c$ and $\chi_{\psi,C}$. When also close to the light cone for $\phi$, we can write \begin{equation} \label{eq:errorpsifarphiclose11} \begin{aligned} \left |\int_0^s \int_{\Sigma_t} \chi_\psi^c \chi_{\psi,C} \chi_{\phi,C} \Gamma^\sigma ((\partial_t \phi) (\partial_t \psi)^2) (\partial_t f) d x d t \right | \\ \le C \epsilon^{{9 \over 4}} \int_0^s \int_{\Sigma_t} \chi_\psi^c \chi_{\psi,C} \chi_{\phi,C} {1 \over (1 + t)^{{3 \over 2} - \delta}} {1 \over 1 + |u|^{3 - 3 \delta}} {1 \over 1 + |\overline{u}|^{{3 \over 2} - 2 \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} d x d t. 
\end{aligned} \end{equation} Going into $r \overline{r}$ coordinates once again and using the fact that $u \ge {\delta_0 \over 10} \tau$ in this region gives us that the integral is in fact controlled by \[ {C \over \tau^{2 - 3 \delta}} \int_0^s {1 \over (1 + t)^{{3 \over 2} - \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} d t \le {C \over (1 + s)^{{1 \over 2}} \tau^{2 - 3 \delta}}. \] Now, when far from the light cone for $\phi$, we have that \begin{equation} \label{eq:errorpsifarphiclose12} \begin{aligned} \left |\int_0^s \int_{\Sigma_t} \chi_\psi^c \chi_{\psi,C} \chi_{\phi,C}^c \Gamma^\sigma ((\partial_t \phi) (\partial_t \psi)^2) (\partial_t f) d x d t \right | \\ \le C \epsilon^{{9 \over 4}} \int_0^s \int_{\Sigma_t} \chi_\psi^c \chi_{\psi,C} \chi_{\phi,C}^c {1 \over (1 + t)^{3 - 3 \delta}} {1 \over 1 + |u|^{3 - 3 \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} d x d t \\ \le {C \epsilon^{{9 \over 4}} \over \tau^{3 - 3 \delta}} \int_0^s {1 \over (1 + t)^{2 - 3 \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} d t \le {C \epsilon^{{9 \over 4}} \over (1 + s)^{{1 \over 2}} \tau^{3 - 3 \delta}}. \end{aligned} \end{equation} We now consider the same region except with $\chi_{\psi,C}^c$ instead. Using that ${1 \over 1 + |u|^{3 - 3 \delta}} \le C {1 \over (1 + t)} {1 \over \tau^{2 - 3 \delta}}$ in this region, we then have that \begin{equation} \label{eq:psiveryfar1} \begin{aligned} \left |\int_0^s \int_{\Sigma_t} \chi_\psi^c \chi_{\psi,C}^c \Gamma^\sigma ((\partial_t \phi) (\partial_t \psi)^2) (\partial_t f) d x d t \right | \\ \le {C \epsilon^{{9 \over 4}} \over \tau^{2 - 3 \delta}} \int_0^s \int_{\Sigma_t} \chi_{\overline{r} \le t + 1} {1 \over (1 + t)^{{5 \over 2} - \delta}} {1 \over 1 + |\overline{u}|^{{3 \over 2} - 2 \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} d x d t. 
\end{aligned} \end{equation} Going into $(t,\overline{r},\overline{\theta})$ coordinates and using the decay in $\overline{u}$ to control the integrals on $\Sigma_t$ by $C (1 + t)$, the integral is then bounded by \[ {C \epsilon^{{9 \over 4}} \over \tau^{2 - 3 \delta}} \int_0^s {1 \over (1 + t)^{{3 \over 2} - \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} d t \le {C \epsilon^{{9 \over 4}} \over (1 + s)^{{1 \over 2}} \tau^{2 - 3 \delta}}. \] The only region that remains is the region along the light cones for both $\psi$ and $f$. This integral is given by \begin{equation} \label{eq:errorpsiphif} \begin{aligned} \int_0^s \int_{\Sigma_t} \chi_\psi \chi_f \Gamma^\sigma ((\partial_t \phi) (\partial_t \psi)^2) (\partial_t f) d x d t. \end{aligned} \end{equation} We first pick coordinates where $b = 0$, meaning that the center adapted to $r'$ lies on the new $x$ axis as is described in Section~\ref{sec:coordinates} (we shall do this freely below without mentioning it again). Now, we write \begin{equation} \label{eq:dtfdecomp} \begin{aligned} \partial_t = -{1 \over 2} (L' + \underline{L}') = -{1 \over 2} L' - \gamma S - {1 \over 2} \gamma \left (t - r' - {a (x - a) \over r'} \right ) L' - \gamma {a y \over (r')^2} \partial_{\theta'}, \end{aligned} \end{equation} where we have used Lemma~\ref{lem:du'S}. This results in the integral \begin{equation} \label{eq:errorintworst1} \begin{aligned} \int_0^s \int_{\Sigma_t} \chi_\psi \chi_f \Gamma^\sigma ((\partial_t \phi) (\partial_t \psi)^2) \left [-{1 \over 2} L' - \gamma S - {1 \over 2} \gamma \left (t - r' - {a (x - a) \over r'} \right ) L' - \gamma {a y \over (r')^2} \partial_{\theta'} \right ] f d x d t. \end{aligned} \end{equation} We focus first on the term with $S$. The integral we must control is \[ \int_0^s \int_{\Sigma_t} \chi_\psi \chi_f \Gamma^\sigma ((\partial_t \phi) (\partial_t \psi)^2) (\gamma S f) d x d t. \] We integrate this term by parts to put the scaling vector field on the other terms. 
Doing so shows that it suffices to control \begin{equation} \label{eq:Sibp} \begin{aligned} \left |\int_0^s \int_{\Sigma_t} \chi_\psi \chi_f \Gamma^\sigma ((\partial_t \phi) (\partial_t \psi)^2) (S \gamma) f d x d t\right | + \left |\int_0^s \int_{\Sigma_t} \gamma \chi_\psi \chi_f S \Gamma^\sigma ((\partial_t \phi) (\partial_t \psi)^2) f d x d t \right | \\ + \left |\int_0^s \int_{\Sigma_t} \gamma S(\chi_\psi \chi_f) \Gamma^\sigma ((\partial_t \phi) (\partial_t \psi)^2) f d x d t \right | + \left |\int_0^s \int_{\Sigma_t} \gamma \chi_\psi \chi_f \Gamma^\sigma ((\partial_t \phi) (\partial_t \psi)^2) f d x d t \right |, \end{aligned} \end{equation} where we have used the fact that the integrand is compactly supported where ${\tau \over 10} \le t \le s - {\tau \over 10}$, meaning that there are no boundary terms from integrating by parts. We begin by estimating $\gamma S (\chi_\psi)$ and $\gamma S (\chi_f)$. We note that $\partial_v \chi_\psi = 0$ and $|\partial_u \chi_\psi| \le {C \over \tau}$. Thus, we have that \[ |\gamma S (\chi_\psi)| = |\gamma| |v \partial_v \chi_\psi + u \partial_u \chi_\psi| \le C |\gamma|. \] An analogous argument works to control $\gamma S$ applied to cutoffs adapted to $\phi$ (see also Section~\ref{sec:coordinates}), and a similar argument works for the other kinds of cutoffs adapted to $\psi$ and $\phi$. Now, for $\chi_f$, we shall use the background Euclidean structure on $\mathbb{R}^3$. We note that $\nabla_e \chi_f = h \underline{L}'$ for some smooth function $h$ with $|h| \le {C \over \tau}$ where $\nabla_e \chi_f$ is the Euclidean gradient of $\chi_f$. We have that \[ |S (\chi_f)| = |\langle S,\nabla_e \chi_f \rangle_e| = |h| |\langle S,\underline{L}' \rangle_e|, \] where $\langle \cdot,\cdot \rangle_e$ denotes the Euclidean inner product on $\mathbb{R}^3$. 
Now, using Lemma~\ref{lem:du'S} to write $S$ in terms of the frame $L'$, $\underline{L}'$, and ${1 \over r'} \partial_{\theta'}$, and noting that these vectors are mutually orthogonal with respect to the Euclidean inner product, we have that \[ |\langle S,\underline{L}' \rangle_e| \le {C \over \gamma}. \] Thus, we altogether have that \[ |S(\chi_f)| \le {C \over \tau \gamma}, \] meaning that \[ |\gamma S(\chi_f)| \le {C \over \tau}. \] Using these estimates along with Lemma~\ref{lem:gammaSgammabound} gives us that these integrals are all controlled by \begin{equation} \label{eq:worsterrorint1} \begin{aligned} {C \epsilon^{{9 \over 4}} \over \tau} \int_{\tau \over 10}^{s - {\tau \over 10}} \int_{\Sigma_t} \chi_{r \le t + 1} \chi_f \chi_\psi {1 \over (1 + t)^{{3 \over 2} - 2 \delta}} {1 \over 1 + |u|^{3 - 3 \delta}} {1 \over 1 + |\overline{u}|^{{3 \over 2} - 2 \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} d x d t. \end{aligned} \end{equation} We note that we have used the fact that the integrand is only supported for ${\tau \over 10} \le t \le s - {\tau \over 10}$ (see Lemma~\ref{lem:rtr's-t}). Now, we further decompose this integral depending on the distance to the light cone for $\phi$. When close to the light cone for $\phi$, we can go into $r \overline{r}$ coordinates and the Jacobian will be bounded by Lemma~\ref{lem:rrbar}. 
We thus have that \begin{equation} \begin{aligned} {1 \over \tau} \int_{\tau \over 10}^{s - {\tau \over 10}} \int_{\Sigma_t} \chi_{r \le t + 1} \chi_f \chi_\psi \chi_{\phi,C} {1 \over (1 + t)^{{3 \over 2} - \delta}} {1 \over 1 + |u|^{3 - 3 \delta}} {1 \over 1 + |\overline{u}|^{{3 \over 2} - 2 \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} d x d t \\ \le {C \over \tau} \int_{\tau \over 10}^{s - {\tau \over 10}} \int_{\Sigma_t} \chi_{r \le t + 1} \chi_f \chi_\psi \chi_{\phi,C} {1 \over (1 + t)^{{3 \over 2} - \delta}} {1 \over 1 + |u|^{3 - 3 \delta}} {1 \over 1 + |\overline{u}|^{{3 \over 2} - 2 \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} d r d \overline{r} d t \\ \le {C \over \tau} \int_{{\tau \over 10}}^{s - {\tau \over 10}} {1 \over (1 + t)^{{3 \over 2} - \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} d t \le {C \over (1 + s)^{{1 \over 2}} \tau^{{3 \over 2} - \delta}}. \end{aligned} \end{equation} We now consider the region far from the light cone of $\phi$. We have that \begin{equation} \begin{aligned} {C \over \tau} \int_{\tau \over 10}^{s - {\tau \over 10}} \int_{\Sigma_t} \chi_{r \le t + 1} \chi_f \chi_\psi \chi_{\phi,C}^c {1 \over (1 + t)^{{3 \over 2} - \delta}} {1 \over 1 + |u|^{3 - 3 \delta}} {1 \over 1 + |\overline{u}|^{{3 \over 2} - 2 \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} d x d t \\ \le {C \over \tau} \int_{\tau \over 10}^{s - {\tau \over 10}} \int_{\Sigma_t} \chi_{r \le t + 1} \chi_f \chi_\psi {1 \over (1 + t)^{3 - 3 \delta}} {1 \over 1 + |u|^{3 - 3 \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} d x d t \\ \le {C \over \tau} \int_{{\tau \over 10}}^{s - {\tau \over 10}} {1 \over (1 + t)^{2 - 3 \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} d t \le {C \over (1 + s)^{{1 \over 2}} \tau^{2 - 3 \delta}} \end{aligned} \end{equation} We now consider the terms without $S$ in \eqref{eq:errorintworst1}. 
Using Lemma~\ref{lem:du'dv'dtheta'} and improved decay of good derivatives of $f$ (see \eqref{eq:assumeddecay2}), we see that the integrals are once again controlled by \[ {C \epsilon^{{9 \over 4}} \over \tau} \int_{{\tau \over 10}}^{s - {\tau \over 10}} \int_{\Sigma_t} \chi_{r \le t + 1} \chi_\psi \chi_f {1 \over (1 + t)^{{3 \over 2} - \delta}} {1 \over 1 + |u|^{3 - 3 \delta}} {1 \over 1 + |\overline{u}|^{{3 \over 2} - 2 \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} d x d t. \] Thus, the bounds for these terms follow in the same way as for \eqref{eq:worsterrorint1}. We now turn to controlling the terms of the form \[ \left |\int_0^s \int_{\Sigma_t} \Gamma^\sigma ((\partial_t \psi) (\partial_t \phi)^2) (\partial_t f) d x d t \right |. \] We once again introduce cutoff functions adapted to the light cones for $\psi$, $\phi$, and $f$. We first consider the region along all three light cones. This is given by \[ \left |\int_0^s \int_{\Sigma_t} \chi_\psi \chi_\phi \chi_f \Gamma^\sigma ((\partial_t \psi) (\partial_t \phi)^2) (\partial_t f) d x d t \right |. \] This integral can be handled in exactly the same way as \eqref{eq:errorpsiphif}. Moreover, the integral over the region away from the light cone for $f$ can be controlled in a similar way to the term \eqref{eq:psi2phi1fc}, and the integral away from the light cone for $\phi$ can be controlled in a similar way to the terms \eqref{eq:errorpsifarphiclose11}, \eqref{eq:errorpsifarphiclose12}, and \eqref{eq:psiveryfar1}. We now turn to the final region (which is the most difficult new region for these terms compared to the terms of the form \eqref{eq:psi2phi1}). This is the region away from the light cone of $\psi$ but still along the light cones for $\phi$ and $f$. 
This is given by \[ \left |\int_0^s \int_{\Sigma_t} \chi_\psi^c \chi_\phi \chi_f \Gamma^\sigma ((\partial_t \psi) (\partial_t \phi)^2) (\partial_t f) d x d t \right|. \] Let us briefly describe why this term is problematic for this cubic nonlinearity but not the other one. The worst region is the one along the light cone for $f$ and the light cone associated with the linear factor in the nonlinearity (in both cases, the nonlinearity is composed of one linear factor and one quadratic factor). In this region, we want to decompose $\partial_t$ in terms of $S$ and $\overline{\partial}_f$, as was done for the term \eqref{eq:errorintworst1}. For that term, because we are restricted to $u$ being small relative to $\tau$, we have good estimates for $S$ by Lemma~\ref{lem:gammaSgammabound}. However, in this term, $u$ is comparable to $\tau$, so we do not have good estimates for $S$ everywhere. We do, however, have a good power of $\tau$ (this is in fact almost enough already to close the argument). Thus, we must decompose into regions where $S$ is still useful and other regions where the geometry helps us. For technical reasons (to avoid $r' = 0$), we must further decompose this into a region where $r'$ is comparable to $s - t$ and a region where $r'$ is much smaller than $s - t$. We thus define the function \[ \chi_{f,I} = \chi \left ({r' - 100 \over 1 + s - t} \right ). \] Then, in the support of $\chi_{f,I}$ and where $s - t \ge 0$, we note that $1 + |u'| \ge c (1 + s - t)$. We now consider first the region which is within the light cone for $f$ in this sense. This is given by \begin{equation} \label{eq:errorIntf2} \begin{aligned} \left |\int_0^s \int_{\Sigma_t} \chi_\psi^c \chi_{f,I} \chi_\phi \chi_f \Gamma^\sigma ((\partial_t \psi) (\partial_t \phi)^2) (\partial_t f) d x d t \right|. 
\end{aligned} \end{equation} Now, using that $1 + |u'| \ge c (1 + s - t)$ in this region, we have that this integral is controlled by \[ C \epsilon^{{9 \over 4}} \int_0^s \int_{\Sigma_t} \chi_\psi^c \chi_{f,I} \chi_\phi \chi_f {1 \over (1 + t)^{{3 \over 2} - \delta}} {1 \over 1 + \tau^{{3 \over 2} - 2 \delta}} {1 \over 1 + |\overline{u}|^{3 - 2 \delta}} {1 \over (1 + s - t)^{{3 \over 2}}} d x d t. \] We in fact could improve the power of $1 + s - t$ from ${3 \over 2}$ to $2$, but we use ${3 \over 2}$ in order to treat this term the same way as terms that follow. We ignore the power of $\epsilon$ which is larger than $\epsilon^{{3 \over 4}}$, and we focus on the powers of $s$ and $\tau$. We break this integral up into two regions, one where $t \ge (1 - 10 \delta_0) s$ and another where $t \le (1 - 10 \delta_0) s$. In the first region, we note that the support of $\chi_f \chi_{f,I}$ is a set of very small diameter compared to the scale of the ellipses which are the level sets of $r'$ in the region where $u' \le \delta_0 \tau$. Thus, we use Lemma~\ref{lem:largecirclearc} (or more specifically, the version for an ellipse described after Lemma~\ref{lem:largecirclearc}) to note that the integral is controlled by \[ C {1 \over (1 + s)^{{3 \over 2} - \delta}} {1 \over 1 + \tau^{{3 \over 2} - 2 \delta}} \int_{(1 - 10 \delta_0) s}^s {1 \over (1 + s - t)^{{1 \over 2}}} d t \le C {1 \over (1 + s)^{1 - \delta}} {1 \over 1 + \tau^{{3 \over 2} - 2 \delta}} \] Then, in the region where $t \le (1 - 10 \delta_0) s$, we note that $1 + s - t \ge c (1 + s)$. Thus, the integral is controlled by \[ C {1 \over (1 + s)^{{3 \over 2}}} {1 \over 1 + \tau^{{3 \over2} - 2 \delta}} \int_0^{(1 - 10 \delta_0) s} \int_{\Sigma_t} \chi_{\overline{r} \le t + 1} {1 \over (1 + t)^{{3 \over 2} - \delta}} {1 \over 1 + |\overline{u}|^{3 - 2 \delta}} d x d t. 
\] Going into $(t,\overline{r},\overline{\theta})$ coordinates then gives us that this is controlled by \[ C {1 \over (1 + s)^{{3 \over2}}} {1 \over 1 + \tau^{{3 \over 2} - 2 \delta}} \int_0^{(1 - 10 \delta_0) s} {1 \over (1 + t)^{{1 \over 2} - \delta}} d t \le C {1 \over (1 + s)^{1 - \delta}} {1 \over 1 + \tau^{{3 \over 2} - 2 \delta}}. \] We now consider the region associated with $\chi_{f,I}^c$ instead, and we note that $r' \ge 10$ and $s - t \ge 10$ in the support of $\chi_{f,I}^c$. The integral we must control is then \[ \left |\int_0^s \int_{\Sigma_t} \chi_\psi^c \chi_{f,I}^c \chi_\phi \chi_f \Gamma^\sigma ((\partial_t \psi) (\partial_t \phi)^2) (\partial_t f) d x d t \right| \] Now, using \eqref{eq:dtfdecomp} to decompose $\partial_t f$, we are left with an integral of the form \begin{equation} \label{eq:errorintworst2} \begin{aligned} \int_0^s \int_{\Sigma_t} \chi_\psi^c \chi_{f,I}^c \chi_f \chi_\phi \Gamma^\sigma ((\partial_t \phi)^2 (\partial_t \psi)) \left [-{1 \over 2} L' - \gamma S - {1 \over 2} \gamma \left (t - r' - {a (x - a) \over r'} \right ) L' - \gamma {a y \over (r')^2} \partial_{\theta'} \right ] f d x d t. \end{aligned} \end{equation} Most of the regions that follow will use this decomposition of $\partial_t$ in terms of $S$ and $\overline{\partial}_f$. There is only one region that will not. We must now consider two cases, one in which $\tau \le \delta_0 s$ and one in which $\tau \ge \delta_0 s$. We begin with the easier case of $\tau \ge \delta_0 s$. We first consider the terms not involving $S$. By Lemma~\ref{lem:du'dv'dtheta'}, we know that the integral of these terms is controlled by \[ C \int_0^s \int_{\Sigma_t} \chi_\psi^c \chi_{f,I}^c \chi_f \chi_\phi \left |\Gamma^\sigma ((\partial_t \phi)^2 (\partial_t \psi)) \right | {1 \over r'} {1 \over (1 + s - t)^{{1 \over 2}}} {1 \over 1 + |u'|^{{1 \over 2}}} d x d t. 
\] These terms can thus be controlled in the same way as \eqref{eq:errorIntf2} above because $r'$ is comparable to $1 + s - t$ in the support of the integrand. For the term involving $S$, we integrate by parts in $S$ as in \eqref{eq:Sibp} and note that, by Lemma~\ref{lem:gammaSgammabound}, we have that $|S(\gamma)| \le {C \over r'}$. Thus, the integral is controlled by \begin{equation} \begin{aligned} C \int_0^s \int_{\Sigma_t} |\gamma| |S(\chi_\psi^c \chi_{f,I}^c \chi_\phi \chi_f)| |\Gamma^\sigma ((\partial_t \psi) (\partial_t \phi)^2)| |f| d x d t \\ + C \int_0^s \int_{\Sigma_t} \chi_\psi^c \chi_{f,I}^c \chi_\phi \chi_f |\Gamma^\sigma ((\partial_t \psi) (\partial_t \phi)^2)| {1 \over r'} |f| d x d t \\ + C \int_0^s \int_{\Sigma_t} \chi_\psi^c \chi_{f,I}^c \chi_\phi \chi_f |S \Gamma^\sigma ((\partial_t \psi) (\partial_t \phi)^2)| {1 \over r'} |f| d x d t. \end{aligned} \end{equation} We note that $S$ is well behaved when applied to any of the cutoff functions involved. Indeed, the discussion after \eqref{eq:Sibp} shows that $S$ is well behaved when applied to some of the cutoff functions involved, and controlling the others follows similarly after writing $S$ in terms of $L'$, $\underline{L}'$, and $\partial_{\theta'}$ and then using Lemma~\ref{lem:du'dv'dtheta'}. More precisely, we have that \begin{equation} \label{eq:Scutoffs1} \begin{aligned} |\gamma| |S(\chi_\psi^c \chi_{f,I}^c \chi_\phi \chi_f)| \le \left ({C \over \tau} + {C \over r'} \right ) \chi_\psi^c \chi_{f,I}^c \chi_\phi \chi_f, \end{aligned} \end{equation} where we recall that we are abusing notation and using the cutoffs on the right hand side to represent appropriate cutoffs over slightly larger regions. Thus, these terms can be controlled in the same way as \eqref{eq:errorIntf2} above. 
We also note that there are no boundary terms from the integration by parts because the integrand is compactly supported where $s - t \ge 10$ (because of the cutoff $\chi_{f,I}^c$) and $t \ge {\tau \over 10}$ (by Lemma~\ref{lem:rtr's-t}). When $\tau \le \delta_0 s$, we must further decompose. We first consider the case of small $t$ (where $s - t$ is comparable to $s$) and the case of large $t$. In the case of large $t$, we will have to further decompose in terms of the behavior of $f$. This motivates us to introduce the cutoff function $\chi_t = \chi \left ({s - t \over 20 s} \right )$. This function is supported where $t \ge (1 - 20 \delta_0) s$, and it is equal to $1$ when $t \ge (1 - 10 \delta_0) s$. Thus, the function $\chi_t^c = 1 - \chi_t$ is supported where $t \le (1 - 10 \delta_0) s$. When $t \le (1 - 10 \delta_0) s$, we note that $|S(\gamma)| \le {C \over \tau}$ by Lemma~\ref{lem:gammaSgammabound}. We then decompose $\partial_t$ in terms of $S$ as we have done several times before, giving us the integral \begin{equation} \label{eq:worstint2tsmall} \begin{aligned} \int_0^s \int_{\Sigma_t} \chi_t^c \chi_\psi^c \chi_{f,I}^c \chi_f \chi_\phi \Gamma^\sigma ((\partial_t \phi)^2 (\partial_t \psi)) \\ \times \left [-{1 \over 2} L' - \gamma S - {1 \over 2} \gamma \left (t - r' - {a (x - a) \over r'} \right ) L' - \gamma {a y \over (r')^2} \partial_{\theta'} \right ] f d x d t. 
\end{aligned} \end{equation} For the term involving $S$, we integrate by parts as we have done before, and the bound on $|\gamma|$ and $|S(\gamma)|$ given by Lemma~\ref{lem:gammaSgammabound} in the support of the integrand gives us that the resulting integral is controlled by \begin{equation} \begin{aligned} \int_0^s \int_{\Sigma_t} |\gamma| |S(\chi_\psi^c \chi_{f,I}^c \chi_\phi \chi_f \chi_t^c)| |\Gamma^\sigma ((\partial_t \psi) (\partial_t \phi)^2)| |f| d x d t \\ + {C \over \tau} \int_0^s \int_{\Sigma_t} \chi_\psi^c \chi_{f,I}^c \chi_\phi \chi_f \chi_t^c |S \Gamma^\sigma ((\partial_t \psi) (\partial_t \phi)^2)| |f| d x d t \\ + {C \over \tau} \int_0^s \int_{\Sigma_t} \chi_\psi^c \chi_{f,I}^c \chi_\phi \chi_f \chi_t^c |\Gamma^\sigma ((\partial_t \psi) (\partial_t \phi)^2)| |f| d x d t. \end{aligned} \end{equation} We recall that we have good control over $|\gamma| |S(\chi_\psi^c \chi_{f,I}^c \chi_\phi \chi_f \chi_t^c)|$ (see \eqref{eq:Scutoffs1} and note that $|S (\chi_t)| \le C$). Now, by Lemma~\ref{lem:rtr's-t}, we note that these integrals are supported in the region where $t \ge {\tau \over 10}$. We may thus control these terms in a similar way as either \eqref{eq:worsterrorint1} or \eqref{eq:errorIntf2} above, giving us that the integrals are controlled by \[ C \epsilon^{{9 \over 4}} {1 \over (1 + s)^{{1 \over 2}} (1 + \tau^{{3 \over 2} - \delta})}, \] as desired (we note that we are in fact throwing away good powers of $\tau$ that we do not need). We can then use Lemma~\ref{lem:du'dv'dtheta'} and improved decay of good derivatives of $f$ to control the other terms in the error integral which do not involve $S$ in a similar way. 
Indeed, this gives us that those integrals are controlled by \[ C \epsilon^{{9 \over 4}} \int_0^s \int_{\Sigma_t} \left ({1 \over \tau} + {1 \over r'} \right ) \chi_\psi^c \chi_{f,I}^c \chi_\phi \chi_f \chi_t^c |\Gamma^\sigma ((\partial_t \psi) (\partial_t \phi)^2)| {1 \over (1 + s - t)^{{1 \over 2}}} {1 \over 1 + |u'|^{{1 \over 2}}} d x d t, \] which can be dealt with in the same way as the terms above. In the remaining region where $t \ge (1 - 20 \delta_0) s$, we introduce more cutoff functions adapted to $f$. We take $\chi_{f,\theta'} (\theta') = \chi(|\vartheta'|) = \chi(|\pi - \theta'|)$. This localizes us to the set of points where $|\vartheta'| \le \delta_0$. We note that, in the support of $\chi_{f,I}'$ and $(\chi_{f,I} ^c)'$, we have that there exists some constant $c$ such that $1 + |u'| \ge c (1 + s - t)$. Now, decomposing $\partial_t$ in terms of $S$ as we have done before, we have that \[ \chi_f \chi_t \chi_{f,\theta'}^c \chi_{f,I} ^c \partial_t f = -{1 \over 2} \chi_f \chi_t \chi_{f,\theta'}^c \chi_{f,I} ^c (L' + \underline{L}') f. \] The term with the $L'$ derivative on $f$ is better, so we focus on the other term. We have that \[ \chi_f \chi_t \chi_{f,\theta'}^c \chi_{f,I} ^c \underline{L}' f = \underline{L}' (\chi_f \chi_t \chi_{f,\theta'}^c \chi_{f,I} ^c f) - \underline{L'} (\chi_f \chi_t \chi_{f,\theta'}^c \chi_{f,I} ^c) f. \] Now, we have that \[ |\underline{L}' (\chi_f \chi_t \chi_{f,\theta'}^c \chi_{f,I} ^c)| \le {C \over \tau} + {C \over s} + {C \over 1 + s - t}. \] Let $h = \chi_f \chi_t \chi_{f,\theta'}^c \chi_{f,I} ^c f$. We now use Lemma~\ref{lem:du'S} to write that \[ \underline{L}' (h) = 2 \gamma S(h) + {\gamma \over r'} \left (t - r' - {a (x - a) \over r'} \right ) r' L' (h) + \gamma {2 a y \over (r')^2} \partial_{\theta'} (h) \] Now, we note that \[ |r' L'(\chi_f \chi_t \chi_{f,\theta'}^c \chi_{f,I} ^c)| \le C, \] and similarly, we note that \[ |\partial_{\theta'} (\chi_f \chi_t \chi_{f,\theta'}^c \chi_{f,I} ^c)| \le C. 
\] Thus, by Lemma~\ref{lem:du'dv'dtheta'} (note that $\vartheta' \ge \delta_0 / 2$ in the support of the integrand), we have that \begin{equation} \begin{aligned} \left |\int_0^s \int_{\Sigma_t} \chi_\psi^c \chi_{f,I}^c \chi_\phi \chi_f \Gamma^\sigma ((\partial_t \psi) (\partial_t \phi)^2) (\partial_t f) d x d t \right | \\ \le C \left |\int_0^s \int_{\Sigma_t} \chi_\phi \chi_\psi^c \Gamma^\sigma ((\partial_t \psi) (\partial_t \phi)^2) \gamma S(\chi_f \chi_{f,I}^c \chi_t \chi_{f,\theta'}^c f) d x d t \right | \\ + \int_0^s \int_{\Sigma_t} \left ({C \over \tau} + {C \over 1 + s - t} \right ) \chi_\psi^c \chi_\phi \chi_f \chi_{f,I}^c \chi_t \chi_{f,\theta'}^c |\Gamma^\sigma ((\partial_t \psi) (\partial_t \phi)^2)| {1 \over (1 + s - t)^{{1 \over 2}}} {1 \over 1 + |u'|^{{1 \over 2}}} d x d t. \end{aligned} \end{equation} The second of these terms can be controlled in the same way as the terms above (see \eqref{eq:worsterrorint1} and \eqref{eq:errorIntf2}). For the term involving $S$, we integrate by parts in $S$ just as before. Because we have appropriate control over $S(\chi_\phi)$, $S(\chi_\psi^c)$, and $S(\gamma)$ in the region in question by Lemma~\ref{lem:gammaSgammabound}, the desired result follows analogously to the terms above (more precisely, the integral can once again be controlled in the same way as either \eqref{eq:worsterrorint1} or \eqref{eq:errorIntf2}). We now consider the remaining region, where $|\vartheta'| \le \delta_0$. For this term, we shall not decompose $\partial_t$ in terms of $S$ and $\overline{\partial}_f$, and the integral is instead given by \[ \int_0^s \int_{\Sigma_t} \chi_\psi^c \chi_{f,I}^c \chi_\phi \chi_f \chi_{f,\theta'} \Gamma^\sigma ((\partial_t \psi) (\partial_t \phi)^2) (\partial_t f) d x d t. \] In this region, we note that the Jacobian in $(\overline{r},r')$ coordinates is well behaved by Lemma~\ref{lem:dr'drbar}.
Thus, we have that \begin{equation} \begin{aligned} \left |\int_0^s \int_{\Sigma_t} \chi_\psi^c \chi_{f,I}^c \chi_\phi \chi_t \chi_f \chi_{f,\theta'} \Gamma^\sigma ((\partial_t \psi) (\partial_t \phi)^2) (\partial_t f) d x d t \right | \\ \le C \epsilon^{{9 \over 4}} {1 \over (1 + s)^{{3 \over 2} - \delta}} {1 \over 1 + \tau^{{3 \over 2} - 2 \delta}} \int_{{s \over 2}}^s \int_{\Sigma_t} \chi_\psi^c \chi_{f,I}^c \chi_\phi \chi_f \chi_{f,\theta'} {1 \over 1 + |\overline{u}|^{3 - 4 \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} {1 \over 1 + |u'|^{{3 \over 2}}} d x d t \\ \le C \epsilon^{{9 \over 4}} {1 \over (1 + s)^{{3 \over 2} - \delta}} {1 \over 1 + \tau^{{3 \over 2} - 2 \delta}} \int_{s \over 2}^s \int_{\Sigma_t} {1 \over 1 + |\overline{u}|^{3 - 4 \delta}} {1 \over 1 + |u'|^{{3 \over 2}}} {1 \over (1 + s - t)^{{1 \over 2}}} d \overline{r} d r' d t \\ \le C \epsilon^{{9 \over 4}} {1 \over (1 + s)^{{3 \over 2} - \delta}} {1 \over 1 + \tau^{{3 \over 2} - 2 \delta}} \int_{{s \over 2}}^s {1 \over (1 + s - t)^{{1 \over 2}}} d t \le C \epsilon^{{9 \over 4}} {1 \over (1 + s)^{1 - \delta}} {1 \over 1 + \tau^{{3 \over 2} - 2 \delta}}, \end{aligned} \end{equation} as desired. We finally consider quartic terms. The integrals we must control are of the form \[ \int_0^s \int_{\Sigma_t} \Gamma^\sigma ((\partial_t \psi)^4) (\partial_t f) d x d t. \] We begin with the region away from the light cone for $\psi$, which is the easiest to control. We first assume that $\tau \le \delta_0 s$. Let $j_\tau$ denote the greatest integer which is less than or equal to ${\delta_0 \tau \over 10}$. 
After discretizing the integral within each $\Sigma_t$ in terms of $u$ and $u'$, we have that \begin{equation} \begin{aligned} \left |\int_0^s \int_{\Sigma_t} \chi_\psi^c \Gamma^\sigma ((\partial_t \psi)^4) (\partial_t f) d x d t \right | \\ \le C \epsilon^3 \int_0^s \sum_{j = j_\tau}^\infty \sum_{k = 1}^\infty \int_{\Sigma_t} \chi_{j - 1 \le u \le j + 1} \chi_{k - 1 \le u' \le k + 1} {1 \over (1 + j)^{6 - 5 \delta}} {1 \over (1 + k)^{{3 \over 2}}} {1 \over (1 + t)^{2 - \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} d x d t. \end{aligned} \end{equation} We note that the lower bound $j = j_\tau$ comes from the fact that $\chi_\psi^c = 0$ outside of this region. Now, by Lemma~\ref{lem:annuliarea}, we have that \[ \int_{\Sigma_t} \chi_{j - 1 \le u \le j + 1} \chi_{k - 1 \le u' \le k + 1} d x \le C (1 + t)^{{1 \over 2}}. \] Thus, we have that \begin{equation} \begin{aligned} \left |\int_0^s \int_{\Sigma_t} \chi_\psi^c \Gamma^\sigma ((\partial_t \psi)^4) (\partial_t f) d x d t \right | &\le C \epsilon^3 {1 \over (1 + \tau)^{5 - 5 \delta}} \int_0^s {1 \over (1 + t)^{{3 \over 2} - \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} d t \\ &\le C \epsilon^3 {1 \over (1 + \tau)^{5 - 5 \delta}} {1 \over (1 + s)^{{1 \over 2}}}. \end{aligned} \end{equation} We now assume that $\tau \ge \delta_0 s$. We then have that $1 + |u| \ge c (1 + s)$, meaning that we have that \begin{equation} \begin{aligned} \left |\int_0^s \int_{\Sigma_t} \chi_\psi^c \Gamma^\sigma ((\partial_t \psi)^4) (\partial_t f) d x d t \right | &\le C \epsilon^3 {1 \over (1 + s)^{5 - 5 \delta}} \int_0^s {1 \over (1 + t)^{1 - \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} d t \\ &\le C \epsilon^3 {1 \over (1 + s)^{{11 \over 2} - 6 \delta}}, \end{aligned} \end{equation} as desired. We now consider the integral in the region determined by $\chi_\psi$. We must once again consider two cases depending on the size of $\tau$ relative to $s$. 
We begin with the easier case $\tau \ge \delta_0 s$, in which $\tau$ is comparable to $s$. In this case, we further consider two cases depending on how large $u'$ is compared to $\tau$ in the usual way. When $u' \le \delta_0 \tau$, we must control the integral \[ \left |\int_0^s \int_{\Sigma_t} \chi_\psi \chi_f \Gamma^\sigma ((\partial_t \psi)^4) (\partial_t f) d x d t \right |. \] Decomposing $\partial_t$ in terms of the frame consisting of $S$ and good derivatives for $f$ as we have done several times, we are left with \[ \left |\int_0^s \int_{\Sigma_t} \chi_\psi \chi_f \Gamma^\sigma ((\partial_t \psi)^4) \left [-{1 \over 2} L' - \gamma S - {1 \over 2} \gamma \left (t - r' - {a (x - a) \over r'} \right ) L' - \gamma {a y \over (r')^2} \partial_{\theta'} \right ] f d x d t \right |. \] We integrate by parts in $S$ as we have done several times before and use Lemma~\ref{lem:du'dv'dtheta'} to bound the other terms directly. Using in addition Lemma~\ref{lem:gammaSgammabound}, we get that the resulting integrals are controlled by \[ C \epsilon^3 {1 \over s} \int_0^s \int_{\Sigma_t} \chi_\psi \chi_f {1 \over (1 + t)^{2 - \delta}} {1 \over 1 + |u|^{6 - 8 \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} {1 \over 1 + |u'|^{{1 \over 2}}} d x d t, \] where we have used that $r$ and $r'$ are comparable to $s$ on the support of the integrand by Lemma~\ref{lem:rtr's-t} (note that this also implies that $t$ and $s - t$ are comparable to $s$ in this region). We then go into $(\theta,u,u')$ coordinates as in Lemma~\ref{lem:thetauu'}, giving us that the integral is controlled by \[ C \epsilon^3 {1 \over (1 + s)^{{5 \over 2} - \delta}} \int_0^{2 \pi} \int_{-1}^{\delta_0 \tau} \int_{-1}^{\delta_0 \tau} {1 \over 1 + |u|^{6 - 8 \delta}} {1 \over 1 + |u'|^{{1 \over 2}}} d u' d u d \theta \le C \epsilon^3 {1 \over (1 + s)^{2 - \delta}}, \] as desired.
Now, when $u' \ge \delta_0 \tau$, we note that the integral is controlled by \[ C \epsilon^3 {1 \over (1 + s)^{{3 \over 2}}} \int_0^s \int_{\Sigma_t} \chi_{r \le t + 1} {1 \over (1 + t)^{2 - \delta}} {1 \over 1 + |u|^{6 - 8 \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} d x d t, \] where we are using that $\tau \ge \delta_0 s$. Thus, using the decay in $u$ to bound the integral on each $\Sigma_t$ by $C (1 + t)$, we have that this is controlled by \[ C \epsilon^3 {1 \over (1 + s)^{{3 \over 2}}} \int_0^s {1 \over (1 + t)^{1 - \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} d t \le C \epsilon^3 {1 \over (1 + s)^{2 - \delta}}, \] as desired. We finally consider the case where $\tau \le \delta_0 s$. We once again decompose relative to the distance to the light cone adapted to $f$. We first consider the region where $u' \ge \delta_0 \tau$. Now, we note that when $u' \ge 10 \tau$, we have that $\psi$ is identically $0$ by domain of dependence. Thus, we have that the integrand in question is supported where $\delta_0 \tau \le u' \le 10 \tau$. We shall use the fact that this interval in $u'$ is comparable to $\tau$ in length. The integral we must control is given by \[ \int_0^s \int_{\Sigma_t} \chi_\psi \chi_{\delta_0 \tau \le u' \le 10 \tau} \Gamma^\sigma ((\partial_t \psi)^4) (\partial_t f) d x d t. \] This integral is controlled by \[ C \epsilon^3 {1 \over \tau^{{3 \over 2}}} \int_0^s \int_{\Sigma_t} \chi_\psi \chi_{\delta_0 \tau \le u' \le 10 \tau} {1 \over (1 + t)^{2 - \delta}} {1 \over 1 + |u|^{6 - 8 \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} d x d t. \] We break up the integral into two regions in $t$, one from $0$ to $\tau$ and the other from $\tau$ to $s$. 
In the first region, we use the decay in $u$ to bound the integral on each $\Sigma_t$ by $C (1 + t)$, giving us that the integral is controlled by \[ C \epsilon^3 {1 \over (1 + s)^{{1 \over 2}}} {1 \over \tau^{{3 \over 2}}} \int_0^\tau {1 \over (1 + t)^{1 - \delta}} d t \le C \epsilon^3 {1 \over (1 + s)^{{1 \over 2}}} {1 \over \tau^{{3 \over 2} - \delta}}. \] The remaining integral is from $\tau$ to $s$. Discretizing this integral in $u$ within each $\Sigma_t$ then gives us that this is controlled by \[ C \epsilon^3 \sum_{j = -1}^\infty {1 \over \tau^{{3 \over 2}}} \int_\tau^s \int_{\Sigma_t} \chi_\psi \chi_{\delta_0 \tau \le u' \le 10 \tau} \chi_{j \le u \le j + 1} {1 \over (1 + t)^{2 - \delta}} {1 \over (2 + j)^{6 - 8 \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} d x d t. \] Now, for $j$ fixed, we note that the inner integral in $x$ in the expression \[ \int_\tau^s \int_{\Sigma_t} \chi_\psi \chi_{u'} \chi_{j \le u \le j + 1} {1 \over (1 + t)^{2 - \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} d x d t \] is an integral over the intersection of two annular regions, one adapted to $\psi$ having thickness $1$ and the other adapted to $f$ having thickness comparable to $\tau$. Thus, by Lemma~\ref{lem:annuliarea}, we have that \[ \int_{\Sigma_t} \chi_\psi \chi_{u'} \chi_{j \le u \le j + 1} d x \le C (1 + \sqrt{t}) \sqrt{\tau}. \] Integrating the remaining integral in $t$ and summing in $j$ then gives us that the integral we desired to bound is controlled by \[ C \epsilon^3 {1 \over (1 + s)^{{1 \over 2}}} {1 \over \tau^{{3 \over 2} - \delta}}, \] as desired. The only remaining region is where $u' \le \delta_0 \tau$. The integral we must control is given by \[ \int_0^s \int_{\Sigma_t} \chi_\psi \chi_f \Gamma^\sigma ((\partial_t \psi)^4) (\partial_t f) d x d t.
\] Decomposing $\partial_t$ in terms of $S$ and good derivatives for $f$ as we have done before results in the integral \[ \int_0^s \int_{\Sigma_t} \chi_\psi \chi_f \Gamma^\sigma ((\partial_t \psi)^4) \left [-{1 \over 2} L' - \gamma S - {1 \over 2} \gamma \left (t - r' - {a (x - a) \over r'} \right ) \partial_{v'} - \gamma {a y \over (r')^2} \partial_{\theta'} \right ] f d x d t. \] We integrate by parts in the term involving $S$ and bound the others directly. We first focus on the term with $S$. Integrating by parts as we have several times before, we have that the integral is controlled by \[ C \epsilon^3 {1 \over \tau} \int_{{\tau \over 10}}^{s - {\tau \over 10}} \int_{\Sigma_t} \chi_\psi \chi_f {1 \over (1 + t)^{2 - {\delta \over 2}}} {1 \over 1 + |u|^{6 - 8 \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} {1 \over 1 + |u'|^{{1 \over 2}}} d x d t. \] We note that we are now using the slightly improved interpolation for the power of $t$ (i.e., a ${\delta \over 2}$ loss instead of a $\delta$ loss). This is because this integral will have a further logarithmic loss coming from Lemma~\ref{lem:annuliu'dec}. Indeed, discretizing the integral in $u$ (and taking $j_\tau$ to be the smallest integer such that $j_\tau \ge \delta_0 \tau$) gives us that this is controlled by \[ C \epsilon^3 {1 \over \tau} \sum_{j = -1}^{j_\tau} {1 \over (2 + j)^{6 - 8 \delta}} \int_{{\tau \over 10}}^{s - {\tau \over 10}} \int_{\Sigma_t} \chi_f \chi_{j \le u \le j + 1} {1 \over (1 + t)^{2 - {\delta \over 2}}} {1 \over (1 + s - t)^{{1 \over 2}}} {1 \over 1 + |u'|^{{1 \over 2}}} d x d t. \] Now, by Lemma~\ref{lem:annuliu'dec}, we note that \[ \int_{\Sigma_t} \chi_f \chi_{j \le u \le j + 1} {1 \over 1 + |u'|^{{1 \over 2}}} d x \le C (1 + \sqrt{t}) \log(1 + t) \le C (1 + t)^{{1 \over 2} + {\delta \over 2}}.
\] Thus, the remaining integral is controlled by \[ \int_{{\tau \over 10}}^{s - {\tau \over 10}} {1 \over (1 + t)^{{3 \over 2} - \delta}} {1 \over (1 + s - t)^{{1 \over 2}}} d t \le C (1 + s)^{-{1 \over 2}} \tau^{-{1 \over 2} + \delta}. \] Summing in $j$ then gives us that the whole integral is controlled by \[ C \epsilon^3 {1 \over (1 + s)^{{1 \over 2}} \tau^{{3 \over 2} - \delta}}, \] as desired. The terms not involving $S$ can be controlled in a similar way after using Lemma~\ref{lem:du'dv'dtheta'}. Because we have now controlled all of the error integrals required to use Proposition~\ref{prop:decay}, we have recovered the pointwise bootstrap assumptions, and we have thus completed the proof of Theorem~\ref{thm:mainthm}. \bibliographystyle{abbrv}
\section{Introduction} \label{introduction} A micro-sensor (or simply a sensor) is a small-sized and low-powered electronic device with limited computational and communication capabilities. A Wireless Sensor Network (WSN) may contain from tens to millions of such sensors. If the sensors are deployed randomly, or the sensors move about after deployment, finding the locations of the sensors (\textit{localization}) is an important issue in a WSN. Localization requires the communication of several pieces of information between sensors over the network and a great deal of computation. All this comes at the cost of high energy consumption. So far, research has mainly focused on finding efficient localization techniques in static sensor networks (where the sensor nodes do not change their positions after deployment) \cite{BHE00,MKQP01,RSPS02}. In a WSN, sensors may be deployed either by some design with a predefined infrastructure or through random manual placement of sensors. After being deployed, the sensors may remain static or move with time. In both cases, the positions of the sensors need to be determined. Bulusu et al.~\cite{BHE00} proposed a localization technique that does not use GPS. The techniques for finding the locations of sensors in static networks are costly. As sensors in a mobile WSN change their positions frequently, many localization calls are necessary to track a mobile sensor. A fast mobile sensor may require frequent localizations, draining its valuable energy quickly. To reduce the number of localization calls, the positions of sensors at different time instants can be predicted or estimated from the history of the path of the sensor \cite{BM02,HE04}. Dynamic sensor networks have numerous applications: assisting mobile soldiers on a battlefield, health monitoring, wildlife tracking \cite{JOWMPR02}, etc. A moving sensor needs to find its position frequently. Using GPS may not be appropriate due to its low accuracy, high energy consumption, cost and size.
An optimized localization technique for static sensor networks is used to find the current position of a mobile sensor. Tilak et al.~\cite{TKAK05} proposed some techniques for tracking mobile sensors based on \textit{dead reckoning} to control the number of costly localization operations. Among these techniques, the best performance is achieved by \textit{MADRD}. It estimates the position of a sensor instead of localizing the sensor every time it moves. The error in the estimated position grows with time. Every time localization is called, the error in the estimated position is calculated, and depending on the value of this error the time for the next localization is fixed. Fast mobile sensors trigger localization with higher frequency for a given level of accuracy in position estimation. We propose a technique to estimate the positions of mobile sensors with control over the localization calls and with lower energy dissipation. \textit{The main focus of this paper is as follows:} In this paper, a method is proposed to estimate the positions of a mobile sensor instead of localizing it every time its position is required. The proposed method estimates the position of a sensor only when it is required by a base station. With a slight modification of this algorithm, a mobile sensor may find its location locally (i.e., in a distributed manner) rather than centrally at a base station. Information about an inactive sensor ceases to be communicated. Most calculations are carried out at the base station to reduce the arithmetic load on the sensors. Localization is called at a time interval $T$. In this paper, we consider that the sensors move according to the Random Waypoint (RWP) mobility model. We show that energy consumption may be regulated with the parameter $T$. An analytical expression for the expected error in position estimation is derived.
This helps in fixing the value of $T$ to regulate energy dissipation by controlling the number of localization calls, given knowledge of the rate of change of the direction of a sensor's path in the application at hand. The proposed method gives higher accuracy in estimation for a given energy cost and vice versa. Both the analytical formula and the simulation studies show that our proposed algorithm incurs significantly lower error than MADRD while consuming equal energy. Part of this paper was published in a conference paper~\cite{SMM06}. In the rest of the paper, Section~\ref{prob:stmt} describes the problem of tracking mobile sensors. In Section~\ref{early:wrk} we discuss related works as well as our motivation to propose an estimation method using interpolation. Section~\ref{protocol} describes the proposed algorithm for tracking mobile sensors. Section~\ref{analysis} deals with the analysis of the algorithm and its advantages. In Section~\ref{sim:res} simulation results are presented. Finally, we present our conclusion in Section~\ref{conclude}. \section{Problem Statement and Performance Measures} \label{prob:stmt} The position of a sensor is determined by a standard localization method. We assume that the location determined by this localization represents the actual position of the sensor at that moment. The sensors are completely unaware of the mobility pattern. Therefore, the actual position of a sensor $S$ at any time $t$ is unknown. The position may be estimated or found by a localization call. The absolute error in location estimation may be calculated as: $$ error_{abs}= \sqrt{(x-\hat{x})^2+(y-\hat{y})^2}$$ where $(x,y)$ and $(\hat{x},\hat{y})$ denote the actual and estimated positions at time $t$, respectively. Frequent calls for localization consume enormous energy. Designing an algorithm that optimizes both accuracy and energy dissipation simultaneously is very difficult.
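For concreteness, the error metric above can be sketched in a few lines of Python (an illustrative helper of our own naming, not part of the protocol itself):

```python
import math

def error_abs(actual, estimated):
    """Absolute error between the actual position (x, y) and the
    estimated position (x_hat, y_hat), as defined above."""
    (x, y), (x_hat, y_hat) = actual, estimated
    return math.hypot(x - x_hat, y - y_hat)

# A 3-4-5 triangle: estimating the origin for a sensor actually at (3, 4)
# gives an absolute error of 5.
print(error_abs((3.0, 4.0), (0.0, 0.0)))  # 5.0
```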
An efficient, robust and energy-aware protocol is required to decide whether the location of the sensor should be estimated with a desired level of accuracy or found by localization with an acceptable level of energy cost. \section{Related Works and Motivation of this Work} \label{early:wrk} Researchers have mainly focused their attention on discovering efficient localization techniques in static sensor networks \cite{PCB00,SHS01,TFBD01}. Thrun et al.~\cite{TFBD01} proposed probabilistic techniques using Monte Carlo localization (MCL) to estimate the location of mobile robots with probabilistic knowledge of movement over a predefined map. They used a small number of seed nodes (nodes with known positions) as \textit{beacon}s. These nodes have some extra hardware. Hu et al.~\cite{HE04} introduced sequential MCL to exploit the mobility without extra hardware. WSNs generally remain embedded in unmapped terrain and sensors have no control over their mobility. Prediction has been used to reduce the number of localization calls and thus save energy~\cite{BM02}: the positions of a mobile sensor at different time instants are estimated from the history of the path of the sensor. Tilak et al.~\cite{TKAK05} tried to reduce the frequency of localizations for finding the position of mobile sensors. They proposed three techniques: 1) SFR (\textit{Static Fixed Rate}), 2) DVM (\textit{Dynamic Velocity Monotonic}) and 3) MADRD (\textit{Mobility Aware Dead Reckoning Driven}). SFR periodically calls some classical localization operation. In this protocol, at the time of reporting an event to the base station, the sensor sends the position obtained in the last localization. Therefore, localization operations are called unnecessarily when a sensor is not moving. On the other hand, the reported location may differ greatly from the actual position at the moment the event is reported. DVM adaptively calls localization according to the mobility of the sensors.
In DVM, localization is called with higher frequency when the sensor moves fast and with lower frequency when it moves slowly in a straight line. A sensor with high mobility drains its energy quickly and dies soon. If a sensor suddenly moves with very high speed from rest, the error in the reported location becomes very high. The third method, MADRD, predicts the locations of a sensor from its motion between the last two localizations using extrapolation. In MADRD, every time localization is called, the actual position is reported. The expected error (the distance between the reported position and the position according to the prediction) is compared to a {\it threshold error}, $E_{thresh}$ (implementation dependent). If the expected error exceeds $E_{thresh}$, the position predictor is deemed to become erroneous quickly, and localization calls are triggered with higher frequency. Again, a sensor with high speed calls localization frequently. Our goal is to reduce the error while consuming no more energy than MADRD. Figure~\ref{fig:motive} shows that the estimated path due to MADRD fluctuates depending on the actual motion between the last two calls. \begin{figure}[h!] \centering \includegraphics[width=.8\textwidth]{motivation.jpg} \caption{Figure showing the estimated path using interpolation and extrapolation.} \label{fig:motive} \end{figure} As opposed to MADRD, estimation using interpolation depends on the localization calls enclosing the point to be estimated rather than on the last calls alone. Our intuition is that estimation by interpolation is more closely guided by the actual motion than MADRD. If the sensors change their mobility pattern (i.e., speed, direction of path, etc.) very frequently, MADRD incurs high error in the position estimation. We propose the algorithm MAINT. We confirm our intuition by deriving the average error for MADRD as well as for MAINT, and we support the analytical results by simulations.
Both the analytical formula and the simulation studies show that our proposed algorithm incurs significantly lower error than MADRD even while consuming equal energy. \section{Localization Protocol for Tracking Mobile Sensors} \label{protocol} In several applications, a mobile sensor may frequently change its position and the direction of its path over time. A simple strategy for finding its position is to use a standard localization method whenever the position is needed. But if the position of the sensor is required frequently, this method is very costly. SFR calls a classical localization operation periodically with a fixed time interval. To respond to a query from the base station, a sensor sends the position obtained from the last localization. When a sensor remains still, localization calls are wasted; when it moves fast, the reported position suffers a large error. In DVM, localization is called adaptively according to the mobility of the sensors. The time interval for the next localization call is calculated as the time required to traverse the \textit{threshold distance} (a distance beyond which location estimation is assumed to be error prone) at the velocity of the sensor between the last two points in the sequence of localization calls. In case of high mobility, a sensor calls localization frequently. If a sensor suddenly moves with very high speed from rest, the error in the estimated location becomes very high. In MADRD, the velocity is calculated from the information obtained from the last two localized points. The predictor estimates the position with this velocity and communicates it to the query sender. At the localization point, the localized position is reported to the query sender and the \textit{distance error} is calculated as the distance between the predicted position and the reported position. If the error in position estimation exceeds the threshold error (application dependent), the predictor appears to be erroneous and localization needs to be triggered more frequently.
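To make the dead-reckoning idea concrete, a minimal MADRD-style predictor can be sketched as follows (a hypothetical helper assuming straight-line motion at the velocity of the last two localized points; it is not the authors' implementation):

```python
def predict_dead_reckoning(p1, t1, p2, t2, t):
    """Predict the position at time t (t >= t2) by extrapolating the
    velocity computed from the last two localized points p1 at t1 and
    p2 at t2 (dead reckoning, as in MADRD)."""
    vx = (p2[0] - p1[0]) / (t2 - t1)
    vy = (p2[1] - p1[1]) / (t2 - t1)
    return (p2[0] + vx * (t - t2), p2[1] + vy * (t - t2))

# Localized at (0, 0) at t=0 and (2, 0) at t=2: the predictor assumes the
# sensor keeps moving right at speed 1, so at t=5 it predicts (5, 0).
print(predict_dead_reckoning((0.0, 0.0), 0.0, (2.0, 0.0), 2.0, 5.0))  # (5.0, 0.0)
```

If the sensor has in fact turned since $t_2$, the prediction drifts away from the true path, which is exactly the error that the threshold test above is meant to catch.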
The calculation of error is necessary every time localization is called. Also, a sensor with high speed calls localization frequently. We propose a method, MAINT, to estimate the current position with a better trade-off between energy consumption and accuracy. MAINT uses interpolation, which gives a better estimation in most cases. \subsection{Mobility Aware Interpolation (MAINT)} In some applications, the base station may need the locations of individual sensors at different times. The location may be required to be attached to the data gathered by the sensors in response to a query. However, the data may not be required immediately. In such cases, the number of localization calls may be reduced by delaying the response. We propose a localization control scheme that estimates positions using interpolation. The sensor holds the queries requiring its location in a list, {\tt queryPoints}, and sends the event to the base station tagged with the time of occurrence. At the following localization point, the sensor sends the two localized positions to each of the query senders recorded in the list whose queries fall in the time interval between these two localization points. The base station estimates the positions more accurately by interpolation with this information. The timing of localization calls is as simple as in SFR. This eliminates the arithmetic overheads of MADRD as well as its error-prone behavior under sudden changes of speed. Unnecessary localization calls for slow sensors may be avoided. To reduce the energy dissipation, the localization method may be called with a larger time interval. Localization may be called immediately after receiving the query for real-time applications or some special purpose. Each sensor runs the process described by Algorithm~\ref{algo:main}. \begin{algorithm}[h!] \caption{(MAINT: Proposed algorithm)} \begin{algorithmic}[1] \State Let $(x_1,y_1)$ denote the last localization point, obtained at time $t_1$.
\State Set $\mathtt{queryPoints} \gets \emptyset$. \While{(a query received from a sensor $S$ at time $t > t_1$)} \State Append $S$ to $\mathtt{queryPoints}$, if $S \notin \mathtt{queryPoints}$. \If{(response to the query is immediate) or ($t \geq t_1 + T$)} \State Call an optimized localization method; \State Let $(\hat{x},\hat{y})$ be the location obtained from the method; \While{($\mathtt{queryPoints} \neq \emptyset $)} \State Extract a query sender, say $S'$, from $\mathtt{queryPoints}$; \State Send $t_1 $, $t$, $(x_1,y_1)$ and $(\hat{x},\hat{y})$ to $S'$ \EndWhile \State Set $t_1=t$ and $(x_1,y_1)$ = $(\hat{x},\hat{y})$. \EndIf \EndWhile \end{algorithmic} \label{algo:main} \end{algorithm} After receiving a message from a sensor, the base station waits until it gets the location information of the sender, $S$. If the processing of the message is immediate, the base station may send a location query to the node $S$. The base station extracts the localization points from the response obtained from $S$ against the location query and estimates the location of $S$ as follows:\\ \hrule \begin{algorithmic}[1] \State Let $(x_1,y_1)$ and $(x_2,y_2)$ be the localized positions of $S$ at $t_1$ and $t_2$ respectively. Let the base station require the position of $S$ at the time $t'$. \If{($t'\in [t_1,t_2]$)} \State Calculate the velocity vector as follows: \State ~~~~ $\dot{x} = (x_2 - x_1)/(t_2-t_1)$; ~~~~~~ $\dot{y} = (y_2 - y_1)/(t_2-t_1)$; \State Estimate the position of $S$ at time $t'\in [t_1,t_2]$ as follows: \State ~~~~ $\hat{x} = x_1 + \dot{x}(t'-t_1)$; ~~~~~~ $\hat{y} = y_1 + \dot{y}(t'-t_1)$; \EndIf \end{algorithmic} \hrule ~\\[.1mm] The base station estimates the locations of only those sensors whose events are being processed by the base station. The location of a sensor at a particular time instant is estimated on demand from the locations obtained in the previous and next localizations nearest to that time instant.
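The base-station estimation step above amounts to linear interpolation between the two enclosing localized positions; a minimal sketch in Python (illustrative only, with names of our own choosing):

```python
def interpolate_position(p1, t1, p2, t2, t_query):
    """MAINT base-station estimate: linearly interpolate the position at
    t_query in [t1, t2] from the localized positions p1 = (x1, y1) at t1
    and p2 = (x2, y2) at t2, exactly as in the pseudocode above."""
    if not (t1 <= t_query <= t2):
        raise ValueError("t_query must lie in [t1, t2]")
    x_dot = (p2[0] - p1[0]) / (t2 - t1)  # velocity components
    y_dot = (p2[1] - p1[1]) / (t2 - t1)
    return (p1[0] + x_dot * (t_query - t1), p1[1] + y_dot * (t_query - t1))

# Localized at (0, 0) at t=0 and at (10, 20) at t=10: the estimate at t=4
# lies 40% of the way along the segment.
print(interpolate_position((0.0, 0.0), 0.0, (10.0, 20.0), 10.0, 4.0))  # (4.0, 8.0)
```

Note that, unlike the dead-reckoning predictor of MADRD, the query time always lies between the two localization points, so the estimate cannot drift outside the segment joining them.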
We explain the proposed algorithm, MAINT, with an example, described pictorially in Figure~\ref{fig:algo}. Suppose the sensor calls localization at time $t_0$ and gets the position $(x_1,y_1)$. Suppose the sensor receives a query for its location at time $t_1$ from the destination $D_1$ (which may be a sensor or the base station). \begin{figure}[h!] \centering \includegraphics[width=.9\textwidth,keepaspectratio=true]{algorithm.jpg} \caption{Describing the algorithm with an example.} \label{fig:algo} \end{figure} Instead of sending the location immediately, the sensor keeps track of the query by inserting $t_1$ and $D_1$ into a list $\mathtt{queryPoints}$. Similarly, it receives queries at times $t_2$ and $t_3$ from the destinations $D_2$ and $D_3$ respectively. To keep track of these query points, $t_2$, $D_2$ and $t_3$, $D_3$ are appended to $\mathtt{queryPoints}$. After time $T$, MAINT calls localization to obtain the actual position $(x_2,y_2)$. The sensor sends the message consisting of $t_0$, $(x_1,y_1)$ and $T$, $(x_2,y_2)$, corresponding to these two localization points, to the query senders $D_1$, $D_2$ and $D_3$. The query senders find the locations of the sensor by extracting the information from the message. To reduce the message size, the sensor itself may calculate the velocity from the localization point at $t_0$ and that at $T$. Using this velocity, it calculates the locations at $t_1$, $t_2$ and $t_3$ and sends them to $D_1$, $D_2$ and $D_3$ respectively. This increases the arithmetic overhead in the sensor but reduces traffic through the network. \section{Energy and Error Analysis} \label{analysis} Localization in a static network is very costly, and finding the position of a mobile sensor requires frequent localization calls. We assume that energy consumption is proportional to the number of localization calls, so we measure energy in terms of the number of localization calls.
In this work we reduce the number of localization calls for the sake of energy saving rather than devising a more efficient localization method. \subsection{Mobility Model} The Random Waypoint (RWP) model \cite{JM96,BW02} is a commonly used synthetic model for mobility. We carried out the simulation study as well as the analysis with the RWP mobility model. The parameters used in the model are described as follows: \begin{itemize} \item Each node moves along a zigzag line from one waypoint to the next. The next waypoint is selected randomly over a given convex area, with two parameters: \textit{time} and \textit{velocity}. \item At the beginning of each leg, a random velocity is drawn from the velocity distribution, and the node reaches the next waypoint after a random time drawn from the time distribution. \end{itemize} In Figure\,\ref{fig:rwp} the sensor starts from $S_0(x_0,y_0)$ and reaches $S_T(x_T,y_T)$, its position at time $T$. The sequence of waypoints visited by the sensor in the time interval $[0,T]$ is $S_0,S_1,\cdots,S_n,S_T$. \begin{figure}[!h] \centering \setlength{\unitlength}{.6mm} \begin{picture}(120,37)(0,0) \put(12.5,1.5){\includegraphics[width=.5\textwidth,keepaspectratio=true]{rwp}} \put(5,0){$S_0$} \put(36,30){$S_1$} \put(46.5,16.5){$S_2$} \put(51,30){$S_3$} \put(50,7){$S_{i-1}$} \put(79,28){$S_{i}$} \put(79,7){$S_{n}$} \put(115,0){$S_{T}$} \put(65.1,17){\rotatebox{0}{$\bullet$}} \put(62,12){\rotatebox{5}{$P(x,y)$}} \end{picture} \caption{A snapshot of Actual Path of Sensor under Random Waypoint Model} \label{fig:rwp} \end{figure} Let $t_0$ and $t_T$ be the time instants of two consecutive calls of MAINT. The positions of the sensor are known without any error at the time instants $t_0$ and $t_T$. Estimated positions of the sensor between $t_0$ and $t_T$ may be erroneous. Without loss of generality, we assume that $t_0=0$, $t_T=T$ and $(x_0,y_0)=(0,0)$, because the error analysis remains the same between any two consecutive calls of MAINT.
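For illustration, a sensor trajectory under this model can be simulated as below (a sketch of our own, assuming exponential leg durations and Gaussian velocity components, the distributions adopted in the analysis that follows):

```python
import random

def rwp_position(t, lam=1.0, sigma=1.0, seed=0):
    """Position at time t of a sensor that starts at the origin and, at
    each waypoint, independently draws a leg duration ~ Exp(lam) and
    velocity components ~ Normal(0, sigma), moving in a straight line
    within each leg (RWP-style motion)."""
    rng = random.Random(seed)
    x = y = elapsed = 0.0
    while True:
        leg = rng.expovariate(lam)
        u, v = rng.gauss(0.0, sigma), rng.gauss(0.0, sigma)
        if elapsed + leg >= t:          # time t falls inside this leg
            dt = t - elapsed
            return (x + u * dt, y + v * dt)
        x, y = x + u * leg, y + v * leg  # advance to the next waypoint
        elapsed += leg

# The trajectory is piecewise linear and starts at the origin.
print(rwp_position(0.0))  # (0.0, 0.0)
```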
We assume that at any waypoint $S_{i}$, the sensor draws the time interval $t_{i+1}$ as well as the velocity vector $(u_{i+1},v_{i+1})$ randomly and independently. The sensor reaches the next waypoint $S_{i+1}$ after the time $t_{i+1}$ with the velocity vector $(u_{i+1},v_{i+1})$. For the sake of simplicity, we assume that the time intervals follow the exponential distribution with mean $\frac{1}{\lambda}$ and that the velocity components $u_{i+1}$ and $v_{i+1}$ are independent and identically distributed as $Normal(0,\sigma)$ at any waypoint, for $i=0,1,\cdots,n$. Let $P(x,y)$ be the position of the sensor at a random time $t$ in $(0,T)$ when the sensor follows the RWP mobility model. \subsection{Actual Motion Analysis: Related Parameters and Expressions} From the theory of probability and stochastic processes~\cite{M94,B78}, the occurrences of waypoints under the above mobility model form a Poisson process with parameter $\lambda$. Consider a random variable $N(t)$ that denotes the number of waypoints in the interval $(0,t)$. $N(t)$ follows the Poisson distribution with mean $\lambda t$. The probability mass function (\textit{pmf}) is \begin{equation} \Pr(N(t)=k)=\frac{(\lambda t)^k }{k!}e^{-\lambda t}, \mbox{~~~~for~~} k=0,1,2,\cdots,\infty. \label{eq:PrN} \end{equation} The sum $T_i = t_1+t_2+ \cdots + t_i$ (with $T_0\equiv 0$) represents the time of occurrence of the $i$th waypoint, $S_i$. Since the $t_i$s are independent and identically distributed following the exponential distribution with parameter $\lambda$ (mean $\frac{1}{\lambda}$), the random variable $T_i$ follows the distribution $\Gamma(\lambda,i)$. The pdf is $$ f_{T_i}(\tau)=\frac{\lambda^i \tau^{i-1}e^{-\lambda \tau}}{\Gamma(i)}~~~~where~~0<\tau<\infty. $$ Let $(X_i,Y_i)$ represent the position of the $i$th waypoint, $1\leq i\leq n$.
$X_i$ and $Y_i$ are independent and identically distributed where \[ \begin{array}{lclc} X_i &=& X_{i-1}+u_it_i= u_1t_1+u_2t_2+\cdots+u_it_i & and \\ Y_i &=& Y_{i-1}+v_it_i= v_1t_1+v_2t_2+\cdots+v_it_i \end{array} \] with $(X_0,Y_0)=(0,0)$. The velocity components $u_i$ and $v_i$ are independent and both follow the distribution $Normal(0,\sigma)$.\\[-2mm] Suppose that $n$ waypoints have occurred by time $\tau$, and let $T_k\mid N(\tau)=n$ denote the waiting time of the $k$th waypoint $(1 \leq k \leq n)$ under this condition. \begin{result} \label{res:pdf:Tt_k|n} The joint pdf of ~$(T_{k-1},t_k)\mid N(\tau)=n$~ for ~$2\leq k \leq n$~ is \begin{eqnarray*} \lefteqn{f_{T_{k-1}t_k}\left((x,y) \mid N(\tau)=n\right)}\\ &=& \left\{ \begin{array}{ll} \frac{n!}{(k-2)!\;(n-k)!}\cdot\frac{x^{k-2}}{\tau^n}\left(\tau-x-y\right)^{n-k}, & 0 < x < \tau, ~ 0 < x +y < \tau\\[2mm] 0, & otherwise. \end{array}\right. \end{eqnarray*} \end{result} \proof{ For ~$2\leq k \leq n$, $0 < x < \tau, 0 < x +y < \tau$, the probability element \[\begin{array}{rcl} \lefteqn{\Pr((x<T_{k-1} \leq x+dx, ~y<t_k \leq y+dy)\mid (N(\tau)=n))} \\[2mm] & = & \frac{\Pr(x<T_{k-1} \leq x+dx,~ y<t_k \leq y+dy~ and~ N(\tau)=n)}{\Pr(N(\tau)=n)}\\[2mm] & = & \Pr(x<T_{k-1} \leq x+dx) \cdot \frac{\Pr(y<t_k \leq y+dy\mid T_{k-1}=x)\cdot \Pr((N(\tau)=n)\mid (T_k=x+y))}{\Pr(N(\tau)=n)} \\[2mm] & = & \Pr(x<T_{k-1}\leq x+dx)\cdot \Pr(y<t_k\leq y+dy) \cdot \frac{\Pr(N(\tau-x-y)=n-k)} {\Pr(N(\tau)=n)}\\[2mm] & = & \frac{\frac{\lambda^{k-1}\,x^{k-2}}{\Gamma(k-1)}\,e^{-\lambda x}\,dx\cdot \lambda\,e^{-\lambda y}\,dy\cdot \frac{\lambda^{n-k}\,(\tau-x-y)^{n-k}}{(n-k)!} \,e^{-\lambda (\tau - x - y)}}{\frac{\lambda^n\,\tau^n}{n!}\,e^{-\lambda \tau}}\\[3mm] & = & \frac{n!}{(k-2)!\;(n-k)!}\,\cdot\frac{x^{k-2}}{\tau^n}~(\tau-x-y)^{n-k}\,dx\,dy \end{array}\] Hence the pdf follows.
} \begin{result} \label{res:pdf:T_k|n} The pdf of ~$T_k\mid N(\tau)=n$~ for $1\leq k \leq n$ is given by \[f_{T_k}(x\mid N(\tau)=n)= \left\{ \begin{array}{lcl} \frac{n!}{(k-1)!\;(n-k)!}\cdot\frac{x^{k-1}}{\tau^k}\left(1-\frac{x}{\tau}\right)^{n-k}, & & 0 < x < \tau\\[2mm] 0, & & otherwise. \end{array}\right. \] \end{result} \proof{ From Result~\ref{res:pdf:Tt_k|n}, for $2\leq k \leq n$, the pdf of ($T_{k-1}\mid N(\tau)=n$) is \[\begin{array}{rcl} \lefteqn{f_{T_{k-1}}\left(x\mid N(\tau)=n\right)}\\[1mm] & = & \left\{ \begin{array}{lcl} \frac{n!}{(k-2)!\;(n-k)!}\cdot \frac{x^{k-2}}{\tau^n} \cdot \int_0^{\tau - x}\left(\tau-x-y\right)^{n-k}\, dy, & & 0 < x < \tau \\[1.5mm] 0, & & otherwise. \end{array}\right.\\[4mm] & = & \left\{ \begin{array}{lcl} \frac{n!}{(k-2)!\;(n-k+1)!}\cdot \frac{x^{k-2}}{\tau^n} \cdot \left(\tau-x\right)^{n-k+1}, & & 0 < x < \tau\\[1.5mm] 0, & & otherwise. \end{array}\right. \end{array}\] The pdf of ~$T_k\mid N(\tau)=n$~ for ~$1\leq k \leq n-1$ is \[\begin{array}{rcl} f_{T_k}\left(x\mid N(\tau)=n\right) & = & \left\{ \begin{array}{lcl} \frac{n!}{(k-1)!\;(n-k)!}\cdot \frac{x^{k-1}}{\tau^n} \cdot \left(\tau-x\right)^{n-k}, & & 0 < x < \tau\\[1.5mm] 0, & & otherwise. \end{array}\right. \end{array}\] For ~$0 < x < \tau$, the probability element, \[\begin{array}{rcl} \lefteqn{\Pr(x<T_n \leq x+dx\mid N(\tau)=n)} & \\[3mm] & = & \frac{\Pr(x<T_n \leq x+dx~ and~ N(\tau)=n)}{\Pr(N(\tau)=n)} \\[3mm] & = & \frac{\Pr(x<T_n \leq x+dx)\cdot \Pr(N(\tau)=n\mid T_n=x)}{\Pr(N(\tau)=n)} \\[3mm] & = & \frac{\Pr(x<T_n \leq x+dx)\cdot \Pr(N(\tau-x)=0)} {\Pr(N(\tau)=n)}\\[3mm] & = & \frac{\lambda^n\,x^{n-1}}{\Gamma(n)}\,e^{-\lambda x}\,dx\cdot e^{-\lambda (\tau-x)}/ \left(\frac{\lambda^n\,\tau^n}{n!}\,e^{-\lambda \tau}\right)\\[3mm] & = & n\cdot\frac{x^{n-1}}{\tau^n}\,dx \end{array}\] Hence the pdf of ~$T_k\mid N(\tau)=n$~ follows, for $1\leq k \leq n$.
} \begin{result} \label{res:Exp:T_k|n} $E(T_k\mid N(\tau)=n) = \frac{k\,\tau}{n+1}$ and $E(T_k^2\mid N(\tau)=n) = \frac{k(k+1)\,\tau^2}{(n+1)(n+2)}$, for~ $1\leq k \leq n$. \end{result} \proof{ Using the pdf as in Result~\ref{res:pdf:T_k|n}, ~for ~$1\leq k \leq n$, we may write the expected values $E(T_k^m\mid N(\tau)=n)$ as follows: \[\begin{array}{rcl} {E(T_k^m\mid N(\tau)=n)} & = & \int_0^{\tau}x^m\,f_{T_k}\left(x\mid N(\tau)=n\right)\, dx\\[2mm] & = & \frac{n!\;\tau^m}{(k-1)!\,(n-k)!} \int_0^{\tau}\left(\frac{x}{\tau}\right)^{m+k-1} \,\left(1-\frac{x}{\tau}\right)^{n-k}\, \frac{dx}{\tau}\\[3mm] & = & \frac{n!\;\tau^m}{(k-1)!\,(n-k)!} \int_0^{1}x^{m+k-1}\,\left(1-x\right)^{n-k}\,dx\\[3mm] & = & \frac{n!\;\tau^m}{(k-1)!\,(n-k)!}\cdot\frac{\Gamma(m+k)\;\Gamma(n-k+1)}{\Gamma(n+m+1)}\\[3mm] & = & \frac{n!\;(m+k-1)!}{(k-1)!\;(n+m)!} ~ \tau^m \end{array}\] Substituting $m=1$ and $m=2$, we obtain the expectations ~ $E(T_k\mid N(\tau)=n)$ ~ and ~ $E(T_k^2\mid N(\tau)=n)$, ~ for~ $1\leq k \leq n$. Hence the result follows. } Let $t_k\mid N(\tau)=n$ represent the time interval between the $(k-1)$th and the $k$th waypoints, given that exactly $n$ waypoints have occurred in the interval $(0,\tau)$. \begin{result} \label{res:pdf:t_k|n} The density function of ~$t_k\mid N(\tau)=n$,~ for ~$1\leq k \leq n$~ is \[ f_{t_k}(y \mid N(\tau)=n) = \left\{\begin{array}{ll} \frac{n}{\tau}\left(1-\frac{y}{\tau}\right)^{n-1}, & 0\leq y<\tau\\[2mm] 0, & otherwise. \end{array}\right.\] \end{result} \proof{ The random variable $T_1 = t_1$. The distribution of ~($t_1\mid N(\tau)=n$) is the same as the distribution of ($T_1\mid N(\tau)=n$).
From Result~\ref{res:pdf:Tt_k|n}, the distribution of ~ $t_k\mid N(\tau)=n$ ~for $2\leq k \leq n$ is \[\begin{array}{rcl} \lefteqn{f_{t_k}\left(y\mid N(\tau)=n\right)}\\[1mm] & = & \left\{\begin{array}{ll} \frac{n!}{(k-2)!(n-k)!}\cdot \frac{(\tau-y)^{n-1}}{\tau^n} \int_0^{\tau-y}(\frac{x}{\tau-y})^{k-2} (1-\frac{x}{\tau-y})^{n-k}\,\frac{dx}{\tau-y}, & 0\leq y < \tau\\[2mm] 0, & otherwise. \end{array}\right.\\[4mm] & = & \left\{ \begin{array}{lcl} \frac{n!}{(k-2)!(n-k)!}\cdot \frac{(\tau-y)^{n-1}}{\tau^n} \int_0^{1}t^{k-2}\,(1-t)^{n-k}\, dt, & & 0\leq y < \tau\\[2mm] 0, & & otherwise. \end{array}\right.\\[4mm] & = & \left\{ \begin{array}{lcl} \frac{n!}{(k-2)!(n-k)!}\cdot \frac{(\tau-y)^{n-1}}{\tau^n} \cdot \frac{\Gamma(k-1)\,\Gamma(n-k+1)}{\Gamma(n)}, & & 0\leq y < \tau\\[2mm] 0, & & otherwise. \end{array}\right. \end{array}\] Thus the pdf of ~$t_k\mid N(\tau)=n$ ~for $1\leq k \leq n$ is \[\begin{array}{rcl} f_{t_k}\left(y\mid N(\tau)=n\right) & = & \left\{ \begin{array}{lcl} \frac{n}{\tau}\,\left(1-\frac{y}{\tau}\right)^{n-1}, & & 0\leq y < \tau\\[3mm] 0, & & otherwise. \end{array}\right.\\[-3mm] \end{array}\] } \begin{result} \label{res:Exp:t_k|n} $E(t_k\mid N(\tau)=n)$ = $\frac{\tau}{n+1}$ and $E(t_k^2\mid N(\tau)=n)$ = $\frac{2\tau^2}{(n+1)(n+2)}$ \end{result} \proof{ Using the pdf as in Result~\ref{res:pdf:t_k|n}, ~ for ~ $1\leq k \leq n$, ~ we have \[\begin{array}{rcl} E(t_k^m\mid N(\tau)=n) & = & \int_0^{\tau}y^m\,f_{t_k}\left(y\mid N(\tau)=n\right)\,dy\\[3mm] & = & n\,\tau^m \int_0^{\tau}\left(\frac{y}{\tau}\right)^m \,\left(1-\frac{y}{\tau}\right)^{n-1}\, \frac{dy}{\tau}\\[3mm] & = & n\,\tau^m \int_0^1 y^m\,\left(1-y\right)^{n-1}\,dy\\[3mm] & = & n\,\tau^m\cdot\frac{\Gamma(m+1)\;\Gamma(n)}{\Gamma(n+m+1)}\\[3mm] & = & \frac{n!\;m!}{(n+m)!} ~ \tau^m \end{array}\] Putting $m =1$ and $m=2$ in the above relation, we obtain $E(t_k\mid N(\tau)=n)$ and $E(t_k^2\mid N(\tau)=n)$, for $1\leq k \leq n$.
} \begin{result} \label{res:Exp:X_k|n} $E(X_k\mid N(\tau)=n) = 0$ and $E(X_k^2\mid N(\tau)=n)$ = $\frac{2k\,\sigma^2\,\tau^2}{(n+1)\,(n+2)}$, ~for~ $1 \leq k \leq n$. \end{result} \proof{ $X_k = \sum_{i=1}^k u_i\,t_i $, for $1 \leq k \leq n$,~ and the $t_k$\,s and $u_k$\,s are independent. \[\begin{array}{rcl} E(X_k\mid N(\tau)=n) & = & E\left(\left(\sum_{i=1}^k u_i\,t_i\right)\mid N(\tau)=n\right)\\[2mm] & = & \sum_{i=1}^kE\left(u_i\right)\, E\left(t_i\mid N(\tau)=n\right),\\ & & \hspace*{.5cm} \mbox{since the $u_i$\,s are independent of the $t_i$\,s and $N(\tau)=n$}\\ & = & 0,\hspace*{2.6cm} \mbox{since $E(u_i)=0$.} \end{array}\] Similarly, ~ $E(X_k^2\mid N(\tau)=n)$ \[\begin{array}{rl} & = E\left(\left(\sum_{i=1}^k u_i\,t_i\right)^2\mid N(\tau)=n\right)\\[3mm] & = E\left(\left(\sum_{i=1}^k u_i^2\,t_i^2 + 2\, \sum_{i<j=1}^k u_i\,u_j\,t_i\,t_j\right) \mid N(\tau)=n\right)\\[1mm] & = \sum\limits_{i=1}^k E(u_i^2)\,E(t_i^2\mid N(\tau)=n) + 2\sum\limits_{i<j=1}^k E(u_i)\,E(u_j)\,E(t_i\,t_j \mid N(\tau)=n)\\[.1mm] & = \sigma^2\sum\limits_{i=1}^k E(t_i^2\mid N(\tau)=n), ~~~~~~~~~~ \mbox{since $E(u_i)=0$ and $E(u_i^2)=\sigma^2$}\\[3mm] & = \frac{2k\,\sigma^2\,\tau^2}{(n+1)\,(n+2)}, ~~~~~~~~~~~~~~~~~~~~~ \mbox{using Result~\ref{res:Exp:t_k|n}.}\\[-3mm] \end{array}\] } \subsubsection{Actual Position of Sensor} We analyze the motion of the sensor between two consecutive localization calls, since the pattern of the motion is the same between any two consecutive localization points. Let $P(x,y)$ be the position of the sensor at a random time $t$, $0< t < T$. Consider the random variable $(X,Y)$ that represents the position of $P$. Suppose $i$ waypoints occur in the interval $(0,t)$, i.e., $0\leq T_i\leq t < T_{i+1}$, so that $N(t)=i$. Then we have \begin{center} $X = X_i + (t-T_i)u_{i+1}$ ~~~~~~~~ $Y = Y_i + (t-T_i)v_{i+1}$ \end{center} for ~ $i=0,1,2,\ldots,\infty$ where $T_0 = 0$ and $(X_0,Y_0)=(0,0)$.
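The conditional expectations in Results~\ref{res:Exp:T_k|n} and \ref{res:Exp:t_k|n} can be checked by Monte Carlo, using the standard fact (consistent with the Beta-type density in Result~\ref{res:pdf:T_k|n}) that, conditioned on $N(\tau)=n$, the Poisson arrival times are distributed as the order statistics of $n$ i.i.d. $Uniform(0,\tau)$ variables. A small sketch, our own code:

```python
import random

def conditional_arrival_mean(n, k, tau, trials=100000, seed=1):
    """Monte Carlo estimate of E(T_k | N(tau)=n). Conditioned on n events,
    Poisson arrival times are distributed as the order statistics of n
    i.i.d. Uniform(0, tau) variables, so we sort n uniforms per trial."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        times = sorted(rng.uniform(0, tau) for _ in range(n))
        total += times[k - 1]               # the k-th order statistic
    return total / trials
```

With $n=5$, $\tau=10$ this should give values close to $E(T_k\mid N(\tau)=n)=\frac{k\tau}{n+1}$, e.g., about $3.33$ for $k=2$.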
\begin{theorem} \label{cor:Exp:X|i} $E(X\mid N(t)=i) = 0$ and $E(X^2\mid N(t)=i)$ = $\frac{2}{i+2}\,\sigma^2\,t^2$. \end{theorem} \proof{ Consider ~$X = X_i + (t-T_i)\,u_{i+1}$, given ~$N(t) = i$, for a fixed ~$t\in (0,T)$. \[\begin{array}{rcl} E(X\mid N(t)=i) & = & E(X_i\mid N(t)=i) + E((t-T_i)\mid N(t)=i)\,E(u_{i+1})\\ & = & 0, ~~~~~~~~~~~~~~ \mbox{using ~$E(u_{i+1})=0$~ and ~Result~\ref{res:Exp:X_k|n}.} \end{array}\] Similarly, ~$E(X^2\mid N(t)=i)$ \[\begin{array}{rcl} & = & E(X_i^2\mid N(t)=i) + E(u_{i+1}^2)\,E((t-T_i)^2\mid N(t)=i)\\[.1mm] & & \hspace*{5cm} + 2\,E(u_{i+1})\,E(X_i\,(t-T_i)\mid N(t)=i)\\ & = & E(X_i^2\mid N(t)=i) + \sigma^2\,(t^2 - 2t\;E(T_i\mid N(t)=i) + E(T_i^2\mid N(t)=i))\\[.5mm] & = & \frac{2i\,\sigma^2\,t^2}{(i+1)\,(i+2)} + \sigma^2\,\left(t^2 - 2t\;\frac{i\,t}{i+1} + \frac{i\,(i+1)\,t^2}{(i+1)\,(i+2)}\right), ~~~~ \mbox{using Results~\ref{res:Exp:T_k|n} and \ref{res:Exp:X_k|n}.}\\ & = & \frac{2}{i+2}\;\sigma^2\,t^2. \\[-3mm] \end{array}\] } \begin{theorem} \label{theo:Exp:X} The expectations of $X$ and $X^2$ are given by $E(X)=0$ and $$E(X^2)=2\sigma^2\left(\frac{t}{\lambda} - \frac{1}{\lambda^2}+\frac{1}{\lambda^2} e^{-\lambda t}\right).$$ \end{theorem} \proof{ The random variable $X$ represents the $x$-coordinate of the sensor at time $t$.
For a particular time instant $t$, the expected value of $X^2$ is given by \[\begin{array}{rcl} E(X^2) & = & E[E(X^2\mid N(t)=i)]\\[2mm] & = & E\left(\frac{2\sigma^2t^2}{i+2}\right), ~~\mbox{using Theorem~\ref{cor:Exp:X|i}}\\[3mm] & = & 2\sigma^2t^2\sum\limits_{i=0}^{\infty}\frac{1}{i+2}\Pr(N(t)=i)\\[4mm] & = & 2\sigma^2t^2\sum\limits_{i=0}^{\infty}\frac{1}{i+2} \,\frac{(\lambda t)^i}{i\,!}e^{-\lambda t}\\[5mm] & = & 2\sigma^2t^2\left(\frac{1}{\lambda t} - \frac{1}{\lambda^2 t^2} +\frac{1}{\lambda^2 t^2} e^{-\lambda t}\right)\\[4mm] & = & 2\sigma^2\left(\frac{t}{\lambda} - \frac{1}{\lambda^2} +\frac{1}{\lambda^2} e^{-\lambda t}\right)\\[-2mm] \end{array}\] } \subsection{Estimation by MAINT and Error Analysis} Assume two consecutive calls of MAINT occur at the times $0$ and $T$. In Figure\,\ref{ErrEnrg}, $S_0,S_1,\ldots,S_n,S_T$ is the actual path of the sensor in between the times $0$ and $T$. \begin{figure}[!h] \setlength{\unitlength}{.7mm} \begin{picture}(418,57)(-30,0) \put(11.5,1.2){\includegraphics[width=.6\textwidth,keepaspectratio=true]{bendmulti} } \put(110,51.5){\tiny \it MADRD} \put(110,48.5){\tiny \it MAINT} \put(110,45.5){\tiny \it ACTUAL } \put(5,0){$S_0$} \put(36,29){$S_1$} \put(47,17){$S_2$} \put(51,29){$S_3$} \put(50,9){$S_{i-1}$} \put(81,28){$S_{i}$} \put(77,8.5){$S_{n}$} \put(116,0){$S_{T}$} \put(81,53){$S_{T}'$} \put(65.2,41){\rotatebox{5}{$\bullet$}} \put(65,37){\rotatebox{0}{$P''$}} \put(61,15){\rotatebox{0}{$\bullet$}} \put(61,11){\rotatebox{5}{$P$}} \put(70,0){$\bullet$} \put(68,3){\rotatebox{5}{$P'$}} \end{picture} \caption{Showing position estimates at an intermediate point by MADRD and MAINT.} \label{ErrEnrg} \end{figure} Let $P(x,y)$ be the actual position of the sensor at a random time $t\in (0,T)$ when it follows the said RWP mobility model. Let $P'\,(x_{est},y_{est})$ be the estimated position of the sensor at $t$ according to MAINT, and let $(X_{est},Y_{est})$ denote the random variable corresponding to the position $P'$ estimated by MAINT at time $t$.
Then we have $$X_{est}=\frac{X_T}{T}t,~~~~Y_{est}=\frac{Y_T}{T}t$$ where $(X_T,Y_T)$ is the random position of the sensor at time $T$. \subsubsection{Error Analysis} In this analysis, we consider the RWP mobility model and assume that the waypoint occurrences follow a Poisson process. Since localizations occur at times $0$ and $T$, we consider the motion in the time interval $[0,T]$. For error calculation we take a location estimate at a random time $t \in [0,T]$. Due to the memoryless property of the Poisson process, we may decompose the complete scenario into two independent Poisson processes with the same parameter, one over the interval $[0,t]$ and another over $[t,T]$. Taken together, these two processes represent the same process as in $[0,T]$. Let $N'(\tau)=m$ denote the event that $m$ waypoints occur in $[t,T]$. \begin{theorem} \label{theo:N'} $\Pr(N'(\tau)=m) = \Pr(N(\tau-t)=m)$. \end{theorem} \proof{ It follows from the memoryless property of the Poisson process~\cite{B78}. } \begin{theorem} \label{theo:NN'} $\Pr(N(t)=i,N'(T)=j) = \frac{\lambda ^{i+j}e^{-\lambda T}}{i!\,j!}t^i (T-t)^{j}$. \end{theorem} \proof{ By the independent increments property of the Poisson process, the events $N(t)=i$ and $N'(T)=j$ are independent. Therefore, $$\Pr(N(t)=i,N'(T)=j) = \Pr(N(t)=i)\Pr(N'(T)=j).$$ From equation~(\ref{eq:PrN}) and Theorem~\ref{theo:N'} we have \begin{eqnarray*} \Pr(N(t)=i,N'(T)=j) &=& \Pr(N(t)=i)\Pr(N(T-t)=j) \\ &=& \frac{(\lambda t)^i}{i!}\,e^{-\lambda t}\cdot\frac{(\lambda(T-t))^{j}}{j!}\, e^{-\lambda(T-t)} \\ &=& \frac{\lambda ^{i+j}\,e^{-\lambda T}}{i!\,j!}t^i (T-t)^{j} \end{eqnarray*} } \paragraph{Expected Error in MAINT} Consider a random time $t$ in the interval $(0,T)$. Let $S_1$, $S_2$, $\cdots$, $S_i$ be the $i$ waypoints occurring in $[0,t]$, and let $S_{i+1}$, $S_{i+2}$, $\cdots$, $S_{i+j}$ be the $j$ waypoints occurring in the time interval $[t,T]$.
Under this setup, the actual position of the sensor at time $t$ is given by the random variable $(X,Y)$ where $$X=X_i+(t-T_i)u_{i+1}$$ $$ Y=Y_i+(t-T_i)v_{i+1}.$$ Since the processes in $[0,t]$ and in $[t,T]$ are independent of each other, we may treat the waypoints $S_{i+1}$, $S_{i+2}$, $\cdots$, $S_{i+j}$ as if the system restarts at time $t$ from the position $(X,Y)$. Due to the memoryless property of the Poisson process, we may obtain the times of occurrence of the waypoints $S_{i+1}$, $S_{i+2}$, $\cdots$, $S_{i+j}$ from the same Poisson process over the time interval $[0,T-t]$, shifted by the time $t$. Let $T'_k$ and $t'_k$ denote, respectively, the time of occurrence of the waypoint $S_{i+k}$ and the time interval between consecutive waypoints in the motion of the sensor over $[0,T-t]$. If $(X'_k,Y'_k)$ denotes the random position of the waypoint $S_{i+k}$ taking $(X,Y)$ as the origin, we have $$X'_k = X'_{k-1} + t'_ku_{i+k} = \sum_{m=1}^k t'_mu_{i+m},$$ $$Y'_k = Y'_{k-1} + t'_kv_{i+k} = \sum_{m=1}^k t'_mv_{i+m}$$ for $k= 0,1,2, \ldots, \infty$ where $T'_0=0$ and $(X'_0,Y'_0)=(0,0)$. The random velocity vector at any waypoint is independent of the time of occurrence of the waypoint, so $(X_i,Y_i)$ and $(X'_k,Y'_k)$ are independent for $i\geq 1, k\geq 1$. Writing the coordinates of the waypoint $S_{i+j}$ in the whole process over $[0,T]$ as $(X_{i+j},Y_{i+j})$, we have $X_{i+j}=X+X'_j$, $Y_{i+j}=Y+Y'_j$ and $T_{i+j}=t+T'_j$.
If $(X',Y')$ denotes the position of the sensor at time $T$ due to the process over $[0,T-t]$ under the condition that $N'(T)=j$, i.e., $N(T-t) = j$, then \begin{center} $ X' = X'_j+(T-t-T'_j)u_{i+j+1}, ~~~~~~~~ Y' = Y'_j+(T-t-T'_j)v_{i+j+1}.$ \end{center} The position of the sensor at time $T$, $(X_T,Y_T)$, due to the process over $[0,T]$ under the conditions that $N(t)=i$ and $N'(T)=j$, may be obtained as \begin{center} $ X_T = X+X'_j+(T-t-T'_j)u_{i+j+1} = X + X',$ \\[2mm] $Y_T = Y+Y'_j+(T-t-T'_j)v_{i+j+1} = Y + Y'.$ \end{center} Therefore, the estimated positions at time $t$ may be written as: \begin{center} $ X_{est} = \frac{X+X'}{T}\,t, ~~~~~~~~~~ Y_{est} = \frac{Y+Y'}{T}\,t$ \end{center} Let $error_t$ denote the expected squared error in the position estimation by MAINT at a random time $t$ in $[0,T]$. Thus, $error_t$ can be expressed as: \begin{equation} \begin{array}{rcl} error_t & = & E\left[(X-X_{est})^2+(Y-Y_{est})^2\right] \\[1mm] & = & 2E\left[\left(X-X_{est}\right)^2\right], ~~ \mbox{since $X-X_{est}$ and $Y-Y_{est}$ are iid} \\[2mm] & = & 2E\left[\left(X-\frac{X+X'}{T}\,t\right)^2\right] \\[3mm] & = & 2E\left[\left\{\left(1-\frac{t}{T}\right)X-\frac{t}{T}X'\right\}^2\right] \\[2mm] & = & 2\left[\left(1-\frac{t}{T}\right)^2 E(X^2)+\frac{t^2}{T^2}E\left(X'^2\right) - 2\frac{t}{T}\left(1-\frac{t}{T}\right)E(XX')\right] \end{array} \label{eq:errot_t} \end{equation} \begin{theorem} \label{theo:Exp:X'} The expectations of $X'$ and $X'^2$ are given by $E(X')=0$ and \begin{center} $E(X'^2)=2\sigma^2\left\{\frac{T-t}{\lambda} - \frac{1}{\lambda^2} +\frac{1}{\lambda^2} \,e^{-\lambda (T-t)}\right\}. $ \end{center} \end{theorem} \proof{ We have seen that $X' = X'_j+(T-t-T'_j)u_{i+j+1}$ is the $x$-coordinate of the sensor at time $T-t$ in the process over $[0,T-t]$. So $X'$ has the same properties as $X$, with the condition $N'(T)=j$, i.e., $N(T-t)=j$, in place of $N(t)=i$. The event $N'(T)=j$ is independent of the scenario prior to the time $t$, i.e., independent of $N(t)$.
Thus, the result follows from Theorem~\ref{theo:Exp:X} by replacing $t$ with $T-t$. } \begin{theorem}\label{theo:Exp:XX'|ij} Given $N(t) = i$ and $N'(T)=j$ for a particular time $t\in [0,T]$, the expectation of $XX'$ is given as \begin{center} $E\left[XX'\mid N(t)=i,N'(T)=j\right]=\frac{\sigma^2t(T-t)}{(i+1)(j+1)}, ~~~~~~~~ for ~~ i,j\geq 0.$ \end{center} \end{theorem} \proof{ Under the given condition $N(t) = i$ and $N'(T)=j$ for a particular time $t\in [0,T]$, we know $X=X_i+(t-T_i)u_{i+1}$ and $X' = X'_j+(T-t-T'_j)u_{i+j+1}$ as stated earlier. Since $X_i$ is independent of the remaining factors and $E(X_i\mid N(t)=i)=0$, all terms involving $X_i$ vanish in the expansion. Therefore, we have \[\begin{array}{rcl} \lefteqn{E\left[XX'\mid N(t)=i,N'(T)=j\right]} \\[1mm] & = & E\left[\left\{X_i + (t-T_i) u_{i+1}\right\} \left\{X'_j + (T-t-T'_j) u_{i+j+1}\right\}\right] \\[1mm] & = & E(t-T_i)\left\{E\left(u_{i+1}X'_j\right) + E\left(T-t-T'_j\right) E\left(u_{i+1}u_{i+j+1}\right)\right\} \\[1mm] & = & \left\{\begin{array}{lrl} E(t-T_i)E\left(u_{i+1}X'_j\right), & ~~~~~~ for & i\geq 0, ~ j \geq 1 \\[1mm] (T-t)E(t-T_i)E\left(u_{i+1}^2\right), & for & i\geq 0, ~ j = 0 \end{array}\right.\\[4mm] & = & \left\{\begin{array}{lrl} E(t-T_i)E\left(t'_1\right)E\left(u_{i+1}^2\right), & ~~~~~~ for & i\geq 0, ~ j\geq 1 \\[1mm] (T-t)E(t-T_i)E\left(u_{i+1}^2\right), & for & i\geq 0, ~ j = 0 \end{array}\right.\\[4mm] & = & \left\{\begin{array}{lrl} \sigma^2 \frac{T-t}{j+1}(t-\frac{it}{i+1}), & ~~~~~~ for & i\geq 0, ~ j\geq 1 \\[1mm] \sigma^2 (T-t)(t-\frac{it}{i+1}), & for & i\geq 0, ~ j= 0 \end{array}\right.\\[5mm] & = & \frac{\sigma^2\, t(T-t)}{(i+1)(j+1)}, ~~~~~~ for ~~ i\geq 0, ~ j\geq 0\\[-5mm] \end{array}\] } \begin{theorem} \label{theo:Exp:XX'} For a particular $t\in [0,T]$, the expectation of $XX'$ is given as \begin{center} $E(XX')=\frac{\sigma^2}{\lambda^2}\left\{1-e^{-\lambda t} -e^{-\lambda (T-t)} +e^{-\lambda T}\right\}.$ \end{center} \end{theorem} \proof{ The random variables $X$ and $X'$ represent the positions of the sensor in the decomposed motions of the sensor into intervals $[0,t]$ and
$[t,T]$ as discussed earlier. The expectation of $XX'$ is given as \[\begin{array}{rcl} E(XX') & = & E\left[E\left[XX'\mid N(t)=i,N'(T)=j\right]\right] \\[2mm] & = & E\left[\frac{\sigma^2t(T-t)}{(i+1)(j+1)}\right], ~~~~ \mbox{using Theorem~\ref{theo:Exp:XX'|ij}}\\[2mm] & = & \sum\limits_{i=0}^{\infty}\sum\limits_{j=0}^{\infty}\left[\frac{\sigma^2t(T-t)}{(i+1)(j+1)}\, \cdot \Pr(N(t)=i,N'(T)=j)\right]\\[3mm] & = & \sum\limits_{i=0}^{\infty}\sum\limits_{j=0}^{\infty}\left[\frac{\sigma^2t(T-t)}{(i+1)(j+1)}\cdot \frac{\lambda^{i+j}\;t^i(T-t)^j}{i!\;j!}\,e^{-\lambda T} \right], \mbox{by Theorem\,\ref{theo:NN'}}\\[3mm] & = & \frac{\sigma^2}{\lambda^2}\,e^{-\lambda T} \sum\limits_{i=0}^{\infty}\frac{\lambda^{i+1}t^{i+1}}{(i+1)!} \sum\limits_{j=0}^{\infty} \frac{\lambda^{j+1}(T-t)^{j+1}}{(j+1)!}\\[2mm] & = & \frac{\sigma^2}{\lambda^2}\,e^{-\lambda T} \sum\limits_{i=1}^{\infty}\frac{\lambda^{i}t^{i}}{i!} \sum\limits_{j=1}^{\infty} \frac{\lambda^{j}(T-t)^{j}}{j!}\\[3mm] & = & \frac{\sigma^2}{\lambda^2}\,e^{-\lambda T}\left(e^{\lambda t}-1\right) \left(e^{\lambda (T-t)}-1\right)\\[3mm] & = & \frac{\sigma^2}{\lambda^2}\left\{1-e^{-\lambda t} -e^{-\lambda (T-t)} +e^{-\lambda T}\right\}\\[-4mm] \end{array}\] } Using the Theorems~\ref{theo:Exp:X},\ref{theo:Exp:X'},\ref{theo:Exp:XX'}, the equation~(\ref{eq:errot_t}) reduces to the following: \[\begin{array}{rcl} error_t & = & 2\left[\left(1-\frac{t}{T}\right)^2 E(X^2)+\frac{t^2}{T^2}E\left(X'^2\right) - 2\frac{t}{T}\left(1-\frac{t}{T}\right)E(XX')\right] \\[3mm] & = & \frac{4\sigma^2}{\lambda^2 T^2}\left[ t^2 \left\{\lambda(T-t) - 1 + e^{-\lambda(T-t)}\right\} + (T-t)^2\left(\lambda t - 1 + e^{-\lambda t}\right)\right. \\[2mm] & & \left. 
- \left(1+e^{-\lambda T}\right) t(T-t) + t(T-t)\,e^{-\lambda t} + t(T-t)\,e^{-\lambda (T - t)}\right] \end{array}\] Therefore, the average of the squared error, denoted by $error(avg)$, in the location estimation by MAINT is given as follows: \[\begin{array}{rcl} error(avg) & = & \frac{1}{T}\int\limits_0^T error_t~dt \\[2mm] & = & \frac{4\sigma^2}{\lambda^2 T^3}\left[ \int_0^T t^2 \left\{\lambda(T-t) - 1 + e^{-\lambda(T-t)}\right\}\,dt\right. \\[2mm] & & + \int_0^T(T-t)^2\left(\lambda t-1+e^{-\lambda t}\right)\,dt - \left(1+e^{-\lambda T}\right)\int_0^T t(T-t)\,dt\\[2mm] & & + \left. \int_0^T t(T-t)\,e^{-\lambda t}\,dt + \int_0^T t(T-t)\,e^{-\lambda (T - t)}\,dt \right] \\[2mm] & = & \frac{4\sigma^2}{\lambda^2 T^3}\left[ 2 \int_0^T t^2 \left\{\lambda(T-t) - 1 + e^{-\lambda(T-t)}\right\}\,dt \right. \\[2mm] & & - \left(1+e^{-\lambda T}\right)\frac{T^3}{6} + \left. 2\int_0^T t(T-t)\,e^{-\lambda (T - t)}\,dt \right], \\ & & \hspace*{2cm} \mbox{pairing equal integrals via the substitution $t \mapsto T-t$}\\[2mm] & = & \frac{4\sigma^2}{\lambda^2 T^3}\left[ 2 \int_0^T \left(\lambda T\, t^2 - \lambda t^3 - t^2\right)\,dt - \left(1+e^{-\lambda T}\right)\frac{T^3}{6} \right. \\[1mm] & & + \left. 2T\,e^{-\lambda T}\int_0^T t\,e^{\lambda t}\,dt\right], \\ & & \hspace*{1cm} \mbox{since $2\int_0^T t^2 e^{-\lambda(T-t)}\,dt + 2\int_0^T t(T-t)\,e^{-\lambda (T-t)}\,dt = 2T\,e^{-\lambda T}\int_0^T t\,e^{\lambda t}\,dt$}\\[2mm] & = & \frac{4\sigma^2}{\lambda^2 T^3}\left[ 2\left(\frac{\lambda T^4}{12} - \frac{T^3}{3}\right) - \left(1+e^{-\lambda T}\right)\frac{T^3}{6} \right. \\[2mm] & & + \left.
2T\,e^{-\lambda T}\left(\frac{1}{\lambda^2} - \frac{1}{\lambda^2}e^{\lambda T} + \frac{T}{\lambda}e^{\lambda T}\right)\right] \\[2mm] & = & \frac{4\sigma^2}{\lambda^2 T^3} \left(\frac{\lambda T^4}{6} - \frac{5T^3}{6} + \frac{2T^2}{\lambda} - \frac{2T}{\lambda^2} + \frac{2T}{\lambda^2}\,e^{-\lambda T} - \frac{T^3}{6}\,e^{-\lambda T} \right) \\[2mm] & = & \frac{2\sigma^2}{3\lambda^2}\left[\lambda T - 5 + \frac{12}{\lambda T} - \frac{12}{\lambda^2 T^2} + \frac{12}{\lambda^2 T^2} \, e^{-\lambda T} - e^{-\lambda T} \right] \end{array}\] If we assume $T$ and $\lambda$ grow with $\frac{T}{\lambda} = constant = C$, we get \[\begin{array}{rcl} \lefteqn{error(avg)}\\[2mm] & = & \frac{2\sigma^2}{3}\left[C - \frac{5\,C^2}{T^2} + \frac{12\,C^3}{T^4} - \frac{12\,C^4}{T^6} + \frac{12\,C^4}{T^6} \, e^{-T^2/C} - \frac{C^2}{T^2}\, e^{-T^2/C} \right] \\[3mm] & = & \frac{2\sigma^2\, C}{3}, ~~~~~as~~~ T\rightarrow\infty ~~with~~ T=C\,\lambda \end{array}\] From the above result, we see that the average error approaches zero as $T$, the time period between localizations, tends to zero, and the error grows as $T$ becomes large. It is important to note that as $\lambda$ becomes very large, i.e., as the sensor changes its direction more frequently, the error becomes very small. If both $T$ and $\lambda$ grow with a constant ratio, i.e., $\frac{T}{\lambda} = constant$, the error approaches a constant value. Therefore, if we have prior knowledge of the rate of direction change of the sensors' motion, we can control the energy with an acceptable level of error by adjusting the value of $C$. \section{Analysis by Simulations} \label{sim:res} Simulation studies were carried out using ns-2~\cite{NS} to compare the performance of the proposed technique with that of MADRD. In the simulation study, we concentrated mainly on the average error distance for different numbers of localization calls. We assume that the sensors move with the RWP mobility model with parameters as in Table~\ref{Sim:Param}.
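The closed form for $error(avg)$ derived above is easy to evaluate numerically, which also lets one check the asymptote $\frac{2\sigma^2 C}{3}$. The sketch below is our own code (not part of the ns-2 simulation study):

```python
import math

def error_avg(T, lam, sigma):
    """Closed-form average squared error derived in the analysis:
    (2 sigma^2 / (3 lam^2)) * [lam T - 5 + 12/(lam T) - 12/(lam T)^2
                               + 12 e^{-lam T}/(lam T)^2 - e^{-lam T}]."""
    x = lam * T
    return (2 * sigma**2 / (3 * lam**2)) * (
        x - 5 + 12 / x - 12 / x**2 + 12 * math.exp(-x) / x**2 - math.exp(-x)
    )
```

For instance, with $\sigma=10$ and $T/\lambda = C = 50$, the value for large $T$ approaches $\frac{2\sigma^2 C}{3} = 3333.3$, while for fixed $\lambda$ the error grows with $T$, consistent with the discussion above.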
\begin{table}[h] \vspace*{-2mm} \centering \begin{tabular}{|l|l|} \hline Mobility model & Random Waypoint Model\\ \hline Velocity components distribution & $Normal(0,\sigma)$, $\sigma = 5.0$ unit\\ \hline Time gap between waypoints distribution & Exponential mean $\frac{1}{\lambda} = 10.0$ sec\\ \hline \end{tabular} \caption{Relevant parameters used in simulation } \label{Sim:Param} \end{table} \vspace*{-1mm} In this work, the velocity components are chosen from independent Normal distributions and the time interval between any pair of consecutive waypoints is chosen from the Exponential distribution, with the parameters described in Table~\ref{Sim:Param}. Time is measured in $sec$, velocities in $unit/sec$, and position errors in distance units (i.e., $unit$ as in Table~\ref{Sim:Param}). The simulation was carried out over a time span of $100\,secs$. Using MADRD and MAINT, we estimate the position of a mobile sensor at a random time in $[0,100\,secs]$. The errors of these estimated positions are computed against the actual positions. We also observed the number of localization calls in $[0,100\,secs]$. The experiment is repeated approximately $10000$ times, and the data are grouped with respect to the number of localization calls. In Figure~\ref{ErrEnrgFig}, we plot the average error for different numbers of localization calls. \begin{figure}[h] \vspace*{-3mm} \centerline{\includegraphics[width=.7\textwidth]{analysis} } \caption{Average error and localization counts for MADRD and the proposed technique.} \label{ErrEnrgFig} \vspace*{-2.5mm} \end{figure} This figure shows that MAINT performs uniformly better than MADRD. For a fixed error level, the localization count, and hence the energy consumption, of MAINT is significantly lower. Also, for comparable localization counts, MAINT has a much lower average error. It estimates the position of a sensor with less error while consuming less energy.
MAINT locates a mobile sensor almost exactly while consuming approximately half the energy of MADRD. MAINT requires more memory to hold the history of location queries; in practice, a limited number of the most recent query points may be held. Thus MAINT saves valuable energy at the cost of cheap memory. In Figure~\ref{Sim:ErrTheo}, we compare the expected error in position estimation by MAINT with the theoretical expression derived above. \begin{figure}[h!] \vspace*{-5mm} \centering \input{simTimeVsErrMu10S5} \caption{Average error in estimation by the proposed technique and theoretical analysis.} \label{Sim:ErrTheo} \vspace*{-2.5mm} \end{figure} This simulation was carried out under the same RWP model in a C++ environment. In the course of this simulation, MAINT calls the localization procedure with a fixed time period $T$. This process is repeated at least $100$ times for each particular value of $T$, and the average error is plotted against several values of $T$ in Figure~\ref{Sim:ErrTheo}. This shows that the error may be computed from the deduced expression. Simulation studies with $\frac{T}{\lambda}=50.0$ and $\sigma = 10.0$ are shown in Figure~\ref{Sim:ErrVarT}. \begin{figure}[h!] \vspace*{-5mm} \centering \input{simTimeVsErrC50S10} \caption{Average error in estimation by the proposed technique with $\frac{T}{\lambda}=50.0$.} \label{Sim:ErrVarT} \vspace*{-2.5mm} \end{figure} This plot shows the asymptotic nature of the average error: the average error becomes stable if $T$ varies with $\frac{T}{\lambda}=constant$. From Figures~\ref{Sim:ErrTheo} and \ref{Sim:ErrVarT}, we observe that our theoretical analysis is well supported by the simulation studies. \section{Conclusion} \label{conclude} The technique proposed in this paper estimates the location of a mobile sensor. Tilak et al.~\cite{TKAK05} proposed MADRD, which uses extrapolation: the position of the sensor is estimated using the velocity computed from the last two localization points.
In our proposed method, MAINT, we use interpolation: the velocity is calculated from the last and the next localization points. In the simulation studies, we see that MAINT estimates the position of the sensor with much lower error than MADRD. If the parameters of the model are known, the error in the position estimation may be computed at any moment from the deduced expression, instead of using the actual position. The localization time interval controls the energy dissipation. A constant error limit can be maintained if the time period of localization increases proportionally to the rate of change of direction of the motion; by increasing the time period, energy may be saved while keeping the error stable. From the analysis, we observe that when a sensor changes the direction of its motion frequently, our proposed technique provides the location with very low error, as opposed to the method proposed by Tilak et al. Work is in progress to analyze the performance of the proposed model under other movement models such as the Gaussian movement model and the Brownian motion model. \bibliographystyle{plain}
\section{Introduction} Similarity measure is widely used in various data mining and machine learning tasks. In clustering analysis, its impact on the quality of the result is critical~\cite{steinbach2004challenges}. A recent proposal of a data dependent similarity called Isolation Kernel has enabled SVM to produce better classification accuracy by simply replacing the commonly used data independent kernel (such as the Gaussian kernel) with Isolation Kernel \cite{ting2018IsolationKernel}. This is made possible on datasets of varied densities because Isolation Kernel is adaptive to the local data distribution, such that two points in a sparse region are more similar than two points of equal inter-point distance in a dense region. Despite this success, the kernel characteristic has not been formally proven yet. This paper extends this line of research by investigating a different implementation of Isolation Similarity. We provide a formal proof of the characteristic of Isolation Similarity for the first time since its introduction. In addition, we focus on using Isolation Similarity to improve the performance of density-based clustering. This paper identifies shortcomings of the tree-based method currently employed in inducing Isolation Similarity \cite{ting2018IsolationKernel}. Instead, we investigate a different method to induce the data dependent Isolation Similarity, and evaluate its clustering performance using DBSCAN \cite{ester1996density} in comparison with two existing improvements of density-based clustering, i.e., DScale \cite{DSCALE} and DP \cite{rodriguez2014clustering}. The rest of the paper is organised as follows. We reiterate Isolation Kernel, identify the shortcomings of using the tree-based method to induce Isolation Similarity, provide the proposed alternative that employs a nearest neighbour method, present the lemma and proof of the characteristic of Isolation Similarity, and describe the investigation of using Isolation Similarity in density-based clustering.
The descriptions of existing works are framed in order to clearly differentiate them from the contributions we make here. \section{Isolation Kernel} \label{sec_Isolation_Kernel} Isolation Kernel/Similarity was first proposed by \cite{ting2018IsolationKernel} as a new similarity which can adapt to the density structure of the given dataset, as opposed to commonly used data independent kernels such as Gaussian and Laplacian kernels. In the classification context, Isolation Kernel has been shown to be an effective means to improve the accuracy of SVM, especially on datasets which have varied densities in the class overlap regions \cite{ting2018IsolationKernel}. This is achieved by simply replacing the commonly used data independent kernel such as Gaussian and Laplacian kernels with the Isolation Kernel. In the context of SVM classifiers, Isolation Kernel \cite{ting2018IsolationKernel} has been shown to be more effective than existing approaches such as distance metric learning \cite{zadeh2016geometric,Wang2015}, multiple kernel learning \cite{rakotomamonjy2008simplemkl,MKL2011} and Random Forest kernel \cite{Breiman2000,Davis2014}. The characteristic of Isolation Kernel is akin to one aspect of human-judged similarity as discovered by psychologists \cite{Krumhansl,Tversky}, i.e., humans judge two identical Caucasians as less similar when they are compared in Europe (which has many Caucasians) than in Asia. We restate the definition and kernel characteristic \cite{ting2018IsolationKernel} below. \begin{framed} Let $D=\{x_1,\dots,x_n\}, x_i \in \mathbb{R}^d$ be a dataset sampled from an unknown probability density function $x_i \sim F$. Let $\mathcal{H}_\psi(D)$ denote the set of all partitions $H$ that are admissible under $D$ where each isolating partition $\theta \in H$ isolates one data point from the rest of the points in a random subset $\mathcal D \subset D$, and $|\mathcal D|=\psi$.
\begin{definition} For any two points $x, y \in \mathbb{R}^d$, Isolation Kernel of $x$ and $y$ wrt $D$ is defined to be the expectation taken over the probability distribution on all partitionings $H \in {\mathcal H}_\psi(D)$ that both $x$ and $y$ fall into the same isolating partition $\theta \in H$: \begin{equation} K_\psi(x,y|D) = {\mathbb E}_{{\mathcal H}_\psi(D)} [\mathbb{I}(x,y \in \theta\ | \ \theta \in H)] \label{eqn_kernel} \end{equation} \noindent where $\mathbb{I}(B)$ is the indicator function which outputs 1 if $B$ is true; otherwise, $\mathbb{I}(B)=0$. \label{def_Isolation_Kernel} \end{definition} In practice, $K_\psi$ is estimated from a finite number of partitionings $H_i \in \mathcal{H}_\psi(D), i=1,\dots,t$ as follows: \begin{eqnarray} K_\psi(x,y|D) = \frac{1}{t} \sum_{i=1}^t \mathbb{I}(x,y \in \theta\ | \ \theta \in H_i) \label{eqn_kernel2} \end{eqnarray} The characteristic of Isolation Kernel is: {\bf two points in a sparse region are more similar than two points of equal inter-point distance in a dense region}, i.e., \vspace{2mm} {\bf Characteristic of $K_\psi$}: $\forall x, y \in \mathcal{X}_\mathsf{S}$ and $\forall x',y' \in \mathcal{X}_\mathsf{T}$ such that $\parallel x-y \parallel\ =\ \parallel x'- y'\parallel$, $K_\psi$ satisfies the following condition: \begin{eqnarray} K_\psi( x, y) > K_\psi( x', y') \label{eqn_condition} \end{eqnarray} \noindent where $\mathcal{X}_\mathsf{S}$ and $\mathcal{X}_\mathsf{T}$ are two subsets of points in sparse and dense regions of $\mathbb{R}^d$, respectively; and $\parallel x-y\parallel$ is the distance between $x$ and $y$. To get the above characteristic, the required property of the space partitioning mechanism is to create large partitions in the sparse region and small partitions in the dense region such that \emph{two points are more likely to fall into the same partition in a sparse region than two points of equal inter-point distance in a dense region}.
\end{framed} \section{Shortcomings of\\ tree-based isolation partitioning} \label{sec_shortcomings} Isolation Kernel \cite{ting2018IsolationKernel} employs isolation trees or iForest \cite{liu2008isolation} to measure the similarity of two points because its space partitioning mechanism produces the required partitions, i.e., partitions whose volumes are monotonically decreasing wrt the density of the local region. Here we identify two shortcomings in using isolation trees to measure Isolation Similarity: each isolation tree (i) employs axis-parallel splits; and (ii) is an imbalanced tree. Figure \ref{fig_compare}(a) shows an example partitioning due to axis-parallel splits of an isolation tree. The tree-based isolating partitions generally satisfy the requirement of small partitions in the dense region and large partitions in the sparse region. \begin{figure} \centering \begin{subfigure}{0.23\textwidth} \centering \includegraphics[width=\textwidth]{partition-axisparallel} \caption{Axis-parallel splitting} \end{subfigure} \begin{subfigure}{0.23\textwidth} \centering \includegraphics[width=\textwidth]{partition-voronoi} \caption{NN partitioning} \end{subfigure} \caption{Examples of two isolation partitioning mechanisms: Axis-parallel versus nearest neighbour (NN), on a dataset having two (uniform) densities, where the right half has a higher density than the left half.} \label{fig_compare} \end{figure} However, they produce an undesirable effect: some partitions are always overextended by the first few splits close to the root of an imbalanced tree\footnote{Imbalanced trees are a necessary characteristic of isolation trees \cite{liu2008isolation} for their intended purpose of detecting anomalies, where anomalies are expected to be isolated with few splits; and normal points can only be isolated using a large number of splits.}. These are manifested as elongated rectangles in Figure \ref{fig_compare}(a).
While using balanced trees can be expected to overcome this problem, the restriction to hyper-rectangles remains due to the use of axis-parallel splits. To overcome these shortcomings of isolation trees, we propose to use a nearest neighbour partitioning mechanism which creates a Voronoi diagram \cite{aurenhammer1991voronoi} where each cell is an isolating partition (i.e., it isolates one point from the rest of the points in the given sample). An example is provided in Figure \ref{fig_compare}(b). Note that these partitions also satisfy the requirement of small partitions in the dense region and large partitions in the sparse region, but they do not have the undesirable effect of elongated rectangles. We provide our implementation in the next section. \section{Nearest neighbour-induced\\ Isolation Similarity} \label{sec_NN_Method} Instead of using trees as in its first implementation \cite{ting2018IsolationKernel}, we propose to implement Isolation Similarity using nearest neighbours. Like the tree method, the nearest neighbour method also produces each model $H$ consisting of $\psi$ isolating partitions $\theta$, given a subsample of $\psi$ points. Rather than representing each isolating partition as a hyper-rectangle, it is represented as a cell in a Voronoi diagram \cite{aurenhammer1991voronoi}, where the boundary between two cells is equidistant from their two cell centres. While the Voronoi diagram is nothing new, its use in measuring similarity is new. Using the same notations as before, $H$ is now a Voronoi diagram, built by employing the $\psi$ points in $\mathcal D$, where each isolating partition or Voronoi cell $\theta \in H$ isolates one data point from the rest of the points in $\mathcal D$. We call the point which determines a cell the cell centre.
Given a Voronoi diagram $H$ constructed from a sample $\mathcal{D}$ of $\psi$ points, the Voronoi cell centred at $z \in \mathcal{D}$ is: \[ \theta[z] = \{x \in \mathbb{R}^d \ | \ z = \argmin_{\mathsf{z} \in \mathcal{D}} \ell_p(x, \mathsf{z})\}. \] \noindent where $\ell_p(x, y)$ is a distance function; we use $p=2$, i.e., Euclidean distance, in this paper. \label{sec_proof} \begin{definition} For any two points $x, y \in \mathbb{R}^d$, the nearest neighbour-induced Isolation Similarity of $x$ and $y$ wrt $D$ is defined to be the expectation taken over the probability distribution on all Voronoi diagrams $H \in {\mathcal H}_\psi(D)$ that both $x$ and $y$ fall into the same Voronoi cell $\theta \in H$: \begin{eqnarray} K_\psi(x,y\ |\ D) &=& {\mathbb E}_{{\mathcal H}_\psi(D)} [\mathbb{I}(x,y \in \theta[z]\ | \ \theta[z] \in H)] \nonumber \\ &=& {\mathbb E}_{\mathcal{D} \sim D} [\mathbb{I}(x,y\in \theta[z]\ | \ z\in \mathcal{D})] \nonumber \\ &=& P(x,y\in \theta[z]\ | \ z\in \mathcal{D} \subset D) \label{eqn_kernel_anne} \end{eqnarray} \noindent where $P$ denotes the probability. \label{def_anne_similarity} \end{definition} The Voronoi diagram has the required property of the space partitioning mechanism: it produces large partitions in a sparse region and small partitions in a dense region. This yields the characteristic of Isolation Similarity: {\bf two points in a sparse region are more similar than two points of equal inter-point distance in a dense region}. The use of nearest neighbours facilitates a proof of the above characteristic that was previously hampered by the use of trees. We provide the proof in the next section.
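The estimator implied by the definition above can be sketched in a few lines of code. The following is a minimal, illustrative Python/numpy sketch (the authors' implementation is in Matlab; the function name and interface are ours): each of $t$ trials samples $\psi$ points from $D$ to build a Voronoi diagram implicitly, and counts the fraction of trials in which $x$ and $y$ share the same nearest sample point, i.e., the same cell.

```python
import numpy as np

def anne_similarity(x, y, D, psi, t, seed=None):
    """Estimate the nearest neighbour-induced Isolation Similarity
    K_psi(x, y | D): the fraction of t Voronoi diagrams, each built
    from psi points sampled from D without replacement, in which x and
    y fall into the same cell, i.e., share the same nearest sample
    point under Euclidean (l2) distance."""
    rng = np.random.default_rng(seed)
    x, y, D = np.asarray(x), np.asarray(y), np.asarray(D)
    hits = 0
    for _ in range(t):
        # the psi sampled points are the cell centres of this diagram
        S = D[rng.choice(len(D), size=psi, replace=False)]
        zx = np.argmin(np.linalg.norm(S - x, axis=1))  # cell of x
        zy = np.argmin(np.linalg.norm(S - y, axis=1))  # cell of y
        hits += (zx == zy)
    return hits / t
```

With a dataset whose two halves have very different densities, a pair in the sparse half obtains a larger similarity than a pair of equal inter-point distance in the dense half, matching the stated characteristic.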
\section{Lemma and Proof of the characteristic of Isolation Similarity} Let $\rho(x)$ denote the density at point $x$. A lemma based on Definition \ref{def_anne_similarity} is given below: \begin{lemma} $\forall x, y \in \mathcal{X}_\mathsf{S}$ (sparse region) and $\forall x',y' \in \mathcal{X}_\mathsf{T}$ (dense region) such that $\forall_{z\in \mathcal{X}_\mathsf{S}, z'\in \mathcal{X}_\mathsf{T}} \ \rho(z)<\rho(z')$, the nearest neighbour-induced Isolation Similarity $K_\psi$ has the characteristic that $\ell_p(x, y)\ =\ \ell_p(x', y')$ implies \begin{eqnarray} P(x,y\in \theta[z]) > P(x',y'\in \theta[z']) \equiv \hspace{3cm} \nonumber \\ K_\psi( x, y\ |\ D) > K_\psi( x', y'\ |\ D) \nonumber \label{eqn_condition0} \end{eqnarray} \end{lemma} Sketch of the proof: (i) If two points fall into the same Voronoi cell, then the distances of these individual points to this cell centre must be shorter than those to every other cell centre (or at most equal to those to one other cell centre) in a Voronoi diagram formed by all these cell centres. (ii) In a subset of $\psi$ points, sampled from $D$ and used to form a Voronoi diagram, the probability of two points falling into the same Voronoi cell can then be estimated based on the condition stated in (i). (iii) The probability of two points of equal inter-point distance falling into the same Voronoi cell is a monotonically decreasing function wrt the density of the cell. \begin{proof} Define a local region $V(x,y)$ covering both $x$ and $y$ as a ball centred at the midpoint between $x$ and $y$ and having $\ell_p(x,y)$ as its diameter. Assume that the density in $V(x,y)$ is uniform and denote it as $\rho(V(x,y))$. Let $\mathcal{N}_\epsilon(x)$ be the $\epsilon$-neighbourhood of $x$, i.e., $\mathcal{N}_\epsilon(x)=\lbrace y \in D ~|~ \ell_p(x,y) \leqslant \epsilon \rbrace$.
The probability that both $x$ and $y$ are in the same Voronoi cell $\theta [z]$ is equivalent to the probability of a point $z\in \mathcal{D}$ being the nearest neighbour of both $x$ and $y$ wrt all other points in $\mathcal{D}$, i.e., the probability of selecting $\psi-1$ points which are all located outside the region $U(x,y,z)$, where $U(x,y,z)=\mathcal{N}_{\ell_p(x,z)}(x) \cup \mathcal{N}_{\ell_p(y,z)}(y)$. To simplify notation, $z\in \mathcal{D}$ is omitted. Then the probability of $x,y\in \theta[z]$ can be expressed as follows: \begin{eqnarray} P(x,y\in \theta[z]\ | \ z\in V(x,y)) \mbox{\hspace{3cm}} \nonumber\\ = P(z_1,z_2,\dots,z_{(\psi-1)} \notin U(x,y,z)) \hspace{7mm} \nonumber\\ \propto (1-{\mathbb E}_{z \sim V(x,y)} [|U(x,y,z)|]/|D|)^{(\psi-1)} \nonumber \label{pro2} \end{eqnarray} \noindent where $|W|$ denotes the cardinality of $W$. Assuming that the data in $U(x,y,z)$ are also uniformly distributed with the same density $\rho(V(x,y))$, the expected value of $|U(x,y,z)|$ can be estimated as: \begin{eqnarray} {\mathbb E}_{z \sim V(x,y)} [|U(x,y,z)|] \mbox{\hspace{4cm}} \nonumber \\ = {\mathbb E}_{z \sim V(x,y)} [\upsilon(U(x,y,z)) \times \rho(V(x,y))] \nonumber\\ = {\mathbb E}_{z \sim V(x,y)} [\upsilon(U(x,y,z))] \times \rho(V(x,y)) \nonumber \label{pro4} \end{eqnarray} \noindent where $\upsilon(W)$ denotes the volume of $W$. Thus, we have \begin{eqnarray} P(x,y\in \theta[z]\ | \ z\in V(x,y)) \propto \mbox{\hspace{3cm}} \nonumber\\ \Big(1-{\mathbb E}_{z \sim V(x,y)} [\upsilon(U(x,y,z))]\times \frac{\rho(V(x,y))}{|D|}\Big)^{(\psi-1)} \label{pro5} \end{eqnarray} In other words, the higher the density in the area around $x$ and $y$, the smaller $P(x,y\in \theta[z]\ | \ z\in V(x,y))$ is, as the volume of $V(x,y)$ is constant given $x$ and $y$.
Given two pairs of points from two different regions but of equal inter-point distance: $\forall x, y \in \mathcal{X}_\mathsf{S}$ (sparse region) and $\forall x',y' \in \mathcal{X}_\mathsf{T}$ (dense region) such that $\ell_p(x, y)\ =\ \ell_p(x', y')$. Assume that the data are uniformly distributed in both regions, and we sample $z,z' \in \mathcal{D}$ from $D$ such that $z\in V(x,y)$ and $z'\in V(x',y')$. We have ${\mathbb E}_{z \sim V(x,y)} [\upsilon(U(x,y,z))]={\mathbb E}_{z' \sim V(x',y')} [\upsilon(U(x',y',z'))]$ because the volume of $V(x,y)$ is equal to that of $V(x',y')$ for $\ell_p(x, y)\ =\ \ell_p(x', y')$, independent of the density of the region. Suppose we choose a sufficiently large sample size $\psi$ of $\mathcal{D}$ which contains points from both $V(x,y)$ and $V(x',y')$. When the data are uniformly distributed in $U(x,y,z) \in \mathcal{X}_\mathsf{S}$ and $U(x',y',z') \in \mathcal{X}_\mathsf{T}$, based on Equation \ref{pro5}, we have \begin{eqnarray} P(x,y\in \theta[z]\ | \ z\in V(x,y)) > \hspace{3cm} \nonumber \\ P(x',y'\in \theta[z']\ | \ z'\in V(x',y')) \nonumber\\ \equiv K_\psi(x,y\ |\ D) > K_\psi(x',y'\ |\ D) \hspace{3cm}\nonumber \label{eq10} \end{eqnarray} This means that $x'$ and $y'$ (in a dense region) are more likely to be in different cells than $x$ and $y$ (in a sparse region), as shown in Figure~\ref{fig_compare}. \hfill $\square$\\ \end{proof} \newpage A simulation validating the above analysis is given in Figure~\ref{gap}. It compares $P(x,y\in \theta_\mathsf{S})$ and $P(x',y'\in \theta_\mathsf{T})$, where $x,y$ are from a sparse region and $x',y'$ are from a dense region, with equal inter-point distance. Given a fixed $\psi < |D|$ or a fixed inter-point distance, the properties observed from Figure~\ref{gap} are as follows: \begin{enumerate} \item $P(x,y\in \theta_\mathsf{S})>P(x',y'\in \theta_\mathsf{T})$. \item The rate of decrease of $P(x',y'\in \theta_\mathsf{T})$ is faster than that of $P(x,y\in \theta_\mathsf{S})$.
Thus $P(x',y'\in \theta_\mathsf{T})$ reaches 0 earlier. \end{enumerate} \begin{figure} \centering \captionsetup{justification=centering} \begin{subfigure}{0.4\textwidth} \centering \includegraphics[width=100pt]{demo1.png} \caption{Two regions of different densities} \label{gap:a} \end{subfigure}\\ \begin{subfigure}{0.23\textwidth} \includegraphics[width=110pt]{dis02.png} \caption{$\psi$ increases \\ Inter-point distance=0.2} \label{gap:c} \end{subfigure} \begin{subfigure}{0.23\textwidth} \includegraphics[width=110pt]{psi15.png} \caption{$\psi$=15 \\ Inter-point distance increases} \label{gap:e} \end{subfigure} \caption{(a) Reference points used in the simulations, where the inter-point distance $\parallel x-y \parallel\ =\ \parallel x'- y'\parallel$ increases. Simulation results as $\psi$ increases (b); and as the inter-point distance increases (c). $t=10000$ is used.} \label{gap} \end{figure} \section{Isolation Dissimilarity and contour maps} \label{sec_example_contour_map} To be consistent with the concept of distance as a kind of dissimilarity, we use Isolation Dissimilarity hereafter: $\mathfrak p_\imath(x,y)= 1 - K_\psi(x,y)$. Like the $\ell_p$ norm, $\forall x, \mathfrak p_\imath(x,x) = 0$ and $\mathfrak p_\imath(x,y) = \mathfrak p_\imath(y,x)$. However, $\forall x \ne y,\ \mathfrak p_\imath(x,y)$ depends on the data distribution and on how $\mathfrak p_\imath$ is implemented, not on the geometric positions only. We denote the nearest-neighbour-induced Isolation Dissimilarity as $\mathfrak p_\imath$-aNNE; and the tree-induced version as $\mathfrak p_\imath$-iForest. An example comparison of the contour maps produced by the two dissimilarities is given in Figure~\ref{fig_contour_example}. Note that the contour maps of $\mathfrak p_\imath$ depend on the data distribution, whereas those of $\ell_2$ do not. Also, compared with $\ell_2$, both Isolation Dissimilarities change more slowly in areas far from the centre point and faster in areas close to the centre point.
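In practice, $\mathfrak p_\imath$-aNNE can be precomputed as a full pairwise dissimilarity matrix over a dataset. The following is a simplified numpy sketch (a stand-in for the paper's Matlab/GPU implementation; the function name is ours): for each of $t$ trials, every point is assigned to its nearest sample point among $\psi$ sampled cell centres, and $\mathfrak p_\imath(x,y)$ is one minus the fraction of trials in which $x$ and $y$ share a cell.

```python
import numpy as np

def anne_dissimilarity_matrix(D, psi, t, seed=0):
    """Pairwise p_i-aNNE dissimilarity 1 - K_psi(x, y | D) for all
    pairs in D, estimated over t Voronoi diagrams of psi sampled
    cell centres each (Euclidean distance)."""
    rng = np.random.default_rng(seed)
    D = np.asarray(D, dtype=float)
    n = len(D)
    same = np.zeros((n, n))
    for _ in range(t):
        S = D[rng.choice(n, size=psi, replace=False)]
        # nearest cell centre of every point in D: shape (n,)
        d2 = ((D[:, None, :] - S[None, :, :]) ** 2).sum(-1)
        cell = d2.argmin(axis=1)
        same += cell[:, None] == cell[None, :]
    return 1.0 - same / t
```

By construction the matrix is symmetric with a zero diagonal and entries in $[0,1]$, matching the properties of $\mathfrak p_\imath$ stated above.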
\begin{figure} \begin{subfigure}{0.23\textwidth} \centering \includegraphics[width=110pt,height=90pt]{contour-pi-anne} \caption{$\mathfrak p_\imath$-aNNE} \end{subfigure}% \begin{subfigure}{0.23\textwidth} \centering \includegraphics[width=110pt,height=90pt]{contour-pi-iforest} \caption{$\mathfrak p_\imath$-iForest} \end{subfigure}% \caption{Contour plots of $\mathfrak p_\imath$ on the Thyroid dataset (mapped to 2 dimensions using MDS \cite{borg2012applied}). $\psi=14$ is used in aNNE and iForest.} \label{fig_contour_example} \end{figure} We examine the impact of Isolation Dissimilarity on density-based clustering in the rest of the paper. We describe the neighbourhood density function commonly used in the density-based clustering and its counterpart called neighbourhood mass function in the next section. \section{Neighbourhood density and mass functions} \label{sec_neighbourhood_function} \begin{framed} Neighbourhood mass function \cite{ting2016overcoming} was first proposed as a way to model data in terms of mass distribution, analogous to the neighbourhood density function used in modelling data as a density distribution. The key difference is the measure $\partial$ used in the following function of $x$: $\#\{y \in D\ |\ \partial(x,y) \le \mbox{cutoff} \}$, where the boundary of the region within which the points are counted is set by a user-specified constant cutoff. When a distance measure is used, it becomes a neighbourhood density function as the ball has a fixed volume when the cutoff is fixed. It is a neighbourhood mass function when a data dependent dissimilarity is used as the volume of the `ball' varies depending on the local distribution even if the cutoff is fixed. 
\end{framed} \section{Mass-connected clusters} \label{sec_mass-connectivity} Here we define mass-connected clusters in terms of the neighbourhood mass function: $$ M_\alpha(x) = \#\{ y \in D\ |\ \mathfrak p_\imath(x,y) \le \alpha\} $$ \begin{definition} Using the $\alpha$-neighbourhood mass estimator $ M_\alpha(x) = \#\{ y \in D\ |\ \mathfrak p_\imath(x,y) \le \alpha\} $, mass-connectivity with threshold $\tau$ between $x_1$ and $x_p$ via a sequence of $p$ unique points from $D$, i.e., $\{x_1,x_2,x_3,...,x_p\}$, is denoted as $MConnect_{\alpha}^{\tau}(x_1, x_p)$, and it is defined as: \begin{equation} \begin{split} MConnect_{\alpha}^{\tau}(x_1, x_p) & \leftrightarrow \\ [(\mathfrak p_\imath(x_1,x_2)\leq \alpha) \wedge & ((M_{\alpha}(x_1)\geq \tau) \vee (M_{\alpha}(x_2)\geq \tau))] \\ \vee [\exists_{\{x_1,x_2,...,x_p\}} \ & (\forall_{i\in\{2,...,p\}} \mathfrak p_\imath(x_{i-1},x_{i}) \leq \alpha) \\ \wedge & (\forall_{i\in \{2,...,p-1\}} \ M_{\alpha}(x_i)\geq \tau)] \end{split} \label{def:connect} \end{equation} \end{definition} The second line denotes direct connectivity between two neighbouring points when $p=2$. The last two lines denote transitive connectivity when $p>2$. \begin{definition} A mass-connected cluster $\widetilde C$, which has a mode ${\bf{c}}=\arg\max_{\substack{x\in \widetilde C}}{M}_\alpha(x)$, is a maximal set of points that are mass-connected with its mode, i.e., $\widetilde C=\{x\in D \ | \ MConnect_{\alpha}^{\tau}(x, \bf c)\}$. \end{definition} Note that density-connectivity and density-connected clusters are similarly defined in DBSCAN \cite{ester1996density} when $M_\alpha = \#\{ y \in D\ |\ \mathfrak p_\imath(x,y) \le \alpha\}$ is replaced with $N_\epsilon = \#\{ y \in D\ |\ \ell_p(x,y) \le \epsilon\} $ in the above two definitions. In other words, DBSCAN \cite{ester1996density} which uses $N_\epsilon$ detects density-connected clusters; whereas DBSCAN which uses $M_\alpha$ detects mass-connected clusters.
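Given a precomputed (dis)similarity matrix, $M_\alpha$ (or $N_\epsilon$) is a simple row-wise count. A minimal sketch (the function name and matrix-based interface are illustrative, not the authors' code):

```python
import numpy as np

def neighbourhood_mass(dis, alpha):
    """M_alpha(x) = #{y in D | dis(x, y) <= alpha} for every point,
    given a precomputed n x n (dis)similarity matrix.  With l_p
    distances this is the neighbourhood density function N_eps; with a
    data dependent dissimilarity such as p_i it is the neighbourhood
    mass function M_alpha.  The count includes the point itself."""
    return (np.asarray(dis) <= alpha).sum(axis=1)
```

Only the measure behind the matrix changes between the density and the mass interpretation; the counting function itself is identical.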
The only difference between a density-connected cluster and a mass-connected cluster is the dissimilarity measure used in Equation \ref{def:connect}. We call the DBSCAN procedure which employs $M_\alpha$ MBSCAN, since it detects mass-connected clusters rather than density-connected clusters. \section{Condition under which MBSCAN detects all mass-connected clusters} \label{sec_condition} Let a valley between two cluster modes be the points having the minimum estimated $M_\alpha(\cdot)$, i.e., ${\mathfrak g}_{ij}$, along any path linking cluster modes ${\bf c}_{i}$ and ${\bf c}_{j}$. A path between two points ${\bf x}$ and ${\bf y}$ is a non-cyclic sequence of unique points starting with ${\bf x}$ and ending with ${\bf y}$ in which adjacent points lie in each other's $\alpha$-neighbourhood: $\mathfrak p_\imath(\cdot,\cdot) \le \alpha$. Because $\mathfrak p_\imath$ (unlike $\ell_2$ used in $N_\epsilon$) is adaptive to the density of the local data distribution, it is possible to adjust $\psi$ and $\alpha$ to yield an $M_\alpha$ distribution such that all valley-points have sufficiently small values, if such $\psi$ and $\alpha$ exist. In other words, for some data distributions $F$, there exist some $\psi$ and $\alpha$ such that the distribution of $M_\alpha(\cdot)$ satisfies the following condition: \begin{equation} \min_{\substack{k\in \lbrace1,\dots,\aleph \rbrace}} M_\alpha({\bf c}_k) > \max_{\substack{i\neq j\in \lbrace1,\dots,\aleph \rbrace}} \hat{\mathfrak{g}}_{ij} \label{eqn_condition1} \end{equation} \noindent where $\hat{\mathfrak{g}}_{ij}$ is the largest of the minimum estimated $M_\alpha(\cdot)$ along any path linking cluster modes ${\bf c}_{i}$ and ${\bf c}_{j}$.
In such data distributions $F$, MBSCAN is able to detect all mass-connected clusters because a threshold $\tau$ can be used to break all paths between the modes by assigning regions with estimated $M_\alpha(\cdot)$ less than ${\tau}$ to noise, i.e., \[ \exists_{{\tau}} \forall_{k,i\neq j \in \lbrace 1,...,\aleph \rbrace} M_{{\alpha}}({\bf c}_k) \geqslant {\tau} > \hat{\mathfrak{g}}_{ij} \] An example in which $F$ subsumes $G$, derived from the same dataset, is shown in Figure \ref{fig_mass-estimation}: a hard distribution $G$ in which DBSCAN fails to detect all clusters is shown in Figure \ref{fig_mass-estimation}(a); but MBSCAN succeeds\footnote{Note that the above condition was first described in the context of using mass-based dissimilarity \cite{Ting-MLJ2018}; but not in relation to mass-connected clusters. We have made the relation to mass-connected clusters more explicit here.}. In other words, the mass distribution afforded by $M_\alpha$ is more flexible than the density distribution generated by $N_\epsilon$, which leads directly to MBSCAN's enhanced cluster detection capability in comparison with DBSCAN, even though both use exactly the same algorithm and differ only in the dissimilarity. Figure \ref{fig_change_rate} shows the change of the neighbourhood function values wrt the change in their parameter, for $N_\epsilon$ using $\ell_2$ and $M_\alpha$ using $\mathfrak p_\imath$-aNNE. This example shows that no $\epsilon$ exists which enables DBSCAN to detect all three clusters. This is because the line for Peak\#3 (which is the mode of the sparse cluster) has $N_\epsilon$ values in-between those of the two valleys. In contrast, many settings of $\alpha$ of $M_\alpha$ can be used to detect all three clusters because the lines of the two valleys are lower than those of the three peaks.
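Since MBSCAN is DBSCAN's procedure applied to a different measure, it can be sketched compactly over a precomputed dissimilarity matrix. The following is an illustrative re-implementation (not the authors' Matlab code; as in DBSCAN, border points are assigned to the first cluster that reaches them, a standard simplification): points with $M_\alpha \ge \tau$ are core points, and clusters grow by expanding $\alpha$-neighbourhoods from core points; unreached points are noise.

```python
import numpy as np
from collections import deque

def mbscan(dis, alpha, tau):
    """DBSCAN-style clustering on a precomputed dissimilarity matrix.
    Use p_i-aNNE entries to detect mass-connected clusters (MBSCAN),
    or plain l2 distances to recover ordinary DBSCAN.  Returns a label
    per point; -1 marks noise."""
    dis = np.asarray(dis)
    n = len(dis)
    mass = (dis <= alpha).sum(axis=1)   # M_alpha for every point
    core = mass >= tau                  # core (high-mass) points
    labels = np.full(n, -1)
    cid = 0
    for i in range(n):
        if not core[i] or labels[i] != -1:
            continue
        labels[i] = cid
        queue = deque([i])
        while queue:                    # expand cluster cid
            j = queue.popleft()
            for k in np.nonzero(dis[j] <= alpha)[0]:
                if labels[k] == -1:
                    labels[k] = cid
                    if core[k]:         # only core points expand further
                        queue.append(k)
        cid += 1
    return labels
```

Running the same function with a distance matrix versus a $\mathfrak p_\imath$ matrix makes the point of this section concrete: the algorithm is unchanged, only the measure differs.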
\begin{figure} \centering \begin{subfigure}{.23\textwidth} \centering \includegraphics[width=1\linewidth]{curve-l2} \caption{$N_\epsilon$: $\ell_2$} \end{subfigure}% \begin{subfigure}{.23\textwidth} \centering \includegraphics[width=1\linewidth]{curve-anne} \caption{$M_\alpha$: $\mathfrak p_\imath$-aNNE} \end{subfigure} \caption{(a) A hard distribution for DBSCAN as estimated by $N_\epsilon$, where DBSCAN (which uses $N_\epsilon$) fails to detect all clusters using a threshold. (b) The distribution estimated by $M_\alpha$ from the same dataset, where MBSCAN (which uses $M_\alpha$) succeeds in detecting all clusters using a threshold.} \label{fig_mass-estimation} \end{figure} \begin{figure} \centering \begin{subfigure}{0.23\textwidth} \centering \includegraphics[width=120pt]{change-l2} \caption{$N_\epsilon$ vs $\epsilon$} \end{subfigure} \begin{subfigure}{0.23\textwidth} \centering \includegraphics[width=120pt]{change-anne} \caption{$M_\alpha$ vs $\alpha$} \end{subfigure} \caption{Change of neighbourhood density/mass wrt its parameter. $N_\epsilon$ uses $\ell_2$; and $M_\alpha$ uses $\mathfrak p_\imath$-aNNE. The peak numbers and valley numbers refer to those shown in Figure~\ref{fig_mass-estimation}(a). } \label{fig_change_rate} \end{figure} \begin{table*}[!htb] \centering \caption{Clustering results in $F_1$ scores. The best performer is boldfaced; the second best is underlined. } \label{tbl_results} \begin{tabular}{r|rrr||c|ccc|cc} \hline \multicolumn{4}{c||}{Datasets} & DP & \multicolumn{3}{c|}{DBSCAN} & \multicolumn{2}{c}{MBSCAN} \\ \hline Name & \#Points & \#Dim. 
& \#Clusters & $\ell_2$ & $\ell_2$ & ReScale & DScale & iForest & aNNE \\ \hline \multicolumn{1}{l}{Artificial data} & \multicolumn{3}{r}{(average$\Rightarrow$)} & \multicolumn{1}{c}{\textit{0.961}} & \textit{0.852} & \textit{0.941} & \multicolumn{1}{c}{\textit{0.985}} & \textit{0.969} & \textit{0.981} \\ \hline aggregation & 788 & 2 & 7 & \textbf{1.000} & 0.997 & 0.996 & \textbf{1.000} & 0.996 & \textbf{1.000} \\ compound & 399 & 2 & 6 & 0.867 & 0.791 & 0.862 & \textbf{0.942} & 0.875 & \underline{0.918} \\ jain & 373 & 2 & 2 & \textbf{1.000} & 0.976 & \textbf{1.000} & \textbf{1.000} & \textbf{1.000} & \textbf{1.000} \\ pathbased & 300 & 2 & 3 & 0.943 & 0.828 & 0.864 & \underline{0.987} & 0.986 & \textbf{0.995} \\ hard distribution & 1500 & 2 & 3 & \underline{0.994} & 0.667 & 0.985 & \textbf{0.995} & 0.987 & 0.992 \\ \hline \multicolumn{1}{l}{High-dimensional data} & \multicolumn{3}{r}{(average$\Rightarrow$)} & \multicolumn{1}{c}{\textit{0.627}} & \textit{0.446} & \textit{0.521} & \multicolumn{1}{c}{\textit{0.491}} & \textit{0.568} & \textit{0.727} \\ \hline ALLAML & 72 & 7129 & 2 & 0.706 & 0.484 & 0.729 & 0.484 & \underline{0.747} & \textbf{0.820} \\ COIL20 & 1440 & 1024 & 20 & 0.724 & 0.842 & 0.861 & 0.839 & \underline{0.865} & \textbf{0.952} \\ Human Activity & 1492 & 561 & 6 & \textbf{0.595} & 0.331 & 0.352 & 0.374 & 0.402 & \underline{0.502} \\ Isolet & 1560 & 617 & 26 & \underline{0.517} & 0.194 & 0.234 & 0.426 & 0.289 & \textbf{0.605} \\ lung & 203 & 3312 & 5 & \underline{0.703} & 0.489 & 0.544 & 0.489 & 0.649 & \textbf{0.921} \\ TOX 171 & 171 & 5748 & 4 & \underline{0.519} & 0.336 & 0.403 & 0.336 & 0.454 & \textbf{0.563} \\ \hline \multicolumn{1}{l}{General data} & \multicolumn{3}{r}{(average$\Rightarrow$)} & \multicolumn{1}{c}{\textit{0.876}} & \textit{0.680} & \textit{0.820} & \multicolumn{1}{c}{\textit{0.860}} & \textit{0.873} & \textit{0.896} \\ \hline breast & 699 & 9 & 2 & \textbf{0.970} & 0.824 & 0.951 & 0.966 & 0.963 & \underline{0.964} \\ control & 
600 & 60 & 6 & 0.736 & 0.531 & 0.663 & 0.844 & \underline{0.738} & \textbf{0.854} \\ gps & 163 & 6 & 2 & \underline{0.811} & 0.753 & \underline{0.811} & \underline{0.811} & \textbf{0.819} & 0.766 \\ iris & 150 & 4 & 3 & \underline{0.967} & 0.848 & 0.905 & 0.926 & 0.966 & \textbf{0.973} \\ seeds & 210 & 7 & 3 & \underline{0.909} & 0.750 & 0.885 & 0.871 & 0.907 & \textbf{0.922} \\ shape & 160 & 17 & 9 & \underline{0.761} & 0.581 & 0.680 & 0.722 & 0.725 & \textbf{0.787} \\ thyroid & 215 & 5 & 3 & 0.868 & 0.584 & 0.850 & 0.828 & \underline{0.915} & \textbf{0.916} \\ WDBC & 569 & 30 & 2 & \textbf{0.933} & 0.600 & 0.765 & 0.894 & 0.895 & \underline{0.927} \\ wine & 178 & 13 & 3 & \underline{0.933} & 0.645 & 0.866 & 0.881 & 0.927 & \textbf{0.959} \\ \hline \hline \multicolumn{4}{r||}{Grand Average} & 0.823 & 0.653 & 0.760 & 0.781 & 0.805 & 0.867 \\ \multicolumn{4}{r||}{Number of datasets with the \textbf{Best} $F_1$ score} & 5 & 0 & 1 & 4 & 2 & 14 \\ \multicolumn{4}{r||}{\#wins/\#draws/\#loses wrt MBSCAN-$\mathfrak p_\imath$-aNNE} & 5/2/13 & 0/0/20 & 1/2/18 & 4/2/14 & 1/1/18 & - \\ \hline \end{tabular} \end{table*} \section{Experiments} \label{sec_experiments} The aim of the experiments is to compare the clustering performance of DBSCAN using different dissimilarities relative to that of the state-of-the-art density-based clustering algorithm DP \cite{rodriguez2014clustering}. In addition to the three dissimilarity measures, i.e., $\ell_2$, $\mathfrak p_\imath$-iForest and $\mathfrak p_\imath$-aNNE, two recent distance transformation methods, ReScale \cite{zhu2016density} and DScale \cite{DSCALE}, are also included. Note that DBSCAN using $\mathfrak p_\imath$ is denoted as MBSCAN, as it is a mass-based clustering method. All algorithms used in our experiments are implemented in Matlab (the source code with a demo can be obtained from \url{https://github.com/cswords/anne-dbscan-demo}). We produced GPU accelerated versions of all implementations.
The experiments ran on a machine having CPU: i5-8600k 4.30GHz processor, 8GB RAM; and GPU: GTX Titan X with 3072 CUDA \cite{4490127} cores at 1075MHz \& 12GB graphics memory. A total of 20 datasets\footnote{The artificial datasets are from \url{http://cs.uef.fi/sipu/datasets/} \cite{gionis2007clustering,zahn1971graph,chang2008robust,jain2005data} except that the hard distribution dataset is from \url{https://sourceforge.net/p/density-ratio/} \cite{zhu2016density}, the high-dimensional datasets are from \url{http://featureselection.asu.edu/datasets.php} \cite{li2016feature}, and the rest of the datasets are from \url{http://archive.ics.uci.edu/ml} \cite{dua2017uci}.} are used in the experiments. They are from three categories: 5 artificial datasets, 6 high-dimensional datasets, and 9 general datasets. They are selected because they represent diverse datasets in terms of data size, number of dimensions and number of clusters. The data characteristics of these datasets are shown in the first four columns of Table \ref{tbl_results}. All datasets are normalised using $min$-$max$ normalisation so that each attribute is in [0,1] before the experiments begin. We compared all clustering results in terms of the best $F_{1}$ score \cite{Fmeasure}\footnote{$F_{1}=\frac{1}{k}\sum_{i=1}^{k}\frac{2p_{i}r_{i}}{p_{i}+r_{i}}$, where $p_{i}$ and $r_{i}$ are the precision and the recall for cluster $i$, respectively. $F_{1}$ is preferred over other evaluation measures such as Purity \cite{Manning:2008} and Normalized Mutual Information (NMI) \cite{strehl2002cluster} because these measures do not take into account noise points which are identified by a clustering algorithm. Based on these measures, algorithms can obtain a high clustering performance by assigning many points to noise, which can be misleading in a comparison.} obtained from a search over each algorithm's parameters. We search each parameter within a reasonable range.
The ranges used for all algorithms/dissimilarities are provided in Table \ref{tbl_param_range}. Because $\mathfrak p_\imath$ used in MBSCAN is based on randomised methods, we report the mean $F_{1}$ score over 10 trials for each dataset. \begin{table}[ht] \renewcommand{\arraystretch}{1.1} \setlength{\tabcolsep}{2.8pt} \centering \caption{Search ranges of parameters used.} \begin{tabular}{@{}c|c|l@{}} \hline & Description & Candidates \\ \hline \multirow{2}{*}{DP} & Target cluster number & $k \in [2...40]$ \\ & neighbourhood size in $N_\epsilon$ & $\epsilon \in [0.001...0.999]$ \\ \hline {DBSCAN} & $MinPts$ & $MinPts \in [2...40]$ \\ {MBSCAN}& neighbourhood size in $N_\epsilon$ & $\epsilon \in [0.001...0.999]$ \\ \hline {ReScale} & precision factor & $f=200$ * \\ {DScale} & neighbourhood size in $N_\eta$ & $\eta \in [0.05...0.95]$ \\ \hline {aNNE} & Ensemble size & $t= 200$ \\ iForest & Subsample size & $\psi \in [2, \lceil n/2 \rceil]$ $\dagger$ \\ \hline \multicolumn{3}{p{8cm}}{* $f$ parameter is required for ReScale only.}\\ \multicolumn{3}{p{8cm}}{$\dagger$ A search of 10 values with equal interval in the range.} \end{tabular} \label{tbl_param_range} \end{table} \subsection{Clustering results} Table \ref{tbl_results} shows that MBSCAN using $\mathfrak p_\imath$-aNNE has the best performance overall. Its $F_1$ scores are the best on 14 out of 20 datasets. The closest contender DP has the best $F_1$ scores on 5 datasets only. In two other performance measures, MBSCAN using $\mathfrak p_\imath$-aNNE has 13 wins, 2 draws and 5 losses against DP; and it has a higher average $F_1$ score too (0.867 versus 0.823). One notable standout is on the high-dimensional datasets: among the three categories of datasets, MBSCAN using $\mathfrak p_\imath$-aNNE has the largest gap in average $F_1$ score over the other contenders there. With reference to DBSCAN, the gap is close to 0.3 in $F_1$ score; even compared with the closest contender DP, the gap is 0.1 in $F_1$ score.
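The $F_1$ measure defined in the footnote above can be sketched as follows. This is an illustrative implementation only: the footnote does not specify how predicted clusters are matched to ground-truth clusters, so greedy best-overlap matching is a simplifying assumption here, and noise points (labelled $-1$) are treated as forming no predicted cluster.

```python
import numpy as np

def clustering_f1(y_true, y_pred):
    """F1 = (1/k) * sum_i 2*p_i*r_i/(p_i + r_i), where p_i and r_i are
    the precision and recall of the predicted cluster that best
    overlaps ground-truth cluster i (greedy matching is an assumption;
    noise points labelled -1 form no predicted cluster)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    scores = []
    for c in np.unique(y_true):
        true_mask = y_true == c
        best = 0.0
        for p in np.unique(y_pred):
            if p == -1:          # noise is not a cluster
                continue
            pred_mask = y_pred == p
            overlap = np.count_nonzero(true_mask & pred_mask)
            if overlap == 0:
                continue
            prec = overlap / np.count_nonzero(pred_mask)
            rec = overlap / np.count_nonzero(true_mask)
            best = max(best, 2 * prec * rec / (prec + rec))
        scores.append(best)
    return float(np.mean(scores))
```

A perfect clustering yields 1.0 regardless of how the cluster labels are named; assigning all points to noise yields 0, which is why $F_1$ is preferred over Purity and NMI here.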
The superiority of $\mathfrak p_\imath$-aNNE over $\mathfrak p_\imath$-iForest is also highlighted on these high-dimensional datasets. MBSCAN using $\mathfrak p_\imath$-iForest wins over the original DBSCAN on all datasets except one (aggregation). This version of MBSCAN uplifts the clustering performance of DBSCAN significantly, to almost the same level as DP. A significance test is conducted over MBSCAN with $\mathfrak p_\imath$-aNNE, MBSCAN with $\mathfrak p_\imath$-iForest and DP. Figure \ref{rank} shows the result of the test---MBSCAN using $\mathfrak p_\imath$-aNNE performs the best and is significantly better than DP and MBSCAN using $\mathfrak p_\imath$-iForest; and there is no significant difference between DP and MBSCAN using $\mathfrak p_\imath$-iForest. \begin{figure}[!htb] \centering \includegraphics[width=\linewidth]{rank} \caption{Critical difference (CD) diagram of the post-hoc Nemenyi test ($\alpha=0.10$). A line is shown between two algorithms if their rank gap is smaller than the CD; otherwise, the difference is significant. } \label{rank} \end{figure} DBSCAN is known to be sensitive to its parameter settings. As MBSCAN uses exactly the same algorithm, it has the same sensitivity. A caveat is in order. Although Isolation Similarity consistently outperforms the distance measure in DBSCAN, our preliminary experiment using DP shows that the result is mixed. An analysis of DP, similar to that provided in this paper, is required in order to ascertain the condition(s) under which DP performs well and Isolation Similarity can help. \subsection{Complexity and runtime} MBSCAN with $\mathfrak p_\imath$-aNNE is much faster than MBSCAN with $\mathfrak p_\imath$-iForest because the time complexity of building a maximum-size isolation tree and testing one point is $O(\psi^2)$, whereas aNNE takes $O(\psi)$. The space complexity of storing the trained aNNE model is $O(t \cdot \psi)$; that of iForest is $O(t \cdot \psi \cdot \log\psi)$. 
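Our reading of the aNNE evaluation can be sketched as follows (an illustrative sketch only; the parameters follow the paper's $t$ and $\psi$, everything else is our own choice). Each pair of points is checked against the Voronoi partition induced by every subsample — two points are deemed similar in a partition when they share the same nearest subsample point — which is why a single evaluation costs $O(\psi)$ per partition:

```python
import random

def anne_similarity(x, y, data, t=200, psi=8, seed=0):
    """Sketch of nearest-neighbour-induced Isolation Similarity.

    For each of t subsamples of size psi, x and y count as similar if
    they fall in the same Voronoi cell (same nearest subsample point);
    the similarity is the fraction of subsamples where this happens."""
    rng = random.Random(seed)

    def nearest(p, sample):
        # index of the subsample point closest to p (squared Euclidean)
        return min(range(len(sample)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(sample[i], p)))

    hits = 0
    for _ in range(t):
        sample = rng.sample(data, psi)
        if nearest(x, sample) == nearest(y, sample):
            hits += 1
    return hits / t
```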
Table \ref{tbl_runtime} shows the GPU runtime results on the four largest datasets. In contrast to DBSCAN and DP, MBSCAN needs to pre-compute the dissimilarity matrix in a pre-processing step, which takes $O(n^2)$ time. This pre-processing accounts for most of the MBSCAN runtime reported in Table~\ref{tbl_runtime}. $\mathfrak p_\imath$-aNNE is still faster than ReScale on high-dimensional datasets, though it is one order of magnitude slower than DBSCAN and DP. \begin{table}[!htb] \centering \renewcommand{\arraystretch}{1.2} \setlength{\tabcolsep}{5pt} \caption{Runtime in GPU seconds} \begin{tabular}{@{}r|c|cc|cc@{}} \hline Datasets & DP & \multicolumn{2}{c|}{DBSCAN} & \multicolumn{2}{c}{MBSCAN} \\ \cline{3-6} & & Original & ReScale & \multicolumn{1}{c}{iForest} & \multicolumn{1}{c}{aNNE} \\ \hline Hard dist. & 0.11 & 0.07 & 0.08 & 154 & 0.50 \\ COIL20 & 0.03 & 0.02 & 3.74 & 762 & 0.45 \\ Human Act. & 0.10 & 0.03 & 2.12 & 146 & 0.48 \\ Isolet & 0.08 & 0.03 & 2.64 & 472 & 0.48 \\ \hline \end{tabular} \label{tbl_runtime} \end{table} \vspace{3mm} \begin{center} \textbf{Summary: aNNE versus iForest implementations of Isolation Similarity} \end{center} We find that the aNNE implementation of Isolation Similarity is better than the iForest implementation because the aNNE implementation is more: \begin{itemize} \item Effective on datasets with varied densities and on high-dimensional datasets. \item Amenable to GPU acceleration, because aNNE can be implemented in almost pure matrix manipulations. Thus, aNNE runs many orders of magnitude faster than iForest when a large $\psi$ is required, because the GPU implementation of aNNE has almost constant runtime with respect to $\psi$. \item Stable, because aNNE's randomisation is a result of sampling data subsets only. 
\end{itemize} \section{Conclusions} \label{sec_conclusions} We make four contributions in this paper: \renewcommand{\labelenumi}{\arabic{enumi})} \begin{enumerate} \item Identifying shortcomings of tree-induced Isolation Similarity; proposing a nearest neighbour-induced Isolation Similarity to overcome these shortcomings; and establishing three advantages of the nearest neighbour-induced Isolation Similarity over the tree-induced one. \item Formally proving the characteristic of the nearest neighbour-induced Isolation Similarity. This is the first proof since the introduction of Isolation Kernel \cite{ting2018IsolationKernel}. \item Providing a formal definition of mass-connected clusters and an explanation why detecting mass-connected clusters is a better approach in overcoming the shortcoming of DBSCAN (which detects density-connected clusters) in datasets with varied densities. This differs fundamentally from the existing density-based approaches of the original DBSCAN, DP and ReScale which all employ a distance measure to compute density. \item Conducting an empirical evaluation to validate the advantages of (i) nearest-neighbour-induced Isolation Similarity over tree-induced Isolation Similarity; and (ii) mass-based clustering using Isolation Similarity over four density-based clustering algorithms, i.e., DBSCAN, DP, ReScale and DScale. \end{enumerate} In addition, we show for the first time that it is possible to uplift the clustering performance of the classic DBSCAN, through the use of nearest-neighbour-induced Isolation Similarity, to surpass that of DP---the state-of-the-art density-based clustering algorithm. \section*{Acknowledgements} This material is based upon work supported by eSolutions of Monash University (Xiaoyu Qin); and partially supported by the Air Force Office of Scientific Research, Asian Office of Aerospace Research and Development (AOARD) under award number: FA2386-17-1-4034 (Kai Ming Ting). \bibliographystyle{aaai}
\section{Introduction}\label{introd} When dealing with functional data, the use of dimension reduction techniques arises as a most natural idea. Some of these techniques are based upon the use of general (linear) finite dimensional projections. This is the case of functional principal component analysis (FPCA), see \citet{liy13}, although the so-called functional partial least squares (PLS) methodology is in general preferable when a response variable is involved; see \citet{del12} for a recent reference. Other common dimension reduction methods in the functional setting include sliced inverse regression (\citet{hsi09,jia14}) and additive models (\citet{zha13}). Also, the methods based on random projections could offer an interesting alternative. See, e.g., \citet{cue14} for a short overview of dimension-reduction techniques together with additional references. \medskip \it Some comments on the literature\rm. Our proposal here is concerned with a different, more radical, approach to dimension reduction, given by the so-called \bf variable selection methods\rm. The aim of variable selection, when applied to functional data, is to replace every infinite dimensional observation $\{x(t),\ t\in[0,1]\}$, with a finite dimensional vector $(x(t_1),\ldots,x(t_d))$. The selection of the ``variables'' $t_1,\ldots,t_d$ should be a consequence of a trade-off between two mutually conflicting goals: representativeness and parsimony. In other words, we want to retain as much information as possible (thus selecting relevant variables) employing a small number of variables (thus avoiding redundancy). It is clear that variable selection has, at least, an advantage when compared with other dimension reduction methods (PCA, PLS...) based on general projections: the output of any variable selection method is always directly interpretable in terms of the original variables, provided that the required number $d$ of selected variables is not too large. 
As a matter of fact, variable selection is sometimes the main target itself in many cases where the focus is on model simplification. We are especially interested in the ``intrinsic'' approaches to variable selection, in the sense that the final output should depend only on the data, not on any assumption on the underlying model (although the result should be interpretable in terms of the model). There is a vast literature on these topics published by researchers in machine learning or by mathematical statisticians. The approaches and the terminology used in these two communities are not always alike. Thus, in machine learning, variable selection is often referred to as \it feature selection\rm. Also, the methods we have called ``intrinsic'' are often denoted as ``filter methods'' in machine learning. It is very common as well (especially in the setting of regression models) to use the terms ``sparse'' or ``sparsity'' to describe situations in which variable selection is the first natural aim; see e.g., \cite{ger10} and \cite{ros13}. It has been also argued in \cite{kne11} that the standard sparsity models are sometimes too restrictive so that it is advisable to combine them with other dimension reduction techniques. The ``relevant'' variables in a functional model are sometimes called ``impact points'' \citep{mck10} or ``most predictive design points'' \citep{fer10}. Also, the term ``choice of components'' has been used by \cite{del12a} as a synonym of variable selection. Let us finally mention, with no attempt of exhaustiveness in mind, that the recent literature in functional variable selection includes a version of the classical lasso procedure \citep{zha14}, a study of consistency in the variable selection setup \citep{com12} and the use of inverse regression ideas in variable selection \citep{jialiu14}. The monograph \cite{guy06} contains a complete survey on feature extraction (including selection) from the point of view of machine learning. 
The overview paper by \cite{fan10} has a more statistical orientation. \medskip \it The functional classification problem\rm. In what follows we will focus on variable selection for the problem of supervised binary classification, with functional data. While the statement and basic ideas behind the supervised classification (or discrimination) problem are widely known (see, e.g., \citet{dev96}), we need to briefly recall them for the sake of clarity and for notation purposes. Suppose that an explanatory random variable $X$, taking values in a \it feature space\/ \rm ${\mathcal F}$, can be observed in the individuals of two populations $P_0$ and $P_1$. Let $Y$ denote a binary random variable, with values in $\{0,1\}$, indicating the membership to $P_0$ or $P_1$. On the basis of a data set ${\mathcal D}_n=((X_1,Y_1),\ldots,(X_n,Y_n))$ of $n$ independent observations drawn from $(X,Y)$, the supervised classification problem aims at predicting the membership class $Y$ of a new observation for which only the variable $X$ is known. A \it classifier\/ \rm or \it classification rule\/ \rm is just a measurable function $g:{\mathcal F}\rightarrow \{0,1\}$. It is natural to assess the performance of a classifier by the corresponding \it classification error\/ \rm $L={\mathbb P}(g(X)\neq Y)$. It is well-known that the classification error $L={\mathbb P}(g(X)\neq Y)$ is minimized by the so-called \it Bayes classifier\rm , $g^*(x)={\mathbb I}_{\{\eta(x)>1/2\}}$, where $\eta(x)={\mathbb E}(Y|X=x)={\mathbb P}(Y=1|X=x)$. Since $g^*$ is in general unknown, it must be approximated, in different ways, by data-driven classifiers. In our functional setting the feature space will be (unless otherwise stated) ${\mathcal F}={\mathcal C}[0,1]$, the space of real continuous functions defined on $[0,1]$, endowed with the usual supremum norm. 
Thus, our data will be of type $(X_1,Y_1),\ldots, (X_n,Y_n)$, where the $X_i$ are iid trajectories in ${\mathcal C}[0,1]$ drawn from a stochastic process $X=X(t)=X(\omega,t)$. When no confusion is possible, we will denote the whole process by $X$. When convenient, $X(t)$ will be denoted $X_t$. Several functional classifiers have been considered in the literature (see, e.g., \citet{bai11b} for a survey). Among them, maybe the simplest one is the so-called $k$-nearest neighbors rule ($k$-NN). Additionally, we will also consider, as a simple standard choice, the classical linear Fisher's classifier (henceforth LDA), applied to the selected variables. \medskip \it The purpose and contents of this paper\rm. (a) In Section \ref{max-hunting} we propose a ``maxima hunting'' (MH) method for variable selection. It is essentially based on the idea of selecting the local maxima $(t_1,\ldots,t_d)$ of the function ${\mathcal V}^2(t)={\mathcal V}^2(X(t),Y)$, where ${\mathcal V}^2$ denotes the ``distance covariance'' association measure for random variables due to \citet{sze07}. An alternative version of the MH procedure can be obtained by replacing ${\mathcal V}^2(t)$ by the ``distance correlation'' ${\mathcal R}^2(t)$. See Section \ref{aux} for a short review of the definitions and properties of ${\mathcal V}^2$ and ${\mathcal R}^2$. Some useful simplified versions for ${\mathcal V}^2$ are obtained in Th. \ref{expresiones} of Section \ref{max-hunting}, for the particular case where $Y$ is a binary variable. A result of consistent estimation (Th. \ref{th:consistency}) for the maxima of ${\mathcal V}^2$ is also proved in that section. (b) In Section \ref{motiv} we give several models (identified in terms of the conditional distributions $X(t)|Y=j$) in which the optimal classification rule depends only on a finite number of variables. We also show that in some of these models the variables to be selected coincide with the maxima of ${\mathcal V}^2$. 
These results provide a theoretical basis for the techniques of variable selection in functional classification models. Usually these techniques are considered from an exclusively algorithmic or computational point of view. It is therefore of some interest to motivate them in ``population terms'', by identifying some specific models where these techniques have full sense. As pointed out by \cite{bia14}, \it ``Curiously, despite a huge research activity in this area, few attempts have been made to connect the rich theory of stochastic processes with functional data analysis''\rm. So the present paper can be seen as a contribution to partially fill this gap. (c) An extensive simulation study, comparing our variable selection methods with other dimension reduction procedures (as well as with the ``baseline option'' of doing no variable selection at all) is included in Section \ref{sim}. Three real data examples are discussed in Section \ref{real}. Section \ref{conclusiones} includes some final conclusions as well as a ranking of all considered methods. All the proofs are included in the Appendix. \section{An auxiliary tool: the distance covariance}\label{aux} The problem of finding appropriate association measures between random variables (beyond the standard linear correlation coefficient) has received increasing attention in recent years; see for instance \citet{hal11}. We will use here the association measure proposed by \citet{sze07}, see also \citet{sze09}. It is called \it distance covariance\/ \rm (or \it distance correlation\/ \rm in the standardized version). 
It has a number of valuable properties: first, it can be used to define the association between two random variables $X$ and $Y$ of arbitrary (possibly different) dimensions; second, it characterizes independence in the sense that the distance covariance between $X$ and $Y$ is zero if and only if $X$ and $Y$ are independent; third, the distance correlation can be easily estimated in a natural plug-in way, with no need of smoothing or discretization. \begin{definition}\label{def:dcov} Given two random variables $X$ and $Y$ taking values in ${\mathbb R}^p$ and ${\mathbb R}^q$, respectively, let $\varphi_{X,Y}$, $\varphi_{X}$, $\varphi_{Y}$ be the characteristic functions of $(X,Y)$, $X$ and $Y$, respectively. Assume that the components of $X$ and $Y$ have finite first-order moments. The distance covariance between $X$ and $Y$ is the non-negative number ${\cal V}(X,Y)$ defined by \begin{equation} \label{dcov} {\cal V}^2(X,Y) = \int_ {\mathbb{R}^{p+q}} \mid \varphi_{X,Y}(u,v) -\varphi_X(u) \varphi_Y(v)\mid^2 w(u,v) du dv, \end{equation} with $w(u,v)= (c_p c_q \abs{u}_p^{1+p} \abs{v}_q^{1+q} )^{-1} $, where $c_d=\frac{\pi^{(1+d)/2}}{\Gamma((1+d)/2)}$ is half the surface area of the unit sphere in ${\mathbb R}^{d+1}$ and $|\cdot|_d$ stands for the Euclidean norm in ${\mathbb R}^d$. Finally, denoting ${\cal V}^2(X)={\cal V}^2(X,X)$, the (square) distance correlation is defined by ${\cal R}^2(X,Y)=\frac{{\cal V}^2(X,Y)}{\sqrt{{\cal V}^2(X){\cal V}^2(Y)}}$ if ${\cal V}^2(X){\cal V}^2(Y)>0$, ${\cal R}^2(X,Y)$ $=0$ otherwise. \end{definition} Note that these definitions make sense even if $X$ and $Y$ have different dimensions (i.e., $p\neq q$). In addition, the association measure ${\cal V}^2(X,Y)$ can be consistently estimated through a relatively simple average of products calculated in terms of the mutual pairwise distances $|X_i-X_j|_p$ and $|Y_i-Y_j|_q$ between the sample values $X_i$ and $Y_j$; see \citet[expression (2.8)]{sze09}. 
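The sample estimator alluded to above is commonly written via double-centered pairwise distance matrices; the following is our own sketch of that standard plug-in formula (not code from the cited papers):

```python
import numpy as np

def sample_dcov2(X, Y):
    """Squared sample distance covariance V_n^2(X, Y).

    X, Y: arrays with n rows (samples); 1-d inputs are treated as n x 1."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    if X.ndim == 1:
        X = X[:, None]
    if Y.ndim == 1:
        Y = Y[:, None]
    # pairwise Euclidean distance matrices
    a = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    b = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    # double-center each matrix, then average the entrywise products
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    return (A * B).mean()
```

Note that, as in the definition, $X$ and $Y$ may have different dimensions $p\neq q$; only the number of samples must match.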
See also \cite{li12} for a different use of the distance correlation in variable selection. \section{Variable selection based on maxima hunting}\label{max-hunting} Our proposal is based on a direct use of the distance covariance association measure. We simply suggest selecting the values of $t$ corresponding to local maxima of the distance-covariance function ${\cal V}^2(X_t,Y)$ or, alternatively, of the distance correlation function ${\cal R}^2(X_t,Y)$. This method has a sound intuitive basis as it provides a simple natural way to deal with the relevance vs. redundancy trade-off: the selected values must carry a large amount of information on $Y$, which takes into account the \it relevance \rm of the selected variables. In addition, the fact of considering local maxima automatically takes care of the \it redundancy \rm problem, since the highly relevant points close to the local maxima are automatically excluded from consideration. This intuition is empirically confirmed by the results of Section \ref{sim}, where the practical performance of the maxima-hunting method is quite satisfactory. Figure \ref{fig:maxima} shows how the function ${\cal V}^2(X_t,Y)$ looks in two different examples. \begin{figure}[h!]\begin{center} \includegraphics[scale=0.40]{./V.png}\par \caption{\footnotesize Left: 50 trajectories of model in Proposition \ref{BvsBST}. Right: Logistic model L11 (explained in Subsection \ref{estruct}) with 50 Ornstein--Uhlenbeck trajectories. ${\cal V}^2(X_t,Y)$ (scaled) is in black and the relevant variables are marked by vertical dashed lines.}\label{fig:maxima} \end{center} \end{figure} The extreme flexibility of these association measures allows us to consider the case of a multivariate response $Y$. So there is no conceptual restriction to apply the same ideas for multiple classification or even to a regression problem. However, we will limit ourselves here to the important problem of binary classification. 
In this case we can derive simplified expressions for ${\cal V}^2(X,Y)$ which are particularly convenient in order to get empirical approximations. This is next shown. For the sake of generality, the results of this section will be obtained for the $d$-variate case, although in the rest of the paper we will use them just for $d$=1. Thus, throughout this subsection, $d$ will denote a natural number and $t$ will stand for a vector $t=(t_1,\ldots, t_d)$ $\in [0,1]^d$. Also, for a given process $X$, we abbreviate $X(t)=(X(t_1),\ldots,X(t_d))$ by $X_t$ and $Z'$ will denote an independent copy of a random variable $Z$. We write $u^\top$ and $|u|_d$ to denote the transposed and the Euclidean norm of a vector $u\in\mathbb{R}^d$. Let $\eta(x)=\mathbb{P}(Y=1|X=x)$ so that $Y|X \sim \mbox{Binomial}(1,\eta(X))$ where the symbol $\sim$ stands for ``is distributed as''. Observe that $p=\mathbb{P}(Y=1)=\mathbb{E}(\mathbb{P}(Y=1|X))=\mathbb{E}(\eta(X))$. Our variable selection methodology will heavily depend on the function ${\cal V}^2(X_t,Y)$ giving the distance covariance dependence measure between the marginal vector $X(t)=X_t$, for $t\in [0,1]^d$ and $d\in\mathbb{N}$, and the class variable $Y$. The following theorem gives three alternative expressions for this function. The third one will be particularly useful in what follows. \begin{theorem}\label{expresiones} In the setting of the functional classification problem above stated, the function ${\cal V}^2(X_t,Y)$ defined in (\ref{dcov}) can be alternatively calculated with the following expressions, \begin{equation} \label{e1} \hspace{0.25cm}(a) \hspace{1.5cm} {\cal V}^2(X_t,Y)=\frac{2}{c_d} \int_{\mathbb{R}^d} \frac{\abs{\zeta(u,t)}^2}{|u|_d^{d+1}}du,\hspace{3cm} \end{equation} where $\zeta(u,t)=\E{\left( \eta(X)-p\right)e^{iu^\top X_t}}$ and $c_d$ is given in Definition \ref{def:dcov}. 
{\begin{align}\label{e2} (b) \hspace{1.5cm} {\cal V}^2(X_t,Y)=& -2\E{(\eta(X)-p)(\eta(X')-p)|X_t-X'_t|_d}\nonumber \\ =&-2\E{(Y-p)(Y'-p)|X_t-X'_t|_d}, \end{align}} where $(X^\prime, Y^\prime)$ denotes an independent copy of $(X,Y)$, respectively. \begin{equation} \label{e3} \hspace{0.5cm}(c) \hspace{1.5cm} {\cal V}^2(X_t,Y)=4p^2(1-p)^2 \left[ I_{01}(t) - \frac{I_{00}(t)+I_{11}(t)}{2}\right], \end{equation} where $I_{i j}(t)=\Ep{|X_t - X'_t|_d\, |\, Y=i,Y'=j}$. \end{theorem} \smallskip In a training sample $\{(X_i,Y_i),\ i=1,\ldots,n\}$ denote by $X^{(0)}_1,\ldots, X^{(0)}_{n_0}$ and $X^{(1)}_1,\ldots, X^{(1)}_{n_1}$ the $X$-observations corresponding to values $Y_i=0$ and $Y_i=1$, respectively. In this section, we use these data to obtain an estimator of ${\cal V}^2(X_t,Y)$, which is uniformly consistent in $t$. As a consequence, we can estimate the local maxima of ${\cal V}^2(X_t,Y)$: using part (c) of Theorem \ref{expresiones}, a natural estimator for ${\cal V}^2(X_t,Y)$ is \[ {\cal V}_n^2(X_t,Y)=4\hat p^2(1-\hat p)^2 \left[ \hat I_{01}(t) - \frac{\hat I_{00}(t)+\hat I_{11}(t)}{2}\right], \] where $\hat p=n_1 /(n_0 + n_1)$, $\hat I_{rr}(t) = \frac{2}{n_r(n_r-1)} \sum_{i<j} |X^{(r)}_i(t) - X^{(r)}_j(t)|_d,$ for $r=0,1$, and $\hat I_{01}(t) = \frac{1}{n_0n_1} \sum_{i=1}^{n_0} \sum_{j=1}^{n_1} |X^{(0)}_i(t) - X^{(1)}_j(t)|_d.$ The uniform strong consistency of ${\cal V}_n^2(X_t,Y)$ is established in Theorem \ref{th:consistency} below. \begin{theorem} \label{th:consistency} Let $X=X_t$, with $t\in[0,1]^d$, be a process with continuous trajectories almost surely such that $\mathbb{E}( \|X\|_\infty\log^+\|X\|_\infty )< \infty$. Then, ${\cal V}_n^2(X_t,Y)$ is continuous in $t$ and \[ \underset{t\in [0,1]^d}{\sup}|{\cal V}_n^2(X_t,Y)-{\cal V}^2(X_t,Y)| \to 0 \ \ \mbox{a.s.,\ as } n\to\infty. 
\] Hence, if we assume that ${\cal V}^2(X_t,Y)$ has exactly $m$ local maxima at $t_1,\cdots,t_m$, then ${\cal V}_n^2(X_t,Y)$ eventually also has at least $m$ local maxima at $t_{1n},\cdots,t_{mn}$ with $t_{jn}\to t_j$, as $n\to\infty$, a.s., for $j=1,\ldots,m$. \end{theorem} \section{Some theoretical, model-oriented motivation for variable selection and maxima-hunting}\label{motiv} The variable selection methods we are considering here for the binary functional classification problem are aimed at selecting \it a finite number of variables\rm. One might think that this is a ``too coarse'' approach for functional data. Nevertheless, we provide here some theoretical motivation by showing that, in some relevant models, variable selection is ``the best we can do'', in the sense that the Bayes rule (i.e., the optimal classifier) has an expression of type $g^*(X)=h(X(t_1),\cdots,X(t_d))$, so that it depends only on a finite (typically small) number of variables. In fact, in many situations, a proper variable selection leads to an improvement in efficiency (with respect to the baseline option of using the full sample curves), due to the gains associated with a smaller noise level. The distribution of $X(t)|Y=i$ will be denoted by $\mu_i$ for $i=0,1$. In all the examples below the considered processes are Gaussian, i.e., for all $t_1,\ldots,t_m\in[0,1]$, with $m\in{\mathbb N}$, the finite-dimensional marginal $(X(t_1),\ldots,X(t_m))|Y=i$ has a normal distribution in ${\mathbb R}^m$ for $i=0,1$. Many of the models considered have non-smooth, Brownian-like trajectories. These models play a very relevant role in statistical applications, in particular to the classification problem; see, e.g., \citet{lin09}. Let us now recall some basic notions and results to be used throughout (see, e.g., \citet[ch. 
4]{ath06}, for details): $\mu_0$ is said to be \it absolutely continuous with respect to $\mu_1$\/ \rm (which is denoted by $\mu_0 \ll\mu_1$) if and only if $\mu_1(A)=0$ entails $\mu_0(A)=0$, $A$ being a Borel set in ${\mathcal C}[0,1]$. Two probability measures $\mu_0$ and $\mu_1$ are said to be \it equivalent\/ \rm if $\mu_0 \ll\mu_1$ and $\mu_1 \ll\mu_0$; they are \it mutually singular\/ \rm when there exists a Borel set $A$ such that $\mu_1(A)=0$ and $\mu_0(A)=1$. The so-called \it Hajek-Feldman dichotomy \rm (see \citet{fel58}) states that if $\mu_0$ and $\mu_1$ are Gaussian, then they are either equivalent or mutually singular. The \it Radon-Nikodym Theorem\/ \rm establishes that $\mu_1\ll \mu_0$ if and only if there exists a measurable function $f$ such that $\mu_1(A)=\int_Af d\mu_0$ for all Borel sets $A$. The function $f$ (which is unique $\mu_0$-almost surely) is called the \it Radon-Nikodym derivative of $\mu_1$ with respect to $\mu_0$\rm. It is usually represented by $f=\frac{d\mu_1}{d\mu_0}$. Finally, in order to obtain the results in this section we need to recall (see \citet[Th. 1]{bai11a}) that \begin{equation} \label{eqBayesRN} \eta(x)=\left[\frac{1-p}{p}\frac{d\mu_0}{d\mu_1}(x)+1 \right]^{-1},\ \ \mbox{for}\ x\in {\mathcal S}, \end{equation} where ${\mathcal S}$ is the common support of $\mu_0$ and $\mu_1$, and $p=\mathbb{P}(Y=1)$. This equation provides the expression for the optimal rule $g^*(x)=\mathbb{I}_{\{\eta(x)>1/2\}}$ in some important cases where the Radon-Nikodym derivative is explicitly known. \medskip \it Some examples\rm. \noindent Two non-trivial situations in which the Radon-Nikodym derivatives can be explicitly calculated are those problems where $\mu_0$ is the standard Brownian motion $B(t)$, and $\mu_1$ corresponds to $B(t)$ plus a stochastic or a linear trend. In both cases the Bayes rule $g^*$ turns out to depend on just one value of $t$. To be more precise, it has the form $g^*(X)=h(X(1))$. 
This is formally stated in the following results. Proofs can be found in the Appendix. \begin{proposition}\label{BvsBST} Let us assume that $\mu_0$ is the distribution of a standard Brownian motion $B(t),\ t\in[0,1]$ and $\mu_1$ is the distribution of $B(t)+\theta t$, where $\theta$ is a random variable with distribution $N(0,1)$, independent of $B$. Then, the Bayes rule is given by $g^*(x)={\mathbb I}_{\left\{x_1^2 > 4\log\left( \frac{\sqrt{2}(1-p)}{p} \right)\right\}}(x),\ \ \mbox{for all}\ x\in{\mathcal C}[0,1]$. \end{proposition} As a particular case, when the prior probabilities of both groups are equal, $p=1/2$, we get $g^*(x)=1$ if and only if $ |x_1| > 2\sqrt{\log\sqrt{2}} \approx 1.18. $ \begin{proposition}\label{BvsBLT} Let us assume that $\mu_0$ is the distribution of a standard Brownian motion $B(t),\ t\in[0,1]$ and $\mu_1$ is the distribution of $B(t)+c t$, where $c\neq 0$ is a constant. Then, for $x\in{\mathcal C}[0,1]$ the Bayes rule is given by $g^*(x)={\mathbb I}_{\left\{x_1 > \frac{c}{2} - \frac{1}{c}\log\left(\frac{p}{1-p}\right)\right\}}(x)$, if $c>0$, and $g^*(x)={\mathbb I}_{\left\{x_1 < \frac{c}{2} - \frac{1}{c}\log\left(\frac{p}{1-p}\right)\right\}}(x)$, if $c<0$. \end{proposition} Before presenting our third example we need some additional notation. Let us now define the countable family of \it Haar functions\rm, $ \varphi_{m,k}=\sqrt{2^{m-1}} \left[ \mathbb{I}_{\left( \frac{2k-2}{2^m},\frac{2k-1}{2^m}\right)} \right.$ $\left. - \mathbb{I}_{\left( \frac{2k-1}{2^m},\frac{2k}{2^m}\right)}\right], $ for $m,k\in{\mathbb N}$, $1\leq k\leq 2^{m-1}$. The family $\{\varphi_{m,k}\}$ is known to be an orthonormal basis in $L^2[0,1]$. Moreover, define the ``peak'' functions $\Phi_{m,k}$ by \begin{equation} \Phi_{m,k}(t)=\int_0^t\varphi_{m,k}(s)ds.\label{peak} \end{equation} We want to use these peak functions to define the trend of the $\mu_1$ distribution in another model of type ``Brownian versus Brownian plus trend''. 
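As a numeric companion to this section (Python; the helper names are ours and the code is purely illustrative), the following evaluates the threshold of the rule in Proposition \ref{BvsBST} and the peak functions $\Phi_{m,k}$ just defined:

```python
import math

def bayes_threshold(p=0.5):
    """|x(1)| threshold for the Brownian-plus-random-trend model:
    g*(x) = 1 iff x(1)^2 > 4*log(sqrt(2)*(1-p)/p)."""
    rhs = 4 * math.log(math.sqrt(2) * (1 - p) / p)
    return math.sqrt(rhs) if rhs > 0 else 0.0  # rhs <= 0: rule always predicts 1

def haar(m, k, t):
    """Haar function phi_{m,k}(t)."""
    a, b, c = (2 * k - 2) / 2**m, (2 * k - 1) / 2**m, 2 * k / 2**m
    s = math.sqrt(2 ** (m - 1))
    return s if a < t < b else (-s if b < t < c else 0.0)

def peak(m, k, t, steps=10000):
    """Phi_{m,k}(t) = int_0^t phi_{m,k}(s) ds, by a midpoint Riemann sum."""
    h = t / steps
    return h * sum(haar(m, k, (i + 0.5) * h) for i in range(steps))
```

For $p=1/2$ the threshold evaluates to about $1.18$, and $\Phi_{m,k}$ attains its maximum $2^{-(m+1)/2}$ at $t=(2k-1)/2^m$ (e.g., $\Phi_{1,1}(1/2)=1/2$).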
In this case the Bayes rule depends just on three points. \begin{proposition}\label{BvsBTri} Let us assume that $\mu_0$ is the distribution of a standard Brownian motion $B(t),\ t\in[0,1]$ and $\mu_1$ is the distribution of $B(t)+\Phi_{m,k}(t)$, where $\Phi_{m,k}$ is one of the peak functions defined above. Then, for $x\in{\mathcal C}[0,1]$ the regression function $\eta(x)={\mathbb E}(Y|X=x)$ is \begin{align} \label{etaphi} \eta(x)=\left\{ \frac{1-p}{p}\exp \left( \frac{1}{2} - 2^{\frac{m-1}{2}} \left[\left( x_{\frac{2k-1}{2^m}}-x_{\frac{2k-2}{2^m}}\right)+ \left( x_{\frac{2k-1}{2^m}}-x_{\frac{2k}{2^m}}\right) \right] \right)+1 \right\}^{-1} \end{align} and the Bayes rule $ g^*(x)={\mathbb I}_{\{\eta(x)>1/2\}}$ fulfils $g^*(x)=1$ if and only if \begin{align} \label{etarule} \left( x_{\frac{2k-1}{2^m}}-x_{\frac{2k-2}{2^m}}\right) + \left( x_{\frac{2k-1}{2^m}}-x_{\frac{2k}{2^m}}\right) > \frac{1}{\sqrt{2^{m+1}}} - \frac{1}{\sqrt{2^{m-1}}} \log\left(\frac{p}{1-p}\right). \end{align} \end{proposition} Let us recall that, according to Cameron-Martin Theorem (see \citet[p. 24]{mor10}), in order to get the equivalence of $\mu_1$ and $\mu_0$ the trend function is required to belong to the Dirichlet space ${\mathcal D}[0,1]$ of real functions $F$ defined in $[0,1]$ which have a derivative $F^\prime$ in $L^2[0,1]$ such that $F(t)=\int_0^tF^\prime(s)ds$. It can be seen (\citet[p. 28]{mor10}) that $\{\Phi_{m,k}\}$ is an orthonormal basis for ${\mathcal D}[0,1]$. \begin{remark} Analogous calculations can be performed (still obtaining explicit expressions for the Bayes rule of type $g^*(x)=g(x(t_1),\ldots,x(t_d))$), using a rescaled Brownian motion $\sigma B(t)$ or the Brownian Bridge instead of $B(t)$, or a piecewise linear trend instead of these. Likewise, other models could be obtained by linear combinations in the trend functions or by finite mixtures of other simpler models. Many of them have been included in the simulation study of Section \ref{sim}. 
\end{remark} Next, we will provide some theoretical support for the maxima-hunting method, by showing that in some specific useful models the optimal classification rule depends on the maxima of the distance covariance function ${\cal V}^2(X_t,Y)$, although in some particular examples, other points (closely linked to the maxima) are also relevant. \begin{proposition}\label{maximo-unico} Under the models assumed in Propositions \ref{BvsBST} and \ref{BvsBLT}, the corresponding distance covariance functions ${\cal V}^2(X_t,Y)$ have both a unique relative maximum at the point $t=1$. \end{proposition} \begin{remark}\label{rem:maximo} Other similar results could be obtained for the model considered in Proposition \ref{BvsBTri} as well as for the Brownian bridge vs. Brownian motion model. \end{remark} The model considered in Proposition \ref{BvsBST} provides a clear example of the advantages of using the distance covariance measure ${\cal V}^2(X_t,Y)$ rather than the ordinary covariance $Cov^2(X_t,Y)$ in the maxima-hunting procedure. Indeed, note that in this case, $Cov^2(X_t,Y) = p^2(1-p)^2({\mathbb E}(X(t)|Y=0)-{\mathbb E}(X(t)|Y=1))^2 = 0,$ for all $t\in[0,1]$, so that the ordinary covariance is useless to detect any difference between the values of $t$. \section{A simulation study}\label{sim} We describe here in detail the methods under study and the models to be considered together with a summary of the results. The full outputs can be found in \url{www.uam.es/antonio.cuevas/exp/outputs.xlsx}. \subsection{The variable selection methods under study. Criteria for comparisons}\label{comp} These are the methods, and their corresponding notations as they appear in the tables and figures below. 1. {\bf Maxima-hunting}. 
The functional data $x(t),$ $t\in[0,1]$ are discretized to $(x(t_1),\ldots,$ $x(t_N))$, so a non-trivial practical problem is to decide which points in the grid are the local maxima: a point $t_i$ is declared to be a local maximum when it is the largest value on the sub-grid $\{t_j\}$, $j=i-h,\ldots,i+h$. The proper choice of $h$ depends on the nature and discretization pattern of the data at hand. Thus, $h$ could be considered as a smoothing parameter to be selected in an approximately optimal way. In our experiments $h$ is chosen by a validation step explained in the next section. Then, we sort the maxima $t_i$ by \bf relevance \rm (the value of the function at $t_i$). This seems to be the natural order and it produces better results than other simple sorting strategies. We denote these maxima-hunting methods by \textbf{MHR} and \textbf{MHV} depending on the use of ${\cal R}^2$ or ${\cal V}^2$. 2. \bf Univariate $t$-ranking method\/\rm, denoted by \textbf{T}, is frequently used when selecting relevant variables (see e.g. the review by \citet{fan10}). It is based on the simple idea of selecting the variables $X_t$ with the highest Student's $t$ two-sample scores $T(X_t)=|\bar{X}_{1t}-\bar{X}_{0t}|/\sqrt{s^2_{1t}/n_1+s^2_{0t}/n_0}$. 3. {\bf mRMR}. The minimum Redundancy Maximum Relevance algorithm, proposed in \citet{din05} and \citet{pen05}, is a relevant intrinsic variable selection method; see \citet{ber15} for a recent contribution. It aims at maximizing the relevance of the selected variables while avoiding an excess of redundancy, which seems particularly suitable for functional data. Denoting the set of selected variables by $S$, the variables are sequentially incorporated into $S$ with the criterion of maximizing the difference $Relevance(S)-Redundancy(S)$ (or alternatively the quotient $Relevance(S)/Redundancy(S)$).
Two ways of measuring relevance and redundancy have been proposed: first, we can use the Fisher statistic for relevance and the standard correlation for redundancy; second, we can use a three-fold discretized version of the so-called \textit{Mutual Information} measure for both relevance and redundancy (see \citet[equation (1)]{din05}). In principle these two approaches are intended for continuous and discrete variables, respectively. However, \citet{din05} report a good performance for the second one even in the continuous case. We have considered mRMR as a natural competitor for our maxima-hunting approach. We have computed both Fisher-Correlation and Mutual Information approaches with both difference and quotient criteria. For the sake of clarity we only show here the results of \textbf{FCQ} (Fisher Correlation Quotient) and \textbf{MID} (Mutual Information Difference), which, on average, outperform their corresponding counterparts. 4. {\bf PLS}. According to the available results (\citet{pre07,del12}) PLS is the ``method of choice'' for dimension reduction in functional classification. Note however that PLS is not a variable selection procedure; in particular it lacks the interpretability of variable selection. In some sense, the motivation for including PLS is to check how much we lose by restricting ourselves to variable selection methods, instead of considering other more general linear projection procedures (such as PLS) for dimension reduction. 5. {\bf Base}. The $k$-NN classifier is applied to the entire curves. The Base performance can be seen as a reference to assess the usefulness of dimension reduction methods. Somewhat surprisingly, Base is often outperformed. Note that the Base method cannot be implemented with LDA since this classifier typically fails with infinite- or high-dimensional data; see, e.g., \citet[Section 6.1]{cue14} for some insights and references.
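To fix ideas, the core of the maxima-hunting procedure described in method 1 can be sketched in a few lines of Python. This is only an illustrative sketch, not the code used in our experiments: the pointwise estimate of ${\cal V}^2(X_t,Y)$ relies on the representation ${\cal V}^2(X_t,Y)=4p^2(1-p)^2[I_{01}(t)-(I_{00}(t)+I_{11}(t))/2]$ of Theorem 1(c), here in its simpler V-statistic form (diagonal terms included), and all function names are our own.

```python
import numpy as np

def dcov2_curve(X, y):
    """Pointwise estimate of V^2(X_t, Y) on the grid, via
    V^2 = 4 p^2 (1-p)^2 [I01 - (I00 + I11)/2], where I_ij(t) is the
    mean of |X_t - X'_t| between classes i and j (V-statistic form:
    the zero diagonal terms of I00 and I11 are included).
    X: (n, N) array of discretized trajectories; y: (n,) labels in {0, 1}."""
    X0, X1 = X[y == 0], X[y == 1]
    p = y.mean()

    def mean_abs_diff(A, B):
        # I_ij(t) estimated by averaging |A_k(t) - B_l(t)| over all pairs
        return np.abs(A[:, None, :] - B[None, :, :]).mean(axis=(0, 1))

    I00 = mean_abs_diff(X0, X0)
    I11 = mean_abs_diff(X1, X1)
    I01 = mean_abs_diff(X0, X1)
    return 4 * p**2 * (1 - p)**2 * (I01 - (I00 + I11) / 2)

def local_maxima(score, h):
    """t_i is declared a local maximum when score[i] is the largest
    value on the sub-grid {i-h, ..., i+h}; the maxima are then sorted
    by relevance (tied points on a plateau are all returned)."""
    n = len(score)
    idx = [i for i in range(n)
           if score[i] == max(score[max(0, i - h):i + h + 1])]
    return sorted(idx, key=lambda i: -score[i])
```

The variables selected by the MHV variant would then be the first few indices returned by `local_maxima(dcov2_curve(X, y), h)`, with `h` tuned by validation; replacing the ${\cal V}^2$ curve by a distance correlation curve gives the MHR variant.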
The \bf classifiers \rm used in all cases are either $k$-NN, based on the Euclidean distance, or LDA (applied to the selected variables). Similar comparisons could be done with other classifiers, since the considered methods do not depend on the classifier. For comparing the different methods we use the natural accuracy measure, defined by the percentage of correct classification. \subsection{The structure of the simulation study}\label{estruct} Our simulation study consists of 400 experiments, aimed at comparing the practical performances of several intrinsic variable selection methods described in the previous subsection. These experiments are obtained by considering 100 different underlying models and 4 sample sizes, where by ``model'' we mean either \begin{itemize} \item[(M1)] a pair of distributions for $X|Y=0$ and $X|Y=1$ (corresponding to $P_0$ and $P_1$, respectively); in all cases, we take $p={\mathbb P}(Y=1)=1/2$; or \item[(M2)] the marginal distribution of $X$ plus $\eta(x)={\mathbb P}(Y=1|X=x)$. \end{itemize} Models vary in difficulty and number of relevant variables. In all the considered models the optimal Bayes rule turns out to depend on a finite number of relevant variables; see Section 3. The processes involved also exhibit different levels of smoothness. The full list of considered models is available in the Supplementary Material document. All of them belong to one of the following classes: 1. \bf Gaussian models\rm: they are denoted $G1, G1b,\ldots, G8$. All of them are generated according to the general pattern (M1). In all cases the distributions of $X(t)|Y=i$ are chosen among the following types: first, the \bf standard Brownian Motion\rm, $B$, in $[0,1]$. Second, \bf Brownian Motion, $BT$, with a trend \rm $m(t)$, i.e., $BT(t)=B(t)+m(t)$ (we have considered several choices for $m(t)$). Third, the \bf Brownian bridge\rm: $BB(t)=B(t)-tB(1)$.
Our fourth class of Gaussian processes is the \bf Ornstein–Uhlenbeck process\rm, with a covariance function of type $\gamma(s,t)=a\exp(-b|s-t|)$ and zero mean ($OU$) or different mean functions $m(t)$ ($OUt$). Finally, smoother processes have also been generated by convolving Brownian trajectories with Gaussian kernels. We have considered two levels of smoothing, denoted by sB and ssB. 2. \bf Logistic models\rm: they are defined through the general pattern (M2): the process $X=X(t)$ follows one of the above-mentioned distributions and $Y\sim\mbox{Binom}(1,\eta(X))$ with $\eta(x)=(1+e^{-\Psi(x(t_1),\cdots,x(t_d))})^{-1}$, a function of the relevant variables $x(t_1),\cdots,x(t_d)$. We have considered 15 versions of this model and a few variants, denoted $L1, L2$, $L3, L3b, \ldots, L15$. They correspond to different choices for the link function $\Psi$ (most of them linear or polynomial) and for the distribution of $X$. For example, in the models L2 and L8 we have $\Psi(x)=10x_{30}+10x_{70}$ and $\Psi(x)=10x_{50}^4+50x_{80}^3+20x_{30}^2$, respectively. 3. \bf Mixtures\rm: they are obtained by combining, via mixtures, the above-mentioned Gaussian distributions in several ways to define $X|Y=0$ and $X|Y=1$. These models are denoted M1, ..., M11 in the output tables. For each model, all the variable selection methods (as well as PLS) are checked for sample sizes $n=30$, 50, 100, 200. So we get $100\times 4=400$ experiments. All the functional simulated data are \bf discretized \rm to $(x(t_1), \ldots, x(t_{100}))$, where $t_i$ are equispaced points in $[0,1]$. In fact (to avoid the degeneracy $x(0)=0$ in the Brownian-like models) we take $t_1=6/105$. Similarly, in the case of the Brownian bridge, we also truncate at the end of the interval. The involved parameters are: the number $k$ of nearest neighbors in the $k$-NN classifier, the dimension of the reduced space (number of variables or PLS components), and the smoothing parameter $h$ in maxima-hunting methods.
These are set by standard data-based validation procedures. Parameter validation can be carried out either through a validation set or by cross-validation on the training set (see e.g. \cite{guy06}). In the case of the simulation study, validation and test samples of size 200 are randomly generated. In the real data sets we proceed by cross-validation. \subsection{A few numerical outputs from the simulations}\label{outputs} We have selected (with no particular criterion in mind) a sampling of just a few examples among the 400 experiments. The complete simulation outputs can be downloaded from \url{www.uam.es/antonio.cuevas/exp/outputs.xlsx}. Table 1 provides the performance (averaged over 200 runs) measured in terms of classification accuracy (percentages of correct classification). Models are presented in rows and methods in columns. The marked outputs correspond to the winner and the second-best method in each row. \begin{table} \caption{\footnotesize Average correct classification outputs, over 200 runs, with $n=50$.
} \begin{center} \begin{footnotesize} \begin{tabular}{lccccccc}\hline\noalign{\smallskip} \multicolumn{8}{c}{\rm \bf $k$-NN outputs}\\ Models & FCQ & MID & T & PLS & MHR & MHV & Base\\ \hline\noalign{\smallskip} L2\_OUt & 82.47 & 82.11 & 81.68 & \framebox{ 83.27} & 83.22 & \framebox{ 83.23} & 82.60\\ L6\_OU & 88.41 & 89.81 & 86.19 & \framebox{ 90.93} & 90.75 & \framebox{ 90.83} & 90.56\\ L10\_B & 81.09 & 85.02 & 81.13 & 85.90 & \framebox{ 87.27} & \framebox{ 87.42} & 85.46\\ L11\_ssB & 82.31 & 80.85 & 82.28 & 78.81 & \framebox{ 83.10} & \framebox{ 82.81} & 79.89\\ L12\_sB & 77.24 & 75.83 & \framebox{ 77.41} & 74.92 & \framebox{ 78.57} & 76.62 & 74.78\\ G1 & 65.86 & 70.70 & 65.57 & 66.95 & \framebox{ 71.59} & \framebox{ 71.80} & 70.10\\ G3 & 63.09 & 73.39 & 60.57 & 60.56 & \framebox{ 77.47} & \framebox{ 77.06} & 65.26\\ G6 & 84.27 & 91.95 & 84.14 & \framebox{ 93.67} & 93.38 & \framebox{ 93.71} & 92.19\\ M2 & 70.77 & 69.82 & 69.16 & \framebox{ 78.16} & 74.76 & \framebox{ 75.68} & 71.14\\ M6 & 81.15 & 83.08 & 79.73 & \framebox{ 83.47} & \framebox{ 83.32} & 83.35 & 80.99\\ M10 & 64.93 & 68.33 & 64.58 & 68.25 & \framebox{ 70.66} & \framebox{ 70.94} & 68.95\\ \noalign{\smallskip}\hline \noalign{\smallskip}\noalign{\smallskip} \multicolumn{8}{c}{\rm \bf LDA outputs}\\ Models & FCQ & MID & T & PLS & MHR & MHV & Base\\ \hline\noalign{\smallskip} L2\_OUt & 79.80 & 78.95 & 78.23 & 80.07 & \framebox{ 80.24} & \framebox{ 80.14} & -\\ L6\_OU & 87.79 & 88.91 & 84.46 & \framebox{ 91.01} & \framebox{ 89.44} & 89.35 & -\\ L10\_B & 75.97 & 75.44 & 76.04 & 77.60 & \framebox{ 77.63 }& \framebox{ 77.76} & -\\ L11\_ssB & 80.95 & 80.09 & 80.81 & 79.39 & \framebox{ 81.88} & \framebox{ 81.63} & -\\ L12\_sB & 76.39 & 75.20 & \framebox{ 76.40} & 75.02 & \framebox{ 77.38} & 75.96 & -\\ G1 & 51.27 & 51.24 & 51.20 & 51.44 & \framebox{51.55} &\framebox{ 51.70} & -\\ G3 & 51.09 & 52.26 & 50.96 & 50.35 & \framebox{52.95} & \framebox{52.69} & -\\ G6 & 87.72 & 95.28 & 87.80 & \framebox{ 97.77} & 
96.54 & \framebox{ 96.85} & -\\ M2 & 67.44 & 76.51 & 66.81 & \framebox{ 84.38} & 82.24 & \framebox{ 83.06} & -\\ M6 & 79.99 & 79.92 & 79.63 & \framebox{ 81.39} & 81.08 & \framebox{ 81.38} & -\\ M10 & 60.03 & 65.61 & 59.24 & \framebox{ 67.49} & 67.25 & \framebox{ 67.99} & -\\ \noalign{\smallskip}\hline \end{tabular} \end{footnotesize} \end{center} \end{table} The outputs of Table 1 are more or less representative of the overall conclusions of the entire study. For instance, MHR appears as the overall winner on average with a slight advantage. PLS and the maxima-hunting methods (MHR and MHV) obtain similar scores and clearly outperform the other benchmark methods. Note that they also beat (often very clearly) the Base method in almost all cases using just a few variables. This shows that dimension reduction is, in fact, ``mandatory'' in many cases. Regarding the comparison of $k$-NN and LDA in the second stage (after dimension reduction), the results show a slight advantage for $k$-NN (on average). The complete failure of LDA in models G1 and G3 was to be expected since in these cases the mean functions are identical in both populations. In terms of number of variables, when $k$-NN is used, MHR and MHV need fewer variables to achieve better results than the rest of the variable selection methods. When LDA is used, the number of required variables is quite similar in all methods; see the Supplementary Material, Section S4. \section{Real data examples}\label{real} We have chosen three examples due to their popularity in FDA. There are many references on these datasets, so we will just give brief descriptions of them; additional details can be found in the Supplementary Material document. Figure \ref{fig:reales} shows the trajectories $X(t)$ and mean functions for each set and each class. \begin{figure}[h!]\begin{center} \includegraphics[scale=0.5]{./reales.png}\par \caption{\footnotesize Data trajectories and mean functions from class 0 (first row) and class 1 (second row).
Columns correspond to growth, Tecator and phoneme data from left to right.}\label{fig:reales} \end{center} \end{figure} {\it Berkeley Growth Data.} The heights of 54 girls and 39 boys measured at 31 non-equidistant time points. See, e.g., \citet{ram05}. {\it Tecator.} 215 near-infrared absorbance spectra (100 grid points each) of finely chopped meat, obtained using a Tecator Infratec Food \& Feed Analyzer. The sample is separated into two classes according to the fat content (smaller or larger than 20\%). Tecator curves are often used in a differentiated version. We use here the second derivatives. See \citet{fer06} for details. {\it Phoneme.} As in \citet{del12a} we use the ``binary'' version of these data corresponding to log-periodograms constructed from 32 ms long recordings of males pronouncing the phonemes ``aa'' and ``ao''. The sample size is $n=1717$ ($695$ from ``aa'' and $1022$ from ``ao''). Each curve was observed at 256 equispaced points. In the comparisons with real data sets we have incorporated the method recently proposed by \cite{del12a}. We denote it by DHB. Given a classifier, the DHB method proposes a leave-one-out choice of the best variables for the considered classification problem. While this is a worthwhile natural idea, it is computationally intensive. So the authors implement a slightly modified version, which we have closely followed. It is based on a sort of trade-off between full and sequential search, together with some additional computational savings. Let us note, as an important difference from our maxima-hunting method, that the DHB procedure is a ``wrapper'' method, in the sense that it depends on the chosen classifier. Following \cite{del12a}, we have only implemented the DHB method with the LDA classifier. Apart from that, we proceed as in the simulation study except for the generation of the training, validation and test samples.
Here we consider the usual cross-\-validation procedure which avoids splitting the sample (sometimes small) into three different sets. Each output is obtained by standard leave-one-out cross-\-validation. The only exception is the phoneme data set for which this procedure is extremely time-consuming (due to the large sample size); so we use instead ten-fold cross-validation (10CV). The respective validation steps are done with the same resampling schemes within the training samples. This is a usual way to proceed when working with real data; see \citet[Subsection 7.10]{has09}. Several outputs are given in Tables 2 (accuracy) and 3 (number of variables) below. The complete results can be found in \url{www.uam.es/antonio.cuevas/exp/outputs.xlsx}. \smallskip \begin{table} \caption{\footnotesize Classification accuracy (in \%) for the real data with both classifiers.} \begin{center}\footnotesize \begin{tabular}{lcccccccc}\hline\noalign{\smallskip} \multicolumn{9}{c}{\rm \bf $k$-NN outputs}\\ Data & FCQ & MID & T & PLS & MHR & MHV & DHB & Base\\ \hline\noalign{\smallskip} Growth & 83.87 & \framebox{95.70} & 83.87 & 94.62 & \framebox{95.70} & 94.62 & - & \framebox{96.77} \\ Tecator & 99.07 & 99.07 & 99.07 & 97.21 & \framebox{99.53} & \framebox{99.53} & - & 98.60 \\ Phoneme & \framebox{80.43} & 79.62 & \framebox{80.43} & \framebox{82.53} & 80.20 & 78.86 & - & 78.97 \\ \noalign{\smallskip}\hline \noalign{\smallskip}\noalign{\smallskip} \multicolumn{9}{c}{\rm \bf LDA outputs}\\ Data & FCQ & MID & T & PLS & MHR & MHV & DHB &Base\\ \hline\noalign{\smallskip} Growth & 91.40 & 94.62 & 91.40 & 95.70 & 95.70 & \framebox{96.77} & \framebox{96.77} & - \\ Tecator & 94.42 & \framebox{95.81} & 94.42 & 94.42 & \framebox{95.35} & 94.88 & \framebox{95.35} & - \\ Phoneme & 79.38 & \framebox{80.37} & 79.09 & \framebox{80.60} & 80.20 & 78.92 & 77.34 & - \\ \noalign{\smallskip}\hline \end{tabular} \end{center} \end{table} \smallskip \begin{table} \caption{\footnotesize Average number of 
variables (or components) selected for the real data sets.} \begin{center}\footnotesize \begin{tabular}{lcccccccc}\hline\noalign{\smallskip} \multicolumn{9}{c}{\rm \bf $k$-NN outputs}\\ Data & FCQ & MID & T & PLS & MHR & MHV & DHB & Base\\ \hline\noalign{\smallskip} Growth & \framebox{ 1.0} & 3.5 & \framebox{1.0} & 2.8 & 4.0 & 4.0 & - & 31 \\ Tecator & 3.0 & 5.7 & 3.0 & 2.7 & \framebox{1.0} & \framebox{1.0} & - & 100 \\ Phoneme & \framebox{10.7} & 15.3 & 12.3 & 12.9 & \framebox{10.2} & 12.3 & - & 256 \\ \noalign{\smallskip}\hline \noalign{\smallskip}\noalign{\smallskip} \multicolumn{9}{c}{\rm \bf LDA outputs}\\ Data & FCQ & MID & T & PLS & MHR & MHV & DHB &Base\\ \hline\noalign{\smallskip} Growth & 5.0 & 3.4 & 5.0 & \framebox{2.0} & 4.0 & 4.0 & \framebox{2.3} & - \\ Tecator & 8.4 & 2.6 & 3.1 & 9.7 & \framebox{1.7} & \framebox{1.8} & 3.0 & - \\ Phoneme & 8.5 & 17.1 & \framebox{7.9} & 15.5 & 16.1 & 11.0 & \framebox{2.0} & - \\ \noalign{\smallskip}\hline \end{tabular} \end{center} \end{table} These results are similar to those obtained in the simulation study. While (as expected) there is no clear global winner, the maxima-hunting methods look like a very competitive choice. In particular, Tecator outputs are striking, since MHR and MHV achieve (with $k$-NN) a near-perfect classification with just one variable. Note also that maxima-hunting methods (particularly MHR) outperform or are very close to the Base outputs (which uses the entire curves). PLS is overcome by our methods in two of the three problems but it is the clear winner in the phoneme example. In any case, the ease of interpretability of the variable selection methods should be kept in mind as a counterpart. The DHB method performs well in the first two examples but falls short in the phoneme case. There is perhaps some room for improvement in the stopping criterion (recall that we have used the same parameters as in \cite{del12a}).
Recall also that, by construction, this is (in the machine learning terminology) a ``wrapper'' method. This means that the variables selected by DHB are specific to the LDA classifier (and might dramatically change with other classification rules). Also note that the use of the LDA classifier did not lead to any significant gain; in fact, the results are globally worse than those of $k$-NN except for a few particular cases. Although our methodology is not primarily targeted at the best classification rate, but at the choice of the most representative variables, we can conclude that MH procedures combined with the simple $k$-NN are competitive when compared with PLS and other successful and sophisticated methods in the literature: see \citet{gal14} for Tecator data, \citet{mos14} for growth data and \citet{del12a} for phoneme data. \section{Overall conclusions: a tentative global ranking of methods} \label{conclusiones} We have summarized the conclusions of our 400 simulation experiments in three rankings, prepared with different criteria, according to \bf classification accuracy\rm. With the \bf relative ranking \/ \rm criterion, the winner method (with performance $W$) in each of the 400 experiments gets 10 score points, and the method with the worst performance (say $w$) gets 0 points. The score of any other method, with performance $u$, is assigned proportionally: $10(u-w)/(W-w)$. The \bf positional ranking\/ \rm scoring criterion just gives 10 points to the winner in every experiment, 9 points to the second one, etc. Finally, the \textbf{F1 ranking} strongly rewards the winner. For each experiment, points are divided as in an F1 Grand Prix: the winner gets 25 points and the rest 18, 15, 10, 8, 6 and 4 successively. The final average scores are given in Table 4. The winner and the second-best methods in each category appear marked.
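For completeness, the three scoring schemes just described admit a direct implementation. The following Python sketch is our own illustration; the scoring rules are taken from the text, while the handling of more than seven methods in the F1 scheme (zero points) is our own convention:

```python
def relative_scores(acc):
    """Relative ranking: the winner (performance W) gets 10 points,
    the worst method (performance w) gets 0, and any other method
    with performance u gets 10(u - w)/(W - w)."""
    W, w = max(acc.values()), min(acc.values())
    return {m: 10.0 * (u - w) / (W - w) for m, u in acc.items()}

def positional_scores(acc):
    """Positional ranking: 10 points to the winner, 9 to the second,
    and so on down the ordering by accuracy."""
    order = sorted(acc, key=acc.get, reverse=True)
    return {m: 10 - r for r, m in enumerate(order)}

def f1_scores(acc):
    """F1 ranking: 25, 18, 15, 10, 8, 6, 4 points from the winner
    down; any further methods get 0 (our own convention)."""
    points = [25, 18, 15, 10, 8, 6, 4]
    order = sorted(acc, key=acc.get, reverse=True)
    return {m: (points[r] if r < len(points) else 0)
            for r, m in enumerate(order)}
```

Here `acc` maps each method name to its accuracy in one experiment; averaging the resulting scores over the 400 experiments gives the entries of the ranking tables.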
\begin{table} \caption{\footnotesize Average ranking scores over the 400 experiments.} \par \vskip .2cm \begin{center}\footnotesize \begin{tabular}{lccccccc}\hline\noalign{\smallskip} \multicolumn{8}{c}{\rm \bf $k$-NN rankings}\\ Ranking criterion & FCQ & MID & T & PLS & MHR & MHV & Base\\ \hline\noalign{\smallskip} Relative & 4.42 & 5.80 & 2.93 & 6.99 & \framebox{8.42} & \framebox{7.35} & 3.64\\ Positional & 6.44 & 6.71 & 5.50 & \framebox{7.96} & \framebox{8.68} & 7.84 & 5.89\\ F1 & 11.62 & 12.04 & 9.46 & \framebox{17.39} & \framebox{17.96} & 15.41 & 10.15\\ \noalign{\smallskip}\hline \noalign{\smallskip}\noalign{\smallskip} \multicolumn{8}{c}{\rm \bf LDA rankings}\\ Ranking criterion & FCQ & MID & T & PLS & MHR & MHV & Base\\ \hline\noalign{\smallskip} Relative & 3.76 & 5.19 & 1.96 & 6.90 & \framebox{8.62} & \framebox{8.07} & - \\ Positional & 6.70 & 6.99 & 5.92 & 8.13 & \framebox{8.79} & \framebox{8.49} & -\\ F1 & 11.95 & 12.52 & 10.22 & \framebox{17.49} & \framebox{18.41} & 17.47 & -\\ \noalign{\smallskip}\hline \end{tabular} \end{center}\end{table} The results are self-explanatory. Nevertheless, \bf the following conclusions might be of some interest for practitioners\rm: 1. The maxima-hunting methods are the global winners (in particular when using the distance correlation measure), even if there is still room for improvement in the maxima identification. In fact, the maxima-hunting procedures result in accuracy improvements (with respect to the ``base error'', i.e., using the whole trajectories) in 88.00\% of the considered experiments. Overall, the gain of accuracy associated with \bf MHR \rm variable selection is relevant (2.41\%). 2. While the univariate ranking methods, such as the $t$-ranking (which ignore the dependence between the involved variables), are still quite popular among practitioners, they are clearly outperformed by the ``functional'' procedures.
The superiority of the maxima-hunting methods over the rest of the variable selection procedures is quite remarkable; they often require fewer variables. 3. As an important overall conclusion, variable selection appears as a \bf highly competitive alternative to PLS\rm, which is so far the standard dimension reduction method in high-dimensional and functional statistics (whenever a response variable is involved). The results of the above rankings show that variable selection offers a better balance in terms of both accuracy and interpretability. 4. On average, the use of the classical Fisher's discriminant rule LDA (after dimension reduction) provides worse results than the nonparametric $k$-NN rule. An example of the superiority of a linear classifier is shown in \cite{del12b}, where an asymptotic optimality result is provided. In addition, under some conditions, the proposed classifier turns out to be ``near-perfect'' (in the sense that the probability of classification error can be made arbitrarily small) for discriminating between two Gaussian processes. This is an interesting phenomenon which does not appear in the finite-dimensional case. However, it requires that the Gaussian measures under discrimination are mutually singular (note that this situation cannot happen with two non-degenerate Gaussian measures in ${\mathbb R}^d$). This topic will be considered in a forthcoming manuscript by the authors. \smallskip \noindent \it A final remark\rm. The present study shows that there are several quite natural models in which the maxima-hunting method is definitely to be recommended. The real data results are also encouraging. Our results suggest that, even when there is no clear, well-founded guess on the nature of the underlying model, the idea of selecting the maxima of the distance correlation is a suitable choice that always allows for a direct interpretation.
It is natural to ask what type of models would typically be less favorable for the maxima-hunting approach. As a rough practical guide, adverse situations typically arise when the trajectories are either extremely smooth or very wiggly, with many noisy abrupt peaks that tend to mislead the identification of the maxima of the distance correlation function. \vskip 14pt \noindent {\large\bf Supplementary Materials.} All the proofs and two auxiliary results can be found in the appendix. Some further methodological and technical details are explained in the Supplementary Materials document below. It also includes some extra simulation outputs and the list of the 100 considered models. The full simulation outputs are included in an Excel file downloadable from \url{www.uam.es/antonio.cuevas/exp/outputs.xlsx}. \par \vskip 14pt \noindent {\large\bf Acknowledgment.} This research has been supported by Spanish grant MTM2013-44045-P. \setcounter{section}{8} \section*{Appendix: Some results and proofs}\label{sec:prop} To prove Theorem 2 we need two lemmas dealing with the uniform strong consistency of one-sample and two-sample functional U-statistics, respectively. \begin{lemma} \label{lemma:Ustatistics} Let $X: T\to \mathbb{R}$ be a process with continuous trajectories a.s. defined on the compact rectangle $T=\prod_{i=1}^d [a_i,b_i]\subset \mathbb{R}^d$. Let $X_1,\ldots, X_n$ be a sample of $n$ independent trajectories of $X$. Define the functional U-statistic \[ U_n(t) = \frac{2}{n(n-1)} \sum_{i<j} k[X_i(t), X_j(t)], \] where the kernel $k$ is a real, continuous, permutation-symmetric function. Assume that $$ \mathbb{E}\big(\sup_{t\in T} |k[X(t),X'(t)]|\big)<\infty, $$ where $X$ and $X'$ denote two independent copies of the process. Then, as $n\to\infty$, $\|U_n - U\|_\infty \to 0,\ \ \mbox{a.s.,}$ where $U(t)=\mathbb{E}(k[X(t),X'(t)])$. \end{lemma} \begin{proof} First, we show that $U(t)$ is continuous.
Let $\{t_n\}\subset T$ be such that $t_n\to t$. Then, due to the continuity assumptions on the process and the kernel, $k[X(t_n),X'(t_n)]\to k[X(t),X'(t)]$, a.s. Using the assumption $\mathbb{E}\big(\sup_{t\in T} |k[X(t),X'(t)]|\big)<\infty$, the Dominated Convergence Theorem (DCT) allows us to deduce $U(t_n)\to U(t)$. Let $M_\delta(t)=\sup_{s: |s-t|_d \leq \delta} |h(s) - h(t)|$, where, for the sake of simplicity, we denote $h(t) = k[X(t),X'(t)]$. The next step is to prove that, as $\delta \downarrow 0$, \begin{equation} \label{eq.sup} \sup_{t\in T} \mathbb{E} ( M_\delta(t)) \to 0. \end{equation} Both $M_\delta(t)$ and $\lambda_\delta (t)= \mathbb{E}(M_\delta(t))$ are continuous functions. Since $h(t)$ is uniformly continuous on $\{s: |s-t|_d \leq\delta\}$, $M_\delta(t)$ is also continuous. The fact that $\lambda_\delta (t)$ is continuous follows directly from DCT since $|M_\delta(t)| \leq 2\sup_{t\in T} |h(t)|$ and, by assumption, $\mathbb{E}(\sup_{t\in T} |h(t)|)<\infty$. By continuity, $M_\delta (t)\to 0$ and $\lambda_\delta (t)\to 0$, as $\delta\downarrow 0$. Now, since $\delta > \delta'$ implies $\lambda_\delta(t) \geq \lambda_{\delta'}(t)$, for all $t\in T$, we can apply Dini's Theorem to deduce that $\lambda_\delta(t)$ converges uniformly to 0, that is, $\sup_{t\in T}\lambda_\delta(t)\to 0$, as $\delta\downarrow 0$. The last step is to show $\|U_n - U\|_\infty \to 0$ a.s., as $n\to\infty$. For $i\neq j$, denote $M_{ij,\delta}(t)=\sup_{s:|s-t|_d<\delta} |h_{ij}(s) - h_{ij}(t)|$, where $h_{ij}(t) = k[X_i(t),X_j(t)]$, and $\lambda_\delta (t)=\mathbb{E}(M_{ij,\delta}(t))$. Fix $\epsilon > 0$. By (\ref{eq.sup}), there exists $\delta >0$ such that $\lambda_{\delta} (t) < \epsilon$, for all $t\in T$. Now, since $T$ is compact, there exist $t_1,\ldots, t_m$ in $T$ such that $T= \cup_{k=1}^m B_k$, where $B_k = \{t: |t-t_k|_d \leq \delta\}\cap T$.
Then, \begin{align*} \|U_n - U\|_\infty & = \max_{1\leq k \leq m} \sup_{t\in B_k} |U_n(t) - U(t)| \\ &\leq \max_{1\leq k \leq m} \sup_{t\in B_k}[|U_n(t) - U_n(t_k)| + |U_n(t_k) - U(t_k)| + |U(t_k) - U(t)| ]\\ &\leq \max_{1\leq k \leq m} \sup_{t\in B_k}|U_n(t) - U_n(t_k)| + \max_{k=1,\ldots,m}|U_n(t_k) - U(t_k)| + \epsilon,\\ \end{align*} since $|s-t|_d \leq \delta$ implies $|U(s)-U(t)| = |\mathbb{E}[h(s) - h(t)] | \leq \mathbb{E}|h(s)-h(t)|\leq \lambda_\delta(t) < \epsilon.$ For the second term, we have $\max_{k=1,\ldots,m}|U_n(t_k) - U(t_k)| \to 0$ a.s., as $n\to\infty$, applying SLLN for U-statistics (see e.g. DasGupta (2008), Theorem 15.3(b), p. 230). As for the first term, observe that using again SLLN for U-statistics, \begin{align*} \sup_{t\in B_k}|U_n(t) - U_n(t_k)| &\leq \frac{2}{n(n-1)} \sum_{i<j} \sup_{t\in B_k} |h_{ij}(t_k) - h_{ij}(t)| \\ &= \frac{2}{n(n-1)} \sum_{i<j} M_{ij,\delta}(t_k) \to \lambda_\delta(t_k), \ \ \mbox{a.s.}, \end{align*} where $\lambda_\delta(t_k)<\epsilon$. Therefore, \begin{align*} \limsup_n\Vert U_n-U\Vert_\infty & \leq \limsup_n\max_{k=1,\ldots, m}\sup_{t\in B_k}|U_n(t) - U_n(t_k)| \\ & +\limsup_n\max_{k=1,\ldots,m}|U_n(t_k) - U(t_k)|+\epsilon \leq 2\epsilon. \\ \end{align*} \end{proof} \begin{lemma} \label{lemma:twosampleUstatistics} Let $X^{(0)}: T\to \mathbb{R}$ and $X^{(1)}: T\to \mathbb{R}$ be a pair of independent processes with continuous trajectories a.s. defined on the compact rectangle $T=\prod_{i=1}^d [a_i,b_i]$ $\subset \mathbb{R}^d$. Let $X^{(0)}_1,\ldots, X^{(0)}_{n_0}$ and $X^{(1)}_1,\ldots, X^{(1)}_{n_1}$ be samples of $n_0$ and $n_1$ independent trajectories of $X^{(0)}$ and $X^{(1)}$, respectively. Define the functional two-sample U-statistic \[ U_{n_0,n_1}(t) = \frac{1}{n_0n_1} \sum_{i=1}^{n_0} \sum_{j=1}^{n_1} k[X^{(0)}_i(t), X^{(1)}_j(t)], \] where the kernel $k$ is a continuous, permutation symmetric function. 
Assume that $$ \mathbb{E}\big(\sup_{t\in T} |h(t)|\log^+|h(t)|\big)<\infty, $$ with $h(t)=k[X^{(0)}(t),X^{(1)}(t)]$. Then, as $\min(n_0, n_1) \to\infty$, \[ \|U_{n_0,n_1} - U\|_\infty \to 0,\ \ \mbox{a.s.,} \] where $U(t)=\mathbb{E}(k[X^{(0)}(t),X^{(1)}(t)])$. \end{lemma} \begin{proof} It is analogous to the proof of Lemma \ref{lemma:Ustatistics}, so it is omitted. We need to apply a strong law of large numbers for two-sample U-statistics. This result can be guaranteed under slightly stronger conditions on the moments of the kernel; see \citet[Th.1]{sen77}. Hence the condition $\mathbb{E}\big(\sup_{t\in T} |h(t)|\log^+ |h(t)|\big) < \infty$ in the statement of the lemma. \end{proof} \subsection*{Proofs of the main results} \begin{proof}[Theorem 1] \noindent (a) From (2.1), as $X_t$ is $d$-dimensional and $Y$ is one-dimensional, taking into account $c_1=\pi$, we have \begin{align*} {\cal V}^2(X_t,Y) & = \parallel \varphi_{X_t , Y} (u,v) - \varphi_{X_t} (u) \varphi_Y(v)\parallel_w ^2\\ &= \textstyle \frac{1}{\pi c_d}\int_{\mathbb R}\int_{\mathbb{R}^d} | \varphi_{X_t , Y} (u,v) - \varphi_{X_t} (u) \varphi_Y(v)|^2 \frac{1}{|u|_d^{d+1} v^2}du dv . \end{align*} Let us analyze the integrand, {\small \begin{align*} \varphi_{X_t , Y}(u,v) - \varphi_{X_t} (u) \varphi_Y(v) &= \E{e^{i u^\top X_t} e^{ivY}}-\E{e^{iu^\top X_t}}\E{ e^{ivY}}\\ &=\E{(e^{iu^\top X_t}-\varphi_{X_t}(u))(e^{ivY}-\varphi_{Y}(v))} \\&=\E{\E{(e^{iu^\top X_t}-\varphi_{X_t}(u))(e^{ivY}-\varphi_{Y}(v))| X}}\\ &=\E{(e^{iu^\top X_t}-\varphi_{X_t}(u))\E{(e^{ivY}-\varphi_{Y}(v))| X}} \\&\overset{(*)}{=}\E{(e^{iu^\top X_t}-\varphi_{X_t}(u))(e^{iv}-1)(\eta(X)-p)}\\ &=(e^{iv}-1)\E{(e^{iu^\top X_t}-\varphi_{X_t}(u))(\eta(X)-p)} \\&=(e^{iv}-1)\E{e^{iu^\top X_t}(\eta(X)-p)} = (e^{iv}-1)\zeta(u,t). \end{align*}} Step (*) in the above chain of equalities is motivated as follows: \begin{align*} \E{(e^{ivY}-\varphi_{Y}(v))| X} &= \E{e^{ivY}| X}-\varphi_{Y}(v) =(e^{iv}-1)\eta(X) - (e^{iv}-1)p \\ &= (e^{iv}-1)(\eta(X)-p).
\end{align*} Therefore, since $\int_{\mathbb R} \frac{|e^{iv}-1|^2}{\pi v^2}dv=2$, \begin{align*} {\cal V}^2(X_t,Y) = \int_{\mathbb R} \frac{|e^{iv}-1|^2}{\pi v^2}dv \int_{\mathbb{R}^d} \frac{|\zeta(u,t)|^2}{c_d|u|_d^{d+1}}du = \frac{2}{c_d} \int_{\mathbb{R}^d} \frac{\abs{\zeta(u,t)}^2}{|u|_d^{d+1}}du. \end{align*} \ \noindent (b) Since $\zeta(u,t)=\E{\left( \eta(X)-p\right)e^{iu^\top X_t}}$, \begin{small} \begin{align*} \abs{\zeta(u,t)}^2 &= \mathbb{E} \left[ (\eta(X)-p) e^{iu^\top X_t} \right] \mathbb{E} \left[ (\eta(X')-p) e^{-iu^\top X'_t} \right] \\ &= \mathbb{E} \left[ (\eta(X)-p)(\eta(X')-p) e^{iu^\top(X_t-X'_t)} \right]\\ &=\mathbb{E} \left[ (\eta(X)-p)(\eta(X')-p) \cos(u^\top (X_t-X'_t)) \right] \\ &=- \mathbb{E} \left[ (\eta(X)-p)(\eta(X')-p)(1- \cos(u^\top(X_t-X'_t))) \right], \end{align*} \end{small} where we have used $\abs{\zeta(u,t)}^2 \in \mathbb{R}$ and $\mathbb{E}\left[ (\eta(X)-p)(\eta(X')-p)\right]=0$. Now, using expression (3.1), \begin{small} \begin{align*} {\cal V}^2(X_t,Y)&= - 2 \mathbb{E} \left[ (\eta(X)-p)(\eta(X')-p) \int_{\mathbb{R}^d} \frac{1- \cos(u^\top(X_t-X'_t))}{c_d |u|_d^{d+1}} du \right] \\ &= -2 \mathbb{E} \left[ (\eta(X)-p)(\eta(X')-p)\abs{X_t - X'_t}_d \right]\\ &=-2 \mathbb{E} \left[ (Y-p)(Y'-p)\abs{X_t - X'_t}_d \right], \end{align*} \end{small} since (see, e.g., Lemma 1 in \citet{sze07}) $$ \int_{\mathbb{R}^d}\frac{1-\cos(u^\top x)}{c_d |u|_d^{d+1}}du=|x|_d,\ \ \mbox{for all }x\in {\mathbb{R}^d}. $$ \ \noindent (c) By conditioning on $Y$ and $Y'$ we have {\small \begin{align*} {\mathbb E}[(Y-p)(Y'-p)|X_t - X'_t|_d] &= p^2(1-p)^2 I_{00}(t) - 2p^2(1-p)^2 I_{01}(t) + p^2(1-p)^2 I_{11}(t)\\ &= p^2(1-p)^2 \left(I_{00}(t)+I_{11}(t) - 2I_{01}(t)\right). \end{align*}} Now, using (3.2), $ {\cal V}^2(X_t,Y) = 4p^2(1-p)^2 \left[ I_{01}(t) - \frac{I_{00}(t)+I_{11}(t)}{2}\right]$. \end{proof} \begin{proof}[Theorem 2] Continuity of ${\cal V}_n^2(X_t,Y)$ is straightforward from DCT.
It suffices to prove the result for sequences of samples $X_1^{(0)},\ldots,X_{n_0}^{(0)}$, and $X_1^{(1)},\ldots,X_{n_1}^{(1)}$, drawn from $X|Y=0$ and $X|Y=1$, respectively, such that $n_1/(n_0+n_1)\to p={\mathbb P}(Y=1)$. From the triangle inequality it is enough to prove the uniform convergence of $\hat I_{00}(t)$, $\hat I_{11}(t)$ and $\hat I_{01}(t)$ to $I_{00}(t)$, $I_{11}(t)$ and $I_{01}(t)$, respectively. For the first two quantities we apply Lemma \ref{lemma:Ustatistics} to the kernel $k(x,x')=|x-x'|$. For the last one we apply Lemma \ref{lemma:twosampleUstatistics} to the same kernel. Observe that $\mathbb{E}\|X\|_\infty < \infty$ implies the moment condition of Lemma \ref{lemma:Ustatistics} whereas $\mathbb{E}( \|X\|_\infty\log^+\|X\|_\infty )< \infty$ implies the moment condition of Lemma \ref{lemma:twosampleUstatistics}. The last statement readily follows from the uniform convergence and the compactness of $[0,1]^d$. \end{proof} \begin{proof}[Proposition 1] We know $g^*(x)={\mathbb I}_{\{\eta(x)>1/2\}}$. Then, we use equation (4.1), which provides $\eta(x)$ in terms of the Radon-Nikodym derivative $d\mu_0/d\mu_1$, and the expression for $d\mu_0/d\mu_1$ given in \citet{lip77}, p. 239. This gives \[ \eta(x)=\left[\frac{1-p}{p}\sqrt{2}e^{-x_1^2/4}+1 \right]^{-1}. \] Now, from $g^*(x)={\mathbb I}_{\{\eta(x)>1/2\}}$, we get $g^*(x)=1$ if and only if $x_1^2 > 4\log\left( \frac{\sqrt{2}(1-p)}{p} \right)$. \end{proof} \begin{proof}[Proposition 2] Again, we use expression (4.1) to derive the expression of the optimal rule $g^*(x)={\mathbb I}_{\{\eta(x)>1/2\}}$. 
In this case the calculation is made possible using the expression of the Radon-Nikodym derivative for the distribution of a Brownian process with trend, $F(t)+B(t)$, with respect to that of a standard Brownian: \begin{equation} \frac{d\mu_1}{d \mu_0}(B)=\exp\left\{-\frac{1}{2}\int_0^1F^\prime(s)^2ds+\int_0^1F^\prime dB\right\},\label{RNderivative} \end{equation} for $\mu_0$-almost all $B\in {\mathcal C}[0,1]$; see \citet{mor10}, Th.~1.38 and Remark~1.43, for further details. Observe that in this case we have $F(t)=ct$. Thus, from (4.1), we finally get $ \eta(x) = \left[\frac{1-p}{p}\exp\left(\frac{c^2}{2}-cx_1\right) +1 \right] ^{-1}, $ which again only depends on $x$ through $x(1)=x_1$. The result follows easily from this expression. \end{proof} \begin{proof}[Proposition 3] In this case, the trend function is $F(t)=\Phi_{m,k}(t)$. So $F'(t)=\varphi_{m,k}$ and $F''(t)=0$. From equations (4.1) and \eqref{RNderivative}, we readily get (4.3) and (4.4). \end{proof} \begin{proof}[Proposition 4] Let us first consider the model in Proposition 1 (i.e., Brownian vs. Brownian with a stochastic trend). This model entails that $X_t | Y=0 \sim N(0,\sqrt{t})$ and $X_t | Y=1 \sim N(0,\sqrt{t^2 + t})$. Now, recall that if $\xi\sim N(m,\sigma)$, then, \begin{equation}\label{Evalorabsoluto} \mathbb{E}\abs{\xi} =\sigma \sqrt{\frac{2}{\pi}} e^{- \frac{m^2}{2\sigma^2}} + m \left( 2 \Phi \left(\frac{m}{\sigma}\right)-1\right), \end{equation} where $\Phi(z)$ denotes the distribution function of the standard normal. Now, using (3.3) and \eqref{Evalorabsoluto} we have the following expressions, $$I_{01}(t) = \mathbb{E} |\sqrt{t}Z - \sqrt{t^2 + t}Z'| = \sqrt{\frac{2(t^2 + 2t)}{\pi}},$$ $$ I_{00}(t) = \mathbb{E} |\sqrt{t}Z - \sqrt{t}Z'| = \sqrt{\frac{4t}{\pi}},$$ $$I_{11}(t) = \mathbb{E} |\sqrt{t^2 + t}Z - \sqrt{t^2 + t}Z'| = \sqrt{\frac{4(t^2+t)}{\pi}},$$ where $Z$ and $Z^\prime$ are independent $N(0,1)$ random variables.
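As a quick numerical sanity check of the folded-normal mean \eqref{Evalorabsoluto} and of the three expressions just derived (an illustration only, not part of the proof; the tolerances are Monte Carlo assumptions):

```python
import math
import random

def phi(z):
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def folded_mean(m, s):
    """Closed-form E|xi| for xi ~ N(m, s^2): the folded-normal mean."""
    return (s * math.sqrt(2.0 / math.pi) * math.exp(-m * m / (2.0 * s * s))
            + m * (2.0 * phi(m / s) - 1.0))

random.seed(0)
N = 200_000
t = 0.7  # any fixed t in (0, 1]

# Monte Carlo estimates of I_01 and I_00 for the model of Proposition 1.
mc_I01 = sum(abs(math.sqrt(t) * random.gauss(0, 1)
                 - math.sqrt(t * t + t) * random.gauss(0, 1))
             for _ in range(N)) / N
mc_I00 = sum(abs(math.sqrt(t) * (random.gauss(0, 1) - random.gauss(0, 1)))
             for _ in range(N)) / N

# Closed-form values derived above.
I01 = math.sqrt(2.0 * (t * t + 2.0 * t) / math.pi)
I00 = math.sqrt(4.0 * t / math.pi)

# Direct check of the folded-normal mean at an arbitrary (m, s).
m, s = 1.3, 0.8
mc_fold = sum(abs(m + s * random.gauss(0, 1)) for _ in range(N)) / N
```

For $m=0$ the formula reduces to $\sigma\sqrt{2/\pi}$, which is the case used for $I_{00}$ and $I_{11}$ above.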
Then, the function ${\cal V}^2(X_t,Y)=4p^2(1-p)^2 \left( I_{01}(t) - \frac{I_{00}(t)+I_{11}(t)}{2}\right)$ grows with $t$ so it is maximized at $t^*=1$, which is the only point that has an influence on the Bayes rule. Let us now consider the model in Proposition 2 (i.e., Brownian vs. Brownian with a linear trend). Again, from \eqref{Evalorabsoluto} we have in this case, \begin{align*} I_{01}(t) = \mathbb{E} |ct + \sqrt{t}Z - \sqrt{t}Z'| =2\sqrt{\frac{t}{\pi}}e^{-\frac{c^2t}{4}} +ct\left(2\Phi\left(c\sqrt{\frac{t}{2}}\right) -1\right), \end{align*} $$I_{00}(t)=I_{11}(t) = \mathbb{E} |\sqrt{t}Z - \sqrt{t}Z'| = \sqrt{\frac{4t}{\pi}},$$ where $Z$ and $Z^\prime$ are iid standard Gaussian variables. Therefore using (3.3), $${\cal V}^2(X_t,Y)= C\left[2\sqrt{\frac{t}{\pi}}\left(e^{-\frac{c^2t}{4}} -1 \right) +ct\left(2\Phi\left(c\sqrt{\frac{t}{2}}\right) -1\right)\right],$$ where $C=4p^2(1-p)^2$. We can check numerically that this is an increasing function, which reaches its only maximum at $t^*=1$. According to Proposition 2, this is the only relevant point for the Bayes rule. \end{proof} \renewcommand\bibname{\large \bf References}
\section{Introduction} \label{sec:intro} Large empty regions of space, called voids, represent the majority of the volume of the present Universe. They were first discovered in observations by \cite{GT78}, \cite{Joeveer78} and \cite{TullyFisher78}, followed later by \cite{Kirshner81} and, on a larger scale, in the CfA redshift catalogue \citep{Lapparent86}. This discovery was followed by a large amount of theoretical work. The first gravitational instability model was given by \cite{HoffmanShaham82}, quickly followed by \cite{HSW83} for an infinite regular mesh of voids and by \cite{HOR83} for the impact of cosmology on their evolution. Other work studied the general self-similar evolution of voids in Einstein-de-Sitter universes \citep{Bertschinger83,Bertschinger85}. However, as voids are intrinsically large and the surveys at that time were small, only a small number of them were detected. This has hindered their use as a cosmological probe for a long time [apart from some constraints on their maximal size compatible with CMB observations by e.g. \cite{Blumenthal92}]. This situation has changed with the advent of deep and wide galaxy surveys such as the Sloan Digital Sky Survey \citep[SDSS, ][]{SDSS}, 2dFGRS \citep{TwoDF}, 2MRS \citep{TwoMRS} and now the 6dFGS \citep{SixDF}. Still, we lack a clear and simple definition of voids that would allow us to use them as a precision cosmological probe. In this paper, we investigate, analytically and numerically using $N$-body simulations, a new algorithm for finding voids in Large Scale structure surveys and an analytical model that accurately predicts the properties of voids found by this method as a function of cosmology. During the last decade, several algorithms to find voids have been built. They are separated into three broad classes. In the first class, the void finders try to find regions empty of galaxies \citep{Kauffman91,ElAd97,HoyleVogeley2002,Patiri06,FosterNelson09}.
The second class of void finders tries to identify voids as geometrical structures in the dark matter distribution traced by galaxies \citep{PlionisBasilakos02,Colberg05,Shandarin06,wvf,zobov}. The third class identifies structures dynamically by checking gravitationally unstable points in the distribution of dark matter \citep{Hahn07,VoidsGravity08}. At the same time, $N$-body simulations focused on the study of voids in a cosmological context were flourishing \citep{Martel90,Regos91,vdW93,GLKH03,Benson03,Colberg05}. Recently, \cite{AspenAmsterdam} made a comparison which shows that, even if the currently available void finder techniques find approximately the same voids, the details of the shapes and sizes found by each of the void finders may be significantly different. This problem is further compounded by the existence of ad-hoc parameters in most of the existing void finders, which changes the exact definition of voids and does not allow reliable cosmological predictions. One aspect that is also often left aside is the hierarchical structure of voids. So far, apart from ZOBOV \citep{zobov} and the related Watershed Void Finder (WVF) method \citep{wvf}, which are parameter free, no void finder tries to identify correctly the hierarchy of voids-in-voids and clouds-in-voids \citep{ShethWeygaert04}. Another problem of these void finders comes from their Eulerian nature: they try to find structures that are not necessarily in the same dynamical regime (linear or non-linear), which complicates the building of an analytical model. We propose studying a new void finder that belongs to this third class. It is based on the success of both the Monge-Amp\`ere-Kantorovitch (MAK) reconstruction of the orbits of galaxies \citep{Brenier2002,moh2005,lavaux08} and the Zel'dovich approximation \citep{zeldov70}. This method relies on finding a way to compute the Lagrangian coordinates of the objects at their present position.
The study of voids in Lagrangian coordinates is not new. The evolution of voids in the adhesion approximation has been studied by \cite{SahniShandarin94} to understand the formation and evolution of voids and their inner substructure in a cosmological context. Later, \cite{SahniShandarin96} emphasized the precision of the Zel'dovich approximation for studying void dynamics compared to higher order perturbation theory, either Lagrangian or Eulerian. However, no void finder method has yet tried to take advantage of the Zel'dovich approximation for detecting and studying voids in real data. The voids detected with this method are intrinsically different from the ones found using standard Eulerian void finders. This makes a void-by-void comparison of the different methods difficult. The use of Lagrangian coordinates gives an immediate advantage compared to standard void finding: the Lagrangian displacement field is still largely in the linear regime even at $z=0$, especially for voids. This allows us for the first time to make nearly exact analytical computations of the dynamical and geometrical properties of voids in Large scale structures. The MAK reconstruction is thus particularly adapted to study the dynamics of voids. However, there is an apparent cost to pay: we lose the intuitive way of defining voids as ``holes'' in a distribution of galaxies, that is, the places where matter no longer is. On the other hand, we gain the physical understanding that voids correspond to regions from which matter is escaping. The dynamics of voids may provide a wealth of information on dark energy without the need for any new survey. The first obvious probe of dark energy properties comes from the study of the linear growth factor. Its evolution with redshift depends, among other cosmological parameters, on the equation of state $w$ of the Dark Energy. In this work, we assume that $w$ is independent of the redshift.
We note that in galaxy surveys, our method is going to be sensitive to bias, but no more than the direct approach to void finding. Indeed, void finders of the first class are sensitive to the selection function of galaxies. Generally this is done by limiting the survey to galaxies with an apparent magnitude below some designated threshold. Changing this selection function of the galaxies acts on the boundaries of the detected voids, which thus changes the geometry of these voids. From the point of view of void finders, this will also act as a ``bias''. The method that we propose has a more conventional dependence on the bias by using the dark matter distribution inferred from the galaxy distribution. The advantage is that this bias could be calibrated. One exact calibration consists in comparing peculiar velocities reconstructed using MAK to observed velocities \citep{lavaux09}. Additionally, there are a number of other complementary ways of determining bias from galaxy redshift surveys \citep[e.g.][]{benoist96,Norberg01,tegmark04,Erdogdu2005,tegmark06,Percival07}. This paper is the first of a series studying the properties of voids found by our void finder. It is organised as follows. First, we recall the theory of the Monge-Amp\`ere-Kantorovitch reconstruction in Section~\ref{sec:mak}. Then, we explain how we can use reconstructed orbits as an alternative way to detect and characterise voids. This corresponds to the core of DIVA, our void finder through Dynamical Void Analysis, and is explained in Section~\ref{sec:diva}. In Section~\ref{sec:analytic_voids}, we model analytically the voids found by DIVA. In Section~\ref{sec:nbody_test}, we test our void finder on $N$-body simulations. We also check our analytical model against the results of the simulations for two cosmologies. In Section~\ref{sec:discuss_definitions}, we compare DIVA to earlier existing void finders. In Section~\ref{sec:conclusion}, we conclude.
\section{The Monge-Amp\`ere-Kantorovitch reconstruction} \label{sec:mak} The Monge-Amp\`ere-Kantorovitch reconstruction (MAK) is a method capable of tracing the trajectories of galaxies back in time using an approximation of the complete non-linear dynamics. It is a Lagrangian method, like PIZA \citep{CroftGaztanaga97} or the Least-Action method \citep{Peebles89}. The MAK reconstruction is discussed in great detail in \cite{Brenier2002}, \cite{moh2005} and \cite{lavaux08}. It is based on the hypothesis that, expressed in comoving Lagrangian coordinates, the displacement field of the dark matter particles is convex and potential. This hypothesis has since been justified by the success of the method on $N$-body simulations. Combined with local mass conservation, this hypothesis leads to the Monge-Amp\`ere equation: \begin{equation} \text{det}_{i,j} \frac{\partial^2 \Phi}{\partial q_i \partial q_j} = \frac{\rho\left({\bf x}({\bf q})\right)}{\rho_0} ,\label{eq:ma} \end{equation} with ${\bf q}$ the comoving Lagrangian coordinates, ${\bf x}({\bf q})$ the change of variable between Eulerian (${\bf x}$) and Lagrangian coordinates (${\bf q}$), ${\rho}({\bf x})$ the Eulerian dark matter density and ${\rho_0}$ the initial comoving density of the Universe, assumed homogeneous. \cite{Brenier2002} showed that solving this equation is equivalent to solving a Monge-Kantorovitch problem, where we seek to minimise \begin{equation} I\left[{\bf q}({\bf x})\right] = \int_{\bf x}\;\text{d}^3 {\bf x} \rho({\bf x}) \left({\bf x} - {\bf q}\left({\bf x}\right)\right)^2,\label{eq:mk} \end{equation} according to the change of variable ${\bf q}({\bf x})$. Discretising this integral, we obtain \begin{equation} S_\sigma = \sum_i \left({\bf x}_i - {\bf q}_{\sigma(i)}\right)^2 \label{eq:disc_mak}, \end{equation} with $\sigma$ a permutation of the particles, ${\bf q}_j$ distributed homogeneously and ${\bf x}_i$ distributed according to the distribution of dark matter.
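To make Eq.~\eqref{eq:disc_mak} concrete, here is a toy illustration (an assumed setup, not the parallel Auction solver used in practice): for a handful of particles, the minimising permutation can be found by brute force.

```python
import itertools

def mak_assignment(lagrangian, eulerian):
    """Brute-force minimiser of S_sigma = sum_i (x_i - q_sigma(i))^2 over all
    permutations sigma. Only viable for a few particles; the Auction algorithm
    replaces this exhaustive search at realistic problem sizes."""
    n = len(eulerian)
    def cost(perm):
        return sum((ex - qx) ** 2 + (ey - qy) ** 2
                   for (ex, ey), (qx, qy)
                   in ((eulerian[i], lagrangian[perm[i]]) for i in range(n)))
    return min(itertools.permutations(range(n)), key=cost)

# 2D toy example: a regular 2x2 Lagrangian mesh ...
q = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
# ... and present-day Eulerian positions, slightly displaced from the mesh.
x = [(1.1, 0.9), (0.1, 0.1), (0.9, 0.1), (0.2, 1.1)]
sigma = mak_assignment(q, x)  # sigma[i] is the Lagrangian index of particle i
```

In this toy case each particle is simply matched to its nearest mesh point, which is also the global minimiser of the quadratic cost.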
Doing so, we obtain a discretised version of the mapping ${\bf q} \rightarrow {\bf x}({\bf q})$ on a grid. To solve the problem of minimising Eq.~\eqref{eq:disc_mak} with respect to $\sigma$, we wrote a high-performance algorithm that has been parallelised using MPI. This algorithm is based on the Auction algorithm developed by \cite{Bertsekas79}. It has an overall time complexity for solving cosmological problems empirically between $O(n^2)$ and $O(n^3)$ (at worst), with $n$ the number of mesh elements $\{ {\bf q}_j\}$.\footnote{An implementation of the algorithm is currently available on the first author's website http://www.iap.fr/users/lavaux/code.php.} \section{The Void Finder by Orbit reconstruction} \label{sec:diva} In this section, we describe our void finder DIVA (for DynamIcal Void Analysis). First, we define in Section~\ref{sec:definition} what we call a void in this work. Second, in Section~\ref{sec:ellipticity}, we make use of the displacement field in the immediate neighbourhood of a void to define the ellipticity arising from tidal field effects, which we also call {\it tidal ellipticity}. In Section~\ref{sec:euler_epsilon}, we define the Eulerian ellipticity of our voids. In Section~\ref{sec:smoothing}, we discuss the impact of smoothing in Lagrangian coordinates to compute void properties. In later sections, we use pure dark matter $N$-body simulations to check the voids found using the MAK reconstructed displacement field against the ones detected in the simulated displacement field. The results given by the analytical models are then compared to those given by the simulated field for two equations of state of the Dark Energy. \subsection{Definition of a Void} \label{sec:definition} So far, voids have only been described using a purely geometrical Eulerian approach. Typically, as mentioned in the introduction, a void is an empty region delimited by either sphere or ellipsoid fitting or by using isodensity contours.
We propose here to use a Lagrangian approach and use the mapping between Lagrangian, ${\bf q}$, and Eulerian, ${\bf x}$, coordinates as a better probe for voids. In the rest of this article, we will consider these two coordinates to be linked by the displacement field $\bm{\Psi}$: \begin{equation} {\bf x}({\bf q}) = {\bf q} + \bm{\Psi}({\bf q})\,. \end{equation} We now define the source ${S_\Psi}$ of the displacement field by \begin{equation} {S_\Psi}({\bf q}) = \sum_{i=1}^3 \frac{\partial \Psi_i}{\partial q_i}\,. \label{eq:delta} \end{equation} As the displacement field is taken to be potential, it is sufficient to look at ${S_\Psi}$ to study ${\bf \Psi}$. We now define the position of a {\it candidate void} centre by looking at maxima of ${S_\Psi}$ in Lagrangian coordinates.\footnote{These maxima correspond, in terms of the primordial density field, to what is sometimes called a protovoid \citep{Blumenthal92,Piran93,Goldwirth95}.} This effectively captures the source of the displacement, i.e. the region from which the void is expanding. The other, practical, advantage is that ${S_\Psi}$ is quite close to the opposite of the linearly extrapolated initial density perturbations of the considered patch of universe \citep{moh2005}. So we can use the usual power spectrum to study most of the statistics of this field. Thus, the main approximation we use in the rest of this study is that the primordial density field power spectrum is a good proxy for the power spectrum of the seed of displacement and that this displacement is a Gaussian random field.
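As a small illustration of Eq.~\eqref{eq:delta} (a sketch with an assumed toy displacement field, not the actual pipeline), ${S_\Psi}$ can be estimated by central finite differences; for a pure radial expansion $\bm{\Psi}({\bf q}) = \alpha\,{\bf q}$ one recovers ${S_\Psi} = 3\alpha$ everywhere:

```python
def divergence(psi, q, h=1e-5):
    """Central finite-difference estimate of S_Psi = sum_i dPsi_i/dq_i at q."""
    div = 0.0
    for i in range(3):
        qp, qm = list(q), list(q)
        qp[i] += h
        qm[i] -= h
        div += (psi(qp)[i] - psi(qm)[i]) / (2.0 * h)
    return div

alpha = 0.2
# Uniform expansion away from the origin: Psi(q) = alpha * q.
expansion = lambda q: [alpha * c for c in q]
s_psi = divergence(expansion, [0.3, -0.1, 0.5])  # expect 3 * alpha = 0.6
```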
From ${\bf \Psi}$, we define the matrix $T_{l,m}$ of the shear of the displacement, which is linked to the Jacobian matrix: \begin{eqnarray} J_{l,m}({\bf q}) & = & \frac{\partial x_l}{\partial q_m} = \delta_{l,m} + \frac{\partial \Psi_l}{\partial q_m}({\bf q}) = \delta_{l,m} + T_{l,m}\,, \label{eq:tidal_field} \\ J({\bf q}) & = & |J_{l,m}|\,, \end{eqnarray} with \begin{equation} T_{l,m}({\bf q}) = \frac{\partial \Psi_l}{\partial q_m}\,. \label{eq:tidal_def} \end{equation} $J$ is the Jacobian of the coordinate transformation ${\bf q}\rightarrow {\bf x}$. Geometrically, $J$ specifies how an infinitely small patch of the Universe expanded, in comoving coordinates, from high redshift to $z=0$. We denote by $\lambda_i({\bf q})$ the three eigenvalues of $T_{l,m}({\bf q})$ and sort them such that $\lambda_1({\bf q}) > \lambda_2({\bf q}) > \lambda_3({\bf q})$. Among the candidate voids, we select only voids that have strictly expanded, which equivalently means that $J > 1$. We may now define three classes of voids that are inspired from the usual classes of observable large scale structures for galaxies: \begin{itemize} \item {\bf true voids} for which $\lambda_1 > 0, \lambda_2 > 0, \lambda_3 > 0$. These should be the most evident and easily detectable voids as they consist of regions which are expanding in the three directions of space. \item {\bf pancake voids} for which $\lambda_1 > 0, \lambda_2 > 0, \lambda_3 < 0$. The pancake voids are closing along one direction of space but expanding along the two other directions. With a geometrical analysis, this case cannot be distinguished from the true void case. However, the dynamical analysis can distinguish them, and this makes a crucial difference, as we will see later. In practice they represent a substantial fraction of the voids. \item {\bf filament voids} for which $\lambda_1 > 0, \lambda_2 < 0, \lambda_3 < 0$.
\end{itemize} We refer to Section~\ref{sec:analytics_cosmology} for the quantitative relative number of voids in each class. As we will see, the distinction between those cases is important to quantify the shape and properties of voids that we observe at the present time. We discuss our definition of voids, and compare it to other void finders, in Section~\ref{sec:discuss_definitions}. \begin{figure*} \includegraphics[width=\hsize]{figure1} \caption{\label{fig:transformations} {\it Picture of a void and the formation of intrinsic ellipticity} -- We represent in this figure the central idea of the definition of a void and its ellipticity. We take voids as maxima in ${S_\Psi}$. They correspond to first order to minima of the primordial density field represented here by the painted surface. These minima undergo an overall expansion from initial conditions to present time. The shape of the void is defined locally at the minimum. The ellipticity is defined from the square root of the ratio of the axes of the ellipsoid which locally fits the surface.} \end{figure*} Here, we have not yet touched the issue of defining the boundary of a void. The properties of void volumes will be studied in detail in a forthcoming paper (Lavaux \& Wandelt 2009, in preparation). We use a variant of the Watershed transform \citep{wvf} to define the Lagrangian volume of a void. In Lagrangian coordinates, the voids occupy half of the volume. So, instead of enforcing strictly that we should have a complete segmentation of the volume in terms of void patches, we impose that voids must correspond only to the places that are sources of displacement and not to sinks, such as clusters. Contrary to a pure Watershed algorithm, we thus enforce that ${S_\Psi} > 0$ everywhere within the void boundary.
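The taxonomy above reduces to a few comparisons; a schematic sketch (the eigenvalues would come from the reconstructed shear $T_{l,m}$; the function and its labels are ours):

```python
def classify_void(eigenvalues):
    """Classify a candidate void from the eigenvalues of the displacement
    shear, following the true/pancake/filament void taxonomy."""
    l1, l2, l3 = sorted(eigenvalues, reverse=True)  # lambda_1 > lambda_2 > lambda_3
    J = (1 + l1) * (1 + l2) * (1 + l3)  # Jacobian: comoving expansion factor
    if J <= 1:
        return "not a void"    # the candidate region has not strictly expanded
    if l3 > 0:
        return "true void"     # expanding along all three directions
    if l2 > 0:
        return "pancake void"  # closing along one direction
    if l1 > 0:
        return "filament void" # closing along two directions
    return "not a void"
```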
\subsection{Ellipticity of a void} \label{sec:ellipticity} After having defined the position and the dynamical properties of the void, we may define an interesting property of those structures: the ellipticity. \cite{Icke84} first emphasized that isolated voids should evolve to a spherical geometry. But, in the real case, voids are subject to tidal effects. Assuming the present matter distribution evolved from a totally isotropic and homogeneous distribution, \cite{ParkLee06} and \cite{LeePark09} have shown that the distribution of the ellipticity which is produced by tidal effects is a promising probe for cosmology. More generally, previous works have shown that many potentially observable statistical properties of voids are directly related to the primordial tidal field \citep[e.g.][]{LeePark06,Platen08,ParkLee09_1,ParkLee09_2}. However, questions may be raised by the direct use of the formula of \cite{Dor70}, as they applied it to the Millennium Simulation. Using the orbit reconstruction procedure, our approach should be able to treat the problem right from the beginning, even in redshift space (see \citeauthor{lavaux08} \citeyear{lavaux08} for a long discussion), though some care should be taken for the distortions along the line-of-sight. The other advantage of Lagrangian orbit reconstruction is that it offers for free a way of evaluating the ellipticity {\it locally}, potentially at any point in space. From the mass conservation equation and the definition of the eigenvalues of $J_{l,m}$ we may write the local Eulerian mass density as: \begin{equation} \rho_E({\bf q}) = \frac{\bar{\rho}}{\left|(1 + \lambda_1({\bf q}))(1 + \lambda_2({\bf q}))(1+\lambda_3({\bf q}))\right|} = \frac{\bar{\rho} V_\text{L}}{V_\text{E}({\bf q})}\,, \end{equation} with $V_\text{L}$ the Lagrangian volume of the cell at ${\bf q}$ and $V_\text{E}({\bf q})$ the Eulerian volume of this same cell, and $\bar{\rho}$ the homogeneous Lagrangian mass density. This equation is valid at all times.
Now we may also explicitly write the change of volume of an infinitely small patch of the universe: \begin{equation} V_\text{E}({\bf q}) = V_\text{L} \left|(1 + \lambda_1({\bf q}))(1 + \lambda_2({\bf q}))(1+\lambda_3({\bf q}))\right|\,. \end{equation} Provided the eigenvalues $\lambda_i$ are greater than $-1$, which is always the case for voids, we may drop the absolute value function.\footnote{An eigenvalue less than $-1$ would mean that the void would have suffered shell crossing at the position of its centre, which is dynamically impossible as we are at the farthest distance possible of any high density structure.} Now, by analogy with the volume of an ellipsoid,\footnote{We used here the convention of \cite{ParkLee06} who take the square root of the ratio to define $\mu$ and $\nu$.} we may write the ratio $\nu$ between the minor axis and the major axis \begin{equation} \nu({\bf q}) = \sqrt{\frac{1 + \lambda_3({\bf q})}{1 + \lambda_1({\bf q})}}\,, \label{eq:nu} \end{equation} and the ratio $\mu$ between the second major axis and the major axis \begin{equation} \mu({\bf q}) = \sqrt{\frac{1 + \lambda_2({\bf q})}{1 + \lambda_1({\bf q})}}\,. \label{eq:mu} \end{equation} This allows us to define the ellipticity \begin{equation} \varepsilon({\bf q}) = 1 - \nu({\bf q}) = 1 - \sqrt{\frac{1 + \lambda_3({\bf q})}{1 + \lambda_1({\bf q})}}\,. \label{eq:epsilon} \end{equation} We will define the ellipticity of a void as the value taken by $\varepsilon$ at the Lagrangian position of the void. A picture of the concept of voids and ellipticities in this work is given by Fig.~\ref{fig:transformations}. The painted paraboloid represents a small piece of a larger 2D density field whose value is encoded in the height and the colour. According to our definition, the void is at the centre of the paraboloid. At this centre, the surface of the volume element is mostly circular.
The tidal forces are locally transforming the shape of this surface, which produces the new elliptic shape on the right side of the figure. The surface has here been extended along one direction and slightly compressed along the other. Though we are not strictly limited to studying the ellipticity at the position of the void, doing so may be promising in terms of robustness to non-linear effects. Indeed, due to the absence of shell-crossings inside voids, the MAK reconstruction should give the exact solution \citep{Brenier2002,moh2005,lavaux08} to the orbit reconstruction problem. It means that the ellipticity that we will compute will be exact, to the extent that we have taken care of the other potential systematics due to observational effects \citep{lavaux08}. As any other method relying on the dark matter distribution, we will be sensitive to the fact that the large-scale galaxy distribution is potentially biased. However, if the bias does not depend wildly on redshift, we should be able to compute statistics on ellipticities and derive the evolution of the growth factor of Large Scale Structures. We note that, using MAK reconstruction, we have access to the joint distribution of the three eigenvalues. Our computation of the ellipticity consists in a projection of the whole 3d joint distribution onto a 1d variable. For cosmological analysis, it is not entirely clear which estimator is the more robust. On one hand, our intrinsic variables are the eigenvalues and we could include them in the analysis just as well as the ellipticities. On the other hand, using the ellipticity may be helpful to average over many different voids. It may be a more robust estimator with respect to badly modelled tails of the distribution of eigenvalues. In this work, we focus on the use of the ellipticity, as defined in Eq.~\eqref{eq:epsilon}.
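Equations \eqref{eq:nu}--\eqref{eq:epsilon} translate directly into code (toy eigenvalues, for illustration only):

```python
import math

def axis_ratios(l1, l2, l3):
    """Axis ratios nu, mu and tidal ellipticity epsilon = 1 - nu from the
    ordered eigenvalues lambda_1 > lambda_2 > lambda_3 of the shear."""
    nu = math.sqrt((1.0 + l3) / (1.0 + l1))  # minor / major axis
    mu = math.sqrt((1.0 + l2) / (1.0 + l1))  # intermediate / major axis
    return nu, mu, 1.0 - nu

nu, mu, eps = axis_ratios(0.44, 0.20, -0.04)
```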
\subsection{Eulerian ellipticity} \label{sec:euler_epsilon} We define the volume ellipticity $\varepsilon_\text{vol}$ using the eigenvalues of the inertial mass tensor \citep{Shandarin06}: \begin{equation} M_{xx} = \sum_{i=1}^{N_p} m_i (y_i^2 + z_i^2),\hspace{.3cm} M_{xy} = - \sum_{i=1}^{N_p} m_i x_i y_i, \end{equation} where $m_i$ and $x_i$, $y_i$, $z_i$ are the mass and the coordinates of the $i$-th particle of the void with respect to its centre of mass. The other matrix elements are obtained by cyclic permutation of the $x$, $y$ and $z$ symbols. We denote by $I_{j}$ the $j$-th eigenvalue of the tensor $M$, with $I_{1} \leq I_{2} \leq I_{3}$. We may now define the volume ellipticity as \begin{equation} \varepsilon_\text{vol} = 1-\left(\frac{I_2+I_1-I_3}{I_2+I_3-I_1}\right)^{1/4}\,. \label{eq:epsilon_shape} \end{equation} Even though our work is focused on the tidal ellipticity (Section~\ref{sec:ana_epsilon}), there is some interest in comparing the Eulerian volume ellipticity to the local tidal ellipticity, as most of the existing void finders use $\varepsilon_\text{vol}$ as a probe for the dynamics. To have a fair comparison with DIVA results, we are computing the inertial mass tensor from the displacement field ${\bf \Psi}({\bf q})$ smoothed on the same scale as for the rest of the analysis. The void domain is defined as specified in Sec.~\ref{sec:definition}. The inertial mass tensor is thus: \begin{eqnarray} M_{xx} & = & \int \text{d}^3 {\bf q} \left((q_{y}+\Psi_{y}({\bf q}))^2 + (q_{z} + \Psi_{z}({\bf q}))^2\right),\\ M_{xy} & = & - \int\text{d}^3{\bf q} (q_x + \Psi_x({\bf q}))(q_y + \Psi_y({\bf q})), \end{eqnarray} with the other elements obtained by cyclic permutations. The volume ellipticity $\varepsilon_\text{vol}$ is compared to the tidal ellipticity $\varepsilon_\text{DIVA}$ in Section~\ref{sec:shapes}. Except in that section, we only consider $\varepsilon_\text{DIVA}$ in this paper.
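The volume ellipticity in code form (with the ratio of inertia eigenvalues oriented so that $\varepsilon_\text{vol}\in[0,1)$; for a homogeneous ellipsoid with semi-axes $a \geq b \geq c$ one has $I_1 \propto b^2+c^2$, $I_2 \propto a^2+c^2$, $I_3 \propto a^2+b^2$, so the expression reduces to $1-\sqrt{c/a}$, matching the tidal definition $\varepsilon = 1-\nu$):

```python
def epsilon_vol(I1, I2, I3):
    """Volume ellipticity from ordered inertia-tensor eigenvalues I1 <= I2 <= I3."""
    return 1.0 - ((I1 + I2 - I3) / (I2 + I3 - I1)) ** 0.25

# Sanity check against a homogeneous ellipsoid (the common mass factor drops out).
a, b, c = 2.0, 1.5, 1.0
I1, I2, I3 = b * b + c * c, a * a + c * c, a * a + b * b
eps_vol = epsilon_vol(I1, I2, I3)  # expect 1 - sqrt(c / a)
```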
\subsection{Smoothing scales} \label{sec:smoothing} There is an apparent price to pay to go to Lagrangian coordinates. One has to set a smoothing scale in Lagrangian coordinates and study the dynamics at the corresponding mass scale, letting go of the evident notion of whether or not we see a hole in the distribution of galaxies. This could actually be an advantage. Smoothing at different Lagrangian scales allows us to probe the structures at different dynamical epochs of the void formation. Each Lagrangian smoothing scale corresponds to a different collapse time: the smallest scales are the fastest to evolve. DIVA in this respect allows us to study the dynamical properties of the voids which have the same collapse time. This approach is related to the peak patch picture of structure formation \citep{Bond96}, which is a simplified but quite accurate model of the dynamics of peaks in the density field. This model is even more precise for the void patches, which is the name of the equivalent model for studying voids \citep[see e.g.][]{SahniShandarin94,ShethWeygaert04,Novikov06}. Of course, the number of voids depends on the filtering scale (see Section~\ref{sec:analytics_cosmology} and Section~\ref{sec:mak_vs_sim}). If we smooth on large scales we should erase the smaller voids and leave only the voids whose size is large enough. Smoothing also affects the ellipticity distribution. As we smooth on larger and larger scales, the density distribution probed by the filter should become more and more isotropic. This leads voids to become more spherical and thus the ellipticity distribution should be pushed towards that of a perfect sphere. In this paper, we consider a few scales separately and try to understand what the properties of the minima were at each of these scales (see Section~\ref{sec:nbody_sample}). \section{Analytical models for voids} \label{sec:analytic_voids} In this section, we describe an analytical model of the displacement field.
This model is based on the Zel'dovich approximation \citep{zeldov70}. In a first step (Section~\ref{sec:displacement_stat}), we recall the statistics of the shear of the displacement field. Then, in Section~\ref{sec:ana_epsilon}, we express the ellipticity defined by Eq.~\eqref{eq:epsilon} in terms of this statistic. Finally, we explicitly write the required statistical quantity in the model of Gaussian random fields and give some expected general properties of the voids in this model in Section~\ref{sec:analytics_cosmology}. \subsection{Displacement field statistics} \label{sec:displacement_stat} \cite{ParkLee06} described an analytical model of void ellipticities based on the Zel'dovich approximation. This model should be particularly suitable for making predictions of the results given by DIVA, given the previous successes of MAK in this domain \citep{moh2005,lavaux08}. The model that \cite{ParkLee06} have proposed is based on the unconditional joint distribution of the eigenvalues of the tidal field matrix $J_{l,m}$ \citep{Dor70}, given the variance of the density field $\sigma^2$ (Appendix~\ref{app:pdf_lambda}): \begin{multline} P(\lambda_1,\lambda_2,\lambda_3|\sigma) =\\ \frac{3375}{8\sqrt{5}\sigma^6 \pi} \exp\left[-\frac{3\left(2 K_1^2 - 5 K_2\right)}{2\sigma^2}\right] |(\lambda_1-\lambda_2)(\lambda_1-\lambda_3)(\lambda_2-\lambda_3)|\,, \end{multline} with \begin{eqnarray} K_1 & = & \lambda_1 + \lambda_2 + \lambda_3\,, \\ K_2 & = & \lambda_1 \lambda_3 + \lambda_1 \lambda_2 + \lambda_2 \lambda_3 \,, \end{eqnarray} as defined in Appendix \ref{app:pdf_lambda}. This expression however neglects the fact that voids correspond to maxima of the source of displacement.\footnote{In terms of primordial density fluctuations, voids correspond to minima of the density field.
As MAK provides a good approximation of this field, we may safely jump from one concept to the other.} As the curvature of ${S_\Psi}= \lambda_1+\lambda_2+\lambda_3$ is correlated with $J_{l,m}$, we need to enforce that we are actually observing the eigenvalues in regions where the curvature of ${S_\Psi}$ is negative. A better expression is obtained if we constrain the Hessian $H$ (the matrix of the second derivatives) of ${S_\Psi}$ to be negative, which is the case in the vicinity of maxima of ${S_\Psi}$, the source of the displacement field. We derive in Appendix~\ref{app:void_tidal} a general formalism that allows us to compute numerically the probability $P(\lambda_1,\lambda_2,\lambda_3|\sigma_T,r,H<0)$ of observing the eigenvalues $\{\lambda_1,\lambda_2,\lambda_3\}$ given that we look in these regions. This formalism is a natural extension of the formula of \cite{Dor70} (for which a simple derivation is given in Appendix~\ref{app:pdf_lambda}). Of course, ``true voids'' have the additional constraint that $\lambda_i > 0$ for all $i=1,2,3$. As we assumed in previous sections that the eigenvalues are ordered according to $\lambda_1 > \lambda_2 > \lambda_3$, the constraint $\lambda_3 > 0$ is sufficient to study this case.
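The unconditional eigenvalue statistics can be checked with a short Monte Carlo: for a Gaussian random field, the shear at a random point is a Gaussian random symmetric matrix with one-point covariance $\langle T_{ij} T_{kl}\rangle = (\sigma^2/15)(\delta_{ij}\delta_{kl} + \delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk})$, and the ordered eigenvalues of such matrices follow the \cite{Dor70} distribution. A minimal sketch of such a check (our own illustration, not part of the DIVA pipeline; function names are ours):

```python
import numpy as np

def sample_eigenvalues(sigma, n_samples, rng):
    """Ordered eigenvalues of Gaussian random symmetric 3x3 matrices with
    <T_ij T_kl> = sigma^2/15 (d_ij d_kl + d_ik d_jl + d_il d_jk),
    the one-point covariance of the shear of a Gaussian field."""
    s2 = sigma ** 2
    # Covariance of the diagonal (T11, T22, T33): variance 3*s2/15 each,
    # pairwise covariance s2/15; off-diagonal entries have variance s2/15.
    cov_diag = s2 / 15.0 * (2.0 * np.eye(3) + np.ones((3, 3)))
    diag = rng.multivariate_normal(np.zeros(3), cov_diag, size=n_samples)
    off = rng.normal(0.0, np.sqrt(s2 / 15.0), size=(n_samples, 3))
    T = np.zeros((n_samples, 3, 3))
    T[:, 0, 0], T[:, 1, 1], T[:, 2, 2] = diag[:, 0], diag[:, 1], diag[:, 2]
    T[:, 0, 1] = T[:, 1, 0] = off[:, 0]
    T[:, 0, 2] = T[:, 2, 0] = off[:, 1]
    T[:, 1, 2] = T[:, 2, 1] = off[:, 2]
    # eigvalsh returns ascending eigenvalues; flip to lambda_1 >= lambda_2 >= lambda_3.
    return np.linalg.eigvalsh(T)[:, ::-1]

rng = np.random.default_rng(0)
lam = sample_eigenvalues(1.0, 100_000, rng)
trace = lam.sum(axis=1)  # the density contrast in linear theory
```

Within the Monte Carlo noise, the trace has zero mean and variance $\sigma^2$, and only a small fraction of unconstrained points (of order 10 per cent) satisfies the ``true void'' sign condition $\lambda_3 > 0$, which is why the conditioning discussed above matters.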
\subsection{Ellipticity statistics} \label{sec:ana_epsilon} Whether we use the conditional probability $P(\lambda_1,\lambda_2,\lambda_3|\sigma_T,r,H<0)$ or the unconditional one $P(\lambda_1,\lambda_2,\lambda_3|\sigma_T)$, both denoted by $\mathcal{P}(\lambda_1,\lambda_2,\lambda_3|\sigma_T,r)$ in the following, we may now express the probability of observing $\delta$, $\nu$, $\mu$ [defined in Equations~\eqref{eq:delta}, \eqref{eq:nu} and \eqref{eq:mu}] in terms of $\mathcal{P}$: \begin{equation} P(\mu,\nu,\delta|r,\sigma_T) = \mathcal{P}(\lambda_1, \lambda_2, \lambda_3|r,\sigma_T) \times \frac{4 (\delta-3)^2 \mu\nu}{(1+\mu^2+\nu^2)^3}\,, \label{eq:axis_distrib} \end{equation} with \begin{eqnarray} \lambda_1 & = & -\frac{1 + \mu^2 - 2 \nu^2 + \delta \nu^2}{1 + \mu^2 + \nu^2}\,, \\ \lambda_2 & = & -\frac{1 - 2\mu^2 + \delta \mu^2 + \nu^2}{1 + \mu^2 + \nu^2}\,, \\ \lambda_3 & = & -\frac{-2 + \delta + \mu^2 + \nu^2}{1 + \mu^2 + \nu^2}\,. \end{eqnarray} The ellipticity distribution of voids is thus \begin{equation} P(\varepsilon|\sigma_T,r) = \frac{1}{\mathcal{N}}\int_{\delta=-\infty}^{+\infty}\int_{\mu=1-\varepsilon}^1 P(\mu,\nu=1-\varepsilon,\delta|\sigma_T,r)\,\text{d}\mu\,\text{d}\delta , \end{equation} with \begin{equation} \mathcal{N} = \int_{\delta=-\infty}^{+\infty} \int_{\nu=0}^1\int_{\mu=\nu}^1\text{d}\mu\,\text{d}\nu\,\text{d}\delta\,P(\mu,\nu,\delta|\sigma_T,r)\,. \end{equation} The alternative distribution for ``true voids'' is given by enforcing that $\lambda_1 > 0$ (the smallest eigenvalue in this parametrization) and may be obtained by introducing the Heaviside function $\Theta(\lambda_1)$ in Eq.~\eqref{eq:axis_distrib} and renormalising. We note that the ellipticity that we are considering here is of dynamical nature \citep[as emphasized by][]{ParkLee06}. This comes in contrast with the first studies of void ellipticities due to redshift distortions by \cite{Ryden95} and \cite{MelottRyden96}. We do not discuss this earlier definition of ellipticity but only the latter one.
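The change of variables above satisfies two identities that are useful for checking implementations and for interpreting $\varepsilon$: the eigenvalues sum to $-\delta$, and $(1+\lambda_1):(1+\lambda_2):(1+\lambda_3) = \nu^2:\mu^2:1$, so that $\mu$ and $\nu$ act as axis ratios. A short numerical spot check (we derived these identities ourselves from the expressions above; the function name is ours):

```python
import numpy as np

def lambdas_from_axes(mu, nu, delta):
    """Eigenvalues (lambda_1, lambda_2, lambda_3) from the (mu, nu, delta)
    parametrization given in the text; lambda_1 is the smallest here."""
    D = 1.0 + mu ** 2 + nu ** 2
    l1 = -(1.0 + mu ** 2 - 2.0 * nu ** 2 + delta * nu ** 2) / D
    l2 = -(1.0 - 2.0 * mu ** 2 + delta * mu ** 2 + nu ** 2) / D
    l3 = -(-2.0 + delta + mu ** 2 + nu ** 2) / D
    return l1, l2, l3

# One admissible point of the integration domain (nu <= mu <= 1, delta < 3).
l1, l2, l3 = lambdas_from_axes(mu=0.8, nu=0.5, delta=-1.2)
```

The recovered ratios confirm that $\nu = \sqrt{(1+\lambda_1)/(1+\lambda_3)}$ and $\mu = \sqrt{(1+\lambda_2)/(1+\lambda_3)}$, which is why the inner integral above runs over $\mu \in [1-\varepsilon, 1]$ at fixed $\nu$.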
\subsection{Application to cosmology} \label{sec:analytics_cosmology} {\it Shapes of voids --} Now we may compute the ellipticity distribution of voids, $P(\varepsilon|{S_\Psi})$, for a given cosmology. The variance of the density field $\sigma_T^2$, assuming the power spectrum of the density fluctuations $P(k)$, is given by \begin{equation} \sigma^2_T = \frac{1}{2\pi^2} \int_0^{+\infty} P(k) W^2_{R_f}(k)\,k^2\text{d}k\,,\label{eq:var_j} \end{equation} with \begin{equation} W_{R_f}(k) = \exp\left(-\frac{k^2 R_f^2}{2}\right) \end{equation} the Fourier transform of the filter function used to smooth the density field on the scale $R_f$. In practice, we smooth the displacement field in Lagrangian coordinates to reduce noise and non-linear effects appearing at small scales in the MAK reconstruction ($\la 5$$h^{-1}$Mpc). Thus, we will compute the ellipticity distribution of voids, given that we smoothed on the scale $R_f$ in Lagrangian coordinates and that the local source of displacement of the void is ${S_\Psi}({\bf q})$. With the model $P(\lambda_1,\lambda_2,\lambda_3|\sigma_T,r,H<0)$ of Appendix~\ref{app:void_tidal}, we may also estimate the number of voids in each class we defined in Section~\ref{sec:definition}. As an illustration, we picked a standard $\Lambda$CDM {} cosmology, with $\Omega_\text{m}=0.30$, $\Omega_\text{b}=0.04$, $\sigma_8=0.77$, $h=0.65$, and estimated the fraction of voids in each class. When we smooth at 4$h^{-1}$Mpc{}, the results are: \begin{itemize} \item {\it true voids}: We estimate that these voids represent $\sim$40\% of the primordial voids, which correspond to underdensities in the primordial density fluctuations. \item {\it pancake voids}: Doing the same estimation as for ``true voids'', we find that $\sim$50\% of the primordial voids should be in this class. \item {\it filament voids}: They correspond to $\sim$10\% of the primordial voids.
\end{itemize} \begin{figure*} \includegraphics[width=\hsize]{figure2} \caption{\label{fig:ellipticity} {\it Comparison of MAK reconstructed and analytical ellipticity distribution --} We represent here the distribution of ellipticity of the voids, marginalised over all possible ${S_\Psi}$. We used black squares for the ellipticity distribution obtained using the MAK reconstructed displacement field applied on the simulation. The dashed blue curve is computed using the unconditional \protect\cite{Dor70} formula. The red curve is our new formula, obtained by requiring that voids are regions where the density field has positive curvature. The left panel gives the result for all voids (true, pancake and filament). The right panel gives the result for true voids only. The blue dashed curve and red solid curve overlap. All fields were smoothed with a Gaussian kernel of radius 4$h^{-1}$Mpc{}. } \end{figure*} We show in Figure~\ref{fig:ellipticity} the analytical distributions of ellipticity for the two cases in which the curvature of ${S_\Psi}$ is, or is not, constrained to be negative. The solid curve corresponds to the ellipticity distribution obtained using $P(\lambda_1,\lambda_2,\lambda_3|H<0)$. The dashed curve is obtained with the unconstrained distribution. In the left panel, we took all voids with $\lambda_1 > -1$. In the right panel, we restricted ourselves to ``true voids''. The difference between the two models is striking in the left panel, whereas in the right panel they give essentially the same prediction.
This can be understood by looking at the value of the correlation coefficient between the curvature of ${S_\Psi}$ and $T_{l,m}$ (also defined in Eq.~\eqref{eq:correl_curvature}) \begin{equation} r = -\frac{\mathcal{S}_4}{\sqrt{\mathcal{S}_2 \mathcal{S}_6}}\,, \end{equation} with \begin{equation} \mathcal{S}_n = \frac{1}{2\pi^2} \int_{k=0}^{+\infty} k^n P(k)\,\text{d}k\,. \label{eq:var_s_n} \end{equation} This coefficient is equal to $\sim$$-0.67$ for the aforementioned cosmology. This indicates that the two quantities are quite strongly correlated. Thus, enforcing that $T_{l,m}$ is positive definite causes the curvature of ${S_\Psi}$ to be preferentially negative. So, the two distributions of the right panel of Fig.~\ref{fig:ellipticity} should be similar. On the other hand, no such implication exists for the left panel, which leads to the clearly visible discrepancy between the two ellipticity distributions. {\it We note that the distributions of the right panel are only measurable using our void finder, as other void finders cannot distinguish truly expanding voids from pancake voids just by looking at their shape.} {\it Number of voids --} Now that we know the shape the voids should have in the context of linear theory, we would like to know how many of them should be present in the Universe. With our definition of voids, we may conveniently use the results obtained by \cite{BBKS} for Gaussian random fields. In particular, in their equation~(4.11b), they show that the average number density of maxima is equal to \begin{equation} n_\text{max} = \frac{29-6\sqrt{6}}{5^{3/2}\, 2 (2\pi)^2 R_*^3} \simeq 0.016 R_*^{-3}\,, \label{eq:num_maxima} \end{equation} with \begin{equation} R_* = \sqrt{3 \frac{\mathcal{S}_4}{\mathcal{S}_6}} \end{equation} in our notation. We note that this is a mean number, so we expect some fluctuations around the mean, which are difficult to compute analytically. Tests on Gaussian random fields suggest that the number of voids follows a Poisson distribution.
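The quantities $\mathcal{S}_n$, $r$, $R_*$ and $n_\text{max}$ are straightforward to evaluate numerically. The sketch below assumes a pure power-law spectrum $P(k) \propto k^{n_s}$ with the Gaussian filter of Section~\ref{sec:analytics_cosmology} folded in (so the filtered spectrum is $P(k)\,W_{R_f}^2(k)$); the normalisation is arbitrary since it cancels in all three derived quantities, and the function names are ours:

```python
import numpy as np

def spectral_moment(m, ns, Rf, kmax_factor=20.0, nk=200_000):
    """S_m = 1/(2 pi^2) * Int k^m P(k) W^2(k) dk for P(k) = k^ns and a
    Gaussian filter W(k) = exp(-k^2 Rf^2 / 2), by the trapezoidal rule."""
    k = np.linspace(0.0, kmax_factor / Rf, nk)
    f = k ** (m + ns) * np.exp(-(k * Rf) ** 2)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(k)) / (2.0 * np.pi ** 2)

def void_statistics(ns, Rf):
    """Curvature correlation r, BBKS scale R_*, and mean maxima density."""
    s2, s4, s6 = (spectral_moment(m, ns, Rf) for m in (2, 4, 6))
    r = -s4 / np.sqrt(s2 * s6)
    r_star = np.sqrt(3.0 * s4 / s6)
    n_max = (29.0 - 6.0 * np.sqrt(6.0)) \
        / (5.0 ** 1.5 * 2.0 * (2.0 * np.pi) ** 2) / r_star ** 3
    return r, r_star, n_max
```

For $n_s = 1$ the moments are analytic, giving $R_* = R_f$ and $r = -\sqrt{2/3} \simeq -0.82$; the value $-0.67$ quoted above requires the full $\Lambda$CDM transfer function rather than this power-law toy spectrum.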
We also expect this number to slightly overestimate the actual density of voids that we will find in simulations (Section~\ref{sec:mak_displacement}). In the next section, we confront the analytical model with the results given by DIVA applied to $N$-body simulations. \section{Test on $N$-body simulations} \label{sec:nbody_test} In this section, we identify voids in the $N$-body samples described in Section~\ref{sec:nbody_sample}. We then give a sketch (Section~\ref{sec:mak_displacement}) of the procedure we followed to perform the MAK reconstruction, which corresponds to the one described in \cite{lavaux08}. In Section~\ref{sec:results_z0}, we focus on the results obtained at $z=0$. First, we give an illustration of a void of each class in Section~\ref{sec:examples}. We then compare the results obtained using the displacement field given by the simulation and the one reconstructed by MAK (Section~\ref{sec:mak_vs_sim}). There, we also detail the number of voids detected and their ellipticities for both fields. We compare quantitatively the distribution of Eulerian volume ellipticity to the Lagrangian tidal ellipticity in Section~\ref{sec:shapes}. Finally, we check the validity of the analytical model on the simulated displacement field (Section~\ref{sec:simu_vs_analytic}). In Section~\ref{sec:evolution}, we look at the evolution of the mean ellipticity for a simulation with $w=-1$ and in the analytical model. At last, in Section~\ref{sec:two_cosmologies}, we assess the possibility of measuring the evolution of the mean ellipticity in two simulations where $w$ is either $-1$ or $-0.5$. \subsection{The $N$-body simulations} \label{sec:nbody_sample} To test our void finder, we use three large-volume but medium-resolution $N$-body simulations with $N=512^3$ particles, $L=500$$h^{-1}$Mpc, $\Omega_\text{m}=0.30$, $\Omega_\Lambda=0.70$, $H=65$~km s$^{-1}$ Mpc$^{-1}$, $n_s=1$, $\sigma_8 = 0.77$, $\Omega_\text{b}=0.04$.
The $N$-body simulations contain only dark matter, and we include the effect of baryons only through power spectrum features in the initial conditions. This essentially reduces power on scales smaller than the sound horizon and introduces Baryonic Acoustic Oscillations. The first two $N$-body simulations ($\Lambda$SIM and $\Lambda$SIM2) correspond to a standard $\Lambda$CDM {} cosmology for which the equation of state is given by $w=-1$. $\Lambda$SIM and $\Lambda$SIM2 have exactly the same cosmology but different initial conditions. They will allow us to assess the impact of looking at two different realisations of the initial conditions. The third, wSIM, assumes an equation of state $w=-0.5$ for the Dark Energy. To build the initial conditions, we modified the \verb,GRAFIC, \citep{grafic2} package to use the power spectrum generated by the \verb,CAMB, package \citep{Lewis:1999bs}. We reach a sub-Mpc resolution scale, which is sufficient for a proper description of most voids (1-20 $h^{-1}$Mpc). The moderately large volume allows us to account for large-scale tidal field effects and cosmic variance effects. We used the parallelised version of the \verb,RAMSES, $N$-body code \citep{ramses} and ran it on both the Cobalt NCSA supercluster and the Teragrid NCSA supercluster through Teragrid facilities \citep{teragrid}. We modified \verb,RAMSES, to simulate cosmologies with a dark energy equation of state different from $w=-1$. To account for the impact of clustering, we analyse the displacement field for which the mass of dark matter halos has been placed at the centre of mass of these halos. To do so, we construct a halo catalogue using a Friends-of-Friends algorithm with the traditional linking length $l=0.2$ \citep{Davis85,EFWD}. We impose a minimum membership threshold of $N_\text{p} = 8$ particles. In practice, this prescription should mostly erase the information contained in halos while retaining the dynamics outside of them.
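The halo-collapsing step can be sketched as follows, assuming equal-mass particles and ignoring periodic wrapping (both our simplifications; the function name is ours):

```python
import numpy as np

def collapse_halos(pos, group_id, n_min=8):
    """Move every particle of a FoF group with at least n_min members to
    the group's centre of mass; field particles (group_id < 0) are kept.
    Assumes equal particle masses and no periodic wrapping."""
    out = pos.copy()
    ids, counts = np.unique(group_id[group_id >= 0], return_counts=True)
    for gid, n in zip(ids, counts):
        if n >= n_min:
            members = group_id == gid
            out[members] = pos[members].mean(axis=0)
    return out

# Tiny example: 8 clustered particles (group 0), a 2-particle pair
# (group 1, below threshold) and one field particle (label -1).
pos = np.vstack([np.random.default_rng(1).normal(5.0, 0.1, size=(8, 3)),
                 [[0.0, 0.0, 0.0], [0.2, 0.0, 0.0]],
                 [[9.0, 9.0, 9.0]]])
gid = np.array([0] * 8 + [1, 1] + [-1])
new_pos = collapse_halos(pos, gid)
```

Only the populated group is collapsed; the below-threshold pair and the field particle keep their positions, mimicking the prescription of erasing halo-internal information while leaving the surrounding flow intact.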
Though the use of such a low particle-number threshold is questionable for studying the properties of the halos themselves, we are not interested in them here. We are only interested in checking that we keep most of the information useful to study voids and their overall dynamics, even though we destroy the small-scale information. The above prescription has already been successfully applied to the study of peculiar velocities with MAK \citep{lavaux08}. We note that we will only use the halo catalogue to do the MAK reconstruction. All the tests of the displacement field of the simulation are run on the {\it particles} of the simulation. We note that the initial conditions of the simulation present two particularities that must be taken into account. The largest mode of the simulation box corresponds to $k_\text{low}=1.25\times 10^{-2}\,h$~Mpc$^{-1}$, thus introducing a sharp low-$k$ cut. Additionally, \verb,GRAFIC, applies a Hanning filter on the initial conditions to avoid aliasing. In practice, \verb,GRAFIC, multiplies the cosmological power spectrum by the following filter: \begin{equation} W_\text{hanning}(k) = \left\{ \begin{array}{ll} \cos\left(\frac{1}{2} k \Delta x\right) & \text{if } k \Delta x \le \pi \\ 0 & \text{elsewhere} \end{array} \right., \end{equation} with $\Delta x = 0.976$$h^{-1}$Mpc{} the Lagrangian grid step size of our simulations. These two features must be introduced into the power spectrum used in Eqs.~\eqref{eq:var_j} and \eqref{eq:var_s_n} to make correct predictions for the simulation. In real observational data, no such features should be present. \subsection{MAK reconstruction and void identification} \label{sec:mak_displacement} For all the different tests that we present in the following, we have run only one MAK reconstruction, on the full halo catalogue. We chose a resolution of $256^3$ mesh elements generated following the algorithm of \cite{lavaux08}.
This algorithm consists of splitting each mass into an integer number of MAK mesh elements, with the constraints that the splitting must be fair and that the total number of mesh elements is fixed, equal to $256^3$. This method works well and has, so far, not been prone to biases. Choosing this number of elements gives us a resolution of $\sim$2$h^{-1}$Mpc{} on the Lagrangian coordinates of the displacement field. We cannot run it at the full simulation resolution due to the high CPU-time cost, which prevents us from performing several reconstructions. One reconstruction takes $\sim$13,000 accumulated CPU-hours on the Teragrid cluster at NCSA. However, as the complexity grows as $O(N^{2.25})$, increasing the resolution to $512^3$ would have consumed $\sim 10^6$ CPU-hours. So we limited ourselves to running the reconstruction at the $256^3$ resolution, the expected worst case for the performance of MAK. At higher redshift, the MAK reconstruction converges faster and gives an increasingly precise displacement field compared to the one given by the simulation. These two features are both caused by the decrease of small-scale non-linearities at earlier times. We took the halo catalogue built on $\Lambda$SIM at $z=0$ and ran a reconstruction on it. The other results presented in this paper use the displacement field given directly by the simulation, after having checked that the reconstruction is indeed sufficient for our purpose. This is the case as we will not look at voids smaller than the $\sim$1$h^{-1}$Mpc{} scale in Lagrangian coordinates. We chose two different smoothing scales at which we study the displacement field for voids: 2.5$h^{-1}$Mpc{} and 4$h^{-1}$Mpc{}. Once the displacement field has been smoothed, we compute the eigenvalues and the divergence in Fourier space. We then locate the maxima in the divergence of the displacement field.
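This Fourier-space procedure can be sketched as follows (a minimal sketch on a small periodic grid; the grid size, test field and function names are illustrative, and the maximum search below uses a simple six-neighbour comparison rather than the iterative search of the actual pipeline):

```python
import numpy as np

def shear_and_divergence(psi, box, Rf):
    """Gaussian-smoothed shear T[l, m] and divergence S_Psi = sum_l T[l, l]
    of a displacement field psi of shape (3, N, N, N) on a periodic
    Lagrangian grid of side `box`."""
    N = psi.shape[1]
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=box / N)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kvec = (kx, ky, kz)
    W = np.exp(-0.5 * (kx ** 2 + ky ** 2 + kz ** 2) * Rf ** 2)
    psi_k = np.fft.fftn(psi, axes=(1, 2, 3)) * W
    T = np.empty((3, 3, N, N, N))
    for l in range(3):
        for m in range(3):
            T[l, m] = np.fft.ifftn(1j * kvec[m] * psi_k[l]).real
    return T, T[0, 0] + T[1, 1] + T[2, 2]

def local_maxima(S):
    """Mask of points strictly larger than their six periodic neighbours."""
    mask = np.ones_like(S, dtype=bool)
    for axis in range(3):
        for shift in (1, -1):
            mask &= S > np.roll(S, shift, axis=axis)
    return mask

# Demo: a single-mode displacement field whose divergence peaks at the origin.
N, box, Rf = 32, 32.0, 1.0
k0 = 2.0 * np.pi / box
x = np.arange(N) * (box / N)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
psi = np.array([np.sin(k0 * X), np.sin(k0 * Y), np.sin(k0 * Z)])
T, S = shear_and_divergence(psi, box, Rf)
peaks = local_maxima(S)
```

For this single-mode field the spectral derivative is exact, so the smoothed divergence at the origin equals $3 k_0 \exp(-k_0^2 R_f^2/2)$ and the origin is the unique divergence maximum.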
At each maximum, we compute the ellipticity $\varepsilon$ with the help of Eq.~\eqref{eq:epsilon}, where the $\lambda_i$ are taken as the eigenvalues of $T_{l,m}({\bf q})$. The displacement shear tensor is computed numerically from ${\bf \Psi}({\bf q})$ in Fourier space: \begin{equation} T_{l,m}({\bf q}) = \frac{1}{(2\pi)^3} \int_{\bf k} \text{d}^3{\bf k}\, i k_m \hat{\Psi}_l({\bf k}) \text{e}^{i {\bf k}\cdot {\bf q}},\label{eq:fourier_tidal} \end{equation} where $\hat{\Psi}_l({\bf k})$ is the Fourier transform of the displacement field in Lagrangian coordinates. All these quantities were evaluated using Fast Fourier Transforms on the Lagrangian grid. In summary, the steps are the following: \begin{itemize} \item[-] First, we prepare a catalogue for a MAK reconstruction. This involves doing a fair equal-mass splitting of the objects. \item[-] Next, we run the MAK reconstruction. \item[-] After the reconstruction is finished, we put the computed displacement field given by MAK on the homogeneous Lagrangian grid. \item[-] We smooth this displacement field and compute the tidal field $T_{i,j}$ in Lagrangian coordinates in Fourier space using Eq.~\eqref{eq:fourier_tidal}. \item[-] Now we compute ${S_\Psi}({\bf q})$ on the grid using Eq.~\eqref{eq:delta}, which corresponds to summing the three eigenvalues of $T_{i,j}$. \item[-] We look for local maxima in ${S_\Psi}({\bf q})$ using an iterative steepest descent algorithm. This gives us the list of the voids in Lagrangian coordinates. \item[-] Using these coordinates, we now compute the ellipticity using the eigenvalues of $T_{i,j}$ at the location of these maxima. \item[-] For computing the void boundary, we execute the modified Watershed transform of Section~\ref{sec:definition}. This identifies the Lagrangian domain of the voids. The boundary of this domain is then transported using the displacement field to recover the Eulerian void volume.
\end{itemize} We now look at the results obtained with MAK, the simulation and the analytical model at $z=0$ in the next section. \subsection{Results at $z=0$} \label{sec:results_z0} \subsubsection{Examples of found voids} \label{sec:examples} Before going to the statistical study of the local shape $\varepsilon_\text{DIVA}$ of the voids found by DIVA, we look at a visual example of each void type. We chose a filtering scale of 4$h^{-1}$Mpc{} in Lagrangian coordinates. We selected one void of each class. These three voids have the following properties: \begin{itemize} \item The first void is a true void. The eigenvalues of the tidal field $T_{l,m}({\bf q})$ \eqref{eq:tidal_def} at the centre are $(1.2,0.84,0.69)$ along the three axes. We thus derive the ellipticity $\varepsilon = 0.13$. The volume of the void, in Lagrangian coordinates, is $V_\text{L} = 75240$$h^{-3}$Mpc$^{3}$, which corresponds to an equivalent spherical volume given by a sphere of radius $R_\text{equiv} = \left(3 V_\text{L}/(4\pi)\right)^{1/3} = 26$$h^{-1}$Mpc{}. \item The second void is a pancake void. The eigenvalues of the tidal field are $(1.1,0.11,-0.60)$, the ellipticity is $0.563$ and the Lagrangian volume is $V_\text{L}= 1560 h^{-3}$~Mpc$^{3}$, with $R_\text{equiv} = 7.2$$h^{-1}$Mpc{}. \item The last void is a filament void. The eigenvalues of the tidal field are $(0.99,-0.24,-0.61)$, the ellipticity is $\varepsilon=0.557$ and the Lagrangian volume is $V_\text{L}= 260 h^{-3}$~Mpc$^{3}$, with $R_\text{equiv} = 4.0$$h^{-1}$Mpc{}. \end{itemize} Those three voids are represented in three dimensions, along with the particles of the simulation in the same region, in Fig.~\ref{fig:divavoid}. We note that the true void is the largest one. We expect this effect, as true voids expand in three directions and are thus more likely to be bigger than pancake voids and filament voids.
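As a consistency check, the numbers quoted above can be reproduced directly. The quoted ellipticities are consistent with the axis-ratio form $\varepsilon = 1 - \sqrt{(1+\lambda_3)/(1+\lambda_1)}$ (with $\lambda_1 \ge \lambda_2 \ge \lambda_3$); we infer this form from the worked examples rather than quoting Eq.~\eqref{eq:epsilon} itself, so treat the sketch as our reading of the definition:

```python
import numpy as np

def ellipticity(l1, l2, l3):
    """Ellipticity from the ordered shear eigenvalues (l1 >= l2 >= l3),
    as one minus the axis ratio between the slowest and fastest
    expanding directions. Form inferred from the worked examples."""
    return 1.0 - np.sqrt((1.0 + l3) / (1.0 + l1))

def equivalent_radius(volume_lagrangian):
    """Radius of the sphere enclosing the same Lagrangian volume."""
    return (3.0 * volume_lagrangian / (4.0 * np.pi)) ** (1.0 / 3.0)
```

This reproduces the three quoted ellipticities and equivalent radii to within the rounding of the quoted eigenvalues.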
For these three cases, the surface delimited by DIVA seems to fit nicely the structures located on the boundaries. In the case of the pancake and filament voids, we note that the cavity seems to be split into two pieces. This splitting is due to the Watershed transform criterion, which isolated two void cavities separated by a structure. \begin{figure*} \includegraphics[width=\hsize]{figure3} \caption{\label{fig:divavoid} {\it Example of voids} -- We illustrate here the voids that are found using DIVA. Each of these belongs to one of the void classes that we listed in Section~\ref{sec:definition}. The scale of the box is the same for the three cases: the side corresponds to 50$h^{-1}$Mpc{}. } \end{figure*} \subsubsection{MAK vs Simulation} \label{sec:mak_vs_sim} \begin{figure*} \includegraphics[width=\hsize]{figure4} \caption{\label{fig:comparison_simu_vs_mak} {\it MAK reconstructed ellipticity distribution vs. ellipticity distribution from simulated displacement field} -- This figure gives the ellipticity distribution computed using either the MAK reconstructed displacement field (solid black line) or the simulated displacement field (solid thick gray line). We represent the dispersion expected if the error on estimating the distribution comes from the number of voids in a given bin. We assumed a Poisson distribution for the estimation of the error bars. The displacement fields were both smoothed with a Gaussian kernel of radius $R_f=2.5$$h^{-1}$Mpc{} in the left panel and $R_f=4$$h^{-1}$Mpc{} in the right panel.} \end{figure*} We now concentrate on the properties of the voids at $z=0$. This is the time when the density is the most non-linear and the reconstruction is the most difficult, and therefore represents a worst-case example. We take the displacement field given by the simulation as the field of reference to study voids. Indeed, this field has been obtained by completely solving the equation of motion for each particle.
In this section, we will first compare the properties of the voids found using this field and the MAK reconstructed field. Then, we will focus on how it compares to the analytic model. We represent in Fig.~\ref{fig:comparison_simu_vs_mak} the distribution of ellipticities obtained using both the reconstructed and the simulated displacement field. We also give the number of voids found in the simulated displacement field, in the reconstructed displacement field, and the expected number of maxima according to Eq.~\eqref{eq:num_maxima} (Table~\ref{tab:num_voids}). To allow for a void-by-void comparison, we tried to match the voids found using the two displacement fields. We considered that two voids are the same if the distance between them is less than a Lagrangian grid cell diagonal ($d \le \sqrt{3}\,l_\text{cell}$, with $l_\text{cell}$ the length of the side of a cell). At 2.5$h^{-1}$Mpc{} smoothing, the agreement between the ellipticity distribution derived from the simulated displacement and the MAK reconstructed displacement is very good, though the high-ellipticity tail seems a little different in the two cases. This is actually due to a selection effect which, unfortunately, is correlated with the ellipticity. If we look at the number of voids detected using the two fields (Table~\ref{tab:num_voids}), we see that MAK misses about 10\% of the voids detected in the simulation. The distribution of ellipticity of those voids happens to be skewed towards higher ellipticities. Thus it seems that we tend to miss the most elliptical voids. This behaviour is expected, as these voids tend to be pancake voids. They are closing along one direction, and the more elliptical they are, the faster they are closing. If they close, the particles of these voids shell-cross and MAK is not able to reconstruct the displacement field. Looking at this same distribution, but for a 4$h^{-1}$Mpc{} smoothing, we now barely see a difference between the two curves.
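The void-by-void matching can be sketched as follows (brute force and without periodic boundaries, both our simplifications; a KD-tree would be preferable for catalogues of $\sim$$10^4$ voids):

```python
import numpy as np

def match_voids(centres_a, centres_b, l_cell):
    """For each void centre in catalogue A, return the index of the
    nearest centre in catalogue B if it lies within sqrt(3) * l_cell
    (one Lagrangian cell diagonal), else -1."""
    d2 = ((centres_a[:, None, :] - centres_b[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)
    ok = d2[np.arange(len(centres_a)), nearest] <= 3.0 * l_cell ** 2
    return np.where(ok, nearest, -1)

a = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
b = np.array([[0.5, 0.5, 0.5], [30.0, 0.0, 0.0]])
matches = match_voids(a, b, l_cell=1.0)
```

The first void matches its counterpart half a cell away; the second has no candidate within the cell-diagonal tolerance and is flagged as unmatched, like the $\sim$10\% of voids missed by MAK.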
We indeed checked that the ellipticity distribution of the voids that have not been identified using the MAK reconstructed displacement field is similar to the distribution of the other voids. The number of voids detected in the simulation is smaller than the expected average number of minima (Table~\ref{tab:num_voids}). This is also due to the destruction of minima by the collapse of large-scale structures. When we look at the beginning of the simulation, we find 11,485 minima (for $R_f=4$$h^{-1}$Mpc), and this number steadily decreases to the value reported in the table as the simulation evolves. We estimated the mean and the variance of the number of minima using 20 realisations of a Gaussian random field with the same cosmology as the simulation. We found that the mean should be 11,762 with a standard deviation of 58, which is in agreement with the result given by the analytic computation. \begin{table} \begin{center} \begin{tabular}{cccc} \hline \hline \multirow{2}{1cm}{Filter} & Predicted & Displacement & Number of \\ & average number & field & candidates \\ \hline \hline 2.5$h^{-1}$Mpc{} & 42908 & Simulated & 31002 \\ & & Reconstructed & 28397 \\ 4$h^{-1}$Mpc{} & 11706 & Simulated & 10643 \\ & & Reconstructed & 9412\\ \hline \end{tabular} \end{center} \caption{\label{tab:num_voids} {\it Unconditioned number of voids in $\Lambda$SIM} -- We give here the unconditioned number of voids found within the volume of the simulation, (500 $h^{-1}$Mpc)$^3$. The predictions are obtained using Equation~\eqref{eq:num_maxima} applied to the power spectrum of primordial density fluctuations multiplied by the Fourier transform of the filter indicated in the first column. The last column gives the actual number of void candidates that we found using the displacement field. } \end{table} \begin{figure*} \includegraphics[width=\hsize]{figure5} \caption{\label{fig:ellipticity_scatter} {\it Ellipticity derived from the simulated displacement field vs.
the MAK displacement field} -- We represent here a scatter plot of the ellipticities of the voids that were detected in both the MAK reconstructed displacement field and the simulated displacement field, both smoothed at 4$h^{-1}$Mpc{}. In the left panel, we show the raw joint probability distribution of the two ellipticities. The gray-scale is linear in the probability density. In the right panel, we represent the conditional distribution of $\varepsilon_\text{MAK}$ given $\varepsilon_\text{SIM}$, on the left of the thick vertical line. On the right of this same line, we represent this same distribution if one uses the estimated standard deviation $\sigma_\varepsilon$ of the error. It is estimated using the distribution between the two vertical dotted lines. The estimated standard deviation is $\sigma_\varepsilon = 0.018$.} \end{figure*} Using the match between voids coming from the two fields, we can build a statistical error model in the form of the joint probability distribution $P(\varepsilon_\text{MAK},\varepsilon_\text{SIM})$ of getting an ellipticity $\varepsilon_\text{MAK}$ using the MAK displacement and an ellipticity $\varepsilon_\text{SIM}$ using the simulated displacement. It is important to check $P(\varepsilon_\text{MAK},\varepsilon_\text{SIM})$ if, as we will do in the future, we want to estimate cosmological parameters from the ellipticity distribution. This function tells us what accuracy we may expect on the ellipticity measurements. We represent this probability in the left panel of Figure~\ref{fig:ellipticity_scatter}. We see a strong correlation, already seen in Fig.~\ref{fig:comparison_simu_vs_mak}, indicating a highly accurate reconstruction. We also see that the error seems low.
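The width $\sigma_\varepsilon$ of the error model can be estimated from the matched pairs as the rms discrepancy within a band of $\varepsilon_\text{SIM}$, a discrete analogue of the integral estimate used for Fig.~\ref{fig:ellipticity_scatter}. A sketch on synthetic pairs with a known error of 0.018 (the pairs here are simulated by us, not the actual void catalogues):

```python
import numpy as np

def sigma_eps(eps_sim, eps_mak, band=(0.15, 0.40)):
    """RMS discrepancy between the two ellipticity estimates, restricted
    to matched pairs whose simulated ellipticity lies in `band`."""
    sel = (eps_sim >= band[0]) & (eps_sim <= band[1])
    return np.sqrt(np.mean((eps_mak[sel] - eps_sim[sel]) ** 2))

# Synthetic matched pairs with a Gaussian error of width 0.018.
rng = np.random.default_rng(42)
eps_sim = rng.uniform(0.0, 0.6, size=200_000)
eps_mak = eps_sim + rng.normal(0.0, 0.018, size=eps_sim.size)
```

On such data the estimator recovers the input width to a few parts in a thousand, illustrating why restricting to a well-populated band of $\varepsilon_\text{SIM}$ gives a stable estimate.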
We represent to the left of the thick red solid line of the right panel the conditional probability $P(\varepsilon_\text{MAK}|\varepsilon_\text{SIM})$ that we computed using: \begin{equation} P(\varepsilon_\text{MAK}|\varepsilon_\text{SIM}) = \frac{P(\varepsilon_\text{MAK},\varepsilon_\text{SIM})}{\int_{\varepsilon=0}^1 P(\varepsilon,\varepsilon_\text{SIM}) \,\text{d}\varepsilon} \end{equation} wherever it was possible to evaluate the denominator. This conditional probability is mostly Gaussian, so we estimated the standard error by computing the mean standard deviation of the error on the ellipticity for the distribution between the two dotted red lines, $\varepsilon_\text{SIM} \in [\varepsilon_{S,min};\varepsilon_{S,max}]$ with $\varepsilon_{S,min}=0.15$ and $\varepsilon_{S,max}=0.40$. With this notation, we computed \begin{multline} \sigma_\varepsilon = \frac{1}{\varepsilon_{S,max}-\varepsilon_{S,min}} \\ \int_{\varepsilon=\varepsilon_{S,min}}^{\varepsilon_{S,max}}\text{d}\varepsilon \sqrt{ \int_{\varepsilon_\text{\sc mak}=0}^1\text{d}{\varepsilon_\text{\sc mak}} (\varepsilon_\text{\sc mak}-\varepsilon)^2 P(\varepsilon_\text{\sc mak}|\varepsilon)}. \end{multline} Within the interval limited by the two dotted red lines, we estimate $\sigma_\varepsilon \simeq 0.018$. To the right of the vertical red solid line, we show the function \begin{equation} P(\varepsilon|\varepsilon_\text{SIM},\sigma_\varepsilon) = \frac{1}{\sqrt{2\pi} \sigma_\varepsilon} \text{e}^{-\frac{1}{2\sigma_\varepsilon^2}(\varepsilon-\varepsilon_\text{SIM})^2} \end{equation} with $\sigma_\varepsilon$ as estimated above. We note that this model of the conditional probability (right of the vertical solid line) looks much like the actual ellipticity discrepancy (left of the vertical solid line). \subsubsection{Volume ellipticity $\varepsilon_\text{vol}$ vs.
Tidal ellipticity $\varepsilon_\text{DIVA}$} \label{sec:shapes} \begin{figure} \includegraphics[width=\hsize]{figure6} \caption{\label{fig:epsilon_comp} {\it Tidal ellipticity vs volume ellipticity} -- This plot gives a comparison of the ellipticity of the void as determined either using the shear of the displacement field [$\varepsilon_\text{DIVA}$, Eq.~\eqref{eq:epsilon}] or using the overall shape of the void [$\varepsilon_\text{vol}$, Eq.~\eqref{eq:epsilon_shape}].} \end{figure} In Fig.~\ref{fig:epsilon_comp}, we represent a comparison between the ellipticity of the volume, $\varepsilon_\text{vol}$, and the local tidal ellipticity, $\varepsilon_\text{DIVA}$. The volume ellipticity is computed using Eq.~\eqref{eq:epsilon_shape}, with the inertia tensor $M$ computed from the smoothed displacement field. Visually, the two variables seem loosely correlated. We observe that they do follow each other, but this correlation gets worse as the ellipticity increases. This is expected, as the volume ellipticity is a non-local quantity and thus is sensitive to all local ellipticities in the void volume. This is what makes $\varepsilon_\text{vol}$ more difficult to use as a precise cosmological probe. We show in Appendix~\ref{app:local_vs_global_ellipticity} that these two quantities are indeed related, but only to first order. This explains why the scatter is smaller for small ellipticities than for high ellipticities. {\it The volume ellipticity, which has been used so far, thus seems to be a poor proxy of the tidal ellipticity, which we manage to recover with extreme precision using our Lagrangian orbit reconstruction technique.} For the rest of this paper, we will only use the tidal ellipticity. \subsubsection{Simulation vs Analytic} \label{sec:simu_vs_analytic} In this section, we compare the results obtained on the simulated displacement field and the prediction given by the analytical model of Section~\ref{sec:analytic_voids}.
We represent in the left panel of Fig.~\ref{fig:ellipticity} the ellipticity distribution of all observable voids as defined in Section~\ref{sec:definition}. The black points give the ellipticity distribution for voids in the reconstructed displacement field. We estimated error bars assuming a Poisson distribution of the number of voids in each bin. The red line is obtained using the method of Appendix~\ref{app:void_tidal}. The dashed blue line is obtained through the formula of \cite{ParkLee06}, where no constraints are put on the curvature of the density field where the ellipticity of the void is measured. The agreement between the black points and the solid red curve is striking. It shows the importance of our constraint that we only look in the negatively curved part of ${S_\Psi}$. We note that this should be a robust feature for voids found with any void finder. Any probe of the ellipticity in voids looks in regions of the density field that are strongly underdense, and thus should come mainly from initially underdense regions. In these primordial underdense regions, the curvature of the density field is likely to be positive, which corresponds to a negatively curved ${S_\Psi}$ in our case. {\it Our measured ellipticity distribution in the simulation is very clean because we rely on features of the displacement field which can be understood in terms of linear theory even at redshift $z=0$. } In the right panel of Fig.~\ref{fig:ellipticity}, we show the same ellipticity distribution but only for ``true voids''. The comparison between simulation and the analytic model is also successful here. As we noted in Section~\ref{sec:analytics_cosmology}, there is no real difference between the two models in this case. However, there is no way a purely geometrical analysis could yield this curve from the observation of galaxy catalogues, as this requires the knowledge of the signs of the eigenvalues of $T_{l,m}$ (Eq.~\ref{eq:tidal_field}).
We note a small shift of $\sim$1-3\% between the model and the reconstruction. We find, using numerical experiments with Gaussian random fields, that a fraction of this shift may be explained by the finite bin size and the very strong steepness of the distribution represented in this panel. \subsection{Evolution with redshift} \label{sec:evolution} \begin{figure*} \includegraphics[width=\hsize]{figure7} \caption{\label{fig:evolution} {\it Evolution of the mean ellipticity with redshift} -- We represent the evolution of the mean ellipticity with redshift for the two $\Lambda$CDM simulations (left panel) and the wCDM simulation (right panel). The mean ellipticities deduced from $\Lambda$SIM are represented using square symbols, and the ones from $\Lambda$SIM2 using triangular symbols. The solid curve is obtained using the statistical model of Appendix~\ref{app:void_tidal} and changing $\sigma_8$ according to the growth factor as specified in Eq.~\eqref{eq:evolve}. The bottom panels give the relative difference, in percentage, between the simulation and the analytical model. In the bottom left panel, the points at $\sim$2\% correspond to the square symbols.} \end{figure*} In this section, we focus on the evolution with redshift of the ellipticity of voids. This evolution has been shown analytically to be a sensitive probe of $w$ by \cite{LeePark09}. We took snapshots of the simulation at different redshifts and computed the ellipticity distribution in each of these snapshots. We note at least two main differences compared to what would happen with observations. First, we may have a systematic effect in the evolution of the mean ellipticity as we are studying only a single realisation of initial conditions. Second, the number of available voids should be a non-trivial function of redshift.
Third, because of both volume and selection effects, the error bars should be large at both small and large redshift, while attaining a minimum at some intermediate redshift. The exact numbers for this last problem depend on the specific galaxy survey considered, in particular the apparent magnitude limit and the incompleteness function. To compute the predicted ellipticity distribution at any given redshift $z$, we simply scale the $\sigma_8(z)$ parameter using the growth factor $D(z)$: \begin{equation} \sigma_8(z) = \sigma_8(z=0) \times \frac{D(z)}{D(z=0)}\label{eq:evolve}. \end{equation} For clarity, we only represent in Fig.~\ref{fig:evolution} the mean ellipticity $\bar{\varepsilon}$, defined as \begin{equation} \bar{\varepsilon}(z) = \int_0^1 \varepsilon P(\varepsilon|z) \text{d}\varepsilon \end{equation} with $P(\varepsilon|z)$ the probability distribution of the ellipticity $\varepsilon$ at redshift $z$. The red solid line gives the prediction of the analytic model of Section~\ref{sec:analytic_voids}. The black points are obtained from the simulated field. The error bar on the mean ellipticity is estimated using \begin{equation} \sigma_{\bar{\varepsilon}} \simeq \frac{\sigma_\varepsilon}{\sqrt{N_\text{voids}}} \end{equation} with $\sigma_\varepsilon = 0.02$ as estimated in Section~\ref{sec:mak_vs_sim} for a smoothing scale of 4$h^{-1}$Mpc{}. For $N_\text{voids}\simeq 10^4$, this gives a typical error of $\sigma_{\bar{\varepsilon}} = 2\times 10^{-4}$ on the mean. This error bar corresponds to the uncertainty of the ellipticity derived from the MAK reconstructed displacement field with respect to the one obtained from the simulated displacement field. This gives an interval within which the mean ellipticity can be trusted. In the left panel of Fig.~\ref{fig:evolution}, we see that the agreement between the analytical model and the simulated displacement field (square symbols, ``Simulation 1'') is very good for $w=-1$.
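The redshift scaling of Eq.~\eqref{eq:evolve} and the error estimate above can be checked numerically. The sketch below (Python; the flat-background growth-factor integral is a standard textbook expression, and $\Omega_m=0.3$ is an illustrative value not taken from this paper) scales $\sigma_8$ with redshift and reproduces the quoted $\sigma_{\bar{\varepsilon}} = 2\times 10^{-4}$:

```python
import math

def growth_factor(z, omega_m=0.3, steps=20000):
    # Unnormalised linear growth factor for a flat LCDM background:
    # D(a) ~ H(a) * integral_0^a da' / (a' H(a'))^3, with H in units of H0.
    omega_l = 1.0 - omega_m
    H = lambda a: math.sqrt(omega_m / a**3 + omega_l)
    a_max = 1.0 / (1.0 + z)
    da = a_max / steps
    integral = sum(da / ((i + 0.5) * da * H((i + 0.5) * da))**3
                   for i in range(steps))          # midpoint rule
    return 2.5 * omega_m * H(a_max) * integral

def sigma8_of_z(z, sigma8_0=0.77):
    # Eq. (evolve): sigma8(z) = sigma8(0) * D(z)/D(0)
    return sigma8_0 * growth_factor(z) / growth_factor(0.0)

# Error on the mean ellipticity: sigma_eps / sqrt(N_voids)
sigma_mean = 0.02 / math.sqrt(1e4)   # = 2e-4, as quoted in the text
```

For an Einstein-de Sitter background ($\Omega_m=1$) this normalisation gives $D(a)=a$, which provides a quick correctness check of the integral.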
Looking at the relative error between the model and the simulation (lower-left panel) yields a systematic $\sim$2\% deviation relative to the expectation. The main reason is the inexactness of the initial conditions due to the finite number of modes available in the simulation box. Even though the power spectrum is normalised to $\sigma_8 = 0.77$, the realised $\sigma_8$ of the displacement field is $0.783$. This intrinsically produces a shift of 1.7\% towards positive errors, which is exactly what we observe. As we see, this bias is relatively modest. However, it becomes observable given the small amplitude of the expected reconstruction errors, as indicated by the small error bars. To check this effect, we ran another simulation with the exact same cosmology but with another seed. This time, we measured $\sigma_8=0.7688$ in the initial conditions, which corresponds to a small statistical fluctuation of $-0.15\%$. We plotted the corresponding evolution of the mean ellipticity in the left panel of Fig.~\ref{fig:evolution} (triangular symbols, ``Simulation 2''). This will not prevent applying this method to observations, for two reasons. First, we will marginalise over the bias, and so the systematic shift will disappear. Second, each considered slice should be a nearly independent random realisation of a Gaussian random field normalised to the same $\sigma_8$. Thus the points should be scattered around the dashed horizontal ``0\%'' line and not systematically pushed up or down. \subsection{$w=-0.5$ vs $w=-1.0$} \label{sec:two_cosmologies} In all the previous sections, we studied the case of a standard $\Lambda$CDM{} cosmology with $w=-1$. However, we first started to look at voids to check whether they may be good tracers of the properties of the dark energy, and in particular of the growth factor. We now focus on the results obtained from wSIM, a wCDM simulation with $w=-0.5$, as specified in Section~\ref{sec:nbody_sample}.
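The quoted shifts follow from simple arithmetic on the realised $\sigma_8$ values; a minimal check (Python, using only numbers stated in the text):

```python
sigma8_target = 0.77   # normalisation of the input power spectrum
sigma8_sim1 = 0.783    # realised value in the displacement field of LambdaSIM
sigma8_sim2 = 0.7688   # realised value in the second simulation's initial conditions

# Relative fluctuations of the realised sigma8 about the target
shift1 = (sigma8_sim1 - sigma8_target) / sigma8_target  # ~ +1.7%
shift2 = (sigma8_sim2 - sigma8_target) / sigma8_target  # ~ -0.15%
```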
The results are presented in the right panel of Fig.~\ref{fig:evolution} and in Fig.~\ref{fig:different_cosmo}. We computed that our particular realisation of the initial conditions had $\sigma_8=0.773$, which is $0.33\%$ above 0.77. We again note that the discrepancy in the lower-right panel of Fig.~\ref{fig:different_cosmo} has the correct systematic shift at low redshift. Taking this shift into account, as previously, the analytical model follows the results given by the simulation at the $0.1\%$ level, within the statistical uncertainty. The points obtained from the simulation are thus in excellent agreement with the analytical model. Current redshift galaxy catalogues map the Universe at intermediate redshifts $0 \la z \la 1$. In Figure~\ref{fig:different_cosmo}, we check whether our method is sufficiently sensitive to distinguish a $w=-0.5$ from a $w=-1$ cosmology. In this figure, we used the $\sigma_8$ as measured in the simulations to compute the analytical predictions (red and blue solid curves). We note that even at $z\simeq 0.2$, the behaviour of the two curves is already significantly different and above statistical uncertainties. If we consider the whole interval between $z=0$ and $z=1$, the difference is very significant compared to the uncertainties, with an ellipticity that changes by $\simeq $35\%. \begin{figure} \includegraphics[width=\hsize]{figure8} \caption{\label{fig:different_cosmo} {\it Difference between $w=-1$ and $w=-0.5$} -- We plot the evolution of the mean ellipticity with redshift, using the simulations $\Lambda$SIM2 (squares) and wSIM (triangles). The red solid line gives the prediction for $\sigma_8=0.7688$, $w=-1$. The blue solid line gives the prediction for $\sigma_8=0.773$, $w=-0.5$. These two values of $\sigma_8$ have been computed using the initial conditions of the two simulations. } \end{figure} In all the above, we considered catalogues with a large observable number of voids (typically $\sim$10,000).
We do not expect to have such a high number available in catalogues. We now estimate the error bars on the mean ellipticity that we may expect. The SDSS covers one fifth of the sky. We limit the survey to $z=0.1$ ($\sim$300$h^{-1}$Mpc), and we take a Lagrangian smoothing scale of 5$h^{-1}$Mpc{}. This smoothing scale is motivated by the average density of galaxies in the SDSS, which in band $r$ is about $1-5\times 10^{-2}\,h^3\,\text{Mpc}^{-3}$ \citep{Blanton03}. This gives a mean separation of $\sim 2-4$$h^{-1}$Mpc{}. Equation~\eqref{eq:num_maxima} predicts that we should observe $\sim$1,000 of our voids in this volume when smoothing the density field in Lagrangian coordinates with a Gaussian of radius 5$h^{-1}$Mpc{}, taking into account the survey coverage. If we go to $z=0.2$, this number should increase to $\sim$9,000. This means that the error bars should only be moderately larger than the ones we considered in this work, typically about three times larger. Even with this amount of uncertainty, the comparison to the analytic model should be able to strongly constrain the equation of state of dark energy at $z \la 0.2$. The Fisher matrix analysis is done in a companion paper. \section{Comparison of DIVA to earlier void finders} \label{sec:discuss_definitions} In this section, we discuss how our technique is related to other existing void finders. We make a qualitative assessment of its strengths and weaknesses compared to the three classes of void finders defined in Section~\ref{sec:intro}. The void finders of the first class try to find emptier regions in a distribution of points, which in actual catalogues correspond to galaxies. The void finder developed by \cite{ElAd97}, and one of its later versions by \cite{HoyleVogeley2002}, are widely used in observations \citep{HV2004,HRV2005,TK06,FosterNelson09}. In these void finders, the first step consists in classifying galaxies into two types.
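The quoted mean separation and the error-bar scaling can be checked directly (Python; the number-density range and void counts are the ones quoted above):

```python
import math

# Mean galaxy separation n**(-1/3) from the SDSS r-band number density range
n_low, n_high = 1e-2, 5e-2            # h^3 Mpc^-3
sep_max = n_low ** (-1.0 / 3.0)       # ~ 4.6 h^-1 Mpc
sep_min = n_high ** (-1.0 / 3.0)      # ~ 2.7 h^-1 Mpc

# Error on the mean scales as 1/sqrt(N_voids): going from ~10^4 voids
# (this work) to ~10^3 (SDSS out to z = 0.1) inflates it by sqrt(10)
inflation = math.sqrt(1e4 / 1e3)      # ~ 3.2, i.e. "about three times larger"
```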
A galaxy may lie in a strongly overdense region, in which case it is considered a ``wall galaxy''. The other possibility is that it lies in a mildly underdense region, in which case it is called a ``field galaxy''. The exact separation between ``wall galaxies'' and ``field galaxies'' depends on an ad hoc empirical parameter, which specifies how the local density of galaxies controls the type (field or wall) of each galaxy. The voids are then grown from regions empty of wall galaxies. This classification thus gives a non-trivial dependence of the void sizes and shapes on the galaxy bias and the catalogue selection function, in addition to the dependence on the ad hoc parameter itself. These issues make the quantitative study of the geometry of voids difficult, even though these finders identify voids that correspond to the visual impression given by large scale structures in redshift catalogues of galaxies. Void finders belonging to the second class look for particular features in the continuous three-dimensional distribution of the dark matter traced by galaxies. Of course, from observational data, one has first to project and then smooth the distribution of galaxies to obtain this distribution. Different techniques are used: \begin{itemize} \item[-] One technique is to adaptively smooth the galaxy distribution either using an SPH technique \citep[see e.g. ][]{Colombi07} or a Delaunay Tessellation Field Estimator \citep{dtfe}. Voids must then be identified from the smooth distribution derived using either of these techniques. One option is to use a scheme similar to \cite{ElAd97} or \cite{HoyleVogeley2002} to identify shapes of underdensity in the vicinity of a minimum of the density field \citep{Colberg05}. As with void finders of the first class, the galaxy bias does not affect the positions of the voids, but it does affect their overall properties. A second option is to use a Watershed Transform \citep{wvf} to identify the cavities of the voids.
In this case, the galaxy bias does not affect the structure of the cavities. However, devising an efficient way of relating the geometrical properties of these cavities to the cosmology, which corresponds to studying Morse theory, could well be non-trivial \citep{Jost08}. Some work to study the skeleton (also called the ``cosmic spine'') of Large Scale structures in this framework has recently been done by \cite{AragonCalvo08}, \cite{Pogosyan08} and \cite{Sousbie09}. \item[-] A second technique consists in using the local density estimated from the Voronoi diagram of a Delaunay tessellation to locate minima \citep{zobov}. The particles are first grouped in zones. Each particle is assigned to the zone towards which it would be attracted if it followed the density gradient, as in the watershed technique. Each zone is defined to be a void. It is also possible to define a hierarchy of voids by checking, for two neighbouring voids, which of the two has the lowest density at its minimum. The zones are then grouped, and a probability of being a void is assigned depending on the contrast between the density at the ridge of the void and its depth. \end{itemize} This second class of void finders has the advantage of defining voids in terms of the topology of the density field, which is easier to handle from a theoretical point of view and may better define a void in terms of dark matter. Still, we are faced with the task of relating the shape of the voids found by these algorithms, which is non-local by nature, to cosmology. As mentioned previously for void finders of the first class, this seems to be non-trivial. Void finders of the third class use the inferred dark matter distribution, but they perform a dynamical analysis to infer the location of the voids. DIVA belongs to this class of void finders. There are two advantages to looking at the dynamics of voids.
(i) It gives a much more physical and intuitive definition of these structures: voids correspond to places in the universe from which the matter {\it is really escaping} and which are not gravitationally unstable at the present time. (ii) Using this criterion, one is bound to use either the velocity field or the displacement field. These two quantities are highly linear. This has been shown directly for velocity fields by \cite{CCK03}, and indirectly for the displacement field by \cite{moh2005} and \cite{lavaux08}. This linearity helps us construct an analytical statistical model of the voids. \cite{Hahn07} and \cite{VoidsGravity08} attempted to classify structures according to a criterion on the gravitational field. However, we may highlight two very important differences compared to the approach we are following here: \begin{itemize} \item[-] we are using a purely Lagrangian method, which takes into account the true evolution of the void and not how virtual tracers in the void would move now, \item[-] we put an exact natural threshold on the eigenvalues to classify voids. This is in contrast with \cite{VoidsGravity08}, where a threshold on the eigenvalues must be chosen depending on an estimated collapse time. In our case, everything is already integrated in the definition of the displacement field. \end{itemize} Moreover, the Monge-Amp\`ere-Kantorovitch reconstruction presents two advantages: (i) it never diverges in the neighbourhood of large density fluctuations, in contrast to a pure gravitational approach, and (ii) it recovers exactly the Zel'dovich approximation in the neighbourhood of the centres of voids. As for the other void finders, DIVA depends on the galaxy bias. We recall that the bias $b$ is defined by \begin{equation} \delta_g \simeq b \delta_m , \end{equation} with $\delta_g$ the density fluctuations of galaxies and $\delta_m$ the density fluctuations of matter.
As MAK essentially reconstructs the Zel'dovich displacement in underdense regions, and the Zel'dovich potential is proportional to the density fluctuations, the MAK displacement should also be mostly linear in the bias. We describe in a second paper (Lavaux \& Wandelt 2009, in prep.) how to relate the volume of the voids that we find in Lagrangian coordinates to the voids that we observe in Eulerian coordinates. \section{Conclusion} \label{sec:conclusion} We have described a new technique to identify and characterise voids in Large Scale structures. Using the MAK reconstruction, we have been able to define void centres in Lagrangian coordinates by identifying them with the maxima of the divergence ${S_\Psi}$ of the displacement field, interpreted as its source term. The scalar field ${S_\Psi}$ has the interesting property of being nearly equal to the opposite of the linearly extrapolated primordial density field \citep{moh2005}. This allowed us to consider the statistics of those two fields as equal. Using this, we made predictions on the number of voids in Lagrangian coordinates, along with their ellipticities defined using the eigenvalues of the curvature of ${S_\Psi}$. We tested our model using $N$-body simulations with different cosmologies ($w=-1$ and $w=-0.5$). We checked, using the largest Lagrangian reconstruction run so far, that MAK is capable of recovering the ellipticity of individual voids to within a few percent. We highlighted the importance of using our model for the statistics of the eigenvalues of the curvature instead of the formula of \cite{Dor70} in the particular case of voids. We showed that our analytical model agrees within $\sim$0.1-0.3\% with results obtained with MAK and the displacement field obtained from the simulation.
We expect our method to be able to provide a very promising constraint on the equation of state of the Dark Energy of the late universe, especially given the notable accuracy of the prediction that we obtained in Fig.~\ref{fig:different_cosmo}. We intend to pursue this work to continue characterising analytically the voids found by DIVA, in particular the evolution of the number of voids and their size distribution. We will make further robustness tests using mock catalogues, including in particular redshift distortion effects. We would also like to apply our method to the Luminous Red Galaxy sample of the SDSS DR7 \citep{SDSSDR7}. \section*{Acknowledgements} This research was supported in part by the National Science Foundation through TeraGrid resources provided by the NCSA. TeraGrid systems are hosted by Indiana University, LONI, NCAR, NCSA, NICS, ORNL, PSC, Purdue University, SDSC, TACC and UC/ANL. The authors acknowledge financial support from NSF grant AST 07-08849. We acknowledge the hospitality of the California Institute of Technology, where the authors completed most of this work. We thank the referee, Rien van de Weygaert, for his useful comments and suggestions. \input{biblio}
\section{Introduction}\hspace{0.5cm} The announcement by the Super-Kamiokande collaboration\cite{sk} of evidence for neutrino oscillation (and hence nonzero neutrino mass) in their atmospheric neutrino data is a major milestone in the search for new physics beyond the standard model. An outstanding feature of these oscillations is the maximal mixing between the $\nu_{\mu}$ and $\nu_{\tau}$ ($\sin^22\theta_{\mu-\tau}\approx 0.7-1$), in sharp contrast with the mixing pattern in the quark sector. Also, the inferred $\Delta m^2_{\mu-\tau}\sim 5\times 10^{-4}- 6\times 10^{-3}$ eV$^2$ is lower than most ``see-saw motivated'' extrapolations from $\Delta m^2_{e-\mu}$ values in the small or large angle MSW solutions to the solar neutrino problem\cite{solar}: $\Delta m^2_{e-\mu}\simeq 3\times 10^{-6}-7\times 10^{-6}$ eV$^2$ with $\sin^22\theta \simeq 3-5\times 10^{-3}$, and $\Delta m^2\simeq 10^{-5}-10^{-4}$ eV$^2$ with $\sin^22\theta \simeq .8-1$. The solar neutrino problem provided the first indication for neutrino oscillation, and this evidence keeps building up. It can also be resolved by the maximal $\nu_e-\nu_{\mu}$ vacuum oscillation with a fine-tuned small mass difference $\Delta m^2_{e-\mu}\approx 10^{-10}$ eV$^2$. Maximal mixing with larger $\Delta m^2$ values yields an energy independent suppression of all solar neutrinos\footnote{More precisely, the suppression factor in the radio chemical experiments is 50\%, whereas in Super-Kamiokande it is 57\%.} (except when it is in the large angle MSW range mentioned above). While this does not resolve the solar neutrino problem at present, it does considerably ameliorate it. All the above suggests considering maximal ($\nu_e-\nu_{\mu}$) mixing alongside maximal ($\nu_{\mu}-\nu_{\tau}$) mixing\cite{gold}.
Three specific ``bimaximal mixing'' patterns\cite{nussinov,fritzsch,other,gold} having particularly simple forms are: \noindent{\it Case (A)\cite{nussinov}:} \begin{eqnarray} U_{\nu}=\frac{1}{\sqrt{3}} \left(\begin{array}{ccc} 1 & \omega & \omega^2\\ 1 & \omega^2 &\omega\\ 1 & 1 & 1 \end{array} \right) \end{eqnarray} where $\omega =e^{\frac{2\pi i}{3}}$; we will call this the symmetric mixing pattern. \noindent{\it Case (B)\cite{gold}:} \begin{eqnarray} U_{\nu}=\left(\begin{array}{ccc} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0\\ \frac{1}{2} &\frac{1}{2} &\frac{1}{\sqrt{2}} \\ \frac{1}{2} &\frac{1}{2} &-\frac{1}{\sqrt{2}} \end{array}\right) \end{eqnarray} This has been called bimaximal mixing in the literature\cite{gold}. \noindent{\it Case (C)\cite{fritzsch}:} \begin{eqnarray} U_{\nu}=\left(\begin{array}{ccc} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0\\ \frac{1}{\sqrt{6}} &\frac{1}{\sqrt{6}} &-\frac{2}{\sqrt{6}} \\ \frac{1}{\sqrt{3}} &\frac{1}{\sqrt{3}} &\frac{1}{\sqrt{3}} \end{array}\right) \end{eqnarray} We call this democratic mixing. In the above equations, we have defined $U_{\nu}$ as follows: \begin{eqnarray} \left(\begin{array}{c} \nu_e\\ \nu_{\mu} \\ \nu_{\tau}\end{array} \right)=~ U_{\nu}\left(\begin{array}{c} \nu_1 \\ \nu_2 \\ \nu_3 \end{array} \right) \end{eqnarray} with $\nu_{ e,\mu, \tau}$ being the weak eigenstates and $\nu_{1,2,3}$ the mass eigenstates. Our goal is to explore possible extensions of the standard model that may provide a theoretical understanding of the maximal mixing patterns. Attempts to understand pattern (A), made in our previous paper\cite{nussinov}, were largely unsuccessful. Also, the CHOOZ upper bound\cite{chooz} on $\nu_e-\nu_{x}$ oscillation with $\Delta m^2_{13}\geq 10^{-3}$ eV$^2$ tends, together with the Super-Kamiokande data, to disfavor this possibility. There have been several recent attempts to derive the second pattern (B)\cite{sacha}.
Here we show, using an extension of the standard model, that it is possible to generate the pattern (C) in a consistent and natural way. Our motivation is quite clear: if nature presents us with such a neutrino mixing pattern, we must seek an extension of the standard model that can naturally lead to it. Hopefully, a theory that naturally provides this pattern will have other testable predictions. In Ref.\cite{fritzsch}, permutation symmetry was imposed on the charged lepton mass matrix but not on the neutrino mass matrix in order to motivate the pattern (C). No underlying theoretical justification was discussed for such a hypothesis. In the framework of gauge theories, such an assumption is hard to understand a priori, since the charged leptons and the neutrinos are members of the same isodoublet of the standard model gauge group $SU(2)_L$, and therefore the permutation symmetry could lead to a similar mass matrix for both the charged lepton and the neutrino sectors. If that happens, the neutrino mixing matrix, which is given by $U^{\dagger}_{\ell} U_{\nu}$, could substantially differ from (C). It is therefore important to investigate whether the above mixing pattern arises in a complete theory. Also, the putative mass pattern $\Delta m^2_{32}\gg \Delta m^2_{21}$ should be provided by the model rather than arbitrarily fixed. It is considerations such as these which motivate us to add this brief note to the exploding literature on neutrino models. We find that by combining the permutation symmetry $S_3$ with a $Z_4\times Z_3\times Z_2$ symmetry in the left-right symmetric extension of the standard model, we can obtain the maximal mixing pattern (C) in a technically natural manner (i.e. without setting any parameters to zero by hand). In the limit of exact permutation symmetry, all the neutrinos are degenerate, as are the electron and the muon. As a result, the mixing angles can be rotated away.
However, once one admits permutation breaking terms to accommodate the electron-muon mass difference, the neutrino degeneracy is removed and the democratic form (pattern C) of the maximal mixing pattern remains. In fact, the masses of $\nu_e$ and $\nu_{\mu}$ become related to the electron and muon masses, which arise entirely from radiative corrections. To avoid arbitrary deviations from the maximal pattern, we assume that the permutation symmetry (but not the $Z_4\times Z_3\times Z_2$) is softly broken. This adds only small, finite corrections to the mixing pattern, and one obtains a complete and realistic gauge theoretic derivation of the maximal mixing pattern C. \section{Permutation symmetry and a gauge theory of maximal mixing} We consider a left-right symmetric extension of the standard model with the usual fermionic field content\cite{lr}. We omit the discussion of the quark sector for now. Denoting the leptons by $\psi_a\equiv (\nu_a, e_a)$, the $\psi_{L,R}$ transform as $SU(2)_{L,R}$ doublets respectively under the left-right gauge group $SU(2)_L\times SU(2)_R\times U(1)_{B-L}$. The subscript $a$ stands for the generations. We choose the following set of Higgs bosons to achieve the symmetry breaking: three sets of left and right doublets denoted by $\chi_{a,L,R}$ ($a=1,2,3$); the $\chi_{aR}$ vev will break the $SU(2)_R\times U(1)_{B-L}$ gauge symmetry down to the standard model $U(1)_Y$ group. We choose three bidoublets $\phi_{0,1,2}$ to break the electroweak $SU(2)_L\times U(1)_Y$ symmetry and give mass to the quarks and charged leptons, as well as the Dirac mass for the neutrinos. In order to implement the double seesaw\cite{valle} mechanism for neutrino masses, we introduce three gauge singlet fermion fields, $\sigma_a$. In order to get the desired pattern for lepton masses, we demand that the theory respect the symmetry $S_3\times Z_4\times Z_3\times Z_2$ for all dimension four terms.
We assume that all interactions of dimension four are invariant under permutation of the three indices $a=1,2,3$ for the fields that carry the subscript $a$\cite{derman}. This symmetry will be softly broken by terms of dimension $\leq 3$. We assume that under left-right symmetry $\phi_0\leftrightarrow \phi^{\dagger}_0$ and $\phi_1\leftrightarrow\phi^{\dagger}_2$. The transformation of the various fields under the symmetry $Z_4\times Z_3\times Z_2$ is given in Table I. The quark fields are assumed to be singlets under the above groups. The Yukawa couplings invariant under the above symmetries are: \begin{eqnarray} {\cal L_Y}= h_0 \sum_a \bar{\psi}_{aL}\phi_0\psi_{aR} + h_1 (\bar{\psi}_{1L}\phi_1\psi_{2R} +\bar{\psi}_{2L}\phi_1 \psi_{3R}+\bar{\psi}_{3L}\phi_1\psi_{1R})\nonumber\\ +h_1(\bar{\psi}_{1R}\phi^{\dagger}_2\psi_{2L} +\bar{\psi}_{2R}\phi^{\dagger}_2 \psi_{3L}+\bar{\psi}_{3R}\phi^{\dagger}_2\psi_{1L}) + h.c. \end{eqnarray} It is then clear that after the $\phi_{0,1,2}$ acquire vevs, they will give Dirac masses to the charged leptons and the neutrinos. To get the desired pattern of charged lepton masses and the Dirac mass for the neutrinos, we choose the vev pattern for the $\phi$'s as follows: $<\phi_0>=\left(\begin{array}{cc} \kappa_0 & 0\\ 0 & \kappa'_0\end{array}\right)$. On the other hand, for the fields $\phi_{1,2}$, we choose $<\phi_{1,2}>=\left(\begin{array}{cc} 0 & 0\\ 0 & \kappa'_{1,2}\end{array}\right)$. Left-right symmetry ensures that $\kappa'_1=\kappa'_2$. It is crucial that the vev pattern for $\phi_{1,2}$ is stable, since this is what distinguishes the neutrino sector from the charged lepton sector and leads to the maximal mixing pattern (C) of democratic type for the neutrinos. It is important for this that there be no tadpole terms involving the $11$ components of the $\phi_{1,2}$ fields.
This is verified by observing that all the $\phi_\alpha$ have the same $Z_4$ quantum number; as a result, terms like $Tr(\tilde{\phi}_\alpha\phi_\alpha)$, which could generate the tadpoles, are not present in the potential. The above vev pattern has the consequence that all elements of the charged lepton mass matrix are nonzero, whereas the Dirac mass matrix for the neutrinos is diagonal. To see the resulting mixing matrix, let us write the charged lepton mass matrix: \begin{eqnarray} M_{\ell}=m_0\left(\begin{array}{ccc} a & 1 & 1 \\ 1 & a & 1 \\ 1 & 1 & a \end{array}\right) \end{eqnarray} where $m_0=h_1\kappa'_1$ and $m_0a=h_0\kappa'_0$. The three eigenvectors of this mass matrix can be written as: \begin{eqnarray} \left(\begin{array}{c} e\\ \mu \\ \tau \end{array}\right) = \left(\begin{array}{ccc} \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0\\ \frac{1}{\sqrt{6}} &\frac{1}{\sqrt{6}} &-\frac{2}{\sqrt{6}} \\ \frac{1}{\sqrt{3}} &\frac{1}{\sqrt{3}} &\frac{1}{\sqrt{3}} \end{array} \right)\left(\begin{array}{c} e^0 \\ \mu^0 \\ \tau^0\end{array} \right) \end{eqnarray} where the superscript zero denotes the weak eigenstates prior to the diagonalization of the mass matrix. Note that this matrix is precisely the democratic mixing matrix of pattern (C). The corresponding eigenvalues are: \begin{eqnarray} m_e=m_0(a-1)\nonumber \\ m_\mu=m_0(a-1)\nonumber\\ m_\tau=m_0(a+2) \end{eqnarray} It is then easy to see that if the Majorana mass matrix for the neutrinos is diagonal, the resulting neutrino mixing matrix is determined by the diagonalization of the charged lepton mass matrix above and is precisely of type (C) described above. Note, however, that the muon and the electron masses are equal. In order for them to be much less than the $\tau$ mass, as observed, we must have $a\simeq 1$. It is interesting to note that if instead of $S_3$ symmetry one assumes $S_{3L}\times S_{3R}$, then indeed one ends up with $a=1$, as has been noted already\cite{fritzsch}.
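The diagonalisation above is easy to verify numerically. The sketch below (pure Python; the value of $a$ is illustrative, not taken from the model) checks that the rows of the democratic mixing matrix are eigenvectors of $M_\ell$ with eigenvalues $m_0(a-1)$, $m_0(a-1)$ and $m_0(a+2)$:

```python
import math

def matvec(M, v):
    # 3x3 matrix-vector product
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# Charged lepton mass matrix: m0*a on the diagonal, m0 off the diagonal
a, m0 = 1.3, 1.0   # illustrative values
M = [[m0 * (a if i == j else 1.0) for j in range(3)] for i in range(3)]

s2, s3, s6 = map(math.sqrt, (2.0, 3.0, 6.0))
vecs = [[1/s2, -1/s2, 0.0],          # electron direction
        [1/s6, 1/s6, -2/s6],         # muon direction
        [1/s3, 1/s3, 1/s3]]          # tau direction
eigs = [m0 * (a - 1), m0 * (a - 1), m0 * (a + 2)]

# Residuals |M v - lambda v| should vanish for true eigenpairs
checks = [max(abs(matvec(M, v)[i] - lam * v[i]) for i in range(3))
          for v, lam in zip(vecs, eigs)]
```

Since $m_e$ and $m_\mu$ are degenerate, any rotation within that eigenplane also works; the democratic rows are one convenient choice.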
As alluded to before, we must have permutation symmetry breaking terms to split their masses. Before proceeding to that discussion, let us turn to the neutrino sector to make sure that no mixing angles emerge there that could vitiate the maximal pattern. The Yukawa couplings and the $\phi_{\alpha}$ vev pattern imply that the Dirac mass matrix for the three neutrinos is diagonal, with all $m_{{\nu^D}_\alpha}$ given by $m_0a(\kappa'_0/\kappa_0)$. In order to understand the small neutrino masses, we must implement a seesaw mechanism. It turns out that the appropriate one in this case is the double seesaw mechanism discussed in \cite{valle}. From the terms involving the gauge singlet fermions $\sigma_a$ in the Lagrangian, \begin{eqnarray} {\cal L_{\sigma}} = \sum_a \left[ f \bar{\psi}_{aR}\chi_{aR}\sigma_a + (R\rightarrow L) + m_{\sigma_a} \sigma^2_a\right] + h.c., \end{eqnarray} the $(\nu_L,\nu_R,\sigma)$ mass matrix comes out to be \begin{eqnarray} M_{\nu}~=~\left(\begin{array}{ccc} 0 & m_{\nu^D} & 0 \\ m_{\nu^D} & 0 & f v_R \\ 0 & fv_R & M_{\sigma} \end{array} \right) \end{eqnarray} with $<\chi_{aR}>= v_R$ providing the largest mass scale in the problem. Each of the entries in the matrix above, except $M_\sigma$, is proportional to the $3\times 3$ unit matrix. In the limit of exact permutation symmetry, $M_{\sigma}$ would also be proportional to the unit matrix. The light neutrino eigenvalues are given by: \begin{eqnarray} m_{\nu_a}\simeq \frac{m^2_{\nu^D}m_{\sigma_a}}{f^2 v^{2}_R} \end{eqnarray} It is important to emphasize that there is no mixing in the purely neutrino sector, so that in the basis where the charged leptons are diagonal, we have the desired maximal mixing pattern. This, in our opinion, is the big model building challenge that we have solved in this article. Clearly, if permutation symmetry had not been broken by the different $\sigma_a$ masses, the mixing matrix would have been arbitrary.
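The double-seesaw eigenvalue formula can be checked against the characteristic polynomial of the per-generation $3\times 3$ matrix. A sketch (Python; the numerical values are the illustrative ones quoted in the text, with $f v_R = 2\times 10^5$ GeV) locates the light root by bisection and compares it with $m^2_{\nu^D} m_\sigma / (f v_R)^2$:

```python
def light_seesaw_mass(m_d, f_vr, m_sigma):
    # Characteristic polynomial of [[0, mD, 0], [mD, 0, fvR], [0, fvR, Msigma]]:
    #   p(x) = -x^3 + Msigma*x^2 + (fvR^2 + mD^2)*x - mD^2*Msigma
    # The light positive root sits near the seesaw estimate mD^2*Msigma/(f vR)^2.
    p = lambda x: (-x**3 + m_sigma * x**2
                   + (f_vr**2 + m_d**2) * x - m_d**2 * m_sigma)
    lo, hi = 0.0, 2.0 * m_d**2 * m_sigma / f_vr**2   # p(lo) < 0 < p(hi)
    for _ in range(200):                              # plain bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if p(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# mD = 0.06 GeV, f*vR = 2e5 GeV, Msigma = 500 GeV (values quoted in the text)
exact = light_seesaw_mass(0.06, 2.0e5, 500.0)   # GeV
approx = 0.06**2 * 500.0 / (2.0e5)**2           # seesaw formula, GeV
```

The approximate formula agrees with the exact light root to many digits here because the corrections are suppressed by $m^2_{\nu^D}/(f v_R)^2$ and $m_\sigma m_\nu/(f v_R)^2$.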
To get a feeling for the scale of new physics $v_R$, we note that $m_{\nu^D}\simeq (\kappa'_0/\kappa_0)(m_{\tau}/3)$. Therefore, assuming $\kappa'_0/ \kappa_0 \sim 0.1$, we get $m_{\nu^D}\simeq 0.06$ GeV; and for $v_R= 10^5$ GeV and $f=2$, we get $m_{\nu_a}\simeq 0.9\times 10^{-4}(m_{\sigma_a}/GeV)$ eV. If we choose $m_{\sigma_3}\simeq 500$ GeV and $m_{\sigma_{1,2}}\ll m_{\sigma_3}$, we get $m_{\nu_{\tau}}\simeq 4.5\times 10^{-2}$ eV, which is in the range required to solve the atmospheric neutrino puzzle. At this stage, it might appear that the muon- and electron-neutrino masses can be chosen at will by adjusting the $m_{\sigma_{1,2}}$. But this is not so, since the muon and electron masses, which vanish at the tree level (if we choose $a=1$), must also arise out of the mass splitting among the $\sigma_a$'s at the one loop level. The radiative contributions to the muon and electron masses arise from the diagram shown in Fig.~1, and we can estimate this contribution to be: \begin{eqnarray} m^{(1)}_{\ell_a}\simeq \frac{f^2}{16\pi^2}\frac{m^2_{\sigma_a}\mu^3\kappa_0}{\lambda(\beta v_R)^5} \end{eqnarray} where $\beta v_R$ is the typical heavy Higgs boson mass that appears in the loop. We have also used the fact that the vevs of $\chi_{a L,R}$ satisfy the relation $v_{aL} v_{aR}\simeq \frac{\kappa_0 \mu}{\lambda}$. \begin{figure}[htb] \begin{center} \epsfxsize=8.5cm \epsfysize=8.5cm \mbox{\hskip -1.0in}\epsfbox{bimax.ps} \caption{ The Feynman diagram responsible for one loop radiative corrections to the muon and the electron masses.
The dashed lines are the scalar bosons with appropriate quantum numbers.\label{Fig.1}} \end{center} \end{figure} Choosing $\beta \simeq 0.14$, $\mu\simeq v_R$, $\lambda\simeq 1$ and $f\simeq 2$, we estimate \begin{eqnarray} m^{(1)}_{\ell_a}\simeq 10^{-5} m^2_{\sigma_a} \end{eqnarray} Note that since we need to get the entire masses for the muon and the electron from the one loop correction, we must choose $m_{\sigma_2}\simeq 100$ GeV and $m_{\sigma_1}\simeq 7$ GeV. This then implies that $m_{\nu_{\mu}}\simeq 9\times 10^{-3}$ eV and $m_{\nu_e}\simeq 6\times 10^{-4}$ eV. We thus see that the $\Delta m^2_{12}$ relevant for solving the solar neutrino problem is $\simeq 8.1\times 10^{-5}$ eV$^2$. This is comfortably in the right range for the large angle MSW solution. \section{Higgs potential and symmetry breakings} Let us now discuss the vev pattern assumed in the preceding analysis. Two points need to be discussed: (i) the specific vev pattern for the fields $\phi_{1,2}$ that differentiates the neutrino Dirac mass from the charged lepton mass matrix, and (ii) the induced $\chi_{aL}$ vev. Note that due to the nontrivial transformation of the $\phi_1$ field under the $Z_3$ symmetry, the only allowed terms involving it in the potential are $Tr(\phi^{\dagger}_1\phi_1)$, $Tr(\phi^{\dagger}_1\phi_1\phi^{\dagger}_1\phi_1)$ and $Tr(\phi^{\dagger}_1\phi_0\phi^{\dagger}_0\phi_1)$. A similar thing happens for $\phi_2$. Note further that the $Z_4$ symmetry forbids terms like $Tr \tilde{\phi}^{\dagger}_1\phi_2$. The absence of terms of the form $Tr \tilde{\phi}^{\dagger}_{1,2}\phi_{1,2}$ guarantees that once we choose the vev of the form $Diag<\phi_{1,2}>=(0,\kappa'_{1,2})$, there are no tadpole-like terms that can destabilize that vacuum. Finally, the fact that under left-right symmetry $\phi_1\leftrightarrow \phi^{\dagger}_2$ guarantees there is a discrete symmetry between these two fields, leading to a stable minimum with $\kappa'_1=\kappa'_2$. 
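The numerical estimates above follow from simple arithmetic; the following sketch (our own cross-check, using the illustrative parameter values quoted in the text) reproduces the light-neutrino masses from Eq. (8), the one-loop charged-lepton masses from Eq. (10), and the resulting $\Delta m^2_{12}$:

```python
# Cross-check of the numerical estimates in the text.
# All masses in GeV unless noted; parameter values are the
# illustrative ones quoted there, not a fit.
m_tau = 1.777              # tau lepton mass
kappa_ratio = 0.1          # kappa'_0 / kappa_0
f, v_R = 2.0, 1e5          # singlet Yukawa coupling and right-handed scale

m_nuD = kappa_ratio * m_tau / 3.0      # Dirac neutrino mass, ~ 0.06 GeV

def m_nu_eV(m_sigma):
    """Light neutrino mass from the double seesaw, Eq. (8), in eV."""
    return m_nuD**2 * m_sigma / (f * v_R)**2 * 1e9

def m_lepton_1loop(m_sigma):
    """One-loop charged-lepton mass estimate, Eq. (10), in GeV."""
    return 1e-5 * m_sigma**2

m_sigma = {'e': 7.0, 'mu': 100.0, 'tau': 500.0}   # singlet fermion masses
m_nu = {k: m_nu_eV(v) for k, v in m_sigma.items()}
dm2_12 = m_nu['mu']**2 - m_nu['e']**2              # in eV^2

print(m_lepton_1loop(m_sigma['mu']))   # ~ 0.1 GeV: the muon mass
print(m_nu['tau'])                     # ~ 4.5e-2 eV (atmospheric range)
print(dm2_12)                          # close to the 8.1e-5 eV^2 quoted
```

The small spread between these numbers and the rounded values in the text comes only from rounding $m_{\nu_\mu}$ to $9\times 10^{-3}$ eV.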
Turning now to the second point, note that the potential involving the $\chi_{aL,R}$ fields has the form \begin{eqnarray} V(\chi_{aL},\chi_{aR})= - M^2_0(\chi^{\dagger}_{aL}\chi_{aL}+\chi^{\dagger}_{aR}\chi_{aR})\nonumber\\ +\lambda_+(\chi^{\dagger}_{aL}\chi_{aL}+ \chi^{\dagger}_{aR}\chi_{aR})^2 \nonumber\\ +\lambda_-(\chi^{\dagger}_{aL}\chi_{aL}- \chi^{\dagger}_{aR}\chi_{aR})^2 +\mu_a\chi^{\dagger}_{aL}\phi_0\chi_{aR}+h.c. \end{eqnarray} where the sum over $a$ has been omitted for simplicity. Minimizing this, we get $v_{aL}v_{aR}\simeq (\mu \kappa_0)/\lambda$. A few comments about the model are in order. \noindent (i) It is worth pointing out that in the presence of the permutation symmetry breaking terms in the singlet fermion sector, there will be small deviations from the equality of the $\mu_a$'s and consequently of the scalar doublet masses. But these effects are small and they do not alter any of our conclusions. \noindent (ii) The quark fields are assumed to be singlets under $S_3 \times Z_3\times Z_2$. Therefore, their masses arise from the $\phi_0$ couplings only and are thus not constrained by the patterns in the lepton sector. \noindent(iii) The lightest of the singlet fermions, $\sigma_1$, which couples to electrons, can be produced at LEP energies but has a cross section of order $\sigma_{e^+e^-\rightarrow \sigma_1\sigma_1}\sim f^4E^2/v^4_R$, which at the highest LEP energies is about $\sim 10^{-44} f^4$ cm$^2$ and is thus practically invisible. \section{Conclusion and outlook} In conclusion, we have succeeded in constructing a natural gauge model for the democratic maximal mixing for neutrinos suggested by the present neutrino data if the LSND results are not included. The model also predicts a small mass difference between the $\nu_e$ and $\nu_{\mu}$, as needed for the large angle MSW solution to the solar neutrino problem. 
To the best of our knowledge, this is the first time that a gauge model for understanding the democratic lepton mass matrix in an extension of the standard model has been constructed. The model is essentially an electroweak scale model with low scale for the right handed $W$'s and uses the double seesaw mechanism to generate small neutrino masses. Some of the new fermions of the model are light in the sense of collider physics. But their couplings to known particles are weak and thus there is no conflict with existing data. This work is supported by the National Science Foundation under grant no. PHY-9802551 and also a grant from the Israel-US Binational Science Foundation.
\section{Introduction} The report of a very narrow peak in the $K^+n$ invariant mass distribution \cite{LEPS03,CLAS03} around 1540 MeV in 2003, matching a pentaquark predicted in a chiral soliton model \cite{Diakonov}, triggered considerable excitement in the hadronic physics community. The state was labeled $\Theta^+$ and included by the PDG in 2004 \cite{PDG04} under exotic baryons, rated with three stars. Very intensive research efforts, both theoretical and experimental, ensued. On the experimental side, practically all studies conducted after the first sightings had been confirmed by several other groups produced null results, casting doubt on the existence of the five-quark state \cite{BELLE04,HERA04}. Subsequently, the PDG in 2006 reduced the rating from three stars to one \cite{PDG04}. More recently, the ZEUS experiment at HERA \cite{HERA07} observed a signal for $\Theta^+$ in a high energy reaction, while H1 \cite{HERA07}, SPHINX \cite{SPHINX07} and CLAS \cite{CLAS05} did not see it. This disagreement between LEPS \cite{LEPS03} and other experiments could possibly originate from differences in their experimental setups and kinematical conditions, so the experimental situation is presently not completely settled \cite{BELLE05,Hicks07,Chekanov07}. Many theoretical approaches have been employed, in addition to the chiral soliton model \cite{Diakonov}, including quark models \cite{Stancu03}, QCD sum rules \cite{Kojo06}, and lattice QCD \cite{Csikor03}, to understand the properties and structure of $\Theta^+$. Several interesting ideas were also proposed on the pentaquark production mechanism. Reviews of the theoretical activities of the last couple of years can be found in Refs. \cite{Oka04,Jaffe05}. 
One of the most intriguing theoretical ideas suggested for $\Theta^+$ is the diquark picture of Jaffe and Wilczek (JW) \cite{Jaffe03}, in which $\Theta^+$ is considered as a three-body system consisting of two scalar, isoscalar, color $\bf {\bar 3}$ diquarks ($D$'s) and a strange antiquark $(\bar s)$. It is based, in part, on group theoretical considerations. It would hence be desirable to examine such a scheme from a more dynamical perspective. The idea of the diquark is not new. It is a strongly correlated quark pair and has been advocated by a number of QCD theory groups since the 1960s \cite{Kobayashi66,Jaffe76,Anselmino93}. It is known that the diquark arises naturally in the low energy region from an effective quark theory, the Nambu-Jona-Lasinio (NJL) model \cite{NJL,Callan}. The NJL model conveniently incorporates one of the most important features of QCD, namely chiral symmetry and its spontaneous breaking, which dictates hadronic physics at low energy. Models based on NJL-type Lagrangians have been very successful in describing low energy meson physics \cite{Klevansky92,Takizawa90}. Based on the relativistic Faddeev equation, the NJL model has also been applied to baryon systems \cite{Huang94,Ishii95}. It has been shown that, using the quark-diquark approximation, one can explain the nucleon static properties reasonably well \cite{Asami95,Buck92}. If one further takes the static quark exchange kernel approximation, the Faddeev equation can be solved analytically. The resulting forward parton distribution functions \cite{Mineo99} successfully reproduce the qualitative features of the empirical valence quark distribution. The model has also been used to study the generalized parton distributions of the nucleon \cite{Mineo}. Consequently, we will employ the NJL model to describe the dynamics of a diquark-diquark-antiquark system. To describe such a three-particle system, it is necessary to resort to the Faddeev formalism. 
Since the NJL model is a covariant field-theoretical model, it is important to use relativistic equations to describe both the three-particle system and its two-particle subsystems. To this end, we will adopt the Bethe-Salpeter-Faddeev (BSF) equation \cite{Rupp88} in our study. For practical purposes, the Blankenbecler-Sugar (BbS) \cite{BbS} reduction scheme will be followed to reduce the four-dimensional integral equation into three-dimensional ones. In Sec. II, the NJL model in flavor $SU(3)$ will be introduced with focus on the diquark. The NJL model is then used to investigate the antiquark-diquark and diquark-diquark interactions with the Bethe-Salpeter equation in Sec. III. In Sec. IV, we introduce the Bethe-Salpeter-Faddeev equation and solve it for the system of strange antiquark-diquark-diquark with the interactions obtained in Sec. III. Results and discussions are presented in Sec. V, and we summarize in Sec. VI. \section{$\bf SU(3)_f$ NJL model and the diquark} The flavor $SU(3)_f$ NJL Lagrangian takes the form \begin{equation} {\cal L}={\bar \psi}(i\fslash{\partial}-m)\psi +{\cal L}_I, \end{equation} where $\psi^T=(u,d,s)$ is the SU(3) quark field, and $m=diag(m_u,m_d,m_s)$ is the current quark mass matrix. ${\cal L}_I$ is a chirally symmetric four-fermi contact interaction. By a Fierz transformation, we can rewrite ${\cal L}_I$ into the Fierz symmetric form ${\cal L}_{I,q{\bar q}} =\frac12({\cal L}_I+{\cal F} ({\cal L}_I))$, where ${\cal F}$ stands for the Fierz rearrangement. This has the advantage that the direct and exchange terms give identical contributions. 
In the $q{\bar q}$ channel, the chiral invariant ${\cal L}_{I,q{\bar q}}$ is given by \cite{Klimt} \begin{eqnarray} {\cal L}_{I,q{\bar q}} &=& G_1\left[ ({\bar \psi}\lambda^a_f \psi)^2 -({\bar \psi}\gamma^5 \lambda^a_f \psi)^2 \right] -G_2 \left[ ({\bar \psi} \gamma^{\mu}\lambda^a_f\psi)^2 +({\bar \psi}\gamma^{\mu} \gamma^5\lambda^a_f \psi)^2 \right]\nonumber\\ &-&G_{3} \left[ ({\bar \psi}\gamma^{\mu}\lambda^0_f \psi)^2 +({\bar \psi}\gamma^{\mu}\gamma^5 \lambda^0_f \psi)^2 \right] -G_{4} \left[ ({\bar \psi}\gamma^{\mu}\lambda^0_f \psi)^2 -({\bar \psi}\gamma^{\mu}\gamma^5 \lambda^0_f \psi)^2 \right]\nonumber\\ &+&\cdots , \label{NJLLagrangian} \end{eqnarray} where $a=0\sim 8$, and $\lambda_f^0=\sqrt{\frac23}I$. If we define $G_5$ by $-G_5({\bar \psi}_i \gamma^{\mu}\psi_j)^2 =-(G_2+G_3+G_4) ({\bar \psi}_i \gamma^{\mu}\lambda^0_f\psi_j)^2 -G_2 ({\bar \psi}_i \gamma^{\mu}\lambda^8_f\psi_j)^2 $, where $i,j=u,d$, then $G_3,G_4,G_5$ are related by $G_5=G_2+\frac23G_v$, with $G_v\equiv G_3+G_4$. In passing, we mention that the conventionally used $G_\omega$ and $G_\rho$ are related to $G_5$ and $G_v$ by $G_\omega =2G_5$ and $G_\rho=2G_5-\frac 43 G_v$. For the diquark channel we rewrite ${\cal L}_I$ into a form $({\bar \psi}A{\bar \psi}^T)(\psi^T B\psi)$, where $A$ and $B$ are totally antisymmetric matrices in Dirac, isospin and color indices. We will restrict ourselves to the scalar, isoscalar diquark with color and flavor in $\bf \bar 3$, as considered in the JW model. The interaction Lagrangian for the scalar-isoscalar diquark channel \cite{Vogl,Ishii} is given by \begin{equation} {\cal L}_{I,s} = G_s \left[ {\bar \psi}(\gamma^5 C ) \lambda_f^2 \beta^A_c {\bar \psi}^{T}\right] \left[ \psi^T (C^{-1} \gamma^5 )\lambda_f^2 \beta^{A}_c\psi \right], \label{L_Is} \end{equation} where $\beta^A_c=\sqrt{\frac32}\lambda^A (A=2,5,7)$ corresponds to one of the color ${\bar 3}_c$ states, $C=i\gamma^0\gamma^2$ is the charge conjugation operator, and the $\lambda$'s are the Gell-Mann matrices. 
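As a quick consistency check of the coupling relations just quoted, note that they imply the $G_v$ piece drops out of the isovector combination, leaving $G_\rho = 2G_2$. A trivial sketch (the numerical input values are arbitrary placeholders, not fitted couplings):

```python
# Verify the algebraic relations among the vector couplings quoted in
# the text: G5 = G2 + (2/3) Gv, G_omega = 2 G5, G_rho = 2 G5 - (4/3) Gv.
# Input values are arbitrary illustrative numbers in GeV^-2.
G2, G3, G4 = 1.3, 0.4, 0.2
Gv = G3 + G4
G5 = G2 + (2.0 / 3.0) * Gv
G_omega = 2.0 * G5
G_rho = 2.0 * G5 - (4.0 / 3.0) * Gv

# The Gv dependence cancels in the rho channel: G_rho = 2 G2,
# while G_omega - G_rho = (4/3) Gv carries the flavor-singlet piece.
print(G_rho - 2.0 * G2)          # 0 up to rounding
print(G_omega - G_rho)           # (4/3) * Gv
```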
The Bethe-Salpeter (BS) equation for the scalar diquark channel \cite{Vogl,Ishii} is given by \begin{equation} \tau_s(q)=4iG_s - 2iG_s\int\frac{d^4k}{(2\pi)^4} tr[(C^{-1}\gamma^5\tau_f^2\beta^A)S(k+q) (\gamma^5C\tau_f^2\beta^A)S^T(-k)]\tau_s(q), \label{diquarktmatrix} \end{equation} where the factors $4$ and $2$ arise from Wick contractions. $S(k)=(\fslash{k}-M+i\epsilon)^{-1}$ with $M\equiv M_u=M_d$, the constituent quark mass of the u and d quarks, generated by solving the gap equation. $\tau_s(q)$ is the reduced t-matrix, which is related to the t-matrix by $t_s(q)=(\gamma^5C\tau_f^2\beta^A_c) \tau_s(q)(C^{-1}\gamma^5\tau_f^2\beta^A_c)$. The solution to Eq. (\ref{diquarktmatrix}) is \begin{equation} \tau_s(q)=\frac{4iG_s}{1+2G_s\Pi_s(q^2)}, \label{taus} \end{equation} with \begin{equation} \Pi_s(q^2)=6i\int\frac{d^4k}{(2\pi)^4} tr_D[\gamma^5S(k)\gamma^5S(k+q)]. \label{pis} \end{equation} The gap equations for the u, d and s quarks are given by \begin{equation} M_i=m_i-8G_1<{\bar q}_iq_i>, \label{gap} \end{equation} with \begin{equation} <{\bar q}_iq_i>\equiv -iN_c\int\frac{d^4k}{(2\pi)^4} tr_D(S(k)), \label{condense} \end{equation} where $i=u,d,s$. The loop integrals in Eqs. (\ref{pis}) and (\ref{condense}) diverge, and we need to regularize the four-momentum integrals by adopting some cutoff scheme. With regularization, we can solve the gap equation (\ref{gap}) and the diquark t-matrix (\ref{taus}) to determine the constituent quark and diquark masses. However, since our purpose in this work is not an exact quantitative analysis but rather a qualitative study of the interactions inside $\Theta^+$, we will not adopt any regularization scheme and simply use the empirical values of the constituent quark masses $M=M_{u,d}=400$ MeV, $M_s=600$ MeV, and the diquark mass $M_D=600$ MeV, as obtained in the study of the nucleon properties \cite{Huang94,Ishii95,Asami95,Mineo99,Mineo}. 
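To illustrate how Eqs. (\ref{gap}) and (\ref{condense}) would fix a constituent mass once a regularization is chosen (a step the text deliberately bypasses in favor of empirical masses), here is a minimal sketch with a sharp three-momentum cutoff, for which the condensate integral has a closed form; the values of $G_1$, the cutoff $\Lambda$ and the current mass below are illustrative placeholders, not the paper's parameters:

```python
import math

# Sketch: self-consistent NJL gap equation with a sharp 3-momentum cutoff.
#   M = m - 8 G1 <qbar q>,   <qbar q> = -(Nc M / pi^2) * I(M),
#   I(M) = int_0^Lambda dk k^2 / sqrt(k^2 + M^2)
#        = (1/2)[Lambda E_L - M^2 ln((Lambda + E_L)/M)],  E_L = sqrt(Lambda^2 + M^2).
# Parameter values are illustrative only.
Nc = 3
G1 = 3.4        # GeV^-2 (placeholder coupling)
Lam = 0.6       # GeV    (placeholder cutoff)
m = 0.005       # GeV    (current quark mass)

def loop_integral(M):
    E = math.sqrt(Lam**2 + M**2)
    return 0.5 * (Lam * E - M**2 * math.log((Lam + E) / M))

def gap_rhs(M):
    condensate = -(Nc * M / math.pi**2) * loop_integral(M)
    return m - 8.0 * G1 * condensate

M = 0.3                      # start away from the trivial solution M ~ m
for _ in range(200):         # fixed-point iteration
    M = gap_rhs(M)

print(M)   # a few hundred MeV: dynamical chiral symmetry breaking, M >> m
```

With these placeholder values the iteration settles around $M \approx 0.4$ GeV, qualitatively the constituent mass scale used in the text.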
\section{Two-body interactions for the strange antiquark-diquark $(\bf\bar sD)$ and diquark-diquark ($DD$) channels} In the JW model for $\Theta^+$, the two scalar-isoscalar, color $\bf \bar 3$ diquarks must be in a color $\bf 3$ in order to combine with $\bar s$ into a color singlet. Since $\bf 3$ is the antisymmetric part of $\bf \bar 3 \times \bar 3 = 3 \oplus \bar 6$, the diquark-diquark wave function must be antisymmetric with respect to the rest of its labels. For two identical scalar-isoscalar diquarks $[ud]_0$, only spatial labels remain, so the spatial wave function must be antisymmetric under space exchange and the lowest possible state is a $p$-state. Since in the JW scheme $\Theta^+$ has the quantum numbers $J^P={\frac 12}^+$, $\bar s$ would be in a relative $s$-wave with respect to the $DD$ pair. Accordingly, we will consider only the configurations where $\bar sD$ and $DD$ are in relative $s$- and $p$-waves, respectively. We will employ the Bethe-Salpeter-Faddeev equation \cite{Rupp88} to describe such a three-particle system of $\bar sDD$. For consistency, we will use the Bethe-Salpeter equation to describe two-particle subsystems like $\bar sD$ and $DD$, which reads \begin{equation} T = B + BG_0T,\label{BSEq}\end{equation} where $B$ is the sum of all two-body irreducible diagrams and $G_0$ is the free two-body propagator. In momentum space, the resulting Bethe-Salpeter equation can be written as \begin{equation} T(k',k;P)=B(k',k;P)+\int d^4k^{''}B(k',k^{''};P)G_0(k^{''};P)T(k^{''},k;P), \label{BSeq}\end{equation} where $G_0$ is the free two-particle propagator in the intermediate states, and $k$ and $P$ are, respectively, the relative and total momenta of the system. In practical applications, $B$ is commonly approximated by the lowest order diagrams prescribed by the model Lagrangian and will be denoted by $V$ hereafter. 
In addition, it is common to further reduce the dimensionality of the integral equation (\ref{BSeq}) from four to three, while preserving the relativistic two-particle unitarity cut in the physical region. It is well known (see, for example, Ref. \cite{Hung01}) that such a procedure is rather arbitrary, and we will adopt in this work the widely employed Blankenbecler-Sugar (BbS) reduction scheme \cite{BbS}, which, for the case of two spinless particles, amounts to replacing $G_0$ in Eq. (\ref{BSeq}) by \begin{eqnarray} G_0(k ,P)&=&\frac{1}{(P/2+k )^2-m_1^2} \frac{1}{(P/2-k )^2-m_2^2}\nonumber\\ &\rightarrow& -i(2\pi)^4 \frac{1}{(2\pi)^3} \int \frac{ds'}{s-s' +i\epsilon}\nonumber\\ &\times& \delta^{(+)}\left((P'/2+k )^2-m_1^2\right) \delta^{(+)}\left((P'/2-k)^2-m_2^2 \right)\nonumber\\ &=& -2\pi i \delta\left(k_0 -\frac{E_1(|\vec{k}|)-E_2(|\vec{k}|)}{2}\right) G^{BbS}(|\vec{k}|,s), \label{BbSthree1} \end{eqnarray} with \begin{equation} G^{BbS}(|\vec{k}|,s)= \frac{E_1(|\vec{k}|)+E_2(|\vec{k}|)} {2E_1(|\vec{k}|)E_2(|\vec{k}|)} \frac{1}{s-(E_1(|\vec{k}|)+E_2(|\vec{k}|))^2 +i\epsilon}, \label{BbSthree2} \end{equation} where $s=P^2$ and $P'=\sqrt{s'/s}P$. The superscript (+) associated with the delta functions means that only the positive energy part is kept in the propagator, and $E_{1,2}(|\vec{k}|) \equiv \sqrt{\vec{k}^2+m_{1,2}^2 }$. \subsection{$\bf{\bar s}$D potential and the t-matrix} In Fig. 1 we show the lowest order diagram, i.e., first order in ${\cal L}_{I,q{\bar q}}$, in ${\bar s}D$ scattering. Due to the trace properties of the Dirac matrices, only the scalar-isovector $({\bar \psi}\lambda^a_f \psi)^2$, the vector-isoscalar $({\bar \psi} \gamma^{\mu}\lambda^0_f\psi)^2$, and the vector-isovector $({\bar \psi} \gamma^{\mu}\lambda^a_f \psi)^2$ terms will contribute to the vertex $\Gamma$. 
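The BbS propagator of Eq. (\ref{BbSthree2}) is simple enough to transcribe directly; the following minimal sketch (function and parameter names are ours) evaluates it and checks the expected qualitative behavior: real and negative below the two-particle threshold, and sharply peaked near the unitarity cut above it.

```python
import math

def G_bbs(k, s, m1, m2, eps=1e-8):
    """Blankenbecler-Sugar propagator, Eq. (BbSthree2):
    G = (E1 + E2) / (2 E1 E2) * 1 / (s - (E1 + E2)^2 + i*eps),
    with E_i = sqrt(k^2 + m_i^2); k = |k| in GeV, s in GeV^2."""
    E1 = math.sqrt(k * k + m1 * m1)
    E2 = math.sqrt(k * k + m2 * m2)
    return (E1 + E2) / (2.0 * E1 * E2) / complex(s - (E1 + E2) ** 2, eps)

# Masses are the constituent values used in the text (GeV).
Ms, MD = 0.6, 0.6
# Below threshold, s < (m1 + m2)^2 = 1.44, the propagator is real
# and negative (up to the infinitesimal i*eps):
g = G_bbs(k=0.2, s=1.0, m1=Ms, m2=MD)
print(g.real < 0.0)    # True
```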
\begin{figure}[hbtp] \begin{center} \epsfig{file=fig1.ps,width=8cm} \end{center} \caption{${\bar s}$D potential of the lowest order in ${\cal L}_{I,q{\bar q}}$.} \end{figure} Furthermore, the isovector vertex $(\bar{\psi}\Gamma\lambda_f^a \psi)^2$ will not contribute since the trace in flavor space vanishes, $\sum_{a=0}^8 (\lambda_f^a)_{33}tr_f(\lambda_f^2 \lambda_f^a\lambda_f^2)=0$. Thus only the vector-isoscalar term, $({\bar \psi} \gamma^{\mu}\lambda^0_f\psi)^2$, remains. For the on-shell diquarks, the lower part of Fig. 1 which corresponds to the scalar diquark form factor, can be calculated as \begin{eqnarray} (p_{Di}+p_{Df})^{\mu}F_v(q^2) &=& i \int \frac{d^4 k}{(2\pi)^4} tr[ (g_D C^{-1}\gamma^5 \lambda_f^2\beta^A_c ) S(k+q)\gamma^{\mu}S(k) (g_D \gamma^5 C \lambda_f^2\beta^A_c ) S^T(k-p_{Di})]\nonumber\\ &=& 6ig_D^2 \int \frac{d^4 k}{(2\pi)^4} tr[S(k+q)\gamma^{\mu}S(k)S(p_{Di}-k)], \end{eqnarray} where we have made use of the relations $C^{-1}(\gamma^{\mu})^T C=-\gamma^{\mu}$, $tr_c[\beta^A_c \beta^A_c]=3$. $g_D$ is defined by \begin{equation} g_D^{-2}=-\left.\frac{\partial \Pi_D (p^2)} {\partial p^2}\right|_{p^2=M_D^2}, \end{equation} with \begin{equation} \Pi_D(p^2)\equiv 6i \int \frac{d^4 k}{(2\pi)^4} tr[S(k)S(p-k)], \end{equation} and $M_D$ is the diquark mass. $F_v(0)$ is normalized as $2p^{\mu} F_v(0)=-g_D^2 \frac{\partial \Pi_D(p^2)}{\partial p_{\mu}}$, such that $F_v(0)=1$. 
\footnote{In the actual calculation we use the dipole form factor, $F_v(q^2)\equiv (1-q^2/\Lambda^2)^{-2}$ with $\Lambda=0.84$ GeV since the $q^2$ dependence for $F_v(q^2)$ in the NJL model is not well reproduced.} Then the matrix element of the potential $V_{\bar sD}$ can be expressed as \begin{eqnarray} <{\bar s}_fD_f| V|{\bar s}_iD_i> &=&(-{\bar v}(p_{{\bar s}i}))(-i V_{{\bar s}D}) (p_{Di},p_{Df})v(p_{{\bar s}f})\nonumber\\ &=& (+16i) (-G_v) (-{\bar v}(p_{{\bar s}i})) \gamma_{\mu} v(p_{{\bar s}f}) \left[ (\lambda^0_f)_{33} \cdot tr_f \left( \lambda^0_f (\lambda_f^2)^2 \right) \right]\nonumber\\ &\times&(p_{Di}+p_{Df})^{\mu}\frac{F_v(q^2)} {tr_f((\lambda_f^2)^2)}, \label{VsbarD} \end{eqnarray} i.e., \begin{equation} {V}_{{\bar s}D}=\frac{64}{3} G_v F_v(q^2) \tilde{V}_{{\bar s}D}(p_{Di},p_{Df}), \label{VsbD} \end{equation} with \begin{equation} \tilde{V}_{{\bar s}D}({p}_{Di},{p}_{Df}) =(\fslash{p}_{Di}+\fslash{p}_{Df})/2. \label{tildeVsbD}\end{equation} Here the factor $+16i$ in Eq. (\ref{VsbarD}) arises from the Wick contractions, and the factor $tr_f((\lambda_f^2)^2)$ in Eq. (\ref{VsbarD}) is introduced to divide $F_v(q^2)$, since the factor $tr_f((\lambda_f^2)^2)$ is already included in the expression of $F_v(q^2)$ by a trace in flavor $SU(3)_f$ space. 
\begin{figure}[hbtp] \begin{center} \epsfig{file=fig3.ps,width=14cm} \end{center} \caption{The BS equation for ${\bar s}$D.} \end{figure} The three-dimensional scattering equation for the ${\bar s}D$ system is now given by \begin{eqnarray} t_{{\bar s}D}(p_{Di},p_{Df})&=& V_{{\bar s}D}(p_{Di},p_{Df})\nonumber\\ &+& 4\pi\int\frac{d |\vecp{p}_D| |\vecp{p}_D|^2}{(2\pi)^3} \frac12 \int_{-1}^1 dx_i G_{{\bar s}D}^{BbS}(|\vecp{p}_D|,s_2) K_{{\bar s}D}(|\vec{p}_{Di}|,|\vecp{p}_D|,x_i) t_{{\bar s}D}(\vecp{p}_D ,p_{Df}),\nonumber\\ \label{tsbD} \end{eqnarray} where $x_i\equiv \hat{p}_{Di} \cdot \hat{p}_D^{\,\,'}$, $\hat{p}\equiv\vec{p}/|\vec{p}|$, $s_2=(p_{Di}+p_{{\bar s}i})^2=(p_{Df}+p_{{\bar s}f})^2 $, $p_{Di}^0=\sqrt{\vec{p}_{Di}^{\,\,2}+M_D^2}$, $p_{Df}^0=\sqrt{\vec{p}_{Df}^{\,\,2}+M_D^2}$ and \begin{eqnarray} K_{{\bar s}D}(|\vec{p}_{Di}|,|\vecp{p}_D|,x_i) &\equiv& \frac{64}{3} G_v F_v((p'_D-p_{Di})^2) \tilde{K}_{{\bar s}D}(p_{Di},p'_D)|_{{p'_{D}}^0 =\sqrt{ {\vec{p}}^{\,\,'2}_D+M_D^2}},\nonumber\\ \tilde{K}_{{\bar s}D}(p_{Di},p'_D) &=& (\fslash{p}_{Di}+\fslash{p}_D^{\,\, '}) (-\fslash{p}_{\bar s}^{\,\, '}+M_s)/2,\nonumber \end{eqnarray} with $M_s$ being the constituent quark mass of $s$ and ${\bar s}$. We also present the results for the interaction between a diquark and $\bar u$ or $\bar d$, which would be of interest in the study of non-strange pentaquarks. Repeating the derivations described above, one easily obtains \begin{equation} {V}_{{\bar u}D}={V}_{{\bar d}D}= -16G_1 F_s(q^2)+32 G_5 F_v(q^2) \tilde{V}_{{\bar s}D}(p_{Di},p_{Df}), \label{VubD} \end{equation} in analogy with Eqs. (\ref{VsbD}) and (\ref{tildeVsbD}). 
We add in passing that, within tree approximation, the sign of the potential for $sD$ is opposite to that of $V_{{\bar s}D}$ due to charge conjugation, i.e., \begin{equation} V_{sD}(p_{Df},p_{Di})=-V_{{\bar s}D}(p_{Di},p_{Df}).\end{equation} We can immediately write down the scattering equation for the $sD$ as, \begin{eqnarray} t_{sD}( p_{Df},p_{Di}) &=&V_{sD}( p_{Df},p_{Di})\nonumber\\ &+& 4\pi\int\frac{d |\vecp{p}_D| |\vecp{p}_D|^2}{(2\pi)^3} \frac12 \int_{-1}^1 dx_f G_{sD}^{BbS}(|\vecp{p}_D|,s_2) K_{sD}(|\vec{p}_{Df}|,|\vecp{p}_D|,x_f) t_{sD}(\vecp{p}_D ,p_{Di}),\nonumber\\ \label{tsD} \end{eqnarray} where $x_f\equiv \hat{p}_{Df} \cdot \hat{p}_D^{\,\,'}$, $G_{sD}^{BbS}(|\vecp{p}_D|,s_2)=G_{{\bar s}D}^{BbS}(|\vecp{p}_D|,s_2)$, and \begin{eqnarray} K_{sD}(|\vec{p}_{Df}|,|\vecp{p}_D|,x_f)&\equiv & \frac{64}{3} G_v F_v((p'_D-p_{Df})^2) \tilde{K}_{sD}(p_{Df},p'_D)|_{{p'_{D}}^0 =\sqrt{ {\vec{p}}^{\,\,'2}_D+M_D^2}},\nonumber\\ \tilde{K}_{sD}(p_{Df},p'_D)&=& - (\fslash{p}_{Df}+\fslash{p}_D^{\,\, '}) (\fslash{p}_s^{\,\, '}+M_s)/2, \end{eqnarray} with $p'_s=p'_{\bar s}$. 
\subsection{Representation in $\rho$-spin notation} In the ${\bar s}D$ (or $sD$) center of mass system the wave function which describes the relative motion in $J=\frac12$, is given by the Dirac spinor of the following form (see \cite{Tjon,Oettel}), \begin{eqnarray} \Psi_{sD,m_s} (p_s^0,\vec{p}_s) &=& \left( \begin{array}{c} \phi_{s1}(p_s^0,|\vec{p}_s|)\\ \vec{\sigma} \cdot \hat{p}_s \,\phi_{s2}(p_s^0,|\vec{p}_s|)\\ \end{array} \right)\chi_{m_s},\\ \label{PsisD} \Psi_{{\bar s}D,m_s} (p_{\bar s}^0,\vec{p}_{\bar s}) &=& \left( \begin{array}{c} \vec{\sigma} \cdot \hat{p}_{\bar s}\, \phi_{{\bar s}2}(p_{\bar s}^0,|\vec{p}_{\bar s}|)\\ \phi_{{\bar s}1}(p_{\bar s}^0,|\vec{p}_{\bar s}|)\\ \end{array} \right)\chi_{m_s}, \nonumber\\ &=& \gamma^5 \left( \begin{array}{c} \phi_{{\bar s}1}(p_{\bar s}^0,|\vec{p}_{\bar s}|)\\ \vec{\sigma} \cdot \hat{p}_{\bar s} \,\phi_{{\bar s}2}(p_{\bar s}^0,|\vec{p}_{\bar s}|)\\ \end{array} \right)\chi_{m_s}, \label{PsibarsD}\\ {\bar \Psi}_{sD}(p_s^0,\vec{p}_s)&\equiv& \Psi_{sD}^{\dagger}(p_s^0,\vec{p}_s)\gamma^0, \\ \label{barPsisD} {\bar \Psi}_{{\bar s}D}(p_{\bar s}^0,\vec{p}_{\bar s})&\equiv& \Psi_{{\bar s}D}^{\dagger}(p_{\bar s}^0,\vec{p}_{\bar s})\gamma^0, \label{barPsibarsD} \end{eqnarray} where $\vec{p}_D=-\vec{p}_s=-\vec{p}_{\bar s}$, i.e., $\Psi_{sD} (p_s^0,\vec{p}_s) =\Psi_{sD} (p_s^0,-\vec{p}_D)$ and $\Psi_{{\bar s}D} (p_{\bar s}^0,\vec{p}_{\bar s}) =\Psi_{{\bar s}D} (p_{\bar s}^0,-\vec{p}_D)$. In the following we simply write $p'_Q=|\vecp{p}_Q|, p'_{Qi(f)}=|\vecp{p}_{Qi(f)}|$ , $Q=s, {\bar s}$ or $D$. Note that the index 1 (2) corresponds to large (small) components for both ${\bar s}$ and $s$ quark spinors. 
For a discretization in spinor space, we define the complete set of $\rho$-spin notation (\cite{Tjon,Gammel}) for the operators ${\cal O}_{sD}=V_{sD},t_{sD},\tilde{V}_{sD}$ and $ {\cal K}_{sD}=K_{sD},\tilde{K}_{sD}$ of $sD$: \begin{eqnarray} {\cal O}_{sD,nm} (p_{Df},p_{Di}) &\equiv & tr[ \Omega^{\dagger}_n (p_{sf}) {\cal O}_{sD}(p_{Df},p_{Di})\Omega_m(p_{si})], \label{sDnm1} \\ {\cal K}_{sD,nm} (p_{Df},p_D',x_f) &\equiv & tr[ \Omega^{\dagger}_n (p_{sf}) {\cal K}_{sD}(p_{Df},p_D',x_f) \Omega_m(p'_s)], \label{sDnm2} \end{eqnarray} where $n,m=1,2$, $\Omega_1(p)=\frac{\Omega}{\sqrt{2}}$ and $\Omega_2(p)=\vec{\gamma}\cdot \hat{p} \frac{\Omega}{\sqrt{2}}$, $\Omega=\frac{1+\gamma_0}{2}$. $\Omega_1(p)$ and $\Omega_2(p)$ satisfy $tr[ \Omega^{\dagger}_n (p) \Omega_m(p')]= \delta_{n1}\delta_{m1}+ \hat{p}\cdot\hat{p}^{\,\, '}\delta_{n2}\delta_{m2} $. Concerning the ${\bar s}D$ spinor, the large and small components can be reversed by $\gamma^5$, with the minus sign which comes from the definitions Eqs. (\ref{PsibarsD}) and (\ref{barPsibarsD}): ${\bar \Psi}_{{\bar s}D}{\cal O}\Psi_{{\bar s}D}= -{\bar \Psi}_{sD}\gamma^5{\cal O}\gamma^5 \Psi_{sD}$. Then we can define $\rho$-spin notation for ${\bar s}D$ i.e., ${\cal O}_{{\bar s}D}=V_{{\bar s}D},t_{{\bar s}D}, \tilde{V}_{{\bar s}D}$ and ${\cal K}_{{\bar s}D}=K_{{\bar s}D} ,\tilde{K}_{{\bar s}D}$, \begin{eqnarray} {\cal O}_{{\bar s}D,nm} (p_{Di},p_{Df}) &\equiv& -tr[ \Omega^{\dagger}_n (p_{{\bar s}i}) \gamma^5 {\cal O}_{{\bar s}D}(p_{Di} ,p_{Df})\gamma^5 \Omega_m(p_{{\bar s}f})], \label{sbDnm1}\\ {\cal K}_{{\bar s}D,nm} (p_{Di},p_D',x_i) &\equiv & -tr[ \Omega^{\dagger}_n (p_{{\bar s}i}) \gamma^5 {\cal K}_{{\bar s}D}(p_{Di},p_D',x_i) \gamma^5\Omega_m(p_{\bar s}')]. \label{sbDnm2} \end{eqnarray} From Eqs. 
(\ref{tsbD},\ref{tsD},\ref{sDnm1}-\ref{sbDnm2}), each component $n$ $(n=1,2)$ of the spinors for the ${\bar s}$D satisfies the following equation: \begin{eqnarray} && \phi^{\dagger}_{{\bar s}n} (p_{{\bar s}i}) t_{{\bar s}D,nm}(p_{Di},p_{Df}) \phi_{{\bar s}m} (p_{{\bar s}f}) =\phi^{\dagger}_{{\bar s}n} (p_{{\bar s}i}) \Bigl[ V_{{\bar s}D,nm}(p_{Di},p_{Df})\nonumber\\ &&+4\pi\sum_{l=1}^2 \int\frac{dp_D'} {(2\pi)^3} p_D^{'2} \frac12\int_{-1}^1 dx_i G_{{\bar s}D}^{BbS}(p_D',s_2) K_{{\bar s}D,nl}(p_{Di},p_D',x_i) t_{{\bar s}D,lm} (p_D',p_{Df}) \Bigr] \phi_{{\bar s}m} (p_{{\bar s} f}).\nonumber\\ \label{tsbDnm} \end{eqnarray} A similar equation can be obtained for the $sD$ by exchanging $i\leftrightarrow f$ and $s\leftrightarrow {\bar s}$ in Eq. (\ref{tsbDnm}). The explicit expressions of the $\rho$-spin notation for $\tilde{V}_{{\bar s}(s)D}$ and $\tilde{K}_{{\bar s}(s)D}$ are given in appendix B. We note the important relations: \begin{eqnarray} V_{{\bar s}D,nm}(p,q) &=& -V_{sD,nm}(p,q),\nonumber\\ V_{{\bar s}D}(p,q)&=& -V_{sD}(p,q),\nonumber\\ K_{{\bar s} D,nm} (|\vec{p}^{\,}|,|\vec{q\,}|,x_{pq}) &=& -K_{sD,nm}(|\vec{p}^{\,}|,|\vec{q}^{\,}|,x_{pq}) ,\nonumber\\ K_{{\bar s}D}(|\vec{p}^{\,}|,|\vec{q}^{\,}|,x_{pq}) &=& - K_{sD} (|\vec{p}^{\,}|,|\vec{q}^{\,}|,x_{pq}). \nonumber \end{eqnarray} By the partial wave expansion of Eq. (\ref{PWE}) in appendix A, the BS equation for $t_{{\bar s}D,nm}$ in Eq. (\ref{tsbDnm}) for the $s$-wave can be written as \begin{equation} t_{{\bar s}D,nm}^{l_{{\bar s}D}=0}(p_{Di},p_{Df}) =V_{{\bar s}D,nm}^{l_{{\bar s}D}=0}(p_{Di},p_{Df}) +4\pi \int \frac{d p_D^{\,'}}{(2\pi)^3} p_D^{\,'2} \sum_{l=1}^2 G_{{\bar s}D}^{BbS}(p_D^{\,'},s_2) K^{l_{{\bar s}D}=0}_{{\bar s}D,nl}(p_{Di},p_D^{\,'}) t_{{\bar s}D,lm}^{l_{{\bar s}D}=0}(p_D^{\,'} ,p_{Df}). \label{tsbDint} \end{equation} \subsection{$DD$ potential and t-matrix} In the case of the $DD$ interaction, the lowest order diagrams are depicted in Figs. 
3(a) and (b): (a) is the quark rearrangement diagram and (b) is first order in ${\cal L}_{I, q\bar q}$. \begin{figure}[hbtp] \begin{center} \epsfig{file=fig4.ps,width=12cm} \end{center} \caption{Lowest order diagrams in $DD$ scattering. } \end{figure} We first show that the quark exchange diagram in Fig. 3(a) does not contribute due to its color structure, where $a\sim d$ and $i\sim l$ denote the color indices of the diquarks and quarks, respectively. Since each diquark is in the color ${\bf\bar 3} $ \cite{Jaffe03,Vogl}, the color factor for the $qqD$ vertex is proportional to $\epsilon_{aij}$. Hence the color factor of the quark exchange diagram is given by \begin{equation} \epsilon_{aij}\epsilon_{bik}\epsilon_{clk}\epsilon_{dlj}= \delta_{ab}\delta_{cd}+\delta_{ad}\delta_{bc}. \label{quarkexh} \end{equation} As we discussed earlier, the $DD$ pair inside $\Theta^+$ is in a color ${\bf 3}$ in order to combine with ${\bar s}$ to form a color singlet pentaquark. Since the color ${\bf 3}$ state is antisymmetric under the exchange of the diquarks in the initial and final states, the matrix element of Eq. (\ref{quarkexh}) vanishes. For the contact interaction diagram of Fig. 3(b), only the direct term is shown, since the exchange term does not contribute: it has the same color structure as the quark rearrangement diagram of Fig. 3(a). It is easy to see that the color structure of Fig. 3(b) is proportional to $\delta_{ab}\delta_{cd}$. The terms in the interaction Lagrangian in Eq. (\ref{NJLLagrangian}) that can then give rise to non-vanishing contributions are: \begin{equation} G_1({\bar \psi}\lambda^a_f \psi)^2,\,\, -G_2({\bar \psi}\gamma^{\mu}\lambda^a_f \psi)^2,\,\, -G_v ({\bar \psi}\gamma^{\mu}\lambda^0_f \psi)^2, \end{equation} with $a=0\sim 8$. We next calculate the form factors, which diagrammatically correspond to the lower part of the diagram in Fig. 1. 
For $\Gamma=\gamma^{\mu}\lambda_f^a$, we obtain \begin{eqnarray} && {tr_f \left( \lambda_f^a (\lambda_f^2)^2 \right) } (p_{Di}+p_{Df})^{\mu} \frac{F_v(q^2)}{tr_f((\lambda_f^2)^2)}\nonumber\\ &=& \left( \sqrt{\frac23}\delta_{a0}+\sqrt{\frac13}\delta_{a8} \right) (p_{Di}+p_{Df})^{\mu} F_v(q^2), \label{Fv} \end{eqnarray} and for $\Gamma=\lambda_f^a$, we get \begin{equation} {tr_f \left( \lambda_f^a (\lambda_f^2)^2 \right)} \frac{F_s(q^2)}{tr_f((\lambda_f^2)^2)} = \left( \sqrt{\frac23}\delta_{a0}+\sqrt{\frac13}\delta_{a8} \right) F_s(q^2), \label{Fs} \end{equation} where the factor ${tr_f((\lambda_f^2)^2)}$ in Eqs. (\ref{Fv}) and (\ref{Fs}) is introduced for the same reason as in Eq. (\ref{VsbarD}), and we have used $tr(\lambda_f^2 \lambda_f^a \lambda_f^2)= 2(\sqrt{\frac23}\delta_{a0} +\sqrt{\frac13}\delta_{a8})$. For the on-shell diquarks, $F_s(q^2)$ is calculated as\footnote{ As in the case of the ${\bar s}D$ potential, we use the dipole form factor, $F_s(q^2)\equiv c_s (1-q^2/\Lambda^2)^{-2}$, with $\Lambda=0.84$ GeV and $c_s$ a constant. In the original NJL model calculation with the Pauli-Villars (PV) cutoff, $c_s$ is given by $F_s(0)=c_s=0.53$ GeV \cite{Mineo}.} \begin{eqnarray} F_s(q^2) &=& i \int \frac{d^4 k}{(2\pi)^4} tr[ (g_D C^{-1}\gamma^5 \lambda_f^2\beta^A ) S(k+q)S(k) (g_D \gamma^5 C \lambda_f^2\beta^A ) S^T(k-p_{Di})]\nonumber\\ &=& 6ig_D^2 \int \frac{d^4 k}{(2\pi)^4} tr[S(k+q)S(k)S(k-p_{Di})]. \end{eqnarray} With the form factors $F_v(q^2)$ and $F_s(q^2)$ obtained above, $V_{DD}$ is given by \begin{eqnarray} -iV_{DD}(\vec{p}_{Di},\vec{p}_{Df}) &=& +128i \left[ G_1 F_s^2(q^2) -\left( G_2+\frac23 G_v \right) (p_{D1i}+p_{D1f})\cdot (p_{D2i}+p_{D2f})F_v^2(q^2)\right]\nonumber\\ &=&128i \left[ G_1 F_s^2(q^2) -G_5 (p_{D1i}+p_{D1f})\cdot(p_{D2i}+p_{D2f})F_v^2(q^2)\right], \label{128i} \end{eqnarray} where the factor $+128i$ in the first line of Eq. 
(\ref{128i}) comes from the Wick contractions, and in the second line we have used the relation between coupling constants in the meson sector, $G_5= G_2+\frac23 G_v$, which is explained in section 2. The momenta of the diquarks in the initial and final states in Fig. 4 are given by \begin{eqnarray} p_{D1i(f)} &=&(\sqrt{s_2}/2,\vec{p}_{Di(f)}) ,\nonumber\\ p_{D2i(f)} &=&(\sqrt{s_2}/2,-\vec{p}_{Di(f)}), \end{eqnarray} with $q=p_{D1f}-p_{D1i}=p_{D2i}-p_{D2f}$. $s_2=4(\vec{p}_{Di}^{\,2}+M_D^2)=4(\vec{p}_{Df}^{\,2} +M_D^2)$ is the $DD$ center of mass energy squared. \begin{figure}[hbtp] \begin{center} \epsfig{file=fig6.ps,width=14cm} \end{center} \caption{BS equation for $DD$.} \end{figure} As in the case of $\bar sD$ scattering, we use the BbS three-dimensional reduction scheme and the resulting equation for $DD$ scattering reads as \begin{equation} t_{DD}(\vec{p}_{Df},\vec{p}_{Di}) = V_{DD}(\vec{p}_{Df},\vec{p}_{Di}) + \int\frac{d^3 p'}{(2\pi)^3} V_{DD}(\vec{p}_{Df},\vecp{p}) G_{DD}^{BbS}(|\vecp{p}|,s_2) t_{DD}(\vecp{p},\vec{p}_{Di}), \end{equation} with \begin{eqnarray} G_{DD}^{BbS}(|\vecp{p}|,s_2)&=& \frac{1}{ 4E_D(|\vecp{p}|)(s_2/4-E_D(|\vecp{p}|)^2+i \epsilon)}\nonumber\\ &=&\frac{1}{4E_D(|\vecp{p}|)(\vec{p}_{Df}^{\,\,2} -\vec{p}^{\,\,'2}+i\epsilon)}, \label{GDD} \end{eqnarray} with $E_D(|\vec{p}^{\,\,'}|) =\sqrt{\vec{p}^{\,\,'2}+M_D^2}$. In the JW model for $\Theta^+$, the diquark-diquark spatial wave function must be antisymmetric, and we will consider here only the lowest configuration, namely, that the $DD$ pair is in relative $p$-wave. Partial wave expansion of Eq. (\ref{PWE}) then gives \begin{equation} t_{DD}^{l=1} (p_f,p_i)=V_{DD}^{l=1} (p_f,p_i) +4\pi \int \frac{dp' }{(2\pi)^3}p'^2 G_{DD}^{BbS}(p',s_2) V_{DD}^{l=1}(p_f,p')t_{DD}^{l=1}(p',p_i), \label{tDDlresultofs} \end{equation} with $p_{i(f)}\equiv|\vec{p}_{Di(f)}|, p'\equiv|\vecp{p}|$.
\section{Relativistic Faddeev equation} \subsection{3-body Lippmann-Schwinger equation} For a system of three particles with momenta $\vec k_i\,\, (i=1,2,3)$, we introduce the Jacobi momenta with particle 3 as a special choice: \begin{eqnarray} \vec{k}_1 &=& \mu_1 \vec{P} + \vec{\tilde{p}} + \alpha_1\,\, \vec{\tilde{q}}_3\nonumber\\ \vec{k}_2 &=& \mu_2 \vec{P} - \vec{\tilde{p}} + \alpha_2\,\, \vec{\tilde{q}}_3\nonumber\\ \vec{k}_3 &=& \mu_3 \vec{P} + \alpha_3\,\, \vec{\tilde{q}}_3, \end{eqnarray} with $\sum \mu_n =1$ and $\alpha_3= -\alpha_1-\alpha_2$. For the coefficients we find $\mu_n = m_n/M$,\,\, $M =m_1+m_2+m_3$,\,\, and $\alpha_1= m_1/m_{12},\,\, \alpha_2=m_2/m_{12},\,\, \alpha_3=-1$, where $m_{ij}=m_i+m_j\,\,(i\neq j).$ In terms of the Jacobi momenta the total kinetic energy is given by: \begin{equation} K_{tot} = \frac{P^2}{2 M} + \frac{\tilde{p}\,^2}{2 m_{12}} +\frac{{\tilde{q}_3}^2}{2m_{(12)3}},\end{equation} where $m_{(ij)k}= {m_k m_{ij}}/{M}$. New integration variables are chosen to be: $\tilde{p} = f_{p3}\,\, p$ with $f_{p3}=\sqrt{2 m_{12}}$ and $\tilde{q}_3=f_{q3}\,\,q$ with $f_{q3}=\sqrt{2 m_{(12)3}}$, and in general for cyclic $(ijk)$, $f_{pi}=\sqrt{2 m_{jk}}$ and $f_{qi}=\sqrt{2 m_{(jk)i}}$. In terms of the new integration variables we have \begin{equation} K_{tot}=\frac{P^2}{2 M} +p^2+q^2, \end{equation} and the 3-body Lippmann-Schwinger equation for the T-matrix becomes: \begin{equation} T(\vec{p},\vec{q})=V+ {f_{p3}}^3 {f_{q3}}^3 \int\,\,\frac{d^3p'}{(2\pi)^3}\int\,\, \frac{d^3q'}{(2\pi)^3}\,\, V\,\, G_3(p',q')\,\,T(\vec{p}\,',\vec{q}\,'),\label{LS-T}\end{equation} with $G_3(p,q)= 1/(z-K_{tot})$. The parameter $z$ is implicit in the arguments of $T$ and $G_3$ in Eq. (\ref{LS-T}), a convention to be followed hereafter. Similarly we define the Jacobi momenta $\vec{p}_i,\vec{q}_i$ with particle $i$ as the special choice.
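A quick numerical sketch of the Jacobi decomposition above: for arbitrary masses, the particle momenta reconstructed from $(\vec P, \vec{\tilde p}, \vec{\tilde q}_3)$ sum to the total momentum, $\alpha_3=-1$, and the total kinetic energy separates into the centre-of-mass term $P^2/2M$ plus a $P$-independent internal part (the mass values are illustrative only).

```python
import numpy as np

rng = np.random.default_rng(0)
m1, m2, m3 = 0.4, 0.6, 0.6                  # illustrative masses (GeV)
M, m12 = m1 + m2 + m3, m1 + m2
mu = np.array([m1, m2, m3]) / M
a1, a2 = m1 / m12, m2 / m12
a3 = -a1 - a2                               # = -1, as stated in the text
assert abs(a3 + 1) < 1e-12

def momenta(P, pt, qt):
    """particle momenta from total momentum P and Jacobi momenta pt, qt"""
    k1 = mu[0] * P + pt + a1 * qt
    k2 = mu[1] * P - pt + a2 * qt
    k3 = mu[2] * P + a3 * qt
    return k1, k2, k3

def K(P, pt, qt):
    """total kinetic energy sum_i k_i^2 / (2 m_i)"""
    return sum(k @ k / (2 * m) for k, m in zip(momenta(P, pt, qt), (m1, m2, m3)))

pt, qt = rng.normal(size=3), rng.normal(size=3)
for P in (np.zeros(3), rng.normal(size=3)):
    k1, k2, k3 = momenta(P, pt, qt)
    assert np.allclose(k1 + k2 + k3, P)     # momentum conservation
    # centre-of-mass separation: K(P) - P^2/2M equals the internal energy K(0)
    assert abs(K(P, pt, qt) - P @ P / (2 * M) - K(np.zeros(3), pt, qt)) < 1e-12
```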
The momenta are related to each other as \begin{eqnarray} \vec {p}_i = a_{ij} \vec {p}_j + b_{ij} \vec {q}_j, \hspace{2.0cm} \vec {q}_i = c_{ij} \vec {p}_j + d_{ij} \vec {q}_j , \end{eqnarray} where $(ijk)$ are cyclic, and $a_{ij} = - [m_i m_j /(m_i + m_k )(m_j + m_k )]^{1/2} $, $b_{ij} = \sqrt {1 - a_{ij}^2} = - b_{ji} $, $c_{ij}=-b_{ij}$ and $d_{ij}=a_{ij}$. It can be shown that the total angular momentum is related to the angular momenta $\vec {l}_{pi}$ and $\vec {l}_{qi}$ by \begin{equation} \label{eq5} \vec {L} = \sum\limits_{i = 1}^3 {\left( {\vec {r}_i \times \vec {k}_i } \right)} = \sum\limits_{i = 1}^3 {\left( {\vec {l}_{pi} + \vec {l}_{qi} } \right)} + \vec {l}_c . \end{equation} With these three choices of Jacobi momenta we may introduce corresponding 3-particle states $|>_n$ where particle $n$ plays a special role. For the 3-particle T-matrix we have \begin{equation} <\vec{k}_1,\vec{k}_2,\vec{k}_3|T|\alpha>=_n<\vec{p}_n,\vec{q}_n|T|\alpha>, \end{equation} or in terms of the Faddeev amplitudes $T_n$, \begin{equation} <\vec{k}_1,\vec{k}_2,\vec{k}_3|T|\alpha> = T_1(\vec{p}_1, \vec{q}_1)+T_2(\vec{p}_2,\vec{q}_2)+T_3(\vec{p}_3,\vec{q}_3), \end{equation} with $T_n(\vec{p}_n,\vec{q}_n)=_n<\vec{p}_n,\vec{q}_n|T_n|\alpha>$. For the pentaquark system we now choose particles 1 and 3 as the diquarks and particle 2 to be the ${\bar s}$.
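Since $b_{ij}=\sqrt{1-a_{ij}^2}$, $c_{ij}=-b_{ij}$ and $d_{ij}=a_{ij}$, the linear map between the Jacobi sets above is a rotation in the $(\vec p,\vec q)$ plane, so $p^2+q^2$, and hence the internal kinetic energy, is the same in every Jacobi set. A small numerical check (mass values illustrative):

```python
import math

def a_coef(mi, mj, mk):
    """a_ij = -sqrt(m_i m_j / ((m_i + m_k)(m_j + m_k)))"""
    return -math.sqrt(mi * mj / ((mi + mk) * (mj + mk)))

m = {1: 0.6, 2: 0.4, 3: 0.6}        # illustrative masses (GeV)
for (i, j, k) in [(1, 2, 3), (2, 3, 1), (3, 1, 2)]:
    a = a_coef(m[i], m[j], m[k])
    b = math.sqrt(1 - a * a)
    # the (p_i, q_i) <- (p_j, q_j) map has rows (a, b) and (c, d) = (-b, a):
    # an orthogonal (rotation) matrix, so p^2 + q^2 is preserved
    pj, qj = 0.3, -0.7
    pi = a * pj + b * qj
    qi = -b * pj + a * qj
    assert abs((pi * pi + qi * qi) - (pj * pj + qj * qj)) < 1e-12
```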
The Faddeev equations for $T=T_1+T_2+T_3$ with $T_i = t_i + \sum\limits_{j \ne i} {t_i } G_2 (s)T_j\,\,\,(i=1,2,3)$, with $t_i$ denoting the two-body t-matrix between particle pair $(jk)$, become \begin{eqnarray} T_1(\vec{p}_1,\vec{q}_1) &=& {f_{p3}}^3 {f_{q3}}^3\int \frac{d^3p'_3}{(2\pi)^3}\int \frac{d^3q'_3}{(2\pi)^3}\,\, K_{13} \,\,G_3(p'_3,q'_3)\,\,T_3(\vec{p}_3\,',\vec{q}_3\,')\nonumber\\ &+& {f_{p2}}^3 {f_{q2}}^3 \int \frac{d^3p'_2}{(2\pi)^3} \int \frac{d^3q'_2}{(2\pi)^3}\,\, K_{12}\,\,G_3(p'_2,q'_2)\,\, T_2(\vec{p}_2\,',\vec{q}_2\,'),\label{T_1} \end{eqnarray} where the channels 1 and 3 correspond to $D({\bar s}D)$ states and channel 2 to the ${\bar s}(DD)$ states. Since diquarks obey Bose-Einstein statistics, we have $T_3(\vec{p}_3,\vec{q}_3)=T_1(-\vec{p}_3,\vec{q}_3)$ and $T_1(\vec{p}_1,\vec{q}_1)=T_3(-\vec{p}_1,\vec{q}_1)$. We note that the symmetry property which requires the amplitude $T$ to be antisymmetric with respect to the interchange of the two diquarks is automatically satisfied by the angular momentum content $L=l_{q1}=l_{p2}=1,l_{p1}=l_{q2}=0$. The ${\bar s}(DD)$ T-matrix $T_2$ satisfies \begin{equation} T_2(\vec{p}_2,\vec{q}_2) = 2 {f_{p1}}^3 {f_{q1}}^3\int \frac{d^3p'_1}{(2\pi)^3} \int \frac{d^3q'_1}{(2\pi)^3} \,\, K_{21} \,\,G_3({p}_1',{q}_1')\,\,T_1(\vec{p}_1\,',\vec{q}_1\,'). \label{T2} \end{equation} The kernels $K_{13}$ and $K_{12}$ are expressed in terms of the ${\bar s}D$ t-matrix \begin{equation} K_{13}=K_{12}= t_{{\bar s}D}(\vec{p}_1,{\vec{p}_1}\,';z-q_1^2)\,\, \frac{{(2\pi)^3}}{f_{q_1}\,^3} \delta^{(3)}[\vec{q}_1-\vec{q}_1 \,']. \label{K13} \end{equation} Similarly the kernel $K_{21}$ is given by \begin{equation} K_{21}= t_{DD}(\vec{p}_2,\vec{p}_{2}\,';z-q_2^2) \,\,\frac{{(2\pi)^3}}{f_{q_2}\,^3} \delta^{(3)}[\vec{q}_2-\vec{q}_2\,'].
\label{K21} \end{equation} The term with $K_{13}$ can be worked out by making use of the $\delta$-function relation \begin{equation} \delta ^{(3)}\left[ {\vec {q}_1 - {\vec {q}}_1\,'} \right] = \frac{2}{q_1} \delta \left( {q_1^2 - {{q}_1'}^2 } \right) \delta \left( {\cos \theta _{q_3 } - \cos \theta_{{q}_3' } } \right)\delta \left( {\phi_{{q}_3' } - \phi_{q_3 } } \right), \end{equation} and the linear relation $\vec{q}_1\,'=c_{13} \vec{p}_3\,' +d_{13} \vec{q}_3\,'$, which lead to \begin{eqnarray} \delta ^{(3)}\left[ {\vec {q}_1 - \vec{q}_1\,' } \right] &= &\frac{1}{q_1 c_{13} d_{13} {p}_3' {q}_3' } \delta \left( {\cos \theta _{p_3' q_3' } - \frac{q_1'^2 - c_{13}^2 p_3'^2 - d_{13}^2 q_3'^2 } {2c_{13} d_{13}p_3' q_3' }} \right)\nonumber\\ &\times& \delta \left( {\cos \theta _{q_3 } - \cos \theta_{q_3' } } \right)\delta \left( {\phi_{q_3' } - \phi_{q_3 } } \right). \label{deltafunc} \end{eqnarray} We mention that a similar expression for the delta function in the term $K_{12}$ can also be obtained by replacing $3 \rightarrow 2$.
Performing a partial wave expansion for the $D({\bar s}D)$ amplitude \begin{equation} T_1(\vec{p}_1,\vec{q}_1) = 4 \pi Y_{lp_1 0}^* (\Omega_{p_1})Y_{lq_1 0} (\Omega_{q_1}) T_1^L(p_1,q_1), \end{equation} and for the $\bar sD$ t-matrix $t_{\bar {s}D} (\vec {p}_1 ,\vec {p}_1\,' ;z - q_1^2 )$, \begin{equation} t_{\bar {s}D} (\vec {p}_1 ,\vec {p}_1 \,' ;z - q_1^2 ) = 4\pi {Y_{l{p_1} 0}^* (\Omega_{p_1} )Y_{lp_1 0} (\Omega_{p'_1} )} t_{\bar {s}D}^{(l_{p1} )} ({p}_1 ,{p}_1' ;z - q_1^2 ), \end{equation} yields \begin{eqnarray} &&T_1^L(p_1,q_1) \nonumber\\ && = c_3 \int_0^\infty {q'_3}^2 dq'_3 \int_{A_{13}}^{B_{13}} {p'_3}^2 dp'_3\,\, t^{(lp_1)}_{{\bar s}D}(p_1,p'_1;z-q_1^2)\,\, X_{13}\, \frac{1}{c_{13}\,d_{13}\,q_1\,p'_3\,q'_3}\,\,G_3(p'_3,q'_3)\,\, T_3^L(p'_3,q'_3)\nonumber\\ &&+ c_2 \int_0^\infty {q'_2}^2 dq'_2 \int_{A_{12}}^{B_{12}} {p'_2}^2 dp'_2 \,\, t_{{\bar s}D}^{(lp_1)}(p_1,p'_1;z-q_1^2)\,\, X_{12}\, \frac{1}{c_{12}\,d_{12}\,q_1\,p'_2\,q'_2} \,\,G_3(p'_2,q'_2)\,\, T_2^L(p'_2,q'_2),\nonumber\\ \end{eqnarray} with \begin{equation} c_3= \frac{2}{\sqrt{ \pi}} ({f_{p3} f_{q3}}/{f_{q1}})^3,\,\, c_2= \frac{2}{\sqrt{ \pi}} ({f_{p2} f_{q2}}/{f_{q1}})^3, \end{equation} and where the boundaries $A,B$ for the $p'$ integration can easily be found from the condition $q^2_1={q'_1}^2$ in Eq. (\ref{deltafunc}), given by \begin{eqnarray} A_{ij} &=& \left|\frac{ c_{ij} {q}'_j + q_i }{d_{ij} }\right|\\ B_{ij} &=& \left|\frac{ c_{ij} {q}'_j - q_i }{d_{ij} }\right|. \label{ABbound} \end{eqnarray} For the ${\bar s}(DD)$ amplitude $T_2$, partial wave expansion gives \begin{eqnarray} T_2^L(p_2,q_2) &=& 2 c_1 \int_0^\infty {q'_1}^2 dq'_1 \int_{A_{21}}^{B_{21}} {p'_1}^2 dp'_1\,\,\nonumber\\ &\times&t_{DD}^{(lp_2)}(p_2,p'_2;z-q^2_2)\,\, X_{21}\,\,\frac{1}{c_{21}\,d_{21}\,q_2\,p'_1\,q'_1} \,\,G_3(p'_1,q'_1)\,\,T_1^L(p_1',q_1'), \end{eqnarray} where $A_{21}$ and $B_{21}$ are given by Eq. (\ref{ABbound}), and \begin{equation} c_1=\frac{2}{\sqrt{ \pi}} ({f_{p1} f_{q1}}/{f_{q2}})^3.
\end{equation} In the above equations $X_{ij}$ are angular momentum functions depending on the states we consider. In our case, the ${\bar s}D$ 2-body channel is an $s$-wave, $lp=0$, and the $DD$ channel a $p$-wave, $lp=1$. Hence, for the 3-body channel with total angular momentum $L=1$ we have for the $D({\bar s}D)$ 3-body channel $lp_1=0,lq_1=L$ and $lp_3=0,lq_3=L$, while for ${\bar s}(DD)$ $lp_2=1,lq_2=0$. The obtained $X_{ij}$ have the form \begin{equation} X_{13}= \frac{1}{4 \pi \sqrt{3}} Y_{lq_3 0}(\theta_{q_3\,q_1}), \,\, X_{12}=\frac{1}{4 \pi\sqrt{3}} Y_{lq_2 0}(\theta_{q_2\,q_1}),\,\, X_{21}=\frac{1}{4\pi \sqrt{3}} Y_{lp_2 0}(\theta_{p_2\,p_1} ). \label{X} \end{equation} \subsection{Relativistic Faddeev equations} Following Amazadeh and Tjon \cite{Amazadeh} (see also \cite{Rupp88}) we adopt the relativistic quasi-potential prescription based on a dispersion relation in the 2-particle subsystem. Then the 3-body Bethe-Salpeter-Faddeev equations have essentially the same form as the non-relativistic version. Taking the representation with particle 3 as special choice we may write down for the 3-particle Green function a dispersion relation of the (1,2)-system, i.e., \begin{equation} G_3(p_3,q_3;s_3) = \frac{E_1(k_1)+E_2(k_2)}{E_1(k_1)\,E_2(k_2)}\, \frac{1}{s_3-q_3^2-(E_1(k_1)+E_2(k_2))^2}, \end{equation} with $E_1(k_1)=\sqrt{k_1^2+m_1^2},\,E_2(k_2)=\sqrt{k_2^2+m_2^2},$ and $s_3=P^2$ being the invariant 3-particle energy squared. In the 3-particle cm-system we have $\sqrt{s_3}=M+E_b$. The resulting 2-body Green function with invariant 2-body energy squared $s_2$ has then the form of the BSLT quasi-potential Green function \begin{equation} G_2(p_3;s_2) = \frac{E_1(k_1)+E_2(k_2)}{E_1(k_1)\,E_2(k_2)}\, \frac{1}{s_2-(E_1(k_1)+E_2(k_2))^2}.
\end{equation} This quasi-potential prescription for $G_3$ obviously has the advantage that the 2-body t-matrix in the Faddeev kernel satisfies the same equation as the one in the 2-particle Hilbert space with only a shift in the invariant 2-body energy. So the structure of the resulting 3-body equations is the same as in the non-relativistic case. \section{Results and discussions} In the NJL model some cutoff scheme must be adopted since the NJL model is non-renormalizable. However, in this work we will not use any cutoff scheme but simply employ the dipole form factors for the scalar and vector vertices. Namely, the NJL model is only used to study the Dirac, flavor and color structure of the ${\bar s}D$ and $DD$ potentials. For the values of the masses $M_{u,d}$, $M_s$ and $M_D$, we use the empirical values $M=M_u=M_d=400$ MeV and $M_s=M_D=600$ MeV \cite{Mineo}. We will treat the coupling constants $G_i$ $(i=1\sim 5)$ in Eq. (\ref{NJLLagrangian}) as free parameters. For the ${\bar s}D$ channel, the potential depends only on $G_v=G_3+G_4 =\frac32 (G_5-G_2)$ as seen in Eq. (\ref{VsbarD}). In the NJL model calculation with the Pauli-Villars (PV) cutoff regularization \cite{Mineo}, the coupling constants $G_{\pi}$, $G_{\rho}$ and $G_{\omega}$ are related to the parameters used in our work by $G_1=G_{\pi}/2$, $G_2=G_{\rho}/2$ and $G_5=G_{\omega}/2$. Thus by using the values of the mesonic coupling constants in the NJL model, $G_v$ is determined as $G_v=\frac32(G_{\omega}/2-G_{\rho}/2) =\frac32(7.34/2-8.38/2)=-0.78$ GeV$^{-2}$. We remark that the sign of $G_v$ is definitely negative since experimentally the omega meson is heavier than the rho meson. Then the interaction between ${\bar s}$ and diquark in $s$-wave is attractive, as can be seen from the ${\bar s}D\,\, s$-wave phaseshift shown in Fig. 5 with $G_v=-0.78$ GeV$^{-2}$, while the interaction between ${s}$ and diquark is repulsive, as can be seen in Fig. 6.
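The arithmetic behind the quoted couplings is a one-line check; here $G_\pi=10.42$ GeV$^{-2}$ is inferred from $G_1=G_\pi/2=5.21$ GeV$^{-2}$ used later in the text.

```python
# NJL meson-sector couplings in GeV^-2; G_pi inferred from G_1 = G_pi/2 = 5.21
G_pi, G_rho, G_omega = 10.42, 8.38, 7.34
G1, G2, G5 = G_pi / 2, G_rho / 2, G_omega / 2
G_v = 1.5 * (G5 - G2)          # G_v = G_3 + G_4 = (3/2)(G_5 - G_2)
assert round(G_v, 2) == -0.78  # the quoted value, negative since m_omega > m_rho
assert round(G1, 2) == 5.21 and round(G5, 2) == 3.67
```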
In both figures we find that the magnitude of the phaseshifts is within 10 degrees, that is, $G_v=-0.78$ GeV$^{-2}$ gives a very weak interaction between ${\bar s}$ ($s$) and diquark. As we can see in Figs. 5 and 6, generally the phaseshift in $s$-wave is more sensitive to the three-momentum than that in $p$-wave. We note that the ${\bar s}D$ and $sD$ phaseshifts are not symmetric around the $p_E$ axis, which can be understood from the decompositions of $t_{sD}$ and $t_{{\bar s}D}$ in the spinor space in appendix B. We further mention that if $G_v$ is determined from the $\Lambda$ hyperon mass $M_{\Lambda}=1116$ MeV within the $sD$ picture, one obtains $G_v=6.44$ GeV$^{-2}$, which differs in sign from the $G_v=-0.78$ GeV$^{-2}$ determined from the meson sector in the NJL model. In this case the rho meson mass is larger than the omega meson mass, that is, the vector meson masses are not correctly reproduced. The $DD$ phaseshift is plotted in Fig. 7, where we have used the values of the coupling constants $G_1=G_{\pi}/2=5.21$ GeV$^{-2}$ and $G_5=G_{\omega}/2=3.67$ GeV$^{-2}$ which are determined from the meson sector in the NJL model calculation with the Pauli-Villars cutoff \cite{Mineo}. We can easily see that the phaseshift $\delta_l$ is definitely negative, i.e., the $DD$ interaction is repulsive, and its dependence on the three-momentum $p_E$ is very strong and almost proportional to $p_E$ both for $s$-wave and $p$-wave. This strong $p_E$ dependence of the phaseshift comes from the $p_E^2$ dependence of the second term $(p_{D1i}+p_{D1f})\cdot(p_{D2i}+p_{D2f})$ in Eq. (\ref{128i}). The $G_v$ dependence of the ${\bar s}D$ binding energy, $E_{{\bar s}D}$, is presented in Fig. 8. We find that the ${\bar s}D$ bound state begins to appear around $G_v=-5\sim -6$ GeV$^{-2}$, and becomes more deeply bound as $G_v$ becomes more negative. It is easily seen that $E_{{\bar s}D}$ is almost proportional to $G_v$.
However, even for the case of a weakly bound state with $|E_{{\bar s}D}|$ less than $0.1$ GeV, it will require a value of $-G_v=5\sim 6$ GeV$^{-2}$, which is about eight times larger than the $-G_v$ determined from the meson sector in the original NJL model with the PV cutoff regularization. For the calculation of the pentaquark binding energy we use the relativistic three-body Faddeev equation which is introduced in section 4. If the pentaquark is in the $J^P=\frac12^+$ state, with which we are concerned in the present paper, the total force is attractive but there is no pentaquark bound state. On the other hand, if the pentaquark is in the $J^P=\frac12^-$ state, a bound pentaquark state begins to appear when $G_v$ becomes more negative than $-8.0 \,$ GeV$^{-2}$, a value inconsistent with what is required to predict a bound $\Lambda$ hyperon with $M_{\Lambda}=1116$ MeV in a quark-diquark model as mentioned in Sec. 5. The lowest configuration which would correspond to a $J^P=\frac12^-$ state is for the spectator $\bar s$ to be in $p$-wave with respect to a $DD$ pair in $p$-wave, or, alternatively, the spectator diquark in relative $s$-wave to $\bar sD$ in $s$-wave. Our results for the binding energy of a $J^P=\frac 12^-$ pentaquark state for the cases with and without the $DD$ channel are given in Table 1. It is found that although the $DD$ interaction is repulsive, including the $DD$ channel gives additional binding energy, leading to a more deeply bound pentaquark state. This is because the coupling to the $DD$ channel is attractive due to the sign of the effective kernel $K_{21}$ in Eqs. (\ref{T2}, \ref{K21}). This depends on the recoupling coefficients $X_{21}$, $X_{12}$ in Eq. (\ref{X}) and the 2-body t-matrices.
\begin{table} \begin{center} \begin{tabular}[htbp]{|c|c|c|} \hline $G_v [GeV^{-2}]$ & $E_B^0(5q) [MeV]$ & $E_B(5q) [MeV]$\\ \hline -8.0 & 47 & 77\\ \hline -9.0 & 87 & 139 \\ \hline -10.0 & 132 & 205 \\ \hline -12.0 & 226 & 333 \\ \hline -14.0 & 316 & 505 \\ \hline \end{tabular} \caption{The binding energy of the $J^P =\frac 12^{-}$ pentaquark state. $E_B^0 (5q)$ ($E_B (5q)$) is the binding energy without (including) the $DD$ channel.} \end{center} \end{table} In Fig. 9 (10) the phaseshift of ${\bar s}D$ is plotted, where the coupling constant is fixed at $G_v=-8.0$ GeV$^{-2}$ ($G_v=-14.0$ GeV$^{-2}$). It is easily seen in Figs. 9 and 10 that the phaseshift of ${\bar s}D$ in $s$-wave is positive for small $p_E$ ($p_E<0.3$ GeV and $p_E<0.45$ GeV, respectively), but changes sign around $p_E = 0.3$ and $p_E = 0.45$ GeV; thus the phaseshift of ${\bar s}D$ in $s$-wave is very sensitive to the three-momentum $p_E$, whereas the phaseshift of ${\bar s}D$ in $p$-wave is definitely positive. In Fig. 11 we plot the phaseshift of ${s}D$ with the coupling constant $G_v=-14.0$ GeV$^{-2}$, which is the same as the one used in Fig. 10. In contrast to the phaseshift of ${\bar s}D$, the phaseshifts of $sD$ in $s$- and $p$-wave do not change sign at higher three-momentum $p_E$, i.e., the phaseshifts are definitely negative. From the above results we find that even if we use a very strong coupling constant $G_v$, which is unphysical because it gives a much larger mass difference between the rho and omega mesons than the experimental value, $M_{\omega}-M_{\rho}=13$ MeV, it is impossible to obtain a pentaquark bound state with $J^P=\frac12^+$. With only the $J=\frac12$ three-body channels considered, we do not find a bound $J^P=\frac12^+$ pentaquark state. The $J^P=\frac12^-$ channel is more attractive, resulting in a bound pentaquark state in this channel, but only for unphysically large values of the vector mesonic coupling constant.
\section{Summary} In this work, we have presented a Bethe-Salpeter-Faddeev (BSF) calculation for the pentaquark $\Theta^+$ in the diquark picture of Jaffe and Wilczek, in which $\Theta^+$ is treated as a diquark-diquark-${\bar s}$ three-body system. The Blankenbecler-Sugar reduction scheme is used to reduce the four-dimensional integral equation to three-dimensional ones. The two-body diquark-diquark and diquark-${\bar s}$ interactions are obtained from the lowest order diagrams prescribed by the Nambu-Jona-Lasinio (NJL) model. The coupling constants in the NJL model as determined from the meson sector are used. We find that the ${\bar s}D$ interaction is attractive in $s$-wave while the $DD$ interaction is repulsive in $p$-wave. Within the truncated configuration where $DD$ and ${\bar s}D$ are restricted to $p$- and $s$-waves, respectively, we do not find any bound $ \frac 12^+$ pentaquark state, even if we turn off the repulsive $DD$ interaction. This indicates that the attractive ${\bar s}D$ interaction is not strong enough to support a bound $DD\bar s$ system with $J^P=\frac 12^+$. However, a bound pentaquark with $J^P=\frac 12^-$ begins to appear if we change the vector mesonic coupling constant $G_v$ from $-0.78$ GeV$^{-2}$, as determined from the mesonic sector, to around $G_v=-8$ GeV$^{-2}$. The state becomes more deeply bound as $G_v$ becomes more negative. \section*{Acknowledgements} This work was supported in part by the National Science Council of ROC under grant no. NSC93-2112-M002-004 (H.M. and S.N.Y.). J.A.T. wishes to acknowledge the financial support of NSC for a visiting chair professorship at the Physics Department of NTU and the warm hospitality he received throughout the visit. K.T. acknowledges the support from the Spanish Ministry of Education and Science, Reference Number: SAB2005-0059.
\begin{figure}[hbtp] \begin{center} \epsfig{file=SbarD078.ps,scale=0.5} \end{center} \caption{Three momentum $p_E$ dependence of the phaseshift $\delta_l$ for the ${\bar s}D$ interaction with the coupling constant $G_v=-0.78$ GeV$^{-2}$. } \end{figure} \begin{figure}[hbtp] \begin{center} \epsfig{file=sD078.ps,scale=0.5} \end{center} \caption{Three momentum $p_E$ dependence of the phaseshift $\delta_l$ for the ${s}D$ interaction with the coupling constant $G_v=-0.78$ GeV$^{-2}$. } \end{figure} \begin{figure}[hbtp] \begin{center} \epsfig{file=phaseshiftDD.ps,width=10cm} \end{center} \caption{Three momentum $p_E$ dependence of the phaseshift $\delta_l$ for the $DD$ interaction.} \end{figure} \begin{figure}[hbtp] \begin{center} \epsfig{file=Ebgsd.ps,width=10cm} \end{center} \caption{$G_v$ dependence of the ${\bar s}$D binding energy.} \end{figure} \begin{figure}[hbtp] \begin{center} \epsfig{file=phaseshiftSbarD8.ps,width=10cm} \end{center} \caption{Three momentum $p_E$ dependence of the phaseshift $\delta_l$ for the ${\bar s}D$ interaction with the coupling constant $G_v=-8.0$ GeV$^{-2}$.} \end{figure} \begin{figure}[hbtp] \begin{center} \epsfig{file=phaseshiftSbarD14.ps,width=10cm} \end{center} \caption{Three momentum $p_E$ dependence of the phaseshift $\delta_l$ for the ${\bar s}D$ interaction with the coupling constant $G_v=-14.0$ GeV$^{-2}$.} \end{figure} \begin{figure}[hbtp] \begin{center} \epsfig{file=phaseshiftsD.ps,width=10cm} \end{center} \caption{Three momentum $p_E$ dependence of the phaseshift $\delta_l$ for the $sD$ interaction with the coupling constant $G_v=-14.0$ GeV$^{-2}$.} \end{figure} \newpage
\section{Functional equation and interpolation in the whole weight space} \label{sec:functional-eq-interpolation-whole} In this section we explain the functional equation for $L^{\mathrm{imp}}_p(\Sym^2 F)$, and how to use it in combination with the results proven so far, to prove an interpolation property on the whole weight space. By definition, proving a functional equation for $L^{\mathrm{imp}}_p(\Sym^2 F)$ amounts to proving a functional equation for $L_p^{\mathrm{geom}}(F,F)$, as the Kubota-Leopoldt function has a well-known functional equation. In turn, a functional equation for $L_p^{\mathrm{geom}}(F,F)$ is widely expected, given the fact that it interpolates complex Rankin-Selberg $L$-functions (which enjoy functional equations) and that the analogous function in the ordinary case has a functional equation too. In fact, a functional equation for $L_p^{\mathrm{geom}}(F,G)$ has already been proven in the literature. The specific one that we present here is due to Benois and Horte~\cite[Proposition~6.4.2]{benois.horte:on}. \begin{proposition}[Benois-Horte] \label{prop:benois-horte-functional-eq} Assume that $\alpha_f \neq \beta_f$, $\alpha_g \neq \beta_g$, that $v_p(\alpha_f) < k+1$, $v_p(\alpha_g) < k'+1$, and that $\psi$, $\psi'$ and $\psi\psi'$ are primitive modulo $N$, $N'$ and $N''=\lcm(N,N')$ respectively. Then for every $(\kappa_1,\kappa_2,s)\in V_1\times V_2\times\mathcal{W}$: \begin{equation*} L_p^{\mathrm{geom}}(F,G)(\kappa_1,\kappa_2,s) = \epsilon_p^{[F,G,a]}(\kappa_1,\kappa_2,s) L_p^{\mathrm{geom}}(F^*,G^*)(\kappa_1,\kappa_2,\kappa_1+\kappa_2-s+3) \end{equation*} where $a\in\{0,\ldots,p-1\}$ is defined by $s\vert_{\numberset{F}_p^{\times}} = \omega^a$, and \begin{equation*} \epsilon_p^{[F,G,a]}(\kappa_1,\kappa_2,s) = w(f,g)(NN'N'')^{\frac{k+k'+3}{2}-a}\langle NN'N'' \rangle^{a-s}\langle N \rangle^{\frac{\kappa_2-k'}{2}-1}\langle N' \rangle^{\frac{\kappa_1-k}{2}-1}.
\end{equation*} \end{proposition} The Coleman families $F^*$ and $G^*$ pass through $p$-stabilisations of the forms $f^*$ and $g^*$ when specialised at $k$ and $k'$. For our purposes, we will not need the explicit expression of $\epsilon_p$, but it is sufficient to know that it is finite and non-zero. We refer the reader to the aforementioned article for the definition of $w(f,g)$. From the above result, the functional equation for $L_p^{\mathrm{geom}}(F,G)(k_0,k_0')$ exchanges $s$ with $k_0+k_0'-s+3$, in particular the line $s=\frac{k_0+k_0'+3}{2}$ is left invariant. We recall here the hypotheses of the proposition: \begin{description} \item[M1] $\alpha_f \neq \beta_f$ and $\alpha_g \neq \beta_g$; \item[M2] $v_p(\alpha_f) < k+1$ and $v_p(\alpha_g) < k'+1$; \item[M3] the characters $\psi$, $\psi'$ and $\psi\psi'$ are primitive modulo $N$, $N'$ and $N''=\lcm(N,N')$. \end{description} Notice that the hypothesis M2 is not actually a restriction, since we can always be in that case by swapping the Satake parameters in the ordinary case, while in the supersingular case both roots satisfy it. Moreover, the hypothesis M1 is conjectured to always hold, and is known to do so in weight $2$: it is only a mild restriction, if at all. We have already assumed it for $f$ throughout, to ensure the existence of $F$ and $L_p^{\mathrm{geom}}(F,F)$. The only restrictive hypothesis of the list is M3, but as Benois and Horte point out, this is assumed only to simplify the bookkeeping and the notation. To simplify the exposition we will assume it too, but the material presented here is expected to hold also without assuming M3, \emph{mutatis mutandis}. Note that M3 implies hypothesis H1. Let us specialise the functional equation to our case of interest, i.e.\ $f=g$ and $F=G$. As usual we assume $f$ supersingular at $p$, which in particular implies hypothesis M2. 
Then the above hypotheses become: \begin{itemize} \item $\alpha_f \neq \beta_f$; \item the characters $\psi$ and $\psi^2$ are primitive modulo $N$. \end{itemize} As just noted, we were already assuming the former, and we continue to do so. We will also assume the latter for the remainder of the paper. Furthermore, under the above hypotheses, H1 holds. For every pair of integers $(k_0,k_0')\in V_1\times V_2$, the functional equation is then \begin{equation*} L_p^{\mathrm{geom}}(F,F)(k_0,k_0',s) = \epsilon_p^{[F,F,a]}(k_0,k_0',s) L_p^{\mathrm{geom}}(F^*,F^*)(k_0,k_0',k_0+k_0'-s+3) \end{equation*} where $s\vert_{\numberset{F}_p^{\times}} = \omega^a$, and \begin{equation*} \epsilon_p^{[F,F,a]}(k_0,k_0',s) = w(f,f)(N)^{3(\frac{k+k'+3}{2}-a)}\langle N \rangle^{3(a-s)}\langle N \rangle^{\frac{k_0+k_0'-2k}{2}-2}. \end{equation*} We will use this expression to extend the interpolation range of $L^{\mathrm{imp}}_p(\Sym^2 F)$. The strategy is the following: we will first deduce a functional equation for $L^{\mathrm{imp}}_p(\Sym^2 F)$ by patching together those for $L_p(\psi)$ and $L_p^{\mathrm{geom}}(F,F)$ and using some properties of the complex $L$-function. Afterwards, we will use that functional equation to mirror the interpolation property~\eqref{eq:final-interpolation} into the other half of the weight space. \subsection{Functional equation for \texorpdfstring{$L^{\mathrm{imp}}_p(\Sym^2 F)$}{Lp(Sym² F)}} We now derive a functional equation for $L^{\mathrm{imp}}_p(\Sym^2 F)$. \begin{theorem}[Functional equation] \label{thm:functional-equation} Suppose that $\psi$, $\psi^2$ are primitive modulo $N$. Then $L^{\mathrm{imp}}_p(\Sym^2 F)$ satisfies the following functional equation for every $(\kappa,s)\in U\times\mathcal{W}$: \begin{equation*} L^{\mathrm{imp}}_p(\Sym^2 F)(\kappa,s) = \epsilon_p^{[F,F,a]}(\kappa,\kappa,s) i^{-a_{\psi}}\tau(\psi^{-1})N^{s-\kappa-2} L^{\mathrm{imp}}_p(\Sym^2 F^*)(\kappa,2\kappa-s+3).
\end{equation*} \end{theorem} \begin{proof} By definition $L^{\mathrm{imp}}_p(\Sym^2 F)(\kappa,s) = L_p^{\mathrm{geom}}(F,F)(\kappa,\kappa,s)L_p(\psi,s-\kappa-1)^{-1}$. We now use the functional equations of both functions to deduce: \begin{align*} L^{\mathrm{imp}}_p(\Sym^2 F)(\kappa,s) &= \frac{L_p^{\mathrm{geom}}(F,F)(\kappa,\kappa,s)}{L_p(\psi,s-\kappa-1)} \\ &= \frac{\epsilon_p^{[F,F,a]}(\kappa,\kappa,s)L_p^{\mathrm{geom}}(F^*,F^*)(\kappa,\kappa,2\kappa-s+3)}{i^{a_{\psi}}\tau(\psi^{-1})^{-1}N_{\psi}^{2+\kappa-s} L_p(\psi^{-1},2+\kappa-s)} \\ &= \frac{\epsilon_p^{[F,F,a]}(\kappa,\kappa,s)}{i^{a_{\psi}}\tau(\psi^{-1})^{-1}N_{\psi}^{2+\kappa-s}}L^{\mathrm{imp}}_p(\Sym^2 F^*)(\kappa,2\kappa-s+3). \end{align*} The last equality follows from the fact that $2\kappa-s+3-\kappa-1 = 2+\kappa-s$, and that $F^*$ passes at $k$ through a $p$-stabilisation of the form $f^*$, which has character $\psi^{-1}$. Therefore we can recast the ratio of $L$-functions in terms of $L^{\mathrm{imp}}_p(\Sym^2 F^*)$, by applying its definition. Lastly, $N=N_{\psi}$ by hypothesis. This proves the claim. \end{proof} From the above result, the functional equation for $L^{\mathrm{imp}}_p(\Sym^2 F)(k_0)$ exchanges $s$ with $2k_0-s+3$, in particular the line $s=\frac{2k_0+3}{2}$ is left invariant. \subsection{Interpolation on the whole weight space} Here we apply the above results to deduce an interpolation property for $L^{\mathrm{imp}}_p(\Sym^2 F)$ on both halves of the weight space. Recall that $L^{\mathrm{imp}}_p(\Sym^2 F)$ is defined on $U\times \mathcal{W}$, but the interpolation property~\eqref{eq:final-interpolation} holds for points in $\mathcal{I}\subseteq U\times\mathcal{W}_-$. This has two consequences: firstly, $L^{\mathrm{imp}}_p(\Sym^2 F)$ is uniquely determined by that interpolation property only on $U\times\mathcal{W}_-$, because by density we can only cover that half.
Secondly, we do not have any explicit handle on the values on $U\times\mathcal{W}_+$, and correspondingly we do not have any explicit link with the values of $L^{\mathrm{imp}}(\Sym^2 f_{k_0})$ in the range $\{k_0+2,\ldots,2k_0+2\}$. This is not desirable, because we are cutting out half of the critical range, thus not completely fulfilling the idea that $L$-functions can be feasibly interpolated over the whole critical range. Furthermore, there is no reason why one half (either of $\mathcal{W}$ or of the critical range) should be preferable to the other. In this subsection we overcome this issue, by proving an interpolation property on the other half of the weight space, i.e.\ on $U\times\mathcal{W}_+$, and correspondingly covering the other half of the critical range. We will do this by mirroring the existing interpolation property through the functional equation for $L^{\mathrm{imp}}_p(\Sym^2 F)$. \begin{theorem} \label{thm:factorization-other-half} Suppose that $\psi$ and $\psi^2$ are primitive modulo $N$. For every pair $(k_0,j_0)\in\numberset{N}^2\cap U\times\mathcal{W}$ with $j_0$ odd and either $k_0+1< j_0 \leq 2k_0+1$ or $j_0=k_0+1$ and $E(\Sym^2 f_{k_0}^*,k_0+1)\neq 0$, then: \begin{multline} \label{eq:interpolation-other-half} L^{\mathrm{imp}}_p(\Sym^2 F)(k_0,j_0+1) = (-1)^{k_0+1} (2k_0-j_0+1)! \frac{(2\pi iN)^{j_0-k_0-1}}{4(4\pi)^{k_0+1}\langle f_{k_0}, f_{k_0} \rangle} \\ \cdot \epsilon_p^{[F,F,j_0+1]}(k_0,k_0,j_0+1)\frac{E(\Sym^2 f_{k_0}^*,2k_0-j_0+2)}{E(f_{k_0})E^*(f_{k_0})} \\ \cdot L^{\mathrm{imp}}(\Sym^2 f_{k_0}\otimes\psi^{-2},2k_0-j_0+2) \end{multline} where $f_{k_0}\in M_{k_0+2}(N,\psi)$ is such that $F$ passes through its $p$-stabilisation at $k_0$. \end{theorem} \begin{proof} The strategy of the proof is to use the functional equation to land in the range where we already proved interpolation, and then apply it.
By Theorem~\ref{thm:functional-equation} we obtain \begin{multline*} L^{\mathrm{imp}}_p(\Sym^2 F)(k_0,j_0+1) = \epsilon_p^{[F,F,j_0+1]}(k_0,k_0,j_0+1) i^{-a_{\psi}}\tau(\psi^{-1})N^{j_0-k_0-1} \\ \cdot L^{\mathrm{imp}}_p(\Sym^2 F^*)(k_0,2k_0-j_0+2). \end{multline*} We now write $2k_0-j_0+2 = j_0'+1$. Our hypotheses on $j_0$ imply: \begin{equation*} \begin{cases} j_0 \leq 2k_0+1 \\ j_0 \geq k_0+1 \\ j_0 \; \text{odd} \end{cases} \implies \begin{cases} j_0' \geq 0 \\ j_0' \leq k_0 \\ j_0' \; \text{even} \end{cases} \end{equation*} By construction, the Coleman family $F^*$ passes through a $p$-stabilisation of $f_{k_0}^*$ at $k_0$, with $f_{k_0}^* \in M_{k_0+2}(N^*,\psi^{-1})$ for some level $N^*$ with the same prime divisors as $N$. When $k_0+1<j_0\leq 2k_0+1$, we have $0\leq j_0' < k_0$, so we can apply Theorem~\ref{thm:first-half-final} to get \begin{multline*} L^{\mathrm{imp}}_p(\Sym^2 F^*)(k_0,j_0'+1) = (-1)^{k_0+1}(j_0')! \frac{(2\pi i)^{k_0-j_0'}i^{a_{\psi}}}{4\tau(\psi^{-1})(4\pi)^{k_0+1}\langle f_{k_0}^*,f_{k_0}^* \rangle} \\ \cdot \frac{E(\Sym^2 f_{k_0}^*,j_0'+1)}{E(f_{k_0}^*)E^*(f_{k_0}^*)} L^{\mathrm{imp}}(\Sym^2 f_{k_0}^*,j_0'+1). \end{multline*} When $j_0=k_0+1$ we obtain $j_0' = k_0$, and the theorem still applies thanks to the hypothesis $E(\Sym^2 f_{k_0}^*,k_0+1)\neq 0$. Hypothesis H1 also holds, as it is implied by M3. The last equation already proves that, under our hypotheses, $L^{\mathrm{imp}}_p(\Sym^2 F)$ interpolates complex $L$-values for $k_0+1\leq j_0 \leq 2k_0+1$. We now express the complex $L$-function in terms of $f_{k_0}$ only. We start with the following consideration: the cusp form $f_{k_0}^*$ is characterised by the property that $a_n(f_{k_0}^*) = \overline{a_n(f_{k_0})} = \psi^{-1}(n)a_n(f_{k_0})$. Moreover it has character $\psi^{-1}$.
Its imprimitive symmetric square $L$-function is then by definition \begin{equation*} \begin{split} L^{\mathrm{imp}}(\Sym^2 f_{k_0}^*,s) &= L_{(N^*)}(\psi^{-2},2s-2k_0-2)\sum_{n\geq 1} \frac{a_{n^2}(f_{k_0}^*)}{n^s} \\ &= L_{(N^*)}(\psi^{-2},2s-2k_0-2)\sum_{n\geq 1} \frac{a_{n^2}(f_{k_0})\psi^{-2}(n)}{n^s}. \end{split} \end{equation*} On the other hand, again by definition we have \begin{equation*} \begin{split} L^{\mathrm{imp}}(\Sym^2 f_{k_0}\otimes\psi^{-2},s) &= L_{(N)}(\psi^2\psi^{-4},2s-2k_0-2)\sum_{n\geq 1} \frac{a_{n^2}(f_{k_0})\psi^{-2}(n)}{n^s} \\ &= L_{(N)}(\psi^{-2},2s-2k_0-2)\sum_{n\geq 1} \frac{a_{n^2}(f_{k_0})\psi^{-2}(n)}{n^s}. \end{split} \end{equation*} The Dirichlet series coincide, and the Dirichlet $L$-functions with some local factors removed coincide as well, because $N$ and $N^*$ have the same prime factors. Therefore $L^{\mathrm{imp}}(\Sym^2 f_{k_0}^*) = L^{\mathrm{imp}}(\Sym^2 f_{k_0}\otimes\psi^{-2})$. We have thus expressed the complex $L$-function in terms of $f_{k_0}$, at the cost of adding a twist by $\psi^{-2}$, the inverse square of its character. Furthermore, we also have $\langle f_{k_0}^*,f_{k_0}^* \rangle = \langle f_{k_0}, f_{k_0}\rangle$. The relation $a_p(f_{k_0}^*) = \psi^{-1}(p)a_p(f_{k_0})$ and the fact that $f_{k_0}^*$ has character $\psi^{-1}$ also imply $\alpha_{f_{k_0}^*} = \psi^{-1}(p)\alpha_{f_{k_0}}$ and $\beta_{f_{k_0}^*} = \psi^{-1}(p)\beta_{f_{k_0}}$. We deduce \begin{align*} E(f_{k_0}^*) = \Bigl(1-\frac{\beta_{f_{k_0}^*}}{p\alpha_{f_{k_0}^*}}\Bigr) &= \Bigl(1-\frac{\beta_{f_{k_0}}}{p\alpha_{f_{k_0}}}\Bigr) = E(f_{k_0}), \displaybreak[0]\\ E^*(f_{k_0}^*) = \Bigl(1-\frac{\beta_{f_{k_0}^*}}{\alpha_{f_{k_0}^*}}\Bigr) &= \Bigl(1-\frac{\beta_{f_{k_0}}}{\alpha_{f_{k_0}}}\Bigr) = E^*(f_{k_0}).
\end{align*} Putting everything together we finally deduce the following equation: \begin{multline*} L^{\mathrm{imp}}_p(\Sym^2 F)(k_0,j_0+1) = \epsilon_p^{[F,F,j_0+1]}(k_0,k_0,j_0+1) i^{-a_{\psi}}\tau(\psi^{-1})N^{j_0-k_0-1} \\ \cdot (-1)^{k_0+1}(j_0')!\frac{(2\pi i)^{k_0-j_0'}i^{a_{\psi}}}{4\tau(\psi^{-1})(4\pi)^{k_0+1}\langle f_{k_0},f_{k_0} \rangle} \frac{E(\Sym^2 f_{k_0}^*,j_0'+1)}{E(f_{k_0})E^*(f_{k_0})} \\ \cdot L^{\mathrm{imp}}(\Sym^2 f_{k_0}\otimes\psi^{-2},2k_0-j_0+2). \end{multline*} Simplifying the extra factor and writing $j_0'$ in terms of $j_0$ proves the claim. \end{proof} We remark that the theorem states that $L^{\mathrm{imp}}_p(\Sym^2 F)$ satisfies an interpolation property also for $j_0 \in \{k_0+1,\ldots,2k_0+1\}$. Therefore it has an interpolation property for $s\in\{1,\ldots,2k_0+2\}$. This is consistent with the expectation that $p$-adic $L$-functions should interpolate complex ones in their critical ranges. Notice that here the complex $L$-values that are being exploited are again in the first half of the interval, i.e.\ $\{1,\ldots,k_0+1\}$, but for a different function: we had to insert an additional twist. This additional twist accounts for the fact that we are interpolating ``mirrored'' values, that is, in the other half of the critical interval. It is possible to express the interpolation property in terms of complex $L$-values over $\{k_0+2,\ldots,2k_0+2\}$, by applying the functional equation for the complex imprimitive $L$-function to the above statement. \subsubsection{Generalised interpolation in the whole weight space} We now apply the interpolation just proved on $U\times\mathcal{W}_+$ to the study of the value $L^{\mathrm{imp}}_p(\Sym^2 F)(k_0,k_0+2)$ when $E(\Sym^2 f_{k_0}^*,k_0+1) = 0$ holds, finding criteria to determine whether it is zero and when the interpolation formula~\eqref{eq:interpolation-other-half} holds trivially.
Clearly, the behaviour of the function will mirror that on $U\times\mathcal{W}_-$, thanks to the functional equation. All the results are then proved quickly by referring back to the results of Subsection~\ref{subsec:generalising-interpolation}. We first count the number of zeros in the Euler factor. Denote by $\mathscr{E}'$ the extra factor in the interpolation formula~\eqref{eq:interpolation-other-half}, i.e. \begin{multline*} \mathscr{E}'(k,j) = (-1)^{k+1} (2k-j+1)! \frac{(2\pi iN)^{j-k-1}}{4(4\pi)^{k+1}\langle f_k, f_k \rangle} \epsilon_p^{[F,F,j+1]}(k,k,j+1) \\ \cdot\frac{E(\Sym^2 f_k^*,2k-j+2)}{E(f_k)E^*(f_k)}. \end{multline*} \begin{proposition} Let $(k_0,j_0+1)$ be a pair of integers with $k_0+1\leq j_0\leq 2k_0+1$ and $j_0$ odd. Then the number of factors of $\mathscr{E}'$ which are zero at $(k_0,j_0+1)$ is $O''(j_0)$, which is defined as \begin{equation*} \begin{array}{c|c|c|c} & & j_0 > k_0+1 & j_0 = k_0+1 \\ \multirow{2}{*}{$\alpha \neq \beta$} & \psi(p) = 1 & 0 & +1 \\ & \psi(p) \neq 1 & 0 & 0 \;\text{or}\; +1 \\ \hline \multirow{2}{*}{$\alpha = \beta$} & \psi(p) = 1 & -1 & +1 \\ & \psi(p) \neq 1 & -1 & -1 \end{array} \end{equation*} In the case $\alpha\neq\beta$ and $\psi(p)\neq 1$, $O''(k_0+1)=1$ if and only if $\beta = \pm p^{\frac{k_0+1}{2}}$, $\alpha = \psi(p)\beta$. \end{proposition} The proof of this proposition is completely analogous to that of Proposition~\ref{prop:counting-zeros}. We now derive consequences. The proofs are identical to the analogous cases over $U\times\mathcal{W}_-$. We specialise to the case of interest, that is, $j_0=k_0+1$ and $E(\Sym^2 f_{k_0}^*,k_0+1) = 0$, and study when the interpolation further generalises. By replicating the strategy used over $U\times\mathcal{W}_-$, we try to show that in this case the interpolation formula is zero on both sides. Since we are not using the previous techniques, we drop all additional assumptions here, M3 included. Let $j_0=k_0+1$ be odd.
\begin{proposition} If $\psi\neq 1$ and $L_p^{\mathrm{geom}}(F,F)(k_0,k_0,k_0+2)=0$, then $L^{\mathrm{imp}}_p(\Sym^2 F)(k_0,\allowbreak k_0+2)=0$. \end{proposition} \begin{proof} Since $j_0=k_0+1$ is odd, $k_0$ is even, and so is $\psi$. The proposition follows from the same proof as Proposition~\ref{prop:geom-0-sym-0}, using that $L_p(\psi,1)\neq 0$. \end{proof} \begin{proposition} Suppose that $E(\Sym^2 f_{k_0}^*,k_0+1)=0$ and $\psi\neq 1$. If $L_p^{\mathrm{geom}}(F,F)(k_0,\allowbreak k_0,k_0+2)=0$, then the interpolation formula~\eqref{eq:interpolation-other-half} holds (trivially). \end{proposition} \begin{proof} Under the hypothesis $E(\Sym^2 f_{k_0}^*,k_0+1)=0$, the right hand side of the interpolation formula~\eqref{eq:interpolation-other-half} is zero, by the same reasoning as in Lemma~\ref{lemma:right-hand-side-0}. Under the hypotheses $\psi\neq 1$ and $L_p^{\mathrm{geom}}(F,F)(k_0,k_0,k_0+2)=0$ the last proposition applies, and the left hand side of the formula vanishes as well. Therefore the interpolation formula reads $0=0$, which trivially holds. \end{proof} As is the case over $U\times\mathcal{W}_-$, in the half $U\times\mathcal{W}_+$ of the domain we have that when $E(\Sym^2 f_{k_0}^*,k_0+1)\neq 0$ holds, interpolation at $j_0=k_0+1$ always holds, while when it fails, interpolation holds only when we are able to prove that the $p$-adic $L$-value is zero. \paragraph{Exceptional zeros in the whole weight space} The study of the factor $\mathscr{E}'$ suggests where the exceptional zeros and poles of $L^{\mathrm{imp}}_p(\Sym^2 F)$ could be located in this half of the domain too. For the same reasons as over $U\times\mathcal{W}_-$, we cannot directly translate this indication into a full-fledged proof, but we turn it into a conjecture. \begin{conjecture} Suppose that $\psi$ and $\psi^2$ are primitive modulo $N$.
Every point $(k_0,j_0+1)\in U\times\mathcal{W}_+$ such that $j_0$ is odd and $k_0+1<j_0\leq 2k_0+1$ is an exceptional zero for $L^{\mathrm{imp}}_p(\Sym^2 F)$ of order $O''(j_0)$. Every point $(k_0,k_0+2)\in U\times\mathcal{W}_+$ such that $k_0+1$ is odd and $E(\Sym^2 f_{k_0}^*,k_0+1)\neq 0$ is an exceptional zero for $L^{\mathrm{imp}}_p(\Sym^2 F)$ of order $-r_0$, where \begin{equation*} r_0 = \begin{cases} 0 &\text{if $\alpha\neq\beta$} \\ 1 &\text{if $\alpha=\beta$} \end{cases} \end{equation*} Moreover, assuming $E(\Sym^2 f_{k_0}^*,k_0+1)\neq 0$, the following hold: \begin{enumerate} \item If $k_0=0$, then $L^{\mathrm{imp}}_p(\Sym^2 F)$ is analytic at $(0,2)$. \item If $k_0>0$, assuming the conjecture that $\alpha\neq\beta$, then $L^{\mathrm{imp}}_p(\Sym^2 F)$ is analytic at all points $(k_0,j_0+1)$ for $j_0$ odd, $k_0+1\leq j_0\leq 2k_0+1$. \item Assuming the conjecture that $\alpha\neq\beta$ in all weights, $L^{\mathrm{imp}}_p(\Sym^2 F)$ is analytic on the whole interpolation range in $U\times\mathcal{W}_+$, and then on the whole interpolation range in $U\times\mathcal{W}$. \end{enumerate} \end{conjecture} \subsection{Conclusion and final remarks} By putting together Theorem~\ref{thm:factorization-other-half} with the main theorem of the last section, we prove the main and final theorems of the paper. For easier reference, we recall here that $f\in S_{k+2}(N,\psi)$ is a normalised cuspidal newform and that we assume $N\geq 5$. \begin{theorem} \label{thm:final-int} Suppose that $f$ is supersingular at $p$ and assume that $\alpha_f\neq\beta_f$.
There exists a unique $2$-variable meromorphic $p$-adic $L$-function $L^{\mathrm{imp}}_p(\Sym^2 F)\colon U\times\mathcal{W} \to \overline{\numberset{Q}_p}$ with the following interpolation property: \begin{itemize} \item if $(k_0,j_0)\in \numberset{N}^2 \cap U\times \mathcal{W}$ with $j_0$ even is such that either $0\leq j_0 < k_0$, or $j_0=k_0$, $E(\Sym^2 f_{k_0},k_0+1)\neq 0$ and $N=N_{\psi}$, then: \begin{multline*} L^{\mathrm{imp}}_p(\Sym^2 F)(k_0,j_0+1) = (-1)^{k_0+1}j_0! \frac{(2\pi i)^{k_0-j_0} i^{a_{\psi}}}{4\tau(\psi)(4\pi)^{k_0+1}\langle f_{k_0},f_{k_0} \rangle} \\ \cdot \frac{E(\Sym^2 f_{k_0},j_0+1)}{E(f_{k_0})E^*(f_{k_0})} L^{\mathrm{imp}}(\Sym^2 f_{k_0},j_0+1)\prod_{\substack{l \mid N \\ l \centernot\mid N_{\psi}}} \Bigl(1 - \frac{\psi(l)}{l^{j_0-k_0}}\Bigr) \end{multline*} \item assume that $\psi$, $\psi^2$ have conductor $N$; if $(k_0,j_0)\in \numberset{N}^2 \cap U\times \mathcal{W}$ with $j_0$ odd is such that either $k_0+1< j_0 \leq 2k_0+1$, or $j_0=k_0+1$ and $E(\Sym^2 f_{k_0}^*,k_0+1)\neq 0$, then: \begin{multline*} L^{\mathrm{imp}}_p(\Sym^2 F)(k_0,j_0+1) = (-1)^{k_0+1} (2k_0-j_0+1)! \frac{(2\pi iN)^{j_0-k_0-1}}{4\langle f_{k_0}, f_{k_0} \rangle(4\pi)^{k_0+1}} \\ \cdot \epsilon_p^{[F,F,j_0+1]}(k_0,k_0,j_0+1) \frac{E(\Sym^2 f_{k_0}^*,2k_0-j_0+2)}{E(f_{k_0})E^*(f_{k_0})} \\ \cdot L^{\mathrm{imp}}(\Sym^2 f_{k_0}\otimes\psi^{-2},2k_0-j_0+2) \end{multline*} \end{itemize} where $f_{k_0}\in M_{k_0+2}(N,\psi)$ is such that $F$ passes through its $p$-stabilisation at $k_0$. \end{theorem} In general $L^{\mathrm{imp}}_p(\Sym^2 F)$ is not necessarily analytic, but only meromorphic. \begin{proof} We have proved all the interpolation formulæ already, in the course of the last two sections: see Theorems~\ref{thm:first-half-final} and~\ref{thm:factorization-other-half}. 
The only thing that is left to prove is that these equations determine $L^{\mathrm{imp}}_p(\Sym^2 F)$ uniquely over $U\times\mathcal{W}$: this is the same density argument used in the proof of Theorem~\ref{thm:first-half-final} on page~\pageref{thm:first-half-final}. \end{proof} \begin{theorem} \label{thm:final-fact} In the hypotheses of Theorem~\ref{thm:final-int}, the factorisation of $p$-adic $L$-functions holds: \begin{equation*} L_p^{\mathrm{geom}}(F,F)(\sigma_1,\sigma_1,\sigma_2) = L^{\mathrm{imp}}_p(\Sym^2 F)(\sigma_1,\sigma_2)L_p(\psi,\sigma_2-\sigma_1-1) \end{equation*} for every $(\sigma_1,\sigma_2)\in U\times\mathcal{W}$. \end{theorem} \begin{proof} This follows directly from the definition of $L^{\mathrm{imp}}_p(\Sym^2 F)$. \end{proof} The following corollary is straightforward. \begin{corollary} Suppose that $f$ is supersingular at $p$ and that $\alpha_f\neq\beta_f$. There exist one-variable meromorphic $p$-adic $L$-functions $L^{\mathrm{imp}}_p(\Sym^2 f_{k_0})\colon \mathcal{W} \to \overline{\numberset{Q}_p}$ for varying $k_0$, defined as specialisations \begin{equation*} L^{\mathrm{imp}}_p(\Sym^2 f_{k_0}) = L^{\mathrm{imp}}_p(\Sym^2 F)(k_0) \end{equation*} which enjoy the following interpolation property: \begin{itemize} \item for integers $j_0$ such that $(-1)^{j_0} = 1$ and either $0\leq j_0 < k_0$, or $j_0=k_0$, $E(\Sym^2 f_{k_0},\allowbreak k_0+1)\neq 0$ and $N=N_{\psi}$: \begin{multline*} L^{\mathrm{imp}}_p(\Sym^2 f_{k_0})(j_0+1) = (-1)^{k_0+1}j_0!
\frac{(2\pi i)^{k_0-j_0} i^{a_{\psi}}}{4\tau(\psi)(4\pi)^{k_0+1}\langle f_{k_0},f_{k_0} \rangle} \\ \cdot \frac{E(\Sym^2 f_{k_0},j_0+1)}{E(f_{k_0})E^*(f_{k_0})} L^{\mathrm{imp}}(\Sym^2 f_{k_0},j_0+1)\prod_{\substack{l \mid N \\ l \centernot\mid N_{\psi}}} \Bigl(1 - \frac{\psi(l)}{l^{j_0-k_0}}\Bigr) \end{multline*} \item assume that $\psi$, $\psi^2$ have conductor $N$; for integers $j_0$ such that $(-1)^{j_0+1} = 1$ and either $k_0+1 < j_0 \leq 2k_0+1$, or $j_0=k_0+1$ and $E(\Sym^2 f_{k_0}^*,k_0+1)\neq 0$: \begin{multline*} L^{\mathrm{imp}}_p(\Sym^2 f_{k_0})(j_0+1) = (-1)^{k_0+1} (2k_0-j_0+1)! \frac{(2\pi iN)^{j_0-k_0-1}}{4(4\pi)^{k_0+1}\langle f_{k_0}, f_{k_0} \rangle} \\ \cdot \epsilon_p^{[F,F,j_0+1]}(k_0,k_0,j_0+1) \frac{E(\Sym^2 f_{k_0}^*,2k_0-j_0+2)}{E(f_{k_0})E^*(f_{k_0})} \\ \cdot L^{\mathrm{imp}}(\Sym^2 f_{k_0}\otimes\psi^{-2},2k_0-j_0+2). \end{multline*} \end{itemize} \end{corollary} With the new interpolation properties, the $p$-adic $L$-function defined in the theorem is uniquely determined on the whole $U\times\mathcal{W}$ by density. The arguments of this section complete the goal of covering the half $U\times\mathcal{W}_+$ of the weight space, as pointed out at the end of the last section. Since we use additive notation for the variables of $p$-adic $L$-functions, and thanks to our normalisations, $L^{\mathrm{imp}}_p(\Sym^2 F)$ interpolates $L(\Sym^2 f_k)$ at the same evaluation points. It is also remarkable that the shape of the $p$-adic factorisation formula matches exactly that of the complex factorisation formula. We highlight that we have interpolation for $s_0 = j_0+1$ in the interval $[1,2k_0+2]$, which is cut into two halves with different parity conditions. In particular, in the first half the condition is $(-1)^{s_0+1} = 1$, while in the second it is $(-1)^{s_0} = 1$. These conditions coincide exactly with those of Schmidt's $p$-adic $L$-function, which interpolates $L^{\mathrm{imp}}(\Sym^2 f_{k_0})$ in the ordinary case.
Therefore our functions are a direct generalisation of Schmidt's function to the supersingular case (and to families of forms). \nocite{deninger.scholl:beilinson} \nocite{buyukboduk.lei:iwasawa} \nocite{kings:higher} \section{Introduction} \subsection*{Statement of the results} The purpose of this paper is to construct a $p$-adic $L$-function for the symmetric square of a cusp form which is supersingular at $p$, and to prove that this function satisfies a $p$-adic factorisation formula. Let $f\in S_{k+2}(N,\psi)$ be a normalised cuspidal newform of level $N\geq 5$ and character $\psi$ of conductor $N_{\psi}$, with $q$-expansion $\sum_{n\geq 1} a_n(f)q^n$. Let $p$ be a fixed odd rational prime not dividing $N$, and $v_p$ be the $p$-adic valuation on $\overline{\numberset{Q}_p}$, normalised such that $v_p(p) = 1$. Define the slope of $f$ at $p$ to be the quantity $v_p(a_p(f))$. Dasgupta proved in 2016~\cite{dasgupta:factorization} that when $f$ is $p$-ordinary---that is, if $v_p(a_p(f))=0$---the following factorisation holds: \begin{equation} \label{intro:dasgupta} L_p(f\otimes f,s) = L_p(\Sym^2 f,s)L_p(\psi,s-k-1). \end{equation} Here $L_p(\Sym^2 f)$ is Schmidt's symmetric square $p$-adic $L$-function. The function $L_p(f\otimes f)$ is the $p$-adic $L$-function defined by specialising Hida's $p$-adic Rankin-Selberg $L$-function for families of modular forms to a pair of coincident weights. The theorem is the $p$-adic version of the complex factorisation: \begin{equation*} L(f\otimes f,s) = L(\Sym^2 f,s)L(\psi,s-k-1). \end{equation*} Here we denote by $L(f\otimes f)$ and $L(\Sym^2 f)$ the primitive Rankin-Selberg and symmetric square complex $L$-functions. In the present paper, we construct a $p$-adic $L$-function that generalises Schmidt's $L_p(\Sym^2 f)$ to the case of $f$ supersingular at $p$, i.e.\ $v_p(a_p(f))>0$. In addition, this function is defined for (Coleman) families of modular forms.
We then prove a generalisation of~\eqref{intro:dasgupta} to the case when $f$ is $p$-supersingular by using said function. To state the main theorems, let $\mathcal{W}$ be the weight space, that is the rigid analytic space whose $K$-points are $\mathcal{W}(K) = \Hom(\numberset{Z}_p^{\times},K^{\times})$ for every extension $K$ of $\numberset{Q}_p$. Elements of the weight space will be referred to as weight-characters. Recall that there is an injection $\numberset{Z} \hookrightarrow \mathcal{W}(\numberset{Q}_p)$, mapping an integer $n\in\numberset{Z}$ to the weight-character defined by $z\mapsto z^n$. We will also assume that the Satake parameters of $f$ at $p$ are distinct, i.e.\ that $\alpha_f\neq\beta_f$. Under this assumption, there exist an affinoid disc $U\subseteq \mathcal{W}$ containing $k$, and a Coleman family $F$ over $U$ passing through a $p$-stabilisation of $f$. Let $L_p^{\mathrm{geom}}(F,F)$ be the $3$-variable geometric $p$-adic $L$-function constructed by Zerbes and the second author~\cite{loeffler.zerbes:rankin-eisenstein}. We will prove the following results, which are Theorems~\ref{thm:final-int} and~\ref{thm:final-fact} in the text. \begin{thmintro}[Main Theorem] \label{intro:final1} Suppose that $f$ is supersingular at $p$ and that $\alpha_f\neq\beta_f$. There exists a unique $2$-variable meromorphic $p$-adic $L$-function $L^{\mathrm{imp}}_p(\Sym^2 F)\colon U\times\mathcal{W} \to \overline{\numberset{Q}_p}$ with the following interpolation property: \begin{itemize} \item if $(k_0,j_0)\in \numberset{N}^2 \cap U\times \mathcal{W}$ with $j_0$ even is such that either $0\leq j_0 < k_0$, or $j_0=k_0$, $E(\Sym^2 f_{k_0},k_0+1)\neq 0$ and $N=N_{\psi}$, then: \begin{multline} \label{intro:interpolation1} L^{\mathrm{imp}}_p(\Sym^2 F)(k_0,j_0+1) = (-1)^{k_0+1}j_0!
\frac{(2\pi i)^{k_0-j_0} i^{a_{\psi}}}{4\tau(\psi)(4\pi)^{k_0+1}\langle f_{k_0},f_{k_0} \rangle} \\ \cdot \frac{E(\Sym^2 f_{k_0},j_0+1)}{E(f_{k_0})E^*(f_{k_0})} L^{\mathrm{imp}}(\Sym^2 f_{k_0},j_0+1)\prod_{\substack{l \mid N \\ l \centernot\mid N_{\psi}}} \Bigl(1 - \frac{\psi(l)}{l^{j_0-k_0}}\Bigr) \end{multline} \item assume that $\psi$, $\psi^2$ have conductor $N$; if $(k_0,j_0)\in \numberset{N}^2 \cap U\times \mathcal{W}$ with $j_0$ odd is such that either $k_0+1< j_0 \leq 2k_0+1$, or $j_0=k_0+1$ and $E(\Sym^2 f_{k_0}^*,k_0+1)\neq 0$, then: \begin{multline} \label{intro:interpolation2} L^{\mathrm{imp}}_p(\Sym^2 F)(k_0,j_0+1) = (-1)^{k_0+1} (2k_0-j_0+1)! \frac{(2\pi iN)^{j_0-k_0-1}}{4(4\pi)^{k_0+1}\langle f_{k_0}, f_{k_0} \rangle} \\ \cdot\epsilon_p^{[F,F,j_0+1]}(k_0,k_0,j_0+1) \frac{E(\Sym^2 f_{k_0}^*,2k_0-j_0+2)}{E(f_{k_0})E^*(f_{k_0})} \\ \cdot L^{\mathrm{imp}}(\Sym^2 f_{k_0}\otimes\psi^{-2},2k_0-j_0+2) \end{multline} \end{itemize} where $f_{k_0}\in M_{k_0+2}(N,\psi)$ is such that $F$ passes through its $p$-stabilisation at $k_0$. \end{thmintro} In general $L^{\mathrm{imp}}_p(\Sym^2 F)$ is not necessarily analytic. Here $L^{\mathrm{imp}}(\Sym^2 f)$ is the imprimitive symmetric square complex $L$-function, see Subsection~\ref{subsec:primitive-imprimitive-functions} below. The quantity $\tau(\psi)$ is the Gauss sum of the character $\psi$, while for the definition of the factors $E$, $E^*$ and $\epsilon_p$ see Theorem~\ref{thm:geometric-l-function} and Proposition~\ref{prop:benois-horte-functional-eq}. \begin{thmintro} \label{intro:final2} In the hypotheses of Theorem~\ref{thm:final-int}, the factorisation of $p$-adic $L$-functions holds: \begin{equation} \label{intro:factorization} L_p^{\mathrm{geom}}(F,F)(\sigma_1,\sigma_1,\sigma_2) = L^{\mathrm{imp}}_p(\Sym^2 F)(\sigma_1,\sigma_2)L_p(\psi,\sigma_2-\sigma_1-1) \end{equation} for every $(\sigma_1,\sigma_2)\in U\times\mathcal{W}$. 
\end{thmintro} Equation~\eqref{intro:factorization} is the analogue of~\eqref{intro:dasgupta} in the $p$-supersingular case. Taken together with Dasgupta's result, this proves a version of the factorisation formula independently of the slope of $f$ at $p$. Equations~\eqref{intro:interpolation1} and~\eqref{intro:interpolation2} show that the newly constructed function interpolates the critical values of symmetric square complex $L$-functions, subject to some parity conditions. Both theorems are stated for imprimitive $L$-functions; see Subsection~\ref{subsec:primitive-imprimitive-padic} for further explanation. We will also prove that, as expected, the $p$-adic function defined above satisfies a functional equation. \begin{thmintro}[Functional equation] \label{intro:final3} Suppose that $\psi$, $\psi^2$ are primitive modulo $N$. Then $L^{\mathrm{imp}}_p(\Sym^2 F)$ satisfies the following functional equation for every $(\kappa,s)\in U\times\mathcal{W}$: \begin{equation*} L^{\mathrm{imp}}_p(\Sym^2 F)(\kappa,s) = \epsilon_p^{[F,F,a]}(\kappa,\kappa,s) i^{-a_{\psi}}\tau(\psi^{-1})N^{s-\kappa-2} L^{\mathrm{imp}}_p(\Sym^2 F^*)(\kappa,2\kappa-s+3). \end{equation*} \end{thmintro} For the definition of the extra factor, see Theorem~\ref{thm:functional-equation}. \subsection*{Motivation and background} The motivation for this work is manifold. Firstly, this paper completes the picture for the factorisation of the $p$-adic Rankin-Selberg $L$-function for the self-convolution of a cusp form. Given the current state of research, the authors believe that other results of this kind are within reach. For instance, we are convinced that the case of a base-change Hilbert modular form over a real quadratic field falls into this category.
In that case, one should be able to factor the Asai $p$-adic $L$-function attached to the Hilbert modular form as the product of a Kubota-Leopoldt $L$-function and the $p$-adic $L$-function associated to the underlying elliptic modular form. Issues similar to those faced in this paper arise, in particular, regarding the unconditional existence of the Asai $p$-adic $L$-function. Going deeper into speculation, whenever an automorphic $L$-function factorises as a product of simpler automorphic $L$-functions, there should be a corresponding factorisation of $p$-adic $L$-functions. Secondly, equation~\eqref{intro:factorization} is useful to investigate whether the values of the involved $p$-adic $L$-functions are zero or not, and their vanishing orders. For example, in the range where $L_p^{\mathrm{geom}}(F,F)$ interpolates complex Rankin-Selberg $L$-values, one can investigate the vanishing of $L^{\mathrm{imp}}_p(\Sym^2 F)$. This task is carried out to some extent in Sections~\ref{sec:comparison} and~\ref{sec:functional-eq-interpolation-whole}. These kinds of results may not say much about the $p$-adic functions themselves (other than proving their non-triviality), but are much needed in the context of Euler systems. Indeed, a strategy often used to study an Euler system is to relate its classes to special values of $p$-adic $L$-functions via an \emph{explicit reciprocity law}. This goes a long way towards proving the non-triviality of (the base class of) the Euler system, by proving that a special $p$-adic $L$-value is non-zero. This strategy has been successfully used for example in~\cite{lei.loeffler:euler1,kings.loeffler:rankin-eisenstein1,loeffler.zerbes:iwasawa,bertolini.darmon:beilinson-flach}. It is then clear that the main results of this paper---and potentially other similar ones---are of great help.
In particular, the specific case addressed in this article sheds some light on special values of $L_p^{\mathrm{geom}}(F,F)$, besides, clearly, those of $L^{\mathrm{imp}}_p(\Sym^2 F)$ (though the latter has not been linked to any Euler system so far), potentially saying something more about the Euler systems constructed in~\cite{kings.loeffler:rankin-eisenstein,kings.loeffler:rankin-eisenstein1,loeffler.zerbes:rankin-eisenstein}. Furthermore, proving the factorisation in the above-mentioned case of the Asai $p$-adic $L$-function could yield a conditional proof of the non-vanishing of the Euler system constructed in~\cite{lei.loeffler:euler}, assuming a conjectural formula linking the Asai-Flach Euler system and the Asai $p$-adic $L$-function. Thirdly, Dasgupta's result has been useful to prove several other results in the area: Dasgupta himself used it to investigate exceptional zeros~\cite{dasgupta:factorization}; Rivero and Rotger to investigate relations between $L$-invariants and units of number fields~\cite{rivero.victor:derived}; Zerbes and the second author to prove the non-triviality of an Euler system for the symmetric square of a modular form~\cite{loeffler.zerbes:iwasawa}. Other applications can be found in~\cite{palvannan:on}. Theorems~\ref{intro:final1}, \ref{intro:final2} and~\ref{intro:final3} thus open the possibility of proving similar results in the supersingular case. We would like to especially highlight the preprint~\cite{buyukboduk.lei:iwasawa1}, which studies exceptional zeros and signed $p$-adic $L$-functions for the symmetric square of a supersingular modular form. That paper relies explicitly on the results proved in this article, thus giving an immediate application of the main theorems. Finally, the construction of $L^{\mathrm{imp}}_p(\Sym^2 F)$ fills a gap in the current literature.
There is at present no construction of a $p$-adic symmetric square $L$-function that works regardless of the cusp form one feeds it with, and in particular there is no such construction for supersingular cusp forms of arbitrary slope. This article provides a well-defined \emph{unique} function $L^{\mathrm{imp}}_p(\Sym^2 F)$ for $f$ supersingular of any finite slope, which interpolates a whole range of symmetric square $L$-functions by~\eqref{intro:interpolation1} and~\eqref{intro:interpolation2}. Therefore, the function acts as a generalisation of the $p$-adic symmetric square function in two different directions: it is applicable to supersingular cusp forms, and it is a symmetric square $L$-function for \emph{families} of cusp forms. In particular, $L^{\mathrm{imp}}_p(\Sym^2 F)(k_0,\cdot)$ is a $p$-adic $L$-function which interpolates at least one value of a complex symmetric square $L$-function, for all integers $k_0\geq 0$, thus covering new cases for the construction of $L^{\mathrm{imp}}_p(\Sym^2 f)$. \subsection*{Strategy of the proof} We now briefly explain the strategy of the proof. First of all, we point out some specific features of the supersingular Rankin-Selberg case that make the task harder. We will try to emphasise the differences between the ordinary and the supersingular cases, and why some of Dasgupta's techniques do not directly generalise to this case. We especially highlight the following obstructions: \begin{enumerate} \item the $p$-adic Rankin-Selberg $L$-function is not available for the convolution of two modular forms of the same weight. Indeed, its construction relies on Shimura's classical result~\cite{shimura:on} exploiting a period for the critical values, which are the integers in the range $[r',r-1]$ for the convolution of forms of weights $r=k+2$ and $r'=k'+2$, under the assumption $r'\leq r$. These values were then interpolated $p$-adically by Panchishkin and Hida.
When $r'=r$ the critical range is empty, and no interpolation result is known at the time of writing. To work around this issue, we need to replace $L_p(f\otimes f)$ with a $p$-adic $L$-function for a family of modular forms through $f$, which satisfies many interpolation formulæ and is in particular always defined. In that way, we can prove a factorisation formula for $p$-adic $L$-functions ``in families'', from which the usual one follows by specialising the weight. In the ordinary case one can resort to Hida families, while in the supersingular case we need Coleman families. As a replacement for $L_p(f\otimes f)$ we will use the geometric $p$-adic $L$-function of~\cite{loeffler.zerbes:rankin-eisenstein}; \item \label{2} the $p$-adic symmetric square $L$-function is not available when $f$ does not satisfy a ``small slope'' condition. The construction of $L_p(\Sym^2 f)$ was performed by Schmidt~\cite{schmidt:p-adic} when $f$ is ordinary, and extended by Dąbrowski-Delbourgo~\cite{dabrowski.delbourgo:s-adic} when $v_p(a_p(f)) < \frac{k+1}{2}$. However, there is no result covering the general case. In our setting, $f$ is supersingular at $p$ without restrictions on the slope, so none of these results apply. Addressing this problem is one of the main aims of the paper: we will construct a replacement for the $1$-variable imprimitive version $L^{\mathrm{imp}}_p(\Sym^2 f)$ by varying $f$ in a family, namely the $2$-variable $p$-adic $L$-function $L^{\mathrm{imp}}_p(\Sym^2 F)$ associated to the Coleman family $F$; \item the range of points at which there is an explicit formula for the values of $L_p^{\mathrm{geom}}(F,F)$ includes only integer weight-characters. These formulæ are essential to deduce the interpolation properties for $L^{\mathrm{imp}}_p(\Sym^2 F)$, which are its characterising features. Therefore, as a consequence of the first replacement, we can deduce the latter only at integer weight-characters.
In particular, we cannot rely on spanning a dense set of points by varying only a $p$-power character, but necessarily have to vary the weight. This is in contrast with the ordinary case, where by varying the $p$-power character one can span a sufficiently large subset while keeping the weight constant (Dasgupta restricts to weight $2$); \item \label{4} in the case $k=j=0$, all the cohomological arguments apply to the cohomology of the modular curves $Y_1(N)^2$ and $X_1(N)^2$ with trivial coefficients. However, to cover the case $k>0$ one needs to allow non-trivial coefficients. This prevents the application of the isomorphism between higher Chow groups and motivic cohomology with trivial coefficients, thus making geometric techniques out of reach for the study of motivic classes. The solution to this problem is to work over higher dimensional Kuga-Sato varieties, while keeping the cohomological coefficients trivial. In this way one recovers the geometric interpretation, at the cost of working over higher dimensional varieties. This requires extra work, especially when dealing with compactifications and (cuspidal) divisors. As mentioned in the previous point, in the ordinary case Dasgupta can restrict to $k=0$, but in this paper we essentially need $k>0$. \end{enumerate} It is clear that, while for the first obstruction the workaround for the supersingular case is akin to that for the ordinary one---albeit technically harder---for obstructions~\ref{2}--\ref{4} the supersingular case requires new tools and ideas. On the second point, when the slope of $f$ satisfies $0 < v_p(a_p(f)) < \frac{k+1}{2}$ both Dąbrowski-Delbourgo's and this paper's constructions make sense. At the time of writing, there is however no way to directly compare the two to check whether they are equal or not. What is required to carry out the comparison is a version of Theorems~\ref{intro:final1} and~\ref{intro:final2} of this paper involving twists by Dirichlet characters.
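As a small worked check of the first obstruction, under the weight conventions recalled above, the critical range for the Rankin-Selberg convolution collapses exactly when the weights coincide (the shorthand $\mathrm{Crit}$ below is ours, introduced only for this remark):

```latex
% Critical integers for the convolution of forms of weights r = k+2 and
% r' = k'+2 with r' <= r, as in Shimura's result recalled above:
\[
  \mathrm{Crit}(f\otimes f') = \{\, s\in\numberset{Z} : r' \leq s \leq r-1 \,\}.
\]
% For the self-convolution r' = r, so the range is empty:
\[
  \mathrm{Crit}(f\otimes f) = \{\, s\in\numberset{Z} : r \leq s \leq r-1 \,\} = \emptyset,
\]
% and there is not a single critical value to feed into classical
% p-adic interpolation, which forces the passage to families.
```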
Regarding the actual proof of the result, the first remark to be made is that $p$-adic interpolation is doomed to fail. If we had a sufficiently large set of integers at which interpolation formulæ for all three functions held, then we could directly prove the theorem from the complex factorisation formula. However, as noted above $L(f\otimes f)$ has an empty critical range, so this strategy is not viable (not even in the ordinary case). An entirely different approach is needed. The strategy of the proof is to compare cohomology classes related to the relevant $L$-functions. The idea of comparing cohomology classes related to $L$-values to deduce information on the values themselves was introduced by Gross~\cite{gross:on} and picked up again by Dasgupta. It is also in accordance with the leitmotif that special values of $L$-functions are related to Euler systems. The authors hope that the techniques presented in this article may be useful in further generalising this technique and applying it to other cases of interest. The first goal is then to find cohomology classes that yield $L$-values by applying regulator maps. Once we have such classes at our disposal we will be able to use all the tools in the cohomological shed to study and compare them. More precisely, we will prove that there exist \emph{motivic} cohomology classes \begin{equation*} \phi_{\psi}, b_f \in H^1_{\mot}(\Spec \numberset{Q}(\mu_N),1+k-j) \otimes K_f \end{equation*} where $K_f$ is the number field generated by the Fourier coefficients of $f$. The former is a (higher) cyclotomic class, the latter comes from Beilinson-Flach classes. These classes contain both complex and $p$-adic information about Dirichlet and Rankin-Selberg $L$-functions respectively, thus fitting the idea that motivic cohomology contains ``all'' the available cohomological information. To prove the main theorems we will compare these classes.
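For orientation, the complex factorisation in question can be sketched as follows; it follows, up to finitely many Euler factors at bad primes, from the decomposition~\eqref{eq:decomposition-rho} recalled later in the paper.

```latex
% Taking L-functions in the direct sum decomposition of Galois representations
%   \rho_{f,v} \otimes \rho_{f,v} \simeq \Sym^2(\rho_{f,v}) \oplus \det\rho_{f,v},
% and noting that \det\rho_{f,v} has Euler factor 1 - \psi(l)l^{k+1}X at l \nmid N
% (the product of the two Satake parameters at l is \psi(l)l^{k+1}), one obtains
\begin{equation*}
  L(f\otimes f, s) = L(\Sym^2 f, s)\cdot L(\psi, s-k-1).
\end{equation*}
```

This is the classical factorisation whose $p$-adic avatar is the subject of the paper.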
The class $\phi_{\psi}$ is the higher cyclotomic class defined by Beilinson, attached to the character $\psi$. When $j$ is even, it satisfies the formula: \begin{equation*} \regulator{\del}(\phi_{\psi}) = -\Bigl( \frac{2\pi i}{N_{\psi}} \Bigr)^{k-j} \tau(\psi)L'(\psi,j-k). \end{equation*} To define $b_f$, called the Beilinson-Flach $K$-element, we start from the Beilinson-Flach classes constructed by Kings, Zerbes and the second author: \begin{equation*} \mathrm{BF}^{[k,k,j]}_{\mot} \in H^{2k+3}_{\mot}(\mathcal{E}^k\times\mathcal{E}^k,2+2k-j). \end{equation*} Here $\mathcal{E}^k \to Y_1(N)$ is the $k$-fold fibre product of the universal elliptic curve over $Y_1(N)$, and $\mathcal{E}^k\times\mathcal{E}^k\to Y_1(N)^2$ is the self-fibre product over the base scheme $\Spec \numberset{Q}$. Brunault and Chida then compactified these classes, which means that they defined motivic cohomology classes \begin{equation*} \widetilde{\Xi}^{k,k,j} \in H^{2k+3}_{\mot}(\mathscr{W}_k,2+2k-j) \end{equation*} that pull back to the above ones. Here $\mathscr{W}_k$ is the smooth and proper Kuga-Sato variety $\mathscr{W}_k = W_k \times W_k \to X_1(N)^2$. The building block $W_k$ is defined as follows: the universal generalised elliptic curve $\overline{\mathcal{E}} \to X_1(N)$ is singular over the cusps, so its $k$-fold fibre product $\overline{\mathcal{E}}^k \to X_1(N)$ is not smooth for $k>0$, and $W_k$ is its canonical desingularisation. The advantage of these classes is that they are defined over a smooth and proper variety, which allows for more cohomological manipulations. We further define a symmetrised version $\Xi^{k,k,j} = (\widetilde{\Xi}^{k,k,j} + (-1)^{k+j}\rho^*\widetilde{\Xi}^{k,k,j})/2$ which has the advantage of being an eigenvector for the involution $\rho$ swapping the components of the fibre product.
Kings, Zerbes and the second author proved in~\cite{kings.loeffler:rankin-eisenstein} that the Deligne realisation of $\mathrm{BF}^{[k,k,j]}_{\mot}$ gives complex Rankin-Selberg $L$-values via the pairing with a specific differential form. More specifically, they proved that for $j$ even \begin{equation*} \langle \regulator{\del}(\mathrm{BF}^{[k,k,j]}_{\mot}), \omega_f\otimes\eta_f - \eta_f\otimes\omega_f \rangle = (2\pi i)^{2(k-j)}\frac{(-1)^{k-j}}{2(\omega_f, \overline{\omega_f})}\Bigl(\frac{k!}{(k-j)!}\Bigr)^2 L'(f,f,j+1). \end{equation*} The function $L(f,f)$ is the imprimitive Rankin-Selberg $L$-function, see Subsection~\ref{subsec:primitive-imprimitive-functions} for the precise definition. Brunault and Chida later extended this formula to the classes $\widetilde{\Xi}^{k,k,j}$, and we prove that the $\Xi^{k,k,j}$ satisfy it too (Theorem~\ref{thm:regulator-formula}). The pairing in cohomology given by the cup-product commutes with the Deligne regulator $\regulator{\del}$. By constructing the relevant commutative diagram we can then argue that if we can find a motivic cohomology class $Z_f$ mapping to $\omega_f\otimes\eta_f - \eta_f\otimes\omega_f$ under $\regulator{\del}$, then \begin{equation*} \regulator{\del}(\langle \Xi^{k,k,j}, Z_f \rangle) = \langle \regulator{\del}(\Xi^{k,k,j}), \regulator{\del}(Z_f) \rangle = \langle \regulator{\del}(\Xi^{k,k,j}), \omega_f\otimes\eta_f - \eta_f\otimes\omega_f \rangle. \end{equation*} The construction of the class $Z_f \in H^{2k+2}_{\mot}(\mathscr{W}_k,1+k)\otimes K_f \simeq \mathrm{CH}^{k+1}(\mathscr{W}_k)\otimes K_f$ is performed in Proposition~\ref{prop:image-cycle-dr}. It relies almost entirely on geometric techniques, exploiting a subvariety of $\mathscr{W}_k$ of codimension $k+1$ via the linear functional given by integration. We then define $b_f = \langle \Xi^{k,k,j}, Z_f \rangle \in H^1_{\mot}(\Spec \numberset{Q}(\mu_N),1+k-j)\otimes K_f$.
By combining these results and equations we prove the complex regulator formula for $b_f$ (Theorem~\ref{thm:complex-l-value}): \begin{stepintro} If $j$ is even and $0\leq j\leq k$, then \begin{equation*} \regulator{\del}(b_f) = (2\pi i)^{2(k-j)}\lambda_N(f^*)\frac{(-1)^{k-j}}{2(\omega_f,\overline{\omega_f})}\Bigl( \frac{k!}{(k-j)!} \Bigr)^2 L'(f,f,j+1). \end{equation*} \end{stepintro} We will then show that $b_f$ and $\phi_{\psi}$ actually belong to the same $1$-dimensional subspace of the motivic cohomology group. This subspace is identified as the character eigenspace for the action of the Galois group, inside a subgroup of the full cohomology group. The fact that its dimension is exactly $1$ will come from an integrality argument. We will first exploit the action of a character (Proposition~\ref{prop:galois-action}) to show that $b_f$ and $\phi_{\psi}$ are in its eigenspace. Then, we will prove that they are integral cohomology classes, i.e.\ they live in the cohomology of the ring of integers. More precisely, in Subsection~\ref{subsec:integrality} we will prove several results towards this goal for the class $b_f$ (see for example Theorem~\ref{thm:bf-bloch-kato-units} and Corollary~\ref{cor:localisation-compact-in-selmer}), which can be summarised as: \begin{stepintro} Let $\mathcal{S}$ be the set of primes of $\mathcal{O}_{\numberset{Q}(\mu_N)}$ lying over prime divisors of $N$. \begin{itemize} \item if $j<k$, then $b_f \in (H^1_{\mot}(\Spec \mathcal{O}_{\numberset{Q}(\mu_N)},1+k-j)\otimes K_f)^{\psi}$; \item if $j=k$ then $b_f \in (H^1_{\mot}(\Spec \mathcal{O}_{\numberset{Q}(\mu_N),\mathcal{S}},1)\otimes K_f)^{\psi}$; moreover, assuming $\psi(p)\neq 1$ and $\beta_f^2 \neq p^{k+1}$, then $b_f \in (H^1_{\mot}(\Spec \mathcal{O}_{\numberset{Q}(\mu_N)},1)\otimes K_f)^{\psi}$. \end{itemize} \end{stepintro} This theorem is an essential stepping stone in proving the main theorem, because it gives finer control over the dimensions of these vector spaces.
It also fits the leitmotif: since the starting point of the argument is to exploit the relation between Euler systems and $L$-functions, it is natural to ask whether the étale realisation of $b_f$ lies in any Euler system, given its link with Rankin-Selberg $L$-values. There is already an Euler system related to the Rankin convolution of modular forms, precisely the Beilinson-Flach Euler system used above; and out of that, an Euler system in $H^1(\numberset{Q},M_{\etbar}(f\otimes f)^*)$ has already been built and used to bound Selmer groups~\cite{kings.loeffler:rankin-eisenstein,kings.loeffler:rankin-eisenstein1}. In the case under consideration, we have actually built classes in the first étale cohomology group of $\numberset{Q}_p(1+k-j)$, which is a well-studied Galois representation, and the most interesting Selmer group attached to it is the Bloch-Kato Selmer group $H^1_f(\numberset{Q}(\mu_N),\numberset{Q}_p(1+k-j))$, linked to the integral units of $\numberset{Q}(\mu_N)$. The above theorem characterises whether $\regulator{\et}(b_f)$ belongs to the full or relaxed Selmer group. Finally, we will compute the dimension of the character eigenspace inside the cohomology of $\Spec \mathcal{O}_{\numberset{Q}(\mu_N)}$ and $\Spec \mathcal{O}_{\numberset{Q}(\mu_N), \mathcal{S}}$. Thanks to the above argument we deduce that, under some technical hypotheses, the cohomology classes are in the same $1$-dimensional vector space. Therefore, they must differ by a scalar factor. By comparison with the complex factorisation, we are then able to define a scalar $\Upsilon$ such that $\regulator{\del}(b_f) = \Upsilon\regulator{\del}(\phi_{\psi}) = \regulator{\del}(\Upsilon\phi_{\psi})$. In particular, $\Upsilon$ contains the value $L^{\mathrm{imp}}(\Sym^2 f,j+1)$ as a factor, where $L^{\mathrm{imp}}(\Sym^2 f)$ is the imprimitive symmetric square $L$-function. With these considerations we will prove the following, which is Theorem~\ref{thm:motivic-relation} in the article.
It expresses a deep relation between the two classes directly in motivic cohomology: it is the ``motivic avatar'' of all the $L$-function factorisations. \begin{stepintro} \label{intro:motivic-comparison} Let $j$ be even, $0\leq j\leq k$. If $j=k$, assume $\psi\neq 1$ and $N=N_{\psi}$. Then the following equality of motivic classes holds: $b_f = \Upsilon\phi_{\psi}$. \end{stepintro} On the $p$-adic side, we link motivic classes and $p$-adic $L$-values through $p$-adic regulator formulæ. When $j$ is even Beilinson's classes satisfy \begin{equation*} \regulator{\syn}(\phi_{\psi}) = (-1)^{k-j}\frac{(k-j)!}{N_{\psi}^{k-j}} i^{a_{\psi}} \Bigl(1-\frac{\psi^{-1}(p)}{p^{k-j+1}}\Bigr)^{-1} L_p(\psi,j-k). \end{equation*} One can appeal interchangeably to syntomic or local étale cohomology; here we use the former to avoid the extra bookkeeping due to the appearance of the Bloch-Kato exponential. Proving a regulator formula for $b_f$ amounts to computing the value of \begin{equation*} \regulator{\syn}(b_f) = \regulator{\syn}(\langle \Xi^{k,k,j}, Z_f \rangle) = \langle \regulator{\syn}(\Xi^{k,k,j}), \regulator{\syn}(Z_f) \rangle. \end{equation*} To start with, this requires a careful study of the syntomic class $\regulator{\syn}(Z_f)$, which lives in $H^{2k+2}_{\syn}((\mathscr{W}_k)_{\numberset{Q}_p(\mu_N)},\numberset{Q}_p(k+1))\otimes K_f$, especially regarding the associated de Rham cohomology class; and secondly, an argument to recast the pairing in terms of the value of \begin{equation*} \langle \regulator{\syn}(\mathrm{BF}^{[k,k,j]}_{\mot}), \omega_f'\otimes\eta^{\alpha_f} \rangle, \end{equation*} computed in~\cite{kings.loeffler:rankin-eisenstein}. Here $\eta^{\alpha_f}\in M_{\dR}(f)_{\numberset{Q}_p}$ is a de Rham class with $\numberset{Q}_p$-coefficients associated to $f$ and to the choice of a Satake parameter $\alpha_f$, and $\omega_f' = \tau(\psi^{-1})\omega_f \in M_{\dR}(f)$.
When $f$ is supersingular both Satake parameters work well, so that there are actually two possible choices, in contrast to the ordinary case, where one is forced to choose the unit root. These arguments are explained in Subsection~\ref{subsec:p-adic-argument}, and prove the following $p$-adic regulator formula in Theorem~\ref{thm:syntomic-l-value}. \begin{stepintro} If $j$ is even, $0\leq j\leq k$ and $E(f,f,j+1)\neq 0$, then \begin{equation*} \regulator{\syn}(b_f) = (-1)^k k!\binom{k}{j}\lambda_N(f^*)\frac{2E(f)E^*(f)}{E(f,f,j+1)}L_p^{\mathrm{geom}}(F,F)(k,k,j+1). \end{equation*} \end{stepintro} Now applying $\regulator{\syn}$ to the equality of Theorem~\ref{intro:motivic-comparison} we deduce $\regulator{\syn}(b_f) = \Upsilon\regulator{\syn}(\phi_{\psi})$, which by the above formulæ can be translated into the relation \begin{equation*} \frac{(\star)L_p^{\mathrm{geom}}(F,F)(k,k,j+1)}{(\diamond)L_p(\psi,j-k)} = \Upsilon. \end{equation*} The terms $(\star)$ and $(\diamond)$ represent the extra factors of the regulator formulæ. At this point one would like to invoke the existence of a symmetric square $p$-adic $L$-function interpolating all the extraneous factors. This strategy works well in the ordinary case, but as highlighted above there is no such function at our disposal, so in the supersingular case this argument fails. To tackle this problem we define a function: \begin{equation*} \mathscr{L}_p(\sigma_1,\sigma_2) = \frac{L_p^{\mathrm{geom}}(F,F)(\sigma_1,\sigma_1,\sigma_2)}{L_p(\psi,\sigma_2-\sigma_1-1)} \end{equation*} for all $(\sigma_1,\sigma_2)\in U\times\mathcal{W}$ such that $L_p(\psi,\sigma_2-\sigma_1-1)\neq 0$. The function $\mathscr{L}_p$ may have poles, so it is only meromorphic in general. Reinterpreting the above equation in light of the definition we get \begin{equation*} \mathscr{L}_p(k,j+1) = \frac{(\diamond)}{(\star)}\Upsilon. \end{equation*} Recall that the right hand side contains the factor $L^{\mathrm{imp}}(\Sym^2 f,j+1)$.
Making the right hand side explicit then proves equation~\eqref{intro:interpolation1} in Theorem~\ref{intro:final1}. This holds for every form $f_0$ of weight $k_0+2$ such that $F$ passes through its $p$-stabilisation at $k_0$. Therefore, for every pair of integers $(k_0,j_0)$ satisfying the hypotheses of the theorem, we obtain an interpolation formula. We then get a range of explicit values for $\mathscr{L}_p$ over \begin{gather*} \mathcal{I} = \mathcal{I}_1 \cup \mathcal{I}_2 \subseteq \numberset{N}^2 \cap U\times \mathcal{W} \\ \begin{lgathered} \mathcal{I}_1 = \{ (k_0,j_0+1) \in U\times \mathcal{W} \mid j_0 \; \text{even}, \; 0 \leq j_0 < k_0 \}, \\ \mathcal{I}_2 = \{ (k_0,k_0+1) \in U\times \mathcal{W} \mid k_0 \; \text{even}, \; E(\Sym^2 f_{k_0},k_0+1)\neq 0, \; N=N_{\psi} \}. \end{lgathered} \end{gather*} Denote with $\mathcal{W}_-$ the half of the weight space consisting of odd weight-characters. Since $\mathcal{I}$ is dense in $U\times\mathcal{W}_-$, the values of $\mathscr{L}_p$ at $\mathcal{I}$ define the function \emph{uniquely} over $U\times\mathcal{W}_-$. With this uniqueness result we can safely denote $\mathscr{L}_p$ with $L^{\mathrm{imp}}_p(\Sym^2 F)$, a notation justified by the fact that this function interpolates a range of complex symmetric square imprimitive $L$-functions. Equation~\eqref{intro:interpolation2} in the first part of the main theorem is proved by combining the already known interpolation on $U\times\mathcal{W}_-$ and a functional equation for $L^{\mathrm{imp}}_p(\Sym^2 F)$. More specifically, the proof of the functional equation exchanging the two halves of the weight space is intertwined with the proof of the interpolation on $U\times\mathcal{W}_+$. The two results are intimately connected, and in Section~\ref{sec:functional-eq-interpolation-whole} we will prove both of them at once. As usual, it is expected that the $p$-adic $L$-function satisfies a functional equation, mirroring the two halves of the weight space.
That for $L^{\mathrm{imp}}_p(\Sym^2 F)$ comes from those of $L_p^{\mathrm{geom}}(F,F)$ (as stated in~\cite{benois.horte:on}) and $L_p(\psi)$. As stated in Theorem~\ref{intro:final3}, under some additional hypotheses we will prove that \begin{equation*} L^{\mathrm{imp}}_p(\Sym^2 F)(\kappa,s) = \epsilon_p^{[F,F,a]}(\kappa,\kappa,s) i^{-a_{\psi}}\tau(\psi^{-1})N^{s-\kappa-2} L^{\mathrm{imp}}_p(\Sym^2 F^*)(\kappa,2\kappa-s+3). \end{equation*} For the precise statement see Theorem~\ref{thm:functional-equation}. Thanks to this equation, we can prove the second set of interpolation formulæ for $L^{\mathrm{imp}}_p(\Sym^2 F)$ with a standard argument: let $j_0$ be odd with $k_0+1\leq j_0 \leq 2k_0+1$; then the above reads \begin{equation*} L^{\mathrm{imp}}_p(\Sym^2 F)(k_0,j_0+1) = (\natural) L^{\mathrm{imp}}_p(\Sym^2 F^*)(k_0,2k_0-j_0+2). \end{equation*} The $p$-adic $L$-value on the right hand side interpolates a complex symmetric square $L$-value, since $0\leq 2k_0-j_0+1 \leq k_0$. By equation~\eqref{intro:interpolation1}, it interpolates the value $L^{\mathrm{imp}}(\Sym^2 f_{k_0}^*,2k_0-j_0+2)$, because $F^*$ passes through a $p$-stabilisation of $f_{k_0}^*$ at $k_0$. To include the case $j_0=k_0+1$ we also need to assume $E(\Sym^2 f_{k_0}^*,k_0+1)\neq 0$. By reinterpreting the complex $L$-value as a value of the twisted $L^{\mathrm{imp}}(\Sym^2 f_{k_0}\otimes\psi^{-2})$, we get the desired interpolation~\eqref{intro:interpolation2} on a subset of $U\times\mathcal{W}_+$. Repeating the same density argument as before, it follows that the $p$-adic $L$-function is uniquely determined also on $U\times\mathcal{W}_+$ and then on the whole $U\times\mathcal{W}$. This finishes the proof of Theorem~\ref{intro:final1}. The factorisation of Theorem~\ref{intro:final2} follows from the definition of the $p$-adic $L$-function. Since $L^{\mathrm{imp}}_p(\Sym^2 F)$ is uniquely determined, no other function can fit the interpolation formula.
Turning the factorisation into a definition does not lose generality, as we then prove that the function is unique. This completes the proof of Theorems~\ref{intro:final1}, \ref{intro:final2} and~\ref{intro:final3}. \paragraph{Remark on parity} We now motivate the repeated appearance of the hypothesis on the parity of $j$. We would like to emphasise that this is not due to computational reasons, but reflects deeper structural phenomena. The strategy of the proof relies on finding a relation between $b_f$ and $\phi_{\psi}$, starting from the fact that they produce related complex (and then $p$-adic) $L$-values. Higher cyclotomic classes give values for the $L$-function associated to the representation $\wedge^2 \rho_{f,v}$. To construct classes to compare them with, the only natural choice is then to define them in the cohomology of the same representation. In particular, in Deligne and étale cohomology we have Beilinson-Flach classes that produce $L$-values via pairing. On one hand, $\phi_{\psi}$ produces values for the $L$-function of the $1$-dimensional representation $\wedge^2 \rho_{f,v}$, which coincides with $\wedge^2 M_{\etbar}(f)^*$. On the other hand, $\mathrm{BF}_{\mathcal{D}}^{[k,k,j]}$ and $\mathrm{BF}_{\et}^{[k,k,j]}$ have natural images $\mathrm{BF}_{\mathcal{D}}^{[f,f,j]}$ and $\mathrm{BF}_{\et}^{[f,f,j]}$ in the cohomology of (some twist of) the full $M_B(f\otimes f)^*$ and $M_{\etbar}(f\otimes f)^*$, given by applying edge maps and isotypical projections. There is a natural direct sum decomposition: \begin{equation*} M_{\bullet}(f\otimes f)^* \simeq \Sym^2 M_{\bullet}(f)^* \oplus \wedge^2 M_{\bullet}(f)^* \end{equation*} for $\bullet\in\{B,\etbar\}$. These two submodules do not see each other; in particular, $\mathrm{BF}_{\mathcal{D}}^{[f,f,j]}$ and $\mathrm{BF}_{\et}^{[f,f,j]}$ belong to the cohomology of either one according to mutually exclusive conditions on $j$.
As explained, if we want to produce $L$-values comparable with those produced by $\phi_{\psi}$, we need to construct classes in the $\wedge^2$-component. We must then ensure that $\mathrm{BF}^{[k,k,j]}_{\mathcal{D}}$ and $\mathrm{BF}^{[k,k,j]}_{\et}$ are in the cohomology of (some twist of) the submodule $\wedge^2 M_{\bullet}(f)^*$. This is controlled by the parity of $j$. The condition that $j$ is even ensures that these elements are in the cohomology of the $\wedge^2$-component, and then that they (and in turn $b_f$) produce $L$-values related to the correct $1$-dimensional representation. The opposite parity of $j$ would make the classes lie in the cohomology of $\Sym^2 M_{\bullet}(f)^*$, and the machinery would produce $L$-values related to the $\Sym^2$-component, hence incompatible with our goal. \section{\texorpdfstring{$L$}{L}-functions and factorisations} \subsection{Notation and preliminaries} In this subsection we fix some standard notation and terminology. Proofs of the facts stated in this subsection can be found in~\cite{diamond.shurman:first}. Denote with $\mathcal{H}$ the upper complex half plane, on which $\GL_2^+(\numberset{R})$ acts on the left via Möbius transformations.
Let $N\in\numberset{N}_{\geq 1}$; we denote with $\Gamma(N)$, $\Gamma_1(N)$ and $\Gamma_0(N)$ the standard congruence subgroups inside $\mathrm{SL}_2(\numberset{Z})$: \begin{align*} \Gamma(N) &= \biggl\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \mathrm{SL}_2(\numberset{Z}) \biggm| \begin{pmatrix} a & b \\ c & d \end{pmatrix} \equiv \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \mod N \biggr\}, \\ \Gamma_1(N) &= \biggl\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \mathrm{SL}_2(\numberset{Z}) \biggm| \begin{pmatrix} a & b \\ c & d \end{pmatrix} \equiv \begin{pmatrix} * & * \\ 0 & 1 \end{pmatrix} \mod N \biggr\}, \\ \Gamma_0(N) &= \biggl\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \mathrm{SL}_2(\numberset{Z}) \biggm| \begin{pmatrix} a & b \\ c & d \end{pmatrix} \equiv \begin{pmatrix} * & * \\ 0 & * \end{pmatrix} \mod N \biggr\}. \end{align*} Here $*\in (\numberset{Z}/N\numberset{Z})$ is an arbitrary element. When $\Gamma$ is a congruence subgroup of one of these kinds, we denote with $M_{r_0}(\Gamma)$ and $S_{r_0}(\Gamma)$ the spaces of modular and cusp forms of level $\Gamma$ and weight $r_0\geq 2$. When $\Gamma = \Gamma_1(N)$ we use the shorthands $M_{r_0}(N)$ and $S_{r_0}(N)$. Let $\chi \colon (\numberset{Z}/M\numberset{Z})^{\times} \to \C^{\times}$ be a Dirichlet character modulo $M | N$; we denote with $M_{r_0}(N,\chi)$ and $S_{r_0}(N,\chi)$ the spaces of modular and cusp forms of level $\Gamma_1(N)$, weight $r_0$ and which transform with character $\chi$ under the action of the diamond operators modulo $N$, with action induced by $\Gamma_0(N) / \Gamma_1(N) \simeq (\numberset{Z}/N\numberset{Z})^{\times}$. For every $m\in \numberset{N}_{\geq 1}$, the Hecke operator at $m$ is denoted $T_m$. For every $d\in (\numberset{Z}/N\numberset{Z})^{\times}$, the diamond operator associated to $d$ is denoted $\langle d \rangle$. The algebra generated over $\numberset{Q}$ by all Hecke and diamond operators is denoted $\mathbb{T}$ and called the Hecke algebra.
Hecke and diamond operators commute, and two Hecke operators $T_m$ and $T_{m'}$ with $m$ and $m'$ coprime commute. Since the Hecke operators $T_m$ at composite $m$ are generated by those at prime $m$, $\mathbb{T}$ is generated as a $\numberset{Q}$-algebra by the Hecke operators at primes and the diamond operators. The standard Petersson inner product between modular and cusp forms of the same weight and level will be denoted by \begin{align*} \langle \cdot , \cdot \rangle \colon M_{r_0}(N) \times S_{r_0}(N) &\to \C \\ (h_1,h_2) &\mapsto \langle h_1,h_2 \rangle = \int_{\Gamma_1(N) \backslash \mathcal{H}} h_1(z)\overline{h_2(z)}\, y^{r_0}\, \frac{\mathrm{d}x \, \mathrm{d}y}{y^2} \end{align*} where $z=x+iy$. Let $h \in S_{k+2}(N,\psi)$ be a cusp form of weight $r=k+2$, level $N$ and character $\psi$ with $q$-expansion \begin{equation*} h(q) = \sum_{n \geq 1} a_n(h)q^n. \end{equation*} Denote by $K_h$ the field extension of $\numberset{Q}$ generated by the Fourier coefficients of $h$, explicitly $K_h = \numberset{Q}(\{a_n(h)\}_{n\geq 1})$. The field $K_h$ is a finite extension of $\numberset{Q}$. The Fourier coefficients of $h$ further satisfy the relation $\overline{a_n(h)} = \psi^{-1}(n)a_n(h)$ when $n$ is coprime to the conductor of $\psi$. Define a modular form $h^*$ by $h^*(z) = \overline{h(-\overline{z})}$ for every $z\in\mathcal{H}$. This defines a modular form $h^* \in S_{k+2}(N^*,\psi^{-1})$, where $N^*$ is a divisor of $NN_{\psi}^2$, with $N_{\psi}$ the conductor of $\psi$. The Fourier coefficients of $h^*$ satisfy $a_n(h^*) = \overline{a_n(h)}$, which also implies that its Hecke eigenvalues are conjugates of those of $h$. Fix once and for all an odd prime $p\in\numberset{N}$. Fix also an embedding $\overline{\numberset{Q}} \hookrightarrow \overline{\numberset{Q}_p}$, which amounts to compatibly choosing a prime ideal above $p$ in every finite extension. We also normalise the $p$-adic valuation on $\overline{\numberset{Q}_p}$ so that $p$ has $p$-adic valuation $1$.
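The relation $a_n(h^*) = \overline{a_n(h)}$ can be checked directly on $q$-expansions:

```latex
% Writing h(z) = \sum_{n\geq 1} a_n(h) e^{2\pi i n z} and using
% \overline{e^{2\pi i n(-\overline{z})}} = e^{\overline{-2\pi i n\overline{z}}} = e^{2\pi i n z}:
\begin{equation*}
  h^*(z) = \overline{h(-\overline{z})}
         = \sum_{n \geq 1} \overline{a_n(h)}\, e^{2\pi i n z},
  \qquad\text{so}\qquad a_n(h^*) = \overline{a_n(h)}.
\end{equation*}
```

In particular $h^*$ is again holomorphic on $\mathcal{H}$, since $z \mapsto -\overline{z}$ preserves $\mathcal{H}$.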
Suppose that $h$ is a normalised eigenform. We say that $h$ is \emph{ordinary} at $p$ if $v_p(a_p(h))=0$, and that it is \emph{supersingular} at $p$ if $v_p(a_p(h))>0$. We call $v_p(a_p(h))$ the \emph{slope} of $h$ at $p$. For every prime $l$ not dividing $N$, the Satake polynomial of $h$ at $l$ is the quadratic polynomial \begin{equation*} X^2 - a_l(h)X + \psi(l)l^{k+1} \in K_h[X]. \end{equation*} The two roots $\alpha_{h,l}$, $\beta_{h,l}$ of this polynomial are the Satake parameters of $h$ at $l$. By the Ramanujan-Petersson conjecture (a theorem of Deligne), $\alpha_{h,l}$ and $\beta_{h,l}$ are either equal or complex conjugates, in particular they always satisfy $|\alpha_{h,l}| = |\beta_{h,l}| = l^{\frac{k+1}{2}}$. When $l \centernot\mid N$, the $l$-\emph{stabilisations} of $h$ are defined as \begin{equation*} h_{\alpha_{h,l}}(z) = h(z) - \beta_{h,l}h(lz), \qquad h_{\beta_{h,l}}(z) = h(z) - \alpha_{h,l}h(lz). \end{equation*} These are forms of level $\Gamma_1(N) \cap \Gamma_0(l)$, with the same Hecke eigenvalues as $h$ away from $l$. The eigenvalues of $h_{\alpha_{h,l}}$ and $h_{\beta_{h,l}}$ under the Hecke operator at $l$ are $\alpha_{h,l}$ and $\beta_{h,l}$ respectively. For every prime $l$, we denote with $\mathrm{Frob}_l$ the arithmetic Frobenius, i.e.\ a preimage in the absolute decomposition group of $l$ of the element defined by $x \mapsto x^l$ in the absolute Galois group of $\numberset{F}_l$. Thanks to our previous choices, this is a well-defined element up to inertia.
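Returning to the stabilisations, the eigenvalue claim can be verified on $q$-expansions; a sketch, writing $U_l$ for the Hecke operator at $l$ on level $\Gamma_1(N)\cap\Gamma_0(l)$, and with the convention $a_{n/l}(h)=0$ when $l\nmid n$:

```latex
% On q-expansions, a_n(h_{\alpha_{h,l}}) = a_n(h) - \beta_{h,l} a_{n/l}(h),
% and U_l acts by a_n(U_l h') = a_{ln}(h'). Using the eigenform recursion
% a_{ln}(h) = a_l(h) a_n(h) - \psi(l) l^{k+1} a_{n/l}(h) together with
% \alpha_{h,l} + \beta_{h,l} = a_l(h) and \alpha_{h,l}\beta_{h,l} = \psi(l)l^{k+1}:
\begin{align*}
  a_n(U_l\, h_{\alpha_{h,l}})
  &= a_{ln}(h) - \beta_{h,l}\, a_n(h) \\
  &= \bigl(a_l(h) - \beta_{h,l}\bigr) a_n(h) - \alpha_{h,l}\beta_{h,l}\, a_{n/l}(h) \\
  &= \alpha_{h,l}\bigl(a_n(h) - \beta_{h,l}\, a_{n/l}(h)\bigr)
   = \alpha_{h,l}\, a_n(h_{\alpha_{h,l}}).
\end{align*}
```

The computation for $h_{\beta_{h,l}}$ is identical with the roles of the two roots exchanged.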
\paragraph{Dirichlet and Galois characters} The cyclotomic character $\epsilon_{cyc}$ is defined for every integer $M\in\numberset{N}_{\geq 1}$ as the character identifying the Galois group $\gal{\numberset{Q}(\mu_M)}{\numberset{Q}}$ with $(\numberset{Z}/M\numberset{Z})^{\times}$: \begin{align*} \epsilon_{cyc} \colon \gal{\numberset{Q}(\mu_M)}{\numberset{Q}} &\xrightarrow{\simeq} (\numberset{Z}/M\numberset{Z})^{\times} \\ (\sigma \colon \xi \mapsto \xi^t) &\mapsto t \end{align*} We agree with the convention under which the cyclotomic character $\epsilon_{cyc}$ has Hodge-Tate weight $+1$. We identify Dirichlet characters with Galois characters, by composition with the inverse of the cyclotomic character modulo the conductor: more precisely, if $\chi$ is a Dirichlet character modulo $M$, then we define a Galois character $\chi_{Gal}$ as the composition $\chi \circ \epsilon_{cyc}^{-1}$: \begin{equation*} \begin{tikzcd} \gal{\numberset{Q}(\mu_M)}{\numberset{Q}} \arrow["\simeq"',"\epsilon_{cyc}^{-1}"]{r} \arrow["\chi_{Gal}"']{dr} & (\numberset{Z}/M\numberset{Z})^{\times} \arrow["\chi"]{d} \\ & \C^{\times} \end{tikzcd} \end{equation*} In particular, $\chi_{Gal}(\mathrm{Frob}_l) = \chi(l)^{-1} = \chi^{-1}(l)$ for every prime $l$ not dividing $M$. \subsection{Primitive and imprimitive \texorpdfstring{$L$}{L}-functions} \label{subsec:primitive-imprimitive-functions} Let $k, k' \geq 0$ be integers. Let $f \in S_{k+2}(N,\psi)$ and $g \in S_{k'+2}(N',\psi')$ be normalised cuspidal eigenforms of weights $r=k+2$ and $r'=k'+2$ respectively, with $q$-expansions \begin{equation*} f(q) = \sum_{n \geq 1} a_n(f)q^n, \quad g(q) = \sum_{n \geq 1} a_n(g)q^n. \end{equation*} Without loss of generality we assume $r \geq r'$. We assume that the fixed prime $p$ is coprime to $NN'$. We denote by $K_f$ and $K_{f,g}$ the number fields generated by the coefficients of $f$, and of $f$ and $g$, respectively, and by $N_{K_f/\numberset{Q}}$ and $N_{K_{f,g}/\numberset{Q}}$ the respective norms.
Later on we will restrict to the case of $f$ and $g$ non-ordinary at a fixed rational prime, but the first four sections work in full generality, so we do not make this assumption yet. From Section~\ref{sec:cohomology-kuga-sato} onwards we will also assume $N\geq 5$. Associated to the modular form $f$ there is a collection of Galois representations as constructed in~\cite{deligne:formes}: \begin{equation*} \rho_{f,v} \colon G_{\numberset{Q}} \to \GL(V_{f,v}) \end{equation*} where $V_{f,v}$ is a $2$-dimensional vector space over $(K_f)_v$, and $v$ is any place of $K_f$. When $l\centernot\mid NN_{K_f/\numberset{Q}}(v)$, $\rho_{f,v}$ is unramified at $l$ with local factor: \begin{equation*} \det(1 - X\mathrm{Frob}_l^{-1} \mid V_{f,v}) = 1 - a_l(f)X + l^{k+1}\psi(l)X^2. \end{equation*} We now introduce the two main complex $L$-functions that we will study, starting from the Rankin-Selberg $L$-function. The primitive Rankin-Selberg $L$-function has been studied extensively in the literature, and the book~\cite{jacquet:automorphic} is the most complete account. For the imprimitive function we refer to Shimura's article~\cite{shimura:on}, which draws from his earlier work~\cite{shimura:special1}. \begin{definition} The Rankin-Selberg Galois representation (associated to $f$ and $g$) is the $4$-dimensional tensor product representation: \begin{equation*} \rho_{f,v} \otimes \rho_{g,v} \colon G_{\numberset{Q}} \to \GL(V_{f,v} \otimes V_{g,v}).
\end{equation*} Correspondingly, the \emph{primitive} Rankin-Selberg $L$-function associated to $f$ and $g$ is the $L$-function of the representation $\rho_{f,v}\otimes \rho_{g,v}$: \begin{gather*} L(\rho_{f,v}\otimes \rho_{g,v},s) = \prod_{l \; \text{prime}} P_l(l^{-s})^{-1}, \\ P_l(X) = \begin{cases} \det\bigl(1 - X\mathrm{Frob}_l^{-1} \mid (V_{f,v}\otimes V_{g,v})^{I_l}\bigr) &\text{if } l\centernot\mid N_{K_{f,g}/\numberset{Q}}(v) \\ \det\bigl(1 - X\phi \mid ((V_{f,v}\otimes V_{g,v})\otimes B_{\mathrm{crys}})^{G_{\numberset{Q}_l}}\bigr) &\text{if } l\mid N_{K_{f,g}/\numberset{Q}}(v). \end{cases} \end{gather*} We have denoted with $\phi$ the crystalline Frobenius. \end{definition} At good primes $l \centernot\mid NN'$ the local factor has the following explicit form: \begin{equation*} P_l(X) = (1-\alpha_{f,l}\alpha_{g,l}X)(1-\alpha_{f,l}\beta_{g,l}X)(1-\beta_{f,l}\alpha_{g,l}X)(1-\beta_{f,l}\beta_{g,l}X) \end{equation*} where $\alpha_{\bullet,l}$ and $\beta_{\bullet,l}$ are the local Satake parameters at $l$, satisfying the usual relation \begin{equation*} 1 - a_l(f)X + \psi(l)l^{k+1}X^2 = (1-\alpha_{f,l}X)(1-\beta_{f,l}X). \end{equation*} The function $L(f\otimes g)$ can also be completed to an $L$-function which admits a meromorphic continuation to the whole complex plane, and a functional equation. For the analytical properties of the primitive Rankin-Selberg $L$-function the reader can consult~\cite[§19]{jacquet:automorphic}. \begin{remark} The primitive local factors $P_l$ are independent of $v$, because they coincide with the automorphic local factors of the representations $\pi_f$ and $\pi_g$ attached to $f$ and $g$, as proved by Jacquet~\cite{jacquet:automorphic}. Thanks to this result we can denote $L(f\otimes g, s) = L(\rho_{f,v}\otimes \rho_{g,v}, s)$ without risk of ambiguity. 
\end{remark} \begin{definition} The \emph{imprimitive} Rankin-Selberg $L$-function associated to $f$ and $g$ is defined by: \begin{equation*} L(f,g,s) = L_{(N N')}(\psi\psi',2s-2-k-k')\sum_{n\geq 1} \frac{a_n(f)a_n(g)}{n^s} \qquad \text{for $\Re(s) > r+r'-1$} \end{equation*} and then continued meromorphically to the whole complex plane. The notation $L_{(N N')}$ stands for an $L$-function with the local factors at primes dividing $NN'$ removed. \end{definition} The $L$-function $L(f,g,s)$ admits a meromorphic continuation to the whole complex plane, with a simple pole at $s=r$ if and only if $g=f^*$ and $r'=r$. The critical values of $L(f,g,s)$ are those in the range $r' \leq s \leq r-1$. In the region $\Re(s) > r+r'-1$, $L(f,g,s)$ admits the following Euler product: \begin{gather*} L(f,g,s) = \prod_{l \; \text{prime}} P_l^{\mathrm{imp}}(l^{-s})^{-1}, \\ P_l^{\mathrm{imp}}(X) = (1-\alpha_{f,l}\alpha_{g,l}X)(1-\alpha_{f,l}\beta_{g,l}X)(1-\beta_{f,l}\alpha_{g,l}X)(1-\beta_{f,l}\beta_{g,l}X). \end{gather*} We provisionally use the convention that at bad primes $(\alpha_{f,l},\beta_{f,l}) = (a_l(f),0)$. In addition, the imprimitive function admits an integral representation. By using Rankin's method, one can write $L(f,g,s)$ in terms of the Petersson inner product of $f^*$ and the product of $g$ with a real analytic Eisenstein series of two complex variables, of weight $r-r'$. The real analytic Eisenstein series enjoys some properties from which those of $L(f,g)$ follow, most notably the meromorphic continuation, the location of the poles and the functional equation. The statement and proof of the integral representation formula are in~\cite[§2]{shimura:special1}. It is clear that at the good primes $P_l(X) = P_l^{\mathrm{imp}}(X)$, but at the bad primes dividing $NN'$ the local factors can differ.
In particular, the divisibility relation $P_l^{\mathrm{imp}}(X) \mid P_l(X)$ holds, as may be seen by choosing $v$ such that $l\centernot\mid N_{K_{f,g}/\numberset{Q}}(v)$ in the definition of the primitive local factor. It is possible to make such a choice because the primitive local factors are independent of $v$. Since there are only finitely many bad primes we obtain that the two $L$-functions differ by an error factor, $L(f,g,s) = L(f\otimes g,s)\cdot P(s)$. The (complex) $L$-function of the symmetric square, which we now introduce, fits in a similar picture. Our main references are~\cite{shimura:on1}, which studies the imprimitive one and its analytic properties, and~\cite{schmidt:p-adic}, which studies the primitive one in great detail and also gives an account of the comparison between the two. When $f=g$, the well-known vector space decomposition $V_{f,v}^{\otimes 2} \simeq \wedge^2 V_{f,v} \oplus \Sym^2 V_{f,v}$ is also a decomposition of Galois modules; we thus obtain \begin{equation} \label{eq:decomposition-rho} \rho_{f,v} \otimes \rho_{f,v} \simeq \Sym^2(\rho_{f,v}) \oplus \det \rho_{f,v}. \end{equation} \begin{definition} The \emph{primitive} symmetric square $L$-function associated to $f$ is the $L$-function of the representation $\Sym^2 \rho_{f,v}$: \begin{gather*} L(\Sym^2 \rho_{f,v},s) = \prod_{l \; \text{prime}} Q_l(l^{-s})^{-1}, \\ Q_l(X) = \begin{cases} \det\bigl(1 - X\mathrm{Frob}_l^{-1} \mid (\Sym^2 V_{f,v})^{I_l}\bigr) &\text{if } l\centernot\mid N_{K_f/\numberset{Q}}(v) \\ \det\bigl(1 - X\phi \mid (\Sym^2 V_{f,v}\otimes B_{\mathrm{crys}})^{G_{\numberset{Q}_l}}\bigr) &\text{if } l\mid N_{K_f/\numberset{Q}}(v). \end{cases} \end{gather*} \end{definition} At good primes $l \centernot\mid N$ the local factor has the following explicit form: \begin{equation*} Q_l(X) = (1-\alpha_{f,l}^2 X)(1-\alpha_{f,l}\beta_{f,l}X)(1-\beta_{f,l}^2 X).
\end{equation*} Analogously to the Rankin-Selberg local factors, the primitive local factors $Q_l$ are independent of $v$ too. We then denote $L(\Sym^2 f, s) = L(\Sym^2 \rho_{f,v},s)$ without ambiguity. \begin{definition} The \emph{imprimitive} symmetric square $L$-function associated to $f$ is defined by: \begin{equation*} L^{\mathrm{imp}}(\Sym^2 f,s) = L_{(N)}(\psi^2, 2s-2k-2)\sum_{n\geq 1} \frac{a_{n^2}(f)}{n^s} \qquad \text{for $\Re(s) > 2r-1$} \end{equation*} and then continued analytically to the whole complex plane. \end{definition} The $L$-function $L(\Sym^2 f,s)$ admits an analytic continuation to the whole complex plane. The critical values of $L(\Sym^2 f,s)$ are the odd integers in the range $1 \leq s \leq r-1$ and the even integers in the range $r \leq s \leq 2r-2$. In the region $\Re(s) > 2r-1$, $L^{\mathrm{imp}}(\Sym^2 f,s)$ admits the following Euler product: \begin{gather*} L^{\mathrm{imp}}(\Sym^2 f,s) = \prod_{l \; \text{prime}} Q_l^{\mathrm{imp}}(l^{-s})^{-1}, \\ Q_l^{\mathrm{imp}}(X) = (1-\alpha_{f,l}^2 X)(1-\alpha_{f,l}\beta_{f,l}X)(1-\beta_{f,l}^2 X). \end{gather*} Again we temporarily agree that at bad primes $(\alpha_{f,l},\beta_{f,l}) = (a_l(f),0)$. As in the previous case, at the good primes $Q_l(X) = Q_l^{\mathrm{imp}}(X)$, but at the bad primes dividing $N$ the local factors can differ. We again have a divisibility relation $Q_l^{\mathrm{imp}}(X) \mid Q_l(X)$, which can be seen by choosing $v$ such that $l\centernot\mid N_{K_f/\numberset{Q}}(v)$ in the definition of the primitive local factor. It is possible to make such a choice because the primitive local factors are independent of $v$. Since there are only finitely many bad primes the two $L$-functions differ again by an error factor, $L^{\mathrm{imp}}(\Sym^2 f,s) = L(\Sym^2 f,s)\cdot Q(s)$. \subsection{Primitive and imprimitive factorisation} We keep the hypothesis $f=g$.
Recall the direct sum decomposition: \begin{equation*} \rho_{f,v} \otimes \rho_{f,v} \simeq \Sym^2(\rho_{f,v}) \oplus \det \rho_{f,v}. \end{equation*} By Artin's formalism, the corresponding primitive $L$-functions satisfy: \begin{equation} \label{eq:complex-factorization-l} L(f \otimes f,s) = L(\Sym^2 f,s)L(\det \rho_{f,v},s). \end{equation} The form of the local factor of $\rho_{f,v}$ shows that $\det \rho_{f,v}$ is a Galois character mapping $\mathrm{Frob}_l^{-1}$ to $\psi(l)l^{k+1}$. By the convention explained at the beginning of this section, $\det \rho_{f,v} = (\psi\epsilon_{cyc}^{k+1})_{Gal}$ as the two characters agree at all Frobenii but finitely many. The above Dirichlet $L$-function then becomes \begin{equation*} L(\det \rho_{f,v},s) = L((\psi\epsilon_{cyc}^{k+1})_{Gal},s) = L(\psi,s-k-1) \end{equation*} where we have used the fact that, under our convention, the $L$-function of the representation attached to a Galois character is the $L$-function of the corresponding Dirichlet character. The analogous factorisation holds for the imprimitive $L$-functions. This is useful as all the results about the Beilinson-Flach Euler system involve the imprimitive Rankin-Selberg $L$-function. However, the imprimitive factorisation does not follow directly from Artin's formalism. \begin{proposition} The imprimitive $L$-functions satisfy: \begin{equation} \label{eq:imprimitive-factorization-l} \begin{split} L(f,f,s) &= L^{\mathrm{imp}}(\Sym^2 f,s)L_{(N)}(\psi,s-k-1) \\ &= L^{\mathrm{imp}}(\Sym^2 f,s)L(\psi,s-k-1)\prod_{\substack{l \mid N \\ l \centernot\mid N_{\psi}}} \Bigl(1 - \frac{\psi(l)}{l^{s-k-1}} \Bigr). \end{split} \end{equation} \end{proposition} \begin{proof} We compare the Euler factors at each prime $l$.
If $l\centernot\mid N$ then: \begin{align*} P_l^{\mathrm{imp}}(l^{-s}) &= (1-\alpha_{f,l}\alpha_{f,l}l^{-s})(1-\alpha_{f,l}\beta_{f,l}l^{-s})(1-\beta_{f,l}\alpha_{f,l}l^{-s})(1-\beta_{f,l}\beta_{f,l}l^{-s}), \\ Q_l^{\mathrm{imp}}(l^{-s}) &= (1-\alpha_{f,l}^2 l^{-s})(1-\alpha_{f,l}\beta_{f,l}l^{-s})(1-\beta_{f,l}^2 l^{-s}). \end{align*} The missing factor in $Q_l^{\mathrm{imp}}$ is then $1 - \alpha_{f,l}\beta_{f,l}l^{-s}$. Since $\alpha_{f,l}\beta_{f,l} = \psi(l)l^{k+1}$, this is exactly the local factor at $l$ of the Dirichlet $L$-function. If $l\mid N$ then: \begin{align*} P_l^{\mathrm{imp}}(l^{-s}) &= 1-a_l(f)^2 l^{-s}, \\ Q_l^{\mathrm{imp}}(l^{-s}) &= 1-a_l(f)^2 l^{-s}. \end{align*} The two local factors already coincide, and indeed the local factor at $l$ of $L_{(N)}(\psi)$ is $1$ by definition. \end{proof} \begin{corollary} \label{cor:defect-terms} The functions $P$ and $Q$ are related by: \begin{equation*} P(s) = Q(s)\prod_{\substack{l \mid N \\ l \centernot\mid N_{\psi}}} \Bigl(1 - \frac{\psi(l)}{l^{s-k-1}} \Bigr). \end{equation*} \end{corollary} \begin{proof} By equation~\eqref{eq:imprimitive-factorization-l} and the relation between primitive and imprimitive $L$-functions we have \begin{equation*} P(s)\cdot L(f\otimes f,s) = Q(s)\cdot L(\Sym^2 f,s)L(\psi,s-k-1)\prod_{\substack{l \mid N \\ l \centernot\mid N_{\psi}}} \Bigl(1 - \frac{\psi(l)}{l^{s-k-1}} \Bigr) \end{equation*} and we conclude thanks to~\eqref{eq:complex-factorization-l}. \end{proof} Let $a_{\psi} \in \{0,1\}$ be equal to $0$ if $\psi$ is even and to $1$ if it is odd. The function $L(\psi,s)$ has a zero of order one at $s=-n$ for every $n\in\numberset{N}_{\geq 1}$ of the same parity as $a_{\psi}$. The same holds for $L_{(N)}(\psi,s)$ since Euler factors have no poles. 
Every zero yields a relation between leading terms: first we differentiate the imprimitive factorisation to get \begin{gather} L'(f,f,s) = (L^{\mathrm{imp}})'(\Sym^2 f,s)L_{(N)}(\psi,s-k-1) + L^{\mathrm{imp}}(\Sym^2 f,s)L_{(N)}'(\psi,s-k-1) \notag \intertext{and then we evaluate at $s=j+1$, for $j\in\numberset{N}$, $j\leq k$ such that $j-k\equiv a_{\psi}$ modulo $2$} L'(f,f,j+1) = L^{\mathrm{imp}}(\Sym^2 f,j+1)L_{(N)}'(\psi,j-k) \label{eq:leading-terms}. \end{gather} Notice that we must have $a_{\psi} \equiv k$ modulo $2$ since $\Gamma_0(N)$ contains minus the identity matrix. Therefore the condition $j-k \equiv a_{\psi}$ modulo $2$ becomes just $j\equiv 0$ modulo $2$, i.e.\ $j$ even. The relation between the leading terms of the three $L$-functions will be the starting point to deduce the $p$-adic factorisation from the complex one. \subsection{Primitive and imprimitive \texorpdfstring{$p$}{p}-adic factorisation} \label{subsec:primitive-imprimitive-padic} As is the case for complex $L$-functions, $p$-adic $L$-functions also come in two versions: a primitive and an imprimitive one. In this subsection we explain this framework and why we work with the imprimitive convention. This subsection does not contain any result to be used later, so we will not lay out the full machinery of $p$-adic $L$-functions yet: that will be the subject of section~\ref{sec:padic-Lfunctions}. Suppose for the moment that we have defined $p$-adic $L$-functions: \begin{itemize} \item a primitive $L_p(f\otimes f)$ and an imprimitive $L_p(f,f)$ Rankin-Selberg function; \item a primitive $L_p(\Sym^2 f)$ and an imprimitive $L^{\mathrm{imp}}_p(\Sym^2 f)$ symmetric square function. \end{itemize} Recall that there are two versions of the complex factorisation, one primitive~\eqref{eq:complex-factorization-l} and one imprimitive~\eqref{eq:imprimitive-factorization-l}.
As explained in the last subsection, by multiplying or dividing by the defect factor $P(s)$ on both sides of the equations, we can pass from one to the other. This uses the known relation between $P(s)$ and $Q(s)$. The primitive and imprimitive factorisations are thus in fact equivalent. If we want to generalise them, we could try to prove a primitive $p$-adic factorisation of the same shape as~\eqref{eq:complex-factorization-l}: \begin{equation} \label{eq:p-adic-primitive} L_p(f\otimes f,s) = L_p(\Sym^2 f,s)L_p(\psi,s-k-1) \end{equation} or an imprimitive one of the same shape as~\eqref{eq:imprimitive-factorization-l}: \begin{equation} \label{eq:p-adic-imprimitive} L_p(f,f,s) = L^{\mathrm{imp}}_p(\Sym^2 f,s)L_{p,(N)}(\psi,s-k-1). \end{equation} As is the case for the complex functions, one would expect the primitive and imprimitive $p$-adic functions to differ by some defect factor. If that held, it would also imply that the two factorisations are equivalent, up to the multiplication by those defect factors. It would then suffice to prove either $p$-adic factorisation to automatically deduce the other one. We now briefly explain when this is the case. If $r>r'$, the imprimitive Rankin-Selberg $p$-adic $L$-function $L_p(f,g)$ is well-defined by the work of Shimura and Hida, and satisfies interpolation formulæ: \begin{equation*} L_p(f,g,s) = (\triangle)\frac{L(f,g,s)}{\mho} \end{equation*} for $s\in\{r'+1,\ldots,r-1\}$ and some period $\mho$. The primitive $p$-adic $L$-function $L_p(f\otimes g)$ is then defined by dividing by the imprimitive factor, that is \begin{equation*} L_p(f\otimes g,s) = \frac{L_p(f,g,s)}{P(s)}. \end{equation*} Both $L_p(f\otimes g)$ and $L_p(f,g)$ are analytic functions. This uses the fact that $P(s)$ is a product of polynomials in $l^{-s}$ for finitely many primes $l\neq p$, so it interpolates $p$-adically. Since $P(s)$ is also the defect between the complex functions, $L_p(f\otimes g)$ and $L(f\otimes g)$ satisfy the previous formula too. 
In particular, the primitive (resp.\ imprimitive) $p$-adic $L$-function interpolates the primitive (resp.\ imprimitive) complex $L$-function. If $v_p(a_p(f)) < \frac{k+1}{2}$, the imprimitive symmetric square $p$-adic $L$-function $L^{\mathrm{imp}}_p(\Sym^2 f)$ is well-defined by the work of Schmidt, and satisfies \begin{equation*} L^{\mathrm{imp}}_p(\Sym^2 f,s) = (\tilde{\triangle})\frac{L^{\mathrm{imp}}(\Sym^2 f,s)}{\tilde{\mho}} \end{equation*} for $s\in\{r'+1,\ldots,r-1\}$ and some period $\tilde{\mho}$. The primitive $p$-adic $L$-function $L_p(\Sym^2 f)$ is then defined by dividing by the imprimitive factor, that is \begin{equation*} L_p(\Sym^2 f,s) = \frac{L^{\mathrm{imp}}_p(\Sym^2 f,s)}{Q(s)}. \end{equation*} Both $L^{\mathrm{imp}}_p(\Sym^2 f)$ and $L_p(\Sym^2 f)$ are analytic functions (thanks to results by Schmidt, Hida and Dąbrowski-Delbourgo). This uses the fact that $Q(s)$ is a product of polynomials in $l^{-s}$ for finitely many primes $l\neq p$, so it interpolates $p$-adically. Since $Q(s)$ is also the defect between the complex functions, $L^{\mathrm{imp}}_p(\Sym^2 f)$ and $L^{\mathrm{imp}}(\Sym^2 f)$ fulfil the previous formula too. In particular, the primitive (resp.\ imprimitive) $p$-adic $L$-function interpolates the primitive (resp.\ imprimitive) complex $L$-function. The above definitions have two shortcomings. Firstly, we would need the $p$-adic Rankin-Selberg $L$-functions for $f=g$, but these are not defined: as we will explain in section~\ref{sec:padic-Lfunctions}, when the weights of the two forms coincide the construction fails. Therefore, even though the defects between the primitive and imprimitive functions are preserved (in particular, they are still linked by the relation~\ref{cor:defect-terms}), the above is not enough to tackle $p$-adic factorisations. Secondly, when the slope is arbitrary there is no construction of the functions $L_p(\Sym^2 f)$ and $L^{\mathrm{imp}}_p(\Sym^2 f)$. 
The latter problem is a distinctive feature of the supersingular case. Roughly speaking, the problem with the Rankin-Selberg function is tackled in the following way: let $F$ be a family of modular forms passing through a $p$-stabilisation of $f$. Then there exists a $3$-variable $p$-adic $L$-function $L_p(F,F)$ which interpolates special values of complex imprimitive Rankin-Selberg $L$-functions. This is an ``imprimitive $p$-adic Rankin-Selberg function for the family $F$'', as it interpolates \emph{imprimitive} functions. A $p$-adic function attached to $f$ can then be defined as \begin{equation*} s\mapsto L_p(f,f,s) = L_p(F,F)(k,k,s). \end{equation*} This is a $1$-variable $p$-adic $L$-function. Although it does not satisfy any interpolation formula, we regard it as the imprimitive $p$-adic $L$-function, since it comes from $L_p(F,F)$. Following the above ideas, a primitive $p$-adic $L$-function can be defined as \begin{equation*} L_p(f\otimes f,s) = L_p(f,f,s)P(s)^{-1}. \end{equation*} Again, $L_p(f\otimes f)$ does not interpolate any complex $L$-function, but we regard it as the primitive $L$-function since it stands in the same relation to $L_p(f,f)$ as $L(f\otimes f)$ to $L(f,f)$. In particular, the same defect factor linking the two versions of the complex $L$-functions also gives the link between the $p$-adic ones. Assuming the small slope hypothesis, the same holds for the functions $L_p(\Sym^2 f)$ and $L^{\mathrm{imp}}_p(\Sym^2 f)$. This is especially important thanks to Corollary~\ref{cor:defect-terms}, which gives an explicit relation between $P(s)$ and $Q(s)$.
In particular, equations~\eqref{eq:p-adic-primitive} and~\eqref{eq:p-adic-imprimitive} now become equivalent, because the defect factors are the same on the two sides of the equality: \begin{align*} &L_p(f,f,s) = L^{\mathrm{imp}}_p(\Sym^2 f,s)L_{p,(N)}(\psi,s-k-1) \\ \iff &L_p(f\otimes f,s)P(s) = L_p(\Sym^2 f,s)Q(s)L_p(\psi,s-k-1)\smashoperator{\prod_{l \mid N, \; l\centernot\mid N_{\psi}}} \Bigl(1 - \frac{\psi(l)}{l^{s-k-1}}\Bigr) \\ \iff &L_p(f\otimes f,s) = L_p(\Sym^2 f,s)L_p(\psi,s-k-1). \end{align*} The two versions of the $p$-adic factorisation are then equivalent up to multiplying by the defect factor, as is the case for the complex factorisation. Under the hypothesis that $f$ is ordinary, the above workaround defines all the missing functions, because the small slope hypothesis holds. Assuming ordinarity, Dasgupta proved the primitive $p$-adic factorisation~\eqref{eq:p-adic-primitive}. Thanks to the above argument this is equivalent to proving the imprimitive one~\eqref{eq:p-adic-imprimitive}. In this paper we work in the supersingular case, and in particular we never assume the small slope hypothesis, so as not to impose any additional restriction and to work in greater generality. We will then construct a $p$-adic symmetric square $L$-function, and use that to prove the $p$-adic factorisation formula. We will actually prove a more general version of the factorisation formula, involving $p$-adic $L$-functions for the family $F$. In that case, one is forced to work in the imprimitive setting: at the time of writing there is no primitive analogue of $L_p(F,F)$, as it is not known if the defect factor $P(s)$ interpolates $p$-adically in the weights. We will thus prove the imprimitive version of the $p$-adic factorisation for families, in the supersingular case.
More precisely, we prove ``imprimitive versions'' (Theorems~\ref{thm:final-int} and~\ref{thm:final-fact}) of the main Theorems~\ref{intro:final1} and~\ref{intro:final2}, i.e.\ involving only imprimitive $L$-functions. This is the only possible choice for $p$-adic $L$-functions for families of forms. Regarding the specialisations to single forms, the above discussion shows that proving one version of the factorisation and of the interpolation is equivalent to proving both of them. Indeed, we can pass from one version to the other with the above technique. The primitive versions then follow from the imprimitive ones, so our treatment works in full generality. \section{Homological algebra} In this section we introduce the main homological constructions that we will use later in the thesis. We will use several cohomology theories throughout the thesis, among which motivic cohomology in the formulation due to Beilinson. Let $X$ be a regular scheme, and $K_{\bullet}(X)$ be the $K$-theory of $X$. For every integer $n\in\numberset{N}$, denote by $\mathrm{gr}^{\gamma}_n K_{\bullet}(X)$ the $n$-th graded piece of the $\gamma$-filtration of $K_{\bullet}(X)$, that is the eigenspace where the Adams operator $\psi^l$ acts as $l^n$ for every $l\in\numberset{N}$. Since the Adams operators all commute with each other, it follows that $K_{\bullet}(X) = \bigoplus_n \mathrm{gr}^{\gamma}_n K_{\bullet}(X)$. We then define \begin{definition}[\cite{beilinson:higher}] If $X$ is a regular scheme, its \emph{motivic cohomology groups} for $n\in\numberset{N}$ are: \begin{equation*} H^i_{\mot}(X,n) = \numberset{Q} \otimes_{\numberset{Z}} \mathrm{gr}_n^{\gamma} K_{2n-i}(X). \end{equation*} \end{definition} By definition, these are finite dimensional vector spaces over $\numberset{Q}$. Let $\mathcal{T}\in\{\et,\dR,\mathrm{Betti},\mathcal{D},\syn,\rig\}$ be a cohomology theory.
For any choice of $\mathcal{T}$ there exists a notion of ``constant'' sheaf $\numberset{Q}_{\mathcal{T}}$, for example for de Rham and étale cohomology they are actually the constant sheaves $\numberset{Q}$ and $\numberset{Q}_p$. For each cohomology theory there is a \emph{regulator} map \begin{equation*} \regulator{\mathcal{T}} \colon H^i_{\mot}(X,n) \to H^i_{\mathcal{T}}(X,\numberset{Q}_{\mathcal{T}}(n)). \end{equation*} These maps are compatible with pull-backs, push-forwards and cup products. In the sequel we will only need the regulators $\regulator{\del}$, $\regulator{\et}$ and $\regulator{\syn}$. For their construction and properties we refer to the original work of Beilinson~\cite{beilinson:higher} for the first, to~\cite[Proposition~B.4.6 and Appendix~B]{huber.wildeshaus:classical} for the second, and to~\cite[Theorem~7.5]{besser:syntomic} for the third, where it was originally introduced. Let $X$ be a smooth and proper variety of even dimension $2d$ over a field $K$ of characteristic zero (with a chosen embedding $K \hookrightarrow \C$). Let $0\leq j\leq d-1$ be an integer. Let $\pi \colon X \to \Spec K$ be the structural morphism, then there is a push-forward map in any degree and for all $n\in\numberset{N}$ \begin{equation*} \pi_* \colon H_{\mot}^{\bullet}(X,n) \to H_{\mot}^{\bullet-4d}(\Spec K,n-2d). \end{equation*} Composing the cup product with the push-forward along $\pi$, we obtain a pairing in motivic cohomology: \begin{equation*} \langle\cdot ,\cdot \rangle \colon H^{2d+1}_{\mot}(X,2d-j) \times H^{2d}_{\mot}(X,d) \xrightarrow{\cup} H^{4d+1}_{\mot}(X,3d-j) \xrightarrow{\pi_*} H^1_{\mot}(\Spec K,d-j). \end{equation*} One of the main results of this article is the construction of an element in the rightmost motivic cohomology group which gives back $L$-values when realised in different cohomology theories. In this section we lay down the machinery to compute said realisations.
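\begin{remark}
As a sanity check, the degrees and twists in the above composition can be tracked directly from the definitions. The cup product adds both degrees and twists,
\begin{equation*}
(2d+1) + 2d = 4d+1, \qquad (2d-j) + d = 3d-j,
\end{equation*}
while the push-forward $\pi_*$ lowers the degree by $4d$ and the twist by $2d$, since $X$ has dimension $2d$:
\begin{equation*}
(4d+1) - 4d = 1, \qquad (3d-j) - 2d = d-j.
\end{equation*}
This matches the target $H^1_{\mot}(\Spec K,d-j)$.
\end{remark}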
\begin{remark} When $j=d-1$ the above pairing has a geometric interpretation as the Bloch intersection pairing. Indeed, we have isomorphisms \begin{gather*} H_{\mot}^{2d+1}(X,d+1) \simeq \mathrm{CH}^{d+1}(X,1), \qquad H_{\mot}^{2d}(X,d) \simeq \mathrm{CH}^d(X), \\ H_{\mot}^1(\Spec K,1) \simeq K^{\times} \end{gather*} and given generators $(Z,f_Z) \in \mathrm{CH}^{d+1}(X,1)$ and $[Z']\in\mathrm{CH}^d(X)$ we can define \begin{equation*} \langle (Z,f_Z), [Z'] \rangle = f_Z(Z\cap Z'). \end{equation*} Since $f_Z \in k(X)^{\times}$ it is possible to arrange $Z'$ such that the intersection avoids zeros and poles of $f_Z$. This pairing is then extended by bilinearity to the full higher Chow groups, and it coincides with the above one. This point of view is constantly used in~\cite{dasgupta:factorization}---which treats the case $d=1$---but we will not use it in this paper. \end{remark} \subsection{A complex diagram} \label{sec:complex-diagram} In this subsection we explain the strategy we will apply to compute the image of cohomology classes under the regulator $\regulator{\del}$ to Deligne cohomology. In this way, in Subsection~\ref{subsec:complex-argument} we will relate Beilinson-Flach cohomology classes with the special values of the complex Rankin-Selberg $L$-function. We will use Deligne cohomology for varieties defined over subrings of $\C$, in particular if a variety is defined over a subring of $\numberset{R}$ we will ignore its real structure. Our main references on Deligne cohomology are~\cite{deninger.scholl:beilinson,esnault.viehweg:deligne-beilinson,jannsen:deligne}. Let $\regulator{\del} \colon H_{\mot} \to H_{\mathcal{D}}$ be the Deligne regulator. As $\regulator{\del}$ is compatible with cup product and push-forward, there is a commutative diagram as in figure~\ref{diag:complex-compatibility}. For computations it is useful to attach a row in de Rham cohomology. 
Deligne cohomology fits in a long exact sequence~\cite[Corollary~2.10 (a)]{esnault.viehweg:deligne-beilinson}: \begin{equation*} \cdots \to H^p_{\mathcal{D}}(X,K(q)) \to F^q H^p_{\dR}(X,K)\oplus H^p_B(X,K(q)) \to H^p_{\dR}(X,\C) \to [+1] \end{equation*} Moreover, when $p+1\leq 2q$ there is a short exact sequence in the category $\mathrm{MH}_K$ of mixed Hodge structures over $K$: \begin{equation*} 0 \to \Ext^1_{\mathrm{MH}_K}(K,H^p_B(X,K(q))) \to H_{\mathcal{D}}^{p+1}(X,K(q)) \to \Ext^0_{\mathrm{MH}_K}(K,H^{p+1}_B(X,K(q))) \to 0. \end{equation*} In particular, if $p+1 < 2q$ then $H^{p+1}_B(X,K(q))$ is a pure Hodge structure of negative weight, so it does not admit morphisms from the pure structure $K$ of weight $0$. In this case the sequence becomes an isomorphism between $\Ext^1$ and $H_{\mathcal{D}}^{p+1}$. In general, the groups $\Ext^i_{\mathrm{MH}_K}$ have the explicit characterisation given in~\cite[Equations~2.4.2 and 2.7.1, Proposition~4.13]{jannsen:deligne}: \begin{equation} \label{eq:ext-groups} \begin{aligned} \Ext^0_{\mathrm{MH}_K}(K,H) &\simeq W_0 H \cap F^0(H\otimes \C), \\ \Ext^1_{\mathrm{MH}_K}(K,H) &\simeq \frac{W_0 H \otimes \C}{W_0 H + F^0(H\otimes \C)}, \\ \Ext^i_{\mathrm{MH}_K}(K,-) &= 0 \quad \text{for $i>1$}. \end{aligned} \end{equation} In particular, by the second isomorphism, if $q$ is big enough then $H_{\mathcal{D}}^p(X,K(q))$ has also the structure of an intermediate Jacobian. In both cases considered in diagram~\ref{diag:complex-compatibility} the condition $p+1 \leq 2q$ holds. In the first case $(p+1 = 2d+1, q = 2d-j)$ we also have $p+1 < 2q$, so as explained above the Betti cohomology group $H_B^{p+1}(X,K(q))$ does not admit morphisms from the pure structure $K$ of weight $0$. Hence \begin{equation*} \Ext^0_{\mathrm{MH}_K}(K,H^{2d+1}_B(X,K(2d-j))) = \Hom_{\mathrm{MH}_K}(K,H^{2d+1}_B(X,K(2d-j))) = 0. 
\end{equation*} Therefore there are canonical maps \begin{equation*} \begin{aligned} H_{\mathcal{D}}^{2d+1}(X,K(2d-j)) &\xrightarrow{\simeq} \Ext^1_{\mathrm{MH}_K}(K,H_B^{2d}(X,K(2d-j))), \\ H_{\mathcal{D}}^{2d}(X,K(d)) &\to \Ext^0_{\mathrm{MH}_K}(K,H_B^{2d}(X,K(d))). \end{aligned} \end{equation*} By repeating the argument for the other two groups and remembering that the spectral sequence is compatible with cup products and push-forwards, we obtain the next theorem. \begin{theorem} If $X$ is smooth and proper, then the diagram in figure~\ref{diag:complex-diagram} is commutative. \end{theorem} When $X$ is smooth and affine (but not necessarily proper), its de Rham and Betti cohomology vanish in degrees strictly greater than its dimension $2d$. Consequently, there are trivially no non-zero homomorphisms from $K$ to $H^{2d+1}_B(X,K(2d-j))$. Therefore the first part of the argument still applies and proves the following. \begin{proposition} \label{prop:isomorphism-for-affine-deligne} If $X$ is smooth and affine, then there is an isomorphism \begin{equation*} H_{\mathcal{D}}^{2d+1}(X,K(2d-j)) \xrightarrow{\simeq} \Ext^1_{\mathrm{MH}_K}(K,H_B^{2d}(X,K(2d-j))). \end{equation*} \end{proposition} We also recall the de Rham cycle class map. \begin{definition} \label{def:cycle-class-map} The \emph{de Rham cycle class map} is the function associating a ``current'' to every equivalence class of subvarieties: \begin{align*} \cl_{\dR} \colon \mathrm{CH}^i(X) &\to H^{2d-i,2d-i}_{\dR}(X,\C)^{\vee} \\ [V] &\mapsto \Bigl( \rho \mapsto \int_{V\setminus V^{\mathrm{sing}}} \rho \Bigr) \end{align*} where $V^{\mathrm{sing}}$ is the set of singular points of $V$. \end{definition} The prescription gives a well-defined de Rham class in $H^{2d-i,2d-i}_{\dR}(X,\C)^{\vee} \simeq H^{i,i}_{\dR}(X,\C)$, where the isomorphism is given by Poincaré duality and Hodge theory, and so identifies a unique cohomology class $\cl_{\dR}(V)$. We can be more precise about the target of the cycle class map.
By the de Rham isomorphism \begin{equation*} H_B^{2i}(X)\otimes \C \simeq H^{2i}_{\dR}(X,\C) \end{equation*} we identify $H_B^{2i}(X,\C)$ with $H^{2i}_{\dR}(X,\C)$. For every subring $R\subseteq \C$ we can then regard $H_B^{2i}(X,R) = H_B^{2i}(X)\otimes R$ as a vector subspace of $H^{2i}_{\dR}(X,\C)$. The target of the cycle class map is then: \begin{equation*} \cl_{\dR} \colon \mathrm{CH}^i(X) \to H^{2i}_B(X,\numberset{Z}) \cap H^{i,i}_{\dR}(X,\C). \end{equation*} Indeed, a general element of $\mathrm{CH}^i(X)$ is an equivalence class, represented by a finite linear combination of subvarieties of $X$, with coefficients in $\numberset{Z}$. This coincides with the description of a general element of $H^{2i}_B(X,\numberset{Z})$. As we will explain in Subsection~\ref{subsec:complex-argument}, $\cl_{\dR}$ is compatible with $\regulator{\del}$. Its usefulness then lies in the fact that we can compute the Deligne regulator from $H_{\mot}^{2i}(X,i)$ by computing the more explicit map $\cl_{\dR}$. \subsection{A \texorpdfstring{$p$}{p}-adic diagram} \label{sec:p-adic-diagram} As in the previous subsection, we start from the commutative diagram given by the compatibility of the étale regulator $\regulator{\et} \colon H_{\mot} \to H_{\et}$ with cup product and push-forward: see figure~\ref{diag:etale-compatibility}. We now bring in the Hochschild-Serre spectral sequence: recall that for every $n\in\numberset{Z}$ its second page is \begin{equation*} E_2^{ij} = H^i(K,H^j_{\et}(X_{\overline{K}},\numberset{Q}_p(n))) \implies H^{i+j}_{\et}(X_K,\numberset{Q}_p(n)). \end{equation*} In particular one has a canonical edge map \begin{equation*} H_{\et}^{2d}(X,\numberset{Q}_p(d)) \to H^0(K,H^{2d}_{\et}(X_{\overline{K}},\numberset{Q}_p(d))).
\end{equation*} On the other hand, we would like the group $H^{2d+1}_{\et}(X,\numberset{Q}_p(2d-j))$ to map to $E_2^{1,2d}$, to mimic the same procedure done for Deligne cohomology. However, such a morphism exists naturally only when the edge map \begin{equation*} H^{2d+1}_{\et}(X,\numberset{Q}_p(2d-j)) \to H^0(K,H^{2d+1}_{\et}(X_{\overline{K}},\numberset{Q}_p(2d-j))) \end{equation*} is zero, which is not true in general. We treat separately the cases where $K$ is a global or a local field. Suppose first that $K$ is a number field. If $X$ is smooth and proper over $K$, then $H^{2d+1}_{\et}(X_{\overline{K}},\numberset{Q}_p(2d-j))$ is a pure $p$-adic representation of $G_K$ of weight $1+2j-2d$. Since $1+2(j-d) < 0$, there cannot be any morphisms from the trivial representation---which is pure of weight zero---to it. Indeed, the image of a generator of $K$ would determine a $1$-dimensional sub-representation on which almost all Frobenii have characteristic polynomials with roots of absolute value $1$, but this is impossible since the target representation is pure of non-zero weight. Therefore \begin{equation*} H^0(K,H^{2d+1}_{\et}(X_{\overline{K}},\numberset{Q}_p(2d-j))) = 0. \end{equation*} The Hochschild-Serre spectral sequence converges to $H = H_{\et}^{2d+1}(X_K,\numberset{Q}_p(2d-j))$. We have \begin{equation*} \frac{H}{F^1 H} = E_{\infty}^{0,2d+1} \hookrightarrow H^0(K,H^{2d+1}_{\et}(X_{\overline{K}},\numberset{Q}_p(2d-j))) = 0 \end{equation*} which shows that $F^1 H = H$, and since $E_{\infty}^{1,2d} = F^1 H / F^2 H$, we get a canonical edge map \begin{equation*} H \twoheadrightarrow E_{\infty}^{1,2d} \hookrightarrow E_2^{1,2d} = H^1(K,H^{2d}_{\et}(X_{\overline{K}},\numberset{Q}_p(2d-j))). \end{equation*} Thus we have obtained the map we were looking for, under the hypothesis that $K$ is a number field. The usefulness of this strategy lies in the following theorem.
\begin{theorem} If $K$ is a number field, and $X$ is smooth and proper, then the diagram in figure~\ref{diag:etale-diagram} is commutative. \end{theorem} \begin{proof} This is an instance of the more general compatibility of the edge maps with cup products and push-forwards. \end{proof} When $X$ is smooth and affine (but not necessarily proper), its base change to an algebraically closed field satisfies \begin{equation*} H^i_{\et}(X_{\overline{K}},\numberset{Q}_p) = 0 \quad \forall i > 2d. \end{equation*} Therefore the first part of the above argument still applies and proves the following. \begin{proposition} \label{prop:morphism-for-affine-etale} If $X$ is smooth and affine over a number field, then there is a morphism \begin{equation*} H_{\et}^{2d+1}(X,\numberset{Q}_p(2d-j)) \to H^1(K,H_{\et}^{2d}(X_{\overline{K}},\numberset{Q}_p(2d-j))). \end{equation*} \end{proposition} Suppose now that $K$ is a $p$-adic field, i.e.\ a finite extension of $\numberset{Q}_p$. In this case we cannot invoke the notion of ``pure'' $G_K$-representation, so we cannot argue for the vanishing of $E_2^{0,2d+1}$. However, when $K_v$ is the completion of a number field $K$ at a place $v$ above $p$, we have localisation maps induced by $K \to K_v$ which are compatible with the edge maps: \begin{equation*} \begin{tikzcd} H^{2d+1}_{\et}(X_K,\numberset{Q}_p(2d-j)) \arrow{r} \arrow{d} & H^0(K,H^{2d+1}_{\et}(X_{\overline{K}},\numberset{Q}_p(2d-j))) \arrow{d} \\ H^{2d+1}_{\et}(X_{K_v},\numberset{Q}_p(2d-j)) \arrow{r} & H^0(K_v,H^{2d+1}_{\et}(X_{\overline{K_v}},\numberset{Q}_p(2d-j))) \end{tikzcd} \end{equation*} By the previous argument, if $X$ is smooth and proper over $K$ then the top right group is trivial. Therefore every element in the cohomology of $X_{K_v}$ that is the localisation of a global class is still in the kernel of the bottom edge map.
As the kernel coincides with $F^1 H_v$ for $H_v = H^{2d+1}_{\et}(X_{K_v},\numberset{Q}_p(2d-j))$, it is equipped with the sought morphism to $E_2^{1,2d}$. This is the strategy we will use in Subsection~\ref{subsec:p-adic-argument}. \begin{landscape} \begin{figure} \caption{Compatibility of $\regulator{\del}$ with cup product and push-forward} \label{diag:complex-compatibility} \begin{equation*} \begin{tikzcd}[column sep=small] H^{2d+1}_{\mot}(X,2d-j) \times H^{2d}_{\mot}(X,d) \arrow["\cup"]{r} \arrow["\regulator{\del} \times \regulator{\del}"]{d} & H^{4d+1}_{\mot}(X,3d-j) \arrow["\pi_*"]{r} \arrow["\regulator{\del}"]{d} & H^1_{\mot}(\Spec K,d-j) \arrow[d,"\regulator{\del}"] \\ H_{\mathcal{D}}^{2d+1}(X_{\C},K(2d-j)) \times H_{\mathcal{D}}^{2d}(X_{\C},K(d)) \arrow["\cup"]{r} & H_{\mathcal{D}}^{4d+1}(X_{\C},K(3d-j)) \arrow["\pi_*"]{r} & H_{\mathcal{D}}^1(\Spec \C,K(d-j)) \end{tikzcd} \end{equation*} \end{figure} \begin{figure} \caption{Complex diagram} \label{diag:complex-diagram} \begin{equation*} \begin{tikzcd}[column sep=small] H^{2d+1}_{\mot}(X,2d-j) \times H^{2d}_{\mot}(X,d) \arrow["\cup"]{r} \arrow["\regulator{\del} \times \regulator{\del}"]{d} & H^{4d+1}_{\mot}(X,3d-j) \arrow["\pi_*"]{r} \arrow["\regulator{\del}"]{d} & H^1_{\mot}(\Spec K,d-j) \arrow[d,"\regulator{\del}"] \\ H_{\mathcal{D}}^{2d+1}(X_{\C},K(2d-j)) \times H_{\mathcal{D}}^{2d}(X_{\C},K(d)) \arrow["\cup"]{r} \arrow{d} & H_{\mathcal{D}}^{4d+1}(X_{\C},K(3d-j)) \arrow["\pi_*"]{r} \arrow{d} & H^1_{\mathcal{D}}(\Spec \C,K(d-j)) \arrow[equal]{d} \\ \Ext^1_{\mathrm{MH}_K}(K,H_B^{2d}(X,K(2d-j))) \times \Ext^0_{\mathrm{MH}_K}(K,H_B^{2d}(X,K(d))) \arrow["\cup"]{r} & \Ext^1_{\mathrm{MH}_K}(K,H_B^{4d}(X,K(3d-j))) \arrow["\pi_*"]{r} & H^1_{\mathcal{D}}(\Spec \C,K(d-j)) \end{tikzcd} \end{equation*} \end{figure} \end{landscape} \begin{landscape} \begin{figure} \caption{Compatibility of $\regulator{\et}$ with cup product and push-forward} \label{diag:etale-compatibility} \begin{equation*} \begin{tikzcd}[column 
sep=small] H^{2d+1}_{\mot}(X,2d-j) \times H^{2d}_{\mot}(X,d) \arrow["\cup"]{r} \arrow["\regulator{\et} \times \regulator{\et}"]{d} & H^{4d+1}_{\mot}(X,3d-j) \arrow["\pi_*"]{r} \arrow["\regulator{\et}"]{d} & H^1_{\mot}(\Spec K,d-j) \arrow[d,"\regulator{\et}"] \\ H_{\et}^{2d+1}(X,\numberset{Q}_p(2d-j)) \times H_{\et}^{2d}(X,\numberset{Q}_p(d)) \arrow["\cup"]{r} & H_{\et}^{4d+1}(X,\numberset{Q}_p(3d-j)) \arrow["\pi_*"]{r} & H_{\et}^1(\Spec K,\numberset{Q}_p(d-j)) \end{tikzcd} \end{equation*} \end{figure} \begin{figure} \caption{$p$-adic étale diagram} \label{diag:etale-diagram} \begin{equation*} \begin{tikzcd}[column sep=small] H^{2d+1}_{\mot}(X,2d-j) \times H^{2d}_{\mot}(X,d) \arrow["\cup"]{r} \arrow["\regulator{\et} \times \regulator{\et}"]{d} & H^{4d+1}_{\mot}(X,3d-j) \arrow["\pi_*"]{r} \arrow["\regulator{\et}"]{d} & H^1_{\mot}(\Spec K,d-j) \arrow[d,"\regulator{\et}"] \\ H_{\et}^{2d+1}(X,\numberset{Q}_p(2d-j)) \times H_{\et}^{2d}(X,\numberset{Q}_p(d)) \arrow["\cup"]{r} \arrow{d} & H_{\et}^{4d+1}(X,\numberset{Q}_p(3d-j)) \arrow["\pi_*"]{r} \arrow{d} & H_{\et}^1(\Spec K,\numberset{Q}_p(d-j)) \arrow[equal]{d} \\ H^1(K,H_{\et}^{2d}(X_{\overline{K}},\numberset{Q}_p(2d-j))) \times H^0(K,H_{\et}^{2d}(X_{\overline{K}},\numberset{Q}_p(d))) \arrow["\cup"]{r} & H^1(K,H_{\et}^{4d}(X_{\overline{K}},\numberset{Q}_p(3d-j))) \arrow["\pi_*"]{r} & H_{\et}^1(\Spec K,\numberset{Q}_p(d-j)) \end{tikzcd} \end{equation*} \end{figure} \end{landscape} \section{Cohomology of Kuga-Sato varieties} \label{sec:cohomology-kuga-sato} In this section we introduce the key geometric objects of this paper, i.e.\ Kuga-Sato varieties, along with their cohomology. They were introduced and studied in~\cite{deligne:formes, scholl:motives}---which are also the main references---in the general case, and in the appendix of~\cite{bertolini.darmon:generalized} by Brian Conrad for the case of level $\Gamma_1(N)$. From now on we assume $N\geq 5$ so that all the objects we construct exist. 
\subsection{Motivic cohomology} Let $Y_1(N)$ be the moduli space of elliptic curves with a point of order $N$. The curve $Y_1(N)$ has a model over $\numberset{Q}$ which becomes isomorphic to a finite number of copies of $\Gamma_1(N) \backslash \mathcal{H}$ when base changed to $\C$. Let $\mathcal{E} \xrightarrow{\pi} Y_1(N)$ be the universal elliptic curve. For every $d\in\numberset{N}_{\geq 1}$ we can form the $d$-th fibre product of $\mathcal{E}$ over the modular curve, where in every component the map along which we are taking the fibre product is the projection $\pi$. We have thus a variety: \begin{equation*} \mathcal{E}^d \xrightarrow{\pi_d} Y_1(N). \end{equation*} $\mathcal{E}^d$ is $d$-dimensional over $Y_1(N)$ and $(d+1)$-dimensional over $\Spec \numberset{Q}$. We can further form the fibre product $\mathcal{E}^d \times \mathcal{E}^d$, where this time we consider both copies of $\mathcal{E}^d$ as varieties over $\Spec \numberset{Q}$. We have then a commutative diagram: \begin{equation*} \begin{tikzcd} \mathcal{E}^d \arrow[hook]{r} \arrow["\pi_d"]{d} & \mathcal{E}^d \times \mathcal{E}^d \arrow["\pi_d \times \pi_d"]{d} \\ Y_1(N) \arrow[hook]{r} & Y_1(N) \times Y_1(N) \end{tikzcd} \end{equation*} Let $X_1(N)$ be the Baily-Borel compactification of $Y_1(N)$. Notice that with our choice of model for $Y_1(N)$, the cusp at infinity of $X_1(N)$ is only defined over $\numberset{Q}(\mu_N)$, even though $X_1(N)$ is still defined over $\numberset{Q}$. One can consider the universal generalised elliptic curve $\overline{\mathcal{E}} \to X_1(N)$ and repeat the process to have proper varieties. However, the fibres of $\overline{\mathcal{E}}$ over the cusps are singular cubics, so $\overline{\mathcal{E}}^d$ fails to be smooth when $d>1$. In~\cite{deligne:formes} Deligne proved the existence of a \emph{canonical} desingularisation which we denote with $W_d$: \begin{equation*} W_d \to \overline{\mathcal{E}}^d \to X_1(N). 
\end{equation*} This comes equipped with a structural morphism $\overline{\pi} \colon W_d \to X_1(N)$, and is a smooth and proper variety over $X_1(N)$. Define also the \emph{cuspidal locus} $W_d^{\infty} = \overline{\pi}^{-1}\bigl( X_1(N) \setminus Y_1(N) \bigr) = W_d \times_{X_1(N)} \{\mathrm{cusps}\}$. Notice that $\overline{\pi}^{-1}(Y_1(N)) = \mathcal{E}^d$. We also denote by $\hat{\mathcal{E}}^d$ the locus of $\overline{\mathcal{E}}^d$ where the projection to $X_1(N)$ is smooth, i.e.\ the Néron model of $\mathcal{E}^d$ over $X_1(N)$, and by $\hat{\mathcal{E}}^{d,*}$ the connected component of the identity. \begin{definition} The \emph{Kuga-Sato variety} $\mathscr{W}_d$ is the self-fibre product of $W_d$ over the base scheme $\Spec \numberset{Q}$ \begin{equation*} \mathscr{W}_d \to X_1(N)^2, \quad \mathscr{W}_d = W_d \times_{\Spec \numberset{Q}} W_d. \end{equation*} Define also the \emph{cuspidal locus} $\mathscr{W}_d^{\infty} = (\overline{\pi}\times\overline{\pi})^{-1}\bigl( X_1(N)^2 \setminus Y_1(N)^2 \bigr)$. \end{definition} All the varieties just introduced are defined over $\numberset{Q}$. Kuga-Sato varieties are both smooth and proper over the base scheme $\Spec \numberset{Q}$, as are the $W_d$. Hence their cohomology groups enjoy several desirable properties. This is one of the two main motivations for the introduction of Kuga-Sato varieties. The second is that they trivialise the coefficient sheaves on $X_1(N)^2$ whose cocycles are given by cusp forms, as we explain below. Kuga-Sato varieties are the closest we can get to $\mathcal{E}^d \times \mathcal{E}^d$ while keeping both the smooth and proper requirements. At this stage one would like to link the cohomology of $\mathscr{W}_d$ to that of $\mathcal{E}^d \times \mathcal{E}^d$, in order to pass easily from classes which are easier to construct to classes which enjoy more cohomological properties. The best result in this direction was proved by Brunault and Chida. 
Let $S_d$ be the group of permutations on $d$ objects. There is an obvious action of $S_d$ on the fibre product $\mathcal{E}^d$ given by permuting the coordinates. Likewise, the group $\mu_2$ acts on $\mathcal{E}$ as the multiplication by $-1$, and this of course extends to an action of $\mu_2^d$ on $\mathcal{E}^d$. We gather these together into an action of the group $\mu_2^d \rtimes S_d = \mathfrak{I}_d$. \begin{definition} The character $\epsilon_d$ of $\mathfrak{I}_d$ is defined by: \begin{equation*} \epsilon_d \colon \mathfrak{I}_d \to \mu_2, \quad (a_1,\ldots,a_d,\sigma) \mapsto a_1\cdots a_d \cdot \sgn(\sigma). \end{equation*} \end{definition} \begin{theorem}[{\cite[Proposition~8.1]{brunault.chida:regulators}}] \label{thm:brunault-chida} For every quadruple $(i,d,d',j) \in \numberset{N}^4$ with $d,d' \geq 1$, we have \begin{align*} H_{\mot}^i(W_d \times W_{d'}, \numberset{Q}(j))(\epsilon_d,\epsilon_{d'}) &\simeq H_{\mot}^i(\hat{\mathcal{E}}^{d,*} \times \hat{\mathcal{E}}^{d',*}, \numberset{Q}(j))(\epsilon_d,\epsilon_{d'}) \intertext{in particular} H_{\mot}^i(\mathscr{W}_d, \numberset{Q}(j))(\epsilon_d,\epsilon_d) &\simeq H_{\mot}^i(\hat{\mathcal{E}}^{d,*} \times \hat{\mathcal{E}}^{d,*}, \numberset{Q}(j))(\epsilon_d,\epsilon_d). \end{align*} \end{theorem} \begin{remark} \label{remark:hecke-ops} Since the desingularisation is canonical, the Hecke correspondences on $X_1(N)$ associated to the Hecke operators extend as correspondences on $W_d$, as explained in~\cite[§4]{scholl:motives}. In particular they act on the cohomology of $W_d$. \end{remark} \subsubsection{The coefficient sheaf} Let $\mathcal{T}\in\{\et,\dR,\mathrm{Betti},\mathcal{D},\syn,\rig\}$ be a cohomology theory. Recall that $\numberset{Q}_{\mathcal{T}}$ denotes the standard notion of ``constant'' sheaf in $\mathcal{T}$. Let $\mathscr{H}_{\numberset{Q}}(\mathcal{E})$ be the sheaf on $Y_1(N)$ defined as $(R^1\pi_* \numberset{Q}_{\mathcal{T}})(1)$. 
This can be described explicitly as ``cohomology along the fibres'' by the assignment \begin{equation*} U \mapsto H_{\mathcal{T}}^1(\pi^{-1}(U),\numberset{Q}_{\mathcal{T}}\vert_{\pi^{-1}(U)})(1). \end{equation*} In particular the fibre at a point $x \in Y_1(N)$ is $H_{\mathcal{T}}^1(\mathcal{E}_x,\numberset{Q}_{\mathcal{T}})(1)$. For every field $K/\numberset{Q}$, we denote by $\mathscr{H}_K(\mathcal{E}) = \mathscr{H}_{\numberset{Q}}(\mathcal{E})\otimes K$ the extension of scalars to $K$. This sheaf constitutes the ``coefficients'' of the cohomology groups where the $\mathcal{T}$-realisations of Eisenstein motivic classes live, if we wish to work over modular curves. Indeed, a result of Scholl states that, roughly speaking, the cohomology of $\mathcal{E}^d$ with trivial coefficients is equivalent to the cohomology of the modular curve $Y_1(N)$ with coefficients in a suitably defined sheaf $\TSym^d \mathscr{H}_{\numberset{Q}}(\mathcal{E})$. We now explain Scholl's result concretely by means of Liebermann's trick, a manipulation coming from the analysis of the action of permutations on Kuga-Sato varieties. As before $\epsilon_d$ is a character of $\mathfrak{I}_d$, which acts on $\mathcal{E}^d$ and $W_d$. If $H$ is an abelian group, the group $S_d$ also acts in a natural way on the $d$-th tensor power $H^{\otimes d}$ by permuting the factors. Define $\TSym^d H \subseteq H^{\otimes d}$ as the subgroup of invariants under this action, and $\Sym^d H$ as the coinvariants. \begin{proposition}[{\cite[Proposition~4.1.1]{scholl:motives}}] For every cohomology theory $\mathcal{T}\in\{\et,\dR,\allowbreak\mathrm{Betti},\allowbreak\mathcal{D},\syn,\rig\}$, there is an isomorphism: \begin{equation*} H_{\mathcal{T}}^i(Y_1(N),\TSym^d \mathscr{H}_{\numberset{Q}}(\mathcal{E})(j)) \simeq H_{\mathcal{T}}^{i+d}(\mathcal{E}^d, \numberset{Q}_{\mathcal{T}}(j+d))(\epsilon_d). \end{equation*} \end{proposition} Motivic cohomology should be the initial object in the category of ``cohomology theories''; this motivates the following definition. 
\begin{definition} Motivic cohomology of modular curves with coefficients is defined by $H_{\mot}^i(Y_1(N),\TSym^d \mathscr{H}_{\numberset{Q}}(\mathcal{E})(j)) = H_{\mot}^{i+d}(\mathcal{E}^d, \numberset{Q}(j+d))(\epsilon_d).$ \end{definition} This description is the same as the one outlined in~\cite{kings.loeffler:rankin-eisenstein}. In particular, the regulator maps $\regulator{\mathcal{T}}$ commute with the action of the character $\epsilon_d$, so they descend to \begin{equation*} \regulator{\mathcal{T}} \colon H^i_{\mot}(Y_1(N),\TSym^d \mathscr{H}_{\numberset{Q}}(\mathcal{E})(j)) \to H^i_{\mathcal{T}}(Y_1(N),\TSym^d \mathscr{H}_{\numberset{Q}}(\mathcal{E})(j)). \end{equation*} By~\cite[Théorème~1.3]{ancona:decomposition}, the sheaf $\mathscr{H}_{\numberset{Q}}(\mathcal{E})$ exists as a motivic sheaf over $Y_1(N)$, and so do its symmetric tensors. Hence we can make sense of the usual homological constructions. Denote $\TSym^{[d,d']} = \TSym^d \otimes \TSym^{d'}$; then together with the above definition we have the following proposition. \begin{proposition} The following equality holds: \begin{equation*} H^i_{\mot}(Y_1(N)^2,\TSym^{[d,d']} \mathscr{H}_{\numberset{Q}}(\mathcal{E})(j)) = H^{i+d+d'}_{\mot}(\mathcal{E}^d \times \mathcal{E}^{d'},\numberset{Q}(j+d+d'))(\epsilon_d,\epsilon_{d'}). \end{equation*} \end{proposition} \begin{proof} Thanks to~\cite[Théorème~1.3]{ancona:decomposition} the sheaf on the left-hand side exists as a motivic sheaf over $Y_1(N)^2$, and the motivic cohomology group is in the category of (relative) Chow motives. 
By~\cite[§2.2]{ancona:decomposition}, in this category the Künneth formula holds, so we can compute: \begin{gather*} \begin{lgathered} H^i_{\mot}(Y_1(N)^2,\TSym^{[d,d']} \mathscr{H}_{\numberset{Q}}(\mathcal{E})(j)) \\ = \bigoplus_{p=0}^i H^p_{\mot}(Y_1(N),\TSym^d \mathscr{H}_{\numberset{Q}}(\mathcal{E})(j))\otimes H^{i-p}_{\mot}(Y_1(N),\TSym^{d'} \mathscr{H}_{\numberset{Q}}(\mathcal{E})) \\ = \bigoplus_{p=0}^i H^{p+d}_{\mot}(\mathcal{E}^d,\numberset{Q}(j+d))(\epsilon_d)\otimes H^{i+d'-p}_{\mot}(\mathcal{E}^{d'},\numberset{Q}(d'))(\epsilon_{d'}) \\ = H^{i+d+d'}_{\mot}(\mathcal{E}^d \times \mathcal{E}^{d'},\numberset{Q}(j+d+d'))(\epsilon_d,\epsilon_{d'}) \end{lgathered} \end{gather*} which proves the claim. \end{proof} Carrying forward these ideas, we give the next definition. \begin{definition} Define motivic cohomology groups as follows: \begin{equation*} H^i_{\mot}(X_1(N)^2,\TSym^{[d,d']} \mathscr{H}_{\numberset{Q}}(\mathcal{E})(j)) = H^{i+d+d'}_{\mot}(W_d\times W_{d'},\numberset{Q}(j+d+d'))(\epsilon_d,\epsilon_{d'}). \end{equation*} \end{definition} \subsection{Motives for cusp forms and Rankin-Selberg convolution} \label{subsec:motives} In this subsection we identify ``pieces'' of various cohomology groups that are naturally associated to cusp forms and their convolutions. This is important for our work, since these objects are smaller than the whole cohomology groups but still retain all the information we need. We follow~\cite[§4]{scholl:motives}. Recall that by the remark on page~\pageref{remark:hecke-ops}, the Hecke operators act on the cohomology of $W_k$ and $W_k\times W_{k'}$. For any Hecke operator $T$, denote by $T'$ the dual Hecke operator, i.e.\ its adjoint with respect to the Petersson inner product. When $l$ is a prime not dividing the level, $T_l' = T_l\langle l \rangle^{-1}$. From now on, we denote by $\etbar$ étale cohomology over $\overline{\numberset{Q}}$. \begin{definition} Let $\mathcal{T} \in \{\dR, B, \etbar, \rig\}$. 
Define: \begin{itemize} \item $M_{\mathcal{T}}(f)$ as the maximal $K_f\otimes \numberset{Q}_{\mathcal{T}}$-submodule of \begin{equation*} H^{k+1}_{\mathcal{T}}(\mathcal{E}^k,\numberset{Q}_{\mathcal{T}})(\epsilon_k) \otimes_{\numberset{Q}} K_f \end{equation*} on which the Hecke operators $T_l$ act as multiplication by $a_l(f)$ for all primes $l$; \item $M_{\mathcal{T}}(f)^*$ as the maximal quotient of \begin{equation*} H^{k+1}_{\mathcal{T}}(\mathcal{E}^k,\numberset{Q}_{\mathcal{T}}(k+1))(\epsilon_k) \otimes_{\numberset{Q}} K_f \end{equation*} on which the dual Hecke operators $T_l'$ act as multiplication by $a_l(f)$ for all primes $l$; \end{itemize} Both $M_{\mathcal{T}}(f)$ and $M_{\mathcal{T}}(f)^*$ are $2$-dimensional. \end{definition} The different realisations enjoy extra structure: \begin{itemize} \item $M_{\dR}(f)$ and $M_{\dR}(f)^*$ are filtered $K_f$-vector spaces; \item $M_B(f)$ and $M_B(f)^*$ are pure Hodge structures over $K_f$ of weight $k+1$ and $-(k+1)$ respectively, and whose only non-zero Hodge numbers are $h^{k+1,0}$, $h^{0,k+1}$, and $h^{-(k+1),0}$, $h^{0,-(k+1)}$ respectively; \item $M_{\etbar}(f)$ and $M_{\etbar}(f)^*$ are pure $p$-adic Galois representations of weight $k+1$ and $-(k+1)$ respectively, over $K_f \otimes \numberset{Q}_p \simeq (K_f)_{\mathfrak{p}}$ for some prime ideal $\mathfrak{p} \mid p$. They are $G_{\numberset{Q}}$-representations, as the varieties are defined over $\numberset{Q}$. \end{itemize} \begin{remark} The extra twist in the definition of $M_{\mathcal{T}}(f)^*$ is explained by the aim to define $M_{\mathcal{T}}(f)^*$ such that it is naturally the dual of $M_{\mathcal{T}}(f)$. 
There is a natural pairing \begin{equation*} H^{k+1}_{\mathcal{T}}(\mathcal{E}^k,\numberset{Q}_{\mathcal{T}}) \times H^{k+1}_{\mathcal{T}}(\mathcal{E}^k,\numberset{Q}_{\mathcal{T}}) \to H^0_{\mathcal{T}}(\Spec \numberset{Q},\numberset{Q}_{\mathcal{T}}(-k-1)) = 0 \end{equation*} so in order to land in the more natural $H^0_{\mathcal{T}}(\Spec \numberset{Q},\numberset{Q}_{\mathcal{T}}) \simeq \numberset{Q}_{\mathcal{T}}$, we need an extra twist to compensate for the one on the target. \end{remark} An important property that these modules enjoy is that they can be found in several cohomology groups. Indeed, one can give the same definition for the compactified variety $W_k$, and it turns out that the resulting objects are canonically isomorphic to those we defined. Additionally, over $\mathcal{E}^k$ there is a canonical lift of $M_{\mathcal{T}}(f)$ to compactly supported cohomology (this statement makes sense only over a non-compact variety). To sum up, there are \emph{canonical} isomorphisms: \begin{equation*} \begin{tikzcd}[column sep=small] M_{\mathcal{T}}(f) \arrow[<->,"\simeq"]{r} \arrow[hook]{d} & M_{\mathcal{T}}(f) \arrow[<->,"\simeq"]{r} \arrow[hook]{d} & M_{\mathcal{T}}(f) \arrow[hook]{d} \\ H_{c,\mathcal{T}}^{k+1}(\mathcal{E}^k,\numberset{Q}_{\mathcal{T}})(\epsilon_k) & H_{\mathcal{T}}^{k+1}(\mathcal{E}^k,\numberset{Q}_{\mathcal{T}})(\epsilon_k) & H_{\mathcal{T}}^{k+1}(W_k,\numberset{Q}_{\mathcal{T}})(\epsilon_k) \end{tikzcd} \end{equation*} And: \begin{equation*} H_{\mathcal{T}}^{k+1}(\mathcal{E}^k,\numberset{Q}_{\mathcal{T}}(k+1))(\epsilon_k) \twoheadrightarrow M_{\mathcal{T}}(f)^* \simeq M_{\mathcal{T}}(f)^* \twoheadleftarrow H_{\mathcal{T}}^{k+1}(W_k,\numberset{Q}_{\mathcal{T}}(k+1))(\epsilon_k). \end{equation*} Since these are all canonical, we will use the identifications in the rest of the paper. In particular, any cohomology class in $M_{\mathcal{T}}(f)$ has an associated canonical cohomology class with compact support over $\mathcal{E}^k$. 
We also have comparison isomorphisms linking the various realisations. In particular, we have the de Rham, crystalline and Faltings-Tsuji comparison theorems: \begin{align} M_B(f) \otimes \C &\simeq M_{\dR}(f) \otimes \C, \tag{De Rham}\\ M_{\rig}(f) \otimes B_{\mathrm{crys}} &\simeq M_{\dR}(f)_{\numberset{Q}_p} \otimes B_{\mathrm{crys}} \simeq M_{\etbar}(f)_{\numberset{Q}_p} \otimes B_{\mathrm{crys}}, \tag{$C_{\mathrm{crys}}$} \\ M_{\rig}(f) \otimes B_{\dR} &\simeq M_{\dR}(f)_{\numberset{Q}_p} \otimes B_{\dR} \simeq M_{\etbar}(f)_{\numberset{Q}_p} \otimes B_{\dR} \tag{$C_{\dR}$}. \end{align} For the Rankin convolution one defines the motives using the above ones as building blocks. \begin{definition} Let $\mathcal{T}\in\{\dR,B,\etbar,\rig\}$. Define: \begin{itemize} \item $M_{\mathcal{T}}(f\otimes g) = M_{\mathcal{T}}(f) \otimes M_{\mathcal{T}}(g)$; \item $M_{\mathcal{T}}(f\otimes g)^* = M_{\mathcal{T}}(f)^* \otimes M_{\mathcal{T}}(g)^*$. \end{itemize} Both $M_{\mathcal{T}}(f\otimes g)$ and $M_{\mathcal{T}}(f\otimes g)^*$ are $4$-dimensional, as the building blocks have dimension $2$. 
\end{definition} By the Künneth formula this definition gives the following characterisations: \begin{itemize} \item $M_{\mathcal{T}}(f\otimes g)$ is the maximal $K_{f,g}\otimes \numberset{Q}_{\mathcal{T}}$-submodule of \begin{equation*} H^{k+k'+2}_{\mathcal{T}}(\mathcal{E}^k\times \mathcal{E}^{k'},\numberset{Q}_{\mathcal{T}})(\epsilon_k,\epsilon_{k'}) \otimes_{\numberset{Q}} K_{f,g} \end{equation*} on which the Hecke operators $(T_l,1)$ and $(1,T_l)$ act as multiplication by $a_l(f)$ and $a_l(g)$ respectively, for all primes $l$; \item $M_{\mathcal{T}}(f\otimes g)^*$ is the maximal quotient of \begin{equation*} H^{k+k'+2}_{\mathcal{T}}(\mathcal{E}^k\times \mathcal{E}^{k'},\numberset{Q}_{\mathcal{T}}(2+k+k'))(\epsilon_k,\epsilon_{k'}) \otimes_{\numberset{Q}} K_{f,g} \end{equation*} on which the dual Hecke operators $(T_l',1)$ and $(1,T_l')$ act as multiplication by $a_l(f)$ and $a_l(g)$ respectively, for all primes $l$; \end{itemize} As before, the different realisations enjoy extra structure: \begin{itemize} \item $M_{\dR}(f\otimes g)$ and $M_{\dR}(f\otimes g)^*$ are filtered $K_{f,g}$-vector spaces; \item $M_B(f\otimes g)$ and $M_B(f\otimes g)^*$ are pure Hodge structures over $K_{f,g}$ of weight $k+k'+2$ and $-(k+k'+2)$ respectively, and whose only non-zero Hodge numbers are \begin{align*} h^{k+k'+2,0},\; h^{k+1,k'+1} &,\; h^{k'+1,k+1},\; h^{0,k+k'+2} \\ \text{and} \quad h^{-(k+k'+2),0},\; h^{-(k+1),-(k'+1)} &,\; h^{-(k'+1),-(k+1)},\; h^{0,-(k+k'+2)} \end{align*} respectively; \item $M_{\etbar}(f\otimes g)$ and $M_{\etbar}(f\otimes g)^*$ are pure $p$-adic Galois representations of weight $k+k'+2$ and $-(k+k'+2)$ respectively, over $K_{f,g} \otimes \numberset{Q}_p \simeq (K_{f,g})_{\mathcal{P}}$ for some prime ideal $\mathcal{P} \mid p$. They are $G_{\numberset{Q}}$-representations, as the varieties are defined over $\numberset{Q}$. 
\end{itemize} Similar remarks apply to these spaces: both have a canonical lift to the cohomology of $W_k \times W_{k'}$, and $M_{\mathcal{T}}(f\otimes g)$ also has a canonical lift to the compactly supported cohomology of $\mathcal{E}^k \times \mathcal{E}^{k'}$. Moreover, they are linked by the de Rham, crystalline and Faltings-Tsuji comparison theorems. By construction there are natural pairings \begin{gather*} M_{\mathcal{T}}(f) \times M_{\mathcal{T}}(f)^* \to K_f\otimes_{\numberset{Q}} \numberset{Q}_{\mathcal{T}}, \\ M_{\mathcal{T}}(f\otimes g) \times M_{\mathcal{T}}(f\otimes g)^* \to K_{f,g}\otimes_{\numberset{Q}} \numberset{Q}_{\mathcal{T}}. \end{gather*} When $f=g$, $M_{\mathcal{T}}(f\otimes f)$ enjoys the standard decomposition into symmetric and antisymmetric tensors: \begin{equation*} M_{\mathcal{T}}(f\otimes f) \simeq \Sym^2 M_{\mathcal{T}}(f) \oplus \wedge^2 M_{\mathcal{T}}(f). \end{equation*} Let $s \colon M_{\mathcal{T}}(f\otimes f) \to M_{\mathcal{T}}(f\otimes f)$ be the involution swapping the components of the tensor product; then the above are by definition the $s=1$ and $s=-1$ eigenspaces. It is a standard fact that $\Sym^2 M_{\mathcal{T}}(f)$ is $3$-dimensional, while $\wedge^2 M_{\mathcal{T}}(f)$ is $1$-dimensional. This decomposition mirrors~\eqref{eq:decomposition-rho} in various cohomology theories; in particular, it is exactly the corresponding decomposition of Galois representations when $\mathcal{T} = \etbar$. Correspondingly we have: \begin{equation} \label{eq:decomposition-m} M_{\mathcal{T}}(f\otimes f)^* \simeq \Sym^2 M_{\mathcal{T}}(f)^* \oplus \wedge^2 M_{\mathcal{T}}(f)^*. \end{equation} Notice that by functoriality the projections \begin{gather*} \pr_f \colon H^{k+1}_{\mathcal{T}}(W_k,\numberset{Q}_{\mathcal{T}}(k+1)) \twoheadrightarrow M_{\mathcal{T}}(f)^*, \\ \pr_{f,g} \colon H^{k+k'+2}_{\mathcal{T}}(W_k\times W_{k'},\numberset{Q}_{\mathcal{T}}(2+k+k')) \twoheadrightarrow M_{\mathcal{T}}(f\otimes g)^* 
\end{gather*} induce morphisms in cohomology. Clearly $\pr_f$ and $\pr_{f,g}$ can be defined equivalently from the cohomology of $\mathcal{E}^k$ and $\mathcal{E}^k\times\mathcal{E}^{k'}$. We particularly highlight the following. \begin{itemize} \item in the Betti realisation there are morphisms in the category of mixed Hodge structures: \begin{gather*} \Ext_{\mathrm{MH}_{K_f}}^i(K_f,H^{k+1}_B(W_k,\numberset{Q}(k+1))) \xrightarrow{\pr_f} \Ext^i_{\mathrm{MH}_{K_f}}(K_f,M_B(f)^*), \displaybreak[0]\\ \begin{multlined} \Ext_{\mathrm{MH}_{K_{f,g}}}^i(K_{f,g},H^{k+k'+2}_B(W_k\times W_{k'},\numberset{Q}(2+k+k'))) \\ \xrightarrow{\pr_{f,g}} \Ext^i_{\mathrm{MH}_{K_{f,g}}}(K_{f,g},M_B(f\otimes g)^*). \end{multlined} \end{gather*} \item in the étale realisation over an algebraic closure, there are morphisms in the category of Galois modules: \begin{gather*} H^i(\numberset{Q},H^{k+1}_{\et}(W_{k,\overline{\numberset{Q}}},\numberset{Q}_p(k+1))) \xrightarrow{\pr_f} H^i(\numberset{Q},M_{\etbar}(f)^*), \\ H^i(\numberset{Q},H^{k+k'+2}_{\et}((W_k\!\times\! 
W_{k'})_{\overline{\numberset{Q}}},\numberset{Q}_p(2+k+k'))) \xrightarrow{\pr_{f,g}} H^i(\numberset{Q},M_{\etbar}(f\otimes g)^*) \end{gather*} \end{itemize} When $f=g$, we can compose these with the projections induced by the direct sum decomposition~\eqref{eq:decomposition-m}, to obtain the following morphisms in the category of mixed Hodge structures: \begin{equation*} \begin{tikzcd} \Ext_{\mathrm{MH}_{K_f}}^i(K_f,H^{2k+2}_B(\mathscr{W}_k,\numberset{Q}(2k+2))) \arrow["\pr_{f,f}"]{d} & \Ext^i_{\mathrm{MH}_{K_f}}(K_f,\wedge^2 M_B(f)^*) \\ \Ext^i_{\mathrm{MH}_{K_f}}(K_f,M_B(f\otimes f)^*) \arrow{r} \arrow{ur} & \Ext^i_{\mathrm{MH}_{K_f}}(K_f,\Sym^2 M_B(f)^*) \end{tikzcd} \end{equation*} and the analogous ones in the category of Galois modules: \begin{equation*} \begin{tikzcd} H^i(\numberset{Q},H^{2k+2}_{\et}(\mathscr{W}_{k,\overline{\numberset{Q}}},\numberset{Q}_p(2k+2))) \arrow["\pr_{f,f}"]{d} & H^i(\numberset{Q},\wedge^2 M_{\etbar}(f)^*)\\ H^i(\numberset{Q},M_{\etbar}(f\otimes f)^*) \arrow{r} \arrow{ur} & H^i(\numberset{Q},\Sym^2 M_{\etbar}(f)^*) \end{tikzcd} \end{equation*} \subsection{De Rham cohomology and modular forms} \label{sec:de-rham-kuga-sato} In this subsection we derive explicit descriptions for the de Rham cohomology of Kuga-Sato varieties. To start with, we have the following description of the canonical filtration restricted to the $\epsilon_d$-eigenspaces, as explained in~\cite[202--205]{kato:p-adic}: \begin{equation*} \Fil^i H^{d+1}_{\dR}(W_d,\C)(\epsilon_d) = \begin{cases} H^{d+1}_{\dR}(W_d,\C)(\epsilon_d) &\text{$i\leq 0$} \\ S_{d+2}(N) &\text{$0<i\leq d+1$} \\ 0 &\text{$i>d+1$} \end{cases} \end{equation*} which implies that the space of cusp forms appears in degree $d+1$, while all the rest is concentrated in degree $0$. 
Considering the Hodge decomposition: \begin{equation*} S_{d+2}(N) = \Fil^{d+1} H^{d+1}_{\dR}(W_d,\C)(\epsilon_d) \simeq \bigoplus_{\substack{p+q = d+1\\ p\geq d+1}} H^{p,q}_{\dR}(W_d,\C)(\epsilon_d) = H^{d+1,0}_{\dR}(W_d,\C)(\epsilon_d). \end{equation*} This also shows $H^{0,d+1}_{\dR}(W_d,\C)(\epsilon_d) = \overline{S_{d+2}(N)}$. Since there are no graded pieces in degrees $1$ to $d$, we deduce \begin{equation} \label{eq:cohomology-decompositions} H^{d+1}_{\dR}(W_d,\C)(\epsilon_d) \simeq S_{d+2}(N) \oplus \overline{S_{d+2}(N)}. \end{equation} This direct sum decomposition shows that the $\epsilon_d$-eigenspace of the de Rham cohomology of $W_d$ only contains classes coming from cusp forms. Since we are working with coefficients in a field, the Künneth formula for $\mathscr{W}_d$ becomes an isomorphism \begin{equation*} H^q_B(\mathscr{W}_d,\C) \simeq \bigoplus_{i+j = q} H^i_B(W_d,\C) \otimes H^j_B(W_d,\C). \end{equation*} In particular for $i>d+1$ we have $H^i_B(W_d,\C)(\epsilon_d) = 0$, so we get for $q=2d+2$: \begin{equation*} H^{2d+2}_B(\mathscr{W}_d,\C)(\epsilon_d, \epsilon_d) \simeq H^{d+1}_B(W_d,\C)(\epsilon_d) \otimes H^{d+1}_B(W_d,\C)(\epsilon_d). \end{equation*} Combining this with equation~\eqref{eq:cohomology-decompositions} and the de Rham comparison isomorphism, we get \begin{align*} H^{2d+2}_{\dR}(\mathscr{W}_d,\C)(\epsilon_d,\epsilon_d) &\simeq (S_{d+2}(N)\oplus \overline{S_{d+2}(N)})^{\otimes 2} \\ &= (S_{d+2}(N)\otimes \overline{S_{d+2}(N)}) \oplus (\overline{S_{d+2}(N)} \otimes S_{d+2}(N)) \\ &\phantom{=} \oplus (S_{d+2}(N)\otimes S_{d+2}(N)) \\ &\phantom{=} \oplus (\overline{S_{d+2}(N)} \otimes \overline{S_{d+2}(N)}). 
\end{align*} In the last direct sum decomposition one can easily recognise the four Künneth summands that assemble into the graded pieces of the cohomology group: \begin{subequations} \label{eq:kunneth-components} \begin{align} H^{d+1,d+1}_{\dR}(\mathscr{W}_d,\C)(\epsilon_d,\epsilon_d) &= (S_{d+2}(N)\otimes \overline{S_{d+2}(N)}) \oplus (\overline{S_{d+2}(N)} \otimes S_{d+2}(N)) \label{eq:middle-kunneth}\\ H^{2d+2,0}_{\dR}(\mathscr{W}_d,\C)(\epsilon_d,\epsilon_d) &= (S_{d+2}(N)\otimes S_{d+2}(N)) \label{eq:hol-kunneth} \\ H^{0,2d+2}_{\dR}(\mathscr{W}_d,\C)(\epsilon_d,\epsilon_d) &= (\overline{S_{d+2}(N)} \otimes \overline{S_{d+2}(N)}). \label{eq:nonhol-kunneth} \end{align} \end{subequations} \section{\texorpdfstring{$p$}{p}-adic \texorpdfstring{$L$}{L}-functions} \label{sec:padic-Lfunctions} In this section we introduce the $p$-adic $L$-functions we will study. We regard them as functions on the weight space $\mathcal{W}$, i.e.\ the rigid analytic space whose $K$-points are $\mathcal{W}(K) = \Hom(\numberset{Z}_p^{\times},K^{\times})$ for every extension $K$ of $\numberset{Q}_p$. Arithmetic weights are those of the form $\nu_{\theta,n}$ for $\theta$ a Dirichlet character of $p$-power conductor and $n\in\numberset{Z}$, acting via $\nu_{\theta,n}(z) = \theta(z)z^n$. We will often use the additive notation: the character $\nu_{\theta,n}$ will be denoted by $\theta+n$. Let $\mathcal{W}_{\pm}$ denote the two halves of the weight space whose $M$-points are, for every extension $M/\numberset{Q}_p$: \begin{equation*} \mathcal{W}_{\pm}(M) = \{ \kappa\in\Hom(\numberset{Z}_p^{\times},M^{\times}) \mid \kappa(-1) = \pm 1 \}. \end{equation*} In particular, for the arithmetic weights \begin{equation*} \theta+n = \nu_{\theta,n} \in \mathcal{W}_{\pm} \iff \theta(-1) = \pm (-1)^n. \end{equation*} \subsection{Kubota-Leopoldt and Symmetric square} The Kubota-Leopoldt $p$-adic $L$-function interpolates the values of a complex Dirichlet $L$-function at infinitely many negative integers. 
Let $\chi$ be a Dirichlet character of conductor $N_{\chi}$, and write it as $\chi = \chi_p\chi'$, where $\chi_p$ has $p$-power conductor and $\chi'$ has conductor prime to $p$. Since $p$-power conductor characters may be incorporated into the weight, we can suppose $\chi_p=1$ without loss of generality, i.e.\ $\chi$ has conductor prime to $p$. Let $a_{\chi}$ be $0$ if $\chi$ is even, $1$ if it is odd. Denote the Gauss sum of a character with: \begin{equation*} \tau(\chi) = \sum_{n=1}^{N_{\chi}} e^{\frac{2\pi in}{N_{\chi}}}\chi(n). \end{equation*} \begin{proposition} There exists a unique $p$-adic meromorphic (holomorphic when $\chi$ is not trivial) function $L_p(\chi,\cdot)$ which at arithmetic weights $\theta+n$ satisfies the interpolation property: if $n\leq 0$ and $\chi\theta(-1) = (-1)^{n+1}$, then \begin{align*} L_p(\chi,\theta+n) &= \mathrm{E}_p(\chi\theta^{-1},n)L(\chi\theta^{-1},n) \intertext{while if $n\geq 1$ and $\chi\theta(-1)=(-1)^n$, then} L_p(\chi,\theta+n) &= \mathrm{E}_p(\chi^{-1}\theta,1-n)\frac{2\Gamma(n)i^{a_{\chi}}}{(2\pi i)^n} \frac{\chi(N_{\theta})\tau(\theta^{-1})}{N_{\theta}^n}L(\chi\theta^{-1},n) \end{align*} where $\mathrm{E}_p(\theta,x) = (1-\theta(p)p^{-x})$. \end{proposition} The set of points at which we have interpolation is dense in $\mathcal{W}$, hence $L_p(\chi,\cdot)$ is well-defined. Notice that the choice of a ``parity condition'' on $\theta$ and $n$ is equivalent to choosing one half of the weight space. Indeed, the value of $\theta+n$ at $-1$ determines the parity of the integer $t$ for which $(\theta+n)\vert_{\numberset{F}_p^{\times}} = \omega^t$, $\omega$ being the Teichmüller character. A priori one could choose two unrelated interpolation properties on the two halves of the weight space: following Dasgupta, we have made a choice that yields a functional equation. 
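To make the first interpolation formula concrete, here is a worked instance (ours, not taken from the references): choose $\theta$ trivial and $n=-1$, so that the parity condition $\chi\theta(-1)=(-1)^{n+1}$ simply asks that $\chi$ be even. In this case $\mathrm{E}_p(\chi,-1) = 1-\chi(p)p$, and the standard formula $L(\chi,1-m) = -B_{m,\chi}/m$ for generalised Bernoulli numbers gives
\begin{equation*}
L_p(\chi,-1) = (1-\chi(p)p)\,L(\chi,-1) = -(1-\chi(p)p)\,\frac{B_{2,\chi}}{2},
\end{equation*}
recovering the classical description of Kubota-Leopoldt values at negative integers as Bernoulli numbers with the Euler factor at $p$ removed.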
\begin{theorem} For every $\kappa\in\mathcal{W}$, we have \begin{equation*} L_p(\chi,1-\kappa) = \frac{i^{a_{\chi}}}{\tau(\chi^{-1})}\kappa(N_{\chi})L_p(\chi^{-1},\kappa). \end{equation*} \end{theorem} By contrast, the values of the symmetric square complex $L$-function $L(\Sym^2 f)$ have been interpolated in a $p$-adic $L$-function only when $f$ is ordinary or $v_p(a_p) < \tfrac{k+1}{2}$. Since we do not put restrictions on $v_p(a_p)$ we cannot take advantage of such a result. In particular, in our case the putative function $L_p(\Sym^2 f)$ is \emph{not defined}. The construction of $L_p(\Sym^2 f)$ is one of the main goals of the paper. \subsubsection{The Euler System of cyclotomic classes} Let $U$ be a $\numberset{Z}[\mu_{\phi(N_{\chi})}]$-module with an action of $G_{\numberset{Q}}$. Define the $\numberset{Q}$-vector space: \begin{equation} \label{eq:chi-eigenspace} U^{\chi} = \{ x\in U \mid \sigma(x) = (\chi)_{Gal}^{-1}(\sigma)x, \; \forall \sigma\in G_{\numberset{Q}} \}. \end{equation} By definition of $U^{\chi}$ the absolute Galois group $G_{\numberset{Q}}$ acts through $(\chi)_{Gal}^{-1}$. Consider in particular the case $U = \mathcal{O}_F^{\times}$ for $F$ a number field containing the $N_{\chi}$-th roots of unity, that is for an extension of $\numberset{Q}(\mu_{N_{\chi}})$. Let $\zeta$ be a primitive $N_{\chi}$-th root of unity, then the cyclotomic units are: \begin{equation*} u_{\chi} = \prod_{1 \leq a \leq N_{\chi}} (1-\zeta^a)^{\chi^{-1}(a)} \in (\mathcal{O}_F^{\times})^{\chi}. \end{equation*} We emphasise that the above units are by construction an ``averaged over a twist'' version of the elements \begin{equation*} 1-\zeta^a \in \mathcal{O}_F^{\times} \subseteq H^1_{\mot}(\Spec F,1). \end{equation*} We have written $u_{\chi}$ in multiplicative notation as we want to emphasise the identification $F^{\times} \simeq H^1_{\mot}(\Spec F,1)$ given by the Kummer map. 
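As a concrete illustration (a standard example, not drawn from the references above), take $\chi$ the even quadratic character of conductor $5$, so that $\chi(\pm 1)=1$ and $\chi(\pm 2)=-1$, and let $\zeta = e^{2\pi i/5}$. Since $\chi^{-1}=\chi$,
\begin{equation*}
u_{\chi} = \frac{(1-\zeta)(1-\zeta^4)}{(1-\zeta^2)(1-\zeta^3)}
= \frac{2-2\cos(2\pi/5)}{2-2\cos(4\pi/5)}
= \Bigl(\frac{\sqrt{5}-1}{2}\Bigr)^{2},
\end{equation*}
a power of the fundamental unit of $\numberset{Q}(\sqrt{5})$, the real quadratic field cut out by $\chi$.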
One can also regard $u_{\chi}$ as the image of an element of $\bigl(H^1_{\mot}(\Spec F',1) \otimes \numberset{Q}(\chi)\bigr)^{\chi}$, where $F = F'\numberset{Q}(\mu_{N_{\chi}})$. The above classes constitute one of the earliest examples of Euler systems, and their étale realisations can be made into an Euler system in the sense of Rubin by taking care of the Euler factor in the $p$-direction, see~\cite[\nopp III]{rubin:euler}. These classes all arise as twists of a single Euler system by finite-order characters. In particular, for any character of prime-to-$p$ conductor, the twisted classes $u_{\chi}$ give an Euler system for the representation $\numberset{Q}_p(1)\otimes \chi$, while the ``original'' classes are an Euler system for $\numberset{Q}_p(1)$. The shape of (the étale realisation of) $u_{\chi}$ also comes from the general theory of twists of Euler systems by characters~\cite[\nopp II.4]{rubin:euler}. Accordingly, the $u_{\chi}$ are linked to both complex and $p$-adic $L$-values. Since we will not use these formulæ directly, we only give them for $\chi$ even for the sake of brevity. Write $\log_{\infty}$ and $\log_p$ for the complex and $p$-adic logarithm respectively. The following formulæ for $s=1$ can be found in~\cite{washington:introduction}, pages~37 and~63, while the ones for $s=0$ follow from the functional equations. \begin{proposition} If $\chi$ is even and non-trivial, then: \begin{gather*} L(\chi^{-1},1) = -\frac{\tau(\chi^{-1})}{N_{\chi}}\log_{\infty}(u_{\chi}), \\ L'(\chi,0) = -\log_{\infty}(u_{\chi}). \end{gather*} \end{proposition} \begin{theorem}[Leopoldt] If $\chi$ is even and non-trivial, then: \begin{gather*} L_p(\chi^{-1},1) = -\frac{\tau(\chi^{-1})}{N_{\chi}} \Bigl(1-\frac{\chi^{-1}(p)}{p}\Bigr) \log_p(u_{\chi}), \\ L_p(\chi,0) = -i^{a_{\chi}} \Bigl(1-\frac{\chi^{-1}(p)}{p}\Bigr) \log_p(u_{\chi}).
\end{gather*} \end{theorem} Since we want to allow representations with higher twists, we have to search for analogues of $u_{\chi}$ lying in $H^1_{\mot}(\Spec F,n+1)$ for any $n>0$. Here one can choose to appeal to motivic elements constructed by Beilinson~\cite{beilinson:higher} or by Soulé~\cite{soule:k-theorie}. Even though the two belong to slightly different cohomology groups (Soulé elements have coefficients in an integral lattice), they actually only differ by a rational factor. For further explanation on this, see~\cite{gros:regulateurs}, \cite[§3.2.4 and §5.1]{perrin-riou:fonctions} and~\cite[384]{bloch.kato:l-functions} for a comparison of the elements in motivic cohomology and their regulator formulæ, and~\cite{huber.wildeshaus:classical,huber.kings:bloch-kato}, which also treat the computation of the polylogarithms. The following theorem is~\cite[Corollary~7.1.6]{beilinson:higher}, but the precise form of the regulator is to be found in the aforementioned references of Perrin-Riou and of Bloch and Kato. The reader who wishes a formula in terms of the polylogarithm can consult the other references reported above. \begin{theorem}[Beilinson] \label{thm:deligne-regulator-beilinson-element} Let $n\in\numberset{N}_{\geq 1}$, then \begin{equation*} \dim(H^1_{\mot}(\Spec F,n+1)\otimes\numberset{Q}(\chi))^{\chi} = \begin{cases} 1 &\text{if $\chi(-1) = (-1)^n$} \\ 0 &\text{if $\chi(-1)=(-1)^{n+1}$} \end{cases}. \end{equation*} There exists an element $\phi_{\chi} \in (H^1_{\mot}(\Spec F,n+1)\otimes \numberset{Q}(\chi))^{\chi}$ such that \begin{equation*} \regulator{\del}(\phi_{\chi}) = -n! L(\overline{\chi},n+1). \end{equation*} \end{theorem} According to the theorem, if $\chi(-1)=(-1)^n$ then $H^1_{\mot}(\Spec F,n+1)^{\chi}$ is $1$-dimensional, otherwise it is zero-dimensional, see also~\cite[§4.1]{soule:regulateurs}. We have then a manifestation of the Beilinson conjecture---predicting the injectivity of $\regulator{\del}$---through the above equation.
The parity condition is an indication that the Beilinson conjecture holds true for spectra of number fields. Using the functional equation for $L(\chi,\cdot)$ and the computation of the residue of the Gamma function, we deduce \begin{equation} \label{eq:deligne-regulator-beilinson-element} \regulator{\del}(\phi_{\chi}) = -n!L(\overline{\chi},n+1) = -\Bigl( \frac{2\pi i}{N_{\chi}} \Bigr)^n \tau(\chi)L'(\chi,-n). \end{equation} One also obtains the following formula relating $\phi_{\chi}$ to a $p$-adic $L$-value. Denote with $\log$ the Bloch-Kato logarithm. \begin{theorem}[{\cite[Proposition~3.2.3]{perrin-riou:fonctions}}] \label{thm:syntomic-regulator-beilinson-element} Under the hypothesis $\chi(-1)=(-1)^n$, the étale cohomology class $\regulator{\et}(\phi_{\chi}) \in H^1_{\et}(\Spec F,\numberset{Q}_p(1+n))^{\chi}$ satisfies \begin{equation*} \log\bigl(\regulator{\et}(\phi_{\chi}) \bigr) = \tau(\chi) \Bigl(1-\frac{\chi^{-1}(p)}{p^{1+n}}\Bigr)^{-1} (-1)^{n}n! L_p(\overline{\chi},1+n). \end{equation*} \end{theorem} We warn the reader that our convention differs from Perrin-Riou's, especially regarding the eigenspaces for Galois characters. The eigenspace~\eqref{eq:chi-eigenspace} is $U^{\chi}$ in our notation, while in her notation it would be $U^{\chi^{-1}}$. For this reason we had to adjust Perrin-Riou's proposition to our notation: this is why the above equation differs from the original one by a conjugation of the character. Using the functional equation for $L_p(\chi,\cdot)$, we deduce: \begin{equation} \label{eq:syntomic-regulator-beilinson-element} \log\bigl(\regulator{\et}(\phi_{\chi}) \bigr) = (-1)^n\frac{n!}{N_{\chi}^n} i^{a_{\chi}} \Bigl(1-\frac{\chi^{-1}(p)}{p^{n+1}}\Bigr)^{-1} L_p(\chi,-n). \end{equation} \begin{remark} If we let $n = k-j$, the condition under which $H^1_{\mot}(\Spec F,1+k-j)^{\chi}$ is non-trivial is $k-j \equiv a_{\chi}$ modulo $2$.
When $\chi=\psi$, this is exactly the condition we imposed at page~\pageref{eq:leading-terms} to deduce equation~\eqref{eq:leading-terms}. \end{remark} \paragraph{Integrality of higher cyclotomic classes} \label{par:integrality-cyclotomic} To end our account of higher cyclotomic classes, we show that they are integral, meaning that they lie in $(H^1_{\mot}(\Spec \mathcal{O}_F,1+n)\otimes \numberset{Q}(\chi))^{\chi}$. Besides being interesting in its own right, this will allow for a more precise comparison with the motivic classes we will construct, as the cohomology groups of the ring of integers are smaller. By definition, the motivic cohomology groups of spectra of number fields are \begin{equation*} H^1_{\mot}(\Spec F,1+n) = \numberset{Q} \otimes_{\numberset{Z}} \mathrm{gr}_{1+n}^{\gamma} K_{2n+1}(F). \end{equation*} In particular, motivic groups are torsion-free. Suppose first $n\geq 1$. Then as $2n+1\geq 3$, the $K$-groups satisfy \begin{equation*} K_{2n+1}(F)\otimes_{\numberset{Z}} \numberset{Q} = K_{2n+1}(\mathcal{O}_F)\otimes_{\numberset{Z}} \numberset{Q}. \end{equation*} As all motivic classes are in the torsion-free part of the $K$-group, we can regard $\phi_{\chi}\in (K_{2n+1}(\mathcal{O}_F)\otimes_{\numberset{Z}} \numberset{Q})\otimes\numberset{Q}(\chi) \simeq H^1_{\mot}(\Spec \mathcal{O}_F,n+1)\otimes\numberset{Q}(\chi)$. Since $\phi_{\chi}$ is already in the $\chi$-eigenspace, the claim follows. Suppose now $n=0$. In this case $\phi_{\chi}\in (H^1_{\mot}(\Spec F,1)\otimes\numberset{Q}(\chi))^{\chi} \simeq (F^{\times}\otimes\numberset{Q}(\chi))^{\chi}$. We compare it with the cyclotomic unit: thanks to the formulæ expressing $\phi_{\chi}$ and $u_{\chi}$ in terms of $L$-values, we find \begin{equation*} \phi_{\chi} = -\frac{N_{\chi}}{\tau(\chi^{-1})}u_{\chi}. \end{equation*} Since $F$ contains the $N_{\chi}$-th roots of unity, which are then in its ring of integers, we have $\tau(\chi^{-1}) \in \mathcal{O}_F^{\times}\otimes\numberset{Q}(\chi)$.
Moreover $u_{\chi}\in (\mathcal{O}_F^{\times})^{\chi}$, as is well known. These facts imply that $\phi_{\chi} \in \mathcal{O}_F^{\times}\otimes\numberset{Q}(\chi)$, and since $\phi_{\chi}$ is already in the $\chi$-eigenspace, the claim follows. \subsection{Rankin-Selberg} The Rankin-Selberg $p$-adic $L$-function is usually constructed by interpolating critical values of the complex one. However, this definition is possible only when the weights of $f$ and $g$ are different; otherwise the lack of critical values undermines it. In this subsection we explain a workaround. Recall that $L(f\otimes g,s)$ denotes the complex Rankin-Selberg $L$-function attached to $f \in S_{k+2}(N,\psi)$ and $g\in S_{k'+2}(N',\psi')$; without loss of generality suppose $k\geq k'$. The construction of a $p$-adic $L$-function by interpolation---due to Panchishkin and Hida---relies on the existence of a ``period'' for $L(f\otimes g,s)$ in the range $[k'+2,k+1]$~\cite{shimura:on}. Hida then showed that for $f$ ordinary there exists a unique $p$-adic $L$-function varying analytically as $f$ varies in a Hida family, and with an interpolation property in the above range. Nevertheless, in this paper we are interested in the case $f=g$ supersingular, so we are violating both requirements of that construction. In particular, the value one would like to call $L_p(f,f,s)$ is never obtained by interpolation, but by evaluating a \emph{unique} function on the weight space (with constraints given elsewhere) at appropriately chosen weight-characters. This also entails that the factorisation formula cannot be obtained by interpolation, contrary to what the naive approach would suggest. Furthermore, being in the supersingular case we replace Hida families of modular forms with Coleman families. We do not aim to give here a full-fledged account of the topic of families of modular forms; we only explain the results relevant to our case, following~\cite{loeffler.zerbes:rankin-eisenstein}.
The reader who wishes to know more on the subject can consult the seminal work of Coleman and Mazur~\cite{coleman.mazur:eigencurve}, and the articles~\cite{coleman:p-adic,buzzard:eigenvarieties,emerton:p-adic}. If $U\subseteq \mathcal{W}$ is an affinoid defined over a finite extension $H$ of $\numberset{Q}_p$, denote with $\Lambda_U$ the $\mathcal{O}_H$-algebra of rigid functions on $U$ bounded by $1$. This is isomorphic to the $\mathcal{O}_H$-algebra of formal power series in one variable, $\Lambda_U \simeq \mathcal{O}_H[[T]]$. In the remainder of this subsection, $f$ is supposed to be supersingular. \begin{definition} Let $U\subseteq \mathcal{W}$ be an affinoid disc defined over a finite extension $H$ of $\numberset{Q}_p$, such that the set of classical weights $U \cap \numberset{N}$ is dense in $U$. A \emph{Coleman family} $F$ over $U$ (of tame level $N$) is a power series \begin{equation*} F = \sum_{n\geq 0} a_n(F)q^n \in \Lambda_U[[q]] \end{equation*} with $a_1(F)=1$ and $a_p(F)$ invertible in $\Lambda_U[1/p]$, such that for all but finitely many classical weights $k_0\in U\cap \numberset{N}$, the series $F_{k_0} = \sum_{n\geq 0} a_n(F)(k_0)q^n$ with coefficients in $\mathcal{O}_H$ is the $q$-expansion of a classical modular form of weight $k_0+2$ and level $\Gamma_1(N)\cap\Gamma_0(p)$ which is a normalised eigenform. \end{definition} Roughly speaking, a Coleman family is a formal $q$-expansion with coefficients in $\Lambda_U$, which can be specialised to every weight-character in $U$, to obtain a modular form of that weight-character. The condition on $a_p(F)$ implies that it is never zero, hence all specialisations have finite slope. If $\breve{f}$ is a modular form of weight $\breve{k}+2\geq 2$ and level $\Gamma_1(N)\cap\Gamma_0(p)$, a Coleman family \emph{passing through} $\breve{f}$ is a family $F$ over $U$ such that $\breve{k}\in U$ and $F_{\breve{k}} = \breve{f}$. 
Since $U$ is an affinoid disc, the tame level and tame character are constant along the family. The family $F$ need not exist in general, but it is guaranteed to do so if $\breve{f}$ is a \emph{noble} eigenform, i.e.\ if it is a normalised cuspidal Hecke eigenform such that \begin{itemize} \item it is the $p$-stabilisation of a newform whose Satake parameters at $p$ are distinct (``$p$-regularity''); \item if $v_p(a_p(\breve{f}))=\breve{k}+1$, then the Galois representation $M_{\etbar}(\breve{f})\vert_{G_{\numberset{Q}_p}}$ is not the direct sum of two characters (``non-criticality''). \end{itemize} \begin{theorem}[{\cite[Theorem~4.6.4]{loeffler.zerbes:rankin-eisenstein}}] Suppose $\breve{f}$ is a noble eigenform of weight $\breve{k}+2$; then there exists a disc $U\subseteq \mathcal{W}$ with $\breve{k}\in U$ and a unique Coleman family $F$ over $U$, such that $F_{\breve{k}} = \breve{f}$. \end{theorem} Coleman families are the key ingredient to construct $p$-adic $L$-functions for families of supersingular forms. \begin{theorem}[\cite{loeffler.zerbes:rankin-eisenstein}] \label{thm:geometric-l-function} Let $f,g$ be cusp forms of level $\Gamma_1(N)$ which are supersingular at $p$, with chosen $p$-stabilisations $f_{\alpha_f}$, $g_{\alpha_g}$. Suppose that $N\geq 5$ and that $f_{\alpha_f}$ and $g_{\alpha_g}$ are noble. Let $V_1$ and $V_2$ be affinoid discs around $k$ and $k'$ respectively, $F$ and $G$ Coleman families over $V_1$ and $V_2$ respectively, passing through $f_{\alpha_f}$ and $g_{\alpha_g}$. Then there exists a unique meromorphic rigid function \begin{align*} L_p^{\mathrm{geom}}(F,G) \colon V_1\times V_2 \times \mathcal{W} &\to \C_p \\ (\kappa_1,\kappa_2,\upsilon) &\mapsto L_p^{\mathrm{geom}}(F,G)(\kappa_1,\kappa_2,\upsilon) \end{align*} with the following interpolation property.
Suppose that \begin{itemize} \item $(\kappa_1,\kappa_2,\upsilon) = (k_0,k'_0,j_0)$ for integers $k_0\geq 0$, $k'_0\geq -1$ and $k'_0+1\leq j_0 \leq k_0$; \item $f_{k_0,\alpha} = F_{k_0}$ and $g_{k'_0,\alpha} = G_{k'_0}$ are the $p$-stabilisations of forms $f_{k_0},g_{k'_0}$. \end{itemize} Then \begin{gather*} \begin{multlined} L_p^{\mathrm{geom}}(F,G)(k_0,k'_0,j_0+1) = \frac{j_0!(j_0-k'_0-1)!}{\pi^{2j_0-k'_0+1}(-i)^{k_0-k'_0}2^{2j_0+2+k_0-k'_0}\langle f_{k_0},f_{k_0}\rangle} \\ \cdot\frac{E(f_{k_0},g_{k'_0},j_0+1)}{E(f_{k_0})E^*(f_{k_0})}L(f_{k_0},g_{k'_0},j_0+1) \end{multlined} \intertext{where} E(f) = \Bigl(1-\frac{\beta_f}{p\alpha_f}\Bigr), \quad E^*(f) = \Bigl(1-\frac{\beta_f}{\alpha_f}\Bigr), \\ E(f,g,j+1) = \Bigl(1-\frac{p^j}{\alpha_f\alpha_g}\Bigr)\Bigl(1-\frac{p^j}{\alpha_f\beta_g}\Bigr)\Bigl(1-\frac{\beta_f\alpha_g}{p^{1+j}}\Bigr)\Bigl(1-\frac{\beta_f\beta_g}{p^{1+j}}\Bigr). \end{gather*} \end{theorem} The function defined in the theorem is a $p$-adic $L$-function in three variables, where we have analytic variation as the two modular forms vary in Coleman families, and the last variable over a subset of $\mathcal{W}$. For every specialisation of $F$ and $G$ we obtain a single-variable $p$-adic $L$-function. The $3$-variable function was initially constructed in the main theorem of~\cite[§4.4]{urban:nearly}, but the article had a gap which was addressed in~\cite{andreatta.iovita:triple}. In the meanwhile, Zerbes and the second author gave an alternative proof, which is the one we refer to. If we start from the supersingular eigenforms $f$ and $g$, then their $p$-stabilisations automatically satisfy the second requirement. It suffices then to ask that $f$ and $g$ have distinct Satake parameters to ensure that there exist unique Coleman families passing through their $p$-stabilisations. In this case we can then apply the last theorem to deduce the existence of the geometric $p$-adic $L$-function. 
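The elementary Euler factors $E(f)$, $E^*(f)$ and $E(f,g,j+1)$ appearing in the interpolation formula above are straightforward to evaluate once the Satake parameters are known. The following sketch (ours; the rational toy inputs are invented, since genuine Satake parameters are algebraic numbers) just spells out the formulæ:

```python
from fractions import Fraction

def euler_factors(alpha_f, beta_f, alpha_g, beta_g, p, j):
    """The factors E(f), E*(f) and E(f, g, j+1) from the interpolation
    formula, computed with exact rational arithmetic on toy inputs."""
    E = 1 - Fraction(beta_f, p * alpha_f)
    E_star = 1 - Fraction(beta_f, alpha_f)
    E_fgj = ((1 - Fraction(p**j, alpha_f * alpha_g))
             * (1 - Fraction(p**j, alpha_f * beta_g))
             * (1 - Fraction(beta_f * alpha_g, p**(1 + j)))
             * (1 - Fraction(beta_f * beta_g, p**(1 + j))))
    return E, E_star, E_fgj

E, E_star, E_fgj = euler_factors(2, 3, 5, 7, 11, 1)
assert E == Fraction(19, 22) and E_star == Fraction(-1, 2)
```

With genuine Satake parameters one would of course work in $\overline{\numberset{Q}}_p$; exact rationals are used here only to keep the sketch dependency-free.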
For this reason, from now on we will make the following assumption: \begin{itemize} \item the supersingular form $f$ satisfies $\alpha_f\neq\beta_f$. \end{itemize} Both $p$-stabilisations of $f$ are then noble, so we fix one of them and deduce the existence of a unique Coleman family $F$ through it. \begin{remark} \label{remark:semisimplicity-tp} Clearly the condition $\alpha_f\neq\beta_f$ is equivalent to the fact that the Hecke operator $T_p$ acts semisimply on $M_{\et}(f)$. Coleman and Edixhoven~\cite{coleman.edixhoven:on} showed that this is always the case in weight $2$, and that it is implied by Tate's conjecture in higher weights. Therefore, the case $\alpha_f=\beta_f$ never occurs when $k=0$, and is conjectured never to occur when $k>0$ either. \end{remark} Note that $L_p^{\mathrm{geom}}(F,F)$ is always well-defined: there is no requirement about the weights of the two cusp forms. There are no interpolation results when we specialise the Coleman families at the same arithmetic weights, but we can still speak of the value $L_p^{\mathrm{geom}}(F,F)(k,k,\upsilon)$ for every $\upsilon$. The crucial point is that these values are \emph{not} related to $L(f,f,s)$ for any value of $s$, since the ``interpolation condition'' is not fulfilled. As written above, this is the actual reason why we cannot prove the $p$-adic factorisation theorem simply by interpolating the complex $L$-functions. In the remainder of the article we will use $L_p^{\mathrm{geom}}(F,F)$ as a replacement for the missing Rankin-Selberg $p$-adic $L$-function. Indeed, since it interpolates a range of complex Rankin-Selberg $L$-functions, it can be considered a generalisation of the $p$-adic one. \begin{remark} In the literature there are a few different versions of the interpolation formula for the geometric $p$-adic $L$-function. The above formula agrees with the conventions of~\cite{benois.horte:on}, which in turn draws from~\cite{urban:nearly,andreatta.iovita:triple}.
The version stated in~\cite{loeffler.zerbes:rankin-eisenstein} is slightly different: firstly, it relates $L(f_{k_0},g_{k'_0},j_0+1)$ to the value of $L_p^{\mathrm{geom}}(F,G)$ at $(k_0,k'_0,j_0)$ instead of at $(k_0,k'_0,j_0+1)$. Secondly, the factor $(-i)^{k_0-k'_0}$ is replaced with $(-1)^{k_0-k'_0}$. We underline that we will need this function only in the case where the two cusp forms coincide, therefore this last difference disappears. In that case, passing from one convention to the other is just a matter of translating the last variable by $\pm 1$, so the two versions actually define completely interchangeable $p$-adic functions. \end{remark} \paragraph{Examples of non-noble modular forms} We give here examples of how one can construct non-noble modular forms. In these cases it is not guaranteed that the modular form can be put in a family (this corresponds to the fact that the eigencurve is not necessarily étale over the weight space). Noble forms have level $\Gamma_1(N)\cap\Gamma_0(p)$, so we work with a form $\breve{f}$ of weight $\breve{k}+2 \geq 2$ and level $\Gamma_1(N)\cap\Gamma_0(p)$. In addition, noble forms are the $p$-stabilisations of cusp forms with distinct Satake parameters. When $\breve{f}$ is not the $p$-stabilisation of a form, it is automatically non-noble. Nevertheless, the most interesting case is that of forms which arise as $p$-stabilisations, so we put ourselves in this case. Let then $\breve{f}$ be a modular form of weight $\breve{k}+2$ and level $\Gamma_1(N)\cap\Gamma_0(p)$, which is the $p$-stabilisation of a cusp form $h$ of level $\Gamma_1(N)$. Notice that this implies that $\breve{f}$ is old at $p$. Nobility amounts to two features: $p$-regularity and non-criticality. If we assume the Coleman-Edixhoven conjecture, all cusp forms have distinct Satake parameters, hence $p$-regularity is always satisfied. For $\breve{f}$ to be noble, it must also be non-critical. 
We say that $\breve{f}$ has \emph{critical slope} if $v_p(a_p(\breve{f})) = \breve{k}+1$, and that it is \emph{critical} if it has critical slope, and $M_{\etbar}(\breve{f})\vert_{G_{\numberset{Q}_p}}$ splits as the direct sum of two characters. This amounts to saying that $M_{\etbar}(\breve{f})$ is semisimple as a $G_{\numberset{Q}_p}$-representation. By the Coleman-Edixhoven conjecture, if $v_p(a_p(\breve{f})) \neq \breve{k}+1$ then $\breve{f}$ is automatically noble. Non-noble forms can only be found when $\breve{f}$ has critical slope, and in addition the form is critical. We now apply a result of Bellaïche which characterises forms with critical slope. \begin{proposition}[{\cite[Proposition~2.13 and Remark~2.14]{bellaiche:critical}}] Let $h$ be a newform. Then there exists a $p$-stabilisation of $h$ of critical slope precisely in the following cases: \begin{itemize} \item $h$ is cuspidal, $p$-ordinary and not CM. In this case it is expected that $\breve{f}$ is non-critical; \item $h$ is cuspidal, with CM by an imaginary quadratic field where $p$ is split. In this case $\breve{f}$ is always critical; \item $h$ is Eisenstein. In this case it is expected that $\breve{f}$ is non-critical. \end{itemize} In these cases there exists exactly one $p$-stabilisation of critical slope. \end{proposition} Therefore, non-criticality is violated only in the second case of the proposition. This provides us with an infinite supply of non-noble forms: $p$-stabilisations of cusp forms with CM by an imaginary quadratic field where $p$ splits. On the other hand, in all the other cases the resulting forms are of critical slope, but not critical. In particular, $p$-stabilisations of cusp forms without CM are expected to always be noble. We now give an explicit example of a non-noble modular form, arising as the $p$-stabilisation of a cuspidal form with CM, with $p$ split in the imaginary quadratic field.
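The slope of each $p$-stabilisation, i.e.\ the $p$-adic valuation of the corresponding root of $X^2 - a_p(h)X + \psi(p)p^{\breve{k}+1}$, can be read off the Newton polygon of that quadratic. A minimal sketch (the function name and toy inputs are ours):

```python
from fractions import Fraction

def stabilisation_slopes(v_ap, k):
    """Valuations of the two roots of X^2 - a_p*X + psi(p)*p^(k+1),
    from the Newton polygon with vertices (0, k+1), (1, v(a_p)), (2, 0).
    Pass v_ap=None for a_p = 0 (infinite valuation)."""
    vc = Fraction(k + 1)                     # valuation of the constant term
    if v_ap is not None and Fraction(v_ap) < vc / 2:
        # two segments: slopes v(a_p) and k+1 - v(a_p)
        return sorted([Fraction(v_ap), vc - Fraction(v_ap)])
    return [vc / 2, vc / 2]                  # one segment of slope (k+1)/2

# An ordinary form of weight 2 (k = 0) with a_p a unit: slopes 0 and 1;
# the slope-1 stabilisation is the critical-slope one.
assert stabilisation_slopes(0, 0) == [0, 1]
# A supersingular form of weight 4 (k = 2) with a_p = 0: both slopes are 3/2.
assert stabilisation_slopes(None, 2) == [Fraction(3, 2), Fraction(3, 2)]
```

The critical-slope stabilisation of an ordinary form is thus the one with slope $\breve{k}+1$, as in the explicit example that follows.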
Consider the form with label 27.2.a.a in the $L$-functions and Modular Forms Database~\cite[\href{https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/27/2/a/a/}{Modular Form 27.2.a.a}]{lmfdb}. This is a modular form of level $\Gamma_0(27)$ and weight $2$, with CM by $\numberset{Q}(\sqrt{-3})$. Its $q$-expansion is \begin{equation*} h(q) = q - 2q^4 - q^7 + 5q^{13} + 4q^{16} - 7q^{19} + O(q^{20}). \end{equation*} Let $p=7$, then $a_7(h) = -1$ so it is a $7$-adic unit. The form $h$ is then cuspidal, ordinary and with CM by $\numberset{Q}(\sqrt{-3})$, and in this field the prime $7$ splits as $(7) = (2 - \sqrt{-3})(2 + \sqrt{-3})$. We are then in the second case above. It follows that one of the two $p$-stabilisations of $h$ is critical. Since critical implies critical slope, the correct candidate is the $p$-stabilisation corresponding to the root of $7$-adic valuation $+1$. This is a non-noble modular form which arises as a $p$-stabilisation of a cusp form. \section{The Beilinson-Flach Euler system} In this section we define the cohomology classes that we will use as ingredients to construct our motivic classes. We first introduce Beilinson-Flach classes and explain how they can be extended to the cohomology of Kuga-Sato varieties, and then exploit some differential forms attached to cusp forms. \subsection{From Eisenstein to Beilinson-Flach classes} In this subsection we introduce the Beilinson-Flach classes, constructed in~\cite{lei.loeffler:euler1,kings.loeffler:rankin-eisenstein} as the push forward of Eisenstein classes (constructed by Beilinson) from the motivic cohomology of modular curves, to that of the product of two modular curves. \begin{definition}[{\cite[Definition~5.3.1]{kings.loeffler:rankin-eisenstein}}] The Beilinson-Flach classes defined over $\numberset{Q}$ are motivic classes \begin{equation*} \mathrm{BF}^{[k,k',j]}_{\mot,N} \in H^3_{\mot}(Y_1(N)^2,\TSym^{[k,k']} \mathscr{H}_{\numberset{Q}}(\mathcal{E})(2-j)). 
\end{equation*} We will suppress the subscript $N$, since it is constant in the paper. For any theory $\mathcal{T}\in\{\et,\dR,\mathrm{Betti},\mathcal{D},\syn,\rig\}$ denote $\mathrm{BF}^{[k,k',j]}_{\mathcal{T}} = \regulator{\mathcal{T}}(\mathrm{BF}^{[k,k',j]}_{\mot})$. \end{definition} The key feature of Beilinson-Flach classes is that they provide an Euler system for the Galois representation attached to the convolution of $f$ and $g$. For us the most interesting feature is their relation to the special values of $L'(f,g)$ via a regulator formula: the bridge is the regulator to Deligne cohomology $\regulator{\del}$. For our purposes it is also necessary to extend Beilinson-Flach classes to $X_1(N)^2$ while keeping a similar formula. The ``compactification'' process is done in three steps: \begin{itemize} \item construct cohomology classes over $X_1(N)^2$ extending Beilinson-Flach classes (Subsection~\ref{subsec:compactification-bf}); \item construct an appropriate differential form to pair them with (Subsection~\ref{subsec:differential-forms}); \item prove a version of the regulator formula over $X_1(N)^2$ (Subsubsection~\ref{subsubsec:complex-formula}). \end{itemize} \begin{remark} The map $\regulator{\del}$ can be expressed explicitly as an integral when one starts from $H_{\mot}^3(X,2) \simeq \mathrm{CH}^2(X,1)$ with $X$ a proper variety. Recall that the higher Chow group $\mathrm{CH}^{i+1}(X,1)$ can be described as the cohomology at the middle term of the Gersten complex \begin{equation*} \bigoplus_{y \in Z^{i-1}} K_2(k(y)) \xrightarrow{T} \bigoplus_{x \in Z^i} k(x)^{\times} \xrightarrow{\div} \bigoplus_{z \in Z^{i+1}} \numberset{Z} \end{equation*} where $Z^i$ denotes the set of points of codimension $i$ and $T$ is the tame symbol. Hence every element of the motivic cohomology group $H^3_{\mot}(X,2)$ also has a description as an element of $\mathrm{CH}^2(X,1)$, i.e.\ as a sum $\sum_j (Z_j,f_j)$ where $Z_j \subseteq X$ are subvarieties of codimension $1$, and $f_j$ are functions such that $\sum_j \div(f_j) = 0$.
In this case the regulator can be written as: \begin{align*} \regulator{\del} \colon \mathrm{CH}^i(X,1) &\to H^{2d-i,2d-i}_{\dR}(X,\C)^{\vee} \simeq H^{i,i}_{\dR}(X,\C) \\ \sum_j (Z_j,f_j) &\mapsto \Bigl(\omega \mapsto \sum_j \int_{Z_j} \log|f_j| \omega\Bigr). \end{align*} Notice the similarity with Definition~\ref{def:cycle-class-map}: as we suggested, there is a tight relation between $\regulator{\del}$ and $\cl_{\dR}$ which we will investigate in Subsection~\ref{subsec:complex-argument}. Specialise to the case $X = X_1(N)^2$. When $k = k' = 0$ and $j=0$, we end up with Beilinson-Flach classes lying in $H^3_{\mot}(X_1(N)^2,2) \simeq \mathrm{CH}^2(X_1(N)^2,1)$, which corresponds to the case $i=2$ above. However, aside from this special case we cannot interpret the motivic cohomology groups as Chow groups, because the coefficients are not trivial. We can overcome this issue by employing the results of Section~\ref{sec:cohomology-kuga-sato} and working with motivic cohomology of Kuga-Sato varieties: since in this case the coefficients are always trivial, we can rely on the Chow groups interpretation and fall back on geometric techniques to prove our claims. \end{remark} \subsection{Compactification of Beilinson-Flach classes} \label{subsec:compactification-bf} In this subsection we explain how to pass from Beilinson-Flach classes defined over $Y_1(N)^2$ (equivalently, over $\mathcal{E}^k \times \mathcal{E}^k$) to a version defined over $X_1(N)^2$ (equivalently, over $\mathscr{W}_k$). The key difference is that $\mathcal{E}^k\times\mathcal{E}^k$ is not proper, while $\mathscr{W}_k$ is both proper and smooth. Indeed, the motivation for this entire process is to place ourselves in a position to use the full force of cohomology, in particular Hodge theory. The process goes under the name \emph{compactification of motivic classes}. The idea is to find an element defined over $\mathscr{W}_k$ that pulls back to the Beilinson-Flach one.
The defect should be supported only on the cuspidal locus $\mathscr{W}_k^{\infty}$. In other words, we are searching for an element which \begin{itemize} \item makes Beilinson-Flach classes defined over $\mathscr{W}_k$; \item is defined over $\mathscr{W}_k^{\infty}$, i.e.\ is \emph{negligible}. \end{itemize} This task is carried out in~\cite[§7-8]{brunault.chida:regulators}, which we now explain. Let \begin{align*} \mathcal{E}^{k,k'} &= \mathcal{E}^k \times \mathcal{E}^{k'}, &\; \hat{\mathcal{E}}^{k,k',*} &= \hat{\mathcal{E}}^{k,*}\times \hat{\mathcal{E}}^{k',*}, \\ Z^k &= \hat{\mathcal{E}}^{k,*} \setminus \mathcal{E}^k = \hat{\mathcal{E}}^{k,*} \times_{X_1(N)} X_1(N)^{\infty}, &\; Z^{k,k'} &= Z^k\times Z^{k'}, \\ U^{k,k'} &= \hat{\mathcal{E}}^{k,k',*} \setminus Z^{k,k'}. &\; \phantom{Z^{k,k'}} &\phantom{= Z^k\times Z^{k'}} \end{align*} One should think of $Z^k$ as the smooth cuspidal locus (i.e.\ the fibre over the cusps) and $U^{k,k'}$ as the ``open'' locus. By definition $U^k = \mathcal{E}^k$ at least set-theoretically, but $U^{k,k'}$ is strictly larger than $\mathcal{E}^k \times \mathcal{E}^{k'}$. For example, it contains both $Z^k \times \mathcal{E}^{k'}$ and $\mathcal{E}^k \times Z^{k'}$. We naturally have \begin{equation*} \mathcal{E}^{k+k'} \hookrightarrow \mathcal{E}^k \times \mathcal{E}^{k'} \hookrightarrow U^{k,k'}. \end{equation*} Brunault and Chida define lifted classes $\widetilde{\mathrm{BF}}^{[k,k',j]}_{\mot}$ as the push-forward of Eisenstein classes to the cohomology of $U^{k,k'}$ rather than of $\mathcal{E}^{k,k'}$. The lifted classes then live in $H^{3+k+k'}_{\mot}(U^{k,k'},2+k+k'-j)$ and pull-back to the standard ones. 
The localisation sequence $Z^{k,k'} \to \hat{\mathcal{E}}^{k,k',*} \to U^{k,k'}$ induces a long exact sequence \begin{multline*} \cdots\to H^{3+k+k'}_{\mot}(\hat{\mathcal{E}}^{k,k',*},2+k+k'-j)(\epsilon_k,\epsilon_{k'}) \\ \to H^{3+k+k'}_{\mot}(U^{k,k'},2+k+k'-j)(\epsilon_k,\epsilon_{k'}) \\ \xrightarrow{\Res} H^{k+k'}_{\mot}(Z^{k,k'},k+k'-j)(\epsilon_k,\epsilon_{k'}) \to\cdots \end{multline*} The residue $\Res(\widetilde{\mathrm{BF}}^{[k,k',j]}_{\mot})$ is non-zero only when $j=0$. On the one hand, for $j>0$ the lifted class has a preimage in the first group by exactness. On the other hand, when $j=0$ one reaches the same conclusion by a diagram chase: the ``cuspidal embedding'' $i_{\mathrm{cusp}} \colon Z^k \times \mathcal{E}^{k'} \hookrightarrow U^{k,k'}$ induces a push-forward morphism \begin{equation*} H^{1+k+k'}_{\mot}(Z^k \times \mathcal{E}^{k'}, 1+k+k') \xrightarrow{i_{\mathrm{cusp},*}} H^{3+k+k'}_{\mot}(U^{k,k'},2+k+k'). \end{equation*} It turns out that there exists an element $\xi_{\beta} \in H^{k+k'+1}_{\mot}(Z^k \times \mathcal{E}^{k'}, k+k'+1)(\epsilon_k,\epsilon_{k'})$ satisfying $\Res\circ i_{\mathrm{cusp},*}(\xi_{\beta}) = \Res(\widetilde{\mathrm{BF}}^{[k,k',0]}_{\mot})$. This shows that the element \begin{equation} \label{eq:brunault-chida-lift} \begin{cases} \widetilde{\mathrm{BF}}^{[k,k',0]}_{\mot} - i_{\mathrm{cusp},*}(\xi_{\beta}) &\text{if $j=0$}\\ \widetilde{\mathrm{BF}}^{[k,k',j]}_{\mot} &\text{if $j>0$} \end{cases} \tag{$\sharp$} \end{equation} has residue always equal to zero. We then denote with $\widetilde{\Xi}^{k,k',j}$ an arbitrary choice of a preimage of this element in $H^{3+k+k'}_{\mot}(\hat{\mathcal{E}}^{k,k',*},2+k+k'-j)(\epsilon_k,\epsilon_{k'})$.
Recall now that Theorem~\ref{thm:brunault-chida} establishes the existence of an isomorphism \begin{equation*} H_{\mot}^{3+k+k'}(W_k \times W_{k'}, \numberset{Q}(j))(\epsilon_k,\epsilon_{k'}) \simeq H_{\mot}^{3+k+k'}(\hat{\mathcal{E}}^{k,*} \times \hat{\mathcal{E}}^{k',*}, \numberset{Q}(j))(\epsilon_k,\epsilon_{k'}) \end{equation*} which shows that $\widetilde{\Xi}^{k,k',j}$ lifts to $W_k\times W_{k'}$, which is $\mathscr{W}_k$ when $k=k'$. Reversing the argument shows that $\widetilde{\Xi}^{k,k',j}$ pulls back to $\mathrm{BF}^{[k,k',j]}_{\mot}$. Indeed, its pull back from $\hat{\mathcal{E}}^{k,k',*}$ to $\mathcal{E}^{k,k'}$ factors through $U^{k,k'}$. But by definition, its image in the cohomology of $U^{k,k'}$ is the class~\eqref{eq:brunault-chida-lift}, which clearly pulls back to $\mathrm{BF}^{[k,k',j]}_{\mot}$ as the cuspidal contribution maps to zero. Hence $\widetilde{\Xi}^{k,k',j}$ is a lift of $\mathrm{BF}^{[k,k',j]}_{\mot}$ in the cohomology of $W_k\times W_{k'}$, as wanted. These motivic classes are the \emph{first version} of compactified Beilinson-Flach classes, as defined by Brunault and Chida. For any cohomology theory $\mathcal{T}\in\{\et,\dR,\mathrm{Betti},\mathcal{D},\syn,\rig\}$ denote $\widetilde{\Xi}^{k,k',j}_{\mathcal{T}} = \regulator{\mathcal{T}}(\widetilde{\Xi}^{k,k',j})$. \begin{remark} \label{rmk:pairing-unchanged} Notice that the defect between $\mathrm{BF}^{[k,k',j]}_{\mot}$ and $\widetilde{\Xi}^{k,k',j}$ is supported purely on $Z^k\times \mathcal{E}^{k'}$. Therefore, its pairing with differential forms attached to cusp forms vanishes. Hence, as long as we stick to cusp forms, the classes $\widetilde{\Xi}^{k,k',j}$ and $\mathrm{BF}^{[k,k',j]}_{\mot}$ satisfy the same regulator formulæ. A precise statement is given in~\cite[Proposition~8.3]{brunault.chida:regulators}, which we will recall later. \end{remark} We now define a second version of compactified Beilinson-Flach classes for the coincident weights case. 
Since the restriction map in cohomology from $\hat{\mathcal{E}}^{k,k',*}$ to $U^{k,k'}$ is not guaranteed to be injective, the class $\widetilde{\Xi}^{k,k',j}$ is in general not uniquely determined. Suppose now $k=k'$ and let $\rho'$ be the involution of $\mathscr{W}_k = W_k\times W_k$ swapping the two components of the fibre product. \begin{definition} The \emph{second version} of the motivic compactified Beilinson-Flach classes is \begin{equation*} \Xi^{k,k,j} = \frac{1}{2}(\widetilde{\Xi}^{k,k,j} + (-1)^{k+j}(\rho')^*(\widetilde{\Xi}^{k,k,j})) \in H^{3+2k}_{\mot}(\hat{\mathcal{E}}^{k,k,*},2+2k-j)(\epsilon_k,\epsilon_k). \end{equation*} By the same argument as before, we regard $\Xi^{k,k,j}\in H^{3+2k}_{\mot}(\mathscr{W}_k,2+2k-j)(\epsilon_k,\epsilon_k)$. For any cohomology theory $\mathcal{T}\in\{\et,\dR,\mathrm{Betti},\mathcal{D},\syn,\rig\}$ denote $\Xi^{k,k,j}_{\mathcal{T}} = \regulator{\mathcal{T}}(\Xi^{k,k,j})$. \end{definition} The element $\Xi^{k,k,j}$ is still a lifting of the non-compactified classes: its pull-back to $\mathcal{E}^k\times\mathcal{E}^k$ coincides with \begin{equation*} \frac{1}{2}(\mathrm{BF}^{[k,k,j]}_{\mot} + (-1)^{k+j}(\rho')^*\mathrm{BF}^{[k,k,j]}_{\mot}) = \frac{1}{2}(\mathrm{BF}^{[k,k,j]}_{\mot} + \mathrm{BF}^{[k,k,j]}_{\mot}) = \mathrm{BF}^{[k,k,j]}_{\mot} \end{equation*} where we have used the properties of non-compactified classes, see Subsubsection~\ref{subsubsec:swapping}. $\Xi^{k,k,j}$ also satisfies an extra useful property: by definition, the involution $\rho'$ acts on $\Xi^{k,k,j}$ with eigenvalue $(-1)^{k+j}$. \subsection{Classes in the cohomology of \texorpdfstring{$f$}{f}} In this subsection we explain how to pass from Beilinson-Flach classes associated to the triple $(k,k',j)$ to classes associated to the triple $(f,g,j)$. In the sequel we will mainly be interested in the case $f=g$.
It is important to stress that the arguments of this subsection apply only to a restricted choice of cohomology theories; in particular they do \emph{not} apply to motivic classes. \subsubsection{Classes in Deligne cohomology} In Deligne cohomology we have classes $\widetilde{\Xi}^{k,k',j}_{\mathcal{D}}$. Following Subsection~\ref{sec:complex-diagram} there is an isomorphism \begin{multline*} H^{3+k+k'}_{\mathcal{D}}(W_k \times W_{k'}, K_{f,g}(2+k+k'-j)) \\ \xrightarrow{\simeq} \Ext^1_{\mathrm{MH}_{K_{f,g}}}(K_{f,g},H^{2+k+k'}_B(W_k\times W_{k'},K_{f,g}(2+k+k'-j))) \end{multline*} which we compose with the functorial morphism induced by $\pr_{f,g}$ \begin{multline*} \Ext^1_{\mathrm{MH}_{K_{f,g}}}(K_{f,g},H^{2+k+k'}_B(W_k\times W_{k'},K_{f,g}(2+k+k'-j))) \\ \xrightarrow{\pr_{f,g}} \Ext^1_{\mathrm{MH}_{K_{f,g}}}(K_{f,g},M_B(f\otimes g)^*(-j)). \end{multline*} We define $\widetilde{\Xi}^{f,g,j}_{\mathcal{D}}$ as the image of $\widetilde{\Xi}^{k,k',j}_{\mathcal{D}}$ under this composition. When $k=k'$ we define $\Xi^{f,g,j}_{\mathcal{D}}$ as the image of $\Xi^{k,k,j}_{\mathcal{D}}$ under the same composition. The first isomorphism still exists when we replace $W_k\times W_{k'}$ with $\mathcal{E}^k\times\mathcal{E}^{k'}$. This follows from the isomorphism expressing the cohomology of $\mathcal{E}^k\times\mathcal{E}^{k'}$ as the cohomology of $Y_1(N)^2$, which is affine, and by Proposition~\ref{prop:isomorphism-for-affine-deligne}. Indeed, $Y_1(N)^2$ is affine of dimension $2$, hence its third Betti cohomology group is trivial, and the zero group clearly admits no non-zero morphisms from the trivial structure. Furthermore, the morphism $\pr_{f,g}$ is also defined on the $\Ext$-groups with coefficients in the cohomology of $\mathcal{E}^k\times\mathcal{E}^{k'}$. Hence the whole composition is well-defined even when starting from the Deligne cohomology of $\mathcal{E}^k\times\mathcal{E}^{k'}$.
We then define $\mathrm{BF}^{[f,g,j]}_{\mathcal{D}}$ as the image of $\mathrm{BF}^{[k,k',j]}_{\mathcal{D}}$ under this composition. \subsubsection{Classes in étale cohomology} In étale cohomology we have classes $\widetilde{\Xi}^{k,k',j}_{\et}$. Following Subsection~\ref{sec:p-adic-diagram} there is a morphism \begin{multline*} H^{3+k+k'}_{\et}(W_k \times W_{k'}, \numberset{Q}_p(2+k+k'-j)) \\ \to H^1(\numberset{Q}(\mu_N),H^{2+k+k'}_{\et}((W_k\times W_{k'})_{\overline{\numberset{Q}}},\numberset{Q}_p(2+k+k'-j))) \end{multline*} which we compose with the functorial morphism induced by $\pr_{f,g}$ \begin{multline*} H^1(\numberset{Q}(\mu_N),H^{2+k+k'}_{\et}((W_k\times W_{k'})_{\overline{\numberset{Q}}},\numberset{Q}_p(2+k+k'-j))) \\ \xrightarrow{\pr_{f,g}} H^1(\numberset{Q}(\mu_N),M_{\etbar}(f\otimes g)^*(-j)). \end{multline*} We define $\widetilde{\Xi}^{f,g,j}_{\et}$ as the image of $\widetilde{\Xi}^{k,k',j}_{\et}$ under this composition. When $k=k'$ we define $\Xi^{f,g,j}_{\et}$ as the image of $\Xi^{k,k,j}_{\et}$ under the same composition. The first morphism still exists when we replace $W_k\times W_{k'}$ with $\mathcal{E}^k\times\mathcal{E}^{k'}$. As before, this follows from the identification of the cohomology of $\mathcal{E}^k\times\mathcal{E}^{k'}$ with that of $Y_1(N)^2$, which is affine, and by Proposition~\ref{prop:morphism-for-affine-etale} (again because the third cohomology group of the $2$-dimensional affine scheme $Y_1(N)^2$ is trivial when we base change to $\overline{\numberset{Q}}$). Furthermore, the morphism $\pr_{f,g}$ is also defined on the Galois cohomology groups with coefficients in the cohomology of $\mathcal{E}^k\times\mathcal{E}^{k'}$. Hence the whole composition is well-defined even when starting from the étale cohomology of $\mathcal{E}^k\times\mathcal{E}^{k'}$. We then define $\mathrm{BF}^{[f,g,j]}_{\et}$ as the image of $\mathrm{BF}^{[k,k',j]}_{\et}$ under this composition.
From the above discussion, when $f=g$ we have three different classes in the cohomology of $M_{\bullet}(f\otimes f)^*(-j)$ for $\bullet\in\{B,\etbar\}$. Explicitly \begin{align*} \mathrm{BF}^{[f,f,j]}_{\mathcal{D}},\; \widetilde{\Xi}^{f,f,j}_{\mathcal{D}},\; \Xi^{f,f,j}_{\mathcal{D}} &\in \Ext^1_{\mathrm{MH}_{K_f}}(K_f,M_B(f\otimes f)^*(-j)), \\ \mathrm{BF}^{[f,f,j]}_{\et},\; \widetilde{\Xi}^{f,f,j}_{\et},\; \Xi^{f,f,j}_{\et} &\in H^1(\numberset{Q}(\mu_N),M_{\etbar}(f\otimes f)^*(-j)). \end{align*} We record here a useful result which we will need in Subsection~\ref{subsec:integrality}. \begin{proposition} \label{prop:classes-of-f-coincide} For every $j$, the classes $\mathrm{BF}^{[f,f,j]}_{\et}$, $\widetilde{\Xi}^{f,f,j}_{\et}$ and $\Xi^{f,f,j}_{\et}$ coincide. \end{proposition} \begin{proof} We show first that $\Xi^{f,f,j}_{\et}$ coincides with $\mathrm{BF}^{[f,f,j]}_{\et}$. Let $\iota\colon \mathcal{E}^k\times\mathcal{E}^k \hookrightarrow \mathscr{W}_k$ be the natural embedding. The pull-back of $\Xi^{k,k,j}_{\et}$ under $\iota$ coincides with $\mathrm{BF}^{[k,k,j]}_{\et}$. The morphism $\iota$ also induces a morphism \begin{gather*} H^1(\numberset{Q},H^{2k+2}_{\et}(\mathscr{W}_{k,\overline{\numberset{Q}}},\numberset{Q}_p(2+2k-j))) \xrightarrow{\iota^*} H^1(\numberset{Q},H^{2k+2}_{\et}(\mathcal{E}^{k,k}_{\overline{\numberset{Q}}},\numberset{Q}_p(2+2k-j))) \intertext{which further induces} H^1(\numberset{Q},M_{\etbar}(f\otimes f)^*(-j)) \xrightarrow{\iota^*} H^1(\numberset{Q},M_{\etbar}(f\otimes f)^*(-j)). \end{gather*} This last morphism is actually an isomorphism, as explained in Subsection~\ref{subsec:motives}. The images of $\Xi^{k,k,j}_{\et}$ and $\mathrm{BF}^{[k,k,j]}_{\et}$ in this last cohomology group are $\Xi^{f,f,j}_{\et}$ and $\mathrm{BF}^{[f,f,j]}_{\et}$ respectively. Since $\iota^*\Xi^{k,k,j}_{\et} = \mathrm{BF}^{[k,k,j]}_{\et}$ and the morphisms to $H^1(\numberset{Q},M_{\etbar}(f\otimes f)^*(-j))$ commute with $\iota^*$, we obtain that $\iota^*\Xi^{f,f,j}_{\et} = \mathrm{BF}^{[f,f,j]}_{\et}$.
But $\iota^*$ is now an isomorphism of this last group, so the two cohomology classes coincide. The argument goes through unchanged if we replace $\Xi^{\bullet}_{\et}$ with $\widetilde{\Xi}^{\bullet}_{\et}$, thus showing that $\widetilde{\Xi}^{f,f,j}_{\et} = \mathrm{BF}^{[f,f,j]}_{\et}$. This proves the claim that all three classes coincide. \end{proof} The proof goes through \emph{mutatis mutandis} for the case of the classes in the cohomology of $M_B(f\otimes f)^*(-j)$, because this last group also has canonical isomorphisms to the cohomologies of $\mathcal{E}^k\times\mathcal{E}^k$ and $\mathscr{W}_k$. Therefore, the classes $\mathrm{BF}^{[f,f,j]}_{\mathcal{D}}$, $\widetilde{\Xi}^{f,f,j}_{\mathcal{D}}$ and $\Xi^{f,f,j}_{\mathcal{D}}$ also coincide. \subsubsection{Properties under swapping involutions} \label{subsubsec:swapping} We collect here some properties regarding the behaviour of both compactified and non-compactified Beilinson-Flach classes under three different involutions, when $f=g$. First of all, non-compactified classes belong to \begin{equation*} H^{3+2k}_{\mot}(\mathcal{E}^k\times\mathcal{E}^k,2+2k-j) \simeq H^3_{\mot}(Y_1(N)^2,\TSym^{[k,k]} \mathscr{H}_{\numberset{Q}}(\mathcal{E})(2-j)). \end{equation*} This cohomology group carries two different involutions. Let $\rho'$ and $\rho$ be the involutions swapping the components of $\mathcal{E}^k\times\mathcal{E}^k$ and of $Y_1(N)^2$ respectively. Since $\mathcal{E}^k$ is $k$-dimensional over $Y_1(N)$, we have $(\rho')^* = (-1)^k\rho^*$. Hence the $(\pm 1)$-eigenspaces of $H^3_{\mot}(Y_1(N)^2,\TSym^{[k,k]}\mathscr{H}_{\numberset{Q}}(\mathcal{E})(2-j))$ correspond to the $\pm (-1)^k$-eigenspaces of $H^{3+2k}_{\mot}(\mathcal{E}^k\times\mathcal{E}^k,2+2k-j)$.
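As a sanity check on this sign in the lowest weights: for $k=0$ one has $\mathcal{E}^0\times\mathcal{E}^0 = Y_1(N)^2$, so $\rho' = \rho$ and the correspondence of eigenspaces is the identity, consistently with $(-1)^0 = 1$. For $k=1$ instead $(\rho')^* = -\rho^*$, so for example the $(+1)$-eigenspace of $\rho^*$ on $H^3_{\mot}(Y_1(N)^2,\TSym^{[1,1]}\mathscr{H}_{\numberset{Q}}(\mathcal{E})(2-j))$ corresponds to the $(-1)$-eigenspace of $(\rho')^*$ on $H^{5}_{\mot}(\mathcal{E}^1\times\mathcal{E}^1,4-j)$.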
According to~\cite[Proposition~5.2.3]{kings.loeffler:rankin-eisenstein1} the non-compactified classes satisfy \begin{gather*} \rho^* \mathrm{BF}^{[k,k,j]}_{\mot} = (-1)^j \mathrm{BF}^{[k,k,j]}_{\mot}, \\ (\rho')^* \mathrm{BF}^{[k,k,j]}_{\mot} = (-1)^{k+j} \mathrm{BF}^{[k,k,j]}_{\mot}. \end{gather*} Furthermore, for $\mathcal{T}\in\{\mathcal{D},\et\}$ the classes $\mathrm{BF}^{[f,f,j]}_{\mathcal{T}}$ are in cohomology groups with coefficients in $M_{\mathcal{T}'}(f\otimes f)^*(-j)$ for $\mathcal{T}'\in\{B,\etbar\}$ respectively. We can then consider a third involution on these groups, induced by the involution $s$ swapping the two components of the tensor product, see Subsection~\ref{subsec:motives}. Notice that $s$ is purely algebraic: it does not detect any geometric structure. Since the Künneth isomorphism is induced by the cup-product, which is commutative or alternating depending on the degree, the involution $s$ differs from $\rho^*$ and $(\rho')^*$ by a sign that also depends only on the degree. In particular, the coefficients of $\mathrm{BF}^{[f,f,j]}_{\mathcal{T}}$ are in cohomology groups of degrees $2k+2$ or $2$ according to the chosen interpretation. This means that we are taking tensor products of classes in degrees $k+1$ and $1$ respectively. Hence these classes satisfy: \begin{equation*} s(\mathrm{BF}^{[f,f,j]}_{\mathcal{T}}) = -\rho^* \mathrm{BF}^{[f,f,j]}_{\mathcal{T}} = (-1)^{k+1}(\rho')^* \mathrm{BF}^{[f,f,j]}_{\mathcal{T}} = (-1)^{j+1}\mathrm{BF}^{[f,f,j]}_{\mathcal{T}}. \end{equation*} Therefore, by controlling the parity of $j$ we are choosing whether Beilinson-Flach classes belong to cohomology with coefficients in $\Sym^2 M_{\mathcal{T}'}(f)^*$ or in $\wedge^2 M_{\mathcal{T}'}(f)^*$. Regarding compactified classes, we cannot say anything about the classes $\widetilde{\Xi}^{k,k,j}$ since they are chosen arbitrarily. We can, however, still say something about their image in the cohomology of $U^{k,k}$.
The class $\widetilde{\mathrm{BF}}^{[k,k,j]}_{\mot}$ pulls back to $\mathrm{BF}^{[k,k,j]}_{\mot}$, so it has the same behaviour under the considered involutions. However, when $j=0$ the extra contribution $i_{\mathrm{cusp},*}(\xi_{\beta})$ does not respect any symmetry property. It is actually defined asymmetrically with respect to the fibre product from the very beginning. The second version of compactified classes $\Xi^{k,k,j}$ has much simpler behaviour, which is precisely the reason for their introduction: by construction \begin{align*} (\rho')^* \Xi^{k,k,j} &= (-1)^{k+j}\Xi^{k,k,j}, \\ \rho^* \Xi^{k,k,j} &= (-1)^k(-1)^{k+j} \Xi^{k,k,j} = (-1)^j \Xi^{k,k,j}. \end{align*} Moreover, analogously to the above we have \begin{equation*} s(\Xi^{f,f,j}_{\mathcal{T}}) = -\rho^* \Xi^{f,f,j}_{\mathcal{T}} = (-1)^{k+1}(\rho')^* \Xi^{f,f,j}_{\mathcal{T}} = (-1)^{j+1}\Xi^{f,f,j}_{\mathcal{T}}. \end{equation*} The parity of $j$ then also controls whether $\Xi^{k,k,j}$ belongs to cohomology with coefficients in $\Sym^2 M_{\mathcal{T}'}(f)^*$ or $\wedge^2 M_{\mathcal{T}'}(f)^*$. Controlling when Beilinson-Flach classes have coefficients in $\wedge^2 M_{\mathcal{T}'}(f)^*$ is crucial for us. Indeed, we want to construct classes to compare with higher cyclotomic classes, drawing from the decomposition of Galois representations. Higher cyclotomic classes give values for the $L$-function associated to the representation $\wedge^2 \rho_{f,v}$, hence if we want to construct classes to compare with them, the only natural choice is the $\wedge^2$-component. In the sequel we will rely heavily on the properties of the non-compactified classes in order to draw conclusions, working around the cuspidal contribution. \subsection{Differential forms attached to cusp forms} \label{subsec:differential-forms} In this subsection we seek a differential form to pair with the image of the compactified Beilinson-Flach classes under $\regulator{\del}$ (see diagram in figure~\ref{diag:complex-compatibility}).
Such a differential would belong to the cohomology group $H^{k+1,k+1}_{\dR}(\mathscr{W}_k, \C)(\epsilon_k)$. In order to construct it we start from modular forms. Recall from Subsection~\ref{sec:de-rham-kuga-sato} that the middle Hodge component is isomorphic to $(S_{k+2}(N)\otimes \overline{S_{k+2}(N)}) \oplus (\overline{S_{k+2}(N)} \otimes S_{k+2}(N))$. Starting from cusp forms, the Eichler-Shimura isomorphism gives \begin{align*} S_{k+2}(N) &\xrightarrow{\simeq} \Fil^1 H^{k+1}_{\dR}(W_k,\C)(\epsilon_k) \\ f &\mapsto \omega_f \end{align*} where $\omega_f$ is the unique $(k+1)$-form whose pull-back to $Y_1(N)$ (through $\mathcal{E}^k$) is \begin{equation*} (2\pi i)^{k+1} f(\tau)\mathrm{d}\tau w^{(k,0)} = (2\pi i)^k f(q) \frac{\mathrm{d} q}{q} w^{(k,0)} \in H^{1}_{\dR}(Y_1(N),\TSym^k \mathscr{H}_{\C}(\mathcal{E})). \end{equation*} Here we denote by $w = \mathrm{d} z$ the standard section of $\mathscr{H}_{\C}(\mathcal{E})$, and $w^{(r,s)} = w^r \overline{w}^s \in \TSym^k \mathscr{H}_{\C}(\mathcal{E})$. This prescription defines not merely a de Rham cohomology class, but an actual differential form. Since the Eichler-Shimura isomorphism is Hecke-equivariant, the form $\omega_f$ is in the $f$-isotypical component of the de Rham cohomology group. The dual of $\omega_f$ under the Poincaré pairing belongs to the dual of $H^{k+1,0}_{\dR}(W_k,\C)$, which is $H^{0,k+1}_{\dR}(W_k,\C)$, and we denote this form by $\eta_{f^*}$. We can actually make this explicit: \begin{equation*} \eta_{f^*} = \frac{1}{(\omega_f,\overline{\omega_f})} \overline{\omega_f} \in H^{0,k+1}_{\dR}(W_k,\C)(\epsilon_k) \simeq \overline{S_{k+2}(N)}. \end{equation*} The form $\eta_{f^*}$ lies in the $f^*$-isotypical component, as the Hecke operators act on it with eigenvalues which are conjugates of those of $f$. We also have the relation $(\omega_f,\overline{\omega_f}) = (-4\pi)^{k+1} \langle f,f \rangle$. Suppose now that $f$ is a \emph{newform}.
By Strong Multiplicity One, $\omega_f$ spans the $f$-isotypical component inside the $(k+1,0)$ Hodge component, which is $1$-dimensional. Analogously, $\eta_f$ spans the $f$-isotypical component inside the $(0,k+1)$ Hodge component. \begin{lemma} If $f,g \in S_{k+2}(N)^{new}$ are newforms, then the $(f,g)$-isotypical component $M_{\dR}(f\otimes g) \otimes \C \leq H^{2k+2}_{\dR}(\mathscr{W}_k, \C)(\epsilon_k)$ is $4$-dimensional with basis \begin{equation*} \{\omega_f\otimes \omega_g, \omega_f \otimes \eta_g, \eta_f\otimes \omega_g, \eta_f\otimes \eta_g\}. \end{equation*} \end{lemma} \begin{proof} This follows from the decompositions~\eqref{eq:kunneth-components} simply by taking the relevant eigenspace. \end{proof} Let $\Omega_{f,g} = \omega_f \otimes \eta_g - \eta_f\otimes \omega_g \in M_{\dR}(f\otimes g)$. It is clear that $\Omega_{f,g} \neq 0$, as it is a non-trivial combination of tensors of linearly independent differential forms; hence it determines a filtration by a $1$-dimensional subspace: \begin{equation*} 0 \subseteq \langle \Omega_{f,g} \rangle \subseteq M_{\dR}(f\otimes g) \otimes \C. \end{equation*} When $f=g$, $M_{\dR}(f\otimes f)$ already carries a filtration given by the direct sum decomposition: \begin{equation*} M_{\dR}(f\otimes f) \simeq \wedge^2 M_{\dR}(f) \oplus \Sym^2 M_{\dR}(f) \end{equation*} induced by the action of the involution swapping the two components in the tensor product. As $M_{\dR}(f)$ is $2$-dimensional, $\wedge^2 M_{\dR}(f)$ is $1$-dimensional, and since $\Omega_{f,f}$ is non-zero and belongs to the anti-symmetric component by construction, the following result follows. \begin{proposition} $\Omega_{f,f} \in \wedge^2 M_{\dR}(f)$ and it generates the anti-symmetric component. \end{proposition} This amounts to saying that the above filtrations coincide. We note that $\Omega_{f,f}$ always lies in the $\wedge^2$-component, in accordance with the fact that we want to exploit classes there, see Subsubsection~\ref{subsubsec:swapping}.
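For instance, the antisymmetry of $\Omega_{f,f}$ can be verified directly on the basis above: if $s$ denotes the involution swapping the two tensor factors, then \begin{equation*} s(\Omega_{f,f}) = s(\omega_f\otimes\eta_f - \eta_f\otimes\omega_f) = \eta_f\otimes\omega_f - \omega_f\otimes\eta_f = -\Omega_{f,f}, \end{equation*} while $\omega_f\otimes\omega_f$, $\eta_f\otimes\eta_f$ and $\omega_f\otimes\eta_f + \eta_f\otimes\omega_f$ span the $3$-dimensional symmetric component $\Sym^2 M_{\dR}(f)\otimes\C$.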
\section{Regulator formulæ} In this section we prove complex and $p$-adic regulator formulæ linking compactified Beilinson-Flach classes to special values of $L$-functions. \subsection{Archimedean argument} \label{subsec:complex-argument} \subsubsection{A complex regulator formula} \label{subsubsec:complex-formula} We explain here the complex regulator formula, following~\cite{brunault.chida:regulators,kings.loeffler:rankin-eisenstein}. Recall the diagram in figure~\ref{diag:complex-diagram} on page~\pageref{diag:complex-diagram} and specialise it to $X = \mathscr{W}_k$, which has dimension $2k+2$ over $\numberset{Q}$. As we will compute the pairing using the bottom row, we first carefully analyse the cohomology group appearing there. Even though $\mathscr{W}_k$ is defined over $\numberset{Q}$, we regard it as being defined over $K_f$, since in Subsubsection~\ref{subsubsec:regulator-motivic-cohomology} we will need to extend coefficients to $K_f$. By the isomorphisms~\eqref{eq:ext-groups} we obtain \begin{align*} \Ext^0_{\mathrm{MH}_{K_f}}(K_f,K) &\simeq W_0 K \cap F^0(K\otimes \C), \\ \Ext^1_{\mathrm{MH}_{K_f}}(K_f,H) &\simeq \frac{(W_0 H \otimes \C)}{W_0 H + F^0(H\otimes \C)}, \end{align*} with $H = H_B^{2k+2}(\mathscr{W}_k,K_f(2+2k-j))$ and $K = H_B^{2k+2}(\mathscr{W}_k,K_f(k+1))$. Moreover, since $H^{2k+2}_B(\mathscr{W}_k,K_f(k+1))$ is concentrated in weight $0$, the first isomorphism becomes \begin{equation} \begin{split} \Ext^0_{\mathrm{MH}_{K_f}}&(K_f,H_B^{2k+2}(\mathscr{W}_k,K_f(k+1))) \\ &\simeq H^{2k+2}_B(\mathscr{W}_k,K_f(k+1)) \cap \Fil^0 H^{2k+2}_{\dR}(\mathscr{W}_k,\C(k+1)) \\ &= H^{2k+2}_B(\mathscr{W}_k,K_f(k+1)) \cap \Fil^{k+1} H^{2k+2}_{\dR}(\mathscr{W}_k,\C). \end{split} \label{eq:isomorph-ext-0} \end{equation} Regarding the $\Ext^1$-group, we compose with the functorial morphism $\pr_{f,f}$ induced by the projection $H^{2k+2}_B(\mathscr{W}_k,K_f(2k+2-j)) \twoheadrightarrow M_B(f\otimes f)_{K_f}^*(-j)$, as explained in Subsection~\ref{subsec:motives}.
In this way we have obtained the following diagram, where the $\Ext$-groups are all in the category $\mathrm{MH}_{K_f}$: \begin{equation} \label{diag:complex-ext} \begin{tikzcd}[column sep=small] H_{\mot}^{3+2k}(\mathscr{W}_k,2+2k-j) \arrow["\regulator{\del}"]{d} & H_{\mot}^{2k+2}(\mathscr{W}_k,k+1) \arrow["\regulator{\del}"]{d} \\ H_{\mathcal{D}}^{3+2k}(\mathscr{W}_k,K_f(2+2k-j)) \arrow["\text{edge}"]{d} & H_{\mathcal{D}}^{2k+2}(\mathscr{W}_k,K_f(k+1)) \arrow["\text{edge}"]{d} \\ \Ext^1(K_f,H^{2k+2}_B(\mathscr{W}_k,K_f(2+2k-j))) \arrow["\pr_{f,f}"]{d} & \Ext^0(K_f,H_B^{2k+2}(\mathscr{W}_k,K_f(k+1))) \arrow[shift left=1ex,"\pi_{f,f}"]{d} \\ \Ext^1(K_f,M_B(f\otimes f)_{K_f}^*(-j)) & \Ext^0(K_f,M_B(f\otimes f)_{K_f}(k+1)) \arrow[shift left=1ex,"\subseteq",hook]{u} \end{tikzcd} \end{equation} Notice that in the right column we are projecting onto a subobject: this is possible since $H^{2k+2}_B(\mathscr{W}_k,K_f(k+1))$ decomposes as a direct sum of isotypical components corresponding to cusp forms (see Subsection~\ref{sec:de-rham-kuga-sato}). We are thus using the projection onto a direct summand. The $\Ext^0$-group with coefficients in (some twist of) $M_B(f\otimes f)_{K_f}$ is the correct group to land in, as we want to pair elements in the bottom row. By the same argument as before, we can compute \begin{equation*} \begin{split} \Ext^0_{\mathrm{MH}_{K_f}}&(K_f,M_B(f\otimes f)_{K_f}(k+1)) \\ &\simeq M_B(f\otimes f)_{K_f}(k+1) \cap \Fil^0 M_{\dR}(f\otimes f)_{\C}(k+1) \\ &= M_B(f\otimes f)_{K_f}(k+1) \cap \Fil^{k+1} M_{\dR}(f\otimes f)_{\C}. \end{split} \end{equation*} \begin{lemma} We have $\Omega_{f,f} \in \Ext^0_{\mathrm{MH}_{K_f}}(K_f,M_B(f\otimes f)_{K_f}(k+1))$. \end{lemma} \begin{proof} Clearly $\Omega_{f,f}$ belongs to $\Fil^{k+1} M_{\dR}(f\otimes f)_{\C}$, so it suffices to show it also belongs to $M_B(f\otimes f)_{K_f}(k+1)$.
Notice that by definition $M_B(f\otimes f)_{K_f}(k+1) = M_B(f\otimes f)_{K_f} \otimes K_f(k+1)$, so it suffices to show that $\Omega_{f,f}$ belongs to the untwisted group. We will show in Proposition~\ref{prop:image-cycle-dr} that $\Omega_{f,f}$ lies in the target of the cycle class map: there exists a class in $\mathrm{CH}^{k+1}(\mathscr{W}_{k,\numberset{Q}(\mu_N)})\otimes K_f$ such that its image under $\cl_{\dR}$ coincides with $\Omega_{f,f}$. The proof of that proposition only uses the fact that $\Omega_{f,f}$ is a de Rham class in the middle Hodge component, so we can use the result here. The image of $\mathrm{CH}^{k+1}(\mathscr{W}_{k,\numberset{Q}(\mu_N)})$ under the cycle class map is contained in \begin{gather*} H_B^{2k+2}(\mathscr{W}_{k,\numberset{Q}(\mu_N)},\numberset{Z}) \cap H^{k+1,k+1}_{\dR}(\mathscr{W}_{k,\numberset{Q}(\mu_N)},\C). \intertext{In particular, this shows that $\Omega_{f,f}$ belongs to} H_B^{2k+2}(\mathscr{W}_{k,\numberset{Q}(\mu_N)},\numberset{Z})\otimes K_f \simeq H^{2k+2}_B(\mathscr{W}_k,K_f). \end{gather*} Since $\Omega_{f,f}$ is already in the $(f,f)$-isotypical component, this shows that it belongs to $M_B(f\otimes f)_{K_f}$. \end{proof} \begin{remark} By construction $\Omega_{f,f} \in \wedge^2 M_{\dR}(f)$, therefore the above lemma proves that it is in the subspace $\Ext^0_{\mathrm{MH}_{K_f}}(K_f,\wedge^2 M_B(f)_{K_f}(k+1))$. \end{remark} To summarise, we have shown that $\Omega_{f,f}$ is in the second group in the bottom row of the diagram and, by definition, $\Xi_{\mathcal{D}}^{f,f,j}$ is in the first group in the same row. There is a natural pairing between those two groups, induced by the natural one between $M_B(f\otimes f)_{K_f}$ and its dual, with target $\Ext^1_{\mathrm{MH}_{K_f}}(K_f,K_f(1+k-j)) \simeq K_f(1+k-j)$.
By commutativity of the corresponding diagram, pairing $\Xi_{\mathcal{D}}^{f,f,j}$ with $\Omega_{f,f}$ computes the same value as in Deligne cohomology (which is also the value obtained by pairing elements in the $\Ext$-groups of Betti cohomology). The resulting computation expresses the relation between the bottom layer of the Beilinson-Flach Euler system and a special value of the derivative of the Rankin-Selberg $L$-function, and is foundational for our paper. By the remark on page~\pageref{rmk:pairing-unchanged}, pairing (the Deligne realisations of) $\widetilde{\Xi}^{k,k,j}$ or $\mathrm{BF}^{[k,k,j]}_{\mot}$ with differential forms arising from cusp forms yields the same result. Since $\Omega_{f,f}$ comes from cusp forms, we can compute the pairing as if we were using the original classes. More precisely, we have the following proposition. \begin{proposition}[{\cite[Proposition~8.3]{brunault.chida:regulators}}] \label{prop:brunault-chida-pairing} We have \begin{equation*} \langle \regulator{\del}(\mathrm{BF}^{[k,k,j]}_{\mot}),\Omega_{f,f} \rangle = \langle \regulator{\del}(\widetilde{\Xi}^{k,k,j}),\Omega_{f,f} \rangle. \end{equation*} \end{proposition} \begin{remark} The proposition makes sense because $\Omega_{f,f}$ is canonically associated to a cohomology class with compact support, as noted in Subsection~\ref{subsec:motives}. Notice that in the above equality, on the left-hand side we have less information on the Beilinson-Flach classes, so we have to make up for this with additional information on $\Omega_{f,f}$---namely, that it corresponds to a compactly supported class. \end{remark} \begin{proposition} We have \begin{equation*} \langle \regulator{\del}(\Xi^{k,k,j}),\Omega_{f,f} \rangle = \frac{1+(-1)^j}{2}\langle \regulator{\del}(\widetilde{\Xi}^{k,k,j}),\Omega_{f,f} \rangle.
\end{equation*} \end{proposition} \begin{proof} By linearity and compatibility of $\regulator{\del}$ with pull-backs: \begin{align*} \langle \regulator{\del}(\Xi^{k,k,j}),\Omega_{f,f} \rangle &= \frac{1}{2}\langle \regulator{\del}(\widetilde{\Xi}^{k,k,j}),\Omega_{f,f} \rangle + \frac{(-1)^{k+j}}{2}\langle \regulator{\del}((\rho')^*\widetilde{\Xi}^{k,k,j}),\Omega_{f,f} \rangle \\ &= \frac{1}{2}\langle \regulator{\del}(\widetilde{\Xi}^{k,k,j}),\Omega_{f,f} \rangle + \frac{(-1)^{k+j}}{2}\langle \regulator{\del}(\widetilde{\Xi}^{k,k,j}),(\rho')^*\Omega_{f,f} \rangle. \end{align*} By definition $\Omega_{f,f}$ is an antisymmetric tensor with components of degree $k+1$, hence $(\rho')^*\Omega_{f,f} = (-1)^{k+1}s(\Omega_{f,f}) = (-1)^k\Omega_{f,f}$. Substituting this into the second summand produces the coefficient $\frac{(-1)^{k+j}(-1)^k}{2} = \frac{(-1)^j}{2}$, which proves the claim. \end{proof} We are now in a position to prove the following version of the regulator formula over $X_1(N)^2$. \begin{theorem} \label{thm:regulator-formula} If $j$ is even and $0\leq j\leq k$, then the following formula holds. \begin{equation*} \begin{split} \langle \regulator{\del}(\widetilde{\Xi}^{k,k,j}), \Omega_{f,f} \rangle &= \langle \regulator{\del}(\Xi^{k,k,j}), \Omega_{f,f} \rangle = \langle \Xi^{f,f,j}_{\mathcal{D}}, \Omega_{f,f} \rangle \\ &= (2\pi i)^{2(k-j)}\frac{(-1)^{k-j}}{2(\omega_f,\overline{\omega_f})}\Bigl(\frac{k!}{(k-j)!}\Bigr)^2 L'(f,f,j+1). \end{split} \end{equation*} \end{theorem} \begin{proof} The first equality is given by the previous proposition, since $j$ is even. The second equality comes from the fact that if we extend diagram~\ref{diag:complex-diagram} with the morphisms from~\eqref{diag:complex-ext}, the resulting diagram is still commutative, because the involved morphisms commute with cup-products and push-forwards. Proposition~\ref{prop:brunault-chida-pairing} shows that the pairings all coincide with $\langle \mathrm{BF}^{[k,k,j]}_{\mathcal{D}},\Omega_{f,f}\rangle$. Applying~\cite[Theorem~6.2.9]{kings.loeffler:rankin-eisenstein}, we deduce the explicit value of the pairing.
Indeed, the pull-back of $\Omega_{f,f}$ to $Y_1(N)^2$ equals the differential form used there by construction, as we assumed $j$ even. The cohomology class they denote $\mathrm{AJ}_{\mathcal{H},f,f}(\mathrm{Eis}^{[k,k,j]}_{\mathcal{H},1,N})$ corresponds to the class we denoted $\mathrm{BF}^{[f,f,j]}_{\mathcal{D}}$. This follows from the fact that if a Hodge structure has non-positive weight, its Deligne and absolute Hodge cohomologies are isomorphic. Hence, the cited theorem computes the pairing which in our notation is \begin{equation*} \langle \mathrm{BF}^{[f,f,j]}_{\mathcal{D}}, \Omega_{f,f} \rangle. \end{equation*} By the same argument that we used for compactified classes, we deduce that this equals $\langle \mathrm{BF}^{[k,k,j]}_{\mathcal{D}}, \Omega_{f,f} \rangle$. Therefore, all the pairings in the statement of the theorem compute to the expression on the right hand side of~\cite[Theorem~6.2.9]{kings.loeffler:rankin-eisenstein}, which proves the claim. \end{proof} \begin{remark} More generally, the regulator formula links the Deligne realisation of $\widetilde{\Xi}^{k,k',j}$ with $L'(f,g,j+1)$ through the pairing with $\Omega_{f,g}$. However, we will only need the version with $f=g$, so we restricted to this case for simplicity. \end{remark} In~\cite{kings.loeffler:rankin-eisenstein} Deligne cohomology is replaced with absolute Hodge cohomology. In the case under consideration the two cohomologies agree, so the above form of the theorem is equivalent to that with absolute Hodge cohomology. \paragraph{Remark on symmetry eigenspaces} As remarked above, $\Omega_{f,f}$ lives in the subspace $\Ext^0_{\mathrm{MH}_{K_f}}(K_f,\wedge^2 M_B(f)_{K_f}(k+1))$, so it pairs non-trivially only with the $\wedge^2$-component of $\widetilde{\Xi}^{f,f,j}_{\mathcal{D}}$, i.e.\ that in the cohomology of $\wedge^2 M_{\dR}(f)^*$. 
When $j$ is even, $\mathrm{BF}^{[f,f,j]}_{\mathcal{D}}$ is in the antisymmetric component too, but $\widetilde{\Xi}^{f,f,j}_{\mathcal{D}}$ has no such property in general. This is the main reason why we defined the symmetrised classes $\Xi^{k,k,j}$. In any case, by Brunault and Chida's result the complex regulator formula does not detect this difference. It is, however, important to keep in mind that we are ultimately studying classes in the cohomology of $\wedge^2 M_{\mathcal{T}}(f)$ rather than of the full $M_{\mathcal{T}}(f\otimes f)$. For this reason we defined $\Omega_{f,f}$ to detect the antisymmetric component. As for Beilinson-Flach classes, we ensured that the non-compactified ones belong to the correct eigenspace through the requirement on $j$, and we worked around the cuspidal defect, as explained in Subsubsection~\ref{subsubsec:swapping}. \subsubsection{The regulator formula in motivic cohomology} \label{subsubsec:regulator-motivic-cohomology} We now interpret the complex regulator formula purely in terms of motivic cohomology. By looking at the diagrams, it is natural to ask if one could find a preimage of $\Omega_{f,f}$ in the motivic cohomology group, so that both pieces of the regulator formula would have a source in motivic cohomology. For this purpose we take advantage of the extra row in de Rham cohomology. The isomorphism~\eqref{eq:isomorph-ext-0} is such that its composition with $\regulator{\del}$ is the de Rham cycle class map, as explained in~\cite[§7]{esnault.viehweg:deligne-beilinson}. This is consistent with the numerology, since $H^{2k+2}_{\mot}(\mathscr{W}_k,k+1) \simeq \mathrm{CH}^{k+1}(\mathscr{W}_k)$. Therefore, it suffices to exhibit an element of the motivic cohomology group whose image under the cycle class map is a non-zero multiple of $\Omega_{f,f}$.
Summarising, we are searching for a rational equivalence class $[Z_f]\in\mathrm{CH}^{k+1}(\mathscr{W}_k)$ of cycles of codimension $k+1$ such that \begin{align*} \cl_{\dR} \colon H_{\mot}^{2k+2}(\mathscr{W}_k,k+1) &\to H^{k+1,k+1}_{\dR}(\mathscr{W}_k,\C) \\ [Z_f] &\mapsto C\cdot \Omega_{f,f}, \quad C \in \C^{\times}. \end{align*} In the remainder of this subsubsection we construct the subvariety $Z_f$ by combining three different correspondences. \begin{description} \item[Projection onto $f$-eigenspace] Since $f$ is a newform, Strong Multiplicity One implies that its eigenspace in $S_{k+2}(N,\psi)$ is $1$-dimensional. The projection onto it can be realised by a Hecke operator: let $f=f_0,f_1,\ldots,f_c$ be a basis of $S_{k+2}(N,\psi)$ over $K_f$ consisting of eigenforms. For every $i=1,2,\ldots,c$ there exists a prime $l_i$ such that $a_{l_i}(f) \neq a_{l_i}(f_i)$. If $f_i\neq f_j$ are in the same Galois $G_{K_f}$-orbit, then we choose $l_i=l_j$. This is possible because $a_{l_i}(f_j) = a_{l_i}(f_i)^{\sigma}$ for some $\sigma\in G_{K_f}$, so either both or none of them equals $a_{l_i}(f)\in K_f$. Define the operator: \begin{equation*} T_f = \prod_{i=1}^c \frac{T_{l_i} - a_{l_i}(f_i)}{a_{l_i}(f) - a_{l_i}(f_i)} \in \mathbb{T}\otimes_{\numberset{Q}} K_f. \end{equation*} It is easy to see that $T_f(f_i) = 0$ for every $i=1,2,\ldots,c$ and $T_f(f) = f$. Therefore $T_f$ is the $f$-isotypical projector on $S_{k+2}(N,\psi)$. We also regard $T_f$ as a correspondence on $W_k$, which functorially yields a morphism in cohomology. Note that $T_f$ is a combination of correspondences defined over $\numberset{Q}$ with coefficients in $K_f$. Indeed, the chosen basis contains all the Galois conjugates of a given eigenform, under the action of $G_{K_f}$. As a consequence, the finite linear combination defined above is fixed by $G_{K_f}$, because our choices ensure that the coefficients are permuted among themselves.
Therefore, it defines an element of the Hecke algebra $\mathbb{T}\otimes_{\numberset{Q}} K_f$. \item[Projection onto the middle Künneth component] Consider the morphisms \begin{equation*} \begin{tikzcd} W_k \arrow[bend right=25,"i_1",swap]{r} & \mathscr{W}_k \arrow[bend left=25,"\pi_2"]{r} \arrow[bend right=25,"\pi_1",swap]{l} & W_k \arrow[bend left=25,"i_2"]{l} \end{tikzcd} \end{equation*} Let $\delta_{\infty} \colon \mathrm{CH}^*(\mathscr{W}_k) \to \mathrm{CH}^*(\mathscr{W}_k)$ be the correspondence on $\mathscr{W}_k$ given by the prescription \begin{equation*} \delta_{\infty}([Z]) = [Z] - i_{1,*}\circ\pi_{1,*}([Z]) - i_{2,*}\circ\pi_{2,*}([Z]). \end{equation*} In~\cite[§2]{darmon.rotger:iterated} it is proved that the action of $\delta_{\infty}$ on cohomology is to project onto the middle Künneth component in any degree. In particular \begin{equation*} \delta_{\infty}^* \colon H^{2k+2}_{\dR}(\mathscr{W}_k,\C) \to \bigl(H^{k+1}_{\dR}(W_k,\C)\bigr)^{\otimes 2}. \end{equation*} \item[Atkin-Lehner involution] Denote by $w_N$ the Atkin-Lehner involution induced by the action of $\bigl(\begin{smallmatrix} 0 & -1 \\ N & 0 \end{smallmatrix}\bigr)$, and by $\lambda_N(f)$ the pseudo-eigenvalue of $f$. Notice that $w_N$ is not defined over $\numberset{Q}$, but only over $\numberset{Q}(\mu_N)$, so it is actually only a correspondence on $W_{k,\numberset{Q}(\mu_N)}$. \end{description} Denote also by $\mathrm{gr}$ the graph of a correspondence. For a correspondence $c$ from $C_1$ to $C_2$, its graph is a subset $\mathrm{gr}(c) \subseteq C_1 \times C_2$. \begin{definition} Let $[Z_f]\in\mathrm{CH}^{k+1}(\mathscr{W}_{k,\numberset{Q}(\mu_N)})\otimes K_f$ be the class of the cycle $\delta_{\infty}(\mathrm{gr}(T_{f^*}\circ w_N))$. \end{definition} The composition $T_{f^*}\circ w_N$ is a correspondence on $W_k$, so its graph is a subvariety of $\mathscr{W}_k$, and $\delta_{\infty}$ has a well-defined action on its equivalence class.
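As a sanity check on the projector $T_f$, consider the smallest non-trivial instance $c=1$, with a single companion eigenform $f_1$ and one separating prime $l_1$ (a toy illustration, not part of the construction):

```latex
% Toy case c = 1: eigenform basis {f, f_1}, one separating prime l_1.
T_f = \frac{T_{l_1} - a_{l_1}(f_1)}{a_{l_1}(f) - a_{l_1}(f_1)},
\qquad a_{l_1}(f) \neq a_{l_1}(f_1).
% Action on the two basis eigenforms:
T_f(f) = \frac{a_{l_1}(f) - a_{l_1}(f_1)}{a_{l_1}(f) - a_{l_1}(f_1)}\, f = f,
\qquad
T_f(f_1) = \frac{a_{l_1}(f_1) - a_{l_1}(f_1)}{a_{l_1}(f) - a_{l_1}(f_1)}\, f_1 = 0.
% Hence T_f^2 = T_f: it is the f-isotypical idempotent of S_{k+2}(N,psi).
```

In the general case the product over $i=1,\ldots,c$ performs the same cancellation factor by factor, since each basis eigenform is an eigenvector of every $T_{l_i}$.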
\begin{proposition} \label{prop:image-cycle-dr} The above class satisfies $\cl_{\dR}(Z_f) = \lambda_N(f^*)\Omega_{f,f}$. \end{proposition} In the statement we are comparing classes in de Rham cohomology with coefficients in $\C$, so the base change to $\numberset{Q}(\mu_N)$ and the extension of coefficients to $K_f$ do not play any role. Thanks to the flatness of $\C$, the equality would still hold over smaller fields, even though we will not use this fact. \begin{proof} Let $T = T_{f^*} \circ w_N$ for notational simplicity. Denote by $\Delta \subseteq \mathscr{W}_k$ the diagonal of $\mathscr{W}_k$. By definition \begin{equation*} \cl_{\dR}(Z_f) = \rho \iff \int_{Z_f} \omega = \int_{\mathscr{W}_k} \omega\wedge\rho = (\omega,\rho) \quad \forall \omega. \end{equation*} In particular, if we vary $\omega$ in a basis, we can compute $\cl_{\dR}(Z_f)$ as an element of $H^{k+1,k+1}_{\dR}(\mathscr{W}_k,\C)^{\vee}$ and then express it as a differential form by duality. Consider a basis of $H^{k+1,k+1}_{\dR}(\mathscr{W}_k,\C)$ determined by the direct sum decomposition given by the Künneth isomorphism. Let $\omega \in H^{k+1,k+1}_{\dR}(\mathscr{W}_k,\C)$ be a differential form in this basis; then by a standard property of the pull-back \begin{equation*} \int_{Z_f} \omega = \int_{\mathrm{gr}(T)} \delta_{\infty}^* \omega = \begin{cases} \int_{\mathrm{gr}(T)}\omega &\text{if $\omega \in (H^{k+1}_{\dR}(W_k,\C))^{\otimes 2}$} \\ 0 &\text{otherwise} \end{cases} \end{equation*} The last equality holds because $\delta_{\infty}^*$ projects on the middle Künneth component. It suffices then to consider forms in a basis of $(H^{k+1}_{\dR}(W_k,\C))^{\otimes 2}$. Consider a basis of $H^{k+1}_{\dR}(W_k,\C)$ consisting of differential forms associated to cusp forms, which exists by the isomorphism~\eqref{eq:cohomology-decompositions}. Write $\omega = \omega_1 \otimes \omega_2$ with $\omega_1$ and $\omega_2$ in this basis.
Again by properties of the pull-back we have \begin{equation*} \int_{Z_f} \omega = \int_{\mathrm{gr}(T)} \delta_{\infty}^* \omega = \int_{\mathrm{gr}(T)} \omega = \int_{\Delta} \omega_1 \otimes w_N^* T_{f^*}^* \omega_2. \end{equation*} By construction $T_{f^*}$ projects onto $M_{\dR}(f^*)\otimes \C$, which is generated by $\omega_{f^*}$ and $\eta_{f^*}$. In the case $T_{f^*}^*\omega_2 = \omega_{f^*}$ we are left with \begin{equation*} = \int_{\Delta} \omega_1 \otimes w_N^* \omega_{f^*} = \int_{\Delta} \omega_1 \otimes \lambda_N(f^*)\omega_f = \lambda_N(f^*)(\omega_1,\omega_f). \end{equation*} Since we are working with basis vectors, the cup product $(\omega_1,\omega_f)$ is non-zero only when $\omega_1$ is a scalar multiple of the dual of $\omega_f$, and equals $1$ exactly when $\omega_1 = -(\omega_f)^* = -\eta_{f^*}$. The argument then shows that \begin{align*} &\cl_{\dR}(Z_f)(-\lambda_N(f^*)^{-1}\eta_{f^*}\otimes\omega_{f^*}) = 1 \\ \text{similarly} \qquad &\cl_{\dR}(Z_f)(\lambda_N(f^*)^{-1}\omega_{f^*}\otimes\eta_{f^*}) = 1. \end{align*} Therefore $\cl_{\dR}(Z_f)$ coincides with $(-\lambda_N(f^*)^{-1}\eta_{f^*} \otimes \omega_{f^*})^* + (\lambda_N(f^*)^{-1}\omega_{f^*} \otimes \eta_{f^*})^*$. To compute these observe: \begin{equation*} (\omega_{f^*}\otimes \eta_{f^*},\eta_f\otimes \omega_f) = (\omega_{f^*},\eta_f)\cdot (\eta_{f^*},\omega_f) = -1. \end{equation*} So we get as a result $\lambda_N(f^*)(\omega_f \otimes \eta_f -\eta_f\otimes \omega_f )$. This proves the proposition. \end{proof} In order to use the correspondences $T_{f^*}$ and $w_N$, we base-changed to $\numberset{Q}(\mu_N)$ and extended coefficients to $K_f$. We then need to modify our original diagram to reflect these changes. Notably, we now start from the motivic cohomology of $\mathscr{W}_{k,\numberset{Q}(\mu_N)}$. However, the two bottom rows of diagram~\eqref{diag:complex-ext} are unchanged.
Indeed, $\mathscr{W}_{k,\numberset{Q}(\mu_N)}$ is still defined over $\numberset{Q}$ (hence over $K_f$) and in Betti and de Rham cohomology we deduce by the Künneth isomorphism: \begin{align*} H_B^i(\mathscr{W}_{k,\numberset{Q}(\mu_N)},K_f(n)) &\simeq H_B^i(\mathscr{W}_k,K_f(n)) \otimes H_B^0(\Spec \numberset{Q}(\mu_N),K_f) = H_B^i(\mathscr{W}_k,K_f(n)), \\ H_{\dR}^i(\mathscr{W}_{k,\numberset{Q}(\mu_N)},K_f(n)) &\simeq H_{\dR}^i(\mathscr{W}_k,K_f(n)). \end{align*} Thanks to this remark we can use the commutativity of the diagram in figure~\ref{diag:complex-diagram} to finish the proof of the next theorem. \begin{definition} We define a cohomology class in $H^1_{\mot}(\Spec \numberset{Q}(\mu_N),1+k-j)\otimes K_f$ as $b_f = \langle \Xi^{k,k,j}, Z_f \rangle$. \end{definition} \begin{theorem}[Complex regulator formula] \label{thm:complex-l-value} If $j$ is even and $0\leq j\leq k$, then $b_f$ satisfies: \begin{equation} \label{eq:deligne-regulator-bf} \begin{split} \regulator{\del}(b_f) &= \langle \regulator{\del}(\Xi^{k,k,j}), \lambda_N(f^*)\Omega_{f,f} \rangle \\ &= (2\pi i)^{2(k-j)}\lambda_N(f^*)\frac{(-1)^{k-j}}{2(\omega_f,\overline{\omega_f})}\Bigl( \frac{k!}{(k-j)!} \Bigr)^2 L'(f,f,j+1). \end{split} \end{equation} \end{theorem} \begin{proof} This is a direct application of the complex regulator formula proved above. The first equality is implied by the commutativity of the diagram in figure~\ref{diag:complex-diagram}. The second equality is Theorem~\ref{thm:regulator-formula}, which applies because $j$ is even. \end{proof} \begin{remark} $b_f$ is a motivic source for $L$-values, as it lies in the motivic cohomology group $H^1_{\mot}(\Spec \numberset{Q}(\mu_N),1+k-j)\otimes K_f$ and its image under $\regulator{\del}$ yields \emph{complex} $L$-values. In the sequel we will also relate $b_f$ to \emph{$p$-adic} $L$-values, by computing its image under $\regulator{\et}$.
\end{remark} \subsection{Non-archimedean argument} \label{subsec:p-adic-argument} In this subsection we apply the machinery of Subsection~\ref{sec:p-adic-diagram} to the case under consideration. Recall that at the end of that subsection we constructed a commutative diagram (figure~\ref{diag:etale-diagram}) by studying edge maps in the Hochschild-Serre spectral sequence. When we specialise to $X = \mathscr{W}_{k,\numberset{Q}(\mu_N)}$ (then $d = k+1$) the bottom line of the diagram reads: \begin{multline*} H^1(\numberset{Q}(\mu_N),H^{2k+2}_{\et}(\mathscr{W}_{k,\overline{\numberset{Q}}},\numberset{Q}_p(2+2k-j))) \times H^0(\numberset{Q}(\mu_N),H^{2k+2}_{\et}(\mathscr{W}_{k,\overline{\numberset{Q}}},\numberset{Q}_p(1+k)))_{K_f} \\ \to H^1_{\et}(\Spec \numberset{Q}(\mu_N),\numberset{Q}_p(1+k-j)) \otimes K_f. \end{multline*} We had to extend coefficients of the second group to $K_f$ after the definition of $Z_f$. Inside the cohomology groups $H^{2k+2}_{\et}(\mathscr{W}_{k,\overline{\numberset{Q}}},\numberset{Q}_p(\star))$ we can find the $p$-adic Galois representations associated to the pair $(f,f)$, which are the isotypical component in étale cohomology $M_{\etbar}(f\otimes f)$ and its dual $M_{\etbar}(f\otimes f)^*$. Therefore, by inserting projections, we obtain an expression purely in terms of the cohomology of $M_{\etbar}(f\otimes f)$: \begin{multline*} H^1(\numberset{Q}(\mu_N),M_{\etbar}(f\otimes f)^*(-j)) \times H^0(\numberset{Q}(\mu_N),M_{\etbar}(f\otimes f)(1+k)) \otimes K_f \\ \to H^1_{\et}(\Spec \numberset{Q}(\mu_N),\numberset{Q}_p(1+k-j)) \otimes K_f. \end{multline*} The above row is connected to the previous one compatibly by projection maps. We can then equivalently compute the pairing in the cohomology of $M_{\etbar}(f\otimes f)$. Notice also that since $f$ is a cusp form, its eigenspace is concentrated in the middle cohomological degree. This is equivalent to the fact that the representation $\rho_{f,v}$ is pure of weight $k+1$. 
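As a routine consistency check on the Tate twists in this pairing: the evaluation pairing $M_{\etbar}(f\otimes f)^* \otimes M_{\etbar}(f\otimes f) \to \numberset{Q}_p$ is untwisted, so the twists on the two factors simply add up:

```latex
% Twists add up under the evaluation pairing:
M_{\etbar}(f\otimes f)^*(-j) \otimes M_{\etbar}(f\otimes f)(1+k)
  = \bigl( M_{\etbar}(f\otimes f)^* \otimes M_{\etbar}(f\otimes f) \bigr)(1+k-j)
  \longrightarrow \numberset{Q}_p(1+k-j),
% while the cup product adds cohomological degrees, H^1 x H^0 -> H^1,
% matching the target H^1_{et}(Spec Q(mu_N), Q_p(1+k-j)) (x) K_f.
```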
Furthermore, by replacing the étale regulator with the syntomic regulator we can apply the same argument to construct a commutative diagram in syntomic cohomology. Recall that under suitable hypotheses---in particular over $p$-adic fields---the first étale and syntomic cohomology groups are related by the Bloch-Kato exponential. We introduce the following notation: \begin{equation*} Z_f^{\diamond} = \regulator{\diamond}(Z_f), \quad \diamond\in\{\et,\syn\}. \end{equation*} Suppose we are able to compute $\langle \regulator{\syn}(\Xi^{k,k,j}), Z_f^{\syn} \rangle$. The resulting element belongs to $H^1_{\syn}(\Spec \numberset{Q}_p(\mu_N), \numberset{Q}_p(1+k-j))\otimes K_f$. There is then a map: \begin{equation*} H^1_{\syn}(\Spec \numberset{Q}_p(\mu_N),\numberset{Q}_p(1+k-j)) \to H^1_{\et}(\Spec \numberset{Q}_p(\mu_N),\numberset{Q}_p(1+k-j)) \end{equation*} given by the Bloch-Kato exponential, as $\numberset{Q}_p(1+k-j)$ is crystalline hence de Rham. Notice that we moved to étale cohomology over local fields. We would like to use this map to link the syntomic and étale diagrams, but there is an obstacle given by the fact that over $p$-adic fields the edge map from $H^{3+2k}_{\et}$ to $E_2^{0,3+2k}$ is not necessarily zero, as explained in Subsection~\ref{sec:p-adic-diagram}. This problem would obstruct the construction of our diagram, because we could not map Beilinson-Flach classes to $E_2^{1,2k+2}$. However, the kernel of the edge map to $E_2^{0,3+2k}$ contains all the cohomology classes that are localisations of global classes. Our construction is chiefly global, which is to say that we are dealing with classes in the cohomology of $\mathscr{W}_k$ over a number field; and the classes that we consider in the local diagram are their localisations. Therefore, they lie in the kernel of the edge map to $E_2^{0,3+2k}$, and thus they can safely be mapped to $E_2^{1,2k+2}$.
By this discussion, the following diagram commutes as we are feeding it with global classes (we omit the coefficients for better readability): \begin{equation*} \begin{tikzcd}[column sep=0.7em,cramped] H_{\syn}^{2k+3}(\mathscr{W}_{k,\numberset{Q}_p(\mu_N)}) \times H_{\syn}^{2k+2}(\mathscr{W}_{k,\numberset{Q}_p(\mu_N)}) \arrow{r} & H^1_{\syn}(\Spec \numberset{Q}_p(\mu_N)) \arrow["\exp"]{dd} \\ H_{\mot}^{2k+3}(\mathscr{W}_k) \times H_{\mot}^{2k+2}(\mathscr{W}_k) \arrow["\regulator{\syn}"]{u} \arrow["\mathrm{loc}\,\circ\,\regulator{\et}"']{d} & \\ H_{\et}^{2k+3}(\mathscr{W}_{k,\numberset{Q}_p(\mu_N)}) \times H_{\et}^{2k+2}(\mathscr{W}_{k,\numberset{Q}_p(\mu_N)}) \arrow{r} \arrow["\pr_{f,f}\circ\mathrm{edge}"']{d} & H^1_{\et}(\Spec \numberset{Q}_p(\mu_N)) \arrow[equal]{d} \\ H^1(\numberset{Q}_p(\mu_N),M_{\etbar}(f\! \otimes\! f)^*_{\numberset{Q}_p}) \times H^0(\numberset{Q}_p(\mu_N),M_{\etbar}(f\! \otimes\! f)_{\numberset{Q}_p}) \arrow{r} & H^1_{\et}(\Spec \numberset{Q}_p(\mu_N)) \end{tikzcd} \end{equation*} Therefore, simply by applying $\exp$ we would obtain the value of the pairing in \emph{local} étale cohomology. It suffices then to compute the pairing in syntomic cohomology. The rest of the argument is organised as follows: \begin{itemize} \item we will show that $Z_f^{\syn}$ is canonically associated to a compactly supported cohomology class; \item we will split the pairing in syntomic cohomology into the sum of three terms; \item the first term computes the pairing on the open variety $\mathcal{E}^k \times \mathcal{E}^k$; we will link it to a $p$-adic $L$-value; \item the other terms compute the ``cuspidal contribution''; we will show that it is always zero. \end{itemize} \begin{remark} The material of this section applies also to the ordinary case, with some changes. Firstly, Lemma~\ref{lemma:frobenius-eigenspaces} works only for the Satake parameter that is also a $p$-adic unit. By contrast, in the supersingular case it works with both choices.
Secondly, in Theorem~\ref{thm:klz-regulator-formula} and the subsequent ones, the geometric $p$-adic $L$-function must be replaced with Hida's $3$-variable $p$-adic $L$-function. This section thus also gives an alternative proof of the $p$-adic regulator formula in the ordinary case. The main difference from the proof contained in~\cite[§6.3]{dasgupta:factorization} is that that proof only works in weight $2$ as stated, while the proof presented here works in higher weights too. Moreover, Dasgupta's proof yields a formula for points $(0,0,1+\theta)$ in our normalisation, with $\theta$ a $p$-power conductor Dirichlet character, while the one presented here covers points $(k_0,k_0,j_0+1)$ with $0\leq j_0\leq k_0$. \end{remark} \subsubsection*{Study of $Z_f^{\syn}$} We recall that syntomic cohomology is characterised by sitting in a long exact sequence (induced by the Leray spectral sequence): \begin{multline*} \cdots \to H_{\syn}^{2k+2}(\mathscr{W}_{k,\numberset{Q}_p(\mu_N)},\numberset{Q}_p(k+1)) \\ \to F^0 H_{\dR}^{2k+2}(\mathscr{W}_{k,\numberset{Q}_p(\mu_N)},\numberset{Q}_p(k+1)) = F^{k+1} H_{\dR}^{2k+2}(\mathscr{W}_{k,\numberset{Q}_p(\mu_N)},\numberset{Q}_p) \\ \to H_{\rig}^{2k+2}(\mathscr{W}_{k,\numberset{Q}_p(\mu_N)},\numberset{Q}_p(k+1)) \to \cdots \end{multline*} We can thus write $Z_f^{\syn} = (\xi,\lambda)$ with $\xi$ and $\lambda$ respectively rigid and de Rham cohomology classes. By the compatibility of the regulator $\regulator{\syn}$ and $\cl_{\dR}$ in the Leray spectral sequence~\cite[Theorem~7.5]{besser:syntomic}, the composition of $\regulator{\syn}$ and the morphism to de Rham cohomology in the long exact sequence coincides with $\cl_{\dR}$. Therefore $\lambda$ is equal to the image of $\cl_{\dR}(Z_f) = \lambda_N(f^*)\Omega_{f,f}$ under the extension of the coefficients to $\numberset{Q}_p$. Our first aim is to compute it. We start by studying the behaviour of $\omega_f$ and $\eta_f$ in syntomic cohomology.
In order to pass to de Rham cohomology with $(K_f\otimes\numberset{Q}_p)$-coefficients we must start from classes defined over $K_f$ rather than over $\C$. According to~\cite[§6.1]{kings.loeffler:rankin-eisenstein1} the differentials $\omega_f, \eta_f$ can be easily modified to be $K_f$-rational. Explicitly, if we define \begin{equation*} \omega_f' = \tau(\psi^{-1})\omega_f, \quad \eta_f' = \tau(\psi^{-1})\eta_f \end{equation*} then $\omega_f', \eta_f' \in M_{\dR}(f)$, i.e.\ they are $K_f$-rational. We can then safely regard $\omega_f', \eta_f' \in M_{\dR}(f)_{\numberset{Q}_p}$. \begin{lemma} \label{lemma:frobenius-eigenspaces} Let $\alpha_f$ be the Satake parameter of $f$ of smallest valuation. Then the eigenspace $V_{\phi}(\alpha_f)$ in $M_{\dR}(f)_{\numberset{Q}_p}$ is a complement of $\Fil^1 M_{\dR}(f)_{\numberset{Q}_p}$. If $f$ is supersingular, then $V_{\phi}(\beta_f)$ is also a complement of $\Fil^1 M_{\dR}(f)_{\numberset{Q}_p}$, and $\Fil^1 M_{\dR}(f)_{\numberset{Q}_p}$ is not a $\phi$-eigenspace. \end{lemma} \begin{proof} By transport of structure via the isomorphism $C_{\mathrm{crys}}$, the module $M_{\rig}(f)_{\numberset{Q}_p}\simeq M_{\dR}(f)_{\numberset{Q}_p}\simeq D_{\mathrm{crys}}(M_{\etbar}(f)_{\numberset{Q}_p})$ enjoys a semi-linear action of Frobenius $\phi$ (by comparison with rigid cohomology) and an action of $G_{\numberset{Q}_p}$ (by comparison with étale cohomology). It also has a natural Hodge filtration. Being in the image of the functor $D_{\mathrm{crys}}$, it is a \emph{weakly admissible} filtered $\phi$-module in the sense of~\cite[Definition~8.2.1]{brinon.conrad:preparatory}, that is its Newton polygon lies on or above its Hodge polygon, equivalently the rightmost endpoint of the Newton polygon lies on or above the rightmost endpoint of the Hodge polygon. Since the category of weakly admissible filtered $\phi$-modules is abelian, this property is preserved by taking subobjects and subquotients. 
The Hodge filtration of $\Fil^1 M_{\dR}(f)_{\numberset{Q}_p}$ has a unique non-zero piece in degree $k+1$, which is $1$-dimensional. Therefore the rightmost endpoint of the Hodge polygon is $(1,k+1)$. Suppose now that $\Fil^1 M_{\dR}(f)_{\numberset{Q}_p}$ is also a $\phi$-eigenspace. Since $\phi$ acts on $M_{\dR}(f)_{\numberset{Q}_p}$ with eigenvalues $\alpha_f$ and $\beta_f$, $\Fil^1 M_{\dR}(f)_{\numberset{Q}_p}$ must be equal to either eigenspace. Suppose it equals $V_{\phi}(\alpha_f)$, then the slope is $v_p(\alpha_f)$, so the rightmost endpoint of the Newton polygon is $(1,v_p(\alpha_f))$. However $\alpha_f$ is the Satake parameter of smallest valuation, so we have $v_p(\alpha_f) < k+1$. This contradicts the fact that $\Fil^1 M_{\dR}(f)_{\numberset{Q}_p}$ is a weakly admissible module. Therefore we deduce that $\Fil^1 M_{\dR}(f)_{\numberset{Q}_p}$ is \emph{not} the eigenspace $V_{\phi}(\alpha_f)$. Finally, the two $1$-dimensional subspaces $V_{\phi}(\alpha_f)$ and $\Fil^1 M_{\dR}(f)_{\numberset{Q}_p}$ of $M_{\dR}(f)_{\numberset{Q}_p}$ do not coincide, so they must be complementary, given that the total space is $2$-dimensional. If in addition we suppose that $f$ is supersingular, we have also $0 < v_p(\alpha_f) < k+1$, and the same inequalities hold for $\beta_f$. Therefore, the argument shows that $\Fil^1 M_{\dR}(f)_{\numberset{Q}_p}$ cannot coincide with $V_{\phi}(\beta_f)$ as well, so in this case it cannot be a $\phi$-eigenspace at all. \end{proof} \begin{remark} In the ordinary case $v_p(\alpha_f) = 0$ and $v_p(\beta_f)=k+1$, so $\Fil^1 M_{\dR}(f)_{\numberset{Q}_p}$ might coincide with the eigenspace $V_{\phi}(\beta_f)$. \end{remark} Since $\eta_f'$ is a generator of $M_{\dR}(f)_{\numberset{Q}_p}/\Fil^1 M_{\dR}(f)_{\numberset{Q}_p}$, by the above lemma it has a \emph{unique} lift to the eigenspace where $\phi$ acts as $\alpha_f$. 
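The slope comparison in the proof of Lemma~\ref{lemma:frobenius-eigenspaces} can be made concrete in the simplest supersingular case, $k=0$ and $a_p(f)=0$ (an illustration only, not needed in the sequel):

```latex
% alpha_f and beta_f are the roots of x^2 - a_p(f) x + p^{k+1} psi(p), so:
v_p(\alpha_f) + v_p(\beta_f) = v_p\bigl(p^{k+1}\psi(p)\bigr) = k+1 = 1,
\qquad a_p(f) = 0 \implies v_p(\alpha_f) = v_p(\beta_f) = \tfrac{1}{2}.
% Hodge polygon of the line Fil^1: single slope k+1 = 1, endpoint (1,1).
% Newton polygon of a phi-eigenline: single slope 1/2, endpoint (1,1/2).
\tfrac{1}{2} < 1 = k+1,
% so a phi-eigenline equipped with this filtration is not weakly admissible:
% Fil^1 M_dR(f)_{Q_p} is not a phi-eigenspace, as the lemma asserts.
```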
Clearly there are several choices of a lift to $M_{\dR}(f)_{\numberset{Q}_p}$, with an ambiguity given by an element of $\Fil^1 M_{\dR}(f)_{\numberset{Q}_p}$. Thanks to the next lemma this ambiguity does not play any role in our future computations. \begin{lemma} Let $\eta_1, \eta_2 \in M_{\dR}(f)_{\numberset{Q}_p}$ be two lifts of $\eta_f'$. Then $\omega_f' \otimes \eta_1 - \eta_1 \otimes \omega_f' = \omega_f' \otimes \eta_2 - \eta_2 \otimes \omega_f'$. \end{lemma} \begin{proof} The difference of the two is: \begin{equation*} (\omega_f' \otimes \eta_1 - \eta_1 \otimes \omega_f') - (\omega_f'\otimes \eta_2 - \eta_2 \otimes \omega_f') = \omega_f' \otimes (\eta_1 - \eta_2) - (\eta_1 - \eta_2)\otimes \omega_f'. \end{equation*} Now $\eta_1 - \eta_2 \in \Fil^1 M_{\dR}(f)_{\numberset{Q}_p}$, which is a $1$-dimensional $K_f\otimes\numberset{Q}_p$-vector space. By writing $\eta_1 - \eta_2 = u\omega_f'$ for some $u\in K_f\otimes\numberset{Q}_p$, the difference becomes $u(\omega_f' \otimes \omega_f' - \omega_f' \otimes \omega_f') = 0$, and the claim follows. \end{proof} Since we will only consider linear combinations of this kind, we can choose as a ``favourite'' lift the unique lift $\eta^{\alpha_f} \in V_{\phi}(\alpha_f)$ satisfying $\phi(\eta^{\alpha_f}) = \alpha_f\eta^{\alpha_f}$. The above argument proves that the image of $\tau(\psi^{-1})^2\Omega_{f,f}$ under extension of scalars to $\numberset{Q}_p$ can be identified with $\omega_f'\otimes\eta^{\alpha_f} -\eta^{\alpha_f}\otimes\omega_f'$. We sum up this discussion in the following proposition. \begin{proposition} \label{prop:pair-representing-syn} $Z_f^{\syn}$ is represented by the pair $(\xi, \lambda_N(f^*)\tau(\psi^{-1})^{-2}(\omega_f'\otimes\eta^{\alpha_f} - \eta^{\alpha_f}\otimes\omega_f'))$. \end{proposition} We now study the situation over the non-proper $\mathcal{E}^k \times \mathcal{E}^k$.
The analogue of the long exact sequence above holds for cohomology with compact support: \begin{multline*} \cdots \to H_{\syn,c}^{2k+2}((\mathcal{E}^k \times \mathcal{E}^k)_{\numberset{Q}_p(\mu_N)},\numberset{Q}_p(k+1)) \\ \to F^0 H_{\dR,c}^{2k+2}((\mathcal{E}^k \times \mathcal{E}^k)_{\numberset{Q}_p(\mu_N)},\numberset{Q}_p(k+1)) \\ \to H_{\rig,c}^{2k+2}((\mathcal{E}^k \times \mathcal{E}^k)_{\numberset{Q}_p(\mu_N)},\numberset{Q}_p(k+1)) \to \cdots \end{multline*} Recall that $\lambda_N(f^*)\tau(\psi^{-1})^2\Omega_{f,f}$ belongs to $M_{\dR}(f\otimes f)$, which has canonical inclusions as a direct summand of the cohomology of $\mathcal{E}^k \times \mathcal{E}^k$ with and without compact support. Therefore $\lambda_N(f^*)\tau(\psi^{-1})^2\Omega_{f,f}$ has a canonical image in compactly supported de Rham cohomology, under extension of scalars to $\numberset{Q}_p$. By the above exact sequence, the preimage of a compactly supported de Rham cohomology class is a compactly supported syntomic cohomology class. By putting everything together we deduce the following result. \begin{proposition} \label{prop:syntomic-compact-support} There exists a canonical compactly supported cohomology class $Z^{\syn}_{f,c} \in H^{2k+2}_{\syn,c}((\mathcal{E}^k \times \mathcal{E}^k)_{\numberset{Q}_p(\mu_N)},\numberset{Q}_p(k+1))$ that maps to $Z_f^{\syn}$ in $H^{2k+2}_{\syn}((\mathcal{E}^k\times \mathcal{E}^k)_{\numberset{Q}_p(\mu_N)},\numberset{Q}_p(k+1))$. \end{proposition} \subsubsection*{Splitting the pairing} The pairing that we are interested in computing is: \begin{equation*} \langle \regulator{\syn}(\Xi^{k,k,j}),Z_f^{\syn} \rangle_{\syn, \mathscr{W}_k}. \end{equation*} Since our aim is to split this computation into an ``open'' and a ``cuspidal'' part, the standard strategy would be to use the Gysin long exact sequence induced by the embedding $\mathcal{E}^k \times \mathcal{E}^k \hookrightarrow \mathscr{W}_k$. 
In our case it suffices to use the existence of Gysin maps in the category, and the following projection formula due to Besser. The theorem is stated in terms of finite-polynomial cohomology, of which syntomic cohomology is a particular case. \begin{theorem}[{\cite[Corollary~5.3]{besser:on}}] Let $\iota\colon X\to Y$ be a morphism of smooth integral $\mathcal{O}_K$-schemes, and identify the groups $H^{2\dim (X)+1}_{\fp,c}(X) \simeq K$ and $H^{2\dim (Y)+1}_{\fp,c}(Y) \simeq K$ via the respective trace maps. Let $d = \dim(Y) - \dim(X)$. Let $x\in H^j_{\fp,c}(X,m)$ and $y\in H^i_{\fp}(Y,n)$, and suppose $i+j = 2\dim (X)+1$ and $n+m > \dim (X)$. Then there exists a push-forward map in any degree \begin{gather*} \iota_* \colon H^{\bullet}_{\fp}(X,d_0) \to H^{\bullet+2d}_{\fp}(Y,d_0+d) \shortintertext{for all $d_0\in\numberset{N}$, and} y \cup \iota_* x = \iota^* y \cup x. \end{gather*} \end{theorem} In our case, $\mathscr{W}_k$ and $\mathcal{E}^k \times \mathcal{E}^k$ are both schemes with a model over $\numberset{Z}[1/N]$, so they can be regarded as smooth $\numberset{Z}_p$-schemes; and their base changes to $\numberset{Q}_p(\mu_N)$ can be regarded as smooth $\mathcal{O}_{\numberset{Q}_p(\mu_N)}$-schemes. The embedding $\iota\colon \mathcal{E}^k \times \mathcal{E}^k \hookrightarrow \mathscr{W}_k$ is an open immersion, hence a smooth morphism, so we are in a position to apply the theorem with $d=0$. Moreover, $Z_f^{\syn} \in H^{2k+2}_{\syn}((\mathcal{E}^k\times\mathcal{E}^k)_{\numberset{Q}_p(\mu_N)},\numberset{Q}_p(k+1))$ is associated to a canonical compactly supported class $Z^{\syn}_{f,c}$ by Proposition~\ref{prop:syntomic-compact-support}. If we choose \begin{equation*} x = Z^{\syn}_{f,c}, \quad y = \regulator{\syn}(\Xi^{k,k,j}) \end{equation*} the hypotheses of the theorem are verified, since the sums of degrees and twists are \begin{gather*} 2+2k+3+2k = 5+4k = 2\dim (\mathcal{E}^k \times \mathcal{E}^k)+1, \\ k+1+2+2k-j = 3+3k-j > \dim (\mathcal{E}^k \times \mathcal{E}^k).
\end{gather*} We can thus apply the theorem to find \begin{equation*} \mathrm{tr}_{\mathscr{W}_k} (\regulator{\syn}(\Xi^{k,k,j}) \cup \iota_* Z_{f,c}^{\syn}) = \mathrm{tr}_{\mathcal{E}^k\times \mathcal{E}^k} (\iota^* \regulator{\syn}(\Xi^{k,k,j}) \cup Z_{f,c}^{\syn}). \end{equation*} We can now make the following considerations: \begin{enumerate} \item $\iota_* Z_{f,c}^{\syn} = Z_f^{\syn}$ since they are represented by the same pair of cohomology classes (in rigid and de Rham cohomology, $M_{\star}(f\otimes f)$ has canonical isomorphisms between cohomologies of $\mathscr{W}_k$ and $\mathcal{E}^k \times \mathcal{E}^k$ with and without compact support); \item $\iota^*\regulator{\syn}(\Xi^{k,k,j}) = \regulator{\syn}(\iota^*\Xi^{k,k,j})$ by compatibility of regulators and pull-backs, and by construction of $\Xi^{k,k,j}$ \begin{equation*} \iota^*\Xi^{k,k,j} = \frac{1}{2}\iota^*\widetilde{\Xi}^{k,k,j} + \frac{(-1)^{k+j}}{2}\iota^*(\rho')^*\widetilde{\Xi}^{k,k,j}. \end{equation*} Now $\iota^*(\rho')^* = (\rho'\circ \iota)^* = (\iota\circ \rho')^*$ because the two morphisms commute. Indeed, $\rho'$ swaps the two components of the fibre product, while $\iota$ embeds $\mathcal{E}^k\times\mathcal{E}^k$ into $\mathscr{W}_k$. The two compositions then agree: for every $P,Q\in\mathcal{E}^k$, \begin{equation*} \iota\circ\rho'(P,Q) = \iota(Q,P) = (Q,P) = \rho'(P,Q) = \rho'\circ \iota(P,Q). \end{equation*} We deduce \begin{equation*} \iota^*\Xi^{k,k,j} = \frac{1}{2}\iota^*\widetilde{\Xi}^{k,k,j} + \frac{(-1)^{k+j}}{2}(\rho')^*\iota^*\widetilde{\Xi}^{k,k,j}. \end{equation*} The inner pull-back unravels as: \begin{equation} \label{eq:pull-back-lcbf} \begin{split} \iota^* \widetilde{\Xi}^{k,k,j} &= \iota^*(\widetilde{\mathrm{BF}}^{[k,k,j]}_{\mot} + i_{\mathrm{cusp},*}(\xi_{\beta})) \\ &= \iota^*(\widetilde{\mathrm{BF}}^{[k,k,j]}_{\mot}) + \iota^*(i_{\mathrm{cusp},*}(\xi_{\beta})) = \mathrm{BF}^{[k,k,j]}_{\mot} + \iota^*(i_{\mathrm{cusp},*}(\xi_{\beta})).
\end{split} \end{equation} Finally, putting everything together: \begin{equation} \label{eq:pull-back-cbf} \begin{split} \iota^* \Xi^{k,k,j} &= \frac{1}{2}\bigl(\mathrm{BF}^{[k,k,j]}_{\mot} + \iota^*(i_{\mathrm{cusp},*}(\xi_{\beta}))\bigr) + \frac{(-1)^{k+j}}{2}\bigl((\rho')^*\mathrm{BF}^{k,k,j}_{\mot} + (\rho')^*\iota^*(i_{\mathrm{cusp},*}(\xi_{\beta}))\bigr) \\ &= \mathrm{BF}^{[k,k,j]}_{\mot} + \frac{1}{2}\iota^*(i_{\mathrm{cusp},*}(\xi_{\beta})) + \frac{(-1)^{k+j}}{2}(\rho')^*\iota^*(i_{\mathrm{cusp},*}(\xi_{\beta})). \end{split} \end{equation} \end{enumerate} By linearity of the trace map, we conclude that \begin{equation} \label{eq:pairing-in-two} \begin{split} \langle \regulator{\syn}(\Xi^{k,k,j}),Z_f^{\syn} \rangle_{\syn, \mathscr{W}_k} &= \langle \mathrm{BF}^{[k,k,j]}_{\syn}, Z_{f,c}^{\syn} \rangle_{\syn,\mathcal{E}^k\times\mathcal{E}^k} \\ &+ \frac{1}{2}\langle \iota^*(i_{\mathrm{cusp},*}(\xi_{\beta}))_{\syn}, Z_{f,c}^{\syn} \rangle_{\syn,\mathcal{E}^k\times\mathcal{E}^k} \\ &+ \frac{(-1)^{k+j}}{2}\langle \iota^*(i_{\mathrm{cusp},*}(\xi_{\beta}))_{\syn}, (\rho')^* Z_{f,c}^{\syn} \rangle_{\syn,\mathcal{E}^k\times\mathcal{E}^k}. \end{split} \end{equation} We have therefore written the sought pairing as the sum of three terms. The first is nothing other than the pairing of the non-compactified Beilinson-Flach classes with $Z_{f,c}^{\syn}$ on the open variety, while the other two represent the contributions at the cuspidal locus. In the remainder of this subsection we will evaluate them separately, showing that the first contribution gives a $p$-adic $L$-value, while the others vanish. By comparing equations~\eqref{eq:pull-back-lcbf} and~\eqref{eq:pull-back-cbf} we also notice that the difference between $\iota^*\widetilde{\Xi}^{k,k,j}$ and $\iota^*\Xi^{k,k,j}$ is supported only on the cuspidal locus. 
Therefore, the pairings of their syntomic realisations with $Z_f^{\syn}$ differ only by some contributions of the form $\langle \iota^*(i_{\mathrm{cusp},*}(\xi_{\beta}))_{\syn}, - \rangle_{\syn}$. \begin{remark} The argument essentially requires the smoothness of $\mathscr{W}_k$; it is then clear that using $\overline{\mathcal{E}}^k\times\overline{\mathcal{E}}^k$ would not have been possible. Since we also need properness to apply Hodge theory, we conclude that enlarging the variety we work on to $\mathscr{W}_k$ is truly essential. No other variety ``in between'' is both smooth and proper. \end{remark} \subsubsection*{Computing the first term} We now compute the first pairing in equation~\eqref{eq:pairing-in-two}. Reasoning as we did for étale cohomology, we can use the edge map and the projection onto the $(f,f)$-isotypical component to link compatibly the row giving the pairing in syntomic cohomology \begin{multline*} H^{3+2k}_{\syn}((\mathcal{E}^k \times \mathcal{E}^k)_{\numberset{Q}_p(\mu_N)},\numberset{Q}_p(2+2k-j)) \times H^{2+2k}_{\syn,c}((\mathcal{E}^k \times \mathcal{E}^k)_{\numberset{Q}_p(\mu_N)},\numberset{Q}_p(k+1)) \\ \xrightarrow{\langle , \rangle_{\syn}} H^1_{\syn}(\Spec \numberset{Q}_p(\mu_N),\numberset{Q}_p(1+k-j)) \end{multline*} with the following row \begin{multline*} H^1(\Spec \mathcal{O}_{\numberset{Q}_p(\mu_N)},M_{\rig}(f\otimes f)^*(-j)) \times H^0(\Spec \mathcal{O}_{\numberset{Q}_p(\mu_N)},M_{\rig,c}(f\otimes f)(k+1)) \\ \xrightarrow{\langle , \rangle_{f,f}} H^1_{\syn}(\Spec \numberset{Q}_p(\mu_N),\numberset{Q}_p(1+k-j)). \end{multline*} The two rows together form a commutative diagram. We can therefore compute the pairing along the latter one. To go even further, one can compose every map from the first row to the second with an isomorphism, so that the resulting composition is the syntomic Abel-Jacobi map.
Since these isomorphisms are compatible with the cup-product and the trace map, all three rows compute the same pairing, in particular \begin{equation*} \langle \mathrm{BF}^{[k,k,j]}_{\syn},Z_{f,c}^{\syn}\rangle_{\syn} = \langle \mathrm{AJ}_{\syn}(\mathrm{BF}^{[k,k,j]}_{\syn}),\mathrm{AJ}_{\syn}(Z_{f,c}^{\syn})\rangle_{f,f}. \end{equation*} By~\cite[Proposition~6.3.1]{kings.loeffler:rankin-eisenstein} the pairing $\langle \mathrm{AJ}_{\syn}(-), \tilde{\lambda} \rangle_{f,f}$ depends only on the class of $\tilde{\lambda}$ in de Rham cohomology. This result comes from the fact that the Abel-Jacobi map has as target a quotient of $M_{\dR}(f\otimes f)_{\numberset{Q}_p}$. We can then compute the above pairing regardless of the associated rigid cohomology class. By Propositions~\ref{prop:pair-representing-syn} and~\ref{prop:syntomic-compact-support} and since $M_{\dR}(f\otimes f)$ has canonical isomorphisms between cohomologies of $\mathscr{W}_k$ and $\mathcal{E}^k \times \mathcal{E}^k$ with compact support, the image of $Z_{f,c}^{\syn}$ in de Rham cohomology is the class of $\lambda_N(f^*)\tau(\psi^{-1})^{-2}(\omega_f'\otimes\eta^{\alpha_f} -\eta^{\alpha_f}\otimes\omega_f')$. Therefore the pairing computes to \begin{equation*} \langle \mathrm{BF}^{[k,k,j]}_{\syn},Z_{f,c}^{\syn}\rangle_{\syn} = \lambda_N(f^*)\tau(\psi^{-1})^{-2}\langle \mathrm{AJ}_{\syn}(\mathrm{BF}^{[k,k,j]}_{\syn}),\omega_f'\otimes\eta^{\alpha_f} -\eta^{\alpha_f}\otimes\omega_f'\rangle_{f,f}. \end{equation*} The following theorem links explicitly the pairing in syntomic cohomology with the Rankin-Selberg $p$-adic $L$-function. \begin{theorem}[{\cite[Theorem~6.5.9 and Remark~6.5.10]{kings.loeffler:rankin-eisenstein}}] \label{thm:klz-regulator-formula} Let $\alpha_f$ be the Satake parameter of $f$ of smallest valuation.
If $E(f,f,j+1)\neq 0$, then \begin{multline*} \langle \mathrm{AJ}_{\syn}(\mathrm{BF}^{[k,k,j]}_{\syn}),\eta^{\alpha_f}\otimes \omega_f'\rangle \\ = (-1)^{k-j+1}k!\binom{k}{j}\tau(\psi^{-1})^2\frac{E(f)E^*(f)}{E(f,f,j+1)}L_p^{\mathrm{geom}}(F,F)(k,k,j+1). \end{multline*} \end{theorem} \begin{remark} Theorem~6.5.9 in \cite{kings.loeffler:rankin-eisenstein} states the above result for the case of $f$ ordinary. Remark~6.5.10 therein covers the case of $f$ supersingular, by using the geometric $p$-adic $L$-function. \end{remark} \begin{lemma} We have \begin{equation*} \langle \mathrm{AJ}_{\syn}(\mathrm{BF}^{[k,k,j]}_{\syn,N}),\omega_f'\otimes\eta^{\alpha_f}\rangle = (-1)^{j+1} \langle \mathrm{AJ}_{\syn}(\mathrm{BF}^{[k,k,j]}_{\syn,N}),\eta^{\alpha_f}\otimes \omega_f'\rangle. \end{equation*} \end{lemma} \begin{proof} The proof boils down to considering the action of the involution swapping the two Künneth components in cohomology and its relation with the involution interchanging the two components of the fibre product. The non-compactified Beilinson-Flach classes live in the cohomology of $\mathcal{E}^k\times\mathcal{E}^k$ and of $Y_1(N)^2$. We regard them and the involved differentials as classes in the cohomology of $Y_1(N)^2$. The differential forms are then in the second cohomology group, for which the Künneth isomorphism reads \begin{equation*} H^2(Y_1(N)^2,\TSym^{[k,k]} \mathscr{H}_{\numberset{Q}}(\mathcal{E})(2)) \simeq \bigl(H^1(Y_1(N),\TSym^k \mathscr{H}_{\numberset{Q}}(\mathcal{E})(1))\bigr)^{\otimes 2}. \end{equation*} Let $s$ be the map that swaps the factors in the tensor product, and $\rho$ the morphism exchanging the two factors of $Y_1(N)^2$. Since the isomorphism is induced by the cup product, $s$ and $\rho^*$ differ by a sign that depends on the degree. We deduce $\omega_f'\otimes\eta^{\alpha_f} = s(\eta^{\alpha_f}\otimes\omega_f') = -\rho^*(\eta^{\alpha_f}\otimes\omega_f')$. 
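To spell out the sign, recall the standard behaviour of the swap under the Künneth isomorphism: for homogeneous classes $x$ and $y$ one has \begin{equation*} \rho^*(x\otimes y) = (-1)^{\deg x \cdot \deg y}\, y\otimes x = (-1)^{\deg x \cdot \deg y}\, s(x\otimes y), \end{equation*} so on $H^1\otimes H^1$ the two involutions are related by $\rho^* = -s$, which is the relation used in the deduction above.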
Now~\cite[Proposition~5.2.3]{kings.loeffler:rankin-eisenstein1} implies \begin{align*} \langle \mathrm{AJ}_{\syn}(\mathrm{BF}^{[k,k,j]}_{\syn,N}), \omega_f'\otimes\eta^{\alpha_f} \rangle &= \langle \mathrm{AJ}_{\syn}(\mathrm{BF}^{[k,k,j]}_{\syn,N}), -\rho^*(\eta^{\alpha_f}\otimes\omega_f') \rangle \\ &= -\langle \rho^*(\mathrm{AJ}_{\syn}(\mathrm{BF}^{[k,k,j]}_{\syn,N})), \eta^{\alpha_f}\otimes\omega_f' \rangle \\ &= (-1)^{j+1} \langle \mathrm{AJ}_{\syn}(\mathrm{BF}^{[k,k,j]}_{\syn,N}), \eta^{\alpha_f}\otimes \omega_f' \rangle. \end{align*} \end{proof} \begin{remark} The argument of the proof would apply to the second version of the compactified classes, though not to the first one, because of the extra correction term $i_{\mathrm{cusp},*}(\xi_{\beta})$. Indeed, this term is defined asymmetrically with respect to the two factors of the fibre product. There is then little hope that $\widetilde{\Xi}^{k,k,j}$ lies in any of the eigenspaces for the swapping involution in general (see Subsubsection~\ref{subsubsec:swapping}), while the classes $\mathrm{BF}^{[k,k,j]}$ and $\Xi^{k,k,j}$ behave better in this respect. \end{remark} The above results combine into \begin{theorem} \label{thm:first-term} If $j$ is even, $0\leq j\leq k$ and $E(f,f,j+1)\neq 0$, then \begin{equation*} \langle \mathrm{BF}^{[k,k,j]}_{\syn},Z_{f,c}^{\syn}\rangle_{\syn} = (-1)^k k!\binom{k}{j}\lambda_N(f^*)\frac{2E(f)E^*(f)}{E(f,f,j+1)}L_p^{\mathrm{geom}}(F,F)(k,k,j+1). \end{equation*} \end{theorem} \subsubsection*{Computing the cuspidal terms} It remains to compute the other terms on the right hand side of equation~\eqref{eq:pairing-in-two}. As we have shown that the first one computes to a $p$-adic $L$-value, the second and third one have to be regarded as ``error terms''. We now show that they vanish, by showing that the cohomology class corresponding to $\iota^*(i_{\mathrm{cusp},*}(\xi_{\beta}))$ is already zero in motivic cohomology.
\begin{lemma} \label{lemma:cuspidal-is-zero} We have $\iota^*(i_{\mathrm{cusp},*}(\xi_{\beta})) = 0$. \end{lemma} The proof of this lemma is contained in the proof of~\cite[Proposition~9.4]{brunault.chida:regulators}. We give here a sketch for the reader's benefit. \begin{proof}[Sketch of the proof] The core idea is that the defect $i_{\mathrm{cusp},*}(\xi_{\beta})$ is the pushforward of a cohomology class on the cuspidal locus, so its pullback to $\mathcal{E}^k \times \mathcal{E}^k$ vanishes by an excision argument. By Theorem~\ref{thm:brunault-chida} we can equivalently consider $i_{\mathrm{cusp},*}(\xi_{\beta})$ in the cohomology of $U^{k,k}$. We have to show that $(\eta_1)^* (i_{\mathrm{cusp},*}(\xi_{\beta})) = 0$, where $\eta_1$ is the open embedding $\mathcal{E}^k\times\mathcal{E}^k \to U^{k,k}$. Let $i_1 \colon Z^k \times \mathcal{E}^k \to \hat{\mathcal{E}}^{k,*}\times \mathcal{E}^k$ be the canonical closed embedding and $j_1 \colon \mathcal{E}^k \times \mathcal{E}^k \to \hat{\mathcal{E}}^{k,*} \times \mathcal{E}^k$ the complementary open embedding. Then $\eta_1$ factors as $j_2 \circ j_1$ in the following commutative diagram: \begin{equation*} \begin{tikzcd} \mathcal{E}^k \times \mathcal{E}^k \arrow["j_1"]{r} & \hat{\mathcal{E}}^{k,*} \times \mathcal{E}^k \arrow["j_2"]{r} & U^{k,k} \\ & Z^k \times \mathcal{E}^k \arrow["i_1"]{u} \arrow[equal]{r} & Z^k \times \mathcal{E}^k \arrow["i_{\mathrm{cusp}}"]{u} \end{tikzcd} \end{equation*} By commutativity $j_2^*(i_{\mathrm{cusp},*}(\xi_{\beta})) = (i_1)_*(\xi_{\beta})$. Considering the long exact localisation sequence induced by the closed embedding $i_1$ and its open complement $j_1$, we deduce $j_1^* \circ (i_1)_* = 0$. \end{proof} The lemma clearly implies that $\langle \iota^*(i_{\mathrm{cusp},*}(\xi_{\beta}))_{\syn}, \bullet \rangle_{\syn} = 0$ for all choices of the second class. \subsubsection*{Conclusion} The above results taken together prove the following.
\begin{theorem} If $j$ is even, $0\leq j\leq k$ and $E(f,f,j+1)\neq 0$, then \begin{multline*} \langle \regulator{\syn}(\Xi^{k,k,j}),Z_f^{\syn} \rangle_{\syn, \mathscr{W}_k} = \langle \regulator{\syn}(\widetilde{\Xi}^{k,k,j}),Z_f^{\syn} \rangle_{\syn, \mathscr{W}_k} \\ = (-1)^k k!\binom{k}{j}\lambda_N(f^*)\frac{2E(f)E^*(f)}{E(f,f,j+1)}L_p^{\mathrm{geom}}(F,F)(k,k,j+1). \end{multline*} \end{theorem} \begin{proof} By Lemma~\ref{lemma:cuspidal-is-zero}, all the terms in equation~\eqref{eq:pairing-in-two} vanish bar the first, hence \begin{equation*} \langle \regulator{\syn}(\Xi^{k,k,j}),Z_f^{\syn} \rangle_{\syn, \mathscr{W}_k} = \langle \mathrm{BF}^{[k,k,j]}_{\syn}, Z_{f,c}^{\syn} \rangle_{\syn,\mathcal{E}^k\times\mathcal{E}^k}. \end{equation*} Applying Theorem~\ref{thm:first-term} shows that this computes to the $p$-adic $L$-value in the statement of the theorem. The other equality follows from the discussion after equation~\eqref{eq:pairing-in-two}. Indeed, the difference of the two pairings equals \begin{equation*} \frac{(-1)^{k+j}}{2}\langle \iota^*(i_{\mathrm{cusp},*}(\xi_{\beta}))_{\syn}, (\rho')^* Z_{f,c}^{\syn} \rangle_{\syn,\mathcal{E}^{k,k}} - \frac{1}{2}\langle \iota^*(i_{\mathrm{cusp},*}(\xi_{\beta}))_{\syn}, Z_{f,c}^{\syn} \rangle_{\syn,\mathcal{E}^{k,k}} \end{equation*} which is again zero by the proved results. \end{proof} We then obtain the main theorem of this subsection. \begin{theorem}[$p$-adic regulator formula] \label{thm:syntomic-l-value} If $j$ is even, $0\leq j\leq k$ and $E(f,f,j+1)\neq 0$, then \begin{equation} \label{eq:syntomic-l-value} \regulator{\syn}(b_f) = (-1)^k k!\binom{k}{j}\lambda_N(f^*)\frac{2E(f)E^*(f)}{E(f,f,j+1)}L_p^{\mathrm{geom}}(F,F)(k,k,j+1). 
\end{equation} \end{theorem} \begin{proof} By the commutativity of the diagram expressing the compatibility of $\regulator{\syn}$ with cup product and trace map, we directly compute \begin{equation*} \regulator{\syn}(b_f) = \regulator{\syn}(\langle \Xi^{k,k,j}, Z_f\rangle) = \langle \regulator{\syn}(\Xi^{k,k,j}),Z_f^{\syn} \rangle_{\syn,\mathscr{W}_k} \end{equation*} and we conclude by the previous theorem, thanks to the hypothesis on $j$. \end{proof} Theorem~\ref{thm:syntomic-l-value} expresses the syntomic regulator of $b_f$ in terms of $p$-adic $L$-values. Since étale cohomology is where we truly find $p$-adic Galois representations and their Galois cohomology groups (for example Selmer groups), we also give the following equivalent formulation in terms of the étale regulator of a motivic cohomology class. \begin{corollary} If $j$ is even, $0\leq j\leq k$ and $E(f,f,j+1)\neq 0$, then the local étale cohomology class $\mathrm{loc}_p(\regulator{\et}(b_f))\in H^1_{\et}(\Spec \numberset{Q}_p(\mu_N),\numberset{Q}_p(1+k-j))\otimes K_f$ satisfies the formula: \begin{equation} \label{eq:etale-l-value} \log(\mathrm{loc}_p(\regulator{\et}(b_f))) = (-1)^k k!\binom{k}{j}\lambda_N(f^*)\frac{2E(f)E^*(f)}{E(f,f,j+1)}L_p^{\mathrm{geom}}(F,F)(k,k,j+1). \end{equation} \end{corollary} These results are the $p$-adic analogues of Theorem~\ref{thm:complex-l-value}. Notice also that the parity conditions coincide. We have hence shown that the motivic class $b_f$ contains both complex and $p$-adic information about special values of $L$-functions. \begin{remark} The $p$-adic $L$-function appearing in equations~\eqref{eq:syntomic-l-value} and~\eqref{eq:etale-l-value} is the geometric $p$-adic $L$-function defined in Theorem~\ref{thm:geometric-l-function}. It turns out that the $L$-function itself---and not just its values---is related to the Beilinson-Flach Euler system.
The link is given by the Perrin-Riou regulator, and is detailed in the explicit reciprocity law~\cite[Theorem~7.1.5]{loeffler.zerbes:rankin-eisenstein}. \end{remark} \section{Beilinson-Flach \texorpdfstring{$K$}{K}-elements} \begin{definition} The \emph{Beilinson-Flach} $K$-element associated to $f$ is the element \begin{equation*} b_f = \langle\Xi^{k,k,j}, Z_f\rangle \in H^1_{\mot}(\Spec \numberset{Q}(\mu_N),\numberset{Q}(1+k-j))\otimes K_f. \end{equation*} \end{definition} By definition, motivic cohomology groups are torsion-free graded pieces of rationalised $K$-groups: \begin{equation*} H^1_{\mot}(\Spec \numberset{Q}(\mu_N),1+k-j) = \numberset{Q} \otimes_{\numberset{Z}} \mathrm{gr}_{1+k-j}^{\gamma} K_{2k-2j+1}(\numberset{Q}(\mu_N)) \end{equation*} which shows that $b_f$ lives in a finite-dimensional vector space. We want to show that it actually lives in a $1$-dimensional vector space of the kind~\eqref{eq:chi-eigenspace}, and we do so by exploiting the Galois action. \begin{proposition} \label{prop:galois-action} The Galois group $\gal{\numberset{Q}(\mu_N)}{\numberset{Q}}$ acts on $b_f$ through the character $\psi^{-1}$, i.e.\ $b_f^{\sigma} = (\psi)_{Gal}^{-1}(\sigma)b_f$. Explicitly, the automorphism $\sigma_q \colon \zeta \mapsto \zeta^q$ acts as $b_f^{\sigma_q} = \psi(q)b_f$. \end{proposition} \begin{proof} The intersection pairing is Galois-equivariant, hence \begin{equation*} b_f^{\sigma} = \langle (\Xi^{k,k,j})^{\sigma}, Z_f^{\sigma} \rangle. \end{equation*} The Beilinson-Flach class is defined over $\numberset{Q}$, so it is left invariant by $\sigma$. To compute $Z_f^{\sigma}$, we write $\sigma = \sigma_q$ for some $q\in(\numberset{Z}/N\numberset{Z})^{\times}$, where $\sigma_q$ is the unique element of $\gal{\numberset{Q}(\mu_N)}{\numberset{Q}}$ characterised by $\zeta_N \mapsto \zeta_N^q$. In the composition defining $Z_f$, the only correspondence which is not defined over $\numberset{Q}$ is the one induced by the Atkin-Lehner involution.
According to~\cite[75]{ohta:on}, the action of the Galois group on the Atkin-Lehner involution is given by composition with a diamond operator: $w_N^{\sigma_q} = w_N \circ \langle q \rangle$. Since $\langle q \rangle^*$ acts on $\omega_f$ and $\eta_f$ as multiplication by $\psi(q) = (\psi)_{Gal}^{-1}(\sigma_q)$, the claim is proved. \end{proof} The proposition shows in particular that $b_f \in \bigl(H^1_{\mot}(\Spec \numberset{Q}(\mu_N),1+k-j)\otimes K_f\bigr)^{\psi}$. The comparison of $b_f$ with the class $\phi_{\psi}$ from Theorem~\ref{thm:deligne-regulator-beilinson-element} is the main aim of the next section and will lead to our main result. To allow this comparison, we need to show that $b_f$ and $\phi_{\psi}$ are in the same $1$-dimensional subspace. \label{remark:beilinson} As stated in Theorem~\ref{thm:deligne-regulator-beilinson-element} due to Beilinson (see also the account in~\cite[§4.1]{soule:regulateurs}), when $j<k$ \begin{equation*} \dim (H^1_{\mot}(\Spec \numberset{Q}(\mu_N),1+k-j)\otimes K_f)^{\psi} = \begin{cases} 1&\text{if $\psi(-1) = (-1)^{k-j}$} \\ 0&\text{else} \end{cases} \end{equation*} Under our running hypothesis that $j$ is even, we are always in the first case, as $\psi$ and $k$ necessarily have the same parity because $f$ is not zero. Therefore, in this case the result suffices to ensure that $b_f$ and $\phi_{\psi}$ lie in the same $1$-dimensional $K_f$-vector space. However, when $j=k$ the above result does not apply. Furthermore, in that case the first cohomology group of $\numberset{Q}(\mu_N)$ is infinite-dimensional. More specifically, we have \begin{equation*} H^1_{\mot}(\Spec \numberset{Q}(\mu_N),1) \simeq \numberset{Q}(\mu_N)^{\times} \end{equation*} which is an abelian group of infinite rank. To exploit a $1$-dimensional eigenspace, we then need extra conditions that bound the overall dimension.
We will show in this section that we need the full integrality of the motivic classes $b_f$ in order to restrict the dimension of the $\psi$-eigenspace. We first prove results regarding the integrality of the motivic classes, and then prove a dimension formula for the eigenspace of interest. \subsection{Integrality of \texorpdfstring{$K$}{K}-elements} \label{subsec:integrality} In this subsection we discuss the integrality of Beilinson-Flach $K$-elements. The question we are addressing is whether or not they live in $(H_{\mot}^1(\Spec \mathcal{O}_{\numberset{Q}(\mu_N)},1+k-j) \otimes K_f)^{\psi}$. We separate the cases $j=k$ and $j<k$. \subsubsection*{Case $j<k$} In this case $2k-2j+1 \geq 3$, which entails the equality \begin{equation*} K_{2k-2j+1}(\numberset{Q}(\mu_N))\otimes_{\numberset{Z}} \numberset{Q} = K_{2k-2j+1}(\mathcal{O}_{\numberset{Q}(\mu_N)})\otimes_{\numberset{Z}} \numberset{Q}. \end{equation*} Therefore we can regard $b_f$ as being integral, as it already lies in the torsion-free part of the $K$-group. \subsubsection*{Case $j=k$} There are no straightforward integrality results regarding $K_1$ that we can apply in this case, but we have an explicit equality $K_1(\numberset{Q}(\mu_N)) = \numberset{Q}(\mu_N)^{\times}$, which is our starting point. Notice that the special case $k=j=0$ was originally addressed by Dasgupta. In particular, for every possible value of $k$ there is at least one value of $j$ such that the Beilinson-Flach $K$-element associated to those $k,j$ is integral. Notice that as $j=k$ the $K$-elements live in $H^1_{\mot}(\Spec \numberset{Q}(\mu_N),1)\otimes K_f = \numberset{Q}(\mu_N)^{\times} \otimes K_f$, with equality given by the Kummer map.
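For later use, we also recall the explicit description of the étale Kummer map, which reappears below. Given $u\in\numberset{Q}(\mu_N)^{\times}$, choose a compatible system of $p$-power roots $(u_n)_{n\geq 0}$ with $u_0 = u$ and $u_{n+1}^p = u_n$; then $u$ is sent to the class of the cocycle \begin{equation*} \sigma \longmapsto \Bigl(\frac{\sigma(u_n)}{u_n}\Bigr)_{n\geq 0} \in \varprojlim_n \mu_{p^n} = \numberset{Z}_p(1), \end{equation*} and this class is independent of the choice of the roots.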
By applying the étale regulator, we have a map \begin{equation*} \begin{tikzcd} H^1_{\mot}(\Spec \numberset{Q}(\mu_N),1) \arrow["\regulator{\et}"]{r} \arrow[equal]{d} & H^1_{\et}(\Spec \numberset{Q}(\mu_N),\numberset{Q}_p(1)) \arrow[equal]{d} \\ \numberset{Q}(\mu_N)^{\times}\otimes \numberset{Q} \arrow["\regulator{\et}"]{r} & \numberset{Q}(\mu_N)^{\times} \otimes \numberset{Q}_p \end{tikzcd} \end{equation*} where on the right we have again used the identification given by the Kummer map. We will show that the étale realisation of $b_f$ is in $(\mathcal{O}_{\numberset{Q}(\mu_N),\mathcal{S}}^{\times} \otimes K_f)\otimes \numberset{Q}_p$, where $\mathcal{S}$ is a suitably chosen finite set of primes of $\numberset{Q}(\mu_N)$. This will prove that the motivic classes belong to $\mathcal{O}_{\numberset{Q}(\mu_N),\mathcal{S}}^{\times} \otimes K_f$, which is the corresponding cohomology group of $\Spec \mathcal{O}_{\numberset{Q}(\mu_N),\mathcal{S}}$. Recall that inside the first étale cohomology group one constructs Selmer groups. We are interested in particular in the Bloch-Kato Selmer group. Indeed, the Kummer map gives a further identification: \begin{equation*} \begin{tikzcd} H^1_f(\numberset{Q}(\mu_N),\numberset{Q}_p(1)) \arrow[hook]{r} \arrow[equal]{d} & H^1_{\et}(\Spec \numberset{Q}(\mu_N),\numberset{Q}_p(1)) \arrow[equal]{d} \\ \mathcal{O}_{\numberset{Q}(\mu_N)}^{\times} \otimes \numberset{Q}_p \arrow[hook]{r} & \numberset{Q}(\mu_N)^{\times} \otimes \numberset{Q}_p \end{tikzcd} \end{equation*} which continues to hold if we relax the Bloch-Kato local condition for primes in a finite set $\mathcal{S}$.
More precisely, we have the identification \begin{equation*} H^1_{f,\mathcal{S}}(\numberset{Q}(\mu_N),\numberset{Q}_p(1)) = \mathcal{O}_{\numberset{Q}(\mu_N),\mathcal{S}}^{\times} \otimes \numberset{Q}_p \end{equation*} where we use the following notation: \begin{itemize} \item $\mathcal{O}_{\numberset{Q}(\mu_N),\mathcal{S}}^{\times}$ are the $\mathcal{S}$-units of $\numberset{Q}(\mu_N)$, that is, the set of elements of $\numberset{Q}(\mu_N)$ that are units at all primes outside $\mathcal{S}$; \item $H^1_{f,\mathcal{S}}(\numberset{Q}(\mu_N),\numberset{Q}_p(1))$ is the relaxed Selmer group, consisting of cohomology classes $x\in H^1_{\et}(\Spec \numberset{Q}(\mu_N),\numberset{Q}_p(1))$ such that $\mathrm{loc}_{\mathfrak{q}}(x) \in H^1_f$ for every $\mathfrak{q}\not\in \mathcal{S}$, and $\mathrm{loc}_{\mathfrak{q}}(x) \in H^1_g$ for every $\mathfrak{q}\in \mathcal{S}$. \end{itemize} Recall that for primes $\mathfrak{q}\centernot\mid p$, locally at $\mathfrak{q}$ by definition $H^1_f = H^1_{ur}$ and $H^1_g = H^1$. The statement that $b_f$ belongs to $(\mathcal{O}^{\times}_{\numberset{Q}(\mu_N),\mathcal{S}}\otimes K_f)\otimes \numberset{Q}$ is then equivalent to saying that $\regulator{\et}(b_f)$ belongs to $H^1_{f,\mathcal{S}}(\numberset{Q}(\mu_N),\numberset{Q}_p(1)) \otimes K_f$, as the two subgroups correspond to one another under $\regulator{\et}$. We will prove the following theorem. \begin{theorem} \label{thm:bf-bloch-kato-units} Let $\mathcal{S}$ be the set of primes of $\numberset{Q}(\mu_N)$ dividing $N$. Then \begin{equation*} \regulator{\et}(b_f) \in H^1_{f,\mathcal{S}}(\numberset{Q}(\mu_N),\numberset{Q}_p(1))\otimes K_f. \end{equation*} Equivalently, $b_f \in \mathcal{O}_{\numberset{Q}(\mu_N),\mathcal{S}}^{\times} \otimes K_f$. Moreover, $\Xi^{k,k,k}_{\et}\in H^1_{f,\mathcal{S}}(\numberset{Q}(\mu_N),M_{\etbar}(f\otimes f)^*(-k))$.
\end{theorem} \begin{proof}[Proof of the theorem] We have to show that the localisation of $\regulator{\et}(b_f)$ at a prime ideal $\mathfrak{q}$ of $\numberset{Q}(\mu_N)$ belongs to $H^1_f(\numberset{Q}(\mu_N)_{\mathfrak{q}},\numberset{Q}_p(1))$ if $\mathfrak{q}$ is outside $\mathcal{S}$, and to $H^1_g(\numberset{Q}(\mu_N)_{\mathfrak{q}},\numberset{Q}_p(1))$ if $\mathfrak{q}$ lies in $\mathcal{S}$. First of all, by definition $\regulator{\et}(b_f)$ can be written as the pairing of $\Xi_{\et}^{f,f,k}$ and $Z_f^{\et}$, where the latter belongs to $H^0_{\et}(\Spec \numberset{Q}(\mu_N),M_{\etbar}(f\otimes f)(k+1))$. In particular, $Z_f^{\et}$ corresponds to a morphism $z\colon \numberset{Q}_p \to M_{\etbar}(f\otimes f)(k+1)$ of $G_{\numberset{Q}(\mu_N)}$-representations, where $\numberset{Q}_p$ carries the trivial action. By tensoring, $z$ induces a morphism $M_{\etbar}(f\otimes f)^*(-k) \to \numberset{Q}_p(1)$. Moreover, morphisms of $G_{\numberset{Q}(\mu_N)}$-representations are also morphisms of $G_{\numberset{Q}(\mu_N)_{\mathfrak{q}}}$-representations, and they commute with restriction to subgroups and scalar extensions. Therefore they respect the local unramified and Bloch-Kato conditions, meaning that: \begin{align*} x \in H^1_{ur}(\numberset{Q}(\mu_N)_{\mathfrak{q}},M_{\etbar}(f\otimes f)^*(-k)) &\implies z(x) \in H^1_{ur}(\numberset{Q}(\mu_N)_{\mathfrak{q}},\numberset{Q}_p(1)), \\ x \in H^1_f(\numberset{Q}(\mu_N)_{\mathfrak{q}},M_{\etbar}(f\otimes f)^*(-k)) &\implies z(x) \in H^1_f(\numberset{Q}(\mu_N)_{\mathfrak{q}},\numberset{Q}_p(1)). \end{align*} Therefore, it suffices to prove that \begin{equation*} \Xi_{\et}^{f,f,k} \in H^1_{f,\mathcal{S}}(\numberset{Q}(\mu_N),M_{\etbar}(f\otimes f)^*(-k)) \subseteq H^1_{\et}(\Spec \numberset{Q}(\mu_N),M_{\etbar}(f\otimes f)^*(-k)).
\end{equation*} We first treat the case of primes in $\mathcal{S}$: we have to show that the localisation to $\numberset{Q}(\mu_N)_{\mathfrak{q}}$ of $\Xi_{\et}^{f,f,k}$ is in $H^1_g(\numberset{Q}(\mu_N)_{\mathfrak{q}},M_{\etbar}(f\otimes f)^*(-k))$ when $\mathfrak{q}$ lies over a prime divisor of $N$. Since $\mathfrak{q} \centernot\mid p$ (because $p \centernot\mid N$) this is the whole cohomology group, so the claim is trivial. We now address the case of primes outside $\mathcal{S}$. In order to do so we build on~\cite[Theorem~1.2.4]{scholl:motives}, which shows that $M_{\etbar}(f)$ is unramified at all rational primes not dividing $pN$, and crystalline at $p$ (in Scholl's language, our $M_{\etbar}(f)$ is the $p$-adic realisation of the motive denoted $M(f)$). It follows that $M_{\etbar}(f\otimes f)$, $M_{\etbar}(f\otimes f)^*$ and their twists are also unramified at all rational primes away from $pN$ and crystalline at $p$, since the category of $B_{\mathrm{crys}}$-admissible representations is closed under tensor products and duals. Now $\Xi_{\et}^{f,f,k} \in H^1_{\et}(\Spec \numberset{Q}(\mu_N),M_{\etbar}(f\otimes f)^*(-k))$, so it is automatically unramified at all primes $\mathfrak{q}$ of $\numberset{Q}(\mu_N)$ not lying over a prime divisor of $pN$. On the other hand, even if the representation is crystalline, the localisation of $\Xi_{\et}^{f,f,k}$ at primes over $p$ is not necessarily in the local Bloch-Kato subgroup. For the benefit of the reader we also give a second, slightly different argument to conclude that $\Xi_{\et}^{f,f,k}$ is unramified at primes not dividing $pN$. From~\cite[Remark~4.2.1]{scholl:motives} we know that $\mathscr{W}_k$ has a smooth and proper model over $\numberset{Z}[1/N]$. Taking into account the base change, the classes $\Xi_{\et}^{f,f,k}$ are actually in $H^1_{\et}(\Spec \mathcal{O}_{\numberset{Q}(\mu_N),\mathcal{S}\cup\mathcal{S}_p},M_{\etbar}(f\otimes f)^*(-k))$.
Here we had to add to the set $\mathcal{S}$ the set $\mathcal{S}_p$ of primes over $p$, as $M_{\etbar}(f\otimes f)^*(-k)$ is only unramified outside $pN$. Let $\Sigma = \mathcal{S}\cup\mathcal{S}_p$ and use the isomorphism \begin{equation*} H^1_{\et}(\Spec \mathcal{O}_{\numberset{Q}(\mu_N),\Sigma},M_{\etbar}(f\otimes f)^*(-k)) \simeq \; H^1(\gal{\numberset{Q}(\mu_N)^{\Sigma}}{\numberset{Q}(\mu_N)},M_{\etbar}(f\otimes f)^*(-k)) \end{equation*} where $\numberset{Q}(\mu_N)^{\Sigma}$ is the maximal extension of $\numberset{Q}(\mu_N)$ unramified outside $\Sigma$. Since in this extension all the inertia subgroups for primes outside $\Sigma$ are trivial, the localisation at those primes necessarily lands in $H^1_{ur}$. This shows that $\Xi_{\et}^{f,f,k}$ is unramified at all primes $\mathfrak{q}$ not lying over a prime divisor of $pN$. To conclude the proof, if $\mathfrak{q} \mid p$ then the localisation of $\Xi_{\et}^{f,f,k}$ is in the image of the Bloch-Kato exponential, by the reasoning at the beginning of Subsection~\ref{subsec:p-adic-argument}. Since the local Bloch-Kato subgroup coincides with the image of $\exp$, this proves the claim. \end{proof} The theorem shows that Beilinson-Flach $K$-elements are integral units at all primes of $\numberset{Q}(\mu_N)$ that do not divide $N$. We remark that the non-compactified and compactified Beilinson-Flach cohomology classes exhibit the same behaviour.
Indeed, if $S$ is the set of rational primes dividing $N$, then the $\numberset{Q}$-version of $\mathrm{BF}^{[f,f,k]}_{\et}$ is in $H^1_{f,S}(\Spec \numberset{Q},M_{\etbar}(f\otimes f)^*(-k))$ because: \begin{itemize} \item if $l \mid N$, then $H^1_g(\numberset{Q}_l,M_{\etbar}(f\otimes f)^*(-k))$ equals the full $H^1(\numberset{Q}_l,M_{\etbar}(f\otimes f)^*(-k))$ so the claim is trivial; \item if $l\not\in S$, $l \neq p$ then the Bloch-Kato local condition coincides with the unramified local condition, and the non-compactified classes are unramified at $l$ by the remark after Definition~3.3.2 in~\cite{kings.loeffler:rankin-eisenstein}, as $l$ does not divide $pN$; \item if $l\not\in S$, $l=p$ then the localisation of $\mathrm{BF}^{[f,f,k]}_{\et}$ is in the local Bloch-Kato subgroup as it is in the image of the Bloch-Kato exponential, by the same argument used in the proof, see also the introduction of~\cite{kings.loeffler:rankin-eisenstein1}. \end{itemize} The precise behaviour of $\regulator{\et}(b_f)$ at primes lying over prime divisors of $N$ is harder to investigate. Recall that since the involved representation is $\numberset{Q}_p(1)$, at primes not lying over $p$ the unramified condition becomes: \begin{equation*} H^1_{ur}(\numberset{Q}(\mu_N)_{\mathfrak{q}},\numberset{Q}_p(1)) = H^0(\numberset{Q}(\mu_N)_{\mathfrak{q}},\numberset{Q}_p(1)) = 0. \end{equation*} Since the dimension of $H^1(\numberset{Q}(\mu_N)_{\mathfrak{q}},\numberset{Q}_p(1))$ is $1$, the local unramified condition is not trivial. This is a special feature of the representation $\numberset{Q}_p(1)$, as for every $n\neq 1$ we have $H^1_{ur}(\numberset{Q}(\mu_N)_{\mathfrak{q}},\numberset{Q}_p(n)) = H^1(\numberset{Q}(\mu_N)_{\mathfrak{q}},\numberset{Q}_p(n))$ (they are both $1$-dimensional if $n=0$, and both zero otherwise). Therefore, if $\mathfrak{q} \mid N$ we want to characterise when the localisation of étale classes is zero at $\mathfrak{q}$. 
\begin{proposition} Let $\mathfrak{q} \mid N$. Then the localisation at $\mathfrak{q}$ of \begin{equation*} \Bigl(1-\frac{p^k}{\alpha_f^2}\Bigr)\Bigl(1-\frac{\alpha_f\beta_f}{p^{k+1}}\Bigr)^2\Bigl(1-\frac{\beta_f^2}{p^{k+1}}\Bigr) \mathrm{BF}^{[f,f,k]}_{\et} \end{equation*} belongs to $H^1_f(\numberset{Q}(\mu_N)_{\mathfrak{q}},M_{\etbar}(f\otimes f)^*(-k))$. \end{proposition} \begin{proof} By the proof of~\cite[Theorem~8.1.4(i)]{loeffler.zerbes:rankin-eisenstein}, the specialisation of the three-parameter Beilinson-Flach classes $\mathcal{BF}^{[F,F]}$ at $(k,k,k)$ is mapped under $\mathrm{Pr}^{\alpha_f}\times\mathrm{Pr}^{\alpha_f}$ to a class whose localisation is in $H^1_f(\numberset{Q}(\mu_N)_{\mathfrak{q}},M_{\etbar}(f\otimes f)^*(-k))$. We underline that even though that theorem assumes that the two forms have different weights, the first point does not use this fact. We now compute this class. According to~\cite[Theorem~5.4.2]{loeffler.zerbes:rankin-eisenstein}, the specialisation of $\mathcal{BF}^{[F,F]}$ equals \begin{equation*} \Bigl(1-\frac{p^k}{a_p(F_k)a_p(F_k)}\Bigr)\frac{\mathrm{BF}^{[F_k,F_k,k]}_{\et}}{(-1)^k k!}. \end{equation*} The specialisation at $k$ of the Coleman family $F$ equals the $p$-stabilisation of $f_k$; hence, without loss of generality, $a_p(F_k) = \alpha_f$. This explains the first Euler factor. The factor $(-1)^k k!$ in the denominator is a non-zero scalar, so it does not change the local behaviour, and we can omit it. Notice that the above cohomology class is related to $F_k$, i.e.\ to the $p$-stabilisation of $f_k$ rather than to $f_k$ itself. By~\cite[Proposition~3.5.5]{loeffler.zerbes:rankin-eisenstein}, the map $\mathrm{Pr}^{\alpha_f}\times\mathrm{Pr}^{\alpha_f}$ satisfies \begin{equation*} (\mathrm{Pr}^{\alpha_f} \times \mathrm{Pr}^{\alpha_f})_*(\mathrm{BF}^{[F_k,F_k,k]}_{\et}) = \Bigl(1 - \frac{\alpha_f\beta_f}{p^{k+1}}\Bigr)^2 \Bigl(1-\frac{\beta_f^2}{p^{k+1}}\Bigr) \mathrm{BF}^{[f_k,f_k,k]}_{\et}.
\end{equation*} This provides the missing Euler factors and finishes the proof, since the localisation of the resulting class at $\mathfrak{q}$ is in the local Bloch-Kato subgroup by the quoted theorem. \end{proof} \begin{remark} As explained in the proof, the map $\mathrm{Pr}^{\alpha_f} \times \mathrm{Pr}^{\alpha_f}$ has the effect of translating Beilinson-Flach classes related to the $p$-stabilisation of $f$ into classes related to $f$ itself. \end{remark} The proposition applies equally to the compactified étale classes $\Xi^{f,f,k}_{\et}$, as they coincide with $\mathrm{BF}^{[f,f,k]}_{\et}$ by Proposition~\ref{prop:classes-of-f-coincide}. We then obtain the following corollary. \begin{corollary} \label{cor:localisation-compactified} Let $\mathfrak{q} \mid N$. Then the localisation at $\mathfrak{q}$ of \begin{equation} \label{eq:localisation-at-q-compact} \Bigl(1-\frac{p^k}{\alpha_f^2}\Bigr)\Bigl(1-\frac{\alpha_f\beta_f}{p^{k+1}}\Bigr)^2\Bigl(1-\frac{\beta_f^2}{p^{k+1}}\Bigr) \Xi^{f,f,k}_{\et} \end{equation} belongs to $H^1_f(\numberset{Q}(\mu_N)_{\mathfrak{q}},M_{\etbar}(f\otimes f)^*(-k))$. \end{corollary} Recall that $\Xi^{f,f,k}_{\et}$ always lies in $H^1_{f,\mathcal{S}}(\numberset{Q}(\mu_N),M_{\etbar}(f\otimes f)^*(-k))$. From the above proposition we directly deduce \begin{corollary} \label{cor:localisation-compact-in-selmer} Let $\mathfrak{q}\mid N$. If $\psi(p)\neq 1$ and $\beta_f^2 \neq p^{k+1}$, then the localisation of $\regulator{\et}(b_f)$ at $\mathfrak{q}$ is zero. In particular, $\regulator{\et}(b_f) \in H^1_f(\numberset{Q}(\mu_N),\numberset{Q}_p(1))\otimes K_f$; equivalently, $b_f \in \mathcal{O}_{\numberset{Q}(\mu_N)}^{\times} \otimes K_f$. \end{corollary} \begin{proof} By the same argument used in the proof of Theorem~\ref{thm:bf-bloch-kato-units}, the image of a class in the (global) Selmer group under a morphism is still in the (global) Selmer group.
Since in this case the class~\eqref{eq:localisation-at-q-compact} is in the Selmer group at $\mathfrak{q}$, the class \begin{equation*} \Bigl(1-\frac{p^k}{\alpha_f^2}\Bigr)\Bigl(1-\frac{\alpha_f\beta_f}{p^{k+1}}\Bigr)^2\Bigl(1-\frac{\beta_f^2}{p^{k+1}}\Bigr) \regulator{\et}(b_f) \end{equation*} is in the Selmer group at $\mathfrak{q}$ too, because $\regulator{\et}(b_f)$ is the image of $\Xi^{f,f,k}_{\et}$ under a morphism. Since $H^1_f(\numberset{Q}(\mu_N)_{\mathfrak{q}},\numberset{Q}_p(1)) = H^1_{ur}(\numberset{Q}(\mu_N)_{\mathfrak{q}},\numberset{Q}_p(1)) = 0$ for $\mathfrak{q} \centernot\mid p$, its localisation at $\mathfrak{q}$ is zero. Under the hypotheses in the statement, all the factors multiplying $\regulator{\et}(b_f)$ are non-zero, so the localisation of the class itself must vanish. Indeed, the only Euler factors that could vanish are the second and the third. The second one is non-zero: since $\alpha_f$ and $\beta_f$ are the roots of the Hecke polynomial of $f$ at $p$, we have $\alpha_f\beta_f = p^{k+1}\psi(p) \neq p^{k+1}$ by hypothesis. The third one is non-zero directly by hypothesis. This proves the claim. \end{proof} All the extra factors in equation~\eqref{eq:localisation-at-q-compact} appear in $E(f,f,k+1)$. Hence the hypothesis $E(f,f,k+1)\neq 0$ implies that all the extra factors in the statement are non-zero, so by the corollary $\regulator{\et}(b_f)$ vanishes in $H^1(\numberset{Q}(\mu_N)_{\mathfrak{q}},\numberset{Q}_p(1))$ and lies in $H^1_f(\numberset{Q}(\mu_N),\numberset{Q}_p(1))$. We will use this fact in the next section. \begin{corollary} Let $\mathfrak{q} \mid N$. If $\psi(p)\neq 1$ then the localisation of $\regulator{\et}(b_f)$ at $\mathfrak{q}$ is zero for all but finitely many values of $k$. In particular, $\regulator{\et}(b_f) \in H^1_f(\numberset{Q}(\mu_N),\numberset{Q}_p(1))\otimes K_f$ for all but finitely many values of $k$; equivalently, $b_f \in \mathcal{O}_{\numberset{Q}(\mu_N)}^{\times} \otimes K_f$ for the same values of $k$. \end{corollary} \begin{proof} We use the same argument as in the previous corollary. Under these hypotheses the localisation at $\mathfrak{q}$ can be non-zero only if $\beta_f^2 = p^{k+1}$. In this case we deduce $v_p(\alpha_f) = v_p(\beta_f) = (k+1)/2 \leq v_p(a_p(f))$.
The equality can then hold only for finitely many values of $k$, because the slope is bounded in the Coleman family, as $a_p(F)$ is a rigid function and $a_p(F_k) = \alpha_f$. \end{proof} The second Euler factor in equation~\eqref{eq:localisation-at-q-compact} can also vanish; indeed, it vanishes if and only if $\psi(p)=1$. If we do not impose the condition $\psi(p)\neq 1$, then we are not able to tell whether the localisation of $\regulator{\et}(b_f)$ at $\mathfrak{q}$ is zero. Contrary to the condition $\beta_f^2 = p^{k+1}$, which can hold only in finitely many cases, the condition $\psi(p)=1$ can hold infinitely often, as $\psi$ is constant along the family. Therefore, if we assume that $\psi(p)\neq 1$, then along the $1$-dimensional slice $\{(k,k,k) \mid k\in\numberset{N}\}\subseteq \mathcal{W}^3$ the classes $\regulator{\et}(b_f)$ are in the Bloch-Kato Selmer group for all but finitely many $k$. If we do not assume it, we can only deduce that they are in $H^1_{f,\mathcal{S}}(\numberset{Q}(\mu_N),\numberset{Q}_p(1))$. We underline the following equivalence: \begin{equation*} (\psi(p)\neq 1) \wedge (\beta_f^2 \neq p^{k+1}) \iff E(f,f,k+1)\neq 0. \end{equation*} The last condition also appears in the hypotheses of Theorem~\ref{thm:syntomic-l-value}. \subsection{A dimension formula for units} As explained earlier in this section, we now prove a formula computing the dimension of the $\psi$-eigenspace in the case $j=k$. As we proved in the last subsection, Beilinson-Flach $K$-elements are $\mathcal{S}$-integral in general, and integral under additional hypotheses. Combined with the character action, this shows that they lie in \begin{equation*} (H^1_{\mot}(\Spec \mathcal{O}_{\numberset{Q}(\mu_N),\mathcal{S}},1) \otimes K_f)^{\psi}. \end{equation*} In this subsection we compute the dimension of this $K_f$-vector space.
We will use the following isomorphism induced by the Kummer map, as explained at the beginning of Subsection~\ref{subsec:integrality}: \begin{equation*} H^1_{\mot}(\Spec \mathcal{O}_{\numberset{Q}(\mu_N),\mathcal{S}},1)\otimes K_f \simeq \mathcal{O}_{\numberset{Q}(\mu_N),\mathcal{S}}^{\times} \otimes K_f. \end{equation*} The key result of this subsection is the following theorem computing the dimension of a character eigenspace inside the group of $\mathcal{S}$-units. It goes under the name of ``equivariant Dirichlet's units theorem''. \begin{theorem} \label{thm:dimension-units} Let $S\subset\numberset{N}$ be a finite set of rational primes, $\mathcal{S}$ the set of primes of $\mathcal{O}_{\numberset{Q}(\mu_N)}$ lying over those in $S$, $\chi$ a character modulo $N$ induced by a primitive character $\check{\chi}$ modulo $\check{N}$, $M$ a number field containing the values of $\chi$. Then the $\chi$-eigenspace of $\mathcal{O}_{\numberset{Q}(\mu_N),\mathcal{S}}^{\times}$ has dimension: \begin{gather*} \dim (\mathcal{O}_{\numberset{Q}(\mu_N),\mathcal{S}}^{\times} \otimes M)^{\chi} = |\{l\in S \mid l \centernot\mid \check{N}, \; \check{\chi}(l)=1\}| + d_0 \shortintertext{where} d_0 = \begin{cases} 0&\text{$\chi$ odd or trivial} \\ 1&\text{$\chi$ even and non-trivial} \end{cases}. \end{gather*} \end{theorem} Theorem~\ref{thm:dimension-units} is not an original result, but as far as the authors are aware, the only proofs present in the literature are analytic in nature, and rely on the computation of the order of vanishing of the $L$-functions attached to the unit groups and their eigenspaces. The first proof is due to Tate~\cite[§3]{tate:conjectures}, who computed the multiplicity of every character inside the character of a Galois representation $V$, by recasting it as the order of vanishing of an Artin $L$-function. By applying his results to the case where $V$ is the group of $\mathcal{S}$-units, the theorem follows.
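As an illustration of the dimension formula (a worked instance added for concreteness, not part of the original argument), take $N=p$ an odd prime, $\chi$ the trivial character modulo $p$, so that $\check{\chi}$ is the trivial primitive character of conductor $\check{N}=1$, and $S=\{p\}$. Then $d_0=0$, while $p \centernot\mid \check{N}$ and $\check{\chi}(p)=1$, so the theorem predicts:

```latex
\dim (\mathcal{O}_{\numberset{Q}(\mu_p),\mathcal{S}}^{\times} \otimes M)^{\chi}
  = |\{p\}| + d_0 = 1 + 0 = 1.
```

This matches the direct computation: for trivial $\chi$ the eigenspace consists of the Galois-invariant $\mathcal{S}$-units, i.e.\ the $\{p\}$-units of $\numberset{Q}$, namely $\numberset{Z}[1/p]^{\times} = \{\pm p^n \mid n\in\numberset{Z}\}$, which has rank $1$ modulo torsion.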
For a purely algebraic---and more elementary---proof for the cyclotomic case, see the first author's PhD thesis~\cite[Theorem~8.2.1]{arlandini:on}. The theorem should be put in the context of the circle of ideas surrounding the Dirichlet-Chevalley-Hasse $\mathcal{S}$-units theorem, which states that if $M$ is a number field, then its group of $\mathcal{S}$-units $\mathcal{O}_{M,\mathcal{S}}^{\times}$ satisfies: \begin{equation*} \mathcal{O}_{M,\mathcal{S}}^{\times} \simeq \mu(M) \times \Gamma_M \end{equation*} where $\mu(M) = (M^{\times})_{\mathrm{tors}}$ is the torsion subgroup of roots of unity, and $\Gamma_M$ is a real lattice such that \begin{equation*} \Gamma_M \leq \numberset{R}^{r_1+r_2+|\mathcal{S}|}, \quad \mathrm{rank}\; \Gamma_M = r_1+r_2+|\mathcal{S}|-1. \end{equation*} In particular, Theorem~\ref{thm:dimension-units} computes the eigenspaces for the action of the Galois group $\gal{M}{\numberset{Q}}$, associated to its characters, when $M$ is a cyclotomic field. By what was explained above, the theorem implies the following corollary. \begin{corollary} In the hypotheses of Theorem~\ref{thm:dimension-units}, the dimension of the $\chi$-eigenspace of $H^1_{\mot}(\Spec \mathcal{O}_{\numberset{Q}(\mu_N),\mathcal{S}},1)\otimes M$ is: \begin{gather*} \dim (H^1_{\mot}(\Spec \mathcal{O}_{\numberset{Q}(\mu_N),\mathcal{S}},1) \otimes M)^{\chi} = |\{l\in S \mid l\centernot\mid \check{N}, \; \check{\chi}(l)=1\}| + d_0 \shortintertext{where} d_0 = \begin{cases} 0&\text{$\chi$ odd or trivial} \\ 1&\text{$\chi$ even and non-trivial} \end{cases}. \end{gather*} \end{corollary} By Theorem~\ref{thm:bf-bloch-kato-units}, our classes live in $(H^1_{\mot}(\Spec \mathcal{O}_{\numberset{Q}(\mu_N),\mathcal{S}},1)\otimes K_f)^{\psi}$, which by the last result has dimension greater than $1$ in general. To single out a $1$-dimensional eigenspace without forcing new hypotheses on the character, we would need $\mathcal{S}=\emptyset$.
This follows from Corollary~\ref{cor:localisation-compactified} and its consequences. The combination of the integrality results with the dimension formula allows then for a precise comparison of cohomology classes, which will be done in Section~\ref{sec:comparison}. Theorem~\ref{thm:dimension-units} implies the following. \begin{corollary} Let $\chi$ be a character modulo $N$, $M$ a number field containing the values of $\chi$ and $\mathcal{S}$ be the set of prime ideals of $\numberset{Q}(\mu_N)$ lying over a prime divisor of $N$. Then: \begin{itemize} \item if $\chi$ is even and non-trivial, then $(H^1_{\mot}(\Spec \mathcal{O}_{\numberset{Q}(\mu_N)},1)\otimes M)^{\chi}$ is a $1$-dimensional $M$-vector space; if $\chi$ is odd or trivial, then it is a trivial vector space; \item $(H^1_{\mot}(\Spec \mathcal{O}_{\numberset{Q}(\mu_N),\mathcal{S}},1)\otimes M)^{\chi}$ and $(H^1_{f,\mathcal{S}}(\numberset{Q}(\mu_N),\numberset{Q}_p(1))\otimes M)^{\chi}$ are $1$-dimensional vector spaces (over $M$ and $M\otimes\numberset{Q}_p$ respectively) if one of the following applies: \begin{itemize} \item $\chi\neq 1$ is even and $N=\check{N}$ (i.e.\ $\chi$ is primitive); \item $\chi=1$ or odd, and $N/\check{N}$ is the power of a prime in the kernel of $\check{\chi}$; \end{itemize} \end{itemize} \end{corollary} \begin{proof} All the points are direct applications of Theorem~\ref{thm:dimension-units} and the subsequent corollary. For the first point, since the cohomology group is of integral units, the dimension of the $\chi$-eigenspace equals $d_0$. In particular, $d_0=1$ if and only if $\chi$ is even and non-trivial. For the second point, the dimension of the $\chi$-eigenspace of the cohomology group of $\mathcal{S}$-units equals \begin{equation*} |\{l\in S \mid l\centernot\mid \check{N}, \; \check{\chi}(l)=1\}| + d_0. \end{equation*} This equals one if and only if exactly one contribution is $1$ and the other is $0$. 
In the first case, every prime $l\in S$ divides $\check{N}=N$, hence it cannot contribute to the first set. Therefore the first contribution is zero. The second contribution is $1$ as $\chi$ is even and non-trivial. In the second case, $d_0=0$ and the first contribution equals the number of prime factors of $N/\check{N}$ that are in the kernel of $\check{\chi}$. Since $S$ contains exactly one such prime, the claim follows. \end{proof} We remark that the second point includes the case of the trivial character modulo any prime power, as in this case $\check{N}=1$ and $\check{\chi}=1$. Bringing together the results proved in this subsection and in Subsection~\ref{subsec:integrality}, we deduce the following result about the classes $b_f$. \begin{theorem} \label{thm:beilinson-el-1-dim} Let $0\leq j\leq k$ be even, and $\mathcal{S}$ be the set of prime ideals of $\numberset{Q}(\mu_N)$ lying over a prime divisor of $N$. Then: \begin{itemize} \item if $j<k$, then $b_f\in (H^1_{\mot}(\Spec \mathcal{O}_{\numberset{Q}(\mu_N)},1+k-j)\otimes K_f)^{\psi}$ which is a $1$-dimensional $K_f$-vector space; \item if $j=k$, then $b_f\in (H^1_{\mot}(\Spec \mathcal{O}_{\numberset{Q}(\mu_N),\mathcal{S}},1)\otimes K_f)^{\psi}$. If in addition $E(f,f,k+1)\neq 0$, then $b_f\in (H^1_{\mot}(\Spec \mathcal{O}_{\numberset{Q}(\mu_N)},1)\otimes K_f)^{\psi}$. In particular, $b_f$ is in a $1$-dimensional $K_f$-eigenspace if one of the following holds: \begin{itemize} \item $\psi\neq 1$ is even and $N=\check{N}$ (i.e.\ $\psi$ is primitive); \item $\psi=1$ and $N$ is a prime power. \end{itemize} \end{itemize} \end{theorem} Clearly the analogous behaviour applies to $\regulator{\et}(b_f)$, which lies either in the full Selmer group or in the Selmer group outside $\mathcal{S}$ according to the same conditions. Notice that if $j=k$ we exclude the case of $\psi$ odd, as it must have the same parity as $k$ for the cusp form $f$ to be non-zero.
\begin{proof} The first point was proved at the beginning of Subsection~\ref{subsec:integrality}, while the dimension of the vector space follows from Beilinson's result as explained on page~\pageref{remark:beilinson}. The first part of the second point comes by bringing together Theorem~\ref{thm:bf-bloch-kato-units} and Corollary~\ref{cor:localisation-compact-in-selmer}. The dimension counting follows again from the last corollary: when $E(f,f,k+1)\neq 0$, $1$-dimensionality is equivalent to $\psi$ even and non-trivial. When $E(f,f,k+1)=0$, $1$-dimensionality is implied by either of the two stated conditions. Therefore, $\psi$ even, non-trivial and of conductor $N$ always implies $1$-dimensionality, regardless of the value of $E(f,f,k+1)$. The condition $\psi=1$ and $N$ prime power implies it only if $E(f,f,k+1)$ vanishes, but this is guaranteed by the fact that $\psi=1$. \end{proof} We remark that when $j=k$, the condition $\psi$ even, non-trivial and primitive always implies that $b_f$ lives in a $1$-dimensional $K_f$-eigenspace of the kind~\eqref{eq:chi-eigenspace}, independently of the value of $E(f,f,k+1)$. \begin{remark} The case of trivial character and prime power level is not of interest to us, because we cannot show that our classes are non-zero in this case. However, it is historically important in relation to the work of Flach. \end{remark} \section{Factorisation and interpolation in half of the weight space} \label{sec:comparison} In this section we gather the work done in the previous sections to prove Theorem~\ref{intro:final2}, and Theorem~\ref{intro:final1} in half of the weight space. We will then discuss how to generalise the interpolation equations when we relax the hypotheses. \subsection{Comparison of motivic classes} In this subsection we prove a relation between motivic cohomology classes.
We will need Theorems~\ref{thm:deligne-regulator-beilinson-element} and~\ref{thm:complex-l-value}, which allow us to ``lift'' the complex factorisation in terms of motivic classes. We therefore place ourselves under the following hypotheses: \begin{equation*} \begin{cases} k - j \equiv a_{\psi} \mod 2 \\ j \equiv 0 \mod 2 \end{cases} \end{equation*} Actually, the two hypotheses are equivalent, since for $f$ to be non-zero we need $k \equiv a_{\psi}$ modulo $2$. The mentioned results provide equations linking higher cyclotomic classes and Beilinson-Flach $K$-elements to complex $L$-values. In particular, $\phi_{\psi}$ is related to the value $L'(\psi,j-k)$ and $b_f$ is related to the value $L'(f,f,j+1)$. Recall equation~\eqref{eq:leading-terms} relating the leading terms of the complex $L$-functions: \begin{equation*} L'(f,f,j+1) = L'(\psi,j-k)L^{\mathrm{imp}}(\Sym^2 f,j+1)\prod_{\substack{l \mid N \\ l \centernot\mid N_{\psi}}} \Bigl(1 - \frac{\psi(l)}{l^{j-k}} \Bigr). \end{equation*} Equations~\eqref{eq:deligne-regulator-beilinson-element} and~\eqref{eq:deligne-regulator-bf} proved in the paper show that the above translates to \begin{multline*} \regulator{\del}(b_f) = \regulator{\del}(\phi_{\psi}) (-1)^{k+1}\frac{(2\pi iN_{\psi})^{k-j}}{2\tau(\psi)}\frac{\lambda_N(f^*)}{(-4\pi)^{k+1}\langle f,f \rangle}\Bigl( \frac{k!}{(k-j)!} \Bigr)^2 \\ \cdot L^{\mathrm{imp}}(\Sym^2 f,j+1)\prod_{\substack{l \mid N \\ l \centernot\mid N_{\psi}}} \Bigl(1 - \frac{\psi(l)}{l^{j-k}} \Bigr). \end{multline*} Let \begin{equation*} \Upsilon = (-1)^{k+1}\frac{(2\pi iN_{\psi})^{k-j}\lambda_N(f^*)}{2\tau(\psi)(-4\pi)^{k+1}\langle f,f \rangle}\Bigl( \frac{k!}{(k-j)!} \Bigr)^2 L^{\mathrm{imp}}(\Sym^2 f,j+1)\smashoperator{\prod_{\substack{l \mid N \\ l \centernot\mid N_{\psi}}}} \Bigl(1 - \frac{\psi(l)}{l^{j-k}}\Bigr) \end{equation*} then $\regulator{\del}(b_f) = \regulator{\del}(\phi_{\psi})\Upsilon$.
We want to show that $\Upsilon$ is not just the scalar defect in Deligne cohomology, but that it is also the scalar giving the difference between the motivic cohomology classes. In order to do this we must ensure that $\Upsilon$ makes sense as a scalar in motivic cohomology. This means that we need to show that it belongs to $K_f$ or to some finite extension of it. As preparation, we recall the following result due to Schmidt. \begin{lemma}[{\cite[Corollary~2.6]{schmidt:p-adic}}] Let $s\in \{1,\ldots,2k+2\}$ and \begin{equation*} I(f,s) = \frac{L(\Sym^2 f,s)}{\pi^{k+1} \langle f,f \rangle} \Bigl( \frac{\tau(\psi^{-1})}{(2\pi i)^{s-k-1}} \Bigr)^{1+\delta} \end{equation*} with $\delta = 0$ if $s\in\{1,\ldots,k+1\}$ and $\delta=1$ if $s\in\{k+2,\ldots,2k+2\}$. Then $I(f,s)\in\overline{\numberset{Q}}$ and $I(f,s)^{\sigma} = I(f^{\sigma},s)$ for every $\sigma\in G_{\numberset{Q}}$. \end{lemma} Schmidt's result is actually more general as it concerns twisted $L$-values, but this version will suffice for our needs. The analogous result holds for $L^{\mathrm{imp}}(\Sym^2 f)$, as the ratio between the two $L$-functions depends on $f$ Galois-equivariantly. Indeed, it is a ratio of Euler factors. The lemma in particular implies: \begin{corollary} $I(f,s)\in K_f$ for all $s\in\{1,\ldots,2k+2\}$. \end{corollary} \begin{proof} For every $\sigma\in G_{K_f}$ we have $I(f,s)^{\sigma} = I(f,s)$ as all the coefficients of $f$ are in $K_f$ by definition. \end{proof} \begin{proposition} We have $\Upsilon \in K_f(\zeta_N)$, in particular $\Upsilon\in\overline{K_f}$. \end{proposition} \begin{proof} Clearly \begin{equation*} \frac{(-1)^{k+1}}{2}\Bigl( \frac{k!}{(k-j)!} \Bigr)^2 \prod_{\substack{l \mid N \\ l \centernot\mid N_{\psi}}} \Bigl(1 - \frac{\psi(l)}{l^{j-k}}\Bigr) \in K_f \end{equation*} as $K_f$ contains the values of $\psi$.
By~\cite[Theorem~2.1]{atkin.li:twists}, the Atkin-Lehner pseudo-eigenvalue can be expressed as: \begin{equation*} \lambda_N(f^*) = \frac{N^{\frac{k}{2}+1} \tau(\psi^{-1})}{a_N(f^*)}. \end{equation*} Clearly, $a_N(f^*)\in K_f$. Therefore, we are left with \begin{equation*} \Upsilon = C\cdot\frac{\tau(\psi^{-1})}{\tau(\psi)}\frac{(2\pi iN_{\psi})^{k-j}}{(-4\pi)^{k+1}\langle f,f \rangle} L^{\mathrm{imp}}(\Sym^2 f,j+1), \qquad C\in K_f. \end{equation*} By the above corollary we deduce: \begin{equation*} \tau(\psi^{-1})\cdot\frac{(2\pi iN_{\psi})^{k-j}}{(-4\pi)^{k+1}\langle f,f \rangle} L^{\mathrm{imp}}(\Sym^2 f,j+1) = \frac{N_{\psi}^{k-j}}{-4^{k+1}}I(f,j+1) \in K_f \end{equation*} since $j+1 \leq k+1$. This shows $\Upsilon = \tilde{C}\tau(\psi)^{-1}$ with $\tilde{C}\in K_f$; since $\tau(\psi)$ is a non-zero element of the field $K_f(\zeta_N)$, so is $\tau(\psi)^{-1}$, which concludes the proof. \end{proof} Without loss of generality we can suppose $\zeta_N\in K_f$. Indeed, if this is not the case the whole argument can be reproduced replacing $K_f$ with $K_f(\zeta_N)$. By the above results, the classes $b_f, \phi_{\psi}$ are in the same $K_f$-vector space and $\Upsilon \in K_f$, so we deduce the relation $\regulator{\del}(b_f) = \Upsilon\regulator{\del}(\phi_{\psi}) = \regulator{\del}(\Upsilon\phi_{\psi})$. We now want to deduce from this a relation between motivic cohomology classes. We know that $\phi_{\psi}\in (H^1_{\mot}(\Spec \mathcal{O}_{\numberset{Q}(\mu_N)},1+k-j)\otimes K_f)^{\psi}$ from the paragraph on page~\pageref{par:integrality-cyclotomic}.
Regarding the classes $b_f$, we would like to apply Theorem~\ref{thm:beilinson-el-1-dim}, which shows that: \begin{itemize} \item if $j<k$, then $b_f\in (H^1_{\mot}(\Spec \mathcal{O}_{\numberset{Q}(\mu_N)},1+k-j)\otimes K_f)^{\psi}$ too, and this is a $1$-dimensional $K_f$-vector space; \item if $j=k$, then $b_f$ is in a $1$-dimensional eigenspace where $G_{\numberset{Q}}$ acts through $\psi^{-1}$: \begin{itemize} \item if $\psi\neq 1$ is even and $N=N_{\psi}$; \item if $\psi=1$ and $N$ has only one prime divisor. \end{itemize} \end{itemize} If $j=k$ the class $b_f$ is always in a superspace of $(H^1_{\mot}(\Spec \mathcal{O}_{\numberset{Q}(\mu_N)},1)\otimes K_f)^{\psi}$. When $\psi\neq 1$ is even and $N=N_{\psi}$, both spaces are $1$-dimensional and coincide, but in the other case the latter is trivial by Theorem~\ref{thm:dimension-units}. Since we essentially need the fact that $b_f$ and $\phi_{\psi}$ are non-zero and in the same $1$-dimensional vector space, we add to our hypotheses the following: \begin{description} \item[H1] if $j=k$, we assume $\psi\neq 1$ and $N=N_{\psi}$ (i.e.\ $\psi$ primitive). \end{description} Notice that we do not need to assume that $\psi$ is even, because it is already implied by the fact that $j=k$ is even, and $k$ and $\psi$ have the same parity. Under these hypotheses $b_f$ and $\phi_{\psi}$ live in the same $1$-dimensional $K_f$-vector space for all choices of $j$. We are now in a position to prove the following relation between motivic cohomology classes. \begin{theorem} \label{thm:motivic-relation} Let $j$ be even, $0\leq j\leq k$ and assume H1. Then the following equality of motivic classes holds: $b_f = \Upsilon\phi_{\psi}$. \end{theorem} \begin{proof} The restriction of the Deligne regulator to $(H^1_{\mot}(\Spec \mathcal{O}_{\numberset{Q}(\mu_N)},1+k-j)\otimes K_f)^{\psi}$ is non-trivial, and under the assumptions of the theorem this space is $1$-dimensional. Therefore, the restriction of the regulator is injective.
The relation $\regulator{\del}(b_f) = \regulator{\del}(\Upsilon\phi_{\psi})$ then implies the claim. \end{proof} \subsection{\texorpdfstring{$p$}{p}-adic factorisation and interpolation} If we were in the ordinary case, the next step would be to deduce a relationship between the elements $b_f$ and $\phi_{\psi}$ in the $p$-adic setting by applying the syntomic regulator to the last theorem, and then prove the theorem by making the $p$-adic symmetric square appear from the extra factor $\Upsilon$---which has $L^{\mathrm{imp}}(\Sym^2 f,j+1)$ as a factor---thanks to its interpolation equation. That argument works well in the ordinary case, because the only $L$-function for which we lack a regulator formula is the symmetric square one, but we can still pass from its complex (critical) values to $p$-adic ones via interpolation. By contrast, in the supersingular case we face the problem of the \emph{existence} of a $p$-adic symmetric square $L$-function. As we cannot ``translate'' the complex factor into a $p$-adic one, this missing piece jeopardises the whole argument. We then choose a different route: we \emph{define} a $2$-variable $p$-adic $L$-function via the putative factorisation, and then prove that this interpolates a family of $L$-functions $L^{\mathrm{imp}}(\Sym^2 \cdot, \cdot)$ where both the modular form and the complex variable are allowed to vary analytically. Recall that we assume $\alpha_f\neq\beta_f$, to guarantee the existence of $F$ and $L_p^{\mathrm{geom}}(F,F)$. \begin{definition} Let $U = V_1\cap V_2$ be the largest affinoid open such that the function $L_p^{\mathrm{geom}}(F,F)$ is well-defined over $U\times U\times\mathcal{W}$.
The auxiliary $p$-adic $L$-function $\mathscr{L}_p$ is defined as: \begin{align*} \mathscr{L}_p \colon U\times \mathcal{W} &\to \overline{\numberset{Q}_p} \\ (\sigma_1,\sigma_2) &\mapsto \frac{L_p^{\mathrm{geom}}(F,F)(\sigma_1,\sigma_1,\sigma_2)}{L_p(\psi,\sigma_2-\sigma_1-1)} \end{align*} for all $(\sigma_1,\sigma_2)\in U\times\mathcal{W}$ such that $L_p(\psi,\sigma_2-\sigma_1-1) \neq 0$. \end{definition} We may be introducing poles at the zeros of $L_p(\psi)$, so in general $\mathscr{L}_p$ is a meromorphic function. We have taken the ``slice'' of the $3$-variable geometric $L$-function $(\sigma_1,\sigma_2) \mapsto L_p^{\mathrm{geom}}(F,F)(\sigma_1,\sigma_1,\sigma_2)$, so this is a function on a subset of $\mathcal{W}^2$ rather than of $\mathcal{W}^3$. Applying the syntomic regulator to both sides of the relation in Theorem~\ref{thm:motivic-relation} gives $\regulator{\syn}(b_f) = \regulator{\syn}(\Upsilon\phi_{\psi}) = \Upsilon\regulator{\syn}(\phi_{\psi})$, where again we can bring $\Upsilon$ out because it makes sense as a scalar over the localisation of $K_f$, thanks to the last proposition. To recast the syntomic regulators in terms of $p$-adic $L$-values we need Theorems~\ref{thm:syntomic-regulator-beilinson-element} and~\ref{thm:syntomic-l-value}. The parity conditions of those theorems are satisfied as we already assumed that $j$ is even. However, to apply the $p$-adic regulator formula for $b_f$ we need to add another assumption to our hypotheses: \begin{description} \item[H2] if $j=k$, we assume $E(f,f,k+1)\neq 0$. \end{description} We underline that hypothesis H2 implies that $\psi$ is non-trivial, as $E(f,f,k+1)\neq 0$ necessarily entails $\psi(p)\neq 1$. In this case H1 simplifies to the assumption that $N=N_{\psi}$, or equivalently that $\psi$ is primitive, if $j=k$. Under these hypotheses we can now apply the $p$-adic regulator theorems for $\phi_{\psi}$ and $b_f$.
By equations~\eqref{eq:syntomic-regulator-beilinson-element} and~\eqref{eq:syntomic-l-value}, unravelling the relation $\regulator{\syn}(b_f) = \regulator{\syn}(\phi_{\psi})\Upsilon$ we obtain: \begin{gather} \frac{(-1)^k k!\binom{k}{j}\lambda_N(f^*)\frac{2E(f)E^*(f)}{E(f,f,j+1)}L_p^{\mathrm{geom}}(F,F)(k,k,j+1)}{(-1)^{k-j}\frac{(k-j)!}{N_{\psi}^{k-j}} i^{a_{\psi}} \Bigl(1-\frac{\psi^{-1}(p)}{p^{1+k-j}}\Bigr)^{-1} L_p(\psi,j-k)} = \Upsilon \label{eq:auxp-unraveled} \intertext{which rearranges to} \begin{split} \mathscr{L}_p(k,j+1) = &(-1)^{k+1}j! \frac{E(f,f,j+1)}{E(f)E^*(f)}\frac{(2\pi i)^{k-j}i^{a_{\psi}}}{4\tau(\psi)(4\pi)^{k+1}\langle f,f \rangle} \\ &\cdot L^{\mathrm{imp}}(\Sym^2 f,j+1)\Bigl(1-\frac{\psi^{-1}(p)}{p^{1+k-j}}\Bigr)^{-1}\prod_{\substack{l \mid N \\ l \centernot\mid N_{\psi}}} \Bigl(1 - \frac{\psi(l)}{l^{j-k}}\Bigr). \end{split} \notag \end{gather} The formula is well-defined, as $E(f)\neq 0$ always, and $E^*(f)\neq 0$ because we assumed $\alpha_f\neq\beta_f$. We recall that if $j=k$, we need to assume H1 and H2 in order to prove the above equation, but of course the value of $\mathscr{L}_p$ is well-defined anyway. We also prove the following sanity check. \begin{lemma} Let $j$ be even and assume H1 and H2. Then $\mathscr{L}_p(k,j+1) \in \overline{\numberset{Q}_p}$. \end{lemma} \begin{proof} Rearranging~\eqref{eq:auxp-unraveled} without unpacking $\Upsilon$ we obtain \begin{equation*} \mathscr{L}_p(k,j+1) = \Bigl(1-\frac{\psi^{-1}(p)}{p^{1+k-j}}\Bigr)^{-1} \frac{i^{a_{\psi}}(k-j)!}{\lambda_N(f^*)N_{\psi}^{k-j}k!\binom{k}{j}} \frac{ E(f,f,j+1)}{2E(f)E^*(f)} \Upsilon. \end{equation*} On the right hand side every explicit factor is algebraic over $\numberset{Q}_p$, and $\Upsilon$ is in (a finite extension of) $K_f$ by the last proposition and the remark thereafter. \end{proof} This proves that $\mathscr{L}_p(k)$ actually interpolates the values of the complex function $L^{\mathrm{imp}}(\Sym^2 f)$. 
This should be interpreted as saying that if we specialise $F$ at $k$, then $\mathscr{L}_p$ interpolates the corresponding symmetric square $L$-function. Notice that $F$ is not guaranteed to pass through $f$, but only through its $p$-stabilisation when specialised at $k$. As $k$ varies analytically, $\mathscr{L}_p(k)$ interpolates symmetric square $L$-functions varying in a family of modular forms. The interpolation holds over \begin{gather*} \mathcal{I} = \mathcal{I}_1 \cup \mathcal{I}_2 \subseteq \numberset{N}^2 \cap U\times \mathcal{W}_- \\ \begin{lgathered} \mathcal{I}_1 = \{ (k_0,j_0+1) \in U\times \mathcal{W} \mid j_0 \; \text{even}, \; 0 \leq j_0 < k_0 \}, \\ \mathcal{I}_2 = \{ (k_0,k_0+1) \in U\times \mathcal{W} \mid k_0 \; \text{even}, \; E(f_{k_0},f_{k_0},k_0+1)\neq 0, \; N=N_{\psi} \}. \end{lgathered} \end{gather*} We now show that this suffices to determine $\mathscr{L}_p$ uniquely over half of the weight space. For this reason we can denote the auxiliary function by $L^{\mathrm{imp}}_p(\Sym^2 F)$, so that we have proved the main result of this section. Recall that along the family $F$ the tame level and tame character are constant, in particular the condition $N=N_{\psi}$ is independent of $k_0$. We also define an Euler factor \begin{equation*} E(\Sym^2 f,j+1) = E(f,f,j+1)\Bigl(1-\frac{\psi^{-1}(p)}{p^{1+k-j}}\Bigr)^{-1}. \end{equation*} It is clear that $E(f,f,k+1) = 0$ if and only if $E(\Sym^2 f,k+1)=0$; H2 can then be restated equivalently as \begin{description} \item[H2] if $j=k$, we assume $E(\Sym^2 f,k+1)\neq 0$. \end{description} \begin{theorem} \label{thm:first-half-final} Suppose that $f$ is supersingular at $p$ and that $\alpha_f\neq\beta_f$.
There exists a $2$-variable meromorphic $p$-adic $L$-function $L^{\mathrm{imp}}_p(\Sym^2 F)\colon U\times\mathcal{W} \to \overline{\numberset{Q}_p}$, uniquely determined on $U\times\mathcal{W}_-$, with the following interpolation property at every pair $(k_0,j_0)\in \numberset{N}^2 \cap U\times \mathcal{W}$ with $j_0$ even and either $0\leq j_0 < k_0$ or $j_0=k_0$, $E(\Sym^2 f_{k_0}, k_0+1)\neq 0$ and $N=N_{\psi}$: \begin{multline} \label{eq:final-interpolation} L^{\mathrm{imp}}_p(\Sym^2 F)(k_0,j_0+1) = (-1)^{k_0+1}j_0! \frac{(2\pi i)^{k_0-j_0} i^{a_{\psi}}}{4\tau(\psi)(4\pi)^{k_0+1}\langle f_{k_0},f_{k_0} \rangle} \\ \cdot \frac{E(\Sym^2 f_{k_0},j_0+1)}{E(f_{k_0})E^*(f_{k_0})} L^{\mathrm{imp}}(\Sym^2 f_{k_0},j_0+1)\prod_{\substack{l \mid N \\ l \centernot\mid N_{\psi}}} \Bigl(1 - \frac{\psi(l)}{l^{j_0-k_0}}\Bigr) \end{multline} where $f_{k_0}\in M_{k_0+2}(N,\psi)$ is such that $F$ passes through its $p$-stabilisation at $k_0$; and such that the factorisation of $p$-adic $L$-functions holds: \begin{equation} \label{eq:final-factorization} L_p^{\mathrm{geom}}(F,F)(\sigma_1,\sigma_1,\sigma_2) = L^{\mathrm{imp}}_p(\Sym^2 F)(\sigma_1,\sigma_2)L_p(\psi,\sigma_2-\sigma_1-1). \end{equation} \end{theorem} \begin{proof} The only thing that is left to prove is that the interpolation equations determine $L^{\mathrm{imp}}_p(\Sym^2 F)=\mathscr{L}_p$ uniquely. We will do so by proving that $\mathcal{I}$ is dense in $U\times\mathcal{W}_-$. The set $U\subseteq \mathcal{W}$ is an affinoid disc, and affinoid discs in the weight space are disjoint unions of connected affinoids, which in turn are the complement of finitely many open discs. If we prove that the interpolation formulæ determine $L^{\mathrm{imp}}_p(\Sym^2 F)$ uniquely over each connected component of $U$, then we are done. Therefore, without loss of generality we can suppose that $U$ is connected. As $U$ is a connected affinoid disc, it is the complement of finitely many open discs. 
Hence it is a closed set, precisely a closed disc. $U$ can then be characterised as the maximal spectrum of an affinoid algebra of the form \begin{equation*} \frac{\numberset{Q}_p\langle T,S \rangle}{(T^n - pS)}, \qquad x\in U \iff |x| \leq \frac{1}{p^{1/n}}. \end{equation*} Now, since $p$ is odd we have $|2| = 1$. Therefore, if $x\in U$ then $2x$ is still in $U$, because $|2x|=|x|$. This entails that if $\tilde{k}$ is an integer with $\tilde{k}\in U$, then $2\tilde{k}\in U$ and, conversely, $\frac{1}{2}\tilde{k}\in U$. By definition of Coleman family, $\numberset{N}\cap U$ is dense in $U$, hence by the previous argument $2\numberset{N} \cap U$ is also dense in $U$. We can then find a collection of positive even integers $\{\tilde{k}\} \subseteq U$ which is dense in $U$. The collection of pairs of integers $\{(\tilde{k},j)\}$ with $0\leq j \leq \tilde{k}$ and $j$ even is then dense in $U\times\mathcal{W}_-$: since $\tilde{k}$ can be taken as large as we wish, we can choose infinitely many even values of $j$ between $0$ and $\tilde{k}$ realising every congruence class in $\{0,\ldots,p-1\}$ modulo $p$, and such values are dense in the whole $\mathcal{W}_-$. This proves that the set $\mathcal{I}_1$ is dense in $U\times \mathcal{W}_-$, and a fortiori $\mathcal{I}\subseteq U\times\mathcal{W}_-$ is dense. Since $L^{\mathrm{imp}}_p(\Sym^2 F)$ is a meromorphic function with prescribed values on a dense subset, there exists a unique extension to the whole $U\times\mathcal{W}_-$. \end{proof} The proof in particular shows that $\mathcal{I}$ is dense in $U\times\mathcal{W}_-$. If $U$ were dense in $\mathcal{W}$, then $\mathcal{I}$ would be dense in the whole $\mathcal{W}\times\mathcal{W}_-$, thus implying that there would be a unique possible way to extend $L^{\mathrm{imp}}_p(\Sym^2 F)$ to the whole $\mathcal{W}\times\mathcal{W}_-$. \begin{corollary} Suppose that $f$ is supersingular at $p$ and that $\alpha_f\neq\beta_f$.
There exist one-variable meromorphic $p$-adic $L$-functions $L^{\mathrm{imp}}_p(\Sym^2 f_{k_0})\colon \mathcal{W} \to \overline{\numberset{Q}_p}$ for varying $k_0$, defined as specialisations \begin{equation*} L^{\mathrm{imp}}_p(\Sym^2 f_{k_0}) = L^{\mathrm{imp}}_p(\Sym^2 F)(k_0) \end{equation*} such that if $j_0$ is an even integer and either $0\leq j_0 < k_0$ or $j_0=k_0$, $E(\Sym^2 f_{k_0},k_0+1)\neq 0$ and $N=N_{\psi}$, then they enjoy the interpolation property \begin{multline*} L^{\mathrm{imp}}_p(\Sym^2 f_{k_0})(j_0+1) = (-1)^{k_0+1}j_0! \frac{(2\pi i)^{k_0-j_0} i^{a_{\psi}}}{4\tau(\psi)(4\pi)^{k_0+1}\langle f_{k_0},f_{k_0} \rangle} \\ \cdot \frac{E(\Sym^2 f_{k_0},j_0+1)}{E(f_{k_0})E^*(f_{k_0})} L^{\mathrm{imp}}(\Sym^2 f_{k_0},j_0+1)\prod_{\substack{l \mid N \\ l \centernot\mid N_{\psi}}} \Bigl(1 - \frac{\psi(l)}{l^{j_0-k_0}}\Bigr). \end{multline*} \end{corollary} The statement of the theorem assumes the hypothesis H2, or more explicitly, it assumes that when $j_0=k_0$ the value $E(\Sym^2 f_{k_0},k_0+1)$ is not zero. In turn, this condition is equivalent to $E(f_{k_0},f_{k_0},k_0+1)\neq 0$. As explained above, this is required to apply the $p$-adic regulator formula computing $\regulator{\syn}(b_f)$ as a $p$-adic $L$-value. The condition $E(f_{k_0},f_{k_0},k_0+1)\neq 0$ explicitly means \begin{equation*} E(f_{k_0},f_{k_0},k_0+1) = \Bigl(1-\frac{p^{k_0}}{\alpha^2}\Bigr)\Bigl(1-\frac{p^{k_0}}{\alpha\beta}\Bigr)\Bigl(1-\frac{\alpha\beta}{p^{1+k_0}}\Bigr)\Bigl(1-\frac{\beta^2}{p^{1+k_0}}\Bigr) \neq 0. \end{equation*} On one hand, $\alpha\beta = \psi(p)p^{k_0+1}$ and $|\alpha| = p^{\frac{k_0+1}{2}}$ by purity, so the first two factors are never zero by weight reasons. On the other hand, the third factor is zero if and only if $\psi(p)=1$ and the fourth if and only if $\beta^2 = p^{k_0+1}$. 
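The case analysis in the last paragraph can be spelled out explicitly (a routine verification, using only $\alpha\beta = \psi(p)p^{k_0+1}$ and $|\alpha| = |\beta| = p^{\frac{k_0+1}{2}}$):

```latex
1-\frac{\alpha\beta}{p^{1+k_0}} = 1-\psi(p),
\qquad
\Bigl|\frac{p^{k_0}}{\alpha^2}\Bigr| = \Bigl|\frac{p^{k_0}}{\alpha\beta}\Bigr|
  = \frac{p^{k_0}}{p^{k_0+1}} = \frac{1}{p} < 1.
```

Hence the first two factors have complex absolute value at least $1-\frac{1}{p}>0$, the third factor equals $1-\psi(p)$ and vanishes exactly when $\psi(p)=1$, and the fourth factor $1-\beta^2/p^{1+k_0}$ vanishes exactly when $\beta^2 = p^{k_0+1}$.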
As explained at the end of Subsection~\ref{subsec:integrality}, the second condition can only hold finitely many times along a Coleman family, while the first can hold infinitely often. We can therefore give the following equivalent formulation of the interpolation property at $j_0=k_0$. \begin{proposition} Assume H1 and $\psi(p)\neq 1$. For all but finitely many even integers $k_0$, the following interpolation property holds: \begin{multline*} L^{\mathrm{imp}}_p(\Sym^2 F)(k_0,k_0+1) = -\frac{k_0!}{4\tau(\psi)(4\pi)^{k_0+1}\langle f_{k_0},f_{k_0} \rangle}\frac{E(\Sym^2 f_{k_0},k_0+1)}{E(f_{k_0})E^*(f_{k_0})} \\ \cdot L^{\mathrm{imp}}(\Sym^2 f_{k_0},k_0+1). \end{multline*} \end{proposition} \begin{corollary} Assume H1 and $\psi(p)\neq 1$. For all but finitely many even integers $k_0$, the one-variable $p$-adic $L$-function $L^{\mathrm{imp}}_p(\Sym^2 f_{k_0})$ enjoys the interpolation property \begin{multline*} L^{\mathrm{imp}}_p(\Sym^2 f_{k_0})(k_0+1) = -\frac{k_0!}{4\tau(\psi)(4\pi)^{k_0+1}\langle f_{k_0},f_{k_0} \rangle}\frac{E(\Sym^2 f_{k_0},k_0+1)}{E(f_{k_0})E^*(f_{k_0})} \\ \cdot L^{\mathrm{imp}}(\Sym^2 f_{k_0},k_0+1). \end{multline*} \end{corollary} Clearly the condition $E(\Sym^2 f_{k_0},k_0+1)\neq 0$ excludes some (possibly infinitely many) points from the interpolation range. Removing finitely many integer weights among those over which the Coleman family ranges does not affect the density of $U\cap\numberset{N}$: the condition $\beta^2 \neq p^{k_0+1}$ alone would not negate the density of the subset $\mathcal{I}_2$. However, the condition $\psi(p)\neq 1$ is more restrictive, as it may invalidate the density of $\mathcal{I}_2$. Nevertheless, the subset $\mathcal{I}_1$ is large enough for our density argument to work, so this restriction does not affect the uniqueness of $L^{\mathrm{imp}}_p(\Sym^2 F)$.
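For completeness, the equivalence between the two formulations of H2 invoked above can be checked in one line (using only that $|\psi^{-1}(p)| \leq 1$): at $j_0=k_0$ the correction factor relating the two Euler products satisfies

```latex
E(\Sym^2 f_{k_0}, k_0+1)
  = E(f_{k_0},f_{k_0},k_0+1)\Bigl(1-\frac{\psi^{-1}(p)}{p}\Bigr)^{-1},
\qquad
\Bigl|1-\frac{\psi^{-1}(p)}{p}\Bigr| \geq 1-\frac{1}{p} > 0,
```

so this factor never vanishes, and the conditions $E(f_{k_0},f_{k_0},k_0+1)\neq 0$ and $E(\Sym^2 f_{k_0},k_0+1)\neq 0$ coincide.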
Studying what happens when $E(\Sym^2 f_{k_0},k_0+1)= 0$ and when we can generalise the interpolation at these points is the subject of the next subsection. \subsection{Generalising the interpolation formulæ} \label{subsec:generalising-interpolation} It is natural to ask if the interpolation further generalises to the case of $j_0=k_0$ and $E(\Sym^2 f_{k_0},k_0+1) = 0$. Clearly, this condition violates hypothesis H2 and invalidates the argument using the comparison of motivic classes, because we are not able to translate it into a relation of $p$-adic $L$-values: we need other ideas to prove the interpolation. In this subsection we explain one way to do so, using the factorisation formula. As explained earlier, under the assumption $E(\Sym^2 f_{k_0},k_0+1)= 0$ at least one of the equalities $\psi(p)= 1$ and $\beta^2 = p^{k_0+1}$ must hold. Allowing these conditions would afford more interpolation formulæ, although allowing the second one would only add finitely many more of them. We now investigate what happens when at least one of these conditions holds. Recall that when $j_0=k_0$ is even, $\psi$ is an even character. Suppose for a moment that the interpolation formula held; then, under the hypothesis $E(\Sym^2 f_{k_0},k_0+1)=0$, it is reasonable to expect that the right hand side of~\eqref{eq:final-interpolation} should be zero. Proving the interpolation formula would then be equivalent to showing that $L^{\mathrm{imp}}_p(\Sym^2 F)$ has a zero at $(k_0,k_0+1)$. Our aim is now to prove that under some hypotheses this is exactly the case. We first show that the right hand side of the formula indeed vanishes assuming $E(\Sym^2 f_{k_0},k_0+1)=0$. We start by analysing the extra factor in the interpolation formula in general.
The interpolation domain is \begin{gather*} \mathcal{I} = \mathcal{I}_1 \cup \mathcal{I}_2 \subseteq \numberset{N}^2 \cap U\times \mathcal{W}_- \\ \begin{lgathered} \mathcal{I}_1 = \{ (k_0,j_0+1) \in U\times \mathcal{W} \mid j_0 \; \text{even}, \; 0 \leq j_0 < k_0 \}, \\ \mathcal{I}_2 = \{ (k_0,k_0+1) \in U\times \mathcal{W} \mid k_0 \; \text{even}, \; E(\Sym^2 f_{k_0},k_0+1)\neq 0, \; N=N_{\psi} \}. \end{lgathered} \end{gather*} We denote with $\alpha, \beta$ the Satake parameters at $p$ of $f_{k_0}\in M_{k_0+2}(N,\psi)$. Denote also with $\mathscr{E}$ the extra factor in the interpolation formula, i.e.\ \begin{equation*} \mathscr{E}(k,j) = (-1)^{k+1}j! \frac{(2\pi i)^{k-j} i^{a_{\psi}}}{4\tau(\psi)(4\pi)^{k+1}\langle f_k,f_k \rangle} \frac{E(\Sym^2 f_k,j+1)}{E(f_k)E^*(f_k)} \prod_{\substack{l \mid N \\ l \centernot\mid N_{\psi}}} \Bigl(1 - \frac{\psi(l)}{l^{j-k}}\Bigr). \end{equation*} We now count the zeros appearing in $\mathscr{E}$, i.e.\ the number of factors that are zero at a given point of $\mathcal{I}$. We temporarily use the convention that a negative number $n\in\numberset{Z}_{<0}$ of zero factors denotes $-n\in\numberset{N}_{\geq 1}$ zero factors appearing in the denominator. \begin{proposition} \label{prop:counting-zeros} Let $(k_0,j_0+1)$ be a pair of integers with $0\leq j_0\leq k_0$ and $j_0$ even.
Then the number of factors of $\mathscr{E}$ which are zero at $(k_0,j_0+1)$ is $o(j_0) = o'(j_0) + o''(j_0)$, where \begin{equation*} o'(j_0) = \begin{cases} \Bigl| \Bigl\{ \text{primes} \;\; l \mid N, \; l \centernot\mid N_{\psi} \;\text{such that}\; \psi(l) = 1 \Bigr\} \Bigr| &\text{if}\; j_0=k_0 \\ 0 &\text{else} \end{cases} \end{equation*} and $o''(j_0)$ is defined as \begin{equation*} \begin{array}{c|c|c|c} & & j_0 < k_0 & j_0 = k_0 \\ \multirow{2}{*}{$\alpha \neq \beta$} & \psi(p) = 1 & 0 & +1 \\ & \psi(p) \neq 1 & 0 & 0 \;\text{or}\; +1 \\ \hline \multirow{2}{*}{$\alpha = \beta$} & \psi(p) = 1 & -1 & +1 \\ & \psi(p) \neq 1 & -1 & -1 \end{array} \end{equation*} In the case $\alpha\neq\beta$ and $\psi(p)\neq 1$, $o''(k_0)=1$ if and only if $\beta = \pm p^{\frac{k_0+1}{2}}$, $\alpha = \psi(p)\beta$. \end{proposition} \begin{proof} In order to keep the notation lighter, we will prove the theorem for the pair $(k,j)$ without loss of generality. In this case clearly $f_k = f$ but we will not use this fact. We analyse the function given by $j\mapsto \mathscr{E}(k,j)$: \begin{equation*} j\mapsto (-1)^{k+1}j! \frac{(2\pi i)^{k-j} i^{a_{\psi}}}{4\tau(\psi)(4\pi)^{k+1}\langle f_k,f_k \rangle} \frac{E(\Sym^2 f_k,j+1)}{E(f_k)E^*(f_k)} \prod_{\substack{l \mid N \\ l \centernot\mid N_{\psi}}} \Bigl(1 - \frac{\psi(l)}{l^{j-k}}\Bigr) \end{equation*} for $j \in \{0,\ldots,k\}$. We want to give conditions for these factors to be finite and non-zero. The only factors that can be zero are those contained in: \begin{equation*} \frac{E(\Sym^2 f_k,j+1)}{E(f_k)E^*(f_k)}\prod_{\substack{l \mid N \\ l \centernot\mid N_{\psi}}} \Bigl(1 - \frac{\psi(l)}{l^{j-k}}\Bigr) = \frac{E(f_k,f_k,j+1)}{E(f_k)E^*(f_k)\Bigl(1-\frac{\psi^{-1}(p)}{p^{1+k-j}}\Bigr)}\prod_{\substack{l \mid N \\ l \centernot\mid N_{\psi}}} \Bigl(1 - \frac{\psi(l)}{l^{j-k}}\Bigr) \end{equation*} and we study them separately.
Firstly, $1 - \frac{\psi^{-1}(p)}{p^{1+k-j}}$ is always finite, and is zero if and only if $\psi^{-1}(p) = p^{1+k-j}$, that is exactly if $j=k+1$ and $\psi(p)=1$, but the former is impossible. Therefore this factor gives no contribution. Secondly, $1 - \frac{\psi(l)}{l^{j-k}}$ is always finite, and is zero if and only if $\psi(l) = l^{j-k}$, that is exactly if $j=k$ and $\psi(l)=1$. Therefore this factor gives no contribution when $j<k$, and when $j=k$ the number of vanishing factors appearing is \begin{equation*} o'(k) \coloneqq \Bigl| \Bigl\{ l \mid N, \; l \centernot\mid N_{\psi} \;\text{such that}\; \psi(l) = 1 \Bigr\} \Bigr|. \end{equation*} Thirdly, we have to study $E(f_k)$, $E^*(f_k)$ and $E(f_k,f_k,j+1)$. To start with, we recall that since the representation attached to $f_k$ is pure of weight $k+1$, the Satake parameters satisfy $|\alpha| = |\beta| = p^{\frac{k+1}{2}}$. \begin{itemize} \item $E(f_k) = 1 - \frac{\beta}{p\alpha}$ so it is not finite only when $\alpha = 0$, and zero only when $\beta = p\alpha$, both of which are impossible by purity; \item $E^*(f_k) = 1 - \frac{\beta}{\alpha}$ so it is always finite, and zero only when $\alpha = \beta$. Therefore it accounts for a vanishing factor in the denominator if and only if $\alpha=\beta$, independently of $j$; \item $E(f_k,f_k,j+1)$ is again the product of four factors, see Theorem~\ref{thm:geometric-l-function}. \begin{itemize} \item $1 - \frac{p^j}{\alpha^2}$ is always non-zero and finite, because $\alpha^2 = p^j$ would imply by purity $k+1 = j$ which is impossible. \item Similarly $1 - \frac{p^j}{\alpha\beta}$ is always non-zero and finite, because $\alpha\beta = p^j$ implies $\psi(p)p^{k+1} = p^j$ and then $j=k+1$. \item The factor $1 - \frac{\alpha\beta}{p^{1+j}}$ is always finite, and zero if and only if $p^{1+j} = \alpha\beta = \psi(p)p^{k+1}$, i.e.\ exactly when $j=k$ and $\psi(p)=1$.
\item Finally $1 - \frac{\beta^2}{p^{1+j}}$ is always finite, and zero if and only if $\beta^2 = p^{1+j}$, i.e.\ $\beta = \pm p^{\frac{j+1}{2}}$. This in turn implies $\alpha = \pm p^{k+1-\frac{j+1}{2}}\psi(p) = p^{k-j}\psi(p)\beta$. By taking absolute values this implies $j=k$ necessarily, but there are no other restrictions. \end{itemize} \end{itemize} Separating cases, these considerations give the global contribution $o''$ in the statement of the theorem. \end{proof} We now specialise to the case of interest, i.e.\ when H2 does not hold. As we are not comparing cohomology classes, we also drop assumption H1. Let $j_0=k_0$. Applying the proposition, we prove that the right hand side of the interpolation formula indeed vanishes when $E(\Sym^2 f_{k_0},k_0+1)=0$. \begin{lemma} \label{lemma:right-hand-side-0} Suppose $E(\Sym^2 f_{k_0},k_0+1)=0$, then the right hand side of formula~\eqref{eq:final-interpolation} is zero. \end{lemma} \begin{proof} This is a direct application of Proposition~\ref{prop:counting-zeros}. Indeed, by that result at $j_0=k_0$ there is always a zero factor when $\psi(p)=1$. On the contrary, when $\psi(p)\neq 1$ the hypothesis $E(\Sym^2 f_{k_0},k_0+1)=0$ necessarily implies $\beta^2=p^{k_0+1}$, and the relation $\alpha\beta=\psi(p)p^{k_0+1}$ shows that $\alpha=\psi(p)\beta$. Since $\psi(p)\neq 1$, this shows at once that $\alpha\neq\beta$, and that there is exactly one vanishing factor by the proposition. \end{proof} Thanks to the lemma, if the interpolation formula held, it would necessarily imply $L^{\mathrm{imp}}_p(\Sym^2 F)(k_0,k_0+1)=0$. Our task is now to show---by other means---that this is indeed the case, so that the interpolation formula holds ``trivially''. The strategy is to use the $p$-adic factorisation formula, to deduce the value of $L^{\mathrm{imp}}_p(\Sym^2 F)$ from those of $L_p^{\mathrm{geom}}(F,F)$ and $L_p(\psi)$. 
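Before moving on, we record a numerical sanity check of the case analysis in Proposition~\ref{prop:counting-zeros}. Choosing Satake parameters satisfying purity $|\alpha|=|\beta|=p^{\frac{k+1}{2}}$ and $\alpha\beta=\psi(p)p^{k+1}$, one can count the vanishing factors among the four numerator factors of $E(f_k,f_k,j+1)$ and the denominator factors $E(f_k)$, $E^*(f_k)$. The following Python sketch (the helper \texttt{count\_zero\_factors} and all numerical parameter choices are ours, purely illustrative; it is a plausibility check of the table for $o''$, not a proof):

```python
import cmath

def count_zero_factors(p, k, j, alpha, beta, psi_p, tol=1e-9):
    """Signed count of vanishing Euler factors in
    E(f_k, f_k, j+1) / (E(f_k) E*(f_k)); a negative value means
    zero factors in the denominator.  (Hypothetical helper.)"""
    # the caller must respect alpha * beta = psi(p) * p^(k+1)
    assert abs(alpha * beta - psi_p * p**(k + 1)) < tol * p**(k + 1)
    numerator = [1 - p**j / alpha**2,            # 1 - p^j / alpha^2
                 1 - p**j / (alpha * beta),      # 1 - p^j / (alpha beta)
                 1 - alpha * beta / p**(1 + j),  # 1 - alpha beta / p^(1+j)
                 1 - beta**2 / p**(1 + j)]       # 1 - beta^2 / p^(1+j)
    denominator = [1 - beta / (p * alpha),       # E(f_k)
                   1 - beta / alpha]             # E*(f_k)
    return (sum(abs(x) < tol for x in numerator)
            - sum(abs(x) < tol for x in denominator))

# generic pair alpha != beta with psi(p) = 1, in weight k = 2
p, k = 5.0, 2
alpha = p**1.5 * cmath.exp(0.7j)
beta = p**(k + 1) / alpha
```

With these choices one recovers the entries of the table: $0$ for $j<k$ and $+1$ at $j=k$ in the generic case, $-1$ and $+1$ respectively when $\alpha=\beta$ and $\psi(p)=1$, and $+1$ at $j=k$ when $\psi(p)\neq 1$ and $\beta^2=p^{k+1}$.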
\begin{proposition} \label{prop:geom-0-sym-0} If $\psi\neq 1$ and $L_p^{\mathrm{geom}}(F,F)(k_0,k_0,k_0+1)=0$, then $L^{\mathrm{imp}}_p(\Sym^2 F)(k_0,\allowbreak k_0+1)=0$. \end{proposition} \begin{proof} We evaluate the factorisation formula at $\sigma_1 = k_0$, $\sigma_2 = k_0+1$: \begin{equation*} L_p^{\mathrm{geom}}(F,F)(k_0,k_0,k_0+1) = L^{\mathrm{imp}}_p(\Sym^2 F)(k_0,k_0+1)L_p(\psi,0). \end{equation*} In order to deduce that $L^{\mathrm{imp}}_p(\Sym^2 F)$ vanishes at $(k_0,k_0+1)$, we study the other two values. In particular, since we assume $L_p^{\mathrm{geom}}(F,F)(k_0,k_0,k_0+1)=0$, if we show that $L_p(\psi,0)\neq 0$, we will have proved the claim. By hypothesis $\psi\neq 1$, and $\psi$ is an even character, so the function $L_p(\psi)$ is analytic and not identically zero. It is known that in this case $L_p(\psi,1)\neq 0$, by~\cite[Corollary~5.30]{washington:introduction}. Applying the functional equation for Kubota-Leopoldt $p$-adic $L$-functions we obtain $L_p(\psi,0)\neq 0$. \end{proof} It is clear from the proof of the proposition that the hypothesis $\psi\neq 1$ is essential. This is part of hypothesis H1. We are thus studying interpolation when only a weaker version of H1 holds. \begin{proposition} Suppose that $E(\Sym^2 f_{k_0},k_0+1)=0$ and $\psi\neq 1$. If $L_p^{\mathrm{geom}}(F,F)(k_0,k_0,\allowbreak k_0+1)=0$, then the interpolation formula~\eqref{eq:final-interpolation} holds (trivially). \end{proposition} \begin{proof} Under the hypothesis $E(\Sym^2 f_{k_0},k_0+1)=0$, the right hand side of the interpolation formula vanishes by applying the last lemma. Under the hypotheses $\psi\neq 1$ and $L_p^{\mathrm{geom}}(F,F)(k_0,k_0,k_0+1)=0$ the last proposition applies, and the left hand side of the formula vanishes as well. Therefore the interpolation formula reads $0=0$, which trivially holds. \end{proof} \begin{remark} At the time of writing we lack a more explicit condition for the vanishing of $L_p^{\mathrm{geom}}(F,F)(k_0,k_0,k_0+1)$.
We are not even able to tell whether it is zero or not in general, as there are no results in this direction. In our case the interpolation property of this function does not apply, as the weights of the forms coincide. Similarly, the regulator formulæ linking it to cohomology classes do not hold when $E(f_{k_0},f_{k_0},k_0+1)=0$. The main theorem of~\cite{benois.horte:on} investigates this value (more precisely, the corresponding one via the functional equation) and also gives conditions for its vanishing; however, the hypotheses of that result cannot be fulfilled when $f=g$ (they would require both $\psi(p)\neq 1$ and $\alpha_f\beta_f = p^{k+1}$, which give a contradiction). The only remaining choice is then to assume the vanishing as a hypothesis. \end{remark} The results of this subsection give the best characterisation that we are able to prove for the behaviour of $L^{\mathrm{imp}}_p(\Sym^2 F)$ at points $(k_0,k_0+1)$. When hypotheses H1 and H2 apply (in particular when $E(\Sym^2 f_{k_0},k_0+1)\neq 0$), interpolation always holds. On the contrary, when H2 does not apply, the interpolation holds only when we are able to prove that the $p$-adic $L$-value is zero, as just studied. \paragraph{Exceptional zeros of $L^{\mathrm{imp}}_p(\Sym^2 F)$} One direction for further study is to investigate the exceptional zeros of $L^{\mathrm{imp}}_p(\Sym^2 F)$ arising from the interpolation property~\eqref{eq:final-interpolation}. Recall that a point in the interpolation domain is said to be an exceptional zero if the Euler factor vanishes there. They represent zeros of the $p$-adic $L$-function that do not come from the complex $L$-function, but rather from the interpolating factors. Proposition~\ref{prop:counting-zeros} gives us an explicit indication of where we could find exceptional zeros and poles of $L^{\mathrm{imp}}_p(\Sym^2 F)$, and their orders there.
This suggestion cannot be directly translated into a formal proof as it is, because the interpolation formula does not hold in a neighbourhood of the studied points. Nevertheless, we can formulate the following conjectures. \begin{conjecture} Every point $(k_0,j_0+1)\in\mathcal{I}_1$ is an exceptional zero of order $o''(j_0)$ for $L^{\mathrm{imp}}_p(\Sym^2 F)$. Every point $(k_0,k_0+1)\in\mathcal{I}_2$ is an exceptional zero for $L^{\mathrm{imp}}_p(\Sym^2 F)$ of order $-r_0$, where \begin{equation*} r_0 = \begin{cases} 0 &\text{if $\alpha\neq\beta$} \\ 1 &\text{if $\alpha=\beta$} \end{cases} \end{equation*} \end{conjecture} As recalled on page~\pageref{remark:semisimplicity-tp}, it is always the case that $\alpha\neq\beta$ in weight $2$, and it is implied by Tate's conjecture in higher weights. Therefore, the case $\alpha=\beta$ never occurs when $k_0=0$, and is conjectured never to occur when $k_0>0$. Under the condition $\alpha\neq\beta$, all the contributions are non-negative, so we further conjecture the following. \begin{conjecture} Assume H1 and H2. \begin{enumerate} \item If $k_0=0$, then $L^{\mathrm{imp}}_p(\Sym^2 F)$ is analytic at $(0,1)$. \item If $k_0>0$, assuming the conjecture that $\alpha\neq\beta$ then $L^{\mathrm{imp}}_p(\Sym^2 F)$ is analytic at all points $(k_0,j_0+1)$ for $j_0$ even, $0\leq j_0\leq k_0$. \item Assuming the conjecture that $\alpha\neq\beta$ in all weights, then $L^{\mathrm{imp}}_p(\Sym^2 F)$ is analytic on all $\mathcal{I}$. \end{enumerate} \end{conjecture} \subsection{Remarks} Recall that we denote with $\mathcal{W}_{\pm}$ the two halves of the weight space of even and odd weight-characters. While the factorisation~\eqref{eq:final-factorization} holds for every pair in $U\times \mathcal{W}$, the interpolation property~\eqref{eq:final-interpolation} requires $j_0$ even, so it holds over the dense subset of points $\mathcal{I} \subseteq U\times\mathcal{W}_-$.
Therefore, the interpolation property alone determines $L^{\mathrm{imp}}_p(\Sym^2 F)$ \emph{uniquely}, but only on the slice $U\times\mathcal{W}_- \subseteq U\times\mathcal{W}$, i.e.\ on half of $U\times\mathcal{W}$. Similarly, the results of Subsection~\ref{subsec:generalising-interpolation} investigate explicit values for $L^{\mathrm{imp}}_p(\Sym^2 F)$ at a subset of points of $U\times\mathcal{W}_-$. The one-variable $p$-adic $L$-functions of the corollary also enjoy interpolation properties over $\mathcal{W}_-$. In general they are not uniquely determined by these explicit values over $\mathcal{W}_-$ as we only have finitely many of them. To compute explicit values for $L^{\mathrm{imp}}_p(\Sym^2 F)$ over $U\times \mathcal{W}_+$, and to determine it uniquely over its whole domain, we would need a functional equation that allows us to pass back and forth between $\mathcal{W}_-$ and $\mathcal{W}_+$. This is the topic of the next section. Another improvement of the main theorem of this section would be to replace $j_0+1$ with more general weight-characters $\chi+j_0+1 = \nu_{\chi,j_0+1}$ where $\chi$ is a non-trivial, $p$-power conductor character of varying parity: this would provide more interpolation formulæ.
\section{Introduction} The Higgs boson in particle physics is modeled by a gauged bosonic condensate, which is responsible for generating mass for other elementary particles. Higgs-like excitations also emerge in condensed matter {systems} as a consequence of {spontaneous} symmetry breaking. \cite{pekker_amplitude_2015} Examples include charge density waves \cite{yusupov_coherent_2010}, superconductors \cite{PhysRevB.26.4883,PhysRevLett.115.157002,sherman_higgs_2015,doi:10.1146/annurev-conmatphys-031119-050813}, quantum magnets \cite{PhysRevLett.119.067201,jain_higgs_2017} and cold atom condensates in {an} optical lattice \cite{PhysRevLett.109.010401,endres_higgs_2012,gross_quantum_2017}. In these systems, the Higgs mode is the collective amplitude {fluctuation} of the complex order parameter or vector fields, and is usually gapped. When a continuous symmetry is broken, there exist gapless Goldstone modes in addition to the massive Higgs mode. Quantum fluctuations therefore {may induce a} decay of the Higgs mode into the low-lying Goldstone {modes,} which causes damping of the Higgs mode. The question is whether the Higgs mode remains stable. {Podolsky {\it et al}. \cite{PodolskyArovasPRB} addressed this question by using a field theoretical approach and found} that the imaginary part of the longitudinal susceptibility associated with the Higgs mode diverges at low frequency $\omega$ as $1/\omega$ for two dimensional {(2D)} systems and $\log(1/|\omega|)$ for three dimensional {(3D)} systems, which can obscure the spectral peak of the Higgs mode. This motivates {the} authors {in Refs.~\cite{PhysRevLett.110.140401,PhysRevB.88.235108}} {to} propose a scalar susceptibility, where a well defined spectral peak corresponding to the Higgs mode appears despite the strong damping.
The scalar susceptibility {is argued to be measurable} in {Raman spectroscopy.} {The spectral peak of {the} scalar susceptibility is broadened near the quantum phase transition point in 2D, whereas the peak remains sharp in 3D.} This is consistent with the intuition that damping of the Higgs mode is stronger in lower dimensions as a result of the quantum fluctuations of the Goldstone modes. {One may then argue that it is necessary} to consider three dimensional systems in order to have a stable Higgs mode. \cite{PhysRevLett.118.147207} In {magnetic} materials, continuous symmetry can be lifted by anisotropy. For instance, the spin rotation symmetry can be reduced to {$U(1)\times Z_2$ symmetry by either an easy plane anisotropy in $XY$-like systems or an easy axis anisotropy in Ising-like systems.} The rotation symmetry can also be lifted by an external magnetic field. The reduced symmetry therefore can stabilize the Higgs mode in quantum magnets, as will be discussed below. The magnons carry spin quantum number $S_m=\pm 1$ while the Higgs mode carries spin quantum number $S_h=0$. A Higgs mode with energy $E_h$ can decay into a pair of magnons {$(\mathbf{k}_1, \mathbf{k}_2)$} with $S_m=\pm 1$ constrained by the energy {and momentum} conservation {laws, also known as the kinematic condition $E_m(\mathbf{k}_1) + E_m(\mathbf{k}_2) = E_h(\mathbf{k}_1 + \mathbf{k}_2)$.} {Some or all of the magnon branches} are gapped in magnets with reduced symmetry, which in turn mitigates the decay of the Higgs mode by {reducing the phase space satisfying} the kinematic condition {in a part of, or even the entire, Brillouin zone.} In particular, when a quantum magnet undergoes a continuous quantum phase transition into the quantum paramagnetic state by tuning {an} external parameter, such as pressure, the magnitude of the magnetic moment is suppressed continuously {down} to zero at the quantum critical point (QCP).
The gap of the Higgs mode becomes small near the transition point, and therefore the decay into magnon modes is suppressed {when the Higgs mode has lower energy than the magnon modes}. Recently, a stable Higgs mode with long lifetime was detected by inelastic neutron scattering measurement in {the two dimensional quantum magnet} $\mathrm{C_9H_{18}N_2CuBr_4}$ with an easy axis anisotropy near a quantum critical point. \cite{hong_higgs_2017} The authors constructed an effective spin Hamiltonian for $\mathrm{C_9H_{18}N_2CuBr_4}$, and derived the dispersion relation of the magnon and Higgs modes using the mean-field bond operator approach. The decay of the Higgs mode in $\mathrm{C_9H_{18}N_2CuBr_4}$ was also investigated {recently} using quantum Monte Carlo simulations \cite{PhysRevLett.122.127201}. To investigate the role of spin anisotropy on the stability of the Higgs mode, in this work, we study the Higgs mode in an anisotropic bilayer quantum antiferromagnetic Heisenberg model for spin $S=1/2$ with an easy axis anisotropy by employing the bond operator method, field theoretical approach and quantum Monte Carlo simulation. By combining these methods, we can show clearly how the spin anisotropy suppresses the damping of the Higgs mode. The bilayer Heisenberg model is relevant for several quantum magnets including $\mathrm{BaCuSi_2O_6}$ \cite{PhysRevB.55.8357,PhysRevLett.93.087203,sebastian_dimensional_2006} and $\mathrm{Sr_3Ir_2O_7}$ \cite{PhysRevLett.109.157402,PhysRevB.92.024405}. We note that the collective excitations including the Higgs mode in an isotropic model {have} been considered in Ref. \cite{PhysRevB.92.245137}. Upon increasing the interlayer antiferromagnetic interaction, the system undergoes a quantum phase transition from {the N\'{e}el} order to {the} nonmagnetic dimerized phase by forming interlayer {spin} singlet. 
{Upon reaching the critical point from the magnetically ordered state,} the magnitude of the moment vanishes and the Higgs mode becomes gapless. {However,} because of the easy axis anisotropy, the magnon modes remain gapped. There exists a region where the dispersion of the magnon modes lies above the Higgs mode, which prevents the decay of the Higgs mode into the magnon modes, and therefore the Higgs mode is long lived. We note in passing that the decay of the Higgs mode is already forbidden when the magnon gap is larger than half of the Higgs gap. By tuning the magnon gap, the present model allows us to investigate the lifetime of the Higgs mode as a function of {the} magnon gap. In the remainder of the paper, we will employ the mean field bond {operator} approach to construct the phase diagram and derive the Higgs and magnon {dispersion relations.} Then we will use the field theoretical approach to study the decay of the Higgs mode into magnon modes. In the region where such a decay is {prohibited because of the kinematic condition,} {the Higgs mode becomes the lowest lying mode with {a} very sharp spectral peak}. {Finally, we will present the results of our quantum Monte Carlo simulation to study the Higgs and magnon modes near the quantum critical point. The paper is then concluded with a summary.} \begin{figure} \begin{center} \includegraphics[width=\columnwidth,bb=0 0 307 254]{fig1.pdf} \end{center} \caption{Schematic phase diagram of the anisotropic quantum magnet model that includes three different phases: interlayer dimer order, Ising and XY AFM order. {The spin texture of the dimer, Ising and \textit{XY} AFM are sketched.} } \label{f1} \end{figure} \section{Bond operator approach} We consider {the} anisotropic bilayer quantum antiferromagnetic Heisenberg {(or \textit{XXZ})} model defined on a square lattice.
The model Hamiltonian is \begin{align} \mathcal{H}=J_{xy}\sum_{l,\langle i j \rangle}[S_{l,i}^x S_{l,j}^x+S_{l,i}^y S_{l,j}^y]+J_z\sum_{l,\langle i j \rangle}[S_{l,i}^z S_{l,j}^z]+J\sum_{i}\mathbf{S}_{1, i}\cdot \mathbf{S}_{2, i}. \label{eq:H} \end{align} where {$\mathbf{S}_{l,i}$} is the quantum spin 1/2 operator and $l=1,\ 2$ is the layer index. Here we assume a nearest neighbor anisotropic antiferromagnetic interaction with an Ising-like exchange anisotropy $J_{z}\ge J_{xy}$ {described by the first two terms} and an antiferromagnetic (AFM) inter-layer coupling {in the last term}. {In this model, three limits can be identified: (i)} When $J\gg J_z$ {and $J_{xy}$}, the AFM interlayer coupling stabilizes singlets between aligned spins in different layers. These singlets condense and stabilize a singlet dimer phase. {(ii) For $J_z \gg J$ and $J_{xy}$ in the Ising limit,} each layer orders antiferromagnetically with {spins} aligned along the $z$ direction and {staggered} between layers {, which forms the N\'{e}el order}. {(iii) In the $XY$ limit $J_{xy}\gg J$ and $J_z$, {the spins order antiferromagnetically in the $xy$ plane}. The phase diagram of the model and the corresponding spin structures in the three limits are sketched in Fig. \ref{f1}. Here we focus on the phase boundary between the dimer and AFM phases for $J_z\ge J_{xy}$.} By gradually reducing $J/J_z$, there is a phase transition from {the} dimer phase to the AFM phase. To describe this phase transition, {we start with the dimer phase in which spin singlets are stabilized along the vertical bonds between two layers, as shown in Fig. \ref{f1}.
The bond operator representation is introduced to describe the dimerized spins by one singlet operator $s_i$ and three triplet operators $t_{i,\alpha}$ with $\alpha=x,\ y,\ z$ as} \begin{align} s_{{i}}^\dagger\ket{0}&=\frac{1}{\sqrt{2}}\left(\ket{\uparrow\downarrow}-\ket{\downarrow\uparrow}\right),\\ t_{{i},x}^\dagger\ket{0}&=-\frac{1}{\sqrt{2}}\left(\ket{\uparrow\uparrow}-\ket{\downarrow\downarrow}\right),\\ t_{{i},y}^\dagger\ket{0}&=\frac{i}{\sqrt{2}}\left(\ket{\uparrow\uparrow}+\ket{\downarrow\downarrow}\right),\\ t_{{i},z}^\dagger\ket{0}&=\frac{{1}}{\sqrt{2}}\left(\ket{\uparrow\downarrow}+\ket{\downarrow\uparrow}\right). \end{align} We choose $s_i$ and $t_{i,\alpha}$ (here $i$ labels {the interlayer dimers}) to be bosonic operators satisfying the commutation relation \begin{align} \left[s_i, s_j^\dagger\right]=\delta_{ij}, \ \ \left[t_{i,\alpha}, t_{j,\beta}^\dagger\right]=\delta_{ij}\delta_{\alpha\beta},\ \ \left[s_{i}, t_{j,\alpha}^\dagger\right]=0. \end{align} Each {vertical} bond can be either in the singlet state or {one of the triplet states,} and we have $s_i^\dagger s_i+\sum_{\alpha={x, y, z}}t_{i, \alpha}^\dagger t_{i, \alpha}=1$ for all $i$'s. The two spins $\mathbf{S}_{1,j}$ and $\mathbf{S}_{2,j}$ at the two ends of the $j$-th vertical bond can be expressed in terms of the bond operators \begin{align} S_{{1,j}}^\alpha &=\frac{1}{2}\left(s_{j}^\dagger t_{{j},\alpha}+t_{{j},\alpha}^\dagger s_{j}-i\epsilon_{\alpha\beta\gamma}t_{{j},\beta}^\dagger t_{{j},\gamma} \right),\\ S_{{2,j}}^\alpha &=\frac{1}{2}\left(-s_{j}^\dagger t_{{j},\alpha}-t_{{j},\alpha}^\dagger s_{j}-i\epsilon_{\alpha\beta\gamma}t_{{j},\beta}^\dagger t_{{j},\gamma} \right), \end{align} where $\epsilon_{\alpha\beta\gamma}$ is the Levi-Civita tensor, and {the} summation over repeated indices is assumed.
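As a quick numerical consistency check (ours, not part of the original derivation), one can diagonalise the interlayer term $J\,\mathbf{S}_{1,i}\cdot\mathbf{S}_{2,i}$ on a single vertical bond: the singlet $s_i^\dagger\ket{0}$ sits at energy $-3J/4$ and the three triplets at $+J/4$. A minimal sketch in Python (helper names are ours):

```python
import numpy as np

# spin-1/2 operators (hbar = 1)
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

def dimer_hamiltonian(J=1.0):
    """Interlayer term J * S_1 . S_2 on one vertical bond (4-dim space)."""
    return J * sum(np.kron(s, s) for s in (sx, sy, sz))

# exact spectrum: singlet at -3J/4, triply degenerate triplet at +J/4
evals = np.sort(np.linalg.eigvalsh(dimer_hamiltonian(1.0)))
```

This is the content of the $\mathcal{H}_J$ term below: each condensed singlet contributes $-3J/4$, and every triplet excitation costs $J$ relative to the singlet.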
The Hamiltonian $\mathcal{H}$ can be re-expressed in terms of these bond operators {as} \begin{align} \mathcal{H}=\frac{J_{xy}}{2}(\mathcal{H}_x+\mathcal{H}_y)+\frac{J_z}{2}\mathcal{H}_z+ {\frac{J}{4}} \mathcal{H}_J, \end{align} \begin{equation} \begin{split} \mathcal{H}_\alpha=& {\sum_{\langle i j \rangle}}\left(s_i^\dagger t_{i,\alpha}+t_{i,\alpha}^\dagger s_i\right)\left(s_j^\dagger t_{j,\alpha}+t_{j,\alpha}^\dagger s_j\right) \\ &- {\sum_{\langle i j \rangle}} \epsilon_{\alpha\beta\gamma}t_{i,\beta}^\dagger t_{i,\gamma}\epsilon_{\alpha\beta{'}\gamma{'}}t_{j,\beta{'}}^\dagger t_{j,\gamma{'}}, \end{split} \end{equation} \begin{align} \mathcal{H}_J={\sum_i}\left(-3 s_i^\dagger s_i+\sum_{\alpha}t_{i,\alpha}^\dagger t_{i,\alpha}\right). \end{align} In the dimerized phase, the $s_i$ boson condenses. We can replace the operator $s_i$ and $s_i^\dagger$ by a real number $\bar{s}$. Furthermore, we replace the local constraint on each {vertical} bond by a global one $\sum_i s_i^\dagger s_i+\sum_{i, \alpha} t_{i,\alpha}^\dagger t_{i, \alpha} = {N_\mathrm{d}} $ with {$N_\mathrm{d}$ being the number of dimers,} from which we obtain $\bar{s}{\simeq}1-\frac{1}{2 {N_\mathrm{d}} }\sum_{i, \alpha} t_{i,\alpha}^\dagger t_{i, \alpha}$ {under the Holstein-Primakoff expansion \cite{PhysRevB.69.054423}}. {Here} the collective excitation in the dimerized phase is the triplet excitation, which can be obtained by expanding $\mathcal{H}$ to the {quadratic} order in $t_{i, \alpha}$. The dispersion {of the triplet excitation} is given by \begin{equation} {\xi _{\alpha ,k}} = \sqrt {2{J_{xy}}{J}{A_k} + {J}{^2} }, \end{equation} {for $\alpha=x,y$ and} \begin{equation} {\xi _{z ,k}} = \sqrt {2{J_z}{J}{A_k} + {J}{^2} }, \end{equation} where $A_k=\cos k_x+\cos k_y$.
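The closing of the $t_z$ gap can be read off directly from these dispersions: at the wave vector $(\pi,\pi)$ one has $A_k=-2$, so $\xi_{z,k}=\sqrt{J^2-4J_zJ}$, which vanishes at $J_z/J=1/4$. A small numerical sketch (function name is ours, for illustration only):

```python
import numpy as np

def triplet_dispersion(J_perp, J, kx, ky):
    """xi_k = sqrt(2*J_perp*J*A_k + J^2) with A_k = cos kx + cos ky;
    J_perp = J_xy for the x,y triplets and J_z for the z triplet."""
    Ak = np.cos(kx) + np.cos(ky)
    return np.sqrt(2 * J_perp * J * Ak + J**2)

# gap of the t_z mode at (pi, pi) closes when J_z = J/4
gap_at_critical = triplet_dispersion(0.25, 1.0, np.pi, np.pi)
```

For $J_{xy}<J_z$ the doubly degenerate $x,y$ branch stays above the $z$ branch at $(\pi,\pi)$, consistent with the $t_z$ mode condensing first.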
{The higher order terms that are responsible for the damping of the Higgs mode are not considered here, but will be included effectively in the field theoretical treatment below.} The spin anisotropy splits the otherwise triply degenerate triplet excitations into two modes with one having double degeneracy. The gap of $t_z$ triplet excitation first vanishes at {$(J_z/J)_c = 1/4$} {and $\bm{G}_0=(\pi,\pi)$} indicating a phase transition into {the antiferromagnetic} magnetically ordered phase with spins pointing in the {$\pm$} $z$ direction. \begin{figure}[t] \begin{center} \includegraphics[width=\columnwidth,bb=0 0 730 952]{fig2.pdf} \end{center} \caption{(a) Dispersion of the Higgs and magnon mode for $J_{xy}=0.1{J}$ and $J_z=0.255{J}$, and (b) the corresponding gap at the wavevector $\mathbf{G}_0$ for $J_z/J_{xy}=3$.} \label{f2} \end{figure} The phase transition point can also be determined by considering the magnetically ordered phase. In this phase, both $s_i$ and $t_{z,i}$ {bosons} condense. Because of the intralayer AFM interaction, the ordering wave {vector} for the $t_{z,i}$ boson is $\mathbf{G}_0=(\pi,\ \pi)$. The ground state wave function can be approximated by $\ket{\phi_{\mathrm{AFM}}}=\prod_i\ket{\phi_i}$ with \begin{align}\label{eq13} \ket{\phi_i}=\frac{1}{\sqrt {1 + {\lambda ^2}}}\left( {s_i^\dag + \lambda \exp \left( {{i}{\mathbf{G}_0}\cdot{\mathbf{r}_i}} \right)t_{i,z}^\dag } \right)\ket{0}, \end{align} where $\lambda$ is a variational parameter to be determined later. \cite{sommer_magnetic_2001} We can introduce a new basis $\tilde{s}_i^{\dagger}\ket{0}=\ket{\phi _{i}}$. Then the ground state corresponds to the condensation of $\tilde{s}_i^{\dagger}$ boson. 
The other three operators in this new basis are given by $\tilde{t}_{i, x/y}^{{\dagger}}={t}_{i, x/y}^{{\dagger}}$ and \begin{align} \tilde{t}_{i, z}^{{\dagger}}=\frac{1}{\sqrt {1 + {\lambda ^2}}}\left( { {-} \lambda \exp \left( {{-i}\mathbf{G}_0\cdot{\mathbf{r}_i}} \right)s_i^\dag +t_{i,z}^\dag } \right). \end{align} Here $\lambda$ can be determined by minimizing the Hamiltonian. Again we approximate the local constraint for $\tilde{s}_i$ and $ \tilde{t}_{i, \alpha}$ by the global one as in the case of dimerized phase. $\mathcal{H}$ can be expanded to the second order in $\tilde{t}_{i, \alpha}$, $\mathcal{H}=\mathcal{H}_0+\mathcal{H}_1+\mathcal{H}_2$, with \begin{align} \mathcal{H}_0=\frac{-4 {N_\mathrm{d}} \lambda^2 J_z}{(1+\lambda^2)^2}+\frac{ {N_\mathrm{d}} }{4}\frac{\lambda^2-3}{1+\lambda^2}{J}, \end{align} \begin{align} \mathcal{H}_1=\left[\frac{ {J} }{1+\lambda^2}-\frac{4J_z(1-\lambda^2)}{(1+\lambda^2)^2}\right]\lambda\sum_{i}\exp \left( {{i}\mathbf{G}_0\cdot{\mathbf{r}_i}} \right)\left(\tilde{t}_{i, z}+\tilde{t}_{i, z}^\dagger\right). \end{align} The ground state condition requires that the terms linear in $ \tilde{t}_{i, \alpha}$ vanish, which yields $\lambda=\sqrt{(4J_z- {J} )/(4J_z+ {J} )}$. This $\lambda$ also minimizes $\mathcal{H}_0$ simultaneously. The phase diagram can be obtained from the staggered magnetization \begin{align} M_\alpha(G_0) &= {\frac{1}{N_\mathrm{d}}} \bra{\phi_{\mathrm{AFM}}}\sum_i(S_{1,i}^\alpha-S_{2,i}^\alpha)\exp(i \mathbf{G}_0\cdot{\mathbf{r}_i})\ket{\phi_{\mathrm{AFM}}}. \end{align} Here $M_x=M_y=0$ and $M_z(G_0)={\sqrt{16 J_z^2- {J^2} }}/{4J_z}$ vanishes continuously at $J_z/J=1/4$ upon decreasing $J_z$, therefore the quantum phase transition is of second order. {This mean-field} critical point is independent {of} $J_{xy}$ {as long as} $J_{xy}\le J_z$. The phase transition point is consistent with the previous estimate based on the triplet excitation gap. This consistency is achieved by the approximation scheme used here.
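The closed form for $M_z$ can be cross-checked against the variational state directly: in the state of Eq.~\eqref{eq13} the staggered moment per dimer is $2\lambda/(1+\lambda^2)$, and substituting $\lambda=\sqrt{(4J_z-J)/(4J_z+J)}$ reproduces $\sqrt{16J_z^2-J^2}/(4J_z)$. A short numerical verification (a sketch with our own helper names):

```python
import numpy as np

def lam(Jz, J):
    """Variational parameter: lambda = sqrt((4Jz - J)/(4Jz + J))."""
    return np.sqrt((4 * Jz - J) / (4 * Jz + J))

def Mz_variational(Jz, J):
    """Staggered moment of the variational state: 2*lambda / (1 + lambda^2)."""
    l = lam(Jz, J)
    return 2 * l / (1 + l**2)

def Mz_closed_form(Jz, J):
    """Closed form quoted in the text: sqrt(16 Jz^2 - J^2) / (4 Jz)."""
    return np.sqrt(16 * Jz**2 - J**2) / (4 * Jz)
```

Both expressions agree throughout the ordered phase, vanish together at $J_z/J=1/4$, and saturate to the full moment in the Ising limit $J_z\gg J$.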
First we replace the local constraint by the global one, known as the Holstein-Primakoff approximation (HPA) \cite{PhysRevB.69.054423}. Within HPA, $\langle s_i\rangle=\langle \tilde{s}_i\rangle=1$, therefore the HPA neglects the suppression of the amplitude of the singlet condensate due to the triplet quantum fluctuations. Secondly, we have introduced a rotated basis to describe the magnetically ordered phase. \cite{sommer_magnetic_2001} At $J_z/J=1/4$, the ground state described by Eq. \eqref{eq13} is the same as the dimerized phase. Therefore, the rotated basis connects {continuously} to the un-rotated one upon varying $J_z$. Alternatively, one can introduce a chemical potential $\mu$ to impose the local constraint by adding a term $-\mu (s_i^\dagger s_i+\sum_{\alpha={x, y, z}}t_{i, \alpha}^\dagger t_{i, \alpha}-1)$ to the Hamiltonian {\cite{PhysRevB.49.8901}}. In the magnetically ordered phase, one can assume the condensation of $s_i$ and $t_\alpha$ bosons without introducing the rotated basis. This approximation, however, does not yield the same transition point when the dimerized and magnetic phases are treated separately. The second order contribution $\mathcal{H}_2$ is \begin{align} \mathcal{H}_2 =\frac{{J^2}}{32J_z}\sum_{\langle i j \rangle}\left(\tilde{t}_{i,z}+\tilde{t}_{i,z}^\dagger\right)\left(\tilde{t}_{j,z}+\tilde{t}_{j,z}^\dagger\right)\nonumber\\ +\frac{J_{xy}}{2}\sum_{\langle i j \rangle; \alpha=x, y; {\eta}=\pm}\left(\frac{1}{2}+{\eta}\frac{{J}}{8J_z}\right)\left(\tilde{t}_{i,\alpha}+\eta\tilde{t}_{i,\alpha}^\dagger\right)\left(\tilde{t}_{j,\alpha}+\eta\tilde{t}_{j,\alpha}^\dagger\right)\nonumber\\ +\left(2J_z-\frac{{J}}{2}\right)\sum_i \tilde{t}_{i,z}^\dagger\tilde{t}_{i,z}+\left(2J_z+\frac{{J}}{2}\right)\sum_{i;\alpha=x, y, z} \tilde{t}_{i,\alpha}^\dagger\tilde{t}_{i,\alpha}.
\end{align} The magnon dispersion associated with the operator $\tilde{t}_{i, x/y}^{\dagger}$ in the AFM phase can be obtained by the Bogoliubov transformation and is \begin{equation}\label{wm} \omega_M=\sqrt{\left(\frac{{J}}{2}+2J_z+\frac{A_k {J} J_{xy}}{4J_z}\right)^2-(J_{xy}A_k)^2}. \end{equation} The Higgs mode corresponds to the excitation of the $\tilde{t}_{i, z}$ boson and its dispersion is given by \begin{equation}\label{wh} \omega_H=\sqrt{4J_z\left(4J_z+\frac{{J^2}A_k}{8J_z}\right)}. \end{equation} The gap of the Higgs mode vanishes at the transition point. As shown in Fig. \ref{f2}, for a strong anisotropy $\gamma=J_z/J_{xy}$, the Higgs mode can lie below the magnon continuum. In this case, the decay of the Higgs mode into the {magnon} continuum is expected to be suppressed and therefore the Higgs mode is stabilized. The magnon and Higgs modes cease to exist when the system is tuned to the dimerized phase. The gaps of the Higgs and magnon modes can be estimated in the Ising limit, $J_z/J_{xy}\rightarrow\infty$. The magnon carries spin quantum number $S_m=\pm 1$, and it corresponds to a single spin flip. The energy cost is $E_m=2J_z+J/2$. The Higgs mode has spin quantum number $S_h=0$; it therefore corresponds to flipping a pair of antiferromagnetically aligned spins between different layers. Its energy cost is $E_h=4J_z$. This simple estimate of the magnon and Higgs gaps agrees well with the results in Fig. \ref{f2}(b) when the system is in the well developed magnetically ordered phase (large $J_z/J$ region). We proceed to calculate the dynamic spin structure factor that can be accessed experimentally, \begin{align} \chi_{\alpha}(\omega, \mathbf{q})=\sum_{l=1,2}\langle S_l^\alpha(-\omega, -\mathbf{q})S_l^\alpha(\omega, \mathbf{q})\rangle_Q, \end{align} where $\langle\dots \rangle_Q$ denotes the quantum average. 
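The limiting behaviors quoted above can be checked numerically from Eqs.~(\ref{wm}) and (\ref{wh}). The sketch below is ours, not the authors' code, and it assumes the square-lattice structure factor $A_k=\cos k_x+\cos k_y$ (defined outside this excerpt); with $A_k=-2$ at the ordering wavevector, the Higgs gap closes exactly at $J_z/J=1/4$, while for $J_z \gg J, J_{xy}$ the gaps approach the Ising estimates $E_m$ and $E_h$:

```python
import math

# Mean-field dispersions, Eqs. (wm) and (wh). The structure factor
# A_k = cos(kx) + cos(ky) is an assumption (defined outside this excerpt);
# it takes the value -2 at the ordering wavevector.
def omega_M(J, Jz, Jxy, Ak):
    return math.sqrt((J / 2 + 2 * Jz + Ak * J * Jxy / (4 * Jz)) ** 2
                     - (Jxy * Ak) ** 2)

def omega_H(J, Jz, Ak):
    return math.sqrt(4 * Jz * (4 * Jz + J ** 2 * Ak / (8 * Jz)))

# Higgs gap closes at the mean-field QCP, Jz/J = 1/4, where A_k = -2:
print(omega_H(J=1.0, Jz=0.25, Ak=-2.0))             # 0.0

# Ising limit Jz >> J, Jxy: gaps approach E_m = 2*Jz + J/2 and E_h = 4*Jz
print(omega_M(J=1.0, Jz=1000.0, Jxy=1.0, Ak=-2.0))  # ~ 2000.5
print(omega_H(J=1.0, Jz=1000.0, Ak=-2.0))           # ~ 4000.0
```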
Knowing the dispersions of the magnon and Higgs modes, $\chi_{\alpha}$ can be obtained straightforwardly \begin{align} \chi_{\alpha}=C_k \left(\frac{1}{\omega+i0^+ + \omega_M }-\frac{1}{\omega+i0^+- \omega_M}\right),\\ C_k=\frac{1}{64\omega_M}\left(4+\frac{{J}}{J_z}\right)\left[2{J}+8J_z+A_k J_{xy}\left(\frac{{J}}{J_z}-4\right)\right], \end{align} for $\alpha=x, y$, where $0^+$ represents a positive infinitesimal number. For $\chi_z$, we have \begin{align} \chi_{z}= \frac{{J^2}}{8 J_z \omega_H} \left(\frac{1}{\omega+i0^+ + \omega_H }-\frac{1}{\omega+i0^+- \omega_H}\right). \end{align} The magnon (Higgs) excitation appears in the transverse (longitudinal) susceptibility. No damping has been taken into account here, so the spectral density is a delta function. The Higgs peak in $\chi_\alpha$ can be smeared out severely in the presence of decay, especially in low dimensional systems. To detect the Higgs mode, the singlet bond susceptibility was introduced and was shown to exhibit a sharp Higgs peak despite the strong damping \cite{PhysRevB.92.245137}. The singlet bond susceptibility is analogous to the scalar susceptibility introduced in Ref. \onlinecite{PodolskyArovasPRB}. The singlet bond susceptibility is defined as \begin{align} \chi_B(\omega, \mathbf{q})=\langle B(-\omega, -\mathbf{q})B(\omega, \mathbf{q})\rangle_Q, \end{align} with $B_i=\mathbf{S}_{1,i}\cdot \mathbf{S}_{2,i}$. It can be calculated as \begin{align} \chi_B= \frac{({J}+2J_z)^2}{64J_z^2}\delta(\mathbf{q})\delta(\omega)+(16J_z^2-{J^2})\chi_z(\omega, \mathbf{q}-\mathbf{G}_0). \end{align} The first term accounts for the static dimer correlation at $Q=0$. There is a $\mathbf{G}_0$ momentum shift between $\chi_B$ and $\chi_z$ because of the N\'{e}el order. Approaching the quantum critical point, the spectral density of $\chi_B$ vanishes as $\sqrt{16 J_z^2-{J^2}}$. 
\begin{figure}[t] \begin{center} \includegraphics[width=\columnwidth,bb=0 0 393 162]{fig3.pdf} \end{center} \caption{Feynman diagrams describing the self energy $\Sigma_\sigma(q)$ under the random phase approximation of the polarization bubble $\Pi_\pi(q)$ and the full susceptibility of the Higgs mode $\chi_{\sigma\sigma}(q)$. The solid and dashed lines represent the bare susceptibilities of the Higgs and magnon modes, respectively.} \label{f3} \end{figure} \begin{figure*}[t] \begin{center} \includegraphics[width=15cm]{fig4.pdf} \end{center} \caption{(a) and (b) Real and imaginary parts of the self energy of the Higgs mode {at $\bm{q}=\bm{0}$ and} for $D=2+1$, $N=100$, $\Lambda=3m_0$, and $g=0.9g_c$. Here we consider three different anisotropies with $A=0,$ 0.02, and 0.3 {[see Eq.~\eqref{S}]}. (c) The corresponding spectral function at $\bm{q}=\bm{0}$. (d)-(f) The intensity plot of the spectral functions of the Higgs mode for $A=0,$ 0.02, and 0.3, respectively. Here the red and white dashed lines are the bare dispersions of the Higgs and magnon modes. {The decay of the Higgs mode to the magnon modes smears the spectral peak of the Higgs mode in (d) and (e). Because the magnon mode is above the Higgs mode in (f), the spectral peak of the Higgs mode is sharp}.} \label{f4} \end{figure*} \section{Field theoretical approach} In the mean-field bond operator approach, we have shown that the magnon modes can be gapped due to the magnetic anisotropy and the magnon {energy} can {be even larger than that of} the {long-wavelength Higgs mode.} The question is how the magnon gap affects the lifetime of the Higgs mode. Here we proceed to calculate the lifetime of the Higgs mode by considering the decay of the Higgs mode into magnon modes. A more convenient method is {a} field theoretical approach based on an effective action. We generalize the calculations in Ref. \onlinecite{PodolskyArovasPRB} by including the spin anisotropy. 
We consider {an action of} the relativistic $\mathcal{O}(N)$ field theory with anisotropy, which describes various condensed matter systems. For example{, the case with $N = 3$} describes the long wavelength fluctuations in the {anisotropic} Heisenberg model. The Euclidean time action of the model reads \begin{align}\label{S} \mathcal{S}=\frac{1}{2g}\int_\Lambda d^D x \left[ (\partial_{\alpha}\bm{\Phi})^2+\frac{m_0^2}{4N}\left(|\bm{\Phi}|^2-N\right)^2+\frac{A}{2}\sum_{i=2}^{N}\Phi_i^2\right], \end{align} where $\bm{\Phi}$ is an $N$-component vector {field}, which can be parametrized by $\bm{\Phi}=(\Phi_1, \bm{\pi})$ with $\bm{\pi}$ being the $(N-1)$-component vector. $D=d+1$ is the space-time dimension. $A>0$ is the hard axis anisotropy, which ensures the saddle point solution {$\langle\Phi_1\rangle=\sqrt{N}$} and {$\langle\bm{\pi}\rangle=\bm{0}$}. We do not write the anisotropy in the easy axis anisotropy form $-A \Phi_1^2 /2$ because the saddle point solution depends on $A$ in this case. Here $m_0$ is the bare mass and $\Lambda$ is the ultraviolet cutoff wavevector, both of which depend on the microscopic details of the systems. $g$ is a parameter that controls the strength of quantum fluctuations. There exists a quantum phase transition at $g=g_c$ and the system orders when $g<g_c$. Because of the anisotropy $A>0$, the phase transition exists at $d{\geq}1$. The fluctuations {of the field} in the ordered phase can be parametrized as \begin{align} \bm{\Phi}=(r\sqrt{N}+\sigma, \bm{\pi}), \end{align} where $r$ is responsible for the suppression of the order parameter due to the quantum fluctuations. 
The action Eq.~\eqref{S} can be expanded as \begin{align} \mathcal{S}=\mathcal{S}_0+\mathcal{S}_A+\mathcal{S}_C,\\ \mathcal{S}_0=\frac{1}{2g}\int _{\Lambda }d^Dx\left[\left(\partial_\mu \sigma \right)^2+\left(\partial_\mu \bm{\pi} \right)^2+m_0^2 r^2 \sigma ^2+\frac{ A}{2}\pi ^2\right],\\ \mathcal{S}_A=\frac{m_0^2}{2 g}\int _{\Lambda }d^Dx\left[\frac{r \left(\sigma ^3+\sigma \bm{\pi} ^2\right)}{\sqrt{N}}+\frac{\left(\sigma ^2+\bm{\pi} ^2\right)^2}{4 N}\right],\\ \mathcal{S}_C=\frac{m_0^2 \left(r^2-1\right)}{4 g}\int _{\Lambda }d^Dx\left(2 \sqrt{N} r \sigma +\sigma ^2+\bm{\pi} ^2\right), \end{align} where {$\mathcal{S}_0$ is the free field action with anisotropy}, $\mathcal{S}_A$ {collects the anharmonic contributions,} and $\mathcal{S}_C$ is the counterterm. {The bare susceptibilities of the Higgs and magnon modes from $\mathcal{S}_0$ are \begin{align}\label{bs} \chi _{\sigma\sigma}^{(0)}(q)=\frac{g}{q^2+ r^2 m_0^2}, \qquad \chi _{\pi\pi}^{(0)}(q)=\frac{g}{q^2+{A}/{2}}, \end{align} where $rm_0$ is the renormalized mass of the Higgs mode {and $q$ denotes a $D$-dimensional momentum}. Under the analytical continuation $q^2\rightarrow \mathbf{q}^2-(\omega+i 0^+)^2$, the zeroth order Higgs and magnon dispersions are $\omega_H^{(0)} {(\mathbf{q})} =\sqrt{\mathbf{q}^2+m_0^2r^2}$ and $\omega_M^{(0)} {(\mathbf{q})} =\sqrt{\mathbf{q}^2+A/2}$, respectively. {The dispersions can be viewed as the expansion of Eqs. (\ref{wm}) and (\ref{wh}) around $\mathbf{G}_0$ up to a renormalization factor. Then we can make the correspondence that $A\simeq 2J/J_{xy}-8$ and $rm_0 \simeq \sqrt{J/J_z-4}$ around the mean-field QCP $(J_z/J)_c=1/4$.} The Higgs gap, $rm_0$, vanishes at the QCP at $g=g_c$, consistent with the previous bond operator approach. Because the magnon gap remains {nonzero with $\omega_M^{(0)}(0)=$} $\sqrt{A/2}$, the Higgs mode is below the magnon mode in the long wavelength limit near the QCP considered here. 
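A minimal numerical illustration of Eq.~\eqref{bs} (our sketch, with arbitrary parameter values): as $r\to 0$ near the QCP, the Higgs gap $rm_0$ drops below the magnon gap $\sqrt{A/2}$.

```python
import math

# Zeroth-order dispersions read off from the bare susceptibilities, Eq. (bs).
def higgs_disp(q, m0, r):
    return math.sqrt(q * q + (m0 * r) ** 2)

def magnon_disp(q, A):
    return math.sqrt(q * q + A / 2)

# Illustrative values (ours): the Higgs gap r*m0 falls below the magnon
# gap sqrt(A/2) once r is small enough near the QCP.
m0, A = 1.0, 0.3
for r in (0.5, 0.2, 0.05):
    print(r, higgs_disp(0.0, m0, r) < magnon_disp(0.0, A))
```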
} To ensure $\bm{\Phi}_g=(r\sqrt{N},\bm{0})$ is a stable ground state, the expectation of $\sigma$ must vanish, $\langle\sigma\rangle=0$. This means that the sum of all one-particle irreducible (1PI) diagrams with one $\sigma$ {external} leg must vanish. In the large $N\gg 1$ limit, {the cancellation of the two leading-order 1PI diagrams originating from the term $r\sigma\bm{\pi}^2/\sqrt{N}$ in $\mathcal{S}_A$ and $2 \sqrt{N} r \sigma $ in $\mathcal{S}_C$ yields} $r=\sqrt{1-g/g_c}$ with \begin{equation}\label{gc} \begin{split} g_c&=\left[\int_\Lambda \frac{d^D k}{(2\pi)^D}\frac{1}{k^2+A/2}\right]^{-1} \\ &=\begin{cases} \frac{4\pi}{\sqrt{\Lambda^2+A/2}-\sqrt{A/2}}, & D=3 \\ \\ \frac{8\pi^2}{\Lambda\sqrt{\Lambda^2+A/2} - \frac{A}{2} \ln\frac{\Lambda+\sqrt{\Lambda^2+A/2}}{\sqrt{A/2}}}, & D=4 \end{cases} \end{split} \end{equation} where the integral is due to the $\pi$ loop contribution in the term $r\sigma\pi^2/\sqrt{N}$ {(see Appendix \ref{AA})}. In the limit $\Lambda\gg A$, $g_c=4\pi/\Lambda$ for $D=3$ and $g_c=8\pi^2/\Lambda^2$ for $D=4$. Here $g_c$ depends only on the ultraviolet cutoff {but not on} the easy axis anisotropy. The full Higgs mode susceptibility is given by the Dyson equation \begin{align} \chi _{\sigma \sigma }(q)=\frac{g}{q^2+m_0^2 r^2-g \Sigma _{\sigma }(q)}, \end{align} where $\Sigma _{\sigma }(q)$ is the self-energy that collects all the 1PI diagrams. 
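As a standalone sanity check (our sketch, not the authors' code), Eq.~\eqref{gc} and the relation $r=\sqrt{1-g/g_c}$ can be evaluated directly; the $\Lambda\gg A$ limits quoted in the text follow immediately:

```python
import math

def g_c(Lam, A, D):
    # Closed forms of Eq. (gc); Lam is the ultraviolet cutoff.
    if D == 3:
        return 4 * math.pi / (math.sqrt(Lam**2 + A / 2) - math.sqrt(A / 2))
    if D == 4:
        den = Lam * math.sqrt(Lam**2 + A / 2) - (A / 2) * math.log(
            (Lam + math.sqrt(Lam**2 + A / 2)) / math.sqrt(A / 2))
        return 8 * math.pi**2 / den
    raise ValueError("only D = 3, 4 implemented")

def order_parameter_r(g, gc):
    # r = sqrt(1 - g/g_c) in the ordered phase g <= g_c
    return math.sqrt(1 - g / gc)

# Lam >> A limits quoted in the text: g_c -> 4*pi/Lam (D=3), 8*pi^2/Lam^2 (D=4)
Lam, A = 1000.0, 1e-6
print(g_c(Lam, A, 3) * Lam / (4 * math.pi))               # ~ 1
print(g_c(Lam, A, 4) * Lam**2 / (8 * math.pi**2))         # ~ 1
print(order_parameter_r(g_c(Lam, A, 3), g_c(Lam, A, 3)))  # 0.0 at the QCP
```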
In the one-loop order, we consider the {dominant} polarization bubble {from the $r\sigma\bm{\pi}^2/\sqrt{N}$ term in $\mathcal{S}_A$ under the second order perturbation expansion} \begin{widetext} \begin{align}\label{Pi} \Pi _{\pi } (q)=\frac{m_0^4 r^2}{2 }\int _{\Lambda }\frac{d^Dk}{(2 \pi )^D}\frac{1}{\left(k^2+A/2\right)\left[(k+q)^2+A/2\right]} = \frac{m_0^4 r^2}{2}\begin{cases} \frac{1}{4\pi\sqrt{q^2}}\cot^{-1}\left(\sqrt{\frac{2A}{q^2} }\right), & D=3\\ \frac{1}{16\pi^2}\left[1+\log\left(\frac{2\Lambda^2}{A}\right) -2\sqrt{\frac{2A+q^2}{q^2}}\tanh^{-1}\sqrt{\frac{q^2}{2A+q^2} }\right], & D=4 \end{cases} \end{align} \end{widetext} that describes the decay of one Higgs mode into two magnon modes, {as shown in Fig. \ref{f3}}. {The loop integral can be evaluated by using the Feynman parameterization (see Appendix \ref{BB}).} The decay of one Higgs mode into other Higgs modes is negligible in the large $N$ limit {and has no contribution in the low frequency region} \cite{PodolskyArovasPRB}. The {other} one-loop tadpole diagrams {from $\sigma^4$ and $2\sigma^2\bm{\pi}^2$ in $\mathcal{S}_A$} are cancelled out by {$(r^2-1)\sigma^2$ in} the counterterm {$\mathcal{S}_C$, as shown in Appendix \ref{CC}}. Going beyond the one-loop order, we introduce the random phase approximation (RPA) of bubble diagrams and the self energy becomes \begin{align} \Sigma _{\sigma }(q)=\frac{\Pi _{\pi } (q)}{1+ g\Pi _{\pi } (q)/m_0^2 r^2}, \end{align} {as shown in Fig. \ref{f3}}. The spectral function {of the Higgs mode} is \begin{equation} \begin{split} \chi''_{\sigma\sigma}(q) & \equiv \mathrm{Im}[\chi_{\sigma\sigma}(q)] \\ & = \frac{g^2\text{Im}[\Sigma_\sigma(q)]}{\left(q^2+r^2m_0^2-g\text{Re}[\Sigma_\sigma(q)]\right)^2 + g^2\text{Im}[\Sigma_\sigma(q)]^2}, \end{split} \end{equation} which is a Lorentzian function. 
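The closed form of Eq.~\eqref{Pi} for $D=3$ can be cross-checked numerically. In the sketch below (an independent check of ours, not the authors' code) the cutoff is sent to infinity, since the $D=3$ bubble converges, and the angular integral is performed analytically, leaving a one-dimensional radial integral:

```python
import math

def bubble_closed(q, A, m0=1.0, r=1.0):
    # D = 3 closed form of Eq. (Pi); note arccot(x) = arctan(1/x).
    return (m0**4 * r**2 / 2) * math.atan(q / math.sqrt(2 * A)) / (4 * math.pi * q)

def bubble_numeric(q, A, m0=1.0, r=1.0, kmax=500.0, n=200_000):
    # Radial integral by the midpoint rule; the angular integral has been
    # done analytically, producing the logarithm below.
    h = kmax / n
    total = 0.0
    for i in range(n):
        k = (i + 0.5) * h
        log_term = math.log(((k + q)**2 + A / 2) / ((k - q)**2 + A / 2))
        total += k / (k * k + A / 2) * log_term
    return (m0**4 * r**2 / 2) * total * h / (8 * math.pi**2 * q)

# For q = 1, A = 1/2 the closed form gives atan(1)/(8*pi) = 1/32
print(bubble_closed(1.0, 0.5))    # 0.03125
print(bubble_numeric(1.0, 0.5))   # close to 1/32, small tail error from kmax
```

In the massless limit $A\to 0$ the closed form reduces to the standard result $1/(8|q|)$ for the one-loop bubble in $D=3$, which is a useful consistency check on the normalization.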
{For $\textbf{q}=\bm{0}$}, the spectral peak is centered at $\omega_c=\sqrt{r^2m_0^2-g \mathrm{Re}[\Sigma_\sigma(\omega_c)]}$ and its width is $\Gamma_\sigma=2g\mathrm{Im}[\Sigma_\sigma(\omega_c)]$. When twice the magnon gap is above the Higgs mode {for $A>2r^2m_0^2$}, the decay of the Higgs mode into the magnon mode is absent. Since the Higgs mode becomes the lowest lying mode in this case, there is no decay channel of the Higgs mode even when higher order processes are included. Therefore, the Higgs mode can be stable in anisotropic quantum magnets. \begin{figure*}[t] \begin{center} \includegraphics[width=15cm]{fig5.pdf} \end{center} \caption{(a) and (b) Real and imaginary parts of the self energy of Higgs mode {at $\bm{q}=\bm{0}$ and} $D=3+1$. The other parameters are the same as those used in Fig. \ref{f4}. (c) The spectral function at $\bm{q}=\bm{0}$. (d)-(f) The intensity plot of the spectral functions of the Higgs mode for $A=0,$ 0.02, and 0.3, respectively, with the red and white dashed lines representing the bare dispersions of the Higgs and magnon modes. {The spectral peak of the Higgs mode is sharp in $D=3+1$.}} \label{f5} \end{figure*} In Fig. \ref{f4}, we show the self energy and spectral function of the Higgs mode for $D=3$. At $\bm{q}=\bm{0}$, the finite ${\rm Re}(\Sigma_\sigma)$ in Fig. \ref{f4}(a) shifts the spectral peak downward slightly [see Fig. \ref{f4}(f)]. For $A<2r^2m_0^2$, the ${\rm Im}(\Sigma_\sigma)$ shown in Fig. \ref{f4}(b) remains finite at $\omega_c$, which broadens the spectral peak due to the decay of the Higgs mode into magnon modes, as shown in Figs. \ref{f4}(c)-\ref{f4}(e). On the other hand, when $A>2r^2m_0^2$, ${\rm Im}(\Sigma_\sigma)$ vanishes at $\omega_c$ due to the absence of a decay channel. In this case, a pronounced spectral peak of the Higgs mode is identified, as displayed in Figs. \ref{f4}(c) and \ref{f4}(f). Here we add a tiny imaginary part to the frequency such that the spectral peak for $A>2r^2m_0^2$ has a finite width. 
The self energy and spectral function for $D=4$, shown in Fig. \ref{f5}, are similar to those for $D=3$. However, due to the weaker quantum fluctuations in higher dimensions, the self energy in Figs. \ref{f5}(a) and \ref{f5}(b) is much smaller than that in Figs. \ref{f4}(a) and \ref{f4}(b). As a consequence, even when $A<2r^2m_0^2$, the spectral peak of the Higgs mode is still apparent, as shown in Figs. \ref{f5}(c)-\ref{f5}(f). The spectral peak width decreases as $A$ increases to $A=2r^2m_0^2$, above which ${\rm Im}[\Sigma_\sigma(\omega_c)]=0$ and the spectral peak has zero width. \section{Quantum Monte Carlo Results} In the field theoretical calculations, we have considered the large $N\gg 1$ limit in order to make controlled approximations. To connect to the physical spin case $N=3$, we show below the results of our unbiased quantum Monte Carlo simulations addressing the excitation spectrum near the QCP in the Ising-like bilayer \textit{XXZ} model. In particular, we focus on $d = 2$, where the anisotropy effect stabilizing the Higgs mode is expected to be more pronounced than in $d = 3$. Our quantum Monte Carlo simulation is based on the directed-loop algorithm~\cite{PhysRevE.66.046701,PhysRevE.71.036706} with the continuous imaginary time world-line scheme. The analytical continuation from imaginary to real frequency is the core part of our numerical study, which we perform utilizing the recently developed stochastic optimization method~\cite{PhysRevB.95.014102}. \begin{figure*}[t] \begin{center} \includegraphics[width=14cm,bb=0 0 700 481]{fig6.pdf} \end{center} \caption{% { (a) Quantum Monte Carlo results of $\langle{M_s^2}\rangle$ [Eq.~\eqref{eq:Ms}] and (b) $U$ [Eq.~\eqref{eq:U}] for $J_{z} / J_{xy} = 3$ as a function of $J_z/J$ at $\beta J_z = 2.5 \times L$. Finite-size scaling of (c) $\langle{M_s^2}\rangle$ and (d) $U$, where we assume the critical exponents of the $D=2+1$ Ising universality class: $\nu = 0.63012(16)$, $\eta = 0.03639(15)$, and $z = 1$~\cite{PhysRevE.65.066127}. 
$(J_z /J)_c = 0.332(1)$ is obtained by the data collapse of the presented data. } } \label{f6} \end{figure} We simulated the bilayer square-lattice Hamiltonian [Eq.~\eqref{eq:H}] by adopting periodic boundary conditions in the $a$ and $b$ directions. First, to determine the QCP induced by changing $J_z / J$, we consider a source term of a longitudinal staggered field $-h_s\sum_{l,i}(-1)^{l}e^{i\mathbf{G}_0\cdot r_{i}}S^z_{l,i}$ with $\mathbf{G}_0 = (\pi,\pi)$, and thereby define the Binder parameter, \begin{align} U = \frac{\langle{M_s^4}\rangle}{ \langle{M_s^2}\rangle^2}, \label{eq:U} \end{align} with \begin{align} \langle{M_s^2}\rangle = \frac{T^2}{N_\mathrm{site}^2 Z} \frac{\partial^2 Z}{\partial h_{s}^2}\Biggr\rvert_{h_{s} = 0}, ~~~ \langle{M_s^4}\rangle = \frac{T^4}{N_\mathrm{site}^4 Z} \frac{\partial^4 Z}{\partial h_{s}^4}\Biggr\rvert_{h_{s} = 0}, \label{eq:Ms} \end{align} where $Z$ is the partition function and $N_\mathrm{site} = 2L^2$ is the total number of sites ($L$ denotes the system size). $U$ is a dimensionless scaling parameter and is expected to be asymptotically size independent at the QCP. In Figs.~\ref{f6}(a) and \ref{f6}(b), we show the $J_z /J$ dependence of $\langle{M_s^2}\rangle$ and $U$, respectively, for $J_{z} / J_{xy} = 3$ and $4 \le L \le 16$. To investigate quantum critical behaviors, the inverse temperature $\beta = 1/T$ is set to $\beta J_z = 2.5 \times L$, anticipating the $D=2+1$ Ising universality class where the dynamical scaling exponent is $z = 1$; this temperature is low enough to study ground state properties. We find that $\langle{M_s^2}\rangle$ increases with increasing $J_{z} / J$. Furthermore, in the region where the rapid increase of $\langle{M_s^2}\rangle$ suggests a QCP, $U$ shows a clear tendency towards crossing for different $L$. Using finite-size scaling analysis, we estimate the QCP as $(J_z/J)_c = 0.332(1)$ for $J_{z} / J_{xy} = 3$ based on the data collapse shown in Figs.~\ref{f6}(c) and \ref{f6}(d). 
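The Binder ratio of Eq.~\eqref{eq:U} can be illustrated with synthetic samples (a toy sketch of ours, not simulation data): a distribution sharply concentrated at $\pm m$, caricaturing the ordered phase, gives $U=1$, while a broad uniform distribution gives $U=9/5$; the asymptotic $L$-independence of $U$ at the QCP is what produces the crossing in Fig.~\ref{f6}(b).

```python
def binder(samples):
    # U = <M^4> / <M^2>^2 estimated from a list of M_s samples
    m2 = sum(m * m for m in samples) / len(samples)
    m4 = sum(m ** 4 for m in samples) / len(samples)
    return m4 / (m2 * m2)

# Ordered-phase caricature: M_s concentrated at +/- m gives U = 1
print(binder([1.0, -1.0] * 100))        # 1.0

# Broad (uniform) distribution gives U = 9/5
n = 100_000
uniform = [-1 + 2 * (i + 0.5) / n for i in range(n)]
print(binder(uniform))                  # ~ 1.8
```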
The disordered phase is enlarged relative to the mean-field prediction $(J_z/J)_{c,\text{MF}} = 1/4$, as expected, since the mean-field approximation neglects the quantum fluctuations that suppress the magnetic order. To study the excitation spectrum near the QCP, we measure the imaginary-time dynamical correlation functions in the quantum Monte Carlo simulation, \begin{align} C^{zz}_{l,ij}(\tau) &= \frac{1}{Z} \mathrm{Tr}\, T_\tau \left( e^{-\beta \mathcal{H}} S^z_{l,i}(\tau) S^z_{l,j}(0) \right), \\ C^{xx}_{l,ij}(\tau) &= \frac{1}{Z} \mathrm{Tr}\, T_\tau \left( e^{-\beta \mathcal{H}} S^x_{l,i}(\tau) S^x_{l,j}(0) \right), \end{align} where $0 \le \tau \le \beta$, $T_\tau$ denotes the time ordering operator, and $S^\alpha_{l,i}(\tau) = e^{\tau\mathcal{H}} S^\alpha_{l,i} e^{-\tau\mathcal{H}}$ ($\alpha = z,x$). We evaluate the $\tau$ dependence of $C^{\alpha\alpha}_{l,ij}(\tau)$ at equally-spaced discrete sample points, $\tau_m = m\Delta\tau$, $0 \le m < N_\tau$, where $N_\tau \equiv \beta / \Delta\tau$ is taken as $N_\tau = 200L$ in our study. Taking $C^{\alpha\alpha}_{l,ij}(\tau)$ as input, the stochastic optimization method~\cite{PhysRevB.95.014102} numerically performs the analytic continuation and the Fourier transformation to yield the corresponding dynamical spin structure factor, \begin{align} C^{\alpha\alpha}_{l,\mathbf{k}}(\omega) = \frac{1}{2\pi L^2} \sum_{i,j} \int_{-\infty}^{\infty} dt \left\langle{ S^{\alpha}_{l,i}(t) S^{\alpha}_{l,j}(0) }\right\rangle_{T} e^{-i[\mathbf{k}\cdot(\mathbf{r}_i - \mathbf{r}_j) - \omega t]}, \label{eq:Ckw} \end{align} with $\alpha = z,x$, $S^\alpha_{l,i}(t) = e^{i\mathcal{H}t} S^\alpha_{l,i} e^{-i\mathcal{H}t}$, and $\langle{\dots}\rangle_T$ denoting the thermal average at temperature $T$. Our simulations for these dynamical correlation functions are carried out at $\beta J_z = 0.833 \times L$, which is still a low enough temperature to address the ground-state spectral function. 
\begin{figure*} \begin{center} \includegraphics[width=14cm,bb=0 0 622 607]{fig7.pdf} \end{center} \caption{% {% Results of $C^{zz}_{l,\mathbf{k}}(\omega)$ [Eq.~\eqref{eq:Ckw}] obtained by the quantum Monte Carlo simulation and the analytical continuation for $L = 16$ and (a) $J_z / J = 1/3.25 = 0.927(3) \times (J_z/J)_c$, (b) $J_z / J = 1/3 = 1.004(3) \times (J_z/J)_c$, and (c) $J_z / J = 1/2.75 = 1.095(3) \times (J_z/J)_c$, corresponding to the dimerized phase, a vicinity of the QCP, and the magnetically ordered phase, respectively. The results of $C^{xx}_{l,\mathbf{k}}(\omega)$ for the same parameters are shown in (d) $J_z / J = 1/3.25$, (e) $J_z / J = 1/3$, and (f) $J_z / J = 1/2.75$. The results are shown along the line in the Brillouin zone shown in (a). }% } \label{f7} \end{figure*} We show the results of $C^{zz}_{l,\mathbf{k}}(\omega)$ in the intensity plot for $L = 16$ in Figs.~\ref{f7}(a)--\ref{f7}(c). The consistency with the result for $L = 12$ has been checked (not shown). The result in Fig.~\ref{f7}(a) for $J_z / J = 1/3.25 = 0.927(3) \times (J_z/J)_c$ shows the gapped $z$-component of the triplet excitation in the dimerized phase. Figure~\ref{f7}(b) corresponds to the spectrum in the vicinity of the QCP and shows the quantum critical soft mode at $\mathbf{k} = (\pi,\pi)$ for $J_z / J = 1/3 = 1.004(3) \times (J_z/J)_c$. Finally, the result in Fig.~\ref{f7}(c) shows the gapped Higgs excitations in the magnetically ordered phase for $J_z / J = 1/2.75 = 1.095(3) \times (J_z/J)_c$. The observed Higgs excitations are relatively sharp (smeared) in the long-wavelength limit $\mathbf{k} \simeq (\pi,\pi)$ [away from $\mathbf{k} \simeq (\pi,\pi)$], consistent with our field theory predictions. We also show the results of $C^{xx}_{l,\mathbf{k}}(\omega)$ for the same set of parameters in Figs.~\ref{f7}(d)--\ref{f7}(f). 
We find that the spectral weight corresponding to the $xy$-components of the triplet excitation in the dimerized phase and the one corresponding to the magnons in the ordered phase seem to evolve continuously into each other upon varying $J_z / J$. These excitations are gapped and have small bandwidths all the way through the QCP. Remarkably, by comparing $C^{zz}_{l,\mathbf{k}}(\omega)$ and $C^{{xx}}_{l,\mathbf{k}}(\omega)$ in the ordered phase, we find that the stable Higgs excitations emerge below the gapped magnon band near $\mathbf{k} = (\pi,\pi)$, whereas the smeared Higgs excitations away from $\mathbf{k} \simeq (\pi, \pi)$ are within the energy range of the less dispersive magnon band [Figs.~\ref{f7}(c) and \ref{f7}(f)]. This observation confirms the predicted mechanism of the protection of the long-wavelength Higgs mode through the violation of the kinematic condition near the QCP. \section{Conclusions} In this work, we show the existence of a stable Higgs mode in {an anisotropic quantum spin system near the QCP.} The easy axis {anisotropy gaps out} magnons, while the Higgs mode gap vanishes at the QCP between the {magnetically ordered} and dimerized phases. Therefore, close to the QCP, the energy of the {long-wavelength Higgs mode is lower than that of magnons.} As a consequence, the decay of the Higgs mode to {{the}} magnon modes is forbidden by energy conservation. From the quantum field theory perspective, the system can be described by a coarse-grained $O(N)$ nonlinear $\sigma$ model \cite{PodolskyArovasPRB} with an easy axis anisotropy term. The anisotropy completely suppresses the damping of the Higgs mode above a critical value. In this case, the Higgs mode is the lowest lying mode and becomes stable even in {$d = 2$. Our quantum Monte Carlo simulation indeed demonstrates the stability of the Higgs mode in the bilayer square-lattice \textit{XXZ} model around $\mathbf{k} = (\pi,\pi)$ near the QCP. 
Hence, our theory and simulation establish a new} mechanism to stabilize the Higgs mode in anisotropic quantum magnets near a QCP, which can be an ideal platform to study the Higgs physics. \begin{acknowledgments} The authors would like to thank Anders W. Sandvik, Cristian D. Batista, Ziyang Meng, Marc Janoschek, and Filip Ronning for helpful discussions. This work was carried out under the auspices of the U.S. DOE NNSA under contract No. 89233218CNA000001 through the LDRD Program and the U.S. DOE Office of Basic Energy Sciences Program E3B5 (S.-Z. L. and J.-X. Z.). A.M.-K. used computational resources of the HPCI system through the HPCI System Research Project (Project IDs.:~hp170213, hp180098, and hp180129). Y.K. acknowledges the support from NSFC Research Fund for International Young Scientists No.~11950410507 as well as the support from Ministry of Science and Technology (MOST) with the Grants No.~2016YFA0300500 and No.~2016YFA0300501. W.Z. is supported by the start-up funding from Westlake University. \end{acknowledgments}
\section{Introduction} A dominating set of a graph $G=(V,E)$ is a subset $D$ of $V$ such that every vertex in $V\setminus D$ is adjacent to at least one member of $D$. The minimum cardinality of a dominating set of $G$ is called the domination number of $G$ and is denoted by $\gamma(G)$. This parameter has been extensively studied in the literature, and there are hundreds of papers concerned with domination. For domination in general, we refer the reader to the fundamental book \cite{domination}. Various domination concepts are well studied by now; however, new concepts are introduced frequently and interest in the area is growing rapidly. A set $D\subseteq V$ is a \emph{strong dominating set} of $G$ if for every vertex $x\in \overline{D}=V\setminus D$ there is a vertex $y\in D$ with $xy\in E(G)$ and $\deg(x)\leq \deg(y)$. The \emph{strong domination number} $\gamma_{\rm st}(G)$ is defined as the minimum cardinality of a strong dominating set. A $\gamma_{\rm st}$-\emph{set} of $G$ is a strong dominating set of $G$ of minimum cardinality $\gamma_{\rm st}(G)$. If $D$ is a strong dominating set in a graph $G$, then we say that a vertex $u \in \overline{D}$ is \emph{strong dominated} by a vertex $v \in D$ if $uv\in E(G)$ and $\deg(u)\leq \deg(v)$. The strong domination number was introduced in \cite{DM}, and some upper bounds on this parameter were presented in \cite{DM2,DM}. Analogously to the strong domination number, a set $D\subseteq V$ is a \emph{weak dominating set} of $G$ if every vertex $v\in V\setminus D$ is adjacent to a vertex $u\in D$ such that $\deg(v)\geq \deg(u)$ (see \cite{Boutrig}). The minimum cardinality of a weak dominating set of $G$ is denoted by $\gamma_w(G)$. 
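For small graphs, the definitions above can be checked by exhaustive search. The following sketch (illustrative only; the adjacency-set representation and the function name are ours) computes $\gamma_{\rm st}(G)$ by brute force:

```python
from itertools import combinations

def gamma_st(adj):
    """Brute-force strong domination number of a small graph.

    adj: dict mapping each vertex to the set of its neighbors.
    """
    V = list(adj)
    deg = {v: len(adj[v]) for v in V}
    for size in range(1, len(V) + 1):
        for D in combinations(V, size):
            Dset = set(D)
            # every vertex outside D needs a neighbor in D of degree >= its own
            if all(any(u in adj[x] and deg[x] <= deg[u] for u in Dset)
                   for x in V if x not in Dset):
                return size
    return len(V)

# P3 (path a-b-c): the center strong dominates both leaves, so gamma_st = 1
p3 = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
print(gamma_st(p3))   # 1

# C4: all degrees are equal, so strong domination = domination; gamma_st = 2
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(gamma_st(c4))   # 2
```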
Boutrig and Chellali proved that the relation $\gamma_w(G)+\frac{3}{\Delta+1}\gamma_{\rm st}(G)\leq n$ holds for any connected graph of order $n\geq 3.$ Alikhani, Ghanbari and Zaherifard \cite{sub} examined the effects on $\gamma_{\rm st}(G)$ when $G$ is modified by the edge deletion, the edge subdivision and the edge contraction. They also studied the strong domination number of the $k$-subdivision of $G$. Motivated by the enumeration of dominating sets of a graph and the domination polynomial (see, e.g., \cite{euro,saeid1}), the enumeration of strong dominating sets for certain graphs was studied in \cite{JAS}. The study of the strong domination number under graph operations is a natural and interesting subject, and it has been carried out for the join and corona product \cite{JAS}. In this paper, we consider two other graph operations, the Haj\'{o}s sum and the vertex sum of two graphs. The Haj\'{o}s sum is useful when one of two networks is disrupted and certain nodes are no longer functioning: such nodes are identified (fused) with nodes of a properly functioning network, and a new network is thus constructed. \section{ Haj\'{o}s sum} In this section, we study the strong domination number of the Haj\'{o}s sum of two graphs. First we recall its definition. Given graphs $G_1 = (V_1,E_1)$ and $G_2 = (V_2, E_2)$ with disjoint vertex sets, an edge $x_1y_1\in E_1$, and an edge $x_2y_2\in E_2$, the \emph{Haj\'{o}s sum} $G_3 = G_1(x_1y_1) +_H G_2(x_2y_2)$ is the graph obtained as follows: begin with $G_3 = (V_1 \cup V_2, E_1 \cup E_2)$; then in $G_3$ delete the edges $x_1y_1$ and $x_2y_2$, identify the vertices $x_1$ and $x_2$ as $v_H(x_1x_2)$, and add the edge $y_1y_2$~\cite{HAJOSSUM}. Figure~\ref{HaJ-K6C6} shows the Haj\'{o}s sum of $K_6$ and $C_6$ with respect to $x_1y_1$ and $x_2y_2$. 
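The construction can be made concrete in code. The sketch below (an illustration of ours; the adjacency-set representation and names are assumptions) builds the Haj\'{o}s sum of $K_6$ and $C_6$ from Figure~\ref{HaJ-K6C6} and checks that $\deg(v_H(x_1x_2))=\deg(x_1)+\deg(x_2)-2$:

```python
def hajos_sum(adj1, e1, adj2, e2, merged="vH"):
    """Hajos sum G1(x1 y1) +_H G2(x2 y2) on adjacency-set graphs.

    Assumes disjoint vertex names; deletes x1y1 and x2y2, identifies
    x1 and x2 as `merged`, and adds the edge y1y2.
    """
    (x1, y1), (x2, y2) = e1, e2
    adj = {v: set(nb) for v, nb in list(adj1.items()) + list(adj2.items())}
    # delete the two chosen edges
    adj[x1].discard(y1); adj[y1].discard(x1)
    adj[x2].discard(y2); adj[y2].discard(x2)
    # identify x1 and x2 as the merged vertex
    adj[merged] = adj.pop(x1) | adj.pop(x2)
    for v in adj:
        if x1 in adj[v] or x2 in adj[v]:
            adj[v] -= {x1, x2}
            adj[v].add(merged)
    # add the new edge y1 y2
    adj[y1].add(y2); adj[y2].add(y1)
    return adj

# K6 on vertices 0..5 and C6 on vertices 6..11, as in the figure
k6 = {i: {j for j in range(6) if j != i} for i in range(6)}
c6 = {i + 6: {(i + 1) % 6 + 6, (i - 1) % 6 + 6} for i in range(6)}
g3 = hajos_sum(k6, (0, 1), c6, (6, 7))   # x1 = 0, y1 = 1, x2 = 6, y2 = 7
print(len(g3))          # 11 vertices: 6 + 6 - 1
print(len(g3["vH"]))    # 5 = deg(x1) + deg(x2) - 2
print(7 in g3[1])       # True: the new edge y1 y2
```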
\begin{figure} \begin{center} \psscalebox{0.6 0.6} { \begin{pspicture}(0,-6.9293056)(16.402779,-0.66791654) \psline[linecolor=black, linewidth=0.06](2.601389,-2.0693054)(2.601389,-3.6693053)(1.4013889,-4.8693056)(0.20138885,-3.6693053)(0.20138885,-2.0693054)(1.4013889,-0.86930543)(2.601389,-2.0693054)(2.601389,-2.0693054) \psline[linecolor=black, linewidth=0.06](1.4013889,-0.86930543)(0.20138885,-3.6693053)(2.601389,-3.6693053)(1.4013889,-0.86930543)(1.4013889,-0.86930543) \psline[linecolor=black, linewidth=0.06](0.20138885,-2.0693054)(2.601389,-2.0693054)(1.4013889,-4.8693056)(0.20138885,-2.0693054)(0.20138885,-2.0693054) \psline[linecolor=black, linewidth=0.06](0.20138885,-3.6693053)(2.601389,-2.0693054)(2.601389,-2.0693054) \psline[linecolor=black, linewidth=0.06](0.20138885,-2.0693054)(2.601389,-3.6693053)(2.601389,-3.6693053) \psline[linecolor=black, linewidth=0.06](4.201389,-2.0693054)(5.4013886,-0.86930543)(6.601389,-2.0693054)(6.601389,-3.6693053)(5.4013886,-4.8693056)(4.201389,-3.6693053)(4.201389,-2.0693054)(4.201389,-2.0693054) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](0.20138885,-2.0693054) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](2.601389,-2.0693054) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](0.20138885,-3.6693053) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](2.601389,-3.6693053) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](5.4013886,-0.86930543) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](4.201389,-2.0693054) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](6.601389,-2.0693054) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](6.601389,-3.6693053) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](5.4013886,-4.8693056) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](4.201389,-3.6693053) \rput[bl](1.0413889,-6.809305){\LARGE{$K_6$}} 
\rput[bl](5.061389,-6.809305){\LARGE{$C_6$}} \rput[bl](10.321389,-6.9293056){\LARGE{$K_6(x_1y_1)+_H C_6(x_2y_2)$}} \rput[bl](12.941389,-2.6493053){$v_H(x_1x_2)$} \rput[bl](2.7013888,-1.9093055){$x_1$} \rput[bl](3.7013888,-1.9293054){$x_2$} \rput[bl](3.641389,-4.0693054){$y_2$} \rput[bl](2.821389,-4.0893054){$y_1$} \psline[linecolor=black, linewidth=0.06](11.001389,-0.86930543)(9.801389,-3.6693053)(12.201389,-3.6693053)(11.001389,-0.86930543)(11.001389,-0.86930543) \psline[linecolor=black, linewidth=0.06](9.801389,-2.0693054)(12.201389,-3.6693053)(12.201389,-3.6693053) \psline[linecolor=black, linewidth=0.06](1.4013889,-0.86930543)(1.4013889,-4.8693056)(1.4013889,-4.8693056) \psline[linecolor=black, linewidth=0.06](11.001389,-0.86930543)(11.001389,-4.8693056)(11.001389,-4.8693056) \psline[linecolor=black, linewidth=0.06](11.001389,-0.86930543)(9.801389,-2.0693054)(9.801389,-3.6693053)(11.001389,-4.8693056)(11.001389,-4.8693056) \psline[linecolor=black, linewidth=0.06](11.001389,-4.8693056)(12.201389,-3.6693053)(12.201389,-3.6693053) \psline[linecolor=black, linewidth=0.06](13.801389,-3.6693053)(15.001389,-4.8693056)(16.20139,-3.6693053)(16.20139,-2.0693054)(15.001389,-0.86930543)(15.001389,-0.86930543) \psline[linecolor=black, linewidth=0.06](12.201389,-3.6693053)(13.801389,-3.6693053)(13.801389,-3.6693053) \psline[linecolor=black, linewidth=0.06](15.001389,-0.86930543)(13.001389,-2.0693054)(13.001389,-2.0693054) \psline[linecolor=black, linewidth=0.06](11.001389,-0.86930543)(13.001389,-2.0693054)(13.001389,-2.0693054) \psline[linecolor=black, linewidth=0.06](9.801389,-2.0693054)(13.001389,-2.0693054)(9.801389,-3.6693053)(9.801389,-3.6693053) \psline[linecolor=black, linewidth=0.06](13.001389,-2.0693054)(11.001389,-4.8693056)(11.001389,-4.8693056) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](13.001389,-2.0693054) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](12.201389,-3.6693053) \psdots[linecolor=black, dotstyle=o, 
dotsize=0.4, fillcolor=white](13.801389,-3.6693053) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](11.001389,-4.8693056) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](9.801389,-3.6693053) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](9.801389,-2.0693054) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](11.001389,-0.86930543) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](15.001389,-0.86930543) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](16.20139,-2.0693054) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](16.20139,-3.6693053) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](15.001389,-4.8693056) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](1.4013889,-0.86930543) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](1.4013889,-4.8693056) \rput[bl](12.221389,-4.1693053){$y_1$} \rput[bl](13.381389,-4.1693053){$y_2$} \end{pspicture} } \end{center} \caption{Haj\'{o}s construction of $K_6$ and $C_6$.} \label{HaJ-K6C6} \end{figure} The following theorem gives lower and upper bounds for the strong domination number of the Haj\'{o}s sum of two graphs. \begin{theorem}\label{thm:Hajos} Let $G_1=(V_1,E_1)$ and $G_2=(V_2,E_2)$ be two graphs with disjoint vertex sets, $x_1y_1\in E_1$ and $x_2y_2\in E_2$. Also, suppose that $x_1$ and $x_2$ are not pendant vertices. Then for the Haj\'{o}s sum $$G_3=G_1(x_1y_1)+_H G_2(x_2y_2),$$ we have $$ \gamma_{\rm st} (G_1) + \gamma_{\rm st} (G_2) -\deg(x_1)-\deg(x_2)+2 \leq \gamma_{\rm st}(G_3) \leq \gamma_{\rm st} (G_1) + \gamma_{\rm st} (G_2)+1. $$ \end{theorem} \begin{proof} First we find the upper bound. Since $x_1$ and $x_2$ are not pendant vertices, by the definition of the Haj\'{o}s sum we know that $\deg (v_H(x_1x_2))=\deg (x_1)+\deg (x_2)-2$. Also, $\deg _{G_3}(y_1)=\deg _{G_1}(y_1)$ and $\deg _{G_3}(y_2)=\deg _{G_2}(y_2)$.
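As a concrete check of this degree identity, consider the Haj\'{o}s sum of $K_6$ and $C_6$ depicted in Figure \ref{HaJ-K6C6}, where every vertex of $K_6$ has degree $5$ and every vertex of $C_6$ has degree $2$; there the identified vertex satisfies
$$\deg (v_H(x_1x_2))=\deg (x_1)+\deg (x_2)-2=5+2-2=5.$$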
Suppose that $D_i$ is a $\gamma_{\rm st}$-set of $G_i$, for $i=1,2$. We have the following cases: \begin{itemize} \item[(i)] $y_1$ is strong dominated by $x_1$, and $y_2$ is strong dominated by $x_2$. Without loss of generality, suppose that $\deg(y_1)\geq \deg(y_2)$. Let $$D_3=\left(D_1\setminus \{x_1\}\right)\cup \left(D_2\setminus \{x_2\}\right)\cup \{v_H(x_1x_2),y_1\}.$$ Then $D_3$ is a strong dominating set of $G_3$, because $y_2$ is strong dominated by $y_1$, and every other vertex in $\overline{D_3}$ is strong dominated by the same vertex as before or by $v_H(x_1x_2)$. So we have $$\gamma_{\rm st}(G_3) \leq \gamma_{\rm st} (G_1) + \gamma_{\rm st} (G_2).$$ \item[(ii)] $y_1$ is strong dominated by $x_1$, and $y_2$ is not strong dominated by $x_2$. In this case, we may have $y_2\in D_2$ or $y_2\in \overline{D_2}$, and we may have $x_2\in D_2$ or $x_2\in \overline{D_2}$. Let $$D_3=\left(D_1\setminus \{x_1\}\right)\cup \left(D_2\setminus \{x_2\}\right)\cup \{v_H(x_1x_2),y_1\}.$$ Then $D_3$ is a strong dominating set of $G_3$, because if $y_2\in \overline{D_2}$, then it is strong dominated by the same vertex as before, and every other vertex in $\overline{D_3}$ is strong dominated by the same vertex as before or by $v_H(x_1x_2)$. So, in the worst case, which is $x_2\in \overline{D_2}$, we have $$\gamma_{\rm st}(G_3) \leq \gamma_{\rm st} (G_1) + \gamma_{\rm st} (G_2)+1.$$ \item[(iii)] $y_1$ is not strong dominated by $x_1$, and $y_2$ is not strong dominated by $x_2$. By a discussion similar to part (ii), $$D_3=\left(D_1\setminus \{x_1\}\right)\cup \left(D_2\setminus \{x_2\}\right)\cup \{v_H(x_1x_2)\}$$ is a strong dominating set of $G_3$, and in the worst case, we have $$\gamma_{\rm st}(G_3) \leq \gamma_{\rm st} (G_1) + \gamma_{\rm st} (G_2)+1.$$ \item[(iv)] $x_1$ is strong dominated by $y_1$, and $x_2$ is strong dominated by $y_2$.
Then clearly $$D_3=D_1\cup D_2\cup \{v_H(x_1x_2)\}$$ is a strong dominating set of $G_3$, and we have $$\gamma_{\rm st}(G_3) \leq \gamma_{\rm st} (G_1) + \gamma_{\rm st} (G_2)+1.$$ \item[(v)] $x_1$ is strong dominated by $y_1$, and $x_2$ is not strong dominated by $y_2$. If $y_2$ is strong dominated by $x_2$, then the result follows by an argument similar to case (ii). Otherwise, by an argument similar to part (ii), $$D_3=\left(D_1\cup D_2\setminus \{x_2\}\right)\cup \{v_H(x_1x_2)\}$$ is a strong dominating set of $G_3$, and in the worst case, we have $$\gamma_{\rm st}(G_3) \leq \gamma_{\rm st} (G_1) + \gamma_{\rm st} (G_2)+1.$$ \item[(vi)] $x_1$ is not strong dominated by $y_1$, and $x_2$ is not strong dominated by $y_2$. If $y_1$ is strong dominated by $x_1$ and $y_2$ is strong dominated by $x_2$, then the result follows by case (i); if $y_1$ is strong dominated by $x_1$ and $y_2$ is not strong dominated by $x_2$, then the result follows by case (ii). Otherwise, by an argument similar to the previous cases, $$D_3=\left(D_1\setminus \{x_1\}\right)\cup \left(D_2\setminus \{x_2\}\right)\cup \{v_H(x_1x_2)\}$$ is a strong dominating set of $G_3$, and in the worst case, we have $$\gamma_{\rm st}(G_3) \leq \gamma_{\rm st} (G_1) + \gamma_{\rm st} (G_2)+1.$$ \end{itemize} So, in general, we have $\gamma_{\rm st}(G_3) \leq \gamma_{\rm st} (G_1) + \gamma_{\rm st} (G_2)+1$. Now we find the lower bound. Suppose that $S_3$ is a $\gamma_{\rm st}$-set of $G_3$. We find strong dominating sets of $G_1$ and $G_2$ based on $S_3$. We consider the following cases: \begin{itemize} \item[(i)] $v_H(x_1x_2)\in S_3$. Here we consider the following sub-cases: \begin{itemize} \item[(a)] $y_1\in S_3$ and $y_2\in S_3$.
If $v_H(x_1x_2)$ does not strong dominate any vertex in $\overline{S_3}$, then $$S_1=\left(S_3\setminus \Big( V(G_2)\cup \{v_H(x_1x_2)\} \Big)\right)\cup \{x_1\}$$ is a strong dominating set of $G_1$, and $$S_2=\left(S_3\setminus \Big( V(G_1)\cup \{v_H(x_1x_2)\} \Big)\right)\cup \{x_2\}$$ is a strong dominating set of $G_2$. But if $v_H(x_1x_2)$ strong dominates some vertices in $\overline{S_3}$, then after forming $G_1$ and $G_2$ from $G_3$, we proceed as follows. If $\deg(x_1)\geq \max \{\deg(u)~|~ u\in N(x_1)\}$ and $\deg(x_2)\geq \max \{\deg(v)~|~ v\in N(x_2)\}$, we consider $S_1$ and $S_2$ as above. If $\deg(x_1)\geq \max \{\deg(u)~|~ u\in N(x_1)\}$ but $\deg(x_2)\ngeqslant \max \{\deg(v)~|~ v\in N(x_2)\}$, we consider $S_1$ as above, and let $$S_2=\left(S_3\setminus \Big( V(G_1)\cup \{v_H(x_1x_2)\} \Big)\right)\cup N(x_2);$$ then one can easily check that $S_2$ is a strong dominating set of $G_2$. If $\deg(x_1)\ngeqslant \max \{\deg(u)~|~ u\in N(x_1)\}$ and $\deg(x_2)\ngeqslant \max \{\deg(v)~|~ v\in N(x_2)\}$, we consider $$S_1=\left(S_3\setminus \Big( V(G_2)\cup \{v_H(x_1x_2)\} \Big)\right)\cup N(x_1)$$ and $$S_2=\left(S_3\setminus \Big( V(G_1)\cup \{v_H(x_1x_2)\} \Big)\right)\cup N(x_2).$$ Here $S_1$ and $S_2$ are strong dominating sets of $G_1$ and $G_2$, respectively. So in the worst case we have $$\gamma_{\rm st}(G_1) + \gamma_{\rm st} (G_2) \leq \gamma_{\rm st} (G_3)-1 +\deg(x_1)-1+\deg(x_2)-1.$$ \item[(b)] $y_1\in S_3$ and $y_2\notin S_3$. If $v_H(x_1x_2)$ does not strong dominate any vertex in $\overline{S_3}$, then one can easily check that $$S_1=\left(S_3\setminus \Big( V(G_2)\cup \{v_H(x_1x_2)\} \Big)\right)\cup \{x_1\}$$ is a strong dominating set of $G_1$, and at least one of $$S_2=\left(S_3\setminus \Big( V(G_1)\cup \{v_H(x_1x_2)\} \Big)\right)\cup \{x_2\}$$ or $$S_2'=\left(S_3\setminus \Big( V(G_1)\cup \{v_H(x_1x_2)\} \Big)\right)\cup \{y_2\}$$ is a strong dominating set of $G_2$ (possibly both are).
Otherwise, by an argument similar to part (a), we conclude that $$\gamma_{\rm st}(G_1) + \gamma_{\rm st} (G_2) \leq \gamma_{\rm st} (G_3)-1 +\deg(x_1)-1+\deg(x_2).$$ \item[(c)] $y_1\notin S_3$ and $y_2\notin S_3$. Then there exists $y_1'\in V(G_1)$ which strong dominates $y_1$, and there exists $y_2'\in V(G_2)$ which strong dominates $y_2$. One can easily check that $$S_1=\left(S_3\setminus \Big( V(G_2)\cup \{v_H(x_1x_2)\} \Big)\right)\cup \{x_1\}$$ is a strong dominating set of $G_1$, and $$S_2=\left(S_3\setminus \Big( V(G_1)\cup \{v_H(x_1x_2)\} \Big)\right)\cup \{x_2\}$$ is a strong dominating set of $G_2$, and we have $$\gamma_{\rm st}(G_1) + \gamma_{\rm st} (G_2) \leq \gamma_{\rm st} (G_3)+1.$$ \end{itemize} \item[(ii)] $v_H(x_1x_2)\notin S_3$. Without loss of generality, suppose that there exists $x_1'\in V(G_1)$ such that $v_H(x_1x_2)$ is strong dominated by $x_1'$. We consider the following cases: \begin{itemize} \item[(a)] $y_1\in S_3$ and $y_2\in S_3$. Then one can easily check that $$S_1=S_3\setminus V(G_2)$$ is a strong dominating set of $G_1$, and $$S_2=\left(S_3\setminus V(G_1)\right)\cup \{x_2\}$$ is a strong dominating set of $G_2$. So $$\gamma_{\rm st}(G_1) + \gamma_{\rm st} (G_2) \leq \gamma_{\rm st} (G_3)+1.$$ \item[(b)] $y_1\in S_3$ and $y_2\notin S_3$. Then $$S_1=S_3\setminus V(G_2)$$ is a strong dominating set of $G_1$, and at least one of $$S_2=\left(S_3\setminus V(G_1)\right)\cup \{x_2\}$$ or $$S_2'=\left(S_3\setminus V(G_1)\right)\cup \{y_2\}$$ is a strong dominating set of $G_2$ (possibly both are). So $$\gamma_{\rm st}(G_1) + \gamma_{\rm st} (G_2) \leq \gamma_{\rm st} (G_3)+1.$$ \item[(c)] $y_1\notin S_3$ and $y_2\notin S_3$.
Then, by considering sets similar to those in part (a), we have $$\gamma_{\rm st}(G_1) + \gamma_{\rm st} (G_2) \leq \gamma_{\rm st} (G_3)+1.$$ \end{itemize} \end{itemize} Therefore we have $\gamma_{\rm st}(G_3)\geq \gamma_{\rm st} (G_1) + \gamma_{\rm st} (G_2) -\deg(x_1)-\deg(x_2)+2$, and we are done. \end{proof} \begin{figure}[!h] \begin{center} \psscalebox{0.6 0.6} { \begin{pspicture}(0,-12.499306)(13.602778,12.102083) \psline[linecolor=black, linewidth=0.08](11.001389,10.700695)(12.201389,11.900695)(12.201389,11.900695) \psline[linecolor=black, linewidth=0.08](11.001389,10.700695)(12.201389,11.100695)(12.201389,11.100695) \psline[linecolor=black, linewidth=0.08](11.001389,10.700695)(12.201389,10.300694)(12.201389,10.300694) \psline[linecolor=black, linewidth=0.08](11.001389,10.700695)(12.201389,9.500694)(12.201389,9.500694) \psline[linecolor=black, linewidth=0.08](11.001389,7.5006948)(12.201389,8.700695)(12.201389,8.700695) \psline[linecolor=black, linewidth=0.08](11.001389,7.5006948)(12.201389,7.9006944)(12.201389,7.9006944) \psline[linecolor=black, linewidth=0.08](11.001389,7.5006948)(12.201389,7.1006947)(12.201389,7.1006947) \psline[linecolor=black, linewidth=0.08](11.001389,7.5006948)(12.201389,6.3006945)(12.201389,6.3006945) \psline[linecolor=black, linewidth=0.08](11.001389,4.3006945)(12.201389,5.5006948)(12.201389,5.5006948) \psline[linecolor=black, linewidth=0.08](11.001389,4.3006945)(12.201389,4.7006946)(12.201389,4.7006946) \psline[linecolor=black, linewidth=0.08](11.001389,4.3006945)(12.201389,3.9006946)(12.201389,3.9006946) \psline[linecolor=black, linewidth=0.08](11.001389,4.3006945)(12.201389,3.1006947)(12.201389,3.1006947) \psline[linecolor=black, linewidth=0.08](11.001389,10.700695)(9.401389,7.5006948)(9.401389,7.5006948) \psline[linecolor=black, linewidth=0.08](9.401389,7.5006948)(11.001389,7.5006948)(11.001389,7.5006948) \psline[linecolor=black, linewidth=0.08](9.401389,7.5006948)(11.001389,4.3006945)(11.001389,4.3006945) \psline[linecolor=black,
linewidth=0.08](9.401389,7.5006948)(9.401389,4.3006945)(9.401389,4.3006945) \psline[linecolor=black, linewidth=0.08](9.401389,4.3006945)(8.201389,3.5006945)(8.201389,3.5006945) \psline[linecolor=black, linewidth=0.08](9.401389,4.3006945)(9.001389,3.5006945)(9.001389,3.5006945) \psline[linecolor=black, linewidth=0.08](9.401389,4.3006945)(9.801389,3.5006945)(9.801389,3.5006945) \psline[linecolor=black, linewidth=0.08](9.401389,4.3006945)(10.601389,3.5006945)(10.601389,3.5006945) \psline[linecolor=black, linewidth=0.08](6.201389,3.5006945)(4.601389,4.3006945)(4.601389,4.3006945) \psline[linecolor=black, linewidth=0.08](4.601389,4.3006945)(5.4013886,3.5006945)(5.4013886,3.5006945) \psline[linecolor=black, linewidth=0.08](4.601389,4.3006945)(4.601389,3.5006945)(4.601389,3.5006945) \psline[linecolor=black, linewidth=0.08](4.601389,4.3006945)(3.8013887,3.5006945)(3.8013887,3.5006945) \psline[linecolor=black, linewidth=0.08](4.601389,4.3006945)(3.0013888,3.5006945)(3.0013888,3.5006945) \psline[linecolor=black, linewidth=0.08](4.601389,7.5006948)(4.601389,4.3006945)(4.601389,4.3006945) \psline[linecolor=black, linewidth=0.08](4.601389,7.5006948)(2.601389,6.3006945)(2.601389,6.3006945) \psline[linecolor=black, linewidth=0.08](4.601389,7.5006948)(2.601389,8.700695)(2.601389,8.700695) \psline[linecolor=black, linewidth=0.08](2.601389,8.700695)(1.4013889,7.9006944)(1.4013889,7.9006944) \psline[linecolor=black, linewidth=0.08](2.601389,8.700695)(1.4013889,8.700695)(1.4013889,8.700695) \psline[linecolor=black, linewidth=0.08](2.601389,8.700695)(1.4013889,9.500694)(1.4013889,9.500694) \psline[linecolor=black, linewidth=0.08](2.601389,6.3006945)(1.4013889,6.3006945)(1.4013889,6.3006945) \psline[linecolor=black, linewidth=0.08](1.4013889,7.1006947)(2.601389,6.3006945)(2.601389,6.3006945) \psline[linecolor=black, linewidth=0.08](1.4013889,5.5006948)(2.601389,6.3006945)(2.601389,6.3006945) \psline[linecolor=black, 
linewidth=0.08](13.401389,11.100695)(12.201389,11.100695)(12.201389,11.100695) \psline[linecolor=black, linewidth=0.08](13.401389,10.300694)(12.201389,10.300694)(12.201389,10.300694) \psline[linecolor=black, linewidth=0.08](13.401389,9.500694)(12.201389,9.500694)(12.201389,9.500694) \psline[linecolor=black, linewidth=0.08](12.201389,8.700695)(13.401389,8.700695)(13.401389,8.700695) \psline[linecolor=black, linewidth=0.08](12.201389,7.9006944)(13.401389,7.9006944)(13.401389,7.9006944) \psline[linecolor=black, linewidth=0.08](12.201389,7.1006947)(13.401389,7.1006947)(13.401389,7.1006947) \psline[linecolor=black, linewidth=0.08](12.201389,6.3006945)(13.401389,6.3006945)(13.401389,6.3006945) \psline[linecolor=black, linewidth=0.08](12.201389,5.5006948)(13.401389,5.5006948)(13.401389,5.5006948) \psline[linecolor=black, linewidth=0.08](12.201389,4.7006946)(13.401389,4.7006946)(13.401389,4.7006946) \psline[linecolor=black, linewidth=0.08](12.201389,3.9006946)(13.401389,3.9006946)(13.401389,3.9006946) \psline[linecolor=black, linewidth=0.08](12.201389,3.1006947)(13.401389,3.1006947)(13.401389,3.1006947) \psline[linecolor=black, linewidth=0.08](1.4013889,9.500694)(0.20138885,9.500694)(0.20138885,9.500694) \psline[linecolor=black, linewidth=0.08](0.20138885,8.700695)(1.8013889,8.700695) \psline[linecolor=black, linewidth=0.08](0.20138885,7.9006944)(1.4013889,7.9006944) \psline[linecolor=black, linewidth=0.08](0.20138885,7.1006947)(1.4013889,7.1006947) \psline[linecolor=black, linewidth=0.08](0.20138885,6.3006945)(1.4013889,6.3006945) \psline[linecolor=black, linewidth=0.08](0.20138885,5.5006948)(1.4013889,5.5006948)(1.4013889,5.5006948) \psline[linecolor=black, linewidth=0.08](3.0013888,3.5006945)(3.0013888,2.3006945)(3.0013888,2.3006945) \psline[linecolor=black, linewidth=0.08](3.8013887,3.5006945)(3.8013887,2.3006945)(3.8013887,2.3006945) \psline[linecolor=black, linewidth=0.08](4.601389,3.5006945)(4.601389,2.3006945) \psline[linecolor=black, 
linewidth=0.08](5.4013886,3.5006945)(5.4013886,2.3006945) \psline[linecolor=black, linewidth=0.08](6.201389,3.5006945)(6.201389,2.3006945) \psline[linecolor=black, linewidth=0.08](8.201389,3.5006945)(8.201389,2.3006945)(8.201389,2.3006945) \psline[linecolor=black, linewidth=0.08](9.001389,3.5006945)(9.001389,2.3006945)(9.001389,2.3006945) \psline[linecolor=black, linewidth=0.08](9.801389,3.5006945)(9.801389,2.3006945)(9.801389,2.3006945) \psline[linecolor=black, linewidth=0.08](10.601389,3.5006945)(10.601389,2.3006945)(10.601389,2.3006945) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](13.401389,11.100695) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](13.401389,10.300694) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](13.401389,9.500694) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](13.401389,8.700695) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](13.401389,7.9006944) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](13.401389,7.1006947) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](13.401389,6.3006945) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](13.401389,5.5006948) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](13.401389,4.7006946) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](13.401389,3.9006946) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](13.401389,3.1006947) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](10.601389,2.3006945) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](9.801389,2.3006945) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](9.001389,2.3006945) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](8.201389,2.3006945) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](6.201389,2.3006945) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, 
fillcolor=white](5.4013886,2.3006945) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](4.601389,2.3006945) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](3.8013887,2.3006945) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](3.0013888,2.3006945) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](0.20138885,5.5006948) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](0.20138885,6.3006945) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](0.20138885,7.1006947) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](0.20138885,7.9006944) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](0.20138885,8.700695) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](0.20138885,9.500694) \psdots[linecolor=black, dotsize=0.4](12.201389,11.900695) \psdots[linecolor=black, dotsize=0.4](12.201389,11.100695) \psdots[linecolor=black, dotsize=0.4](12.201389,10.300694) \psdots[linecolor=black, dotsize=0.4](12.201389,9.500694) \psdots[linecolor=black, dotsize=0.4](12.201389,8.700695) \psdots[linecolor=black, dotsize=0.4](12.201389,7.9006944) \psdots[linecolor=black, dotsize=0.4](12.201389,7.1006947) \psdots[linecolor=black, dotsize=0.4](12.201389,6.3006945) \psdots[linecolor=black, dotsize=0.4](12.201389,5.5006948) \psdots[linecolor=black, dotsize=0.4](12.201389,4.7006946) \psdots[linecolor=black, dotsize=0.4](12.201389,3.9006946) \psdots[linecolor=black, dotsize=0.4](12.201389,3.1006947) \psdots[linecolor=black, dotsize=0.4](10.601389,3.5006945) \psdots[linecolor=black, dotsize=0.4](9.801389,3.5006945) \psdots[linecolor=black, dotsize=0.4](9.001389,3.5006945) \psdots[linecolor=black, dotsize=0.4](8.201389,3.5006945) \psdots[linecolor=black, dotsize=0.4](6.201389,3.5006945) \psdots[linecolor=black, dotsize=0.4](5.4013886,3.5006945) \psdots[linecolor=black, dotsize=0.4](4.601389,3.5006945) \psdots[linecolor=black, 
dotsize=0.4](3.8013887,3.5006945) \psdots[linecolor=black, dotsize=0.4](3.0013888,3.5006945) \psdots[linecolor=black, dotsize=0.4](1.4013889,5.5006948) \psdots[linecolor=black, dotsize=0.4](1.4013889,6.3006945) \psdots[linecolor=black, dotsize=0.4](1.4013889,7.1006947) \psdots[linecolor=black, dotsize=0.4](1.4013889,7.9006944) \psdots[linecolor=black, dotsize=0.4](1.4013889,8.700695) \psdots[linecolor=black, dotsize=0.4](1.4013889,9.500694) \psline[linecolor=black, linewidth=0.08](9.401389,-2.4993055)(10.601389,-1.2993054)(10.601389,-1.2993054) \psline[linecolor=black, linewidth=0.08](9.401389,-2.4993055)(10.601389,-2.0993054)(10.601389,-2.0993054) \psline[linecolor=black, linewidth=0.08](9.401389,-2.4993055)(10.601389,-2.8993053)(10.601389,-2.8993053) \psline[linecolor=black, linewidth=0.08](9.401389,-2.4993055)(10.601389,-3.6993055)(10.601389,-3.6993055) \psline[linecolor=black, linewidth=0.08](9.401389,-5.6993055)(10.601389,-4.4993052)(10.601389,-4.4993052) \psline[linecolor=black, linewidth=0.08](9.401389,-5.6993055)(10.601389,-5.2993054)(10.601389,-5.2993054) \psline[linecolor=black, linewidth=0.08](9.401389,-5.6993055)(10.601389,-6.0993056)(10.601389,-6.0993056) \psline[linecolor=black, linewidth=0.08](9.401389,-5.6993055)(10.601389,-6.8993053)(10.601389,-6.8993053) \psline[linecolor=black, linewidth=0.08](9.401389,-8.899305)(10.601389,-7.6993055)(10.601389,-7.6993055) \psline[linecolor=black, linewidth=0.08](9.401389,-8.899305)(10.601389,-8.499306)(10.601389,-8.499306) \psline[linecolor=black, linewidth=0.08](9.401389,-8.899305)(10.601389,-9.299305)(10.601389,-9.299305) \psline[linecolor=black, linewidth=0.08](9.401389,-8.899305)(10.601389,-10.099305)(10.601389,-10.099305) \psline[linecolor=black, linewidth=0.08](9.401389,-2.4993055)(7.8013887,-5.6993055)(7.8013887,-5.6993055) \psline[linecolor=black, linewidth=0.08](7.8013887,-5.6993055)(9.401389,-5.6993055)(9.401389,-5.6993055) \psline[linecolor=black, 
linewidth=0.08](7.8013887,-5.6993055)(9.401389,-8.899305)(9.401389,-8.899305) \psline[linecolor=black, linewidth=0.08](7.8013887,-8.899305)(6.601389,-9.699306)(6.601389,-9.699306) \psline[linecolor=black, linewidth=0.08](7.8013887,-8.899305)(7.4013886,-9.699306)(7.4013886,-9.699306) \psline[linecolor=black, linewidth=0.08](7.8013887,-8.899305)(8.201389,-9.699306)(8.201389,-9.699306) \psline[linecolor=black, linewidth=0.08](7.8013887,-8.899305)(9.001389,-9.699306)(9.001389,-9.699306) \psline[linecolor=black, linewidth=0.08](5.001389,-9.699306)(3.401389,-8.899305)(3.401389,-8.899305) \psline[linecolor=black, linewidth=0.08](3.401389,-8.899305)(4.201389,-9.699306)(4.201389,-9.699306) \psline[linecolor=black, linewidth=0.08](3.401389,-8.899305)(3.401389,-9.699306)(3.401389,-9.699306) \psline[linecolor=black, linewidth=0.08](3.401389,-8.899305)(2.601389,-9.699306)(2.601389,-9.699306) \psline[linecolor=black, linewidth=0.08](3.401389,-8.899305)(1.8013889,-9.699306)(1.8013889,-9.699306) \psline[linecolor=black, linewidth=0.08](7.8013887,-5.6993055)(5.8013887,-6.8993053)(5.8013887,-6.8993053) \psline[linecolor=black, linewidth=0.08](7.8013887,-5.6993055)(5.8013887,-4.4993052)(5.8013887,-4.4993052) \psline[linecolor=black, linewidth=0.08](5.8013887,-4.4993052)(4.601389,-5.2993054)(4.601389,-5.2993054) \psline[linecolor=black, linewidth=0.08](5.8013887,-4.4993052)(4.601389,-4.4993052)(4.601389,-4.4993052) \psline[linecolor=black, linewidth=0.08](5.8013887,-4.4993052)(4.601389,-3.6993055)(4.601389,-3.6993055) \psline[linecolor=black, linewidth=0.08](5.8013887,-6.8993053)(4.601389,-6.8993053)(4.601389,-6.8993053) \psline[linecolor=black, linewidth=0.08](4.601389,-6.0993056)(5.8013887,-6.8993053)(5.8013887,-6.8993053) \psline[linecolor=black, linewidth=0.08](4.601389,-7.6993055)(5.8013887,-6.8993053)(5.8013887,-6.8993053) \psline[linecolor=black, linewidth=0.08](11.801389,-1.2993054)(10.601389,-1.2993054)(10.601389,-1.2993054) \psline[linecolor=black, 
linewidth=0.08](11.801389,-2.0993054)(10.601389,-2.0993054)(10.601389,-2.0993054) \psline[linecolor=black, linewidth=0.08](11.801389,-2.8993053)(10.601389,-2.8993053)(10.601389,-2.8993053) \psline[linecolor=black, linewidth=0.08](11.801389,-3.6993055)(10.601389,-3.6993055)(10.601389,-3.6993055) \psline[linecolor=black, linewidth=0.08](10.601389,-4.4993052)(11.801389,-4.4993052)(11.801389,-4.4993052) \psline[linecolor=black, linewidth=0.08](10.601389,-5.2993054)(11.801389,-5.2993054)(11.801389,-5.2993054) \psline[linecolor=black, linewidth=0.08](10.601389,-6.0993056)(11.801389,-6.0993056)(11.801389,-6.0993056) \psline[linecolor=black, linewidth=0.08](10.601389,-6.8993053)(11.801389,-6.8993053)(11.801389,-6.8993053) \psline[linecolor=black, linewidth=0.08](10.601389,-7.6993055)(11.801389,-7.6993055)(11.801389,-7.6993055) \psline[linecolor=black, linewidth=0.08](10.601389,-8.499306)(11.801389,-8.499306)(11.801389,-8.499306) \psline[linecolor=black, linewidth=0.08](10.601389,-9.299305)(11.801389,-9.299305)(11.801389,-9.299305) \psline[linecolor=black, linewidth=0.08](10.601389,-10.099305)(11.801389,-10.099305)(11.801389,-10.099305) \psline[linecolor=black, linewidth=0.08](4.601389,-3.6993055)(3.401389,-3.6993055)(3.401389,-3.6993055) \psline[linecolor=black, linewidth=0.08](3.401389,-4.4993052)(5.001389,-4.4993052) \psline[linecolor=black, linewidth=0.08](3.401389,-5.2993054)(4.601389,-5.2993054) \psline[linecolor=black, linewidth=0.08](3.401389,-6.0993056)(4.601389,-6.0993056) \psline[linecolor=black, linewidth=0.08](3.401389,-6.8993053)(4.601389,-6.8993053) \psline[linecolor=black, linewidth=0.08](3.401389,-7.6993055)(4.601389,-7.6993055)(4.601389,-7.6993055) \psline[linecolor=black, linewidth=0.08](1.8013889,-9.699306)(1.8013889,-10.899305)(1.8013889,-10.899305) \psline[linecolor=black, linewidth=0.08](2.601389,-9.699306)(2.601389,-10.899305)(2.601389,-10.899305) \psline[linecolor=black, linewidth=0.08](3.401389,-9.699306)(3.401389,-10.899305) 
\psline[linecolor=black, linewidth=0.08](4.201389,-9.699306)(4.201389,-10.899305) \psline[linecolor=black, linewidth=0.08](5.001389,-9.699306)(5.001389,-10.899305) \psline[linecolor=black, linewidth=0.08](6.601389,-9.699306)(6.601389,-10.899305)(6.601389,-10.899305) \psline[linecolor=black, linewidth=0.08](7.4013886,-9.699306)(7.4013886,-10.899305)(7.4013886,-10.899305) \psline[linecolor=black, linewidth=0.08](8.201389,-9.699306)(8.201389,-10.899305)(8.201389,-10.899305) \psline[linecolor=black, linewidth=0.08](9.001389,-9.699306)(9.001389,-10.899305)(9.001389,-10.899305) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](11.801389,-1.2993054) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](11.801389,-2.0993054) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](11.801389,-2.8993053) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](11.801389,-3.6993055) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](11.801389,-4.4993052) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](11.801389,-5.2993054) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](11.801389,-6.0993056) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](11.801389,-6.8993053) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](11.801389,-7.6993055) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](11.801389,-8.499306) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](11.801389,-9.299305) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](11.801389,-10.099305) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](9.001389,-10.899305) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](8.201389,-10.899305) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](7.4013886,-10.899305) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](6.601389,-10.899305) 
\psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](5.001389,-10.899305) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](4.201389,-10.899305) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](3.401389,-10.899305) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](2.601389,-10.899305) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](1.8013889,-10.899305) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](3.401389,-7.6993055) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](3.401389,-6.8993053) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](3.401389,-6.0993056) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](3.401389,-5.2993054) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](3.401389,-4.4993052) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](3.401389,-3.6993055) \psdots[linecolor=black, dotsize=0.4](10.601389,-1.2993054) \psdots[linecolor=black, dotsize=0.4](10.601389,-2.0993054) \psdots[linecolor=black, dotsize=0.4](10.601389,-2.8993053) \psdots[linecolor=black, dotsize=0.4](10.601389,-3.6993055) \psdots[linecolor=black, dotsize=0.4](10.601389,-4.4993052) \psdots[linecolor=black, dotsize=0.4](10.601389,-5.2993054) \psdots[linecolor=black, dotsize=0.4](10.601389,-6.0993056) \psdots[linecolor=black, dotsize=0.4](10.601389,-6.8993053) \psdots[linecolor=black, dotsize=0.4](10.601389,-7.6993055) \psdots[linecolor=black, dotsize=0.4](10.601389,-8.499306) \psdots[linecolor=black, dotsize=0.4](10.601389,-9.299305) \psdots[linecolor=black, dotsize=0.4](10.601389,-10.099305) \psdots[linecolor=black, dotsize=0.4](9.001389,-9.699306) \psdots[linecolor=black, dotsize=0.4](8.201389,-9.699306) \psdots[linecolor=black, dotsize=0.4](7.4013886,-9.699306) \psdots[linecolor=black, dotsize=0.4](6.601389,-9.699306) \psdots[linecolor=black, dotsize=0.4](5.001389,-9.699306) 
\psdots[linecolor=black, dotsize=0.4](4.201389,-9.699306) \psdots[linecolor=black, dotsize=0.4](3.401389,-9.699306) \psdots[linecolor=black, dotsize=0.4](2.601389,-9.699306) \psdots[linecolor=black, dotsize=0.4](1.8013889,-9.699306) \psdots[linecolor=black, dotsize=0.4](4.601389,-7.6993055) \psdots[linecolor=black, dotsize=0.4](4.601389,-6.8993053) \psdots[linecolor=black, dotsize=0.4](4.601389,-6.0993056) \psdots[linecolor=black, dotsize=0.4](4.601389,-5.2993054) \psdots[linecolor=black, dotsize=0.4](4.601389,-4.4993052) \psdots[linecolor=black, dotsize=0.4](4.601389,-3.6993055) \psline[linecolor=black, linewidth=0.08](3.401389,-8.899305)(7.8013887,-8.899305)(7.8013887,-8.899305) \psdots[linecolor=black, dotsize=0.4](2.601389,8.700695) \psdots[linecolor=black, dotsize=0.4](2.601389,6.3006945) \psdots[linecolor=black, dotsize=0.4](4.601389,4.3006945) \psdots[linecolor=black, dotsize=0.4](9.401389,4.3006945) \psdots[linecolor=black, dotsize=0.4](11.001389,4.3006945) \psdots[linecolor=black, dotsize=0.4](11.001389,7.5006948) \psdots[linecolor=black, dotsize=0.4](11.001389,10.700695) \psdots[linecolor=black, dotsize=0.4](7.8013887,-5.6993055) \psdots[linecolor=black, dotsize=0.4](3.401389,-8.899305) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](9.401389,-2.4993055) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](9.401389,-5.6993055) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](9.401389,-8.899305) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](5.8013887,-4.4993052) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](5.8013887,-6.8993053) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](7.8013887,-8.899305) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](9.401389,7.5006948) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](4.601389,7.5006948) \rput[bl](4.201389,-12.499306){\LARGE{$G_1(x_1y_1)+_H G_2(x_2y_2)$}} 
\rput[bl](10.601389,0.70069456){\LARGE{$G_2$}} \rput[bl](2.601389,0.70069456){\LARGE{$G_1$}} \psrotate(5.981389, -5.8793054){0.25591654}{\rput[bl](5.981389,-5.8793054){$v_H(x_1x_2)$}} \rput[bl](4.9213886,7.3606944){$x_1$} \rput[bl](8.601389,7.3806944){$x_2$} \rput[bl](4.8813887,4.460695){$y_1$} \rput[bl](2.881389,-8.619306){$y_1$} \rput[bl](8.721389,4.4006944){$y_2$} \rput[bl](7.641389,-8.539306){$y_2$} \psline[linecolor=black, linewidth=0.08](12.201389,11.900695)(13.401389,11.900695)(13.401389,11.900695) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](13.401389,11.900695) \end{pspicture} } \end{center} \caption{Haj\'{o}s construction of $G_1$ and $G_2$.} \label{fig:hajos-low} \end{figure} \begin{figure}[!h] \begin{center} \psscalebox{0.6 0.6} { \begin{pspicture}(0,-7.7393055)(18.402779,-2.2579165) \rput[bl](12.101389,-7.7393055){\LARGE{$G_1(x_1y_1)+_H G_2(x_2y_2)$}} \rput[bl](5.8013887,-7.6593056){\LARGE{$G_2$}} \rput[bl](1.0013889,-7.6593056){\LARGE{$G_1$}} \psrotate(13.901389, -3.5993054){0.25591654}{\rput[bl](13.901389,-3.5993054){$v_H(x_1x_2)$}} \rput[bl](3.061389,-3.7993054){$x_1$} \rput[bl](4.161389,-3.8193054){$x_2$} \rput[bl](2.881389,-6.9393053){$y_1$} \rput[bl](4.2813888,-6.9393053){$y_2$} \rput[bl](13.321389,-6.7193055){$y_1$} \rput[bl](15.5413885,-6.7193055){$y_2$} \psline[linecolor=black, linewidth=0.08](4.601389,-4.059305)(4.601389,-6.4593053)(4.601389,-6.4593053) \psline[linecolor=black, linewidth=0.08](4.601389,-4.059305)(6.201389,-4.059305)(6.201389,-4.059305) \psline[linecolor=black, linewidth=0.08](4.601389,-4.059305)(6.201389,-2.4593055)(6.201389,-2.4593055) \psline[linecolor=black, linewidth=0.08](4.601389,-4.059305)(6.201389,-5.6593056)(6.201389,-5.6593056) \psline[linecolor=black, linewidth=0.08](4.601389,-6.4593053)(6.201389,-4.059305)(6.201389,-4.059305) \psline[linecolor=black, linewidth=0.08](6.201389,-2.4593055)(6.201389,-4.059305)(6.201389,-4.059305) \psline[linecolor=black, 
linewidth=0.08](6.201389,-4.059305)(6.201389,-5.6593056)(6.201389,-5.6593056) \psline[linecolor=black, linewidth=0.08](6.201389,-5.6593056)(7.4013886,-5.6593056)(7.4013886,-5.6593056) \psline[linecolor=black, linewidth=0.08](6.201389,-4.059305)(7.4013886,-4.059305)(7.4013886,-4.059305) \psline[linecolor=black, linewidth=0.08](6.201389,-2.4593055)(7.4013886,-2.4593055)(7.4013886,-2.4593055) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](7.4013886,-2.4593055) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](7.4013886,-4.059305) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](7.4013886,-5.6593056) \psline[linecolor=black, linewidth=0.08](3.0013888,-4.059305)(3.0013888,-6.4593053)(3.0013888,-6.4593053) \psline[linecolor=black, linewidth=0.08](3.0013888,-4.059305)(1.4013889,-4.059305)(1.4013889,-4.059305) \psline[linecolor=black, linewidth=0.08](1.4013889,-4.059305)(3.0013888,-6.4593053)(3.0013888,-6.4593053) \psline[linecolor=black, linewidth=0.08](3.0013888,-4.059305)(1.4013889,-2.4593055)(1.4013889,-2.4593055) \psline[linecolor=black, linewidth=0.08](1.4013889,-2.4593055)(1.4013889,-4.059305)(1.4013889,-4.059305) \psline[linecolor=black, linewidth=0.08](1.4013889,-2.4593055)(0.20138885,-2.4593055)(0.20138885,-2.4593055) \psline[linecolor=black, linewidth=0.08](0.20138885,-4.059305)(1.4013889,-4.059305)(1.4013889,-4.059305) \psline[linecolor=black, linewidth=0.08](1.4013889,-4.059305)(1.4013889,-5.6593056)(1.4013889,-5.6593056) \psline[linecolor=black, linewidth=0.08](1.4013889,-5.6593056)(0.20138885,-5.6593056)(0.20138885,-5.6593056) \psline[linecolor=black, linewidth=0.08](3.0013888,-4.059305)(1.4013889,-5.6593056)(1.4013889,-5.6593056) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](0.20138885,-2.4593055) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](0.20138885,-4.059305) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](0.20138885,-5.6593056) 
\psline[linecolor=black, linewidth=0.08](15.401389,-6.4593053)(17.001389,-4.059305)(17.001389,-4.059305) \psline[linecolor=black, linewidth=0.08](17.001389,-2.4593055)(17.001389,-4.059305)(17.001389,-4.059305) \psline[linecolor=black, linewidth=0.08](17.001389,-4.059305)(17.001389,-5.6593056)(17.001389,-5.6593056) \psline[linecolor=black, linewidth=0.08](17.001389,-5.6593056)(18.20139,-5.6593056)(18.20139,-5.6593056) \psline[linecolor=black, linewidth=0.08](17.001389,-4.059305)(18.20139,-4.059305)(18.20139,-4.059305) \psline[linecolor=black, linewidth=0.08](17.001389,-2.4593055)(18.20139,-2.4593055)(18.20139,-2.4593055) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](18.20139,-2.4593055) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](18.20139,-4.059305) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](18.20139,-5.6593056) \psline[linecolor=black, linewidth=0.08](12.201389,-4.059305)(13.801389,-6.4593053)(13.801389,-6.4593053) \psline[linecolor=black, linewidth=0.08](12.201389,-2.4593055)(12.201389,-4.059305)(12.201389,-4.059305) \psline[linecolor=black, linewidth=0.08](12.201389,-2.4593055)(11.001389,-2.4593055)(11.001389,-2.4593055) \psline[linecolor=black, linewidth=0.08](11.001389,-4.059305)(12.201389,-4.059305)(12.201389,-4.059305) \psline[linecolor=black, linewidth=0.08](12.201389,-4.059305)(12.201389,-5.6593056)(12.201389,-5.6593056) \psline[linecolor=black, linewidth=0.08](12.201389,-5.6593056)(11.001389,-5.6593056)(11.001389,-5.6593056) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](11.001389,-2.4593055) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](11.001389,-4.059305) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](11.001389,-5.6593056) \psdots[linecolor=black, dotsize=0.4](17.001389,-2.4593055) \psdots[linecolor=black, dotsize=0.4](17.001389,-4.059305) \psdots[linecolor=black, dotsize=0.4](17.001389,-5.6593056) \psdots[linecolor=black, 
dotsize=0.4](12.201389,-2.4593055) \psdots[linecolor=black, dotsize=0.4](12.201389,-4.059305) \psdots[linecolor=black, dotsize=0.4](12.201389,-5.6593056) \psdots[linecolor=black, dotsize=0.4](1.4013889,-2.4593055) \psdots[linecolor=black, dotsize=0.4](1.4013889,-4.059305) \psdots[linecolor=black, dotsize=0.4](1.4013889,-5.6593056) \psdots[linecolor=black, dotsize=0.4](6.201389,-2.4593055) \psdots[linecolor=black, dotsize=0.4](6.201389,-4.059305) \psdots[linecolor=black, dotsize=0.4](6.201389,-5.6593056) \psdots[linecolor=black, dotsize=0.4](14.601389,-4.059305) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](3.0013888,-4.059305) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](4.601389,-4.059305) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](3.0013888,-6.4593053) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](4.601389,-6.4593053) \psline[linecolor=black, linewidth=0.08](13.801389,-6.4593053)(15.401389,-6.4593053)(15.401389,-6.4593053) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](13.801389,-6.4593053) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](15.401389,-6.4593053) \psline[linecolor=black, linewidth=0.08](12.201389,-2.4593055)(14.601389,-4.059305)(14.601389,-4.059305) \psline[linecolor=black, linewidth=0.08](12.201389,-4.059305)(14.601389,-4.059305)(14.601389,-4.059305) \psline[linecolor=black, linewidth=0.08](12.201389,-5.6593056)(14.601389,-4.059305)(14.601389,-4.059305) \psline[linecolor=black, linewidth=0.08](17.001389,-2.4593055)(14.601389,-4.059305)(14.601389,-4.059305) \psline[linecolor=black, linewidth=0.08](14.601389,-4.059305)(17.001389,-4.059305)(17.001389,-4.059305) \psline[linecolor=black, linewidth=0.08](14.601389,-4.059305)(17.001389,-5.6593056)(17.001389,-5.6593056) \end{pspicture} } \end{center} \caption{Haj\'{o}s construction of $G_1$ and $G_2$.} \label{fig:hajos-up} \end{figure} \begin{remark} The lower bounds in Theorem 
\ref{thm:Hajos} are tight. Consider Figure \ref{fig:hajos-low}. One can easily check that the set of black vertices in each graph is a strong dominating set of that graph, and equality holds. This idea can be generalized, and therefore there is an infinite family of graphs for which equality holds in the lower bound. The upper bounds in Theorem \ref{thm:Hajos} are also tight. Consider Figure \ref{fig:hajos-up}. By an easy argument, the set of black vertices in each graph is a strong dominating set of that graph, and equality holds in the upper bound. Since this idea can be generalized as well, there is an infinite family of graphs for which equality holds in the upper bound. \end{remark} \section{Vertex-Sum} In this section, we focus on the strong domination number of vertex-sum graphs. Given disjoint graphs $G_1,\ldots,G_k$ with $u_i\in V(G_i)$, $i=1,\ldots,k$, the vertex-sum of $G_1,\ldots,G_k$ at the vertices $u_1,\ldots,u_k$ is the graph $G_1\stackplus{u} G_2 \stackplus{u} \cdots \stackplus{u} G_k$ obtained from $G_1,\ldots,G_k$ by identifying the vertices $u_i$, $i=1,\ldots,k$, as a single vertex $u$. This definition is due to Barioli, Fallat and Hogben~\cite{Barioli2004}. We call $u$ the \textit{central vertex} of the vertex-sum. The vertex-sum of $t$ copies of a graph $G$ at a vertex $u$ is denoted by $G_u^t$, $t \geq 2$. For the sake of simplicity, we may assume that the vertex $u$ belongs to all the $G_i$. Recently, the distinguishing number and the distinguishing threshold of some vertex-sum graphs were studied in \cite{IJST}. The following theorem gives lower and upper bounds for the strong domination number of the vertex-sum of graphs.
\begin{theorem}\label{thm:v-sum} For the vertex-sum of disjoint graphs $G_1,G_2,\ldots,G_k$ with $u_i\in V(G_i)$, $i=1,2,\ldots,k$, we have $$\left( \sum_{i=1}^{k}\gamma_{\rm st}(G_i)-\deg(u_i)\right) +1 \leq \gamma_{\rm st}(G_1\underset{u}{+} G_2 \underset{u}{+} \ldots \underset{u}{+} G_k) \leq \left( \sum_{i=1}^{k}\gamma_{\rm st}(G_i)\right)+1. $$ \end{theorem} \begin{proof} First we establish the upper bound. Suppose that $D_i$ is a $\gamma_{\rm st}$-set of $G_i$, for $i=1,2,\ldots,k$. Then clearly $$D=\bigcup\limits_{i=1}^{k} D_i \cup \{u\}$$ is a strong dominating set of $G_1\underset{u}{+} G_2 \underset{u}{+} \ldots \underset{u}{+} G_k$, and we are done. Now we prove the lower bound. Suppose that $S$ is a $\gamma_{\rm st}$-set of $G_1\underset{u}{+} G_2 \underset{u}{+} \ldots \underset{u}{+} G_k$. Based on $S$, we construct strong dominating sets of $G_i$, for $i=1,2,\ldots,k$. We have two cases: \begin{itemize} \item[(i)] $u\notin S$. Then there exists $u'\in S$ that strongly dominates $u$. Without loss of generality, suppose that $u'\in V(G_1)$. Then one can easily check that $$S_1=S\setminus \left(\bigcup\limits_{i=2}^{k} V(G_i)\right)$$ is a strong dominating set of $G_1$, and for $i=2,3,\ldots,k$, $$S_i=\Big(S\cup\{u_i\}\Big)\setminus \left(\bigcup\limits_{\underset{j\neq i}{j=1}}^{k} V(G_j)\right) $$ is a strong dominating set of $G_i$. So we have $$ \sum_{i=1}^{k}\gamma_{\rm st}(G_i) \leq \gamma_{\rm st}(G_1\underset{u}{+} G_2 \underset{u}{+} \ldots \underset{u}{+} G_k)+k-1,$$ which implies the claimed lower bound, since $\deg(u_i)\geq 1$ for each $i$. \item[(ii)] $u\in S$. If, in each $G_i$ with $i=1,2,\ldots,k$, we have $\deg(u_i)\geq \max \{\deg(v)~|~ v\in N(u_i)\}$, then $$S_i=\Big(S\cup\{u_i\}\Big)\setminus \left(\bigcup\limits_{\underset{j\neq i}{j=1}}^{k} V(G_j)\cup\{u\}\right) $$ is a strong dominating set of $G_i$, for $i=1,2,\ldots,k$.
So we have $$ \sum_{i=1}^{k}\gamma_{\rm st}(G_i) \leq \gamma_{\rm st}(G_1\underset{u}{+} G_2 \underset{u}{+} \ldots \underset{u}{+} G_k)+k-1,$$ which again implies the lower bound. The worst case occurs when, in each $G_i$ with $i=1,2,\ldots,k$, $\deg(u_i) < \max \{\deg(v)~|~ v\in N(u_i)\}$. Then, by considering $$S_i=\Big(S\cup N(u_i)\Big)\setminus \left(\bigcup\limits_{\underset{j\neq i}{j=1}}^{k} V(G_j)\cup\{u\}\right), $$ one can easily check that $S_i$ is a strong dominating set of $G_i$, for $i=1,2,\ldots,k$. So we have $$ \sum_{i=1}^{k}\gamma_{\rm st}(G_i) \leq \gamma_{\rm st}(G_1\underset{u}{+} G_2 \underset{u}{+} \ldots \underset{u}{+} G_k)+\left(\sum_{i=1}^{k}\deg(u_i)\right) -1.$$ \end{itemize} Therefore we have the result. \end{proof} As an immediate consequence of Theorem \ref{thm:v-sum}, we have: \begin{corollary} For the vertex-sum of $t$ copies of a graph $G$ at a vertex $u$, we have $$t\big(\gamma_{\rm st}(G)-\deg(u)\big) +1 \leq \gamma_{\rm st}(G_u^t) \leq t\gamma_{\rm st}(G)+1. $$ \end{corollary} \begin{remark} The bounds in Theorem \ref{thm:v-sum} are tight. For the upper bound, consider $G_i$ as shown in Figure \ref{fig:v-sum-upper}. The set of black vertices is a $\gamma_{\rm st}$-set of $G_i$. Now, if we consider $G_1\underset{u}{+} G_2 \underset{u}{+} \ldots \underset{u}{+} G_k$, then we need all black vertices and $u$ in our strong dominating set. Therefore equality holds. By generalizing this idea, we obtain an infinite family of graphs for which equality holds in the upper bound. For the lower bound, consider $G_i$ as shown in Figure \ref{fig:v-sum-lower}. The set of black vertices, say $S_i$, is a $\gamma_{\rm st}$-set of $G_i$. Now, if we consider $G_1\underset{u}{+} G_2 \underset{u}{+} \ldots \underset{u}{+} G_k$, then clearly $ \left(\bigcup\limits_{i=1}^{k} S_i\cup\{u\}\right)\setminus \left(\bigcup\limits_{i=1}^{k} N(u_i)\right)$ is a $\gamma_{\rm st}$-set, and we are done.
By generalizing this idea, we have an infinite family of graphs such that the equality of the lower bound holds. \end{remark} \begin{figure} \begin{center} \psscalebox{0.6 0.6} { \begin{pspicture}(0,-5.025)(6.8027782,-0.155) \psline[linecolor=black, linewidth=0.08](0.20138885,-3.965)(1.4013889,-2.765)(2.601389,-3.965)(2.601389,-3.965) \psline[linecolor=black, linewidth=0.08](1.4013889,-2.765)(3.401389,-0.765)(5.4013886,-2.765)(5.4013886,-2.765) \psline[linecolor=black, linewidth=0.08](5.4013886,-2.765)(4.201389,-3.965)(4.201389,-3.965) \psline[linecolor=black, linewidth=0.08](5.4013886,-2.765)(6.601389,-3.965)(6.601389,-3.965) \psdots[linecolor=black, dotsize=0.4](1.4013889,-2.765) \psdots[linecolor=black, dotsize=0.4](5.4013886,-2.765) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](0.20138885,-3.965) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](2.601389,-3.965) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](4.201389,-3.965) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](6.601389,-3.965) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](3.401389,-0.765) \rput[bl](3.2013888,-0.405){$u_i$} \rput[bl](3.2013888,-5.025){\Large{$G_i$}} \end{pspicture} } \end{center} \caption{Graph $G_i$, for $i=1,2,\ldots,k$.} \label{fig:v-sum-upper} \end{figure} \begin{figure} \begin{center} \psscalebox{0.6 0.6} { \begin{pspicture}(0,-8.155)(9.202778,-2.665) \rput[bl](4.441389,-2.915){$u_i$} \rput[bl](4.461389,-8.155){\Large{$G_i$}} \psline[linecolor=black, linewidth=0.08](1.4013889,-4.855)(0.20138885,-6.055)(0.20138885,-6.055) \psline[linecolor=black, linewidth=0.08](1.4013889,-4.855)(1.0013889,-6.055)(1.0013889,-6.055) \psline[linecolor=black, linewidth=0.08](1.4013889,-4.855)(1.8013889,-6.055)(1.8013889,-6.055) \psline[linecolor=black, linewidth=0.08](1.4013889,-4.855)(2.601389,-6.055)(2.601389,-6.055) \psline[linecolor=black, 
linewidth=0.08](4.601389,-4.855)(3.401389,-6.055)(3.401389,-6.055) \psline[linecolor=black, linewidth=0.08](4.601389,-4.855)(4.201389,-6.055)(4.201389,-6.055)(4.201389,-6.055) \psline[linecolor=black, linewidth=0.08](4.601389,-4.855)(5.001389,-6.055)(5.001389,-6.055) \psline[linecolor=black, linewidth=0.08](4.601389,-4.855)(5.8013887,-6.055)(5.8013887,-6.055) \psline[linecolor=black, linewidth=0.08](7.8013887,-4.855)(6.601389,-6.055)(6.601389,-6.055) \psline[linecolor=black, linewidth=0.08](7.8013887,-4.855)(7.4013886,-6.055)(7.4013886,-6.055) \psline[linecolor=black, linewidth=0.08](7.8013887,-4.855)(8.201389,-6.055)(8.201389,-6.055) \psline[linecolor=black, linewidth=0.08](7.8013887,-4.855)(9.001389,-6.055)(9.001389,-6.055) \psline[linecolor=black, linewidth=0.08](0.20138885,-6.055)(0.20138885,-7.255)(0.20138885,-7.255) \psline[linecolor=black, linewidth=0.08](1.0013889,-6.055)(1.0013889,-7.255)(1.0013889,-7.255) \psline[linecolor=black, linewidth=0.08](1.8013889,-6.055)(1.8013889,-7.255)(1.8013889,-7.255) \psline[linecolor=black, linewidth=0.08](2.601389,-6.055)(2.601389,-7.255)(2.601389,-7.255) \psline[linecolor=black, linewidth=0.08](3.401389,-6.055)(3.401389,-7.255)(3.401389,-7.255) \psline[linecolor=black, linewidth=0.08](4.201389,-6.055)(4.201389,-7.255)(4.201389,-7.255) \psline[linecolor=black, linewidth=0.08](5.001389,-6.055)(5.001389,-7.255)(5.001389,-7.255) \psline[linecolor=black, linewidth=0.08](5.8013887,-6.055)(5.8013887,-7.255)(5.8013887,-7.255) \psline[linecolor=black, linewidth=0.08](4.601389,-4.855)(4.601389,-3.255)(4.601389,-3.255) \psline[linecolor=black, linewidth=0.08](1.4013889,-4.855)(4.601389,-3.255)(4.601389,-3.255) \psline[linecolor=black, linewidth=0.08](4.601389,-3.255)(7.8013887,-4.855)(7.8013887,-4.855) \psline[linecolor=black, linewidth=0.08](6.601389,-7.255)(6.601389,-6.055)(6.601389,-6.055) \psline[linecolor=black, linewidth=0.08](7.4013886,-6.055)(7.4013886,-7.255) \psline[linecolor=black, 
linewidth=0.08](8.201389,-6.055)(8.201389,-7.255) \psline[linecolor=black, linewidth=0.08](9.001389,-6.055)(9.001389,-7.255)(9.001389,-7.255) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](4.601389,-3.255) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](0.20138885,-7.255) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](1.0013889,-7.255) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](1.8013889,-7.255) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](2.601389,-7.255) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](3.401389,-7.255) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](4.201389,-7.255) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](5.001389,-7.255) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](5.8013887,-7.255) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](6.601389,-7.255) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](7.4013886,-7.255) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](8.201389,-7.255) \psdots[linecolor=black, dotstyle=o, dotsize=0.4, fillcolor=white](9.001389,-7.255) \psdots[linecolor=black, dotsize=0.4](1.4013889,-4.855) \psdots[linecolor=black, dotsize=0.4](4.601389,-4.855) \psdots[linecolor=black, dotsize=0.4](7.8013887,-4.855) \psdots[linecolor=black, dotsize=0.4](0.20138885,-6.055) \psdots[linecolor=black, dotsize=0.4](1.0013889,-6.055) \psdots[linecolor=black, dotsize=0.4](1.8013889,-6.055) \psdots[linecolor=black, dotsize=0.4](2.601389,-6.055) \psdots[linecolor=black, dotsize=0.4](3.401389,-6.055) \psdots[linecolor=black, dotsize=0.4](4.201389,-6.055) \psdots[linecolor=black, dotsize=0.4](5.001389,-6.055) \psdots[linecolor=black, dotsize=0.4](5.8013887,-6.055) \psdots[linecolor=black, dotsize=0.4](6.601389,-6.055) \psdots[linecolor=black, dotsize=0.4](7.4013886,-6.055) \psdots[linecolor=black, 
dotsize=0.4](8.201389,-6.055) \psdots[linecolor=black, dotsize=0.4](9.001389,-6.055) \end{pspicture} } \end{center} \caption{Graph $G_i$, for $i=1,2,\ldots,k$.} \label{fig:v-sum-lower} \end{figure}
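As a sanity check, the bounds of Theorem \ref{thm:v-sum} can be verified by exhaustive search on small graphs. The following sketch (Python; the helper names gamma_st and vertex_sum are ours, not from the paper) computes the strong domination number by brute force and checks the theorem for two copies of the path $P_3$ summed at an endpoint:

```python
from itertools import combinations

def gamma_st(adj):
    """Brute-force strong domination number.

    adj: dict vertex -> set of neighbours. A set D is strong dominating if
    every vertex u outside D has a neighbour v in D with deg(v) >= deg(u).
    """
    V = list(adj)
    deg = {v: len(adj[v]) for v in V}
    for k in range(1, len(V) + 1):
        for D in combinations(V, k):
            S = set(D)
            if all(any(v in S and deg[v] >= deg[u] for v in adj[u])
                   for u in V if u not in S):
                return k
    return len(V)

def vertex_sum(graphs, roots):
    """Identify the root vertex of each summand as a common central vertex 'u'."""
    adj = {'u': set()}
    for i, (g, r) in enumerate(zip(graphs, roots)):
        for v, nbrs in g.items():
            name = lambda x: 'u' if x == r else (i, x)
            adj.setdefault(name(v), set()).update(name(w) for w in nbrs)
    return adj

# Two paths P3 (a-b-c), summed at the endpoint 'a'; the result is P5.
P3 = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
g1, g2 = gamma_st(P3), gamma_st(P3)
gs = gamma_st(vertex_sum([P3, P3], ['a', 'a']))
deg_u = [1, 1]  # deg(u_i) in each summand
assert g1 + g2 - sum(deg_u) + 1 <= gs <= g1 + g2 + 1
print(g1, gs)  # 1 2
```

Here $\gamma_{\rm st}(P_3)=1$, while the vertex-sum (a $P_5$ with $u$ in the middle) has strong domination number $2$, consistent with the bounds $1 \leq 2 \leq 3$.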
\section{Introduction} This paper addresses the within-host dynamics of the malaria parasite {\it Plasmodium falciparum}, whose infection mechanism we briefly review here. Infection starts when a human is bitten by an infected mosquito that releases sporozoites into the bloodstream. The sporozoites quickly enter the liver, where they mature, replicate, and differentiate into merozoites. The merozoites are then released into the bloodstream, where they go on to infect erythrocytes (red blood cells). Merozoites reproduce within infected erythrocytes for a period of about two days. Finally, the infected erythrocyte ruptures and releases new merozoites that repeat the infection cycle. A discussion leading to a mathematical model that considers a single parasite strain can be found in \cite{saul,gravenor} and references therein. In practice, however, there is considerable diversity among the infected erythrocytes, which is reflected in the wide variety of surface proteins (antigens) presented by the infected cells. A mathematical model that includes an arbitrary number of parasite strains was studied in a very elegant paper by Iggidr et al.\ \cite{iggidr}, where a competitive exclusion principle was established. Generically, only one strain survives while the others are driven to extinction. The mathematical models mentioned above do not include any immune response mounted by the human host. Although many details of immune responses to {\it P. falciparum} are presently not well understood, there is evidence that the antigenic variation between different strains of the parasite prompts the immune system to mount both strain-specific and cross-reactive responses \cite{antia,Nature,BMB}. The primary distinction between specific and cross-reactive responses is that they target major (unique to each strain) or minor (shared among strains) epitopes, respectively, on the infected cell's surface.
The goal of this paper is to extend the analysis of the model proposed in \cite{Nature,BMB}, which includes the different immune responses described above. We provide some results concerning the global behavior of this model by \begin{enumerate} \item Showing global asymptotic stability of the system in two extreme cases (no cross immunity and perfect cross immunity). \item Showing the possibility of oscillatory destabilization in the case of partial cross immunity. \item Establishing conditions for both competitive exclusion and persistence. \end{enumerate} Our results indicate that, depending on parameter values, this model can exhibit a wide variety of dynamical behaviors. The full range of possible behaviors and their biological implications is currently not fully understood and remains the objective of future research. The rest of this paper is organized as follows. In Section 2 we recall and slightly generalize the model from \cite{Nature,BMB}, and in Section 3 we comment on the existence or non-existence of positive equilibria. In Section 4 we treat the case of a single parasitic strain and establish global asymptotic stability, even when the growth rate of infected cells is assumed to be logistic, as opposed to linear. Similar results are obtained in Section 5 in two special cases: the case of no cross immunity, and the case of perfect cross immunity. In the case of partial cross immunity, the dynamical picture is not as simple, and this is illustrated in Section 6 by analyzing a particular example. In Section 7 we return to the general model and establish sufficient conditions for competitive exclusion as well as for persistence. These conditions are compared to similar ones for certain associated Lotka-Volterra systems of lower dimension. \section{General modeling assumptions} The model that we study here was originally proposed by Recker et al.\ \cite{Nature} and later analyzed by Recker and Gupta \cite{BMB}.
The model has the following form \begin{eqnarray} \dot y_i & = & y_i(\phi - \alpha z_i - \alpha' w_i), \label{eqY'}\\ \dot z_i & = & \beta y_i - \mu z_i,\label{eqZ'}\\ \dot w_i & = & \beta' \sum_{j=1}^n c_{ij} y_j - \mu' w_i, \label{eqW'} \end{eqnarray} where $i=1,...,n.$ The variables $y_i$, $z_i$, and $w_i$ represent the abundance of the erythrocytes infected by the $i$-th parasite strain, and the magnitudes of the specific and cross-reactive immune responses, respectively. We assume that the immune responses are induced proportionally to the parasitic load at the rates $\beta$ and $\beta'$. The coefficients $\mu$ and $\mu'$ determine the life-spans of the corresponding immune responses. The efficiencies of the two responses are given by $\alpha$ and $\alpha'$. The coefficient $\phi$ represents the maximal growth rate of the parasite. We assume that all kinetic parameters are equal for all strains. Finally, we assume that each strain has a distinct major epitope, but two different strains may share common minor epitopes. In the model, we incorporate this assumption by introducing the non-negative cross-reactivity matrix $C$ such that $c_{ij}>0$ if the strains $i$ and $j$ share a common minor epitope and $c_{ij}=0$ otherwise. In the sequel we will refer to some special cases for which we introduce the following terminology: \begin{enumerate} \item We say that there is no cross immunity when $C=I$. \item We say that there is perfect cross immunity when $C={\bf 1}'{\bf 1}$, where ${\bf 1}=(1\dots 1)\in \R^n$. \item Otherwise we say that there is partial cross immunity.
\end{enumerate} For mathematical convenience, we perform a simple rescaling of the original variables and rewrite system $(\ref{eqY'})-(\ref{eqW'})$ as \begin{eqnarray} \dot y_i & = & y_i(1 - z_i - w_i), \label{eqY}\\ \dot z_i & = & y_i - \mu_1 z_i,\label{eqZ}\\ \dot w_i & = & b \sum_{j=1}^n c_{ij} y_j - \mu_2 w_i, \label{eqW} \end{eqnarray} and define $\gamma_1:=\frac{1}{\mu_1}$ and $\gamma_2:=\frac{b}{\mu_2}.$ In the case of {\it P. falciparum}, there is a natural carrying capacity given by the number of available erythrocytes which can be infected by the parasite. Setting aside the possible effects of erythropoiesis, we can assume that such a carrying capacity is constant and modify the model accordingly, \begin{eqnarray} \dot y_i & = & y_i(1-\frac{y}{K} - z_i - w_i),\\ \dot z_i & = & y_i - \mu_1 z_i,\\ \dot w_i & = & b \sum_{j=1}^n c_{ij} y_j - \mu_2 w_i, \end{eqnarray} where $y:=\sum_{j=1}^{n}y_j$ denotes the total infected population. \section{The positive equilibrium} Using vector notation, we can express the equilibrium conditions of $(\ref{eqY})-(\ref{eqW})$ as follows: $ {\bf z}^\ast =\gamma_1 {\bf y}^\ast,$ and $ {\bf w}^\ast=\gamma_2 C {\bf y}^\ast.$ The positive equilibrium must then satisfy the condition \begin{equation} \gamma_1 {\bf y}^\ast+\gamma_2 C {\bf y}^\ast={\bf 1}. \label{pos}\end{equation} In the case of perfect cross-reactivity, where $c_{ij}=1$ for all $i,j$, there exists a positive solution of the form $$ y_i^*=\bar{y}=\frac{1}{\gamma_1 + n \gamma_2}, \quad i=1,..., n,$$ which corresponds to a positive equilibrium. The positive equilibrium does not always exist.
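Since condition (\ref{pos}) is linear in ${\bf y}^\ast$, it is easy to check numerically. A minimal sketch with numpy (the parameter values $n=3$, $\gamma_1=1$, $\gamma_2=2$ are arbitrary choices for illustration) confirms the closed form in the perfect cross-reactivity case:

```python
import numpy as np

# Numerical check of the equilibrium condition (pos):
# (gamma1*I + gamma2*C) y* = 1, with perfect cross-reactivity C = all-ones.
n, gamma1, gamma2 = 3, 1.0, 2.0   # illustrative values only
C = np.ones((n, n))
y_star = np.linalg.solve(gamma1 * np.eye(n) + gamma2 * C, np.ones(n))

# Closed form from the text: y_i* = 1/(gamma1 + n*gamma2) = 1/7 here
assert np.allclose(y_star, 1.0 / (gamma1 + n * gamma2))
assert (y_star > 0).all()
```

The same solve, with a general $C$, also reveals when (\ref{pos}) has no positive solution: one simply inspects the sign of the components of the computed ${\bf y}^\ast$.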
For instance, letting $n=3$, $\gamma_1=\gamma_2=1$ and \begin{equation} C=\left( \begin{array}{ccc}1 & 1+\epsilon & 0\\ 1+\epsilon & 1 & 1+\epsilon \\ 0 & 1+\epsilon & 1 \end{array}\right) \label{C}\end{equation} the solution of (\ref{pos}) is given by $ (y_1^*,y_2^*,y_3^*)=\frac{1}{2\left(2-(1+\epsilon)^2\right)}\left(1-\epsilon,-2\epsilon,1-\epsilon\right)$, which is non-negative for $\epsilon=0$ and positive for small negative $\epsilon$, while for small positive $\epsilon$ the component $y_2^*$ is negative. \section{Global stability in case $n=1$} In the simplest case $n=1$, the model \begin{eqnarray} \dot y & = & y(1 - z - w), \label{eqY1}\\ \dot z & = & y - \mu_1 z,\label{eqZ1}\\ \dot w & = & b y - \mu_2 w, \label{eqW1} \end{eqnarray} admits a unique positive equilibrium $$ (y^*,z^*,w^*)=\biggl(\frac{1}{\gamma_1+\gamma_2}, \frac{\gamma_1} {\gamma_1+\gamma_2}, \frac{\gamma_2}{\gamma_1+\gamma_2} \biggr)$$ which is globally stable. To see this, we rewrite (\ref{eqY1}--\ref{eqW1}) as \begin{eqnarray*} \dot y & = & y((z^*-z) + (w^*- w)), \\ \dot z & = & (y-y^*) - \mu_1 (z-z^*),\\ \dot w & = & b (y-y^*) - \mu_2 (w-w^*), \end{eqnarray*} and define $$ V= \int_{y^*}^y \frac{s-y^*}{s}\, ds + \int_{z^*}^z (s-z^*)\, ds + \frac{1}{b}\int_{w^*}^w (s-w^*)\, ds.$$ The function $V$ clearly has a unique global minimum at $(y^*,z^*,w^*)$. In addition, $$ \dot V = (y-y^*)\bigl((z^*-z) + (w^*- w)\bigr) +(z-z^*)\bigl( (y-y^*) - \mu_1 (z-z^*) \bigr)+ \frac{1}{b}(w-w^*)\bigl( b (y-y^*) - \mu_2 (w-w^*) \bigr)$$ which simplifies to $$ \dot V = -\mu_1 (z-z^*)^2 -\frac{\mu_2}{b}(w-w^*)^2.$$ Clearly, the equilibrium $(y^*,z^*,w^*)$ is the only invariant set in $\{ \dot V=0\}$. LaSalle's invariance principle then implies global stability of $(y^*,z^*,w^*)$. Assuming a carrying capacity for the infected cells, we obtain a modified model \begin{eqnarray} \dot y & = & y(1-\frac{y}{K} - z - w), \label{eqY2}\\ \dot z & = & y - \mu_1 z,\label{eqZ2}\\ \dot w & = & b y - \mu_2 w.
\label{eqW2} \end{eqnarray} It is easy to see that the modified model (\ref{eqY2}--\ref{eqW2}) also admits a unique positive equilibrium $(y^*,z^*,w^*)$. Using the same function $V$ as before, we observe that $$ \dot V = -\frac{1}{K}(y-y^*)^2-\mu_1(z-z^*)^2 -\frac{\mu_2}{b}(w-w^*)^2.$$ We conclude again that the positive equilibrium is globally asymptotically stable. \section{Global stability in case $n>1$} When there are two or more strains present, they can be antigenically distinct (no cross-reactivity), or antigenically similar (perfect cross-reactivity, see above), or there may be partial cross-reactivity. In this section, we prove global convergence for the first two cases. We also show that adding a carrying capacity does not alter the conclusions. \subsection{Perfect cross-reactivity without carrying capacity} The equations are \begin{eqnarray} \dot y_i & = & y_i(1 - z_i - w_i), \label{eqY6}\\ \dot z_i & = & y_i - \mu_1 z_i,\label{eqZ6}\\ \dot w_i & = & b \sum_{j=1}^n y_j - \mu_2 w_i, \label{eqW6} \end{eqnarray} for $i=1,...,n$ and they admit a unique positive equilibrium. We observe that for all $i,j$ $$\dot w_i -\dot w_j=-\mu_2 (w_i -w_j),$$ hence all pairwise differences $w_i-w_j$ decay exponentially to zero. To make this argument formal, using $w=w_1$ and $w_j=w+u_j$ for $j\neq 1$ we rewrite equations (\ref{eqY6}--\ref{eqW6}) as \begin{eqnarray} \dot y_1&=& y_1(1 - z_1 - w),\;\; \dot y_j = y_j(1 - z_j - (w+u_j)),\;\; j\neq 1, \label{eqY7}\\ \dot z_i & = & y_i - \mu_1 z_i,\label{eqZ7}\\ \dot w & = & b \sum_{j=1}^n y_j - \mu_2 w, \label{eqW7}\\ \dot u_j & = & -\mu_2 u_j,\;\; j\neq 1. \label{eqU7} \end{eqnarray} Clearly, the system (\ref{eqY7}--\ref{eqU7}) is asymptotic to the limiting system \begin{eqnarray} \dot y_i & = & y_i(1 - z_i - w), \label{eqY8}\\ \dot z_i & = & y_i - \mu_1 z_i,\label{eqZ8}\\ \dot w & = & b \sum_{j=1}^n y_j - \mu_2 w. 
\label{eqW8} \end{eqnarray} The Lyapunov function for (\ref{eqY8}--\ref{eqW8}) has the form $$ V= \sum_{i=1}^n \biggl(\int_{y_i^*}^{y_i} \frac{s-y_i^*}{s}\, ds + \int_{z_i^*}^{z_i} (s-z_i^*)\, ds \biggr) + \frac{1}{b}\int_{w^*}^{w} (s-w^*)\, ds .$$ Indeed, after simplifications, we find that $$ \dot V= -\mu_1 \sum_{i=1}^n (z_i-z_i^*)^2 -\frac{\mu_2}{b} (w-w^*)^2,$$ and then global asymptotic stability follows from LaSalle's invariance principle. \subsection{Perfect cross-reactivity with carrying capacity} The equations are \begin{eqnarray} \dot y_i & = & y_i\bigl(1-\frac{1}{K}\sum_{j=1}^n y_j - z_i - w_i \bigr), \label{eqY9}\\ \dot z_i & = & y_i - \mu_1 z_i,\label{eqZ9}\\ \dot w_i & = & b \sum_{j=1}^n y_j - \mu_2 w_i, \label{eqW9} \end{eqnarray} for $i=1,...,n$ and they admit a unique positive equilibrium. Arguing as before, we consider the limiting system \begin{eqnarray} \dot y_i & = & y_i\bigl(1-\frac{1}{K}\sum_{j=1}^n y_j- z_i - w \bigr), \label{eqY10}\\ \dot z_i & = & y_i - \mu_1 z_i,\label{eqZ10}\\ \dot w & = & b \sum_{j=1}^n y_j - \mu_2 w, \label{eqW10} \end{eqnarray} for which the Lyapunov function is $$ V= \sum_{i=1}^n \biggl(\int_{y_i^*}^{y_i} \frac{s-y_i^*}{s}\, ds + \int_{z_i^*}^{z_i} (s-z_i^*)\, ds \biggr) + \frac{1}{b}\int_{w^*}^{w} (s-w^*)\, ds .$$ Indeed, after simplifications, $$ \dot V= -\frac{1}{K} \biggl( \sum_{j=1}^n (y^*_j-y_j) \biggr)^2 -\mu_1 \sum_{i=1}^n (z_i-z_i^*)^2 -\frac{\mu_2}{b} (w-w^*)^2,$$ implying global asymptotic stability of the positive equilibrium. \section{Analysis of a specific case with $n=2$ and partial cross immunity} In this section, we consider the dynamics of the system with $n=2$ and $C$ given by $$ C=\left( \begin{array}{cc}1&1\\2&1\end{array} \right). $$ Notice that the dynamics of this system also arises when restricting the system with $C$ given by (\ref{C}) and $\epsilon=0$ to the invariant set $\{y_1=y_3,z_1=z_3,w_1=w_3\}$.
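The invariance of this set can be checked numerically: integrating the $\epsilon=0$ system from symmetric initial data, the symmetry $y_1=y_3$, $z_1=z_3$, $w_1=w_3$ is preserved along the trajectory. A minimal forward-Euler sketch (step size, kinetic parameters, and initial data are illustrative choices, not from the paper):

```python
import numpy as np

# Cross-reactivity matrix (C) from the text with epsilon = 0
C = np.array([[1., 1., 0.],
              [1., 1., 1.],
              [0., 1., 1.]])
mu1, mu2, b = 0.5, 0.5, 1.0   # illustrative kinetic parameters

def rhs(y, z, w):
    """Rescaled model (eqY)-(eqW) with n = 3."""
    return y * (1 - z - w), y - mu1 * z, b * (C @ y) - mu2 * w

# initial data on the set {y1=y3, z1=z3, w1=w3}
y = np.array([0.2, 0.1, 0.2])
z = np.array([0.1, 0.3, 0.1])
w = np.array([0.2, 0.1, 0.2])
h = 1e-3
for _ in range(20000):   # forward Euler over 20 time units
    dy, dz, dw = rhs(y, z, w)
    y, z, w = y + h * dy, z + h * dz, w + h * dw

# the symmetry is preserved along the discrete trajectory
assert abs(y[0] - y[2]) < 1e-9 and abs(w[0] - w[2]) < 1e-9
```

The vector field is invariant under swapping the indices $1$ and $3$, so symmetric initial data remain symmetric, which is exactly the restriction used above.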
The resulting equations have the following form \begin{eqnarray*} \dot y_i & = & y_i(1-z_i-w_i),\quad i=1,2,\\ \dot z_i & = & y_i-\mu_1 z_i, \quad i=1,2,\\ \dot w_1 & = & b (y_1+y_2) - \mu_2 w_1,\\ \dot w_2 & = & b (2y_1+y_2) - \mu_2 w_2. \end{eqnarray*} We re-introduce the coefficients $\gamma_1:=1/\mu_1$ and $\gamma_2:=b/\mu_2$. The Jacobian of the system is given by $$J =\left(\begin{array}{cccccc} 1-z_1-w_1 & -y_1 & -y_1 & 0 & 0 & 0\cr 1 & -\mu_1 & 0 & 0 & 0 & 0 \cr b & 0 & -\mu_2 & b & 0 & 0\cr 0 & 0 & 0 & 1-z_2-w_2 & -y_2 & -y_2\cr 0 & 0 & 0 & 1 & -\mu_1 & 0 \cr 2b & 0 & 0 & b & 0 & -\mu_2\cr \end{array}\right).$$ This model admits at most four equilibria: \begin{enumerate} \item The zero equilibrium $E_{00}$ always exists and is always unstable since the Jacobian $J(E_{00})$ (not shown) has eigenvalues $\lam_{1,2}=1,\ \lam_{3,4}=-\mu_1,\ \lam_{5,6}=-\mu_2$. \item The semitrivial equilibrium $$E_{10} =\biggl(\frac{1}{\gamma_1+\gamma_2}, \frac{\gamma_1}{\gamma_1+\gamma_2}, \frac{\gamma_2}{\gamma_1+\gamma_2},0,0,\frac{2 \gamma_2}{\gamma_1+\gamma_2}\biggr)$$ always exists. The Jacobian $J(E_{10})$ (not shown) has eigenvalues $\lam_4=\frac{\gamma_1-\gamma_2}{\gamma_1+\gamma_2},\ \lam_5=-\mu_1,\ \lam_6=-\mu_2,$ and $\lam_{1,2,3}$ are eigenvalues of the matrix $$\left(\begin{array}{ccc} 0 & -y_1 & -y_1\cr 1 & -\mu_1 & 0 \cr b & 0 & -\mu_2 \cr \end{array}\right). $$ From the stability analysis in Section 4, we already know that $\Re(\lam_{1,2,3}) \leq 0$. Using the Routh-Hurwitz criterion, it is not difficult to show that in fact $\Re(\lam_{1,2,3})< 0$. Hence, the stability of $E_{10}$ is determined by the sign of $\lam_4$. Specifically, $E_{10}$ is (locally) stable if $\gamma_1< \gamma_2$, and unstable if $\gamma_1>\gamma_2$. \item The semitrivial equilibrium $$E_{01} =\biggl(0,0,\frac{\gamma_2}{\gamma_1+\gamma_2},\frac{1}{\gamma_1+\gamma_2}, \frac{\gamma_1}{\gamma_1+\gamma_2}, \frac{\gamma_2}{\gamma_1+\gamma_2}\biggr)$$ always exists.
The Jacobian $J(E_{01})$ (not shown) has eigenvalues $\lam_1=\frac{\gamma_1}{\gamma_1+\gamma_2},\ \lam_2=-\mu_1,\ \lam_3=-\mu_2,$ and $\lam_{4,5,6}$ are eigenvalues of the submatrix $$\left(\begin{array}{ccc} 0 & -y_2 & -y_2\cr 1 & -\mu_1 & 0 \cr b & 0 & -\mu_2 \cr \end{array}\right). $$ As we argued previously, $\Re(\lam_{4,5,6})< 0$. Since $\lam_1>0$, $E_{01}$ is always unstable. \item The nontrivial equilibrium $E_{11}$ exists if and only if $\gamma_1>\gamma_2$, i.e. precisely when $E_{10}$ is unstable. The $(y_1,y_2)$ coordinates of $E_{11}$ are given by $$ y_1=\frac{\gamma_1}{(\gamma_1+\gamma_2)^2-2\gamma_2^2},\quad y_2=\frac{\gamma_1-\gamma_2}{(\gamma_1+\gamma_2)^2-2\gamma_2^2}.$$ The common denominator is positive iff $\gamma_1>(\sqrt{2}-1)\gamma_2$, and the numerator of $y_2$ is positive iff $\gamma_1>\gamma_2$. The Jacobian at $E_{11}$ is given by \begin{equation} J(E_{11}) =\left(\begin{array}{cccccc} 0 & -y_1 & -y_1 & 0 & 0 & 0\cr 1 & -\mu_1 & 0 & 0 & 0 & 0 \cr b & 0 & -\mu_2 & b & 0 & 0\cr 0 & 0 & 0 & 0 & -y_2 & -y_2\cr 0 & 0 & 0 & 1 & -\mu_1 & 0 \cr 2b & 0 & 0 & b & 0 & -\mu_2\cr \end{array}\right).\label{JacE11}\end{equation} As we showed previously, $$\det J(E_{11}) =y_1 y_2 \mu_1^2 \mu_2^2((\gamma_1+\gamma_2)^2-2\gamma_2^2)>0,$$ thus $J(E_{11})$ cannot have zero eigenvalues. It turns out that in the special case $\mu_1=\mu_2=\mu$, all six eigenvalues of $J(E_{11})$ have strictly negative real parts: If $\mu_1=\mu_2=\mu$, the characteristic polynomial of $J(E_{11})$ has the following form: $$ p(\lam)=(\mu+\lam)^2 \biggl(\xi^2 + \xi(1+b)(y_1+y_2)+ y_1 y_2 (1+2b-b^2) \biggr),$$ where $\xi=\lam(\mu+\lam)$. Clearly, two roots are given by $\lam_{1,2}=-\mu$. The remaining four roots can be obtained by solving the quadratic equation in $\xi$. We have $$ y_1=\frac{\mu}{1+2b-b^2}, \quad y_2=\frac{\mu(1-b)}{1+2b-b^2},$$ hence positivity of $y_2$ forces $b\in [0,1)$.
Substituting the values of $y_1$ and $y_2$, we have $$\xi^2 + \xi\frac{\mu(1+b)(2-b)}{1+2b-b^2}+\frac{\mu^2(1-b)}{1+2b-b^2}=0.$$ The discriminant of this equation is $${\cal D}=\mu^2\frac{(1+b)^2(2-b)^2-4(1-b)(1+2b-b^2)}{(1+2b-b^2)^2}.$$ Simplifying the numerator, we find that $${\cal D}=\mu^2\frac{b^2(3-b)^2}{(1+2b-b^2)^2} \geq 0.$$ Hence the roots are $$\xi_1=-\mu, \quad \xi_2=-\mu\frac{1-b}{1+2b-b^2}.$$ The corresponding values of $\lam$ are solutions of $$ \lam_{3,4}^2+\mu \lam_{3,4}+\mu=0, \quad \lam_{5,6}^2+\mu \lam_{5,6}+\mu\frac{1-b}{1+2b-b^2}=0.$$ The positivity of coefficients in the above quadratics implies that $\Re(\lam_{3,4,5,6})<0$. \end{enumerate} \subsection{Destabilizing the nontrivial equilibrium} In this section, we show that there exists a nonempty set of parameter combinations such that $E_{11}$ is unstable. To do so, we fix the value $b=1$ and let $\mu_1=\varepsilon$, $\mu_2=c \varepsilon$ where $c>0$ and $\varepsilon$ is small. Recalculating the equilibrium values, we find $$ y_1=\frac{c^2 \varepsilon}{c^2+2 c-1}, \quad y_2=\frac{c (c-1) \varepsilon}{c^2+2 c-1}.$$ The positive equilibrium exists for all $\varepsilon>0$ if and only if $c>1$. The Jacobian of interest has the form (\ref{JacE11}) with $y_1,y_2,\mu_1,\mu_2$ given above.
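The spectrum of (\ref{JacE11}) can also be examined numerically. The following sketch (a numerical illustration only, with parameter values chosen for convenience; it is not part of the proof) confirms the stability just established for $\mu_1=\mu_2=\mu$, $b\in[0,1)$, as well as the loss of stability for $b=1$, $\mu_1=\varepsilon$, $\mu_2=c\varepsilon$ with $c$ large and $\varepsilon$ small:

```python
import numpy as np

def jac_E11(y1, y2, mu1, mu2, b):
    """Jacobian (JacE11) at the nontrivial equilibrium E11,
    in the variable order (y1, z1, w1, y2, z2, w2)."""
    return np.array([
        [0.0,     -y1,  -y1,  0.0,  0.0,  0.0],
        [1.0,     -mu1, 0.0,  0.0,  0.0,  0.0],
        [b,       0.0,  -mu2, b,    0.0,  0.0],
        [0.0,     0.0,  0.0,  0.0,  -y2,  -y2],
        [0.0,     0.0,  0.0,  1.0,  -mu1, 0.0],
        [2.0 * b, 0.0,  0.0,  b,    0.0,  -mu2],
    ])

# Case mu1 = mu2 = mu, b in [0,1): spectrum in the open left half-plane
mu, b = 0.7, 0.5
d = 1.0 + 2.0 * b - b ** 2
ev_stable = np.linalg.eigvals(
    jac_E11(mu / d, mu * (1.0 - b) / d, mu, mu, b))
assert ev_stable.real.max() < 0.0

# Case b = 1, mu1 = eps, mu2 = c*eps, c large, eps small:
# one complex pair crosses into the right half-plane
c, eps = 10.0, 1e-4
d = c ** 2 + 2.0 * c - 1.0
ev_unstable = np.linalg.eigvals(
    jac_E11(c ** 2 * eps / d, c * (c - 1.0) * eps / d, eps, c * eps, 1.0))
assert np.sum(ev_unstable.real > 0.0) == 2
assert np.sum(ev_unstable.real < 0.0) == 4
```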
\begin{comment} $$J(\varepsilon)=\begin{pmatrix} 0 & -\frac{c^2 \varepsilon}{c^2+2 c-1} & -\frac{c^2 \varepsilon}{c^2+2 c-1} & 0 & 0 & 0\cr 1 & -\varepsilon & 0 & 0 & 0 & 0 \cr 1 & 0 & -c \varepsilon & 1 & 0 & 0\cr 0 & 0 & 0 & 0 & -\frac{c (c-1) \varepsilon}{c^2+2 c-1} & -\frac{c (c-1) \varepsilon}{c^2+2 c-1}\cr 0 & 0 & 0 & 1 & -\varepsilon & 0 \cr 2 & 0 & 0 & 1 & 0 & -c \varepsilon\cr \end{pmatrix}.$$ \end{comment} The characteristic polynomial of $J(\varepsilon)$ has the form \begin{eqnarray*} p(z,\varepsilon) & = & \varepsilon^4 a_0(c)+\varepsilon^3a_1(c)(1+O(\varepsilon))z+\varepsilon^2 a_2(c)(1+O(\varepsilon))z^2\\ & & + \varepsilon^2 a_3(c)(1+O(\varepsilon))z^3+\varepsilon a_4(c)(1+O(\varepsilon))z^4+\varepsilon a_5(c)z^5+z^6,\end{eqnarray*} where \begin{eqnarray*} a_0(c) & = & \frac{c^3(c-1)}{c^2+2 c-1},\\ a_1(c) & = & \frac{4 c^4(c-1)}{(c^2+2 c-1)^2},\\ a_2(c) & = & \frac{2 c^3(c-1)}{(c^2+2 c-1)^2},\\ a_3(c) & = & \frac{3 c(2c-1)(c^3+3c^2+c-1)}{(c^2+2 c-1)^2},\\ a_4(c) & = & \frac{2 c(2 c-1)}{c^2+2 c-1},\\ a_5(c) & = & 2(c+1). \end{eqnarray*} Since $p(z,0)=z^6$, $J(0)$ has a zero eigenvalue of multiplicity 6. Now we expand the roots of $p$ in powers of $\varepsilon$. First, we evaluate $p(k \varepsilon^{\alpha},\varepsilon)$ and find that the leading terms are \begin{eqnarray*} p(k \varepsilon^{\alpha},\varepsilon) & = & \varepsilon^4 a_0(c)+k \varepsilon^{3+\alpha} a_1(c)(1+O(\varepsilon))+k^2 \varepsilon^{2+ 2 \alpha} a_2(c)(1+O(\varepsilon))\\ & & + k^3 \varepsilon^{2+3\alpha} a_3(c)(1+O(\varepsilon))+k^4 \varepsilon^{1+4\alpha} a_4(c)(1+O(\varepsilon))+k^5 \varepsilon^{1+5 \alpha} a_5(c)+k^6 \varepsilon^{6 \alpha}.\end{eqnarray*} Now we construct the Newton diagram, that is, $$ n(\alpha)=\min(4,3+\alpha,2+2\alpha,2+3\alpha,1+4\alpha,1+5\alpha,6\alpha),$$ which has two positive vertices at $(1/2,3)$ and $(1,4)$. Hence, the leading power of $z$ is either $\alpha=1/2$ or $\alpha=1$.
\begin{itemize} \item Case $\alpha=1$ corresponds to $z=k \varepsilon + o(\varepsilon)$. To determine the value of $k$, we set the leading terms of $p(k \varepsilon,\varepsilon)$ equal to zero and obtain the equation $a_0(c)+k a_1(c)+k^2 a_2(c)=0.$ Simplifying this equation, we find that it is equivalent to $$ \frac{c^3(c-1)}{(c^2+2 c-1)^2}(2 k^2+4 c k +(c^2+2c-1))=0.$$ Since $c>1$, the roots are $$ k_{1,2}=-c \pm \frac{c-1}{\sqrt{2}}$$ which are both strictly negative. \item Case $\alpha=1/2$ corresponds to $z=r \varepsilon^{1/2} + l \varepsilon + o(\varepsilon)$. Expanding $p(r \varepsilon^{1/2} + l \varepsilon ,\varepsilon),$ we find up to the two lowest orders of $\varepsilon$ that \begin{eqnarray*} & & p(r \varepsilon^{1/2} + l \varepsilon ,\varepsilon) = \varepsilon^3r^2\Bigl(a_2(c)+a_4(c) r^2+r^4\Bigr)\\ & & + r \varepsilon^{7/2}\Bigl(a_1(c)+ 2 l a_2(c)+ r^2 a_3(c)+4 r^2 l a_4(c)+r^4 a_5(c) +6 r^4 l \Bigr). \end{eqnarray*} Setting the $\varepsilon^3$ term equal to zero, we find that either $r=0$ (in which case we are back to the previous step) or that $r$ satisfies the biquadratic equation $$ a_2(c)+a_4(c) r^2+r^4=0,$$ which is equivalent to $$ 2c^3(c-1)+2c(2c-1)(c^2+2c-1)r^2+(c^2+2c-1)^2r^4=0.$$ The discriminant of this equation $${\cal D}=4 c^2(c^2+2c-1)^2 \left( (2c-1)^2-2c(c-1)\right)=4 c^2(c^2+2c-1)^2 \left(c^2+(c-1)^2\right)$$ is clearly positive, and both roots $$ r^2= \frac{c}{c^2+2c-1}\bigl( -(2c-1)\pm\sqrt{(2c-1)^2-2c(c-1)} \bigr)$$ are strictly negative. Hence, we have two pairs of pure imaginary values for $r$: \begin{eqnarray*} r_{1,2} & = & \pm i \sqrt{\frac{c\bigl((2c-1)+\sqrt{(2c-1)^2-2c(c-1)}\bigr)}{c^2+2c-1}}, \\ r_{3,4} & = & \pm i \sqrt{\frac{c\bigl((2c-1)-\sqrt{(2c-1)^2-2c(c-1)}\bigr)}{c^2+2c-1}}.
\end{eqnarray*} Substituting each pair into the $\varepsilon^{7/2}$ term and setting it equal to zero, we obtain the corresponding values of $l$: \begin{eqnarray*} l_1 & = & -\frac{a_1(c)+ r_{1,2}^2 a_3(c)+r_{1,2}^4 a_5(c)}{2 a_2(c)+ 4 r_{1,2}^2 a_4(c)+6 r_{1,2}^4}, \\ l_2 & = & -\frac{a_1(c)+ r_{3,4}^2 a_3(c)+r_{3,4}^4 a_5(c)}{2 a_2(c)+ 4 r_{3,4}^2 a_4(c)+6 r_{3,4}^4}. \end{eqnarray*} \end{itemize} At this point, we have established the existence of six distinct branches of eigenvalues for small $\varepsilon>0$: \begin{eqnarray*} z_1 & = & k_1 \varepsilon+o(\varepsilon),\\ z_2 & = & k_2\varepsilon +o(\varepsilon),\\ z_{3,4} & = & l_1 \varepsilon + r_{1,2} \varepsilon^{1/2}+o(\varepsilon),\\ z_{5,6} & = & l_2 \varepsilon + r_{3,4} \varepsilon^{1/2}+o(\varepsilon). \end{eqnarray*} The first two eigenvalues are real and negative for small $\varepsilon>0$, so it remains to show that either $l_1$ or $l_2$ may be positive for some values of $c$. The sign of the expression $ 2a_2+4 a_4 r^2+6 r^4$ can be determined as follows. Consider the cubic polynomial $f(x)=2x(a_2+a_4x+x^2)$, which has three simple zeros: $r^2_{1,2}<r^2_{3,4}<0$ and the origin. Since $f(x)>0$ for $x>0$, we have that $f'(r^2_{1,2}), f'(0)>0$, and $f'(r^2_{3,4})<0$. Thus \begin{eqnarray*} f'(r^2_{1,2}) & = & 2 a_2(c)+ 4 r_{1,2}^2 a_4(c)+6 r_{1,2}^4>0,\\ f'(r^2_{3,4}) & = & 2 a_2(c)+ 4 r_{3,4}^2 a_4(c)+6 r_{3,4}^4<0. \end{eqnarray*} Since the denominators of $l_1$ and $l_2$ have opposite signs, it suffices to show that the numerators have the same sign. That would imply that one of $l_i$ is positive. We claim that the numerators of $l_1$ and $l_2$ are strictly positive for all sufficiently large $c$. Indeed, let us investigate the asymptotic behavior of the roots of the quadratics $Q_1(x)=a_1(c)+ a_3(c)x+ a_5(c)x^2$ and $Q_2(x)=a_2(c)+ a_4(c)x+ x^2$.
\begin{itemize} \item Equation $Q_1=0$ is equivalent (after dividing through by $2c$) to $$ \frac{2 c^3(c-1)}{(c^2+2 c-1)^2}+ \frac{3 (c-1/2)(c^3+3c^2+c-1)}{(c^2+2 c-1)^2} x +(1+1/c)x^2=0.$$ As $c\to\infty$, the roots of this equation converge to the roots of $2+3x+x^2=0$, that is, $x=-2$ or $x=-1$. This follows from the continuity of roots. \item Similarly, as $c\to\infty$, the roots of $Q_2=0$ converge to the roots of $2+4x+x^2=0$, that is, $x=-2\pm \sqrt{2}$. An equivalent statement is that $$ \lim_{c\to\infty} r^2_{1,2}=-2-\sqrt{2}, \quad \lim_{c\to\infty} r^2_{3,4}= -2+\sqrt{2}.$$ \end{itemize} Since $-2-\sqrt{2}<-2<-1<-2+\sqrt{2}$ (i.e. the roots of $Q_1$ are located between the roots of $Q_2$), we conclude that the numerators of $l_1$ and $l_2$ are strictly positive for all sufficiently large values of $c$. Since the denominator of $l_1$ (respectively $l_2$) is positive (respectively negative), we conclude that $l_1<0$ and $l_2>0$ for all sufficiently large $c$. (Numerically, this happens as long as $c>2.46$.) We summarize the results of this section in the following Lemma. {\bf Lemma 1.} {\it Let $b=1,\ \mu_1=\varepsilon, \ \mu_2=c\varepsilon$, and $$C=\left(\begin{array}{cc} 1 & 1\cr 2 & 1\cr\end{array}\right),$$ then there exist $\varepsilon^*>0$ and $c^*>1$ such that for all $0<\varepsilon<\varepsilon^*$ and $c>c^*$, the Jacobian at the positive equilibrium $E_{11}$ has two real negative eigenvalues, and two pairs of complex eigenvalues with positive and negative real parts respectively. In particular, the equilibrium $E_{11}$ is locally unstable with two-dimensional unstable manifold.
} \section{Results on boundedness of solutions, competitive exclusion and persistence} \subsection{Boundedness of solutions} Without loss of generality, consider the scaled model \begin{eqnarray} \label{mal1}\dot y_i & = & y_i(1- z_i - w_i), \\ \label{mal2}\dot z_i & = & y_i - \mu_1 z_i,\\ \label{mal3}\dot w_i & = & b \sum_{j=1}^n c_{ij} y_j - \mu_2 w_i, \end{eqnarray} and suppose that $b,\mu_1,\mu_2>0$ and $c_{ii}>0$ for all $i$. {\bf Theorem 1} {\it All nonnegative solutions of $(\ref{mal1})-(\ref{mal3})$ are ultimately uniformly bounded.} {\bf Proof}. Without loss of generality, we may consider only positive solutions, that is $ y_i(t),z_i(t),w_i(t)>0$. First, it is clear that since $\dot y_i \leq y_i$, we have $y_i(t)\leq y_i(0) e^{t}$. Hence, all solutions are defined for $t\geq 0$. Next, we introduce the quantities $\alpha_i =y_i/(z_i+w_i)>0$. It follows that $$ \dot \alpha_i=\frac{y_i(1- z_i - w_i)(z_i+w_i)-y_i(y_i+b \sum_{j=1}^n c_{ij} y_j - \mu_1 z_i- \mu_2 w_i)}{(z_i+w_i)^2}.$$ Clearly, this implies that $$ \dot \alpha_i \leq \alpha_i(1-(1+b c_{ii})\alpha_i+ \frac{\mu_1 z_i+\mu_2 w_i}{z_i+w_i}).$$ Using the fact that $$\frac{\mu_1 z_i+\mu_2 w_i}{z_i+w_i} \leq \max(\mu_1,\mu_2), \quad z_i,w_i>0, $$ we obtain the inequality $$ \dot \alpha_i \leq \alpha_i(1+ \max(\mu_1,\mu_2)-(1+b c_{ii})\alpha_i).$$ Hence, $\dot \alpha_i<0$ as long as $\alpha_i>\alpha_i^*:=\frac{1+\max(\mu_1,\mu_2)}{1+bc_{ii}}.$ Consequently, $\alpha_i(t) \leq \hat \alpha_i:=\max(\alpha_i(0),\alpha_i^*)$ for all $t\geq 0$. Equivalently, we have that $y_i(t) \leq \hat \alpha_i(z_i(t)+w_i(t))$, which implies that $$ \dot y_i \leq y_i(1-\frac{y_i}{\hat \alpha_i}), \quad t\geq 0.$$ Therefore, $y_i(t)$ is bounded for all $t\geq 0$. 
Finally, we have that $$ \limsup_{t\to\infty} \alpha_i(t) \leq \alpha^*_i, \quad \limsup_{t\to\infty} y_i(t) \leq \alpha^*_i, \quad \limsup_{t\to\infty} z_i(t) \leq \frac{\alpha^*_i}{\mu_1}, \quad \limsup_{t\to\infty} w_i(t) \leq \frac{b \sum_{j} c_{ij}\alpha^*_j}{\mu_2}. \quad \di $$ \subsection{Competitive exclusion} Let $\gamma_1=1/\mu_1$ and $\gamma_2=b/\mu_2$, and define $A=\gamma_1 I + \gamma_2 C$. \medskip{\bf Theorem 2.} {\it Suppose that the following condition holds: \begin{equation} \exists r\in \{1,...,n\}: \forall {\bf x} \geq {\bf 0},\ A{\bf x}\geq {\bf 1} \Rightarrow (A{\bf x})_r >1,\label{excl} \end{equation} then for any positive solution $ y_i(t),z_i(t),w_i(t)>0$ of $(\ref{mal1})-(\ref{mal3})$, we have $\lim_{t\to\infty} y_r(t)=0.$} In (\ref{excl}), the vector inequalities correspond to the order induced by the standard cone $R^n_+$. \no{\bf Proof.} Let $\langle f(t)\rangle=\frac{1}{t}\int_0^t f(s)\, ds$ denote the time-average of the function $f(t)$. Then for any positive solution, we have that \begin{eqnarray*} \langle \dot y_i/y_i \rangle & = & 1- \langle z_i \rangle - \langle w_i \rangle, \\ \langle \dot z_i \rangle & = & \langle y_i \rangle - \mu_1 \langle z_i \rangle,\\ \langle \dot w_i \rangle & = & b \sum_{j=1}^n c_{ij} \langle y_j \rangle - \mu_2 \langle w_i \rangle. \end{eqnarray*} Boundedness of solutions implies that \begin{eqnarray*} \langle \dot z_i(t) \rangle & =& \frac{z_i(t)-z_i(0)}{t} \to 0, \quad t\to\infty,\\ \langle \dot w_i(t) \rangle & =& \frac{w_i(t)-w_i(0)}{t} \to 0, \quad t\to\infty,\\ \limsup_{t\to\infty} \langle \dot y_i/y_i \rangle & =& \limsup_{t\to\infty} 1- \langle z_i(t) \rangle-\langle w_i(t) \rangle\leq 0. \end{eqnarray*} By boundedness of solutions, there exists a convex compact set $K \subset R^n_+$ such that ${\bf y}(t) \in K$ for all $t\geq 0$. The convexity of $K$ implies that $\langle{\bf y}(t)\rangle \in K$ for all $t\geq 0$.
Let $K'$ be the compact set $$ K'=\{{\bf x}\in K:\ A{\bf x} \geq {\bf 1} \}.$$ By $(\ref{excl})$, compactness of $K'$ and continuity, there exists $\varepsilon>0$ such that $(A{\bf x})_r >1+\varepsilon$ for all ${\bf x}\in K'$. Also by continuity, there exists $\delta>0$ such that $(A{\bf x})_r >1+\varepsilon/2$ for all ${\bf x}$ in the $\delta$-neighborhood of $K'$. Now we analyze the averages more carefully. Since $$ |\langle z_i \rangle - \gamma_1 \langle y_i \rangle |\to 0, \quad |\langle w_i \rangle - \gamma_2 \sum_{j=1}^n c_{ij} \langle y_j \rangle | \to 0,$$ we have that $$ \limsup_{t\to\infty} 1- \langle z_i(t) \rangle-\langle w_i(t) \rangle = \limsup_{t\to\infty} 1-\gamma_1\langle y_i(t) \rangle - \gamma_2 \sum_{j=1}^n c_{ij} \langle y_j(t) \rangle \leq 0, $$ that is, $$ \liminf_{t\to\infty} (A\langle {\bf y}(t) \rangle)_i \geq 1$$ for all $i=1,...,n$. It follows that there exists $T>0$ such that ${\rm dist}(\langle {\bf y}(t) \rangle,K')<\delta$ for all $t>T$. Therefore, $(A\langle {\bf y}(t) \rangle)_r >1+\varepsilon/2$ for all $t>T$. This in turn implies that there exists $T'>0$ such that $$\langle \dot y_r(t)/y_r(t) \rangle=1- \langle z_r(t) \rangle - \langle w_r(t) \rangle < -\varepsilon/4, \quad t >T',$$ or equivalently, $$ y_r(t) <y_r(0) \exp( -\varepsilon t/4), \quad t >T'.$$ This clearly implies that $\lim_{t\to\infty} y_r(t)=0.$ \hfill $\di$ \subsection{Partial persistence} Let \begin{eqnarray} \label{sys1} {\dot x}&=&f(x,y)\\ \label{sys2} {\dot y}&=&g(x,y) \end{eqnarray} be a forward complete system on $X\times Y:=\R^n_+\times\R^m_+$. We say that $(\ref{sys1})-(\ref{sys2})$ is {\it x-partially (strongly uniformly) persistent} if there is some $\delta>0$ so that for all $(x,y)\in \textrm{int}(\R^n_+)\times \textrm{int}(\R^m_+)$ there holds that $$ \liminf_{t\rightarrow \infty}x_i(t)\geq \delta,\;\; i=1,\dots, n. 
$$ Inspired by the persistence result in \cite{hofbauer}, we have {\bf Theorem 3} {\it Assume that $\partial X \times Y$ is forward invariant for $(\ref{sys1})-(\ref{sys2})$, and suppose $K\subset X\times Y$ is a compact absorbing set (thus every forward solution of $(\ref{sys1})-(\ref{sys2})$ eventually enters and remains in $K$). Let $P:X\times Y\rightarrow \R$ be continuously differentiable, with $P=0$ on $\partial X\times Y$ and $P>0$ elsewhere. Assume that there is a continuous function $\psi:X\times Y\rightarrow \R$ so that \begin{equation}\label{log-derivative} \frac{{\dot P}}{P}=\psi \textrm{ on }X\times Y\setminus (\partial X\times Y). \end{equation} If for all $(x,y)\in \partial X \times Y$, there is some $T>0$ such that: \begin{equation}\label{increase} \langle \psi(x(T),y(T))\rangle >0, \end{equation} then $(\ref{sys1})-(\ref{sys2})$ is $x$-partially persistent.} \\ The proof can be found in \cite{hiv-mutations} and is omitted here. \begin{remark}\label{een} Note that a result similar to Theorem 12.2.2 in \cite{hofbauer}, but now for system $(\ref{sys1})-(\ref{sys2})$, remains valid. It states that Theorem 3 remains true if condition $(\ref{increase})$ holds just for $(x,y)$ which are $\omega$ limit points of orbits in $\partial X\times Y$. The proof is exactly the same as in \cite{hofbauer}. \end{remark} We will apply Theorem 3 to prove a persistence result for the malaria model $(\ref{mal1})-(\ref{mal3})$, which we re-write in a more compact form first: \begin{eqnarray} \label{mala1}{\dot X}&=&\textrm{diag}(X)[{\bf 1}-(I_n \;\; I_n)Y],\label{comp1}\\ \label{mala2}{\dot Y}&=&-\textrm{diag}({\bf \mu})Y+BX ,\label{comp2} \end{eqnarray} where $\begin{pmatrix} X\\ Y\end{pmatrix}\in \R^n_+\times \R^{2n}_+$, ${\bf 1}=(1\dots 1)'\in \R^n$, ${\bf \mu}=(\mu_1 \dots \mu_1 \;\;\mu_2 \dots \mu_2)'\in \R^{2n}$ and $$ B=\begin{pmatrix}I\\ b\,C \end{pmatrix}.
$$ Note that $\partial \R^n_+\times \R^{2n}_+$ is forward invariant, and that there is a compact absorbing set $K$ in $\R^n_+\times \R^{2n}_+$ by Theorem 1. Let $$ A=(I_n\;\; I_n)\textrm{diag}^{-1}({\bf \mu})B. $$ We will show the following: {\bf Theorem 4} {\it If there is some $p\in\textrm{int}(\R^n_+)$ so that \begin{equation}\label{cond} p'[{\bf 1} -A{\bar X}]>0, \end{equation} for all ${\bar X}$ for which $\begin{pmatrix}{\bar X} \\ \textrm{diag}^{-1}({\bf \mu})B{\bar X}\end{pmatrix}$ is an equilibrium of $(\ref{mala1})-(\ref{mala2})$ in $\partial \R^n_+ \times \R^{2n}_+$, then system $(\ref{mala1})-(\ref{mala2})$ is persistent. } {\bf Proof}. The proof proceeds in two steps. We will first show that system $(\ref{mala1})-(\ref{mala2})$ is $X$-partially persistent using Theorem $3$ and Remark \ref{een}. Then we will show that the system $(\ref{mala1})-(\ref{mala2})$ is persistent. {\it Step 1}. Let us first establish $X$-partial persistence for $(\ref{mala1})-(\ref{mala2})$. Define the function $P:\R^n_+\times \R^{2n}_+ \rightarrow [0,\infty)$: $$ P(X,Y)=\Pi_{i=1}^nX_i^{p_i}, $$ which is continuously differentiable (after multiplying the vector $p$ by a sufficiently large positive scalar, if necessary), is $0$ on $\partial \R^n_+\times \R^{2n}_+$ and positive elsewhere. Note that $(\ref{log-derivative})$ holds on $\R^n_+\times \R^{2n}_+\setminus (\partial \R^n_+\times \R^{2n}_+)$ with $$ \psi(X,Y)=p'[{\bf 1}-(I_n\;\; I_n)Y]. $$ We claim that for all $Z=(X, Y)\in \partial \R^n_+\times \R^{2n}_+$, there is some $T>0$ such that: $$ \langle \psi(Z(T))\rangle >0, $$ from which $X$-partial persistence will follow using Theorem 3. We will do this by induction on $r$, the number of non-zero components of $X$. If $r=0$, then $X(t)=0$ for all $t\geq 0$, hence $Y(t)\rightarrow 0$ as $t\rightarrow +\infty$, so that $\omega(Z)=\{0\}$.
But since $0$ is an equilibrium point of $(\ref{mala1})-(\ref{mala2})$, $(\ref{cond})$ holds with ${\bar X}=0$, and therefore our claim follows from Remark \ref{een}. Assume that the claim has been established for $r=1,\dots, m-1$ but that $X$ has $m$ non-zero components (of course, $m<n$). Denote the indices of these components by $I$, a proper subset of $\{1,\dots,n\}$. There are two cases to consider: {\it Case 1}. The solution $Z(t)$ converges to the boundary of the set $D=\{(X\;\; Y)\in \R^n_+\times \R^{2n}_+\; |\;\; X_i\neq 0\textrm{ for all } i\in I\}$. Then $\omega (Z)$ is contained in part of the boundary of $\R^n_+\times \R^{2n}_+$ where at most $m-1$ components of $X$ are non-zero. The conclusion of our claim then follows from Remark \ref{een} and the induction hypothesis. {\it Case 2}. The solution $Z(t)$ does not converge to the boundary of $D$. Then there is some $\epsilon>0$ and an increasing sequence $t_k\rightarrow \infty$ so that $X_i(t_k)>\epsilon$ for all $k$ and all $i\in I$. For $i\notin I$ we have that $X_i(t)=0$ for all $t\geq 0$ and thus in particular for all $t=t_k$. Consider the (bounded) sequences of averages $\langle X(t_k)\rangle$ and $\langle Y(t_k) \rangle$, which we may assume (by passing to a subsequence if necessary) converge to limits ${\tilde X}$ and ${\tilde Y}$ with the property that ${\tilde X}_i>0$ if $i\in I$ and ${\tilde X}_i=0$ otherwise. Integrating $(\ref{mala2})$ between $0$ and $t_k$, dividing by $t_k$ and letting $t_k\rightarrow \infty$ yields: \begin{equation}\label{previous} 0=-\textrm{diag}({\bf \mu}){\tilde Y}+B{\tilde X}. \end{equation} Consider now the dynamics of the components $X_i$ with $i\in I$ as described by $(\ref{mala1})$. In particular, dividing by $X_i$, integrating between $0$ and $t_k$, dividing by $t_k$ and letting $t_k\rightarrow \infty$, and using $(\ref{previous})$ yields: $$ 0=1-(A{\tilde X})_i,\;\; i\in I.
$$ Since ${\tilde X}_i=0$ for all $i\notin I$ we see that $\begin{pmatrix}{\tilde X}\\\textrm{diag}^{-1}(\mu)B{\tilde X}\end{pmatrix}$ is an equilibrium of $(\ref{mala1})-(\ref{mala2})$. Finally notice that as $t_k\rightarrow \infty$, we have that: $$ \langle \psi(Z(t_k))\rangle \rightarrow p'[{\bf 1}-A{\tilde X}], $$ which is positive by $(\ref{cond})$. This establishes our claim. {\it Step 2}. In Step 1 we have shown that $(\ref{mala1})-(\ref{mala2})$ is $X$-partially persistent, so that there is some $\delta>0$ such that for all solutions starting in $\textrm{int}(\R^n_+)\times\textrm{int}(\R^{2n}_+)$ $$ \liminf_{t\rightarrow \infty}X(t)\geq \delta {\bf 1}, $$ where the above vector inequality should be interpreted componentwise. Then $(\ref{mala2})$ implies that for all large $t$, we have that $$ {\dot Y}\geq -\textrm{diag}(\mu)Y+\frac{\delta}{2}B{\bf 1}. $$ This implies that: $$ \liminf_{t\rightarrow \infty}Y(t)\geq \frac{\delta}{2}\textrm{diag}^{-1}(\mu)B{\bf 1}, $$ where the vector on the right-hand side has positive components, which establishes persistence of $(\ref{mala1})-(\ref{mala2})$. \hfill $\di$ \subsection{Discussion} It is interesting to compare our competitive exclusion result (Theorem 2) and our persistence result (Theorem 4) obtained in the previous subsections to corresponding results for the following lower dimensional Lotka-Volterra system: \begin{equation}\label{VL} {\dot X}=\textrm{diag}(X)[{\bf 1}-AX]. \end{equation} For this system we can easily prove the following competitive exclusion result, using arguments similar to those in the proof of Theorem 2. {\bf Lemma 3} {\it Suppose that $(\ref{excl})$ holds for system $(\ref{VL})$. Then for any solution $x(t)$ of $(\ref{VL})$ in $\textrm{int}(\R^n_+)$, there holds that $x_r(t)\rightarrow 0$ as $t\rightarrow \infty$.} For system $(\ref{VL})$, there is the following persistence result \cite{hofbauer}.
{\bf Lemma 4} {\it If there is some $p\in \textrm{int}(\R^n_+)$ such that $(\ref{cond})$ holds for all ${\bar X}$ which are equilibria of $(\ref{VL})$ in $\partial \R^n_+$, then system $(\ref{VL})$ is persistent.} In other words, our conditions under which system $(\ref{comp1})-(\ref{comp2})$ exhibits competitive exclusion (see Theorem $2$), respectively persistence (see Theorem $4$), are the same as those for the reduced order system $(\ref{VL})$. Finally, we can interpret conditions $(\ref{excl})$ and $(\ref{cond})$ geometrically, and will see that they do not exhaust all possibilities. This implies that there are examples of system $(\ref{comp1})-(\ref{comp2})$ which do not fit our conditions for either competitive exclusion or persistence. In $\R^n$, define the closed convex set $$ D=\{x\in \R^n\,|\, {\bf 1}-Ax\leq 0\}. $$ The boundary of $D$ is given by those points $x$ in $D$ for which $1-(Ax)_i=0$ for some $i$. In this case we say that constraint $i$ is active for $x$. Condition $(\ref{excl})$ says that there must be a constraint $r$ which is never active in $\R^n_+$. Although a geometric interpretation of condition $(\ref{cond})$ is not immediately clear, it has been shown in \cite{hofbauer} that $(\ref{cond})$ is equivalent to the following condition which does have a clear geometric meaning. \begin{equation}\label{equiv} C\cap D_+ =\emptyset, \end{equation} where $C$ is the convex hull of the set of equilibria of $(\ref{VL})$ in $\partial \R^n_+$ and $D_+=D\cap \R^n_+$. To see that the exclusion condition $(\ref{excl})$ and $(\ref{cond})$ (or the equivalent $(\ref{equiv})$) may fail simultaneously, consider a system $(\ref{VL})$ with $n=2$ with nullclines given in Figure~\ref{bis}. Clearly, neither condition $(\ref{excl})$ nor condition $(\ref{equiv})$ holds. It is well-known that this is an example of a bistable Lotka-Volterra system.
The equilibrium in $\textrm{int}(\R^2_+)$ is a saddle and every solution in $\textrm{int}(\R^2_+)$ not on the stable manifold of the interior equilibrium converges to either $E_1$ or $E_2$. \begin{figure} \centering \includegraphics[width=6cm]{bistab} \caption{An example of system $(\ref{VL})$ with $n=2$. Nullclines are the dashed lines. The hatched region represents $D_+$. The crosses represent the equilibria, the triangle $E_0-E_1-E_2$ represents $C$, hence $C \cap D_+\neq\emptyset$.}\label{bis} \end{figure}
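The bistable picture just described is easy to reproduce numerically. In the sketch below, the interaction matrix $A$ is an illustrative choice (not taken from the text) with interspecific competition exceeding intraspecific competition; a crude forward Euler integration of $(\ref{VL})$ shows convergence to $E_1$ or $E_2$ depending on the initial condition, and the Jacobian at the interior equilibrium is checked to be a saddle:

```python
import numpy as np

# Illustrative bistable interaction matrix for xdot = diag(x)(1 - A x)
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

def euler(x0, dt=0.01, T=200.0):
    """Integrate xdot = diag(x)(1 - A x) by forward Euler (a crude sketch)."""
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        x += dt * x * (1.0 - A @ x)
    return x

# Interior equilibrium (1/3, 1/3): Jacobian there is -diag(x*) A = -(1/3) A,
# whose eigenvalues -1 and 1/3 have opposite signs, i.e. a saddle.
ev = np.linalg.eigvals(-A / 3.0)
assert ev.real.min() < 0.0 < ev.real.max()

# Solutions off the stable manifold converge to E1 = (1,0) or E2 = (0,1)
assert np.allclose(euler([0.9, 0.1]), [1.0, 0.0], atol=1e-3)
assert np.allclose(euler([0.1, 0.9]), [0.0, 1.0], atol=1e-3)
```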
\section{Introduction} This is the first in a series of at least two papers \cite{mpss} in which we (resp.\ some of us) analyze the asymptotic structure, and a certain initial value problem, for vacuum solutions of Einstein's equations \begin{equation} R_{\mu\nu}\,=\,\Lambda g_{\mu\nu} \label{LambdaVac} \end{equation} on a 4-dimensional spacetime $({\cal M}, g)$, where $g$ is smooth and $\Lambda$ is a (``cosmological'') constant. We focus on the case $\Lambda > 0$ but compare occasionally with $\Lambda = 0$. Spacetime indices are Greek, while coordinates in $1+3$ splits are denoted by $\{x^{\alpha}\} = \{t, x^i \}$ (rather than $\{x^0, x^i \}$), with corresponding tensorial indices. Our conventions for the signature, the curvature tensor $R_{\mu\nu\sigma}{}^{\kappa}$, the Weyl tensor $C_{\mu\nu\sigma}{}^{\kappa}$, the Ricci tensor $R_{\mu\sigma}$ and the scalar curvature $R$ follow e.g.\ \cite{Wald}. The Levi-Civita connection of $g$ is denoted by $\nabla$. The setting of our work is an asymptotic structure \emph{\`a la Penrose} \cite{p2, F_lambda}. By that we mean that an appropriate conformal rescaling of $({\cal M}, g)$ \begin{equation} g \mapsto \widetilde g = \Theta^2 g\;, \quad {\cal M} \overset{\phi}{\hookrightarrow} \widetilde{{\cal M}\enspace}\hspace{-0.5em} \;, \quad \Theta|_{\phi({\cal M})}>0\;, \end{equation} leads to an \emph{unphysical spacetime} $(\widetilde{{\cal M}\enspace}\hspace{-0.5em} ,\widetilde g)$ which admits a representation of null infinity $$ \scri=\{ \Theta=0\,, \; \mathrm{d}\Theta \ne 0\}\cap \partial\phi({\cal M})$$ through which the unphysical metric $\widetilde g$ and the conformal factor $\Theta$ can be smoothly extended. $\scri$ is a smooth hypersurface which consists of two (not necessarily connected) subsets: future and past null infinity, distinguished by the absence of endpoints of past or future causal curves contained in $({\cal M},g)$, respectively.
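For orientation, we recall the standard example of de Sitter spacetime, where this structure is completely explicit (a textbook computation, included only for illustration). With $H=\sqrt{\Lambda/3}$, the de Sitter metric reads $$ g = -\mathrm{d}t^2 + H^{-2}\cosh^2(Ht)\, \mathrm{d}\Omega_3^2\;, $$ and introducing conformal time $\tau$ via $\tan\tau=\sinh(Ht)$, $\tau\in(-\pi/2,\pi/2)$, one finds $$ g = \frac{1}{H^2\cos^2\tau}\left(-\mathrm{d}\tau^2 + \mathrm{d}\Omega_3^2\right)\;. $$ Hence $\Theta = H\cos\tau$ yields the Einstein static metric $\widetilde g = \Theta^2 g = -\mathrm{d}\tau^2 + \mathrm{d}\Omega_3^2$, with $\Theta=0$ and $\mathrm{d}\Theta = -H\sin\tau\,\mathrm{d}\tau\neq 0$ at $\tau=\pm\pi/2$, so that each of $\scri^{\pm}$ is diffeomorphic to $\mathbb{S}^3$.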
In this paper we will normally denote by $\scri^-$ and $\scri^+$ chosen connected components of past and future null infinity, respectively. Clearly, all initial value results in this paper starting from $\scri^-$ have obvious ``final value counterparts'' obtained via replacing $\scri^-$ by $\scri^+$, ``future'' by ``past'', etc. We will implicitly identify ${\cal M}$ with its image $\phi({\cal M})\subset \widetilde{{\cal M}\enspace}$, so that we can write $\widetilde g = \Theta^2 g$. Indices of physical and unphysical fields will be raised and lowered with $g$ and $\widetilde g$, respectively. In this setting, Friedrich \cite{F_lambda, F2} has shown that, in terms of suitable variables, the field equations (\ref{LambdaVac}) become a regular, symmetric hyperbolic system on $(\widetilde{{\cal M}\enspace}\hspace{-0.5em} ,\widetilde g)$. We recall these ``metric conformal field equations'' (MCFE) in Sect.~\ref{Mars-Simon_conf}. An important unknown in the MCFE is the rescaled Weyl tensor \begin{equation} \label{dten} \widetilde d_{\alpha\beta\gamma}{}^{\delta} := \Theta^{-1}C_{\alpha\beta\gamma}{}^{\delta}\;. \end{equation} Key properties of $C_{\alpha\beta\gamma\delta}$ and $\widetilde d_{\alpha\beta\gamma\delta}$ are the following: \begin{enumerate} \item[I.] $C_{\alpha\beta\gamma}{}^{\delta}$ vanishes on $\scri$, whence $\widetilde d_{\alpha\beta\gamma}{}^{\delta}$ extends regularly to $\scri$. \item[II.] $C_{\alpha\beta\gamma}{}^{\delta}$ satisfies a regular, linear, homogeneous symmetric hyperbolic system on $({\cal M}, g)$. \item[III.] $\widetilde d_{\alpha\beta\gamma}{}^{\delta}$ satisfies a regular, linear, homogeneous symmetric hyperbolic system on $(\widetilde{{\cal M}\enspace}\hspace{-0.5em} ,\widetilde g)$.
\end{enumerate} These properties, together with stability of solutions of symmetric hyperbolic systems, are the key ingredients in uniqueness and stability results of asymptotically simple spacetimes $(\widetilde{{\cal M}}, \widetilde g)$ as defined in Def.~9.1 of \cite{F4}; the latter definition includes the requirements that $(\widetilde{{\cal M}}, \widetilde g)$ has a compact Cauchy hypersurface and every maximally extended null geodesic has a past endpoint on $\scri^-$ and a future endpoint on $\scri^+$. We give here first a uniqueness result for de Sitter spacetime and then a sketchy version of the stability result (Thm.~9.8 of \cite{F4}) which applies in particular to de Sitter. \begin{theorem} ~ \begin{description} \item[Uniqueness of de Sitter.] Let smooth data for the MCFE be given on a $\scri^-$ which is topologically $\mathbb{S}^3$ and such that $\widetilde d_{\alpha\beta\gamma}{}^{\delta}|_{\scri^-}$ vanishes identically. Then the evolving spacetime $(\widetilde{{\cal M}}, \widetilde g)$ is isometric to de Sitter. \item[Stability of asymptotically simple solutions.] Given an asymptotically simple spacetime $(\widetilde{{\cal M}}, \widetilde g)$, any data for the MCFE on $\scri^-$ which are close to the data for $(\widetilde{{\cal M}}, \widetilde g)$ (in terms of suitable Sobolev norms) evolve to an asymptotically simple spacetime. \end{description} \label{deSitterthm} \end{theorem} A motivation for the present work is to generalize these uniqueness and stability results to more general solutions of ($\ref{LambdaVac}$). Using again properties I.-III. above, it is straightforward to generalize the above results on asymptotically simple solutions to corresponding ``semiglobal'' results for any concrete family of solutions, where ``semiglobal'' means the domain of dependence of $\scri^-$.
On the other hand, and needless to say, any fully \emph{global} results for solutions which are not asymptotically simple but contain horizons and singularities involve ``cosmic censorship'' issues and will be very complicated. The main targets of the present work are Kottler (Schwarzschild-de Sitter) and Kerr-de Sitter (KdS) spacetimes for which the topology of each connected component of $\scri$ is $\mathbb{R} \times \mathbb{S}^2$. Our main achievement is a \emph{semiglobal} uniqueness result, namely Thm. \ref{first_main_thm2}, for a class of solutions which includes Kerr-de Sitter. What makes our result highly non-trivial is its particular formulation which we \emph{expect} to be useful for the fully global problem, for reasons given below. In this uniqueness result, and from now onwards, we assume that $({\cal M}, g)$ admits a non-trivial Killing vector field (KVF) $X$, \begin{equation} \label{Kill} ({\mycal L}_X g)_{\mu\nu} \,\equiv\, 2\nabla_{(\mu}X_{\nu)} \,=\, 0 \;. \end{equation} Since $X^{\mu}$ is a KVF, $F_{\mu\nu}:= \nabla_{\mu}X_{\nu}$ is a two-form: $F_{(\mu\nu)}\,=\,0$. The main purpose of this assumption is to achieve a simplification and to permit the use of a special technique. However, as an aside we note that the existence of the isometry \emph{might} change the character of the stability problem substantially. To see this on a heuristic basis, consider data for the MCFE on $\scri^-$ which are at the same time Killing initial data, and which are \emph{close to} Kerr-de Sitter in a suitable sense. Now consider the time-evolution of such data, and assume that the spacetime can be extended beyond its (``cosmological'') Cauchy horizon (as is the case for Schwarzschild-de Sitter and Kerr-de Sitter). In this extension, the isometry should become timelike, and now another conjecture, namely uniqueness of stationary black holes, should lead to Kerr-de Sitter in the region between the event and the cosmological horizon.
Extending backwards to the domain of dependence of $\scri^-$ suggests that the ``near Kerr-de Sitter'' data will actually be Kerr-de Sitter in the above setting. Accordingly, the existence of the isometry, together with reasonable global assumptions, can turn a stability problem into a uniqueness problem. This ``effect'' is of course familiar from uniqueness results for stationary, asymptotically flat solutions. While obtaining the global results sketched above is far beyond our present scope, it motivates our local analysis, in particular the use of the so-called Mars-Simon tensor (MST) \cite{mars,mars2,simon} in Thm. \ref{first_main_thm2}. This tensor is defined as follows. \begin{equation} \mathcal{S}_{\mu\nu\sigma\rho}\,:=\, \mathcal{C}_{\mu\nu\sigma\rho} + Q\, \mathcal{U}_{\mu\nu\sigma\rho} \;, \label{dfn_mars-simon} \end{equation} in terms of the quantities \begin{eqnarray} \mathcal{C}_{\mu\nu\sigma\rho} &:=& C_{\mu\nu\sigma\rho} +i C^{\star}_{\mu\nu\sigma\rho} \;, \label{Weyldual} \\ \mathcal{U}_{\mu\nu\sigma\rho} &:=& - \mathcal{F}_{\mu\nu}\mathcal{F}_{\sigma\rho} + \frac{1}{3}\mathcal{F}^2\mathcal{I}_{\mu\nu\sigma\rho} \;, \\ \mathcal{I}_{\mu\nu\sigma\rho} &:=& \frac{1}{4} (g_{\mu\sigma}g_{\nu\rho} -g_{\mu\rho}g_{\nu\sigma} + i\mbox{$\eta$}_{\mu\nu\sigma\rho} ) \;, \\ \mathcal{F}_{\mu\nu} &:=& F_{\mu\nu} +i F^{\star}_{\mu\nu} \;, \\ \mathcal{F}^2 &:=& \mathcal{F}_{\mu\nu} \mathcal{F}^{\mu\nu} \;. \label{Fsq} \end{eqnarray} In these expressions $\mbox{$\eta$}_{\mu\nu\sigma\rho}$ is the volume form of $g$, $\star$ denotes the corresponding Hodge dual, and $Q$ is a function. $\mathcal{F}_{\mu\nu}$ and ${\cal C}_{\alpha\beta\gamma\delta}$ are self-dual, i.e.\ they satisfy $\mathcal{F}^{\star}{}_{\mu\nu} = - i \mathcal{F}_{\mu\nu}$ and ${\cal C}_{~\alpha\beta\gamma\delta}^{\star} = -i {\cal C}_{\alpha\beta\gamma\delta}$.
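The self-duality of $\mathcal{F}_{\mu\nu}$ is an elementary consequence of the Lorentzian identity $(F^{\star})^{\star}_{\mu\nu} = -F_{\mu\nu}$; we record the one-line verification for the convenience of the reader,
\begin{equation*}
\mathcal{F}^{\star}{}_{\mu\nu} \,=\, F^{\star}_{\mu\nu} + i\, (F^{\star})^{\star}_{\mu\nu} \,=\, F^{\star}_{\mu\nu} - i\, F_{\mu\nu} \,=\, -i\, \big( F_{\mu\nu} + i F^{\star}_{\mu\nu}\big) \,=\, -i\,\mathcal{F}_{\mu\nu} \;.
\end{equation*}
The same computation applies to ${\cal C}_{\alpha\beta\gamma\delta}$, with the Hodge dual acting on either pair of indices by the symmetries of the Weyl tensor.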
The symmetric double two-form $\mathcal{I}_{\mu\nu\sigma\rho}$ plays a natural role as a metric in the space of self-dual two-forms, in the sense that ${\mathcal{I}}_{\mu\nu\sigma\rho} {\mathcal W}^{\sigma\rho} = {\mathcal W}_{\mu\nu}$ for any self-dual two-form ${\mathcal W}_{\mu\nu}$. In connection with this definition and its applications, there now arise naturally two \emph{a priori} independent problems: \begin{enumerate} \item Classify the solutions of (\ref{LambdaVac}) for which there \emph{exists} a $Q$ such that the MST \eq{dfn_mars-simon} vanishes. \item Prescribe the function $Q$ such that properties I.-III. above (or a subset thereof) hold for the MST. \end{enumerate} Problem 1 has been settled in \cite{mars,mars2} for the case $\Lambda = 0$, while the extension to $\Lambda \neq 0$ was accomplished in \cite{mars_senovilla}. The classes of solutions characterized in this way include Kerr and Kerr-de Sitter, respectively, and these solutions can in fact be singled out by supplementing the condition $\mathcal{S}_{\mu\nu\sigma\rho} = 0$ with suitable ``covariant'' conditions. As to problem 2 for $\Lambda = 0$, one sets \begin{equation} \label{Q} Q = 6 \sigma^{-1} \end{equation} in terms of the ``Ernst potential'' $\sigma$, defined up to an additive complex constant (called ``$\sigma$-constant'' henceforth) by \begin{equation} \partial_{\beta} \sigma = 2 X^{\alpha} {\cal F}_{\alpha\beta} \;. \end{equation} The corresponding MST then in fact satisfies a linear, homogeneous, symmetric hyperbolic system, irrespective of how the $\sigma$-constant has been chosen \cite{ik}. In the asymptotically flat setting the MST vanishes at infinity (which again holds for any choice of the $\sigma$-constant provided that the ADM mass is non-zero); in particular, it vanishes for all Kerr solutions. The $\sigma$-constant is fixed uniquely in a natural way by requiring the Ernst potential to vanish at infinity.
We remark that this symmetric hyperbolic system, or rather the wave equation which can be derived from it, has been used in uniqueness proofs for stationary, asymptotically flat black holes \cite{aik1,ik}. In analogy with \eq{dten} we now define \begin{equation} \label{T} \widetilde {\cal T}_{\alpha\beta\gamma}{}^{\delta} := \Theta^{-1}{\cal S}_{\alpha\beta\gamma}{}^{\delta}. \end{equation} For $\Lambda > 0$, key properties of these tensors can be summarized as follows (I.-III. are shown in the present work while IV. is a reformulation of a result of \cite{mars_senovilla}; II.\ and IV.\ in fact hold for any sign of the cosmological constant): \begin{enumerate} \item[I.] There exists a function $Q_0$ such that the corresponding MST ${\cal S}_{\alpha\beta\gamma\delta}^{(0)}$ vanishes on $\scri$, whence $\widetilde {\cal T}_{\alpha\beta\gamma\delta}^{(0)}$ extends regularly to $\scri$. \item[II.] There exists a function $Q^{(ev)}$ such that the corresponding MST ${\cal S}_{\alpha\beta\gamma\delta}^{(ev)}$ satisfies a linear, homogeneous symmetric hyperbolic system on $({\cal M}, g)$. \item[III.] $\widetilde{\cal T}_{\alpha\beta\gamma\delta}^{(ev)}$ satisfies a linear, homogeneous symmetric hyperbolic system on $(\widetilde{{\cal M}\enspace}\hspace{-0.5em} ,\widetilde g)$ which is of ``Fuchsian type'' at $\scri$. \item[IV.] When ${\cal S}_{\alpha\beta\gamma\delta}$ is required to vanish identically for some $Q$, then $Q = Q_0 = Q^{(ev)}$. \end{enumerate} Conditions I.-III. stated above for the MST should be compared with the corresponding conditions stated earlier for the Weyl tensor. Unfortunately, or maybe for a deeper reason, there appears to be no universal definition of $Q$ anymore which satisfies I.-III. simultaneously. We proceed by explaining these findings in some detail, and by describing their arrangement in the following sections. The function $Q_0$ is introduced in \eq{definition_Q0}, and property I. is shown in Proposition \ref{prop_reg_S}.
Next, Theorem \ref{thm_nec_cond} gives necessary and sufficient conditions on the data in order for $\widetilde {\cal T}_{\alpha\beta\gamma\delta}^{(0)}$ to vanish on $\scri$. These conditions agree with conditions (i) and (ii) in Theorem \ref{first_main_thm2} quoted below. On the other hand, in \eq{ev_dfn_Q}-\eq{defJ} we define a \emph{class of functions} $Q^{(ev)}$ for which we show in Section~\ref{sect_deriv_ev_MST} that the corresponding MST ${\cal S}_{\alpha\beta\gamma\delta}^{(ev)}$ satisfies a linear, homogeneous symmetric hyperbolic system, which gives property II (and from which one readily derives a system of wave equations). For the rescaled tensor $\widetilde {\cal T}_{\alpha\beta\gamma\delta}^{(ev)}$ we then obtain equations of the same form on $\widetilde{{\cal M}\enspace}$ (cf.\ Lemmas~\ref{wave_eqn_MST} and \ref{lemma_evolution}). The appropriate definition of $Q^{\mathrm{(ev)}}$ involves a ``$\sigma$-(integration)-constant'' (called ``$a$'' in \eqref{sigma_i_prelim}), and in analogy with the case $\Lambda = 0$ mentioned before there is again a natural way (namely \eqref{afix}) of fixing the constant from the asymptotic conditions. However, in contrast to the case $\Lambda = 0$, the resulting ${\cal S}_{\alpha\beta\gamma\delta}^{(ev)}$ does not vanish automatically on $\scri^-$, whence $\widetilde{\cal T}_{\alpha\beta\gamma\delta}^{(ev)}$ is not necessarily regular there. In Definition~\ref{KdS_like} we call solutions for which $\widetilde{\cal T}_{\alpha\beta\gamma\delta}^{(ev)}$ (with the optimal $\sigma$-constant) can be regularized on $\scri^-$ (and agrees with $ \widetilde{\mathcal{T}}^{(0)}_{\mu\nu\sigma}{}^{\rho}$ on $\scri^-$) ``asymptotically Kerr-de Sitter-like''. This class can be characterized in terms of the data as follows (this is a shortened version of Thm.~\ref{prop_Qs}): \begin{theorem} \label{short} Consider a $\Lambda>0$-vacuum spacetime which admits a smooth $\scri^-$ and a KVF $X$.
Denote by $Y$ the CKVF induced, in the conformally rescaled spacetime, by $X$ on $\scri^-$. The condition \begin{equation*} \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\mu\nu\sigma}{}^{\rho}|_{\scri^-} \,=\, \widetilde{\mathcal{T}}^{(0)}_{\mu\nu\sigma}{}^{\rho}|_{\scri^-} \end{equation*} holds if and only if $Y^j$ is a common eigenvector of $\widehat C_{ij}$ and $D_{ij}$, where $\widehat C_{ij}$ is the Cotton-York tensor (\ref{cotton-york}) and $D_{ij} = d_{titj}|_{\scri^-}$. \end{theorem} This now suggests considering a Cauchy problem for the MCFE on $\scri^-$, starting from asymptotically Kerr-de Sitter-like data. However, in contrast to the evolution equation for the rescaled Weyl tensor $\widetilde d_{\alpha\beta\gamma\delta}$, the coefficients in the evolution equation for $\widetilde{\cal T}_{\alpha\beta\gamma\delta}^{(ev)}$ are not regular at $\scri$, and not necessarily regular even outside some neighborhood of $\scri$. (Non-regularities may already occur in the evolution equations for ${\cal S}_{\alpha\beta\gamma\delta}^{(ev)}$.) Near $\scri$, we are now dealing with a linear homogeneous \emph{Fuchsian} symmetric hyperbolic system (Lemma \ref{lemma_evolution}). Adapting results available in the literature, we prove in Lemma \ref{UniquenessLemma} a local uniqueness theorem for regular solutions of a class of Fuchsian systems which includes the present one. We then apply this result to ``trivial'' data satisfying $\widetilde{\cal T}_{\alpha\beta\gamma\delta}^{(ev)}|_{\scri^-} = 0$, which we call ``Kerr-de Sitter-like''. Our preliminary uniqueness result, Lemma \ref{UniquenessT}, now yields local-in-time uniqueness of these solutions, and implies that ${\cal S}_{\alpha\beta\gamma\delta}^{(\mathrm{ev})}$ vanishes near $\scri^-$. However, this conclusion does not immediately extend to the whole domain of dependence of $\scri^-$, since the evolution equation is manifestly regular only in some neighborhood of (and excluding) $\scri^-$.
Nevertheless, the required result does follow from the classification results of \cite{mars_senovilla}, so ${\cal S}_{\alpha\beta\gamma\delta}^{(\mathrm{ev})} \equiv 0$ indeed holds on the domain of dependence of $\scri^-$. Altogether this yields the following classification result for Kerr-de Sitter-like spacetimes in terms of data on $\scri^-$, which may be considered as a counterpart of the first part of Thm. \ref{deSitterthm} above: \begin{theorem} \label{first_main_thm2} Let $(\Sigma,h)$ be a Riemannian 3-manifold which admits a CKVF~$Y$ with $|Y|^2>0$, complemented by a TT tensor $D_{ij}$ to form asymptotic Cauchy data. Then there exists a maximal globally hyperbolic $\Lambda>0$-vacuum spacetime $({\cal M},g)$ which admits a KVF $X^i$ with $X^i|_{\scri^-} = Y^i$ and such that the associated MST vanishes, and $\Sigma$ represents past null infinity $\scri^-$ with $g_{ij}|_{\scri^-}=h_{ij}$ and $d_{titj}|_{\scri^-}= D_{ij}$, if and only if \begin{enumerate} \item[(i)] $ \widehat C_{ij} = \sqrt{\frac{\Lambda}{3}}C_{\mathrm{mag}}|Y|^{-5}(Y_iY_j -\frac{1}{3}|Y|^2 h_{ij})$ for some constant $C_{\mathrm{mag}}$, and \item[(ii)] $D_{ij} =C_{\mathrm{el}} |Y|^{-5} (Y_iY_j -\frac{1}{3}|Y|^2 h_{ij})$ for some constant $C_{\mathrm{el}}$. \end{enumerate} \end{theorem} Spacetimes with vanishing MST have very different properties depending on the values taken by the free constants in the family. In particular, the maximal domain of dependence of $\scri$ may or may not be extendible across a Killing horizon, and these different behaviours occur even within the class of spacetimes with vanishing $C_{\mathrm{mag}}$. In this latter case, $\scri$ is locally conformally flat and the data consist simply of a choice of a conformal Killing vector $Y$ in (a domain of) $\mathbb{S}^3$ and a choice of a constant $C_{\mathrm{el}}$. An interesting (and probably difficult) question is whether it is possible to identify directly at $\scri$ the behaviour of its domain of dependence in the large.
In particular, it would be interesting to see if the properties of $Y$ at its zeroes can be related to the existence of a Killing horizon across which the domain of dependence of $\scri$ can be extended. We remark that we do not obtain a counterpart to the stability result (part 2 of Thm. \ref{deSitterthm}), since (to our knowledge) there is no general result guaranteeing existence and stability of solutions to Fuchsian systems. Recall also the remark after \eqref{Kill} in connection with the significance of the stability problem in the presence of isometries. In the final Section~\ref{sec_CKVFs} we analyze the relations between the vanishing of the rescaled MST (\ref{T}) (or the corresponding condition on the data) and the existence of other conformal Killing vector fields on $\scri$, and we discuss the extension of the latter to Killing vector fields on ${{\cal M}}$. This result, given in Proposition \ref{prop_2CKVF}, will be relevant for the classification of spacetimes with vanishing ${\cal S}_{\alpha\beta\gamma\delta}^{(\mathrm{ev})}$ and conformally flat $\scri$ presented in the subsequent paper \cite{mpss} already mentioned above. \section{The Mars-Simon tensor (MST) at null infinity} \label{section_mars_simon} \subsection{The conformally rescaled spacetime} \label{Mars-Simon_conf} In this section we collect key equations which are gauge-independent, and which hold irrespective of the sign (or vanishing) of $\Lambda$.
In the asymptotic setting described in the introduction the pair $(\widetilde g, \Theta)$ satisfies the \emph{metric conformal field equations (MCFE)} on $\widetilde{{\cal M}\enspace}$ \cite{F3} (we use tildes for all geometric objects associated to $\widetilde g$), \begin{eqnarray} && \widetilde \nabla_{\rho} \widetilde d_{\mu\nu\sigma}{}^{\rho} =0\;, \label{conf1} \\ && \widetilde\nabla_{\mu} \widetilde L_{\nu\sigma} - \widetilde\nabla_{\nu}\widetilde L_{\mu\sigma} = \widetilde\nabla_{\rho}\Theta \, \widetilde d_{\nu\mu\sigma}{}^{\rho}\;, \label{conf2} \\ && \widetilde\nabla_{\mu}\widetilde\nabla_{\nu}\Theta = -\Theta \widetilde L_{\mu\nu} +\widetilde s \widetilde g_{\mu\nu}\;, \label{conf3} \\ && \widetilde\nabla_{\mu}\widetilde s = -\widetilde L_{\mu\nu}\widetilde \nabla^{\nu}\Theta\;, \label{conf4} \\ && 2\Theta \widetilde s - \widetilde \nabla_{\mu}\Theta\widetilde \nabla^{\mu}\Theta = \Lambda /3 \label{conf5} \;, \\ && \widetilde R_{\mu\nu\sigma}{}^{\kappa}[\widetilde g] = \Theta \widetilde d_{\mu\nu\sigma}{}^{\kappa} + 2\big(\widetilde g_{\sigma[\mu}\widetilde L_{\nu]}{}^{\kappa} - \delta_{[\mu}{}^{\kappa}\widetilde L_{\nu]\sigma} \big) \label{conf6} \;, \end{eqnarray} where the Riemann tensor $\widetilde R_{\mu\nu\sigma}{}^{\kappa}[\widetilde g]$ is to be regarded as a differential operator acting on $\widetilde g$, while $\widetilde L_{\mu\nu}:= \frac{1}{2}\widetilde R_{\mu\nu} - \frac{1}{12}\widetilde R \widetilde g_{\mu\nu}$ and $\widetilde d_{\mu\nu\sigma}{}^{\rho} := \Theta^{-1} \widetilde C_{\mu\nu\sigma}{}^{\rho}$ are, respectively, the Schouten and rescaled Weyl tensor of $\widetilde g$, and \begin{equation} \widetilde s \,:=\,\frac{1}{4}\Box_{\widetilde g} \Theta + \frac{1}{24} \widetilde R\Theta \;. \end{equation} Let us now express the MST in terms of unphysical fields on $(\widetilde{{\cal M}\enspace}\hspace{-0.5em} ,\widetilde g)$.
We first of all note that the push-forward $\widetilde X^{\mu}$ of the KVF $X^{\mu}$, which we identify with $X^{\mu}$, satisfies the \emph{unphysical Killing equations} \cite{ttpKIDs} \begin{equation} \widetilde F_{(\mu\nu)}\,=\, 0 \quad \text{and} \quad \widetilde F\,=\, 4\widetilde X^{\mu}\widetilde\nabla_{\mu}\log\Theta \label{unphys_Killing} \;, \end{equation} where \begin{equation} \widetilde F_{\mu\nu} \,:=\, (\widetilde\nabla_{\mu}\widetilde X_{\nu})_{\mathrm{tf}}\;, \quad \widetilde F:=\widetilde\nabla_{\mu}\widetilde X^{\mu} \;, \end{equation} and the symbol $(.)_{\mathrm{tf}}$ denotes the trace-free part of the corresponding $(0,2)$-tensor. $\widetilde F_{\mu\nu}$ is hence a two-form and we can define $\widetilde{\mathcal{C}}_{\mu\nu\sigma\rho}$, $\widetilde{\mathcal{U}}_{\mu\nu\sigma\rho}$, $\widetilde{\mathcal{I}}_{\mu\nu\sigma\rho}$, $\widetilde{\mathcal{F}}_{\mu\nu}$ and $\widetilde{ \mathcal{F}}^2$ using definitions analogous to (\ref{Weyldual})-(\ref{Fsq}), with all geometric objects referred to $(\widetilde{{\cal M}\enspace}\hspace{-0.5em} ,\widetilde g)$. The following relations are found via a simple computation.
\begin{eqnarray} \mathcal{C}_{\mu\nu\sigma}{}^{\rho} &=&\widetilde{ \mathcal{C}}_{\mu\nu\sigma}{}^{\rho}\;, \label{formula_C} \\ \mathcal{I}_{\mu\nu\sigma}{}^{\rho} &=& \Theta^{-2}\widetilde{\mathcal{I}}_{\mu\nu\sigma}{}^{\rho} \;, \\ F_{\mu\nu} &=& \nabla_{\mu}(\Theta^{-2}\widetilde X_{\nu}) \nonumber \\ & = & \Theta^{-2}(\widetilde F_{\mu\nu} + \frac{1}{4}\widetilde g_{\mu\nu}\widetilde F) + \Theta^{-3}(2\widetilde X_{[\mu}\widetilde \nabla_{\nu]}\Theta - \widetilde g_{\mu\nu}\widetilde X^{\sigma} \widetilde\nabla_{\sigma}\Theta ) \nonumber \\ &=& \Theta^{-2}(\widetilde F_{\mu\nu} +\Theta^{-1}\widetilde H_{\mu\nu} ) \;, \\ \mathcal{F}_{\mu\nu} &=& \Theta^{-2}(\widetilde{ \mathcal{F}}_{\mu\nu} +\Theta^{-1}\widetilde{\mathcal{H}}_{\mu\nu} ) \;, \\ \mathcal{F}^2 &=& \widetilde{ \mathcal{F}}^2 + 2\Theta^{-1} {\widetilde{ \mathcal{F}}}_{\alpha\beta}\widetilde{\mathcal{H}}^{\alpha\beta} +\Theta^{-2} \widetilde{\mathcal{H}}^2\;, \\ \mathcal{U}_{\mu\nu\sigma}{}^{\rho} &=& \Theta^{-2}\widetilde{\mathcal{U}}_{\mu\nu\sigma}{}^{\rho} -\Theta^{-3}\big( \widetilde{ \mathcal{F}}_{\mu\nu}\widetilde{\mathcal{H}}_{\sigma}{}^{\rho} + \widetilde{\mathcal{H}}_{\mu\nu} \widetilde{ \mathcal{F}}_{\sigma}{}^{\rho} -\frac{2}{3} {\widetilde{ \mathcal{F}}}_{\alpha\beta}\widetilde{\mathcal{H}}^{\alpha\beta}\widetilde{\mathcal{I}}_{\mu\nu\sigma}{}^{\rho} \big) \nonumber \\ && -\Theta^{-4}\big( \widetilde{\mathcal{H}}_{\mu\nu}\widetilde{\mathcal{H}}_{\sigma}{}^{\rho} -\frac{1}{3}\widetilde{\mathcal{H}}^2 \widetilde{\mathcal{I}}_{\mu\nu\sigma}{}^{\rho} \big) \;, \label{formula_Q} \end{eqnarray} where we have set \begin{eqnarray} \widetilde H_{\mu\nu} &:=& 2\widetilde X_{[\mu}\widetilde\nabla_{\nu]}\Theta\;, \\ \widetilde {\mathcal{H}}_{\mu\nu} &:=& \widetilde H_{\mu\nu} +i \widetilde H^{\star}_{\mu\nu} \;. \end{eqnarray} We want to investigate how the MST behaves when approaching the conformal boundary $\scri$. 
Note that the conformal Killing equation implies that $\widetilde X^{\mu}$ admits a smooth extension across $\scri$ \cite{RG}; in particular, the tensor $\widetilde{\mathcal{U}}_{\mu\nu\sigma}{}^{\rho}$ is a regular object there. The following relations hold; the first two are general identities for self-dual two-forms, and the third one is a consequence of $\widetilde H$ being a simple two-form: \begin{eqnarray} {\widetilde{ \mathcal{F}}}_{\alpha\beta}\widetilde{\mathcal{H}}^{\alpha\beta} &=& 2{\widetilde F}_{\alpha\beta} \widetilde H^{\alpha\beta} + 2 i {\widetilde F}^{\alpha\beta} \widetilde H^{\star}_{\alpha\beta} \,=\, 2 {\widetilde{ \mathcal{F}}}_{\alpha\beta}\widetilde{H}^{\alpha\beta}\;, \label{relation1} \\ \widetilde{\mathcal{F}}^2 &=& 2\widetilde F^2 + 2i {\widetilde F}^{\star}_{\alpha\beta}{\widetilde F}^{\alpha\beta} \;, \\ \widetilde{\mathcal{H}}^2 &=& 2\widetilde H^2 \;, \\ \widetilde X^{\mu} \widetilde{\mathcal{H}}_{\mu\nu} &=& \widetilde X^2\widetilde \nabla_{\nu}\Theta- \frac{1}{4} \Theta \widetilde F \widetilde X_{\nu} \label{formula_XH} \;. \end{eqnarray} Here and henceforth we write $\widetilde T^2:= \widetilde T_{\alpha_1 \cdots \alpha_p} \widetilde T^{\alpha_1 \cdots \alpha_p}$ for any $(0,p)$-\emph{spacetime}-tensor, while we write $| T|^2:= T_{i_1 \cdots i_p} T^{i_1 \cdots i_p}$ for any $(0,p)$-\emph{space}-tensor on $\scri$. Since we never explicitly write down the second component of a vector, this notation should cause no confusion.
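For instance, the last equality in \eq{relation1} follows by moving the Hodge dual from one factor of a full contraction to the other, $\widetilde{\mathcal{F}}_{\alpha\beta}\widetilde H^{\star\,\alpha\beta} = \widetilde{\mathcal{F}}^{\star}{}_{\alpha\beta}\widetilde H^{\alpha\beta}$, and then using self-duality of $\widetilde{\mathcal{F}}_{\alpha\beta}$:
\begin{equation*}
\widetilde{\mathcal{F}}_{\alpha\beta}\widetilde{\mathcal{H}}^{\alpha\beta}
\,=\, \widetilde{\mathcal{F}}_{\alpha\beta}\widetilde H^{\alpha\beta} + i\,\widetilde{\mathcal{F}}^{\star}{}_{\alpha\beta}\widetilde H^{\alpha\beta}
\,=\, \widetilde{\mathcal{F}}_{\alpha\beta}\widetilde H^{\alpha\beta} + i\,(-i)\,\widetilde{\mathcal{F}}_{\alpha\beta}\widetilde H^{\alpha\beta}
\,=\, 2\,\widetilde{\mathcal{F}}_{\alpha\beta}\widetilde H^{\alpha\beta} \;.
\end{equation*}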
Moreover, the MCFE and the unphysical Killing equations imply that \begin{eqnarray} \widetilde{\mathcal{H}}^2 &=& -\frac{4}{3}\Lambda \widetilde X^2 + 8\Theta \widetilde s \widetilde X^2 - \frac{1}{4}\Theta^2 \widetilde F^2 \;, \label{relation_H2} \\ {\widetilde{ \mathcal{F}}}_{\alpha\beta}\widetilde{\mathcal{H}}^{\alpha\beta} &=& \Theta \widetilde X^{\alpha}\widetilde \nabla_{\alpha}\widetilde F + 4\Theta \widetilde X^{\alpha} \widetilde X^{\beta} \widetilde L_{\alpha\beta} - 4\widetilde s\widetilde X^2 +2i \widetilde F^{\mu\nu} \widetilde H^{\star}_{\mu\nu} \;. \label{relation_FH} \end{eqnarray} \subsection{Cauchy data at $\scri^-$} \label{constraints} Let us henceforth assume a positive cosmological constant \begin{equation} \Lambda \,>\, 0\;. \end{equation} We consider a connected component $\scri^{-}$ of past null infinity. As in \cite{ttp2}, to which we refer the reader for further details, we use adapted coordinates $(x^0=t, x^i)$ with $\scri^-=\{t=0\}$ and impose a \emph{wave map gauge condition} with \begin{equation} \widetilde R=0\;, \quad \widetilde s|_{\scri^-}=0\;, \quad \widetilde g_{tt}|_{\scri^-}=-1\;, \quad \widetilde g_{ti}|_{\scri^-}=0\;, \quad \widetilde W^{\sigma}=0\;, \quad \check g_{\mu\nu} = \widetilde g_{\mu\nu}|_{\scri^-} \;. \label{gauge_conditions_compact} \end{equation} The gauge freedom to prescribe $\widetilde R$ and $\widetilde s|_{\scri^-}$ reflects the freedom to choose the conformal factor $\Theta$, which is treated as an unknown in the MCFE. It is well-known that the freedom to choose coordinates near a spacelike hypersurface with induced metric $h_{ij}$ can be employed to prescribe $\widetilde g_{tt}|_{\scri^-}$ and $\widetilde g_{ti}|_{\scri^-}$, as long as $(\widetilde g_{tt}-h^{ij}\widetilde g_{ti} \widetilde g_{tj})|_{\scri^-}<0$ is satisfied.
The remaining freedom to choose coordinates is captured by the wave map gauge condition, a generalization of the classical harmonic gauge condition, which requires the vanishing of the so-called wave gauge vector \begin{equation} H^{\sigma}\,:=\, g^{\alpha\beta}(\Gamma^{\sigma}_{\alpha\beta}-\check\Gamma^{\sigma}_{\alpha\beta}) - W^{\sigma } =0 \;, \end{equation} where $\check g_{\mu\nu}$ denotes some target metric, the $\check \Gamma^{\sigma}_{\alpha\beta}$'s are the associated connection coefficients, and the $W^{\sigma}$'s are the gauge source functions, which can be arbitrarily prescribed \cite{F}. The target metric is introduced so that the wave gauge vector becomes a tensor. Here, as in \cite{ttp2}, we have chosen $\check g_{\mu\nu}$ to be independent of $t$ and to agree with $\widetilde g_{\mu\nu}$ on $\scri^-$. The gauge has been chosen in such a way that $\partial_t \widetilde g_{\mu\nu}$ vanishes on $\scri^-$, in order to make the computations as simple as possible. Given arbitrary coordinates, the wave map gauge can be realized by solving wave equations. Viewing the MCFE as an evolution problem with initial data on $\scri^{-}$, the free data are a (connected) Riemannian 3-manifold $(\Sigma, h_{ij})$, which represents $\scri^-$ in the emerging spacetime,\footnote{It is actually merely the conformal class of the Riemannian 3-manifold which matters geometrically. This will be relevant in paper II \cite{mpss}.} and a TT tensor $D_{ij}$ (i.e.\ trace-free and divergence-free) which satisfies the relation \begin{equation} D_{ij}=\widetilde d_{titj}|_{\scri^-} \end{equation} once the asymptotic Cauchy problem has been solved: \begin{theorem}[\cite{F_lambda}] Let $(\Sigma, h_{ij})$ be a Riemannian 3-manifold, $D_{ij}$ a symmetric $(0,2)$-tensor and $\Lambda>0$.
Then, if and only if $D_{ij}$ is a TT tensor, the tuple $(\Sigma, h_{ij}, D_{ij})$ defines an (up to isometries) unique maximal globally hyperbolic development (in the unphysical spacetime) of the $\Lambda$-vacuum field equations into which $\Sigma$ can be embedded, via an embedding $\iota$, such that $\iota(\Sigma)$ represents $\scri^-$ with $\iota^* \widetilde g_{ij}|_{\Sigma}=h_{ij}$ and $\iota^* \widetilde d_{titj}|_{\Sigma}=D_{ij}$. \end{theorem} For simplicity, we will often identify $\Sigma$ with its image under $\iota$ and drop all reference to the embedding. It is a property of the spacelike Cauchy problem that all transverse derivatives on the initial surface can be computed algebraically from the initial data (here $h_{ij}$ and $D_{ij}$). In the gauge \eq{gauge_conditions_compact} the MCFE \eq{conf1}-\eq{conf6} enforce the following relations on $\scri^-$, cf.\ \cite{F_lambda, ttp2}, \begin{eqnarray} &\ol {\widetilde g}_{tt} =-1\;, \quad \ol {\widetilde g}_{ti}=0\;, \quad \ol {\widetilde g}_{ij} = h_{ij}\;, \quad \ol{\partial_t {\widetilde g}_{\mu\nu}}=0\;, \label{constr2} & \\ & \ol \Theta =0\;, \quad \ol{\partial_t \Theta} = \sqrt{\frac{\Lambda}{3}}\;, \quad \ol{\partial_t\partial_t\Theta}=0\;, \label{constr3} & \\ & \ol{\partial_t\partial_t\partial_t\Theta} = - \frac{1}{2}\sqrt{\frac{\Lambda}{3}} \widehat R \;,\quad \ol{ \partial_t\partial_t\partial_t\partial_t\Theta} = 0 \;, & \\ &\ol {\widetilde s}=0\;, \quad \ol{\partial_t \widetilde s} = \frac{1}{4}\sqrt{\frac{\Lambda}{3}} \,\widehat R\;, \label{constr4} & \\ &\ol {\widetilde L}_{ij} = \widehat L_{ij} \;, \quad \ol {\widetilde L}_{ti}=0\;, \quad \ol {\widetilde L}_{tt} = \frac{1}{4}\widehat R \;, \label{constr5} & \\ & \ol{\partial_t \widetilde L_{ij}} = -\sqrt{\frac{\Lambda}{3}} \, D_{ij}\;, \quad \ol{\partial_t \widetilde L_{ti}} =\frac{1}{4}\partial_i\widehat R \;, \quad \ol{\partial_t \widetilde L_{tt}} =0\;, \label{constr6} & \\ & \ol {\widetilde d}_{titj} = D_{ij}\;,\quad \ol {\widetilde d}_{tijk} =
\sqrt{\frac{3}{\Lambda}} \widehat C_{ijk}\;, \label{constr7} & \\ & \ol{\partial_{t} {\widetilde d}_{titj}}= \sqrt{\frac{3}{\Lambda}}\widehat B_{ij}\;, \quad \ol{\partial_{t} {\widetilde d}_{tijk}} = 2\widehat \nabla_{[j}D_{k]i}\;, \label{constr8} & \\ & \ol {\widetilde \Gamma}^k_{ij} = \widehat \Gamma^k_{ij}\;, \quad \ol {\widetilde \Gamma}^t_{ij} = \ol {\widetilde \Gamma}^t_{ti} =\ol {\widetilde \Gamma}^t_{tt} = \ol {\widetilde \Gamma}^k_{tt} = \ol{\widetilde \Gamma}^k_{ti} = 0\;, \label{christoffel} & \\ &\ol {\widetilde R}_{tijk} =0\;, \quad \ol {\widetilde R}_{titj} =-\widehat L_{ij} + \frac{1}{4} h_{ij}\widehat R \;, & \\ & \ol{\partial_t \widetilde R_{tijk}} = \widehat C_{ijk}-\frac{1}{2} h_{i[j}\widehat \nabla_{k]}\widehat R \;, \quad \ol{\partial_t\widetilde R_{titj}} = 2\sqrt{\frac{\Lambda}{3}} D_{ij} \;. \label{constr_last} & \end{eqnarray} An overbar denotes the restriction of spacetime objects to $\scri^-$, unless explicitly stated otherwise (in the latter cases it denotes complex conjugation). We use the symbol $\enspace\widehat{}\enspace$ to denote objects associated to the induced Riemannian metric $h_{ij}$; in particular $\widehat C_{ijk}$, $\widehat L_{ij}$ and $\widehat B_{ij}$ denote the Cotton, Schouten and Bach tensor, respectively, of $h_{ij}$. Recall that they are defined by \begin{align} \widehat C_{ijk} &:= \widehat \nabla_k \widehat L_{ij} - \widehat \nabla_j \widehat L_{ik}\;, \quad \quad \widehat L_{ij} := \widehat R_{ij} - \frac{1}{4} \widehat R h_{ij} \;, \label{Cotton+Schouten}\\ \widehat B_{ij} & := -\widehat\nabla^k\widehat C_{ijk} = \widehat\nabla^k\widehat \nabla_i\widehat L_{jk} - \widehat \nabla_k \widehat\nabla^k \widehat L_{ij}\label{Bach} \;. \end{align} Note that due to \eq{christoffel} the actions of $\widetilde \nabla_t$ and $\partial_t$, as well as $\widetilde \nabla_i$ and $\widehat \nabla_i$, respectively, coincide on $\scri^-$, so we can use them interchangeably.
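To illustrate how such relations arise, consider \eq{conf5} restricted to $\scri^-$: there $\Theta$ vanishes, so $\ol{\partial_i\Theta}=0$, while \eq{gauge_conditions_compact} gives $\ol{\widetilde s}=0$ and $\ol{\widetilde g}{}^{tt}=-1$, whence
\begin{equation*}
\frac{\Lambda}{3} \,=\, \big(2\Theta \widetilde s - \widetilde\nabla_{\mu}\Theta\,\widetilde\nabla^{\mu}\Theta\big)\big|_{\scri^-} \,=\, \big(\,\ol{\partial_t \Theta}\,\big)^2 \;,
\end{equation*}
which, with the sign appropriate for a past conformal boundary, reproduces $\ol{\partial_t\Theta}=\sqrt{\Lambda/3}$ in \eq{constr3}.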
Whenever $X^{\mu}$ is a KVF of the physical spacetime, the vector field \begin{equation} Y^i := \widetilde X^i|_{\scri^-} \end{equation} is a conformal Killing vector field (CKVF) of $(\scri^-, h_{ij})$, i.e.\ \begin{equation} ({\mycal L}_Y h)_{ij} \, \equiv \, 2\widehat\nabla_{(i}Y_{j)} \,=\, \frac{2}{3} \widehat\nabla_kY^k h_{ij} \;, \label{CKVF_eqn} \end{equation} which fulfills the \emph{KID equations} \cite{ttp2} \begin{equation} {\mycal L}_Y D_{ij} + \frac{1}{3}D_{ij}\widehat \nabla_k Y^k = 0 \;, \label{reduced_KID} \end{equation} and vice versa: \begin{theorem}[\cite{ttp2}] Let $(\Sigma,h_{ij})$ be a Riemannian 3-manifold, $D_{ij}$ a symmetric $(0,2)$-tensor on $\Sigma$ and $\Lambda>0$. Then, the tuple $(\Sigma,h_{ij}, D_{ij}, Y^i)$ defines a $\Lambda$-vacuum spacetime, unique up to isometries and maximal globally hyperbolic in the unphysical spacetime, with a smooth $\scri^-$ represented by $\iota(\Sigma)$, with $\iota^* \widetilde g_{ij}|_{\Sigma}=h_{ij}$ and $\iota^* \widetilde d_{titj}|_{\Sigma}=D_{ij}$, which contains a Killing vector field $X$ with $\ol {\widetilde X}^i=Y^i$, if and only if $D_{ij}$ is a TT tensor and $Y$ is a conformal Killing vector field on $(\Sigma,h_{ij})$ which satisfies the KID equations (\ref{reduced_KID}). Moreover, $\widetilde X^{\mu}$ satisfies \begin{equation} \ol {\widetilde X}^t=0\;, \quad \ol {\widetilde \nabla_t \widetilde X^t} = \frac{1}{3}\widehat \nabla_i Y^i\;, \quad \ol{\widetilde \nabla_t \widetilde X^i}=0\;.
\label{rel_KVF} \end{equation} \end{theorem} From what has been shown in \cite{ttp2} one easily derives the following expressions on $\scri$, \begin{eqnarray} \ol{\widetilde F} &=&\frac{4}{3}\widehat\nabla_i Y^i\;, \label{Killing_rel_first} \\ \Delta_h Y_i &=& -\widehat L_{ij}Y^j - \frac{1}{4} \widehat R Y_i - \frac{1}{3}\widehat \nabla_i \widehat\nabla_j Y^j \;, \\ \Delta_h \ol{\widetilde F} &=& -Y^i\widehat \nabla_i\widehat R - \frac{1}{2}\widehat R\ol{\widetilde F} \;, \label{Killing_div} \\ \ol{\widetilde\nabla_t\widetilde \nabla_t\widetilde X_t} &=& 0 \;, \\ \ol{\widetilde \nabla_t\widetilde \nabla_t\widetilde X_i} &=& \widehat L_{ij}Y^j - \frac{1}{4}\widehat R Y_i + \frac{1}{3}\widehat \nabla_i \widehat \nabla_j Y^j\;, \\ \ol{\widetilde\nabla_t\widetilde\nabla_t\widetilde\nabla_t\widetilde X_t} &=& -\frac{1}{4}\Delta_h \ol{\widetilde F} \;, \\ \ol{\widetilde\nabla_t\widetilde\nabla_t\widetilde\nabla_t\widetilde X_k} &=& - 2 \sqrt{\frac{\Lambda}{3}} D_{kl} Y^l \;, \\ \ol{\widetilde \nabla_t \widetilde F} &=& 0 \;, \\ \ol{\widetilde \nabla_t\widetilde \nabla_t \widetilde F} &=&\Delta_h \ol{\widetilde F} \;. \label{Killing_rel_last} \end{eqnarray} \subsection{The function $Q$} \label{functionQ} \subsubsection{A necessary condition for vanishing MST} \label{subsec_nec_cond} Our aim is to characterize initial data on a spacelike $\scri^-$ which lead to a vanishing MST. We have not specified the function $Q$ yet. Nonetheless, let us assume for the time being that $\Theta^{-4} Q$ does not tend to zero at $\scri$. Then, it follows from \eq{formula_C} and \eq{formula_Q} that a necessary condition for the MST to vanish on $\scri$ is \begin{equation} \Big[\widetilde{\mathcal{H}}_{\mu\nu}\widetilde{\mathcal{H}}_{\sigma}{}^{\rho} -\frac{1}{3}\widetilde{\mathcal{H}}^2 \widetilde{\mathcal{I}}_{\mu\nu\sigma}{}^{\rho}\Big]\Big|_{\scri} \,=\, 0 \;.
\end{equation} A straightforward computation on a spacelike $\scri$ in the wave map gauge \eq{gauge_conditions_compact} shows that this is the case if and only if \begin{equation} 0 \,=\, \Big[\widetilde{\mathcal{H}}_{ti}\widetilde{\mathcal{H}}_{tj} - \frac{1}{3}\widetilde{\mathcal{H}}^2\widetilde{\mathcal{I}}_{titj}\Big]\Big|_{\scri} \,=\, \frac{\Lambda}{3} (Y_iY_j)_{\mathrm{tf}} \quad \Longleftrightarrow \quad Y^i\,=\,0 \;. \end{equation} This already implies \cite{ttp2} that the KVF $X^{\mu}$ is trivial. Hence, $\Theta^{-4} Q$ must necessarily go to zero whenever the MST vanishes on a spacelike $\scri$. In the next subsection we in fact show that \begin{equation} Q\,=\,O(\Theta^5) \end{equation} holds automatically for an appropriate definition of $Q$. \subsubsection{Definition and asymptotic behavior of the MST} In order to analyze the situation where $\mathcal{S}_{\mu\nu\sigma\rho}$ vanishes, it is natural to define $Q$ in such a way that a certain scalar constructed from $\mathcal{S}_{\mu\nu\sigma\rho}$ vanishes automatically. This tensor has the same algebraic properties as the Weyl tensor, so all its traces are identically zero and cannot be used to define $Q$. A convenient choice is to require \begin{equation} \mathcal{S}_{\mu\nu\sigma\rho}\mathcal{F}^{\mu\nu}\mathcal{F}^{\sigma\rho} \,=\,0 \;, \end{equation} or, equivalently, \begin{equation} Q\mathcal{F}^{4}\,=\, \frac{3}{2} \mathcal{F}^{\mu\nu}\mathcal{F}^{\sigma\rho} \mathcal{C}_{\mu\nu\sigma\rho} \,=\, 6F^{\mu\nu} F^{\sigma\rho} \mathcal{C}_{\mu\nu\sigma\rho} \label{definition_Q} \;. \end{equation} The function $Q$ must necessarily satisfy \eq{definition_Q} whenever the MST vanishes. Let us restrict attention to the case where $\mathcal{F}^2$ has no zeros. In fact, $\mathcal{F}^2=-\frac{4}{3}\Lambda \Theta^{-2} |Y|^2 + O(\Theta^{-1})$, so, at least sufficiently close to $\scri$, it suffices to assume that $Y$ has no zeros on $\scri$. Then \eq{definition_Q} determines $Q$.
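The equivalence of the two conditions above can be seen as follows: since $\mathcal{F}_{\mu\nu}$ is self-dual, $\mathcal{I}_{\mu\nu\sigma\rho}\mathcal{F}^{\sigma\rho} = \mathcal{F}_{\mu\nu}$, whence
\begin{equation*}
\mathcal{U}_{\mu\nu\sigma\rho}\mathcal{F}^{\mu\nu}\mathcal{F}^{\sigma\rho}
\,=\, -\big(\mathcal{F}^2\big)^2 + \frac{1}{3}\big(\mathcal{F}^2\big)^2
\,=\, -\frac{2}{3}\,\mathcal{F}^4 \;,
\end{equation*}
so that $\mathcal{S}_{\mu\nu\sigma\rho}\mathcal{F}^{\mu\nu}\mathcal{F}^{\sigma\rho}=0$ becomes $Q\mathcal{F}^4 = \frac{3}{2}\mathcal{F}^{\mu\nu}\mathcal{F}^{\sigma\rho}\mathcal{C}_{\mu\nu\sigma\rho}$. The second equality in \eq{definition_Q} then follows because contracting the self-dual tensor $\mathcal{C}_{\mu\nu\sigma\rho}$ with $\mathcal{F}^{\mu\nu} = F^{\mu\nu} + i F^{\star\,\mu\nu}$ produces, as in \eq{relation1}, a factor of $2$ for each pair of indices.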
From now on this choice of $Q$ will be denoted by $Q_0$, \begin{equation} Q_0\,:=\, \frac{3}{2} \mathcal{F}^{-4}\mathcal{F}^{\mu\nu}\mathcal{F}^{\sigma\rho} \mathcal{C}_{\mu\nu\sigma\rho} \;, \label{definition_Q0} \end{equation} and the corresponding MST by $\mathcal{S}^{(0)}_{\mu\nu\sigma\rho}$. When we want to emphasize the metric $g$ with respect to which $\mathcal{S}^{(0)}_{\mu\nu\sigma\rho}$ is defined, we will write $\mathcal{S}^{(0)}_{\mu\nu\sigma\rho}[g]$. As has already been done for the other fields appearing in the definition of the MST, we express $Q_0$ in terms of the unphysical fields. First of all we set \begin{equation} \widetilde{ \mathcal{D}}_{\mu\nu\sigma\rho} \,=\, \Theta^{-1} \widetilde{\mathcal{C}}_{\mu\nu\sigma\rho} \;. \end{equation} Making use of the various relations \eq{formula_C}-\eq{formula_XH} we find that\footnote{Not all orders given here and later in several instances are needed for our calculations. Nevertheless, we have chosen to write them down for the sake of completeness.} \begin{eqnarray} Q_0 &=& - 6\mathcal{F}^{-4} F^{\mu\nu} F_{\rho}{}^{\sigma} \mathcal{C}_{\mu\nu\sigma}{}^{\rho} \\ &=& 6\Theta^4 \widetilde{\mathcal{C}}_{\mu\nu\sigma\rho}\frac{(\widetilde H^{\mu\nu} + \widetilde F^{\mu\nu}\Theta )(\widetilde H^{\sigma\rho} + \widetilde F^{\sigma\rho}\Theta ) }{[ \widetilde{\mathcal{H}}^2 + 2 {\widetilde{ \mathcal{F}}}_{\alpha\beta}\widetilde{\mathcal{H}}^{\alpha\beta}\Theta +\widetilde{ \mathcal{F}}^2\Theta^2 ]^2} \\ &=& 6\Theta^5 \widetilde{\mathcal{D}}_{\mu\nu\sigma\rho}\frac{ \widetilde H^{\mu\nu} \widetilde H^{\sigma\rho} + 2\widetilde H^{\mu\nu}\widetilde F^{\sigma\rho}\Theta + \widetilde F^{\sigma\rho} \widetilde F^{\mu\nu}\Theta ^2 }{[ \widetilde {\mathcal{H}}^2 + 2 {\widetilde{ \mathcal{F}}}_{\alpha\beta}\widetilde{\mathcal{H}}^{\alpha\beta}\Theta +\widetilde{ \mathcal{F}}^2\Theta^2 ]^2} \\ &=& \frac{3}{2}\Theta^5 \widetilde H^{-4}\widetilde{\mathcal{D}}_{\mu\nu\sigma\rho}\Big( \widetilde H^{\mu\nu} \widetilde 
H^{\sigma\rho} + 2\widetilde H^{\mu\nu}\widetilde F^{\sigma\rho}\Theta - 2\widetilde H^{-2} \widetilde H^{\mu\nu} \widetilde H^{\sigma\rho} {\widetilde{ \mathcal{F}}}_{\alpha\beta}\widetilde{\mathcal{H}}^{\alpha\beta} \Theta \Big) \nonumber \\ && + O(\Theta^7) \;. \end{eqnarray} Using $\Lambda>0$ and the relations \eq{relation_H2}-\eq{relation_FH}, which in particular imply \begin{equation} \widetilde H^{-2} \,=\, - \frac{3}{2}\Lambda^{-1} \widetilde X^{-2} +O(\Theta^2) \end{equation} (note that $\widetilde s=O(\Theta)$ due to \eq{gauge_conditions_compact}), we find the following expression for $Q_0$, \begin{eqnarray} Q_0 &=& \frac{27}{8}\Theta^5 \Lambda^{-2} \widetilde X^{-4} \widetilde{\mathcal{D}}_{\mu\nu\sigma\rho}\Big( \widetilde H^{\mu\nu} \widetilde H^{\sigma\rho} + 2\widetilde H^{\mu\nu}\widetilde F^{\sigma\rho}\Theta \nonumber \\ && \hspace{6em} +6i\Lambda^{-1} \widetilde X^{-2} \widetilde H^{\mu\nu} \widetilde H^{\sigma\rho} \widetilde H^{\alpha\beta} \widetilde F^{\star}_{\alpha\beta} \Theta \Big) + O(\Theta^7) \;. \label{expansion_Q} \end{eqnarray} We conclude that, in the wave map gauge \eq{gauge_conditions_compact}, \begin{equation} (\Theta^{-5} Q_0) |_{\scri} \,=\, \frac{27}{8}\Lambda^{-2} \widetilde X^{-4} \widetilde{\mathcal{D}}_{\mu\nu\sigma\rho} \widetilde H^{\mu\nu} \widetilde H^{\sigma\rho} \,=\, \frac{9}{2}\Lambda^{-1} |Y|^{-4} Y^{i}Y^{j}\widetilde{\mathcal{D}}_{titj} \;. \label{leading_order_Q} \end{equation} \subsection{Properties of the MST on $\scri$} \begin{proposition} \label{prop_reg_S} Consider a spacetime $({\cal M},g)$, solution to Einstein's vacuum field equations with $\Lambda>0$, which admits a smooth conformal extension through $\scri$ and which contains a KVF $X$ with $\widetilde X^2|_{\scri} > 0$. 
Then the MST $\mathcal{S}^{(0)}_{\mu\nu\sigma}{}^{\rho}[\Theta^{-2}\widetilde g_{\alpha\beta}]$ corresponding to $X$ with $Q=Q_0$ defined by \eq{definition_Q0} vanishes on $\scri$.% \end{proposition} \begin{proof} The Weyl tensor is known to vanish on $\scri$. Since $\mathcal{U}_{\mu\nu\sigma}{}^{\rho}=O(\Theta^{-4})$ by \eq{formula_Q} and $Q_0=O(\Theta^5)$ by \eq{expansion_Q}, the proposition is proved. \hfill $\Box$ \medskip \end{proof} \begin{corollary} The rescaled MST $$\widetilde{\mathcal{T}}^{(0)}_{\mu\nu\sigma}{}^{\rho}[\Theta, \widetilde g_{\alpha\beta}] :=\Theta^{-1} \mathcal{S}^{(0)}_{\mu\nu\sigma}{}^{\rho}[\Theta^{-2}\widetilde g_{\alpha\beta}]$$ is regular at $\scri$. \end{corollary} \subsection{The rescaled MST on $\scri$} \label{section_recaled_MS} In this section we determine the behavior of the rescaled MST $\widetilde{\mathcal{T}}^{(0)}_{\mu\nu\sigma}{}^{\rho}$ at $\scri$. For the tensor $\mathcal{U}_{\mu\nu\sigma}{}^{\rho} $ we find, using (\ref{formula_Q}), (\ref{relation_H2}) and (\ref{relation_FH}), \begin{eqnarray} \Theta^4\mathcal{U}_{\mu\nu\sigma}{}^{\rho} &=& -\big( \widetilde{\mathcal{H}}_{\mu\nu}\widetilde{\mathcal{H}}_{\sigma}{}^{\rho} +\frac{4}{9} \Lambda \widetilde X^2 \widetilde{\mathcal{I}}_{\mu\nu\sigma}{}^{\rho} \big) \nonumber \\ && -\Theta \big( \widetilde{ \mathcal{F}}_{\mu\nu}\widetilde{\mathcal{H}}_{\sigma}{}^{\rho} + \widetilde{\mathcal{H}}_{\mu\nu}\widetilde{ \mathcal{F}}_{\sigma}{}^{\rho} -\frac{4}{3}i \widetilde H^{\alpha\beta} \widetilde F^{\star}_{\alpha\beta}\widetilde{\mathcal{I}}_{\mu\nu\sigma}{}^{\rho} \big) + O(\Theta^2) \;. \phantom{xx} \label{theta4U} \end{eqnarray} Now we are ready to evaluate the rescaled MST $\widetilde{\mathcal{T}}^{(0)}_{\mu\nu\sigma\rho} =\widetilde g_{\rho\alpha} \widetilde{\mathcal{T}}^{(0)}_{\mu\nu\sigma}{}^{\alpha} $ on~$\scri$.
From (\ref{leading_order_Q}) and (\ref{theta4U}), \begin{equation} \widetilde{\mathcal{T}}^{(0)}_{\mu\nu\sigma\rho}\big|_{\scri} \,=\, \widetilde {\mathcal{D}}_{\mu\nu\sigma\rho} -\frac{9}{2}\Lambda^{-1} |Y|^{-4} Y^{i}Y^{j}\widetilde{\mathcal{D}}_{titj} ( \widetilde{\mathcal{H}}_{\mu\nu}\widetilde{\mathcal{H}}_{\sigma\rho} -\frac{1}{3}\widetilde{\mathcal{H}}^2 \widetilde{\mathcal{I}}_{\mu\nu\sigma\rho}) \;. \end{equation} Since the rescaled MST is a self-dual Weyl field, its independent components on $\scri$ are $ \widetilde{\mathcal{T}}^{(0)}_{titj}|_{\scri}$. Employing the various relations collected in Section~\ref{constraints}, it follows that, in the wave map gauge \eq{gauge_conditions_compact}, \begin{eqnarray} ( \widetilde{\mathcal{H}}_{ti}\widetilde{\mathcal{H}}_{tj} -\frac{1}{3}\widetilde{\mathcal{H}}^2 \widetilde{\mathcal{I}}_{titj}) |_{\scri} &=& \frac{\Lambda}{3} (Y_iY_j)_{\mathrm{tf}} \;, \label{H_Y_relation} \\ \widetilde{\mathcal{D}}_{titj}|_{\scri} &=&D_{ij} - i \ \sqrt{\frac{3}{\Lambda}} \widehat C_{ij} \label{MS_scri2} \;. \end{eqnarray} Here $\widehat C_{ij}$ denotes the Cotton-York tensor \begin{equation} \widehat C_{ij} \,=\, -\frac{1}{2}\widehat \mbox{$\eta$}_i{}^{kl}\widehat C_{jkl} \quad \Longleftrightarrow \quad \widehat C_{ijk} \,=\, -\widehat\mbox{$\eta$}_{jk}{}^{l}\widehat C_{il} \label{cotton-york} \;, \end{equation} which is a TT tensor, and $\widehat \mbox{$\eta$}_{jkl}$ denotes the canonical volume 3-form relative to $h_{ij}$. Note that $D_{ij}$ and $\widehat C_{ij}$ correspond to the asymptotic electric and magnetic part, respectively, of the conformal Weyl tensor. We observe that \eq{MS_scri2} immediately implies that $\scri$ will be locally conformally flat, i.e.\ has vanishing Cotton-York tensor, if and only if the magnetic part of the rescaled Weyl tensor $\widetilde d_{\mu\nu\sigma\rho}$ vanishes at $\scri$, cf.\ \cite{ashtekar}.
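For the reader's convenience we recall why the components $\widetilde{\mathcal{T}}^{(0)}_{titj}|_{\scri}$ suffice (a convention-dependent sketch; we use self-duality in the form $\widetilde{\mathcal{T}}^{(0)\star}=-i\,\widetilde{\mathcal{T}}^{(0)}$, with signs depending on the orientation conventions): the mixed and purely spatial components are recovered via

```latex
% Components of a self-dual Weyl field determined by the "electric" ones:
\begin{equation*}
\frac{1}{2}\,\widetilde\mbox{$\eta$}_{ti}{}^{jk}\,
   \widetilde{\mathcal{T}}^{(0)}_{jktl}
 \,=\, -\,i\,\widetilde{\mathcal{T}}^{(0)}_{titl}\;, \qquad
\widetilde{\mathcal{T}}^{(0)}_{ijkl}\big|_{\scri}
 \,=\, -\,\widetilde\mbox{$\eta$}_{ij}{}^{tm}\,
        \widetilde\mbox{$\eta$}_{kl}{}^{tn}\,
        \widetilde{\mathcal{T}}^{(0)}_{tmtn}\big|_{\scri}\;,
\end{equation*}
```

the second relation being the analogue of the identity for $\widetilde d_{ijkl}$ employed further below.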
\begin{proposition} \label{rescaledMST_scri} Consider a spacetime $({\cal M},g)$, solution to Einstein's vacuum field equations with $\Lambda>0$, which admits a smooth conformal extension through $\scri$ and which contains a KVF $X$ with $\widetilde X^2|_{\scri} > 0$. Then, the rescaled MST $\widetilde{\mathcal{T}}^{(0)}_{\mu\nu\sigma}{}^{\rho}$ satisfies \begin{equation*} \widetilde{\mathcal{T}}^{(0)}_{titj}\big|_{\scri}\,=\, D_{ij} -\frac{3}{2} |Y|^{-4} Y^{k}Y^{l} D_{kl}(Y_iY_j)_{\mathrm{tf}} - i \ \sqrt{\frac{3}{\Lambda}} \Big( \widehat C_{ij} - \frac{3}{2} |Y|^{-4} Y^{k}Y^{l} \widehat C_{kl}(Y_iY_j)_{\mathrm{tf}}\Big) \;. \end{equation*} (Recall that $\widetilde{\mathcal{T}}^{(0)}_{titj}\big|_{\scri}$ comprises all independent components.) \end{proposition} According to Proposition~\ref{rescaledMST_scri}, the rescaled MST vanishes on $\scri$ if and only if \begin{eqnarray} D_{ij} - \frac{3}{2} |Y|^{-4} Y^{k} Y^{l} D_{kl} (Y_iY_j)_{\mathrm{tf}} &=& 0\;, \label{condition_D} \\ \widehat C_{ij} - \frac{3}{2} |Y|^{-4} Y^{k} Y^{l} \widehat C_{kl} (Y_iY_j)_{\mathrm{tf}} &=& 0 \;. \label{condition_C} \end{eqnarray} We solve \eq{condition_D} on $\scri^{-}$; equation \eq{condition_C} can be treated in exactly the same manner. We define \begin{equation} d\,:=\, Y^iY^jD_{ij} \;. \end{equation} Applying $\widehat\nabla^j$ to \eq{condition_D} and employing the fact that the constraint equations enforce $D_{ij}$ to be a TT tensor, we are led to the equation \begin{equation} Y_i \Big(Y^j \widehat \nabla_j d + \frac{1}{3} d \widehat\nabla_j Y^j \Big) - \frac{1}{3} |Y|^2 \widehat \nabla_i d - \frac{1}{6}d\widehat \nabla_i |Y|^2 \,=\, 0 \;, \label{divergence_eqn} \end{equation} after using the following two consequences of the conformal Killing equation for $Y$, \begin{align} Y^j \widehat \nabla_j |Y|^2 & = \frac{2}{3} |Y|^2 \widehat \nabla_l Y^l, \\ Y^j \widehat \nabla_j Y_i & = \frac{2}{3} Y_i \widehat \nabla_l Y^l - \frac{1}{2} \widehat \nabla_i |Y|^2.
\end{align} Contraction of (\ref{divergence_eqn}) with $Y^i$ gives \begin{equation} Y^j \widehat \nabla_j d + \frac{1}{3} d \widehat\nabla_j Y^j \,=\, 0 \;. \label{divergence_eqn2} \end{equation} Inserting this into \eq{divergence_eqn} yields \begin{equation} 2 \widehat \nabla_i d +d\widehat \nabla_i\log |Y|^2=0 \;. \label{divergence_eqn3} \end{equation} The general solution of this equation is, using that $\scri^{-}$ is connected, \begin{equation} d = \frac{2}{3} C_{\mathrm{el}} |Y|^{-1}\;, \enspace C_{\mathrm{el}}=\mathrm{const.} \end{equation} It follows that necessarily \begin{equation} D_{ij} \,=\,C_{\mathrm{el}} |Y|^{-5} (Y_i Y_j)_{\mathrm{tf}} \;, \label{condition_on_D} \end{equation} which is, indeed, a TT tensor satisfying \eq{condition_D}: \begin{lemma} Let $(\Sigma,h)$ be an $n$-dimensional Riemannian manifold. Let $Y$ be a vector field on $\Sigma$ with $|Y|^2\ne 0$, and denote by $\widehat\nabla$ the connection associated to $h$. Then $D_{ij}:=|Y|^{-n-2} (Y_iY_j)_{\mathrm{tf}}$ is a TT-tensor if and only if \begin{equation} Y^j(\widehat\nabla_{(i}Y_{j)})_{\mathrm{tf}}\, = \,0 \;. \label{Y_TT-cond} \end{equation} (So in particular if $Y$ is a CKVF.) \end{lemma} \begin{proof} We compute the divergence of $D_{ij}$, \begin{equation} \widehat\nabla^jD_{ij} \,=\, - (n+2)|Y|^{-n-4} Y_iY^j Y^k(\widehat\nabla_{(j} Y_{k)})_{\mathrm{tf}} +2 |Y|^{-n-2} Y^j(\widehat\nabla_{(i} Y_{j)})_{\mathrm{tf}} \label{eqn_TT_cond} \;, \end{equation} and observe that $D_{ij}$ is a TT-tensor if \eq{Y_TT-cond} holds. Conversely, contraction of \eq{eqn_TT_cond} with $Y^i$ yields \begin{equation} Y^i\widehat\nabla^jD_{ij} \,=\, -n |Y|^{-n-2} Y^iY^j(\widehat\nabla_{(i} Y_{j)})_{\mathrm{tf}} \label{eqn_TT_cond2} \;, \end{equation} which we insert into \eq{eqn_TT_cond}, \begin{equation} \widehat\nabla^jD_{ij} \,=\, \frac{n+2}{n}|Y|^{-2} Y_iY^j\widehat\nabla^kD_{jk} +2 |Y|^{-n-2} Y^j(\widehat\nabla_{(i} Y_{j)})_{\mathrm{tf}} \label{eqn_TT_cond3} \;.
\end{equation} It follows that if $D_{ij}$ is a TT-tensor then \eq{Y_TT-cond} holds, which completes the proof of the lemma. \hfill $\Box$ \medskip \end{proof} Similarly, one shows that for some constant $C_{\mathrm{mag}}$ \begin{equation} \widehat C_{ij} \,=\, \sqrt{\frac{\Lambda}{3}}C_{\mathrm{mag}} |Y|^{-5}(Y_iY_j)_{\mathrm{tf}} \;. \label{condition_on_C} \end{equation} \begin{remark} {\rm If $Y^i$ is a CKVF, \eq{condition_on_D} defines, away from zeros of $Y$, a TT-tensor $D_{ij}$ which satisfies the KID equations \eq{reduced_KID}. On the other hand, a solution of \eq{reduced_KID} always satisfies \eq{divergence_eqn2}. } \end{remark} Up to this stage we had to assume that $|Y|^2>0$ on $\scri$. In fact, the above considerations reveal that this follows from the assumption of the existence of a smooth $\scri$ whenever the rescaled tensor $\widetilde{\mathcal{T}}^{(0)}_{\mu\nu\sigma}{}^{\rho}$ vanishes there: The CKVF $Y$ is not allowed to vanish in some open region of $\scri$, because this would imply that the corresponding KVF would vanish in the domain of dependence of that region. Let us assume that $|Y(p)|^2=0$ for some $p\in\scri$. 
Then it follows from \eq{condition_on_D}-\eq{condition_on_C} that, for either $C_{\mathrm{el}} \ne 0 $ or $C_{\mathrm{mag}} \ne 0$, \begin{eqnarray} \widetilde d_{\mu\nu\sigma\rho} \widetilde d^{\mu\nu\sigma\rho}|_{\scri} &=& (4\widetilde d_{t itj} \widetilde d^{titj} + 4\widetilde d_{t ijk} \widetilde d^{tijk} + \widetilde d_{ijkl} \widetilde d^{ijkl})|_{\scri} \\ &=& 8D_{ij}D^{ij}- \frac{24}{\Lambda}\widehat C_{ij}\widehat C^{ij} = \frac{16}{3 |Y|^6} \left ( C_{\mathrm{el}}^2 - C_{\mathrm{mag}}^2 \right ) \end{eqnarray} or \begin{eqnarray} \widetilde d^{\star}_{\mu\nu\sigma\rho} \widetilde d^{\mu\nu\sigma\rho}|_{\scri} &=& (4\widetilde\mbox{$\eta$}_{ti}{}^{jk}\widetilde d_{jktl} \widetilde d^{titl} +2 \widetilde\mbox{$\eta$}_{ti}{}^{jk}\widetilde d_{jklm} \widetilde d^{tilm} )|_{\scri} \\ &=& -16\sqrt{\frac{3}{\Lambda}} \widehat C_{ij} D^{ij} = - \frac{32}{3 |Y|^6} C_{\mathrm{el}}C_{\mathrm{mag}} \end{eqnarray} (we used that $\widetilde d_{ijkl}|_{\scri} = -\widetilde\mbox{$\eta$}_{ij}{}^{tm}\widetilde\mbox{$\eta$}_{kl}{}^{tn}\widetilde d_{tmtn}$), diverges at $p$, so that $p$ actually cannot belong to the (unphysical) manifold. This argument does not apply when $C_{\mathrm{el}}=C_{\mathrm{mag}}=0$. In this case the metric $h$ is conformally flat and $D_{ij}$ vanishes, so the data at $\scri^{-}$ correspond to data for the de Sitter metric. The maximal de Sitter data is $\scri^- = \mathbb{S}^3$ with $h$ the standard round metric. This space has ten linearly independent conformal Killing vectors, which generically vanish at some points. In this case the points where the conformal Killing vector vanishes do belong to $\scri^{-}$.% \footnote{ Note that in the de Sitter case we have $\mathcal{C}_{\alpha\beta\mu\nu}=0=Q_0$, so the MST associated to \emph{any} KVF vanishes identically. } This is why we need to exclude de Sitter explicitly in the following Theorem. 
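The two invariant values just displayed can be confirmed symbolically. The following is our own sanity check (not part of the paper): taking a generic vector $Y$ with flat index gymnastics, and $D_{ij}$, $\widehat C_{ij}$ of the form \eq{condition_on_D}, \eq{condition_on_C}, the stated coefficients $\frac{16}{3}$ and $-\frac{32}{3}$ emerge.

```python
# Symbolic check of 8 D_ij D^ij - (24/Lambda) C_ij C^ij and
# -16 sqrt(3/Lambda) C_ij D^ij for D, C of the form (condition_on_D),
# (condition_on_C); indices are raised with the flat metric.
import sympy as sp

Y = sp.Matrix(sp.symbols('Y1 Y2 Y3', real=True))
Ce, Cm, Lam = sp.symbols('C_el C_mag Lambda', positive=True)
Y2 = Y.dot(Y)                                            # |Y|^2
h = sp.eye(3)

tf = lambda M: M - sp.Rational(1, 3) * M.trace() * h     # trace-free part
D = Ce * Y2**sp.Rational(-5, 2) * tf(Y * Y.T)
C = sp.sqrt(Lam / 3) * Cm * Y2**sp.Rational(-5, 2) * tf(Y * Y.T)

contract = lambda A, B: sum(A[i, j] * B[i, j] for i in range(3) for j in range(3))

inv1 = 8 * contract(D, D) - 24 / Lam * contract(C, C)
inv2 = -16 * sp.sqrt(3 / Lam) * contract(C, D)

assert sp.simplify(inv1 - sp.Rational(16, 3) * (Ce**2 - Cm**2) / Y2**3) == 0
assert sp.simplify(inv2 + sp.Rational(32, 3) * Ce * Cm / Y2**3) == 0
```

Both assertions hold identically in $Y$, $C_{\mathrm{el}}$, $C_{\mathrm{mag}}$ and $\Lambda$, in agreement with the divergence argument above when $|Y(p)|^2 = 0$.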
\begin{theorem} \label{thm_nec_cond} Consider a spacetime $({\cal M},g)$ solution to Einstein's vacuum field equations with $\Lambda>0$, which admits a smooth conformal extension through $\scri$ and which contains a KVF $X$. Denote by $h$ the Riemannian metric induced by $\widetilde g=\Theta^2 g$ on $\scri$, and by $Y$ the CKVF induced by $X$ on $\scri$. Assume that $({\cal M},g)$ is not locally isometric to the de Sitter spacetime. Then $|Y|^2 > 0$, and the rescaled MST $\widetilde{\mathcal{T}}^{(0)}_{\mu\nu\sigma}{}^{\rho}=\Theta^{-1} \mathcal{S}^{(0)}_{\mu\nu\sigma}{}^{\rho}$ corresponding to $X$ with $Q=Q_0$ defined by \eq{definition_Q0} vanishes on a connected component $\scri^{-}$ of $\scri$ if and only if the following relations hold: \begin{enumerate} \item[(i)] $ \widehat C_{ij} = \sqrt{\frac{\Lambda}{3}}C_{\mathrm{mag}} |Y|^{-5}(Y_iY_j -\frac{1}{3}|Y|^2 h_{ij})$ for some constant $C_{\mathrm{mag}}$, where $\widehat C_{ij}$ is the Cotton-York tensor of the Riemannian 3-manifold $(\scri^{-}, h)$, and \item[(ii)] $D_{ij} = \widetilde d_{titj}|_{\scri^-} =C_{\mathrm{el}} |Y|^{-5}(Y_iY_j -\frac{1}{3}|Y|^2 h_{ij})$ for some constant $C_{\mathrm{el}}$. 
\end{enumerate} \end{theorem} \section{The functions $c$ and $k$ and their restrictions to~$\scri$} \subsection{The functions $c$ and $k$ and their constancy} \label{section_constant} Following \cite{mars_senovilla}, we define four real-valued functions $b_1$, $b_2$, $c$ and $k$ by the system (we make the assumption $Q\mathcal{F}^2-4\Lambda\ne 0$; later on it will become clear that this holds automatically near a regular $\scri$) \begin{eqnarray} b_2-ib_1 &=& - \frac{ 36 Q (\mathcal{F}^2)^{5/2} }{(Q\mathcal{F}^2-4\Lambda)^3} \label{equation_b1b2} \;, \\ c &=& - X^2- \mathrm{Re}\Big( \frac{6\mathcal{F}^2(Q\mathcal{F}^2+2\Lambda)}{(Q\mathcal{F}^2-4\Lambda)^2} \Big) \;, \label{equation_c} \\ k &=& \Big|\frac{36\mathcal{F}^2}{(Q\mathcal{F}^2-4\Lambda)^2}\Big| \nabla_{\mu}Z\nabla^{\mu}Z - b_2Z +cZ^2 +\frac{\Lambda}{3} Z^4 \;, \label{equation_k} \end{eqnarray} where \begin{equation} Z = 6\,\mathrm{Re} \Big( \frac{\sqrt{\mathcal{F}^2}}{Q\mathcal{F}^2-4\Lambda}\Big) \;. \end{equation} \label{rem_compl_sr} We note that the expression (\ref{equation_k}) for $k$ as given in \cite{mars_senovilla} has two typos, which appear both in the statement of Theorem 1 and in that of Theorem 6. A remark is in order concerning the appearance of square roots of the complex function $\mathcal{F}^2$. In the setting of \cite{mars_senovilla}, the function $\mathcal{F}^2$ is shown to be nowhere vanishing, so we can prescribe the choice of square root at one point and extend it by continuity to the whole manifold. Since $\mathcal{F}^2$ does not vanish, no branch point of the root is ever met and $\sqrt{\mathcal{F}^2}$ is smooth everywhere. Moreover, the function $\mathcal{F}^2$ has strictly negative real part in a neighborhood of $\scri$ (see (\ref{used_relation_F2}) below). We can thus fix the square root $\sqrt{\mathcal{F}^2}$ in this neighborhood by choosing the positive branch near $\scri$, namely the branch that maps positive real numbers to positive real values.
We will use this prescription for any function which is non-zero in a neighborhood of infinity. The following result is proven in \textcolor{blue}{ \cite{mars_senovilla}. Unfortunately, the corresponding statements in Theorems 4 and 6 in that reference have a missing hypothesis. This mistake has been corrected in the arXiv version \cite{mars_senovilla_arx} of the paper.} \begin{theorem} \label{constancy} Let $({\cal M},g)$ be a $\Lambda$-vacuum spacetime which admits a KVF $X$ such that the MST vanishes for some function $Q$. Assume further that the functions $Q \mathcal{F}^2$ and $Q\mathcal{F}^2-4\Lambda$ are not identically zero \textcolor{blue}{and that $y := \mbox{Re} \left ( \frac{6 i \sqrt{\mathcal{F}^2}}{Q \mathcal{F}^2 - 4 \Lambda} \right ) $ has non-zero gradient somewhere}. Then: \begin{enumerate} \item[(i)] $\mathcal{F}^2$ and $Q\mathcal{F}^2-4\Lambda$ are nowhere vanishing, \item[(ii)] $Q$ is given by \eq{definition_Q0}, i.e.\ $Q=Q_0$, and \item[(iii)] $b_1$, $b_2$, $c$ and $k$ are constant. \end{enumerate} \end{theorem} \begin{remark} \label{rem_const} {\rm If $\Lambda>0$ and $({\cal M},g)$ admits a smooth $\scri$, has vanishing MST and is not locally isometric to the de Sitter spacetime, it follows from \eq{Taylor_Q}-\eq{used_relation_F2} below that $Q$, $\mathcal{F}^2$ and $Q\mathcal{F}^2-4\Lambda$ \textcolor{blue}{are non-zero near $\scri$. Furthermore \eq{Taylory} implies that $y$ has non-zero gradient in a neighbourhood of $\scri$. Thus, all the hypotheses of the theorem hold and we conclude that } $b_1$, $b_2$, $c$ and $k$ are constant whenever the MST vanishes in a spacetime $({\cal M},g)$ as above. } \end{remark} Combining Theorem~\ref{constancy} and Remark~\ref{rem_const} it follows that a $\Lambda>0$-vacuum spacetime admitting a KVF $X$ with vanishing associated MST for some $Q$ and for which $\mathcal{F}^2=0$ somewhere cannot admit a smooth $\scri$, unless the spacetime is locally isometric to de Sitter. 
Although a priori interesting, this result turns out to be empty since it has been proven in \cite{mars_senovilla_null} that all spacetimes with vanishing MST and null Killing form $\mathcal{F}$ (somewhere, and hence everywhere) have necessarily $\Lambda \leq 0$. The above functions \eq{equation_b1b2}-\eq{equation_k}, or rather their restrictions to $\scri$, turn out to be crucial for the classification of vacuum spacetimes with vanishing MST (and conformally flat $\scri$, cf.\ \cite{mpss}). Our next aim will therefore be to find explicit expressions for them in terms of the data at $\scri$ under the assumption that the MST vanishes for some choice of $Q$. We wish to find expressions at null infinity that make sense (and generally cease to be constant) for any $\Lambda$-vacuum spacetime with a smooth conformal compactification and a KVF. Employing the relations collected in Sections~\ref{constraints}, \ref{functionQ} and \ref{section_recaled_MS} we find that, under the assumption that the MST vanishes, \begin{eqnarray} (\Theta^{-5} Q_0) &=& 3\Lambda^{-1} |Y|^{-5}\big(C_{\mathrm{el}} - i C_{\mathrm{mag}}\big)+ O(\Theta) \;, \label{Taylor_Q} \\ Q_0\mathcal{F}^2-4\Lambda &=& -4\Lambda + O(\Theta^3) \label{expansion_QF_Lambda} \;, \\ \mathcal{F}^2 &=& -\frac{4}{3}\Lambda \Theta^{-2} \widetilde X^2 + 2\widetilde F^2 -\frac{1}{4}\widetilde F^2 + 2i {\widetilde F}^{\star}_{\alpha\beta}{\widetilde F}^{\alpha\beta} \nonumber \\ && + 2 \widetilde X^{\alpha}\widetilde \nabla_{\alpha}\widetilde F + 8 \widetilde X^{\alpha} \widetilde X^{\beta} \widetilde L_{\alpha\beta} +4i \Theta^{-1} \widetilde F^{\mu\nu} \widetilde H^{\star}_{\mu\nu} \\ &=& -\frac{4}{3}\Lambda \Theta^{-2} \widetilde X^2 + 4i \sqrt{\frac{\Lambda}{3}}\Theta^{-1}Y_kN^k + |N|^2 -\frac{4}{9}f^2 \nonumber \\ && + \frac{8}{3} Y^{i}\widehat \nabla_{i}f + 8 Y^iY^j\widehat L_{ij} + O(\Theta) \label{used_relation_F2} \;. 
\\ \textcolor{blue}{\frac{6i \sqrt{\mathcal{F}^2}}{Q_0 \mathcal{F}^2 - 4 \Lambda}} & \textcolor{blue}{=} & \textcolor{blue}{ \frac{1}{\Theta} \sqrt{\frac{3}{\Lambda}} \sqrt{ \widetilde X^2 } + O(1)}. \label{Taylory} \end{eqnarray} Here \begin{equation} N^k \,:=\, \mathrm{curl} \,Y^k \,=\,\widehat\mbox{$\eta$}^{ijk}\widehat\nabla_iY_j \label{curl_of_Y} \end{equation} denotes the curl of $Y$, and \begin{equation} f \,:=\, \widehat\nabla_iY^i \label{div_of_Y} \end{equation} its divergence. Let us determine the trace of \eq{equation_b1b2}-\eq{equation_k} in the unphysical, conformally rescaled spacetime on $\scri$ under the assumption that the MST vanishes for some choice of $Q$. With \eq{Taylor_Q} and \eq{used_relation_F2}, we observe that, on $\scri$, equation \eq{equation_b1b2} yields \begin{equation} ( b_2-ib_1)|_{\scri} \,=\, \Big(\frac{9}{16}\Lambda^{-3} (\Theta^{-5}Q_0) (\Theta^2\mathcal{F}^2)^{5/2}\Big)\Big|_{\scri} \,=\, \frac{2}{\Lambda} \sqrt{ \frac{3}{\Lambda}} \big( C_{\mathrm{mag}} + iC_{\mathrm{el}} \big) \;. \end{equation} From \eq{equation_c} and \eq{used_relation_F2} we conclude that \begin{eqnarray} \frac{\Lambda}{3} c\Big|_{\scri} &=& \frac{\Lambda}{3}\Big( -\Theta^{-2}\widetilde X^2 - \frac{3}{4}\Lambda^{-1}\mathrm{Re} ( \mathcal{F}^2 )\Big)\Big|_{\scri} \\ &=& - \frac{1}{4} |N|^2 + \frac{1}{9}f^2 - \frac{2}{3} Y^i\widehat \nabla_{i}f -2 Y^iY^j \widehat L_{ij} \;. \label{expression_c} \end{eqnarray} Note that this implies that \begin{eqnarray} \mathcal{F}^2 &=& -\frac{4}{3}\Lambda \Theta^{-2} \widetilde X^2 +4i \Theta^{-1} \widetilde F^{\mu\nu} \widetilde H^{\star}_{\mu\nu}-\frac{4}{3}\Lambda c \nonumber \\ && -8D_{ij}Y^iY^j \Theta - 4i \sqrt{\frac{3}{\Lambda}}N^k\widetilde \nabla_t\widetilde \nabla_t\widetilde X_k \Theta + O(\Theta^2) \label{used_relation_F2v2} \\ &=& -\frac{4}{3}\Lambda (\Theta^{-2} |\widetilde X|^2 +c) +4i \sqrt{\frac{\Lambda}{3}}\Theta^{-1}Y_kN^k + O(\Theta) \label{expansion_F2} \;. 
\end{eqnarray} Next, we compute the function $Z$ (here an overbar means ``complex conjugation''), \begin{eqnarray*} Z &=& \mathrm{Re} \Big( \frac{6\sqrt{\mathcal{F}^2}}{Q_0\mathcal{F}^2-4\Lambda}\Big) \\ &=& \frac{6\, \mathrm{Re} \Big( \sqrt{\mathcal{F}^2}(\ol {Q_0\mathcal{F}^2}-4\Lambda)\Big)}{(Q_0\mathcal{F}^2-4\Lambda)(\ol {Q_0\mathcal{F}^2}-4\Lambda)} \\ &=& \Big(\frac{3}{8}\Lambda^{-2} + O(\Theta^3)\Big)\mathrm{Re} \Big( \sqrt{\mathcal{F}^2}(\ol {Q_0\mathcal{F}^2}-4\Lambda)\Big) \\ &=&-\frac{3}{2}\Lambda^{-1} \Theta^{-1}\mathrm{Re} \big( \sqrt{\Theta^2\mathcal{F}^2}\big) + O(\Theta^2) \;. \end{eqnarray*} Equation \eq{expansion_F2} yields \begin{equation} \mathrm{Re} \big( \sqrt{\Theta^2\mathcal{F}^2}\big) \,=\, |Y|^{-1}Y_kN^k\Theta + O(\Theta^3) \;. \end{equation} Thus \begin{equation} Z \,=\, -\frac{3}{2}\Lambda^{-1} |Y|^{-1}Y_kN^k + O(\Theta^2) \;, \end{equation} and we deduce from \eq{equation_k} that \begin{eqnarray*} k|_{\scri} &=&\Big( \big| - \frac{9}{4\Lambda^2}\Theta^2\mathcal{F}^2 (\widetilde\nabla_{t}Z)^2 +\frac{9}{4\Lambda^2}\Theta^{2}\mathcal{F}^2 \widetilde\nabla_{i}Z\widetilde \nabla^i Z\big| -b_2Z +cZ^2+\frac{1}{3}\Lambda Z^4\Big)\Big|_{\scri} \\ &=&\Big(\frac{3}{\Lambda}|Y|^2\widehat\nabla_{i}Z\widehat \nabla^i Z -b_2Z +cZ^2+\frac{1}{3}\Lambda Z^4\Big)\Big|_{\scri} \;. 
\end{eqnarray*} From the conformal Killing equation for $Y$ we find that \begin{eqnarray*} \Lambda |Y| \widehat\nabla_i Z \big|_{\scri} &=& -\frac{1}{2}f|Y|^{-2}Y_iY_kN^k -\frac{3}{4}|Y|^{-2}Y_kN^k\widehat \mbox{$\eta$}_{ijl}Y^j N^l + \frac{3}{2} \widehat\nabla_i(Y_kN^k) \;, \end{eqnarray*} whence, using \eq{expression_c}, \begin{eqnarray*} \Lambda^2|Y|^2 \widehat\nabla_i Z \widehat\nabla^i Z\big|_{\scri} &=& \frac{1}{4}f^2|Y|^{-2}(Y_kN^k )^2 -\frac{3}{4}f|Y|^{-2} Y^i\widehat\nabla_i(Y_kN^k)^2 \\ && +\frac{9}{16}|Y|^{-2}(Y_kN^k)^2|N|^2 - \frac{9}{16}|Y|^{-4}(Y_kN^k)^4 \\ && -\frac{9}{8}|Y|^{-2}\widehat\mbox{$\eta$}_{ijl}Y^j N^l \widehat\nabla^i(Y_kN^k)^2 +\frac{9}{4} \widehat\nabla_i(Y_jN^j) \widehat\nabla^i(Y_kN^k) \\ &=& \frac{1}{2}f^2|Y|^{-2}(Y_kN^k )^2 -\frac{3}{4}f|Y|^{-2} Y^i\widehat\nabla_i(Y_kN^k)^2 \\ && - \frac{3}{2}|Y|^{-2}(Y_kN^k)^2 Y^i\widehat \nabla_{i}f - \frac{9}{2}|Y|^{-2}(Y_kN^k)^2 Y^iY^j \widehat L_{ij} \\ && -\frac{9}{8}|Y|^{-2}\widehat\mbox{$\eta$}_{ijl}Y^j N^l \widehat\nabla^i(Y_kN^k)^2 +\frac{9}{4} \widehat\nabla_i(Y_jN^j) \widehat\nabla^i(Y_kN^k) \\ && -\frac{1}{3}\Lambda^3 c Z^2 - \frac{1}{9}\Lambda^{4}Z^4 \;, \end{eqnarray*} and \begin{eqnarray} \label{constk} \Big(\frac{\Lambda}{3}\Big)^3 k \big|_{\scri} &=& \frac{1}{18}|Y|^{-2}(Y_kN^k )^2 \Big(f^2 - 3Y^i\widehat \nabla_{i}f - 9 Y^iY^j \widehat L_{ij} \Big) + \frac{1}{2} (\widehat C_{ij} Y^i Y^j) Y_kN^k \nonumber \\ && -\frac{1}{8}\widehat\nabla_i\log |Y|^2 \widehat\nabla^i(Y_kN^k)^2 +\frac{1}{4} \widehat\nabla_i(Y_jN^j) \widehat\nabla^i(Y_kN^k) \;, \label{simplified_k} \end{eqnarray} where we have used $b_2 = 6 \Lambda^{-2}\sqrt{\frac{\Lambda}{3}}C_{\mathrm{mag}}$ and have replaced $C_{\mathrm{mag}}$ in terms of the Cotton-York tensor $C_{\mathrm{mag}}= \frac{3}{2}\sqrt{\frac{3}{\Lambda}} |Y| \widehat C_{ij} Y^i Y^j$. Expression (\ref{constk}) provides a simplified formula for $k$ on $\scri$ in terms of $Y$.
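As a quick consistency check of \eq{expression_c} (our illustration, not from the paper): on flat $\mathbb{R}^3$ the Schouten tensor $\widehat L_{ij}$ and the Cotton-York tensor vanish, so the right-hand side of \eq{expression_c} reduces to $-\frac{1}{4}|N|^2 + \frac{1}{9}f^2 - \frac{2}{3}Y^i\widehat\nabla_i f$, and it can be evaluated for the general CKVF of the flat metric; it indeed comes out constant, in accordance with the constancy statements discussed here.

```python
# Illustrative check (ours, not from the paper): evaluate the flat-space
# reduction -|N|^2/4 + f^2/9 - (2/3) Y.grad(f) of (expression_c) for the
# general CKVF of flat R^3 and verify that it is point-independent.
import sympy as sp

x = sp.Matrix(sp.symbols('x1 x2 x3', real=True))
lam = sp.Symbol('lambda', real=True)                     # dilation
a = sp.Matrix(sp.symbols('a1 a2 a3', real=True))         # translations
w = sp.Matrix(sp.symbols('w1 w2 w3', real=True))         # rotations
b = sp.Matrix(sp.symbols('b1 b2 b3', real=True))         # special conformal

# General CKVF of flat R^3:
Y = a + lam * x + w.cross(x) + 2 * b.dot(x) * x - x.dot(x) * b

f = sum(sp.diff(Y[i], x[i]) for i in range(3))           # divergence of Y
N = sp.Matrix([sp.diff(Y[(i + 2) % 3], x[(i + 1) % 3])
               - sp.diff(Y[(i + 1) % 3], x[(i + 2) % 3])
               for i in range(3)])                       # curl of Y

c_hat = sp.expand(-N.dot(N) / 4 + f**2 / 9
                  - sp.Rational(2, 3) * sum(Y[i] * sp.diff(f, x[i]) for i in range(3)))

# Independent of the point, with constant value lambda^2 - 4 a.b - |w|^2:
assert all(sp.expand(sp.diff(c_hat, x[i])) == 0 for i in range(3))
assert sp.expand(c_hat - (lam**2 - 4 * a.dot(b) - w.dot(w))) == 0
```

The constant value $\lambda^2 - 4\,a\cdot b - |\omega|^2$ depends only on the CKVF parameters, as it must.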
It follows from Theorem~\ref{constancy} and Remark~\ref{rem_const} that the right-hand sides of \eq{expression_c} and \eq{simplified_k} are constant, whenever the MST vanishes for some function~$Q$. \subsection{The functions $\widehat c(Y)$ and $\widehat k(Y)$} In the previous section we have introduced the spacetime functions $c$ and $k$ and computed their restrictions on $\scri$ in terms of the induced metric $h$ and the CKVF $Y$, whenever the MST vanishes. Here we regard these restrictions as functions which are intrinsically defined on some Riemannian 3-manifold, whence we are led to the following \begin{definition} Let $(\Sigma,h)$ be a Riemannian 3-manifold which admits a CKVF~ $Y$. Then, guided by \eq{expression_c} and \eq{simplified_k}, we set \begin{eqnarray} \widehat c(Y) &:=& - \frac{1}{4} |N|^2 + \frac{1}{9}f^2 - \frac{2}{3} Y^i\widehat \nabla_{i}f -2 Y^iY^j \widehat L_{ij} \;, \label{defn_c(Y)} \\ \widehat k(Y) &:=& \frac{1}{18}|Y|^{-2}(Y_kN^k )^2 \Big(f^2 - 3Y^i\widehat \nabla_{i}f - 9 Y^iY^j \widehat L_{ij} \Big) + \frac{1}{2} (\widehat C_{ij} Y^i Y^j) Y_kN^k \nonumber \\ && -\frac{1}{8}\widehat \nabla_i\log |Y|^2 \widehat \nabla^i(Y_kN^k)^2 +\frac{1}{4} \widehat\nabla_i(Y_jN^j) \widehat\nabla^i(Y_kN^k) \;. \label{defn_k(Y)} \end{eqnarray} \end{definition} The spacetime functions $c$ and $k$, \eq{equation_c} and \eq{equation_k}, have been introduced in \cite{mars_senovilla} in the setting of a vanishing MST, where they arise naturally as integration constants. Concerning $\widehat c(Y)$ and $\widehat k(Y)$, in general one cannot expect them to be constant. However, let us assume that the Cotton-York tensor satisfies condition (\ref{condition_on_C}). 
Our main result, Theorem~\ref{first_main_thm2} (which is a reformulation of Theorem~\ref{first_main_thm}), now implies the following: Choosing an initial $D_{ij}$ according to (\ref{condition_on_D}), $(\Sigma,h)$ can be extended to a $\Lambda>0$ vacuum spacetime for which $\Sigma$ represents $\scri^-$ and to which $Y$ extends as a KVF such that the associated MST vanishes for some function $Q$. One then deduces from the results in \cite{mars_senovilla} that $c$ and $k$, and therefore also $\widehat c(Y)$ and $\widehat k(Y)$, are constant: \begin{lemma} Let $(\Sigma, h)$ be a Riemannian 3-manifold which admits a CKVF~$Y$ with $|Y|^2>0$ and such that $\widehat C_{ij} = C|Y|^{-5}(Y_iY_j)_{\mathrm{tf}}$ with $C$ constant. Then the functions $\widehat c(Y)$ and $\widehat k(Y)$ as given by \eq{defn_c(Y)} and \eq{defn_k(Y)} are constant. \end{lemma} In particular, in the case of $\widehat k(Y)$, it is far from obvious that the condition \eq{condition_on_C} implies that this function is constant, but the proof via the extension of $(\Sigma,h)$ to a vacuum spacetime provides an elegant tool to establish it. As already indicated above, the constants $\widehat c$ and $\widehat k$ play a decisive role in the classification of $\Lambda>0$-vacuum spacetimes which admit a conformally flat $\scri$ and a KVF w.r.t.\ which the associated MST vanishes \cite{mpss}. \subsection{Constancy of $\widehat c(Y)$} Let us focus attention on the function $\widehat c(Y)$. In Section~\ref{sec_alt_Q} we will introduce an alternative definition of the function $Q$ which permits the derivation of evolution equations. It turns out that the associated MST will in general not be regular at $\scri$, and that the constancy of $\widehat c(Y)$ is a necessary condition to ensure regularity. Let us therefore consider the question under which conditions the function $\widehat c(Y)$ is constant. The aim of this section is to prove the following Lemma.
\begin{lemma} \label{lem_constancy_c} Let $(\Sigma,h)$ be a 3-dimensional oriented Riemannian manifold which admits a CKVF $Y$, ${\mycal L}_Y h = \frac{2}{3} f h$, with $f$ and $N_i$ as defined in (\ref{curl_of_Y})-(\ref{div_of_Y}). Then the function $\widehat c(Y) $ introduced in (\ref{defn_c(Y)}) satisfies the following identity \begin{align*} \widehat\nabla_l \widehat c(Y) = -2 \widehat\mbox{$\eta$}^m_{\phantom{m}li} Y^i \widehat C_{mj} Y^j = 2\,\widehat{C}_{jli}Y^j Y^i . \end{align*} In particular, if, for some smooth function $H : \Sigma \to \mathbb{R}$, the Cotton-York tensor (\ref{cotton-york}) satisfies \begin{align} \widehat C_{ij} = H ( Y_i Y_j)_{\mathrm{tf}} \end{align} and $\Sigma$ is connected, then the proportionality function necessarily takes the form $H=C|Y|^{-5}$ where $C$ is a constant, and $\widehat c(Y)$ is constant over the manifold. \end{lemma} \begin{remark} {\rm The lemma implies in particular that $\widehat c(Y)$ is constant if and only if $Y^j$ is an eigenvector of the Cotton-York tensor.} \end{remark} \begin{proof} From the conformal Killing equation $\widehat\nabla_{(i} Y_{j)} = \frac{1}{3} f h_{ij}$ it follows \begin{align} \widehat\nabla_j \widehat \nabla_k f & = - 3({\mycal L}_{Y} \widehat L)_{jk} \label{HessPsi} \;, \\ \widehat\nabla_i \widehat\nabla_j Y_{l} &= Y_m\widehat R^{m}_{\phantom{m}ijl} +\frac{1}{3}\Big( h_{jl} \widehat\nabla_i f + h_{il} \widehat\nabla_j f - h_{ij} \widehat\nabla_l f \Big) \;.
\label{HessY} \end{align} Evaluating $|N|^2$ as \begin{eqnarray*} |N|^2 &=& \widehat\mbox{$\eta$}_{ijk} \widehat\mbox{$\eta$}^{ilm} \widehat\nabla^{j} Y^k\widehat \nabla_{l} Y_{m} \,=\, \left ( \delta^{l}_{j} \delta^m_{k} - \delta^l_{k} \delta^m_{j} \right ) \widehat\nabla^{j} Y^k \widehat\nabla_{l} Y_{m} \\ &=& \widehat\nabla^j Y^k\widehat \nabla_j Y_k -\widehat \nabla^j Y^k \widehat\nabla_k Y_j \;, \end{eqnarray*} we can write $\widehat c(Y) =-\frac{1}{4}\widehat \nabla_i Y_j ( \widehat\nabla^i Y^j - \widehat\nabla^j Y^i ) + \frac{1}{9} f^2 - \frac{2}{3} Y^i \widehat\nabla_i f -2 Y^i Y^j \widehat L_{ij}$. It is convenient to split $\widehat c(Y)$ into two terms \begin{align*} \widehat c_1 &:= -\frac{2}{3} Y^i \widehat\nabla_i f -2 Y^i Y^j \widehat L_{ij} + \frac{1}{9} f^2 \;, \\ \widehat c_2 &:= -\frac{1}{4}\widehat \nabla_i Y_j ( \widehat\nabla^i Y^j - \widehat\nabla^j Y^i ) \;, \end{align*} so that $\widehat c = \widehat c_1 +\widehat c_2$. We start with $\widehat \nabla_l \widehat c_1$, \begin{align} \widehat\nabla_{l} \widehat c_1 = & -\frac{2}{3}\widehat\nabla_l Y^i \widehat\nabla_i f - \frac{2}{3}Y^i \widehat\nabla_l \widehat\nabla_i f -4 (\widehat\nabla_l Y^i) Y^j \widehat L_{ij} -2 Y^i Y^j \widehat\nabla_l \widehat L_{ij} + \frac{2}{9}f \widehat\nabla_l f \nonumber \\ = & - \frac{2}{3}\widehat\nabla_l Y^i \widehat\nabla_i f +2 Y^i ({\mycal L}_{Y} \widehat L)_{li} -4 (\widehat\nabla_{l} Y^i) Y^j \widehat L_{ij} -2 \widehat\mbox{$\eta$}^m_{\phantom{m}li} Y^i\widehat C_{mj} Y^j \nonumber \\ &-2 Y^j [ ({\mycal L}_Y \widehat L)_{lj} -\widehat L_{lm} \widehat\nabla_j Y^m -\widehat L_{mj} \widehat\nabla_l Y^m ] + \frac{2}{9}f \widehat\nabla_l f \nonumber \\ = & -\frac{2}{3}\widehat\nabla_l Y^i \widehat\nabla_i f-4 Y^j \widehat L_{i[j} \widehat\nabla_{l]} Y^i -2 \widehat\mbox{$\eta$}^m_{\phantom{m}li} Y^i \widehat C_{mj} Y^j + \frac{2}{9}f \widehat\nabla_l f \;, \label{derc1} \end{align} where in the second equality we inserted (\ref{HessPsi}) and $Y^i \widehat\nabla_l\widehat L_{ij} = Y^i
\widehat\nabla_i\widehat L_{lj} + Y^i \widehat\mbox{$\eta$}^{m}_{\phantom{m}li} \widehat C_{mj} = ({\mycal L}_{Y} \widehat L)_{lj} - \widehat L_{lm} \widehat\nabla_j Y^m -\widehat L_{mj} \widehat\nabla_{l} Y^m + Y^i \widehat\mbox{$\eta$}^{m}_{\phantom{m}li} \widehat C_{mj}$ and in the third one obvious cancellations have been applied. Concerning $\widehat\nabla_l \widehat c_2$ we find, after a simple rearrangement of indices, \begin{align*} \widehat\nabla_l \widehat c_2 & = -\frac{1}{2} \widehat\nabla_{l} \widehat\nabla_{i} Y_j (\widehat \nabla^i Y^j - \widehat\nabla^j Y^i) = - Y_m \widehat R^m_{\phantom{m}lij} \widehat \nabla^i Y^j - \frac{1}{3}\widehat\nabla_i f ( \widehat \nabla^i Y_l - \widehat\nabla_l Y^i ) \;, \end{align*} where in the second equality we used (\ref{HessY}) and the antisymmetry of $(\widehat\nabla^i Y^j - \widehat\nabla^j Y^i)$. We now use the Riemann tensor decomposition in three dimensions, \begin{align*} \widehat R^{m}_{\phantom{m}lij} = \delta^m_i \widehat L_{lj} - \delta^m_j \widehat L_{li} +\widehat L^m_{\phantom{m}i} h_{lj} - \widehat L^m_{\phantom{m}j} h_{li} \;, \end{align*} to obtain \begin{align} \widehat\nabla_l\widehat c_2 = -( Y^m \widehat L_{mi} +\frac{1}{3}\widehat\nabla_i f) ( \widehat \nabla^i Y_l - \widehat\nabla_l Y^i ) - Y_i \widehat L_{lj} ( \widehat \nabla^i Y^j - \widehat \nabla^j Y^i ) \;. \label{derc2} \end{align} Combining (\ref{derc1}) and (\ref{derc2}) \begin{align*} \widehat\nabla_l \widehat c(Y) = & - Y^m \widehat L_{mi} ( \widehat\nabla^i Y_l + \widehat\nabla_l Y^i ) + Y_i\widehat L_{lj} ( \widehat\nabla^i Y^j + \widehat\nabla^j Y^i ) - \frac{1}{3} \widehat\nabla_i f ( \widehat\nabla^i Y_l + \widehat\nabla_l Y^i ) \\ & + \frac{2}{9} f \widehat\nabla_l f -2 \widehat\mbox{$\eta$}^m_{\phantom{m}li} Y^i \widehat C_{mj} Y^j \\ = & -2 \widehat\mbox{$\eta$}^m_{\phantom{m}li} Y^i \widehat C_{mj} Y^j \;, \end{align*} where in the second equality we used the conformal Killing equation. 
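To make the last step explicit: inserting the conformal Killing equation in the form $\widehat\nabla_i Y_j + \widehat\nabla_j Y_i = \frac{2}{3} f\, h_{ij}$ (which fixes $f$ as the divergence of $Y$), all terms except the one involving $\widehat C_{mj}$ cancel pairwise,

```latex
\begin{align*}
 &- Y^m \widehat L_{mi}\,\tfrac{2}{3} f \delta^i_l
 + Y_i \widehat L_{lj}\,\tfrac{2}{3} f h^{ij}
 - \tfrac{1}{3}\widehat\nabla_i f \,\tfrac{2}{3} f \delta^i_l
 + \tfrac{2}{9} f \widehat\nabla_l f \\
 &\qquad =\, \tfrac{2}{3} f \big( Y^j \widehat L_{lj} - Y^m \widehat L_{ml} \big)
 - \tfrac{2}{9} f \widehat\nabla_l f + \tfrac{2}{9} f \widehat\nabla_l f
 \,=\, 0 \;,
\end{align*}
```

leaving only the term $-2 \widehat\mbox{$\eta$}^m_{\phantom{m}li} Y^i \widehat C_{mj} Y^j$.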
Now, whenever \eq{condition_on_C} holds we have $\widehat C_{mj} Y^j = \frac{2}{3} H |Y|^2 Y_m$ and $\widehat\nabla_l \widehat c(Y) =0$ so that $\widehat c(Y)$ is constant over the (connected) manifold $\Sigma$. The fact that $H$ is necessarily of the form $H = C |Y|^{-5}$ was already shown in the proof of Proposition \ref{rescaledMST_scri}. \hfill $\Box$ \medskip \end{proof} \begin{remark} {\rm A similar Lemma holds for three-dimensional manifolds of arbitrary signature. The term $|N|^2$ in $\widehat c(Y)$ needs to be replaced by $\epsilon |N|^2$ where $\epsilon$ is an appropriate sign depending on the signature. } \end{remark} Another problem of interest is to find necessary and sufficient conditions which ensure the constancy of $\widehat k(Y)$. Since this expression is of higher order in $Y$ than $\widehat c(Y)$, this is expected to be somewhat more involved. \section{Evolution of the MST} \subsection{The Ernst potential on $\scri$} In this section we make no assumption concerning the MST, so all the results hold generally for any $\Lambda$-vacuum spacetime admitting a KVF $X$ and a smooth conformal compactification. 
Using the results of Section~\ref{Mars-Simon_conf} the so-called Ernst one-form of $X$, $\sigma_{\mu} := 2X^{\alpha} \mathcal{F}_{\alpha\mu}$, has the following asymptotic expansion \begin{eqnarray} \sigma_{\mu} &=& 2X^{\alpha} \mathcal{F}_{\alpha\mu} \\ &=& 2X^{\alpha}\Big( \Theta^{-3}\widetilde{\mathcal{H}}_{\alpha\mu} + \Theta^{-2}\widetilde{ \mathcal{F}}_{\alpha\mu}\Big) \\ &=& 2 \Theta^{-3}\widetilde X^{\alpha}\Big( \widetilde H_{\alpha\mu} +\frac{i}{2} \widetilde\mbox{$\eta$}_{\alpha\mu}{}^{\nu\sigma}\widetilde H_{\nu\sigma} + \Theta (\widetilde F_{\alpha\mu} + \frac{i}{2} \widetilde\mbox{$\eta$}_{\alpha\mu}{}^{\nu\sigma} \widetilde F_{\nu\sigma} )\Big) \\ &=& 2 \Theta^{-3}\widetilde X^{\alpha}\Big( 2\widetilde X_{[\alpha}\widetilde\nabla_{\mu]}\Theta + \Theta ((\widetilde \nabla_{\alpha}\widetilde X_{\mu})_{\mathrm{tf}} + \frac{i}{2} \widetilde\mbox{$\eta$}_{\alpha\mu}{}^{\nu\sigma} \widetilde\nabla_{\nu}\widetilde X_{\sigma} )\Big) \;. \end{eqnarray} It is known (see e.g.\ \cite{KSMH}) that this covector field has an (``Ernst-'') potential $\sigma_{\mu}=\partial_{\mu}\sigma$, at least locally. 
Taking the following useful relations into account, \begin{eqnarray*} Y^i \widehat \mbox{$\eta$}_{i }{}^{jk}\widetilde\nabla_t\widetilde\nabla_t\widetilde \nabla_{j}\widetilde X_{k} \big|_{\scri} &=&2\widehat C_{ij}Y^iY^j \;, \\ \widetilde\nabla_t\Big(-|\widetilde X|^2 \Theta^{-2} +\frac{3}{\Lambda} i \widetilde H^{\alpha\beta} \widetilde F^{\star}_{\alpha\beta} \Theta^{-1}\Big) \Big|_{\scri} &=& 2\sqrt{\frac{\Lambda}{3}} |Y|^2\Theta^{-3} - i Y_i N^i \Theta^{-2} \\ &&\hspace{-5em} - \frac{1}{2}\sqrt{\frac{3}{\Lambda}} \widehat R |Y|^2 \Theta^{-1} +\frac{2}{3}\sqrt{\frac{3}{\Lambda}} D_{kl} Y^{k} Y^l \\ &&\hspace{-5em} +\frac{i}{2} \frac{3}{\Lambda} \Big( 2\widehat C_{kl}Y^kY^l+ N^k\ol{\widetilde\nabla_t\widetilde\nabla_t\widetilde X_{k}} \Big) + O(\Theta) \;, \end{eqnarray*} a somewhat lengthy computation making extensive use of the equations \eq{constr2}-\eq{constr_last} and the Killing relations \eq{rel_KVF}-\eq{Killing_rel_last} reveals that (as before an overbar means ``restriction to $\scri$'') \begin{eqnarray} \sigma_{t} &=& 2 \Theta^{-2}\widetilde X^{t}\widetilde \nabla_{t}\widetilde X_{t}+\frac{1}{2} \Theta^{-2}\widetilde X^{t}\widetilde\nabla_{\beta}\widetilde X^{\beta} + 2 \Theta^{-3}\widetilde X^{i}\widetilde X_{i}\widetilde\nabla_{t}\Theta \nonumber \\ && + 2 \Theta^{-2}\widetilde X^{i} \widetilde \nabla_{i}\widetilde X_{t} +i \Theta^{-2}\widetilde X^{i} \widetilde\mbox{$\eta$}_{i t}{}^{jk} \widetilde\nabla_{j}\widetilde X_{k} + O(\Theta) \\ &=& 2\sqrt{\frac{\Lambda}{3}} |Y|^2\Theta^{-3} - i Y_i N^i \Theta^{-2} - \frac{1}{2}\sqrt{\frac{3}{\Lambda}} \widehat R |Y|^2 \Theta^{-1} + \frac{2}{3} \sqrt{\frac{3}{\Lambda}} D_{kl}Y^{k} Y^l \nonumber \\ && -\frac{i}{2}\frac{3}{\Lambda}\Big( 2\widehat C_{ij}Y^iY^j+ N^i\ol{\widetilde \nabla_t\widetilde \nabla_t\widetilde X_{i} } \Big) + O(\Theta) \label{expansion_sigma_t} \\ &=& \widetilde\nabla_t\Big[-\widetilde X^2 \Theta^{-2} +\frac{3}{\Lambda} i \widetilde H^{\alpha\beta} \widetilde F^{\star}_{\alpha\beta} 
\Theta^{-1}+ q(x^i) \nonumber \\ &&-i\frac{3}{\Lambda}\sqrt{\frac{3}{\Lambda}}\Big( 2 \widehat C_{kl}Y^kY^l +N^k\ol{\widetilde \nabla_t\widetilde \nabla_t\widetilde X_k } \Big)\Theta + O(\Theta^2) \Big] \; \end{eqnarray} for some function $q(x^i)$. To determine this function it is necessary to compute $\sigma_i$ up to and including constant order. Using the relation \begin{eqnarray} \widehat\nabla_i(Y_kN^k) &=& \frac{1}{3}f N_i +2\widehat \mbox{$\eta$}_{ij}{}^{k} Y^jY^l \widehat L_{kl} + \frac{2}{3}\widehat\mbox{$\eta$}_{ij}{}^{k} Y^j\widehat \nabla_kf \;, \label{deriv_YN} \end{eqnarray} another lengthy calculation gives via \eq{constr2}-\eq{constr_last} and \eq{rel_KVF}-\eq{Killing_rel_last} \begin{eqnarray} \sigma_{i} &=& - \widehat \nabla_{i}|Y|^2 \Theta^{-2} + i\sqrt{\frac{3}{\Lambda}} \widehat\nabla_i(Y_kN^k) \Theta^{-1} + \frac{1}{\Lambda} \Big[ - Y_i \Big(\Delta_h -\frac{\widehat R}{3}\Big) f \nonumber \\ && - Y_{i}\ol{ \widetilde \nabla_t\widetilde\nabla_t\widetilde\nabla_t\widetilde X^t} +3 i\widehat \mbox{$\eta$}_{ij }{}^{k} Y^{j} \ol{\widetilde\nabla_t\widetilde\nabla_t\widetilde\nabla_{t}\widetilde X_{k} } + 3\ol{\widetilde\nabla_t\widetilde\nabla_t\widetilde X^j} \widehat \nabla_{j}Y_{i} \nonumber \\ && -\frac{1}{2} |Y|^2\widehat \nabla_{ i}\widehat R + \frac{1}{2} Y_{i}Y^{j} \widehat \nabla_{ j}\widehat R + 3 Y^{j}\ol{\widetilde\nabla_t\widetilde\nabla_t\widetilde \nabla_{j}\widetilde X_{i}} \Big] +O(\Theta) \\ &=& - \widehat \nabla_{i}|Y|^2 \Theta^{-2} + i\sqrt{\frac{3}{\Lambda}} \widehat\nabla_i(Y_kN^k) \Theta^{-1} \nonumber \\ && + \frac{1}{\Lambda} \Big[ \frac{2}{3}Y_i\underbrace{ \Big(\Delta_h f+ \frac{1}{2} \widehat R f+ \frac{3}{4}Y^{j} \widehat \nabla_{ j}\widehat R \Big)}_{=0 \text{ by \eq{HessPsi}}} -6 i\sqrt{\frac{\Lambda}{3}}\widehat \mbox{$\eta$}_{ij }{}^{k} D_{kl} Y^{j}Y^l \nonumber \\ && - 3\ol{\widetilde \nabla_t\widetilde\nabla_t\widetilde X^j }\widehat \nabla_{i}Y_{j} + 2 f\ol{\widetilde\nabla_t\widetilde\nabla_t\widetilde X_i} 
-\frac{1}{2} |Y|^2\widehat \nabla_{ i}\widehat R \nonumber \\ && - 3 Y^{j}\ol{\widetilde\nabla_t\widetilde\nabla_t\widetilde \nabla_{i}\widetilde X_{j}} \Big] +O(\Theta) \label{expansion_sigma_i} \\ &=& \widetilde \nabla_{i}\Big(-\widetilde X^2 \Theta^{-2} +\frac{3}{\Lambda} i \widetilde H^{\alpha\beta} \widetilde F^{\star}_{\alpha\beta} \Theta^{-1} -a +O(\Theta)\Big) \nonumber \\ &&-2 i\sqrt{\frac{3}{\Lambda}}\widehat \mbox{$\eta$}_{ij }{}^{k} D_{kl} Y^{j}Y^l \label{sigma_i_prelim} \end{eqnarray} for some complex constant $a$ (the ``$\sigma$-constant'' introduced in the Introduction). As one should expect, the last term in \eq{sigma_i_prelim} has, at least locally, a potential, supposing that $Y$ is a CKVF and $D_{ij}$ a TT tensor which together satisfy the KID equations \eq{reduced_KID}: Indeed, setting \begin{equation} P_i :=-2 \sqrt{\frac{3}{\Lambda}} \widehat \mbox{$\eta$}_{ij }{}^{k} D_{kl} Y^{j}Y^l \;, \end{equation} we find that \begin{eqnarray} \widehat\mbox{$\eta$}_i{}^{jk}\widehat\nabla_{j} P_{k} &=& 2 \sqrt{\frac{3}{\Lambda}} Y^j\Big( {\mycal L}_YD_{ij} + \frac{1}{3} fD_{ij} \Big) \,=\, 0 \\ \Longrightarrow \quad \widehat\nabla_{[i} P_{j]} &=& 0 \;. \end{eqnarray} On the simply connected components of the initial 3-manifold this implies \begin{equation} P_i =\widehat\nabla_i p\quad \text{for some real-valued function} \quad p=p(x^i) \;. \end{equation} (The fact that $p$ is only determined up to some constant is reflected in the $\sigma$-constant $a$ introduced above.) Thus, \begin{equation} \sigma_{i} = \widetilde \nabla_{i}\Big(-\widetilde X^2 \Theta^{-2} +\frac{3}{\Lambda} i \widetilde H^{\alpha\beta} \widetilde F^{\star}_{\alpha\beta} \Theta^{-1} + ip(x^j) -a +O(\Theta)\Big) \;. 
\label{sigma_i} \end{equation} Altogether, we conclude that $q(x^i) = i p(x^i) -a$ for some not yet specified $a\in\mathbb{C}$, and that \begin{eqnarray} \sigma &=& -\widetilde X^2 \Theta^{-2} +\frac{3}{\Lambda} i \widetilde H^{\alpha\beta} \widetilde F^{\star}_{\alpha\beta} \Theta^{-1} + ip(x^j) -a \nonumber \\ && -i\frac{3}{\Lambda}\sqrt{\frac{3}{\Lambda}}\Big( 2 \widehat C_{kl}Y^kY^l +N^k\ol{\widetilde \nabla_t\widetilde \nabla_t\widetilde X_k} \Big)\Theta +O(\Theta^2) \\ &=& -|Y|^2 \Theta^{-2} + i \sqrt{\frac{3}{\Lambda}} Y_iN^i \Theta^{-1} +\frac{1}{3\Lambda} f^2 - \frac{3}{\Lambda}Y^{i}\ol{\widetilde\nabla_t\widetilde \nabla_t \widetilde X_{i}} \nonumber \\ && + ip(x^j) -a +\frac{3}{\Lambda}\Big( \frac{2}{3} D_{kl}Y^kY^l - i \sqrt{\frac{3}{\Lambda}}\widehat C_{kl}Y^kY^l \nonumber \\ &&- \frac{i}{4} \sqrt{\frac{3}{\Lambda}}N^k(2\ol{\widetilde \nabla_t\widetilde \nabla_t\widetilde X_k } + Y_k\widehat R) \Big)\Theta +O(\Theta^2) \;. \label{expansion_sigma} \end{eqnarray} \begin{proposition} Consider a $\Lambda>0$-vacuum spacetime which admits a KVF and a smooth $\scri$. Then the Ernst potential $\sigma$ can be computed explicitly near $\scri$, where it admits the expansion \eq{expansion_sigma}. \end{proposition} \subsection{Alternative definition of the function $Q$} \label{sec_alt_Q} In order to derive evolution equations it is convenient (cf.\ \cite{ik} for the $\Lambda=0$-case) to define, {\em for each Ernst potential $\sigma$}, a new function $Q=Q_{\mathrm{ev}}$ by the following set of equations, \begin{align} Q_{\mathrm{ev}} \, & :=\, \frac{3J}{R} - \frac{\Lambda}{R^2} \;, \label{ev_dfn_Q} \\ R \, & :=\, - \frac{1}{2}i\sqrt{\mathcal{F}^2} \label{defR} \;, \\ J \, & :=\, \frac{R + \sqrt{R^2 - \Lambda\sigma}}{\sigma} \;, \label{defJ} \end{align} where all square roots are chosen with the same prescription as explained above, cf.\ p.~\pageref{rem_compl_sr}. Alternatively, we could have defined $R = + (i/2)\sqrt{\mathcal{F}^2}$. 
Then the expression for $J$ would have changed accordingly. The choice (\ref{defR}) is preferable because then the real part of $R$ approaches minus infinity at $\scri$, in agreement with the usual behavior of Boyer-Lindquist type coordinates near infinity in Kerr-de Sitter and related metrics \cite{mars_senovilla}. Note that the definition of $J$ above implies the identity \begin{eqnarray} \sigma J^2 -2J R +\Lambda \,=\,0 \;, \label{quadratic} \end{eqnarray} which will be useful later. The MST associated with the choice $Q=Q_{\mathrm{ev}}$ will be denoted by $\mathcal{S}^{\mathrm{(ev)}}_{\mu\nu\sigma\rho}$. It follows from \eq{used_relation_F2v2} that \begin{eqnarray} R^2& =& -\frac{1}{4}\mathcal{F}^2 \\ &=& \frac{\Lambda}{3} \Theta^{-2} \widetilde X^2 - i \Theta^{-1}\widetilde H^{\alpha\beta} \widetilde F^{\star}_{\alpha\beta}+ \frac{\Lambda}{3} c(x^j) \nonumber \\ && +\Big(2 D_{ij}Y^iY^j + i \sqrt{\frac{3}{\Lambda}}N^k\ol{\widetilde \nabla_t\widetilde \nabla_t\widetilde X_k} \Big)\Theta + O(\Theta^2) \\ &=& \frac{\Lambda}{3} \Theta^{-2}|Y|^2 - i \sqrt{\frac{\Lambda}{3}}Y_{i}N^i\Theta^{-1} -\frac{1}{9} f^2 + Y^{i} \ol{\widetilde\nabla_t\widetilde \nabla_t \widetilde X_{i} } + \frac{\Lambda}{3} c(x^j) \nonumber \\ && +\Big(\frac{4}{3} D_{kl}Y^kY^l - i \sqrt{\frac{3}{\Lambda}}\widehat C_{kl}Y^kY^l + \frac{i}{4} \sqrt{\frac{3}{\Lambda}}N^k(2\ol{\widetilde \nabla_t\widetilde \nabla_t\widetilde X_k} + Y_k\widehat R ) \Big)\Theta \nonumber \\ && + O(\Theta^2) \;. \label{expansion_R} \end{eqnarray} One remark is in order: In this section we do \emph{not} assume that the MST vanishes, so there is no reason why the real function $c$, which has been defined on $\scri$ in \eq{expression_c}, should be constant. 
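For completeness we note how the identity \eq{quadratic} follows from the definition \eq{defJ}: multiplying \eq{defJ} by $\sigma$, isolating the square root and squaring gives

```latex
\begin{align*}
\sigma J - R \,=\, \sqrt{R^2 - \Lambda\sigma}
\quad\Longrightarrow\quad
\sigma^2 J^2 - 2\sigma J R + R^2 \,=\, R^2 - \Lambda\sigma
\quad\Longrightarrow\quad
\sigma \big( \sigma J^2 - 2 J R + \Lambda \big) \,=\, 0 \;,
\end{align*}
```

which yields \eq{quadratic} wherever $\sigma\neq 0$; by \eq{expansion_sigma} this is the case near $\scri$ whenever $|Y|^2>0$.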
From \eq{expansion_sigma} and \eq{expansion_R} we observe that \begin{eqnarray} \Xi\,:=\, \sigma +\frac{3}{\Lambda}R^2 \,=\, c(x^j)+ ip(x^j) -a+\frac{6}{\Lambda}\widetilde{\mathcal{D}}_{tktl}|_{\scri}Y^kY^l \Theta + O(\Theta^2) \;, \end{eqnarray} whence \begin{eqnarray} Q_{\mathrm{ev}} &=& \frac{3R^2-\Lambda\sigma + 3 R\sqrt{R^2 - \Lambda\sigma} }{\sigma R^2} \label{expr_Q1} \\ &=&3 \frac{2R^2 - \frac{\Lambda}{3}\Xi - 2R^2\sqrt{1 - \frac{\Lambda}{4}\frac{\Xi}{R^2} } }{R^2 (\Xi- \frac{3}{\Lambda}R^2 ) } \label{expr_Q2} \\ &=& \frac{\Lambda^2}{12}\frac{\Xi}{R^4} +O\Big(\frac{\Xi^2}{R^6}\Big) \;. \label{expr_Q3} \end{eqnarray} \begin{remark} {\rm The expressions \eq{expr_Q1} and \eq{expr_Q2} for $Q_{\mathrm{ev}}$ rely on the choice of $R=-\sqrt{R^2}$. The final expression (\ref{expr_Q3}) is however independent of this choice, as it must be. This final expression ensures that, in an appropriate setting, $Q_{\mathrm{ev}}$ coincides with $Q_0$, as will be shown in Theorem~\ref{first_main_thm}. It should also be emphasized that this expression does not admit a limit $\Lambda \searrow 0$. This is because, when $\Lambda=0$, the function $\mathcal{F}^2$ approaches zero at infinity and the definition of square root needs to be worked out differently. } \end{remark} We have \begin{eqnarray} Q_{\mathrm{ev}} =\begin{cases} \frac{3}{4}\big(c(x^j)+ ip(x^j) -a\big)|Y|^{-4}\Theta^4 +O(\Theta^5) & \text{if $a \not\equiv c(x^j)+ ip(x^j)$} \;, \\ \frac{9}{2}\Lambda^{-1}|Y|^{-4}\widetilde{\mathcal{D}}_{tktl}Y^kY^l \Theta^5 +O(\Theta^6) & \text{if $a\equiv c(x^j)+ ip(x^j)$} \;. \end{cases} \label{final_Q_ev} \end{eqnarray} The rescaled MST with $Q = Q_{\mathrm{ev}}$ will be regular on $\scri$ if and only if $Q_{\mathrm{ev}} =O(\Theta^5)$, i.e.\ if and only if both functions $c$ and $p$ are constant and the $\sigma$-constant $a$ has been chosen such that \begin{equation} \mathrm{Re}(a) \,=\, c \quad \text{and} \quad \mathrm{Im}(a) \,=\, p \;. 
\label{afix} \end{equation} We remark that with this choice of $a$ the function $Q_{\mathrm{ev}} $ is completely determined. Note that the potential $p$ will be constant if and only if the covector field $P_i$ vanishes. The constancy of $c$ has been analyzed in Lemma~\ref{lem_constancy_c}. Comparison with \eq{leading_order_Q} then leads to the following result, a shortened version of which has been stated as Theorem~\ref{short} in the Introduction: \begin{theorem} \label{prop_Qs} Consider a $\Lambda>0$-vacuum spacetime% \footnote{Since $P_i$ needs to vanish in this setting, $P_i$ is exact and we need not assume that $\scri^-$ is simply connected in order to get a globally defined Ernst potential.} which admits a smooth $\scri^-$ and a KVF $X$. Denote by $Y$ the CKVF induced, in the conformally rescaled spacetime, by $X$ on $\scri^-$. If and only if \begin{enumerate} \item[(i)] $\widehat \mbox{$\eta$}_{ij }{}^{k} \widehat C_{kl} Y^{j}Y^l=0$ (so that the function $c=\frac{3}{\Lambda}\widehat c(Y)$ is constant on $\scri^-$), and \item[(ii)] $ \widehat \mbox{$\eta$}_{ij }{}^{k} D_{kl} Y^{j}Y^l=0$ (so that $P_i=0$ whence its potential $p$ is constant), \end{enumerate} there exists a unique $\sigma$-constant $a$, given by $a=c+ip$, which leads via $Q_{\mathrm{ev}}$ to a rescaled MST $\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\mu\nu\sigma\rho}$ which is regular at $\scri^-$. In that case the leading order terms of $Q_{\mathrm{ev}}=O(\Theta^5)$ and $Q_0=O(\Theta^5)$ coincide, \begin{equation*} \lim_{\Theta \rightarrow 0} (\Theta^{-5}Q_{\mathrm{ev}})= \lim_{\Theta \rightarrow 0} (\Theta^{-5}Q_0) \;. \end{equation*} In particular, \begin{equation*} \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\mu\nu\sigma}{}^{\rho}|_{\scri^-} \,=\, \widetilde{\mathcal{T}}^{(0)}_{\mu\nu\sigma}{}^{\rho}|_{\scri^-} \;. 
\end{equation*} \end{theorem} \begin{remark} {\rm For initial data of the form \eq{condition_on_D}-\eq{condition_on_C}, which are necessary for the MST to vanish for some choice of $Q$, the conditions (i) and (ii) are satisfied. } \end{remark} It is worth emphasizing the roles of $\widehat C_{ij}$ and $D_{ij}$, which enter Theorem~\ref{thm_nec_cond} as well as Theorem~\ref{prop_Qs} in a completely symmetric manner. \subsection{(Asymptotically) KdS-like spacetimes} \label{subsec:AKdS} In Theorem \ref{prop_Qs} we have obtained necessary and sufficient conditions for the rescaled MST $\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\mu\nu\sigma\rho}$ to be regular at $\scri^-$. As we shall see in Section~\ref{subsec:FOSH}, this tensor satisfies a homogeneous symmetric hyperbolic Fuchsian system with data prescribed at $\scri^-$; in particular, zero data at $\scri^-$ propagate to the identically vanishing solution. The resulting spacetime then has vanishing MST and hence carries either a Kerr-de Sitter metric or one of the related metrics classified in \cite{mars_senovilla}. We call such spacetimes \emph{KdS-like}: \begin{definition} \label{KdS_like} Let $({\cal M},g)$ be a $\Lambda>0$-vacuum spacetime admitting smooth conformal compactification and corresponding null infinity $\scri$. $({\cal M},g)$ is called ``Kerr-de Sitter-like'' at a connected component $\scri^{-}$ of $\scri$ if it admits a KVF $X$ which induces a CKVF $Y$ on $\scri^-$, such that the rescaled MST $\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\mu\nu\sigma\rho}$ vanishes at $\scri^-$. \end{definition} Note that the Kerr-NUT-de Sitter spacetime also belongs to the class of KdS-like spacetimes. In \cite{mpss} we analyze in detail KdS-like spacetimes which admit a conformally flat $\scri$. The case where the tensor $\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\mu\nu\sigma\rho}$ is merely assumed to be finite at $\scri^-$ obviously includes the zero case (i.e. 
Kerr-de Sitter and related metrics) and at the same time excludes many other $\Lambda$-vacuum spacetimes with a smooth $\scri^-$. It makes sense to call such spacetimes {\it asymptotically Kerr-de Sitter-like}. We put forward the following definition: \begin{definition} \label{asympt_KdS_like} Let $({\cal M},g)$ be a $\Lambda>0$-vacuum spacetime admitting smooth conformal compactification and corresponding null infinity $\scri$. $({\cal M},g)$ is called ``asymptotically Kerr-de Sitter-like'' at a connected component $\scri^{-}$ of $\scri$ if it admits a KVF $X$ which induces a CKVF $Y$ on $\scri^-$, which satisfies $|Y|^2>0$, such that the conditions (i) and (ii) in Theorem~\ref{prop_Qs} are satisfied, or, equivalently, such that the rescaled MST $\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\mu\nu\sigma\rho}$ is regular at $\scri^-$. \end{definition} \begin{remark} {\rm As will be shown later (cf.\ Corollary~\ref{cor_charact_KdS}), KdS-like space-times have a vanishing MST, whence, as shown in Section~\ref{section_recaled_MS}, the condition $|Y|^2>0$ follows automatically. In the asymptotically KdS-like case, though, the conditions (i) and (ii) in Theorem~\ref{prop_Qs} might be compatible with zeros of $Y$. } \end{remark} An interesting open problem is to classify ``asymptotically Kerr-de Sitter-like'' spacetimes. \subsection{Derivation of evolution equations for the (rescaled) MST} \label{sect_deriv_ev_MST} Based on the corresponding derivation for $\Lambda=0$ in \cite{ik}, we will show that the MST \begin{eqnarray} \mathcal{S}^{(\mathrm{ev})}_{\mu\nu\sigma\rho} &=& \mathcal{C}_{\mu\nu\sigma\rho} + Q_{\mathrm{ev}} \mathcal{U}_{\mu\nu\sigma\rho} \;, \end{eqnarray} with $ Q_{\mathrm{ev}}$ as defined in \eq{ev_dfn_Q}-\eq{defJ}, satisfies a symmetric hyperbolic system of evolution equations as well as a system of wave equations. 
\subsubsection{An analog to the Bianchi equation} First, we derive an analog to the Bianchi equation $\nabla_{\rho}C_{\mu\nu\sigma}{}^{\rho}=0$ for the MST $\mathcal{S}^{(\mathrm{ev})}_{\mu\nu\sigma}{}^{\rho}$. For this we set \begin{eqnarray} \mathcal{W}_{\alpha\beta} &:=& R^{-1}\mathcal{F}_{\alpha\beta} \;, \end{eqnarray} so that \begin{eqnarray} Q_{\mathrm{ev}} \mathcal{U}_{\alpha\beta\mu\nu} &=& \Big( \frac{3 J}{R} - \frac{\Lambda}{R^2}\Big) \Big(- \mathcal{F}_{\alpha\beta} \mathcal{F}_{\mu\nu}+ \frac{1}{3}\mathcal{F}^2\mathcal{I}_{\alpha\beta\mu\nu} \Big) \\ &=& ( \Lambda -3 J R ) \Big( \mathcal{W}_{\alpha\beta} \mathcal{W}_{\mu\nu} + \frac{4}{3}\mathcal{I}_{\alpha\beta\mu\nu} \Big) \;. \end{eqnarray} Differentiation yields \begin{eqnarray} \nabla_{\rho} (Q_{\mathrm{ev}} \mathcal{U}_{\alpha\beta\mu\nu}) &=& -3 \nabla_{\rho}(J R) \Big( \mathcal{W}_{\alpha\beta} \mathcal{W}_{\mu\nu} + \frac{4}{3}\mathcal{I}_{\alpha\beta\mu\nu} \Big) \nonumber \\ && + ( \Lambda -3 J R )( \mathcal{W}_{\mu\nu} \nabla_{\rho}\mathcal{W}_{\alpha\beta} + \mathcal{W}_{\alpha\beta} \nabla_{\rho} \mathcal{W}_{\mu\nu} ) \;. \end{eqnarray} First of all let us calculate the covariant derivative of $\mathcal{W}_{\mu\nu}$. 
From \begin{eqnarray} \nabla_{\rho}\mathcal{F}_{\mu\nu} &=& X^{\sigma}\Big(\mathcal{C}_{\mu\nu\sigma\rho} + \frac{4}{3}\Lambda \mathcal{I}_{\mu\nu\sigma\rho}\Big) \\ &=& X^{\sigma}\Big(\mathcal{S}_{\mu\nu\sigma\rho} - Q_{\mathrm{ev}} \mathcal{U}_{\mu\nu\sigma\rho} + \frac{4}{3}\Lambda \mathcal{I}_{\mu\nu\sigma\rho}\Big) \\ &=& \frac{ 3 J R - \Lambda}{2R^2} \sigma_{\rho}\mathcal{F}_{\mu\nu} +4 J RX^{\sigma} \mathcal{I}_{\mu\nu\sigma\rho} + X^{\sigma}\mathcal{S}_{\mu\nu\sigma\rho} \;, \\ \nabla_{\rho}\mathcal{F}^2 &=& 2\mathcal{F}^{\mu\nu}\nabla_{\rho}\mathcal{F}_{\mu\nu} \\ &=& -4 ( 2 J R - \Lambda) \sigma_{\rho} + 2X^{\sigma} \mathcal{F}^{\mu\nu}\mathcal{S}_{\mu\nu\sigma\rho} \\ &=& \frac{1}{3} ( 2 Q_{\mathrm{ev}}\mathcal{F}^2 +4 \Lambda) \sigma_{\rho} + 2X^{\sigma} \mathcal{F}^{\mu\nu}\mathcal{S}_{\mu\nu\sigma\rho} \;, \\ \nabla_{\rho} R &=& -\frac{1}{2} \nabla_{\rho}\sqrt{-\mathcal{F}^2}\,=\, -\frac{1}{8}R^{-1} \nabla_{\rho}\mathcal{F}^2 \\ &=&\frac{1}{6} \Big( 2 Q_{\mathrm{ev}}R - \frac{\Lambda}{R}\Big) \sigma_{\rho} -\frac{1}{4}X^{\sigma} \mathcal{W}^{\mu\nu}\mathcal{S}_{\mu\nu\sigma\rho} \\ &=& \Big( J-\frac{\Lambda}{2R} \Big) \sigma_{\rho} -\frac{1}{4}X^{\sigma} \mathcal{W}^{\mu\nu}\mathcal{S}_{\mu\nu\sigma\rho} \;, \end{eqnarray} we deduce that \begin{eqnarray} \nabla_{\rho}\mathcal{W}_{\mu\nu} &=& R^{-1} \nabla_{\rho}\mathcal{F}_{\mu\nu} -R^{-1} \mathcal{W}_{\mu\nu} \nabla_{\rho} R \\ &=& \frac{ J }{2R} \sigma_{\rho}\mathcal{W}_{\mu\nu} +4 J X^{\sigma} \mathcal{I}_{\mu\nu\sigma\rho} + R^{-1} X^{\sigma}\mathcal{S}_{\mu\nu\sigma\rho} \nonumber \\ && + \frac{1}{4R} X^{\sigma} \mathcal{W}_{\mu\nu}\mathcal{W}^{\alpha\beta}\mathcal{S}_{\alpha\beta\sigma\rho} \;, \\ \nabla^{\nu}\mathcal{W}_{\mu\nu} &=& \frac{ J }{2R} \sigma^{\nu}\mathcal{W}_{\mu\nu} + 3JX_{\mu} +\frac{1}{4R} X^{\sigma} \mathcal{W}_{\mu}{}^{\rho}\mathcal{W}^{\alpha\beta}\mathcal{S}_{\alpha\beta\sigma\rho} \\ &=& 2 J X_{\mu} +\frac{1}{4R} X^{\sigma} 
\mathcal{W}_{\mu}{}^{\rho}\mathcal{W}^{\alpha\beta}\mathcal{S}_{\alpha\beta\sigma\rho} \;, \end{eqnarray} where we used that $\sigma^{\rho}\mathcal{F}_{\mu\rho} = \frac{1}{2}\mathcal{F}^2X_{\mu}$ \cite{mars_senovilla}. Taking the derivative of \eq{quadratic}, we obtain \begin{eqnarray} \nabla_{\rho} (JR) &=& \frac{ J^2\sigma }{R-J\sigma} \Big( \frac{R}{2\sigma}\sigma_{\rho} - \nabla_{\rho} R\Big) \\ &=& \frac{ J }{R-J\sigma}\Big( \frac{-3 JR^2 + 2R\Lambda + \Lambda \sigma J}{2R} \sigma_{\rho} + \frac{1}{4} J\sigma X^{\sigma} \mathcal{W}^{\mu\nu}\mathcal{S}_{\mu\nu\sigma\rho} \Big) \phantom{xxxx} \\ &=& \frac{J}{2R} (3JR-\Lambda) \sigma_{\rho} + \frac{ J^2\sigma }{4(R-J\sigma)} X^{\sigma} \mathcal{W}^{\mu\nu}\mathcal{S}_{\mu\nu\sigma\rho} \;. \end{eqnarray} Altogether we find \begin{eqnarray} \nabla_{\rho} (Q_{\mathrm{ev}} \mathcal{U}_{\alpha\beta\mu}{}^{\rho}) &=& J (3JR-\Lambda) \Big(2 X_{\mu} \mathcal{W}_{\alpha\beta}- \frac{2}{R}\sigma^{\rho} \mathcal{I}_{\alpha\beta\mu\rho} -4 X^{\sigma}\mathcal{W}_{\mu}{}^{\rho} \mathcal{I}_{\alpha\beta\sigma\rho} \Big) \nonumber \\ && + ( \Lambda -3 J R) R^{-1} X^{\sigma}\mathcal{W}_{\mu}{}^{\rho}\Big( \mathcal{S}_{\alpha\beta\sigma\rho} + \frac{1}{2}\mathcal{W}_{\alpha\beta} \mathcal{W}^{\gamma\delta}\mathcal{S}_{\gamma\delta\sigma\rho} \Big) \nonumber \\ && - \frac{ 3 J^2\sigma }{4(R-J\sigma)} X^{\sigma} \mathcal{W}^{\gamma\delta}\mathcal{S}_{\gamma\delta\sigma\rho} \Big( \mathcal{W}_{\alpha\beta} \mathcal{W}_{\mu}{}^{\rho} + \frac{4}{3}\mathcal{I}_{\alpha\beta\mu}{}^{\rho} \Big) \;. 
\phantom{xxxxx} \end{eqnarray} Using \begin{equation} \mathcal{F}_{(\mu}{}^{\sigma}\mathcal{I}_{\nu)\sigma\alpha\beta} \,=\, \frac{1}{4} g_{\mu\nu}\mathcal{F}_{\alpha\beta} \;, \end{equation} cf.\ \cite[Equation (4.37)]{ik}, we obtain \begin{eqnarray} \frac{2}{R}\sigma^{\rho} \mathcal{I}_{\alpha\beta\mu\rho} +4 X^{\sigma}\mathcal{W}_{\mu}{}^{\rho} \mathcal{I}_{\alpha\beta\sigma\rho} &=& 4X^{\sigma}(\mathcal{W}_{\sigma}{}^{\rho} \mathcal{I}_{\alpha\beta\mu\rho} +\mathcal{W}_{\mu}{}^{\rho} \mathcal{I}_{\alpha\beta\sigma\rho}) \\ &=& 2X_{\mu}\mathcal{W}_{\alpha\beta} \;, \end{eqnarray} and thus \begin{eqnarray} \nabla_{\rho} \mathcal{S}^{(\mathrm{ev})}_{\alpha\beta\mu}{}^{\rho} &=& ( \Lambda -3 J R)\mathcal{W}_{\mu}{}^{\rho}\Big( R^{-1} X^{\sigma}\mathcal{S}^{(\mathrm{ev})}_{\alpha\beta\sigma\rho} + \frac{1}{2} R^{-1} X^{\sigma} \mathcal{W}_{\alpha\beta} \mathcal{W}^{\gamma\delta}\mathcal{S}^{(\mathrm{ev})}_{\gamma\delta\sigma\rho} \Big) \nonumber \\ && - \frac{ 3 J^2\sigma }{4(R-J\sigma)} X^{\sigma} \mathcal{W}^{\gamma\delta}\mathcal{S}^{(\mathrm{ev})}_{\gamma\delta\sigma\rho} \Big( \mathcal{W}_{\alpha\beta} \mathcal{W}_{\mu}{}^{\rho} + \frac{4}{3}\mathcal{I}_{\alpha\beta\mu}{}^{\rho} \Big) \\ &=& \frac{ \Lambda -3 J R}{R}\mathcal{W}_{\mu}{}^{\rho} X^{\sigma}\mathcal{S}^{(\mathrm{ev})}_{\alpha\beta\sigma\rho} - \frac{ J^2\sigma }{R-J\sigma} X^{\sigma} \mathcal{I}_{\alpha\beta\mu}{}^{\rho} \mathcal{W}^{\gamma\delta} \mathcal{S}^{(\mathrm{ev})}_{\gamma\delta\sigma\rho} \nonumber \\ &&-\frac{\Lambda}{4R} \frac{ R+ 2 J\sigma }{R-J\sigma} X^{\sigma} \mathcal{W}_{\alpha\beta} \mathcal{W}^{\gamma\delta} \mathcal{W}_{\mu}{}^{\rho} \mathcal{S}^{(\mathrm{ev})}_{\gamma\delta\sigma\rho} \;. 
\end{eqnarray} Expressed in terms of $\mathcal{F}_{\mu\nu}$ and $Q_{\mathrm{ev}}$ we finally end up with the desired equation for the MST, which may be regarded as an analog to the Bianchi equation, \begin{eqnarray} \nabla_{\rho} \mathcal{S}^{(\mathrm{ev})}_{\alpha\beta\mu}{}^{\rho} &=& \Big(4 \Lambda \frac{5Q_{\mathrm{ev}}\mathcal{F}^2 + 4\Lambda }{ Q_{\mathrm{ev}}\mathcal{F}^2 + 8\Lambda} \mathcal{F}_{\alpha\beta} \mathcal{F}_{\mu\rho} + \frac{2}{3}\mathcal{F}^{2} \frac{ Q_{\mathrm{ev}}^2\mathcal{F}^4 -2 \Lambda Q_{\mathrm{ev}}\mathcal{F}^2 -8\Lambda^2 }{Q_{\mathrm{ev}}\mathcal{F}^2 + 8\Lambda} \mathcal{I}_{\alpha\beta\mu\rho} \Big) \nonumber \\ && \times \mathcal{F}^{-4} X^{\sigma} \mathcal{F}^{\gamma\delta} \mathcal{S}^{(\mathrm{ev})}_{\gamma\delta\sigma}{}^{\rho} -Q_{\mathrm{ev}} \mathcal{F}_{\mu\rho} X^{\sigma}\mathcal{S}^{(\mathrm{ev})}_{\alpha\beta\sigma}{}^{\rho} \label{full_eqn_MST} \\ &=& -4 \Lambda \frac{ 5 Q_{\mathrm{ev}}\mathcal{F}^2 +4\Lambda }{Q_{\mathrm{ev}}\mathcal{F}^2 + 8\Lambda} \mathcal{U}_{\alpha\beta\mu\rho} \mathcal{F}^{-4} X^{\sigma} \mathcal{F}^{\gamma\delta} \mathcal{S}^{(\mathrm{ev})}_{\gamma\delta\sigma}{}^{\rho} \nonumber \\ && + Q_{\mathrm{ev}}X^{\sigma}\Big( \frac{2}{3} \mathcal{I}_{\alpha\beta\mu\rho} \mathcal{F}^{\gamma\delta} \mathcal{S}^{(\mathrm{ev})}_{\gamma\delta\sigma}{}^{\rho} - \mathcal{F}_{\mu\rho} \mathcal{S}^{(\mathrm{ev})}_{\alpha\beta\sigma}{}^{\rho} \Big) \\ &=:& \mathcal{J}(\mathcal{S^{(\mathrm{ev})}})_{\alpha\beta\mu} \;. 
\label{phys_ev} \end{eqnarray} Here we have introduced the shorthand $ \mathcal{J}(\mathcal{S^{(\mathrm{ev})}})_{\alpha\beta\mu}$ for the righthand side, which is a double $(2,1)$-form, linear and homogeneous in $\mathcal{S}^{(\mathrm{ev})}_{\alpha\beta\sigma}{}^{\rho}$, with the following properties \begin{equation} \mathcal{J}(\mathcal{S^{(\mathrm{ev})}})_{\alpha\beta\mu}= \mathcal{J}(\mathcal{S^{(\mathrm{ev})}})_{[\alpha\beta]\mu}, \hspace{3mm} \mathcal{J}(\mathcal{S^{(\mathrm{ev})}})_{[\alpha\beta\mu]}=0, \hspace{3mm} \mathcal{J}(\mathcal{S^{(\mathrm{ev})}})^\rho{}_{\beta\rho}=0 . \label{Jsymprop} \end{equation} It is also self-dual in the first pair of anti-symmetric indices: $\mathcal{J}^{\star}(\mathcal{S^{(\mathrm{ev})}})_{\alpha\beta\mu}=-i \mathcal{J}(\mathcal{S^{(\mathrm{ev})}})_{\alpha\beta\mu}$. Using the fact that the MST $\mathcal{S}^{(\mathrm{ev})}_{\mu\nu\sigma}{}^{\rho}$ has all the algebraic symmetries of the Weyl tensor, we immediately obtain a Bianchi-like equation from \eq{phys_ev} for the rescaled MST $\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\mu\nu\sigma}{}^{\rho}$ in the conformally rescaled ``unphysical'' spacetime, \begin{equation} \widetilde\nabla_{\rho}\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\alpha\beta\mu}{}^{\rho} \,=\, \Theta^{-1}\nabla_{\rho}\mathcal{S}^{(\mathrm{ev})}_{\alpha\beta\mu}{}^{\rho} \,=\, \Theta^{-1} \mathcal{J}(\mathcal{S}^{(\mathrm{ev})})_{\alpha\beta\mu}\,=\, \mathcal{J}(\widetilde{\mathcal{T}}^{(\mathrm{ev})})_{\alpha\beta\mu} \;. \label{unphys_ev} \end{equation} \subsubsection{A symmetric hyperbolic system satisfied by the (rescaled) MST} \label{subsec:FOSH} We now want to show that the equations \eq{phys_ev} and \eq{unphys_ev} contain a system of linear first-order symmetric hyperbolic equations in their respective spacetimes. 
Given that \eq{phys_ev} and \eq{unphys_ev} have exactly the same structure, it is enough to perform the analysis for either one of the two systems, or for an equivalent model system in a given spacetime. A further analysis of the regularity of the system \eq{unphys_ev} near $\scri$ is needed as well; this will be done later in this section. Let $\mathcal{S}_{\alpha\beta\lambda}{}^\mu$ represent either $\mathcal{S}^{(\mathrm{ev})}_{\alpha\beta\lambda}{}^{\mu}$ or $\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\alpha\beta\lambda}{}^{\mu}$ or, for that matter, any other self-dual symmetric and traceless double (2,2)-form satisfying a system of equations such as \eq{phys_ev} or \eq{unphys_ev}: \begin{equation} \nabla_\rho {\cal S}_{\gamma\mu\nu}{}^\rho= {\cal J}_{\gamma\mu\nu}({\cal S}), \hspace{1cm} \label{delta1T} \end{equation} where $ {\cal J}_{\gamma\mu\nu}({\cal S})$ is a self-dual double $(2,1)$-form, linear and homogeneous in $\mathcal{S}_{\alpha\beta\sigma}{}^{\rho}$, with the properties given in \eq{Jsymprop}. 
Employing the fact that the rescaled MST satisfies all the algebraic symmetries of the rescaled Weyl tensor, we find that this system is equivalent to (cf.\ \cite{penrose, ik}) \begin{equation} \label{d1tildeT} 3 \nabla_{[\sigma}{\mathcal{S}}_{\mu\nu]\alpha\beta} = -\frac{1}{2} \mbox{$\eta$}_{\sigma\mu\nu\kappa} \mbox{$\eta$}^{\gamma\delta\rho\kappa}\nabla_{\gamma} {\mathcal{S}}_{\delta\rho\alpha\beta} = \mbox{$\eta$}_{\sigma\mu\nu}{}^{\kappa}\nabla_{\gamma} ({\mathcal{S}}_{\alpha\beta\kappa}{}^{\gamma} )^{\star} = - i \mbox{$\eta$}_{\sigma\mu\nu}{}^{\kappa}\nabla_{\gamma} {\mathcal{S}}_{\alpha\beta\kappa}{}^{\gamma} \;, \end{equation} that is to say, \begin{equation} 3 \nabla_{[\sigma}{\mathcal{S}}_{\mu\nu]\alpha\beta} = - i\mbox{$\eta$}_{\mu\nu\sigma}{}^{\rho} \mathcal{J}({\mathcal{S}})_{\alpha\beta\rho} \;. \label{d1T} \end{equation} Observe that each of \eq{delta1T} and \eq{d1T} contains 8 complex (16 real) independent equations for only 5 complex (10 real) unknowns, hence they are overdetermined. Systems of this type have been analyzed many times in the literature (in order to see if they comprise a symmetric hyperbolic system), especially in connection with the Bianchi identities \cite{ChY2,Bo}. Here, to check that the systems \eq{delta1T}, or \eq{d1T}, and therefore \eq{phys_ev} and \eq{unphys_ev}, contain symmetric hyperbolic evolution equations we use the general ideas presented in \cite{G}, which were applied to systems more general than --and including-- those of type (\ref{delta1T}) or (\ref{d1T}) and discussed at length in \cite{S}. The goal is to find a ``hyperbolization'', in the sense of \cite{G}. 
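The count of equations and unknowns above can be sketched as follows (a heuristic count, using that the space of complex self-dual bivectors is three-dimensional and that, for self-dual double forms, the trace and cyclic conditions in \eq{Jsymprop} are dual to one another and hence count only once):

```latex
\begin{align*}
\#\,\text{unknowns } {\cal S} &: \
 \dim_{\mathbb{C}}\mathrm{Sym}^2(\text{self-dual bivectors}) - 1_{\text{(trace)}}
 \,=\, 6-1 \,=\, 5 \;, \\
\#\,\text{equations } {\cal J} &: \
 \dim_{\mathbb{C}}\big( \text{self-dual bivectors} \otimes T^*{\cal M} \big)
 - 4_{\text{(trace $\,\equiv\,$ cyclic)}} \,=\, 12-4 \,=\, 8 \;.
\end{align*}
```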
To that end, simplifying the calculations in \cite{S}, pick {\em any} timelike vector $v^\mu$ and contract \eq{delta1T} with \begin{equation} \label{Factor1} - v_{[\alpha} \delta^\nu_{\beta]}v^{[\gamma} v_{[\delta} \delta^{\mu]}_{\epsilon]} \end{equation} and add the result to the contraction of \eq{d1T} with \begin{equation} \label{Factor2} -\frac{1}{2} v^{[\lambda} \delta^\tau_\alpha \delta^{\nu]}_\beta v^{[\gamma} v_{[\delta} \delta^{\mu]}_{\epsilon]} \end{equation} to arrive at the following system \begin{equation} Q^{\tau}{}^{\lambda\nu\gamma\mu}_{\alpha\beta\delta\epsilon} \nabla_\tau {\cal S}_{\lambda\nu\gamma\mu} = {\cal J}_{\alpha\beta\delta\epsilon} \label{QnablaS} \end{equation} with $$ Q^{\tau}{}^{\lambda\nu\gamma\mu}_{\alpha\beta\delta\epsilon} =v^{[\gamma} v_{[\delta} \delta^{\mu]}_{\epsilon]}\Big(g^{\tau[\lambda}v_{[\alpha} \delta^{\nu]}_{\beta]} +\delta^\tau_{[\alpha}v^{[\lambda} \delta^{\nu]}_{\beta]} -\frac{1}{2} v^\tau \delta^{\lambda}_{[\alpha}\delta^\nu_{\beta]} \Big)\, . $$ By construction, the right-hand side of (\ref{QnablaS}) is linear in ${\cal J}_{\gamma\mu\nu}({\cal S})$ and {\em a fortiori} linear in ${\cal S}_{\gamma\mu\nu\rho}$, so that its explicit expression is unimportant. The system \eq{QnablaS} is symmetric hyperbolic. To prove it, we have to check two properties of $Q^{\tau}{}^{\lambda\nu\gamma\mu}_{\alpha\beta\delta\epsilon}$: it must be Hermitian in $\lambda\nu\gamma\mu \leftrightarrow {\alpha\beta\delta\epsilon}$, and there must exist a one-form $u_\tau$ such that its contraction with $Q^{\tau}{}^{\lambda\nu\gamma\mu}_{\alpha\beta\delta\epsilon}$ is positive definite. The first condition can be easily checked by first noticing that $Q^{\tau}{}^{\lambda\nu\gamma\mu}_{\alpha\beta\delta\epsilon}$ happens to be real and then contracting it with two arbitrary self-dual trace-free double (2,2) forms, say ${\cal A}_{\lambda\nu\gamma\mu}$ and ${\cal B}^{\alpha\beta\delta\epsilon}$.
The result is \begin{eqnarray*} v^\beta v^\lambda v^\mu \Big[{\cal A}^\tau{}_{\rho\lambda\sigma} {\cal B}_{\beta}{}^\rho{}_\mu{}^\sigma +{\cal B}^\tau{}_{\rho\lambda\sigma} {\cal A}_{\beta}{}^\rho{}_\mu{}^\sigma -\frac{1}{2}\delta^\tau_\beta {\cal A}_{\alpha\rho\lambda\sigma} {\cal B}^{\alpha\rho}{}_\mu{}^\sigma \Big] \;, \end{eqnarray*} which is manifestly symmetric under the interchange of ${\cal A}$ and ${\cal B}$. Thus, the matrix of the system \eq{QnablaS} is Hermitian. With regard to the second condition, we contract $Q^{\tau}{}^{\lambda\nu\gamma\mu}_{\alpha\beta\delta\epsilon}$ with ${\cal A}_{\lambda\nu\gamma\mu}$, $\overline{\cal A}^{\alpha\beta\delta\epsilon}$, and with $u_\tau$. We stress that in this section an overbar means ``complex conjugation'' rather than ``restriction to $\scri$''. We get \begin{eqnarray*} u_\tau v^\beta v^\lambda v^\mu ({\cal A}^\tau{}_{\rho\lambda\sigma} \overline{\cal A}_{\beta}{}^\rho{}_\mu{}^\sigma +\overline{\cal A}^\tau{}_{\rho\lambda\sigma} {\cal A}_{\beta}{}^\rho{}_\mu{}^\sigma )= 2\, u_\tau v^\beta v^\lambda v^\mu ({\cal A}^\tau{}_{\rho\lambda\sigma} \overline{\cal A}_{\beta}{}^\rho{}_\mu{}^\sigma ) \; \end{eqnarray*} and note that the expression in brackets is precisely the Bel-Robinson superenergy tensor $t_{\tau\beta\lambda\mu}$ of the self-dual Weyl-type tensor ${\cal A}_{\alpha\beta\lambda\mu}$ \cite{penrose,S1}. It is known that this tensor satisfies the dominant property \cite{penrose,S1}, that is, $t_{\tau\beta\lambda\mu}v_1^\tau v_2^\beta v_3^\lambda v_4^\mu \geq 0$ for arbitrary future-pointing timelike vectors $v_1^\tau, v_2^\beta, v_3^\lambda, v_4^\mu$, with equality only if ${\cal A}_{\alpha\beta\lambda\mu}$ vanishes. Thus, the previous expression is positive for non-vanishing ${\cal A}_{\alpha\beta\lambda\mu}$ and {\em any} timelike $u_\tau$ with the same time orientation as $v^\tau$, as required.
Actually, by choosing $u^{\tau}=v^{\tau}$ it becomes (twice) the so-called super-energy density of ${\cal A}_{\alpha\beta\lambda\mu}$ relative to $v^{\tau}$: \begin{equation} W_{v}({\cal A}) := v^\tau v^\beta v^\lambda v^\mu {\cal A}_{\tau\rho\lambda\sigma} \overline{\cal A}_{\beta}{}^\rho{}_\mu{}^\sigma \label{s-e} \end{equation} which is non-negative, vanishing if and only if so does the full ${\cal A}_{\alpha\beta\lambda\mu}$ \cite{S1}. In order to see the relation between the found symmetric hyperbolic system \eq{QnablaS} and the original equations \eq{delta1T} or \eq{d1T}, we take into account that $Q^{\tau}{}^{\lambda\nu\gamma\mu}_{\alpha\beta\delta\epsilon}$ is non-degenerate as an operator acting on 2-forms in its ``$\gamma\mu , \delta\epsilon$'' part so that \eq{QnablaS} is actually fully equivalent to $$ 3v^\lambda \nabla_{[\lambda}{\cal S}_{\tau\nu]\gamma\mu} +2\nabla_\rho {\cal S}_{\gamma\mu[\nu}{}^\rho v_{\tau]} =i v^\lambda {\cal J}_{\gamma\mu\sigma}({\cal S}) \mbox{$\eta$}^\sigma{}_{\lambda\tau\nu} +2 {\cal J}_{\gamma\mu[\nu}({\cal S})v_{\tau]} \;. $$ There is some redundancy here due to the equivalence of \eq{delta1T} and \eq{d1T}. To optimize the expression of this symmetric hyperbolic system we note that, via the identity (\ref{d1tildeT}), it can be rewritten as $$ \overline{\cal I}^{\tau\nu}{}_{\lambda\sigma}\left(\nabla_\rho {\cal S}_{\gamma\mu[\nu}{}^\rho- {\cal J}_{\gamma\mu[\nu}({\cal S})\right)v_{\tau]}=0 $$ which is easily seen to be equivalent to \begin{equation} \left(\nabla_\rho {\cal S}_{\gamma\mu[\nu}{}^\rho- {\cal J}_{\gamma\mu[\nu}({\cal S})\right)v_{\tau]}=0 \label{evol2} \end{equation} with $v_\tau$ any timelike vector. The linear symmetric hyperbolic set (\ref{evol2}) constitutes the {\em evolution} equations of our system. Note that, taking into account trace and symmetry properties, there are precisely 5 complex (10 real) independent equations in (\ref{evol2}), which is the number of independent unknowns.
The complete system (\ref{delta1T}) is re-obtained by adding the constraints, which can be written for any given spacelike hypersurface $\Sigma$ with timelike normal $n^\mu$ as (cf.\ \cite{S}, section 4) \begin{equation} n^\nu\left(\nabla_\rho {\cal S}_{\gamma\mu\nu}{}^\rho- {\cal J}_{\gamma\mu\nu}({\cal S})\right)=0\;. \label{const2} \end{equation} Notice, first of all, that only derivatives {\em tangent} to $\Sigma$ appear in (\ref{const2}). Observe furthermore that (\ref{const2}) contains 3 complex (6 real) independent equations, which together with the 5 complex equations of (\ref{evol2}) add up to the 8 complex equations of the original system (\ref{delta1T}), rendering the former two systems fully equivalent to the latter. To check this directly, contract (\ref{evol2}) with $v^\tau$ to get $$ (-v^2 \delta^\sigma_\nu +v^\sigma v_\nu)\left(\nabla_\rho {\cal S}_{\gamma\mu\sigma}{}^\rho- {\cal J}_{\gamma\mu\sigma}({\cal S})\right)=0 \;, $$ where \begin{equation} h^\sigma{}_\gamma := \delta^\sigma_\gamma -v^{-2}v_\gamma v^\sigma \label{proj} \end{equation} is the projector orthogonal to $v^\mu$, which immediately leads to (\ref{delta1T}) by taking into account (\ref{const2}) --- e.g., by simply choosing $v^\mu$ pointing along $n^\mu$. As mentioned before, (\ref{evol2}) contains 5 equations for the 5 complex independent unknowns in ${\cal S}_{\gamma\mu\sigma}{}^\rho$. A convenient way of explicitly expressing this fact is by recalling the following identity \begin{eqnarray}\nonumber {\cal S}_{\alpha\beta\lambda\mu}=2\left[(h_{\alpha[\lambda} -v^{-2}v_\alpha v_{[\lambda}){\cal E}_{\mu]\beta} +(h_{\beta[\mu} -v^{-2}v_\beta v_{[\mu}){\cal E}_{\lambda]\alpha}\right.\\ \left.
-i v^{{-2}}v^{\rho}\mbox{$\eta$}_{\rho\lambda\mu\sigma}v_{[\alpha}{\cal E}_{\beta]}{}^{\sigma}-i v^{{-2}}v^{\rho}\mbox{$\eta$}_{\rho\alpha\beta\sigma}v_{[\lambda}{\cal E}_{\mu]}{}^{\sigma}\right] \nonumber\end{eqnarray} in terms of the spatial ``electric-magnetic'' tensor defined for any timelike $v^{\tau}$ by \begin{equation} {\cal E}_{\beta\mu}:= -v^{-2} v^{\alpha} v^{\lambda} {\cal S}_{\alpha\beta\lambda\mu} . \label{e-m} \end{equation} Observe the following properties $$ {\cal E}_{\beta\mu}={\cal E}_{\mu\beta}, \hspace{3mm} {\cal E}_{\beta\mu}v^{\mu} =0, \hspace{3mm} {\cal E}^{\mu}{}_{\mu}=0. $$ Thus, ${\cal E}_{\beta\mu}$ contains 5 complex independent components and exactly the same information as the full ${\cal S}_{\gamma\mu\sigma}{}^\rho$. Note that the density (\ref{s-e}) is then expressed simply as \begin{equation} W_{v}({\cal S}) = {\cal E}_{\mu\nu}\overline{\cal E}^{\mu\nu} .\label{s-e2} \end{equation} In any orthonormal basis with its timelike `t'-part aligned with $v^{\mu}$, the five independent components of ${\cal E}_{\mu\nu}$ are given simply by $$ {\cal E}_{ij} = {\cal S}_{titj}, \hspace{1cm} {\cal S}_{tijl}=i \mbox{$\eta$}^{t}{}_{kjl}{\cal E}_{i}^{k} $$ where the second equation follows from the self-duality of ${\cal S}$. Using this, the evolution equations (\ref{evol2}) become simply \begin{equation} \nabla_{\rho}{\cal S}_{t(ij)}{}^{\rho} =\nabla_{t} {\cal E}_{ij}+i \mbox{$\eta$}^{t}{}_{lk(j}\nabla^{l}{\cal E}_{i)}^{k}={\cal J}({\cal S})_{t(ij)}\label{evol3} \end{equation} while the constraint equations (\ref{const2}) (with $n^{\tau}$ pointing along $v^{\tau}$) read \begin{equation} \nabla_{\rho}{\cal S}_{tit}{}^{\rho}=\nabla_{j}{\cal E}_{i}{}^{j}={\cal J}({\cal S})_{tit}\, . \label{const3} \end{equation} We will use these expressions later for the case ${\cal S}_{\alpha\beta\gamma\delta} = \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\alpha\beta\gamma\delta}$, to prove uniqueness of the solutions to \eq{unphys_ev}. 
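As a simple consistency check of \eq{s-e2}, note that in such an orthonormal basis, with $v^{\mu}$ unit, the sums over $\rho$ and $\sigma$ in \eq{s-e} receive no contribution from temporal values of these indices, since ${\cal S}_{ttt\sigma}$ and ${\cal S}_{titt}$ vanish by the antisymmetry of ${\cal S}_{\alpha\beta\lambda\mu}$ in its first and second pairs: \begin{equation*} W_{v}({\cal S})\,=\,{\cal S}_{t\rho t\sigma}\,\overline{\cal S}_{t}{}^{\rho}{}_{t}{}^{\sigma}\,=\,{\cal S}_{titj}\,\overline{\cal S}_{t}{}^{i}{}_{t}{}^{j}\,=\,{\cal E}_{ij}\overline{\cal E}^{ij} \;. \end{equation*}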
All in all, as a generalization of \cite[Theorem~4.5 \& 4.7]{ik} we have obtained \begin{lemma} \label{sym_hyp_MST} \begin{enumerate} \item[(i)] The MST $\mathcal{S}^{(\mathrm{ev})}_{\mu\nu\sigma\rho}$ satisfies, for any sign of the cosmological constant $\Lambda$, a linear, homogeneous symmetric hyperbolic system of evolution equations in $({\cal M},g)$. \item[(ii)] The rescaled MST $\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\mu\nu\sigma\rho}$ satisfies, for any sign of the cosmological constant $\Lambda$, a linear, homogeneous symmetric hyperbolic system of evolution equations in $(\widetilde{{\cal M}\enspace}\hspace{-0.5em} ,\widetilde g)$. \end{enumerate} \end{lemma} \begin{remark} {\rm An alternative route to arrive at the same result is by using spinors, see \cite{F1,F}. In this formalism \cite{penrose} the (rescaled) MST is represented by a fully symmetric spinor $\Upsilon_{ABCD}$, and equations (\ref{delta1T}) are written in the following form $$ \nabla^{A}{}_{A'}\Upsilon_{ABCD}=L_{A'BCD} $$ where $L_{A'BCD}=L_{A'(BCD)}$ is the spinor associated to ${\cal J}_{\gamma\mu\nu}({\cal S})$. Then this is easily put in symmetric hyperbolic form, writing it as in \cite{F1}, section 4, for the Bianchi equations. } \end{remark} \begin{remark} \label{rem_reg} {\rm Note that the denominator $Q_{\mathrm{ev}}\mathcal{F}^2 + 8\Lambda$ in the equation \eq{full_eqn_MST} for $\mathcal{S}^{(\mathrm{ev})}_{\mu\nu\sigma}{}^{\rho}$ might have zeros. Furthermore, the Ernst potential may have zeros so that $Q_{\mathrm{ev}}$ blows up. An analogous problem arises for $\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\mu\nu\sigma}{}^{\rho}$, cf.\ \eq{sym_hyp0} below. In fact, it follows from \eq{expansion_sigma}, \eq{asympt_exp_Q}, \eq{asympt_exp_F2} and \eq{asympt_exp_H2} below, that this cannot happen sufficiently close to $\scri$ (for $\Lambda >0$). It is not clear, though, whether the evolution equations remain regular off some neighborhood of $\scri$. 
Moreover, it will be shown in the subsequent section that $\mathcal{J}(\widetilde{\mathcal{T}}^{(\mathrm{ev})})_{\alpha\beta\mu}$ is singular at $\scri$ due to the vanishing of the conformal factor $\Theta$ there --see \eq{sym_hyp1} below--, whence one actually has to deal with a Fuchsian system.} \end{remark} \subsubsection{Behavior of the Bianchi-like system for $\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\mu\nu\sigma}{}^{\rho}$ near $\scri$} Let us analyze the behavior of the system \eq{unphys_ev} near $\scri$. Note that we are {\it not} assuming a priori that $\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\mu\nu\sigma}{}^{\rho}$ is regular at $\scri$. First of all we employ the following expansions which have been derived in Section~\ref{Mars-Simon_conf}, and which do not rely on any gauge choice, \begin{eqnarray} Q_{\mathrm{ev}} &=& O(\Theta^4) \label{asympt_exp_Q} \;, \\ \mathcal{F}_{\mu\nu} &=& \widetilde{\mathcal{H}}_{\mu\nu}\Theta^{-3} + O(\Theta^{-2}) \;, \\ \mathcal{F}^2 &=& \Theta^{-2}\widetilde{\mathcal{H}}^2 + O(\Theta^{-1}) \label{asympt_exp_F2} \;, \\ g^{\mu\nu} &=& \Theta^2 \widetilde g^{\mu\nu} \;, \\ \mathcal{I}_{\alpha\beta\mu\nu} &=&\Theta^{-4} \widetilde {\mathcal{I}}_{\alpha\beta\mu\nu} \;, \\ \mathcal{U}_{\alpha\beta\mu}{}^{\nu} &=&- \Big(\widetilde{\mathcal{H}}_{\alpha\beta}\widetilde{\mathcal{H}}_{\mu}{}^{\nu} - \frac{1}{3}\widetilde{\mathcal{H}}^2\widetilde{\mathcal{I}}_{\alpha\beta\mu}{}^{\nu}\Big)\Theta^{-4} + O(\Theta^{-3}) \;. 
\end{eqnarray} This yields \begin{eqnarray} \mathcal{J}(\widetilde{\mathcal{T}}^{(\mathrm{ev})})_{\alpha\beta\mu} &=& -4 \Lambda \frac{ 5 Q_{\mathrm{ev}}\mathcal{F}^2 +4\Lambda }{Q_{\mathrm{ev}}\mathcal{F}^2 + 8\Lambda} \mathcal{U}_{\alpha\beta\mu}{}^{\nu} \mathcal{F}^{-4} X^{\sigma}g_{\rho\nu}g^{\gamma\kappa}g^{\delta\varkappa} \mathcal{F}_{\kappa\varkappa} \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\gamma\delta\sigma}{}^{\rho} \nonumber \\ && + Q_{\mathrm{ev}}X^{\sigma}\Big( \frac{2}{3} \mathcal{I}_{\alpha\beta\mu\rho} g^{\gamma\kappa} g^{\delta\varkappa} \mathcal{F}_{\kappa\varkappa} \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\gamma\delta\sigma}{}^{\rho} - \mathcal{F}_{\mu\rho} \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\alpha\beta\sigma}{}^{\rho} \Big) \label{sym_hyp0} \\ &=& -2 \Lambda \mathcal{U}_{\alpha\beta\mu}{}^{\nu} \mathcal{F}^{-4} X^{\sigma}g_{\rho\nu}g^{\gamma\kappa}g^{\delta\varkappa} \mathcal{F}_{\kappa\varkappa} \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\gamma\delta\sigma}{}^{\rho} \nonumber \\ && + (O(\Theta) \widetilde{\mathcal{T}}^{(\mathrm{ev})})_{\alpha\beta\mu} \\ &=& 2 \Lambda\Theta^{-1} \Big( \widetilde{\mathcal{H}}_{\alpha\beta}\widetilde{\mathcal{H}}_{\mu\rho} - \frac{1}{3}\widetilde{\mathcal{H}}^2\widetilde{\mathcal{I}}_{\alpha\beta\mu\rho}\Big) \widetilde{\mathcal{H}}^{-4} \widetilde{\mathcal{H}}^{\gamma\delta} \widetilde X^{\sigma} \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\gamma\delta\sigma}{}^{\rho} \nonumber \\ && + (O(1) \widetilde{\mathcal{T}}^{(\mathrm{ev})})_{\alpha\beta\mu} \;. \label{sym_hyp1} \end{eqnarray} In adapted coordinates $(t,x^i)$ where $\scri^-=\{t=0\}$, we have \begin{equation} X^{i}\,=\, \widetilde X^{i}\,=\, Y^i + O(\Theta) \;, \quad X^{t}\,=\, \widetilde X^{t}\,=\, O(\Theta) \;. \label{KVF_scri} \end{equation} Let us further assume a gauge where \begin{equation} \widetilde g_{tt}|_{\scri} \,=\, -1\;, \quad \widetilde g_{ti}|_{\scri} \,=\, 0 \;. 
\label{asymp_gauge} \end{equation} In particular this implies by \eq{conf5} that the conformal factor $\Theta$ satisfies \begin{equation} \Theta \,=\, \sqrt{\frac{\Lambda}{3}}t + O(t^2) \;. \label{exp_theta} \end{equation} Moreover, in the wave map gauge \eq{gauge_conditions_compact}, which is compatible with \eq{asymp_gauge}, we find \begin{eqnarray} \widetilde{\mathcal{H}}_{ti} &=& - \sqrt{\frac{\Lambda}{3}} Y_i + O(\Theta) \;, \label{H_asymp_gauge1} \\ \widetilde{\mathcal{H}}_{ij} &=& i\sqrt{\frac{\Lambda}{3}} \widehat\eta_{ijk}Y^k + O(\Theta) \;, \label{H_asymp_gauge2} \\ \widetilde{\mathcal{H}}^2 &=& - 4 \frac{\Lambda}{3} |Y|^2 + O(\Theta) \label{asympt_exp_H2} \label{H_asymp_gauge3} \;. \end{eqnarray} Using further that \begin{equation} \widetilde{\mathcal{H}}^{\gamma\delta} \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\gamma\delta\sigma}{}^{\rho} \,=\, 2 \widetilde{H}^{\gamma\delta} \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\gamma\delta\sigma}{}^{\rho} \,=\, 4 \widetilde X^{\gamma}\widetilde\nabla^{\delta}\Theta \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\gamma\delta\sigma}{}^{\rho} \,=\, 4 \sqrt{\frac{\Lambda}{3}} Y^{i} \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{t i \sigma}{}^{\rho} + (O(\Theta) \widetilde{\mathcal{T}}^{(\mathrm{ev})})_{\sigma}{}^{\rho} \;, \label{HT_relation} \end{equation} we find that the system \eq{unphys_ev} has the following structure near $\scri$, \begin{align} \widetilde\nabla_{\rho} \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\alpha\beta\mu}{}^{\rho} \,=\, & \, \frac{9}{2} \sqrt{\frac{\Lambda}{3}}\Lambda^{-1} \Theta^{-1}|Y|^{-4}(\widetilde{\mathcal{H}}_{\alpha\beta}\widetilde{\mathcal{H}}_{\mu\rho} - \frac{1}{3}\widetilde{\mathcal{H}}^2\widetilde{\mathcal{I}}_{\alpha\beta\mu\rho})Y^i Y^k \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{ti k}{}^{\rho} \nonumber \\ & + (O(1) \widetilde{\mathcal{T}}^{(\mathrm{ev})})_{\alpha\beta\mu} \label{sym_hyp} \;, \end{align} in adapted coordinates and whenever \eq{asymp_gauge} holds.
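As a consistency check, \eq{asympt_exp_H2} follows from \eq{H_asymp_gauge1} and \eq{H_asymp_gauge2}: using the gauge \eq{asymp_gauge} together with the identity $\widehat\eta_{ijk}\widehat\eta^{ijl}=2\delta_{k}{}^{l}$, one finds \begin{equation*} \widetilde{\mathcal{H}}^2 \,=\, 2\,\widetilde g^{tt}\widetilde g^{ij}\widetilde{\mathcal{H}}_{ti}\widetilde{\mathcal{H}}_{tj} + \widetilde{\mathcal{H}}_{ij}\widetilde{\mathcal{H}}^{ij} + O(\Theta) \,=\, -2\frac{\Lambda}{3}|Y|^2 - 2\frac{\Lambda}{3}|Y|^2 + O(\Theta) \,=\, -4\frac{\Lambda}{3}|Y|^2 + O(\Theta) \;, \end{equation*} the second term picking up a minus sign from the factor $i^2$ in \eq{H_asymp_gauge2}.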
As explained in the previous section the system \eq{unphys_ev} splits into a symmetric hyperbolic system of evolution equations and a system of constraint equations for $\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\alpha\beta\mu\nu}$. However, this requires an appropriate gauge choice. A convenient way to realize such a gauge is to impose the condition \eq{asymp_gauge} also off the initial surface, \begin{equation} \widetilde g_{tt} \,=\, -1\;, \quad \widetilde g_{ti}\,=\, 0 \;. \label{new_gauge} \end{equation} It is well known that these \emph{Gaussian normal coordinates} \cite{Wald} are obtained by shooting geodesics normally to $\scri$; the coordinate $t$ is then chosen to be an affine parameter along these geodesics, while the coordinates $\{x^i\}$ are transported from $\scri$ by requiring them to be constant along these geodesics. Setting ${\cal S}_{\alpha\beta\gamma\delta}=\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\alpha\beta\gamma\delta}$ in (\ref{evol3}) and (\ref{const3}) and using \eq{H_Y_relation}, which follows from \eq{H_asymp_gauge1}-\eq{H_asymp_gauge3}, we find in the unphysical spacetime (we have $\mathcal{E}_{ij}\equiv \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{ti tj}$) \begin{eqnarray} \widetilde\nabla_{\rho} \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{tit}{}^{\rho} &\equiv& \widetilde\nabla_{j}{\cal E}_{i}{}^{j} \\ &=& \frac{9}{2} \sqrt{\frac{\Lambda}{3}}\Lambda^{-1} \Theta^{-1}|Y|^{-4}(\widetilde{\mathcal{H}}_{ti}\widetilde{\mathcal{H}}_{tj} - \frac{1}{3}\widetilde{\mathcal{H}}^2\widetilde{\mathcal{I}}_{titj})Y^l Y^k \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{tlk}{}^{j} \nonumber \\ && + (O(1) \widetilde{\mathcal{T}}^{(\mathrm{ev})})_{tit} \\ &=& \frac{1}{2} \sqrt{\frac{\Lambda}{3}} \Theta^{-1}|Y|^{-2} Y^l Y^k \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{tlki} + (O(1) \widetilde{\mathcal{T}}^{(\mathrm{ev})})_{tit} \\ &=& -\frac{1}{2}i \sqrt{\frac{\Lambda}{3}} \Theta^{-1}|Y|^{-2} \widehat\mbox{$\eta$}_{i}{}^{jk}Y^l Y_{[j} 
\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{k]tlt} + (O(1) \widetilde{\mathcal{T}}^{(\mathrm{ev})})_{tit} \label{sym_hyp_spec} \\ &\equiv& -\frac{1}{2}i \sqrt{\frac{\Lambda}{3}} \Theta^{-1}|Y|^{-2} \widehat\mbox{$\eta$}_{i}{}^{jk}Y^l Y_{[j} {\cal E}_{k]l} + (O(1) \mathcal{E})_{tit} \end{eqnarray} for the constraint equations, and \begin{eqnarray} \widetilde\nabla_{\rho} \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{t(ij)}{}^{\rho} &\equiv & \widetilde\nabla_{t} {\cal E}_{ij}+i \widehat\mbox{$\eta$}_{(j}{}^{lk}\widetilde\nabla_{l}{\cal E}_{i)k} \\ &=& -\frac{9}{2} \sqrt{\frac{\Lambda}{3}}\Lambda^{-1} \Theta^{-1}|Y|^{-4}(\widetilde{\mathcal{H}}_{t(i}\widetilde{\mathcal{H}}_{|t|j)} - \frac{1}{3}\widetilde{\mathcal{H}}^2\widetilde{\mathcal{I}}_{t(i|t|j)})Y^k Y^l \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{tk t l} \nonumber \\ &&+ \frac{9}{2} \sqrt{\frac{\Lambda}{3}}\Lambda^{-1} \Theta^{-1}|Y|^{-4}(\widetilde{\mathcal{H}}_{t(i}\widetilde{\mathcal{H}}_{j)k} - \frac{1}{3}\widetilde{\mathcal{H}}^2\widetilde{\mathcal{I}}_{t(ij)k})Y^l Y^m \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{tlm}{}^{k} \nonumber \\ && + (O(1) \widetilde{\mathcal{T}}^{(\mathrm{ev})})_{(tij)} \\ &=& \sqrt{\frac{\Lambda}{3}}|Y|^{-2} \Theta^{-1}\Big( \frac{3}{2}Y_{(i} Y^l \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{|t|j)tl} - 3|Y|^{-2}Y_iY_jY^k Y^l \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{tk t l} \nonumber \\ && + \frac{1}{2} h_{ij} Y^k Y^l \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{tk t l} \Big) + (O(1) \widetilde{\mathcal{T}}^{(\mathrm{ev})})_{t(ij)} \label{ev_eqns} \\ &\equiv& \sqrt{\frac{\Lambda}{3}}|Y|^{-2} \Theta^{-1}\Big( \frac{3}{2}Y_{(i} Y^l \mathcal{E}_{j)l} - 3|Y|^{-2}Y_iY_jY^k Y^l \mathcal{E}_{k l} + \frac{1}{2} h_{ij} Y^k Y^l \mathcal{E}_{k l} \Big) \nonumber \\ && + (O(1) \mathcal{E})_{t(ij)} \label{ev_eqns2} \end{eqnarray} for the evolution equations. Note that the equations \eq{sym_hyp_spec} and \eq{ev_eqns} hold regardless of the gauge as long as the asymptotic gauge condition \eq{asymp_gauge} is ensured. 
However, the global gauge condition \eq{new_gauge} (or an analogous one, cf.\ Section~\ref{subsec:FOSH}) is needed to ensure that this procedure actually realizes the splitting into constraint and evolution equations. The divergent terms in both constraint and evolution equations are regular if and only if \begin{align*} Y^l Y_{[j} \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{k]tlt}|_{\scri} =0=Y^l Y_{(j} \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{k)tlt}|_{\scri} & \quad\Longleftrightarrow \quad Y^j\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{titj}|_{\scri} \equiv Y^j \mathcal{E}_{ij}|_{\scri}= 0 \\ & \quad\Longleftrightarrow\quad (\widetilde {\mathcal{H}}^{\alpha\beta}\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\mu\nu\alpha\beta})|_{\scri} =0 \;. \end{align*} For the sake of consistency, we check that these conditions hold if and only if the spacetime is asymptotically KdS-like. Indeed, \begin{eqnarray} 0 \,=\, Y^l Y_{[j} \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{k]tlt}|_{\scri} \,=\, Y^l Y_{[j}D_{k]l} - i \sqrt{\frac{3}{\Lambda}} Y^l Y_{[j} \widehat C_{k]l} \;, \label{asmpt_KdS} \end{eqnarray} holds if and only if $Y^{k}$ is an eigenvector of both $D_{jk}$ and $\widehat{C}_{jk}$, or in other words if and only if \begin{eqnarray} Y^k D_{jk} = |Y|^{-2} Y^k Y^l D_{kl} Y_{j}\;, \quad Y^k \widehat C_{jk} = |Y|^{-2} Y^k Y^l \widehat C_{kl} Y_{j} \;, \label{asmpt_KdS2} \end{eqnarray} which are precisely the conditions defining asymptotically KdS-like spacetimes in Definition \ref{asympt_KdS_like}.
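Let us spell out the equivalence used here: for $Y^{k}\neq 0$ and any symmetric spatial tensor $M_{jk}$, \begin{equation*} Y^{l}Y_{[j}M_{k]l}=0 \quad\Longleftrightarrow\quad Y^{l}M_{kl}=\lambda\, Y_{k} \;\;\mbox{ for some function } \lambda \;, \end{equation*} since the antisymmetrized product of two spatial vectors vanishes if and only if they are proportional; contracting with $Y^{k}$ then fixes $\lambda = |Y|^{-2}Y^{k}Y^{l}M_{kl}$. Applying this separately to the real and imaginary parts $D_{jk}$ and $\widehat C_{jk}$ of \eq{asmpt_KdS} yields \eq{asmpt_KdS2}.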
Analogously, the divergent term in the evolution equations will be regular if and only if \begin{eqnarray} 0 \,=\, Y^l Y_{(j} \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{k)tlt}|_{\scri} &=& Y^l Y_{(j} \mathcal{E}_{k)l}|_{\scri} \nonumber \\ &=& Y^l Y_{(j}D_{k)l} - |Y|^{-2} Y^{m}Y^{n} D_{mn} Y_jY_k \nonumber \\ & - i & \sqrt{\frac{3}{\Lambda}} \Big( Y^l Y_{(j}\widehat C_{k)l} - |Y|^{-2} Y^{m}Y^{n} \widehat C_{mn}Y_jY_k\Big) \;, \phantom{xxx} \label{cond_ev} \end{eqnarray} which is automatically true in the asymptotically KdS-like setting as follows from \eq{asmpt_KdS2}. In summary, the evolution equations (\ref{evol3}) for ${\cal S}_{\mu\nu\sigma}{}^{\rho}=\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\mu\nu\sigma}{}^{\rho}$ constitute a symmetric hyperbolic system in the unphysical spacetime with a right-hand side of the form \begin{equation} \mathcal{J}(\widetilde{\mathcal{T}}^{(\mathrm{ev})})_{\alpha\beta\mu} = \frac{1}{\Theta} \mathcal{N}(\widetilde{\mathcal{T}}^{(\mathrm{ev})})_{\alpha\beta\mu} \end{equation} where ${\mathcal N}(\widetilde{\mathcal{T}}^{(\mathrm{ev})})$ denotes the linear map ${\mathcal N}(\widetilde{\mathcal{T}}^{(\mathrm{ev})})_{\alpha\beta\mu} = {\mathcal N}_{\alpha\beta\mu}{}^{\rho \nu \sigma}{}_{\kappa} \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\rho\nu\sigma}{}^{\kappa}$, with ${\mathcal N}_{\alpha\beta\mu}{}^{\rho \nu \sigma}{}_{\kappa}$ a tensor field which is smooth up to and including $\scri$, at least in some neighborhood of $\scri$, cf.\ Remark~\ref{rem_non_reg}. Equations with such divergent terms are called {\it Fuchsian} in the literature. We state the existence and properties of the evolution equations for $\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\rho\sigma\mu}{}^{\nu}$ as a lemma. \begin{lemma} \label{lemma_evolution} The rescaled MST $\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\rho\sigma\mu}{}^{\nu}$ with $Q=Q_{\mathrm{ev}}$ satisfies a symmetric hyperbolic, linear, homogeneous Fuchsian system of evolution equations near $\scri$.
\end{lemma} \begin{remark} \label{rem_non_reg} {\rm As already discussed in Remark~\ref{rem_reg}, it is not clear that the evolution system remains regular outside some neighborhood of $\scri$, in its whole domain of dependence. } \end{remark} \subsubsection{A wave equation satisfied by the (rescaled) MST} We now recall that \eq{d1T} holds in particular for $\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\alpha\beta\lambda}{}^{\mu}$, and with tildes on all quantities. Application of $\widetilde\nabla^{\sigma}$ yields, together with \eq{unphys_ev}, the linear, homogeneous wave equation \begin{eqnarray} \Box_{\widetilde g} \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\alpha\beta\mu\nu} &=& - 2\widetilde\nabla^{\sigma}\widetilde\nabla_{[\mu}\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\nu]\sigma\alpha\beta} - i\widetilde\mbox{$\eta$}_{\mu\nu\sigma}{}^{\rho}\widetilde\nabla^{\sigma}\mathcal{J}(\widetilde{\mathcal{T}}^{(\mathrm{ev})})_{\alpha\beta\rho} \\ &=& - 2\widetilde\nabla_{[\mu} \mathcal{J}(\widetilde{\mathcal{T}}^{(\mathrm{ev})})_{|\alpha\beta|\nu]} - i\widetilde\mbox{$\eta$}_{\mu\nu}{}^{\sigma\rho}\widetilde\nabla_{\sigma}\mathcal{J}(\widetilde{\mathcal{T}}^{(\mathrm{ev})})_{\alpha\beta\rho} - 2\widetilde R_{\kappa [\mu }\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{|\alpha\beta|\nu]}{}^{\kappa} \nonumber \\ && - 2\widetilde R_{\alpha\kappa[\mu }{}^{\sigma}\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\nu]\sigma\beta}{}^{\kappa} + 2\widetilde R_{\beta\kappa[\mu }{}^{\sigma}\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\nu]\sigma\alpha}{}^{\kappa} + \widetilde R_{\mu \nu\sigma}{}^{\kappa}\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\alpha\beta\kappa}{}^{\sigma} \;.
\label{wave_eqn_general} \end{eqnarray} Of course, the same reasoning can be applied to $\mathcal{S}^{(\mathrm{ev})}_{\alpha\beta\gamma\delta}$, and we are led to the following \begin{lemma} \label{wave_eqn_MST} \begin{enumerate} \item[(i)] The MST $\mathcal{S}^{(\mathrm{ev})}_{\mu\nu\sigma\rho}$ satisfies, for any sign of the cosmological constant $\Lambda$, a linear, homogeneous system of wave equations in $({\cal M},g)$. \item[(ii)] The rescaled MST $\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\mu\nu\sigma\rho}$ satisfies, for any sign of the cosmological constant $\Lambda$, a linear, homogeneous system of wave equations in $(\widetilde{{\cal M}\enspace}\hspace{-0.5em} ,\widetilde g)$. \end{enumerate} \end{lemma} Some care is needed concerning the regularity of the coefficients in these wave equations, cf.\ Remark~\ref{rem_reg}. It follows from \eq{sym_hyp1} that \eq{wave_eqn_general} is a linear, homogeneous wave equation \emph{of Fuchsian type} at $\scri$. Indeed, using adapted coordinates $(x^0,x^i)$ and imposing the asymptotic gauge condition \eq{asymp_gauge}, a more careful calculation which uses \eq{KVF_scri}, \eq{exp_theta}, \eq{H_asymp_gauge1}-\eq{H_asymp_gauge3} and \eq{HT_relation} shows (note that $\mathcal{E}_{ij} \equiv \widetilde{\mathcal{T}}^{(\mathrm{ev})}_{titj}$ encompasses all independent components of the rescaled MST), \begin{eqnarray} \Box_{\widetilde g} \mathcal{E}_{ij} &=& 3 \sqrt{\frac{\Lambda}{3}} |Y|^{-4} \Theta^{-1}\Big( (Y_iY_j)_{\mathrm{tf}} Y^kY^l \widetilde\nabla_{t}{\mathcal{E}}_{kl} - \frac{1}{2}|Y|^2 ( Y^k Y_{(i} \widetilde\nabla_{|t|} \mathcal{E}_{j)k} )_{\mathrm{tf}} \Big) \nonumber \\ && - \Lambda |Y|^{-4} \Theta^{-2}\Big( (Y_iY_j)_{\mathrm{tf}} Y^kY^l \mathcal{E}_{kl} - \frac{1}{2} |Y|^{2} ( Y^kY_{(i} \mathcal{E}_{j)k})_{\mathrm{tf}} \Big) \nonumber \\ && + i \sqrt{\frac{\Lambda}{3}} |Y|^{-4} \Theta^{-1}\widehat\mbox{$\eta$}_{(i}{}^{lm} \Big( \frac{1}{2}|Y|^2 Y^kY_{|l} \widetilde\nabla_{m|} \mathcal{E}_{j)k} - \frac{1}{2}|Y|^2 Y^kY_{|l|}
\widetilde\nabla_{j)} \mathcal{E}_{mk} \nonumber \\ && - 3 Y_{j)}Y_l Y^kY^n \widetilde\nabla_{m} \mathcal{E}_{kn} + |Y|^2Y_{j)} Y^k \widetilde\nabla_{m} \mathcal{E}_{kl} \Big) \nonumber \\ && + (O(1) \widetilde\nabla{\mathcal{E}})_{ij} + (O(\Theta^{-1}){ \mathcal{E}})_{ij} \;. \end{eqnarray} \subsection{Uniqueness for the evolution equation for $\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\rho\sigma\mu}{}^{\nu}$ } Existence of solutions of quasilinear symmetric hyperbolic Fuchsian systems with prescribed asymptotics at $\scri$ has been analyzed in the literature mainly in the analytic case. For the merely smooth case, there exist results by Claudel \& Newman \cite{Claudel}, Rendall \cite{Rendall}, and more recently by Ames {\it et al.} \cite{Ames1,Ames}. The results in these papers involve a number of algebraic requirements, as well as global conditions in space. It is an interesting problem to see whether any of these results can be adapted to our setting here, in particular in order to prove a localized existence result in which an appropriate singular behavior of $\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\rho\sigma\mu}{}^{\nu}$ is prescribed on some domain $B$ of $\scri^{-}$ and existence and uniqueness of a corresponding solution is shown in the domain of dependence of $B$. This would also require studying the impact of the constraint equations and their preservation under evolution. For the purposes of this section, where we aim to show that the necessary conditions listed in items (i) and (ii) of Theorem \ref{thm_nec_cond} for the vanishing of the rescaled MST tensor $\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\rho\sigma\mu}{}^{\nu}$ in a neighborhood of $\scri$ are also sufficient, we merely need a localized uniqueness theorem for the evolution system with trivial initial data. We state and prove such a result in a more general context by adapting some of the ideas in \cite{Ames1}.
Then we show that this result applies to the evolution system satisfied by $\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\rho\sigma\mu}{}^{\nu}$. \subsubsection{A localized uniqueness theorem for symmetric hyperbolic Fuchsian systems} \label{sec_uniqueness} Let $({\cal M},\tilde{g})$ be an $(n+1)$-dimensional spacetime and $\scri$ a spacelike hypersurface. Choose coordinates in a neighborhood of $\scri$ so that $\scri = \{ t=0 \}$, and the metric is such that $\tilde{g}_{tt}=-1$ and $\tilde{g}_{ti}=0$, $i=1,\cdots, n$. Let us consider the first-order, homogeneous, linear symmetric hyperbolic system of PDEs \begin{equation} A^t \partial_t u + A^i \partial_i u + \frac{1}{t} N u = 0 \label{PDE} \end{equation} where $u : {\cal M} \to \mathbb{C}^m$ is the unknown, and $A^t$, $A^i$ and $N$ are $m \times m$ matrices which depend smoothly on the spacetime coordinates $(t,x^i)$. We assume that $\mathbb{C}^m$ is, at each spacetime point $(t,x)$, endowed with a positive definite sesquilinear product $\left \langle u, v \right \rangle$ such that the endomorphisms $A^{\mu}(t,x)$, for $\mu = t, i$, are Hermitian with respect to this product, \begin{equation*} \left \langle u, A^{\mu} v \right \rangle = \left \langle A^{\mu} u, v \right \rangle. \end{equation*} Define $N_0 (x):= N(t=0,x)$. Our main assumption is that $A^t + N_0$ is strictly positive definite with respect to $\left \langle \, , \,\right \rangle$ at every point $p \in \scri$. Since both the inner product and $N$ depend smoothly on $t$ and $x$, the same holds in a sufficiently small spacetime neighborhood of any point $p \in \scri$.
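The role of the positivity assumption on $A^{t}+N_{0}$ is already visible in the scalar, space-independent model of \eq{PDE}: with constants $a>0$ and $\nu\in\mathbb{R}$ in place of $A^{t}$ and $N$, the general solution of \begin{equation*} a\,\partial_{t}u+\frac{\nu}{t}\,u=0 \end{equation*} is $u(t)=C\,t^{-\nu/a}$. For $C\neq 0$, a $C^{1}$ solution with $u(0)=0$ requires $-\nu/a\geq 1$, i.e.\ $a+\nu\leq 0$; hence the positivity of $a+\nu$ (the analogue of our main assumption) forces $C=0$, i.e.\ $u\equiv 0$. This is the mechanism behind the uniqueness result below.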
The domain of dependence of (\ref{PDE}) is defined in the usual way (namely, the standard definition in terms of causal curves, with causality at $T_p {\cal M}$ being defined as $w \in T_p {\cal M}$ being future timelike (causal) iff $w^t > 0$ and $A^\mu|_{p} w_{\mu}$ is negative definite (semidefinite), where $w_{\mu}$ is obtained from $w$ by lowering indices with respect to $\widetilde{g}$). We also make the assumption that $w = \partial_t$ is future timelike in this sense. We want to prove that any solution of the PDE (\ref{PDE}) with trivial initial data on a domain $B \subset \scri$ vanishes identically in the domain of dependence of $B$, denoted by $D(B)$. More precisely: \begin{lemma} \label{UniquenessLemma} Let $B \subset \scri$ be a domain with compact closure. Let $u$ be a $C^1$ map $u :{\cal M} \to \mathbb{C}^m$ which vanishes on $B$ and solves (\ref{PDE}). If $A^t$ and $A^t + N_0$ are positive definite, then $u$ vanishes on $D(B)$. \end{lemma} \begin{proof} The proof is adapted from the basic energy estimate Lemma 2.7 in \cite{Ames1}. Since our setup is simpler, we can use a domain of dependence-type argument, instead of a global-in-space argument as in \cite{Ames1,Ames}. First note that since $u$ vanishes at $t=0$, and it is $C^1$, \begin{equation*} \partial_t u |_{t=0,x} = \lim_{t\rightarrow 0} \frac{u(t,x)}{t} =: u_1 \end{equation*} with $u_1: \scri \to \mathbb{C}^m$ continuous. Taking the limit of (\ref{PDE}) as $t\rightarrow 0$ with $(0,x) \in B$ and using $u(0,x)=0$, it follows that \begin{equation*} (A^t + N_0 ) u_1 =0 \quad \quad \Longrightarrow \quad \quad u_1 =0 \;, \end{equation*} because $A^t + N_0$ is positive definite, and hence has trivial kernel. Let us consider the real quantity \begin{equation*} {\cal Z}^{\mu} := e^{-kt} \left \langle \frac{u}{t}, A^{\mu} \frac{u}{t} \right \rangle \;, \quad \quad k \in \mathbb{R} \;, \end{equation*} and consider its (coordinate) divergence.
Since the product $\left \langle \, , \, \right \rangle$ depends on the spacetime point, we denote by $\left \langle\,, \, \right \rangle_{_{\mu}}$ the sesquilinear form (at each spacetime point) defined by \begin{equation*} \partial_{\mu} \left \langle u,v \right \rangle = \left \langle \partial_{\mu} u, v \right \rangle + \left \langle u, \partial_{\mu} v \right \rangle + \left \langle u,v \right \rangle_{\mu}, \quad \quad \forall u,v \in \mathbb{C}^m. \end{equation*} It follows \begin{align} \partial_{\mu} {\cal Z}^{\mu} = & - \frac{e^{- k t}}{t^2} \Big ( k + \frac{2}{t} \Big ) \left \langle u, A^t u \right \rangle + \frac{e^{-kt}}{t^2} \Big ( 2 \left \langle u, A^{\mu} \partial_{\mu} u \right \rangle + \left \langle u, (\partial_{\mu} A^{\mu} ) u \right \rangle + \left \langle u, A^{\mu} u \right \rangle_{\mu} \Big ) \nonumber \\ = & - \frac{2}{t^3} e^{-kt} \left \langle u, (A^t + N ) u \right \rangle \nonumber \\ & + e^{-kt} \Big ( \left \langle \frac{u}{t}, \left ( - k A^t + \partial_{\mu} A^{\mu} \right ) \frac{u}{t} \right \rangle + \left \langle \frac{u}{t}, A^{\mu} \frac{u}{t} \right \rangle_{\mu} \Big ) \;, \label{est} \end{align} where in the first equality we have used that $A^{\mu}$ is Hermitian w.r.t.\ $\left \langle \, , \, \right \rangle $ and in the second equality we have used the fact that $u$ satisfies \eq{PDE}. Consider now a domain $V \subset \mathcal M$ bounded by three smooth hypersurfaces-with-boundary $B \subset \scri$, $B_T \subset \{t=T\}$ and $\Sigma$, whose union is a compact topological hypersurface. Note that $B$ and $B_T$ are spacelike (i.e.\ their normal is timelike). We choose $V$ so that $\Sigma$ is achronal and that its outward normal (defined as the normal one-form which contracted with any outward directed vector is positive) is past causal. Consider the domain $V_{\epsilon} = V \cap \{ t \geq \epsilon \}$ for $\epsilon >0$ small enough.
The boundary splits as $\partial V_{\epsilon} = B_{\epsilon} \cup \Sigma_{\epsilon} \cup B_T$, with obvious notation. We integrate $\partial_{\mu} {\cal Z}^{\mu}$ on $V_{\epsilon}$ with respect to the spacetime volume form $\boldmath{\eta} = F dt dx$ and use the Gauss identity. Denote by $n$ an outward normal to $\partial V$, then \begin{align*} \int_V \left ( \partial_{\mu} {\cal Z}^{\mu} \right ) F dt dx & = \int_V \partial_{\mu} ({\cal Z}^{\mu} F) dt dx - \int_V (\partial_{\mu} F) {\cal Z}^{\mu} dt dx \\ & = \int_{\partial V} {\cal Z}^{\mu} n_{\mu} dS - \int_V (\partial_{\mu} F ) {\cal Z}^{\mu} dt dx \;, \end{align*} where $dS$ is the induced volume form on $\partial V$ corresponding to the choice of normal $n$. Note in particular that, as a vector, $n^{\mu}$ points {\it inwards} both on $B_{\epsilon}$ and on $B_T$, so they can be taken simply to be $n = \partial_t$ on $B_{\epsilon}$ and $n = - \partial_t$ on $B_T$, i.e. $\boldmath{n} = -dt$ on $B_{\epsilon}$ and $\boldmath{n} = dt$ on $B_T$. Inserting (\ref{est}) and splitting the integral at the boundary in three pieces yields \begin{align*} \int_{B_{T}} &e^{-k T} \left \langle \frac{u}{T}, A^t \frac{u}{T} \right \rangle dS - \int_{B_{\epsilon}} e^{-k \epsilon} \left \langle \frac{u}{\epsilon}, A^t \frac{u}{\epsilon} \right \rangle dS = \\ & - \int_{\Sigma_{\epsilon}} e^{-k t} \left \langle \frac{u}{t}, (A^{\mu} n_{\mu}) \frac{u}{t} \right \rangle dS \\ & - \int_V \frac{2}{t^3} e^{-kt} \left \langle u, (A^t + N ) u \right \rangle \boldmath{\eta} \\ & + \int_V e^{-kt} \left [ \left \langle \frac{u}{t}, \left ( - k A^t + (\partial_{\mu} F) A^{\mu} + \partial_{\mu} A^{\mu} \right ) \frac{u}{t} \right \rangle + \left \langle \frac{u}{t}, A^{\mu} \frac{u}{t} \right \rangle_{\mu} \right] \boldmath{\eta} := I_{V_{\epsilon}} \;. \end{align*} The matrix $A^{\mu} n_{\mu}$ on $\Sigma_{\epsilon}$ is positive semidefinite because $n$ is past causal. 
We now choose $T$ small enough so that $A^t + N$ is positive definite on $V$ and $k$ large enough so that the last term in $I_{V_{\epsilon}}$ is negative (recall that $V$ has compact closure). Thus we have $I_{V_{\epsilon}} \leq 0$ and in fact strictly negative unless $u=0$. Thus \begin{align*} \int_{B_{T}} e^{-k T} \left \langle \frac{u}{T}, A^t \frac{u}{T} \right \rangle dS - \int_{B_{\epsilon}} e^{-k \epsilon} \left \langle \frac{u}{\epsilon}, A^t \frac{u}{\epsilon} \right \rangle dS \leq 0 \;. \end{align*} We now take the limit $\epsilon \rightarrow 0$ and use the fact that $\frac{u}{\epsilon} \rightarrow u_1 =0 $ to conclude \begin{align*} \int_{B_{T}} e^{-k T} \left \langle \frac{u}{T}, A^t \frac{u}{T} \right \rangle dS \leq 0 \;. \end{align*} Since the product $\left \langle\, , \, \right \rangle$ is positive definite, it follows $u=0$ on $B_T$. As a~consequence $I_V=0$ which implies $u=0$ on $V$. It is clear that the domain of~dependence $D(B)$ can be exhausted by such $V$'s, so $u=0$ on $D(B)$ as claimed. \hfill $\Box$ \medskip \end{proof} \subsubsection{Application to the Fuchsian system satisfied by $\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\rho\sigma\mu}{}^{\nu}$ } In this section we show that the symmetric hyperbolic evolution system (\ref{evol3}) for ${\cal S}_{\rho\sigma\mu}{}^{\nu}=\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\rho\sigma\mu}{}^{\nu}$ satisfies all the conditions of Lemma~\ref{UniquenessLemma} in the unphysical space-time and conclude that the unique solution with vanishing data at $\scri^{-}$ is trivial and hence, since the system is linear and homogeneous, we also get uniqueness of all solutions given regular initial data at $\scri^{-}$. We choose coordinates $\{t,x^{i}\}$ on a neighborhood of $\scri^{-}$ satisfying $\tilde{g}_{tt}=-1$, $\tilde{g}_{ti}=0$, and $\scri^{-} = \{ t = 0\}$. 
The induced metrics on the hypersurfaces $\Sigma_{t}$ of constant $x^{0}=t$ are denoted by $h^t$, with corresponding volume forms $\widehat\mbox{$\eta$}^{t}$. Rewriting \eq{ev_eqns2} by moving all the Christoffel symbol terms to the right-hand side and using \eq{exp_theta} we have \begin{align} \partial_{t} {\cal E}_{ij}+i \widehat\mbox{$\eta$}_{(j}{}^{lk}\partial_{l}{\cal E}_{i)k} = & -\frac{1}{t} \frac{1}{|Y|^4} \Big ( 3 Y_i Y_j Y^k - \frac{1}{2} h_{ij} |Y|^2 Y^k - \frac{3}{2} |Y|^2 Y_{(i} \delta^k_{j)} \Big ) Y^l {\mathcal E}_{lk} \nonumber \\ & + (O(1) {\mathcal E})_{ij} \label{complex} \;. \end{align} The unknown is the complex symmetric and trace-free tensor ${\mathcal E}_{ij}$ introduced in \eq{e-m}. The system (\ref{complex}) is of the form (\ref{PDE}) with $u= \{ {\cal E}_{ij} \} \in \mathbb{C}^{5}$, $A^t = \mbox{Id}_{5}$ and \begin{align*} A^{l}{}_{ij}^{nk}=i\mbox{$\eta$}^{tl(k}{}_{(j}\delta^{n)}_{i)}. \end{align*} Take the sesquilinear product $\left \langle \, , \, \right \rangle$ defined by $\langle {\cal E}, \widehat{\cal E} \rangle ={\cal E}_{ij} \widehat{\overline{\cal E}}{}^{ij}$ (indices lowered and raised with $h^t$), which is obviously positive definite -- its norm leading to the density \eq{s-e2}. It is straightforward to check that $A^{\mu}$ is Hermitian with respect to this product and $A^t$ is obviously positive definite. It remains to check that $A^t + N_0 = \mbox{Id}_{5}+N_{0}$ is positive definite. The endomorphism $N_0$ is, from (\ref{complex}), \begin{align*} N_0({\cal E})_{ij} := \frac{1}{|Y|^4} \Big ( 3 Y_i Y_j Y^k - \frac{1}{2} h_{ij} |Y|^2 Y^k - \frac{3}{2} |Y|^2 Y_{(i} \delta^k_{j)} \Big ) Y^l {\cal E}_{lk} \end{align*} so that $$ \left \langle {\cal E}, (\mbox{Id}_{5}+N_{0}){\cal E} \right \rangle = {\cal E}_{ij} \overline{\cal E}^{ij}+\frac{3}{|Y|^4}\overline{\cal E}^{ij}Y_{i}Y_{j}{\cal E}_{kl}Y^{k}Y^{l}- \frac{3}{2}\frac{1}{|Y|^{2}}Y_{i}\overline{\cal E}^{ik}Y^l {\cal E}_{lk}.
$$ To see if this has a sign we introduce the following objects orthogonal to $Y^{i}$ \begin{align*} c_{ij} &:= {\cal E}_{ij}-\frac{2}{|Y|^{2}} Y_{(i}c_{j)}-\frac{1}{|Y|^{4}}Y_{i}Y_{j}{\cal E}_{kl}Y^{k}Y^{l}, \\ c_{i}&:= Y^{k}{\cal E}_{ki}-\frac{1}{|Y|^{2}} Y_{i} {\cal E}_{kl}Y^{k}Y^{l} \end{align*} and the previous expression can be rewritten as $$ \left \langle {\cal E}, (\mbox{Id}_{5}+N_{0}){\cal E} \right \rangle = c_{ij}\overline{c}^{ij} +\frac{1}{2}\frac{1}{|Y|^{2}}c_{i}\overline{c}^{i} +\frac{5}{2}\frac{1}{|Y|^4}{\cal E}_{kl}Y^{k}Y^{l} \overline{\cal E}_{ij}Y^{i}Y^{j} $$ which is manifestly positive definite. We have thus proven \begin{Lemma} \label{UniquenessT} Let $({\cal M},\widetilde{g})$ be a spacetime admitting a smooth conformal compactification. If $({\cal M},\widetilde{g})$ admits a Killing vector field for which the rescaled MST $\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\mu\nu\sigma}{}^{\rho}$ vanishes at $\scri^{-}$, then $\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\mu\nu\sigma}{}^{\rho}$ vanishes in a neighborhood of $\scri^{-}$. \end{Lemma} The characteristics of the symmetric hyperbolic system \eq{complex} coincide with those of the propagational part of the Bianchi equation, and are computed and discussed in \cite[Section 4]{F1}. It is shown there that they form null \emph{and timelike} hypersurfaces. \subsection{Main result} We end up with the following main result: \begin{theorem} \label{first_main_thm} Consider a spacetime $({\cal M},g)$, or rather its conformally rescaled counterpart, in wave map gauge \eq{gauge_conditions_compact},% \footnote{ \label{footnote_gauge} In fact, it suffices if $\widetilde R$ and $\widetilde W^{\sigma}$, including certain transverse derivatives thereof, vanish on $\scri^-$. Moreover, a corresponding result must also hold for non-vanishing gauge source functions. We leave it to the reader to work this out. 
} solution to Einstein's vacuum field equations with $\Lambda>0$, which admits a smooth conformal extension through $\scri^-$ and which contains a KVF $X$. Denote by $h$ the Riemannian metric induced by $\widetilde g=\Theta^2 g$ on $\scri^-$, and by $Y$ the CKVF induced by $X$ on $\scri^-$. Then, there exists a function $Q$, namely $Q=Q_0$ ($=Q_{\mathrm{ev}}$ for an appropriate choice of the $\sigma$-constant), for which the MST $\mathcal{S}_{\mu\nu\sigma}{}^{\rho}$ corresponding to $X$ vanishes in the domain of dependence of $\scri^-$ if and only if the following relations hold: \begin{enumerate} \item[(i)] $ \widehat C_{ij} = \sqrt{\frac{\Lambda}{3}}C_{\mathrm{mag}}|Y|^{-5}(Y_iY_j -\frac{1}{3}|Y|^2 h_{ij})$ for some constant $C_{\mathrm{mag}}$, where $\widehat C_{ij}$ is the Cotton-York tensor of the Riemannian 3-manifold $(\scri^-, h)$, and \item[(ii)] $D_{ij} = \widetilde d_{titj}|_{\scri^-} =C_{\mathrm{el}} |Y|^{-5} (Y_iY_j -\frac{1}{3}|Y|^2 h_{ij})$ for some constant $C_{\mathrm{el}}$, where $\widetilde d_{\mu\nu\sigma}{}^{\rho}$ is the rescaled Weyl tensor of the unphysical spacetime $(\widetilde{{\cal M}\enspace}\hspace{-0.5em} ,\widetilde g)$. \end{enumerate} \end{theorem} \begin{proof} Theorem~\ref{thm_nec_cond} shows that (i) and (ii) are necessary conditions. Conversely, if (i) and (ii) hold, it follows from Theorem~\ref{prop_Qs} and Theorem~\ref{thm_nec_cond} that there exists a choice of the $\sigma$-constant $a$ in $Q_{\mathrm{ev}}$ for which the rescaled MST $\widetilde{\mathcal{T}}^{(\mathrm{ev})}_{\alpha\beta\mu\nu}$ vanishes on $\scri^-$. It then follows from Lemma~\ref{UniquenessT} that it vanishes in some neighborhood of $\scri^-$. However, once we know that the MST vanishes in such neighborhood, it follows from the results in \cite{mars_senovilla} that the metric has to take one of the local forms given there. But for all these metrics the MST vanishes globally. 
So we can conclude that it vanishes in fact in the whole domain of dependence of $\scri^-$. \hfill $\Box$ \medskip \end{proof} Recall the Definition~\ref{KdS_like} of Kerr-de Sitter-like spacetimes. Lemma~\ref{UniquenessT} and Theorem~\ref{first_main_thm} lead to the following characterization of KdS-like space-times: \begin{corollary} \label{cor_charact_KdS} Let $({\cal M},g)$ be a $\Lambda>0$-vacuum spacetime admitting smooth conformal compactification and corresponding null infinity $\scri$ as well as a KVF $X$. Then the following statements are equivalent: \begin{enumerate} \item[(i)] $({\cal M},g)$ is Kerr-de Sitter-like at a connected component $\scri^{-}$. \item[(ii)] There exists a function $Q$ such that the MST associated to $X$ vanishes in the domain of dependence of $\scri^{-}$. \item[(iii)] The CKVF $Y$ induced by $X$ on $\scri^-$ satisfies the conditions (i) and (ii) of Theorem~\ref{first_main_thm}. \end{enumerate} \end{corollary} Reformulated in terms of an asymptotic Cauchy problem Theorem~\ref{first_main_thm} becomes Theorem~\ref{first_main_thm2} given in the Introduction. \section{A second conformal Killing vector field} \label{sec_CKVFs} \subsection{Existence of a second conformal Killing vector field} \label{second_KVF} It follows from \cite[Theorem 4]{mars_senovilla} that (here overbars mean ``complex conjugate of'') \begin{eqnarray} \varsigma^{\mu} \,=\, \frac{4}{|Q\mathcal{F}^2-4\Lambda|^2}X^{\sigma}\ol{\mathcal{F}}_{\sigma}{}^{\rho}\mathcal{F}_{\rho}{}^{\mu} +\mathrm{Re}\Big(\frac{\mathcal{F}^2}{(Q\mathcal{F}^2-4\Lambda)^2}\Big) X^{\mu} \label{varsigma} \end{eqnarray} is another KVF which commutes with $X$, supposing that $({\cal M},g)$ is a $\Lambda$-vacuum spacetime for which $Q$, $\mathcal{F}^2$ and $Q\mathcal{F}^2 -4\Lambda$ are not identically zero and whose MST vanishes w.r.t.\ the KVF $X$ (cf.\ \cite{mars, perjes} for the $\Lambda=0$-case). Note that $\varsigma$ may be trivial or merely a multiple of $X$. 
However, expression (\ref{varsigma}) can be taken as definition of a vector field $\varsigma$ in any $\Lambda$-vacuum spacetime admitting a KVF $X$ and such that ${\mathcal F}^2 \neq 0$ and $Q \mathcal{F}^2 - 4 \Lambda \neq 0$. We take $Q=Q_0$ and assume that $({\cal M},g)$ admits a smooth $\scri$, but we do \emph{not} assume that the MST vanishes. A somewhat lengthy computation reveals that (recall that $f$ and $N$ denote divergence and curl of $Y$, respectively) \begin{eqnarray} \varsigma^{t} &=& -\frac{2}{|Q_0\mathcal{F}^2-4\Lambda|^2}\Theta^4h^{ij}\ol\sigma_{j}\mathcal{F}_{i t} +\mathrm{Re}\Big(\frac{\mathcal{F}^2}{(Q_0\mathcal{F}^2-4\Lambda)^2}\Big) \widetilde X^{t} \nonumber \\ &=& -\frac{1}{8\Lambda^2}\Theta^4h^{ij}\ol\sigma_{j}\mathcal{F}_{i t} +\frac{1}{16\Lambda^2}\mathrm{Re}(\mathcal{F}^2) \widetilde X^{t} +O(\Theta) \nonumber \\ &=&O(\Theta) \;, \\ \varsigma^{i} &=&\frac{2}{|Q_0\mathcal{F}^2-4\Lambda|^2} \Theta^4h^{ij}\Big( h^{ kl}\ol \sigma_l\mathcal{F}_{k j} +\ol \sigma_t\mathcal{F}_{ jt} \Big) +\mathrm{Re}\Big(\frac{\mathcal{F}^2}{(Q_0\mathcal{F}^2-4\Lambda)^2}\Big)\widetilde X^i \nonumber \\ &=&\frac{1}{8\Lambda^2} \Theta^4h^{ij}\Big( h^{ kl}\ol \sigma_l\mathcal{F}_{k j} +\ol \sigma_t\mathcal{F}_{jt} \Big) +\frac{1}{16\Lambda^2}\mathrm{Re}(\mathcal{F}^2)\widetilde X^i +O(\Theta) \\ &=& \frac{1}{8\Lambda^2}Y_k N^kN^i +\frac{1}{8\Lambda^2} Y^i \Big( -\frac{1}{2}|N|^2 +\frac{2}{9} f^2 -2 \widehat c\Big) \nonumber \\ && -\frac{1}{2\Lambda^2} |Y|^2 \Big(\widehat L^i{}_k Y^k + \frac{1}{3}\widehat \nabla^if \Big) + \frac{1}{12\Lambda^2} f\widehat\mbox{$\eta$}^{ikl}Y_kN_l +O(\Theta) \\ &=& \frac{1}{8\Lambda^2}Y_k N^k N^i +\frac{1}{2\Lambda^2} Y^i \Big( Y^kY^l\widehat L_{kl}+ \frac{1}{3}Y^k\widehat\nabla_k f \Big) \nonumber \\ && -\frac{1}{2\Lambda^2} |Y|^2 \Big(\widehat L^i{}_k Y^k + \frac{1}{3}\widehat \nabla^if \Big) + \frac{1}{12\Lambda^2} f\widehat\mbox{$\eta$}^{ikl}Y_kN_l +O(\Theta) \label{second_CKVF} \;, \end{eqnarray} where we used 
\eq{expansion_QF_Lambda}, \eq{expansion_sigma_t}, \eq{deriv_YN}, \eq{expansion_sigma_i} as well as the following relations: \begin{eqnarray} \mathrm{Re}(\mathcal{F}^2) &=& -\frac{4}{3}\Lambda|Y|^2 \Theta^{-2} -4Y^i\ol{\widetilde\nabla_t\widetilde\nabla_t\widetilde X_i} +\frac{4}{9} f^2 -\frac{4}{3}\Lambda c + O(\Theta) \;, \\ \mathcal{F}_{it} &=& \sqrt{\frac{\Lambda}{3}}Y_i\Theta^{-3} - \frac{i}{2}N_i \Theta^{-2} -\frac{1}{2} \sqrt{\frac{3}{\Lambda}} \Theta^{-1}\Big(\ol{\widetilde \nabla_t\widetilde \nabla_t\widetilde X_i}+ \frac{\widehat R}{2} Y_i\Big) +O(1) \;, \phantom{xx} \\ \mathcal{F}_{ij} &=& i\sqrt{\frac{\Lambda}{3}}\widehat \mbox{$\eta$}_{ijk} Y^k \Theta^{-3} +\frac{1}{2}\widehat\mbox{$\eta$}_{ijk}N^k\Theta^{-2} +O(\Theta^{-1}) \;, \\ \widetilde X^t &=& \frac{1}{3} \sqrt{\frac{3}{\Lambda}} f\Theta + O(\Theta^3) \;, \\ \widetilde X^i &=& Y^i + \frac{1}{2} \frac{3}{\Lambda} \ol{\widetilde\nabla_t\widetilde\nabla_t \widetilde X^i} \Theta^2 + O(\Theta^3) \;. \end{eqnarray} We conclude that the vector field $\varsigma$ is always tangential to $\scri$, and not just in the setting where the MST vanishes. Proceeding in the same manner as we did for the functions $\widehat c$ and $\widehat k$, we regard \eq{second_CKVF} as the definition of a vector field on some Riemannian 3-manifold: \begin{definition} \label{Defivarsigma} Let $(\Sigma,h)$ be a Riemannian 3-dimensional manifold which admits a CKVF $Y$. 
Then we define the vector field \begin{eqnarray} \widehat\varsigma^i(Y) &:=& \frac{9}{4}Y_k N^k N^i +9 Y^i \Big( Y^kY^l\widehat L_{kl}+ \frac{1}{3}Y^k\widehat\nabla_k f \Big) \nonumber \\ && - 9 |Y|^2 \Big(\widehat L^i{}_k Y^k + \frac{1}{3}\widehat \nabla^if \Big) + \frac{3}{2} f\widehat\mbox{$\eta$}^{ikl}Y_kN_l \;.\phantom{xx} \label{dfn_varsigma} \end{eqnarray} \end{definition} As in the case of $\widehat c$ and $\widehat k$, one can use a spacetime argument where $\Sigma$ is embedded as ``null infinity'' into a $\Lambda>0$-vacuum spacetime with vanishing MST to prove that $\widehat \varsigma(Y)$, which in that case reads $ \widehat\varsigma^i(Y)=18\Lambda^2\varsigma^i|_{\scri} $, is a (possibly trivial) CKVF which commutes with $Y$, supposing that $|Y|^2>0$ and (\ref{condition_on_C}) hold. Irrespective of that, one can raise the question under which conditions $\widehat\varsigma(Y)$ is a CKVF and under which conditions it commutes with $Y$. For this purpose let us compute the covariant derivative of $\widehat\varsigma$. 
A lengthy calculation which uses \eq{deriv_YN}, \eq{HessPsi}, \eq{HessY}, the conformal Killing equation for $Y$ as well as the relation \begin{equation} \widehat \nabla_i N_j \,=\,\frac{2}{3}\widehat\mbox{$\eta$}_{ij}{}^k\widehat\nabla_k f - \widehat\mbox{$\eta$}_j{}^{kl}\widehat R_{klim}Y^m \end{equation} gives \begin{eqnarray} \widehat\nabla_i\widehat\varsigma_j &=& 3\widehat\nabla_iY_j Y^k\widehat \nabla_{k}f + 3Y_j \widehat\nabla_iY^k\widehat \nabla_{k}f + 3Y^kY_j \widehat\nabla_i\widehat \nabla_{k}f - 6 Y^k \widehat\nabla_i Y_k \widehat \nabla_j f \nonumber \\ && + 3 Y^k \widehat \nabla_jY_k\widehat\nabla_if - 3|Y|^2 \widehat\nabla_i\widehat \nabla_j f + 3f\widehat\nabla_i Y^k \widehat \nabla_jY_k + 3f Y^k \widehat\nabla_i\widehat \nabla_jY_k \nonumber \\ && +\frac{9}{4} N_j\widehat\nabla_i( Y_k N^k ) +\frac{9}{4} Y_k N^k \widehat\nabla_i N_j - 2f Y_j \widehat\nabla_i f - f^2\widehat\nabla_i Y_j \nonumber \\ && + 9\widehat\nabla_iY_jY^kY^l\widehat L_{kl} + 18Y_jY^k\widehat\nabla_i Y^l\widehat L_{kl} + 9Y_jY^kY^l\widehat\nabla_i\widehat L_{kl} \nonumber \\ && - 18\widehat\nabla_iY_l Y^kY^l\widehat L_{jk} - 9|Y|^2\widehat\nabla_i Y^k\widehat L_{jk} - 9|Y|^2 Y^k\widehat\nabla_i\widehat L_{jk} \\ &=& \frac{3}{4}f|N|^2 h_{ij} -\frac{9}{2}\underbrace{Y_{[(i} N_k\widehat \nabla_{l]}f \widehat \mbox{$\eta$}_{j)}{}^{kl}}_{=\frac{1}{3} h_{ij}\widehat \mbox{$\eta$}^{klm}Y_kN_l\widehat\nabla_m f} - \frac{27}{2} \underbrace{ Y^m Y_{[j} N_k \widehat L_{l]m} \widehat \mbox{$\eta$}_{i}{}^{kl} }_{=\frac{1}{3} h_{ij}Y^mY_pN_k\widehat L_{lm}\widehat \mbox{$\eta$}^{pkl}} \nonumber \\ &&+ \frac{3}{2}\widehat \mbox{$\eta$}_{ijl}\Big( N^l Y^k\widehat \nabla_{k}f + Y_k N^k \widehat\nabla^l f +3 Y_m N^m Y^k \widehat L_{k}{}^l +3 N^l Y^kY^m\widehat L_{km} \nonumber \\ && - \frac{1}{3}f^2N^l \Big) + \frac{3}{2} \widehat \mbox{$\eta$}_{[i}{}^{kl}\Big( N_{j]} Y_k\widehat \nabla_lf - Y_{j]} N_k\widehat \nabla_{l}f -3 Y_k N_l\widehat\nabla_{j]}f \Big) \nonumber \\ && -2 fY_{[i} \widehat 
\nabla_{j]} f - 6fY^kY_{[i} \widehat L_{j]k} + 9 Y^m N_{l} Y_k\widehat L_{m[i}\widehat \mbox{$\eta$}_{j]}{}^{kl} \nonumber \\ && + 9Y_j\widehat \mbox{$\eta$}_{ik}{}^{l}\widehat C_{lp}Y^kY^p - 9|Y|^2\widehat \mbox{$\eta$}_{ik}{}^l\widehat C_{lj} Y^k \\ && + \frac{9}{2}\underbrace{\Big( 3Y^m N_{[k} Y_l\widehat L_{m]i}\widehat \mbox{$\eta$}_{j}{}^{kl} - Y_jN_k Y_l\widehat L_{mi} \widehat \mbox{$\eta$}^{mkl} \Big)}_{=0} \;. \label{deriv_varsigma} \end{eqnarray} Employing again the fact that $Y$ is a CKVF one shows that $Y$ and $\widehat\varsigma$ commute, \begin{eqnarray} [Y,\widehat\varsigma]_i &=& Y^j\widehat\nabla_j\widehat\varsigma_i - \widehat\varsigma^j\widehat\nabla_jY_i \\ &=& Y^j\widehat\nabla_j\widehat\varsigma_i + \frac{1}{2}\widehat \mbox{$\eta$}_{ijk}\widehat\varsigma^jN^k-\frac{1}{3} f\widehat\varsigma_i \\ &=& \frac{3}{4}f|N|^2 Y_i -\frac{9}{4} Y_i\widehat \mbox{$\eta$}^{klm}Y_kN_l\widehat\nabla_m f - \frac{9}{2} Y_iY^mY_pN_k\widehat L_{lm}\widehat \mbox{$\eta$}^{pkl} \nonumber \\ &&- \frac{1}{2}Y^j\widehat\mbox{$\eta$}_{ijl}\Big( -\frac{3}{2} N^l Y^k\widehat \nabla_{k}f -f^2N^l +\frac{9}{2} Y_k N^k \widehat\nabla^l f +9 Y_m N^m Y^k \widehat L_{k}{}^l \Big) \nonumber \\ && + \frac{3}{4}\widehat \mbox{$\eta$}_{i}{}^{kl} |Y|^2 N_k\widehat \nabla_{l}f + 3fY^jY^kY_{i} \widehat L_{jk} - 3f|Y|^2Y^k \widehat L_{ik} \nonumber \\ && + fY^jY_{i} \widehat \nabla_{j} f - f |Y|^2 \widehat \nabla_{i} f + \frac{1}{2}\widehat \mbox{$\eta$}_{ijk}\widehat\varsigma^jN^k -\frac{1}{3} f\widehat\varsigma_i \\ &=& \frac{9}{4}\Big( 3\widehat \mbox{$\eta$}_{i}{}^{jk}Y^l Y_{[j} N_k \widehat \nabla_{l]}f - Y_i\widehat \mbox{$\eta$}^{klm}Y_kN_l\widehat\nabla_m f\Big) \nonumber \\ && + \frac{9}{2}\Big(3\widehat \mbox{$\eta$}_{i}{}^{jk}Y^m N_{[j} Y_m \widehat L_{k]l} Y^l - Y_iY^mY_jN_k\widehat L_{lm}\widehat \mbox{$\eta$}^{jkl}\Big) \\ &=& 0 \end{eqnarray} \begin{lemma} Let $(\Sigma,h)$ be a Riemannian 3-manifold which admits a CKVF~ $Y$. 
Let the vector field $\widehat\varsigma(Y)$ be given by \eq{dfn_varsigma}. Then $$[Y,\widehat\varsigma]=0\;.$$ \end{lemma} We further deduce from \eq{deriv_varsigma} that \begin{equation} (\widehat\nabla_{(i}\widehat\varsigma_{j)})_{\mathrm{tf}} \,=\, 9Y_{(i}\widehat \mbox{$\eta$}_{j)k}{}^{l}\widehat C_{lp}Y^kY^p - 9|Y|^2\widehat \mbox{$\eta$}_{(i|k|}{}^l\widehat C_{j)l} Y^k \;, \end{equation} i.e.\ $\widehat\varsigma(Y)$ will be a (possibly trivial) CKVF if and only if \begin{eqnarray} Y_{(i}\widehat \mbox{$\eta$}_{j)k}{}^{l}\widehat C_{lp}Y^kY^p \,=\, |Y|^2\widehat \mbox{$\eta$}_{(i|k|}{}^l\widehat C_{j)l} Y^k \\ \Longleftrightarrow \quad (\widehat C_{lp} Y_{(i} -Y_p\widehat C_{l(i} ) \widehat \mbox{$\eta$}_{j)k}{}^{l}Y^kY^p \,=\, 0 \label{CKVF_condition} \;. \end{eqnarray} \begin{lemma} Let $(\Sigma,h)$ be a Riemannian 3-manifold which admits a CKVF $Y$. Then the vector field $\widehat\varsigma(Y)$, defined in \eq{dfn_varsigma}, is a (possibly trivial) CKVF if and only if \eq{CKVF_condition} holds. \end{lemma} \begin{remark} {\rm In particular \eq{CKVF_condition} is fulfilled supposing that \eq{condition_on_C} holds as one should expect from the results in \cite{mars_senovilla}. } \end{remark} \begin{remark} {\rm Observe that, from (\ref{cotton-york}), condition \eq{CKVF_condition} can be re-expressed as $$ Y_p Y^k \big( Y^p \widehat C_{(ij)k} - Y_{(i} \widehat C^p{}_{j)k}\big)=0 \;. $$ } \end{remark} \subsection{Properties of the KID equations} \label{app_KID_properties} In this section we study the case where the KID equations on a spacelike $\scri^-$ of a $\Lambda>0$-vacuum spacetime admit at least two solutions, as it is the case for e.g.\ Kerr-de Sitter, or, more generally, for spacetimes with vanishing MST \cite{mars_senovilla}. Recall the KID equations \cite{ttp2} \begin{eqnarray} {\mycal L}_Y D_{ij} + \frac{1}{3}D_{ij}\widehat \nabla_k Y^k &=&0 \label{reduced_KID2} \;. 
\end{eqnarray} Consider two CKVFs $Y$ and $\zeta$ on the Riemannian 3-manifold $(\scri^-,h)$ which are both assumed to solve the KID equations. Then their commutator, which is obviously a CKVF, provides another (possibly trivial) solution of the KID equations, \begin{equation} {\mycal L}_{[Y,\zeta]} D_{ij} + \frac{1}{3}D_{ij}\widehat\nabla_k[Y,\zeta]^k \,=\, 0 \;. \end{equation} This reflects the well-known fact that KVFs together with the commutator form a Lie algebra. Let us continue assuming that $Y$ and $\zeta$ are two CKVFs on $(\scri^-,h)$ which solve the KID equations, and let us further assume that $D_{ij}$ satisfies condition (\ref{condition_on_D}) (in particular, we assume $|Y|^2>0$). Then the KID equations \eq{reduced_KID2} for $\zeta$ can be written as (set $V:=[Y,\zeta]$ and assume $C_{\mathrm{el}} \ne 0 $) \begin{equation} 2Y_{(i} V_{j)} + (h_{ij}-5|Y|^{-2}Y_iY_j)Y_kV^k \,=\, 0 \;. \label{eqn_Y_Z} \end{equation} Contraction with $Y^j$ yields \begin{equation} 0\,=\, |Y|^2 V_{i} -3Y_i Y_jV^j \;. \label{contraction} \end{equation} Another contraction with $Y^i$ gives \begin{equation} Y^kV_{k} \,=\,0 \;. \end{equation} Inserting this into \eq{contraction} we find that \eq{eqn_Y_Z} is equivalent to \begin{equation} V\,=\, [Y,\zeta] \,=\, 0 \;. \end{equation} We have proven the following: \begin{lemma} \label{lemma_second_KVF} Let $(\scri^-,h_{ij})$ be a Riemannian 3-manifold which admits a CKVF $Y$ with $|Y|^2> 0$. Denote by $({\cal M},g_{\mu\nu})$ the $\Lambda>0$-vacuum spacetime constructed from the initial data $h_{ij}$ and $D_{ij}= C_{\mathrm{el}}|Y|^{-5}(Y_iY_j -\frac{1}{3}|Y|^2 h_{ij})$, $ C_{\mathrm{el}}\neq 0$. Then any other vector field on $(\scri^-,h_{ij})$ extends to a KVF of $({\cal M},g_{\mu\nu})$ if and only if it is a CKVF of $(\scri^-,h_{ij})$ which commutes with $Y$.
\end{lemma} \begin{remark} {\rm Note that the unphysical Killing equations imply that a KVF in the physical spacetime can be non-trivial if and only if the induced CKVF on $\scri$ is non-trivial (compare \cite{ttp2}). } \end{remark} \begin{remark} {\rm Assume that $\scri$ is conformally flat. Then it can be shown that there exists at least one independent CKVF $\zeta$ which commutes with $Y$. It then follows from Lemma~\ref{lemma_second_KVF} that the emerging spacetime admits at least two KVFs. This provides a simple proof that $\Lambda>0$-vacuum spacetimes with vanishing MST and conformally flat $\scri$ have at least two KVFs (cf.\ \cite[Theorem 4]{mars_senovilla}). } \end{remark} Moreover, we have the following \begin{proposition} \label{prop_2CKVF} Let $(\Sigma, h_{ij})$ be a Riemannian 3-manifold which admits a CKVF $Y$ with $|Y|^2>0$. Assume further that its Cotton-York tensor satisfies $\widehat C_{ij}=C|Y|^{-5}(Y_iY_j -\frac{1}{3}|Y|^2 h_{ij})$, $C=\mathrm{const}$. Then $(\Sigma, h_{ij})$ admits a second, independent CKVF $\zeta$ which commutes with $Y$. \end{proposition} \begin{proof} One more time we use a spacetime argument: There exists a $\Lambda>0$-vacuum spacetime $({\cal M}, g_{\mu\nu})$ with a KVF $X$ such that the associated MST vanishes, such that $(\Sigma, h_{ij})$ can be identified with past null infinity, and such that $X^i|_{\Sigma}=Y^i$. It follows from the classification results in \cite{mars_senovilla} that $({\cal M}, g_{\mu\nu})$ admits a second independent KVF which commutes with $X$, and which induces a CKVF on $\scri^-$ with the asserted properties. \hfill $\Box$ \medskip \end{proof} \begin{remark} {\rm The second CKVF $\zeta$ may or may not be $\varsigma$ as given in Definition \ref{Defivarsigma}. The statement of the Proposition is that, even when $\varsigma$ happens to be linearly dependent to $Y$, there is still another independent CKVF on $(\Sigma,h_{ij})$. 
} \end{remark} Proposition~\ref{prop_2CKVF} might be useful to classify Riemannian 3-manifolds which admit a CKVF which is related to the Cotton-York tensor via \eq{condition_C}. \vspace{1.2em} \noindent {\textbf {Acknowledgements}} MM acknowledges financial support under the projects FIS2012-30926, FIS2015-65140-P (Spanish MINECO-fondos FEDER) and P09-FQM-4496 (J. Andaluc\'{\i}a---FEDER). TTP acknowledges financial support by the Austrian Science Fund (FWF): P 24170-N16. JMMS is supported under grant FIS2014-57956-P (Spanish MINECO-fondos FEDER), GIU12/15 (Gobierno Vasco), UFI 11/55 (UPV/EHU) and by project P09-FQM-4496 (J. Andaluc\'{\i}a---FEDER). The research of WS was funded by the Austrian Science Fund (FWF): P 23337-N16.
\subsection{Introduction} Quantum metamaterials are hybrid systems consisting of arrays of qubits coupled to the photon modes of a cavity \cite{Macha,Astafiev,Rakhmanov, Fistul,ZKF,SMRU,Brandes,Zou}. In solid state structures the qubits are realized using nitrogen-vacancy (NV) centers in diamond \cite{nv-centers-0,nv-centers, nv-centers-1}, spins of $^{31}$P dopants in $^{28}$Si crystals \cite{Morton} or Cr$^{3+}$ ions in Al$_2$O$_3$ samples \cite{Schuster}, and superconducting Josephson qubits \cite{MSS, Orlando,mooij}. Among others, Josephson qubits are particularly promising for the implementation of quantum gates \cite{DiCarlo, MSS, Nation, Clarke} due to their high degree of tunability. The excitation frequency, given by the energy difference between the ground and excited states, can be tuned controllably over a wide range using the external magnetic flux threading the qubit loop. Modern technology allows for the production of metamaterial structures with sophisticated geometry and low decoherence. The high nonlinearity of the qubit excitation spectrum, combined with low decoherence, gives rise to unusual properties of quantum metamaterials, distinguishing them from linear-optical metastructures. These unusual features are associated with the intrinsic quantum dynamics of the qubit and photon degrees of freedom. They are revealed in the optical response of a metamaterial to a strong external pump field, which drives the system away from its ground state. A textbook example is the rotation on the Bloch sphere of the state of a single qubit subjected to an external field pulse. The well-understood solution for the dynamics of a single qubit is commonly used as a key building block in the mean-field description of complex metamaterials containing a number of qubits and cavity modes.
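This single-qubit building block can be made concrete in a few lines. The following sketch (our own toy example, with arbitrary parameter values) propagates a resonantly driven qubit in the rotating frame, where a drive of amplitude $f$ generates $H=(f/2)\sigma^x$ and a pulse of duration $t$ rotates the Bloch vector by the angle $ft$ about the $x$-axis.

```python
import numpy as np

# Toy example (not code from this work): resonant Rabi rotation of one qubit.
# In the rotating frame H = (f/2) sigma_x, and exp(-i a sigma_x) has the
# closed form cos(a) I - i sin(a) sigma_x.
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def evolve(psi, f, t):
    """Propagate psi for time t under H = (f/2) sigma_x (hbar = 1)."""
    a = f * t / 2
    U = np.cos(a) * np.eye(2) - 1j * np.sin(a) * sx
    return U @ psi

ground = np.array([1, 0], dtype=complex)      # |g>
pi_pulse = evolve(ground, f=1.0, t=np.pi)     # rotation angle f*t = pi
p_excited = abs(pi_pulse[1]) ** 2             # full population transfer
```

A $\pi$-pulse ($ft=\pi$) inverts the population, while a $\pi/2$-pulse prepares an equal superposition.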
Assuming no correlations between the qubits and photons, one arrives at a set of Maxwell-Bloch equations, which effectively describe qubits coupled to a classical field of the cavity and (or) external pump. This article is devoted to the role of quantum entanglement between the qubit and cavity modes of a superconducting metamaterial. Whereas it is generally clear that these correlation effects beyond the Maxwell-Bloch scheme are revealed in strong-driving regimes, their quantitative role in experimentally and technologically relevant situations has not yet been studied. At the same time, such a study is highly motivated by the development of quantum technology, because the realization of qubit gates and the operation of quantum simulators require driving fields of strength comparable to the qubit-cavity coupling energy. We argue that a quantitative description of the operation of realistic quantum metamaterials, which involves non-adiabatic and strong perturbations, requires taking quantum correlations into account. We present a study of a simple yet realistic model of the quantum interface, defined as a dissipative hybrid system containing a resonant qubit connected to the cavity mode and simultaneously subjected to a strong external field. We assume that the two-level system is a highly anharmonic flux qubit, i.e.\ a loop with several Josephson junctions, whose higher levels are not excited by the external driving. Hybridization between the qubit and the cavity mode provides a transfer of the pump photons to the cavity mode via qubit excitations. Therefore the internal qubit dynamics is fingerprinted in the cavity field, and can later be read out or transferred to another qubit. We describe the evolution of the many-body density matrix of the system using the Lindblad equation, and compare the results with those obtained using the Maxwell-Bloch approximation.
We observe that for a constant driving field the two approaches give almost the same results (that is, qubit-cavity correlations are negligible) up to a certain threshold value of the pump, $f^*$, which depends mainly on the relaxation rates in the cavity and in the qubit. For higher pump fields, the effect of correlations grows rapidly, making the Maxwell-Bloch approximation quite inaccurate. There is a remarkable artifact which follows from the Maxwell-Bloch approach, but is not present in the many-body description: a hysteresis in the photon number as a function of $f$. This behavior shows up in a certain range around the threshold $f^*$ if the coupling energy between the photons and the qubit is large enough. Furthermore, we find that non-adiabatic switching on of the driving, from zero to a certain value, reveals a discrepancy between the mean-field and exact solutions even for drivings below the steady-state threshold $f^*$. \section{Theoretical approach} \label{theory} The circuit quantum electrodynamics description of superconducting metamaterials of qubits coupled to a transmission line reduces to the Hamiltonian of the Tavis-Cummings model \cite{Carmichael}. In our analysis we start from the simpler situation of a single qubit coupled to a photon mode. The quantum mechanical description reduces to the well-known Jaynes-Cummings model, which is exactly integrable. Namely, the Hamiltonian of an isolated qubit-cavity system is $$H_{JC}=\omega_R a^+a + \epsilon \sigma^+ \sigma^- +g (a\sigma^+ + a^+\sigma^-).$$ The first term describes excitations in the photon mode of the resonator, where the bosonic operators $a,a^+$ obey the commutation rule $[a,a^+]=1$. The second term describes excitations in the qubit, where $\sigma^+ , \sigma^- $ are Pauli raising and lowering operators. The external transversal driving applied to the qubit is accounted for by $$ H_{ext}=\frac{f(t)}{2} \left(e^{-i\omega t}\sigma^+ + e^{i\omega t}\sigma^-\right) $$ where $f(t)$ is a slow envelope function and $\omega$ is a fast reference frequency.
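To make the model concrete, the Hamiltonian $H_{JC}+H_{ext}(t)$ can be assembled as a matrix on the truncated space $\mathbb{C}^2\otimes\mathbb{C}^{n_{\max}+1}$. The sketch below is our own illustration with arbitrary parameter values, not the code used for the results of this work.

```python
import numpy as np

# Illustrative construction (ours; parameter values are arbitrary) of the
# Jaynes-Cummings Hamiltonian plus drive, truncated at n_max photons.
n_max = 10
a = np.diag(np.sqrt(np.arange(1, n_max + 1)), k=1)   # photon annihilation op
sm = np.array([[0, 1], [0, 0]], dtype=complex)       # qubit lowering sigma^-

def H(t, omega_R=1.0, eps=1.0, g=0.05, f=0.02, omega=1.0):
    """H_JC + H_ext(t) as a matrix on C^2 (x) C^(n_max+1), lab frame."""
    A = np.kron(np.eye(2), a)
    Sm = np.kron(sm, np.eye(n_max + 1))
    Sp = Sm.conj().T
    H_jc = (omega_R * A.conj().T @ A + eps * Sp @ Sm
            + g * (A @ Sp + A.conj().T @ Sm))
    H_ext = 0.5 * f * (np.exp(-1j * omega * t) * Sp
                       + np.exp(1j * omega * t) * Sm)
    return H_jc + H_ext

H0 = H(0.3)   # Hermitian matrix of dimension 2 * (n_max + 1) = 22
```

The same matrices can then be fed to any master-equation integrator; for quantitative results the truncation $n_{\max}$ must of course be checked for convergence.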
In our study we restrict ourselves to a single non-adiabatic switching of the form \begin{equation} f(t)=f\theta(t). \label{f-t} \end{equation} The system under consideration is coupled to a dissipative environment; hence, we employ the Lindblad equation for the density matrix $\rho(t)$, written in the many-body basis of qubit and photon states. The Lindblad equation reads \begin{equation} i(\partial_t\rho(t) - \Gamma[\rho(t)])=[H(t),\rho(t)], \label{lindblad} \end{equation} where the full Hamiltonian is $$ H(t)=H_{JC}+H_{ext}(t) $$ and relaxations in the qubit and the cavity are taken into account by means of \begin{multline} \Gamma[\rho]=\frac{\kappa}{2}(2a\rho a^+ - a^+a \rho- \rho a^+a) + \\ +\frac{\Gamma_1}{2} (2\sigma^-\rho\sigma^+ - \sigma^+\sigma^-\rho- \rho\sigma^+\sigma^-). \end{multline} In our numerical solution we calculate $\rho(t)$ by direct integration of the Lindblad equation in a truncated Hilbert space. The approximate mean-field equations for the averages are derived from Eq.~(\ref{lindblad}) as well. Note that everywhere below we transform into the rotating frame associated with the main frequency $\omega$ of the driving signal, which is tuned in resonance with the cavity mode frequency, $\omega=\omega_R$. The full Hamiltonian $H(t)$ in (\ref{lindblad}), given by $H_{JC}+H_{ext}(t)$, in this $\omega$-rotating frame reads \begin{equation} H(t)= \Delta \sigma^+\sigma^- +g (a\sigma^+ + a^+\sigma^-)+\frac{f(t)}{2} \sigma^x, \label{h} \end{equation} where the qubit detuning is $\Delta=\epsilon-\omega_R$. From the Lindblad equation (\ref{lindblad}) we derive equations for the averages $\langle a \rangle, \langle a^+ \rangle, \langle \sigma^{\pm} \rangle,\langle \sigma^z \rangle$. This is done using the definitions of the averages; e.g., for $a$ the scheme reads \begin{equation} \partial_t \langle a (t) \rangle ={\rm Tr}( \partial_t\rho(t)\, a)={\rm Tr}\left( -i[H(t),\rho(t)]\,a+\Gamma[\rho(t)]\,a \right).
\end{equation} We apply the following standard mean-field approximation: we factorize the averages \begin{equation} \langle a\sigma^+ \rangle\rightarrow \langle a\rangle \langle \sigma^+ \rangle, \quad \langle a\sigma^z \rangle\rightarrow \langle a\rangle \langle \sigma^z \rangle, \label{factorization-m-b} \end{equation} which appear in the r.h.s. of the equations for $\langle a \rangle, \langle a^+ \rangle, \langle \sigma^{\pm} \rangle,\langle \sigma^z \rangle$. On the level of the density matrix this corresponds to the introduction of the reduced density matrices $\rho_q$ and $\rho_{ph}$ and the full one $\rho_{mf}=\rho_q\otimes \rho_{ph}$. The factorization (\ref{factorization-m-b}) is an approximation where we neglect correlations between the fluctuations in the qubit and the photon mode ($\delta\sigma^{\pm}, \delta\sigma^{z}$ and $\delta a, \delta a^+$): \begin{equation} \langle a\sigma^{z,\pm} \rangle=\langle a\rangle \langle \sigma^{z,\pm} \rangle+\langle \delta a\delta\sigma^{z,\pm}\rangle. \label{fluctuations-0} \end{equation} After the factorization (\ref{factorization-m-b}) we end up with the non-linear Maxwell-Bloch equations (we omit $\langle \cdot \rangle$ for brevity) \begin{equation} \partial_t a (t) = -\frac{\kappa}{2}a(t)-i g \sigma^-(t), \quad c.c., \label{a} \end{equation} \begin{equation} \partial_t \sigma^+ (t) = (i\Delta-\Gamma_1/2) \sigma^+ (t) -i\left(\frac{f(t)}{2}+g a^+(t)\right) \sigma^z(t), \quad c.c., \label{sigma_minus} \end{equation} \begin{multline} \partial_t \sigma^z (t) =- \Gamma_1 \left(\sigma^z(t)+1\right) + \\ +2 i g \left( a^+(t)\sigma^-(t)-a(t)\sigma^+(t)\right)+\\ +i f(t) \left(\sigma^-(t)-\sigma^+(t)\right). \label{sigma-z} \end{multline} Note that within this mean-field technique the photon number dynamics is found from the solution for $a(t)$ as \begin{equation} n_{ph} (t) = |a(t)|^2.
\label{n-mb-0} \end{equation} \section{Results} \subsection{ Steady state regime} In this part of the paper we present a comparison between the results obtained from solutions of the Lindblad (full many-body density matrix) and Maxwell-Bloch (mean-field) equations in a wide range of drivings $f$. Here and below we restrict ourselves to the fully resonant regime where $\epsilon=\omega_R$, i.e., the detuning is zero, $\Delta=0$. We evaluate numerically the $f$-dependences of the qubit excited state occupation number $n_q$, the generated photon number $n_{ph}$ in the cavity, and the correlators $\langle \delta a\delta\sigma^{z,\pm}\rangle$. All the data presented in the paper are obtained for the system with the parameters $\Gamma_1=0.5$ MHz, $\kappa=0.4$ MHz. This section is devoted to the steady-state regime emerging after a long evolution of the system subjected to the driving field having a constant amplitude and phase. We observe from Figures \ref{result:stationary:nph} and \ref{result:stationary:nq} a very good agreement between the mean-field and the full density matrix solutions for $n_{ph}$ and $n_q$ at $f<f^*$, where the value of $f^*$ separates the ranges of the weak- and strong-field steady state regimes. At $f> f^*$ we observe an agreement for the qubit occupation number, which is $n_q=1/2$ in both solutions. However, there are significant distinctions in the behavior of the photon degree of freedom: in the strong field limit $f> f^*$ the photon number $n_{ph}$ decays to zero in the Maxwell-Bloch solution but saturates to a finite value in the Lindblad numerical calculation. The steady state solution of the Maxwell-Bloch equations can be analyzed to explain the observed differences. Setting the l.h.s.
parts of the equations (\ref{a},\ref{sigma_minus}) and their conjugates equal to zero, the following relations between $\langle a \rangle, \langle a^+ \rangle, \langle \sigma^{\pm} \rangle$ and $n_q=(\langle \sigma^z \rangle +1)/2$ are derived \begin{equation} \left( \begin{array}{c} \langle \sigma^{-} \rangle \\ \\ \langle \sigma^{+} \rangle \\ \\ \langle a \rangle \\ \\ \langle a^+ \rangle \\ \end{array} \right)= \left( \begin{array}{c} -\frac{i f (2n_q-1) \kappa }{4 g^2 (2n_q-1) -\Gamma_1 \kappa } \\ \\ \frac{i f (2n_q-1) \kappa }{4 g^2 (2n_q-1) -\Gamma_1 \kappa } \\ \\ -\frac{2 f g (2n_q-1) }{4 g^2 (2n_q-1) -\Gamma_1 \kappa } \\ \\ -\frac{2 f g (2n_q-1) }{4 g^2 (2n_q-1) -\Gamma_1 \kappa } \\ \end{array} \right). \label{a-sigma-sol} \end{equation} Combining these results with (\ref{sigma-z}), whose l.h.s. is also set to zero, we obtain the relation between $n_{ph}$ and $n_q$ \begin{equation} n_{ph}=-4n_q(2n_q-1)\frac{ g^2}{\kappa^2}. \label{nph} \end{equation} The relation between the qubit occupation number and the driving amplitude $f$ is given by an implicit expression, which can be found from (\ref{a-sigma-sol}) as well: \begin{equation} f=\frac{|\Gamma_1 \kappa - 4 g^2 (2n_q-1)|}{\kappa}\sqrt{\frac{ n_q}{1-2n_q}}. \label{f} \end{equation} \begin{figure}[h] \includegraphics[width=\linewidth]{1q-st-n.pdf} \caption{Photon number vs driving amplitude $f$ in the steady state regime.} \label{result:stationary:nph} \end{figure} \begin{figure}[h] \includegraphics[width=\linewidth]{1q-st-ns.pdf} \caption{Qubit occupation number vs driving amplitude $f$ in the steady state regime.} \label{result:stationary:nq} \end{figure} Clearly, the zero value of $n_{ph}$ resulting from Eq.~(\ref{nph}) at large $f$, when the qubit occupation number saturates to $n_q=1/2$ (see Fig. \ref{result:stationary:nq}), is wrong. A correct value for $n_{ph}$ can be easily found from the Hamiltonian (\ref{h}) in the limit of $f\gg g$.
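The implicit relation (\ref{f}) can be explored directly. A minimal numerical sketch (relaxation rates as quoted in the text, coupling values chosen for illustration) checks that $f(n_q)$ is single-valued below $g_c=\sqrt{2\Gamma_1\kappa}$ and develops an S-shaped, three-valued window above it:

```python
import numpy as np

GAMMA1, KAPPA = 0.5, 0.4             # relaxation rates from the text (MHz)
g_c = np.sqrt(2*GAMMA1*KAPPA)        # critical coupling separating the regimes
g_weak, g_strong = 0.7*g_c, 2.0*g_c  # assumed couplings below/above threshold

def f_of_nq(nq, g):
    """Driving amplitude f for a steady-state qubit occupation n_q, Eq. (f)."""
    return (np.abs(GAMMA1*KAPPA - 4*g**2*(2*nq - 1))/KAPPA
            * np.sqrt(nq/(1 - 2*nq)))

nq = np.linspace(1e-4, 0.499, 20000)
f_weak, f_strong = f_of_nq(nq, g_weak), f_of_nq(nq, g_strong)
# Below g_c, f(n_q) grows monotonically (unique steady state); above g_c it has
# a local maximum and minimum at n_q = (3 -+ sqrt(1 - 2*Gamma_1*kappa/g^2))/8,
# so three steady states coexist within a window of f.
```

Sweeping $n_q$ rather than $f$ sidesteps root-finding: the multivalued region of the inverse function is exactly the bistable window.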
Namely, the qubit ground state in this limit is the odd superposition $|\psi_{gs}\rangle=(|g\rangle-|e\rangle)/\sqrt{2}$, and, hence, $|\langle\sigma^\pm\rangle|=1/2$. After that, we perturbatively find the steady state $a=-2i(g/\kappa) \sigma^-$ from (\ref{a}) for the dissipative system, yielding $n_{ph}=|a|^2=(g/\kappa)^2$ from the mean-field definition (\ref{n-mb-0}) of $n_{ph}$. This result is in agreement with the tendency of the photon number $n_{ph}$ to saturate at large $f$ observed in the numerical solution. Another comment concerns the bistability region in the Maxwell-Bloch result seen in Figures \ref{result:stationary:nph} and \ref{result:stationary:nq}. Mathematically it is due to the fact that (\ref{f}) is effectively a third-order equation with respect to $n_q$ at a given $f$. There exists a range of $f$ where three solutions for $n_q$, and, consequently, three values of $n_{ph}$ are possible at a given $f$. The condition for the existence of three solutions of the Maxwell-Bloch equations in this stationary regime is \begin{equation} g>\sqrt{2 \Gamma_1 \kappa}. \label{gc} \end{equation} This condition follows from the expression for the two extrema of the inverse relation between $n_q$ and $f$ (shown as the dashed curve in Fig. \ref{result:stationary:nq}): $$n_{q}^{(1,2)}= \frac{1}{8}\left( 3\pm \sqrt{1-\frac{2 \Gamma_1 \kappa}{ g^2}} \right).$$ One of the three solutions is unstable and does not show up in the curves obtained numerically. The two others are stable and give rise to a bistability regime similar to the one in \cite{SavageCarmichael}, where the driving was applied to the photon mode. We stress, however, that the solution of the Lindblad equation for the many-body density matrix does not contain such a bistable regime, and we therefore interpret it as an artifact of the mean-field approximation. Importantly, the non-zero correlators $\langle \delta a\delta\sigma^{z,\pm}\rangle$ demonstrate the increasing effect of quantum fluctuations in the regime of strong driving $f>f^*$, see Fig.
\ref{result:stationary:spl-a}. In the regime of strong coupling (\ref{gc}) the typical $f^*$ can be estimated from the mean-field relation (\ref{f}) as $$ f^*\sim {\rm max}\left[\Gamma_1,\frac{g^2}{\kappa}\right]. $$ As seen from the curve for $n_{ph}$, these fluctuations make a significant contribution to the photon sector of the system. The value of the fluctuations can be estimated perturbatively from the Maxwell-Bloch equations: $$ \langle \delta a\delta\sigma^+\rangle=\frac{2i\kappa g (2n_q-1)(2g n_{ph}+f a)}{(2\kappa+\Gamma_1)\Gamma_1} - \frac{2 i g n_q}{2\kappa+\Gamma_1}. $$ This correlator saturates to the non-zero value $$\langle \delta a\delta\sigma^+\rangle_{f\gg f^*} = \frac{-i g}{2\kappa+\Gamma_1}$$ in the limit of strong driving, where the qubit occupation number is $n_q=1/2$. In Fig. \ref{result:stationary:spl-a} we present the results for the correlators obtained from the Lindblad solution for the full density matrix. The saturation of $\langle \delta a\delta\sigma^+\rangle$ at high $f$, which appears in the mean-field estimate, is observed in these data as well. \begin{figure}[h] \includegraphics[width=\linewidth]{1q-st-fluc.pdf} \caption{Correlators of fluctuations $ \langle \delta a\delta\sigma^{+}\rangle$ and $ \langle \delta a\delta\sigma^{z}\rangle$ extracted from the solution of the Lindblad equation for the full density matrix.} \label{result:stationary:spl-a} \end{figure} In Figure \ref{result:stationary:s} we show the numerical results for the von Neumann entropy $S=-{\rm Tr}\rho \ln \rho$. The solid curve demonstrates $S(f)$ calculated from the Lindblad approach, while the dashed one is related to the mean-field approximation, where the effective Hamiltonian includes the values of $\langle a \rangle, \langle a^+ \rangle, \langle \sigma^{\pm} \rangle$ found from the solution of the Maxwell-Bloch equations.
The difference between them at $f>f^*$ shows again that there is significant entanglement between the qubit and photon degrees of freedom in the strong driving domain. The mean-field solution assumes that the many-body density matrix is a direct product of the qubit and photon ones, $\rho_{mf}=\rho_{ph} \otimes \rho_q$, where the elements responsible for the entanglement are zero. These non-diagonal elements of the density matrix, taken into account in the solution of the Lindblad equation, increase the entropy. \begin{figure}[h] \includegraphics[width=\linewidth]{1q-st-s.pdf} \caption{Entropy vs driving amplitude $f$ in the stationary regime. The solid curve is related to the density matrix found from the solution of the Lindblad equation. The dashed curve describes the entropy calculated within the mean-field approximation. } \label{result:stationary:s} \end{figure} \subsection{Non-stationary regime} The second result of our paper is that the quantum corrections $\langle \delta a\delta\sigma^{z,+}\rangle$ play a significant role in the non-stationary dynamics of the quantum interface even at drivings smaller than the steady state threshold $f^*$. This is demonstrated via the time evolution of $n_{ph}(t)$ and $n_q(t)$ after the moment $t=0$ when the external driving is suddenly switched on. The threshold value observed in the steady state regime is estimated as $f^*\approx 1.5 g$ for our parameters of the system. We set the after-quench value of the driving to the smaller value $f=g$. Figures \ref{result:nonstationary:ns}, \ref{result:nonstationary:nph} and \ref{result:nonstationary:spl-a} demonstrate the distinctions between the qubit and photon occupation number dynamics obtained from the non-stationary solutions of the Maxwell-Bloch equations (\ref{a},\ref{sigma_minus},\ref{sigma-z}) and the Lindblad equation (\ref{lindblad}).
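A compact numerical sketch of such a quench comparison follows: the Lindblad equation is integrated directly in a truncated Fock space alongside the Maxwell-Bloch equations, starting from the ground state at $t=0$. The rates are those quoted above; the coupling $g$, the truncation level, and the time step are assumed illustrative values, with $f=g$.

```python
import numpy as np

GAMMA1, KAPPA, G, F = 0.5, 0.4, 1.0, 1.0   # rates from the text; g, f assumed (f = g)
NC = 10                                     # Fock-space truncation (assumed)

# Joint-space operators, qubit (x) cavity, qubit basis (|g>, |e>)
a  = np.kron(np.eye(2), np.diag(np.sqrt(np.arange(1, NC)), 1))
sm = np.kron(np.array([[0., 1.], [0., 0.]]), np.eye(NC))
sp, ad = sm.T, a.T
H = G*(a @ sp + ad @ sm) + 0.5*F*(sp + sm)  # rotating frame, Delta = 0

def lindblad_rhs(rho):
    """d(rho)/dt = -i[H, rho] + cavity and qubit dissipators (Eq. lindblad)."""
    out = -1j*(H @ rho - rho @ H)
    out += 0.5*KAPPA*(2*a @ rho @ ad - ad @ a @ rho - rho @ ad @ a)
    out += 0.5*GAMMA1*(2*sm @ rho @ sp - sp @ sm @ rho - rho @ sp @ sm)
    return out

def mb_rhs(y):
    """Maxwell-Bloch equations (a), (sigma_minus), (sigma-z); y = (a, s+, sz)."""
    av, spv, sz = y
    da  = -0.5*KAPPA*av - 1j*G*np.conj(spv)
    dsp = -0.5*GAMMA1*spv - 1j*(0.5*F + G*np.conj(av))*sz
    dsz = -GAMMA1*(sz + 1) + 4*G*(av*spv).imag + 2*F*spv.imag
    return np.array([da, dsp, dsz])

def rk4(state, rhs, dt):
    k1 = rhs(state); k2 = rhs(state + 0.5*dt*k1)
    k3 = rhs(state + 0.5*dt*k2); k4 = rhs(state + dt*k3)
    return state + dt/6*(k1 + 2*k2 + 2*k3 + k4)

rho = np.zeros((2*NC, 2*NC), complex); rho[0, 0] = 1.0  # |g, 0> initial state
y = np.array([0j, 0j, -1.0 + 0j])                       # same state, mean field
dt, nsteps = 0.005, 2000                                # evolve to t = 10
nq_full, nq_mf = [], []
for _ in range(nsteps):
    rho = rk4(rho, lindblad_rhs, dt)
    y   = rk4(y, mb_rhs, dt)
    nq_full.append(np.trace(sp @ sm @ rho).real)
    nq_mf.append(0.5*(y[2].real + 1))
```

Both trajectories start identically and separate as the correlators $\langle\delta a\,\delta\sigma^{z,\pm}\rangle$ build up; appending $n_{ph}$ from ${\rm Tr}(\rho\, a^+a)$ and $|a|^2$ in the same loop reproduces the analogous photon-number comparison.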
\begin{figure}[h] \includegraphics[width=\linewidth]{1q-dyn-ns.pdf} \caption{Time evolution of the qubit occupation number $n_q(t)$ found from the solution for the full density matrix and from the mean-field approach at $f=g$.} \label{result:nonstationary:ns} \end{figure} \begin{figure}[h] \includegraphics[width=\linewidth]{1q-dyn-nph.pdf} \caption{Time evolution of the photon occupation number $n_{ph}(t)$ found from the solution of the Lindblad equation for the full density matrix and from the mean-field approach at $f=g$. } \label{result:nonstationary:nph} \end{figure} \begin{figure}[h] \includegraphics[width=\linewidth]{1q-dyn-fluc.pdf} \caption{Time evolution of the correlators $ \langle \delta a\delta\sigma^{+}\rangle$ and $ \langle \delta a\delta\sigma^{z}\rangle$ extracted from the solution of the Lindblad equation at $f=g$.} \label{result:nonstationary:spl-a} \end{figure} \begin{figure}[h] \includegraphics[width=\linewidth]{1q-dyn-s-all.pdf} \caption{Time evolution of the entropy $S$ at different amplitudes of the driving $f$.} \label{result:nonstationary:s} \end{figure} In Figure \ref{result:nonstationary:s} we present the results for the von Neumann entropy as a function of time at different values of the driving $f$. We observe a strong difference between the values of the entropy found from solving the Lindblad (solid curves) and mean-field (dashed curves) equations. For $f>f^*$, the entropy grows almost monotonically until saturation at the steady-state value. On the contrary, for $f<f^*$ there is a pronounced maximum at $t\approx 4\ \mu$s. The peak is present, and the full-$\rho$ result differs from the mean-field one, even for a weak driving $f=0.1 g$, although the steady-state entropy almost vanishes even for the much larger $f=0.75 g$. This indicates an emergent entanglement between the qubit and the photon mode of the quantum interface being switched on.
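The von Neumann entropy used here can be evaluated from the eigenvalues of any density matrix; a minimal sketch:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -Tr(rho ln rho), computed from the eigenvalues of a density matrix."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]                  # drop numerically vanishing populations
    return float(-np.sum(p*np.log(p)))
```

For a factorized state $\rho_{mf}=\rho_q\otimes\rho_{ph}$ the result is additive, $S(\rho_q)+S(\rho_{ph})$, so in this dissipative setting a deviation of the full-$\rho$ entropy from the mean-field value signals qubit-photon correlations, as discussed above.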
\section{Conclusions} We have studied the response of a dissipative hybrid qubit-cavity system to a strong applied driving field, having in mind possible future realizations of quantum operations in superconducting quantum metamaterials. We demonstrated that for the case studied the many-body effects (or, equally, the entanglement between the qubit and photon excitations) are important and that the system cannot be treated by means of a mean-field approximation. This is shown by a comparison of the analytical steady state solution of the standard Maxwell-Bloch equations with numerical simulations based on the Lindblad equation for the many-body density matrix. More concretely, we have shown that the mean-field approach, where the density matrix of the system is represented as a direct product of isolated qubit and photon ones, $\rho_{mf}=\rho_{ph} \otimes \rho_q$, provides a good steady state solution up to a certain threshold $f^*$, but at $f>f^*$ a strong discrepancy from the many-body result is observed. It is related to the growing quantum correlations between the fluctuations of the qubit and photon fields, which start to play a significant role in the behavior of the system. We show in our analysis that at large enough coupling energy between the cavity and qubit modes the solution of the Maxwell-Bloch equations reveals an artifact: a hysteresis in the photon number as a function of the driving amplitude in the vicinity of the threshold $f^*$. Such a hysteresis has not been observed in the full density matrix solution. Also, we have studied the effect of a non-adiabatic switching of the driving and shown that there is a difference between the mean-field and density matrix solutions even for drivings weaker than the steady state threshold $f^*$. Our findings demonstrate the quantitative limitations of the standard mean-field description and show the crossover between the classical and many-body quantum regimes.
In the classical regime, the qubit effectively acts as a linear (Gaussian) degree of freedom; this regime cannot reveal a difference between quantum and linear-optical metamaterials. When the two-level nature of the qubit plays an essential role, its entanglement with the cavity mode is also large and should be accounted for. We also point out that the effect of correlations is revealed while the number of photons in the cavity mode is not small, where one could naively expect the cavity to operate in a classical regime. In our solutions we have used parameters relevant for contemporary metamaterials involving highly anharmonic flux qubits, and we expect that the obtained results will find application in the realization of quantum gates in superconducting quantum circuits and metamaterials. \section{Acknowledgments} Authors thank Yuriy E. Lozovik, Andrey A. Elistratov, Evgeny S. Andrianov and Kirill V. Shulga for fruitful discussions. The study was funded by the Russian Science Foundation (grant No. 16-12-00095).
\section*{Results} The full description of one's socioeconomic status is rather difficult, as it is characterised not only by quantitative features but is also related to one's social or cultural capital \cite{Bourdieu1984}, reputation, or professional skills. However, we can estimate socioeconomic status by assuming a correlation between one's social position and economic status, which can be approximated by following the network position and financial development of people. This approach in turn not only gives us a measure of an individual's socioeconomic status but can also help us to draw conclusions about the overall distribution of socioeconomic potential in the larger society. \subsection*{Economic status indicators} Our estimation of an individual's economic status is based on the measurement of consumption power. We use a dataset which contains the amount and type of daily debit/credit card purchases, monthly loan amounts, and some personal attributes such as age, gender, and zip code of billing address of $\sim 6$ million anonymised customers of a bank in the studied country over $8$ months (for further details see Data and Materials). In addition, for a smaller subset of clients, the data provide the precise salary and total monthly income that we use for verification purposes, as explained later. By following the purchase history of each individual, we estimate their economic position from their average amount of debit card purchases. More precisely, for an individual $u$ who spent a total amount of $p_u(t)$ in month $t$, we estimate his/her average monthly purchase (AMP) as \begin{equation} P_u=\frac{\sum_{t\in T} p_u(t)}{|T|_u}, \end{equation} where $|T|_u$ corresponds to the number of active months of the user (with at least one purchase).
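The AMP estimator above amounts to a simple aggregation over a transaction table; a minimal sketch, where the flat `(user, month, amount)` record layout is a hypothetical stand-in for the actual data schema:

```python
from collections import defaultdict

def average_monthly_purchase(records):
    """AMP: total spending divided by the number of active months (months with
    at least one purchase). `records` holds (user, month, amount) purchase
    tuples -- a hypothetical flat layout of the card-transaction table."""
    total, months = defaultdict(float), defaultdict(set)
    for user, month, amount in records:
        total[user] += amount
        months[user].add(month)
    return {u: total[u]/len(months[u]) for u in total}
```

For example, a user spending a total of 180 over two active months gets $P_u=90$, regardless of inactive months in between.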
In order to verify this individual economic indicator we check its correlations with other indicators, such as the salary $S_u$ (defined as the average monthly salary of individual $u$ over the observation period $T$) and the income $I_u$ (defined as the average total monthly income including salary and other incoming bank transfers). We find strong correlations between the individual AMP $P_u$ and the income $I_u$ with a Pearson correlation coefficient $r\approx 0.758$ ($p<.001$, $SE=7.33\times 10^{-4}$) (for the correlation heat map see Fig.\ref{fig:1}a), and also between $P_u$ and the salary $S_u$ with $r\approx 0.691$ ($p<.001$, $SE=9.695\times 10^{-4}$) (see Fig.\ref{fig:1}b). Note that direct economic indicators, such as $I_u$ and $S_u$, are available only for a smaller subset of users (for exact numbers see the caption of Fig.\ref{fig:1}), thus for the present study we decided to use $P_u$, since this measure is available for the whole set of users. \begin{figure}[t!] \centering \includegraphics[width=0.94\textwidth,angle=0]{Fig1.pdf} \caption{Correlations and distributions of individual economic indicators. The heat maps show correlations between the average monthly purchase $P_u$ and (a) the average income $I_u$, (b) the average salary $S_u$, and (c) the average monthly debt $D_u$ for (a) 625,412 (b) 389,567 and (c) 339,288 customers who have (accordingly) both corresponding measures available. Colours in panels (a-c) depict the logarithm of the fraction of customers with the given measures. (d) Cumulative distributions of $P_u$ (blue line) and $D_u$ (orange line) as functions of the sorted fraction $f$ of individuals. Distributions were measured for $6,002,192$ (resp. $339,288$) individuals for whom AMP (resp. AMD) values were available. The dashed line shows the case of a perfectly balanced distribution. \label{fig:1}} \end{figure} At the same time we are interested in an analogous indicator which estimates the financial commitments of individuals.
We define the average monthly debt (AMD) of an individual $u$ by measuring \begin{equation} D_u=\frac{\sum_{t\in T} d_u(t)}{|T|_u}, \end{equation} where $d_u(t)$ indicates the debt of individual $u$ in month $t\in T$ and $|T|_u$ is the number of active months in which the user had debt. Arguably, individual debt could depend on the average income and thus on the AMP of a person due to the lending policy of the bank. Interestingly, as demonstrated in Fig.\ref{fig:1}c, we found weak correlations between AMP and AMD with a small coefficient $r \approx 0.104$ ($p<.001$, $SE=2.48\times 10^{-3}$), which suggests that it is worth treating these two indicators independently. \subsection*{Overall socioeconomic imbalances} The distribution of an individual economic indicator may disclose signs of socioeconomic imbalances on the population level. This hypothesis was first suggested by V. Pareto and later became widely known as the law named after him~\cite{Pareto1971Manual}. The present data provide a straightforward way to verify this hypothesis through the distribution of individual AMP. We measured the normalised cumulative function of AMP for the fraction $f$ of people sorted by $P_u$ in increasing order: \begin{equation} C_P(f)=\frac{1}{\sum_u P_u}\sum_{f} P_u \end{equation} We computed this distribution for the $6,002,192$ individuals assigned with AMP values. This function shows (see Fig.\ref{fig:1}d, blue line) that AMP is distributed with a large variance, i.e., it indicates large economic imbalances, just as suggested by Pareto's law. A conventional way to quantify the variation of this distribution is provided by the Gini coefficient $G$ \cite{Gastwirth1972The}, which characterises the deviation of the $C_P(f)$ function from a perfectly balanced situation, where wealth is evenly distributed among all individuals (diagonal dashed line in Fig.\ref{fig:1}d).
In our case we found $G_P\approx 0.461$, which is relatively close to the World Bank reported value $G=0.481$ for the studied country~\cite{World2010Gini}, and corresponds to a Pareto index~\cite{Souma2000Physics} $\alpha=1.315$. This observation indicates a $0.73:0.27$ ratio characterising the uneven distribution of wealth, i.e., that $27\%$ of the people are responsible for $73\%$ of the total monthly purchases in the observed population. Note that these values are close to the values $G=0.6$ and $80:20$, which were suggested by Pareto. At the same time we have characterised the distribution of individual AMD by measuring the corresponding $C_D(f)$ function, as shown in Fig.\ref{fig:1}d (orange line), for $339,288$ individuals for whom AMD values were available. It indicates even larger imbalances in the case of debt, with a Gini coefficient $G_D \approx 0.627$ and $\alpha=1.140$, indicating that $19\%$ of the population is actually responsible for $81\%$ of the overall debt in the country. This observation suggests that Pareto's hypothesis holds not only for the distribution of purchases but for debt as well. Note that a similar distribution of the debt of bankrupt companies has been reported \cite{AoyamaPareto2000}. \subsection*{Class definition and demographic characters} The economic capacity of individuals arguably correlates with their professional occupation, education level, and housing, which in turn determine their social status and environment. At the same time status homophily~\cite{McPherson2001Birds,Lazarsfeld1954Friendship}, i.e., people's tendency to associate with others of similar social status, has been argued to be an important mechanism that drives the creation of social ties.
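The cumulative share curve $C_P(f)$ and the Gini coefficient can be computed as below; this is a generic sketch of the measurement (trapezoidal Lorenz-curve area), not the authors' exact estimator:

```python
import numpy as np

def lorenz_and_gini(values):
    """C_P(f) sampled at f = 1/n, ..., 1 for individuals sorted by value,
    and the Gini coefficient G = 1 - 2 * (area under the Lorenz curve)."""
    x = np.sort(np.asarray(values, dtype=float))
    n = x.size
    cum = np.cumsum(x)/x.sum()
    lorenz = np.concatenate(([0.0], cum))
    area = (lorenz[:-1] + lorenz[1:]).sum()/(2.0*n)   # trapezoidal rule
    return cum, 1.0 - 2.0*area

def top_share(values, q):
    """Fraction of the total held by the top fraction q of individuals."""
    x = np.sort(np.asarray(values, dtype=float))[::-1]
    k = max(1, int(round(q*x.size)))
    return x[:k].sum()/x.sum()
```

A perfectly even sample gives $G=0$, while concentrating everything on one of four individuals gives $G=0.75$; applied to the AMP data, `top_share(P, 0.27)` would recover a share of roughly $0.73$ for the distribution reported above.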
Our hypothesis is that these two effects, diverse socioeconomic status and status homophily, potentially lead to the emergence of a stratified structure in the social network where people of the same social class tend to be better connected among themselves than with people from other classes. A similar hypothesis has been suggested earlier~\cite{Bottero2005Stratification} but its empirical verification has been impossible until now as this would require detailed knowledge about the social structure and precise estimators of individual economic status. In the following, our main contribution is to clearly identify signatures of social stratification in a representative society-level dataset, that contains information on both the social network structure and the economic status of people. In order to investigate signatures of social stratification, we combine the bank transaction data with data disclosing the social connections between the bank's customers. To identify social ties, we use a mobile communication dataset, provided by one mobile phone operator in the country, with a customer set that partially overlaps with the user set found in the bank data (for details on data matching policy see Data and Materials). To best estimate the social network, we connect people who at least once communicated with each other via call or SMS during the observation period of 21 months between January 2014 and September 2015, but we remove non-human actors, such as call centres and commercial communicators, by using a recursive filtering method. For the purpose of our study we select all mobile phone users who appear as customers in the bank dataset and take the largest connected component of the intersection graph. After this procedure we obtain a social network with $|E|=1,960,239$ links and $N=992,538$ nodes, each corresponding to an individual with a valid non-zero AMP value $P_u$. 
For further details about the datasets, their combination, filtering, and network construction see Data and Materials. \begin{figure}[t!] \centering \includegraphics[width=0.94\textwidth,angle=0]{Fig2.pdf} \caption{Social class characteristics. (a) Schematic demonstration of the partition of users into 9 socioeconomic classes by using the cumulative average monthly purchase (AMP) function $C_P(f)$. Each fraction of individuals belonging to a given class ($x$ axis) has the same sum of AMP $(\sum_u P_u)/n$ ($y$ axis). (b) Number of egos (blue) and the average AMP $\langle P \rangle$ (in USD \cite{Currency}) per individual (pink) in different classes. (c) Average age of the different classes. (d) Age pyramids for men and women with colours indicating the corresponding socioeconomic groups and with bars proportional to absolute numbers. (e) Fraction of women in different classes. \label{fig:2}} \end{figure} We assign each individual in the selected social network to one of $n=9$ socioeconomic classes based on their individual AMP values. This classification is defined by sorting individuals by their AMP, taking the cumulative function $C_P(f)$ of AMP, and cutting it into $n$ segments such that the sum of AMP in each class is equal to $(\sum_u P_u)/n$ (as shown in Fig.\ref{fig:2}a). Our selection of nine distinct classes is based on the common three-stratum model \cite{Brown2009Social,Akhbar2010Class}, which identifies three main social classes (lower, middle, and upper), with three sub-classes for each of them~\cite{Saunders1990Social}. More importantly, this way of classification relies merely on the individual economic estimators $P_u$, and naturally partitions individuals into classes of decreasing size and increasing per capita average AMP $\langle P \rangle$ for richer groups (for exact values see Fig.\ref{fig:2}b)\cite{Currency}. To explore the demographic structure of the classes we used data on the age and gender of customers.
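The equal-total-AMP partition described above can be sketched as follows; the lognormal sample is a synthetic stand-in for the real AMP values:

```python
import numpy as np

def equal_total_classes(amp, n_classes=9):
    """Class labels 0..n_classes-1 for individuals sorted by AMP, cutting the
    cumulative AMP into segments of equal total (as in Fig. 2a)."""
    x = np.sort(np.asarray(amp, dtype=float))
    cum = np.cumsum(x)
    bins = np.linspace(cum[-1]/n_classes, cum[-1], n_classes)
    labels = np.clip(np.searchsorted(bins, cum, side="left"), 0, n_classes - 1)
    return x, labels

rng = np.random.default_rng(0)
amp = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)   # synthetic AMP values
x, labels = equal_total_classes(amp)
sizes = np.bincount(labels, minlength=9)                 # class populations
class_sums = np.bincount(labels, weights=x, minlength=9) # per-class total AMP
```

Because each class carries the same total while the per capita average grows, the class sizes decrease monotonically toward the richer end, as described in the text.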
We draw the population pyramids for men and women in Fig.\ref{fig:2}d with colour-bars indicating the number of people in a given social class at a given age. We found a positive correlation between social class and average age, suggesting that people in higher classes are also older on average (see Fig.\ref{fig:2}c). In addition, our data verify the presence of a gender imbalance, as the fraction of women varies from $0.45$ to $0.25$ going from the lower to the upper socioeconomic classes (see Fig.\ref{fig:2}e). \subsection*{Structural correlations and social stratification} Using the above-defined socioeconomic classes and the social network structure, we now look for correlations in the interconnected class structure. To highlight structural correlations, such as the probability of connectedness, we use a randomised reference system. It is defined as the corresponding configuration network model structure, where we take the original social network, select random pairs of links, and swap them while disallowing multiple links and self-loops. In order to remove any residual correlations we repeat this procedure $5\times |E|$ times. This randomisation keeps the number of links, the individual economic indicators $P_u$, and the assigned class of each person unchanged, but destroys any structural correlations in the social structure and consequently between socioeconomic layers as well. In each case, we repeat this procedure $100$ times and present results averaged over the independent random realisations. Taking the original (resp. randomised) network we count the number of links $|E(s_i,s_j)|$ (resp. $|E_{rn}(s_i,s_j)|$) connecting people in different classes $s_i$ and $s_j$.
After repeating this procedure for each pair of classes in both networks, we take the fraction \begin{equation} L(s_i,s_j)=\frac{|E(s_i,s_j)|}{|E_{rn}(s_i,s_j)|}, \label{eq:Lsisj} \end{equation} which tells us how many times more (or fewer) links are present between classes in the original structure as compared to the randomised one. Note that in the randomised structure the probability that two people from given classes are connected depends only on the number of social ties of the individuals and the size of the corresponding classes, but is independent of potential structural correlations. This way the comparison of the original and random structures highlights only the effect of structural correlations induced by status homophily or by other tie creation mechanisms such as cyclic or triadic closure \cite{Kumpula2007Emergence}. \begin{figure}[t!] \centering \includegraphics[width=0.94\textwidth,angle=0]{Fig3.pdf} \caption{Structural correlations in the socioeconomic network. (a) Chord diagram of the connectedness of socioeconomic classes $s_i$, where each segment represents a social class $s_i$ connected by chords with width proportional to the corresponding inter-class link fraction $\tilde{L}_{s_i}(s_j)$, and using gradient colours matched with the opposite ends $s_j$. Note that the normalised fraction $\tilde{L}_{s_i}(s_j)=L(s_i,s_j)/\Sigma_{s_j}L(s_i,s_j)$ of $L(s_i,s_j)$ (in Eq.\ref{eq:Lsisj}) was introduced here to assign equal segments to each class for better visualisation. Chords for each class are sorted in decreasing width order in the direction shown above the main panel. On the minor chord diagrams of panel (a), the graphs corresponding to each class are shown with non-gradient link colours matching the opposite end other than the selected class. (b) Matrix representation of $L(s_i,s_j)$ (for the definition see Eq.\ref{eq:Lsisj}) with a logarithmic colour scale.
(c) The $L(s_i,s_j)$ function extracted for three selected classes ($1$ (blue), $5$ (yellow), and $9$ (red)). Panels (a)-(d) provide quantitative evidence on the stratified structure of the social network and the upward-biased connections of the middle classes. (d) ``Rich-club'' coefficient $\rho(P_>)$ (for definition see Eq.\ref{eq:RCC}) based on the empirical (purple) and a degree-correlated null model (black) network. On the individual level the richest people of the population appear to be eight times more densely connected than expected randomly. \label{fig:3}} \end{figure} From the chord diagram visualisation of this measure in Fig.\ref{fig:3}a, we can draw several conclusions. Note that for better visual presentation in Fig.\ref{fig:3}a we have normalised $L(s_i,s_j)$, and thus chord width indicates the relative values $\widetilde{L}_{s_i}(s_j)=L(s_i,s_j)/\sum_{s_j}L(s_i,s_j)$ for the origin class $s_i$ (as also explained in the figure caption). First, after sorting the chords of a given class $s_i$ in decreasing $L(s_i,s_j)$ order, chords connecting a class to itself (self-links) always appear at the top (or second) position of the ranking. At the same time, the other top positions are always occupied by chords connecting to neighbouring social classes. These two observations (better visible in the Fig.\ref{fig:3}a insets) indicate strong effects of status homophily and the existence of a stratified social structure, where people from a given class are most connected to similar others from their own or from neighbouring classes, while connections with individuals from remote classes are least frequent. A second conclusion can be drawn by looking at the sorting of links in the middle and lower upper classes ($S4-S8$). As demonstrated in the inset of Fig.\ref{fig:3}a, people prefer to connect upward and tend to hold social ties with others from higher social classes rather than with people from lower classes.
These conclusions can be further verified by looking at other representations of the same measure. First we show a heat map matrix representation of Eq.\ref{eq:Lsisj} (see Fig.\ref{fig:3}b), where $L(s_i,s_j)$ values are shown with a logarithmic colour scale. This matrix has a strong diagonal component, verifying that people of a given class are always better connected among themselves (red) and with others from neighbouring groups, while social ties with people from remote classes are largely underrepresented (blue) as compared to the expected value provided by the random reference model. This again indicates the presence of homophily and the stratified structure of the socioeconomic network. The upward-biased inter-class connectivity can also be seen here from the widening of the red area around the diagonal as we move towards richer classes. These conclusions are even more apparent in Fig.\ref{fig:3}c, where $L(s_i,s_j)$ is shown for three selected classes ($1$-poor, $5$-middle, and $9$-rich). These curves clearly indicate the connection preferences of the selected classes. Moreover, they show that the richest people exhibit the strongest homophilic preferences, as their class is $\sim 2.25$ times better connected internally than expected by chance, at the expense of weaker connectivity to remote classes. This effect is somewhat weaker for the middle classes, which function as bridges between poor and rich classes, but is clearly upward-biased towards richer classes. This set of results directly verifies our earlier conjectures that the structure of the socioeconomic network is strongly stratified and builds up from social ties whose creation is potentially driven by status homophily and determined by the socioeconomic characteristics of individuals. However, one can argue that the observed stratified structure could simply be the consequence of simultaneously present degree-degree and degree-wealth correlations.
More precisely, if the degree of an individual is highly correlated with its economic status and at the same time the network is strongly assortative (i.e. people prefer to connect to other people with similar degrees), we may observe similar effects as in Fig.\ref{fig:3}a-c. To rule out this possibility we completed an extensive correlation analysis, which showed that no strong effects of degree-degree correlations can be detected and that the degree and wealth of individuals are very weakly correlated. To further clarify the effects of these correlations we performed a null model study where we carefully defined random reference models to remove the correlations in focus in a controlled way and checked their effects on the quantitative observations. As a conclusion we demonstrated that these correlations cannot explain the observed stratified structure. All of these results are presented in the Supplementary Materials (SM). The above observations further suggest that the social structure may show assortative correlations in terms of socioeconomic status on the individual level. In other words, richer people may be better connected among themselves than one would expect by chance, and in this way they form tightly connected ``rich clubs'' in the structure, similar to the suggestion of Mills \cite{Mills1956The}. This can actually be verified by measuring the rich-club coefficient \cite{Zhou2004Rich,Colizza2006Detecting}, after we adjust its definition to our system as follows. We take the original social network structure, sort individuals by their AMP value $P_u$, and remove them in increasing order from the network (together with their links).
At the same time we keep track of the density of the remaining network defined as \begin{equation} \phi(P_>)=\frac{2L_{P_>}}{N_{P_>}(N_{P_>}-1)} \label{eq:phiP} \end{equation} where $L_{P_>}$ and $N_{P_>}$ are the number of links and nodes remaining in the network after removing nodes with $P_u$ smaller than a given value $P_>$. In our case, we consider $P_>$ as a cumulative quantity going from $0$ to $\sum_u P_u$ with values determined just as in the case of $C_P(f)$ in Fig.\ref{fig:2}a but now using $100$ segments. At the same time, we randomise the structure using a configuration network model and, by removing nodes in the same order, we calculate an equivalent measure $\phi_{rn}(P_{>})$ as defined in Eq.\ref{eq:phiP} but in the uncorrelated structure. For each randomisation process, we used the same parameters as earlier and calculated the average density $\langle {\phi}_{rn}\rangle (P_{>})$ of the networks over $100$ independent realisations. Using the two density functions we define the ``rich-club'' coefficient as \begin{equation} \rho(P_>)=\frac{\phi(P_>)}{\langle {\phi}_{rn}\rangle(P_>)}, \label{eq:RCC} \end{equation} which indicates how many times more densely connected the remaining network of richer people is than expected from the reference model. In our case (see Fig.\ref{fig:3}d, purple symbols) the rich-club coefficient increases monotonically with $P_>$ and grows rapidly once only the richer people remain in the network. At its maximum it shows that the richest people are $\sim 8$ times more connected in the original structure than in the uncorrelated case. This provides direct evidence of the existence of tightly connected ``rich clubs'' \cite{Mills1956The}, and of the presence of strong assortative correlations in the social structure on the level of individuals in terms of their socioeconomic status. Note that this measure also suggests that the observed ``rich clubs'' were not induced by degree-wealth correlations.
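The wealth-sorted removal underlying Eq.\ref{eq:phiP} can be sketched as follows (an illustrative Python sketch with our own function name; applying it to the empirical and to the randomised network and taking the element-wise ratio of the two curves gives $\rho(P_>)$ of Eq.\ref{eq:RCC}):

```python
def residual_density(edges, wealth):
    """phi(P_>): density of the subgraph that remains after removing
    nodes in increasing order of their wealth P_u (Eq. phiP)."""
    order = sorted(wealth, key=wealth.get)   # poorest first
    phis = []
    for k in range(len(order)):
        kept = set(order[k:])                # nodes still in the network
        n = len(kept)
        if n < 2:
            break
        links = sum(1 for u, v in edges if u in kept and v in kept)
        phis.append(2 * links / (n * (n - 1)))
    return phis
```

The rich-club coefficient is then `rho = [p / q for p, q in zip(residual_density(E, w), residual_density(E_rand, w))]`, with `E_rand` the randomised edge list.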
The connectedness of nodes in the randomised structure was determined merely by their degrees, and since we kept wealth-degree correlations, the wealth-sorted removal process shows exactly the expected density of the remaining richer nodes assuming only their original degrees but no other correlations. In this way the ratio of the two network density curves, i.e. the rich-club coefficient, characterises exactly the effect of status homophily as compared to the randomised case where only degrees and degree-wealth correlations determined the connectedness of the network. In addition, to rule out the possibility that our observation was induced by positive degree-degree correlations, we performed another randomisation of the network, where we kept node degrees, degree-degree, and degree-wealth correlations but destroyed any other structural correlations. This randomisation procedure is a modification of the configuration network model and its definition is given in the SM. To measure the corresponding rich-club coefficient function, we substituted in the numerator of Eq.\ref{eq:RCC} the residual network density function measured in this new degree-correlated null model, using the same wealth-sorted removal sequence as earlier. Results in Fig.\ref{fig:3}d (black symbols) show that the obtained rich-club coefficient is approximately a constant function around one. This demonstrates that the entangled effects of degree-degree and degree-wealth correlations cannot explain the emergence of the ``rich clubs'' observed in the empirical case. The network conserving degrees and these two correlations emerges with the same structure as the network conserving only degrees and degree-wealth correlations. Consequently the increasing rich-club coefficient observed in the empirical structure is induced by status homophily or other tie creation mechanisms, and not by degree-degree or degree-wealth correlations.
\subsection*{Spatial correlations between socioeconomic classes} As we discussed earlier, the economic capacity of an ego strongly determines the possible places he/she can afford to live, arguably leading to somewhat homogeneous neighbourhoods, districts, towns, and regions occupied by people from similar socioeconomic classes. This effect may translate to correlations in the spatial distribution of socioeconomic classes in relation to each other. To study such correlations, we use three different types of geographical information extracted for individuals from the data: the zip code of the reported billing address, and the home and work locations estimated from call activity logs (for details see Data and Materials). To give an overall image of the spatial distribution of the investigated users we use their zip location and assign them to different states of the country, as shown in Fig.\ref{fig:4}a. Importantly, the observed population distribution correlates well with census data \cite{INEG2015} with coefficient $r=0.861$ ($p<.001$) on the state level, which indicates that our data records a fairly unbiased sample of the population in terms of distribution in space. \begin{figure*} \centering \includegraphics[width=0.94\textwidth,angle=0]{Fig4.pdf} \caption{Spatial socioeconomic correlations (a) State-level population distribution of egos based on their zip locations. The inset depicts a zoom of the capital district. The information depicted here was entirely obtained from the utilised dataset. The map representation was generated by using an open source code available at \href{https://gist.github.com/diegovalle/5843688}{github.com/diegovalle} (no copyright reserved) and shape files openly available at \href{http://www.inegi.org.mx}{www.inegi.org.mx} (no copyright reserved). (b) Relative average geodesic distances for different classes using the measure $d^{s_i}_{r}(s_j)$ defined in Eq.\ref{eq:dsi}.
(c) The same $d^{s_i}_{r}(s_j)$ functions as on panel (b) shown for a selected set of classes (1-poor (blue), 5-middle (yellow), 9-rich (red)). (d) $d_{\Delta}^{s_i}(d_{hw})$ differences between commuting distance distributions calculated for different classes and for the whole population. The $x$ axis shows commuting distances $d_{hw}$ on a logarithmic scale. (e) The same $d_{\Delta}^{s_i}(d_{hw})$ functions as on panel (d) shown for a selected set of classes (1-poor (blue), 5-middle (yellow), 9-rich (red)). \label{fig:4}} \end{figure*} To quantify spatio-socioeconomic correlations, we measure the relative average geodesic distance between classes. More precisely, we take all connected egos $(u,v)\in E$ belonging to classes $u\in s_i$ and $v\in s_j$ respectively and measure the geodesic distance $d_{geo}^{zip}(u,v)$ between their zip locations. Using these values we calculate the average geodesic distance between any pair of socioeconomic classes as \begin{equation} \langle d_{geo}(s_i,s_j)\rangle=\frac{1}{L(s_i,s_j)}\sum_{\substack{(u,v)\in E \\ u\in s_i, v \in s_j}} d_{geo}^{zip}(u,v) \label{eq:dgeo} \end{equation} where $L(s_i,s_j)$ denotes the number of links between nodes in classes $s_i$ and $s_j$. Note that since the social network is undirected the measure defined in Eq.\ref{eq:dgeo} is symmetric, i.e., $\langle d_{geo}(s_i,s_j)\rangle=\langle d_{geo}(s_j,s_i)\rangle$. Subsequently we calculate the average distance between nodes from class $s_i$ and any of their neighbours, $\langle d_{geo}(s_i)\rangle$, to derive \begin{equation} d^{s_i}_{r}(s_j)=\frac{\langle d_{geo}(s_i,s_j)\rangle-\langle d_{geo}(s_i)\rangle}{\langle d_{geo}(s_i)\rangle}. \label{eq:dsi} \end{equation} This measure is no longer symmetric and gives the relative average geodesic distance of egos in $s_i$ to egos in other classes $s_j$ as compared to the average distance of egos in $s_i$ from any of their connected peers.
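Eqs.\ref{eq:dgeo}-\ref{eq:dsi} amount to averaging a precomputed pairwise distance over links between classes. A minimal sketch (assumptions: `dist` maps each connected pair to the geodesic distance between their zip locations, and each same-class link is counted once in the class baseline):

```python
from collections import defaultdict

def relative_class_distance(edges, cls, dist):
    """d^{s_i}_r(s_j): relative average geodesic distance between egos
    of class s_i and their neighbours of class s_j (Eqs. dgeo, dsi)."""
    tot, cnt = defaultdict(float), defaultdict(int)
    for u, v in edges:
        d = dist[(u, v)]
        # accumulate for both directed class pairs; a set collapses
        # (s, s) so same-class links are counted once
        for pair in {(cls[u], cls[v]), (cls[v], cls[u])}:
            tot[pair] += d
            cnt[pair] += 1
    avg = {p: tot[p] / cnt[p] for p in tot}
    # baseline <d_geo(s_i)>: average over all links incident to class s_i
    base_t, base_c = defaultdict(float), defaultdict(int)
    for (si, sj), t in tot.items():
        base_t[si] += t
    for (si, sj), c in cnt.items():
        base_c[si] += c
    return {(si, sj): (avg[(si, sj)] - base_t[si] / base_c[si])
            / (base_t[si] / base_c[si]) for (si, sj) in avg}
```

Negative values on the diagonal then reproduce the observation that egos live relatively closest to members of their own class.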
Results are presented as a heat map matrix in Fig.\ref{fig:4}b, where the diagonal component suggests a peculiar correlation. It shows that the relative average distance is always minimal (and negative) between egos of the same class $s_i$. This means that people tend to live relatively closest to similar others from their own socioeconomic class rather than to egos from different classes, independently of which class they belong to. This is even more visible in Fig.\ref{fig:4}c, where the $d^{s_i}_{r}(s_j)$ curves (corresponding to rows in Fig.\ref{fig:4}b) are extracted for three selected classes. It highlights that while people of the poorest class live relatively closest to each other, rich people tend to live relatively furthest from anyone in lower socioeconomic classes. These correlations are very similar to the ones we already observed in the social structure, suggesting that the stratified structure and spatial segregation may have similar roots. They are determined by the entangled effects of economic status and status homophily, together with other factors such as ethnicity or other environmental effects, which we cannot consider here. The socioeconomic status of people may also correlate with their typical commuting distance (between home and work), a question which has been studied thoroughly during the last few decades. Some of these studies suggest a positive correlation between economic status (income) and the distance people travel every day between their home and work locations~\cite{Wheeler1967Occupational, Wheeler1969Some, Poston1972Socioeconomic}. Such correlations were partially explained by the positive payoff of commuting farther for better jobs while keeping better housing conditions.
On the other hand, recent studies suggest that such trends may be changing nowadays, as central metropolitan areas, where the better job opportunities are concentrated, have become more expensive to live in and are thus occupied by people from richer classes~\cite{LeRoy1983Paradise, Rosenthal2015Change}. Without going into details, we looked for overall signs of such correlations using the estimated home ($\ell_h$) and work ($\ell_w$) locations of individuals from different classes. For each ego we measure a commuting distance as $d_{hw}=|\ell_h-\ell_w|$ and compute the $P_{s_i}(d_{hw})$ distribution for everyone in a given class $s_i$, together with the $P_{all}(d_{hw})$ distribution considering all individuals. For each class we are interested in \begin{equation} d_{\Delta}^{s_i}(d_{hw})=P_{s_i}(d_{hw})-P_{all}(d_{hw}), \end{equation} i.e., the difference of the corresponding distributions at each distance $d_{hw}$. This measure is positive (resp. negative) if more (resp. fewer) people commute a distance $d_{hw}$ than in the overall distribution, thus indicating whether people of a given class are over- (under-)represented at a given distance. Interestingly, our data is in agreement with both of the above hypotheses, as seen in Fig.\ref{fig:4}d where we show $d_{\Delta}^{s_i}(d_{hw})$ for each class as a heat map. There, poorer people are overrepresented at shorter distances, while this trend shifts towards larger distances (see the right-skewed yellow component in Fig.\ref{fig:4}d) as we go up in the class hierarchy. This continues until we reach the richest classes ($8$ and $9$), where the distance function becomes bimodal, indicating that more people of these classes tend to live either very far from or very close to their workplaces compared to expectations for the whole population. This is even more visible in Fig.\ref{fig:4}e, where $d_{\Delta}^{s_i}(d_{hw})$ functions are depicted for selected classes.
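With a shared binning, $d_{\Delta}^{s_i}(d_{hw})$ is simply a difference of two normalised histograms (an illustrative sketch; the input names and binning are our own assumptions):

```python
import numpy as np

def class_distance_excess(d_hw, cls_of, bins):
    """d_Delta^{s_i}(d_hw): per-class commuting-distance distribution
    minus the distribution of the whole population, on common bins.
    `d_hw` maps ego -> commuting distance, `cls_of` maps ego -> class."""
    p_all, _ = np.histogram(list(d_hw.values()), bins=bins, density=True)
    excess = {}
    for s in set(cls_of.values()):
        ds = [d for u, d in d_hw.items() if cls_of[u] == s]
        p_s, _ = np.histogram(ds, bins=bins, density=True)
        excess[s] = p_s - p_all   # >0: class overrepresented in this bin
    return excess
```

For the logarithmic $x$ axis of Fig.\ref{fig:4}d, `bins` would be taken logarithmically spaced, e.g. `np.logspace(...)`.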
\section*{Discussion} In this paper, we have investigated socioeconomic correlations through the analysis of a coupled dataset of mobile phone communication records and bank transaction history for millions of individuals over $8$ months. After mapping the social structure and estimating individual economic capacities, we addressed four different aspects of their correlations: (a) we showed that individual economic indicators such as average monthly purchases and also debts are unevenly distributed in the population, in agreement with the Pareto principle; (b) after grouping people into nine socioeconomic classes we detected effects of status homophily and showed that the socioeconomic network is stratified, as people most frequently maintain social ties with people from their own or neighbouring social classes; (c) we observed that the social structure is upward-biased towards wealthier classes and showed that assortative correlations give rise to strongly connected ``rich clubs'' in the network; (d) finally, we demonstrated that people of the same socioeconomic class tend to live closer to each other as compared to people from other classes, and found a positive correlation between their economic capacities and the typical distance they commute. Even though our study is built on large and detailed data, the utilised data only partially covers the population of the investigated country. However, as we demonstrated above, for population-level measures, such as the Gini coefficient and the spatial distribution, we obtained values close to independently reported ones, and thus our observations may generalise in this sense. In addition, the question remains how well mobile phone call networks approximate the real social structure. A recent study \cite{Eagle2009Inferring} demonstrated that real social ties can be effectively mapped from mobile call interactions with precision up to $95\%$.
However, it is important to keep in mind that the poorest social class of the society is probably under-represented in the data as they may have no access to bank services and/or do not hold mobile phones. Datasets simultaneously disclosing the social structure and the socioeconomic indicators of a large number of individuals are still very rare. However, several promising directions have been proposed lately to estimate socioeconomic status from communication behaviour at the regional level \cite{Specanovic2015Mobile, Blumenstock2010Mobile, Mao2015Quantifying} or even for individuals \cite{Blumenstock2015Predicting}, just to mention a few. In future work these methods could be used to generalise our results to other countries using mobile communication datasets. Here, our aim was to report some general observations in this direction using directly estimated individual economic indicators. Our overall motivation was to empirically verify some long-standing hypotheses and to explore common ground between hypothesis-driven and data-driven research addressing social phenomena. \section*{Data and Materials} \subsection*{Mobile communication data} Communication data used in our study records the temporal sequence of 7,945,240,548 call and SMS interactions of 111,719,360 anonymised mobile phone users for $21$ months (between January 2014 and September 2015) in Mexico. Each call detail record (CDR) contains the time, unique caller and callee IDs, the direction and duration of the interaction, and the cell tower location of the client(s) involved in the interaction. Other mobile phone users, who are not clients of the actual provider, also appear in the dataset with unique IDs. All unique IDs are anonymised as explained below, thus individual identification of any person is impossible from the data.
Using this dataset we constructed a large social network where nodes are users (whether clients of the actual provider or not), while links are drawn between them if they interacted (via call or SMS) at least once during the observation period. In order to filter out call services and other non-human actors from the social network, after construction we recursively removed all nodes (and their links) that appeared with either in-degree $k_{in}=0$ or out-degree $k_{out}=0$. We repeated this procedure until we obtained a network where each user had $k_{in}, k_{out}>0$, i.e. made at least one outgoing and received at least one incoming communication event during the nearly two years of observation. After construction and filtering, the network contained 82,453,814 users connected by 1,002,833,289 links, which were considered undirected from this point on. \subsection*{Credit and purchase data} To estimate individual economic indicators we used a dataset provided by a single bank in the studied country. This data records the financial details of $6,002,192$ people assigned unique anonymised identifiers over $8$ months from November 2014 to June 2015. The data provides time-varying customer variables such as the amount and type of their daily debit/credit card purchases and their monthly loan measures, and static user attributes such as their billing postal code (zip code), their age, and gender. In addition, for a subset of clients we have the records of monthly salary (38.9\% of users) and income (62.5\% of users), the latter defined as the sum of their salaries and any incoming bank transactions. Note that the observation period of the bank credit information falls within the observation period of the mobile communication dataset, ensuring the largest possible overlap between the sets of bank and mobile phone customers. \subsection*{Location data} We used two types of location data for a set of customers.
We used the zip code of the billing address of bank customers (also called zip location). We also estimated the work and home locations for a set of users using geo-localised mobile communication events. To determine home (resp. work) locations we looked for the most frequented locations during nights and weekends (resp. during daylight on working days). From the total of 992,538 individuals we found 990,173 with correct zip codes, and 94,355 with detectable home and work locations (with at least 10 appearances at each location). Each method has some advantages and disadvantages. While frequency-dependent locations are more precise, they strongly depend on the activity and regularity of users in terms of mobility. On the other hand, zip codes provide more coarse-grained information about the location of individuals, but they are assumed to be more reliable due to reporting constraints to the bank and because they do not depend on the call activity of individuals. \subsection*{Combined datasets and security policies} A subset of the IDs of the anonymised bank and mobile phone customers were matched. The matching, data hashing, and anonymisation procedure was carried out through direct communication between the two providers (bank and mobile provider) and was approved by the national banking commission of the country. This procedure was done without the involvement of the scientific partners. After this procedure only anonymised hashed IDs were shared, disallowing the direct identification of individuals in any of the datasets. Due to the signed non-disclosure agreements and the sensitive nature of the datasets it is impossible to share them publicly. This way of combining the datasets allowed us to simultaneously observe the social structure and the estimated economic status of the connected individuals. The combined dataset contained 999,456 IDs, which appeared in both corpora.
However, for the purpose of our study we considered only the largest connected component of this graph, containing IDs valid in both data corpora. This way we operate with a connected social graph of 992,538 people connected by 1,960,242 links, all of whom have communication events and detailed bank records available. \section*{Competing financial interests} We have no competing interests. \section*{Author contributions} All authors participated in the design of the project and the writing of the manuscript. Y.L., M.K. and E.F. designed the measures, and Y.L. and M.K. performed the data analysis. All authors reviewed the manuscript. \section*{Acknowledgements} We thank M. Fixman and J. Brea for assistance with the data set, J. Saram\"aki and J.P. Chevrot for useful discussions, and the anonymous reviewers for their constructive comments. \section*{Funding} We acknowledge the support from the SticAmSud UCOOL project, INRIA, and the CODDDE (ANR-13-CORD-0017-01) and SoSweet (ANR-15-CE38-0011-01) ANR projects.
\section{Introduction} We begin by introducing the geometric framework of our problem. We fix once and for all a natural number \[ n \in \mathbb{N} \setminus \{0,1\} \] that will be the dimension of the Euclidean space $\mathbb{R}^n$ we are going to work in. We also fix a parameter \[ \alpha \in ]0,1[\, , \] which we use to define the regularity of our sets and functions. In order to introduce the domains where our problem is defined, we take two sets $\Omega^o$ and $\Omega^i$ that satisfy the following conditions: \begin{equation}\label{introsetconditions} \begin{split} &\mbox{$\Omega^o$, $\Omega^i$ are bounded open connected subsets of $\mathbb{R}^n$ of class $C^{1,\alpha}$,} \\ &\mbox{with exteriors $\mathbb{R}^n\setminus \overline{\Omega^o}$ and $\mathbb{R}^n\setminus \overline{\Omega^i}$ connected and $\overline{\Omega^i}\subset \Omega^o$} \end{split} \end{equation} \begin{figure}[ht] \centering \includegraphics{fig1.pdf} \caption{\it The domains $\Omega^o$ and $\Omega^i$ ($n=2$)}\label{fig:Oe} \end{figure} (see Figure \ref{fig:Oe}). Here the superscript ``$o$'' stands for ``outer domain'' whereas the superscript ``$i$'' stands for ``inner domain.'' We first want to introduce a transmission problem in the pair of domains consisting of $\Omega^o \setminus \overline{\Omega^i}$ and $\Omega^i$. Therefore, to define the boundary conditions, we fix three functions \begin{equation}\label{introfunconditions} F_1 \in C^0(\partial \Omega ^i \times \mathbb{R} \times \mathbb{R}),\qquad F_2 \in C^0(\partial \Omega^i \times \mathbb{R} \times \mathbb{R}),\quad f^o \in C^{0,\alpha}(\partial\Omega^o). \end{equation} The functions $F_1$ and $F_2$ determine the transmission conditions on the inner boundary $\partial \Omega^i$. Instead, $f^o$ plays the role of the Neumann datum on the outer boundary $\partial \Omega^o$.
We consider the following nonlinear transmission boundary value problem for a pair $(u^o,u^i) \in C^{1,\alpha}(\overline{\Omega^o} \setminus \Omega^i) \times C^{1,\alpha}(\overline{\Omega^i})$: \begin{equation}\label{princeq} \begin{cases} \Delta u^o = 0 & \mbox{in } \Omega^o \setminus \overline{\Omega^i}, \\ \Delta u^i = 0 & \mbox{in } \Omega^i, \\ \nu_{\Omega^o}(x) \cdot \nabla u^o(x)=f^o(x) & \forall x \in \partial \Omega^o, \\ \nu_{\Omega^i}(x) \cdot \nabla u^o (x) = F_1(x,u^o(x),u^i(x)) & \forall x \in \partial \Omega^i, \\ \nu_{\Omega^i}(x) \cdot \nabla u^i (x) = F_2(x,u^o(x),u^i(x)) & \forall x \in \partial \Omega^i, \end{cases} \end{equation} where $\nu_{\Omega^o}$ and $\nu_{\Omega^i}$ denote the outward unit normal vector fields to $\partial \Omega^o$ and to $\partial \Omega^i$, respectively. We note that, \emph{a priori}, it is not clear why problem \eqref{princeq} should admit a classical solution. As a first result, we prove that, under suitable conditions on $F_1$ and $F_2$, problem \eqref{princeq} has at least one solution $(u^o,u^i) \in C^{1,\alpha}(\overline{\Omega^o} \setminus \Omega^i) \times C^{1,\alpha}(\overline{\Omega^i})$. We notice that a problem similar to \eqref{princeq} has been studied by Dalla Riva and Mishuris \cite{DaMi15}. More precisely, in \cite{DaMi15} the authors consider a nonlinear transmission problem with a Dirichlet boundary condition on $\partial\Omega^o$ and a jump-type condition for the normal derivative across the interface $\partial\Omega^i$. We then study the existence and the analytic dependence of the solutions of the transmission problem \eqref{princeq} upon domain perturbation of the inclusion, {i.e.} of the inner set $\Omega^i$.
Hence, we introduce a ``perturbed'' version of problem \eqref{princeq}: we fix the external domain $\Omega^o$ and we assume that the boundary of the internal domain is of the form $\phi (\partial\Omega^i)$, where $\phi$ is a diffeomorphism of $\partial\Omega^i$ into a subset of $\mathbb{R}^n$ that belongs to the class \begin{equation}\label{A_Omega^i} \begin{split} \mathcal{A}_{\partial\Omega^i} \equiv \Big\{ \phi & \in C^{1,\alpha}(\partial\Omega^i, \mathbb{R}^n): \, \phi \text{ is injective and}\\ &\text{the differential} \, d\phi(y) \text{ is injective for all } y \in \partial\Omega^i \Big\}. \end{split} \end{equation} Clearly, the identity function of $\partial\Omega^i$ belongs to the class $\mathcal{A}_{\partial\Omega^i}$, and, for convenience, we set \begin{equation}\label{phi0} \phi_0 \equiv \text{id}_{\partial\Omega^i}. \end{equation} Then by the Jordan Leray Separation Theorem (cf., {e.g.}, Deimling \cite[Theorem 5.2]{De85} and \cite[\S A.4]{DaLaMu21}), $\mathbb{R}^n \setminus \phi(\partial\Omega^i)$ has exactly two open connected components for all $\phi \in \mathcal{A}_{\partial\Omega^i}$, and we define $\Omega^i[\phi]$ to be the unique bounded open connected component of $\mathbb{R}^n \setminus \phi(\partial\Omega^i)$. We set \[ \mathcal{A}^{\Omega^o}_{\partial\Omega^i} \equiv \left\{ \phi \in \mathcal{A}_{\partial\Omega^i} : \overline{\Omega^i[\phi]} \subset \Omega^o \right\}\, . \] By assumption \eqref{introsetconditions}, $\phi_0 \in \mathcal{A}^{\Omega^o}_{\partial\Omega^i}$. Now let $\phi \in \mathcal{A}^{\Omega^o}_{\partial\Omega^i}$. 
We wish to consider the following nonlinear transmission boundary value problem for a pair of functions $(u^o,u^i) \in C^{1,\alpha}(\overline{\Omega^o} \setminus \Omega^i[\phi]) \times C^{1,\alpha}(\overline{\Omega^i[\phi]})$: \begin{equation}\label{princeqpertu} \begin{cases} \Delta u^o = 0 & \mbox{in } \Omega^o \setminus \overline{\Omega^i[\phi]}, \\ \Delta u^i = 0 & \mbox{in } \Omega^i[\phi], \\ \nu_{\Omega^o}(x) \cdot \nabla u^o(x)=f^o(x) & \forall x \in \partial \Omega^o, \\ \nu_{\Omega^i[\phi]}(x) \cdot \nabla u^o (x) = F_1(\phi^{(-1)}(x),u^o(x),u^i(x)) & \forall x \in \phi(\partial\Omega^i), \\ \nu_{\Omega^i[\phi]}(x) \cdot \nabla u^i (x) = F_2(\phi^{(-1)}(x),u^o(x),u^i(x)) & \forall x \in \phi(\partial\Omega^i), \end{cases} \end{equation} where $\nu_{\Omega^i[\phi]}$ denotes the outward unit normal vector field to $\Omega^i[\phi]$. We prove that, under suitable conditions, problem \eqref{princeqpertu} admits a family of solutions $\{(u^o_\phi,u^i_\phi)\}_{\phi \in Q_0}$, where $Q_0$ is a neighbourhood of $\phi_0$ in $\mathcal{A}^{\Omega^o}_{\partial\Omega^i}$ and $(u^o_\phi,u^i_\phi) \in C^{1,\alpha}(\overline{\Omega^o} \setminus \Omega^i[\phi]) \times C^{1,\alpha}(\overline{\Omega^i[\phi]})$ for every $\phi \in Q_0$. In the literature, the existence of solutions of nonlinear boundary value problems has been extensively investigated by means of variational techniques (see, {e.g.}, the monographs of Ne\v{c}as \cite{Ne83} and of Roub\'i\v{c}ek \cite{Ro13} and the references therein). Moreover, potential theoretic techniques have been widely exploited to study nonlinear boundary value problems with transmission conditions by Berger, Warnecke, and Wendland \cite{BeWaWe90}, by Costabel and Stephan \cite{CoSt90}, by Gatica and Hsiao \cite{GaHs95}, and by Barrenechea and Gatica \cite{BaGa96}. Boundary integral methods have also been applied by Mityushev and Rogosin for the analysis of transmission problems in the plane (cf.~\cite[Chap.~5]{MiRo00}).
Several authors have investigated the dependence of the solutions to boundary value problems upon domain perturbations, and it is impossible to provide a complete list of contributions. Here we mention, for example, Henrot and Pierre \cite{HePi05}, Henry \cite{He05}, Keldysh \cite{Ke66}, Novotny and Soko\l owski \cite{NoSo13}, and Soko\l owski and Zol\'esio \cite{SoZo92}. Most of the contributions on this topic deal with first or second order shape derivability of functionals associated with the solutions of linear boundary value problems. In the present paper, instead, we are interested in higher order regularity properties (namely, real analyticity) of the solutions of a nonlinear problem. To do so, we choose to adopt the Functional Analytic Approach, which has proved to be a powerful tool to analyse perturbed linear and nonlinear boundary value problems. This method was first applied to investigate regular and singular domain perturbation problems for elliptic equations and systems with the aim of proving real analytic dependence upon the perturbation parameter (cf. Lanza de Cristoforis \cite{La02,La07-2,La10}). An application to the study of the behaviour of the effective conductivity of a periodic two-phase composite upon perturbations of the inclusion can be found in Luzzini and Musolino \cite{LuMu20}. The key point of the strategy of the method is the transformation of the perturbed boundary value problem into an equivalent functional equation that can be studied by the Implicit Function Theorem. Typically, such a transformation is achieved by exploiting classical results of potential theory, for example the integral representation of harmonic functions in terms of layer potentials.
Nonlinear transmission problems in perturbed domains have been studied by Lanza de Cristoforis in \cite{La10} and by the authors of the present paper in \cite{DaMoMu19, Mo19}, where they have investigated the behaviour of the solution of a nonlinear transmission problem for the Laplace equation in a domain with a small inclusion shrinking to a point. The paper is organised as follows. In Section \ref{notation} we define some of the symbols used later on. In Section \ref{Preliminaries} we introduce some classical results of potential theory that we need. Section \ref{sec princeq} is devoted to the study of problem \eqref{princeq}. We first prove a representation result for harmonic functions in $\overline{\Omega^o}\setminus\Omega$ and $\overline{\Omega}$ (where $\Omega$ is an open bounded connected subset of class $C^{1,\alpha}$ whose closure is contained in $\Omega^o$) in terms of single layer potentials with appropriate densities and constant functions (cf. Lemma \ref{rapprharm}). Then we prove a uniqueness result in $C^{1,\alpha}(\overline{\Omega^o}\setminus\Omega^i) \times C^{1,\alpha}(\overline{\Omega^i})$ for a homogeneous linear transmission problem in the pair of domains $\Omega^o\setminus\overline{\Omega^i}$ and $\Omega^i$ and we analyse an auxiliary boundary operator arising from the integral formulation of that problem (cf. Lemma \ref{Alemma} and Proposition \ref{J_A}). In Proposition \ref{propintsys} we provide a formulation of problem \eqref{princeq} in terms of integral equations. The obtained integral system is solved by means of a fixed-point theorem, namely the Leray-Schauder Theorem (cf. Proposition \ref{Tcontcomp} and Proposition \ref{prop mu_0}). Finally, under suitable conditions on the functions $F_1$ and $F_2$, we obtain an existence result in $C^{1,\alpha}(\overline{\Omega^o}\setminus\Omega^i) \times C^{1,\alpha}(\overline{\Omega^i})$ for problem \eqref{princeq} (cf. Proposition \ref{prop u^o_0,u^i_0}).
Section \ref{sec princeqpertu} is devoted to the study of problem \eqref{princeqpertu}. We provide a formulation of problem \eqref{princeqpertu} in terms of integral equations depending on the diffeomorphism $\phi$, which we rewrite as an equation of the type $M[\phi,\mu] = 0$ for an auxiliary map $M: \mathcal{A}^{\Omega^o}_{\partial\Omega^i} \times X \to Y$ (with $X$ and $Y$ suitable Banach spaces), where the variable $\mu$ is related to the densities of the integral representation of the solution (cf. Proposition \ref{M=0prop}). Then, by analyticity results for the dependence of single and double layer potentials upon the perturbation of the support, we prove that $M$ is real analytic (cf. Proposition \ref{Mrealanal}) and that the differential of $M$ with respect to the variable $\mu \in X$ is an isomorphism (cf. Proposition \ref{diffMprop}). Hence, by the Implicit Function Theorem, we show the existence of a family of solutions $\{(u^o_\phi,u^i_\phi)\}_{\phi \in Q_0}$ of \eqref{princeqpertu} (cf. Theorem \ref{upertuex}) and we prove that it can be represented in terms of real analytic functions (cf. Theorem \ref{upertuana}). \section{Notation}\label{notation} We denote by $\mathbb{N}$ the set of natural numbers including $0$. We denote the norm of a real normed space $X$ by $\| \cdot \| _X$. We denote by $I_X$ the identity operator from $X$ to itself and we omit the subscript $X$ where no ambiguity can occur. If $X$ and $Y$ are normed spaces we consider on the product space $X \times Y$ the norm defined by $\| (x,y) \|_{X \times Y} \equiv \|x\|_X + \|y\|_Y $ for all $(x,y) \in X \times Y$, while we use the Euclidean norm for $\mathbb{R}^d$, $d\in\mathbb{N}\setminus\{0,1\}$. If $U$ is an open subset of $X$, and $F:U \to Y$ is a Fr\'echet-differentiable map in $U$, we denote the differential of $F$ by $dF$.
The inverse function of an invertible function $f$ is denoted by $f^{(-1)}$, while the reciprocal of a non-zero scalar function $g$ or the inverse of an invertible matrix $A$ are denoted by $g^{-1}$ and $A^{-1}$ respectively. Let $\Omega \subseteq \mathbb{R}^n$. Then $\overline{\Omega}$ denotes the closure of $\Omega$ in $\mathbb{R}^n$, $\partial \Omega$ denotes the boundary of $\Omega$, and $\nu_\Omega$ denotes the outward unit normal to $\partial \Omega$. For $x \in \mathbb{R}^d$, $x_j$ denotes the $j$-th coordinate of $x$ and $|x|$ denotes the Euclidean modulus of $x$ in $\mathbb{R}^d$. If $x \in \mathbb{R}^d$ and $r>0$, we denote by $B_d(x,r)$ the open ball of center $x$ and radius $r$. Let $\Omega$ be an open subset of $\mathbb{R}^n$ and $m \in \mathbb{N} \setminus \{0\}$. The space of $m$ times continuously differentiable real-valued functions on $\Omega$ is denoted by $C^m(\Omega,\mathbb{R})$ or more simply by $C^m(\Omega)$. Let $r \in \mathbb{N} \setminus \{0\}$, $f \in (C^m(\Omega))^r$. The $s$-th component of $f$ is denoted by $f_s$ and the gradient of $f_s$ is denoted by $\nabla f_s$. Let $\eta=(\eta_1, \dots ,\eta_n) \in \mathbb{N}^n$ and $|\eta|=\eta_1+ \dots+\eta_n$. Then $D^\eta f \equiv \frac{\partial^{|\eta|}f}{\partial x^{\eta_1}_1 \cdots \partial x^{\eta_n}_n}$. We retain the standard notation for the space $C^{\infty}(\Omega)$ and its subspace $C^{\infty}_c(\Omega)$ of functions with compact support. The subspace of $C^m(\Omega)$ of those functions $f$ such that $f$ and its derivatives $D^\eta f$ of order $|\eta|\le m$ can be extended with continuity to $\overline{\Omega}$ is denoted $C^m(\overline{\Omega})$. We denote by $C^m_b(\overline{\Omega})$ the space of functions of $C^m(\overline{\Omega})$ such that $D^{\eta} f$ is bounded for $|\eta|\leq m$.
Then the space $C^m_b(\overline{\Omega})$ equipped with the usual norm $\|f\|_{C^m_b(\overline{\Omega})} \equiv \sum_{|\eta|\leq m} \sup_{\overline{\Omega}} |D^\eta f|$ is well known to be a Banach space. Let $f \in C^0(\overline{\Omega})$. Then we define its H\"{o}lder constant as \begin{equation*} |f : \Omega|_\alpha\equiv \mbox{sup} \left\{\frac{|f(x)-f(y)|}{|x-y|^\alpha} : x,y \in \overline{\Omega}, x \neq y \right\}. \end{equation*} We define the subspace of $C^0(\overline{\Omega})$ of H\"{o}lder continuous functions with exponent $\alpha \in ]0,1[$ by $C^{0,\alpha}(\overline{\Omega}) \equiv \{f \in C^0(\overline{\Omega}) : \, |f : \Omega|_\alpha < \infty \}$. Similarly, the subspace of $C^m(\overline{\Omega})$ whose functions have $m$-th order derivatives that are H\"{o}lder continuous with exponent $\alpha \in ]0,1[$ is denoted $C^{m,\alpha}(\overline{\Omega})$. Then the space $C^{m,\alpha}_b(\overline{\Omega}) \equiv C^{m,\alpha}(\overline{\Omega}) \cap C^m_b(\overline{\Omega}) \,,$ equipped with its usual norm $\|f\|_{C^{m,\alpha}_b(\overline{\Omega})} \equiv \|f\|_{C^{m}_b(\overline{\Omega})} + \sum_{|\eta|=m}{|D^\eta f : \Omega|_\alpha}$, is a Banach space. If $\Omega$ is bounded, then $C^{m,\alpha}_b(\overline{\Omega}) = C^{m,\alpha}(\overline{\Omega})$, and we omit the subscript $b$. We denote by $C^{m,\alpha}_{\mathrm{loc}}(\mathbb{R}^n \setminus \Omega)$ the space of functions on $\mathbb{R}^n \setminus \Omega$ whose restriction to $\overline{U}$ belongs to $C^{m,\alpha}(\overline{U})$ for all open bounded subsets $U$ of $\mathbb{R}^n \setminus \Omega$. On $C^{m,\alpha}_{\mathrm{loc}}(\mathbb{R}^n \setminus \Omega)$ we consider the natural structure of Fr\'echet space. 
Finally, if $\Omega$ is bounded, we set \begin{equation*} C^{m,\alpha}_{\mathrm{h}}(\overline{\Omega}) \equiv \{ u \in C^{m,\alpha}(\overline{\Omega}) \cap C^2(\Omega): \Delta u = 0 \text{ in } \Omega \}, \end{equation*} \begin{equation*} \begin{split} C^{m,\alpha}_{\mathrm{h}}(\mathbb{R}^n \setminus \Omega) \equiv \{ u \in C^{m,\alpha}(\mathbb{R}^n \setminus \Omega)& \cap C^2(\mathbb{R}^n \setminus \overline{\Omega}): \Delta u = 0 \text{ in } \mathbb{R}^n \setminus \overline{\Omega}, \\ & |u(x)| = O(|x|^{2-n}) \text{ as } |x| \to +\infty \}. \end{split} \end{equation*} The condition $|u(x)| = O(|x|^{2-n})$ as $|x| \to +\infty$ in the above definition is equivalent, for a harmonic function, to the so-called harmonicity at infinity (see Folland \cite[Prop.~(2.74), p.~112]{Fo95}). We say that a bounded open subset of $\mathbb{R}^n$ is of class $C^{m,\alpha}$ if it is a manifold with boundary imbedded in $\mathbb{R}^n$ of class $C^{m,\alpha}$. In particular, if $\Omega$ is a $C^{1,\alpha}$ subset of $\mathbb{R}^n$, then $\partial\Omega$ is a $C^{1,\alpha}$ sub-manifold of $\mathbb{R}^n$ of co-dimension $1$. If $M$ is a $C^{m,\alpha}$ sub-manifold of $\mathbb{R}^n$ of dimension $d\ge 1$, we define the space $C^{m,\alpha}(M)$ by exploiting a finite local parametrization. We retain the standard definition of the Lebesgue spaces $L^p$, $p\ge 1$. If $\Omega$ is of class $C^{1,\alpha}$, we denote by $d\sigma$ the area element on $\partial\Omega$. If $Z$ is a subspace of $L^1(\partial \Omega)$, we set \[ Z_0 \equiv \left\{ f \in Z : \int_{\partial\Omega} f \,d\sigma = 0 \right\}.
\] Then we introduce a notation for superposition operators: if $H$ is a function from $\partial \Omega^i \times\mathbb{R}\times \mathbb{R}$ to $\mathbb{R}$, then we denote by $\mathcal{N}_{H}$ the nonlinear nonautonomous superposition operator that takes a pair $(h^1,h^2)$ of functions from $\partial\Omega^i$ to $\mathbb{R}$ to the function $\mathcal{N}_{H}(h^1,h^2)$ defined by \begin{equation*} \mathcal{N}_{H}(h^1,h^2)(x) \equiv H(x,h^1(x),h^2(x)) \quad\forall x \in \partial\Omega^i. \end{equation*} Here the letter ``$\mathcal{N}$'' stands for ``Nemytskii operator.'' Finally, we have the following result by Lanza de Cristoforis and Rossi \cite[Lemma 3.3, Prop. 3.13]{LaRo04}. \begin{lemma}\label{lemmanotation} Let $\Omega^i$ be as in \eqref{introsetconditions} and let $\mathcal{A}_{\partial\Omega^i}$ be as in \eqref{A_Omega^i}. Let $\phi \in \mathcal{A}_{\partial\Omega^i}$. Then there exists a unique function $\tilde{\sigma}_n[\phi] \in C^{0,\alpha}(\partial\Omega^i)$ such that \[ \int_{\phi(\partial\Omega^i)} f(y) \,d\sigma_y = \int_{\partial\Omega^i} f(\phi(s)) \, \tilde{\sigma}_n[\phi](s) \,d\sigma_s \quad\forall f \in L^1(\phi(\partial\Omega^i)). \] Moreover the map from $\mathcal{A}_{\partial\Omega^i}$ to $C^{0,\alpha}(\partial\Omega^i)$ that takes $\phi$ to $\tilde{\sigma}_n[\phi]$ and the map from $\mathcal{A}_{\partial\Omega^i}$ to $C^{0,\alpha}(\partial\Omega^i)$ that takes $\phi$ to $\nu_{\Omega^i[\phi]}(\phi(\cdot))$ are real analytic. \end{lemma} \section{Some preliminaries of potential theory}\label{Preliminaries} As we have mentioned, a key point of the Functional Analytic Approach is the reformulation of the boundary value problem in terms of an equivalent integral equation. To this aim, we exploit representation formulas for harmonic functions in terms of layer potentials. In this section, we collect some classical results of potential theory. We do not present the proofs, which can be found, for example, in Folland \cite[Chap.
3]{Fo95}, in Gilbarg and Trudinger \cite[Sec. 2]{GiTr83}. \begin{defin} We denote by $S_n$ the function from $\mathbb{R}^n \setminus \{0\}$ to $\mathbb{R}$ defined by \begin{equation*} S_n(x) \equiv \begin{cases} \frac{1}{s_n} \log |x| & \forall x \in \mathbb{R}^n \setminus \{0\} \qquad \mbox{if } n=2 \\ \frac{1}{(2-n) s_n} |x|^{2-n} & \forall x \in \mathbb{R}^n \setminus \{0\} \qquad \mbox{if } n>2 \end{cases} \end{equation*} where $s_n$ denotes the $(n-1)$-dimensional measure of $\partial B_n(0,1)$. \end{defin} $S_n$ is well known to be a fundamental solution of the Laplace operator $\Delta=\sum_{j=1}^n\partial^2_{x_j}$. We now assume that \[ \text{$\Omega$ is an open bounded subset of $\mathbb{R}^n$ of class $C^{1,\alpha}$}. \] In the following definition we introduce the single layer potential, which we use to transform our problems into integral equations. \begin{defin} We denote by $v_{\Omega}[\mu]$ the single layer potential with density $\mu$, {i.e.} the function defined by \begin{equation*} v_{\Omega}[\mu](x) \equiv \int_{\partial \Omega}{S_n(x-y) \mu(y) \,d\sigma_y} \qquad \forall x \in \mathbb{R}^n \, , \forall \mu \in L^2(\partial\Omega)\, . \end{equation*} \end{defin} It is well known that if $\mu \in C^{0,\alpha}(\partial\Omega)$, then $v_{\Omega}[\mu] \in C^0(\mathbb{R}^n)$. We set \begin{equation*} v^+_{\Omega}[\mu]\equiv v_{\Omega}[\mu]_{| \overline{\Omega}}, \qquad v^-_{\Omega}[\mu] \equiv v_{\Omega}[\mu]_{| \mathbb{R}^n \setminus \Omega}. \end{equation*} Then we define the boundary integral operators associated to the trace of the single layer potential and its normal derivative. \begin{defin}\label{defV-W} We denote by $V_{\partial\Omega}$ the operator from $L^2(\partial\Omega)$ to itself that takes $\mu$ to the function $V_{\partial\Omega}[\mu]$ defined in the trace sense by \[ V_{\partial\Omega}[\mu]\equiv v_{\Omega}[\mu]_{|\partial\Omega}. 
\] We denote by $W_{\partial\Omega}$ the integral operator from $L^2(\partial \Omega)$ to itself defined by \begin{equation*} W_{\partial\Omega}[\mu](x) \equiv - \int_{\partial \Omega}\! \!{\nu_\Omega(y) \cdot \nabla S_n(x-y) \mu(y) \,d\sigma_y} \ \ \text{for a.e. } x \in \partial \Omega, \forall \mu \in L^2(\partial\Omega). \end{equation*} We denote by $W^\ast_{\partial\Omega}$ the integral operator from $L^2(\partial \Omega)$ to itself which is the transpose of $W_{\partial\Omega}$ and that is defined by \[ W^\ast_{\partial\Omega}[\mu](x) \equiv \int_{\partial \Omega}\! \!{\nu_\Omega(x) \cdot \nabla S_n(x-y) \mu(y) \,d\sigma_y} \ \ \mbox{for a.e. } x \in \partial \Omega, \forall \mu \in L^2(\partial\Omega). \] \end{defin} As is well known, since $\Omega$ is of class $C^{1,\alpha}$, $W_{\partial\Omega}$ and $W^\ast_{\partial\Omega}$ are compact operators from $L^2(\partial\Omega)$ to itself (both display a weak singularity). In particular $\left( \pm\frac{1}{2} I + W^\ast_{\partial\Omega} \right)$ are Fredholm operators of index $0$ from $L^2(\partial\Omega)$ to itself. Moreover, one verifies that $W_{\partial\Omega}: C^{1,\alpha}(\partial\Omega) \to C^{1,\alpha}(\partial\Omega)$ and $W^\ast_{\partial\Omega}: C^{0,\alpha}(\partial\Omega) \to C^{0,\alpha}(\partial\Omega)$ are transposes of one another with respect to the duality of $C^{1,\alpha}(\partial\Omega) \times C^{0,\alpha}(\partial\Omega)$ induced by the inner product of $L^2(\partial\Omega)$. We collect some well known properties of the single layer potential in the theorem below. In particular, we note that the operator of statement (iv) is an isomorphism both in dimension $n=2$ and in dimension $n \geq 3$ (see, {e.g.}, \cite[Theorem 6.47]{DaLaMu21}). \begin{teo}[Properties of the single layer potential]\label{sdp} The following statements hold. \begin{enumerate} \item[(i)] For all $\mu \in L^2(\partial\Omega)$, the function $v_{\Omega}[\mu]$ is harmonic in $\mathbb{R}^n\setminus \partial\Omega$.
If $n\geq3$, or if $n=2$ and $\int_{\partial \Omega}\mu\, d\sigma=0$, then $v_{\Omega}[\mu]$ is also harmonic at infinity. \item[(ii)] If $\mu \in C^{0,\alpha}(\partial\Omega)$, then $v^+_{\Omega}[\mu] \in C^{1,\alpha}(\overline{\Omega})$ and the map from $C^{0,\alpha}(\partial\Omega)$ to $C^{1,\alpha}(\overline{\Omega})$ that takes $\mu$ to $v^+_{\Omega}[\mu]$ is linear and continuous. Moreover, $v^-_{\Omega}[\mu] \in C^{1,\alpha}_{\mathrm{loc}}(\mathbb{R}^n \setminus \Omega)$ and the map from $C^{0,\alpha}(\partial \Omega)$ to $C^{1,\alpha}_{\mathrm{loc}}(\mathbb{R}^n \setminus \Omega)$ that takes $\mu$ to $v^-_{\Omega}[\mu]$ is linear and continuous. \item[(iii)] If $\mu \in C^{0,\alpha}(\partial\Omega)$, then we have the following jump relations \begin{equation*} \nu_\Omega(x) \cdot \nabla v^\pm_{\Omega}[\mu] (x) = \left( \mp \frac{1}{2} I + W^\ast_{\partial\Omega} \right)[\mu](x) \qquad \forall x \in \partial \Omega. \end{equation*} \item[(iv)] The map from $C^{0,\alpha}(\partial\Omega)_0 \times \mathbb{R}$ to $C^{0,\alpha}(\partial\Omega)$ that takes a pair $(\mu,\rho)$ to $V_{\partial\Omega}[\mu] + \rho$ is an isomorphism. \end{enumerate} \end{teo} Since $\Omega$ is of class $C^{1,\alpha}$, the following classical compactness result holds (cf.~Schauder \cite{Sc31,Sc32}). \begin{teo}\label{Schaudercompact} The map that takes $\mu$ to $W^\ast_{\partial\Omega}[\mu]$ is compact from $C^{0,\alpha}(\partial\Omega)$ to itself. \end{teo} Theorem \ref{Schaudercompact} implies that $\left( \pm\frac{1}{2} I + W^\ast_{\partial\Omega} \right)$ are Fredholm operators of index $0$ from $C^{0,\alpha}(\partial\Omega)$ into itself. We now collect some regularity results for integral operators. We first introduce the following definition (see Folland \cite[Chap. 3 \S B]{Fo95}). \begin{defin}\label{kerneldef} Let $K$ be a measurable function from $\partial\Omega \times \partial\Omega$ to $\mathbb{R}$ and let $0 \leq \beta < n-1$.
We say that $K$ is a continuous kernel of order $\beta$ if \begin{equation*} K(x,y) = k(x,y) |x-y|^{-\beta} \quad \forall(x,y)\in \partial\Omega \times \partial\Omega, \end{equation*} for some continuous function $k$ on $\partial\Omega \times \partial\Omega$. \\ If $K$ is a continuous kernel of order $\beta$, we denote by $\mathcal{K}_K$ the integral operator from $L^2(\partial\Omega)$ to itself defined by \begin{equation*} \mathcal{K}_K [\mu] (x) \equiv \int_{\partial\Omega} K(x,y) \mu (y) \,d\sigma_y \qquad \text{for a.e. } x \in \partial \Omega\, , \forall \mu \in L^2(\partial\Omega)\, . \end{equation*} \end{defin} We observe that the functions $K_1(x,y) \equiv S_n(x-y)$ and $K_2(x,y) \equiv \nu_\Omega(y) \cdot \nabla S_n(x-y)$ of $(x,y) \in \partial\Omega\times \partial \Omega$, $x \neq y$, are continuous kernels of order $n-2$ (cf. Folland \cite[Prop. 3.17]{Fo95}). Clearly, we can extend the notion of integral operator with a continuous kernel to the vectorial case just by applying the definition above component-wise. Then we present a vectorial version of a classical regularity result (see, for example, Folland \cite[Prop. 3.13]{Fo95}). \begin{teo}\label{regularityvecttheorem} Let $0\leq \beta < n-1$. Let $K_i^j$ with $i,j \in \{1,2\}$ be continuous kernels of order $\beta$. Let $\mathcal{K}=(\mathcal{K}_1,\mathcal{K}_2)$ be the operator from $(L^2(\partial\Omega))^2$ to itself defined by \[ \mathcal{K}_1 [\mu_1,\mu_2] = \mathcal{K}_{K_1^1}[\mu_1] + \mathcal{K}_{K_2^1}[\mu_2],\qquad \mathcal{K}_2 [\mu_1,\mu_2] = \mathcal{K}_{K_1^2}[\mu_1] + \mathcal{K}_{K_2^2}[\mu_2], \] for all $(\mu_1,\mu_2) \in (L^2(\partial\Omega))^2$. If $(I + \mathcal{K})[\mu_1,\mu_2] \in (C^{0}(\partial\Omega))^2$, then $(\mu_1,\mu_2) \in (C^{0}(\partial\Omega))^2$. \end{teo} Finally, we present in Theorem \ref{regularitytheorem} a regularity result that will be widely used in what follows.
The proof exploits a standard argument on iterated kernels and can be found, {e.g.}, in Dalla Riva and Mishuris \cite[Lem.~3.3]{DaMi15}. \begin{teo}\label{regularitytheorem} Let $\mu \in L^2(\partial\Omega)$. Let $\beta \in [0,\alpha]$. If $\left( \frac{1}{2} I + W^\ast_{\partial\Omega} \right)[\mu]$ or $\left( -\frac{1}{2} I + W^\ast_{\partial\Omega} \right)[\mu]$ belongs to $C^{0,\beta}(\partial\Omega)$, then $\mu \in C^{0,\beta}(\partial\Omega)$. \end{teo} \section{Existence result for problem (\ref{princeq})}\label{sec princeq} The aim of this section is to prove an existence result for problem \eqref{princeq}. We start with the following representation result for harmonic functions in $\Omega^o \setminus \overline{\Omega}$ and in $\Omega$ in terms of single layer potentials plus constant functions. The set $\Omega$ in Lemma \ref{rapprharm} will later be replaced by the set $\Omega^i$ and by the perturbed set $\Omega^i[\phi]$. \begin{lemma}\label{rapprharm} Let $\Omega$ be an open bounded connected subset of $\mathbb{R}^n$ of class $C^{1,\alpha}$, such that $\mathbb{R}^n\setminus \overline{\Omega}$ is connected and $\overline{\Omega}\subset \Omega^o$. Then the map from $C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega) \times C^{0,\alpha}(\partial\Omega)_0 \times \mathbb{R}^2$ to $C^{1,\alpha}_{\mathrm{h}}(\overline{\Omega^o} \setminus \Omega) \times C^{1,\alpha}_{\mathrm{h}}(\overline{\Omega})$ that takes a quintuple $(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i)$ to the pair of functions $(U^o_\Omega[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i], U^i_\Omega[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i])$ defined by \begin{equation}\label{U^o,U^i} \begin{aligned} & U^o_\Omega[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i] \equiv (v^+_{\Omega^o} [\mu^o] + v^-_{\Omega}[\mu^i] + \rho^o)_{| \overline{\Omega^o} \setminus \Omega} \\ & U^i_\Omega[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i] \equiv v^+_{\Omega}[\eta^i] + \rho^i \end{aligned} \end{equation} is bijective.
\end{lemma} \begin{proof} \ The map is well defined. Indeed, by the harmonicity and regularity properties of single layer potentials (cf.~Theorem \ref{sdp} (i)-(ii)), we know that \begin{align*} &\Delta U^o_\Omega[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i]=0 \quad \text{on } \Omega^o \setminus \overline{\Omega}, \\ &\Delta U^i_\Omega[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i]=0 \quad \text{on } \Omega, \\ &(U^o_\Omega[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i],U^i_\Omega[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i]) \in C^{1,\alpha}(\overline{\Omega^o} \setminus \Omega) \times C^{1,\alpha}(\overline{\Omega}), \end{align*} for all $(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i) \in C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega) \times C^{0,\alpha}(\partial\Omega)_0 \times \mathbb{R}^2$. We now show that it is bijective. So, we take a pair of functions $(h^o,h^i) \in C^{1,\alpha}_{\mathrm{h}}(\overline{\Omega^o} \setminus \Omega) \times C^{1,\alpha}_{\mathrm{h}}(\overline{\Omega})$ and we prove that there exists a unique quintuple $(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i) \in C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega) \times C^{0,\alpha}(\partial\Omega)_0 \times \mathbb{R}^2$ such that \begin{equation}\label{h^o,h^i} (U^o_\Omega[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i],U^i_\Omega[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i])=(h^o,h^i). \end{equation} By the uniqueness of the classical solution of the Dirichlet boundary value problem, the second equation in \eqref{h^o,h^i} is equivalent to \begin{equation}\label{h^i} V_{\partial\Omega}[\eta^i] + \rho^i = h^i_{| \partial \Omega} \end{equation} (notice that, since $h^i$ is an element of $C^{1,\alpha}_{\mathrm{h}}(\overline{\Omega})$, we have $h^i_{|\partial\Omega} \in C^{1,\alpha} (\partial\Omega) \subseteq C^{0,\alpha} (\partial\Omega)$). By Theorem \ref{sdp} (iv), there exists a unique pair $(\eta^i,\rho^i) \in C^{0,\alpha}(\partial\Omega)_0 \times \mathbb{R}$ such that \eqref{h^i} holds. 
Then it remains to show that there exists a unique triple $(\mu^o,\mu^i,\rho^o) \in C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega) \times \mathbb{R}$ such that \begin{equation}\label{h^o} (v^+_{\Omega^o} [\mu^o] + v^-_{\Omega}[\mu^i] + \rho^o)_{| \overline{\Omega^o} \setminus \Omega} = h^o. \end{equation} By the jump relations for the single layer potential (cf.~Theorem \ref{sdp} (iii)) and by the uniqueness of the classical solution of the Neumann-Dirichlet mixed boundary value problem, equation \eqref{h^o} is equivalent to the following system of integral equations: \begin{equation}\label{sysinteq h^o} \begin{aligned} & V_{\partial\Omega^o} [\mu^o] + v^-_{\Omega}[\mu^i]_{|\partial\Omega^o} + \rho^o = h^o_{|\partial\Omega^o}, \\ & \left( \frac{1}{2} I + W^\ast_{\partial\Omega} \right) [\mu^i] + \nu_{\Omega} \cdot \nabla v^+_{\Omega^o}[\mu^o]_{|\partial\Omega} = \nu_{\Omega} \cdot \nabla h^o_{|\partial\Omega} \end{aligned} \end{equation} (notice that, by $h^o \in C^{1,\alpha}_{\mathrm{h}}(\overline{\Omega^o} \setminus \Omega)$, we get $h^o_{|\partial\Omega} \in C^{1,\alpha} (\partial\Omega) \subseteq C^{0,\alpha} (\partial\Omega)$ and $\nu_{\Omega} \cdot \nabla h^o_{|\partial\Omega} \in C^{0,\alpha} (\partial\Omega)$). Then we observe that by Theorem \ref{sdp} (iv), the map from $C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega) \times \mathbb{R}$ to $C^{0,\alpha}(\partial\Omega^o) \times C^{0,\alpha}(\partial\Omega)$ that takes a triple $(\mu^o,\mu^i,\rho^o)$ to the pair of functions $\left(V_{\partial\Omega^o} [\mu^o] + \rho^o, \frac{1}{2} \mu^i \right)$ is an isomorphism. Moreover, by the properties of integral operators with real analytic kernel and no singularities (cf. Lanza de Cristoforis and Musolino \cite[Prop.
4.1]{LaMu13}) and by Theorem \ref{Schaudercompact}, the map from $C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega) \times \mathbb{R}$ to $C^{0,\alpha}(\partial\Omega^o) \times C^{0,\alpha}(\partial\Omega)$ that takes a triple $(\mu^o,\mu^i,\rho^o)$ to the pair of functions $(v^-_{\Omega}[\mu^i]_{|\partial\Omega^o}, W^\ast_{\partial\Omega}[\mu^i] + \nu_{\Omega} \cdot \nabla v^+_{\Omega^o}[\mu^o]_{|\partial\Omega})$ is compact. Hence, the map from $C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega) \times \mathbb{R}$ to $C^{0,\alpha}(\partial\Omega^o) \times C^{0,\alpha}(\partial\Omega)$ that takes a triple $(\mu^o,\mu^i,\rho^o)$ to the pair of functions \begin{equation*} \left(V_{\partial\Omega^o} [\mu^o] + v^-_{\Omega}[\mu^i]_{|\partial\Omega^o} + \rho^o, \left( \frac{1}{2} I + W^\ast_{\partial\Omega} \right) [\mu^i] + \nu_{\Omega} \cdot \nabla v^+_{\Omega^o}[\mu^o]_{|\partial\Omega} \right) \end{equation*} is a compact perturbation of an isomorphism and therefore it is a Fredholm operator of index 0. Thus, to complete the proof, it suffices to show that \eqref{sysinteq h^o} with $(h^o_{|\partial\Omega^o}, \nu_{\Omega} \cdot \nabla h^o_{|\partial\Omega}) = (0,0)$ implies $(\mu^o,\mu^i,\rho^o)=(0,0,0)$. If \begin{equation}\label{sysinteq h^o=0} \left(V_{\partial\Omega^o} [\mu^o] + v^-_{\Omega}[\mu^i]_{|\partial\Omega^o} + \rho^o, \left( \frac{1}{2} I + W^\ast_{\partial\Omega} \right) [\mu^i] + \nu_{\Omega} \cdot \nabla v^+_{\Omega^o}[\mu^o]_{|\partial\Omega} \right) =(0,0), \end{equation} then by the jump relations for the single layer potential (cf.~Theorem \ref{sdp} (iii)) and by the uniqueness of the classical solution of the Neumann-Dirichlet mixed boundary value problem, one deduces that $(v^+_{\Omega^o} [\mu^o] + v^-_{\Omega}[\mu^i] + \rho^o)_{| \overline{\Omega^o} \setminus \Omega}=0$.
Moreover, by the continuity of $v_{\Omega}[\mu^i]$ in $\mathbb{R}^n$, we have that $(v^+_{\Omega^o} [\mu^o] + v^-_{\Omega}[\mu^i] + \rho^o)_{| \partial\Omega} = (v^+_{\Omega^o} [\mu^o] + v^+_{\Omega}[\mu^i] + \rho^o)_{|\partial\Omega} = 0$. Then by the uniqueness of the classical solution of the Dirichlet boundary value problem in $\Omega$ we deduce that \begin{equation}\label{mu^o mu^i rho^o} (v^+_{\Omega^o} [\mu^o] + v^+_{\Omega}[\mu^i] + \rho^o)_{| \overline{\Omega} }=0. \end{equation} By the jump relations for the single layer potential (cf.~Theorem \ref{sdp} (iii)), adding and subtracting the term $\nu_{\Omega} \cdot \nabla ( v^+_{\Omega^o}[\mu^o] +\rho^o)_{|\partial\Omega}$ and taking into account \eqref{mu^o mu^i rho^o}, we get \begin{equation*} \begin{split} \mu^i &= \nu_{\Omega} \cdot \nabla v^-_{\Omega}[\mu^i]_{|\partial\Omega} - \nu_{\Omega} \cdot \nabla v^+_{\Omega}[\mu^i]_{|\partial\Omega} \\ &= \nu_{\Omega} \cdot \nabla ( v^+_{\Omega^o}[\mu^o] + v^-_{\Omega}[\mu^i] +\rho^o)_{|\partial\Omega} - \nu_{\Omega} \cdot \nabla ( v^+_{\Omega^o}[\mu^o] + v^+_{\Omega}[\mu^i] +\rho^o)_{|\partial\Omega} = 0. \end{split} \end{equation*} Thus, by \eqref{sysinteq h^o=0}, we obtain $V_{\partial\Omega^o} [\mu^o] + \rho^o = 0$ on $\partial \Omega^o$, which implies $(\mu^o,\rho^o)=(0,0)$ (cf.~Theorem \ref{sdp} (iv)). Hence $(\mu^o,\mu^i,\rho^o)=(0,0,0)$ and the proof is complete. \end{proof} To represent the boundary conditions of a linearised version of problem \eqref{princeq}, we find it convenient to introduce a matrix function \[ A(\cdot) = \begin{pmatrix} A_{11}(\cdot) & A_{12}(\cdot) \\ A_{21}(\cdot) & A_{22}(\cdot) \end{pmatrix} : \partial \Omega^i \to M_2(\mathbb{R}). \] Here, the symbol $M_2(\mathbb{R})$ denotes the set of $2\times 2$ matrices with real entries. We set \[ \tilde{A}(\cdot)\equiv \begin{pmatrix} A_{11} (\cdot)& A_{12}(\cdot) \\ -A_{21} (\cdot)& -A_{22} (\cdot) \end{pmatrix}\, .
\] We will assume the following conditions on the matrix $A$: \begin{equation}\label{Acondition} \begin{split} &\bullet \, A_{j,k} \in C^{0,\alpha} (\partial\Omega^i) \mbox{ for all } j,k \in \{1,2\}; \\ &\bullet \,\mbox{For every } (\xi_1,\xi_2) \in \mathbb{R}^2, (\xi_1,\xi_2) \tilde{A} (\xi_1,\xi_2)^T \geq 0 \mbox{ on } \partial\Omega^i; \\ &\bullet \, \mbox{If } (c_1,c_2) \in \mathbb{R}^2 \mbox{ and } A(x)(c_1,c_2)^T = 0 \mbox{ for all } x \in \partial \Omega^i, \mbox{ then } (c_1,c_2)=(0,0). \end{split} \end{equation} We remark that in the literature the third condition in \eqref{Acondition} is often replaced by a condition on the invertibility of the matrix $A$, namely \begin{equation}\label{A*condition} \bullet \mbox{There exists a point } x \in \partial\Omega^i \mbox{ such that } A(x) \mbox{ is invertible}. \end{equation} We point out that, for instance, the matrix $A(x) = \begin{pmatrix} x_1^2 & x_1\\ -x_1 & -1 \end{pmatrix}$ with $x = (x_1,\dots,x_n) \in \partial\Omega^i$ satisfies the third condition in \eqref{Acondition} but not condition \eqref{A*condition}. Indeed, $\det A(x) = -x_1^2 + x_1^2 = 0$ for every $x \in \partial\Omega^i$, while $A(x)(c_1,c_2)^T = 0$ for all $x \in \partial\Omega^i$ implies $x_1 c_1 + c_2 = 0$ for all $x \in \partial\Omega^i$, and thus $(c_1,c_2)=(0,0)$, since the first coordinate $x_1$ cannot be constant on $\partial\Omega^i$. Then by a standard energy argument we deduce the following result on the uniqueness of the solution of a transmission problem. \begin{lemma}\label{Alemma} Let $A$ be as in \eqref{Acondition}. Then the unique solution in $C^{1,\alpha}(\overline{\Omega^o} \setminus \Omega^i) \times C^{1,\alpha}(\overline{\Omega^i})$ of problem \begin{equation}\label{Aproblem} \begin{cases} \Delta u^o = 0 & \quad \mbox{in } \Omega^o \setminus \overline{\Omega^i}, \\ \Delta u^i = 0 & \quad \mbox{in } \Omega^i, \\ \nu_{\Omega^o}(x) \cdot \nabla u^o(x)= 0 & \quad\forall x \in \partial \Omega^o, \\ \nu_{\Omega^i}(x) \cdot \nabla u^o (x) - A_{11}(x) u^o(x) - A_{12}(x) u^i(x) = 0 & \quad\forall x \in \partial \Omega^i, \\ \nu_{\Omega^i}(x) \cdot \nabla u^i (x) - A_{21}(x) u^o(x) - A_{22}(x) u^i(x) = 0 & \quad \forall x \in \partial \Omega^i, \end{cases} \end{equation} is $(u^o,u^i)=(0,0)$.
\end{lemma} In the following proposition, we investigate the properties of an auxiliary boundary operator, $J_A$, which we will exploit in the integral formulation of our problem in order to recast it as a fixed-point equation. More precisely, we prove that $J_A$ is an isomorphism in $L^2$, in $C^0$, and in $C^{0,\alpha}$. All three frameworks will be important: the first setting is suitable to use Fredholm theory and to directly prove the isomorphic property of $J_A$, the second setting will be used in order to apply the Leray-Schauder Theorem to the aforementioned fixed-point equation (see Propositions \ref{Tcontcomp} and \ref{prop mu_0} below), and the third setting will be central to deduce that the solution of problem \eqref{princeq} that we build is actually a classical solution, in particular of class $C^{1,\alpha}$ (cf. Propositions \ref{propintsys} and \ref{prop u^o_0,u^i_0}). \begin{prop}\label{J_A} Let $A$ be as in \eqref{Acondition}. Let $J_A$ be the map from $L^2(\partial\Omega^o)_0 \times L^2(\partial\Omega^i) \times L^2(\partial\Omega^i)_0 \times \mathbb{R}^2$ to $L^2(\partial\Omega^o) \times (L^2(\partial\Omega^i))^2$ that takes a quintuple $(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i)$ to the triple $J_A[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i]$ defined by \begin{equation}\label{J_A eq} \begin{aligned} J_{A,1}[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i] &\equiv \left( -\frac{1}{2} I + W^\ast_{\partial\Omega^o} \right) [\mu^o] + \nu_{\Omega^o} \cdot \nabla v^-_{\Omega^i}[\mu^i]_{|\partial\Omega^o} \qquad \mbox{on } \partial\Omega^o, \\ J_{A,2}[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i] &\equiv \left( \frac{1}{2} I + W^\ast_{\partial\Omega^i} \right) [\mu^i] + \nu_{\Omega^i} \cdot \nabla v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i} \\ \quad - (A_{11},A_{12}) &\cdot (v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i} + V_{\partial\Omega^i}[\mu^i] + \rho^o ,V_{\partial\Omega^i}[\eta^i] + \rho^i ) \qquad\mbox{on } \partial\Omega^i, \\ J_{A,3}[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i] &\equiv \left(
-\frac{1}{2} I + W^\ast_{\partial\Omega^i} \right) [\eta^i] \\ \quad - (A_{21},A_{22}) &\cdot (v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i} + V_{\partial\Omega^i}[\mu^i] + \rho^o ,V_{\partial\Omega^i}[\eta^i] + \rho^i ) \qquad \mbox{on } \partial\Omega^i. \end{aligned} \end{equation} Then the following statements hold. \begin{enumerate} \item[(i)] $J_A$ is a linear isomorphism from $L^2(\partial\Omega^o)_0 \times L^2(\partial\Omega^i) \times L^2(\partial\Omega^i)_0 \times \mathbb{R}^2$ to $L^2(\partial\Omega^o) \times (L^2(\partial\Omega^i))^2$. \item[(ii)] $J_A$ is a linear isomorphism from $C^0(\partial\Omega^o)_0 \times C^0(\partial\Omega^i) \times C^0(\partial\Omega^i)_0 \times \mathbb{R}^2$ to $C^0(\partial\Omega^o) \times (C^0(\partial\Omega^i))^2$. \item[(iii)] $J_A$ is a linear isomorphism from $C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega^i) \times C^{0,\alpha}(\partial\Omega^i)_0 \times \mathbb{R}^2$ to $C^{0,\alpha}(\partial\Omega^o) \times (C^{0,\alpha}(\partial\Omega^i))^2$. \end{enumerate} \end{prop} \begin{proof} \ We first prove (i). 
We write $J_A$ in the form $J_A = \tilde{J}^{+}_A \circ \tilde{J}_A \circ \tilde{J}^{-}_A$, where $\tilde{J}^{-}_A$ is the inclusion of $L^2(\partial\Omega^o)_0 \times L^2(\partial\Omega^i) \times L^2(\partial\Omega^i)_0 \times \mathbb{R}^2$ into $L^2(\partial\Omega^o) \times (L^2(\partial\Omega^i))^2 \times \mathbb{R}^2$, $\tilde{J}_A$ is the map from $L^2(\partial\Omega^o) \times (L^2(\partial\Omega^i))^2 \times \mathbb{R}^2$ into itself that takes $(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i)$ to the quintuple $\tilde{J}_A[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i]$ defined by \begin{align*} \tilde{J}_{A,1}[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i] &\equiv \left( -\frac{1}{2} I + W^\ast_{\partial\Omega^o} \right) [\mu^o] + \nu_{\Omega^o} \cdot \nabla v^-_{\Omega^i}[\mu^i]_{|\partial\Omega^o} \qquad \mbox{on } \partial\Omega^o, \\ \tilde{J}_{A,2}[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i] &\equiv \left( \frac{1}{2} I + W^\ast_{\partial\Omega^i} \right) [\mu^i] + \nu_{\Omega^i} \cdot \nabla v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i} \\ - (A_{11},A_{12}) &\cdot (v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i} + V_{\partial\Omega^i}[\mu^i] ,V_{\partial\Omega^i}[\eta^i] ) \qquad \mbox{on } \partial\Omega^i, \\ \tilde{J}_{A,3}[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i] &\equiv \left( -\frac{1}{2} I + W^\ast_{\partial\Omega^i} \right) [\eta^i]\\ - (A_{21},A_{22})& \cdot (v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i} + V_{\partial\Omega^i}[\mu^i],V_{\partial\Omega^i}[\eta^i]) \qquad \mbox{on } \partial\Omega^i, \\ \tilde{J}_{A,4}[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i] &\equiv \rho^o, \\ \tilde{J}_{A,5}[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i] &\equiv \rho^i, \end{align*} and $\tilde{J}^{+}_A$ is the map from $L^2(\partial\Omega^o) \times (L^2(\partial\Omega^i))^2 \times \mathbb{R}^2$ into $L^2(\partial\Omega^o) \times (L^2(\partial\Omega^i))^2$ that takes a quintuple $(f,g_1,g_2,c_1,c_2)$ to the triple $\tilde{J}^{+}_A[f,g_1,g_2,c_1,c_2]$ defined by \begin{equation*} \tilde{J}^{+}_A[f,g_1,g_2,c_1,c_2] \equiv (f, g_1 - (A_{11},A_{12}) 
\cdot (c_1,c_2),g_2 - (A_{21},A_{22}) \cdot (c_1,c_2) ) . \end{equation*} Then we observe that $\tilde{J}^{+}_A$ is a Fredholm operator of index $2$, because $\mathrm{Coker}\, \tilde{J}^{+}_A = \{0\}$ and $\mathrm{Ker}\, \tilde{J}^{+}_A = \mathrm{Span}\, \{(0,A_{11},A_{21},1,0), (0,A_{12},A_{22},0,1)\}$, and that $\tilde{J}^{-}_A$ is Fredholm of index $-2$, because $\mathrm{Ker}\, \tilde{J}^{-}_A = \{0\}$ and $\mathrm{Coker}\, \tilde{J}^{-}_A = \mathrm{Span}\, \{(1,0,0,0,0), (0,0,1,0,0)\}$. Next, we observe that the map from $L^2(\partial\Omega^o) \times (L^2(\partial\Omega^i))^2 \times \mathbb{R}^2$ into itself that takes a quintuple $(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i)$ to the quintuple $(-\frac{1}{2}\mu^o,\frac{1}{2}\mu^i,-\frac{1}{2}\eta^i,\rho^o,\rho^i)$ is a linear isomorphism. Moreover, by the mapping properties of the integral operators with real analytic kernel and no singularity (cf. Lanza de Cristoforis and Musolino \cite[Prop. 4.1]{LaMu13}), by the compactness of the operators $W^\ast_{\partial\Omega^o}$ and $W^\ast_{\partial\Omega^i}$ from $L^2(\partial\Omega^o)$ to itself and from $L^2(\partial\Omega^i)$ to itself, respectively (see comments below Definition \ref{defV-W}), by the compactness of the operator $V_{\partial\Omega^i}$ from $L^2(\partial\Omega^i)$ into itself (see Costabel \cite[Thm. 
1]{Co88}), and by the bilinearity and continuity of the product from $C^{0,\alpha}(\partial\Omega^i) \times L^2(\partial\Omega^i)$ to $L^2(\partial\Omega^i)$, we deduce that the map from $L^2(\partial\Omega^o) \times (L^2(\partial\Omega^i))^2 \times \mathbb{R}^2$ into itself that takes a quintuple $(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i)$ to the quintuple $\tilde{J}^C_A[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i]$ defined by \begin{align*} \tilde{J}^C_{A,1}[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i] &= W^\ast_{\partial\Omega^o} [\mu^o] + \nu_{\Omega^o} \cdot \nabla v^-_{\Omega^i}[\mu^i]_{|\partial\Omega^o} \qquad \mbox{on } \partial\Omega^o, \\ \tilde{J}^C_{A,2}[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i] & = W^\ast_{\partial\Omega^i} [\mu^i] + \nu_{\Omega^i} \cdot \nabla v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i} \\ - (A_{11},A_{12}) & \cdot (v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i} + V_{\partial\Omega^i}[\mu^i] ,V_{\partial\Omega^i}[\eta^i] ) \qquad \mbox{on } \partial\Omega^i, \\ \tilde{J}^C_{A,3}[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i] & = W^\ast_{\partial\Omega^i} [\eta^i] \\- (A_{21},A_{22})& \cdot (v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i} + V_{\partial\Omega^i}[\mu^i],V_{\partial\Omega^i}[\eta^i]) \qquad \mbox{on } \partial\Omega^i, \\ \tilde{J}^C_{A,4}[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i] & = 0, \\ \tilde{J}^C_{A,5}[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i] & = 0, \end{align*} is compact. Hence, we conclude that $\tilde{J}_A$ is a compact perturbation of an isomorphism and therefore it is Fredholm of index 0. Since the index of a composition of Fredholm operators is the sum of the indexes of the components, we deduce that $J_A$ is a Fredholm operator of index $0$. Therefore, in order to complete the proof of point $(i)$, it suffices to prove that $J_A$ is injective. 
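Indeed, since \[ \mathrm{ind}\, J_A = \mathrm{ind}\, \tilde{J}^{+}_A + \mathrm{ind}\, \tilde{J}_A + \mathrm{ind}\, \tilde{J}^{-}_A = 2 + 0 + (-2) = 0, \] we have $\dim \mathrm{Ker}\, J_A = \dim \mathrm{Coker}\, J_A$, and accordingly the injectivity of $J_A$ implies its surjectivity.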
Thus, we now assume that $(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i) \in L^2(\partial\Omega^o)_0 \times L^2(\partial\Omega^i) \times L^2(\partial\Omega^i)_0 \times \mathbb{R}^2$ and that \begin{equation}\label{J_A=0} J_A[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i] = (0,0,0). \end{equation} We first verify that $(\mu^o,\mu^i,\eta^i)$ is actually in $C^0(\partial\Omega^o)_0 \times C^0(\partial\Omega^i) \times C^0(\partial\Omega^i)_0$. In fact, by the definition of $J_{A,1}$ in \eqref{J_A eq}, by the fact that $\nu_{\Omega^o} \cdot \nabla v^-_{\Omega^i}[\mu^i]_{|\partial\Omega^o} \in C^{0}(\partial\Omega^o)$ (cf. Lanza de Cristoforis and Musolino \cite[Prop. 4.1]{LaMu13}), and by Theorem \ref{regularitytheorem}, we deduce that $\mu^o \in C^{0}(\partial\Omega^o)$. Let $\mathcal{K}\equiv(\mathcal{K}_1, \mathcal{K}_2)$ be the map from $L^2(\partial\Omega^i)\times L^2(\partial\Omega^i)_0$ to itself that takes a pair $(\mu^i,\eta^i)\in L^2(\partial\Omega^i)\times L^2(\partial\Omega^i)_0$ to \begin{align*} \mathcal{K}_1[\mu^i,\eta^i] &\equiv 2W^\ast_{\partial\Omega^i}[\mu^i] - 2A_{11} V_{\partial\Omega^i}[\mu^i] - 2A_{12}V_{\partial\Omega^i}[\eta^i] &&\mbox{on } \partial\Omega^i, \\ \mathcal{K}_2[\mu^i,\eta^i] &\equiv -2W^\ast_{\partial\Omega^i} [\eta^i] +2 A_{21} V_{\partial\Omega^i}[\mu^i] +2 A_{22} V_{\partial\Omega^i}[\eta^i] &&\mbox{on } \partial\Omega^i. \end{align*} Notice that each component of $\mathcal{K}$ is a linear combination of integral operators with a continuous kernel of order $n-2$ (see Definition \ref{kerneldef} and comments below). By the fact that $\mu^o \in C^0(\partial\Omega^o)$ and by the first condition in \eqref{Acondition}, we know that $2(A_{11},A_{12}) \cdot (v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i} + \rho^o , \rho^i )$ and $-2(A_{21},A_{22}) \cdot (v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i} + \rho^o ,\rho^i )$ belong to $C^{0,\alpha}(\partial\Omega^i) \subseteq C^{0}(\partial\Omega^i)$.
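More precisely, multiplying the second equation in \eqref{J_A=0} by $2$ and the third one by $-2$, we can rewrite those two equations as \begin{equation*} \begin{split} (I + \mathcal{K})[\mu^i,\eta^i] = \Bigl( & -2\, \nu_{\Omega^i} \cdot \nabla v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i} + 2(A_{11},A_{12}) \cdot (v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i} + \rho^o , \rho^i ), \\ & -2(A_{21},A_{22}) \cdot (v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i} + \rho^o , \rho^i ) \Bigr), \end{split} \end{equation*} where the right-hand side is a pair of continuous functions, since, in addition, $\nu_{\Omega^i} \cdot \nabla v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i} \in C^{0,\alpha}(\partial\Omega^i)$ (cf. Lanza de Cristoforis and Musolino \cite[Prop. 4.1]{LaMu13}).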
Then \eqref{J_A=0} and the definition of the operator $\mathcal{K}$ imply that $(I + \mathcal{K})[\mu^i,\eta^i] \in (C^{0}(\partial\Omega^i))^2$. Hence, by Theorem \ref{regularityvecttheorem} we conclude that $(\mu^i,\eta^i) \in C^0(\partial\Omega^i) \times C^0(\partial\Omega^i)_0$. Then by the mapping properties of integral operators with real analytic kernel and no singularity (cf. Lanza de Cristoforis and Musolino \cite[Prop. 4.1]{LaMu13}) and by classical results in potential theory (cf.~Miranda \cite[Chap. II, \S 14]{Mi70}), we know that $ \nu_{\Omega^o} \cdot \nabla v^-_{\Omega^i}[\mu^i]_{|\partial\Omega^o} \in C^{0,\alpha}(\partial\Omega^o)$ and $v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i}, \, \nu_{\Omega^i} \cdot \nabla v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i}, \, V_{\partial\Omega^i}[\eta^i], \, V_{\partial\Omega^i}[\mu^i]\in C^{0,\alpha}(\partial\Omega^i)$. Hence, by \eqref{J_A=0} and by the fact that $A \in M_2(C^{0,\alpha} (\partial\Omega^i))$ (cf.~first condition in \eqref{Acondition}), we obtain that $\left( -\frac{1}{2} I + W^\ast_{\partial\Omega^o} \right) [\mu^o] \in C^{0,\alpha}(\partial\Omega^o)$ and $ \left( \frac{1}{2} I + W^\ast_{\partial\Omega^i} \right) [\mu^i], \left( -\frac{1}{2} I + W^\ast_{\partial\Omega^i} \right) [\eta^i] \in C^{0,\alpha}(\partial\Omega^i)$. Then Theorem \ref{regularitytheorem} implies $(\mu^o,\mu^i,\eta^i) \in C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega^i) \times C^{0,\alpha}(\partial\Omega^i)_0$. By the jump relations (cf.~Theorem \ref{sdp} (iii)), by Lemma \ref{rapprharm}, and by \eqref{J_A=0}, we deduce that the pair $(U^o_{\Omega^i}[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i],U^i_{\Omega^i}[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i])$ defined by \eqref{U^o,U^i} is a solution of the boundary value problem \eqref{Aproblem}.
Then by Lemma \ref{Alemma}, we have that \[ (U^o_{\Omega^i}[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i],U^i_{\Omega^i}[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i])=(0,0),\] which implies $(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i)=(0,0,0,0,0)$, by the uniqueness of the representation provided by Lemma \ref{rapprharm}. We now prove statement (ii). First we note that the integral operators that appear in the definition \eqref{J_A eq} of $J_A$ have either a weakly singular or a real analytic kernel. It follows that $J_A$ is continuous from $C^0(\partial\Omega^o)_0 \times C^0(\partial\Omega^i) \times C^0(\partial\Omega^i)_0 \times \mathbb{R}^2$ to $C^0(\partial\Omega^o) \times (C^0(\partial\Omega^i))^2$ (cf.~Lanza de Cristoforis and Musolino \cite[Prop. 4.1]{LaMu13} for the properties of integral operators with real analytic kernels). Then we observe that, by Theorems \ref{regularityvecttheorem} and \ref{regularitytheorem}, if we have $ J_A[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i] \in C^0(\partial\Omega^o) \times (C^0(\partial\Omega^i))^2$ for some $(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i) \in L^2(\partial\Omega^o)_0 \times L^2(\partial\Omega^i) \times L^2(\partial\Omega^i)_0 \times \mathbb{R}^2$, then $(\mu^o,\mu^i,\eta^i) \in C^0(\partial\Omega^o)_0 \times C^0(\partial\Omega^i) \times C^0(\partial\Omega^i)_0$ (see also the argument used after \eqref{J_A=0} to prove that $(\mu^o,\mu^i,\eta^i)$ belongs to $C^0(\partial\Omega^o)_0 \times C^0(\partial\Omega^i) \times C^0(\partial\Omega^i)_0$). Then, by statement (i) we deduce that $J_A$ is a bijective continuous linear map from $C^0(\partial\Omega^o)_0 \times C^0(\partial\Omega^i) \times C^0(\partial\Omega^i)_0 \times \mathbb{R}^2$ to $C^0(\partial\Omega^o) \times (C^0(\partial\Omega^i))^2$. By the Open Mapping Theorem it follows that $J_A$ is a linear homeomorphism from $C^0(\partial\Omega^o)_0 \times C^0(\partial\Omega^i) \times C^0(\partial\Omega^i)_0 \times \mathbb{R}^2$ to $C^0(\partial\Omega^o) \times (C^0(\partial\Omega^i))^2$. 
The proof of statement (iii) is similar to that of statement (ii) and we leave it to the zealous reader (see also the argument used after \eqref{J_A=0} to prove that $(\mu^o,\mu^i,\eta^i)$ belongs to $C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega^i) \times C^{0,\alpha}(\partial\Omega^i)_0$). \end{proof} We are now ready to convert \eqref{princeq} into a system of integral equations. \begin{prop}\label{propintsys} Let $A$ be as in \eqref{Acondition}. Let $(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i) \in C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega^i) \times C^{0,\alpha}(\partial\Omega^i)_0 \times \mathbb{R}^2$. Let $(U^o_{\Omega^i}[\cdot,\cdot,\cdot,\cdot,\cdot],U^i_{\Omega^i}[\cdot,\cdot,\cdot,\cdot,\cdot])$ be defined by \eqref{U^o,U^i}. Let $J_A$ be as in Proposition \ref{J_A}. Then $(U^o_{\Omega^i}[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i],U^i_{\Omega^i}[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i])$ is a solution of \eqref{princeq} if and only if \begin{equation}\label{princintsys} \begin{aligned} \begin{pmatrix} \mu^o \\ \mu^i \\ \eta^i \\ \rho^o \\ \rho^i \end{pmatrix} = J_A^{(-1)}& \left[ \begin{pmatrix} f^o \\ \mathcal{N}_{F_1}(v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i} + V_{\partial\Omega^i}[\mu^i] +\rho^o ,V_{\partial\Omega^i}[\eta^i] +\rho^i) \\ \mathcal{N}_{F_2}(v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i} + V_{\partial\Omega^i}[\mu^i] +\rho^o ,V_{\partial\Omega^i}[\eta^i] +\rho^i) \end{pmatrix} \right. \\ & \left. - \begin{pmatrix} 0 & 0 & 0 \\ 0 & A_{11} & A_{12} \\ 0 & A_{21} & A_{22} \end{pmatrix} \begin{pmatrix} 0 \\ v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i} + V_{\partial\Omega^i}[\mu^i] +\rho^o \\ V_{\partial\Omega^i}[\eta^i] +\rho^i \end{pmatrix} \right]. 
\end{aligned} \end{equation} \end{prop} \begin{proof} \ By Lemma \ref{rapprharm} and by the jump relations of Theorem \ref{sdp}, we know that if $(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i) \in C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega^i) \times C^{0,\alpha}(\partial\Omega^i)_0 \times \mathbb{R}^2$, then the pair \[(U^o_{\Omega^i}[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i],U^i_{\Omega^i}[\mu^o,\mu^i,\eta^i,\rho^o,\rho^i])\] defined by \eqref{U^o,U^i} is a solution of problem \eqref{princeq} if and only if \begin{equation}\label{intsys} \begin{split} & \begin{pmatrix} \left( -\frac{1}{2} I + W^\ast_{\partial\Omega^o} \right) [\mu^o] + \nu_{\Omega^o} \cdot \nabla v^-_{\Omega^i}[\mu^i]_{|\partial\Omega^o} \\ \left( \frac{1}{2} I + W^\ast_{\partial\Omega^i} \right) [\mu^i] + \nu_{\Omega^i} \cdot \nabla v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i} \\ \left( -\frac{1}{2} I + W^\ast_{\partial\Omega^i} \right) [\eta^i] \end{pmatrix} \\ &\qquad = \begin{pmatrix} f^o \\ \mathcal{N}_{F_1}(v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i} + V_{\partial\Omega^i}[\mu^i] +\rho^o ,V_{\partial\Omega^i}[\eta^i] +\rho^i) \\ \mathcal{N}_{F_2}(v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i} + V_{\partial\Omega^i}[\mu^i] +\rho^o ,V_{\partial\Omega^i}[\eta^i] +\rho^i) \end{pmatrix}.
\end{split} \end{equation} Then, by subtracting from both sides of \eqref{intsys} the term \begin{equation*} \begin{pmatrix} 0 & 0 & 0 \\ 0 & A_{11} & A_{12} \\ 0 & A_{21} & A_{22} \end{pmatrix} \begin{pmatrix} 0 \\ v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i} + V_{\partial\Omega^i}[\mu^i] +\rho^o \\ V_{\partial\Omega^i}[\eta^i] +\rho^i \end{pmatrix} \in C^{0,\alpha}(\partial\Omega^o) \times (C^{0,\alpha}(\partial\Omega^i))^2 \end{equation*} and by the invertibility of $J_A$ from $C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega^i) \times C^{0,\alpha}(\partial\Omega^i)_0 \times \mathbb{R}^2$ to $C^{0,\alpha}(\partial\Omega^o) \times (C^{0,\alpha}(\partial\Omega^i))^2$ provided by Proposition \ref{J_A} (iii), the validity of the statement follows. \end{proof} We now introduce an auxiliary map. If $A$ is as in \eqref{Acondition} and $J_A$ is as in Proposition \ref{J_A}, we denote by $T_A$ the map from $C^0(\partial\Omega^o)_0 \times C^0(\partial\Omega^i) \times C^0(\partial\Omega^i)_0 \times \mathbb{R}^2$ to $C^0(\partial\Omega^o) \times (C^0(\partial\Omega^i))^2$ defined by \begin{equation}\label{T} \begin{aligned} T_A&(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i) \\&\equiv J_A^{(-1)} \left[ \begin{pmatrix} f^o \\ \mathcal{N}_{F_1}(v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i} + V_{\partial\Omega^i}[\mu^i] +\rho^o ,V_{\partial\Omega^i}[\eta^i] +\rho^i) \\ \mathcal{N}_{F_2}(v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i} + V_{\partial\Omega^i}[\mu^i] +\rho^o ,V_{\partial\Omega^i}[\eta^i] +\rho^i) \end{pmatrix} \right. \\ & \left. - \begin{pmatrix} 0 & 0 & 0 \\ 0 & A_{11} & A_{12} \\ 0 & A_{21} & A_{22} \end{pmatrix} \begin{pmatrix} 0 \\ v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i} + V_{\partial\Omega^i}[\mu^i] +\rho^o \\ V_{\partial\Omega^i}[\eta^i] +\rho^i \end{pmatrix} \right]\, . \end{aligned} \end{equation} We study the continuity and compactness of $T_A$ in the following proposition. \begin{prop}\label{Tcontcomp} Let $A$ be as in \eqref{Acondition}.
Let $T_A$ be as in \eqref{T}. Then $T_A$ is a continuous (nonlinear) operator from $C^0(\partial\Omega^o)_0 \times C^0(\partial\Omega^i) \times C^0(\partial\Omega^i)_0 \times \mathbb{R}^2$ to $C^0(\partial\Omega^o) \times (C^0(\partial\Omega^i))^2$ and is compact. \end{prop} \begin{proof} \ By the properties of integral operators with real analytic kernel and no singularities (cf. Lanza de Cristoforis and Musolino \cite[Prop. 4.1]{LaMu13}) and by the compactness of the embedding of $C^{0,\alpha}(\partial\Omega^i)$ into $C^0(\partial\Omega^i)$, $v^+_{\Omega^o}[\cdot]_{|\partial\Omega^i}$ is compact from $C^0(\partial\Omega^o)_0$ into $C^0(\partial\Omega^i)$. By mapping properties of the single layer potential (cf.~Miranda \cite[Chap.~II, \S14, III]{Mi70}) and by the compactness of the embedding of $C^{0,\alpha}(\partial\Omega^i)$ into $C^0(\partial\Omega^i)$, $V_{\partial\Omega^i}$ is compact from $C^0(\partial\Omega^i)$ into itself. Hence, by the bilinearity and continuity of the product of continuous functions, the map from $C^0(\partial\Omega^o)_0 \times C^0(\partial\Omega^i) \times C^0(\partial\Omega^i)_0 \times \mathbb{R}^2$ to $C^0(\partial\Omega^o) \times (C^0(\partial\Omega^i))^2$ that takes the quintuple $(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i)$ to the triple given by \begin{equation*} \begin{pmatrix} 0 & 0 & 0 \\ 0 & A_{11} & A_{12} \\ 0 & A_{21} & A_{22} \end{pmatrix} \begin{pmatrix} 0 \\ v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i} + V_{\partial\Omega^i}[\mu^i] +\rho^o \\ V_{\partial\Omega^i}[\eta^i] +\rho^i \end{pmatrix} \end{equation*} is continuous and maps bounded sets into sets with compact closure, {i.e.}, is compact. Moreover, by assumption \eqref{introfunconditions}, one readily verifies that the operators $\mathcal{N}_{F_1}$ and $\mathcal{N}_{F_2}$ are continuous from $(C^0(\partial\Omega^i))^2$ into $C^0(\partial\Omega^i)$. 
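We recall that $\mathcal{N}_{F_1}$ and $\mathcal{N}_{F_2}$ denote the superposition (or Nemytskii) operators generated by $F_1$ and $F_2$, that is \[ \mathcal{N}_{F_j}(h_1,h_2)(x) \equiv F_j(x,h_1(x),h_2(x)) \qquad \forall x \in \partial\Omega^i, \] for all $(h_1,h_2) \in (C^0(\partial\Omega^i))^2$ and $j \in \{1,2\}$.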
Hence the map from $C^0(\partial\Omega^o)_0 \times C^0(\partial\Omega^i) \times C^0(\partial\Omega^i)_0 \times \mathbb{R}^2$ to $C^0(\partial\Omega^o) \times (C^0(\partial\Omega^i))^2$ that takes the quintuple $(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i)$ to the triple \begin{equation*} \begin{pmatrix} f^o \\ \mathcal{N}_{F_1}(v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i} + V_{\partial\Omega^i}[\mu^i] +\rho^o ,V_{\partial\Omega^i}[\eta^i] +\rho^i) \\ \mathcal{N}_{F_2}(v^+_{\Omega^o}[\mu^o]_{|\partial\Omega^i} + V_{\partial\Omega^i}[\mu^i] +\rho^o ,V_{\partial\Omega^i}[\eta^i] +\rho^i) \end{pmatrix} \end{equation*} is compact. Finally, by Proposition \ref{J_A} (ii), $J_A$ is a linear isomorphism from $C^0(\partial\Omega^o)_0 \times C^0(\partial\Omega^i) \times C^0(\partial\Omega^i)_0 \times \mathbb{R}^2$ to $C^0(\partial\Omega^o) \times (C^0(\partial\Omega^i))^2$ and, accordingly, $T_A$ is compact. \end{proof} In what follows we will assume the following growth condition on the pair $(F_1,F_2)$ with respect to the matrix function $A$ defined as in \eqref{Acondition}: \begin{equation}\label{conditionF1F2} \begin{split} \bullet\, &\mbox{There exist two constants } C_F \in ]0,+\infty[ \mbox{ and } \delta \in ]0,1[ \mbox{ such that} \\ & \qquad \left|\begin{pmatrix} F_1(x, \zeta_1,\zeta_2) \\ F_2(x, \zeta_1,\zeta_2) \end{pmatrix} - A(x) \begin{pmatrix} \zeta_1 \\ \zeta_2 \end{pmatrix} \right| \leq C_F (1 + |\zeta_1| + |\zeta_2|)^\delta \\& \mbox{for all } (x,\zeta_1,\zeta_2) \in \partial\Omega^i \times \mathbb{R}^2. \end{split} \end{equation} In Proposition \ref{prop mu_0} below we prove the existence of a solution in $C^0(\partial\Omega^o)_0 \times C^0(\partial\Omega^i) \times C^0(\partial\Omega^i)_0 \times \mathbb{R}^2$ of \eqref{princintsys}. Our argument exploits the Leray-Schauder Fixed-Point Theorem (cf.~Gilbarg and Trudinger \cite[Thm.~11.3]{GiTr83}). \begin{teo}[Leray-Schauder Theorem]\label{Thm Leray Schauder} Let $X$ be a Banach space. Let $T$ be a continuous operator from $X$ into itself.
If $T$ is compact and there exists a constant $M \in ]0,+\infty[$ such that $\|x\|_{X} \leq M$ for all $(x,\lambda) \in X \times [0,1]$ satisfying $x=\lambda T(x)$, then $T$ has at least one fixed point $x \in X$ such that $\|x\|_{X} \leq M$. \end{teo} Then we have the following. \begin{prop}\label{prop mu_0} Let $A$ be as in \eqref{Acondition}. Let assumption \eqref{conditionF1F2} hold. Let $J_A$ be as in Proposition \ref{J_A}. Then the nonlinear system \eqref{princintsys} has at least one solution \[(\mu^o_0,\mu^i_0,\eta^i_0,\rho^o_0,\rho^i_0) \in C^0(\partial\Omega^o)_0 \times C^0(\partial\Omega^i) \times C^0(\partial\Omega^i)_0 \times \mathbb{R}^2.\] \end{prop} \begin{proof} \ We plan to apply the Leray-Schauder Theorem \ref{Thm Leray Schauder} to the operator $T_A$ defined by \eqref{T} in the Banach space $C^0(\partial\Omega^o)_0 \times C^0(\partial\Omega^i) \times C^0(\partial\Omega^i)_0 \times \mathbb{R}^2$. By Proposition \ref{Tcontcomp} we already know that $T_A$ is a continuous operator from $C^0(\partial\Omega^o)_0 \times C^0(\partial\Omega^i) \times C^0(\partial\Omega^i)_0 \times \mathbb{R}^2$ to $C^0(\partial\Omega^o) \times (C^0(\partial\Omega^i))^2$ and maps bounded sets into sets with compact closure. So in order to apply the Leray-Schauder Theorem \ref{Thm Leray Schauder}, we are left to show that if $\lambda \in ]0,1[$ and if \begin{equation}\label{mu=lamdaT(mu)} (\mu^o,\mu^i,\eta^i,\rho^o,\rho^i) = \lambda T_A(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i) \end{equation} with $(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i) \in C^0(\partial\Omega^o)_0 \times C^0(\partial\Omega^i) \times C^0(\partial\Omega^i)_0 \times \mathbb{R}^2$, then there exists a constant $C \in ]0,+\infty[$ (which does not depend on $\lambda$ and $(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i)$) such that \begin{equation}\label{mu<C} \|(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i)\|_{C^0(\partial\Omega^o) \times (C^0(\partial\Omega^i))^2\times \mathbb{R}^2} \leq C.
\end{equation} By \eqref{mu=lamdaT(mu)} and by $|\lambda|<1$, we readily deduce that \begin{equation}\label{|mu|<|T(mu)|} \begin{split} &\|(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i)\|_{C^0(\partial\Omega^o) \times (C^0(\partial\Omega^i))^2 \times \mathbb{R}^2} \\ &\qquad\leq \|T_A(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i)\|_{C^0(\partial\Omega^o) \times (C^0(\partial\Omega^i))^2}\, . \end{split} \end{equation} By the growth condition \eqref{conditionF1F2}, we can show that \begin{equation}\label{inequalityNF1NF2} \begin{split} &\left\| \begin{pmatrix} \mathcal{N}_{F_1}(h^i_1,h^i_2) \\ \mathcal{N}_{F_2}(h^i_1,h^i_2) \end{pmatrix} - A \begin{pmatrix} h^i_1 \\ h^i_2 \end{pmatrix} \right\|_{(C^0(\partial\Omega^i))^2} \\ & \qquad\leq C_F (1+\|h^i_1\|_{C^0(\partial\Omega^i)}+\|h^i_2\|_{C^0(\partial\Omega^i)})^\delta \end{split} \end{equation} for all pairs of functions $(h^i_1,h^i_2) \in (C^0(\partial\Omega^i))^2$. Hence, by \eqref{|mu|<|T(mu)|} and by the definition of $T_A$ in \eqref{T}, we deduce that there exist two constants $C_1,C_2 \in ]0,+\infty[$, which depend only on the operator norm of $J_A^{(-1)}$ from $C^0(\partial\Omega^o) \times (C^0(\partial\Omega^i))^2$ to $C^0(\partial\Omega^o)_0 \times C^0(\partial\Omega^i) \times C^0(\partial\Omega^i)_0 \times \mathbb{R}^2$ (cf.~Proposition \ref{J_A} (ii)), on $\|f^o\|_{C^0(\partial\Omega^o)}$, on the constant $C_F \in ]0,+\infty[$ provided by the growth condition \eqref{conditionF1F2} (cf.
\eqref{inequalityNF1NF2}), on the norm of the bounded linear operator $v^+_{\Omega^o}[\cdot]_{|\partial\Omega^i}$ from $C^0(\partial\Omega^o)$ to $C^0(\partial\Omega^i)$, and on the norm of the bounded linear operator $V_{\partial\Omega^i}$ from $C^0(\partial\Omega^i)$ into itself, such that \[ \begin{split} &\|(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i)\|_{C^0(\partial\Omega^o) \times (C^0(\partial\Omega^i))^2 \times \mathbb{R}^2}\\ & \qquad \leq C_1 (C_2 + \|(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i)\|_{C^0(\partial\Omega^o) \times (C^0(\partial\Omega^i))^2 \times \mathbb{R}^2} )^\delta\, . \end{split} \] Then, by a straightforward calculation, we can show the existence of a constant $C>0$ such that inequality \eqref{mu<C} holds true (cf. Lanza de Cristoforis \cite[proof of Thm.~7.2]{La07}). Hence, by the Leray-Schauder Theorem \ref{Thm Leray Schauder} there exists at least one solution \[(\mu^o_0,\mu^i_0,\eta^i_0,\rho^o_0,\rho^i_0) \in C^0(\partial\Omega^o)_0 \times C^0(\partial\Omega^i) \times C^0(\partial\Omega^i)_0 \times \mathbb{R}^2\] of $(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i) = T_A(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i)$. Finally, by the definition of $T_A$ (cf. \eqref{T}), we conclude that $(\mu^o_0,\mu^i_0,\eta^i_0,\rho^o_0,\rho^i_0)$ is a solution in $C^0(\partial\Omega^o)_0 \times C^0(\partial\Omega^i) \times C^0(\partial\Omega^i)_0 \times \mathbb{R}^2$ of the nonlinear system \eqref{princintsys}. \end{proof} In what follows we will exploit a continuity condition on the superposition operators generated by $F_1$ and $F_2$, namely \begin{equation}\label{conditionNF} \begin{split} \bullet & \, \mbox{The superposition operators } \mathcal{N}_{F_1} \mbox{ and } \mathcal{N}_{F_2} \mbox{ are continuous from } \\ & (C^{0,\alpha}(\partial\Omega^i))^2 \mbox{ into } C^{0,\alpha}(\partial\Omega^i). 
\end{split} \end{equation} For conditions on $F_1$ and $F_2$ which imply the validity of assumption \eqref{conditionNF}, we refer to Appell and Zabrejko \cite[Ch.~8]{ApZa90} and to Valent \cite[Chap. II]{Va88}. Then we can prove a regularity result for the fixed point provided by Proposition \ref{prop mu_0}, and, thus, an existence result for problem \eqref{princeq}. \begin{prop}\label{prop u^o_0,u^i_0} Let $A$ be as in \eqref{Acondition}. Let assumptions \eqref{conditionF1F2} and \eqref{conditionNF} hold. Then the nonlinear system \eqref{princintsys} has at least one solution $(\mu^o_0,\mu^i_0,\eta^i_0,\rho^o_0,\rho^i_0) \in C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega^i) \times C^{0,\alpha}(\partial\Omega^i)_0 \times \mathbb{R}^2$. In particular, problem \eqref{princeq} has at least one solution $(u^o_0,u^i_0) \in C^{1,\alpha}(\overline{\Omega^o} \setminus \Omega^i) \times C^{1,\alpha}(\overline{\Omega^i})$ given by \begin{equation}\label{u^o_0,u^i_0} (u^o_0,u^i_0) \equiv (U^o_{\Omega^i}[\mu^o_0,\mu^i_0,\eta^i_0,\rho^o_0,\rho^i_0],U^i_{\Omega^i}[\mu^o_0,\mu^i_0,\eta^i_0,\rho^o_0,\rho^i_0]) \end{equation} where the pair $(U^o_{\Omega^i}[\cdot,\cdot,\cdot,\cdot,\cdot],U^i_{\Omega^i}[\cdot,\cdot,\cdot,\cdot,\cdot])$ is defined by \eqref{U^o,U^i}. \end{prop} \begin{proof} \ Let $T_A$ be as in \eqref{T}. By Proposition \ref{prop mu_0}, we deduce the existence of a quintuple $(\mu^o_0,\mu^i_0,\eta^i_0,\rho^o_0,\rho^i_0) \in C^0(\partial\Omega^o)_0 \times C^0(\partial\Omega^i) \times C^0(\partial\Omega^i)_0 \times \mathbb{R}^2$ such that $ (\mu^o_0,\mu^i_0,\eta^i_0,\rho^o_0,\rho^i_0) = T_A(\mu^o_0,\mu^i_0,\eta^i_0,\rho^o_0,\rho^i_0). $ By the mapping properties of integral operators with real analytic kernel and no singularities (cf. Lanza de Cristoforis and Musolino \cite[Prop. 4.1]{LaMu13}), $v^+_{\Omega^o}[\mu^o_0]_{|\partial\Omega^i}$ belongs to $C^{0,\alpha}(\partial\Omega^i)$.
By classical results in potential theory (cf.~Miranda \cite[Chap.~II, \S14, III]{Mi70}), $V_{\partial\Omega^i}[\mu^i_0]$ and $V_{\partial\Omega^i}[\eta^i_0]$ belong to $C^{0,\alpha}(\partial\Omega^i)$. Then, by condition \eqref{conditionNF} and by the fact that $A \in M_2(C^{0,\alpha} (\partial\Omega^i))$ and $f^o \in C^{0,\alpha} (\partial\Omega^o)$, we obtain that \begin{equation*} \begin{split} &\begin{pmatrix} f^o \\ \mathcal{N}_{F_1}(v^+_{\Omega^o}[\mu^o_0]_{|\partial\Omega^i} + V_{\partial\Omega^i}[\mu^i_0] +\rho^o_0 ,V_{\partial\Omega^i}[\eta^i_0] +\rho^i_0) \\ \mathcal{N}_{F_2}(v^+_{\Omega^o}[\mu^o_0]_{|\partial\Omega^i} + V_{\partial\Omega^i}[\mu^i_0] +\rho^o_0 ,V_{\partial\Omega^i}[\eta^i_0] +\rho^i_0) \end{pmatrix} \\ &\qquad - \begin{pmatrix} 0 & 0 & 0 \\ 0 & A_{11} & A_{12} \\ 0 & A_{21} & A_{22} \end{pmatrix} \begin{pmatrix} 0 \\ v^+_{\Omega^o}[\mu^o_0]_{|\partial\Omega^i} + V_{\partial\Omega^i}[\mu^i_0] +\rho^o_0 \\ V_{\partial\Omega^i}[\eta^i_0] +\rho^i_0 \end{pmatrix} \end{split} \end{equation*} belongs to the product space $C^{0,\alpha}(\partial\Omega^o) \times (C^{0,\alpha}(\partial\Omega^i))^2$. Finally, by the invertibility of the operator $J_A$ from $C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega^i) \times C^{0,\alpha}(\partial\Omega^i)_0 \times \mathbb{R}^2$ to $C^{0,\alpha}(\partial\Omega^o) \times (C^{0,\alpha}(\partial\Omega^i))^2$, we obtain that $(\mu^o_0,\mu^i_0,\eta^i_0,\rho^o_0,\rho^i_0) \in C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega^i) \times C^{0,\alpha}(\partial\Omega^i)_0 \times \mathbb{R}^2$. In particular, by Proposition \ref{propintsys} we deduce that the pair given by \eqref{u^o_0,u^i_0} is a solution of \eqref{princeq} (cf.~\eqref{T}). \end{proof} \section{The perturbed transmission problem (\ref{princeqpertu})}\label{sec princeqpertu} This section is devoted to the study of the perturbed transmission problem \eqref{princeqpertu}.
We introduce the map $M=(M_1,M_2,M_3)$ from $\mathcal{A}^{\Omega^o}_{\partial\Omega^i} \times C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega^i) \times C^{0,\alpha}(\partial\Omega^i)_0 \times \mathbb{R}^2$ to $C^{0,\alpha}(\partial\Omega^o) \times (C^{0,\alpha}(\partial\Omega^i))^2$ defined by \begin{equation}\label{M} \begin{aligned} & M_1[\phi,\mu^o,\mu^i,\eta^i,\rho^o,\rho^i](x) \\ & \equiv \left( -\frac{1}{2} I + W^\ast_{\partial\Omega^o} \right) [\mu^o] (x) + \nu_{\Omega^o}(x) \cdot \nabla v^-_{\Omega^i[\phi]}[\mu^i \circ \phi^{(-1)}] (x) - f^o(x) \quad \forall x \in \partial\Omega^o \\ & M_2[\phi,\mu^o,\mu^i,\eta^i,\rho^o,\rho^i] (t) \\&\equiv \left( \frac{1}{2} I + W^\ast_{\partial\Omega^i[\phi]} \right) [\mu^i \circ \phi^{(-1)}] (\phi(t)) + \nu_{\Omega^i[\phi]}(\phi(t)) \cdot \nabla v^+_{\Omega^o}[\mu^o](\phi(t)) \\ & \qquad \qquad - F_1\bigg(t,v^+_{\Omega^o}[\mu^o](\phi(t)) + V_{\partial\Omega^i[\phi]}[\mu^i\circ \phi^{(-1)}](\phi(t)) +\rho^o , \\ & \qquad \qquad \qquad \qquad V_{\partial\Omega^i[\phi]}[\eta^i\circ \phi^{(-1)}](\phi(t)) +\rho^i \bigg) \quad \forall t \in \partial\Omega^i \\ & M_3[\phi,\mu^o,\mu^i,\eta^i,\rho^o,\rho^i] (t) \\ &\equiv \left( -\frac{1}{2} I + W^\ast_{\partial\Omega^i[\phi]} \right) [\eta^i \circ \phi^{(-1)}] (\phi(t)) \\ & \qquad \qquad - F_2\bigg(t,v^+_{\Omega^o}[\mu^o](\phi(t)) + V_{\partial\Omega^i[\phi]}[\mu^i\circ \phi^{(-1)}](\phi(t)) +\rho^o ,\\ & \qquad \qquad \qquad \qquad V_{\partial\Omega^i[\phi]}[\eta^i\circ \phi^{(-1)}](\phi(t)) +\rho^i \bigg ) \quad\forall t \in \partial\Omega^i \end{aligned} \end{equation} for all $(\phi,\mu^o,\mu^i,\eta^i,\rho^o,\rho^i) \in \mathcal{A}^{\Omega^o}_{\partial\Omega^i} \times C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega^i) \times C^{0,\alpha}(\partial\Omega^i)_0 \times \mathbb{R}^2$. We incidentally observe that by the definition of $\Omega^i[\phi]$ we have that $\partial \Omega^i[\phi]=\phi(\partial\Omega^i)$. 
Then, by the definition of $M$, we can deduce the following result. \begin{prop}\label{M=0prop} Let $A$ be as in \eqref{Acondition}. Let assumptions \eqref{conditionF1F2} and \eqref{conditionNF} hold. Let \begin{equation*} (\phi,\mu^o,\mu^i,\eta^i,\rho^o,\rho^i) \in \mathcal{A}^{\Omega^o}_{\partial\Omega^i} \times C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega^i) \times C^{0,\alpha}(\partial\Omega^i)_0 \times \mathbb{R}^2. \end{equation*} Then the pair of functions \begin{equation*} (U^o_{\Omega^i[\phi]}[\mu^o,\mu^i\circ \phi^{(-1)},\eta^i\circ \phi^{(-1)},\rho^o,\rho^i],U^i_{\Omega^i[\phi]}[\mu^o,\mu^i\circ \phi^{(-1)},\eta^i\circ \phi^{(-1)},\rho^o,\rho^i]) \end{equation*} defined by \eqref{U^o,U^i} is a solution of problem \eqref{princeqpertu} if and only if \begin{equation}\label{M=0} M[\phi,\mu^o,\mu^i,\eta^i,\rho^o,\rho^i] = (0,0,0). \end{equation} In particular, equation \begin{equation}\label{M_0=0} M[\phi_0,\mu^o,\mu^i,\eta^i,\rho^o,\rho^i] = (0,0,0) \end{equation} is equivalent to the system \eqref{princintsys} and has a solution $(\mu^o_0,\mu^i_0,\eta^i_0,\rho^o_0,\rho^i_0) \in C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega^i) \times C^{0,\alpha}(\partial\Omega^i)_0 \times \mathbb{R}^2$ (recall that $\phi_0\equiv \text{id}_{\partial\Omega^i}$). \end{prop} \begin{proof} \ We first observe that, by the regularity of $\phi \in \mathcal{A}^{\Omega^o}_{\partial\Omega^i}$, if $(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i) \in C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega^i) \times C^{0,\alpha}(\partial\Omega^i)_0 \times \mathbb{R}^2$, then \begin{equation*} (\mu^i \circ \phi^{(-1)}, \eta^i \circ \phi^{(-1)}) \in C^{0,\alpha}(\partial\Omega^i[\phi]) \times C^{0,\alpha}(\partial\Omega^i[\phi]). \end{equation*} Moreover, since $\overline{\Omega^i[\phi]} \subset \Omega^o$, we can apply Lemma \ref{rapprharm} with $\Omega=\Omega^i[\phi]$.
Then by the jump relations for the single layer potential (cf.~Theorem \ref{sdp} (iii)), by a change of variable on $\phi(\partial \Omega^i)$ and by the definition of $M$ (cf.~\eqref{M}), we obtain that the pair of functions \begin{align*} & U^o_{\Omega^i[\phi]}[\mu^o,\mu^i\circ \phi^{(-1)},\eta^i\circ \phi^{(-1)},\rho^o,\rho^i] = (v^+_{\Omega^o} [\mu^o] + v^-_{\Omega^i[\phi]}[\mu^i\circ \phi^{(-1)}] + \rho^o)_{| \overline{\Omega^o} \setminus \Omega^i[\phi]}, \\ & U^i_{\Omega^i[\phi]}[\mu^o,\mu^i\circ \phi^{(-1)},\eta^i\circ \phi^{(-1)},\rho^o,\rho^i] = v^+_{\Omega^i[\phi]}[\eta^i\circ \phi^{(-1)}] + \rho^i \end{align*} is a solution of problem \eqref{princeqpertu} if and only if \eqref{M=0} is satisfied. Finally, since $\phi_0\equiv \text{id}_{\partial\Omega^i}$ (cf. \eqref{phi0}) and by the definition of $J_A$ (cf.~\eqref{J_A eq}), we obtain that, for all $(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i) \in C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega^i) \times C^{0,\alpha}(\partial\Omega^i)_0 \times \mathbb{R}^2$, equation \eqref{M_0=0} is equivalent to the system \eqref{princintsys}. Then the existence of a solution $(\mu^o_0,\mu^i_0,\eta^i_0,\rho^o_0,\rho^i_0)$ of \eqref{M_0=0} follows by Proposition \ref{prop mu_0}. \end{proof} By Proposition \ref{M=0prop}, the study of problem \eqref{princeqpertu} is reduced to that of equation \eqref{M=0}. We now wish to apply the Implicit Function Theorem for real analytic maps in Banach spaces (cf. Deimling \cite[Thm. 15.3]{De85}) to equation \eqref{M=0} around the value $\phi_0$. As a first step we have to analyse the regularity of the map $M$. In what follows we will assume the following: \begin{equation}\label{conditionNF*} \begin{split} \bullet & \,\mbox{The superposition operators } \mathcal{N}_{F_1} \mbox{ and } \mathcal{N}_{F_2} \mbox{ are real analytic from } \\ & (C^{0,\alpha}(\partial\Omega^i))^2 \mbox{ into } C^{0,\alpha}(\partial\Omega^i). 
\end{split} \end{equation} For conditions on $F_1$ and $F_2$ which imply the validity of assumption \eqref{conditionNF*}, we refer to Valent \cite[Chap. II]{Va88}. We now show that $M$ is real analytic. \begin{prop}\label{Mrealanal} Let assumption \eqref{conditionNF*} hold. Then the map $M$ is real analytic from $\mathcal{A}^{\Omega^o}_{\partial\Omega^i} \times C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega^i) \times C^{0,\alpha}(\partial\Omega^i)_0 \times \mathbb{R}^2$ to $C^{0,\alpha}(\partial\Omega^o) \times (C^{0,\alpha}(\partial\Omega^i))^2$. \end{prop} \begin{proof} \ We only prove the analyticity of $M_2$. The analyticity of $M_1$ and of $M_3$ can be proved similarly and is left to the reader. Therefore, we now analyse $M_2$. The map from $\mathcal{A}^{\Omega^o}_{\partial\Omega^i} \times C^{0,\alpha}(\partial\Omega^i)$ to $C^{0,\alpha}(\partial\Omega^i)$ that takes $(\phi,\mu^i)$ to the function of the variable $t\in\partial\Omega^i$ defined by \[ \begin{split} &\left( \frac{1}{2} I + W^\ast_{\partial\Omega^i[\phi]} \right) [\mu^i \circ \phi^{(-1)}] (\phi(t)) = \frac{1}{2}\mu^i(t) + W^\ast_{\partial\Omega^i[\phi]} [ \mu^i\circ \phi^{(-1)}] (\phi(t)) \\ &= \frac{1}{2}\mu^i(t) + \int_{\partial\Omega^i} (\nu_{\Omega^i[\phi]}(\phi(t)) \cdot \nabla S_n(\phi(t)-\phi(s))) \,\mu^i(s)\, \tilde{\sigma}_n[\phi](s) \,d\sigma_s \end{split} \] is real analytic by the real analyticity result for the dependence of layer potentials upon perturbation of the support and of the density of Lanza de Cristoforis and Rossi \cite[Thm.~3.12]{LaRo04} and Lanza de Cristoforis \cite[Prop.~7]{La07-2} (see also Lemma \ref{lemmanotation}).
The map from $\mathcal{A}^{\Omega^o}_{\partial\Omega^i} \times C^{0,\alpha}(\partial\Omega^o)$ to $C^{0,\alpha}(\partial\Omega^i)$ that takes $(\phi,\mu^o)$ to the function of the variable $t \in\partial\Omega^i$ defined by \[ \nu_{\Omega^i[\phi]}(\phi(t)) \cdot \nabla v^+_{\Omega^o}[\mu^o](\phi(t)) = \int_{\partial\Omega^o} (\nu_{\Omega^i[\phi]}(\phi(t)) \cdot \nabla S_n(\phi(t)-y)) \, \mu^o(y) \,d\sigma_y \] can be proven to be real analytic by the properties of integral operators with real analytic kernels and no singularities (see Lanza de Cristoforis and Musolino \cite[Prop. 4.1]{LaMu13}). For the third term of $M_2$ we proceed as follows. The map from $\mathcal{A}^{\Omega^o}_{\partial\Omega^i} \times C^{0,\alpha}(\partial\Omega^o)$ to $C^{0,\alpha}(\partial\Omega^i)$ that takes $(\phi,\mu^o)$ to the function of the variable $t\in\partial\Omega^i$ defined by \[ v^+_{\Omega^o}[\mu^o](\phi(t)) = \int_{\partial\Omega^o} S_n(\phi(t)-y) \, \mu^o(y) \,d\sigma_y \] can be proven to be real analytic by the properties of integral operators with real analytic kernels and no singularities (see Lanza de Cristoforis and Musolino \cite[Prop. 4.1]{LaMu13}). The map from $\mathcal{A}^{\Omega^o}_{\partial\Omega^i} \times C^{0,\alpha}(\partial\Omega^i)$ to $C^{0,\alpha}(\partial\Omega^i)$ that takes $(\phi,\mu^i)$ to the function of the variable $t\in\partial\Omega^i$ defined by \[ V_{\partial\Omega^i[\phi]}[\mu^i\circ \phi^{(-1)}](\phi(t)) = \int_{\phi(\partial\Omega^i)} S_n(\phi(t)-y) \, \mu^i\circ \phi^{(-1)}(y) \,d\sigma_y \] is real analytic by a result of real analytic dependence for the single layer potential upon perturbation of the support and of the density (see Lanza de Cristoforis and Rossi \cite[Thm.~3.12]{LaRo04}, Lanza de Cristoforis \cite[Prop.~7]{La07-2}). Similarly we can treat $V_{\partial\Omega^i[\phi]}[\eta^i\circ \phi^{(-1)}](\phi(\cdot))$.
Hence, by the real analyticity of the composition of real analytic maps and by \eqref{conditionNF*}, we conclude that the map from $\mathcal{A}^{\Omega^o}_{\partial\Omega^i} \times C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega^i) \times C^{0,\alpha}(\partial\Omega^i)_0 \times \mathbb{R}^2$ to $C^{0,\alpha}(\partial\Omega^i)$ that takes a sextuple $(\phi,\mu^o,\mu^i,\eta^i,\rho^o,\rho^i)$ to the function \[ \begin{split} \mathcal{N}_{F_1}\bigg( v^+_{\Omega^o}[\mu^o](\phi(\cdot))_{|\partial\Omega^i} + V_{\partial\Omega^i[\phi]}[\mu^i\circ \phi^{(-1)}]&(\phi(\cdot)) +\rho^o ,\\ & V_{\partial\Omega^i[\phi]}[\eta^i\circ \phi^{(-1)}](\phi(\cdot)) +\rho^i \bigg) \end{split} \] is real analytic. As a consequence $M_2$ is real analytic. \end{proof} It will be convenient to consider $F_1$, $F_2$ as two components of a vector field on $\partial\Omega^i\times \mathbb{R}^2$. We denote by $F$ the function from $\partial\Omega^i\times \mathbb{R}^2$ to $\mathbb{R}^2$ defined by \begin{equation*}\label{defF} F(t,\zeta_1,\zeta_2) = (F_1(t,\zeta_1,\zeta_2),F_2(t,\zeta_1,\zeta_2)) \quad \forall (t,\zeta_1,\zeta_2) \in \partial\Omega^i\times\mathbb{R}^2\, . \end{equation*} Clearly, we can extend the definition of the superposition operator (cf. Section \ref{notation}) in a natural way, {i.e.}, by setting \begin{equation*}\label{defNF} \mathcal{N}_F : (C^{0,\alpha}(\partial\Omega^i))^2 \to (C^{0,\alpha}(\partial\Omega^i))^2, \, \mathcal{N}_F \equiv (\mathcal{N}_{F_1},\mathcal{N}_{F_2})\, . \end{equation*} Now let $(\mu^o_0,\mu^i_0,\eta^i_0,\rho^o_0,\rho^i_0) \in C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega^i) \times C^{0,\alpha}(\partial\Omega^i)_0 \times \mathbb{R}^2$ be as in Proposition \ref{prop u^o_0,u^i_0}. 
By standard calculus in Banach spaces, we have the following formula regarding the first order differential of $\mathcal{N}_F$: \[ d\mathcal{N}_F(v^+_{\Omega^o}[\mu^o_0]_{|\partial\Omega^i} + V_{\Omega^i}[\mu^i_0] + \rho^o_0 , V_{\Omega^i}[\eta^i_0] +\rho^i_0) .(h_1,h_2) = A_{\mathcal{N}_F,0} \begin{pmatrix} h_1 \\ h_2 \end{pmatrix} \] for all $(h_1,h_2) \in (C^{0,\alpha}(\partial\Omega^i))^2$, where \begin{equation}\label{A_{N_F}} A_{\mathcal{N}_F,0} \equiv \begin{pmatrix} \mathcal{N}_{\partial_{\zeta_1}F_1}(\alpha^1_0 , \alpha^2_0) & \mathcal{N}_{\partial_{\zeta_2}F_1}(\alpha^1_0 , \alpha^2_0) \\ \mathcal{N}_{\partial_{\zeta_1}F_2}(\alpha^1_0 , \alpha^2_0) & \mathcal{N}_{\partial_{\zeta_2}F_2}(\alpha^1_0 , \alpha^2_0) \end{pmatrix} \end{equation} and $\alpha^1_0$ and $\alpha^2_0$ are the functions from $\partial\Omega^i$ to $\mathbb{R}$ defined by \begin{equation}\label{A_{N_F}bis} \begin{split} \alpha^1_0 \equiv v^+_{\Omega^o}[\mu^o_0]_{|\partial\Omega^i} + V_{\Omega^i}[\mu^i_0] + \rho^o_0, \quad \alpha^2_0 \equiv V_{\Omega^i}[\eta^i_0] +\rho^i_0. \end{split} \end{equation} We will require that the matrix $A_{\mathcal{N}_F,0}$ given by \eqref{A_{N_F}}-\eqref{A_{N_F}bis} satisfies assumption \eqref{Acondition}. In particular, we notice that assumption \eqref{conditionNF*} implies the validity of the first of the three conditions of \eqref{Acondition} for the matrix $A_{\mathcal{N}_F,0}$. In order to apply the Implicit Function Theorem (cf. Deimling \cite[Thm. 15.3]{De85}) to equation \eqref{M=0} we need to prove the invertibility of the partial differential of $M$. \begin{prop}\label{diffMprop} Let assumptions \eqref{conditionF1F2} and \eqref{conditionNF*} hold. Let $(\mu^o_0,\mu^i_0,\eta^i_0,\rho^o_0,\rho^i_0) \in C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega^i) \times C^{0,\alpha}(\partial\Omega^i)_0 \times \mathbb{R}^2$ be as in Proposition \ref{prop u^o_0,u^i_0}.
Let $A_{\mathcal{N}_F,0}$ be as in \eqref{A_{N_F}}-\eqref{A_{N_F}bis} and assume that it satisfies assumption \eqref{Acondition}. Then the partial differential of $M$ with respect to $(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i)$ evaluated at the point $(\phi_0,\mu^o_0,\mu^i_0,\eta^i_0,\rho^o_0,\rho^i_0)$, which we denote by \begin{equation}\label{partdiff M} \partial_{(\mu^o, \mu^i, \eta^i, \rho^o,\rho^i)} M[\phi_0,\mu^o_0, \mu^i_0 ,\eta^i_0,\rho^o_0,\rho^i_0], \end{equation} is an isomorphism from $C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega^i) \times C^{0,\alpha}(\partial\Omega^i)_0 \times \mathbb{R}^2$ to $C^{0,\alpha}(\partial\Omega^o) \times (C^{0,\alpha}(\partial\Omega^i))^2$. \end{prop} \begin{proof} \ By standard calculus in Banach spaces, we can verify that the partial differential \eqref{partdiff M} is the linear and continuous operator defined by \begin{align*} & \partial_{(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i)} M_1[\phi_0,\mu^o_0,\mu^i_0,\eta^i_0,\rho^o_0,\rho^i_0]. (\tilde{\mu}^o,\tilde{\mu}^i,\tilde{\eta}^i,\tilde{\rho}^o,\tilde{\rho}^i)(x) \\ &\qquad = \left( -\frac{1}{2} I + W^\ast_{\Omega^o} \right) [\tilde{\mu}^o] (x) + \nu_{\Omega^o}(x) \cdot \nabla v^-_{\Omega^i}[\tilde{\mu}^i] (x) \qquad \forall x \in \partial\Omega^o \\ & \partial_{(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i)} M_2[\phi_0,\mu^o_0,\mu^i_0,\eta^i_0,\rho^o_0,\rho^i_0].
(\tilde{\mu}^o,\tilde{\mu}^i,\tilde{\eta}^i,\tilde{\rho}^o,\tilde{\rho}^i)(t) \\ &\qquad = \left( \frac{1}{2} I + W^\ast_{\Omega^i} \right) [\tilde{\mu}^i] (t) + \nu_{\Omega^i}(t) \cdot \nabla v^+_{\Omega^o}[\tilde{\mu}^o](t) \\ & \qquad -\partial_{\zeta_1}F_1\left(t,v^+_{\Omega^o}[\mu^o_0](t) + V_{\Omega^i}[\mu^i_0](t) + \rho^o_0 , V_{\Omega^i}[\eta^i_0](t) +\rho^i_0 \right) \, \\ & \qquad \qquad \times \left(v^+_{\Omega^o}[\tilde{\mu}^o](t) + V_{\Omega^i}[\tilde{\mu}^i](t) + \tilde{\rho}^o\right) \\ & \qquad -\partial_{\zeta_2}F_1\left(t,v^+_{\Omega^o}[\mu^o_0](t) + V_{\Omega^i}[\mu^i_0](t) + \rho^o_0 , V_{\Omega^i}[\eta^i_0](t) +\rho^i_0 \right) \, \\ & \qquad \qquad \times \left( V_{\Omega^i}[\tilde{\eta}^i](t) + \tilde{\rho}^i\right) \qquad \forall t \in \partial\Omega^i \\ & \partial_{(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i)} M_3[\phi_0,\mu^o_0,\mu^i_0,\eta^i_0,\rho^o_0,\rho^i_0]. (\tilde{\mu}^o,\tilde{\mu}^i,\tilde{\eta}^i,\tilde{\rho}^o,\tilde{\rho}^i)(t) \\ &\qquad = \left( -\frac{1}{2} I + W^\ast_{\Omega^i} \right) [\tilde{\eta}^i] (t) \\ & \qquad -\partial_{\zeta_1}F_2\left(t,v^+_{\Omega^o}[\mu^o_0](t) + V_{\Omega^i}[\mu^i_0](t) + \rho^o_0 , V_{\Omega^i}[\eta^i_0](t) +\rho^i_0 \right) \, \\ & \qquad \qquad \times \left(v^+_{\Omega^o}[\tilde{\mu}^o](t) + V_{\Omega^i}[\tilde{\mu}^i](t) + \tilde{\rho}^o\right) \\ & \qquad -\partial_{\zeta_2}F_2\left(t,v^+_{\Omega^o}[\mu^o_0](t) + V_{\Omega^i}[\mu^i_0](t) + \rho^o_0 , V_{\Omega^i}[\eta^i_0](t) +\rho^i_0 \right) \, \\ & \qquad \qquad \times \left( V_{\Omega^i}[\tilde{\eta}^i](t) + \tilde{\rho}^i\right) \qquad \forall t \in \partial\Omega^i \end{align*} for all $(\tilde{\mu}^o,\tilde{\mu}^i,\tilde{\eta}^i,\tilde{\rho}^o,\tilde{\rho}^i) \in C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega^i) \times C^{0,\alpha}(\partial\Omega^i)_0 \times \mathbb{R}^2$.
Then, by Proposition \ref{J_A} with $A=A_{\mathcal{N}_F,0}$ (cf.~\eqref{J_A eq}) and since $A_{\mathcal{N}_F,0}$ satisfies \eqref{Acondition}, we conclude that $ \partial_{(\mu^o,\mu^i,\eta^i,\rho^o,\rho^i)} M[\phi_0,\mu^o_0,\mu^i_0,\eta^i_0,\rho^o_0,\rho^i_0]$ is an isomorphism of Banach spaces. \end{proof} By Propositions \ref{M=0prop}, \ref{Mrealanal}, \ref{diffMprop}, and by applying the Implicit Function Theorem for real analytic functions in Banach spaces (cf.~Deimling \cite[Thm.~15.3]{De85}) to equation \eqref{M=0}, we deduce the following real analyticity result for the dependence of the densities in the integral representation formula for the solutions of problem \eqref{princeqpertu} upon the perturbation of the shape of the inclusion $\Omega^i$. \begin{teo}\label{M^oteo} Let assumptions \eqref{conditionF1F2} and \eqref{conditionNF*} hold. Let $(\mu^o_0,\mu^i_0,\eta^i_0,\rho^o_0,\rho^i_0) \in C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega^i) \times C^{0,\alpha}(\partial\Omega^i)_0 \times \mathbb{R}^2$ be as in Proposition \ref{prop u^o_0,u^i_0}. Let $A_{\mathcal{N}_F,0}$ be as in \eqref{A_{N_F}}-\eqref{A_{N_F}bis} and assume that it satisfies assumption \eqref{Acondition}. Then, there exist two open neighbourhoods $Q_0$ of $\phi_0$ in $\mathcal{A}^{\Omega^o}_{\partial\Omega^i}$ and $U_0$ of $(\mu^o_0,\mu^i_0,\eta^i_0,\rho^o_0,\rho^i_0)$ in $C^{0,\alpha}(\partial\Omega^o)_0 \times C^{0,\alpha}(\partial\Omega^i) \times C^{0,\alpha}(\partial\Omega^i)_0 \times \mathbb{R}^2$, and a real analytic map $ \Lambda \equiv (M^o,M^i,N^i,R^o,R^i): Q_0 \to U_0$ such that the set of zeros of $M$ in $Q_0 \times U_0$ coincides with the graph of the function $\Lambda$. In particular, \[ \Lambda[\phi_0]=(M^o[\phi_0],M^i[\phi_0],N^i[\phi_0],R^o[\phi_0],R^i[\phi_0])= (\mu^o_0,\mu^i_0,\eta^i_0,\rho^o_0,\rho^i_0). \] \end{teo} We are now ready to exhibit a family of solutions of problem \eqref{princeqpertu}.
\begin{defin}\label{u^o_phi,u^i_phi def} Let assumptions \eqref{conditionF1F2} and \eqref{conditionNF*} hold. Let $A_{\mathcal{N}_F,0}$ be as in \eqref{A_{N_F}}-\eqref{A_{N_F}bis} and assume that it satisfies assumption \eqref{Acondition}. Let $Q_0$ and $\Lambda\equiv (M^o,M^i,N^i,R^o,R^i)$ be as in Theorem \ref{M^oteo}. Then, for each $\phi \in Q_0$ we set \begin{align*} u^o_\phi(x) &= U^o_{\Omega^i[\phi]}[M^o[\phi],M^i[\phi]\circ \phi^{(-1)},N^i[\phi]\circ \phi^{(-1)},R^o[\phi],R^i[\phi]](x) \\& \quad\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad\qquad \forall x \in \overline{\Omega^o} \setminus \Omega^i[\phi], \\ u^i_\phi(x) &= U^i_{\Omega^i[\phi]}[M^o[\phi],M^i[\phi]\circ \phi^{(-1)},N^i[\phi]\circ \phi^{(-1)},R^o[\phi],R^i[\phi]](x) \quad \forall x \in \overline{\Omega^i[\phi]}, \end{align*} where the pair $(U^o_{\Omega^i[\phi]}[\cdot,\cdot,\cdot,\cdot,\cdot],U^i_{\Omega^i[\phi]}[\cdot,\cdot,\cdot,\cdot,\cdot])$ is defined by \eqref{U^o,U^i}. \end{defin} By Propositions \ref{prop u^o_0,u^i_0}, \ref{M=0prop}, and Theorem \ref{M^oteo}, we deduce the following. \begin{teo}\label{upertuex} Let assumptions \eqref{conditionF1F2} and \eqref{conditionNF*} hold. Let $A_{\mathcal{N}_F,0}$ be as in \eqref{A_{N_F}}-\eqref{A_{N_F}bis} and assume that it satisfies assumption \eqref{Acondition}. Let $Q_0$ be as in Theorem \ref{M^oteo} and let $(u^o_\phi,u^i_\phi)$ be as in Definition \ref{u^o_phi,u^i_phi def}. Then, for all $\phi \in Q_0$, $(u^o_\phi,u^i_\phi) \in C^{1,\alpha}(\overline{\Omega^o} \setminus \Omega^i[\phi]) \times C^{1,\alpha}(\overline{\Omega^i[\phi]})$ is a solution of problem \eqref{princeqpertu}. In particular $(u^o_{\phi_0},u^i_{\phi_0})= (u^o_0,u^i_0)$ is a solution of problem \eqref{princeq}. \end{teo} We are now ready to prove our main result, where we show that suitable restrictions of the functions $u^o_\phi$ and $u^i_\phi$ depend real analytically on the parameter $\phi$ which determines the domain perturbation.
\begin{teo}\label{upertuana} Let assumptions \eqref{conditionF1F2} and \eqref{conditionNF*} hold. Let $A_{\mathcal{N}_F,0}$ be as in \eqref{A_{N_F}}-\eqref{A_{N_F}bis} and assume that it satisfies assumption \eqref{Acondition}. Let $Q_0$ be as in Theorem \ref{M^oteo} and let $(u^o_\phi,u^i_\phi)$ be as in Definition \ref{u^o_phi,u^i_phi def}. Then, the following statements hold. \begin{enumerate} \item[(i)] Let $\Omega_\mathtt{int}$ be a bounded open subset of $\Omega^o$. Let $Q_\mathtt{int} \subseteq Q_0$ be an open neighbourhood of $\phi_0$ such that \[ \overline{\Omega_\mathtt{int}} \subset {\Omega^i[\phi]} \quad \forall \phi \in Q_\mathtt{int}. \] Then the map from $Q_\mathtt{int}$ to $C^{1,\alpha}(\overline{\Omega_\mathtt{int}})$ that takes $\phi$ to $u^i_{\phi| \overline{\Omega_\mathtt{int}}}$ is real analytic. \item[(ii)] Let $\Omega_\mathtt{ext}$ be a bounded open subset of $\Omega^o$. Let $Q_\mathtt{ext} \subseteq Q_0$ be an open neighbourhood of $\phi_0$ such that \[ \overline{\Omega_\mathtt{ext}} \subset \Omega^o \setminus \overline{\Omega^i[\phi]} \quad \forall \phi \in Q_\mathtt{ext}.\] Then the map from $Q_\mathtt{ext}$ to $C^{1,\alpha}(\overline{\Omega_\mathtt{ext}})$ that takes $\phi$ to $u^o_{\phi| \overline{\Omega_\mathtt{ext}}}$ is real analytic. \end{enumerate} \end{teo} \begin{proof} \ We prove (i). By Definition \ref{u^o_phi,u^i_phi def}, by \eqref{U^o,U^i} and by Lemma \ref{lemmanotation}, we have \begin{equation*} \begin{split} u^i_\phi(x) &= U^i_{\Omega^i[\phi]}[M^o[\phi],M^i[\phi]\circ \phi^{(-1)},N^i[\phi]\circ \phi^{(-1)},R^o[\phi],R^i[\phi]](x) \\ & = \int_{\partial\Omega^i} S_n(x-\phi(s)) \, N^i[\phi](s) \, \tilde{\sigma}_n[\phi](s) \,d\sigma_s + R^i[\phi] \qquad \forall x \in \overline{\Omega^i[\phi]} \end{split} \end{equation*} and for all $\phi \in Q_0$. By the assumption $Q_\mathtt{int} \subseteq Q_0$ and Theorem \ref{M^oteo}, we know that the map from $Q_\mathtt{int}$ to $\mathbb{R}$ that takes $\phi$ to $R^i[\phi]$ is real analytic.
Moreover, by the real analyticity of $N^i[\cdot]$ (cf.~Theorem \ref{M^oteo}) and by the properties of integral operators with real analytic kernels and no singularities (see Lanza de Cristoforis and Musolino \cite[Prop. 4.1]{LaMu13}), we can prove that the map from $Q_\mathtt{int}$ to $C^{1,\alpha}(\overline{\Omega_\mathtt{int}})$ that takes $\phi$ to the function $ \int_{\partial\Omega^i} S_n(x-\phi(s)) \, N^i[\phi](s) \, \tilde{\sigma}_n[\phi](s) \,d\sigma_s$ of the variable $x \in \overline{\Omega_\mathtt{int}}$ is real analytic (see also Lemma \ref{lemmanotation}). Hence, we deduce the validity of (i). The proof of (ii) is similar and is left to the reader. \end{proof} \section*{Acknowledgements} The authors are members of the ``Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`a e le loro Applicazioni'' (GNAMPA) of the ``Istituto Nazionale di Alta Matematica'' (INdAM). R.M. acknowledges the support of the Project ``Variational methods for stationary and evolution problems with singularities and interfaces'' (PRIN 2017) funded by the Italian Ministry of Education, University, and Research. P.M. acknowledges the support of the Project BIRD191739/19 ``Sensitivity analysis of partial differential equations in the mathematical theory of electromagnetism'' (University of Padova), of the ``INdAM GNAMPA Project 2020 - Analisi e ottimizzazione asintotica per autovalori in domini con piccoli buchi'', and of the grant ``Challenges in Asymptotic and Shape Analysis - CASA'' (Ca' Foscari University of Venice).
\section*{Content} \section*{Introduction} \medskip Networks provide a useful paradigm to incorporate contact patterns and various heterogeneities within a population \cite{pastor2014epidemic,newman2002spread,konyv}. The basic ingredients of such models are nodes and links, usually representing individuals and the contacts between them, but they may represent also groups of individuals (such as the population at some geographic location), and the connectedness of these groups (such as transportation routes \cite{dia,yuki}). In simple disease outbreak models, the status of an individual can be susceptible ($S$), infected ($I$) or recovered ($R$). A key parameter associated with most epidemic models is the basic reproduction number (denoted by $\mathcal{R}_0$), which denotes the expected number of secondary infections generated by a typical infected individual introduced into a fully susceptible population \cite{diekmann}. The reproduction number is also a threshold quantity: if $ \mathcal{R}_0<1$ the epidemic will die out, while if $\mathcal{R}_0>1$ the disease will spread. Another important measure of epidemic severity is the final epidemic size, which is the total number of individuals who become infected during the time course of the epidemic. These two quantities are often connected via the so-called final size relation. In these simple models that assume a fully mixed population, the final fraction that is not infected $s_\infty$ solves the implicit relation \[ s_\infty = S(0) e^{-\mathcal{R}_0 (1-s_\infty)} \, . \] If infected individuals transmit with constant rate $\beta$, then in this well-mixed model $\mathcal{R}_0 = \beta \ave{\mathcal{I}}$ where $\ave{\mathcal{I}}$ is the average infection duration, and so variance in the distribution of infection duration does not affect the final size~\cite{miller:final,ma}. 
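As a quick numerical illustration of the implicit relation above (a sketch of ours, not taken from the cited works; the values $\mathcal{R}_0=2$, $\mathcal{R}_0=0.5$ and $S(0)=1$ are arbitrary), the stable root can be found by fixed-point iteration:

```python
import math

def wellmixed_final_size(r0, s0=1.0, tol=1e-12, max_iter=10_000):
    """Iterate s <- s0 * exp(-r0 * (1 - s)).

    The epidemic root of the well-mixed final size relation is an
    attracting fixed point of this map whenever r0 > 1; for r0 < 1 the
    iteration converges to s_infty = 1 (no outbreak)."""
    s = 0.5  # any starting guess in (0, 1)
    for _ in range(max_iter):
        s_new = s0 * math.exp(-r0 * (1.0 - s))
        if abs(s_new - s) < tol:
            break
        s = s_new
    return s_new

# For R0 = 2 roughly 80% of the population is eventually infected.
print(wellmixed_final_size(2.0), wellmixed_final_size(0.5))
```

Consistently with the remark above, only $\mathcal{R}_0$ enters this computation: the variance of the infectious period plays no role in the well-mixed final size.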
Modelling epidemics on networks, however, increases the complexity of the models since the underlying population structure means that individuals are not interchangeable. Thus we must track which individuals are in each status rather than simply how many individuals are in each status. For example, in the most fundamental case of Markovian transmission and recovery, both the time to infection and the time spent as infected and infectious are taken from exponential distributions with appropriate rates. Even for the purely Markovian case we need to deal with a continuous time Markov chain with a discrete state space with $3^N$ elements, where three stands for the three possible states a node can be in ($S$, $I$ and $R$) and $N$ denotes the number of nodes in the network. Writing down evolution equations for the probability of the system being in any of these states is possible but impractical due to the high dimensionality of the system. Hence, in order to deal with this complexity one needs to employ some `clever' averaging. Probabilistic methods, such as branching processes, can be used to deal with the early growth and the asymptotic behaviour \cite{Ball}, with percolation theory also leading to good analytical treatment for the early growth and final size \cite{Perc}. For the later dynamics, we generally need to derive a mean-field model, e.g. a low dimensional system of ODEs. There are many well established ways to derive mean-field models. Perhaps the most compact method is the so-called edge-based compartmental model (EBCM) \cite{EBCM} which has been successfully used to capture SIR dynamics with arbitrary transmission and infection processes \cite{Neil} on configuration-like networks. The EBCM provides an excellent approximation of the exact stochastic network epidemic, which becomes exact in some appropriate limits and conditions on the underlying network \cite{Decreusefond,Janson}.
Another powerful method to model epidemic spread on networks is provided by the message passing approach \cite{MP}, and this works for arbitrary transmission and recovery processes but at the expense of a system consisting of a large number of integro-differential equations. In addition, pairwise models have been successfully used to approximate stochastic epidemics on networks and represent a vast improvement on compartmental models. Pairwise models also have the advantage of being easy to understand and very intuitive when compared to the EBCM or the message passing model. All the above are able to capture the time evolution of the epidemic while also offering insights about the epidemic threshold and final size. All these models have the same starting point, and not surprisingly it can be shown that often these models are equivalent~\cite{MLA,Neil,konyv}, simply representing different choices of how one averages and how the reduced state space is defined \cite{konyv}. While dealing with the complexity and the modelling of contact structures, the dynamics of the disease needs to be accounted for appropriately. It is well known that the duration of the infectiousness has a major impact on whether an outbreak happens and how many people it affects, being a key parameter in the basic reproduction number. To highlight a recent example, in the West African Ebola outbreak one crucial part of the intervention strategy was to reduce the length of the post-mortem infectious period \cite{barbarossa}. In this paper we bridge the gap by considering a model that can capture both the complexity of contact structure as well as the features of the disease itself. To do this we consider pairwise models with Markovian infection but an arbitrary recovery process, and we focus on the outbreak threshold derived from this model and its dependence on the choice of the recovery process. The paper is structured as follows.
First, we introduce the pairwise model and the analytical final epidemic size relation, followed by the newly introduced basic pairwise reproduction number $\mathcal{R}_0^p$. The main result of the paper is on the relation between the variance in the distribution of the recovery process and the basic pairwise reproduction number. This is followed by some discussion of our results with respect to the concept of stochastic ordering, and the possible extension of our results to heterogeneous networks. We conclude with extensive numerical results and a discussion of our findings. \section*{Model} \medskip Pairwise models are formulated in terms of the expected values for the number of susceptible ($[S]$), infected ($[I]$) and recovered ($[R]$) nodes, which depend on the expected values of ($SS$) pairs ($[SS]$) and ($SI$) pairs ($[SI]$). Introducing the usual notations \begin{itemize} \item $[X](t)$ for the expected number of nodes in state $X$ at time $t$, \item $[XY](t)$ for the expected number of links connecting a node in state $X$ to another in state $Y$, and \item $[XYZ](t)$ for the expected number of triplets in state $X-Y-Z$, \end{itemize} where $X, Y, Z\in \{S, I, R\}$, and by summing up all possible transitions, the pairwise model reads as \begin{eqnarray} \dot{[S]}(t)&=&-\tau [SI](t),\nonumber \\ \dot{[I]}(t)&=&\tau [SI](t)-\gamma [I](t), \nonumber \\ \dot{[SS]}(t)&=&-2\tau [SSI](t), \label{Original_Pairwise}\\ \dot{[SI]}(t)&=&\tau [SSI](t)-\tau [ISI](t)-\tau [SI](t)-\gamma [SI](t),\nonumber \end{eqnarray} where $\tau$ is the per contact infection rate and $\gamma$ is the recovery rate. Here $[S]+[I]+[R]=N$ is the total number of nodes in the network, and only those equations are listed which are necessary to derive a complete self-consistent system. The equations for links contain triplets, thus we have to break the dependence on higher order terms to obtain a closed system.
The closure approximation formula $[XSY]=\frac{n-1}{n} \frac{[XS] [SY]}{[S]}$, where $n$ is the average number of links per node, leads to the self-consistent system \cite{keeling} \begin{eqnarray} \label{eq:pairmarkov} \dot{[S]}(t)&=&-\tau [SI](t),\nonumber \\ \dot{[I]}(t)&=&\tau [SI](t)-\gamma [I](t),\nonumber \\ \dot{[SS]}(t)&=&-2\tau \frac{n-1}{n} \frac{[SS](t)[SI](t)}{[S](t)}, \\ \dot{[SI]}(t)&=&\tau \frac{n-1}{n} \left(\frac{[SS](t)[SI](t)}{[S](t)}-\frac{[SI](t)[SI](t)}{[S](t)}\right)-(\tau+\gamma)[SI](t)\nonumber. \end{eqnarray} Closing at the level of pairs with the approximation $[XY]=n[X]\frac{[Y]}{N}$, one obtains the so-called mean-field model (or compartmental model) \begin{eqnarray} \dot{S}(t)&=&-\tau \frac{n}{N} S(t) I(t),\nonumber \\ \dot{I}(t)&=&\tau \frac{n}{N} S(t) I(t)-\gamma I(t), \end{eqnarray} with basic reproduction number \begin{equation} \mathcal{R}_0=\frac{n}{N}\tau \mathbb{E}(\mathcal{I}) S_{0},\label{eq:mfstandardR0} \end{equation} where $\mathbb{E}(\mathcal{I})=1/\gamma$ is the expected infectious period. The final size relation associated to the mean-field model is \begin{eqnarray} \label{eq:finalsizemftheorem} \ln\left(s_\infty\right)=\mathcal{R}_0\left(s_\infty-1\right), \end{eqnarray} where $S_0$ is the number of susceptible individuals at time $t=0$ and $s_\infty=S_{\infty}/S_0$, where $S({\infty})=S_{\infty}$. There are many results for the Markovian pairwise models \cite{saldana,keeling,konyv}; for example, the final epidemic size is given by \begin{equation} \frac{s_\infty^{\frac{1}{n}}-1}{\frac{1}{n-1}}=\frac{n-1}{N}\frac{\tau}{\tau+\gamma}[S]_0\left(s_\infty^{\frac{n-1}{n}} -1 \right), \label{eq:homogeneous_final} \end{equation} where $[S]_0$ is the number of susceptible individuals at time $t=0$ and $s_\infty=[S]_{\infty}/[S]_0$, where $[S]({\infty})=[S]_{\infty}$. \subsection*{Non-Markovian Recovery} \medskip The Markovianity of the recovery process is a strong simplifying assumption.
For many epidemics, the infectious period has great importance and it is measured empirically. Recently, pairwise approximations of the SIR dynamics with non-Markovian recovery have been derived, see \cite{prl,wilkinson,biomat,proca}. In the special case of fixed recovery time $\sigma$, the mean-field model is given by \begin{eqnarray} \label{eq:meanfield} S'(t)&=&-\tau \frac{n}{N} S(t)I(t), \nonumber \\ I'(t)&=&\tau \frac{n}{N} S(t)I(t)-\tau \frac{n}{N} S(t-\sigma)I(t-\sigma), \end{eqnarray} while the pairwise model turned out to be \cite{prl} \begin{eqnarray} \label{eq:closeq} \dot{[S]}(t)&=&-\tau [SI](t),\nonumber \\ \dot{[SS]}(t)&=&-2\tau \frac{n-1}{n} \frac{[SS](t) [SI](t)}{[S](t)},\nonumber \\ \dot{[I]}(t)&=& \tau [SI](t) - \tau [SI](t-\sigma), \nonumber \\ \dot{[SI]}(t)&=& \tau \frac{n-1}{n}\frac{[SS](t)[SI](t)}{[S](t)}-\tau \frac{n-1}{n}\frac{[SI](t)[SI](t)}{[S](t)} -\tau [SI](t) \nonumber \\ & &-\tau \frac{n-1}{n}\frac{[SS](t-\sigma)[SI](t-\sigma)}{[S](t-\sigma)} e^{-\int_{t-\sigma}^{t}\tau\frac{n-1}{n}\frac{[SI](u)}{[S](u)}+\tau du}. \end{eqnarray} Both systems are now delay differential equations rather than ordinary differential equations, as is the case for Markovian epidemics. In \cite{prl}, the following final epidemic size relation has been derived: \begin{equation} \frac{s_\infty^{\frac{1}{n}}-1}{\frac{1}{n-1}}=\frac{n-1}{N}\left(1-e^{-\tau \sigma}\right)[S]_0\left(s_\infty^{\frac{n-1}{n}}-1\right). 
\label{finalsize} \end{equation} Considering a general distribution for the recovery period, the pairwise model can be formulated as a system of integro-differential equations \cite{proca,wilkinson}, which is given by \begin{subequations} \label{eq:closeq2} \begin{align} \dot{[S]}(t)&=-\tau [SI](t) \label{eq:closedeqS}\\ \dot{[SS]}(t)&=-2\tau \frac{n-1}{n} \frac{[SS](t) [SI](t)}{[S](t)} \label{eq:closedeqSS}\\ \dot{[I]}(t)&=\tau [SI](t) - \int_{0}^{t} \tau [SI](t-a) f_\mathcal{I}(a) da - \int_{t}^{\infty} \varphi(a-t) \frac{f_\mathcal{I}(a)}{\xi(a-t)} da\label{eq:closedeqI}\\ \dot{[SI]}(t)&=\tau \frac{n-1}{n}\frac{[SS](t)[SI](t)}{[S](t)}-\tau \frac{n-1}{n}\frac{[SI](t)}{[S](t)}[SI](t)-\tau [SI](t)\nonumber\\ &-\int_0^t \tau \frac{n-1}{n}\frac{[SS](t-a)[SI](t-a)}{[S](t-a)} e^{-\int_{t-a}^t \tau\frac{n-1}{n}\frac{[SI](s)}{[S](s)}+\tau ds} f_\mathcal{I}(a)da\nonumber \\ &-\int_t^{\infty}\frac{n}{N} [S]_0 \varphi (a-t) e^{-\int_{0}^t \tau\frac{n-1}{n}\frac{[SI](s)}{[S](s)} +\tau ds} \frac{f_\mathcal{I}(a)}{\xi(a-t)}da. \label{eq:closedeqSI} \end{align} \end{subequations} Above we assume that the infection process along $S$--$I$ links is Markovian with transmission rate $\tau>0$. The recovery part is considered to be non-Markovian, given by a random variable $\mathcal{I}$, with a cumulative distribution function $F_\mathcal{I}(a)$ and probability density function $f_\mathcal{I}(a)$. We use the associated survival function $\xi(a)=1-F_\mathcal{I}(a)$ and hazard function $h(a)=-\frac{\xi'(a)}{\xi(a)}=\frac{f_\mathcal{I}(a)}{\xi(a)}$. We note that $\varphi (a)$ is the initial condition which gives the age of infection of individuals at time $t=0$. From Eq.~\eqref{eq:closeq2}, the associated mean-field model can be easily deduced by using the closure approximation formula for homogeneous networks (i.e.
$n$-regular graphs) \begin{equation} \label{eq:closeformmf} [XY](t)=\frac{n}{N}[X](t)[Y](t), \end{equation} thus the node-level system becomes \begin{subequations} \label{eq:closmfeq} \begin{align} \dot{S}(t)&=-\tau \frac{n}{N}S(t)I(t) \label{eq:closedmfeqS}\\ \dot{I}(t)&=\tau \frac{n}{N}S(t)I(t) - \int_{0}^{t} \tau \frac{n}{N}S(t-a)I(t-a) f_\mathcal{I}(a) da- \int_{t}^{\infty} \varphi(a-t) \frac{f_\mathcal{I}(a)}{\xi(a-t)} da.\label{eq:closedmfeqI} \end{align} \end{subequations} \section*{The Pairwise Reproduction Number and Infectious Times} \medskip In \cite{prl}, a newly introduced basic reproduction-like number is defined for fixed-length infectious periods as \begin{equation} \mathcal{R}_0^{p}:=\frac{n-1}{N}\left(1-e^{-\tau \sigma}\right)[S]_0, \label{eq:fixed_time_R0} \end{equation} which appears also in equation~\eqref{finalsize}. It has also been shown that for arbitrary infectious periods, the basic reproduction number of the pairwise model is \begin{equation} \mathcal{R}_0^p=\frac{n-1}{N}\left(1-\mathcal L[f_\mathcal{I}](\tau)\right)[S]_0,\label{eq:genR0p} \end{equation} where $\mathcal L[\cdot]$ is the Laplace transform and $f_\mathcal{I}$ is the probability density function of the recovery process given by the random variable $\mathcal{I}$. Numerical tests and analytical results have both confirmed that, in general, the following implicit relation for the final epidemic size holds: \begin{align} \frac{s_\infty^{\frac{1}{n}}-1}{\frac{1}{n-1}}&=\mathcal{R}^p_0\left(s_\infty^{\frac{n-1}{n}}-1\right) =\frac{n-1}{N}\left(1-\mathcal L[f_\mathcal{I}](\tau)\right)[S]_0\left(s_\infty^{\frac{n-1}{n}}-1\right). \label{eq:finalsizegenimprel} \end{align} Several important observations can be made. The first is around the interpretation of the Laplace transform of $f_\mathcal{I}$. Let us consider an isolated $S$--$I$ link, and let $\mathcal{E}$ be the exponentially distributed random variable of the time of infection along this link, with parameter $\tau$.
Then the probability of transmission is the same as the probability that infection occurs before recovery, that is \begin{equation}T=P(\mathcal{E}<\mathcal{I})=\int_0^\infty F_\mathcal{E}(y) f_\mathcal{I}(y) dy=\int_0^\infty(1-e^{-\tau y}) f_\mathcal{I}(y) dy=1-\mathcal L[f_\mathcal{I}](\tau).\end{equation} Hence, the Laplace transform has a natural interpretation: it enters the calculation of the probability of transmission across an isolated $S$--$I$ link. The intuitive derivation of $\mathcal{R}^p_0$ follows from considering the rate at which new $S$--$I$ links are created. From~\eqref{eq:closedeqSI}, and focusing on the single positive term on the right hand side, it follows that $S$--$I$ links are created at rate $\frac{\tau(n-1)}{n}\frac{[SS]}{[S]}$, which at time $t=0$ and with a vanishingly small initial number of infected nodes reduces to $\tau (n-1)$. Now, multiplying this by the average lifetime of an $S$--$I$ link, which is $\frac{1-\mathcal L[f_\mathcal{I}](\tau)}{\tau}$ \cite{konyv}, gives the desired threshold value in the limit of $[S] \rightarrow N$ at $t=0$. Notice that while $\mathcal{R}_0$ depends on the expected value only, see \eqref{eq:mfstandardR0}, the pairwise reproduction number \eqref{eq:genR0p} involves the complete density function; thus the average length of the infectious period alone does not determine the reproduction number. As a consequence, the shape of the distribution must be known as precisely as possible. We shall analyse how the basic reproduction number \eqref{eq:genR0p}, which is not only an epidemic threshold but also determines the final size via \eqref{eq:finalsizegenimprel}, depends on the variance of the recovery time distribution.
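This link-level interpretation can be checked directly by simulation. The sketch below (Python; the function name and parameter values are ours, chosen for illustration only) estimates $T=P(\mathcal{E}<\mathcal{I})$ by Monte Carlo for a Markovian recovery with rate $\gamma$, in which case $\mathcal{L}[f_\mathcal{I}](\tau)=\gamma/(\gamma+\tau)$ and hence $T=\tau/(\tau+\gamma)$:

```python
import random

def transmission_prob_mc(tau, gamma, n_samples=200_000, seed=1):
    """Monte Carlo estimate of T = P(E < I) for an isolated S--I link.

    E ~ Exp(tau) is the time until infection along the link,
    I ~ Exp(gamma) is the (here Markovian) recovery time."""
    rng = random.Random(seed)
    hits = sum(rng.expovariate(tau) < rng.expovariate(gamma)
               for _ in range(n_samples))
    return hits / n_samples

tau, gamma = 0.8, 0.5
analytic = 1.0 - gamma / (gamma + tau)   # T = 1 - L[f_I](tau) = tau/(tau+gamma)
estimate = transmission_prob_mc(tau, gamma)
print(analytic, estimate)  # the two values agree to within Monte Carlo error
```

The same check works for any recovery distribution one can sample from, with the analytic value replaced by $1-\mathcal{L}[f_\mathcal{I}](\tau)$.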
In \cite{biomat}, using gamma, lognormal and uniform distributions, we showed that within each of those distribution families, once the mean infectious period is fixed, a smaller variance in the infectious period gives a higher reproduction number and consequently a more severe epidemic. Next we generalize this result without restricting ourselves to special distributions. \section*{Main Result: Relationship Between the Variance and the Reproduction Number} \medskip In this section we give simple conditions which guarantee that smaller variance induces a higher pairwise reproduction number. We consider a random variable $\mathcal{I}$ corresponding to recovery times with probability density function $f_{\mathcal{I}}(t)$, cumulative distribution function $F_{\mathcal{I}}(t)=\int_{0}^{t} f_{\mathcal{I}}(s) ds$, and we shall use the integral function of the CDF, $\mathcal{F}_{\mathcal{I}}(t) := \int_{0}^{t} F_{\mathcal{I}}(s) ds$. Clearly, $\frac{d^2}{dt^2}\mathcal{F}_{\mathcal{I}}(t)=\frac{d}{dt}F_{\mathcal{I}}(t)=f_{\mathcal{I}}(t)$. Moreover, $F_{\mathcal{I}}(0)=\mathcal{F}_{\mathcal{I}}(0)=0.$ \begin{theorem} \label{generalimpact} Consider two random variables $\mathcal{I}_1$ and $\mathcal{I}_2$ such that \begin{equation} \label{eq:E} \mathbb{E}(\mathcal{I}_1)=\mathbb{E}(\mathcal{I}_2)<\infty, \end{equation} and \begin{equation} \label{eq:V} \mathrm{Var}(\mathcal{I}_1)<\mathrm{Var}(\mathcal{I}_2)<\infty. \end{equation} Assume that \begin{equation} \label{eq:M3alt} \lim\limits_{t\rightarrow\infty} t^3 f_{\mathcal{I}_j}(t) = 0, \quad j\in \{1,2\}, \end{equation} and for all $t>0$, \begin{equation} \label{eq:mainassumption} \mathcal{F}_{\mathcal{I}_1}(t)\neq\mathcal{F}_{\mathcal{I}_2}(t) \end{equation} holds. If $\mathcal{I}_1$ and $\mathcal{I}_2$ represent the recovery time distribution, then for the corresponding reproduction numbers the relation $\mathcal{R}_{0,\mathcal{I}_1}^p>\mathcal{R}_{0,\mathcal{I}_2}^p$ holds.
\end{theorem} \begin{proof} Using assumption~\eqref{eq:E}, we deduce \begin{eqnarray*} \int_{0}^{\infty} t\left(f_{\mathcal{I}_1}(t)-f_{\mathcal{I}_2}(t)\right) dt&=&\left[t(F_{\mathcal{I}_1}(t)-F_{\mathcal{I}_2}(t))\right]_0^\infty-\int_{0}^{\infty}(F_{\mathcal{I}_1}(t)-F_{\mathcal{I}_2}(t)) dt \nonumber\\ &=&\lim\limits_{t\rightarrow\infty} t(F_{\mathcal{I}_1}(t)-F_{\mathcal{I}_2}(t))-\left[\mathcal{F}_{\mathcal{I}_1}(t)-\mathcal{F}_{\mathcal{I}_2}(t)\right]_0^\infty\nonumber\\ &\stackrel{[*]}{=}&-\lim\limits_{t\rightarrow\infty}(\mathcal{F}_{\mathcal{I}_1}(t)-\mathcal{F}_{\mathcal{I}_2}(t))=0, \end{eqnarray*} thus \begin{equation} \label{eq:Ealt} \lim\limits_{t\rightarrow\infty}(\mathcal{F}_{\mathcal{I}_1}(t)-\mathcal{F}_{\mathcal{I}_2}(t))=0. \end{equation} To see $[*]$, i.e. $\lim\limits_{t\rightarrow\infty} t(F_{\mathcal{I}_1}(t)-F_{\mathcal{I}_2}(t))=0$, we need some algebraic manipulations: \begin{eqnarray*} \lim\limits_{t\rightarrow\infty} t(F_{\mathcal{I}_1}(t)-F_{\mathcal{I}_2}(t))&=&\lim\limits_{t\rightarrow\infty} \frac{F_{\mathcal{I}_1}(t)-F_{\mathcal{I}_2}(t)}{\frac{1}{t}}\stackrel{\mathrm{L'H}}{=}\lim\limits_{t\rightarrow\infty}\frac{f_{\mathcal{I}_1}(t)-f_{\mathcal{I}_2}(t)}{-\frac{1}{t^2}}\nonumber\\ &=&-\lim\limits_{t\rightarrow\infty} t^2(f_{\mathcal{I}_1}(t)-f_{\mathcal{I}_2}(t))\stackrel{\eqref{eq:M3alt}}{=}0, \end{eqnarray*} where L'H refers to the L'Hospital rule. From assumption~\eqref{eq:V}, we have \begin{eqnarray*} \mathrm{Var}(\mathcal{I}_1)&=&\mathbb{E}(\mathcal{I}_1^2)-(\mathbb{E}(\mathcal{I}_1))^2<\mathbb{E}(\mathcal{I}_2^2)-(\mathbb{E}(\mathcal{I}_2))^2=\mathrm{Var}(\mathcal{I}_2)\nonumber\\ && \stackrel{\eqref{eq:E}}{\Rightarrow}\mathbb{E}(\mathcal{I}_1^2)<\mathbb{E}(\mathcal{I}_2^2), \end{eqnarray*} or equivalently $\int_{0}^{\infty}t^2(f_{\mathcal{I}_1}-f_{\mathcal{I}_2})dt<0$.
We can carry out some calculations on the left-hand side of this inequality: \begin{eqnarray*} \int_{0}^{\infty}t^2(f_{\mathcal{I}_1}-f_{\mathcal{I}_2})dt &=& [t^2 (F_{\mathcal{I}_1}(t)-F_{\mathcal{I}_2}(t))]_0^\infty - 2 \int_{0}^{\infty} t(F_{\mathcal{I}_1}(t)-F_{\mathcal{I}_2}(t)) dt\nonumber\\ &=&\lim\limits_{t\rightarrow\infty}t^2 (F_{\mathcal{I}_1}(t)-F_{\mathcal{I}_2}(t))-2 [t(\mathcal{F}_{\mathcal{I}_1}(t)-\mathcal{F}_{\mathcal{I}_2}(t))]_0^\infty\nonumber\\ &+& 2 \int_{0}^{\infty} \mathcal{F}_{\mathcal{I}_1}(t)-\mathcal{F}_{\mathcal{I}_2}(t)dt\nonumber\\ &\stackrel{[**]}{=}& -2 \lim\limits_{t\rightarrow\infty} t(\mathcal{F}_{\mathcal{I}_1}(t)-\mathcal{F}_{\mathcal{I}_2}(t))+ 2 \int_{0}^{\infty} \mathcal{F}_{\mathcal{I}_1}(t)-\mathcal{F}_{\mathcal{I}_2}(t)dt\nonumber\\ &\stackrel{[**]}{=}& 2\int_{0}^{\infty} \mathcal{F}_{\mathcal{I}_1}(t)-\mathcal{F}_{\mathcal{I}_2}(t)dt, \end{eqnarray*} consequently \begin{equation} \label{eq:Valt} \int_{0}^{\infty} \mathcal{F}_{\mathcal{I}_1}(t)-\mathcal{F}_{\mathcal{I}_2}(t)dt<0. \end{equation} To prove $[**]$, i.e. $\lim\limits_{t\rightarrow\infty}t^2 (F_{\mathcal{I}_1}(t)-F_{\mathcal{I}_2}(t))=\lim\limits_{t\rightarrow\infty} t(\mathcal{F}_{\mathcal{I}_1}(t)-\mathcal{F}_{\mathcal{I}_2}(t))=0$, we have \begin{eqnarray*} \lim\limits_{t\rightarrow\infty} t(\mathcal{F}_{\mathcal{I}_1}(t)-\mathcal{F}_{\mathcal{I}_2}(t))&=&\lim\limits_{t\rightarrow\infty} \frac{\mathcal{F}_{\mathcal{I}_1}(t)-\mathcal{F}_{\mathcal{I}_2}(t)}{\frac{1}{t}} \stackrel{\mathrm{L'H}}{=}\lim\limits_{t\rightarrow\infty}\frac{F_{\mathcal{I}_1}(t)-F_{\mathcal{I}_2}(t)}{-\frac{1}{t^2}}\nonumber\\ &=& -\lim\limits_{t\rightarrow\infty}t^2 (F_{\mathcal{I}_1}(t)-F_{\mathcal{I}_2}(t))\nonumber\\ &\stackrel{\mathrm{L'H}}{=}& \lim\limits_{t\rightarrow\infty}\frac{f_{\mathcal{I}_1}(t)-f_{\mathcal{I}_2}(t)}{\frac{2}{t^3}}=\frac{1}{2}\lim\limits_{t\rightarrow\infty} t^3(f_{\mathcal{I}_1}(t)-f_{\mathcal{I}_2}(t))\nonumber\\ &\stackrel{\eqref{eq:M3alt}}{=}& 0. 
\end{eqnarray*} Since $F_{\mathcal{I}}(t)\geq0$ for $t\geq0$ and $F_{\mathcal{I}}$ is monotone increasing, the integral function of the CDF, $\mathcal{F}_{\mathcal{I}}(t)$, is monotone increasing and convex. The continuous function $\mathcal{F}_{\mathcal{I}_1}-\mathcal{F}_{\mathcal{I}_2}$ vanishes at $t=0$ but, by \eqref{eq:mainassumption}, at no $t>0$; hence it has constant sign on $(0,\infty)$, and \eqref{eq:Valt} forces this sign to be negative. We obtain \begin{equation} \label{eq:dominance} \mathcal{F}_{\mathcal{I}_1}(t)<\mathcal{F}_{\mathcal{I}_2}(t), \end{equation} for all $t>0$. Clearly, for $\mathcal{R}_{0,\mathcal{I}_1}^p>\mathcal{R}_{0,\mathcal{I}_2}^p$, it is enough to prove that $\mathcal{L}[f_{\mathcal{I}_1}](\tau)<\mathcal{L}[f_{\mathcal{I}_2}](\tau)$, i.e. $\int_{0}^{\infty}e^{-\tau t}(f_{\mathcal{I}_1}(t)-f_{\mathcal{I}_2}(t))dt<0.$ First, we perform some algebraic manipulation on the left-hand side: \begin{eqnarray*} \int_{0}^{\infty}e^{-\tau t}(f_{\mathcal{I}_1}(t)-f_{\mathcal{I}_2}(t))dt&=&[e^{-\tau t}(F_{\mathcal{I}_1}(t)-F_{\mathcal{I}_2}(t))]_0^\infty\nonumber\\ && +\tau \int_{0}^{\infty}e^{-\tau t}(F_{\mathcal{I}_1}(t)-F_{\mathcal{I}_2}(t)) dt\nonumber\\ &=& \tau [e^{-\tau t}(\mathcal{F}_{\mathcal{I}_1}(t)-\mathcal{F}_{\mathcal{I}_2}(t))]_0^\infty\nonumber\\ && +\tau^2 \int_{0}^{\infty}e^{-\tau t}(\mathcal{F}_{\mathcal{I}_1}(t)-\mathcal{F}_{\mathcal{I}_2}(t)) dt\nonumber\\ &\stackrel{\eqref{eq:Ealt}}{=}& \tau^2 \int_{0}^{\infty}e^{-\tau t}(\mathcal{F}_{\mathcal{I}_1}(t)-\mathcal{F}_{\mathcal{I}_2}(t)) dt. \end{eqnarray*} In conclusion, we have \begin{equation} \tau^2 \int_{0}^{\infty}e^{-\tau t}(\mathcal{F}_{\mathcal{I}_1}(t)-\mathcal{F}_{\mathcal{I}_2}(t)) dt \stackrel{\eqref{eq:dominance}}{<}0, \label{remarkhoz} \end{equation} therefore $\mathcal{L}[f_{\mathcal{I}_1}](\tau)<\mathcal{L}[f_{\mathcal{I}_2}](\tau)$, which gives $\mathcal{R}_{0,\mathcal{I}_1}^p>\mathcal{R}_{0,\mathcal{I}_2}^p$.
\end{proof} \begin{remark} While one can easily construct a specific example for which the technical condition \eqref{eq:M3alt} does not hold, it is satisfied by all epidemiologically meaningful distributions, since extremely long infectious periods do not occur in epidemics. It trivially holds for distributions with compact support, and even for power law distributions with finite variance. \end{remark} \begin{corollary} \label{cor:fsize} Assume that the conditions of Theorem 1 hold. Then the infectious period distribution with smaller variance induces a larger epidemic outbreak. \end{corollary} \begin{proof} Let $z=s_\infty^{\frac{1}{n}}$. Then, \eqref{eq:finalsizegenimprel} can be written as \begin{align} (z-1)(n-1)&=\mathcal{R}^p_0\left(z^{n-1}-1\right), \end{align} which, since we are interested in the root $z\in (0,1)$, simplifies to \begin{align} (n-1)&=\mathcal{R}^p_0\left(z^{n-2}+\dots+z+1\right). \end{align} Clearly, a larger $\mathcal{R}^p_0$ results in a smaller $z$, which means a smaller $s_\infty$ and thus a larger epidemic. Combining this with Theorem 1 yields the result. \end{proof} \section*{Relation to Stochastic Ordering} \medskip In a very recent work \cite{wilkinson2}, Wilkinson and Sharkey considered a general class of network-based stochastic epidemic models, and proved a monotonic relationship between the variability of the infectious period and the probability that the infection will spread to an arbitrary subset of the population by time $t$. Below we show that, while the work \cite{wilkinson2} was done in a different context, the main conclusion is essentially the same as what follows from our main result. In \cite{wilkinson2}, the variability was represented by the convex order of the distributions of infectious periods.
Given two random variables $\mathcal{I}_1$ and $\mathcal{I}_2$ whose expectations exist, such that \begin{equation} \mathbb{E}(\phi(\mathcal{I}_1)) \leq \mathbb{E}(\phi(\mathcal{I}_2)) \hbox{\quad for all convex functions \quad } \phi: \mathbb{R} \to \mathbb{R}, \end{equation} $\mathcal{I}_1$ is said to be smaller than $\mathcal{I}_2$ in the convex order, denoted by $\mathcal{I}_1 \leq_{cx} \mathcal{I}_2$; see the monograph \cite{order} for a comprehensive description of various stochastic orders, their properties and relations. \begin{theorem} Assume that $\mathcal{I}_1 \leq_{cx} \mathcal{I}_2$, and the technical condition \eqref{eq:M3alt} holds. Then, $\mathcal{R}_{0,\mathcal{I}_1}^p>\mathcal{R}_{0,\mathcal{I}_2}^p$ holds. \end{theorem} \begin{proof} From the convexity of $\phi(x)=x$ and $\phi(x)=-x$, \eqref{eq:E} follows, and the convexity of $\phi(x)=x^2$ yields \eqref{eq:V}. From the convexity of $\phi_a(x)=(x-a)_+$, one can deduce, as in Theorem 3.A.1 of \cite{order}, that $\mathcal{I}_1 \leq_{cx} \mathcal{I}_2$ if and only if $\mathcal{F}_{\mathcal{I}_1}(t)\leq\mathcal{F}_{\mathcal{I}_2}(t)$ for all $t>0$. Now, instead of the strict inequality \eqref{eq:dominance} we only have a non-strict one, but from \eqref{eq:V} the two functions are not identical, hence analogously to the proof of Theorem 1 we can conclude \eqref{remarkhoz}, which completes the proof. \end{proof} \begin{remark} The distribution $\mathcal{I}_1$ is said to be smaller than $\mathcal{I}_2$ in the Laplace transform order, denoted by $\mathcal{I}_1 \leq_{Lt} \mathcal{I}_2$, if $\mathbb{E}(e^{-\tau\mathcal{I}_1}) = \mathcal{L}[f_{\mathcal{I}_1}](\tau) \geq \mathcal{L}[f_{\mathcal{I}_2}](\tau) = \mathbb{E}(e^{-\tau\mathcal{I}_2})$ for all $\tau>0.$ Now clearly the ordering of the reproduction numbers $\mathcal{R}_{0,\mathcal{I}}^p$ is tied to the Laplace order of the underlying distributions, and Theorem 1 can be viewed as providing easily verifiable sufficient conditions for Laplace ordering.
There are examples in the literature (see \cite{counterexample}) showing that the Laplace transform order is different from the convex order. Hence, the pairwise reproduction number approach can be applied in some situations that are not covered by the convex order approach. \end{remark} \section*{Implications for Heterogeneous Degree Distributions} \medskip In a Configuration-Model network, given a random $S$--$I$ link, we expect the susceptible individual to have degree $k$ with probability proportional to $k[S_k]$, where $[S_k]$ is the number of susceptible individuals with degree $k$. Repeating our earlier derivation of equation~\eqref{eq:genR0p} for $\mathcal{R}_0^p$ in the homogeneous network case, we anticipate that for fixed duration $\sigma$, \begin{equation} \mathcal{R}_0^p = \sum_k (k-1) (1-e^{-\tau \sigma}) \frac{k [S_k](0)}{\ave{k}N}, \end{equation} where $\ave{k}$ is the average degree. Extending this to the case of heterogeneous infection duration, we find \begin{equation} \mathcal{R}_0^p = (1 -\mathcal{L}[f_{\mathcal{I}}](\tau)) \frac{\sum_k (k-1) k [S_k](0)}{N\ave{k}}. \label{eq:hetR0p} \end{equation} It can be shown~\cite{Perc,kenah:second,newman2002spread} that the final number of degree $k$ individuals infected is given by \begin{equation} [S_k]_\infty = [S_k]_0 \theta_\infty^k , \label{eq:Skinf} \end{equation} where the following implicit relation holds: \begin{equation} \theta_\infty = \mathcal{L}[f_{\mathcal{I}}](\tau) + (1-\mathcal{L}[f_{\mathcal{I}}](\tau)) \frac{\sum_k k [S_k]_0\theta_\infty^{k-1}}{N\ave{k}} . \label{eq:Thetainf} \end{equation} Here $\theta_\infty$ is a per-edge measure of the probability of \emph{not} being infected, so an initially susceptible individual with degree $k$ remains susceptible with probability $\theta_\infty^k$. The role of $\theta_\infty$ is the same as that of $s_\infty^{1/n}$ in Eq.~\eqref{eq:homogeneous_final}.
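The implicit relation for $\theta_\infty$ can be solved by simple fixed-point iteration. The sketch below (Python; the function names and the example degree distribution are ours, purely illustrative) works in the nearly-all-susceptible limit and weights degree class $k$ by $k\,[S_k]_0$, as appropriate for a randomly chosen edge, then returns the attack rate $1-\sum_k [S_k]_0\,\theta_\infty^k/N$:

```python
import math

def theta_infinity(degree_counts, L, tol=1e-12, max_iter=10_000):
    """Fixed-point iteration for theta_infty.

    degree_counts: dict {k: S_k(0)}, initially susceptible nodes per degree.
    L: Laplace transform L[f_I](tau) of the infectious-period density,
       so 1 - L is the per-link transmission probability."""
    N = sum(degree_counts.values())
    mean_k = sum(k * s for k, s in degree_counts.items()) / N
    theta = 0.5  # any starting value in (0, 1)
    for _ in range(max_iter):
        g = sum(k * s * theta ** (k - 1) for k, s in degree_counts.items())
        new = L + (1.0 - L) * g / (N * mean_k)
        if abs(new - theta) < tol:
            return new
        theta = new
    return theta

def final_size(degree_counts, L):
    """Attack rate 1 - sum_k S_k(0) * theta^k / N."""
    th = theta_infinity(degree_counts, L)
    N = sum(degree_counts.values())
    return 1.0 - sum(s * th ** k for k, s in degree_counts.items()) / N

# Illustrative degree distribution (the numbers are arbitrary):
counts = {2: 2000, 4: 5000, 8: 3000}
# Fixed infectious period sigma with transmission rate tau: L = exp(-tau*sigma)
tau, sigma = 0.3, 1.5
print(final_size(counts, math.exp(-tau * sigma)))
```

Since the iteration map is increasing in $\theta$, it converges to the relevant root; below threshold it returns $\theta_\infty=1$ and a vanishing attack rate.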
Note that in $\mathcal{R}_0^p$, the terms capturing the distribution of infection durations separate from the terms capturing the distribution of degrees. The ordering of $\mathcal{R}_0^p$ as the infection duration distribution changes is therefore independent of the degree distribution, so the ordering of $\mathcal{R}_0^p$ is the same as that found for regular networks. The final size depends monotonically on the Laplace transform of $f_{\mathcal{I}}$, and so the results about the ordering of final sizes in regular networks carry over to heterogeneous networks as well. \section*{Numerical Simulations and Conclusion} \medskip The role of the shape of the distribution of infectious periods in disease spread has been of interest to modellers for some time \cite{wallinga}. Our previous works already indicated that for pairwise models of network epidemics, not only the mean but also higher order properties of the distribution of the recovery times have an impact on the outcome of the epidemic. We derived useful threshold quantities for non-Markovian recovery in \cite{prl}. In \cite{biomat}, we showed that for particular distribution families (typically two-parameter families such as the gamma, lognormal and uniform distributions), smaller variance leads to a higher reproduction number within the same family when the mean is fixed. Our new result in this study allows us to make comparisons between distributions of different kinds. To show the usefulness of Theorem 1, as an example, we consider $\mathcal{I}_1 \sim \mathrm{Exp}(\gamma)$ and $\mathcal{I}_2 \sim \mathrm{Fixed}\left(\frac{1}{\gamma}\right)$, i.e. $f_{\mathcal{I}_1}(t)=\gamma e^{-\gamma t}, t\geq 0$ and $f_{\mathcal{I}_2}(t)=\delta\left(t-\frac{1}{\gamma}\right)$, where $\delta(t)$ denotes the Dirac delta function.
Clearly, we obtain $\mathcal{F}_{\mathcal{I}_1}(t)=t+\frac{1}{\gamma}e^{-\gamma t}-\frac{1}{\gamma}$ and $\mathcal{F}_{\mathcal{I}_2}(t)=(t-\frac{1}{\gamma})_+$, thus there is no $t_0>0$ such that $\mathcal{F}_{\mathcal{I}_1}(t_0)=\mathcal{F}_{\mathcal{I}_2}(t_0)$. Since $\mathbb{E}(\mathcal{I}_1)=\mathbb{E}(\mathcal{I}_2)=\frac{1}{\gamma}$, $\frac{1}{\gamma^2}=\mathrm{Var}(\mathcal{I}_1)>\mathrm{Var}(\mathcal{I}_2)=0$ and the other conditions of Theorem \ref{generalimpact} are satisfied, we find $\mathcal{R}_{0,\mathcal{I}_1}^p<\mathcal{R}_{0,\mathcal{I}_2}^p$. \begin{table}[h] \begin{tabular}{||c c c c||} \hline Distribution & Parameters & Mean & Variance \\ [0.5ex] \hline\hline Fixed & 3/2 & 3/2 & 0 \\ \hline Uniform & U(1,2) & 3/2 & $1/12\approx 0.083$\\ \hline Gamma & \text{scale} =0.5, \ \text{shape} = 3 & 3/2 & 0.75\\ \hline Exponential & 2/3 & 3/2 & 9/4=2.25 \\ \hline Lognormal & $\sigma = 1$, \ $\mu=\ln(3/2)-1/2$ & 3/2 & 3.866 \\ \hline Weibull & \text{scale} = 1, \ \text{shape} = 0.6014 & 3/2 & 6.914 \\ \hline \end{tabular} \vspace{0.5cm} \caption{Details of all the distributions of the infection times used for the explicit stochastic network simulations.}\label{tab:distparammeanvar} \end{table} We have carried out extensive numerical simulations to test the final epidemic size formula \eqref{eq:finalsizegenimprel}, with $\mathcal{R}_0^p$ taken from \eqref{eq:hetR0p}, for fixed, uniform, gamma, exponential, lognormal and Weibull distributed infection times on regular (see Fig.~\ref{fig:regularsize}), Erd\H{o}s-R\'enyi (see Fig.~\ref{fig:ERsize}) and truncated scale-free (see Fig.~\ref{fig:PLsize}) networks. It is worth noting that the same final size relation can be obtained by combining equations \eqref{eq:Skinf} and \eqref{eq:Thetainf} with the expression \eqref{eq:hetR0p} for $\mathcal{R}_0^p$ for heterogeneous degree distributions. The agreement between the analytical final epidemic size and explicit stochastic network simulations is excellent for all distributions and networks.
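The Exp--Fixed comparison above can also be reproduced numerically from the Laplace transforms alone (a Python sketch; $n$, $\tau$ and $[S]_0/N$ are illustrative values and the function names are ours):

```python
import math

def laplace_exp(tau, gamma):
    """L[f_I](tau) for I ~ Exp(gamma): mean 1/gamma, variance 1/gamma^2."""
    return gamma / (gamma + tau)

def laplace_fixed(tau, sigma):
    """L[f_I](tau) for a fixed infectious period of length sigma: variance 0."""
    return math.exp(-tau * sigma)

def pairwise_R0(n, laplace_value, S0_over_N=1.0):
    """R_0^p = (n-1) * (1 - L[f_I](tau)) * [S]_0 / N."""
    return (n - 1) * (1.0 - laplace_value) * S0_over_N

gamma = 2.0 / 3.0        # both distributions have mean 3/2, as in the table
n, tau = 6, 0.5          # illustrative network degree and transmission rate
L_exp = laplace_exp(tau, gamma)
L_fix = laplace_fixed(tau, 1.0 / gamma)
# Jensen's inequality: E[exp(-tau*I)] >= exp(-tau*E[I]), so L_exp > L_fix,
# and the zero-variance (fixed) infectious period gives the larger R_0^p.
print(pairwise_R0(n, L_exp), pairwise_R0(n, L_fix))
```

The printed pair shows $\mathcal{R}_{0,\mathcal{I}_1}^p<\mathcal{R}_{0,\mathcal{I}_2}^p$ directly, in line with Theorem~\ref{generalimpact}.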
The parameters, mean and variance of the distributions are given in Table~\ref{tab:distparammeanvar}. Several observations can be made. In Figs.~\ref{fig:regularsize}, \ref{fig:ERsize} and \ref{fig:PLsize} one can note that the epidemic threshold depends heavily on the distribution of the infectious period. While all distributions have the same mean, they differ in terms of their variance. In fact, the variances of the distributions are ordered as shown in Table~\ref{tab:distparammeanvar}. Based on Theorem~\ref{generalimpact} and Corollary~\ref{cor:fsize} we know that for a fixed transmission rate $\tau$ and for infectious period distributions with the same mean, the distribution with the higher variance will lead to a smaller $\mathcal{R}_0^p$ and hence a smaller attack rate. This ordering of the variances in Table \ref{tab:distparammeanvar} is accurately reflected in all attack rate versus $\tau$ plots. Moreover, the insets in Figs.~\ref{fig:regularsize}, \ref{fig:ERsize} and \ref{fig:PLsize} show that the final epidemic size relation in terms of $\mathcal{R}_0^p$ is universal, independent of how the infectious periods are distributed. For the truncated scale-free networks in Fig.~\ref{fig:PLsize}, the attack rate behaves differently, but the general analytical final epidemic size relation remains extremely accurate. Obviously, high degree heterogeneity leads to large variance in the degree distribution, and this makes the value of $\mathcal{R}_0^p$ large and above threshold even for small values of $\tau$. Figures~\ref{fig:regularevol}, \ref{fig:ERevol} and \ref{fig:PLevol} show the initial growth of the epidemic. The relation between variance and attack rate seems to translate into a straightforward association between variance and initial growth rate: distributions with higher variance lead to slower initial growth. This is not always the case, since $\mathcal{R}_0^p$ is a generation-based rather than a time-based measure.
However, here the means of the distributions and the transmission rates are identical, and thus the ordering seems to carry through. As a next step, one could consider the extension of $\mathcal{R}_0^p$ and the final size formula to epidemics where both the transmission and recovery processes are non-Markovian. Such results already exist \cite{Neil}, but there an edge-based compartmental model (EBCM) was used. It would also be worthwhile to explore further the applicability of this newly introduced pairwise reproduction number, given that it has lent itself to the derivation of a number of analytical results and fits naturally with the network and contact concepts. In particular, one could explore how it might be measured in practice and how its value translates into control measures. \begin{backmatter} \section*{Acknowledgements} This article is a significantly extended version of R\"ost G., Kiss I. Z., Vizi Z., Variance of Infectious Periods and Reproduction Numbers for Network Epidemics with Non-Markovian Recovery, Progress in Industrial Mathematics at ECMI 2016. ZsV was supported by the EU-funded Hungarian grant EFOP-3.6.2-16-2017-00015. GR was supported by Hungarian National Research Fund Grant NKFI FK 124016 and MSCA-IF 748193. JCM was supported by Global Good.
\section{Introduction} A large fraction of early-type main sequence stars are associated with ultracompact H{\sc ii}-Re\-gions (UCH{\sc ii}s, Wood \& Churchwell \cite{wood}). These are characterized by large electron densities $N_{\rm e} \ge 10^4$ $\mbox{cm}^{-3}$, sizes $< 0.1$~pc and temperatures $T_{\rm e} \approx 10\,000$~K. The overpressure in these regions should lead to expansion and dissipation on time scales of a few thousand years. Considering the expected lifetimes of massive stars of several million years, however, the high abundance of observed UCH{\sc ii}s translates into UCH{\sc ii} mean lifetimes of several $10^5$ years. This contradiction can be resolved in several ways: 1) the UCH{\sc ii}s could be confined by high pressure in their vicinity, 2) they could be confined by gravitationally infalling material (Reid et al. \cite{reid}), or 3) there could exist a process which continuously ``feeds'' the UCH{\sc ii}s with matter. High pressures certainly can be expected in the highly turbulent molecular cloud cores, which are the birthplaces of young massive stars (De~Pree et al.~\cite{depree}, Garc\'{\i}a-Segura \&\ Franco~\cite{segura}, Xie et al. \cite{xie}). Still, it is not clear how turbulence in a cold clumpy medium can contain the warm ($T \sim 10^4$~K), high density ionized material for extended periods of time -- many of the technical details of this proposal need to be worked out. The photoevaporating disk model proposed by Hollenbach et al. (\cite{hollenbach93}) and Yorke \&\ Welz (\cite{yowe93}) offers an attractive alternative. A circumstellar disk around a luminous OB star is continuously photoionized by the central source. The existence of a powerful stellar wind can modify the quantitative details of this model, but the basic result remains the same: long-lived UCH{\sc ii}s are the necessary consequence of disks around hydrogen-ionizing sources. In a subsequent paper by Hollenbach et al.
(\cite{hollenbach94}) the quasi-steady state structure of disks a\-round ionizing sources with winds has been calculated \mbox{(semi-)} analytically, and in Yorke (\cite{yorke95}), Yorke \&\ Welz (\cite{yowe96}, hereafter Paper I), and Richling \&\ Yorke (\cite{riyo}, hereafter Paper II) the evolution of such circumstellar disks has been followed numerically under a variety of conditions. In Paper I it has been stressed that the phenomenon of disks in the process of photoionization is not restricted to the (pre\-sum\-ably highly symmetrical) case of circumstellar disks a\-round OB stars. Disk formation is a common by-product of the star formation process. Because OB stars seldom form in isolation, close disk-bearing companions to a powerful source of ionizing {\sc uv} radiation and a stellar wind should be common. Strongly asymmetric UCH{\sc ii}s should result. Wood \&\ Churchwell~(\cite{wood}) observed 75 UCH{\sc ii}s at $\lambda=2$~cm and 6~cm with a spatial resolution of 0\mysec4 using the VLA telescope and classified them by their spatial morphological structure into several types: \begin{itemize} \item cometary shaped (20\%), \item core-halo (16\%), \item shell type (4\%), \item irregular or multiply peaked (17\%) and \item spherical or unresolved (43\%). \end{itemize} In order to interpret these observations in light of the photoionized disk models, further work must be done in refining the hydrodynamical models for the asymmetric morphological configurations expected when a disk is ionized by external sources. Diagnostic radiation transfer calculations of these numerical models are necessary for a quantitative comparison. The goal of the present investigation is to determine spectral characteristics and to calculate the expected isophote maps of the {\em symmetrical} UCH{\sc ii}s which result from circumstellar disks around OB stars. We are restricted by the limited number of star/disk configurations which have been considered to date.
We discuss in detail the physical (Sect.~\ref{physmod}) and numerical (Sect.~\ref{nummod}) models of radiation transfer which we employed. The results for selected hydrodynamical models from Papers I and II are discussed in Sect.~\ref{results} and compared to observations of specific sources in Sect.~\ref{comparison}. We summarize our main conclusions in Sect.~\ref{sect:conclusions}. \section{The physical model} \label{physmod} In Papers I and II of this series the time dependent photo\-e\-vap\-o\-ration of a 1.6~M$_{\odot}$ circumstellar disk around an 8.4~M$_{\odot}$ star was calculated under a variety of physical conditions. The ionizing flux of the central source and its ``hardness'' as well as the stellar wind parameters (mass loss rate and terminal velocity) were varied. States of these models at selected evolutionary times are the basis for our diagnostic radiation transfer calculations. \subsection{Continuum transport} To determine the continuum spectral energy distribution (SED) over a frequency range from the radio region up to the optical, we take into account three major radiation processes: thermal free-free radiation (i.e.\ bremsstrahlung of electrons moving in the potential of protons in the H\,{\sc ii}-region), thermal dust radiation and the radiation emitted from the photosphere of an embedded source. \subsubsection{Free-free radiation} For this process we adopt the approximation for the emission coefficient (Spitzer~\cite{spitzer}): \begin{equation} \label{eq:kapff} \epsilon_{\rm ff}=\frac{8}{3} \left( \frac{2 \pi}{3} \right)^{\frac{1}{2}} \frac{e^6}{m^2c^3} \left( \frac{m}{kT_{\rm e}} \right)^{\frac{1}{2}} g_{\rm ff} N_{\rm e} N_{\rm p} \exp \left(-\frac{h\nu}{kT_{\rm e}} \right) . \end{equation} Here, $N_{\rm e}$ and $N_{\rm p}$ are the particle densities of electrons and protons. All other symbols have their usual meanings.
We approximate the Gaunt factor $g_{\rm ff}$ for a non-relativistic plasma by: \begin{equation} g_{\rm ff}=\mbox{max} \left\{ \frac{3^{1/2}}{\pi} \left( \ln \frac{\left(2kT_{\rm e}\right)^{3/2}}{\pi e^2 \nu m^{1/2}}-\frac{5 \gamma}{2}\right) ,1 \right\} , \end{equation} where $\gamma$ ($\approx 0.577$) is Euler's constant. Assuming the validity of Kirchhoff's Law $S_{\nu} = \epsilon_\nu / \kappa_\nu = B_{\nu}(T_{\rm e})$, the absorption coefficient for thermal free-free radiation can be written ($h\nu \ll kT_{\rm e}$): \begin{equation} \kappa_\nu^{\rm ff} = \frac{4 (2\pi)^{1/2}e^6N_{\rm e} N_{\rm p} g_{\rm ff}}{(3 m k)^{3/2} c T_{\rm e}^{3/2} \nu^2} . \end{equation} \subsubsection{Dust emission} We adopt the `dirty ice' dust model developed by Preibisch et al.~(\cite{preib}), which includes two refractory components, amorphous carbon grains (aC) and silicate grains, as well as volatile ice coatings on the surface of the silicate grains at temperatures below 125~K (Core Mantle Particles, CMP's). The icy coatings contain 7\%\ of the available amorphous carbon and consist of water and ammonia ice in a volume ratio of 3:1. At temperatures above 125~K the silicate core and approximately 11 amorphous carbon particles are released into the dusty gas for each CMP. In Table~\ref{tab:dust} the sublimation temperature $T_{\rm sub}$, the mean radius $\bar{a}_{\rm d}$ and the number of grains per gram gas $n_{\rm d}$ are listed for the different species.
\begin{table}[tb] \caption[Staubparameter]{Parameters for the grain species used in the dust model of Preibisch et al.~(\cite{preib}).} \begin{flushleft} \begin{tabular}{llll} \hline\noalign{\smallskip} Grain Species & $T_{\rm sub}/[\mbox{K}]$ & $\log \bar{a}_{\rm d}/[\mbox{cm}]$ & $\log n_{\rm d}/[\mbox{g}^{-1}]$ \\ \noalign{\smallskip} \hline\noalign{\smallskip} aC & 2000 & $-$6.024 & 14.387 \\ Silicate & 1500 & $-$5.281 & $-$ \\ CMP & 125 & $-$5.222 & 12.218 \\ \noalign{\smallskip} \hline \end{tabular} \label{tab:dust} \end{flushleft} \end{table} The absorption coefficient [${\rm cm}^{-1}$] for the individual dust components is given by: \begin{equation} \kappa_\nu^{\rm d} = n_{\rm d} \rho \pi \bar{a}_{\rm d}^2 Q^{\rm abs}_{\rm d,\nu} \; , \end{equation} where the mean absorption efficiency $Q^{\rm abs}_{\rm d,\nu}$ for grain type ``d'' has been determined using Mie theory for spherical grains of a given size distribution. Figure~\ref{fig:dustparms} displays the absorption efficiencies for the different dust components as a function of frequency. Each dust component's contribution to the source function due to thermal emission $S_{\nu}^{\rm d}$ is also calculated under the assumption that $S_{\nu}^{\rm d} = B_{\nu}(T_{\rm d})$. \begin{figure} \begin{center} \epsfig{file=figure1.ps,height=5.02cm} \end{center} \caption[]{Mean absorption efficiencies for the different dust components. Solid line: amorphous carbon, dotted line: silicate, dashed line: CMP's.} \label{fig:dustparms} \end{figure} \subsubsection{Net continuum absorption and emission} Both emission processes mentioned above occur simultaneous\-ly within the same volume. Thus the net absorption coefficient and source function are: \begin{equation} \kappa_\nu = \sum_{\rm d} \kappa^{\rm d}_\nu + \kappa^{\rm ff}_\nu \end{equation} \begin{equation} S_\nu = \frac{1}{\kappa_\nu} \left( \sum_{\rm d} \kappa^{\rm d}_\nu B_\nu(T_{\rm d}) + \kappa^{\rm ff}_\nu B_\nu(T_{\rm e}) \right) . 
\end{equation} \subsection{Forbidden lines} In order to calculate profiles of the forbidden lines for the elements oxygen and nitrogen ([O\,{\sc ii}] 3726, [O\,{\sc iii}] 5007 and [N\,{\sc ii}] 6584), we adopt the following procedure. First, the equilibrium ionization structure of these elements is calculated over the volume of consideration. Next, the occupation densities of me\-ta\-stable levels $N_{\rm i}$ due to collisional excitation by electrons are determined. We take into account Doppler shifts due to bulk gas motions and thermal Doppler broadening to calculate the profile function $\phi$: \begin{equation} \phi(\nu) = \frac{1}{\sqrt{\pi} \Delta \nu_{\rm D}} \exp \left[ -\left( \frac{\nu - \tilde{\nu}}{\Delta \nu_{\rm D}} \right)^2 \right] , \end{equation} with the thermal Doppler width: \begin{equation} \label{eq:doppler} \Delta \nu_{\rm D} = \frac{\tilde{\nu}}{c}\sqrt{\frac{2RT_{\rm e}}{\mu}} . \end{equation} Here $R$ is the gas constant, $\mu$ the atomic weight of the relevant ion, and $\tilde{\nu} = \nu_{\rm 0}(1+v_{\rm R}/c)$ is the transition frequency $\nu_{\rm 0}$ Doppler-shifted by the radial velocity $v_{\rm R}$ of the gas relative to the observer. The emission coefficient of the transition $k \rightarrow j$, which enters into the equation of radiative transfer, is then given by: \begin{equation} \label{eq:lineemis} \epsilon_{\rm L}(\nu) = \frac{1}{4 \pi} N_{\rm k} A_{\rm kj} h \nu_{\rm 0} \phi(\nu) = \tilde{\epsilon}_{\rm L} \cdot \phi(\nu) , \end{equation} where $A_{\rm kj}$ is the Einstein coefficient for spontaneous emission. Note that we have neglected radiative excitation and stimulated emission in this approximation.
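The profile function and Doppler width above translate directly into code. A minimal sketch in CGS units (the rest frequency and temperature used in the check are illustrative, not model values):

```python
import math

C_L = 2.99792458e10   # speed of light [cm/s]
R_GAS = 8.31446e7     # gas constant [erg/(mol K)]

def doppler_width(nu_tilde, t_e, mu):
    """Thermal Doppler width; mu is the atomic weight of the relevant ion."""
    return nu_tilde / C_L * math.sqrt(2.0 * R_GAS * t_e / mu)

def shifted_frequency(nu_0, v_r):
    """Transition frequency Doppler-shifted by the radial velocity v_r [cm/s]."""
    return nu_0 * (1.0 + v_r / C_L)

def profile(nu, nu_tilde, dnu_d):
    """Normalized Gaussian profile function phi(nu) [1/Hz]."""
    return math.exp(-((nu - nu_tilde) / dnu_d) ** 2) / (math.sqrt(math.pi) * dnu_d)
```

The profile is normalized so that $\int \phi(\nu)\,{\rm d}\nu = 1$, which keeps $\tilde{\epsilon}_{\rm L}$ the frequency-integrated emission coefficient.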
\subsubsection{Ionization equilibrium} The equations for ionization equilibrium for two neighboring ionization stages, ${\rm r}$ and ${\rm r}+1$, are: \begin{eqnarray} \label{eq:ionglg} N^{\rm r} \left[ \int_{\nu_{\rm I}}^\infty f_\nu \sigma_\nu^{\rm r} \mbox{d} \nu +N_{\rm e} q^{\rm r}+N_{\rm p} \delta^{\rm r} \right] = \nonumber \\ N^{{\rm r}+1} \left[ N_{\rm e} \left( \alpha_{\rm R}^{\rm r}+\alpha_{\rm D}^{\rm r} \right) + N_{{\rm H}^{\rm 0}} \delta'^{\rm r} \right] . \end{eqnarray} We solve these equations simultaneously for the $N^{\rm r}$ up to the ionization stage $r=3$ for both oxygen and nitrogen. \paragraph{Radiative ionization.} The rate of radiative ionization is calculated from the flux of incident photons $f_\nu$ and the absorption cross section $\sigma_\nu^r$ integrated over all ionizing frequencies. We use the radiation field of the central source and neglect scattering to determine $f_\nu$: \begin{equation} f_\nu = \frac{1}{h\nu} \frac{B_\nu(T_*) R_*^2 \pi}{4 \pi R^2} \exp (-\tau) . \end{equation} An analytical expression for the absorption cross section $\sigma_\nu^r$ is given in Henry~(\cite{henry}). \paragraph{Collisional ionization.} This ionization process is important in hot plasmas, where the mean kinetic energy of the electrons is comparable to the ionization potentials of the ions. N\,{\sc i}, for example, has an ionization potential of 14.5~eV; the corresponding Boltzmann temperature is $\sim$~170\,000~K. The coefficient for collisional ionization $q^{\rm r}$ is approximated by the analytical expression in Shull \&\ van Steenberg~(\cite{shull}). \paragraph{Radiative recombination.} This is the inverse process to radiative ionization. For the recombination coefficient $\alpha_{\rm R}$ we use the formula given in Aldrovandi \&\ Pequinot~(\cite{aldro1}, \cite{aldro2}).
\paragraph{Dielectronic recombination.} The probability for recombination is enhanced when the electron being captured has a kinetic energy equal to the energy necessary to excite a second electron in the shell of the capturing ion. The density of excited levels in the term scheme of the ions grows with energy. Thus, this process becomes more and more important with increasing temperature. We use two analytical expressions for $\alpha_{\rm D}$: one for temperatures between 2000~K and 60\,000~K (Nussbaumer \&\ Storey~\cite{nuss}) and one for higher temperatures (Shull \&\ van Steenberg~\cite{shull}). \paragraph{Charge exchange.} \label{sect:chargeex} The exchange of electrons during encounters of atoms and ions, e.g. $N^{++}+H^{\rm 0} \rightarrow N^++H^+$ is also important. Arnaud \&\ Rothenflug~(\cite{arnaud}) give an expression for the coefficients $\delta'^{\rm r}$. Special care is necessary in the case of the reaction $O^+ + H^{\rm 0} \rightleftarrows O^{\rm 0}+H^+$. Due to the similarity of the ionization energies of hydrogen and oxygen ($\Delta E=0.19$~eV) the backward reaction is also very effective. At sufficiently high electron temperatures this leads to the establishment of an ionization ratio $ N_{O^{\rm 0}}/N_{O^+} \approx (9/8) N_{H^{\rm 0}}/N_{H^+} $, even in the absence of ionizing radiation. We explicitly include both reactions in Eq.~(\ref{eq:ionglg}) via the term $\delta^{\rm r}$. An expression for this coefficient can also be found in Arnaud \&\ Rothenflug~(\cite{arnaud}). 
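Collecting these processes, the equilibrium ratio of two neighboring ionization stages follows directly from Eq.~(\ref{eq:ionglg}). A minimal sketch; the rate coefficients and stellar parameters used in the checks are illustrative placeholders, not values from the models:

```python
import math

# CGS constants (standard values, assumed)
H_P = 6.62607015e-27   # Planck constant [erg s]
K_B = 1.380649e-16     # Boltzmann constant [erg/K]
C_L = 2.99792458e10    # speed of light [cm/s]

def planck(nu, t):
    """Planck function B_nu [erg / (s cm^2 Hz sr)]."""
    return 2.0 * H_P * nu**3 / C_L**2 / math.expm1(H_P * nu / (K_B * t))

def photon_flux(nu, t_star, r_star, big_r, tau):
    """Diluted, attenuated stellar photon flux f_nu [photons / (s cm^2 Hz)],
    as in the radiative-ionization paragraph (black-body central source)."""
    return (planck(nu, t_star) * r_star**2 * math.pi
            / (4.0 * math.pi * big_r**2) * math.exp(-tau) / (H_P * nu))

def ion_ratio(gamma_r, n_e, q_r, n_p, delta_r, alpha_r, alpha_d, n_h0, delta_p_r):
    """Ratio N^{r+1}/N^r: (photo + collisional + charge-transfer ionization)
    balanced by (radiative + dielectronic recombination + charge transfer
    with H0). gamma_r is the frequency-integrated photoionization rate [1/s]."""
    ionize = gamma_r + n_e * q_r + n_p * delta_r
    recomb = n_e * (alpha_r + alpha_d) + n_h0 * delta_p_r
    return ionize / recomb
```

The geometric dilution makes `photon_flux` fall off as $R^{-2}$, so the ionization balance shifts toward the lower stage with distance from the source even before extinction is included.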
\subsubsection{Collisional excitation of metastable states} Neglecting the effects of radiative excitation and stimulated emission, we solve the equations of excitation equilibrium for the population densities $N_{\rm k}$ (sums over all values ``j'' for which the conditions under the summation signs are fulfilled): \begin{eqnarray} N_{\rm k} \left[ N_{\rm e} \sum_{E_{\rm j} \neq E_{\rm k}} q_{\rm kj}+ \sum_{E_{\rm j}<E_{\rm k}} A_{\rm kj} \right] & = & \nonumber \\ N_{\rm e} \sum_{E_{\rm j} \neq E_{\rm k}} N_{\rm j} q_{\rm jk} & + & \sum_{E_{\rm j}>E_{\rm k}} N_{\rm j} A_{\rm jk} , \end{eqnarray} together with the condition $ \sum N_{\rm j} = N_{\rm ges}$, where $N_{\rm ges}$ is the total number density of the ion. We use the formulae for the activation and deactivation coefficients given in e.g.\ Osterbrock~(\cite{oster}): \begin{equation} q_{12}=8.63 \cdot 10^{-6} \, \frac{\Omega_{12}}{\omega_1} \, T_{\rm e}^{-1/2} \exp \left(- \frac{\Delta E_{12}}{k T_{\rm e}} \right) \end{equation} and \begin{equation} q_{21}=8.63 \cdot 10^{-6} \, \frac{\Omega_{12}}{\omega_2} \, T_{\rm e}^{-1/2} , \end{equation} where $\Omega_{12}$ denotes the collision strength for the transition $1 \rightarrow 2$, $\omega_1$ and $\omega_2$ the statistical weights of both states involved and $\Delta E_{12}$ the energy difference between them. For the $\Omega_{12}$ we use the tables given in Osterbrock~(\cite{oster}). \subsection{Balmer lines} Our neglect of line absorption of Balmer photons by hydrogen is justified as long as the density of Ly$_\alpha$ photons is sufficiently low to ensure that the hydrogen 2p state is not significantly populated. This is equivalent to the assumption that Ly$_\alpha$ photons generated in the nebula by recombination either are quickly destroyed, e.g.\ by dust absorption or by hydrogen Ly$_\alpha$ absorption followed by 2-photon emission, or are able to escape sufficiently rapidly, e.g.\ by a random walk in frequency (Osterbrock~\cite{oster2}).
The emission coefficient of the Balmer lines is given by: \begin{equation} \tilde{\epsilon}_{\rm L}({\rm H}_{\rm i}) = \frac{1}{4 \pi} \alpha_{{\rm H}_{\rm i}}^{\rm eff} \cdot N_{\rm p} N_{\rm e} h \nu_{{\rm H}_{\rm i}} . \end{equation} The effective recombination coefficients $\alpha_{{\rm H}_{\rm i}}^{\rm eff}$ used in this work were adopted from Hummer \&\ Storey~(\cite{hummer}). \subsection{Radiation from the central star} As argued in Paper I the resulting {\sc uv} spectrum of a star accreting material via an accretion disk is very uncertain. For simplicity we have assumed that the photospheric emission of the central source (star + transition zone) can be approximated by a black body of given temperature $T_*$ in the frequency range of interest ($\lambda < 100$~nm). $T_*$ determines the ``hardness'' of the ionizing photons, thus affecting both the nebula temperature and the ionization fraction of oxygen and nitrogen. We use the same values for $T_*$ as in Papers I and II for the hydrodynamic models. Nevertheless, the successful spectral classification of the ionizing star in the UCH{\sc ii} region G29.96-0.02 by Watson \&\ Hanson (\cite{watson}) gives rise to the hope that more information on the spectral properties of young, still accreting massive stars will be available in the future. \section{The Numerical Model} \label{nummod} \subsection{Structure of the underlying models} \label{intmod} \begin{figure} \epsfig{file=figure2.ps,width=8.6cm} \caption[]{Density, velocity and ionization structure of model A and C. Gray scale and black contour lines display the density structure. These contour lines vary from $\log\rho=-13.0$ to $\log\rho=-19.5$ in increments of $\Delta\log\rho=0.5$. The white contour lines mark the position of the ionization front and the arrows show the velocity field. 
The normalization is given at the upper right corner.} \label{fig:acmodels} \end{figure} \begin{figure} \epsfig{file=figure3.ps,width=8.8cm,clip=1} \caption[]{Density, velocity and ionization structure of model G2, G3 and G4. Symbols and lines have the same meaning as in Fig.~\ref{fig:acmodels} except the black density contour lines, which are drawn down to $\log\rho=-21.5$.} \label{fig:gmodels} \end{figure} The underlying numerical models were calculated on five multiply nested grids, each with $62 \times 62$ grid cells (see Yorke \&\ Kaisig~\cite{yokai}, Paper I, and Paper II). The spatial resolution of the finest grid was $\Delta R = \Delta Z \approx 2 \times 10^{13}$~cm ($R$ is the distance to the symmetry axis, $Z$ to the equatorial plane). Axial symmetry and mirror symmetry with respect to the equatorial plane were assumed for the models. The simulations were performed within a volume $(R_{\rm max}^2 + Z_{\rm max}^2)^{1/2} \le 10^{16}$~cm until a quasi-steady state was reached. For the diagnostic radiation transfer calculations discussed here we use the final states of five simulations described in Paper II. Some of the relevant parameters of these simulations are given in Table~\ref{tab:models}. Figure~\ref{fig:acmodels} and Fig.~\ref{fig:gmodels} display the density and ionization structure as well as the velocity field of the selected models. Models A and C are the results of simulations with the same moderate stellar wind and the same radiation source. However, in the simulation leading to model A the diffuse {\sc uv} radiation field originating from scattering on dust grains was completely neglected. For this reason the photoevaporation rate $\dot{M}_{\rm ph}$ is higher for model C. In Fig.~\ref{fig:acmodels} this is recognizable by the greater overall density in the ionized regions and by the higher velocity in the ``shadow'' regions of the disk in the case of model C.
In order to investigate the variation of spectral characteristics with the stellar wind velocity we chose the models with the greatest wind velocities G2, G3 and G4. Figure~\ref{fig:gmodels} shows the increasing opening angle of the cone of freely expanding wind with increasing wind velocity. \subsection{Strategy of solution} We use the model data to calculate the ionization structure and the level population. From the level populations we determine the emissivities of each line transition and the continuum emission at each point within the volume of the hydrodynamic mo\-del. For each viewing angle $\Theta$ considered, we solve the time-independent equation of radiation transfer in a non-relativistic moving medium along a grid of lines of sight (LOS) through the domain, neglecting the effects of scattering: \begin{equation} \label{eq:rte} \frac{{\rm d}I_\nu}{{\rm d}\tau_\nu}=-I_{\nu}+S_{\nu} , \end{equation} where the optical depth is defined as $\tau_\nu=\int \kappa_\nu {\rm d}s$. Integrations were performed for a given set of frequencies, whereby the effects of Doppler shifts for the line emissivities were taken into account. The resulting intensities are used to determine SEDs, intensity maps and line profiles. Spectra are obtained from the spatial intensity distributions by integration, taking into account that each LOS has an associated ``area''. Depending on $\Theta$ the symmetry of the configurations could be utilized to minimize the computational effort (see Fig.~\ref{fig:parcels}). For the pole-on view ($\Theta = 0^\circ$), for example, only a one-dimensional LOS array need be considered. For the edge-on view ($\Theta = 90^\circ$) lines of sight either through a single quadrant (continuum transfer) or through two quadrants (line transfer) are necessary. The resolution of the central regions is enhanced by overlaying a finer LOS grid in accordance with the multiple nested grid strategy used in the hydrodynamic calculations.
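The area-weighted integration of LOS intensities into a spectrum can be sketched as follows (the function name and units are ours; intensities in erg\,s$^{-1}$\,cm$^{-2}$\,Hz$^{-1}$\,sr$^{-1}$ are assumed):

```python
def flux_from_los(intensities, areas, distance):
    """Observed flux density F_nu = sum_k I_k A_k / d^2, where LOS k carries
    the associated area A_k [cm^2] in the model and d [cm] is the distance."""
    if len(intensities) != len(areas):
        raise ValueError("one area per line of sight required")
    return sum(i * a for i, a in zip(intensities, areas)) / distance ** 2
```

The $A_k/d^2$ factor is simply the solid angle subtended by each LOS cell, so a finer LOS grid refines the integration without changing the total flux.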
Each point in Fig.~\ref{fig:parcels} corresponds to an LOS trajecto\-ry through the model. Mapping such a trajectory onto the (R,Z) model grid yields hyperbolic curves as displayed in Fig.~\ref{fig:path}. Beginning with a starting intensity ($I_\nu(-\infty)$), the solution of Eq.~(\ref{eq:rte}) is obtained by subdividing the LOS into finite intervals and analytically integrating over each interval assuming a sub-grid model (see below). \begin{figure*} \begin{center} \epsfig{file=figure4.ps,height=6.0cm} \end{center} \caption[]{Choice of lines of sight (LOS) and their associated areas for different viewing angles $\Theta$. Filled dots indicate the LOS used for the continuum calculations. Empty dots refer to the additional Lines of Sight necessary for the line profile calculations.} \label{fig:parcels} \end{figure*} \begin{figure} \begin{center} \epsfig{file=figure5.ps,height=7.2cm} \end{center} \caption{Projection of a typical LOS trajectory (curved dashed line) onto the model data grid (solid lines). Temperature, density, degree of ionization and velocity are defined at cell centers. The small circles divide the LOS into subintervals; the source function $S_\nu$ is evaluated at the location of the circles, chosen to lie on the intersections of the LOS with lines connecting the grid cell centers.} \label{fig:path} \end{figure} \subsection{Continuum radiation transfer} If no discontinuities are present within the subinterval under consideration, we assume $S_\nu$ varies linearly with $\tau$, i.e.\ \begin{equation} S_\nu(\tau)=S_\nu^i+(S_\nu^{i+1}-S_\nu^i) \frac{\tau}{\Delta \tau} , \end{equation} where $i$ and $i+1$ denote the starting and end points of the interval, respectively, and $\Delta \tau$ is a mean optical depth over the interval: \begin{equation} \Delta \tau = \frac{\kappa^i + \kappa^{i+1}}{2} \Delta s . 
\end{equation} With this formulation the solution of Eq.~(\ref{eq:rte}) over the entire interval is given by (see Yorke~\cite{yorke1}): \begin{eqnarray} \label{eq:step} I_{i+1} = I_{i} \exp ( - \Delta \tau) & + & S_{i} \left[ \frac{ 1-\exp (-\Delta \tau)}{\Delta \tau} -\exp (-\Delta \tau) \right] \nonumber \\ & + & S_{i+1} \left[ 1-\frac{1-{\rm exp}(-\Delta \tau)}{\Delta \tau} \right] . \end{eqnarray} For the cases considered here we choose $I_0 = 0$ as the starting LOS intensity. For ``proplyd''-type models (considered in a subsequent paper of this series) a non-negligible background intensity should be specified. \subsection{Radiation transfer in emission lines} For the transitions considered here the radiation field can be considered ``diffuse'' $I_\nu \ll B_\nu$ and the contribution of spontaneous emission dominates over line absorption and stimulated emission processes. After separating the source function $S_\nu = S_{\rm C} + S_{\rm L}$ and the intensity $I_\nu = I_{\rm C} + I_{\rm L}$ of Eq.~(\ref{eq:rte}) into the contributions of the continuum and the line, we obtain \begin{equation} \label{eq:linint} I_{\rm L} = \frac{\tilde{S}_{\rm L} \exp (-\Delta \tau)}{ \sqrt{\pi} \Delta \nu_{\rm th}} \int_0^{\Delta \tau}\exp\left[\tau-\left( \frac{\nu - \tilde{\nu}(\tau)}{\Delta \nu_{\rm th}} \right)^2 \right] {\rm d} \tau , \end{equation} where ${\rm d}\tau = \kappa_{\rm C} \, {\rm d}s$ and $S_{\rm L}=\epsilon_{\rm L}/ \kappa_{\rm C}$. Here $\tilde{\nu}(\tau)$ is the Doppler-shifted frequency of the transition, $\Delta \nu_{\rm th}$ the Doppler width and $\tilde{S}_{\rm L}$ the net source function integrated over the line. 
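The interval update Eq.~(\ref{eq:step}) maps one-to-one onto code; a minimal sketch that also guards the $\Delta\tau \rightarrow 0$ limit, where the bracketed weights suffer from numerical cancellation:

```python
import math

def rt_step(i_in, s_i, s_ip1, dtau):
    """Advance the LOS intensity over one subinterval, assuming the source
    function varies linearly with optical depth across it."""
    if dtau < 1.0e-8:
        # first-order series limit, avoids (1 - exp(-dtau))/dtau cancellation
        return i_in * (1.0 - dtau) + 0.5 * dtau * (s_i + s_ip1)
    e = math.exp(-dtau)
    w = (1.0 - e) / dtau
    return i_in * e + s_i * (w - e) + s_ip1 * (1.0 - w)
```

For a constant source function ($S_i = S_{i+1} = S$) the two weights sum to $1-e^{-\Delta\tau}$ and the update reduces to the familiar $I = I_0 e^{-\Delta\tau} + S(1-e^{-\Delta\tau})$; for $\Delta\tau \gg 1$ the intensity saturates at $S_{i+1}$.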
Assuming that $\tilde{\nu}$ is linear in $\tau$ over the whole interval yields the analytical solution to Eq.~(\ref{eq:linint}): \begin{equation} \label{eq:linsol} I_{\rm L} = \frac{\tilde{S}_{\rm L} \Delta \tau}{2 \Delta \tilde{\nu}} \exp \left( - \frac{( \tilde{\nu}_2 - \nu) \Delta \tau}{\Delta \tilde{\nu}} \right) \cdot \left[ \mbox{erf} (Y_2) - \mbox{erf} (Y_1) \right] , \end{equation} where ${\rm erf}(y)=2/\sqrt{\pi} \int_0^y \exp(-t^2) \, {\rm d}t$ is the error function and $Y_i=(\tilde{\nu}_i-\nu)/\Delta \nu_{\rm th}$ is a dimensionless frequency shift. The net source function $\tilde{S}_{\rm L}$ is calculated according to the algorithm suggested by Yorke (\cite{yorke1}): \begin{eqnarray} \tilde{S}_{\rm L} & = & \frac{ \mbox{erf}(Y_{\rm M}) - \mbox{erf}(Y_{\rm 1})}{\mbox{erf} (Y_2) - \mbox{erf} (Y_1)} \tilde{S}_1 + \frac{ \mbox{erf} (Y_2) - \mbox{erf}(Y_{\rm M})}{\mbox{erf}(Y_2) - \mbox{erf}(Y_1)} \tilde{S}_2, \end{eqnarray} with $Y_{\rm M} = (Y_1 + Y_2) / 2$. The line source functions $\tilde{S}_1$ and $\tilde{S}_2$ are calculated from the total line emission coefficient $\tilde{\epsilon}_{\rm L}$ as defined in Eq.~(\ref{eq:lineemis}) at the boundaries of the evaluated interval and from the continuum absorption coefficient: $\tilde{S}_{\rm i}= \tilde{\epsilon}_{\rm L,i}/{\kappa_{\rm C}}$. If $\Delta \tilde{\nu} \ll \Delta \nu_{\rm th}$, i.e.\ there is negligible Doppler shift within the subinterval, the solution of Eq.~(\ref{eq:linint}) with $\tilde{\nu}_1=\tilde{\nu}_2=\tilde{\nu}$ is used: \begin{equation} \label{eq:narrowl} I_{\rm L} = \frac{S_{\rm L}}{\sqrt{\pi}\Delta \nu_{\rm th}} \exp \left[ - \left( \frac{\nu-\tilde{\nu}}{\Delta \nu_{\rm th}} \right)^2 \right] \left(1-\exp(-\Delta \tau) \right) . \end{equation} \subsection{Treatment of ionization fronts} The numerical models considered contain unresolved ionization fronts due to the coarseness of the hydrodynamic grid.
At these positions jumps occur in the physical parameters and the solutions given by Eq.~(\ref{eq:step}) and Eq.~(\ref{eq:linsol}/\ref{eq:narrowl}) are poor approximations. The exact locations of the fronts within a grid cell are unknown; we assume they lie at the center of the corresponding interval. Our criterion for the presence of an ionization front is a change in the degree of ionization $\Delta x > 0.1$ between two evaluation points. For the continuum calculations Eq.~(\ref{eq:step}) is applied to each half interval with $\Delta \tau = \kappa_i \Delta s /2$. For the first half $S = S_i$ is held constant and for the second half $S=S_{i+1}$. For the line calculations Eq.~(\ref{eq:narrowl}) is used with $\tilde{\nu}=\nu_1$ ($\nu_2$) and $S_{\rm L}=S_1$ ($S_2$) for the first (second) half interval. \subsection{Treatment of the central radiation source} The central source is modeled by a black body radiator of temperature $T_*$ and radius $R_*$. The integration along the line of sight through the center is started at the position of the source with the initial intensity \begin{equation} I_0 = B_\nu(T_*) \frac{R_*^2 \pi}{A} , \end{equation} where $A$ is the area associated with the central LOS. \section{Results} \label{results} With the code described above we determined SEDs, continuum isophotal maps and line profiles for the forbidden lines [N{\sc ii}] 6584, [O{\sc ii}]~3726 and [O{\sc iii}] 5007 as well as the H$\alpha$-line for the models introduced in Sect.~\ref{intmod}. Their relevant physical parameters are listed in Table~\ref{tab:models}. \subsection{Continuum emission} \subsubsection{Spectral Energy Distributions} Figure~\ref{fig:modg2cont} shows the SEDs of model G2 presented in Paper~II for different viewing angles $\Theta$.
The spectra can be divided into three regimes dominated by different physical processes: \begin{enumerate} \item In the frequency range from $10^8$ to $10^{11}$ Hz the SED is dominated by the thermal free-free radiation in the ionized region around the dust torus. \item The {\sc ir}-excess from $10^{11}$ to $10^{14}$ Hz is due to the optically thick dusty torus itself, which has a mean surface temperature of about 250 K. \item Beyond $10^{14}$ Hz the SED depends strongly on the viewing angle: if the star is obscured by the dusty torus then the free-free radiation of the H{\sc ii}-region again dominates the spectrum, otherwise the stellar atmosphere shows up. Due to the uncertainties in the stellar spectra and the neglect of scattering by dust in Eq.~(\ref{eq:rte}), which becomes more and more important with increasing frequency, a discussion of the SED beyond $10^{14}$~Hz and comparison with observations are not useful. \end{enumerate} \begin{table*}[tb] \caption{Scattering coefficient $\kappa^{\rm scat}_{\rm dust} \rho^{-1}$ as well as parameters for the stellar wind (mass loss rate $\dot{M}_{\rm wind}$ and velocity $v_{\rm wind}$) and the ionizing source (stellar photon rate $S_{\rm star}$ and temperature $T_{\rm eff}$) used in the calculations. 
The evaporation time scale $t_{\rm evap}$ is calculated from $t_{\rm evap}=M_{\rm disk}/\dot{M}_{\rm ph}$ with $M_{\rm disk} = 1.67\,M_\odot$.} \begin{center} \begin{tabular}{cccccccc} \hline\noalign{\smallskip} model & $\kappa^{\rm scat}_{\rm dust}/\rho$ & $\dot{M}_{\rm wind}$ & $v_{\rm wind}$ & $\log_{10} S_{\rm star}$ & $T_{\rm eff}$ & $\dot{M}_{\rm ph}$ & $t_{\rm evap}$ \\ & $\mbox{cm}^2\mbox{g}^{-1}$ & $10^{-8}M_\odot\mbox{yr}^{-1}$ & $\mbox{km\,s}^{-1}$ & $\mbox{s}^{-1}$ & $\mbox{K}$ & $10^{-6}M_\odot\mbox{yr}^{-1}$ & $10^6\mbox{yr}$ \\ \noalign{\smallskip} \hline\noalign{\smallskip} A & 0 & 2 & 50 & 46.89 & 30\,000 & 0.565 & 2.96 \\ C & 200 & 2 & 50 & 46.89 & 30\,000 & 1.35 & 1.24 \\ \hline\noalign{\smallskip} G2 & 200 & 2 & 400 & 46.89 & 30\,000 & 1.65 & 1.01 \\ G3 & 200 & 2 & 600 & 46.89 & 30\,000 & 1.67 & 1.00 \\ G4 & 200 & 2 & 1000 & 46.89 & 30\,000 & 1.64 & 1.02 \\ \noalign{\smallskip} \hline \end{tabular} \label{tab:models} \end{center} \end{table*} According to the analysis of Panagia \&\ Felli~(\cite{panagia}), who calculated analytically the free-free emission of an isothermal, spherical, ionized wind, the flux density should obey a $\nu^{0.6}$-law: \begin{eqnarray} F_{\nu}=6.46 \cdot 10^{-3}\; \mbox{Jy} \cdot \left[ \frac{\dot{M}}{10^{-5} \mbox{~M}_\odot / \mbox{yr}} \right] ^{4/3} \cdot \left[ \frac{\nu}{10\mbox{~GHz}} \right] ^{0.6} \cdot \nonumber \\ \cdot \left[ \frac{T}{10^4\mbox{~K}} \right] ^{0.1} \cdot \left[ \frac{d}{\mbox{kpc}} \right] ^{-2} \cdot \left[ \frac{v_{\rm wind}}{10^3\mbox{~km\,s$^{-1}$}} \right] ^{-4/3}. \label{eq:panagia} \end{eqnarray} Schmid-Burgk~(\cite{schmid}) showed that this holds, modified by a geo\-metry dependent factor of order unity, even for non-symmetri\-cal point-source winds as long as $\rho$ drops as $R^{-2}$. Additionally he postulated that the flux density should hardly be dependent on the angle at which the object is viewed. 
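Eq.~(\ref{eq:panagia}) is convenient to have in executable form; a minimal sketch:

```python
def panagia_felli_flux(mdot, nu_ghz, t_e, d_kpc, v_wind):
    """Free-free flux density [Jy] of an isothermal, spherical ionized wind
    (Panagia & Felli form): mdot in M_sun/yr, nu in GHz, t_e in K,
    d in kpc, v_wind in km/s."""
    return (6.46e-3
            * (mdot / 1.0e-5) ** (4.0 / 3.0)
            * (nu_ghz / 10.0) ** 0.6
            * (t_e / 1.0e4) ** 0.1
            * d_kpc ** -2.0
            * (v_wind / 1.0e3) ** (-4.0 / 3.0))
```

With the photoevaporated-flow parameters of model G2 ($\dot{M} = 1.65\cdot 10^{-6}$~M$_\odot$\,yr$^{-1}$, $T = 10^4$~K, $v = 20$~km\,s$^{-1}$) at an assumed distance of 300~pc, this gives on the order of 1~Jy at 10~GHz.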
In Fig.~\ref{fig:modg2cont} we include for comparison the flux distribution given by Eq.~\ref{eq:panagia} for the photoevaporation rate $\dot{M}_{\rm ph}=1.65\cdot 10^{-6}\;\mbox{M}_\odot\mbox{yr}^{-1}$ (see Paper II), $T=10\,000$~K and $v_{\rm wind}=20\;\mbox{km~s}^{-1}$ derived from the line profiles in Sect.~\ref{sect:lines}. The slope of the SED in regime 1 is slightly steeper in our results, because our volume of integration is finite; Panagia \&\ Felli~(\cite{panagia}) derived their analytical results by assuming an infinite integration volume. The flux is almost independent of $\Theta$, which is in good agreement with Schmid-Burgk~(\cite{schmid}). The deviations between $10^{10}$ and $10^{11}$ Hz are due to the break in the $R^{-2}$-law caused by the neutral dust torus. Figure~\ref{fig:modg2cont} also includes the SED of a blackbody at $T=240$~K. The flux $F_{\nu} \propto \nu^{2.2}$ in the far {\sc ir} between $5\cdot10^{11}$\,Hz and $3\cdot10^{12}$\,Hz is slightly steeper than the comparison blackbody spectrum, because the dust torus is not quite optically thick. With increasing $\Theta$ the maximum shifts towards lower frequencies, because the warm dust on the inside of the torus is being concealed by the torus itself. We obtain qualitatively the same results for a number of models presented in Paper II. \begin{figure} \begin{center} \epsfig{file=figure6.ps,height=6.25cm} \end{center} \caption[]{Spectral Energy Distribution for model G2 and different $\Theta$.} \label{fig:modg2cont} \end{figure} \begin{figure*} \begin{center} \epsfig{file=figure7.ps,height=21cm} \end{center} \caption[]{Isophotal maps of model C for different viewing angles and wavelengths as indicated. Assumed distance 300 pc. The circle in the lower right corner marks the FWHM of the point spread function. 
Contour values are given in the text.} \label{fig:contourc} \end{figure*} \begin{figure*} \begin{center} \epsfig{file=figure8.ps,height=21cm} \end{center} \caption[]{Isophotal maps of model G4 for different viewing angles and wavelengths as indicated. Assumed distance 300 pc. The circle in the lower right corner marks the FWHM of the point spread function. Contour values are given in the text.} \label{fig:contourg4} \end{figure*} \subsubsection{Isophotal maps} We also calculated isophotal maps over the projected $(S,T)$ grid of the sky for models C and G4 (Figs.~\ref{fig:contourc} and \ref{fig:contourg4}). The maps were convolved with a Gaussian point spread function (FWHM 0\mysec3 for $\lambda=6$\,cm, 0\mysec1 for $\lambda=2$\,cm and $12\,\mu$m) in order to compare the numerical models with observations of limited resolution. The values (in percent) of the contour lines relative to the maximum flux per beam are 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 30, 40, 50, 60, 70, 80, 90 for $\lambda=6\,$cm, and 2, 5, 15, 25, 35, 45, 55, 65, 75, 85, 95 for $\lambda=2\,$cm and $12\mu$m. The difference between these two models is revealed most strikingly in the maps for $\lambda=6\,$cm and $2\,$cm. The mass outflow of model C is more evenly distributed over its whole opening angle, whereas in model G4 most of the mass is transported outward in a cone between $\Theta=30^\circ$ and $70^\circ$. This leads to the X-shape of the corresponding radio maps for viewing angles $\Theta \ge 60^\circ$. The high density in the region between star and disk for this model results in an optically thick torus in this region at $\lambda=2\,$cm. Thus the contours for $\Theta=30^\circ$ and $60^\circ$ are not symmetric about the equatorial plane at $T=0^{''}$. In the maps corresponding to $\Theta = 30^\circ, \lambda = 12 \mu$m there appears a peculiar horseshoe-like feature. It is generated by the hottest region of the dust torus, which is the innermost boundary with the smallest distance to the star.
It can be seen as a ring in the maps for $\Theta = 0^\circ$. For $\Theta = 30^\circ$ the part of the ring next to the observer is obscured by the dust torus, whereas the other parts are still visible. The chosen beam width allows the ring to be resolved and thus leads to the horseshoe-like feature. For $\Theta = 60^\circ$ only the most distant part of this hot ring is visible, leading to a maximum in the flux with a smaller spatial extent than for $\Theta = 30^\circ$. \begin{figure*} \begin{center} \epsfig{file=figure9.ps,height=17cm,angle=270} \end{center} \caption[]{Lines [N{\sc ii}] 6584 and H$\alpha$ for models G2, G3 and G4, and different ``viewing angles'' $\Theta$. The models differ only in wind velocity $v_{\rm wind}$.} \label{fig:lines1} \end{figure*} \begin{figure*} \begin{center} \epsfig{file=figure10.ps,height=17cm,angle=270} \end{center} \caption[]{Lines [N{\sc ii}] 6584 and H$\alpha$ for models A and C, and different ``viewing angles'' $\Theta$. In model A scattering of {\sc uv} photons by dust was neglected, in model C included. Note the different scales on the abscissae.} \label{fig:lines3} \end{figure*} \begin{figure*} \begin{center} \epsfig{file=figure11.ps,height=17cm,angle=270} \end{center} \caption[]{Lines [O{\sc ii}] 3726 and [O{\sc iii}] 5007 for models G2, G3 and G4, and different ``viewing angles'' $\Theta$. The models differ only in wind velocity $v_{\rm wind}$.} \label{fig:lines2} \end{figure*} \begin{figure*} \begin{center} \epsfig{file=figure12.ps,height=17cm,angle=270} \end{center} \caption[]{Lines [O{\sc ii}] 3726 and [O{\sc iii}] 5007 for models A and C, and different ``viewing angles'' $\Theta$. In model A scattering of {\sc uv} photons by dust was neglected, in model C included. Note the different scales on the abscissae.} \label{fig:lines4} \end{figure*} \subsection{Line profiles} \label{sect:lines} Line profiles can provide useful information on the velocity structure in H{\sc ii} regions.
Because the thermal Doppler broadening decreases with increasing atomic weight as $A^{-1/2}$, ions such as N{\sc ii} and O{\sc iii} are better suited for velocity diagnostics than H$\alpha$. This can be seen in the line profiles we obtained (Figs.~\ref{fig:lines1}-\ref{fig:lines4}). They show the flux integrated over the whole area of the object including the disk, the evaporated flow and the cone of the stellar wind. In all cases the line broadening of several tens of km\,s$^{-1}$ is dominated by the velocity distribution of the escaping gas. For comparison, the rotational velocity of the dust torus is $v_{\rm rot} \simeq 13$~km\,s$^{-1}$ for the inner parts, and the thermal Doppler broadening from Eq.~(\ref{eq:doppler}) at $T = 10\,000$~K is $v_{\rm th} \simeq 9$~km\,s$^{-1}$ for H$\alpha$ and $\simeq 2$~km\,s$^{-1}$ for O{\sc iii}. Figures~\ref{fig:lines3} and \ref{fig:lines4} show the line profiles for the models A (no scattering of H-ionizing photons) and C (calculated assuming non-negligible {\sc uv} scattering during the hydrodynamical evolution) from Paper~II. {\sc uv} scattering leads to stronger illumination of the neutral torus by ionizing radiation and thus to a higher photoevaporation rate in model C (by a factor of $\sim 2.3$) compared to model A. Due to the higher density in the regions filled with photoevaporated gas, the lines for model C are generally more intense. In the case of the [O{\sc ii}] 3726 line one notices that the difference between the fluxes for different angles is the smallest of all transitions. Not only is the density of the outflowing ionized gas higher for case~C, but the charge exchange reactions discussed in Sect.~\ref{sect:chargeex} lead to the establishment of a non-negligible O{\sc ii}-abundance even in the ``shadow'' regions not accessible to direct stellar illumination. These regions dominate the line spectra for all angles.
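The weakness of any fast-wind contribution to the lines can be estimated from the steady-state density $\rho = \dot{M}/(4\pi r^2 v)$ together with the $\rho^2$ scaling of the line emissivity; a rough sketch using the wind and photoevaporation parameters of Table~\ref{tab:models} (the common radius is an arbitrary illustrative value):

```python
import math

M_SUN = 1.989e33   # solar mass [g] (standard value, assumed)
YEAR = 3.156e7     # year [s]

def outflow_density(mdot_msun_yr, r_cm, v_km_s):
    """Steady-state outflow density rho = Mdot / (4 pi r^2 v) [g/cm^3]."""
    mdot = mdot_msun_yr * M_SUN / YEAR
    return mdot / (4.0 * math.pi * r_cm ** 2 * v_km_s * 1.0e5)

# model G4: fast, tenuous stellar wind vs. slow, dense photoevaporated flow
R_CM = 1.0e15
rho_wind = outflow_density(2.0e-8, R_CM, 1000.0)
rho_flow = outflow_density(1.64e-6, R_CM, 20.0)
emissivity_ratio = (rho_flow / rho_wind) ** 2
```

The $\rho^2$ weighting favors the slow photoevaporated flow by many orders of magnitude, which is why no high-velocity wind component shows up in the profiles.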
Figures~\ref{fig:lines1} and \ref{fig:lines2} show the calculated line profiles for the models G2, G3 and G4, which only differ by the assumed stellar wind velocity $v_{\rm wind}$, increasing from 400~km\,s$^{-1}$ (G2) to 1\,000~km\,s$^{-1}$ (G4). Comparing the results we find two features: \begin{enumerate} \item The intensity and overall structure of the line profiles considered are almost independent of the stellar wind velocity $v_{\rm wind}$. \item No high-velocity component appears in the lines due to the stellar wind. \end{enumerate} Both features can be explained by the fact that the density of material contained in the stellar wind is much lower than the density in the photoionized outflow. Remembering that $\rho = \dot M / (4\pi r^2 v)$ in a steady-state outflow and that the line emissivity $\epsilon_{\rm L} \propto \rho^2$, we can understand why the low expansion velocities of about $20$~km\,s$^{-1}$ (i.e. material close to the torus) dominate the spectra. The overall evaporation rates as well as the expansion velocities are almost equal for all three models, leading to very similar line profiles. It would be necessary to consider transitions which predominate in hot winds in order to detect this high-velocity component and to find significant differences between these models. In spite of our assumption of optically thin line transfer, the profiles for $\Theta=90^\circ$ are not symmetric. This is due to the dust extinction within the H{\sc ii}-region. The receding material is on average further away from the observer than the approaching material. The longer light paths result in stronger extinction of the redshifted components. We are aware that the neglect of scattering by dust in Eq.~(\ref{eq:rte}) may lead to serious errors in the calculated line profiles. We expect non-negligible contributions especially in the red-shifted parts of the lines due to light scattered by the dense, dusty surface of the neutral torus.
This light was originally emitted by gas receding from the torus. The resulting redshift ``seen'' by the torus remains unchanged during the scattering process and will thus lead to an enhancement of the red-shifted wings of the lines. Nevertheless, we expect our qualitative results still to hold, since the arguments mentioned above referring to the geometry of the underlying models are still applicable. \section{Comparison with observations} \label{comparison} Although the cases presented here describe the situation of an intermediate mass ionizing source (8.4 M$_\odot$) with a circumstellar disk, many of the basic spectral features can be generalized. Thus, it is interesting to compare and contrast our results with observed UCH{\sc ii}s, even though many of the central sources are presumably much more massive. The collections of photometry data in the catalogues of Wood \&\ Churchwell~(\cite{wood}) and Kurtz et al.~(\cite{kurtz}) show that the SEDs of almost all UCH{\sc ii}-regions possess roughly the same structure as those of the models discussed here: a flat spectrum in the radio and mm regime following a $\nu ^{0.6}$-law due to free-free emission, and an {\sc ir} excess originating from heated dust, exceeding the free-free emission by $\sim 3-4$ orders of magnitude. A closer inspection shows that the dust temperatures in most of the observed sources are an order of magnitude lower than in our models. This may be an indication that the disks are being photoionized by a close companion in a multiple system rather than by the central source. Alternatively, the emitting dust could be distributed in a shell swept up by the expanding H{\sc ii}-region and thus be further away from the exciting star than in the cases discussed here. The large beam width of IRAS and the tendency of massive stars to form in clusters also make it likely that the {\sc ir} fluxes belong to dust emission caused by more than one heating star. 
Objects of this type would appear as ``unresolved'' in the maps presented by Wood \&\ Churchwell~(\cite{wood}) for distances larger than $\sim$~300~pc. Due to the cooler dust, the ``unresolved'' objects cannot be explained by the models of circumstellar disks around {\sc uv} luminous sources specifically discussed in this paper. Certainly the cometary-shaped UCH{\sc ii}s can be explained by a disk being evaporated by the ionizing radiation of an external star and the interaction with its stellar wind. Numerical models dealing with this scenario will be presented in the next paper of this series. \begin{figure*} \begin{center} \epsfig{file=figure13.ps,width=5.5cm,angle=270} \end{center} \caption{VLA-maps of \object{MWC~349}. Left: $\lambda=$~6~cm (Cohen et al. \cite{cohen}), FWHM~$=$~0\mysec3. Contour levels at 1, 2,..., 9, 10, 20,..., 80, 90\% of the maximum flux 16.59~mJy\,beam$^{-1}$. Right: $\lambda=$~2~cm (White \&\ Becker~\cite{white}), FWHM~$=$~0\mysec1. Contour levels at $-2$, 2, 5, 15, 25,..., 95\% of the maximum flux 156~mJy\,beam$^{-1}$.} \label{fig:mwcvla} \end{figure*} \subsection{\object{MWC~349}~A} A commonly accepted candidate for an evaporating disk around a young massive star is \object{MWC~349}~A. Its radio continuum flux obeys the $\nu^{0.6}$-law up to $\lambda=30$~cm (see the collection of photometry results in Thum \&\ Mart\'{\i}n-Pintado~\cite{thum}). Radio maps obtained by Cohen et al.~(\cite{cohen}) and White \&\ Becker~(\cite{white}) show an extended ionized bipolar outflow with a peculiar X-shape (Fig.~\ref{fig:mwcvla}). In the center Leinert~(\cite{leinert}) finds an elongated, dense clump with $T \sim 900\,$K and optically thick {\sc ir} emission, which makes \object{MWC~349}~A one of the brightest IRAS sources. The elongated structure shows an almost Keplerian velocity profile along its major axis, perpendicular to the outflow axis (Thum \&\ Mart\'{\i}n-Pintado~\cite{thum}). 
This leads to the assumption of a neutral dust torus around the central star with a small outer radius $< 100\,$AU. Kelly et al.~(\cite{kelly}) estimated the extinction towards this early-type star to be $A_{\rm V} = 10.8$. The SED of \object{MWC~349}~A shows all the features which we find for our models. The extinction and the geometry of the outflow, as well as the lack of a high-velocity component in the line profiles (Hartmann et al. \cite{hartmann}, Hamann \&\ Simon~\cite{hamann}), could be explained by a model with a fast stellar wind presented in this paper, assuming a viewing angle $\Theta \sim 90^\circ$. On the other hand, the high dust temperatures in the torus, $T_{\rm d} \sim 800$\,K, and the extremely high mass loss rate in the outflow, $\dot{M} = (1.16 \pm 0.05) \times 10^{-5}\,$M$_\odot$\,yr$^{-1}$ (Cohen et al. \cite{cohen}), remain puzzling and need clarification by a numerical model following the method described in this series, but specifically ``tailored'' to \object{MWC~349}~A. \section{Conclusions} \label{sect:conclusions} In this paper we showed that the models of photoevaporating disks around intermediate mass stars cannot explain the large number of ``unresolved'' UCH{\sc ii}s observed by Wood \&\ Churchwell~(\cite{wood}) and Kurtz et al.~(\cite{kurtz}), because the inferred dust temperatures of these objects are in most cases an order of magnitude lower than those obtained in the numerical models. But the question remains whether the disks of stars more massive than those considered here could be responsible for the high abundance of the ``unresolved'' UCH{\sc ii}s. Disks around close companions of massive stars should be treated in greater detail. If we assume that circumstellar disks are the rule in the process of star formation, the simplicity and straightforwardness of the model make it favorable compared to alternative suggestions. 
The extremely high radiation pressure in the vicinity of massive stars could lead to a larger distance between star and disk and thus to smaller dust temperatures. Another important result of this work is that the profiles of forbidden lines in the optical for the models G2, G3 and G4 with wind velocities of $400-1000$ km\,s$^{-1}$ are almost independent of $v_{\rm wind}$. This is due to the fact that the mass loss rate and velocity of the evaporated disk material are determined not by its interaction with the stellar wind, but by the rate of ionizing photons and the peculiar shock structure, which is very similar in the numerical models (see Fig.~\ref{fig:gmodels}). Since the total mass loss rate is dominated by the evaporated component with low velocity, and the emission is proportional to $\rho^2$, the intensity in the lines does not depend on the details of the stellar wind, and their profiles show the same low-velocity components. Treatment of optically thick line emission and scattering effects is not possible with the method presented above. In order to treat non-LTE effects such as masing lines, which are observed in various objects related to the formation of massive stars, one has to resort to different methods, e.g. the Monte-Carlo method presented by Juvela~(\cite{juvela}). This would immensely help us in our understanding of the process of formation and evolution of massive stars. \begin{acknowledgements} This research has been supported by the Deutsche Forschungsgemeinschaft (DFG) under grants number Yo 5/19-1 and Yo 5/19-2. Calculations were performed at the HLRZ in J\"ulich and the LRZ in Munich. We would also like to thank D.J. Hollenbach for useful discussions. \end{acknowledgements}
\section{Introduction} In most product formulas~\cite{ref:Trotter,ref:KatoTrotter,ref:Lie}, there is a subtle interplay between two competing dynamics. Such interplay has multiple facets, both physical (as for example in the quantum Zeno effect~\cite{ref:QZEMisraSudarshan,ref:QZEreview-JPA}) and mathematical~\cite{ref:kickfix}. In physics, the seminal ideas can be traced back to Feynman, who, working on his path-integral formulation of quantum mechanics \cite{ref:Feynman(1948),ref:Dirac1933,ref:FeynmanHibbs(1965),Schulman1981}, wrote the full dynamics of a quantum particle in the form \begin{eqnarray} \mathrm{e}^{-\mathrm{i} t H} = \mathrm{e}^{-\mathrm{i} t (T+V)} = \lim_{n\rightarrow\infty} \bigl(\mathrm{e}^{-\mathrm{i} \frac{t}{n} T} \mathrm{e}^{-\mathrm{i} \frac{t}{n} V}\bigr)^n, \label{eq:feynmanTV} \end{eqnarray} where $H=T+V$ is the Hamiltonian, and $T$ and $V$ the kinetic and potential energy, respectively. Feynman was attacking the formidable problem of calculating the exponential of the sum of two non-commuting operators. In mathematics, a similar problem was first posed by Lie~\cite{ref:Lie}, who proved that \begin{eqnarray} \label{eq:lie} \mathrm{e}^{A+B} = \lim_{n\rightarrow\infty} \bigl(\mathrm{e}^{A/n} \mathrm{e}^{B/n}\bigr)^n, \end{eqnarray} for square matrices $A$ and $B$. Formulas~(\ref{eq:feynmanTV}) and~(\ref{eq:lie}) disguise, among serious mathematical difficulties, a subtle (and intriguing) standpoint: when one factors the exponentials, one always implicitly assumes that $n$ appears \emph{at the first power} in the denominator of the exponents. One is so accustomed to such a stance, that other scalings have not been looked at. What, then, about evolutions of the following type \begin{eqnarray} \label{eq:lie2} \bigl(\mathrm{e}^{A/n^\gamma} \mathrm{e}^{B/n}\bigr)^n ? 
\end{eqnarray} In the above formula, $\gamma$ is in general different from one or, alternatively, the evolution times under the action of the kinetic and potential energies in Eq.~(\ref{eq:feynmanTV}) are scaled differently. The most interesting situations arise when $0 \leq \gamma \leq 1$ (for $\gamma >1 $ the limit is trivially $\mathrm{e}^{B}$, while for $\gamma <0$ the limit might not exist, as we will see later). In this Article we will investigate the mathematical features and limits of expressions of the type~(\ref{eq:lie2}). One expects that the factor $\mathrm{e}^{A/n^\gamma}$ dominates over $\mathrm{e}^{B/n}$ for $0 \leq \gamma < 1$, leading to quantum control (in the sense that $B$ will be \emph{modified} into an effective generator $B_Z$ yielding a controlled dynamics characterized by superselection sectors, as explained in section~\ref{sec:quantumcont}). We will indeed see that these formulas yield quantum Zeno subspaces~\cite{ref:QZS,ref:artzeno}, that are robust against the detrimental effects of decoherence. This observation provides a strong physical motivation for our analysis. Our analysis will be organized as follows. In Sec.~\ref{sec:quantumcont} we revisit two standard control techniques---frequently kicked evolution and strong continuous coupling---by exhibiting bounds on the control errors. We show that the two protocols only differ in the order a double limit is taken. Then, in Section~\ref{sec:intermediateLim}, we show that it is still possible to get quantum control in an intermediate situation, where the operators in the exponentials scale differently with~$n$. Finally, in Section~\ref{sec:GeneralizedPF}, as a byproduct of our results, we discuss the generalization~\eqref{eq:lie2} of the Trotter product formula, by providing analytical bounds on the convergence rate and by comparing them with a numerical analysis. Four appendices are devoted to the proofs of the theorems. 
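Before proceeding, it is instructive to probe the classical formula~(\ref{eq:lie}) numerically. The following Python sketch (illustrative only; the random Hermitian matrices, the seed, and the SciPy matrix exponential are our choices, not part of the original analysis) exhibits the familiar $1/n$ convergence that the scalings studied below generalize:

```python
# Numerical check of Lie's formula: (e^{-iA/n} e^{-iB/n})^n -> e^{-i(A+B)}.
# The random Hermitian matrices A, B are an illustrative assumption.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d = 4
M1 = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
M2 = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A = (M1 + M1.conj().T) / 2   # Hermitian part
B = (M2 + M2.conj().T) / 2

target = expm(-1j * (A + B))

def err(n):
    """Hilbert-Schmidt distance between the n-step product and the limit."""
    step = expm(-1j * A / n) @ expm(-1j * B / n)
    return np.linalg.norm(np.linalg.matrix_power(step, n) - target)

for n in (10, 100, 1000):
    print(n, err(n))   # decays roughly like 1/n
```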
\section{Preliminaries: notation and quantum control} \label{sec:quantumcont} We shall first introduce notation by adhering to the terminology of quantum applications, and then look in detail at two different quantum control protocols, examining similarities and differences. Consider a quantum system living in a Hilbert space $\Hi$ with finite dimension, $\dim\Hi<\infty$. Let $U(t)=\mathrm{e}^{-\mathrm{i} tH}$ be the (``free") evolution operator, $H$ being the Hamiltonian of the system. Let $\{P_\mu \}$ be a complete family of orthogonal projections, that is a set of $m$ projection operators, with $m\leqslant \dim\Hi$, satisfying \begin{equation} P_\mu^\dagger=P_\mu,\qquad P_\mu P_\nu=\delta_{\mu\nu}P_\mu,\qquad \sum_{\mu=1}^m P_\mu=I. \label{eq:pmu} \end{equation} The aim of quantum control, in the context of decoherence suppression, is to engineer an evolution in which the Hilbert space is dynamically partitioned \begin{equation} \Hi=\bigoplus_{\mu=1}^m{\Hi_\mu} , \end{equation} so that transitions between different subspaces $\Hi_\mu = P_\mu \Hi$ are suppressed. See Fig.~\ref{fig:HZ}. The subspaces will be called quantum Zeno subspaces and the control procedures will be referred to as quantum Zeno dynamics (QZD). \begin{figure}[t]\centering \includegraphics[width=.5\textwidth]{Partitions.pdf} \caption{Pictorial representation. The Hilbert space $\Hi$ is partitioned into quantum Zeno subspaces $\Hi_\mu=P_\mu \Hi$. If the system is in a given subspace (say $\Hi_5$) at the initial time $t_0$, it will coherently evolve within this subspace and will never make transitions to other subspaces.} \label{fig:HZ} \end{figure} \subsection{Frequently pulsed evolution} QZD can be obtained by applying frequent and instantaneous unitary transformations to the evolving state of the system. 
The control procedure consists in alternating free evolutions of the system with instantaneous unitary ``kicks" \begin{equation} U_n(t)=\bigl(\Uk \mathrm{e}^{-\mathrm{i} \frac{t}{n}H}\bigr)^n . \end{equation} The ensuing control techniques were first investigated in the 1960s in relation to magnetic resonance~\cite{anderson,ernst,freeman,levitt} and are often referred to as ``bang-bang" dynamics~\cite{viola98} in the more recent quantum literature. For a good review, see Ref.~\cite{lidarrev}. Evolutions of this type are of paramount importance in the study of quantum chaos~\cite{ref:CasatiChaos,ref:BerryChaos,ref:Gutzwiller,ref:qchaos}, although in that case the frequency is kept finite and $t$ is scaled like $n$. Let \begin{equation} \Uk=\sum_{\mu=1}^m \mathrm{e}^{-\mathrm{i} \phi_\mu}P_\mu , \label{eq:specdecUk} \end{equation} with $m\leqslant \dim\Hi$, be the spectral decomposition of $\Uk$, where $\{P_\mu\}$ is a complete family of projections~\eqref{eq:pmu} and $\mathrm{e}^{-\mathrm{i} \phi_\mu}\neq \mathrm{e}^{-\mathrm{i} \phi_\nu}$ for $\mu\neq\nu$. In the $n\rightarrow\infty$ limit (infinitely frequent pulses applied in a fixed time interval $(0,t)$), one obtains a QZD with Zeno subspaces defined by the eigenspaces of the unitary kick~(\ref{eq:specdecUk}). This is a consequence of the following \begin{thm} \label{thm:PulsedFormulation} Let $H$ be a Hermitian operator and $\Uk$ be a unitary operator on a finite dimensional Hilbert space $\Hi$. Then the following limit holds \begin{equation}\label{eq:PulsedLimit} \Uk^{\dagger n} U_n(t) \to \mathrm{e}^{-\mathrm{i} tH_Z}, \qquad \text{as } n\to\infty, \end{equation} where \begin{equation}\label{eq:HZ} H_Z=\sum_{\mu=1}^m P_\mu H P_\mu \end{equation} is the Zeno Hamiltonian with respect to the eigenprojections $\{P_\mu\}$ of $\Uk$. In particular, for large $n$ we get \begin{equation} \label{eq:UZK} U_n(t)=\Uk^{n} \mathrm{e}^{-\mathrm{i} tH_Z}+\mathcal{O}\left(\frac{1}{n}\right) . 
\end{equation} \end{thm} The proof is given in \ref{sec:app1}. As one can see, there is an important contribution of the Hamiltonian $H$ to the evolution, which stems from its diagonal part with respect to the unitary kick (note that $[\Uk,H_Z]=0$). It is useful to re-write the above evolution as follows \begin{equation} U_n(t)=\mathrm{e}^{-\mathrm{i} \sum_\mu (n\phi_\mu P_\mu+ t P_\mu H P_\mu)}+\mathcal{O}\left(\frac{1}{n}\right) =\sum_{\mu=1}^m \mathrm{e}^{-\mathrm{i} n\phi_\mu - \mathrm{i} t P_\mu H P_\mu}P_\mu+\mathcal{O}\left(\frac{1}{n}\right). \label{eq:un} \end{equation} This expression clarifies that the system evolves in each subspace $\Hi_\mu = P_\mu \Hi$ of the kick operator according to the projected Hamiltonian $P_\mu H P_\mu$, with a subspace-dependent phase $n\phi_\mu$. \subsection{Strong Continuous Coupling} QZD can also be obtained by coupling the system with Hamiltonian $H$ to a (control) potential $V$. The evolution is \begin{equation} U_K(t)=\mathrm{e}^{-\mathrm{i} t(H+KV)}, \end{equation} where $K$ is the coupling constant, to be taken large if one aims at getting a good control procedure. Let \begin{equation}\label{eq:Vspecdec} V=\sum_{\mu=1}^m \lambda_\mu P_\mu \end{equation} be the spectral decomposition of the control potential $V$, where $\lambda_\mu$'s are the (possibly degenerate) distinct eigenvalues of $V$ and $P_\mu$'s the corresponding eigenprojections, satisfying conditions (\ref{eq:pmu}). Naively, one might expect that, as $K\rightarrow\infty$, it becomes possible to neglect the action of $H$, so that the system evolves under the sole action of the control potential $V$. Transition among different subspaces would be avoided and the state would simply acquire a subspace-dependent phase. However, a more careful analysis shows that Hamiltonian $H$ yields a non-trivial contribution to the limiting evolution. 
This is the consequence of the following \begin{thm} \label{thm:ContFormulation} Let $H$ and $V$ be Hermitian operators acting on a finite dimensional Hilbert space $\Hi$, with $V$ having the spectral decomposition~\eqref{eq:Vspecdec}. Then the following limit holds \begin{equation}\label{eq:StrCoupLim} \mathrm{e}^{\mathrm{i} tKV}\mathrm{e}^{-\mathrm{i} t(H+KV)}\to \mathrm{e}^{-\mathrm{i} tH_Z} , \qquad \text{as } K\to\infty, \end{equation} uniformly on compact time intervals, where $H_Z$ is the Zeno Hamiltonian~\eqref{eq:HZ} with respect to the eigenprojections $\{P_\mu\}$ of $V$. In particular, for large $K$ we have \begin{equation} \label{eq:strcoupapproachrate} \mathrm{e}^{-\mathrm{i} t(H+KV)}=\mathrm{e}^{-\mathrm{i} tKV}\mathrm{e}^{-\mathrm{i} tH_Z}+\mathcal{O}\left(\frac{1}{K}\right). \end{equation} \end{thm} The theorem is proved by going to the $H$-interaction picture and by using Kato's adiabatic theorem~\cite{ref:KatoAdiabatic,ref:avron,ref:unity1}. A rapid review of the adiabatic theorem is provided for completeness in~\ref{sec:appadiab}, while Theorem~\ref{thm:ContFormulation} is proved in~\ref{app:StrongCouplingProof}. As with a unitary kick, the contribution of the Hamiltonian $H$ to the limiting evolution stems from its diagonal part with respect to the control potential (note that $[V,H_Z]=0$). Using the spectral decomposition of $V$ in Eq.~\eqref{eq:Vspecdec} the evolution operator can be written \begin{align}\notag U_K(t)&=\mathrm{e}^{-\mathrm{i} t\sum_\mu K\lambda_\mu P_\mu+P_\mu H P_\mu}+\mathcal{O}\left(\frac{1}{K}\right) \\ &=\sum_{\mu=1}^m \mathrm{e}^{-\mathrm{i} t(K\lambda_\mu+P_\mu H P_\mu)}P_\mu+\mathcal{O}\left(\frac{1}{K}\right). \label{eq:uk} \end{align} Off-diagonal transitions (with respect to the eigenspaces of $V$) are suppressed, while in each $V$-eigenspace $\Hi_\mu$ the system evolves non-trivially according to the projected Hamiltonian $P_\mu H P_\mu$. 
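Theorem~\ref{thm:ContFormulation} can be checked numerically along the lines of the simulations reported later in this Article. The sketch below is illustrative only: the random $5\times5$ Hermitian $H$, the two-eigenspace potential $V$, and the seed are our assumptions:

```python
# Strong-coupling limit: e^{itKV} e^{-it(H+KV)} -> e^{-itH_Z} with error O(1/K).
# The 5x5 random Hermitian H and the two-eigenspace V are illustrative choices.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
M = rng.uniform(-1, 1, (5, 5)) + 1j * rng.uniform(-1, 1, (5, 5))
H = (M + M.conj().T) / 2
V = np.diag([1.0, 1.0, 0.0, 0.0, 0.0])          # eigenvalues 1 (2x) and 0 (3x)

P1 = np.diag([1.0, 1.0, 0.0, 0.0, 0.0])         # eigenprojections of V
P2 = np.eye(5) - P1
H_Z = P1 @ H @ P1 + P2 @ H @ P2                 # Zeno Hamiltonian, [V, H_Z] = 0

def err_strong(K, t=1.0):
    """Hilbert-Schmidt distance between the rotated evolution and e^{-itH_Z}."""
    return np.linalg.norm(
        expm(1j * t * K * V) @ expm(-1j * t * (H + K * V)) - expm(-1j * t * H_Z))

for K in (10, 100, 1000):
    print(K, err_strong(K))   # decays roughly like 1/K
```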
\subsection{Similarities and differences} The limiting dynamics~(\ref{eq:un}) and~(\ref{eq:uk}) are strikingly similar. We now show that this similarity is a consequence of a double limit~\cite{ref:BBZeno}, where the order in which the two limits are taken is immaterial. Let us first observe that, using the Trotter product formula, one can get a continuous coupling starting from a pulsed-like evolution: \begin{equation}\label{eq:Trotter} \left(\mathrm{e}^{-\mathrm{i} \frac{t}{n}KV}\mathrm{e}^{-\mathrm{i} \frac{t}{n}H}\right)^n=\mathrm{e}^{-\mathrm{i} t(H+KV)}+\mathcal{O}\left(\frac{1}{n}\right). \end{equation} Define the evolution operator \begin{equation} \label{eq:UnK} U_{n,K}(t)=\left(\mathrm{e}^{-\mathrm{i} \frac{t}{n}KV}\mathrm{e}^{-\mathrm{i} \frac{t}{n}H}\right)^n . \end{equation} The strong coupling limit~\eqref{eq:StrCoupLim} can be written \begin{equation} \lim_{K\to\infty}\lim_{n\to\infty}\mathrm{e}^{\mathrm{i} tKV}U_{n,K}(t)=\mathrm{e}^{-\mathrm{i} tH_Z} , \end{equation} where the inner $n$-limit yields continuous coupling, while the outer $K$-limit yields the strong coupling limit. On the other hand, one gets the kicked (bang-bang) control~\eqref{eq:PulsedLimit} with $\Uk=\mathrm{e}^{-\mathrm{i} tV}$ as \begin{equation}\label{standard} \lim_{K=n \to \infty}\mathrm{e}^{\mathrm{i} tKV}U_{n,K}(t)=\mathrm{e}^{-\mathrm{i} tH_Z} . \end{equation} Both cases make use of a double limit in the variables $n,K$. In the former case, the limit is first taken on $n$ and then on $K$, while in the latter case the limit is taken along the diagonal of the $(n,K)$ plane, as shown in Fig.~\ref{fig:(n,K)plane} (solid-blue line). The dashed-red line in Fig.~\ref{fig:(n,K)plane} corresponds to the limit $n\rightarrow\infty$, yielding the Trotter product formula~\eqref{eq:Trotter}, namely continuous coupling without the strong coupling limit. A few warnings are necessary. 
Although the \emph{limiting} procedures are equivalent, the details and speed of convergence depend on (physical) procedures and experimental implementation~\cite{ref:berry,ref:ControlDecoZeno,Ketterle}, in particular because for $n$ or $K$ large but finite one might incur the inverse Zeno effect~\cite{ref:AZE,ref:InverseZeno,ref:koshinoshimuzu}, whereby transitions to the other Zeno subspaces are accelerated, rather than suppressed~\cite{Wilkinson}. This is a crucial factor, in the light of the many recent experiments on QZD~\cite{Raimond2010,Firenze2014,Signoles2014,Wineland,Reichel}. \begin{figure}[t] \begin{center} \includegraphics[width=.55\textwidth]{nK_Plane.pdf} \end{center} \caption{The full (blue) line corresponds to the ``simultaneous" limit in $K$ and $n$, representing the pulsed dynamics. The dashed (red) curve corresponds to the limit taken only over $n$, which is the Trotter limit, yielding continuous coupling. In such a case there is no control since there is no strong coupling limit. The other curves refer to a coupling constant $K_n =n^\alpha$ with $0<\alpha<1$.} \label{fig:(n,K)plane} \end{figure} \section{An intermediate limit}\label{sec:intermediateLim} Motivated by the preceding comments, and by the pictorial view in Fig.~\ref{fig:(n,K)plane}, we now consider intermediate situations, and ask whether any interesting limit can be proved (therefore yielding quantum control) also in the region of the $(n,K)$ plane between the two extremal cases considered. The answer is affirmative, and, in particular, the double limit along the curves \begin{equation} K_n =n^\alpha, \qquad \text{with } \alpha\in (0,1) \label{eq:Knnalpha} \end{equation} yields quantum control, as shown in the following theorem. 
\begin{thm}\label{thm:doublelimit} Let $U_{n,K}(t)$ be the pulsed evolution~\eqref{eq:UnK}, with $V$ having the spectral decomposition~\eqref{eq:Vspecdec}, and assume that \begin{equation} K_n\to \infty, \qquad \text{with } K_n =o\left( n \right), \qquad \text {as } n\rightarrow\infty. \label{eq:Kncond} \end{equation} Then one has \begin{equation} \label{eq:IntermediateLimit} U_{n,K_n }(t)=\mathrm{e}^{-\mathrm{i} tK_n V} \mathrm{e}^{-\mathrm{i} tH_Z}+\mathcal{O}\left(\frac{1}{K_n }\right), \end{equation} as $n\to\infty$, uniformly on compact time intervals, where $H_Z$ is the Zeno Hamiltonian~\eqref{eq:HZ} with respect to the eigenprojections $\{P_\mu\}$ of $V$. \end{thm} The theorem is proved in \ref{sec:appinterm}. The error estimate in the above formula is obtained by using the same technique adopted for the pulsed procedure. The proof is a corollary of (the proof) of Theorem~\ref{thm:PulsedFormulation} when the unitary kick is given by \begin{equation} \Uk = \mathrm{e}^{-\mathrm{i} \frac{t}{n}K_n V}, \end{equation} whose spectral resolution is~\eqref{eq:specdecUk} with $\phi_\mu = t \lambda_\mu K_n/ n$. Notice that, since by assumption $K_n/n\to 0$, for sufficiently large $n$ one has $\max_{\mu,\nu} |\phi_\mu-\phi_\nu|\in (0,2\pi)$, whence $\mathrm{e}^{-\mathrm{i} \phi_\mu} \neq \mathrm{e}^{-\mathrm{i} \phi_\nu}$ for all~$\mu\neq\nu$, and the eigenprojections of $V$ and $\Uk$ coincide. We now pause for a moment and give a pictorial view of the evolution~(\ref{eq:UnK}), yielding the limit~(\ref{eq:IntermediateLimit}). This can be interpreted in two equivalent ways. One can assume that each unitary acts for a time $t/n$, but the two generators $H$ and $KV$ are scaled differently, with $K=K_n$ as in Eq.~(\ref{eq:Kncond}): see left panel in Fig.~\ref{fig:picfeynman}. Alternatively, one can consider the two generators $H$ and $V$ acting for different times $t/n$ and $Kt/n$, with $K=K_n$ as in Eq.~(\ref{eq:Knnalpha}): see right panel in Fig.~\ref{fig:picfeynman}. 
The two pictures are equivalent, and in both cases the factor $\mathrm{e}^{-\mathrm{i} tK_n V}$ ``dominates" (controls) $\mathrm{e}^{-\mathrm{i} tH}$ in~(\ref{eq:UnK}), the latter yielding the controlled dynamics $\mathrm{e}^{-\mathrm{i} tH_Z}$ in~(\ref{eq:IntermediateLimit}), which acts within the Zeno subspaces. \begin{figure}[t] \begin{center} \includegraphics[width=\textwidth]{Equivalence.pdf} \end{center} \caption{Two equivalent ways of viewing the Trotter dynamics.} \label{fig:picfeynman} \end{figure} Applying Theorem~\ref{thm:doublelimit} to a coupling $K_n$ of the form~\eqref{eq:Knnalpha} we get \begin{equation}\label{eq:IntermediateLimitalpha} \bigl(\mathrm{e}^{-\mathrm{i} t \frac{n^\alpha}{n} V}\mathrm{e}^{-\mathrm{i} \frac{t}{n}H}\bigr)^n =\mathrm{e}^{-\mathrm{i} t n^\alpha V} \mathrm{e}^{-\mathrm{i} tH_Z}+\mathcal{O}\left(\frac{1}{n^\alpha }\right), \end{equation} for all $\alpha \in (0,1)$. This represents a first step towards our original motivation, Eq.~(\ref{eq:lie2}). Notice that the control can be extended up to $\alpha=1$, i.e.\ $K_n=n$, for all non-resonant times $t$ such that \begin{equation} \mathrm{e}^{-\mathrm{i} t \lambda_\mu} \neq \mathrm{e}^{-\mathrm{i} t\lambda_\nu}, \qquad \text{for all } \mu\neq\nu \label{eq:nonres} \end{equation} (this requirement is needed in order to ensure that the eigenprojections of $\mathrm{e}^{-\mathrm{i} tK_n V}$ and $V$ are the same). However, in general, for $\alpha>1$ the limit may not exist due to resonances: assume for example that $V^2=I$, then \begin{equation} U_{n,n^2}(\pi/2)=\left(\mathrm{e}^{-\mathrm{i} n \frac{\pi}{2} V}\mathrm{e}^{-\mathrm{i} \frac{\pi}{2 n}H}\right)^n \end{equation} does not have a limit. Indeed, for even $n$, $\mathrm{e}^{-\mathrm{i} n \frac{\pi}{2} V}=(-1)^{n/2} I$, and thus the control is ineffective, $U_{n,n^2}(\pi/2)=\left(\mathrm{e}^{-\mathrm{i} \frac{\pi}{2 n}H}\right)^n = \mathrm{e}^{-\mathrm{i} \frac{\pi}{2} H}$, while for odd $n$, Eq.~\eqref{eq:IntermediateLimit} holds. 
Similar phenomena were obtained for $\alpha>1$ in the context of the quantum Zeno effect, where the limit evolution was shown to be sensitive to the spectral properties of the periodic projections and to the arithmetic properties of~$\alpha$~\cite{Zenolimit}. A numerical analysis shows that the error estimate in~\eqref{eq:IntermediateLimitalpha} is indeed sharp. See Fig.~\ref{fig:ZenoAlpha}. \begin{figure}[t] \centering \includegraphics[height=11.9em]{ZenoAlpha=03} \includegraphics[height=11.9em]{ZenoAlpha=05} \includegraphics[height=11.9em]{ZenoAlpha=08} \caption{Error~(\ref{eq:IntermediateLimit}), as defined in Eq.~(\ref{eq:errornum}), for three different values of $\alpha$. The fit always yields an error $\mathcal{O}(K_n^{-1})=\mathcal{O}(n^{-\alpha})$.} \label{fig:ZenoAlpha} \end{figure} We perform the numerical simulation by considering $5\times 5$ matrices and set $t=1$. For the free Hamiltonian $H$, we generate a matrix $A$ with random entries in the square $[-1,1]\times[-i,i]$ of the complex plane, and consider the Hermitian matrix \begin{equation}\label{eq:randomHermitian} H=\frac{A+A^\dagger}{2}. \end{equation} For the control potential $V$, we take a matrix with two eigenspaces, a $2$- and a $3$-dimensional one: $V=\mathrm{diag}(\lambda_1,\lambda_1,\lambda_2,\lambda_2,\lambda_2)$. The particular choice of $\lambda_1,\lambda_2$ is irrelevant as long as they are different. We set $\lambda_1=1,\lambda_2=0$. Let \begin{equation} \label{eq:errornum} \varepsilon_\alpha^Z(n)=\bigl\| U_{n,n^\alpha}(t)- \mathrm{e}^{-\mathrm{i} t n^\alpha V} \mathrm{e}^{-\mathrm{i} tH_Z} \bigr\| , \end{equation} where $\Norm{A}:=\sqrt{\tr{(A^\dagger A)}}$ is the Hilbert-Schmidt norm. In order to determine the asymptotic behaviour, we take a linear fit of the above quantity over the last decade of points in a logarithmic plot. Figure~\ref{fig:ZenoAlpha} displays our results. 
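For concreteness, the simulation just described can be condensed into a few lines of Python (an illustrative sketch only; the seed and the numpy/scipy routines are our choices):

```python
# Sketch of the simulation described above: eps_alpha^Z(n) in Hilbert-Schmidt
# norm should decay like K_n^{-1} = n^{-alpha}.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
M = rng.uniform(-1, 1, (5, 5)) + 1j * rng.uniform(-1, 1, (5, 5))
H = (M + M.conj().T) / 2                         # random Hermitian matrix
V = np.diag([1.0, 1.0, 0.0, 0.0, 0.0])           # lambda_1 = 1, lambda_2 = 0

P1 = np.diag([1.0, 1.0, 0.0, 0.0, 0.0])          # eigenprojections of V
P2 = np.eye(5) - P1
H_Z = P1 @ H @ P1 + P2 @ H @ P2                  # Zeno Hamiltonian

def eps_zeno(n, alpha, t=1.0):
    """Distance between U_{n,n^alpha}(t) and the limiting Zeno evolution."""
    K = n ** alpha
    step = expm(-1j * (t / n) * K * V) @ expm(-1j * (t / n) * H)
    U = np.linalg.matrix_power(step, n)
    return np.linalg.norm(U - expm(-1j * t * K * V) @ expm(-1j * t * H_Z))

for n in (100, 10_000, 1_000_000):
    print(n, eps_zeno(n, 0.5))   # ~ n^(-1/2) for alpha = 1/2
```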
One observes that for three different values of $\alpha$, the distance~(\ref{eq:errornum}) decays like $K_n^{-1}=n^{-\alpha}$, proving that the limit \eqref{eq:IntermediateLimit} is sharp. \section{Generalized product formula}\label{sec:GeneralizedPF} The link between the pulsed dynamics and the continuous coupling has been established using the Trotter approximation~\eqref{eq:Trotter}, where the two parameters $n$ and $K$ are considered independent. By contrast, in the intermediate situation considered in the previous section, these parameters satisfy a given relation $K=K_n $. Loosely speaking, a glance at Eq.~(\ref{eq:IntermediateLimit}) suggests that one manages to control the dynamics of the system as if the Trotter product formula were valid, despite the dependence $K=K_n $. To see this, note that by comparing the asymptotics~\eqref{eq:IntermediateLimit}, \begin{equation}\label{eq:intermediateapprorate} \bigl(\mathrm{e}^{-\mathrm{i} \frac{t}{n}K_n V}\mathrm{e}^{-\mathrm{i} \frac{t}{n}H}\bigr)^n=\mathrm{e}^{-\mathrm{i} tK_n V} \mathrm{e}^{-\mathrm{i} tH_Z}+\mathcal{O}\left(\frac{1}{K_n }\right), \end{equation} with the strong coupling limit~\eqref{eq:strcoupapproachrate}, \begin{equation}\label{eq:strcouplapprorate2} \mathrm{e}^{-\mathrm{i} t(H+K_n V)}=\mathrm{e}^{-\mathrm{i} tK_n V} \mathrm{e}^{-\mathrm{i} tH_Z}+\mathcal{O}\left(\frac{1}{K_n }\right), \end{equation} one gets \begin{equation}\label{eq:GeneralizedAnalyticBound2} \bigl(\mathrm{e}^{-\mathrm{i} \frac{t}{n}K_n V}\mathrm{e}^{-\mathrm{i} \frac{t}{n}H}\bigr)^n=\mathrm{e}^{-\mathrm{i} t(H+K_n V)}+\mathcal{O}\left(\frac{1}{K_n }\right), \end{equation} as $n\to\infty$, with $K_n $ satisfying~\eqref{eq:Kncond}. This equation resembles the Trotter product formula, except for the $n$-dependence of the coupling constant $K_n $, suggesting that an approximation of this sort might be valid in more general situations. 
Due to many physical applications, the extended validity of Trotter's formula is interesting in its own right, so that it would be desirable to understand under which conditions this approximation can be used and which errors are implied. One gets the following result. \begin{thm} \label{thm:GeneralizedProductFormula} Let $H$ and $V$ be Hermitian operators acting on a finite dimensional Hilbert space $\Hi$ and let $K_n $ be a real-valued function of $n$ such that \begin{equation} K_n =o\left( n \right), \qquad \text {as } n\rightarrow\infty. \end{equation} Then \begin{equation} \label{eq:GeneralizedProductFormula2} \lim_{n\rightarrow\infty}\left[\bigl(\mathrm{e}^{-\mathrm{i} \frac{t}{n}K_n V}\mathrm{e}^{-\mathrm{i} \frac{t}{n}H}\bigr)^n-\mathrm{e}^{-\mathrm{i} t(K_n V+H)}\right]=0. \end{equation} In particular, for large $n$ one has \begin{equation}\label{eq:GeneralizedAnalyticBound} \bigl(\mathrm{e}^{-\mathrm{i} \frac{t}{n} {K_n }V}\mathrm{e}^{-\mathrm{i} \frac{t}{n}H}\bigr)^n-\mathrm{e}^{-\mathrm{i} t( {K_n } V +H)}=\mathcal{O}\left(\frac{K_n }{n}\right). \end{equation} \end{thm} This is proved in~\ref{app:GenTrotter}, by exploiting the usual techniques adopted to prove the classical Trotter product formula. By comparing~\eqref{eq:GeneralizedAnalyticBound} with~\eqref{eq:GeneralizedAnalyticBound2} we see immediately that the error bound in Theorem~\ref{thm:GeneralizedProductFormula} is not optimal, since for $K_n =n^\alpha$ with $1/2<\alpha<1$, Eq.~\eqref{eq:GeneralizedAnalyticBound2} establishes a better bound. Using these two estimates together we can establish that the error is smaller than $\mathcal{O}(1/\sqrt{n})$ and that the worst case occurs at $\alpha=1/2$. 
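A minimal numerical probe of this result can be sketched as follows (illustrative assumptions: a random $5\times5$ Hermitian $H$, a two-eigenspace $V$, and $K_n=\sqrt{n}$, the worst case of the combined analytic bounds):

```python
# Generalized product formula with K_n = n^(1/2): measure the Trotter-like
# error directly.  The random H, the two-eigenspace V, and the seed are
# illustrative choices.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
M = rng.uniform(-1, 1, (5, 5)) + 1j * rng.uniform(-1, 1, (5, 5))
H = (M + M.conj().T) / 2
V = np.diag([1.0, 1.0, 0.0, 0.0, 0.0])

def eps_trotter(n, alpha, t=1.0):
    """Distance between the split product and e^{-it(K_n V + H)}."""
    K = n ** alpha
    step = expm(-1j * (t / n) * K * V) @ expm(-1j * (t / n) * H)
    U = np.linalg.matrix_power(step, n)
    return np.linalg.norm(U - expm(-1j * t * (K * V + H)))

for n in (100, 10_000, 1_000_000):
    # numerics suggest ~1/n; the analytic bounds guarantee at least n^(-1/2)
    print(n, eps_trotter(n, 0.5))
```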
\subsection{Numerical analysis} \label{numan} \begin{figure}[t] \centering \includegraphics[height=11em]{TrotterAlpha=03} \includegraphics[height=11em]{TrotterAlpha=05} \includegraphics[height=11em]{TrotterAlpha=08} \caption{The error~(\ref{eq:errt}) is independent of $\alpha$ and is always $\mathcal{O}(1/n)$. } \label{fig:TrotterAlpha} \end{figure} We have performed a numerical analysis of the generalized product formula~(\ref{eq:GeneralizedProductFormula2}), using random Hermitian matrices $H$ and $V$ generated as in~\eqref{eq:randomHermitian} and analyzing the quantity \begin{equation} \label{eq:errt} \varepsilon_\alpha^T(n)=\bigl\| \bigl(\mathrm{e}^{-\mathrm{i} \frac{t}{n} n^\alpha V}\mathrm{e}^{-\mathrm{i} \frac{t}{n}H}\bigr)^n-\mathrm{e}^{-\mathrm{i} t(n^\alpha V+H)}\bigr\| \end{equation} as a function of $n$. The results are displayed in Fig.~\ref{fig:TrotterAlpha}, and are of interest, in that they show that the error is always $\mathcal{O}(n^{-1})$. We offer no analytic explanation, at this stage, for this bound. In order to better characterize the asymptotic behaviour for different values of $\alpha\in(0,1)$, we performed further numerical analyses in accordance with the following procedure: i) divide the interval $[0,1]$ in equal steps $\Delta\alpha=0.05$ (yielding $21$ values for $\alpha$); ii) for each value of $\alpha$, perform a linear fit of the curve $\varepsilon_\alpha^T(n)$ in a logarithmic scale for two decades of points between $n=10^4$ and $n=10^6$; iii) plot the exponent $\beta$ of the asymptotic power behaviour of $\varepsilon_\alpha^T(n)$ vs $\alpha$; iv) iterate the procedure for several random matrices ($N_{\text{iter}}=10$). 
The results obtained from this procedure are shown in Fig.~\ref{fig:TrotterAllAlpha} (different curves corresponding to different random iterations) and confirm that the power behaviour of $\varepsilon_\alpha^T(n)$ is independent of $\alpha$ and yields, to very good approximation, \begin{equation} \label{eq:numerr} \left(\mathrm{e}^{-\mathrm{i} \frac{t}{n}K_n V}\mathrm{e}^{-\mathrm{i} \frac{t}{n}H}\right)^n=\mathrm{e}^{-\mathrm{i} t(K_n V+H)}+\mathcal{O}\left(\frac{1}{n}\right). \end{equation} The numerical analysis confirms that the Trotter product formula works exactly as if $K_n$ were independent of $n$. We conclude this section with a final comment on the oscillations ($\simeq 10\%$) of the exponent around the value $\alpha\simeq 0.15$ that are observed in Fig.~\ref{fig:TrotterAllAlpha}. They can be explained by scrutinizing a particular iteration around this value, see the right panel of Fig.~\ref{fig:TrotterAllAlpha}: the evolution has not yet reached the asymptotic regime for $n$ between $10^4$ and $10^6$, and oscillations distort the linear fit. \begin{figure}[t] \centering \includegraphics[width=.45\textwidth]{TrotterAllAlpha} \includegraphics[width=.45\textwidth]{TrotterAlpha=015} \caption{\textit{Left panel}: Exponent $\beta$ of the asymptotic power-like behaviour $\varepsilon^T_\alpha(n)=\mathcal{O}(n^\beta)$ vs $\alpha$. Different curves correspond to different random iterations. The exponent is essentially $-1$ for large enough $\alpha$. \textit{Right panel}: Error $\varepsilon^T_{\alpha=0.15}(n)$, in the region of the left panel where the power-like behaviour is less stable. The region where the linear fit is computed, between $n=10^4$ and $n=10^6$, is not in the full asymptotic regime and displays oscillations that distort the linear fit.
} \label{fig:TrotterAllAlpha} \end{figure} \subsection{The qubit case} \label{sec:qubit} We corroborate the $\alpha$-independence of the convergence rate in~\eqref{eq:numerr} observed in the numerics by providing an explicit example for the qubit case, where the bound $\mathcal{O}(1/n)$ can be analytically obtained for $0\leqslant \alpha < 1$. Let $V= Z$ and $H=X$, where $X$ and $Z$ are the first and third Pauli matrices, respectively, and take for simplicity $t=1$. Let \begin{equation} U_n=\bigl(\mathrm{e}^{-\mathrm{i}\frac{n^\alpha}{n} Z} \mathrm{e}^{-\mathrm{i}\frac{1}{n}X}\bigr)^n, \qquad V_n=\mathrm{e}^{-\mathrm{i}(n^\alpha Z+X)}. \end{equation} We will prove that \begin{equation}\label{eq:QubitAnalyticBound} U_n-V_n=\mathcal{O}\left(\frac{1}{n}\right). \end{equation} For this purpose, first note that \begin{equation} \mathrm{e}^{-\mathrm{i}\frac{n^\alpha}{n} Z}\mathrm{e}^{-\mathrm{i}\frac{1}{n}X}=\left(\cos \frac{n^\alpha}{n} \, I-\mathrm{i}\sin \frac{n^\alpha}{n}\, Z\right)\left(\cos \frac{1}{n}\, I-\mathrm{i}\sin \frac{1}{n}\, X\right) =\mathrm{e}^{-\mathrm{i}\theta_n \vec{u}_n\cdot \vec{\sigma}} \end{equation} where $\vec{\sigma}=(X,Y,Z)$, \begin{equation} \theta_n=\arccos\left(\cos \frac{n^\alpha}{n} \, \cos \frac{1}{n} \right), \end{equation} and \begin{equation} \vec{u}_n=\frac{1}{\sin \theta_n }\left(\cos \frac{n^\alpha}{n}\, \sin \frac{1}{n},\, \sin \frac{n^\alpha}{n}\,\sin \frac{1}{n},\,\sin\frac{n^\alpha}{n}\,\cos\frac{1}{n}\right) \end{equation} is a unit vector, $\abs{\vec{u}_n}=1$. Thus we have \begin{equation} U_n=\mathrm{e}^{-\mathrm{i} n\theta_n \vec{u}_n\cdot \vec{\sigma}}, \qquad V_n=\mathrm{e}^{-\mathrm{i} \phi_n \vec{v}_n\cdot \vec{\sigma}}, \end{equation} where \begin{equation} \phi_n=\sqrt{n^{2\alpha}+1}, \qquad \vec{v}_n=\frac{1}{\phi_n}\left(1,0,n^\alpha \right).
\end{equation} Therefore, \begin{align}\notag U_n-V_n&=\left(\mathrm{e}^{-\mathrm{i} n\theta_n\vec{u}_n\cdot\vec{\sigma}}-\mathrm{e}^{-\mathrm{i}\phi_n\vec{u}_n\cdot\vec{\sigma}}\right)+\left(\mathrm{e}^{-\mathrm{i}\phi_n\vec{u}_n\cdot\vec{\sigma}}-\mathrm{e}^{-\mathrm{i}\phi_n\vec{v}_n\cdot\vec{\sigma}}\right)\\ &=\mathrm{e}^{-\mathrm{i}\phi_n \vec{u}_n\cdot \vec{\sigma}}\left(\mathrm{e}^{\mathrm{i}(\phi_n-n\theta_n)\vec{u}_n\cdot\vec{\sigma}}-I\right)-\mathrm{i}\sin \phi_n\, (\vec{u}_n -\vec{v}_n)\cdot \vec{\sigma} . \label{eq:Un-Vn} \end{align} It follows that the distance between the two evolutions is controlled by the differences $\phi_n-n\theta_n$ and $\vec{u}_n-\vec{v}_n$, whose asymptotics are \begin{equation}\label{eq:AnaliticBound1} \phi_n-n\theta_n \sim \frac{1}{6} \frac{n^\alpha}{n^2}, \qquad \vec{u}_n-\vec{v}_n \sim \left( -\frac{1}{3} \frac{n^\alpha}{n^2},\, \frac{1}{n} ,\, -\frac{1}{6}\frac{1}{n^2} \right), \end{equation} as $n\to\infty$. By plugging~\eqref{eq:AnaliticBound1} into Eq.~\eqref{eq:Un-Vn}, we finally get \begin{equation} U_n-V_n \sim -\mathrm{i} \frac{1}{n} \sin \phi_n\, Y, \end{equation} as $n\to\infty$, that is~\eqref{eq:QubitAnalyticBound}. Notice that the dominant term in the convergence error comes from the difference of the unit vectors, $\vec{u}_n-\vec{v}_n= \mathcal{O}(1/n)$, the phase difference being of smaller order, $\phi_n-n\theta_n = o (1/n)$. \section{Conclusions and outlook} By introducing a different scaling in the exponent of product evolutions, we have built a bridge between periodically kicked systems, Trotter product formulas and strong-coupling limits. Our numerical and analytical examples indicate a surprisingly good scaling of the error terms obtained.
This paves the way to more efficient quantum control techniques: while in the standard scaling Eq.~\eqref{standard} for bang-bang control the total time of the control pulses (in \figurename~\ref{fig:picfeynman}, the total base length of the blue rectangles on the right) grows as $n$, using shorter kicks of the same strength the same limit can be approached with a total duration of the control pulses scaling as $n^\alpha$, at the price of a slower convergence rate. Further work is needed in order to elucidate the $\mathcal{O}(1/n)$ behaviour of the error~(\ref{eq:numerr}), numerically observed and discussed in Sec.~\ref{numan}. The qubit example discussed in Sec.~\ref{sec:qubit} might serve the purpose, and in particular the fact that the convergence rate comes from the $\mathcal{O}(1/n)$ error on the eigenprojections, the error on the eigenvalues being of smaller order---a fact that might be of general nature. \section*{Acknowledgments} We thank Kazuya Yuasa for discussions. PF, GG and SP are partially supported by Istituto Nazionale di Fisica Nucleare (INFN) through the project ``QUANTUM''. PF and GG are partially supported by the Italian National Group of Mathematical Physics (GNFM-INdAM).
{ "redpajama_set_name": "RedPajamaArXiv" }
\section{Introduction} The EM (Expectation-Maximisation) algorithm \citep{dempster:laird:rubin:1977} is a popular tool for maximum-likelihood (or maximum a posteriori) estimation. The common strand to problems where this approach is applicable is a notion of {\em incomplete data}, which includes the conventional sense of missing data but is much broader than that. The EM algorithm demonstrates its strength in situations where a hypothetical experiment yields \emph{complete} data that are related to the parameters more conveniently than the measurements are. Problems where the EM algorithm has proven to be useful include, among many others, mixture of densities \citep{titterington:smith:makov:1985}, censored data models \citep{tanner:1993}, etc. The EM algorithm has several appealing properties. Because it relies on complete data computations, it is generally simple to implement: at each iteration, \emph{(i)} the so-called \emph{E-step} only involves taking expectation over the conditional distribution of the latent data given the observations and \emph{(ii)} the \emph{M-step} is analogous to complete data weighted maximum-likelihood estimation. Moreover, \emph{(iii)} the EM algorithm is naturally an ascent algorithm, in the sense that it increases the (observed) likelihood at each iteration. Finally, under some mild additional conditions, \emph{(iv)} the EM algorithm may be shown to converge to a \emph{stationary point} (i.e., a point where the gradient vanishes) of the log-likelihood \citep{wu:1983}. Note that convergence to the maximum likelihood estimator cannot in general be guaranteed due to the possible presence of multiple stationary points. When processing large data sets or data streams, however, the EM algorithm becomes impractical due to the requirement that the whole data set be available at each iteration of the algorithm.
For this reason, there has been strong interest in online variants of the EM which make it possible to estimate the parameters of a latent data model without storing the data. In this work, we consider online algorithms for latent data models with independent observations. The dominant approach (see also Section~\ref{sec:state-of-ze-art} below) to online EM-like estimation follows the method proposed by \cite{titterington:1984}, which consists in using a stochastic approximation algorithm, where the parameters are updated after each new observation using the gradient of the incomplete data likelihood weighted by the complete data Fisher information matrix. This approach has been used, with some variations, in many different applications (see, e.g., \citealp{chung:bohme:2005,liu:almhana:choulakian:mcgorman:2006}); a proof of convergence was given by \cite{wang:zhao:2006}. In this contribution, we propose a new online EM algorithm that sticks more closely to the principles of the original (batch-mode) EM algorithm. In particular, each iteration of the proposed algorithm is decomposed into two steps, where the first one is a stochastic approximation version of the E-step aimed at incorporating the information brought by the newly available observation, and the second step consists in the maximisation program that appears in the M-step of the traditional EM algorithm. In addition, the proposed algorithm does not rely on the complete data information matrix, which has two important consequences: firstly, from a practical point of view, the evaluation and inversion of the information matrix is no longer required; secondly, the convergence of the procedure does not rely on the implicit assumption that the model is \emph{well-specified}, that is, that the data under consideration is actually generated by the model, for some unknown value of the parameter.
As a consequence, and in contrast to previous work, we provide an analysis of the proposed algorithm also for the case where the observations are not assumed to follow the fitted statistical model. This consideration is particularly relevant in the case of conditional missing data models, a simple case of which is used as an illustration of the proposed online EM algorithm. Finally, it is shown that, with the additional use of Polyak-Ruppert averaging, the proposed approach converges to the stationary points of the limiting normalised log-likelihood criterion (i.e., the Kullback-Leibler divergence between the marginal density of the observations and the model pdf) at a rate which is optimal. The paper is organised as follows: In Section~\ref{sec:algo}, we review the basics of the EM and associated algorithms and introduce the proposed approach. The connections with other existing methods are discussed at the end of Section~\ref{sec:onlineEM_def} and a simple example of application is described in Section~\ref{sec:exp:pois}. Convergence results are stated in Section~\ref{sec:conv}, first in terms of consistency (Section~\ref{sec:conv:consist}) and then in terms of convergence rate (Section~\ref{sec:conv:rate}), with the corresponding proofs given in Appendix~\ref{sec:proofs}. Finally, in Section~\ref{sec:regmix}, the performance of this approach is illustrated in the context of mixture of linear regressions. \section{Algorithm Derivation} \label{sec:algo} \subsection{EM Basics} \label{sec:intro} In this section, we review the key properties of the EM algorithm as introduced by \cite{dempster:laird:rubin:1977}. The latent variable statistical model postulates the existence of a non-observable or latent variable $X$ distributed under $f(x; \theta)$ where $\{ f(x; \theta), \theta \in \Theta \}$ denotes a parametric family of probability density functions indexed by a parameter value $\theta \in \Theta \subset \mathbb{R}^\nbt$.
The observation $Y$ is then viewed as a deterministic function of $X$ which takes its values in the set ${\mathcal{Y}}$. This latent variable mechanism provides a unified framework for situations which include missing data, censored observations, noisily observed data, \dots \citep{dempster:laird:rubin:1977}. We will denote by $g(y;\theta)$ the (observed) likelihood function induced by the latent data model. In addition, the notations $\mathbb{E}_\theta[\cdot]$ and $\mathbb{E}_\theta[\cdot|Y]$ will be used to denote, respectively, the expectation and conditional expectation under the model parameterised by $\theta$. Likewise, let $\pi$ denote the probability density function of the observation $Y$, where we stress again that we do not restrict ourselves to the case where $\pi(\cdot) = g(\cdot;\theta^\star)$, for an unknown value $\theta^\star$ of the parameter. The notations $\mathbb{P}_\pi$ and $\mathbb{E}_\pi$ will be used to denote probability and expectation under the actual distribution of the observation. Given $n$ independent and identically distributed observations $Y_{1:n} \ensuremath{\stackrel{\mathrm{def}}{=}} (Y_1, \dots, Y_n)$, the maximum likelihood estimator is defined as $\hat{\theta}_n \ensuremath{\stackrel{\mathrm{def}}{=}} \mathrm{argmax}_{\theta\in\Theta} \, n^{-1}\logl{Y_{1:n}}{\theta}$, where \begin{equation} \label{eq:LikelihoodFunction} \logl{Y_{1:n}}{\theta} \ensuremath{\stackrel{\mathrm{def}}{=}} \sum_{i=1}^n \logl{Y_i}{\theta} \;. \end{equation} Note that we normalise the log-likelihood (by $n$) to ease the transition to the online setting where $n$ increases when new observations become available. The EM algorithm is an iterative optimisation algorithm that maximises the above (normalised) log-likelihood function despite the possibly complicated form of $g$ resulting from the latent data model. Traditionally, each EM iteration is decomposed in two steps.
The E-step consists in evaluating the conditional expectation \begin{equation} \label{eq:EM-auxiliary-function} \funcEM{Y_{1:n}}{\theta}{\hat{\theta}_k} \ensuremath{\stackrel{\mathrm{def}}{=}} n^{-1} \sum_{i=1}^n \mathbb{E}_{\hat{\theta}_k}\left[ \log \fx( X_i; \theta) \big| Y_i \right] \end{equation} where $\hat{\theta}_k$ is the current estimate of $\theta$, after $k$ iterations of the algorithm. In the M-step, the value of $\theta$ maximising $\funcEM{Y_{1:n}}{\theta}{\hat{\theta}_k}$ is found. This yields the new parameter estimate $\hat{\theta}_{k+1}$. This two-step process is repeated until convergence. The essence of the EM algorithm is that increasing $\funcEM{Y_{1:n}}{\theta}{\hat{\theta}_k}$ forces an increase of the log-likelihood $\logl{Y_{1:n}}{\theta}$ \citep{dempster:laird:rubin:1977}. For $\mu: \Theta \to \mathbb{R}$ a differentiable function, denote by $\nabla_\theta \mu= (\partial \mu/ \partial \theta_1, \dots, \partial \mu/ \partial \theta_{d_\theta})^T$ the \emph{gradient} of $\mu$. If $\mu$ is twice differentiable, we denote by $\nabla_\theta^2 \mu$ the \emph{Hessian matrix} which is a $d_\theta \times d_\theta$ matrix whose components are given by $[\nabla^2_\theta \mu]_{i,j}= \frac{\partial^2 \mu}{\partial \theta_i \partial \theta_j}$, $1 \leq i,j \leq d_\theta$.
Following \cite{lange:1995}, if the M-step of the EM algorithm is replaced by a Newton update, one obtains, assuming some regularity, the following recursion \begin{equation} \label{eq:gradientEM-algorithm-lange} \hat{\theta}_{k+1} = \hat{\theta}_k + \gamma_{k+1} \left[\FIMcond{Y_{1:n}}{\hat{\theta}_k}\right]^{-1} \sum_{i=1}^n \mathbb{E}_{\hat{\theta}_k} \left[ \nabla_{\theta} \log \fx( X_i; \hat{\theta}_k) \big| Y_i \right] \; , \end{equation} where $\gamma_{k+1}$ is a step size ($\gamma_{k+1} = 1$ corresponds to the actual Newton update) and $\FIMcond{Y_{1:n}}{\theta} = n^{-1} \sum_{i=1}^n \FIMcond{Y_i}{\theta}$ with $\FIMcond{y}{\theta} = - \mathbb{E}_\theta \left[ \nabla_\theta^2 \log \fx( X; \theta) \big| Y=y \right]$. Note that due to the so-called Fisher's identity (see discussion of \citealp{dempster:laird:rubin:1977}), the gradient term indeed coincides with the (observed data) score function as \begin{equation} \mathbb{E}_\theta \left[ \nabla_\theta \log \fx( X; \theta) \big| Y \right] = {\nabla_{\theta}\log g}(Y;\theta) \; . \label{eq:Fisher} \end{equation} The algorithm in~\eqref{eq:gradientEM-algorithm-lange} can be shown to be locally equivalent to the EM algorithm at convergence \citep{lange:1995}. In practice, the step-size $\gamma_{k+1}$ is often adjusted using line searches to ensure that the likelihood is indeed increased at each iteration. In addition, $\FIMcond{Y_{1:n}}{\theta}$ is not necessarily a positive definite matrix or could be badly conditioned; therefore, some adjustment of the weight matrix $\FIMcond{Y_{1:n}}{\theta}$ may be necessary to avoid numerical problems.
A well-known modification of the Newton recursion consists in replacing $\FIMcond{Y_{1:n}}{\theta}$ in~\eqref{eq:gradientEM-algorithm-lange} by the Fisher Information Matrix (FIM) associated with a \emph{complete} observation, \begin{equation} \label{eq:completeobservationFIM} \FIMcomplete{}{\theta} \ensuremath{\stackrel{\mathrm{def}}{=}} - \mathbb{E}_\theta \left[ \nabla_\theta^2 \log f(X;\theta) \right] \; . \end{equation} Under the mild assumption that the complete data model is regular, $\FIMcomplete{}{\theta}$ is guaranteed to be positive definite. This modified recursion, which is more robust, may be seen as an approximation of the \emph{scoring method} \citep{mclachlan:krishnan:1997}, where the complete data FIM is used in place of the actual (\emph{observed}) FIM \begin{equation} \label{eq:FIM} \FIMincomplete{\theta}\ensuremath{\stackrel{\mathrm{def}}{=}} - \mathbb{E}_\theta \left[ \nabla_\theta^2 \log g(Y;\theta) \right] \; , \end{equation} despite the fact that, in general, $\FIMincomplete{\theta}$ and $\FIMcomplete{}{\theta}$ are different. $\FIMcomplete{}{\theta}$ usually also differs from $\FIMcond{Y_{1:n}}{\theta}$, as $\FIMcond{Y_{1:n}}{\theta}$ converges, as $n$ tends to infinity, to \begin{equation} \label{eq:definitionFIMcomplete} \FIMcomplete{\pi}{\theta} \ensuremath{\stackrel{\mathrm{def}}{=}} - \mathbb{E}_\pi\left[ \mathbb{E}_\theta \left[ \nabla_\theta^2 \log f(X;\theta) \big| Y \right] \right] \; , \end{equation} which does not correspond to a Fisher information matrix in the complete data model, except when $\pi$ coincides with $g(\cdot;\theta)$. In the particular case, however, where the complete data model belongs to a canonical (or naturally parameterised) exponential family of distributions, $\FIMcond{Y_{1:n}}{\theta}$ coincides with $\FIMcomplete{}{\theta}$ and thus does not depend on $\pi$ anymore.
Hence, except in some specific cases or if one assumes that the model is well-specified (i.e., $\pi = g(\cdot;\theta^\star)$), the convergence behaviour of the recursion in~\eqref{eq:gradientEM-algorithm-lange} will be different when $\FIMcomplete{}{\theta}$ is used instead of $\FIMcond{Y_{1:n}}{\theta}$. \subsection{Stochastic Gradient EM Algorithms} \label{sec:state-of-ze-art} Being able to perform online estimation means that the data must be run through only once, which is obviously not possible with the vanilla EM algorithm. To overcome this difficulty, we consider in the sequel online algorithms which produce, at a fixed computational cost, an updated parameter estimate $\hat{\theta}_{n}$ for each new observation $Y_n$. Note that in the online setting, the iteration index (which was previously denoted by $k$) is identical to the observation index $n$ and we will use the latter when describing the algorithms. To the best of our knowledge, the first online parameter estimation procedure for latent data models is due to \cite{titterington:1984}, who proposed to use a stochastic approximation version of the modified gradient recursion: \begin{equation} \label{eq:recursiveEM-Titterington} \hat{\theta}_{n+1} = \hat{\theta}_n + \gamma_{n+1} \FIMcomplete[-1]{}{\hat{\theta}_n} {\nabla_{\theta}\log g}(Y_{n+1}; \hat{\theta}_n) \;, \end{equation} where $\{ \gamma_n \}$ is a decreasing sequence of positive step sizes. One may also consider using a stochastic approximation version of the original (Newton) recursion in~(\ref{eq:gradientEM-algorithm-lange}): \begin{equation} \label{eq:recursiveEM-Lange} \hat{\theta}_{n+1} = \hat{\theta}_n + \gamma_{n+1} \FIMcomplete[-1]{\pi}{\hat{\theta}_n} {\nabla_{\theta}\log g}(Y_{n+1}; \hat{\theta}_n) \; .
\end{equation} Note that \eqref{eq:recursiveEM-Lange} does not correspond to a practical algorithm as $\FIMcomplete{\pi}{\hat{\theta}_n}$ is usually unknown, although it can be estimated, for instance, by recursively averaging over the values of $\FIMcond{Y_n}{\hat{\theta}_n}$. As discussed above, however, this algorithm may be less robust than~\eqref{eq:recursiveEM-Titterington} because $\FIMcond{Y_n}{\hat{\theta}_n}$ is (usually) not guaranteed to be positive definite. In the following, we will refer to~\eqref{eq:recursiveEM-Titterington} as \emph{Titterington's online algorithm} and to~\eqref{eq:recursiveEM-Lange} as the \emph{online gradient algorithm} (in reference to the title of the paper by \citealp{lange:1995}). Note that both of these algorithms are based on the stochastic gradient approach and bear very little resemblance to the original EM algorithm. \subsection{The Proposed Online EM Algorithm} \label{sec:onlineEM_def} We now consider an online approach which is more directly related to the principle underpinning the EM algorithm. The basic idea is to replace the expectation step by a stochastic approximation step, while keeping the maximisation step unchanged. More precisely, at iteration $n$, consider the function \begin{equation} \label{eq:recursiveEM} \hat{Q}_{n+1}( \theta) = \hat{Q}_{n}( \theta) + \gamma_{n+1} \left(\mathbb{E}_{\hat{\theta}_n} \left[ \log f(X_{n+1}; \theta) \big| Y_{n+1} \right] - \hat{Q}_{n}(\theta) \right) \; , \end{equation} and set $\hat{\theta}_{n+1}$ as the maximiser of the function $\theta \mapsto \hat{Q}_{n+1}(\theta)$ over the feasible set $\Theta$. One important advantage of \eqref{eq:recursiveEM} compared to \eqref{eq:recursiveEM-Titterington} is that it automatically satisfies the parameter constraints without requiring any further modification. In addition, \eqref{eq:recursiveEM} does not explicitly require the inversion of a $(\nbt \times \nbt)$ matrix.
For further comparisons between both approaches, both practical and in terms of rate of convergence, we refer to the example of Section~\ref{sec:exp:pois} and to the analysis of Section~\ref{sec:conv}. Of course, this algorithm is of practical interest only if it is possible to compute and maximise $\hat{Q}_n(\theta)$ efficiently. In the following, we focus on the case where the complete data likelihood belongs to an \emph{exponential family} satisfying the following assumptions. Let $\pscal{\cdot}{\cdot}$ denote the scalar product between two vectors of $\mathbb{R}^d$ and $|\cdot|$ the associated norm. \begin{assumption} \label{assum:expon} \begin{description} \item[(a)] The complete data likelihood is of the form \begin{equation} \label{eq:curved-ex} f(x; \theta) = h(x) \exp\left\{-\psi(\theta) + \pscal{S(x)}{\phi(\theta)}\right\} \; . \end{equation} \item[(b)] The function \begin{equation} \label{eq:defcondexpsuffstat} \condexpsuffstat{y}{\theta} \ensuremath{\stackrel{\mathrm{def}}{=}} \mathbb{E}_\theta \left[ S(X) \big| Y=y \right] \; , \end{equation} is well defined for all $(y,\theta) \in {\mathcal{Y}} \times \Theta$. \item[(c)] There exists a convex open subset $\mathcal{S} \subset \mathbb{R}^d$, which is such that \begin{itemize} \item for all $s \in \mathcal{S}$, $(y,\theta) \in {\mathcal{Y}} \times \Theta$ and $\gamma \in [0,1)$, $(1-\gamma) s + \gamma \condexpsuffstat{y}{\theta} \in \mathcal{S}$. \item for any $s \in \mathcal{S}$, the function $\theta \mapsto \ell(s;\theta) \ensuremath{\stackrel{\mathrm{def}}{=}} -\psi(\theta) + \pscal{s}{\phi(\theta)}$ has a unique global maximum over $\Theta$ denoted $\functheta{s}$, i.e.\ \begin{equation} \label{eq:definition-functheta} \functheta{s} \ensuremath{\stackrel{\mathrm{def}}{=}} \mathrm{argmax}_{\theta \in \Theta} \ell(s;\theta) \;.
\end{equation} \end{itemize} \end{description} \end{assumption} Assumption \ref{assum:expon} implies that the evaluation of $\mathbb{E}_\theta \left[ \log f(X; \theta) \big| Y\right]$, and hence the E-step of the EM algorithm, reduces to the computation of the expected value $\mathbb{E}_\theta \left[ S(X) \big| Y \right]$ of the \emph{complete data sufficient statistic} $S(X)$. Indeed, the EM reestimation functional $\funcEM{Y_{1:n}}{\theta}{\theta'}$ is then defined by \begin{equation*} \funcEM{Y_{1:n}}{\theta}{\theta'} = \ell \left( n^{-1} \sum_{i=1}^n \condexpsuffstat{Y_i}{\theta'}; \theta \right) \; . \end{equation*} The $(k+1)$-th iteration of the (batch mode) EM algorithm may thus be expressed as \begin{equation} \label{eq:EM-recursion} \hat{\theta}_{k+1}= \functheta{ n^{-1} \sum_{i=1}^n \condexpsuffstat{Y_i}{\hat{\theta}_k}}\; , \end{equation} where the M-step corresponds to the application of the function $\bar\theta$. Note that the construction of the set $\mathcal{S}$ in Assumption \ref{assum:expon}(c) reflects the fact that in most applications of EM, the M-step is unambiguous only when a sufficient number of observations have been gathered. This point will be illustrated in the example to be considered in Section~\ref{sec:regmix} below. Assumption \ref{assum:expon}(c) takes care of this issue in the case of the online EM algorithm. As an additional comment about Assumption \ref{assum:expon}, note that we do not require that $\phi$ be a one-to-one mapping and hence the complete data model may also correspond to a \emph{curved} exponential family, where typically $\theta$ is of much lower dimension than $\phi(\theta)$ (see, for instance, \citealp{chung:bohme:2005,cappe:charbit:moulines:2006} for an example involving Gaussian densities with structured covariance matrices).
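As a toy instance of Assumption \ref{assum:expon} (our own illustration, not one of the examples discussed in this paper), consider a single Poisson variable that is observed completely, $X = Y$: \begin{equation*} f(x;\theta) = \frac{\theta^x}{x!}\, \mathrm{e}^{-\theta} = \frac{1}{x!} \exp\left\{ -\theta + x \log \theta \right\} \;, \end{equation*} which is of the form \eqref{eq:curved-ex} with $h(x) = 1/x!$, $S(x) = x$, $\psi(\theta) = \theta$ and $\phi(\theta) = \log \theta$. Then $\condexpsuffstat{y}{\theta} = y$, the function $\ell(s;\theta) = -\theta + s \log \theta$ has the unique maximiser $\functheta{s} = s$ over $\Theta = (0,\infty)$, and Assumption \ref{assum:expon}(c) holds with $\mathcal{S} = (0,\infty)$, so that the stochastic approximation step on the sufficient statistic reduces to a running weighted average of the observations.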
In this setting, the proposed online EM algorithm takes the following form \begin{align} \hat{s}_{n+1} & = \hat{s}_n + \gamma_{n+1}(\condexpsuffstat{Y_{n+1}}{\hat{\theta}_n} -\hat{s}_n) \; , \nonumber \\ \hat{\theta}_{n+1} & = \functheta{\hat{s}_{n+1}} \; . \label{eq:recursiveEM-curvedexponentialfamily} \end{align} Algorithms of this kind have a rather long history in the machine learning community. The idea of sequentially updating the vector of sufficient statistics was apparently first proposed by \cite{nowlan:1991}, using a fixed step size (or learning rate) $\gamma_n = \gamma$ (see also \citealp{jordan:jacobs:1994}). The online EM algorithm \eqref{eq:recursiveEM-curvedexponentialfamily} is also closely related to the ``incremental'' version of the EM algorithm derived by \cite{neal:hinton:1999}. The incremental setting is more general than the recursive setting considered here, because the observations are not necessarily processed sequentially in time and might be used several times. The incremental EM algorithm of \cite{neal:hinton:1999} defines the $(k+1)$-th parameter estimate as \begin{equation} \label{eq:IncrmentalEM-recursion} \hat{\theta}_{k+1}= \functheta{ \left[\min(k+1,n)\right]^{-1} \sum_{i=1}^{\min(k+1,n)} \hat{s}_{k+1,i} } \;, \end{equation} where $\hat{s}_{k+1,i} = \hat{s}_{k,i}$ if $i \neq I_{k+1}$ and $\hat{s}_{k+1,I_{k+1}} = \condexpsuffstat{Y_{I_{k+1}}}{\hat{\theta}_k}$. The index $I_{k+1}$ is typically chosen as $k+1$ while $k \leq n-1$ and runs through the data set, that is, $I_k \in \{1, \dots, n\}$, in a fixed or pseudo-random scanning order for subsequent iterations. When used in batch mode (that is, when $k > n$), this algorithm mostly differs from the traditional EM strategy in~\eqref{eq:EM-recursion} by the fact that the parameters are updated after each computation of the conditional expectation of the complete data sufficient statistic corresponding to one observation.
When used in online mode ($k \leq n$), the algorithm of \cite{neal:hinton:1999} coincides with the proposed online EM algorithm with a step-size of $\gamma_k = 1/k$ (see Section~\ref{sec:conv} for further discussion of this particular choice of step sizes). A specific instance of the proposed online EM algorithm has been derived by \cite{sato:ishii:2000} for maximum likelihood estimation in the so-called normalised Gaussian network; this algorithm was later extended by \cite{sato:2000} to a canonical exponential family ($\phi(\theta)= \theta$ in \eqref{eq:curved-ex}) and a sketch of the proof of convergence, based on stochastic approximation results, was given. The online EM algorithm defined in \eqref{eq:recursiveEM-curvedexponentialfamily} may be seen as a generalisation of this scheme. \subsection{An Example: Poisson Mixture} \label{sec:exp:pois} Before analysing the convergence of the above algorithm, we first consider a simple example of application borrowed from \cite{liu:almhana:choulakian:mcgorman:2006}: consider the case of a mixture of $m$ Poisson distributions \begin{equation} \label{eq:PoissonMixture:IncompleteLikelihood} g(y; \theta)= \sum_{j=1}^m \mixtureweight{j} \frac{\mixtureparam{j}^y}{y !} \mathrm{e}^{-\mixtureparam{j}} \;, \quad \text{for $y=0,1,2, \dots$} \;, \end{equation} where the unknown parameters $\theta= (\mixtureweight{1}, \dots, \mixtureweight{m}, \mixtureparam{1}, \dots, \mixtureparam{m})$ satisfy the constraints $\mixtureweight{j} > 0$, $\sum_{j=1}^m \mixtureweight{j} = 1$ and $\mixtureparam{j} > 0$. In the mixture problem, the incompleteness stems from not knowing which component of the mixture generated each observation. Let $W$ be a random variable taking values in $\{1, \dots, m\}$ with probabilities $\{ \mixtureweight{1}, \dots, \mixtureweight{m} \}$. The random variable $W$ is called the regime or state and is not observable.
The probability density defined in \eqref{eq:PoissonMixture:IncompleteLikelihood} corresponds to the assumption that $Y$ is distributed, given that $W=j$, according to a Poisson distribution with parameter $\mixtureparam{j}$. Note that in this case, as in all examples which involve the simpler missing data mechanism rather than the general latent data model introduced in Section~\ref{sec:intro}, the complete data $X$ simply consists of the couple $(Y,W)$ and hence conditional expectations of $X$ given $Y$ really boil down to expectations of $W$ given $Y$. For the Poisson mixture, the complete data log-likelihood is given by \begin{equation} \label{eq:PoissonMixture:CompleteLikelihood} \log f(y,w;\theta) = - \log(y!) + \sum_{j=1}^m \left[ \log (\mixtureweight{j}) - \mixtureparam{j} \right] \delta_{w,j} + \sum_{j=1}^m \log(\mixtureparam{j}) y \delta_{w,j} \;, \end{equation} where $\delta_{i,l}$ is the Kronecker delta symbol: $\delta_{i,l} = 1$ if $i=l$ and $\delta_{i,l}= 0$ otherwise. The complete data likelihood may be rewritten as in \eqref{eq:curved-ex} with $h(y,w)= 1/y!$, $S(y,w) = (S_1(y,w), \dots, S_m(y,w))$ and $\phi(\theta) = (\phi_1(\theta), \dots, \phi_m(\theta))$, where \begin{equation*} S_j(y,w) \ensuremath{\stackrel{\mathrm{def}}{=}} \begin{pmatrix} \delta_{w,j} \\ y \delta_{w,j} \end{pmatrix} \;, \qquad \text{and} \quad \phi_j(\theta) \ensuremath{\stackrel{\mathrm{def}}{=}} \begin{pmatrix} \log(\mixtureweight{j}) - \mixtureparam{j} \\ \log(\mixtureparam{j}) \end{pmatrix} \; .
\end{equation*} In this case, the conditional expectation of the complete data sufficient statistics is fully determined by the posterior probabilities of the mixture components defined by \begin{equation*} \bmixtureind{j}(y;\theta) \ensuremath{\stackrel{\mathrm{def}}{=}} \mathbb{P}_\theta[W=j|Y=y] = \frac{\mixtureweight{j} \mixtureparam{j}^y \mathrm{e}^{-\mixtureparam{j}}}{\sum_{l=1}^m \mixtureweight{l} \mixtureparam{l}^y \mathrm{e}^{-\mixtureparam{l}}} \; , \quad \text{for $j=1, \dots, m$} \; . \end{equation*} The $(n+1)$-th step of the online EM algorithm consists in computing, for $j=1, \dots, m$, \begin{align} \hat{s}_{j,n+1} & = \hat{s}_{j,n} + \gamma_{n+1} \left\{ \begin{pmatrix} \bmixtureind{j}(Y_{n+1};\hat{\theta}_n) \\ \bmixtureind{j}(Y_{n+1};\hat{\theta}_n) Y_{n+1} \end{pmatrix} - \hat{s}_{j,n} \right\} \; , \nonumber \\ \hmixtureweight{j,n+1} & = \hat{s}_{j,n+1}(1) \; , \qquad \hmixtureparam{j,n+1} = \frac{\hat{s}_{j,n+1}(2)}{\hat{s}_{j,n+1}(1)} \; . \label{eq:REM-mixture-Poisson} \end{align} Comparing with the generic update equations in (\ref{eq:recursiveEM-curvedexponentialfamily}), one recognises the stochastic approximation version of the E-step, in the first line of~\eqref{eq:REM-mixture-Poisson}, followed by the application of $\bar\theta$. To compare with Titterington's online algorithm in \eqref{eq:recursiveEM-Titterington}, one first needs to evaluate the complete Fisher information matrix $\FIMcomplete{}{\theta}$. To deal with the equality constraint $\sum_{j=1}^m \mixtureweight{j} = 1$, only the first $m-1$ weights are used as parameters and the remaining one is represented as $\mixtureweight{m} = 1 - \sum_{j=1}^{m-1} \mixtureweight{j}$ as in \cite{titterington:1984}.
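For concreteness, the recursion~\eqref{eq:REM-mixture-Poisson} can be sketched in a few lines of Python/NumPy. This is a minimal sketch: the step-size exponent $0.6$, the short burn-in before the first M-step, and the synthetic two-component data are our choices, not prescriptions of the text:

```python
import numpy as np

def posterior(y, w, lam):
    # bar-omega_j(y; theta): posterior probability of each mixture component.
    p = w * lam**y * np.exp(-lam)
    return p / p.sum()

def online_em_poisson(ys, w, lam, burnin=20):
    # Online EM for a Poisson mixture, following eq:REM-mixture-Poisson.
    # s[0, j] tracks omega_j and s[1, j] tracks omega_j * lambda_j.
    s = np.stack([w, w * lam])
    for n, y in enumerate(ys):
        gamma = (n + 1) ** -0.6                        # step size gamma_{n+1}
        post = posterior(y, w, lam)
        s = s + gamma * (np.stack([post, post * y]) - s)  # SA version of the E-step
        if n >= burnin:                                # delay the first M-steps
            w, lam = s[0], s[1] / s[0]                 # M-step: theta_bar(s)
    return w, lam

# Synthetic data: two components with intensities 1 and 10, equal weights.
rng = np.random.default_rng(1)
comp = rng.integers(0, 2, size=20000)
ys = rng.poisson(np.where(comp == 0, 1.0, 10.0))
w, lam = online_em_poisson(ys, np.array([0.5, 0.5]), np.array([2.0, 6.0]))
```

With well-separated components the estimates should settle near the generating values; replacing the M-step line by the update \eqref{eq:REM-mixture-Poisson-Titterington} yields Titterington's variant for comparison.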
The complete data Fisher information matrix defined in~\eqref{eq:completeobservationFIM} is then given by \[ \FIMcomplete{}{\theta} = \begin{pmatrix} \operatorname{diag}(\mixtureweight{1}^{-1}, \dots, \mixtureweight{m-1}^{-1}) + \mixtureweight{m}^{-1}\mathbf{1}_{m-1}\mathbf{1}_{m-1}^T & \mathbf{0}_{(m-1) \times m} \\ \mathbf{0}_{m \times (m-1)} & \operatorname{diag}(\mixtureweight{1}/\mixtureparam{1}, \dots, \mixtureweight{m}/\mixtureparam{m}) \end{pmatrix} \; , \] where the superscript $T$ denotes transposition, $\mathbf{1}$ and $\mathbf{0}$ respectively denote a vector of ones and a matrix of zeros, whose dimensions are specified as subscript. Upon inverting $\FIMcomplete{}{\theta}$, the following expression for the $(n+1)$-th step of Titterington's online algorithm is obtained: \begin{align} \hmixtureweight{j,n+1} &= \hmixtureweight{j,n} + \gamma_{n+1} \left( \bmixtureind{j}(Y_{n+1};\hat{\theta}_n) - \hmixtureweight{j,n} \right) \;, \nonumber \\ \hmixtureparam{j,n+1} & = \hmixtureparam{j,n} + \gamma_{n+1} \frac{\bmixtureind{j}(Y_{n+1}; \hat{\theta}_n)}{\hmixtureweight{j,n}} \left( Y_{n+1} - \hmixtureparam{j,n} \right) \; . \label{eq:REM-mixture-Poisson-Titterington} \end{align} To make the connection with the online EM update more explicit, note that due to the fact that, in this simple case, there is an identification between some components of the vector of sufficient statistics and the weight parameters (i.e., $\hmixtureweight{j,n} = \hat{s}_{j,n}(1)$), it is possible to rewrite~\eqref{eq:REM-mixture-Poisson} in terms of the latter only: \begin{align*} \nonumber \hmixtureweight{j,n+1} & = \hmixtureweight{j,n} + \gamma_{n+1} \left( \bmixtureind{j}(Y_{n+1};\hat{\theta}_n) - \hmixtureweight{j,n} \right) \; , \\ \nonumber \hmixtureparam{j,n+1} & = \frac{ \hmixtureparam{j,n}\hmixtureweight{j,n} + \gamma_{n+1} \left( \bmixtureind{j}(Y_{n+1};\hat{\theta}_n)Y_{n+1} - \hmixtureparam{j,n}\hmixtureweight{j,n}\right)}{\hmixtureweight{j,n+1}} \; . 
\end{align*} In the Poisson mixture example, the two algorithms differ only in the way the intensities of the Poisson components are updated. Whereas the online EM algorithm in~\eqref{eq:REM-mixture-Poisson} does ensure that all parameter constraints are satisfied, it may happen, in contrast, that~\eqref{eq:REM-mixture-Poisson-Titterington} produces negative values for the intensities. Near convergence however, the two algorithms behave very similarly in this simple case (see Proposition~\ref{prop:asymptoticequivalencerecursiveEMs} below). \subsection{Extensions} \label{sec:exts} As previously mentioned, \cite{neal:hinton:1999} advocate the use of online algorithms also in the case of batch training with large sample sizes. The online algorithm then operates by repeatedly scanning through the available sample. In our setting, this use of the proposed online EM algorithm may be analysed by letting $\pi$ denote the empirical measure associated with the fixed sample $Y_1, \dots, Y_n$. The results to follow thus also apply in this context, at least when the data scanning order is random. In semi-parametric regression models, each observation $Y$ comes with a vector of covariates $Z$ whose distribution is usually unspecified and treated as a nuisance parameter. To handle latent data versions of regression models (mixture of regressions, mixture of experts, etc.---see \citealp{gruen:leisch:2007,jordan:jacobs:1994}, as well as the example of Section~\ref{sec:regmix}) in our framework, one only needs to assume that the model consists of a parametric family $\{f(x|z;\theta), \theta\in\Theta\}$ of \emph{conditional} pdfs. In this setting however, it is no longer possible to compute expectations under the complete data distribution, and the model can never be well-specified, as the distribution of $Z$ is left unspecified. Thus Titterington's algorithm in (\ref{eq:recursiveEM-Titterington}) does not directly apply in this setting. 
In contrast, the proposed algorithm straightforwardly extends to this case by considering covariate-dependent expectations of the sufficient statistics of $f(x|z;\theta)$, of the form $\condexpsuffstat{y,z}{\theta} = \mathbb{E}_\theta[S(X)|Y=y,Z=z]$, instead of (\ref{eq:defcondexpsuffstat}). For notational simplicity, we state our results in the following section without assuming the presence of covariates but extension to the case where there are covariates is straightforward; the example of Section~\ref{sec:regmix} corresponds to a case where covariates are available. \section{Convergence Issues} \label{sec:conv} \subsection{Consistency} \label{sec:conv:consist} In this section, we establish the convergence of the proposed algorithm towards the set of stationary points of the Kullback-Leibler divergence between the actual observation density and the model likelihood. These results are the analogues of those given by \cite{wang:zhao:2006} for Titterington's online algorithm, with a somewhat broader scope since we do not assume that the model is well-specified. The proofs corresponding to this section are given in Appendix \ref{sec:proofs}. In addition to the conditions listed in Assumption~\ref{assum:expon}, we will require the following additional regularity assumptions. \begin{assumption} \label{assum:reg} \begin{description} \item[(a)] The parameter space $\Theta$ is a convex open subset of $\mathbb{R}^\nbt$ and $\psi$ and $\phi$ in~(\ref{eq:curved-ex}) are twice continuously differentiable on $\Theta$. \item[(b)] The function $s \mapsto \functheta{s}$, defined in \eqref{eq:definition-functheta}, is continuously differentiable on $\mathcal{S}$, \item[(c)] For some $p > 2$, and all compact subsets $\mathcal{K} \subset \mathcal{S}$, \[ \sup_{s\in\mathcal{K}} \mathbb{E}_\pi\left( \left| \condexpsuffstat{Y}{\functheta{s}} \right|^p \right) < \infty \; . 
\] \end{description} \end{assumption} To analyse the recursion \eqref{eq:recursiveEM-curvedexponentialfamily}, the first step consists in expressing it as a standard Robbins-Monro stochastic approximation procedure operating on the complete data sufficient statistics: \begin{equation} \label{eq:recursiveEM-curvedexponential-RM} \hat{s}_{n+1} = \hat{s}_n + \gamma_{n+1} \left(\mathrm{h}(\hat{s}_n) + \xi_{n+1}\right) \; , \end{equation} where $\mathrm{h}: \mathcal{S} \to \mathbb{R}^{\nbt}$ is the so-called \emph{mean field} given by \begin{equation} \label{eq:recursiveEM-curvedexponential-meanfield} \mathrm{h}(s) \ensuremath{\stackrel{\mathrm{def}}{=}} \mathbb{E}_\pi \left[ \condexpsuffstat{Y}{\functheta{s}} \right] - s \;, \end{equation} and $\{ \xi_n \}_{n \geq 1}$ is a sequence of random variables representing stochastic perturbations defined by \begin{align} \label{eq:recursiveEM-curvedexponentialfamily-noise} \xi_{n+1} &\ensuremath{\stackrel{\mathrm{def}}{=}} \condexpsuffstat{Y_{n+1}}{\functheta{\hat{s}_{n}}} - \CPE{\condexpsuffstat{Y_{n+1}}{\functheta{\hat{s}_{n}}}}{\mathcal{F}_n} \; , \end{align} where $\mathcal{F}_n$ is the $\sigma$-field generated by $ \left(\hat{s}_0, \{ Y_i \}_{i=1}^n \right)$. The aim of the Robbins-Monro procedure \eqref{eq:recursiveEM-curvedexponential-RM} is to solve the equation $\mathrm{h}(s)= 0$. As a preliminary step, we first characterise the set of roots of the mean field $\mathrm{h}$. The following proposition shows that, if $s^\star$ belongs to \begin{equation} \label{eq:definition-gamma} \Gamma \ensuremath{\stackrel{\mathrm{def}}{=}} \left\{s \in \mathcal{S}: \mathrm{h}(s)= 0 \right\} \;, \end{equation} then $\theta^\star = \functheta{s^\star}$ is a stationary point of the Kullback-Leibler divergence between $\pi$ and $g_\theta$, \begin{equation} \label{eq:KullbackLeiblerDivergence} \kullback{\pi}{g_\theta} \ensuremath{\stackrel{\mathrm{def}}{=}} \mathbb{E}_\pi \left[ \log \left( \frac{\pi(Y)}{g(Y;\theta)} \right) \right] \;. 
\end{equation} \begin{proposition} \label{prop:characterization-stationary-points} Under Assumptions~\ref{assum:expon}--\ref{assum:reg}, if $s^\star \in \mathcal{S}$ is a root of $\mathrm{h}$, i.e., $\mathrm{h}(s^\star)=0$, then $\theta^\star = \functheta{s^\star}$ is a stationary point of the function $\theta \mapsto \kullback{\pi}{g_\theta}$, i.e., $\nabla_\theta \left. \kullback{\pi}{g_\theta} \right|_{\theta= \theta^\star}= 0$. Conversely, if $\theta^\star$ is a stationary point of $\theta \mapsto \kullback{\pi}{g_\theta}$, then $s^\star = \mathbb{E}_\pi [\condexpsuffstat{Y}{\theta^\star}]$ is a root of $\mathrm{h}$. \end{proposition} We then show that the function $\mathrm{w}: \mathcal{S} \to [0,\infty)$ defined by \begin{equation} \label{eq:definition-lyapunov} \mathrm{w}(s) \ensuremath{\stackrel{\mathrm{def}}{=}} \kullback{\pi}{g_{\functheta{s}}} \;, \end{equation} is a \emph{Lyapunov function} for the mean field $\mathrm{h}$ and the set $\Gamma$, i.e.\ for any $s \in \mathcal{S}$, $\pscal{\nabla_s \mathrm{w}(s)}{\mathrm{h}(s)} \leq 0$ and $\pscal{\nabla_s \mathrm{w}(s)}{\mathrm{h}(s)} =0$ if and only if $\mathrm{h}(s)= 0$. The existence of a Lyapunov function is a standard argument to prove the global asymptotic stability of the solutions of the Robbins-Monro procedure. This property can be seen as an analogue of the monotonicity property of the EM algorithm: each unperturbed iteration $\bar{s}_{k+1} = \bar{s}_k + \gamma_{k+1} \mathrm{h}(\bar{s}_k)$ decreases the Kullback-Leibler divergence to the target distribution $\pi$, provided that $\gamma_{k+1}$ is small enough. \begin{proposition} \label{prop:lyapunov-function-h} Under Assumptions~\ref{assum:expon}--\ref{assum:reg}, \begin{itemize} \item $\mathrm{w}(s)$ is continuously differentiable on $\mathcal{S}$, \item for any compact subset $\mathcal{K} \subset \mathcal{S} \setminus \Gamma$, $$ \sup_{s \in \mathcal{K}} \pscal{\nabla_s \mathrm{w}(s)}{\mathrm{h}(s)} < 0 \;. 
$$ \end{itemize} \end{proposition} Using this result, we may now prove the convergence of the sequence $\{\hat{s}_k\}$. Denote by $\mathcal{L} = \left\{ \theta \in \Theta : \nabla_\theta \kullback{\pi}{g_\theta}= 0 \right\}$ the set of stationary points of the Kullback-Leibler divergence, and, for $x \in \mathbb{R}^m$ and $A \subset \mathbb{R}^m$, let $d(x,A) = \inf \{ |x-y| : y \in A \}$. \begin{theorem} \label{theo:wp1convergencerecursiveEM} Assume~\ref{assum:expon}--\ref{assum:reg} and that, in addition, \begin{enumerate} \item $0< \gamma_i < 1$, $\sum_{i=1}^\infty \gamma_i = \infty$ and $\sum_{i=1}^\infty \gamma_i^2 < \infty$, \item $\hat{s}_0 \in \mathcal{S}$ and with probability one, $\limsup |\hat{s}_n| < \infty$ and $\liminf d(\hat{s}_n,\mathcal{S}^c) > 0$. \item The set $\mathrm{w}(\Gamma)$ is nowhere dense. \end{enumerate} Then, $\lim_{n \to \infty} d(\hat{s}_n,\Gamma)= 0$ and $\lim_{n \to \infty} d( \hat{\theta}_n, \mathcal{L}) = 0$, with probability one. \end{theorem} The first condition of Theorem~\ref{theo:wp1convergencerecursiveEM} is standard for decreasing step-size stochastic approximation procedures \citep{kushner:yin:1997}. It is satisfied for instance by setting $\gamma_i = \gamma_0 i^{-\alpha}$, with $\alpha \in (1/2,1]$. The additional requirements that $\gamma_i$ be less than 1 and $\hat{s}_0$ be chosen in $\mathcal{S}$ are just meant to ensure that the whole sequence $\{\hat{s}_k\}$ stays in $\mathcal{S}$ (see Assumption~\ref{assum:expon}(c)). The rest of the second assumption of Theorem~\ref{theo:wp1convergencerecursiveEM} corresponds to a stability assumption, which is not trivial. In general settings, the stability of the algorithm can be enforced by truncating the algorithm updates, either on a fixed set (see, e.g., \citealp[chapter 2]{kushner:yin:2003}) or on an expanding sequence of sets (see, e.g., \citealp[chapter 2]{chen:book:2002}, or \citealp{andrieu:moulines:priouret:2005}). 
We do not explicitly carry out these constructions here to keep the exposition concise. \subsection{Rate of Convergence} \label{sec:conv:rate} In this section, we show that when approaching convergence, the online EM algorithm is comparable to the online gradient algorithm in~(\ref{eq:recursiveEM-Lange}). The existence of such links is hardly surprising, in view of the discussions in Section 4 of \cite{titterington:1984} and Section 3 of \cite{lange:1995}, and may be seen as a counterpart, for stochastic approximation, of the asymptotic equivalence of the gradient EM algorithm of \cite{lange:1995} and the EM algorithm. To highlight these relations, we first express the online EM algorithm as a stochastic approximation procedure on $\theta$. \begin{proposition} \label{prop:asymptoticequivalencerecursiveEMs} Under the assumptions of Theorem \ref{theo:wp1convergencerecursiveEM}, the online EM sequence $\{\hat{\theta}_n \}_{n \geq 0}$ given by \eqref{eq:recursiveEM-curvedexponentialfamily} follows the recursion \begin{equation} \label{eq:recursiveEM-in-theta} \hat{\theta}_{n+1} = \hat{\theta}_n + \gamma_{n+1} \, \FIMcomplete[-1]{\pi}{\hat{\theta}_n} {\nabla_{\theta}\log g}(Y_{n+1};\hat{\theta}_n) + \gamma_{n+1} \rho_{n+1} \end{equation} where $\lim_{n \to \infty} \rho_n = 0$ a.s.\ and $\FIMcomplete{\pi}{\theta}$ is defined in \eqref{eq:definitionFIMcomplete}. \end{proposition} Hence, the online EM algorithm is equivalent, when approaching convergence, to the online gradient algorithm defined in (\ref{eq:recursiveEM-Lange}) which coincides with Titterington's online algorithm with $\FIMcomplete{\pi}{\hat{\theta}_n}$ substituted for $\FIMcomplete{}{\hat{\theta}_n}$. It is remarkable that the online EM algorithm can achieve a convergence performance similar to that of the online gradient algorithm without explicit matrix approximation or inversion. 
Note that, as previously discussed in Section~\ref{sec:intro}, in the particular case of canonical exponential families, $\FIMcomplete{\pi}{\theta}$ and $\FIMcomplete{}{\theta}$ coincide and the proposed online EM algorithm is thus also equivalent (near convergence) to Titterington's online algorithm. Although the recursion \eqref{eq:recursiveEM-in-theta} will not lead to asymptotic efficiency, we can, under appropriate additional conditions, guarantee $\gamma_n^{-1/2}$-consistency and asymptotic normality. We use the weak convergence result presented in \citet[Theorem 1]{pelletier:1998}. \begin{theorem} \label{theo:weak-convergence-REM} Under the assumptions of Theorem \ref{theo:wp1convergencerecursiveEM}, let $\theta^\star$ be a (possibly local) minimum of the Kullback-Leibler divergence $\theta \mapsto \kullback{\pi}{g_\theta}$. Denote by \begin{align*} &H(\theta^\star) \ensuremath{\stackrel{\mathrm{def}}{=}} \FIMcomplete[-1]{\pi}{\theta^\star} \left[ - \nabla_\theta^2 \left. \kullback{\pi}{g_\theta} \right|_{\theta= \theta^\star} \right] \;, \\ &\Gamma(\theta^\star) \ensuremath{\stackrel{\mathrm{def}}{=}} \FIMcomplete[-1]{\pi}{\theta^\star} \, \mathbb{E}_\pi \left( {\nabla_{\theta}\log g}(Y;{\theta^\star}) \left\{{\nabla_{\theta}\log g}(Y;{\theta^\star})\right\}^T\right) \, \FIMcomplete[-1]{\pi}{\theta^\star} \;. \end{align*} Then, \begin{enumerate} \item \label{item:weak-convergence-REM:stable} $H(\theta^\star)$ is a stable matrix whose eigenvalues have their real part upper bounded by $-\lambda(\theta^\star)$, where $\lambda(\theta^\star) > 0$. 
\item \label{item:weak-convergence-REM:WK} Let $\gamma_n = \gamma_0 n^{-\alpha}$, where $\gamma_0$ may be chosen freely in $(0,1)$ when $\alpha \in (1/2,1)$ but must satisfy $\gamma_0 > \lambda^{-1}(\theta^\star)$ when $\alpha = 1$; then, on the event $\Omega(\theta^\star) = \{ \lim_{n \to \infty} \hat{\theta}_n = \theta^\star\}$, the sequence $\gamma_n^{-1/2} \left( \hat{\theta}_n - \theta^\star \right)$ converges in distribution to a zero mean Gaussian distribution with covariance $\Sigma(\theta^\star)$, where $\Sigma(\theta^\star)$ is the solution of the Lyapunov equation \begin{equation} \label{eq:LyapunovEquation} \left( H(\theta^\star) + \zeta \mathrm{Id} \right) \Sigma(\theta^\star) + \Sigma(\theta^\star) \left( H^T(\theta^\star) + \zeta \mathrm{Id} \right) = - \Gamma(\theta^\star) \;, \end{equation} where $\zeta=0$ if $\alpha \in (1/2,1)$ and $\zeta= \gamma_0^{-1}$ if $\alpha = 1$, and, $\mathrm{Id}$ denotes the identity matrix. \end{enumerate} \end{theorem} Solving \eqref{eq:LyapunovEquation} is easy for a well-specified model, that is, when $\pi=g_{\theta^\star}$, as the FIM $\FIMincomplete{\theta^\star}$ associated with the (observed) data model then satisfies \begin{multline*} \FIMincomplete{\theta^\star} = - \mathbb{E}_{\theta^\star} \left[ \nabla_\theta^2 \log g(Y;{\theta^\star}) \right] \\ = - \nabla_\theta^2 \left. \kullback{g_{\theta^\star}}{g_\theta} \right|_{\theta= \theta^\star} = \mathbb{E}_\pi \left( {\nabla_{\theta}\log g}(Y;{\theta^\star}) \left\{{\nabla_{\theta}\log g}(Y;{\theta^\star})\right\}^T \right) \; . \end{multline*} When $\zeta= 0$, the solution of the Lyapunov equation is given by $\Sigma(\theta^\star) = \FIMcomplete[-1]{\pi}{\theta^\star}/2$: the asymptotic covariance matrix is equal to one-half the inverse of the complete data FIM. When $\zeta \ne 0$, the Lyapunov equation cannot be solved explicitly, except when the parameter is scalar (the result then coincides with \citealp[Theorem 1]{titterington:1984}). 
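In the scalar-parameter case mentioned above, the Lyapunov equation \eqref{eq:LyapunovEquation} reduces to $2 (H + \zeta) \Sigma = -\Gamma$ and can be solved directly. A minimal sketch (the helper name is ours):

```python
def scalar_lyapunov_solution(h, big_gamma, zeta):
    # scalar case of (H + zeta Id) Sigma + Sigma (H^T + zeta Id) = -Gamma,
    # i.e. 2 * (h + zeta) * sigma = -big_gamma
    if h + zeta >= 0:
        raise ValueError("H + zeta Id must be stable (negative) for a valid solution")
    return -big_gamma / (2.0 * (h + zeta))
```

For instance, with $H = -2$, $\Gamma = 1$ and $\zeta = 0$ one obtains $\Sigma = 1/4$, which indeed satisfies $2 H \Sigma = -\Gamma$.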
Note that using $\gamma_n = \gamma_0 n^{-\alpha}$ with $\alpha = 1$ provides the optimal convergence rate of $1/\sqrt{n}$ but only at the price of a constraint on the scale $\gamma_0$, which is usually impossible to check in practice. On the other hand, using $\alpha \in (1/2,1)$ results in a slower convergence rate but without constraint on the scale $\gamma_0$ of the step-size (except for the fact that it has to be smaller than 1). To circumvent this difficulty, we recommend using the so-called Polyak-Ruppert averaging technique \citep{polyak:1990,ruppert:1988} as a post-processing step. Following \cite{polyak:1990} ---see also \cite{polyak:juditsky:1992,mokakdem:pelletier:2006}---, if $\gamma_n = \gamma_0 n^{-\alpha}$, with $\alpha \in (1/2,1)$, then the running average \begin{equation} \label{eq:SGA:averaging} \tilde{\theta}_n \ensuremath{\stackrel{\mathrm{def}}{=}} (n - n_0 + 1)^{-1} \sum_{j=n_0}^n \hat{\theta}_j \;, \qquad n \geq n_0 \end{equation} converges at rate $1/\sqrt{n}$, for all values of $\gamma_0$. Furthermore, on the event $\Omega(\theta^\star)$ defined in Theorem~\ref{theo:weak-convergence-REM} above, $\sqrt{n}( \tilde{\theta}_n - \theta^\star)$ is asymptotically normal, with asymptotic covariance matrix \begin{multline} \label{eq:CovarianceMatrixAveraging} \overline{\Sigma}(\theta^\star)= H^{-1}(\theta^\star) \Gamma(\theta^\star) H^{-1}(\theta^\star) = \\ \left[ - \nabla_\theta^2 \left. \kullback{\pi}{g_\theta} \right|_{\theta= \theta^\star} \right]^{-1} \mathbb{E}_\pi \left( {\nabla_{\theta}\log g}(Y;\theta^\star) \left\{{\nabla_{\theta}\log g}(Y;\theta^\star)\right\}^T \right) \left[ - \nabla_\theta^2 \left. \kullback{\pi}{g_\theta} \right|_{\theta= \theta^\star} \right]^{-1} \;, \end{multline} which is known to be optimal \citep{kushner:yin:1997}. 
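The averaging post-processing amounts to a running mean of the iterates over the window $j = n_0, \dots, n$ (asymptotically equivalent, for fixed $n_0$, to other normalisations). A minimal sketch for scalar iterates (the helper name is ours):

```python
def polyak_ruppert_averages(iterates, n0):
    # running averages tilde_theta_n of the iterates theta_hat_j for j >= n0;
    # iterates are indexed from 1 to match the step counter n
    out = []
    total = 0.0
    for j, theta in enumerate(iterates, start=1):
        if j < n0:
            continue
        total += theta
        out.append(total / (j - n0 + 1))
    return out
```

The post-processing has negligible cost compared to the E-step computations, which is why the averaged variant has the same overall complexity as the plain online algorithm.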
If $\pi = g_{\theta^\star}$, the previous result shows that the averaged sequence $\tilde{\theta}_n$ is an asymptotically efficient sequence of estimates of $\theta^\star$, i.e.\ the asymptotic covariance of $\sqrt{n} (\tilde{\theta}_n - \theta^\star)$ is equal to the inverse of the (observed data) FIM $\FIMincomplete{\theta^\star}$. \section{Application to Mixtures of Gaussian Regressions} \label{sec:regmix} To illustrate the performance of the proposed method, we consider a regression model which, as discussed in Section~\ref{sec:exts}, corresponds to a case where the complete data FIM is not available. In contrast, we illustrate below the fact that the proposed algorithm, without explicitly requesting the determination of a weighting matrix, does provide asymptotically efficient parameter estimates when Polyak-Ruppert averaging is used. The model we consider is a finite mixture of Gaussian linear regressions, where the complete data consists of the response variable $R$, here assumed to be scalar for simplicity, the $\nbz$-dimensional vector $Z$ that contains the explanatory variables, and $W$, which corresponds, as in the example of Section~\ref{sec:exp:pois}, to a latent mixture indicator taking its value in the finite set $\{1,\dots,m\}$. We assume that given $W=j$ and $Z$, $R$ is distributed as a $\mathcal{N}(\beta_j^T Z, \sigma_j^2)$ Gaussian variable, while $W$ is independent of $Z$ and such that $\mathbb{P}_\theta(W = j) = \omega_j$. Thus the parameters of the model are the mixture weights $\omega_j$ and the regression vectors $\beta_j$ and variances $\sigma_j^2$, for $j=1,\dots,m$. As is usually the case in conditional regression models, we specify only the part of the complete data likelihood that depends on the parameters, without explicitly modelling the marginal distribution of the vector of regressors $Z$. 
In terms of our general notations, the complete data $X$ is the triple $(R,Z,W)$, the observed data is the couple $(R,Z)$ and the model is not well-specified, in the sense that the distribution of the observation $(R,Z)$ is not fully determined by the model. We refer to \cite{hurn:justel:robert:2003} or \cite{gruen:leisch:2007} and references therein for more information on mixture of regression models and their practical use. In the mixture of Gaussian regressions model, the part of the complete data log-likelihood that depends on the parameters may be written as \begin{equation} \label{eq:RegressionMixture:CompleteLikelihood} \log f(r,w,z;\theta) = \sum_{j=1}^m \left\{ \log (\mixtureweight{j}) - \frac12 \left[\log\sigma_j^2 + \frac{\left(r-\beta_j^T z\right)^2}{\sigma_j^2}\right] \right\} \delta_{w,j} \;, \end{equation} where $\delta$ denotes, as before, the Kronecker delta. To put~\eqref{eq:RegressionMixture:CompleteLikelihood} in the form given in~(\ref{eq:curved-ex}), one needs to define the statistics $S = (S_{1,j},S_{2,j},S_{3,j},S_{4,j})_{1\leq j\leq m}$ where \begin{align} \label{eq:sufficient-statistics:regressionmixture} & S_{1,j}(r,w,z) = \delta_{w,j} & \quad & \text{(scalar)} \; , \nonumber\\ & S_{2,j}(r,w,z) = \delta_{w,j} r z & & \text{($\nbz \times 1$)} \; , \nonumber \\ & S_{3,j}(r,w,z) = \delta_{w,j} zz^T & & \text{($\nbz \times \nbz$)} \; , \nonumber \\ & S_{4,j}(r,w,z) = \delta_{w,j} r^2 & & \text{(scalar)} \; . 
\end{align} As in the simple Poisson mixture example of Section~\ref{sec:exp:pois}, the E-step statistics only depend on the conditional expectation of the indicator variable $W$ through \begin{align} & \bar{s}_{1,j}(r,z;\theta) = \bar{w}_{j}(r,z;\theta) \; , \nonumber\\ & \bar{s}_{2,j}(r,z;\theta) = \bar{w}_{j}(r,z;\theta) r z \; , \nonumber \\ & \bar{s}_{3,j}(r,z;\theta) = \bar{w}_{j}(r,z;\theta) zz^T \; , \nonumber \\ & \bar{s}_{4,j}(r,z;\theta) = \bar{w}_{j}(r,z;\theta) r^2 \; , \label{eq:E-step:regressionmixture} \end{align} where \begin{equation*} \bar{w}_{j}(r,z;\theta) \ensuremath{\stackrel{\mathrm{def}}{=}} \mathbb{P}_\theta[W=j|R=r,Z=z] = \frac{\frac{\mixtureweight{j}}{\sigma_j} \exp\left[-\frac12 \frac{(r-\beta_j^T z)^2}{\sigma_j^2}\right]}{\sum_{l=1}^m \frac{\mixtureweight{l}}{\sigma_l} \exp\left[-\frac12 \frac{(r-\beta_l^T z)^2}{\sigma_l^2}\right]} \; . \end{equation*} Finally, it is easily checked that the M-step is equivalent to an application of the function $\bar\theta : s \mapsto \left(\bar\omega_j(s),\bar\beta_j(s),\bar\sigma_j(s)\right)_{1\leq j\leq m}$ where \begin{align} & \bar\omega_j(s) = s_{1,j} \; , \nonumber \\ & \bar{\beta}_j(s) = s_{3,j}^{-1} s_{2,j} \; , \nonumber \\ & \bar{\sigma}_j^2(s) = \left(s_{4,j} - \bar{\beta}_j^T(s) s_{2,j} \right) / s_{1,j} \; . \label{eq:M-step:regressionmixture} \end{align} In this example, the role played by the set $\mathcal{S}$ in Assumption~\ref{assum:expon}(c) is important: in order to apply~\eqref{eq:M-step:regressionmixture}, it is required that the scalars $s_{1,j}$ belong to the open set $(0,1)$ and that the $(\nbz+1)$-dimensional matrices block-defined by \[ M_j = \begin{pmatrix} s_{3,j} & s_{2,j} \\ s_{2,j}^T & s_{4,j} \\ \end{pmatrix} \; , \] be positive definite, since $\bar{\sigma}_j^2(s)$ is, up to normalisation by $s_{1,j}$, the Schur complement of $s_{3,j}$ in $M_j$. These constraints, for $j=1,\dots,m$, define the set $\mathcal{S}$, which is indeed open and convex. 
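To make the E- and M-steps concrete, the following Python sketch implements one online EM iteration in the special case of a scalar regressor ($\nbz = 1$), so that all statistics are scalar and $s_{3,j}$ tracks $\bar{w}_j z^2$, as required by the least-squares M-step $\bar\beta_j = s_{3,j}^{-1} s_{2,j}$. The function and variable names are ours:

```python
import math

def posterior_w(r, z, omega, beta, sigma2):
    # posterior probabilities w_bar_j(r, z; theta) for a scalar regressor z
    num = [w / math.sqrt(s2) * math.exp(-0.5 * (r - b * z) ** 2 / s2)
           for w, b, s2 in zip(omega, beta, sigma2)]
    tot = sum(num)
    return [p / tot for p in num]

def online_em_step(r, z, s, omega, beta, sigma2, gamma):
    # stochastic E-step on (s1, s2, s3, s4), followed by the explicit M-step
    wbar = posterior_w(r, z, omega, beta, sigma2)
    for j, w in enumerate(wbar):
        s1, s2, s3, s4 = s[j]
        s[j] = (s1 + gamma * (w - s1),
                s2 + gamma * (w * r * z - s2),
                s3 + gamma * (w * z * z - s3),
                s4 + gamma * (w * r * r - s4))
    omega = [sj[0] for sj in s]
    beta = [sj[1] / sj[2] for sj in s]
    sigma2 = [(sj[3] - b * sj[1]) / sj[0] for sj, b in zip(s, beta)]
    return s, omega, beta, sigma2
```

Initialising each pair $(s_{3,j}, s_{2,j}; s_{4,j})$ so that the corresponding $M_j$ is positive definite keeps the variance estimates strictly positive along the run, since each update forms a convex combination of $M_j$ with a positive semi-definite rank-one matrix.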
The function $\bar{s}$ defined in~\eqref{eq:E-step:regressionmixture} however \emph{never} produces values of $s$ which are in $\mathcal{S}$. In particular, $\bar{s}_{3,j}(r,z;\theta)$ is a rank one matrix which is not invertible (unless $\nbz=1$). Hence the importance of using an initialisation $s_0$ which is chosen in $\mathcal{S}$. For the simulations below, we took care of this issue by inhibiting the parameter re-estimation step in~\eqref{eq:M-step:regressionmixture} for the first twenty observations of each run. In other words, the first twenty observations are used only to build up a value of $\hat{s}_{20}$, using the first line of~\eqref{eq:recursiveEM-curvedexponentialfamily}, which is in $\mathcal{S}$ with high probability. For illustration purposes, we consider a variation of a simple simulation example used in the \verb+flexmix+ \verb+R+ package \citep{leisch:2004}, where $m = 2$, $\omega_1 = \omega_2 = 0.5$, and \begin{equation*} R = \begin{cases} 5 U + V & \text{(when $W = 1$)} \\ 15 + 10 \, U - U^2 + V & \text{(when $W = 2$)} \\ \end{cases} \; , \end{equation*} where $U \sim \operatorname{Unif}(0,10)$ and $V \sim \mathcal{N}(0,9^2)$. In order to balance the asymptotic variances of the regression parameters (see below) we used $Z^T = (1, U, U^2/10)$ as the vector of regressors, hence the actual value of the regression parameter is $\beta_1^T = (0, 5, 0)$ for the first component and $\beta_2^T = (15, 10, -10)$ for the second. The corresponding data is shown in Figure~\ref{fig:batch_data} where the points corresponding to both classes are plotted differently for illustration purposes, despite the fact that only unsupervised estimation is considered here. The labelling is indeed rather ambiguous in this case as the posterior probability of belonging to one of the two classes is between 0.25 and 0.75 for about 40\% of the points. 
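For reference, data from this simulation model can be generated with a short Python sketch (the helper name is ours; the label $W$ is retained only for plotting, since estimation is unsupervised):

```python
import random

def draw_observation(rng):
    # one draw (R, Z, W) from the two-component mixture of regressions
    u = rng.uniform(0.0, 10.0)            # U ~ Unif(0, 10)
    v = rng.gauss(0.0, 9.0)               # V ~ N(0, 9^2)
    w = 1 if rng.random() < 0.5 else 2    # equal mixture weights
    r = 5.0 * u + v if w == 1 else 15.0 + 10.0 * u - u * u + v
    z = (1.0, u, u * u / 10.0)            # regressors chosen to balance variances
    return r, z, w
```

With this parametrisation of $Z$, the regression parameters are indeed $\beta_1^T = (0, 5, 0)$ and $\beta_2^T = (15, 10, -10)$, since $-U^2 = -10 \cdot (U^2/10)$.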
\begin{figure}[hbtp] \centering \includegraphics[width=10cm]{figures/batch_data500} \caption{500 points drawn from the model: circles, points drawn from the first class, and, crosses, points drawn from the second class (the algorithms discussed here ignore the class labels).} \label{fig:batch_data} \end{figure} Clearly, the mixture of regressions model is such that the associated complete data likelihood has the form given in Assumption~\ref{assum:expon}(a), where the marginal density of the explanatory variables in $Z$ appears in the term $h(x)$ since it does not depend on the parameters. Hence the previous theory applies straightforwardly and the online EM algorithm may be used to maximise the conditional likelihood function of the responses $R$ given the regressors $Z$. However, the explicit evaluation of the complete data FIM $\FIMcomplete{}{\theta}$ defined in~\eqref{eq:completeobservationFIM} is not an option here because the model does not specify the marginal distribution of $Z$. Titterington's online algorithm may thus not be used directly. Applying the recursion in (\ref{eq:recursiveEM-Titterington}) without a weighting matrix is not recommended here as the regression parameters are highly correlated due to the non-orthogonality of the regressors. \begin{figure}[p] \centering \includegraphics[width=10cm]{figures/batch_beta2_n100} \caption{Box-and-whisker plots of the three components of $\beta_2$ (from top to bottom) estimated from 500 independent runs of length $n=100$ for, EM5: five iterations of the batch EM algorithm, OL1: online EM algorithm with $\gamma_i = 1/i$, OL06: online EM algorithm with $\gamma_i = 1/i^{0.6}$, OL06a: online EM algorithm with $\gamma_i = 1/i^{0.6}$ and averaging started from the 50th iteration. 
The horizontal dashed line corresponds to the actual parameter value and the interval in bold at the right of each plot to the interquartile range deduced from the asymptotic normal approximation.} \label{fig:batch_n100} \end{figure} \begin{figure}[p] \centering \includegraphics[width=10cm]{figures/batch_beta2_n10000} \caption{Same plots as in Figure~\ref{fig:batch_n100} for signals of length $n = 10,000$ (OL06a uses averaging started from the $5,000$th iteration).} \label{fig:batch_n10000} \end{figure} In order to determine a suitable weighting matrix, one can use Fisher's relation~\eqref{eq:Fisher}, which gives, for the regression parameters, \begin{multline*} \nabla_{\beta_j} \log g(r|z;\theta) = \mathbb{E}_\theta \left[\left. \nabla_{\beta_j} \log f(R,W|Z;\theta) \right| R=r, Z=z; \theta\right] \\ = \mathbb{E}_\theta \left[\left. \delta_{W,j} \frac{(R-\beta_j^T Z) Z}{\sigma_j^2} \right| R=r, Z=z; \theta\right] = \bar{w}_{j}(r,z;\theta) \frac{(r-\beta_j^T z) z}{\sigma_j^2} \; . \label{eq:regmix:Fisher} \end{multline*} Hence, if we assume that the model is well-specified, the (observed) FIM $\FIMincomplete{\theta}$ may be approximated, near convergence, by computing empirical averages of the form \[ 1/n \sum_{i=1}^n \nabla_{\beta_j} \log g(R_i|Z_i;\theta) \left\{\nabla_{\beta_j} \log g(R_i|Z_i;\theta) \right\}^T \; . \] As the online EM algorithm does not require such computations, this estimate has been used only to determine the FIM at the actual parameter value for comparison purposes. It is easily checked that, due to the linearity of the model and the fact that both components have equal weights and variances, the covariance matrices for $\beta_1$ and $\beta_2$ are the same. 
The numerical approximation determined from a million simulated observations yields asymptotic standard deviations of $(47.8, 22.1, 21.1)$ for the coordinates of $\beta_j$, with an associated correlation matrix of \[ \begin{pmatrix} 1 & -0.87 & 0.75 \\ -0.87 & 1 & -0.97 \\ 0.75 & -0.97 & 1 \end{pmatrix} \; . \] As noted above, the coordinates of the regression vector are very correlated, which would make the unweighted parameter-space stochastic approximation algorithm (i.e., \eqref{eq:recursiveEM-Titterington} with an identity matrix instead of $\FIMcomplete[-1]{}{\hat{\theta}_n}$) very inefficient. \begin{figure}[hbtp] \centering \includegraphics[width=10cm]{figures/batch_traj_5000a1000} \caption{Example of parameter trajectories for the three components of $\beta_2$ (from top to bottom) for a signal of length $n = 5,000$: OL1, dashed line; OL06, dotted line; OL06a, solid line (with averaging started after $1,000$ iterations).} \label{fig:batch_traj} \end{figure} For run-lengths of $n=100$ and $n=10,000$ observations, we illustrate the performance of the following four algorithmic options: \begin{description} \item[EM5] Five iterations of the batch EM algorithm, using the whole data. \item[OL1] The online EM algorithm with step size $\gamma_n = 1/n$. \item[OL06] The online EM algorithm with step size $\gamma_n = n^{-0.6}$. \item[OL06a] The online EM algorithm with step size $\gamma_n = n^{-0.6}$ and averaging started from $n_0 = n/2$ according to (\ref{eq:SGA:averaging}). \end{description} Note that whereas OL1 and OL06a have the same computational complexity (as the averaging post-processing has a negligible cost), EM5 is significantly more costly, requiring five times as many E-step computations; it is also non-recursive. All algorithms are started from the same point and run for 500 independent simulated replicas of the data. 
The results (for $\beta_2$) are summarised as box-and-whisker plots in Figure~\ref{fig:batch_n100}, for $n=100$, and Figure~\ref{fig:batch_n10000} for $n=10,000$. Comparing both figures, one observes that OL06a is the only approach which appears to be consistent, with a variance compatible with the asymptotic interquartile range shown on the right of each plot. EM5 (five iterations of batch EM) is clearly the method which has the least variability, but Figure~\ref{fig:batch_n10000} suggests that it is not $1/\sqrt{n}$-consistent, which was indeed confirmed using longer runs not shown here. This observation supports the claim of \cite{neal:hinton:1999} that, for large sample sizes, online EM approaches are more efficient, from a computational point of view, than the batch EM algorithm, which requires several iterations to converge properly. The online EM with step size $\gamma_n = 1/n$ (OL1) presents a bias which becomes very significant as $n$ increases. According to Theorem~\ref{theo:weak-convergence-REM}, this problem could be avoided (asymptotically) by choosing a sufficiently large value of $\gamma_0$. For fixed $n$ however, lowering $\gamma_0$ can only reduce the perceived speed of convergence, which is already very slow, as illustrated by Figure~\ref{fig:batch_traj}. In contrast, the online EM algorithm with Polyak-Ruppert averaging (OL06a) appears to be very efficient: averaging significantly reduces the variability of the OL06 estimate, bringing it to a level which is consistent with the asymptotic interquartile range, while the systematic bias vanishes as $n$ increases, as expected. \section{Conclusion} Compared to other alternatives, the main advantages of the proposed approach to online parameter estimation in latent data models are its analogy with the standard batch EM algorithm, which makes the online algorithm easy to implement, and its provably optimal convergence behaviour. 
In addition, the combination of a slowly decreasing step size ($\gamma_n = n^{-0.5+\epsilon}$ being a typical choice) with Polyak-Ruppert averaging appears to be very robust. A limitation is the fact that the function $\functheta{s}$ has to be explicit, which, for instance, would not be the case for mixtures of regression models with generalised link functions. Another extension of interest concerns non-independent models, in particular hidden Markov models and Markov random fields.
\section{Introduction} Consider the space $\mathbb{R}^d$ with Euclidean norm $|\cdot |$, where $d\ge 2$. Consider $\mathbb{Z}^d$ as a subset of this space, and say that two points $x$ and $y$ in $\mathbb{Z}^d$ are nearest neighbors if $|x-y|=1$. Let $E(\mathbb{Z}^d)$ be the set of nearest neighbor bonds in $\mathbb{Z}^d$. Let $t= (t_e)_{e\in E(\mathbb{Z}^d)}$ be a collection of i.i.d.\ non-negative random variables. In first-passage percolation, the variable $t_e$ is usually called the `passage time' through the edge $e$, alternatively called the `edge-weight' of $e$. We will sometimes refer to the collection $t$ of edge-weights as the `environment'. The total passage time, or total weight, of a path $P$ in the environment $t$ is simply the sum of the weights of the edges in $P$, and will be denoted by $t(P)$ in this article. The first-passage time $T(x,y)$ from a point $x$ to a point $y$ is the minimum total passage time among all lattice paths from $x$ to $y$. For all our purposes, it will suffice to consider self-avoiding paths; henceforth, `lattice path' will refer only to self-avoiding paths. Note that if the edge-weights are continuous random variables, then with probability one there is a unique `geodesic' between any two points $x$ and $y$. This is denoted by $G(x,y)$ in this paper. Let $D(x,y)$ be the maximum deviation (in Euclidean distance) of this path from the straight line segment joining $x$ and $y$ (see Figure \ref{fig0}).
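These definitions are easy to experiment with numerically. The following sketch computes $T(0,v)$ on a finite box of $\mathbb{Z}^2$ by Dijkstra's algorithm; the Exp(1) edge-weight distribution and the box size are illustrative choices not taken from the text, and restricting paths to a box can only overestimate the true lattice first-passage time.

```python
import heapq
import random

def first_passage_time(n, source, target, seed=0):
    """T(source, target) restricted to the box {0,...,n}^2, with i.i.d.
    Exp(1) edge-weights sampled lazily and cached per bond, computed by
    Dijkstra's algorithm on the nearest-neighbor grid graph."""
    rng = random.Random(seed)
    weights = {}

    def t(u, v):
        # edge-weight t_e for the bond e = {u, v}
        e = (min(u, v), max(u, v))
        if e not in weights:
            weights[e] = rng.expovariate(1.0)
        return weights[e]

    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            return d
        if d > dist[u]:
            continue  # stale heap entry
        x, y = u
        for v in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= v[0] <= n and 0 <= v[1] <= n:
                nd = d + t(u, v)
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
    return float("inf")

T = first_passage_time(20, (0, 0), (20, 0))
```

Tracking predecessors in the same loop would recover the geodesic $G(x,y)$ and hence the deviation $D(x,y)$ as well.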
\begin{figure}[h] \begin{pdfpic} \begin{pspicture}(-1,-1)(10,2.5) \psset{xunit=1cm,yunit=1cm} \rput(0,-.2){\small $x$} \rput(8.9,-.2){\small $y$} \psline{*-*}(0,0)(9,0) \pscurve{-}(0,0)(2,2)(4.5,-1.1)(7,.5)(9,0) \psline[linestyle=dashed]{->}(2,1.2)(2,2) \rput(2,1){\small $D(x,y)$} \psline[linestyle=dashed]{->}(2, .8)(2,0) \rput(4.5,-.7){\small $G(x,y)$} \end{pspicture} \end{pdfpic} \caption{The geodesic $G(x,y)$ and the deviation $D(x,y)$.} \label{fig0} \end{figure} Although invented by mathematicians \cite{hw65}, first-passage percolation and related models have attracted considerable attention in the theoretical physics literature (see \cite{ks91} for a survey). Among other things, physicists are particularly interested in two `scaling exponents', sometimes denoted by $\chi$ and $\xi$ in the mathematical physics literature. The {\it fluctuation exponent} $\chi$ is a number that quantifies the order of fluctuations of the first-passage time $T(x,y)$. Roughly speaking, for any $x, y$, \[ \text{the typical value of } T(x,y)-\mathbb{E} T(x,y) \text{ is of the order } |x-y|^\chi. \] The {\it wandering exponent} $\xi$ quantifies the magnitude of $D(x,y)$. Again, roughly speaking, for any $x,y$, \[ \text{the typical value of } D(x,y) \text{ is of the order } |x-y|^\xi. \] There have been several attempts to give precise mathematical definitions for these exponents (see \cite{lnp96} for some examples) but I could not find a consensus in the literature. The main hurdle is that no one knows whether the exponents actually exist, and if they do, in what sense. There are many conjectures related to $\chi$ and $\xi$. The main one among these, to be found in numerous physics papers \cite{hh85, kpz86, kz87, krug87, km89, ks91, meakin86, medina89, wk87}, including the famous paper of Kardar, Parisi and Zhang \cite{kpz86}, is that although $\chi$ and $\xi$ may depend on the dimension, they always satisfy the relation \begin{equation*}\label{kpz} \chi = 2\xi - 1.
\end{equation*} A well-known conjecture from \cite{kpz86} is that when $d=2$, $\chi = 1/3$ and $\xi = 2/3$. Yet another belief is that $\chi = 0$ if $d$ is sufficiently large. Incidentally, due to its connection with \cite{kpz86}, I've heard in private conversations the relation $\chi=2\xi -1$ being referred to as the `KPZ relation' between $\chi$ and~$\xi$. There are a number of rigorous results for $\chi$ and $\xi$, mainly from the late eighties and early nineties. One of the first non-trivial results is due to Kesten \cite[Theorem 1]{kesten93}, who proved that $\chi \le 1/2$ in any dimension. The only improvement on Kesten's result to date is due to Benjamini, Kalai and Schramm \cite{bks03}, who proved that for first-passage percolation in $d\ge 2$ with binary edge-weights, \begin{equation}\label{bksineq} \sup_{v\in \mathbb{Z}^d, \ |v| >1} \frac{\mathrm{Var} T(0,v)}{|v|/\log |v|} < \infty. \end{equation} Bena\"im and Rossignol \cite{br08} extended this result to a large class of edge-weight distributions that they call `nearly gamma' distributions. The definition of a nearly gamma distribution is as follows. A positive random variable $X$ is said to have a nearly gamma distribution if it has a continuous probability density function $h$ supported on an interval $I$ (which may be unbounded), and its distribution function $H$ satisfies, for all $y\in I$, \[ \Phi'\circ \Phi^{-1} (H(y)) \le A \sqrt{y} h(y), \] for some constant $A$, where $\Phi$ is the distribution function of the standard normal distribution. Although the definition may seem a bit strange, Bena\"im and Rossignol \cite{br08} proved that this class is actually quite large, including e.g.\ exponential, gamma, beta and uniform distributions on intervals. The only non-trivial lower bound on the fluctuations of passage times is due to Newman and Piza \cite{np95} and Pemantle and Peres \cite{pp94}, who showed that in $d=2$, $\mathrm{Var} T(0,v)$ must grow at least as fast as $\log|v|$.
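For a concrete distribution, the nearly gamma condition can be checked numerically. The sketch below evaluates the ratio $\Phi'\circ \Phi^{-1}(H(y))/(\sqrt{y}\, h(y))$ for the Exp(1) distribution (one of the examples listed above); the grid of $y$ values and the resulting empirical constant are illustrative only, not taken from the text.

```python
from math import exp, sqrt
from statistics import NormalDist

Z = NormalDist()  # standard normal: Z.pdf is Phi', Z.inv_cdf is Phi^{-1}

def nearly_gamma_ratio(y):
    """Phi'(Phi^{-1}(H(y))) / (sqrt(y) h(y)) for the Exp(1) distribution,
    where H(y) = 1 - e^{-y} and h(y) = e^{-y}; the nearly gamma condition
    asks for this ratio to be bounded by some constant A on the support."""
    H = 1.0 - exp(-y)
    h = exp(-y)
    return Z.pdf(Z.inv_cdf(H)) / (sqrt(y) * h)

# empirical bound for the constant A on a finite grid
grid = [0.01 * k for k in range(1, 2001)]  # y in (0, 20]
A_empirical = max(nearly_gamma_ratio(y) for y in grid)
```

A standard-normal tail expansion suggests the ratio tends to $\sqrt{2}$ as $y\rightarrow\infty$ for Exp(1), consistent with the bounded value observed on the grid.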
Better lower bounds can be proved if one can show that with high probability, the geodesics lie in `thin cylinders' \cite{cd10}. For the wandering exponent $\xi$, the main rigorous results are due to Licea, Newman and Piza \cite{lnp96} who showed that $\xi^{(2)} \ge 1/2$ in any dimension, and $\xi^{(3)} \ge 3/5$ when $d=2$, where $\xi^{(2)}$ and $\xi^{(3)}$ are exponents defined in their paper which may be equal to $\xi$. Besides the bounds on $\chi$ and $\xi$ mentioned above, there are some rigorous results relating $\chi$ and $\xi$ through inequalities. Wehr and Aizenman~\cite{wa90} proved the inequality $\chi \ge (1-(d-1)\xi)/2$ in a related model, and the version of this inequality for first-passage percolation was proved by Licea, Newman and Piza~\cite{lnp96}. The closest that anyone came to proving $\chi=2\xi -1$ is a result of Newman and Piza \cite{np95}, who proved that $\chi' \ge 2\xi -1$, where $\chi'$ is a related exponent which may be equal to $\chi$. This has also been observed by Howard \cite{howard04} under different assumptions. Incidentally, in the model of Brownian motion in a Poissonian potential, W\"uthrich \cite{wuthrich98} proved the equivalent of the KPZ relation assuming that the exponents exist. The following theorem establishes the relation $\chi = 2\xi -1$ assuming that the exponents $\chi$ and $\xi$ exist in a certain sense (to be defined in the statement of the theorem) and that the distribution of edge-weights is nearly gamma. \begin{thm}\label{kpzthm} Consider the first-passage percolation model on $\mathbb{Z}^d$, $d\ge 2$, with i.i.d.\ edge-weights. Assume that the distribution of edge-weights is `nearly gamma' in the sense of Bena\"im and Rossignol \cite{br08} (which includes exponential, gamma, beta and uniform distributions, among others), and has a finite moment generating function in a neighborhood of zero. 
Let $\chi_a$ and $\xi_a$ be the smallest real numbers such that for all $\chi'> \chi_a$ and $\xi'> \xi_a$, there exists $\alpha > 0$ such that \begin{align} &\sup_{v\in \mathbb{Z}^d\backslash\{0\}} \mathbb{E}\exp\biggl(\alpha \frac{|T(0,v) - \mathbb{E} T(0,v)|}{|v|^{\chi'}}\biggr) < \infty, \tag{A1} \label{up1}\\ &\sup_{v\in \mathbb{Z}^d\backslash\{0\}} \mathbb{E}\exp\biggl(\alpha \frac{D(0,v)}{|v|^{\xi'}}\biggr) < \infty. \tag{A2} \label{up2} \end{align} Let $\chi_b$ and $\xi_b$ be the largest real numbers such that for all $\chi'< \chi_b$ and $\xi'< \xi_b$, there exists $C > 0$ such that \begin{align} &\inf_{v\in \mathbb{Z}^d, \ |v| > C} \frac{\mathrm{Var}(T(0,v))}{|v|^{2{\chi'}}} > 0, \tag{A3} \label{down1}\\ &\inf_{v\in \mathbb{Z}^d, \ |v| > C} \frac{\mathbb{E} D(0,v)}{|v|^{\xi'}} > 0. \tag{A4} \label{down2} \end{align} Then $0\le \chi_b \le \chi_a \le 1/2$, $0\le \xi_b\le \xi_a \le 1$ and $\chi_a \ge 2\xi_b -1$. Moreover, if it so happens that $\chi_a=\chi_b$ and $\xi_a = \xi_b$, and these two numbers are denoted by $\chi$ and $\xi$, then they must necessarily satisfy the relation $\chi = 2\xi - 1$. \end{thm} Note that if $\chi_a=\chi_b$ and $\xi_a = \xi_b$ and these two numbers are denoted by $\chi$ and $\xi$, then $\chi$ and $\xi$ are characterized by the properties that for every $\chi' >\chi$ and $\xi' > \xi$, there are some positive $\alpha$ and $C$ such that for all $v\ne 0$, \begin{align*} &\mathbb{E}\exp\biggl(\alpha \frac{|T(0,v) - \mathbb{E} T(0,v)|}{|v|^{\chi'}}\biggr) < C \ \ \text{and} \ \ \mathbb{E}\exp\biggl(\alpha \frac{D(0,v)}{|v|^{\xi'}}\biggr) < C, \end{align*} and for every $\chi' < \chi$ and $\xi'< \xi$ there are some positive $B$ and $C$ such that for all $v$ with $|v| > C$, \begin{align*} &\mathrm{Var}(T(0,v)) > B|v|^{2{\chi'}} \ \ \text{and} \ \ \mathbb{E} D(0,v) > B|v|^{\xi'}. \end{align*} It seems reasonable to expect that if the two exponents $\chi$ and $\xi$ indeed exist, then they should satisfy the above properties. 
Incidentally, a few months after the first draft of this paper was put up on arXiv, Auffinger and Damron \cite{ad11} were able to replace a crucial part of the proof of Theorem \ref{kpzthm} with a simpler argument that allowed them to remove the assumption that the edge-weights are nearly gamma. Section \ref{sketch} has a sketch of the proof of Theorem \ref{kpzthm}. The rest of the paper is devoted to the actual proof. Proving that $0\le \chi_b\le \chi_a \le1/2$ and $0\le \xi_b \le \xi_a\le1$ is a routine exercise; this is done in Section \ref{trivial}. Proving that $\chi_a \ge 2\xi_b -1$ is also relatively easy and similar to the existing proofs of analogous inequalities, e.g.\ in \cite{np95, howard04}. This is done in Section~\ref{easypart}. The `hard part' is proving the opposite inequality, that is, $\chi\le 2\xi -1$ when $\chi=\chi_a=\chi_b$ and $\xi=\xi_a=\xi_b$. This is done in Sections \ref{012}, \ref{hard2} and \ref{hard3}. \section{Proof sketch}\label{sketch} I will try to give a sketch of the proof in this section. I have found it very hard to aptly summarize the main ideas in the proof without going into the details. This proof sketch represents the end result of my best efforts in this direction. If the interested reader finds the proof sketch too obscure, I would like to request him to return to this section after going through the complete proof, whereupon this high-level sketch may prove more illuminating. Throughout this proof sketch, $C$ will denote any positive constant that depends only on the edge-weight distribution and the dimension. Let $h(x) := \mathbb{E}(T(0,x))$. The function $h$ is subadditive. Therefore the limit \[ g(x) := \lim_{n\rightarrow\infty} \frac{h(nx)}{n} \] exists for all $x\in \mathbb{Z}^d$. The definition can be extended to all $x\in \mathbb{Q}^d$ by taking $n\rightarrow\infty$ through a subsequence, and can be further extended to all $x\in \mathbb{R}^d$ by uniform continuity.
The function $g$ is a norm on $\mathbb{R}^d$, and hence much more well-behaved than $h$. If $|x|$ is large, $g(x)$ is supposed to be a good approximation of $h(x)$. A method developed by Ken Alexander \cite{alexander93, alexander97} uses the order of fluctuations of passage times to infer bounds on $|h(x)-g(x)|$. In the setting of Theorem~\ref{kpzthm}, Alexander's method yields that for any $\varepsilon >0$, there exists $C$ such that for all $x\ne 0$, \begin{equation}\label{alex} g(x)\le h(x)\le g(x)+C|x|^{\chi_a + \varepsilon}. \end{equation} This is formally recorded in Theorem \ref{gapthm}. In the proof of the main result, the above approximation will allow us to replace the expected passage time $h(x)$ by the norm $g(x)$. In Lemma \ref{curve}, we prove that there is a unit vector $x_0$ and a hyperplane $H_0$ perpendicular to $x_0$ such that for some $C>0$, for all $z\in H_0$, \[ |g(x_0+z)-g(x_0)| \le C|z|^2. \] Similarly, there is a unit vector $x_1$ and a hyperplane $H_1$ perpendicular to $x_1$ such that for some $C>0$, for all $z\in H_1$, $|z|\le 1$, \[ g(x_1+z)\ge g(x_1) + C|z|^2. \] The interpretation of these two inequalities is as follows. In the direction $x_0$, the unit sphere of the norm $g$ is `at most as curved as a Euclidean sphere' and in the direction $x_1$, it is `at least as curved as a Euclidean sphere'. Now take a look at Figure \ref{figx1}. Think of $m$ as a fraction of $n$. By the definition of the direction of curvature~$x_1$ and Alexander's approximation~\eqref{alex}, for any $\varepsilon >0$, \begin{align*} &\text{Expected passage time of the path $P$}\\ &\ge g(mx_1+z) + g(nx_1 - (mx_1+z)) + O(n^{\chi+\varepsilon})\\ &= m g(x_1 + z/m) + (n-m) g(x_1 - z/(n-m)) + O(n^{\chi + \varepsilon})\\ &\ge n g(x_1) + C|z|^2/n + O(n^{\chi+\varepsilon})\\ &\ge \mathbb{E} (T(0, nx_1))+ C|z|^2/n + O(n^{\chi+\varepsilon}). \end{align*} Suppose $|z|=n^\xi$. Then $|z|^2/n = n^{2\xi-1}$.
Fluctuations of $T(0,nx_1)$ are of order $n^\chi$. Thus, if $2\xi-1 > \chi$, then $P$ cannot be a geodesic from $0$ to $nx_1$. This sketch is formalized into a rigorous argument in Section \ref{easypart} to prove that $\chi_a \ge 2\xi_b -1$. \begin{figure}[h] \begin{pdfpic} \begin{pspicture}(-1,-1)(10,2.5) \psset{xunit=1cm,yunit=1cm} \rput(0,.2){\small $0$} \rput(8.9,.2){\small $nx_1$} \psline{*-*}(0,0)(9,0) \psline[linestyle=dashed]{*-*}(3,0)(3,2) \rput(3.4, 0.2){\small $mx_1$} \rput(3.6, 2.2){\small $mx_1+z$} \psline[linestyle=dashed]{*-*}(0,0)(3,2) \psline[linestyle=dashed]{*-*}(3,2)(9,0) \pscurve(0,0)(1,2)(2,1)(3,2)(5,1)(7,2)(8.5,-.5)(9,0) \rput(6, 1.7){\small $P$} \end{pspicture} \end{pdfpic} \caption{Proving $\chi \ge 2\xi -1$} \label{figx1} \end{figure} Next, let me sketch the proof of $\chi \le 2\xi -1$ when $\chi > 0$. The methods developed in \cite{cd10} for first-passage percolation in thin cylinders have some bearing on this part of the proof. Recall the direction of curvature $x_0$. Let $a= n^\beta$, $\beta<1$. Let $m = n/a = n^{1-\beta}$. Under the conditions $\chi > 2\xi -1$ and $\chi > 0$, we will show that there is a $\beta < 1$ such that \[ T(0,nx_0) = \sum_{i=0}^{m-1} T(ia x_0, (i+1)ax_0) + o(n^{\chi}). \tag{$\star$} \] This will lead to a contradiction, as follows. Let $f(n) := \mathrm{Var} T(0, nx_0)$. Then by Bena\"im and Rossignol \cite{br08}, $f(n)\le Cn/\log n$. Under ($\star$), by the Harris-FKG inequality, \begin{align*} f(n) = \mathrm{Var} T(0,nx_0) &\ge m \mathrm{Var} T(0, ax_0) + o(n^{2\chi})\\ &= n^{1-\beta} f(n^\beta) + o(n^{2\chi}). \end{align*} If $\beta$ is chosen sufficiently small, the first term on the right will dominate the second. Consequently, \[ \liminf_{n\rightarrow\infty} \frac{f(n)}{n^{1-\beta}f(n^\beta)} \ge 1. \tag{$\dagger$} \] Choose $n_0>1$ and define $n_{i+1} = n_i^{1/\beta}$ for each $i$. Let $v(n) := f(n)/n$. Then $v(n_i)\le C/\log n_i \le C \beta^i$. 
But by ($\dagger$), $\liminf v(n_{i+1})/v(n_i)\ge 1$, and so for all $i$ large enough, $v(n_{i+1})\ge \beta^{1/2}v(n_i)$. In particular, there is a positive constant $c$ such that for all $i$, $v(n_i) \ge c \beta^{i/2}$. Since $\beta < 1$, this gives a contradiction for $i$ large, therefore proving that $\chi\le 2\xi-1$. Let me now sketch a proof of ($\star$) under the conditions $\chi > 2\xi -1$ and $\chi > 0$. Let $a=n^\beta$ and $b= n^{\beta'}$, where $\beta' < \beta<1$. Consider a cylinder of width $n^\xi$ around the line joining $0$ and $nx_0$. Partition the cylinder into alternating big and small cylinders of widths $a$ and $b$ respectively. Call the boundary walls of these cylinders $U_0, V_0, U_1, V_1, \ldots, V_{m-1}, U_m$, where $m$ is roughly $n^{1-\beta}$ (see Figure \ref{figx3}). \begin{figure}[h] \begin{pdfpic} \begin{pspicture}(-1,-1)(10,2.5) \psset{xunit=1cm,yunit=1cm} \pspolygon(0,0)(2,0)(2,1)(0,1) \pspolygon[fillstyle=solid, fillcolor=lightgray](2,0)(3,0)(3,1)(2,1) \pspolygon(3,0)(5,0)(5,1)(3,1) \pspolygon[fillstyle=solid, fillcolor=lightgray](5,0)(6,0)(6,1)(5,1) \psline{-}(6,0)(6.5,0) \psline{-}(6,1)(6.5,1) \psline[linestyle=dotted]{-}(6.5,0)(7.5,0) \psline[linestyle=dotted]{-}(6.5,1)(7.5,1) \psline{-}(7.5,0)(8,0) \psline{-}(7.5,1)(8,1) \pspolygon[fillstyle=solid, fillcolor=lightgray](8,0)(9,0)(9,1)(8,1) \rput(0,-.2){\small $U_0$} \rput(2,-.2){\small $V_0$} \rput(3,-.2){\small $U_1$} \rput(5,-.2){\small $V_1$} \rput(6,-.2){\small $U_2$} \rput(8,-.2){\small $V_{m-1}$} \rput(9,-.2){\small $U_m$} \psline[linestyle=dotted]{*-*}(0,.5)(0,.5) \psline[linestyle=dotted]{*-*}(9,.5)(9,.5) \rput(-.2,.5){\small $0$} \rput(9.4, .5){\small $nx_0$} \psline{<-}(0,0.5)(.75, 0.5) \rput(.9,0.5){\small $a$} \psline{->}(1.05, .5)(2,.5) \psline{<-}(2,0.5)(2.35, 0.5) \rput(2.5,0.5){\small $b$} \psline{->}(2.65, .5)(3,.5) \end{pspicture} \end{pdfpic} \caption{Cylinder of width $n^\xi$ around the line joining $0$ and $nx_0$} \label{figx3} \end{figure} Let $G_i := G(U_i, 
V_i)$, that is, the path with minimum passage time between any vertex in $U_i$ and any vertex in $V_i$. Let $u_i$ and $v_i$ be the endpoints of $G_i$. Let $G_i' := G(v_i, u_{i+1})$. The concatenation of the paths $G_0'$, $G_1$, $G_1'$, $G_2$, $\ldots$, $G_{m-1}'$, $G_m$ is a path from $U_0$ to $U_{m}$. Therefore, \begin{align*} T(U_0, U_{m}) &\le\sum_{i=1}^{m-1} T(U_i, V_i) + \sum_{i=0}^{m-1} T(v_i, u_{i+1}). \end{align*} Next, let $G := G(U_0, U_m)$. Let $u_i'$ be the first vertex in $U_i$ visited by $G$ and let $v_i'$ be the first vertex in $V_i$ visited by $G$. If $G$ stays within the cylinder throughout, then $T(u_i', v_i') \ge T(U_i, V_i)$ and $T(v_i', u_{i+1}')\ge T(V_i, U_{i+1})$. Thus, \begin{align*} T(U_0, U_{m}) &\ge \sum_{i=0}^{m-1} T(U_i, V_i) + \sum_{i=0}^{m-1} T(V_i, U_{i+1}). \end{align*} Thus, if $G(U_0, U_m)$ stays in a cylinder of width $n^\xi$, then \begin{align*} 0&\le T(U_0,U_m)-\sum_{i=0}^{m-1} (T(U_i, V_i)+T(V_i, U_{i+1})) \\ &\le \sum_{i=0}^{m-1}(T(v_i, u_{i+1})-T(V_i, U_{i+1})). \end{align*} Therefore, \begin{align*} &\biggl|T(U_0, U_m) -\sum_{i=0}^{m-1} (T(U_i, V_i) + T(V_i, U_{i+1})) \biggr|\le \sum_{i=0}^{m-1} M_i, \end{align*} where $M_i := \max_{v,v'\in V_i, \ u,u'\in U_{i+1}} |T(v,u) - T(v', u')|$. Note that the errors $M_i$ come only from the small blocks. By the curvature estimate in direction $x_0$, for any $v,v'\in V_i$ and $u,u'\in U_{i+1}$, \[ |\mathbb{E} T(v,u) - \mathbb{E} T(v',u')| \le C(n^\xi)^2/n^{\beta'} = Cn^{2\xi-\beta'}. \] Fluctuations of $T(v,u)$ are of order $n^{\beta'\chi}$. If $2\xi-1 < \chi$, then we can choose $\beta'$ so close to $1$ that $2\xi-\beta' < \beta' \chi$. That is, fluctuations dominate while estimating $M_i$. Consequently, $M_i$ is of order $n^{\beta' \chi}$. Thus, the total error is of order $n^{1-\beta+\beta'\chi}$. Since $\beta' < \beta$ and $\chi >0$, this gives us the opportunity of choosing $\beta'$ and $\beta$ such that the exponent is $< \chi$. This proves ($\star$) for passage times from `boundary to boundary'.
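As a sanity check on the exponent bookkeeping above, one can verify that admissible choices of $\beta' < \beta < 1$ exist whenever $\chi > 0$ and $2\xi - 1 < \chi$; the construction below and the sample values $\chi = 0.4$, $\xi = 0.6$ are illustrative, not taken from the text.

```python
def feasible_exponents(chi, xi):
    """Given chi > 0 and 2*xi - 1 < chi, exhibit beta' < beta < 1 with
    2*xi - beta' < beta' * chi  (fluctuations dominate in the small blocks)
    and 1 - beta + beta' * chi < chi  (the total error is o(n^chi))."""
    assert chi > 0 and 2 * xi - 1 < chi
    lo = 2 * xi / (1 + chi)        # need beta' > 2*xi/(1+chi); lo < 1 since 2*xi - 1 < chi
    beta_p = (lo + 1) / 2          # midpoint of (lo, 1)
    blo = 1 - chi + beta_p * chi   # need beta > 1 - chi + beta'*chi; note blo > beta_p
    beta = (blo + 1) / 2           # midpoint of (blo, 1)
    return beta_p, beta

bp, b = feasible_exponents(chi=0.4, xi=0.6)
```

Here $\mathrm{blo} - \beta' = (1-\chi)(1-\beta') > 0$, so the returned pair automatically satisfies $\beta' < \beta$.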
Proving ($\star$) for `point to point' passage times is only slightly more complicated. The program is carried out in Sections \ref{012} and \ref{hard2}. Finally, for the case $\chi = 0$, we have to prove that $\xi \ge 1/2$. This was proved by Licea, Newman and Piza \cite{lnp96} for a different definition of the wandering exponent. The argument does not seem to work with our definition. A proof is given in Section \ref{hard3}; I will omit this part from the proof sketch. \section{A priori bounds}\label{trivial} In this section we prove the a priori bounds $0\le \chi_b\le \chi_a \le1/2$ and $0\le \xi_b \le \xi_a \le 1$. First, note that the inequalities $\chi_b \le \chi_a$ and $\xi_b\le \xi_a$ are easy. For example, if $\chi_b > \chi_a$, then for any $\chi_a < \chi'< \chi''< \chi_b$, \eqref{up1} implies that \[ \sup_{v\in \mathbb{Z}^d\backslash \{0\}} \frac{\mathrm{Var}(T(0,v))}{|v|^{2\chi'}} < \infty, \] and hence for any sequence $v_n$ such that $|v_n|\rightarrow\infty$, \[ \lim_{n\rightarrow\infty} \frac{\mathrm{Var}(T(0,v_n))}{|v_n|^{2\chi''}} = 0, \] which contradicts \eqref{down1}. A similar argument shows that $\xi_b \le \xi_a$. To show that $\chi_b \ge 0$, let $E_0$ denote the set of all edges incident to the origin. Let $\mathcal{F}_0$ denote the sigma-algebra generated by $(t_e)_{e\not\in E_0}$. Since the edge-weight distribution is non-degenerate, there exists $c_1 < c_2$ such that for an edge $e$, $\mathbb{P}(t_e < c_1) >0$ and $\mathbb{P}(t_e > c_2) >0$. Therefore, \begin{equation}\label{ppmax} \mathbb{P}(\max_{e\in E_0} t_e < c_1) > 0, \ \ \mathbb{P}(\min_{e\in E_0} t_e > c_2) > 0. \end{equation} Let $(t'_e)_{e\in E_0}$ be an independent configuration of edge weights. Define $t_e' = t_e$ if $e\not\in E_0$. Let $T'(0,v)$ be the first-passage time from $0$ to a vertex $v$ in the new environment $t'$. If $t_e < c_1$ and $t_e' > c_2$ for all $e\in E_0$, then $T'(0,v) > T(0,v) + c_2-c_1$. 
Thus, by \eqref{ppmax}, there exists $\delta > 0$ such that for any $v$ with $|v| \ge 2$, \[ \mathbb{E} \mathrm{Var}(T(0,v)|\mathcal{F}_0) = \frac{1}{2}\mathbb{E}(T(0,v)-T'(0,v))^2 > \delta. \] Therefore $\mathrm{Var}(T(0,v)) > \delta$ and so $\chi_b \ge 0$. To show that $\xi_b \ge 0$, note that there is an $\epsilon>0$ small enough such that for any $v\in \mathbb{Z}^d$ with $|v|\ge 2$, there can be at most one lattice path from $0$ to $v$ that stays within distance $\epsilon$ from the straight line segment joining $0$ to $v$. Fix such a vertex $v$ and such a path $P$. If the number of edges in $P$ is sufficiently large, one can use the non-degeneracy of the edge-weight distribution to show by an explicit assignment of edge weights that \[ \mathbb{P}(\text{$P$ is a geodesic}) < \delta, \] where $\delta< 1$ is a constant that depends only on the edge-weight distribution (and not on $v$ or $P$). This shows that for $|v|$ sufficiently large, $\mathbb{E} D(0,v)$ is bounded below by a positive constant that does not depend on $v$, thereby proving that $\xi_b \ge 0$. Let us next show that $\chi_a \le 1/2$. Essentially, this follows from \cite[Theorem 1]{kesten93} or \cite[Proposition 8.3]{talagrand95}, with a little bit of extra work. Below, we give a proof using \cite[Theorem 5.4]{br08}. First, note that there is a constant $C_0$ such that for all $v$, \begin{align}\label{tv1} \mathbb{E} T(0,v)\le C_0|v|_1, \end{align} where $|v|_1$ is the $\ell_1$ norm of $v$. From the assumptions about the distribution of edge-weights, \cite[Theorem 5.4]{br08} implies that there are positive constants $C_1$ and $C_2$ such that for any $v\in \mathbb{Z}^d$ with $|v|_1\ge 2$, and any $0\le t\le |v|_1$, \begin{equation}\label{uptail} \mathbb{P}\biggl(|T(0,v)-\mathbb{E} T(0,v)| \ge t\sqrt{\frac{|v|_1}{\log|v|_1}}\biggr) \le C_1 e^{-C_2 t}. \end{equation} Fix a path $P$ from $0$ to $v$ with $|v|_1$ edges. Recall that $t(P)$ denotes the sum of the weights of the edges in $P$. 
Since the edge-weight distribution has finite moment generating function in a neighborhood of zero and~\eqref{tv1} holds, it is easy to see that there are positive constants~$C_3$, $C_4$ and $C_4'$ such that if $|v|_1 > C_3$, then for any $t> |v|_1$, \begin{equation}\label{uptail2} \begin{split} &\mathbb{P}\biggl(|T(0,v)-\mathbb{E} T(0,v)| \ge t\sqrt{\frac{|v|_1}{\log|v|_1}}\biggr)\\ &\le \mathbb{P}\biggl(T(0,v) \ge C_0|v|_1 + t\sqrt{\frac{|v|_1}{\log|v|_1}}\biggr)\\ &\le\mathbb{P}\biggl(t(P) \ge C_0|v|_1 + t\sqrt{\frac{|v|_1}{\log|v|_1}} \biggr) \le e^{C_4|v|_1 - C_4't\sqrt{|v|_1/\log|v|_1}}. \end{split} \end{equation} Combining \eqref{uptail} and \eqref{uptail2} it follows that there are constants $C_5$, $C_6$ and $C_7$ such that for any~$v$ with $|v|_1> C_5$, \[ \mathbb{E}\exp\biggl(C_6 \frac{|T(0,v)-\mathbb{E} T(0,v)|}{\sqrt{|v|_1/\log|v|_1}}\biggr) \le C_7. \] Appropriately increasing $C_7$, one sees that the above inequality holds for all $v$ with $|v|_1\ge 2$. In particular, $\chi_a \le 1/2$. Finally, let us prove that $\xi_a \le 1$. Consider a self-avoiding path $P$ starting at the origin, containing $m$ edges. By the strict positivity of the edge-weight distribution, for any edge~$e$, \[ \lim_{\theta \rightarrow\infty}\mathbb{E}(e^{- \theta t_e}) =0. \] Now, for any $c>0$, \begin{align*} \mathbb{P}(t(P) \le cm) = \mathbb{P}(e^{-t(P)/c}\ge e^{-m}) &\le (e\mathbb{E}(e^{-t_e/c}))^m. \end{align*} Thus, given any $\delta >0$ there exists $c$ small enough such that for any $m$ and any self-avoiding path $P$ with $m$ edges, \[ \mathbb{P}(t(P) \le cm) \le \delta^m. \] Since there are at most $(2d)^m$ paths with $m$ edges, there exists $c$ small enough such that \[ \mathbb{P}(t(P) \le cm \text{ for some $P$ with $m$ edges}) \le 2^{-m-1}, \] and therefore \begin{equation}\label{tp} \mathbb{P}(t(P) \le cm \text{ for some $P$ with $\ge m$ edges}) \le 2^{-m}.
\end{equation} There is a constant $B>0$ such that for any $t\ge 1$ and any vertex $v\ne 0$, if $D(0,v) \ge t|v|$, then $G(0,v)$ has at least $Bt|v|$ edges. Therefore from \eqref{tp}, \begin{align*} \mathbb{P}(D(0,v) \ge t|v|) &\le \mathbb{P}(T(0,v) \ge Bt|v|/c) + 2^{-Bt|v|}. \end{align*} As in \eqref{uptail2}, there is a constant $C$ such that if $P$ is a path from $0$ to $v$ with $|v|_1$ edges, \[ \mathbb{P}(T(0,v) \ge Bt|v|/c) \le \mathbb{P}(t(P) \ge Bt |v|/c) \le e^{C|v| - Bt|v|/c}. \] Combining the last two displays shows that for some $\alpha$ small enough, \[ \sup_{v\ne 0}\mathbb{E}\exp\biggl(\alpha \frac{D(0,v)}{|v|}\biggr) < \infty, \] and thus, $\xi_a \le 1$. \section{Alexander's subadditive approximation theory} The first step in the proof of Theorem \ref{kpzthm} is to find a suitable approximation of $\mathbb{E} T(0,x)$ by a convex function $g(x)$. For $x\in \mathbb{Z}^d$, define \begin{equation}\label{hdef} h(x) := \mathbb{E} T(0,x). \end{equation} It is easy to see that $h$ satisfies the subadditive inequality \[ h(x+y)\le h(x)+h(y). \] By the standard subadditive argument, it follows that \begin{equation}\label{gdef} g(x) := \lim_{n\rightarrow\infty} \frac{h(nx)}{n} \end{equation} exists for each $x\in \mathbb{Z}^d$. In fact, $g(x)$ may be defined similarly for $x\in \mathbb{Q}^d$ by taking $n\rightarrow\infty$ through a sequence of $n$ such that $nx\in \mathbb{Z}^d$. The function $g$ extends continuously to the whole of $\mathbb{R}^d$, and the extension is a norm on $\mathbb{R}^d$ (see e.g.\ \cite[Lemma 1.5]{alexander97}). Note that by subadditivity, \begin{equation}\label{gh} g(x)\le h(x) \text{ for all } x\in \mathbb{Z}^d. \end{equation} Since the edge-weight distribution is continuous in the setting of Theorem~\ref{kpzthm}, it follows by a well-known result (see \cite{kesten86}) that $g(x) > 0$ for each $x\ne 0$. Let $e_i$ denote the $i$th coordinate vector in $\mathbb{R}^d$. 
Since $g$ is symmetric with respect to interchange of coordinates and reflections across all coordinate hyperplanes, it is easy to show using subadditivity that \begin{equation}\label{ge} |x|_\infty \le g(x)/g(e_1) \le |x|_1 \text{ for all } x\ne 0, \end{equation} where $|x|_p$ denotes the $\ell_p$ norm of the vector $x$. How well does $g(x)$ approximate $h(x)$? Following the work of Kesten \cite{kesten86, kesten93}, Alexander \cite{alexander93, alexander97} developed a general theory for tackling such questions. One of the main results of Alexander \cite{alexander97} is that under appropriate hypotheses on the edge-weights, there exists some $C >0$ such that for all $x\in \mathbb{Z}^d\backslash\{0\}$, \[ g(x)\le h(x) \le g(x) + C|x|^{1/2}\log |x|. \] Incidentally, Alexander has recently been able to obtain slightly improved results for nearly gamma edge-weights \cite{alexander11}. It turns out that under the hypotheses of Theorem \ref{kpzthm}, Alexander's argument goes through almost verbatim to yield the following result. \begin{thm}\label{gapthm} Consider the setup of Theorem \ref{kpzthm}. Let $g$ and $h$ be defined as in \eqref{gdef} and \eqref{hdef} above. Then for any $\chi' >\chi_a$, there exists $C > 0$ such that for all $x\in \mathbb{Z}^d$ with $|x|>1$, \[ g(x) \le h(x)\le g(x)+C|x|^{\chi'} \log |x|. \] \end{thm} Sacrificing brevity for the sake of completeness, I will now prove Theorem~\ref{gapthm} by copying Alexander's argument with only minor changes at the appropriate points. Fix $\chi'> \chi_a$. Since $0\le \chi_a\le 1/2$, $\chi'$ can be chosen to satisfy $0<\chi'<1$. Let $B_0 := \{x: g(x)\le 1\}$. Given $x\in \mathbb{R}^d$, let $H_x$ denote a hyperplane tangent to the boundary of $g(x)B_0$ at $x$. Note that if the boundary is not smooth, the choice of $H_x$ may not be unique. Let $H_x^0$ be the hyperplane through the origin that is parallel to $H_x$.
There is a unique linear functional $g_x$ on $\mathbb{R}^d$ satisfying \[ g_x(y) = 0 \text{ for all } y\in H_x^0, \ \ g_x(x) = g(x). \] For each $x\in \mathbb{R}^d$, $C>0$ and $K > 0$ let \begin{align*} &Q_x(C, K) \\ &\quad := \{y\in \mathbb{Z}^d: |y|\le K|x|, \ g_x(y)\le g(x),\ h(y) \le g_x(y) + C |x|^{\chi'}\log|x|\}. \end{align*} The following key result is taken from \cite{alexander97}. \begin{lmm}[Alexander \cite{alexander97}, Theorem 1.8]\label{alexander} Consider the setting of Theorem~\ref{gapthm}. Suppose that for some $M> 1$, $C>0$, $K>0$ and $a> 1$, the following holds. For each $x\in \mathbb{Q}^d$ with $|x|\ge M$, there exists an integer $n\ge 1$, a lattice path $\gamma$ from $0$ to $nx$, and a sequence of sites $0=v_0, v_1,\ldots, v_m = nx$ in $\gamma$ such that $m\le an$ and $v_i - v_{i-1}\in Q_x(C, K)$ for all $1\le i\le m$. Then the conclusion of Theorem \ref{gapthm} holds. \end{lmm} Before proving that the conditions of Lemma \ref{alexander} hold, we need some preliminary definitions and results. Define \[ s_x(y) := h(y) - g_x(y), \ \ y\in \mathbb{Z}^d. \] By the definition of $g_x$ and the fact that $g$ is a norm, it is easy to see that \begin{equation}\label{gg} |g_x(y)|\le g(y), \end{equation} and by subadditivity, $g(y) \le h(y)$. Therefore $s_x(y)\ge 0$. Again from subadditivity of $h$ and linearity of $g_x$, \begin{equation}\label{ss} s_x(y+z)\le s_x(y) + s_x(z) \ \ \text{for all } y,z\in \mathbb{Z}^d. \end{equation} Let $C_1 := 320 d^2/\alpha$, where $\alpha$ is from the statement of Theorem \ref{kpzthm}. As in~\cite{alexander97}, define \begin{align*} Q_x &:= Q_x(C_1, 2d+1),\\ G_x &:= \{y\in \mathbb{Z}^d: g_x(y) > g(x)\},\\ \Delta_x &:= \{y\in Q_x: y \text{ adjacent to } \mathbb{Z}^d\backslash Q_x, \ y \text{ not adjacent to } G_x\}, \\ D_x &:= \{y\in Q_x: y \text{ adjacent to } G_x\}. \end{align*} The following Lemma is simply a slightly altered copy of Lemma 3.3 in \cite{alexander97}. 
\begin{lmm}\label{copy} Assume the conditions of Theorem \ref{kpzthm}. Then there exists a constant $C_2$ such that if $|x|\ge C_2$, the following hold. \begin{itemize} \item[(i)] If $y\in Q_x$ then $g(y) \le 2g(x)$ and $|y|\le 2d|x|$. \item[(ii)] If $y\in \Delta_x$ then $s_x(y)\ge C_1 |x|^{\chi'}(\log |x|)/2$. \item[(iii)] If $y\in D_x$ then $g_x(y) \ge 5g(x)/6$. \end{itemize} \end{lmm} \begin{proof} (i) Suppose $g(y) > 2g(x)$ and $g_x(y) \le g(x)$. Then using \eqref{gh} and \eqref{gg}, \begin{align*} 2g(x) < g(y) \le h(y) = g_x(y)+s_x(y) \le g(x) + s_x(y), \end{align*} so from \eqref{ge}, $s_x(y) > g(x) > C_1 |x|^{\chi'}\log|x|$ provided $|x| \ge C_2$. Thus $y\not \in Q_x$ and the first conclusion in (i) follows. The second conclusion then follows from~\eqref{ge}. (ii) Note that $z= y\pm e_i$ for some $z\in \mathbb{Z}^d \cap Q_x^c \cap G_x^c$ and $i\le d$. From (i) we have $|y|\le 2d|x|$, so $|z|\le (2d+1)|x|$, provided $|x| > 1$. Since $z\not \in Q_x$ we must then have $s_x(z)> C_1 |x|^{\chi'}\log |x|$, while using \eqref{gg}, \[ h(\pm e_i) = s_x(\pm e_i) + g_x(\pm e_i) \ge s_x(\pm e_i) - g(\pm e_i). \] Consequently, by \eqref{ss}, if $|x|\ge C_2$, \begin{align*} s_x(y) &\ge s_x(z) - s_x(\pm e_i) \\ &\ge C_1 |x|^{\chi'}\log|x| - h(\pm e_i) - g(\pm e_i) \\ &\ge C_1 |x|^{\chi'}(\log|x|) /2. \end{align*} (iii) As in (ii) we have $z=y\pm e_i$ for some $z\in \mathbb{Z}^d \cap G_x$ and $i\le d$. Therefore using \eqref{ge} and \eqref{gg}, \[ g_x(y) = g_x(z) - g_x(\pm e_i) \ge g_x(z)- g(\pm e_i) \ge 5g(x)/6 \] for all $|x|\ge C_2$. \end{proof} Let us call the $m+1$ sites in Lemma \ref{alexander} marked sites. If $m$ is unrestricted, it is easy to find inductively a sequence of marked sites for any path $\gamma$ from $0$ to $nx$, as follows. 
One can start at $v_0 = 0$, and given $v_i$, let $v_{i+1}'$ be the first site (if any) in $\gamma$, coming after $v_i$, such that $v_{i+1}' - v_i \not \in Q_x$; then let $v_{i+1}$ be the last site in $\gamma$ before $v_{i+1}'$ if $v_{i+1}'$ exists; otherwise let $v_{i+1}=nx$ and end the construction. If $|x|$ is large enough, then it is easy to deduce from \eqref{ge} and \eqref{gg} that all neighbors of the origin must belong to $Q_x$ and therefore $v_{i+1}\ne v_i$ for each $i$ and hence the construction must end after a finite number of steps. We call the sequence of marked sites obtained from a self-avoiding path $\gamma$ in this way, the $Q_x$-skeleton of $\gamma$. Given such a skeleton $(v_0,\ldots, v_m)$, abbreviated $(v_i)$, of some lattice path, we divide the corresponding indices into two classes, corresponding to `short' and `long' increments: \begin{align*} S((v_i)) &:= \{i: 0\le i< m-1, \ v_{i+1}-v_i\in \Delta_x\},\\ L((v_i)) &:= \{i: 0\le i< m-1, \ v_{i+1}-v_i\in D_x\}. \end{align*} Note that the final increment, from $v_{m-1}$ to $v_m$, belongs to neither class, and by Lemma \ref{copy}(ii), \begin{equation}\label{svi} j\in S((v_i)) \ \text{ implies } \ s_x(v_{j+1}-v_j) \ge C_1 |x|^{\chi'}(\log|x|) /2. \end{equation} The next result is analogous to Proposition 3.4 in \cite{alexander97}. \begin{prop}\label{copy2} Assume the conditions of Theorem \ref{kpzthm}. There exists a constant $C_3$ such that if $|x|\ge C_3$ then for sufficiently large $n$ there exists a lattice path from $0$ to $nx$ with a $Q_x$-skeleton of $2n+1$ or fewer vertices. \end{prop} \begin{proof} Let $(v_0,\ldots, v_m)$ be a $Q_x$-skeleton of some lattice path and let \[ Y_i := \mathbb{E} T(v_i, v_{i+1}) - T(v_i, v_{i+1}). \] Then by \eqref{up1} of Theorem \ref{kpzthm} and Lemma \ref{copy}(i), there are constants $C_4 := \alpha/(2d)^{\chi'} \ge \alpha/2d$ and $C_5$ such that for $0\le i\le m-1$, \begin{equation}\label{yp} \mathbb{E}\exp(C_4 |Y_i|/|x|^{\chi'}) \le C_5.
\end{equation} Let $Y_0', Y_1',\ldots, Y_{m-1}'$ be independent random variables with $Y_i'$ having the same distribution as $Y_i$. Let $T(0, w; (v_j))$ be the minimum passage time among all lattice paths from $0$ to a site $w$ with $Q_x$-skeleton $(v_j)$. By \cite[equation (4.13)]{kesten86} or \cite[Theorem 2.3]{alexander93}, for all $t\ge 0$, \[ \mathbb{P}\biggl(\sum_{i=0}^{m-1} Y_i' \ge t\biggr) \ge \mathbb{P}\biggl(\sum_{i=0}^{m-1} \mathbb{E} T(v_i, v_{i+1}) - T(0, v_m; (v_j)) \ge t\biggr). \] Now by \eqref{yp}, \begin{align*} \mathbb{P}\biggl(\sum_{i=0}^{m-1} Y_i' \ge t\biggr) &\le e^{-C_4 t/|x|^{\chi'}} C_5^m. \end{align*} Let $C_6 := 20 d^2/\alpha$. Taking $t = C_6 m |x|^{\chi'}\log|x|$, the above display shows that there is a constant $C_7$ such that for all $|x| \ge C_7$, \begin{align*} \mathbb{P}\biggl(\sum_{i=0}^{m-1} \mathbb{E} T(v_i, v_{i+1}) - T(0, v_m; (v_j)) \ge C_6m |x|^{\chi'}\log|x|\biggr) &\le (C_5e^{- 10 d \log |x|})^m. \end{align*} From the definition of a $Q_x$-skeleton, it is easy to see that there is a constant $C_8$ such that there are at most $(C_8 |x|^d)^m$ $Q_x$-skeletons with $m+1$ vertices. Therefore, the above display shows that there are constants $C_9$ and $C_{10}$ such that when $|x| \ge C_9$, \begin{align*} &\mathbb{P}\biggl(\sum_{i=0}^{m-1} \mathbb{E} T(v_i, v_{i+1}) - T(0, v_m; (v_j)) \ge C_6m |x|^{\chi'}\log|x|\\ &\qquad \qquad \text{ for some $Q_x$-skeleton with $m+1$ vertices}\biggr) \le e^{-C_{10} m \log |x|}. \end{align*} This in turn yields that for some constant $C_{11}$, for all $|x|\ge C_{11}$, \begin{equation}\label{alexmain} \begin{split} &\mathbb{P}\biggl(\sum_{i=0}^{m-1} \mathbb{E} T(v_i, v_{i+1}) - T(0, v_m; (v_j)) \ge C_6m |x|^{\chi'}\log|x|\\ &\qquad \text{ for some $m \ge 1$ and some $Q_x$-skeleton with $m+1$ vertices}\biggr) \\ &\qquad \qquad \le 2e^{-C_{10} \log |x|}. 
\end{split} \end{equation} Now let $\omega := \{t_e: e \text{ is an edge in } \mathbb{Z}^d\}$ be a fixed configuration of passage times (to be further specified later) and let $(v_0,\ldots, v_m)$ be the $Q_x$-skeleton of a route from $0$ to $nx$. Then since $v_{i+1}-v_i\in Q_x$, \[ m g(x) \ge \sum_{i=0}^{m-1} g_x(v_{i+1}-v_i) = g_x(nx) = n g(x). \] Therefore \begin{equation}\label{nm} n\le m. \end{equation} From the concentration of first-passage times, \[ \mathbb{P}(T(0,nx) \le n g(x)+n) \rightarrow 1 \ \text{ as } n \rightarrow\infty, \] so by \eqref{alexmain} if $n$ is large there exists a configuration $\omega$ and a $Q_x$-skeleton $(v_0,\ldots, v_m)$ of a path from $0$ to $nx$ such that \begin{align}\label{ttg} T(0, nx; (v_j)) = T(0,nx) \le ng(x)+n \end{align} and \begin{align}\label{ttg1} \sum_{i=0}^{m-1} \mathbb{E} T(v_i, v_{i+1}) - T(0, nx; (v_j)) < C_6m |x|^{\chi'}\log|x|. \end{align} Thus for some constant $C_{12}$, if $|x|\ge C_{12}$ then by \eqref{nm}, \eqref{ttg} and \eqref{ttg1}, \begin{equation}\label{tvv} \begin{split} \sum_{i=0}^{m-1} \mathbb{E} T(v_i, v_{i+1}) &< ng(x) + n + C_6m |x|^{\chi'}\log|x|\\ &\le n g(x) + 2C_6m |x|^{\chi'}\log|x|. \end{split} \end{equation} But by \eqref{svi}, \begin{align*} \sum_{i=0}^{m-1} \mathbb{E} T(v_i, v_{i+1}) &= \sum_{i=0}^{m-1}(g_x(v_{i+1}-v_i) + s_x(v_{i+1}-v_i)) \\ &\ge g_x(nx) + C_1 |S((v_i))| |x|^{\chi'} (\log|x|)/2, \end{align*} which, together with \eqref{tvv}, yields \begin{equation}\label{sineq} |S((v_i))| \le 4C_6m/C_1 = m/4. \end{equation} At the same time, using Lemma \ref{copy}(iii), \begin{align*} \sum_{i=0}^{m-1} \mathbb{E} T(v_i, v_{i+1}) &= \sum_{i=0}^{m-1}(g_x(v_{i+1}-v_i) + s_x(v_{i+1}-v_i)) \\ &\ge 5|L((v_i))| g(x)/6. \end{align*} With \eqref{tvv}, \eqref{ge} and the assumption that $\chi' < 1$, this implies that there is a constant $C_{13}$ such that, provided $|x|\ge C_{13}$, \[ |L((v_i))| \le 6n/5 + \frac{12C_6 m|x|^{\chi'}\log|x|}{6g(e_1) |x|/\sqrt{d}} \le 6n/5 + m/8. 
\] This and \eqref{sineq} give \[ m = |L((v_i))| + |S((v_i))| + 1\le 6n/5 + 3m/8 + 1, \] which, for $n$ large, implies $m\le 2n$, proving the Proposition. \end{proof} \begin{proof}[Proof of Theorem \ref{gapthm}] Lemma \ref{alexander} and Proposition \ref{copy2} prove the conclusion of Theorem \ref{gapthm} for $x$ with sufficiently large Euclidean norm. To prove this for all $x$ with $|x|>1$, one simply has to increase the value of $C$. \end{proof} \section{Curvature bounds} The unit ball of the $g$-norm, usually called the `limit shape' of first-passage percolation, is an object of great interest and intrigue in this literature. Very little is known rigorously about the limit shape, except for a fundamental result about convergence to the limit shape due to Cox and Durrett \cite{cd81}, some qualitative results of Kesten \cite{kesten86} who proved, in particular, that the limit shape need not be a Euclidean ball, an important result of Durrett and Liggett \cite{dl81} who showed that the boundary of the limit shape may contain straight lines, and some bounds on the rate of convergence to the limit shape \cite{kesten93, alexander97}. In particular, it is not even known whether the limit shape is strictly convex in every direction (except for the related continuum model of `Riemannian first-passage percolation' \cite{lw10} and first-passage percolation with stationary ergodic edge-weights \cite{hm95}). The following Proposition lists two properties of the limit shape that are crucial for our purposes. \begin{prop}\label{curve} Let $g$ be defined as in \eqref{gdef} and assume that the distribution of edge-weights is continuous. Then there exists $x_0\in \mathbb{R}^d$ with $|x_0| =1$, a constant $C\ge 0$ and a hyperplane $H_0$ through the origin perpendicular to $x_0$ such that for all $z\in H_0$, \[ |g(x_0+z) - g(x_0)| \le C|z|^2.
\] There also exists $x_1\in \mathbb{R}^d$ with $|x_1|=1$ and a hyperplane $H_1$ through the origin perpendicular to $x_1$ such that for all $z\in H_1$, \[ g(x_1+z) \ge \sqrt{1+|z|^2} g(x_1). \] \end{prop} \begin{proof} The proof is similar to that of \cite[Lemma 5]{np95}. Let $B(0,r)$ denote the Euclidean ball of radius $r$ centered at the origin and let \[ B_g(0,r) := \{x: g(x)\le r\} \] denote the ball of radius $r$ centered at the origin for the norm $g$. Let $r$ be the smallest number such that $B_g(0,r)\supseteq B(0,1)$. Let $x_0$ be a point of intersection of $\partial B_g(0,r)$ and $\partial B(0,1)$. Let $H_0$ be a hyperplane tangent to $\partial B_g(0,r)$ at $x_0$, translated to contain the origin. Note that $x_0+H_0$ is also a tangent hyperplane for $B(0,1)$ at $x_0$, since it touches $B(0,1)$ only at~$x_0$. Therefore $H_0$ is perpendicular to $x_0$. Now for any $z\in H_0$, the point $y := (x_0+z)/|x_0+z|$ is a point on $\partial B(0,1)$ and hence contained in $B_g(0,r)$. Therefore \[ g(x_0)=r \ge g(y) = \frac{1}{|x_0+z|} g(x_0+z)= \frac{1}{\sqrt{1+|z|^2}} g(x_0+z). \] Since $\sqrt{1+t}\le 1+t/2$ for all $t\ge 0$, this shows that there is a constant $C$ such that \[ g(x_0+z) \le g(x_0)+C|z|^2 \] for all $z\in H_0$. Also, since $x_0+H_0$ is a supporting hyperplane of the convex set $B_g(0,r)$, no point of $x_0+H_0$ lies in the interior of $B_g(0,r)$; therefore $g(x_0)=r\le g(x_0+z)$ for all $z\in H_0$. This proves the first assertion of the Proposition. For the second, we proceed similarly. Let $r$ be the largest number such that $B_g(0,r) \subseteq B(0,1)$. Let $x_1$ be a point in the intersection of $\partial B_g(0,r)$ and $\partial B(0,1)$. Let $H_1$ be the hyperplane tangent to $\partial B(0,1)$ at $x_1$, translated to contain the origin. Note that this is simply the hyperplane through the origin that is perpendicular to $x_1$.
Since $B(0,1)$ contains $B_g(0,r)$, and for any $z\in H_1$ the point $y:= (x_1+z)/|x_1+z|$ lies on $\partial B(0,1)$, therefore \[ g(x_1) =r \le g(y) = \frac{1}{|x_1+z|} g(x_1 + z) = \frac{1}{\sqrt{1 + |z|^2}} g(x_1+z). \] This completes the argument. \end{proof} \section{Proof of $\chi_a \ge 2\xi_b - 1$}\label{easypart} We argue by contradiction. Suppose that $2\xi_b-1 > \chi_a$. Choose $\xi'$ such that \begin{equation*}\label{cxx} \frac{1+\chi_a}{2} < \xi' < \xi_b. \end{equation*} Note that $\xi'< 1$. Let $x_1$ and $H_1$ be as in Proposition \ref{curve}. Let $n$ be a positive integer, to be chosen later. Throughout this proof, $C$ will denote any positive constant that does not depend on $n$. The value of $C$ may change from line to line. Also, we will assume without mention that `$n$ is large enough' wherever required. Let $y$ be the closest point in $\mathbb{Z}^d$ to $nx_1$. Note that \begin{equation}\label{ynx} |y-nx_1|\le \sqrt{d}. \end{equation} Let $L$ denote the line passing through $0$ and $nx_1$ and let $L'$ denote the line segment joining $0$ to $nx_1$ (but not including the endpoints). Let $V$ be the set of all points in $\mathbb{Z}^d$ whose distance from $L'$ lies in the interval $[n^{\xi'}, 2n^{\xi'}]$. Take any $v\in V$. We claim that there is a constant $C$ (not depending on $n$) such that for any $v\in V$, \begin{align}\label{ggg} g(v)+ g(nx_1-v) \ge g(nx_1) + C n^{2\xi'-1}. \end{align} Let us now prove this claim. Let $w$ be the projection of $v$ onto $L$ along $H_1$ (i.e.\ the perpendicular projection). To prove \eqref{ggg}, there are three cases to consider. First suppose that $w$ lies in $L'$. Note that $w/|w| = x_1$. Let $v' := v/|w|$ and $z := v'- x_1 = (v-w)/|w|$.
\begin{figure}[h] \begin{pdfpic} \begin{pspicture}(-.5,-.5)(10,3) \psset{xunit=1cm,yunit=1cm} \rput(3.2, .15){\small $w$} \rput(0,.2){\small $0$} \rput(9,.2){\small $nx_1$} \psline{*-*}(0,0)(9,0) \rput(3.2, 2.4){\small $v$} \psline[linestyle=dashed]{*-*}(3, 2.5)(3,0) \psline[linestyle=dashed]{-}(3,2.5)(0,0) \psline[linestyle=dashed]{*-*}(.75, .625)(.75,0) \rput(.75, .9){\small $v'$} \rput(.95,.15){\small $x_1$} \end{pspicture} \end{pdfpic} \caption{The relative positions of $x_1, v', v, w, nx_1$.} \label{fig} \end{figure} Note that $z\in H_1$. Thus by Proposition \ref{curve}, \[ g(v') = g(x_1 + z) \ge \sqrt{1+|z|^2} g(x_1). \] Consequently, \begin{equation}\label{gv1} g(v) \ge |w|\sqrt{1+|z|^2} g(x_1). \end{equation} Next, let $w' := nx_1 - w$. Note that $w'/|w'| = x_1$. Let $v'' := (nx_1 - v)/|w'|$, and \[ z' := v'' - x_1 = (w-v)/|w'|. \] Then $z' \in H_1$, and hence by Proposition \ref{curve}, \[ g(v'') = g(x_1 + z') \ge \sqrt{1+|z'|^2} g(x_1). \] Consequently, \begin{equation}\label{gv2} g(nx_1 - v) \ge |w'|\sqrt{1+|z'|^2}g(x_1). \end{equation} Since $v\in V$, therefore $|v-w| \ge n^{\xi'}$. Again, $|w|+|w'| = n$. Thus, \[ \min\{|z|, |z'|\} \ge n^{\xi'-1}. \] Combining this with \eqref{gv1}, \eqref{gv2}, \eqref{ge} and the fact that $\xi' < 1$, we have \begin{align*} g(v) + g(nx_1 - v) &\ge (|w|+ |w'|) \sqrt{1+n^{2\xi'-2}}g(x_1)\\ &= \sqrt{1+n^{2\xi'-2}} g(nx_1)\\ &\ge g(nx_1) + Cn^{2\xi' -1}. \end{align*} Next, suppose that $w$ lies in $L\backslash L'$, on the side closer to $nx_1$. As above, let $z := (v-w)/|w|$. As in \eqref{gv1}, we conclude that \begin{equation}\label{gvw} g(v) \ge |w| \sqrt{1+|z|^2} g(x_1). \end{equation} By the definition of $V$, the distance between $v$ and $nx_1$ must be greater than $n^{\xi'}$. But in this case \[ |v-nx_1|^2 = (|w|-n)^2 + |v-w|^2 = (|w|-n)^2 + |w|^2|z|^2, \] and we also have $n\le |w| \le 3n$. Thus, either $|w|^2|z|^2 > n^{2\xi'}/2$ (which implies $|z|^2\ge Cn^{2\xi'-2}$), or $|w| \ge n + n^{\xi'}/\sqrt{2}$.
Since $\xi' > 2\xi'-1$, therefore by \eqref{gvw}, in either situation we have \[ g(v)\ge g(nx_1) + Cn^{2\xi' -1}. \] Similarly, if $w$ lies in $L\backslash L'$, on the side closer to $0$, then \[ g(nx_1 - v) \ge g(nx_1) + Cn^{2\xi'-1}. \] This completes the proof of \eqref{ggg}. Now \eqref{ggg} combined with Theorem \ref{gapthm},~\eqref{ynx} and the fact that $2\xi'-1 > \chi_a$ implies that if $n$ is large enough, then for any $v\in V$, \begin{equation}\label{hhh} h(v) + h(y-v) \ge h(y) + Cn^{2\xi'-1}. \end{equation} Choose $\chi_1, \chi_2$ such that $\chi_a < \chi_1< \chi_2 < 2\xi' -1$. Then by \eqref{up1} of Theorem~\ref{kpzthm}, there is a constant $C$ such that for $n$ large enough, \[ \mathbb{P}(T(0,y) > h(y) + n^{\chi_2}) \le e^{-Cn^{\chi_2-\chi_1}}. \] Now, for any $v\in V$, both $|v|$ and $|y-v|$ are bounded above by $Cn$. Therefore again by \eqref{up1}, \begin{align*} \mathbb{P}(T(0,v) < h(v) - n^{\chi_2}) &\le e^{-Cn^{\chi_2-\chi_1}},\\ \mathbb{P}(T(v,y) < h(y-v) - n^{\chi_2}) &\le e^{-Cn^{\chi_2-\chi_1}}. \end{align*} This, together with \eqref{hhh}, shows that if $n$ is large enough, then for any $v\in V$, \[ \mathbb{P}(T(0,y) = T(0,v) + T(v,y)) \le e^{-C n^{\chi_2-\chi_1}}. \] Since the size of $V$ grows polynomially with $n$, this shows that \[ \mathbb{P}(T(0,y) = T(0,v) + T(v,y) \text{ for some } v\in V) \le e^{-C n^{\chi_2-\chi_1}}. \] Note that if the geodesic from $0$ to $y$ passes through $V$, then $T(0,y) = T(0,v) + T(v,y)$ for some $v\in V$. If $D(0,y) > n^{\xi'}$ then the geodesic must pass through $V$. Thus, the above inequality implies that \[ \mathbb{P}(D(0,y)> n^{\xi'}) \le e^{-C n^{\chi_2-\chi_1}}. \] By \eqref{up2} of Theorem \ref{kpzthm}, this gives \begin{align*} \mathbb{E} D(0,y) &\le n^{\xi'} + \mathbb{E}(D(0,y) 1_{\{D(0,y) > n^{\xi'}\}})\\ &\le n^{\xi'} + \sqrt{\mathbb{E} (D(0,y)^2) \mathbb{P}(D(0,y) > n^{\xi'})}\\ &\le n^{\xi'} + C_1n^{C_1} e^{-C_2 n^{\chi_2-\chi_1}}. 
\end{align*} Taking $n \rightarrow\infty$, this shows that \eqref{down2} of Theorem \ref{kpzthm} is violated (since $\xi' < \xi_b$), contradicting our original assumption that $\chi_a < 2\xi_b -1$. Thus, $\chi_a \ge 2\xi_b -1$. \section{Proof of $\chi \le 2\xi -1$ when $0 < \chi < 1/2$}\label{012} In this section and the rest of the manuscript, we assume that $\chi_a=\chi_b$ and $\xi_a = \xi_b$, and denote these two numbers by $\chi$ and $\xi$. Again we argue by contradiction. Suppose that $0 < \chi < 1/2$ and $\chi > 2\xi -1$. Fix $\chi_1< \chi < \chi_2$, to be chosen later. Choose $\xi'$ such that \[ \xi < \xi' < \frac{1+\chi}{2}. \] Define \begin{align*} \beta' &:= \frac{1}{2} + \frac{\xi'}{1+\chi},\\ \beta &:= 1-\frac{\chi}{2} + \frac{\chi}{2}\beta',\\ \varepsilon &:= (1-\beta)\biggl(1-\frac{\chi}{2}\biggr). \end{align*} We need several inequalities involving the numbers $\beta'$, $\beta$ and $\varepsilon$. Since \[ 0 < \frac{\xi'}{1+\chi} < \frac{1}{2}, \] therefore \begin{equation}\label{b1} \frac{1}{2}< \beta' < 1. \end{equation} Since $\chi < 1$ and $\xi'<(1+\chi)/2 <1$, \begin{equation}\label{bpxi} \beta' > \frac{1}{2} + \frac{\xi'}{2} > \xi'. \end{equation} Since $\beta$ is a convex combination of $1$ and $\beta'$ and $\chi > 0$, \begin{equation}\label{bb} \beta' < \beta < 1. \end{equation} Since $0 < \chi < 1$ and $0<\beta<1$, \begin{equation}\label{veb} 0 <\varepsilon < 1-\beta. \end{equation} Since $\beta'$ is the average of $1$ and $2\xi'/(1+\chi)\in (0,1)$, therefore $\beta'$ is strictly bigger than $2\xi'/(1+\chi)$ and hence \begin{equation}\label{xibp} \begin{split} 2\xi' - \beta' &< 2\xi' - \frac{2\xi'}{1+\chi}\\ &= \frac{2\xi'}{1+\chi} \chi < \beta' \chi. \end{split} \end{equation} By \eqref{bb}, this implies that \begin{equation}\label{xib} 2\xi' - \beta < 2\xi' - \beta' < \beta' \chi < \beta \chi.
\end{equation} Next, by \eqref{b1}, \begin{equation}\label{chib} \begin{split} 1-\beta + \beta'\chi = \frac{\chi}{2}(1+\beta') < \chi. \end{split} \end{equation} And finally by \eqref{b1}, \begin{equation}\label{chibep} \beta \chi + 1-\beta - \varepsilon = \beta \chi + (1-\beta) \frac{\chi}{2} < \chi. \end{equation} Let $q$ be a large positive integer, to be chosen later. Throughout this proof, we will assume without mention that $q$ is `large enough' wherever required. Also, $C$ will denote any constant that does not depend on our choice of $q$, but may depend on all other parameters. Let $r$ be an integer between $\frac{1}{2}q^{(1-\beta-\varepsilon)/\varepsilon}$ and $2q^{(1-\beta-\varepsilon)/\varepsilon}$, recalling that by~\eqref{veb}, $1-\beta-\varepsilon > 0$. Let $k= rq$. Let $a$ be a real number between $q^{\beta/\varepsilon}$ and $2q^{\beta/\varepsilon}$. Let $n = ak$. Note that $n= arq$, which gives $\frac{1}{2}q^{1/\varepsilon} \le n\le 4q^{1/\varepsilon}$. From this it is easy to see that there are positive constants $C_1$ and $C_2$, depending only on $\beta$ and $\varepsilon$, such that \begin{align} &C_1n^{\varepsilon} \le q \le C_2n^{\varepsilon}, \label{qbd}\\ &C_1n^{1-\beta} \le k\le C_2n^{1-\beta}, \label{kbd}\\ &C_1n^\beta \le a \le C_2n^\beta, \label{abd}\\ &C_1n^{1-\beta-\varepsilon} < r < C_2n^{1-\beta-\varepsilon}. \label{rbd} \end{align} Let $b := n^{\beta'}$. Note that by \eqref{bb}, $b$ is negligible compared to $a$ if $q$ is large. Note also that, although $r$, $k$ and $q$ are integers, $a$, $n$ and $b$ need not be. Let $x_0$ and $H_0$ be as in Proposition~\ref{curve}. For $0\le i\le k$, define \begin{align*} U_i' &:= H_0 + i a x_0\ , \\ V_i' &:= H_0 + (ia + a-b) x_0\ . \end{align*} Let $U_i$ be the set of points in $\mathbb{Z}^d$ that are within distance $\sqrt{d}$ from $U_i'$. Let $V_i$ be the set of points in $\mathbb{Z}^d$ that are within distance $\sqrt{d}$ from $V_i'$. 
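As a quick sanity check on the scales just introduced, the exponent bookkeeping behind \eqref{qbd}--\eqref{kbd} (ignoring the bounded multiplicative factors in the choices of $r$ and $a$) reads:

```latex
\[
n = arq \asymp q^{\beta/\varepsilon}\cdot q^{(1-\beta-\varepsilon)/\varepsilon}\cdot q
= q^{(\beta + (1-\beta-\varepsilon) + \varepsilon)/\varepsilon} = q^{1/\varepsilon},
\qquad
k = rq \asymp q^{(1-\beta)/\varepsilon} \asymp n^{1-\beta}.
\]
```

In the same way, $b = n^{\beta'}$ is of strictly smaller order than $a \asymp n^{\beta}$ precisely because $\beta' < \beta$, which is \eqref{bb}.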
For $0\le i\le k$ let $y_i$ be the closest point in $\mathbb{Z}^d$ to $iax_0$, and let $z_i$ be the closest point in $\mathbb{Z}^d$ to $(ia + a-b) x_0$, applying some arbitrary rule to break ties. Note that if $x\in \mathbb{R}^d$, and $y\in\mathbb{Z}^d$ is closest to $x$, then $|x-y|\le \sqrt{d}$. Therefore $y_i\in U_i$ and $z_i\in V_i$. Figure \ref{fignew} gives a pictorial representation of the above definitions, assuming for simplicity that $U_i=U_i'$ and $V_i=V_i'$. \begin{figure}[h] \begin{pdfpic} \begin{pspicture}(-1,-1)(13,3) \psset{xunit=.8cm,yunit=.8cm} \psline{-}(4,-1)(4,3) \psline{-}(8,-1)(8,3) \psline{-}(10, -1)(10,3) \psline[linestyle=dashed]{-}(1.5,1)(12.5,1) \psline{<-}(4,.5)(6.95, .5) \psline{->}(7.25, .5)(10,.5) \rput(7.1, .5){\small $a$} \psline{<-}(8, 0)(8.85,0) \rput(9,0){\small $b$} \psline{->}(9.15, 0)(10,0) \psline{*-*}(4,1)(4,1) \psline{*-*}(8,1)(8,1) \psline{*-*}(10,1)(10,1) \rput(4.25, 1.2){\small $y_i$} \rput(8.25, 1.2){\small $z_i$} \rput(10.45, 1.2){\small $y_{i+1}$} \rput(4.25, -.5){\small $U_i$} \rput(8.25, -.5){\small $V_i$} \rput(10.45, -.5){\small $U_{i+1}$} \end{pspicture} \end{pdfpic} \caption{Diagrammatic representation of $y_i$, $z_i$, $U_i$ and $V_i$.} \label{fignew} \end{figure} Let $U_i^o$ be the subset of $U_i$ that is within distance $n^{\xi'}$ from $y_i$. Similarly let $V_i^o$ be the subset of $V_i$ that is within distance $n^{\xi'}$ from $z_i$. For any $A, B\subseteq \mathbb{Z}^d$, let $T(A,B)$ denote the minimum passage time from $A$ to $B$. Let $G(A, B)$ denote the (unique) geodesic from $A$ to $B$, so that $T(A, B)$ is the sum of edge-weights of $G(A,B)$. Fix any two integers $0\le l<m\le k$ such that $m-l > 3$. Consider the geodesic $G := G(y_l, y_m)$. Since $x_0\not \in H_0$, it is easy to see that $G$ must `hit' each $U_i$ and $V_i$, $l\le i\le m-1$. 
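A minimal justification of the `hitting' claim: since $x_0$ is a unit vector perpendicular to $H_0$, the distance from any point $p$ to the hyperplane $U_i' = H_0 + iax_0$ equals $|\langle p, x_0\rangle - ia|$, while along $G$ the coordinate $\langle \cdot, x_0\rangle$ moves by at most $1$ per step:

```latex
\[
|\langle u', x_0\rangle - \langle u, x_0\rangle| \le |u'-u| = 1
\quad \text{for consecutive vertices } u, u' \text{ of } G.
\]
```

Since this coordinate travels from within $\sqrt{d}$ of $la$ to within $\sqrt{d}$ of $ma$, it cannot jump over a window $[ia-\sqrt{d},\, ia+\sqrt{d}]$ of width $2\sqrt{d}\ge 2$, so some vertex of $G$ lies in $U_i$; the same argument applies to each $V_i$.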
Arranging the vertices of $G$ in a sequence starting at $y_l$ and ending at $y_m$, for each $l\le i< m$ let $u_i'$ be the first vertex in $U_i$ visited by $G$ and let $v_i'$ be the first vertex in $V_i$ visited by $G$. Let $u_m' := y_m$. Note that $G$ visits these vertices in the order $u_l', v_l', u_{l+1}', v_{l+1}', \ldots, v_{m-1}', u_m'$. Figure \ref{figg} gives a pictorial representation of the points $u_i'$ and $v_i'$ on the geodesic~$G$. \begin{figure}[h] \begin{pdfpic} \begin{pspicture}(-1,-1)(10,3) \psset{xunit=1cm,yunit=1cm} \psline{-}(0, 0)(0, 3) \psline{-}(0,0)(6.5,0) \psline{-}(0,3)(6.5,3) \psline[linestyle=dashed]{-}(6.5,0)(8,0) \psline[linestyle=dashed]{-}(6.5,3)(8,3) \pspolygon[fillstyle=solid, fillcolor=lightgray](4,0)(4,3)(6,3)(6,0) \pscurve{-}(0,1.5)(.2,1.7)(-.2, 1.9)(1,2.6)(3.5,1.6)(4,1.9)(4.8,1.5)(3.7,.8)(3.65,.7)(5, .6)(5.8, 1.1)(6,1.2)(6.1,1.4)(5.7, 1.6)(6, 2)(6.5, 2.2) \pscurve[linestyle=dashed]{-}(6.5,2.2)(7, 2.3)(7.5,1.8)(8,1.9) \pscurve{*}(0,1.5)(0,1.5)(0,1.5) \pscurve{*}(4,1.9)(4,1.9)(4,1.9) \pscurve{*}(6,1.2)(6,1.2)(6,1.2) \rput(1.9, 2.6){\small $G$} \rput(0.3,1.3){\small $u_0'$} \rput(4.25,1.65){\small $v_0'$} \rput(6.25, 1.1){\small $u_1'$} \rput(2,-.19){\small $n^\beta$} \psline{<-}(0.05,-.3)(1.75, -.3) \psline{->}(2.15,-.3)(3.95,-.3) \rput(5,-.17){\small $n^{\beta'}$} \psline{<-}(4.05,-.3)(4.75, -.3) \psline{->}(5.15,-.3)(5.95,-.3) \end{pspicture} \end{pdfpic} \caption{Location of $u_0', v_0', u_1', v_1', \ldots$ on the geodesic $G$.} \label{figg} \end{figure} Let $T_i'$ be the sum of edge-weights of the portion of $G$ from $u_i'$ to $v_i'$. Let $E$ be the event that $u_i'\in U_i^o$ and $v_i'\in V_i^o$ for each $i$. If $E$ happens, then clearly \[ T_i' \ge T(U_i^o, V_i^o). \] Similarly, note that the weight of the part of $G$ from $v_i'$ to $u_{i+1}'$ must exceed or equal $T(v_i', u_{i+1}')$.
Therefore, if $E$ happens, then \begin{equation}\label{lower} \begin{split} T(y_{l}, y_{m}) &\ge \sum_{i=l}^{m-1} T_i' + \sum_{i=l}^{m-1} T(v_i', u_{i+1}')\\ &\ge \sum_{i=l}^{m-1} T(U_i^o, V_i^o) + \sum_{i=l}^{m-1} T(v_i', u_{i+1}'). \end{split} \end{equation} Next, for each $0\le i< k$ let $G_i := G(U_i^o, V_i^o)$. Let $u_i$ and $v_i$ be the endpoints of $G_i$. Let $G_i' := G(v_i, u_{i+1})$. Figure~\ref{figgg} gives a picture illustrating the paths $G_i$ and $G_i'$. \begin{figure}[h] \begin{pdfpic} \begin{pspicture}(-1,-1)(10,3) \psset{xunit=1cm,yunit=1cm} \pspolygon[fillstyle=solid](0,0)(0,2)(9,2)(9,0) \pspolygon[fillstyle=solid, fillcolor=lightgray](3,0)(3,2)(2,2)(2,0) \pspolygon[fillstyle=solid, fillcolor=lightgray](6,0)(6,2)(5,2)(5,0) \pspolygon[fillstyle=solid, fillcolor=lightgray](9,0)(9,2)(8,2)(8,0) \psline{-}(9,2)(9.5,2) \psline[linestyle=dashed]{-}(9.5,2)(10.5,2) \psline{-}(9,0)(9.5,0) \psline[linestyle=dashed]{-}(9.5,0)(10.5,0) \pscurve{-}(9,1)(9.3,1.3)(9.5, 1.2) \pscurve[linestyle=dashed](9.5,1.2)(9.7,1.1)(10.1,1.3)(10.5,1.3) \pscurve{*-*}(0,1.5)(1, 1.7)(1.5,.7)(2,1) \pscurve[linestyle=dashed]{*-*}(2,1)(2.2, 1.1)(2.7,1.3)(3,1.5) \pscurve{*-*}(3,1.5)(4, 1.3)(4.5,1.6)(5,1.3) \pscurve[linestyle=dashed]{*-*}(5, 1.3)(5.5,1.5)(6,1.4) \pscurve{*-*}(6,1.4)(6.5,1.3)(7, 1.7)(7.5,.7)(8,.8) \pscurve[linestyle=dashed]{*-*}(8, .8)(8.3, 1.3)(8.6,.9)(9,1) \rput(.23,1.33){\small $u_0$} \rput(2.23,.83){\small $v_0$} \rput(3.22, 1.24){\small $u_1$} \rput(5.23, 1.13){\small $v_1$} \rput(6.23, 1.23){\small $u_2$} \rput(8.22, .63){\small $v_2$} \rput(9.22, .83){\small $u_3$} \rput(1.1, .9){\small $G_0$} \rput(2.4, 1.45){\small $G_0'$} \rput(4, 1.12){\small $G_1$} \rput(5.4, 1.67){\small $G_1'$} \rput(7.1, .9){\small $G_2$} \rput(8.3, 1.47){\small $G_2'$} \rput(1,-.2){\small $n^\beta$} \psline{<-}(0.05,-.3)(.7, -.3) \psline{->}(1.1,-.3)(1.95,-.3) \rput(2.6,-.2){\small $n^{\beta'}$} \psline{<-}(2.05,-.3)(2.35, -.3) \psline{->}(2.7,-.3)(2.95,-.3) \end{pspicture} \end{pdfpic}
\caption{The paths $G_0, G_0', G_1, G_1',\ldots$.} \label{figgg} \end{figure} The concatenation of the paths $G(y_{l}, v_{l})$, $G_l'$, $G_{l+1}$, $G_{l+1}'$, $G_{l+2}$, $\ldots$, $G_{m-2}'$, $G_{m-1}$, $G(v_{m-1}, y_{m})$ is a path from $y_{l}$ to $y_{m}$ (which need not be self-avoiding). Therefore, \begin{equation}\label{upper} \begin{split} T(y_{l}, y_{m}) &\le T(y_{l}, v_{l}) + \sum_{i=l+1}^{m-1} T(U_i^o, V_i^o) + \sum_{i=l}^{m-2} T(v_i, u_{i+1})\\ &\qquad + T(v_{m-1}, y_{m}). \end{split} \end{equation} Define \[ \Delta_{l,m} := T(y_l, y_m) -\sum_{i=l}^{m-1} (T(U_i^o, V_i^o) + T(V_i^o, U_{i+1}^o)). \] Combining \eqref{lower} and \eqref{upper} implies that if $E$ happens, then \begin{equation*}\label{big} \begin{split} |\Delta_{l,m}| &\le \sum_{i=l}^{m-1} |T(V_i^o, U_{i+1}^o) - T(v_i', u_{i+1}')| + \sum_{i=l}^{m-2} |T(V_i^o, U_{i+1}^o) - T(v_i, u_{i+1})| \\ &\qquad + |T(U_l^o, V_l^o) - T(y_{l}, v_{l})| + |T(V_{m-1}^o, U_{m}^o)- T(v_{m-1}, y_{m})|. \end{split} \end{equation*} Thus, if \begin{align*} M_i := \max_{v,v'\in V_i^o, \ u,u'\in U_{i+1}^o} |T(v,u) - T(v', u')|,\\ N_i := \max_{u,u'\in U_i^o, \ v,v'\in V_{i}^o} |T(u,v) - T(u', v')|, \end{align*} and the event $E$ happens, then \begin{equation}\label{big2} \begin{split} |\Delta_{l,m}| &\le 2\sum_{i=l}^{m-1} M_i + N_l. \end{split} \end{equation} For a random variable $X$, let $\|X\|_p := (\mathbb{E}|X|^p)^{1/p}$ denote its $L^p$ norm. It is easy to see that $\|\Delta_{l,m}\|_4\le n^C$, where recall that $C$ stands for any constant that does not depend on our choice of the integer $q$, but may depend on $\chi$, $\xi$, $\xi'$ and the distribution of edge weights. Take any $\xi_1 \in (\xi, \xi')$. By \eqref{up2} of Theorem~\ref{kpzthm}, $\mathbb{P}(E^c) \le e^{-Cn^{\xi'-\xi_1}}$.
Together with \eqref{big2}, this shows that for some constants $C_3$ and $C_4$, \begin{equation}\label{big3} \begin{split} \|\Delta_{l,m}\|_2 &\le \|\Delta_{l,m} 1_{E^c}\|_2 + \|\Delta_{l,m} 1_E\|_2\\ &\le \|\Delta_{l,m}\|_4 (\mathbb{P}(E^c))^{1/4} + \|\Delta_{l,m} 1_E\|_2\\ &\le n^{C_3} e^{-C_4n^{\xi'-\xi_1}} + 2\sum_{i=l}^{m-1} \|M_i\|_2 + \|N_l\|_2. \end{split} \end{equation} Fix $0\le i\le k-1$ and $v\in V_i^o$, $u\in U_{i+1}^o$. Let $x$ be the nearest point to $v$ in $V_i'$ and $y$ be the nearest point to $u$ in $U_{i+1}'$. Then by definition of $V_i'$ and $U_{i+1}'$, there are vectors $z, z'\in H_0$ such that $|z|$ and $|z'|$ are bounded by $Cn^{\xi'}$, and $x = (ia+a-b) x_0+ z$ and $y = (ia+a)x_0 + z'$. Thus by Proposition \ref{curve}, \begin{align*} |g(y-x) - g(bx_0)| &= |g(b x_0 + z'-z) - g(bx_0)| \\ &= b |g(x_0 + (z'-z)/b) - g(x_0)|\\ &\le \frac{C|z'-z|^2}{b} \le Cn^{2\xi' - \beta'}. \end{align*} Thus, for any $v,v'\in V_i^o$ and $u,u'\in U_{i+1}^o$, \[ |g(u-v)-g(u'-v')| \le Cn^{2\xi'-\beta'}. \] Note also that $|y-x|\le C(n^{\beta'} + n^{\xi'}) \le Cn^{\beta'}$ by \eqref{bpxi}. This, together with Theorem \ref{gapthm}, shows that for any $v,v'\in V_i^o$ and $u,u'\in U_{i+1}^o$, \begin{equation*} |\mathbb{E} T(v,u) - \mathbb{E} T(v',u')| \le Cn^{2\xi' - \beta'} + Cn^{\beta' \chi_2}\log n. \end{equation*} By \eqref{xibp}, this implies \begin{equation}\label{tvu} |\mathbb{E} T(v,u) - \mathbb{E} T(v',u')| \le Cn^{\beta' \chi_2}\log n. \end{equation} Let \[ M := \max_{v\in V_i^o, \ u\in U_{i+1}^o} \frac{|T(v,u)- \mathbb{E} T(v,u)|}{|u-v|^{\chi_2}}. \] By \eqref{up1} of Theorem \ref{kpzthm}, \begin{align*} \mathbb{E} (e^{\alpha M}) &\le \sum_{v\in V_i^o, \ u\in U_{i+1}^o} \mathbb{E}\exp\biggl(\alpha \frac{|T(v,u)- \mathbb{E} T(v,u)|}{|u-v|^{\chi_2}}\biggr)\\ &\le C |V_i^o| |U_{i+1}^o|\le Cn^C. \end{align*} This implies that $\mathbb{P}(M > t) \le Cn^C e^{-\alpha t}$, which in turn gives $\|M\|_2\le C \log n$. 
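The last implication is the standard tail-integration step; spelled out, with $t_0 := (C+2)\alpha^{-1}\log n$, so that $n^C e^{-\alpha t_0} = n^{-2}$,

```latex
\begin{align*}
\mathbb{E}(M^2) &= \int_0^\infty 2t\,\mathbb{P}(M>t)\, dt
\le t_0^2 + \int_{t_0}^\infty 2t\, Cn^C e^{-\alpha t}\, dt\\
&= t_0^2 + Cn^C e^{-\alpha t_0}\biggl(\frac{2t_0}{\alpha} + \frac{2}{\alpha^2}\biggr)
\le C(\log n)^2,
\end{align*}
```

which gives $\|M\|_2\le C\log n$ as claimed.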
Let \[ M' := \max_{v\in V_i^o, \ u\in U_{i+1}^o} |T(v,u)- \mathbb{E} T(v,u)|. \] Since by \eqref{bpxi}, $|u-v| \le C(n^{\beta'}+n^{\xi'})\le Cn^{\beta'}$ for all $v\in V_i^o$, $u\in U_{i+1}^o$, therefore $M' \le Cn^{\beta'\chi_2} M$. Thus, \[ \|M'\|_2\le Cn^{\beta' \chi_2}\log n. \] From this and \eqref{tvu} it follows that \[ \|M_i\|_2 \le Cn^{\beta' \chi_2} \log n. \] By an exactly similar sequence of steps, replacing $\beta'$ by $\beta$ everywhere and using \eqref{xib} instead of \eqref{xibp}, one can deduce that \[ \|N_i\|_2\le Cn^{\beta \chi_2}\log n. \] Combining with \eqref{big3} this gives \begin{equation}\label{delta} \|\Delta_{l,m}\|_2 \le Cn^{\beta\chi_2} \log n + C (m-l) n^{\beta'\chi_2} \log n, \end{equation} since the exponential term in \eqref{big3} is negligible compared to the rest. Now, from the definition of $\Delta_{l,m}$, the fact that $k=rq$, and the triangle inequality, it is easy to see that \begin{align*} \biggl| T(y_0, y_k) - \sum_{j=0}^{r-1} T(y_{jq}, y_{(j+1)q})\biggr| &\le |\Delta_{0,k}| + \sum_{j=0}^{r-1} |\Delta_{jq, (j+1)q}|. \end{align*} Thus by \eqref{delta}, \eqref{rbd} and \eqref{kbd}, \begin{equation}\label{tt} \begin{split} \biggl\| T(y_0, y_k) &- \sum_{j=0}^{r-1} T(y_{jq}, y_{(j+1)q})\biggr\|_2 \le \|\Delta_{0,k}\|_2 + \sum_{j=0}^{r-1} \|\Delta_{jq, (j+1)q}\|_2\\ &\le C (r+1) n^{\beta\chi_2} \log n + Ck n^{\beta' \chi_2} \log n\\ &\le Cn^{1-\beta-\varepsilon + \beta\chi_2}\log n + C n^{1-\beta + \beta' \chi_2}\log n. \end{split} \end{equation} For any two random variables $X$ and $Y$, \begin{align} \bigl|\sqrt{\mathrm{Var}(X)} - \sqrt{\mathrm{Var}(Y)}\bigr| &= |\|X- \mathbb{E} X\|_2 - \|Y-\mathbb{E} Y\|_2| \nonumber \\ &\le \|(X- \mathbb{E} X) - (Y-\mathbb{E} Y)\|_2 \nonumber \\ &\le \|X-Y\|_2 + |\mathbb{E} X-\mathbb{E} Y| \le 2\|X-Y\|_2. 
\label{referee1} \end{align} Therefore it follows from \eqref{tt} that \begin{equation}\label{imp1} \begin{split} \biggl|(\mathrm{Var} T(y_0, y_k))^{1/2} &- \biggl( \mathrm{Var} \sum_{j=0}^{r-1} T(y_{jq}, y_{(j+1)q})\biggr)^{1/2}\biggr| \\ &\le Cn^{1-\beta-\varepsilon + \beta\chi_2}\log n + C n^{1-\beta + \beta' \chi_2}\log n. \end{split} \end{equation} For any $x,y\in \mathbb{Z}^d$, $T(x,y)$ is an increasing function of the edge weights. So by the Harris-FKG inequality \cite{harris60}, $\mathrm{Cov}(T(x,y), T(x', y')) \ge 0$ for any $x,y,x',y'\in \mathbb{Z}^d$. Therefore by \eqref{down1} of Theorem \ref{kpzthm} and \eqref{abd}, \eqref{rbd} and \eqref{qbd}, \begin{equation}\label{imp2} \begin{split} \mathrm{Var} \sum_{j=0}^{r-1} T(y_{jq}, y_{(j+1)q}) &\ge \sum_{j=0}^{r-1}\mathrm{Var} T(y_{jq}, y_{(j+1)q})\\ &\ge C \sum_{j=0}^{r-1} |y_{jq}- y_{(j+1)q}|^{2\chi_1}\\ &\ge C r (aq)^{2\chi_1} \ge C n^{(1-\beta-\varepsilon) + (\beta+\varepsilon) 2\chi_1}. \end{split} \end{equation} By the inequalities \eqref{chib} and \eqref{chibep}, we see that if $\chi_1$ and $\chi_2$ are chosen sufficiently close to $\chi$, then $\chi_1$ is strictly bigger than both $1-\beta-\varepsilon +\beta \chi_2$ and $1-\beta+\beta'\chi_2$. Therefore by \eqref{imp1} and \eqref{imp2}, and since $1-\beta - \varepsilon + (\beta+\varepsilon)2\chi_1 > 2\chi_1$, \[ \mathrm{Var} T(y_0, y_k) \ge C n^{(1-\beta-\varepsilon) + (\beta+\varepsilon) 2\chi_1}. \] By \eqref{veb} and the assumption that $\chi < 1/2$, we again have that if $\chi_1$ is chosen sufficiently close to $\chi$, \[ (1-\beta-\varepsilon) + (\beta+\varepsilon) 2\chi_1 > 2\chi. \] Since $|y_0-y_k|\le Cak \le Cn$ by \eqref{abd} and \eqref{kbd}, therefore taking $q\rightarrow\infty$ (and hence $n\rightarrow\infty$) gives a contradiction to \eqref{up1} of Theorem \ref{kpzthm}, thereby proving that $\chi \le 2\xi -1 $ when $0 < \chi < 1/2$. 
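For clarity, we record the elementary identity behind the exponent comparison used in the last step:

```latex
\[
(1-\beta-\varepsilon) + (\beta+\varepsilon)2\chi_1 - 2\chi_1
= (1-\beta-\varepsilon)(1-2\chi_1) > 0
\quad \text{when } \chi_1 < \tfrac{1}{2},
\]
```

so the exponent in the variance lower bound exceeds $2\chi_1$, and taking $\chi_1$ sufficiently close to $\chi < 1/2$ it exceeds $2\chi$ as well.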
\section{Proof of $\chi \le 2\xi -1$ when $\chi = 1/2$}\label{hard2} Suppose that $\chi=1/2$ and $\chi > 2\xi -1$. Define $\chi_1$, $\chi_2$, $x_0$, $H_0$, $\xi'$, $\beta$, $\beta'$, $\varepsilon$, $q$, $a$, $r$, $k$, $n$, $y_i$ and $z_i$ exactly as in Section \ref{012}, considering $a$, $r$, $k$ and $n$ as functions of $q$. Then all steps go through, except the very last, where we used $\chi < 1/2$ to get a contradiction. Therefore all we need to do is to modify this last step to get a contradiction in a different way. This is where we need the sublinear variance inequality~\eqref{bksineq}. As before, throughout the proof $C$ denotes any constant that does not depend on $q$. For each real number $m\ge 1$, let $w_m$ be the nearest lattice point to $mx_0$. Note that $y_i = w_{ia}$. Let \[ f(m) := \mathrm{Var} T(0, w_m). \] Note that there is a constant $C_0$ such that $f(m) \le C_0m$ for all $m$. Again by \eqref{down1}, there is a $C_1>0$ such that for all $m$, \begin{equation}\label{fm} f(m)\ge C_1m^{2\chi_1}. \end{equation} Now, $|(w_{(j+1)aq} - w_{jaq}) - w_{aq}| \le C$. Again, as a consequence of \eqref{referee1} we have that for any two random variables $X$ and $Y$, \begin{align} \bigl|\mathrm{Var}(X)-\mathrm{Var}(Y)\bigr| &= \bigl|\sqrt{\mathrm{Var}(X)} - \sqrt{\mathrm{Var}(Y)}\bigr|\bigl(\sqrt{\mathrm{Var}(X)} + \sqrt{\mathrm{Var}(Y)}\bigr)\nonumber \\ &\le 2\|X-Y\|_2 \bigl(2\sqrt{\mathrm{Var}(X)} + 2\|X-Y\|_2\bigr).\label{referee2} \end{align} By \eqref{referee2} and the subadditivity of first-passage times, \begin{align*} \mathrm{Var}(T(w_{jaq}, w_{(j+1)aq})) &\ge f(aq) - C\sqrt{f(aq)} - C \\ &\ge f(n/r) - C\sqrt{n/r}. \end{align*} Therefore by the Harris-FKG inequality, \begin{equation}\label{lowtt} \mathrm{Var}\biggl(\sum_{j=0}^{r-1} T(w_{jaq}, w_{(j+1)aq})\biggr) \ge r f(n/r) - C\sqrt{nr}.
\end{equation} Now, by \eqref{chib} and \eqref{chibep}, if $\chi_2$ is sufficiently close to $\chi$, then both $1-\beta-\varepsilon + \beta\chi_2$ and $1-\beta+\beta'\chi_2$ are strictly smaller than $1/2$. Therefore by~\eqref{tt}, \eqref{referee2} and the fact that $f(n)\le Cn$, \begin{align*} &\biggl|f(n) - \mathrm{Var}\biggl(\sum_{j=0}^{r-1} T(w_{jaq}, w_{(j+1)aq})\biggr)\biggr|\\ &\le C\sqrt{n}(n^{1-\beta-\varepsilon + \beta\chi_2}\log n + n^{1-\beta + \beta' \chi_2}\log n). \end{align*} Combining this with \eqref{lowtt} gives \begin{align*} f(n) \ge r f(n/r) - C\sqrt{nr} - C\sqrt{n}(n^{1-\beta-\varepsilon + \beta\chi_2}\log n + n^{1-\beta + \beta' \chi_2}\log n). \end{align*} Again by \eqref{rbd} and \eqref{fm}, \begin{align*} r f(n/r) \ge Cn^{(1-\beta-\varepsilon) + (\beta+\varepsilon)2\chi_1}. \end{align*} Combining \eqref{rbd} with the last two displays, it follows that we can choose $\chi_1$ and $\chi_2$ so close to $1/2$ that as $q\rightarrow\infty$, \[ \liminf\frac{f(n)}{rf(n/r)} \ge 1. \] In particular, for any $\delta > 0$, there exists an integer $q(\delta)$ such that if $q\ge q(\delta)$, then \begin{equation}\label{fineq} f(n) \ge (1-\delta) r f(n/r). \end{equation} Fix $\delta = (1-\beta-\varepsilon)/2$ and choose $q(\delta)$ satisfying the above criterion. Note that $q(\delta)$ can be chosen as large as we like. Let $m_0:= aq = n/r$ and $m_1 = n$. The above inequality implies that \[ \frac{f(m_1)}{m_1} \ge (1-\delta) \frac{f(m_0)}{m_0}. \] Note that by \eqref{qbd}, if $q(\delta)$ is chosen sufficiently large to begin with, then \[ m_1^{\varepsilon/(\beta+\varepsilon)} > Cq^{1/(\beta+\varepsilon)}> q(\delta). \] We now inductively define an increasing sequence $m_2, m_3,\ldots$ as follows. Suppose that $m_{i-1}$ has been defined such that \begin{equation}\label{qq} m_{i-1}^{\varepsilon/(\beta+\varepsilon)} > q(\delta).
\end{equation} Let \[ q_i := [m_{i-1}^{\varepsilon/(\beta+\varepsilon)}] + 1, \] where $[x]$ denotes the integer part of a real number $x$. By \eqref{qq}, $q_i \ge q(\delta)$. Let $a_i := m_{i-1}/q_i$. Then if $q(\delta)$ is chosen large enough, \[ a_i \ge \frac{2}{3}m_{i-1}^{\beta/(\beta+\varepsilon)}\ge \frac{1}{2}q_i^{\beta/\varepsilon}, \] and \[ a_i \le m_{i-1}^{\beta/(\beta+\varepsilon)}\le q_i^{\beta/\varepsilon}. \] Let $r_i$ be an integer between $q_i^{(1-\beta-\varepsilon)/\varepsilon}$ and $2q_i^{(1-\beta-\varepsilon)/\varepsilon}$. Let $k_i = r_i q_i$ and $n_i = a_i k_i=a_i r_i q_i = r_i m_{i-1}$. If we carry out the argument of Section~\ref{012} with $q_i, r_i, k_i, a_i, n_i$ in place of $q,r,k,a,n$, then, since $q_i \ge q(\delta)$, as before we arrive at the inequality \[ f(n_i) \ge (1-\delta) r_i f(n_i/r_i) = (1-\delta) r_i f(m_{i-1}). \] Define $m_i := n_i$. Then the above inequality shows that \begin{equation}\label{mi} \frac{f(m_i)}{m_i} \ge (1-\delta) \frac{f(m_{i-1})}{m_{i-1}}. \end{equation} Note that since $r_i$ is a positive integer and $m_i = r_i m_{i-1}$, therefore $m_i \ge m_{i-1}$. In particular, \eqref{qq} is satisfied with $m_i$ in place of $m_{i-1}$. This allows us to carry on the inductive construction such that \eqref{mi} is satisfied for each~$i$. Now, the above construction shows that if the initial $q$ was chosen large enough, then for each $i$, \[ m_i=r_i m_{i-1} \ge q_i^{(1-\beta-\varepsilon)/\varepsilon} m_{i-1}\ge m_{i-1}^{1/(\beta+\varepsilon)}. \] Therefore, for all $i\ge 2$, \[ m_i \ge m_1^{(\beta+\varepsilon)^{-(i-1)}}. \] So, by \eqref{bksineq}, there exists a constant $C_3$ such that \[ \frac{f(m_i)}{m_i}\le \frac{C}{\log m_i} \le C_3(\beta+\varepsilon)^{i-1}. \] However, \eqref{mi} shows that there is $C_4 >0$ such that \[ \frac{f(m_i)}{m_i}\ge C_4(1-\delta)^{i-1}. \] Since $1-\delta > \beta+\varepsilon$, we get a contradiction for sufficiently large $i$. 
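For the interested reader, the final step of this argument can be checked numerically. The following sketch is purely illustrative and not part of the proof: the constants $C_3$, $C_4$ and the sample values of $\beta$, $\varepsilon$ are arbitrary placeholders, not quantities derived from the argument. It locates the first index $i$ at which the geometric lower bound $C_4(1-\delta)^{i-1}$ overtakes the upper bound $C_3(\beta+\varepsilon)^{i-1}$, which must happen because $1-\delta > \beta+\varepsilon$.

```python
import math

def crossing_index(beta, eps, C3=100.0, C4=0.01):
    """Smallest index i >= 2 at which the lower bound C4*(1-delta)^(i-1)
    exceeds the upper bound C3*(beta+eps)^(i-1), where
    delta = (1 - beta - eps)/2 as in the proof."""
    delta = (1.0 - beta - eps) / 2.0
    # The argument needs 1 - delta > beta + eps, which holds whenever
    # beta + eps < 1, since 1 - delta = (1 + beta + eps)/2.
    assert beta + eps < 1.0 - delta
    i = 2
    while C4 * (1.0 - delta) ** (i - 1) <= C3 * (beta + eps) ** (i - 1):
        i += 1
    return i

# With beta = 0.6, eps = 0.05: delta = 0.175, and the ratio
# (1 - delta)/(beta + eps) = 0.825/0.65 > 1 guarantees a crossing.
print(crossing_index(0.6, 0.05))
```

However small the ratio of constants $C_4/C_3$, the crossing occurs at some finite $i$, which is exactly the contradiction used above.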
\section{Proof of $\chi \le 2\xi - 1$ when $\chi = 0$}\label{hard3} As usual, we prove by contradiction. Assume that $\chi = 0$ and $2\xi -1 < \chi$. Then $\xi < 1/2$. Choose $\xi_1$, $\xi'$ and $\xi''$ such that $\xi < \xi_1 < \xi'' < \xi' < 1/2$. From this point on, however, the proof is quite different from the case $\chi > 0$. Recall that $t(P)$ is the sum of edge-weights of a path $P$ in the environment $t=(t_e)_{e\in E(\mathbb{Z}^d)}$. This notation is used several times in this section. First, we need a simple lemma about the norm $g$. \begin{lmm}\label{glmm} Assume that the edge-weight distribution is continuous, and let $L$ denote the infimum of its support. Then there exists $M > L$ such that for all $x\in \mathbb{R}^d \backslash\{0\}$, $g(x) \ge M|x|_1$, where $|x|_1$ is the $\ell_1$ norm of $x$. \end{lmm} \begin{proof} Since $g$ is a norm on $\mathbb{R}^d$, \[ M := \inf_{x\ne 0} \frac{g(x)}{|x|_1} > 0, \] and the infimum is attained. Choose $x\ne 0$ such that $g(x)=M|x|_1$. Define a new set of edge-weights $s_e$ as $s_e := t_e - L$. Then $s_e$ are non-negative and i.i.d. Let the function $g^s$ be defined for these new edge-weights the same way $g$ was defined for the old weights. Similarly, define $h^s$ and~$T^s$. Since any path $P$ from a point $y$ to a point $z$ must have at least $|z-y|_1$ edges, we have $s(P) \le t(P)-L|z-y|_1$. Thus, \[ T^s(y,z) \le T(y,z) - L|z-y|_1. \] In particular, $h^s(y) \le h(y) - L|y|_1$ for any $y$. Considering a sequence $y_n$ in $\mathbb{Z}^d$ such that $y_n/n \rightarrow x$, we see that \begin{align*} g^s(x) &= \lim_{n\rightarrow\infty} \frac{h^s(y_n)}{n} \le \lim_{n\rightarrow\infty} \frac{h(y_n)-L|y_n|_1}{n} \\ &= g(x)-L|x|_1 = (M-L)|x|_1. \end{align*} Since $t_e$ has a continuous distribution, $s_e$ has no mass at $0$. Therefore, by a well-known result (see~\cite{kesten86}), $g^s(x) > 0$. This shows that $M > L$.
\end{proof} Choose $\beta$, $\varepsilon'$ and $\varepsilon$ so small that $0< \varepsilon' < \varepsilon < \beta < (\xi''-\xi_1)/d$. Choose $x_0$ and $H_0$ as in Proposition \ref{curve}. Let $n$ be a positive integer, to be chosen arbitrarily large at the end of the proof. Again, as usual, $C$ denotes any positive constant that does not depend on our choice of $n$. Choose a point $z\in H_0$ such that $|z|\in [n^{\xi'}, 2n^{\xi'}]$. Let $v:= nx_0/2 + z$. Then by Proposition \ref{curve} and the fact that $\xi' < 1/2$, \begin{equation}\label{gvv1} \begin{split} |g(v) - g(nx_0/2)| &= (n/2) |g(x_0 + 2z/n)-g(x_0)|\\ &\le C|z|^2/n\le Cn^{2\xi' -1}\le C. \end{split} \end{equation} Similarly, \begin{equation}\label{gvv2} |g(nx_0-v) - g(nx_0/2)|\le Cn^{2\xi'-1}\le C. \end{equation} Let $w$ be the closest lattice point to $v$ and let $y$ be the closest lattice point to~$nx_0$. Then $|w-v|$ and $|y-nx_0|$ are bounded by $\sqrt{d}$. Therefore inequalities~\eqref{gvv1} and~\eqref{gvv2} imply that \begin{equation}\label{gvv3} |g(y)-(g(w)+g(y-w))| \le C. \end{equation} Figure \ref{figvg} has an illustration of the relative locations of $y$ and $w$, together with some other objects that will be defined below. By Theorem \ref{gapthm} and the assumption that $\chi = 0$, $|h(y)-g(y)|$, $|h(w)-g(w)|$ and $|h(y-w)-g(y-w)|$ are all bounded by $Cn^{\varepsilon}$. Again by~\eqref{up1} of Theorem \ref{kpzthm} and the assumption that $\chi=0$, the probabilities $\mathbb{P}(|T(0,w) - h(w)| > n^{\varepsilon})$, $\mathbb{P}(|T(w,y)- h(y-w)|>n^\varepsilon)$ and $\mathbb{P}(|T(0,y) - h(y)| > n^\varepsilon)$ are all bounded by $e^{-Cn^{\varepsilon-\varepsilon'}}$. These observations, together with~\eqref{gvv3}, imply that there are constants $C_1$ and $C_2$, independent of our choice of $n$, such that \begin{equation}\label{ttt} \mathbb{P}(|T(0,y) - (T(0,w) + T(w, y))| > C_1n^\varepsilon) \le e^{-C_2n^{\varepsilon-\varepsilon'}}. 
\end{equation} Let $T_o(0,y)$ be the minimum passage time from $0$ to $y$ among all paths that do not deviate by more than $n^{\xi''}$ from the straight line segment joining $0$ and~$y$. By assumption \eqref{up2} of Theorem~\ref{kpzthm}, \[ \mathbb{P}(T_o(0,y) = T(0,y)) \ge 1- e^{-Cn^{\xi''-\xi_1}}. \] Combining this with \eqref{ttt}, we see that if $E_1$ is the event \begin{equation}\label{e1def} E_1 := \{|T_o(0,y) - (T(0,w) + T(w, y))| \le C_1n^\varepsilon\}, \end{equation} where $C_1$ is the constant from \eqref{ttt}, then there is a constant $C_3$ such that \begin{equation}\label{e1} \mathbb{P}(E_1) \ge 1- e^{-C_3n^{\xi''-\xi_1}} - e^{-C_3n^{\varepsilon-\varepsilon'}}. \end{equation} Let $V$ be the set of all lattice points within $\ell_1$ distance $n^\beta$ from $w$. Let $\partial V$ denote the boundary of $V$ in $\mathbb{Z}^d$, that is, all points in $V$ that have at least one neighbor outside of $V$. Let $w_1$ be the first point in $G(0,w)$ that belongs to $\partial V$, when the points are arranged in a sequence from $0$ to $w$. Let $w_2$ be the last point in $G(w,y)$ that belongs to $\partial V$, when the points are arranged in a sequence from $w$ to $y$. Let $G_1$ denote the portion of $G(0,w)$ connecting $w_1$ and $w$, and let $G_2$ denote the portion of $G(w,y)$ connecting $w$ and $w_2$. Let $G_0$ be the portion of $G(0,w)$ from $0$ to $w_1$ and let $G_3$ be the portion of $G(w, y)$ from $w_2$ to $y$. Note that $G_0$ and $G_3$ lie entirely outside of $V$. Figure~\ref{figvg} provides a schematic diagram to illustrate the above definitions. 
\begin{figure}[h] \begin{pdfpic} \begin{pspicture}(-1,-1)(10,5) \psset{xunit=1cm,yunit=1cm} \rput(0,-.2){\small $0$} \rput(9,-.2){\small $y$} \psline{*-*}(0,0)(9,0) \pspolygon[fillstyle=solid, fillcolor=lightgray](4.5,4)(5.5,3)(4.5,2)(3.5,3) \rput(4.33,2.82){\small $w$} \psline{*-*}(4.5,3)(4.5,3) \pscurve{*-*}(0,0)(1,-1)(2,2)(3,3.5)(4, 3.5)(5.2, 3.5)(4.5,3)(4.52,2.8)(3.8, 2.2)(4.5, 1.8)(4.8,2.6)(5,2.5)(7, 2)(7.3, 1.4)(8, -.5)(9.2,-.5)(9,0) \psline{*-*}(4,3.5)(4,3.5) \psline{*-*}(5,2.5)(5,2.5) \rput(4,3.75){\small $w_1$} \rput(5.05,2.27){\small $w_2$} \rput(2,2.54){\small $G_0$} \rput(5,3.8){\small $G_1$} \rput(4.25, 1.6){\small $G_2$} \rput(7.35, 1.9){\small $G_3$} \pscurve{->}(3.4,2.3)(3.8, 2.5)(4.0,2.9) \rput(3.3,2.3){\small $V$} \pscurve[linestyle=dashed]{-}(0,0)(2,1)(4.5,-1)(7,.5)(9,0) \rput(4.45,-.7){\small $G(0,y)$} \end{pspicture} \end{pdfpic} \caption{Schematic diagram for $V, w, w_1,w_2$ and $G_0, G_1, G_2, G_3$.} \label{figvg} \end{figure} Let $L$ and $M$ be as in Lemma \ref{glmm}. Choose $L', M'$ such that $L < L' < M' <M$. Take any $u\in \partial V$. By Lemma \ref{glmm}, $g(u-w) \ge M|u-w|_1$. Therefore by Theorem \ref{gapthm}, \[ h(u-w) \ge M|u-w|_1 - C|u-w|^\varepsilon\ge M|u-w|_1 - Cn^{\beta \varepsilon}. \] Now, $|u-w|_1 \ge Cn^\beta$. Therefore by assumption \eqref{up1} of Theorem \ref{kpzthm} and the above inequality, \begin{align*} &\mathbb{P}(T(u,w) < M'|u-w|_1) \\ &\le \mathbb{P}(|T(u,w) - h(u-w)| > (M-M')|u-w|_1 - Cn^{\beta \varepsilon})\\ &\le \mathbb{P}(|T(u,w) - h(u-w)| > Cn^{\beta})\le e^{-n^{\beta-\varepsilon'}/C}. \end{align*} Since there are at most $n^C$ points in $\partial V$, the above bound shows that \[ \mathbb{P}(T(u, w) < M'|u-w|_1 \text{ for some } u\in \partial V) \le n^C e^{-n^{\beta-\varepsilon'}/C}. 
\] In particular, if $E_2$ and $E_3$ are the events \begin{align*} E_2 &:= \{t(G_1) \ge M'|w-w_1|_1\},\\ E_3 &:= \{t(G_2) \ge M'|w-w_2|_1\}, \end{align*} then there is a constant $C_4$ such that \begin{align} \mathbb{P}(E_2\cap E_3) &\ge 1- n^{C_4}e^{-n^{\beta-\varepsilon'}/C_4}. \label{te} \end{align} Let $E(V)$ denote the set of edges between members of $V$. Let $(t'_e)_{e\in E(V)}$ be a collection of i.i.d.\ random variables, independent of the original edge-weights, but having the same distribution. For $e\not\in E(V)$, let $t'_e := t_e$. Let $E_4$ be the event \[ E_4 := \{t_e' \le L' \text{ for each } e\in E(V)\}. \] If $E_4$ happens, then there is a path $P_1$ from $w_1$ to $w$ and a path $P_2$ from $w$ to $w_2$ such that $t'(P_1) \le L'|w-w_1|_1$ and $t'(P_2)\le L'|w-w_2|_1$. Let $P$ be the concatenation of the paths $G_0$, $P_1$, $P_2$ and $G_3$. Since $t'(G_0)=t(G_0)$ and $t'(G_3)=t(G_3)$, under $E_4$ we have \[ t'(P) \le t(G_0)+t(G_3) + L'|w-w_1|_1+L'|w-w_2|_1. \] On the other hand, under $E_2\cap E_3$, \begin{align*} T(0,w)+T(w,y) &= t(G_0)+t(G_1)+t(G_2)+t(G_3) \\ &\ge t(G_0)+t(G_3) + M'|w-w_1|_1 + M'|w-w_2|_1. \end{align*} Consequently, if $E_1, E_2, E_3, E_4$ all happen simultaneously, then there is a (deterministic) positive constant $C_5$ such that \[ T_o(0,y) \ge t'(P) + C_5 n^\beta - C_1 n^\varepsilon, \] where $C_1$ is the constant in the definition \eqref{e1def} of $E_1$. Since $\beta < \xi'' < \xi'$ and $x_0\not\in H_0$, the edges within distance $n^{\xi''}$ of the line segment joining $0$ and $y$ have the same weights in the environment $t'$ as in $t$. Since $\beta > \varepsilon$, this observation and the above display prove that $E_1\cap E_2\cap E_3 \cap E_4$ implies $D'(0,y) \ge n^{\xi''}$, where $D'(0,y)$ is the value of $D(0,y)$ in the new environment~$t'$.
(To put it differently, if $E_1\cap E_2 \cap E_3 \cap E_4$ happens then there is a path $P$ that has less $t'$-weight than the least $t'$-weight path within distance $n^{\xi''}$ of the straight line connecting $0$ to $y$, and therefore $D'(0,y)$ must be greater than or equal to $n^{\xi''}$.) Now note that the event $E_4$ is independent of $E_1$, $E_2$ and $E_3$. Moreover, since $L'>L$, there is a constant $C_6$ such that $\mathbb{P}(E_4) \ge e^{-C_6 n^{\beta d}}$. Combining this with \eqref{e1}, \eqref{te} and the last observation from the previous paragraph, we get \begin{align*} \mathbb{P}(D'(0,y)\ge n^{\xi''}) &\ge \mathbb{P}(E_1\cap E_2\cap E_3 \cap E_4)\\ &= \mathbb{P}(E_1\cap E_2\cap E_3)\mathbb{P}(E_4)\\ &\ge (1-e^{-C_3 n^{\xi''-\xi_1}} - e^{-C_3 n^{\varepsilon-\varepsilon'}} - n^{C_4}e^{-n^{\beta-\varepsilon'}/C_4}) e^{-C_6 n^{\beta d}}\\ &\ge e^{-C_7 n^{\beta d}}. \end{align*} Now $D'(0,y)$ has the same distribution as $D(0,y)$. But by \eqref{up2} of Theorem~\ref{kpzthm}, $\mathbb{P}(D(0,y) \ge n^{\xi''}) \le e^{-C_8n^{\xi''-\xi_1}}$, and $\beta d < \xi''-\xi_1$ by our choice of $\beta$. Together with the above display, this gives a contradiction, thereby proving that $\chi \le 2\xi -1$ when $\chi = 0$. \vskip.2in \noindent {\bf Acknowledgments.} I would like to thank Tom LaGatta, Partha Dey, Elena Kosygina, Alain Sznitman, Rapha\"el Rossignol and Rongfeng Sun for useful discussions and references. I would also like to specially thank one of the referees for a number of very useful suggestions and pointing out some errors in the first draft.
\section{Assisted Common Information Region} \subsection{Characterization} We say that a rate pair $(R_1,R_2)$ {\em enables} a common information rate $R_\ensuremath{\text{\sf CI}}$ if for every $\epsilon>0$, there is a large enough integer $n$ and (deterministic) functions $f_k:\ensuremath{\mathcal X}^n\times\ensuremath{\mathcal Y}^n \rightarrow \{1,\ldots, 2^{n(R_k+\epsilon)}\}$, $(k=1,2)$, $g_1:\ensuremath{\mathcal X}^n\times\{1,\ldots, 2^{n(R_1+\epsilon)}\} \rightarrow \ensuremath{\mathbb Z}$, and $g_2:\ensuremath{\mathcal Y}^n\times\{1,\ldots, 2^{n(R_2+\epsilon)}\} \rightarrow \ensuremath{\mathbb Z}$ (where $\ensuremath{\mathbb Z}$ is the set of integers) such that \begin{align} &\ensuremath{\text Pr}\left( g_1(X^n,f_1(X^n,Y^n)) \neq g_2(Y^n,f_2(X^n,Y^n)) \right) \leq \epsilon,\label{eq:prob-of-error}\\ &\frac{1}{n}I(X^n,Y^n;g_1(X^n,f_1(X^n,Y^n))) \geq R_\ensuremath{\text{\sf CI}} - \epsilon. \label{eq:CIrate} \end{align} We denote the closure of the set of all rate pairs which enable a common information rate $R_\ensuremath{\text{\sf CI}}$ by $\ensuremath{{\mathcal R}_\CI}(R_\ensuremath{\text{\sf CI}})$. We call this the {\em rate-region} for enabling a common information rate of $R_\ensuremath{\text{\sf CI}}$. Note that the largest value of $R_\ensuremath{\text{\sf CI}}$ we need consider is $H(X,Y)$. For larger values of $R_\ensuremath{\text{\sf CI}}$, $\ensuremath{{\mathcal R}_\CI}(R_\ensuremath{\text{\sf CI}})$ is clearly empty. 
Similarly, we define the {\em rate-region} $\ensuremath{{\mathcal R}_\RD}(R_\ensuremath{\text{\sf RD}})$ for enabling a residual dependency rate of $R_\ensuremath{\text{\sf RD}}$ as the closure of the set of all rate pairs which enable a residual dependency rate $R_\ensuremath{\text{\sf RD}}$, where the definition of what it means for a rate pair to {\em enable} a residual dependency rate $R_\ensuremath{\text{\sf RD}}$ is exactly as above except that \eqref{eq:CIrate} is replaced by \begin{align*} \frac{1}{n} I(X^n;Y^n|g_1(X^n,f_1(X^n,Y^n))) \leq R_\ensuremath{\text{\sf RD}} + \epsilon. \end{align*} We also define the following ``single-letter'' regions: {\small \begin{align} &\ensuremath{{\mathcal R}_{\star\CI}}(R_\ensuremath{\text{\sf CI}}) = \left\{ (I(Y;Q|X),I(X;Q|Y)) : I(X,Y;Q) \geq R_\ensuremath{\text{\sf CI}}\right\},\label{eq:CIregion}\\ &\ensuremath{{\mathcal R}_{\star\RD}}(R_\ensuremath{\text{\sf RD}}) = \left\{ (I(Y;Q|X),I(X;Q|Y)) : I(X;Y|Q) \leq R_\ensuremath{\text{\sf RD}}\right\}.\label{eq:RDregion} \end{align}} Here $Q$ is any random variable dependent on $(X,Y)$. The main result of this section is a characterization of the rate-regions defined above (the proof is sketched in Section~\ref{subsec:proof}): \begin{thm} \label{thm:main} \begin{align} \ensuremath{{\mathcal R}_\CI}&=\ensuremath{{\mathcal R}_{\star\CI}},\\ \ensuremath{{\mathcal R}_\RD}&=\ensuremath{{\mathcal R}_{\star\RD}}. \end{align} Further, the cardinality of the alphabet $\ensuremath{\mathcal Q}$ of $Q$ in \eqref{eq:CIregion}-\eqref{eq:RDregion} can be restricted to $|\ensuremath{\mathcal X}||\ensuremath{\mathcal Y}|+2$. \end{thm} \subsection{Behavior at $R_1=R_2=0$ and Connection to G\'{a}cs-K\"{o}rner~\cite{GacsKo73}} As discussed in the introduction, G\'{a}cs-K\"{o}rner showed that when there is no genie, the common information rate is zero unless $X=(X',Q)$, $Y=(Y',Q)$, and $H(Q)>0$.
Since the absence of links from the genie is a more restrictive condition than zero-rate links from the genie, we can ask whether introducing an omniscient genie, but with {\em zero-rate} links to the observers, changes the conclusion of G\'{a}cs-K\"{o}rner. The corollary below answers this question in the negative. Also note that the result of G\'{a}cs-K\"{o}rner can be obtained as a simple consequence of this corollary. \begin{align*} \text{Let}\qquad R_\ensuremath{\text{\sf CI-0}}&=\sup\; \{R_\ensuremath{\text{\sf CI}}: (0,0)\in\ensuremath{{\mathcal R}_\CI}(R_\ensuremath{\text{\sf CI}})\}, \text{ and}\\ R_\ensuremath{\text{\sf RD-0}}&=\inf\; \{R_\ensuremath{\text{\sf RD}}: (0,0)\in\ensuremath{{\mathcal R}_\RD}(R_\ensuremath{\text{\sf RD}})\}. \end{align*} \begin{corol} \label{cor:GacsKo} $R_\ensuremath{\text{\sf CI-0}} > 0$ (or, $R_\ensuremath{\text{\sf RD-0}}< I(X;Y)$) only if there are $X',Y',Q'$ such that $X=(X',Q')$, $Y=(Y',Q')$, $R_\ensuremath{\text{\sf CI-0}}=H(Q')$, and $R_\ensuremath{\text{\sf RD-0}}=I(X;Y|Q')$. \end{corol} \begin{IEEEproof}[Proof sketch.] We first observe that the only $Q$'s allowed in \eqref{eq:CIregion} and \eqref{eq:RDregion} if the rate pair $(0,0)$ is a member are such that $I(Q;Y|X)=I(Q;X|Y)=0$. Thus, the joint p.m.f. of $X,Y,Q$ has the form\[ p(x,y,q)=p(x,y)p(q|x)=p(x,y)p(q|y).\] Hence, for all $(x,y)$ such that $p(x,y)>0$, we must have $p(q|x)=p(q|y)$, $\forall q$. This implies that, if we consider the bipartite graph with vertices in $\ensuremath{\mathcal X}\cup\ensuremath{\mathcal Y}$ and an edge between $x\in\ensuremath{\mathcal X}$ and $y\in\ensuremath{\mathcal Y}$ if and only if $p(x,y)>0$, for all vertices in the same connected component, $p(q|\text{vertex})$ is the same. Using this, and defining $Q'$ to be the connected component to which $X$ (or, equivalently $Y$) belongs, we can show that \begin{align*} I(X,Y;Q) &= I(Q';Q) \leq H(Q'),\\ I(X;Y|Q) &= H(Q'|Q) + I(X;Y|Q') \geq I(X;Y|Q'). 
\end{align*} If there is only one connected component, this implies that $R_\ensuremath{\text{\sf CI-0}}=0$ and $R_\ensuremath{\text{\sf RD-0}} = I(X;Y)$. Hence, if $R_\ensuremath{\text{\sf CI-0}}>0$ (or, $R_\ensuremath{\text{\sf RD-0}} < I(X;Y)$), more than one connected component must exist; moreover $R_\ensuremath{\text{\sf CI-0}}=H(Q')$ and $R_\ensuremath{\text{\sf RD-0}}=I(X;Y|Q')$. \end{IEEEproof} Thus, at zero rates, common information exhibits trivial behavior. However, for positive rates, the behavior is, in general, non-trivial. Presently, we will demonstrate this through a few examples. But before that, we will show that Wyner's common information can also be obtained as a special case of our characterization. \subsection{Connection to Wyner's Common Information~\cite{Wyner75}} Wyner offered an alternative definition for common information in~\cite{Wyner75}. Briefly, Wyner's common information is the ``minimum binary rate of the common input to two independent processors that generate an approximation to $X,Y$.'' From~\cite{Wyner75}, Wyner's common information is \begin{align*} \ensuremath{C_\text{\sf Wyner}} =\inf I(X,Y;U), \end{align*} where the infimum is taken over $U$ such that $X - U - Y$ is a Markov chain. It is easy to show that $\ensuremath{C_\text{\sf Wyner}}\geq I(X;Y)$. Wyner's common information can be obtained as a special case of our characterization: (proof omitted due to space constraints) \begin{corol} \begin{align*} \ensuremath{C_\text{\sf Wyner}} - I(X;Y) = \min_{(R_1,R_2)\in\ensuremath{{\mathcal R}_\RD}(0)} R_1 + R_2. \end{align*} \end{corol} \subsection{Non-Trivial Behavior at Non-Zero Rates} \begin{figure}[tb] \centering \scalebox{0.32}{\includegraphics{Gaussian}} \caption{\small An achievable trade-off between $R_1=R_2=R$ and $R_\ensuremath{\text{\sf CI}}$ (also $R_\ensuremath{\text{\sf RD}}$) for jointly Gaussian $X,Y$ of unit variance and correlation $\rho=0.95$. 
The trade-off is obtained by choosing $Q$ in \eqref{eq:CIregion} and \eqref{eq:RDregion} to be the optimal jointly Gaussian choice. The optimal $R_\ensuremath{\text{\sf CI}}$ is at least as much as shown and the optimal $R_\ensuremath{\text{\sf RD}}$ is at most what is shown. Note that $R_\ensuremath{\text{\sf CI}}$ is strictly positive for all $R>0$.} \label{fig:Gaussian} \end{figure} \begin{figure}[tb] \centering \scalebox{0.3}{\includegraphics{Zsource}} \caption{\small $U,V$ are binary random variables with joint p.m.f. $p(0,0)=p(1,1)=p$, $p(1,0)=1-2p$, and $p(0,1)=0$. Boundary of $\ensuremath{{\mathcal R}_\RD}(0)$ for $p=1/3$ is shown. The marked point is the minimum sum-rate point.} \label{fig:zsource} \end{figure} \begin{figure}[tb] \centering \scalebox{0.3}{\includegraphics{Connected}} \caption{\small $X,Y$ are dependent random variables whose joint p.m.f.\ is shown. The solid lines each carry a probability mass of $\frac{1-\delta}{8}$ and the lighter ones $\frac{\delta}{8}$. In the plot, all points on the dotted lines belong to $\ensuremath{{\mathcal R}_\RD}(0)$.} \label{fig:connected} \end{figure} \begin{eg}{\em Jointly Gaussian random variables.} \label{eg:Gaussian} We consider jointly Gaussian\footnote{While the discussion has been for discrete random variables, it extends directly to continuous random variables.} $X,Y$ each of unit variance and with correlation coefficient $\rho$. Let the rates of the links from the genie to the two observers be the same, $R_1=R_2=R$. \figureref{Gaussian} plots an achievable $R_\ensuremath{\text{\sf CI}}$ and $R_\ensuremath{\text{\sf RD}}$ by choosing $Q$ in \eqref{eq:CIregion} and \eqref{eq:RDregion} to be the optimal jointly Gaussian choice (jointly Gaussian with $X,Y$); i.e., the optimal $R_\ensuremath{\text{\sf CI}}$ is at least as much as shown and the optimal $R_\ensuremath{\text{\sf RD}}$ is at most what is shown.
Note that $R_\ensuremath{\text{\sf CI}}=0$ when $R=0$, consistent with Corollary~\ref{cor:GacsKo}, but $R_\ensuremath{\text{\sf CI}}$ is strictly positive for all $R>0$. \end{eg} \begin{eg}{\em A binary example.} \label{eg:zsource} \figureref{zsource} shows the joint p.m.f. of a pair of dependent binary random variables $U,V$. The boundary of the rate region $\ensuremath{{\mathcal R}_\RD}(0)$ is plotted in \figureref{zsource}. This is the optimal trade-off of rates at which the genie can communicate with the observers so that they may produce a common random variable which can render their observations practically conditionally independent. \end{eg} \begin{eg} \label{eg:connected} \figureref{connected} shows the joint p.m.f. of a pair of dependent random variables $X,Y$. When $\delta=0$, they have the simple dependency structure of $X=(X',Q), Y=(Y',Q)$ where $X',Y',Q$ are independent. This is the trivial case in the introduction, and the observers can each produce, without any assistance from the genie, $Q$ which renders their observations conditionally independent. Thus, $\ensuremath{{\mathcal R}_\RD}(0)$ is the entire positive quadrant. For small values of $\delta$ we intuitively expect the random variables to be ``close'' to this case. A measure such as the common information of G\'{a}cs and K\"{o}rner fails to bring this out (common information is discontinuous in $\delta$, jumping from $H(Q)=1$ at $\delta=0$ to 0 for $\delta>0$). However, the intuition is borne out by our trade-off regions. For instance, for $\delta=0.05$, \figureref{connected} shows that $\ensuremath{{\mathcal R}_\RD}(0)$ is nearly all of the positive quadrant. \end{eg} In \sectionref{crypto}, we will use the characterization developed in this section to compare the pairs of random variables in the last two examples in a cryptographic context. See \exampleref{crypto-example}.
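The boundaries in these examples are obtained by optimizing over auxiliary channels $p(q|x,y)$ in \eqref{eq:CIregion} and \eqref{eq:RDregion}. The inner evaluation is elementary; the following sketch (our own illustrative helper, not code from this paper) computes the four single-letter quantities from a joint p.m.f. $p(x,y,q)$ given as a dictionary:

```python
import math
from collections import defaultdict

def entropy(p):
    """Shannon entropy (in bits) of a p.m.f. given as {outcome: prob}."""
    return -sum(v * math.log2(v) for v in p.values() if v > 0)

def marginal(pxyq, idx):
    """Marginalize {(x, y, q): prob} onto the coordinate positions in idx."""
    out = defaultdict(float)
    for key, v in pxyq.items():
        out[tuple(key[i] for i in idx)] += v
    return dict(out)

def single_letter_quantities(pxyq):
    """Return (I(Y;Q|X), I(X;Q|Y), I(X,Y;Q), I(X;Y|Q)) in bits."""
    Hxyq = entropy(pxyq)
    Hxy = entropy(marginal(pxyq, (0, 1)))
    Hxq = entropy(marginal(pxyq, (0, 2)))
    Hyq = entropy(marginal(pxyq, (1, 2)))
    Hx = entropy(marginal(pxyq, (0,)))
    Hy = entropy(marginal(pxyq, (1,)))
    Hq = entropy(marginal(pxyq, (2,)))
    return (Hxy + Hxq - Hx - Hxyq,   # I(Y;Q|X)
            Hxy + Hyq - Hy - Hxyq,   # I(X;Q|Y)
            Hxy + Hq - Hxyq,         # I(X,Y;Q)
            Hxq + Hyq - Hq - Hxyq)   # I(X;Y|Q)
```

For instance, the degenerate choice $Q=X=Y$ (a fair bit) returns $(0, 0, 1, 0)$: the rate pair $(0,0)$ then enables a common information rate of $1$ bit with zero residual dependency, as it should.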
\subsection{Relationship between $\ensuremath{{\mathcal R}_\CI}$ and $\ensuremath{{\mathcal R}_\RD}$} The residual dependency rate-region can be written in terms of the common information rate-region (proof is omitted due to space constraints): \begin{corol} {\small \begin{align*} \ensuremath{{\mathcal R}_\RD}(R_\ensuremath{\text{\sf RD}})=\{ &(R_1,R_2): \exists (r_1,r_2)\in \ensuremath{\delta{\mathcal R}_{\CI}}(r_\ensuremath{\text{\sf CI}}) \text{ s.t. } r_\ensuremath{\text{\sf CI}} \geq\\&I(X;Y)-R_\ensuremath{\text{\sf RD}}+r_1+r_2, R_1\geq r_1, \text{ and } R_2\geq r_2\},\\ \intertext{where} \ensuremath{\delta{\mathcal R}_{\CI}}(R_\ensuremath{\text{\sf CI}})=\{&(R_1,R_2)\in\ensuremath{{\mathcal R}_\CI}(R_\ensuremath{\text{\sf CI}}): \not{\exists} (r_1,r_2)\in\ensuremath{{\mathcal R}_\CI}(R_\ensuremath{\text{\sf CI}})\\ &\text{ s.t. } r_1\leq R_1, r_2\leq R_2, \text{ and } (r_1,r_2)\neq (R_1,R_2)\}. \end{align*}} \end{corol} \subsection{Sketch of Proof of Theorem~\ref{thm:main}} \label{subsec:proof} Proof of achievability ($\ensuremath{\mathcal R}_\star \subseteq \ensuremath{\mathcal R}$), which is based on Wyner-Ziv's source coding with side-information~\cite{WynerZi73}, is omitted in the interest of space. The cardinality bound can be shown using Carath\'{e}odory's theorem. To prove the converse, let $\epsilon>0$ and $n,f_1,f_2,g_1,g_2$ be such that \eqref{eq:prob-of-error} and \eqref{eq:CIrate} hold. Let $C_k=f_k(X^n,Y^n)$, for $k=1,2$, and $W_1=g_1(X^n,C_1)$ and $W_2=g_2(Y^n,C_2)$.
Then, {\small \begin{align*} R_1 + \epsilon &\geq \frac{1}{n} H(C_1) \geq \frac{1}{n} H(C_1|X^n) \geq \frac{1}{n} H(W_1|X^n)\\ &\geq \frac{1}{n} I(Y^n;W_1|X^n)\\ &\stackrel{(a)}{=} \frac{1}{n} \sum_{i=1}^n \bigl(H(Y_i|X_i) - H(Y_i|Y^{i-1},X^n,W_1)\bigr)\\ &\geq \frac{1}{n} \sum_{i=1}^n \bigl(H(Y_i|X_i) - H(Y_i|X_i,W_1,Y^{i-1},X^{i-1})\bigr)\\ &= \sum_{i=1}^n \frac{1}{n} I(Y_i;Q_i|X_i),\;Q_i:=(W_1,Y^{i-1},X^{i-1})\\ &\stackrel{(b)}{=} I(Y_J;Q_J|X_J,J) \stackrel{(c)}{=} I(Y_J;Q|X_J),\;Q:=(Q_J,J), \end{align*}} where (a) follows from the independence of the $(X_i,Y_i)$ pairs across $i$. In (b), we define $J$ to be a random variable uniformly distributed over $\{1,\ldots,n\}$ and independent of $(X^n,Y^n)$. And (c) follows from the independence of $J$ and $(X^n,Y^n)$. Similarly, {\small \begin{align*} R_2 + \epsilon &\geq \frac{1}{n} H(C_2|Y^n) \geq \frac{1}{n} H(W_2|Y^n) \\ &\geq \frac{1}{n} H(W_1|Y^n) - \frac{1}{n} H(W_1|W_2)\\ &\stackrel{(a)}{\geq} \frac{1}{n} H(W_1|Y^n) - \kappa\epsilon\\ &\geq\frac{1}{n} I(X^n;W_1|Y^n) - \kappa\epsilon\\ &\stackrel{(b)}{\geq} I(X_J;Q|Y_J) - \kappa\epsilon, \end{align*}} where (a) (with $\kappa:=1+\log|\ensuremath{\mathcal X}||\ensuremath{\mathcal Y}|$) follows from Fano's inequality and the fact that the range of $g_1$ can be restricted without loss of generality to a set of cardinality $|\ensuremath{\mathcal X}|^n|\ensuremath{\mathcal Y}|^n$. And (b) can be shown along the same lines as the chain of inequalities which gave a lower bound for $R_1$ above. Moreover, {\small \begin{align*} \frac{1}{n} I(X^n,Y^n;W_1) &= \frac{1}{n} \sum_{i=1}^n \bigl(H(X_i,Y_i) - H(X_i,Y_i|W_1,X^{i-1},Y^{i-1})\bigr)\\ &= \frac{1}{n} \sum_{i=1}^n I(X_i,Y_i;Q_i)\\ &= I(X_J,Y_J;Q). \end{align*}} Since $(X_J,Y_J)$ has the same joint distribution as $(X,Y)$, the converse ($\ensuremath{{\mathcal R}_\CI}\subseteq\ensuremath{{\mathcal R}_{\star\CI}}$) for common information follows.
Similarly, the converse ($\ensuremath{{\mathcal R}_\RD}\subseteq\ensuremath{{\mathcal R}_{\star\RD}}$) for residual dependency can be shown using {\small \begin{align*} \frac{1}{n} I(X^n;Y^n|W_1) &= \frac{1}{n} \sum_{i=1}^n I(X_i;Y^n|W_1,X^{i-1})\\ &\geq \frac{1}{n} \sum_{i=1}^n I(X_i;Y_i|W_1,X^{i-1},Y^{i-1})\\ &= I(X_J;Y_J|Q). \end{align*}} \section{Cryptographic Application} \label{sec:crypto} \subsection{Background} \label{sec:cryptobg} Secure multi-party computation is a central problem in modern cryptography. Roughly, the goal of secure multi-party computation is to carry out computations on inputs distributed among two (or more) parties, so as to provide each of them with no more information than what their respective inputs and outputs reveal to them. Our focus in this section is on an important sub-class of such problems --- which we shall call {\em secure 2-party sampling} --- in which the computation has no inputs, but the outputs to the parties are required to be from a given joint distribution (and each party should not learn anything more than its part of the output). Also we shall restrict ourselves to the case of honest-but-curious adversaries. It is well-known (see for instance \cite{Wullschleger08thesis} and references therein) that very few distributions can be sampled from in this way, unless the computation is aided by a {\em set up} --- some correlated random variables that are given to the parties at the beginning of the protocol. The set up itself will be from some distribution $(X,Y)$ (Alice gets $X$ and Bob gets $Y$) which is different from the desired distribution $(U,V)$ (Alice getting $U$ and Bob getting $V$). The fundamental question then is, which set ups $(X,Y)$ can be used to securely sample which distributions $(U,V)$, and {\em how efficiently}. 
While the feasibility question can be answered using combinatorial analysis (as, for instance, was done in \cite{Kilian00}), information theoretic tools have been put to good use to show bounds on efficiency of protocols (e.g. \cite{Beaver96,DodisMi99,WinterNaIm03,ImaiMuNaWi04,WolfWu08,ImaiMoNa06,CsiszarAh07,WinklerWu09}). Our work continues in this vein of using information theory to formulate and answer efficiency questions in cryptography. Specifically, the quantities explored in the previous section lead to effective tools for providing new and improved upper bounds on the rate at which samples from a distribution $(U,V)$ can be securely generated, per sample drawn from a set up distribution $(X,Y)$. Below we sketch the outline of this application, which is further developed in \cite{MajiPrPrRo10}. \paragraph{Secure Protocols} A two-party protocol $\Pi$ is specified by a pair of (possibly randomized) functions \ensuremath{\pi_{\mathrm{Alice}}}\xspace and \ensuremath{\pi_{\mathrm{Bob}}}\xspace that are used by each party to operate on its current state $W$ to produce a message $m$ (that is sent to the other party) and a new state $W'$ for itself. The initial state of the parties may consist of correlated random variables $(X,Y)$, with Alice's state being $X$ and Bob's state being $Y$; such a pair is called a set up for the protocol. The protocol proceeds by the parties taking turns to apply their respective functions to their state, and sending the resulting message to the other party; this message is added to the state of the other party. $\pi_{\mathrm{Alice}}$ and $\pi_{\mathrm{Bob}}$ also specify when the protocol terminates and produces output (instead of producing the next message in the protocol). A protocol is considered valid only if both parties terminate in a finite number of rounds (with probability 1).
The {\em view} of a party in an execution of the protocol is a random variable which is defined as the collection of its states so far in the protocol execution. For a valid protocol $\Pi=(\ensuremath{\pi_{\mathrm{Alice}}}\xspace,\ensuremath{\pi_{\mathrm{Bob}}}\xspace)$, we shall denote the final views of the two parties as $(\ensuremath{\Pi^{\mathrm{view}}_{\mathrm{Alice}}}\xspace(X,Y),\ensuremath{\Pi^{\mathrm{view}}_{\mathrm{Bob}}}\xspace(X,Y))$. Also, we shall denote the outputs as $(\ensuremath{\Pi^{\mathrm{out}}_{\mathrm{Alice}}}\xspace(X,Y),\ensuremath{\Pi^{\mathrm{out}}_{\mathrm{Bob}}}\xspace(X,Y))$. For a protocol $\Pi$ to be a secure realization of $(U,V)$ given a set up $(X,Y)$, firstly, the outputs $(\ensuremath{\Pi^{\mathrm{out}}_{\mathrm{Alice}}}\xspace(X,Y),\ensuremath{\Pi^{\mathrm{out}}_{\mathrm{Bob}}}\xspace(X,Y))$ must be identically distributed as $(U,V)$. Secondly, if either Alice or Bob is ``curious'' (or ``passively corrupt''), the protocol should give that party no more information about the other party's output than what their own output provides. This is formalized using a simulatability requirement. In case of information theoretic security (as opposed to computational security) these can be stated in terms of independence of the view, given one's own output. 
Formally these three requirements can be stated as follows:% \footnote{For simplicity, we state the conditions for ``perfect security.'' Our definitions and results generalize to the setting of ``statistical security,'' where a small statistical error is allowed.} \begin{equation*} (\ensuremath{\Pi^{\mathrm{out}}_{\mathrm{Alice}}}\xspace(X,Y),\ensuremath{\Pi^{\mathrm{out}}_{\mathrm{Bob}}}\xspace(X,Y)) = (U,V) \end{equation*} \begin{equation*} \ensuremath{\Pi^{\mathrm{view}}_{\mathrm{Alice}}}\xspace(X,Y) \leftrightarrow \ensuremath{\Pi^{\mathrm{out}}_{\mathrm{Alice}}}\xspace(X,Y) \leftrightarrow \ensuremath{\Pi^{\mathrm{out}}_{\mathrm{Bob}}}\xspace(X,Y) \end{equation*} \begin{equation*} \ensuremath{\Pi^{\mathrm{out}}_{\mathrm{Alice}}}\xspace(X,Y) \leftrightarrow \ensuremath{\Pi^{\mathrm{out}}_{\mathrm{Bob}}}\xspace(X,Y) \leftrightarrow \ensuremath{\Pi^{\mathrm{view}}_{\mathrm{Bob}}}\xspace(X,Y) \end{equation*} \subsection{Towards Measuring Cryptographic Content} In \cite{WolfWu08} three information theoretic quantities were used to quantify the cryptographic content of a pair of correlated random variables $X$ and $Y$, which we shall rephrase as below: {\begin{align*} H(Y\searrow X|X) &= \min_{Q:H(Q|Y)=I(X;Y|Q)=0} H(Q|X) \\ H(X\searrow Y|Y) &= \min_{Q:H(Q|X)=I(X;Y|Q)=0} H(Q|Y) \\ I(X;Y|X\wedge Y) &= \min_{Q:H(Q|X)=H(Q|Y)=0} I(X;Y|Q) \end{align*}} As shown in \cite{WolfWu08}, these quantities are ``monotones'' that can only decrease in a protocol, and if the protocol securely realizes a pair of correlated random variables $(U,V)$ using a set up $(X,Y)$, then each of these quantities should be at least as large for $(X,Y)$ as for $(U,V)$. While these quantities do capture several interesting cryptographic properties, they paint a partial picture. 
For instance, two pairs of correlated random variables $(X,Y)$ and $(X',Y')$ may have vastly different values for these quantities, even if they are statistically close to each other, and hence have similar ``cryptographic content.''% \vspace{0.04cm} Instead, we shall consider the triplet $\K XYQ$ defined as {\small\[ \K XYQ := (I(Q;Y|X), I(Q;X|Y), I(X;Y|Q)),\]} for an arbitrary random variable $Q$. By considering all random variables $Q$ we define the region% \footnote{Here $\le$ stands for coordinate-wise comparison. Note that \KK XY is equivalent to $\{(\ensuremath{{\mathcal R}_{\star\RD}}(R_\ensuremath{\text{\sf RD}}),R_\ensuremath{\text{\sf RD}}):R_\ensuremath{\text{\sf RD}}\in[0, I(X;Y)]\}$. We use this notation to make the dependence on $X$ and $Y$ explicit.} {\small\begin{align*} \KK XY := \{ (x,y,z) \;:\; \exists Q \text{ s.t. } \K XYQ \le (x,y,z) \}. \end{align*}} This generalizes the three quantities considered in \cite{WolfWu08}, as (using arguments similar to those used for \corollaryref{GacsKo}) it can be shown that the region $\KK XY \subseteq \ensuremath{{{\mathbb{R}}^+}}\xspace^3$ intersects the co-ordinate axes at the points $(H(Y\searrow{X}|X),0,0)$, $(0,H(X\searrow{Y}|Y),0)$, and $(0,0,I(X;Y|X\wedge Y))$. In the following sections we point out that \ensuremath{\mathbb{K}}\xspace also satisfies a monotonicity property: the region can only expand in a protocol, and if the protocol securely realizes a pair of correlated random variables $(U,V)$ using a set up $(X,Y)$, then \KK XY should be smaller than \KK UV. As we shall see, since the region \KK XY has a non-trivial shape (see for instance, \exampleref{zsource}), \ensuremath{\mathbb{K}}\xspace can yield much better bounds on the rate than just considering the axis intercepts; in particular \ensuremath{\mathbb{K}}\xspace can differentiate between pairs of correlated random variables that have the same axis intercepts.
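As a concrete illustration, the triplet $\K XYQ$ can be computed numerically for finite alphabets. The sketch below (function names are our own) evaluates conditional mutual informations from a joint pmf; a coordinate equal to zero certifies the corresponding Markov chain, so the same routine can also check the security conditions stated earlier.

```python
from collections import defaultdict
from math import log2

def cmi(p, I, J, K):
    """I(V_I; V_J | V_K) in bits, for a pmf {outcome_tuple: prob};
    I, J, K are lists of tuple index positions."""
    pk, pik, pjk, pijk = (defaultdict(float) for _ in range(4))
    for o, pr in p.items():
        i = tuple(o[t] for t in I)
        j = tuple(o[t] for t in J)
        k = tuple(o[t] for t in K)
        pk[k] += pr; pik[i, k] += pr; pjk[j, k] += pr; pijk[i, j, k] += pr
    return sum(pr * log2(pr * pk[k] / (pik[i, k] * pjk[j, k]))
               for (i, j, k), pr in pijk.items())

def K_triplet(pxyq):
    """(I(Q;Y|X), I(Q;X|Y), I(X;Y|Q)) for a pmf over (x, y, q) tuples."""
    X, Y, Q = [0], [1], [2]
    return (cmi(pxyq, Q, Y, X), cmi(pxyq, Q, X, Y), cmi(pxyq, X, Y, Q))

# Perfectly correlated bits with Q = X: all three coordinates vanish,
# so the region contains the origin (a "trivial" pair).
perfect = {(0, 0, 0): 0.5, (1, 1, 1): 0.5}

# Doubly symmetric binary pair (crossover 0.25) with constant Q: the first
# two coordinates vanish and the third is I(X;Y) = 1 - h(0.25) bits.
dsbs = {(x, y, 0): (0.375 if x == y else 0.125) for x in (0, 1) for y in (0, 1)}
```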
Further, \KK XY is continuous as a function of $(X,Y)$, and as such one can derive bounds on rate that are applicable to statistical security as well as perfect security. \subsection{Monotone Regions for 2-Party Secure Protocols} \label{sec:monotone} Given a pair of random variables $(X,Y)$ denoting the {\em views} of the two parties in a 2-party protocol we are interested in capturing the ``cryptographic content'' of this pair. We shall do so by defining a region in multi-dimensional real space, that, intuitively, consists of witnesses of ``weakness'' in the cryptographic nature of the random variables $(X,Y)$; thus the smaller this region, the more cryptographically useful the variables are. The region has a monotonicity property: a secure protocol that involves only communication (over noiseless links) and local computations (i.e., without using trusted third parties) can only {\em enlarge} the region. Our definition of a monotone region from \cite{MajiPrPrRo10}, given below, strictly generalizes that suggested by \cite{WolfWu08}. The monotone in \cite{WolfWu08}, which is a single real number $m$, can be interpreted as a one-dimensional region $[m,\infty)$ to fit our definition. (Note that a decrease in the value of $m$ corresponds to the region $[m,\infty)$ enlarging.)
\begin{defn} \label{def:monotone} We will call a function \ensuremath{\mathbb{M}}\xspace that maps a pair of random variables $X$ and $Y$, to an upward closed subset% \footnote{A subset \ensuremath{\mathbb{M}}\xspace of $\ensuremath{\mathbb{R}}\xspace^d$ is called upward closed if $\ensuremath{\mathbf{a}}\xspace \in \ensuremath{\mathbb{M}}\xspace$ and $\ensuremath{\mathbf{a}}\xspace' \ge \ensuremath{\mathbf{a}}\xspace$ (i.e., each co-ordinate of $\ensuremath{\mathbf{a}}\xspace'$ is no less than that of \ensuremath{\mathbf{a}}\xspace) implies that $\ensuremath{\mathbf{a}}\xspace'\in \ensuremath{\mathbb{M}}\xspace$.} of $\ensuremath{{{\mathbb{R}}^+}}\xspace^d$ (points in the $d$-dimensional real space with non-negative co-ordinates) a {\em monotone region} if it satisfies the following properties: \begin{enumerate} \item ({\em Local computation cannot shrink it.}) For all random variables $(X,Y,Z)$ with $X \leftrightarrow Y \leftrightarrow Z$, we have $ \M{XY}{Z} \supseteq \M{Y}{Z}$ and $\M{X}{YZ} \supseteq \M{X}{Y}$. \item ({\em Communication cannot shrink it.}) For all random variables $(X,Y)$ and functions $f$ (over the support of $X$ or $Y$), we have $ \M{X}{Yf(X)} \supseteq \M{X}{Y}$ and $\M{Xf(Y)}{Y} \supseteq \M{X}{Y}$. \item ({\em Securely derived outputs do not have smaller regions.}) For all random variables $(X,U,V,Y)$ with $X \leftrightarrow U \leftrightarrow V$ and $U\leftrightarrow V \leftrightarrow Y$, we have $ \M{U}{V} \supseteq \M{XU}{YV}$. \item ({\em Cryptographic content in independent pairs adds up.}) For independent pairs of random variables $(X_0,Y_0)$ and $(X_1,Y_1)$, we have $ \M{X_0X_1}{Y_0Y_1} = \M{X_0}{Y_0} + \M{X_1}{Y_1}, $ where the $+$ sign denotes {\em Minkowski sum}. That is, $\M{X_0X_1}{Y_0Y_1} = \{ \ensuremath{\mathbf{a}}\xspace_0+\ensuremath{\mathbf{a}}\xspace_1 \;|\; \ensuremath{\mathbf{a}}\xspace_0 \in \M{X_0}{Y_0} \text{ and } \ensuremath{\mathbf{a}}\xspace_1\in\M{X_1}{Y_1} \}$. (Here addition denotes coordinate-wise addition.)
\end{enumerate} \end{defn} Note that since \M{X_0}{Y_0} and \M{X_1}{Y_1} have non-negative co-ordinates and are upward closed, $\M{X_0}{Y_0} + \M{X_1}{Y_1}$ is smaller than both of them. This is consistent with the intuition that more cryptographic content (as would be the case with having more independent copies of the random variables) corresponds to a smaller region. \subsection{\ensuremath{\mathbb{K}}\xspace as a Monotone Region.} \label{sec:KKmonotone} In \cite{MajiPrPrRo10} we prove the theorem below, and obtain the following corollary. \begin{thm} \ensuremath{\mathbb{K}}\xspace is a monotone region as defined in \definitionref{monotone}. \end{thm} \begin{corol} \label{cor:secure-realization-rate} If $n_1$ independent copies of a pair of correlated random variables $(U,V)$ can be securely realized from $n_2$ independent copies of a pair of correlated random variables $(X,Y)$, then $n_1 \KK XY \subseteq n_2 \KK UV$. (Here multiplication by an integer $n$ refers to $n$-times repeated Minkowski sum.) \end{corol} Intuitively, \KK XY captures the cryptographic content of the correlated random variables $(X,Y)$: the farther it is from the origin, the more cryptographic content it has. In particular, if \KK XY contains the origin, then $(X,Y)$ is cryptographically ``trivial,'' in the sense that $(X,Y)$ can be securely realized with no set ups. \iffalse To see this, note that \KK XY contains the origin iff there exists a $Q$ such that $X\leftrightarrow Q \leftrightarrow Y$, $Q \leftrightarrow X \leftrightarrow Y$ and $X \leftrightarrow Y \leftrightarrow Q$. Then, $(X,Y)$ can be securely realized trivially (i.e., starting with no correlated random variables as set up) as follows: a value for $Q$ is publicly sampled (say, Alice samples it and sends it to Bob), and then the two parties locally sample values for $X$ and $Y$ respectively, conditioned on the value for $Q$. 
Since $ X\leftrightarrow Q \leftrightarrow Y$, the resulting output distribution is indeed that of $(X,Y)$. Since either party's view in this protocol is simply $Q$, the security requirement is equivalent to $Q \leftrightarrow X \leftrightarrow Y$ and $X \leftrightarrow Y \leftrightarrow Q$. \fi This triviality property can be inferred from the three quantities considered by \cite{WolfWu08} as well, since those quantities correspond to the axis intercepts of our monotone region. However, what makes the monotone region more interesting is when the pair of correlated random variables is non-trivial, as illustrated in the following example. \begin{eg} \label{eg:crypto-example} Consider the question of securely realizing $n_1$ independent pairs of random variables distributed according to $(U,V)$ in \exampleref{zsource} from $n_2$ independent pairs of $(X,Y)$ in \exampleref{connected}. While the monotones in \cite{WolfWu08} will give a lower bound of 0.5182 on $n_2/n_1$, we show that $n_2/n_1 \ge 1.8161$. (For this we use the intersection of \KK UV with the plane $z=0$ (\figureref{zsource}) and one point in the region \KK XY (marked in \figureref{connected}), and apply \corollaryref{secure-realization-rate}.) \end{eg} Hence, the axis intercepts of this monotone region (one of which is the common information of G\'{a}cs and K\"{o}rner) do not by themselves capture subtle characteristics of correlation that are reflected in {\em the shape of the monotone region}. As discussed in \cite{MajiPrPrRo10}, \KK XY is a convex region, and for a fixed set of axis intercepts, the cryptographic quality of a pair of random variables is reflected in how little it bulges towards the origin. We leave as an open question whether our bound is indeed tight.
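As an aside, the Minkowski-sum property in \definitionref{monotone} is easy to experiment with for finitely generated upward closed regions. The sketch below (our own illustration, in two dimensions rather than three) represents a region by a finite set of generator points and confirms that the sum of two regions is contained in each of them, consistent with the intuition that more independent copies of a set up carry more cryptographic content.

```python
def in_region(gens, p):
    """Membership of point p in the upward closure of the generator set."""
    return any(all(pi >= gi for pi, gi in zip(p, g)) for g in gens)

def minkowski_sum(gens_a, gens_b):
    """Generators of the Minkowski sum of two upward closed regions."""
    return {tuple(ai + bi for ai, bi in zip(a, b))
            for a in gens_a for b in gens_b}

A = {(1, 0), (0, 2)}          # hypothetical region with two generators
B = {(2, 1)}
S = minkowski_sum(A, B)       # generators {(3, 1), (2, 3)}
```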
\section{Introduction} Finding a meaningful definition for the ``common information'' of a pair of dependent random variables $X$ and $Y$ has received much attention starting from the 1970s~\cite{GacsKo73,Witsenhausen75,Wyner75,AhlswedeKo74,Yamamoto94}. We propose a new measure --- a three-dimensional region --- which brings out a detailed picture of the extent of common information of a pair. This gives us an expressive means to compare different pairs with each other, based on the shape and size of their respective regions. We are motivated by potential applications in cryptography, game theory, and distributed control, besides information theory, where the role of dependent random variables and common randomness is well-recognized. Suppose $X=(X',Q)$ and $Y=(Y',Q)$ where $X',Y',Q$ are independent. Then a natural measure of ``common information'' of $X$ and $Y$ is $H(Q)$. Both an observer of $X$ and an observer of $Y$ may independently produce the common part $Q$; and conditioned on $Q$, there is no ``residual dependency,'' i.e., $I(X;Y|Q)=0$. The definition of G\'{a}cs and K\"{o}rner~\cite{GacsKo73} generalizes this to arbitrary $X,Y$ (Fig.~\ref{fig:setup}(a)): the two observers now see $X^n=(X_1,\ldots,X_n)$ and $Y^n=(Y_1,\ldots,Y_n)$, resp., where $(X_i,Y_i)$ pairs are independent drawings of $(X,Y)$. They are required to produce random variables $W_1=f_1(X^n)$ and $W_2=f_2(Y^n)$, resp., which agree (with high probability). The largest entropy rate (i.e., entropy normalized by $n$) of such a ``common'' random variable was proposed as the common information of $X$ and $Y$. 
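For finite alphabets, the maximal common part in the sense above can be read off from the support of $p(x,y)$: two symbols share a ``common value'' exactly when they are linked through positive-probability pairs. The sketch below (the function name is ours) labels the connected components of this bipartite support graph and returns the entropy of the label.

```python
from collections import defaultdict
from math import log2

def gacs_korner_common_info(pxy):
    """H(Q) for the maximal common part Q: the label of the connected
    component of (x, y) in the bipartite graph on the support of p."""
    adj = defaultdict(set)
    for (x, y), p in pxy.items():
        if p > 0:
            adj['x', x].add(('y', y))
            adj['y', y].add(('x', x))
    comp, n_comps = {}, 0
    for start in adj:                 # depth-first labeling of components
        if start in comp:
            continue
        stack = [start]
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp[v] = n_comps
            stack.extend(adj[v])
        n_comps += 1
    mass = defaultdict(float)         # probability of each component label
    for (x, y), p in pxy.items():
        if p > 0:
            mass[comp['x', x]] += p
    return -sum(p * log2(p) for p in mass.values() if p > 0)
```

For the decomposable case $X=(X',Q)$, $Y=(Y',Q)$ this returns $H(Q)$, while a fully supported pair yields zero, matching the G\'{a}cs-K\"{o}rner result discussed next.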
However, in the same paper~\cite{GacsKo73}, G\'{a}cs and K\"{o}rner showed (a result later strengthened by Witsenhausen~\cite{Witsenhausen75}) that this rate is still just the largest $H(Q)$ for $Q$ such that $X$ and $Y$ can be written as $(X',Q)$ and $(Y',Q)$ respectively.% \footnote{Hence, after removing the maximal such $Q$, the contribution to the common information from $X'$ and $Y'$ is zero, even if they are highly correlated. Other approaches which do not necessarily suffer from this drawback have been suggested, notably~\cite{Wyner75,AhlswedeKo74,Yamamoto94}.} In other words, this definition captures only an explicit form of common information in (a single instance of) $(X,Y)$. One limitation of the common information defined by G\'{a}cs and K\"{o}rner is that it ignores information which is {\em almost} common. Our approach could be viewed as a strict generalization of theirs which uncovers extra layers of ``almost common information.'' Technically, we introduce an omniscient genie who has access to both the observations $X$ and $Y$ and can send separate messages to the two observers over rate-limited noiseless links. See Fig.~\ref{fig:setup}(b). The objective is for the observers to agree on a ``common'' random variable as before, but now with the genie's assistance. This leads to a trade-off region trading off the rates of the noiseless links and the resulting common information\footnote{We use the term common information primarily to maintain continuity with~\cite{GacsKo73}.} (or the resulting residual dependency). We characterize these trade-off regions and show that, in general, they exhibit non-trivial behavior, but reduce to the trivial behavior discussed above when the rates of the noiseless links are zero.
\begin{figure}[tb] \centering \scalebox{0.23}{\includegraphics{GacsKorner}}\\(a)\\\vspace{0.1cm} \scalebox{0.23}{\includegraphics{Genie}}\\(b) \caption{Setup for (a) G\'{a}cs-K\"{o}rner common information, and (b) assisted common information.} \label{fig:setup} \end{figure} Our new measure has an immediate application to cryptography (\sectionref{crypto}). Distributed random variables with non-trivial correlations form an important resource in the cryptographic task of secure multi-party computation. A fundamental problem here is for two parties to ``securely generate'' a certain pair of random variables, given another pair of random variables, by means of a protocol. We show that the region of residual dependency of the views of two parties engaged in such a protocol can only monotonically expand and not shrink. Thus, by comparing the regions for the target random variables and the given random variables, we obtain improved upper bounds on the efficiency with which one pair can be used to securely generate another pair.
\section{Introduction} \label{intro} Recently, a heated debate has broken out about the correct expression for the entropy of a system (see Refs. \cite{Nat.Phys.10.67.2014.Dunkel,PhysRevE.90.062116.2014.Hilbert,PhysRevE.91.052147.2015.Campisi,AmJPhys.83.163.2015.Frenkel,PhysRevE.92.020103.2015.Swendsen} and citations therein). From a variety of proposals, two expressions stand out: the Gibbs entropy, \begin{subequations} \label{def_SBG} \begin{equation} S_G = k_B \ln \Omega(E,{\bf X}) , \label{def_SG} \end{equation} and the Boltzmann entropy, \begin{equation} S_B = k_B \ln \big[ \omega(E,{\bf X}) \epsilon \big] . \label{def_SB} \end{equation} \end{subequations} In Eqs. (\ref{def_SBG}) $E$ is the energy of the system, ${\bf X} \equiv (X_1, \ldots, X_n)$ is the collection of extensive external parameters (other than energy) that specify the state of the system, and $\epsilon$ is an arbitrary, small parameter with dimensions of energy. $\Omega$ represents the number of states of the system with energy less than or equal to $E$ (at fixed ${\bf X}$), whereas $\omega$ is the density of states (DOS), so that $\Omega \equiv \int_0^E \omega(E')\,dE'$. For a quantum system, if $H(\xi; {\bf X})$ is the Hamiltonian and $\xi$ are the microscopic degrees of freedom, then \begin{equation} \Omega(E; {\bf X}) \equiv {\rm Tr} \Theta[E-H(\xi; {\bf X})] \quad {\rm and} \quad \omega(E; {\bf X}) \equiv {\rm Tr} \delta[E-H(\xi; {\bf X})] = \frac{\partial \Omega(E, {\bf X})}{\partial E} . \label{def_omegas} \end{equation} In general, for thermodynamic systems with unbounded energy and monotonic DOS, the thermodynamic predictions of the two expressions (\ref{def_SBG}) coincide. Disagreements appear for mesoscopic systems and in systems with non-monotonic DOS. In the latter case the definition (\ref{def_SB}) may lead to negative temperatures whereas the temperatures derived from Eq. (\ref{def_SG}) are always positive.
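For a discrete toy system the two entropies are easy to tabulate. The sketch below is our own illustration, with units chosen so that $k_B = \epsilon = 1$ and unit level spacing; it uses the degeneracies $\omega(n) = \binom{N_0}{n}$ of the spin system treated later in the paper. By construction $S_G$ is monotone in the energy, while $S_B$ is not when the DOS is non-monotonic.

```python
from math import comb, log

def entropies(N0, n):
    """(S_B, S_G) = (ln omega, ln Omega) for N0 two-level systems with n
    excitations, taking k_B = epsilon = 1 and unit level spacing."""
    omega = comb(N0, n)                              # density of states
    Omega = sum(comb(N0, k) for k in range(n + 1))   # cumulative state count
    return log(omega), log(Omega)

sb = [entropies(40, n)[0] for n in range(41)]   # Boltzmann entropy vs energy
sg = [entropies(40, n)[1] for n in range(41)]   # Gibbs entropy vs energy
```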
\section{Thermodynamics} \label{sec_thermodynamics} We first have to clarify the thermodynamic premises. The state of a system is defined by the small set of parameters $(E,{\bf X})$. The external parameters ${\bf X}$ may be directly measured, but the internal energy $E$ is defined by the work done in adiabatic processes (see for example \cite{Callen:book, Lungu:book, JMathChem.28.313.2000.Pogliani}). The basic ingredient in any thermodynamic considerations is the existence of \textit{equilibrium states}. We assume that at constant external parameters and fixed internal constraints each system is either in equilibrium or evolves irreversibly towards an equilibrium state. The equilibrium or the evolution towards equilibrium may be observed only in macroscopic systems by measurements that (supposedly) do not influence the set of parameters $(E,{\bf X})$. In a mesoscopic system (a system in which the finite size effects are non-negligible) the fluctuations are observable or comparable to the measured quantities. \textit{In this sense} the equilibrium cannot be attained or the act of observing the equilibrium might perturb the state of the system. Therefore we understand the equilibrium (in macroscopic and mesoscopic systems) as \textit{the state which is attained after a long interval of time ($t\to\infty$) when all the external parameters and internal constraints are fixed}. With this definition, any system (macroscopic or mesoscopic) is in equilibrium or tends to equilibrium when the external conditions are fixed. We shall say that two or more systems are in \textit{thermal contact} if they can exchange energy without changing the parameters ${\bf X}$. If the net (average) energy exchange between two systems in thermal contact is zero when the parameters ${\bf X}$ are fixed, we say that the two systems are in \textit{thermal equilibrium}. (We do not discuss here the important issue of how the heat exchange may be observed, especially in a mesoscopic system.)
Obviously, two identical copies of a system are in thermal equilibrium with one another (in the absence of any external forces that should break the symmetry between them). The relation of thermal equilibrium (which we shall denote by ``$\sim$'') is \textit{reflexive} (${\mathcal A} \sim {\mathcal A}$) and \textit{symmetric} (${\mathcal A} \sim {\mathcal B} \ \Rightarrow \ {\mathcal B} \sim {\mathcal A}$). If ``$\sim$'' is also \textit{transitive} (i.e. ${\mathcal A} \sim {\mathcal B}$ and ${\mathcal B}\sim{\mathcal C}$ imply ${\mathcal A} \sim {\mathcal C}$), then the thermal equilibrium is an \textit{equivalence relation}. If we assume that \textit{the thermal equilibrium is an equivalence relation}, then the equivalence classes of a system form a set of disjoint \textit{isothermal} sets in the $(n+1)$-dimensional [$(n+1)$D] space of parameters $(E,{\bf X})$. If these sets are $n$D hyper-surfaces, then we identify each isothermal surface by a number $\theta$. If we can define an order for these numbers, then we call $\theta$ the \textit{empirical temperature}. To choose the order of $\theta$, let's assume that for each ${\bf X}_0$ fixed, the line defined by the points $(E,{\bf X}_0)$, parallel to the $E$ axis, intersects each isothermal hyper-surface in only one point. Then we may (typically) order $\theta$ on intervals in increasing order of $E$ ($\theta$ does not have to be positive). Since the isothermal hyper-surfaces are disjoint, the order of $\theta$'s is independent of ${\bf X}_0$. Furthermore, if $\theta(E,{\bf X}_0)$ is a bijective function of $E$, then one may invert it to $E(\theta,{\bf X}_0)$ and define the state of the system by $(\theta,{\bf X}_0)$ instead of $(E,{\bf X}_0)$. Let us now discuss the existence of the \textit{empirical entropy}. For this we assume that one can perform quasistatic, reversible, adiabatic processes on systems. 
Two states ${\mathcal A}$ and ${\mathcal B}$ that can be connected by such a process are denoted by ${\mathcal A} \bowtie {\mathcal B}$. Then, obviously, ${\mathcal A} \bowtie {\mathcal A}$ and, since the processes are reversible, ${\mathcal A} \bowtie {\mathcal B}$ implies ${\mathcal B} \bowtie {\mathcal A}$. Moreover, one can combine two reversible adiabatic processes ${\mathcal A} \bowtie {\mathcal B}$ and ${\mathcal B} \bowtie {\mathcal C}$ to obtain ${\mathcal A} \bowtie {\mathcal C}$. This implies that ``$\bowtie$'' is an equivalence relation on the set of equilibrium states of the system and defines equivalence classes. The equivalence classes are called \textit{isentropic}. We employ the \textit{Carath\'eodory formulation of the second principle of thermodynamics} \cite{Lungu:book, JMathChem.28.313.2000.Pogliani}, i.e. ``in the neighborhood of any equilibrium state of a system (of any number of thermodynamic coordinates), there exist states that are inaccessible by reversible adiabatic processes.'' If the isentropic sets are disjoint $n$D hyper-surfaces--like the isotherms--and the lines defined by the points $(E,{\bf X}_0)$ (where ${\bf X}_0$ is fixed) intersect them in only one point, we can define the \textit{empirical entropy} $\sigma(E,{\bf X}_0)$ as a monotonically increasing function of $E$, for each ${\bf X}_0$. The entropy is a state function. Any entropy function should be a bijective transformation of $\sigma(E,{\bf X})$. Effectively, the entropy and the temperature may be constructed using the definition of heat and its property of being a holonomic Pfaff form: \begin{equation} \delta Q/\theta = \delta \sigma , \label{def_heat} \end{equation} where $\delta Q$ is the heat exchange and $\delta \sigma$ is the corresponding variation of the entropy. If the heat transfer occurs at constant ${\bf X}$, then $\delta Q = \delta E$ and we obtain from (\ref{def_heat}) the standard relation \begin{equation} \partial \sigma / \partial E \equiv 1/\theta .
\label{def_temp} \end{equation} Choosing the value $\theta_0 = \theta(E,{\bf X})$ for one isothermal hyper-surface $S_{\theta_0}$ and the value of $\sigma_0 = \sigma(E,{\bf X})$ for one isentropic hyper-surface $S_{\sigma_0}$, one can construct the function $\sigma(E,{\bf X})$ on $S_{\theta_0}$ by integrating $\delta Q/\theta_0$ along any path on $S_{\theta_0}$. Once we know $\sigma(E,{\bf X})$ on $S_{\theta_0}$, we can extend it in the whole space by using the equation for the isentropic hyper-surfaces. Furthermore, having $\sigma(E,{\bf X})$ in the whole space, one can construct $\theta(E,{\bf X})$ by using Eq. (\ref{def_temp}). In all this we assume that $\theta$ does not take the value zero anywhere and the isothermal and isentropic hyper-surfaces are smooth. General values for $\theta$ can be obtained by defining a thermometer which may be used to probe the temperature in any system by using the transitivity property of the temperature. The typical absolute temperature scale is the Kelvin scale. Nevertheless, as we shall see further, this scale is not sufficient to define the temperature in any physical system. For some systems we need a scale which also contains negative temperatures. \subsection{Tisza-Callen postulates} \label{subsec_TC_postulates} We saw very briefly how one can obtain some of the basic properties of thermodynamic systems, including the existence of temperature and entropy, starting from very general assumptions and without making reference to the principles of thermodynamics. Without going further into details we conclude the section by presenting the axiomatic foundation of thermodynamics of Tisza and Callen (see for example \cite{Callen:book, JNonNewtonianFluidMech.96.5.2001.Jongschaap, PhysRevE.92.020103.2015.Swendsen,Lungu:book}), which is based on the following four postulates (and includes the assumptions made above).
{\it Postulate 1 (existence of equilibrium states): Any isolated system has equilibrium states that are characterized uniquely by a small number of extensive variables $(E,{\bf X})$.} {\it Postulate 2 (existence of entropy): There exists a function (called the entropy $S$) of the extensive parameters, defined for all equilibrium states and having the following property. The values assumed by the extensive parameters in the absence of a constraint are those that maximize the entropy over the manifold of constrained equilibrium states.} We use here the notation $S$ for the entropy, to distinguish it from the empirical entropy $\sigma$. This postulate is an expression of the \textit{second law} of thermodynamics. {\it Postulate 3 (additivity and differentiability of $S$): The entropy of a composite system is additive over the constituent subsystems (whence the entropy of each constituent system is a homogeneous first-order function of the extensive parameters). The entropy is continuous and differentiable. } This postulate applies only to systems in which the interactions between particles belonging to different sub-systems are negligible. Moreover, in the original formulation of Callen \cite{Callen:book} it is stated that $S$ increases monotonically with $E$, but this is unnecessary, as explained in \cite{PhysRevE.92.020103.2015.Swendsen}. This postulate implies the existence of the temperature $T$, which is $T \equiv (\partial S/\partial E)^{-1}$. Using this definition and the maximization of the entropy of a composite system at equilibrium, we obtain that \begin{equation} \frac{1}{T} = \frac{\partial S_1}{\partial E_1} = \frac{\partial S_2}{\partial E_2} = \ldots , \label{transitivity} \end{equation} hence the transitivity of the thermal equilibrium and the \textit{zeroth law} of thermodynamics \cite{PhysRevE.92.020103.2015.Swendsen}.
From this postulate we also obtain the \textit{first law} of thermodynamics, namely \begin{equation} dE = T\,dS + \sum_{i=1}^n p_i \,d X_i \equiv \delta Q + \delta L \label{Lex1} \end{equation} where $\delta L$ is the work and ${\bf p}$ is the collection of intensive variables conjugated to the variables ${\bf X}$ \cite{PhysRevE.92.020103.2015.Swendsen}. {\it Postulate 4: The entropy of any system vanishes in the state for which $T \equiv (\partial S/\partial E)^{-1} = 0$. } This postulate expresses the \textit{third law} of thermodynamics, but we shall ignore it in the following. Now we can see if different definitions of the entropy comply with the axiomatic formulation of thermodynamics and compare the conclusions of Refs. \cite{Nat.Phys.10.67.2014.Dunkel,PhysRevE.90.062116.2014.Hilbert,PhysRevE.91.052147.2015.Campisi,AmJPhys.83.163.2015.Frenkel,PhysRevE.92.020103.2015.Swendsen}. \section{The Gibbs entropy} \label{sec_G} In searching for a microscopic expression for the entropy of a system which would be generally valid, some authors \cite{Nat.Phys.10.67.2014.Dunkel, PhysRevE.90.062116.2014.Hilbert, PhysRevE.91.052147.2015.Campisi} strongly support the Gibbs entropy (\ref{def_SG}). Let's see if this satisfies the basic thermodynamic requirements outlined in Section~\ref{sec_thermodynamics} (except the \textit{postulate 4}). Clearly, the existence of an equilibrium state is not contradicted by this definition and therefore \textit{postulate 1} is satisfied. Moreover, if two or more systems are in contact and some constraints are removed (like the removal of a wall or allowing heat exchange between different subparts of a system), the number of states $\Omega$ can only increase, since the states accessible before removing the constraints are still accessible after the removal (see for example Eqs. 48 and 49 of Ref. \cite{PhysRevE.90.062116.2014.Hilbert}).
Apparently this implies that the \textit{postulate 2} is satisfied, since the entropy always increases after removal of some constraints. We shall come back to this point later and show that this is not the case. The \textit{postulate 3} is satisfied only in the thermodynamic limit, since by putting two systems in contact one obtains a total system whose entropy is always larger than the sum of the entropies of the isolated systems. \textit{Assuming} that the difference is negligible for thermodynamic systems, we can say that the \textit{postulate 3} is also satisfied in such cases \cite{PhysRevE.90.062116.2014.Hilbert}. Now let's see if the \textit{postulate 2} is indeed satisfied and $S_G$ also describes the equilibrium state of a composite system. For this we calculate the Gibbs temperature, defined by Eq. (\ref{transitivity}), \begin{equation} T_G \equiv \frac{\Omega(E,{\bf X})}{k_B \omega(E,{\bf X})} . \label{def_TG} \end{equation} We take two relevant examples: the ideal gas and a system of independent spins in uniform magnetic field. For an \textit{ideal gas} of $N$ particles in a volume $V$, the total number of states is $\Omega_{\rm id}(E, V, N) = C_{\rm id}(N) V^N E^{3N/2}$, where $C_{\rm id}(N)$ is a constant that depends only on $N$. Therefore the relation between the temperature and the energy of the system is $T_{G {\rm id}} = [2/(3 k_B)] E/N$, which takes values between $(2/3) \epsilon_{\rm min}/k_B$ (when $E\to E_{\rm min}$ and $\epsilon_{\rm min} \equiv E_{\rm min}/N$) and $\infty$ (when $E \to \infty$); $E_{\rm min}$ is the minimum energy of the system and in general is taken to be equal to zero. The energy of a \textit{system of $N_0$ spins} ($N_0\gg 1$) in uniform magnetic field $B$ is \begin{equation} E = - B\mu \sum_{i=1}^{N_0} s_i , \label{en_spins} \end{equation} where $\mu$ is the magnetic moment, $s_i = \pm 1/2$ is the spin orientation, and we assume for convenience that $B > 0$.
The minimum energy is $E_0 = - B\mu N_0/2$, which is reached when all the spins are pointing upwards, and the maximum energy is $E_1 = B\mu N_0/2$, which is reached when all the spins are pointing downwards. If we denote by $N$ the number of ``spin flips'' (which is the number of spins oriented downwards), then the energy of the system relative to $E_0$ is $E = B\mu N$ and the DOS is $\omega_s(N) \equiv \omega_s(E = B\mu N) = N_0!/[N! (N_0-N)!]/(B\mu)$ (see \cite{PhysRevE.91.052147.2015.Campisi} for details). Obviously, $\omega_s(N)$ reaches its maximum when $N = N_0/2$ (when $N_0$ is even) or $N = (N_0-1)/2, (N_0+1)/2$ (when $N_0$ is odd). Since $N_0 \gg 1$, we shall say that the maximum number of microconfigurations is $\omega_{s\, {\rm max}} = N_0!/[(N_0/2)!]^2/(B\mu)$. The total number of states is $\Omega_{s} (E) = B\mu \sum_{N=0}^{E/(B\mu)} \omega_s(N)$, and the Gibbs temperature $T_{G s} = \Omega_{s} (E)/[k_B \omega_s(E)]$ takes values between $B\mu/k_B$ (when $E \to 0$) and $(B\mu/k_B) \Omega_s(B\mu N_0)$ (when $E \to E_{\rm max} = B\mu N_0$). Now we can see why $T_G$ is not a proper temperature and therefore $S_G$ is not an appropriate definition for the entropy. It is well known that if $E > B\mu N_0/2$, the spin system has a population inversion and cannot be in thermal equilibrium with an ideal gas. Yet, since $T_{G {\rm id}}$ takes values between $(2/3) \epsilon_{\rm min}/k_B$ and $\infty$, if $2\epsilon_{\rm min}/3 < B\mu \Omega_s (B\mu N_0)$ we can find energies $E_{\rm id}$ and $E_s\ (> B\mu N_0/2)$, such that $T_{G {\rm id}}(E_{\rm id}) = T_{G s}(E_s)$. Since the Gibbs temperatures are equal, the Gibbs entropy of the total system is maximized for these choices of energies, which correspond to a nonequilibrium state, so \textit{the Gibbs entropy is unphysical} (see also the next section).
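The argument can be checked numerically. In the sketch below (our own illustration, with $B\mu = k_B = 1$), the Gibbs temperature of the spin system stays positive and keeps growing through the population-inverted regime $E > B\mu N_0/2$, so it can always be matched by some ideal-gas Gibbs temperature.

```python
from math import comb

def gibbs_T_spin(N0, n, B_mu=1.0, kB=1.0):
    """T_G = Omega/(kB*omega) for N0 spins with n flips (E = B_mu*n)."""
    omega = comb(N0, n) / B_mu                       # density of states
    Omega = sum(comb(N0, k) for k in range(n + 1))   # states with energy <= E
    return Omega / (kB * omega)

T = [gibbs_T_spin(50, n) for n in range(51)]         # full energy range
```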
\section{The Boltzmann entropy} \label{sec_B} The Boltzmann entropy (\ref{def_SB}) is the most widely used in statistical physics (see for example \cite{AmJPhys.83.163.2015.Frenkel,PhysRevE.92.020103.2015.Swendsen} related to the recent debate), although it is strongly criticized by the authors who consider the Gibbs entropy (\ref{def_SG}) as the only viable choice \cite{Nat.Phys.10.67.2014.Dunkel, PhysRevE.90.062116.2014.Hilbert, PhysRevE.91.052147.2015.Campisi}. The interpretation of $S_B$ is different from that of $S_G$ and it has been thoroughly discussed in Refs. \cite{AmJPhys.83.163.2015.Frenkel,PhysRevE.92.020103.2015.Swendsen}. Let us see whether it satisfies the postulates. It is easy to see that Eq. (\ref{def_SB}) is in accord with \textit{postulate 1}. Further, the evolution towards equilibrium is an evolution towards the maximum DOS. If we put two systems ${\mathcal A}$ and ${\mathcal B}$ in thermal contact, the total energy $E_{{\mathcal A} {\mathcal B}} = E_{\mathcal A} + E_{\mathcal B}$ is conserved and the total DOS for given values of $E_{\mathcal A}$ and $E_{\mathcal B}$ (assuming weak interaction between the systems) is $\omega_{{\mathcal A} {\mathcal B}} = \omega_{\mathcal A}(E_{\mathcal A}) \omega_{\mathcal B}(E_{\mathcal B})$. Therefore the maximum entropy $S_{B {\mathcal A} {\mathcal B}} = k_B \ln \omega_{{\mathcal A} {\mathcal B}} \epsilon$ is obtained by the maximization of $\omega_{{\mathcal A} {\mathcal B}}$ with respect to $E_{\mathcal A}$ and $E_{\mathcal B}$, under the constraint that $E_{{\mathcal A} {\mathcal B}}$ remains constant. This leads to the equilibrium condition for the Boltzmann temperatures, $1/T_{B {\mathcal A}} \equiv \partial S_{B{\mathcal A}}/\partial E_{\mathcal A} = \partial S_{B{\mathcal B}}/\partial E_{\mathcal B} \equiv 1/T_{B{\mathcal B}}$ \cite{AmJPhys.83.163.2015.Frenkel,PhysRevE.92.020103.2015.Swendsen}, so the \textit{postulate 2} is also satisfied.
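For completeness, the maximization step behind this equilibrium condition can be written out explicitly (a one-line computation, with $E_{\mathcal B} = E_{{\mathcal A}{\mathcal B}} - E_{\mathcal A}$ held by the constraint):

```latex
0 \;=\; \frac{d}{d E_{\mathcal A}}
    \ln\!\left[ \omega_{\mathcal A}(E_{\mathcal A})\,
                \omega_{\mathcal B}(E_{{\mathcal A}{\mathcal B}}-E_{\mathcal A}) \right]
  \;=\; \frac{\partial \ln \omega_{\mathcal A}}{\partial E_{\mathcal A}}
      - \frac{\partial \ln \omega_{\mathcal B}}{\partial E_{\mathcal B}} ,
```

which, after multiplication by $k_B$, is exactly the stated condition $\partial S_{B{\mathcal A}}/\partial E_{\mathcal A} = \partial S_{B{\mathcal B}}/\partial E_{\mathcal B}$.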
Moreover, once the \textit{postulate 2} is satisfied, under the equilibrium conditions, the \textit{postulate 3} is also satisfied. (We do not discuss the \textit{postulate 4}.) The definition of $S_B$ is based on the assumption that all the states of the system are equally probable and the system spends approximately the same amount of time in each of them. In a thermodynamic system the equilibrium state, which corresponds to equal intensive parameters \textit{\'a la} Boltzmann, has an enormous DOS as compared to the non-equilibrium states, and for this reason the fluctuations are very small, i.e. the system stays in equilibrium. This assumption is also consistent with the study of small (mesoscopic) systems, where fluctuations are observable (comparable with the averages) and equilibrium is never achieved in the thermodynamic sense. The evolution towards equilibrium should be understood in the same sense. If we do not know anything \textit{a priori} about the transition rates between different states, we may assume that they are comparable. Therefore a macroscopic system always evolves towards the parameter regions of higher DOS, i.e. towards equilibrium. These considerations bring us back to the discussion about the equilibration of the system of spins from Section~\ref{sec_G}. From Eq. (\ref{def_SB}) we obtain $T_B = \omega(E,{\bf X})/[k_B \nu(E,{\bf X})]$, where $\nu(E,{\bf X}) \equiv \partial \omega(E,{\bf X})/\partial E$. If $\nu(E,{\bf X})<0$, then $T_B(E,{\bf X})<0$. Therefore for the system of spins described by Eq. (\ref{en_spins}), $T_{Bs}>0$ for $E\in [0,B\mu N_0/2)$, $T_{Bs}<0$ for $E\in (B\mu N_0/2,B\mu N_0]$, and $T_{Bs}(B\mu N_0/2) = \pm\infty$ is undefined. On the other hand, the temperature of the ideal gas $T_{B {\rm id}}$ is positive for any $E$ (and is the same as $T_{G {\rm id}}$ in the thermodynamic limit).
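This sign structure of $T_{Bs}$ can be checked directly on the exact binomial DOS (an illustrative sketch, again in units $k_B = B\mu = 1$ with $N_0 = 100$; here $\nu$ is approximated by a central difference):

```python
from math import comb

# Units: k_B = B*mu = 1; energies E = N = number of flipped spins.
N0 = 100  # illustrative spin count
omega = [comb(N0, N) for N in range(N0 + 1)]

def T_B(N):
    """Boltzmann temperature T_B = omega/nu, nu = d omega/dE (central difference)."""
    nu = (omega[N + 1] - omega[N - 1]) / 2.0
    return omega[N] / nu

assert T_B(N0 // 4) > 0        # below half filling: omega increasing, T_B > 0
assert T_B(3 * N0 // 4) < 0    # population inversion: omega decreasing, T_B < 0
```

By the symmetry of the binomial DOS, `T_B(25)` and `T_B(75)` have equal magnitude and opposite sign; at exactly half filling the difference quotient vanishes and $T_B$ is undefined, in agreement with the text.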
Therefore the ideal gas and the system of spins cannot be in thermal equilibrium if $T_{B s} < 0$, contrary to the predictions of Gibbs' formula (\ref{def_SG}), since the system evolves towards parameters corresponding to a higher number of states (higher probability). In general, systems of negative $T_B$ cannot be in equilibrium with systems of positive $T_B$, due to the evolution towards maximum DOS, and this is in accordance with the experimental observations. The supporters of the Gibbs statistics argue that such situations should not be taken into consideration because the states of negative $T_B$ are metastable. This argument is false. The states with negative $T_B$ only appear to be metastable in contact with systems with unbounded spectra. If a closed region of space contains only systems with bounded energy spectra, then they may equilibrate at either positive or negative $T_B$, and the systems of negative temperature would be as legitimate as those of positive temperature. In such a ``world'' one may have reservoirs and thermometers of negative temperature. \section{Conclusions} Starting from the Tisza-Callen axiomatic formulation of thermodynamics we analyzed the validity of the Gibbs and Boltzmann expressions for the entropy, $S_G$ (\ref{def_SG}) and $S_B$ (\ref{def_SB}), respectively. We agree with the authors of Refs. \cite{AmJPhys.83.163.2015.Frenkel,PhysRevE.92.020103.2015.Swendsen}, and disagree with the authors of Refs. \cite{Nat.Phys.10.67.2014.Dunkel, PhysRevE.90.062116.2014.Hilbert, PhysRevE.91.052147.2015.Campisi}, in considering $S_B$ the only generally valid expression for the entropy. $S_G$ is correct only when it gives the same results as $S_B$. We saw that the equilibrium, according to Boltzmann, is the state of maximum probability (maximum DOS) for the extensive variables.
If we assume, by \textit{reductio ad absurdum}, that the real equilibrium state is determined by the Gibbs prescription and is different from Boltzmann's, then the average value of at least one extensive variable is different from the value corresponding to maximum probability. This further implies that the fluctuations are macroscopic and equilibrium is not achieved. We also showed that the negative values of the Boltzmann temperature $T_B$ have a clear physical meaning in any statistical ensemble (canonical, microcanonical, etc.). The states of negative $T_B$ seem to be unstable only in the presence of systems of unbounded spectra. If only systems of bounded spectra existed, then the temperature could take any positive or negative value and one could define thermometers and reservoirs as usual. \begin{acknowledgement} This work has been financially supported by CNCSIS-UEFISCDI (project IDEI 114/2011) and ANCS (project PN-09370102). Travel support from Romania-JINR Collaboration grants 05-6-1119-2014/2016, 4436-3-2015/2017, 4342-3-2014/2015, and the Titeica-Markov program is gratefully acknowledged. \end{acknowledgement}
\section{ Introduction } The purpose of this article is to study the problem of constructing $N$-dimensional ($N \ge 2$) vectorial Sturm-Liouville differential equations, subject to certain boundary conditions, which are isospectral to a given one. The $N$-dimensional vectorial Sturm-Liouville eigenvalue problems considered in this paper are of the following form: \begin{equation} \label{a1.1} -\phi ''(x) +P(x) \phi (x) = \lambda \phi (x), \quad B\phi '(0) + A \phi (0) = {\cal B} \phi ' (\pi ) +{\cal A} \phi (\pi ) = {\bf 0 } , \end{equation} where $0\le x \le \pi $, $\phi (x) $ is an ${\bf R}^N$-valued function, $P(x)$ is a continuous $N \times N $ symmetric matrix-valued function, and $A, B, {\cal A, B} $ are $N \times N $ matrices which satisfy the following conditions: \begin{equation} \label{a1.2} BA^* = AB^* , {\cal BA^* = AB^* }, \quad \mbox{rank}[ A, B]=\mbox{rank}[ {\cal A, B } ]= N , \end{equation} where $A^*$ is the transpose of $A$, and $[A,B]$ denotes the $N \times 2N $ matrix whose first $N\times N$ block is $A$ and whose second $N \times N$ block is $B$. We shall use the tuple $(P, A, B, {\cal A, B })$ to denote the eigenvalue problem (\ref{a1.1}). Note that the conditions in (\ref{a1.2}) ensure that (\ref{a1.1}) is a selfadjoint eigenvalue problem, so its eigenvalues can be determined by the variational principle. Counting multiplicities of the eigenvalues, we arrange the eigenvalues of (\ref{a1.1}) in an ascending sequence \begin{equation} \label{a1.3} \mu _0 \le \mu _1 \le \mu _2 \le \cdots . \end{equation} This sequence shall be denoted by $\Sigma (P, A, B, {\cal A, B}) $, and is called the {\sl sequence of} \newline {\sl eigenvalues} of (\ref{a1.1}). Note that the multiplicity of each eigenvalue of (\ref{a1.1}) is at most $N$.
For convenience, we shall use $\sigma (P,A,B,{\cal A,B})$ to denote the {\sl set of eigenvalues} of (\ref{a1.1}), arrange its elements in ascending order as \[ \lambda _0 < \lambda _1 < \lambda _2 < \cdots , \] and use $m_k$ to denote the multiplicity of $\lambda _k $ in the sequence (\ref{a1.3}). Given two $N$-dimensional selfadjoint vectorial Sturm-Liouville eigenvalue problems $(P, A, B, {\cal A,} $ \newline ${\cal B}) $ and $(\tilde{P} ,\tilde{A} ,\tilde{B} ,\tilde{\cal A} , \tilde {\cal B} )$ over $ 0 \le x \le \pi $, if $\Sigma (P, A, B, {\cal A, B})= \Sigma (\tilde{P} ,\tilde{A} ,\tilde{B} ,\tilde{\cal A} , \tilde{\cal B} )$, we call these two eigenvalue problems {\sl isospectral problems}, or say that $(P, A, B, {\cal A, B}) $ is {\sl isospectral} to $(\tilde{P} ,\tilde{A} ,\tilde{B} ,\tilde{\cal A} , \tilde{\cal B} )$. For scalar Sturm-Liouville equations, i.e., (\ref{a1.1}) with $N=1$, isospectral problems have been studied by many mathematicians, notably G. Borg \cite{B}, I. M. Gel'fand, B. M. Levitan and their associates; the structure of the set of isospectral problems for scalar Sturm-Liouville equations is well presented in the book of J. P\"{o}schel and E. Trubowitz \cite{PT}, and in the works of E. Trubowitz. But for vectorial Sturm-Liouville equations, i.e., (\ref{a1.1}) with $N \ge 2$, neither methods for constructing isospectral problems nor the structure of the set of isospectral problems are well understood. Motivated by a recent work of Jodeit and Levitan \cite{JL}, in this paper we present a method for constructing an $N$-dimensional vectorial Sturm-Liouville eigenvalue problem $(\tilde{P} ,\tilde{A} ,\tilde{B} ,\tilde{\cal A} , \tilde{\cal B} )$ over $ 0 \le x \le \pi $ which is isospectral to a given $N$-dimensional vectorial Sturm-Liouville eigenvalue problem $(P, A, B, {\cal A, B}) $ over $ 0 \le x \le \pi $.
For simplicity we shall assume the completeness of the eigenfunctions of the given eigenvalue problem $(P, A, B, {\cal A, B}) $ subject to the given boundary conditions shown in (\ref{a1.1}). This paper is organized as follows. In section 2 we present some preliminary results related to vectorial Sturm-Liouville equations, and a matrix wave equation (see (\ref{Th3-1})) which is constructed from the given eigenvalue problem $(P, A, B, {\cal A, B}) $. In section 3, using the matrix wave equation introduced in section 2, we construct eigenvalue problems which are isospectral to $(P, A, B, {\cal A, B}) $. In section 4 we present two examples using our construction method. As shown in one of our examples, even for a simple case such as $(P,I,0,I,0)$, where $P(x)$ is a constant two by two diagonal matrix and $ I$ is the two by two identity matrix, an isospectral problem $(Q,I,0,I,0)$ can be found for which the family of matrices $Q(x)$ is not simultaneously diagonalizable. The isospectral problem for vectorial Sturm-Liouville equations is thus much more complicated than its scalar counterpart. \section{ Preliminary } To study the eigenvalue problem (\ref{a1.1}) we introduce the following matrix differential equation \begin{equation} \label{a2.1} - Y'' +P(x)Y= \lambda Y, \mbox{} \hskip 1cm Y(0)=B^* , \mbox{} \hskip 1cm Y'(0)=-A^*. \end{equation} Let $Y(x; \lambda) $ denote the $N \times N$ matrix-valued solution of the initial value problem (\ref{a2.1}). We have (see \cite{CS}) \[ Y(x; \lambda )= {\cal C} (x; \mu ) + \int _0^x \tilde{K} (x,t) {\cal C} (t; \mu ) dt , \] where $\mu ^2 = \lambda $, $ {\cal C } (x; \mu ) = \cos (\mu x) B^* - \mu^{-1} \sin (\mu x) A^* $, and $\tilde{K}(x,t)$ is as described in [2, {\bf Lemma 2.1}]. Define the following matrix-valued function \begin{equation} \label{a2.2} W(\lambda ) = {\cal B} Y' (\pi ; \lambda ) +{\cal A} Y (\pi ; \lambda ). \end{equation} Then $\lambda_* \in \sigma (P, A, B, {\cal A, B})$ if and only if $W (\lambda _*)$ is a singular matrix.
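To make the role of $W(\lambda )$ concrete, the following sketch works out a minimal instance not taken from the paper: the scalar ($N=1$) free case $P \equiv 0$ with Dirichlet conditions $A = {\cal A} = 1$, $B = {\cal B} = 0$. Then $Y(x;\lambda ) = -\sin (\sqrt{\lambda }\, x)/\sqrt{\lambda }$ solves (\ref{a2.1}), the characteristic function is $W(\lambda ) = Y(\pi ;\lambda )$, and its zeros are exactly the Dirichlet eigenvalues $\lambda = k^2$:

```python
import math

# Scalar (N = 1) free case: P = 0, Dirichlet conditions (A = calA = 1, B = calB = 0).
# Y(x; lam) = -sin(sqrt(lam) x)/sqrt(lam) solves -Y'' = lam*Y, Y(0) = 0, Y'(0) = -1,
# so the characteristic function W(lam) = calB*Y'(pi) + calA*Y(pi) = Y(pi; lam).
def W(lam):
    mu = math.sqrt(lam)
    return -math.sin(mu * math.pi) / mu

# W vanishes exactly at the Dirichlet eigenvalues lam = k^2...
for k in (1, 2, 3):
    assert abs(W(k * k)) < 1e-12
# ...and is bounded away from zero between them:
assert abs(W(2.5)) > 0.1
```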
It follows from the variational principle that the set of the zeros of the equation \begin{equation} \label{a2.3} \det W (\lambda ) = 0 \end{equation} is bounded below. Denote the distinct zeros of (\ref{a2.3}), in strictly ascending order, as \begin{equation} \label{a2.4} \lambda _0 < \lambda _1 < \lambda _2 < \cdots . \end{equation} Then the multiplicity $m_k$ of $\lambda_k$ in the sequence of eigenvalues $\Sigma (P, A, B, {\cal A, B})$ of (\ref{a1.1}) is equal to the dimension of the null space $\mbox{Null}(W( \lambda_k ))$ of $W( \lambda _k )$. If ${\bf v}$ is a nonzero element in $\mbox{Null}(W( \lambda_k ))$, then the vector-valued function \[ z(x) = Y (x; \lambda_k){\bf v} \] is an eigenfunction of (\ref{a1.1}) corresponding to the eigenvalue $ \lambda_k$. In the following, the notation $\langle {\bf v,w} \rangle $ denotes the inner product of two elements ${\bf v}$ and ${\bf w}$ in ${\bf R}^N$. We shall need the following result. \newtheorem{c1}{Lemma}[section] \begin{c1} \label{le1} For each $k \ge 0$, in the null space Null$(W( \lambda_k ))$ there are exactly $m_k$ linearly independent constant vectors $\theta _l (k) $, $1\le l\le m_k $, such that the vector-valued functions \[ Y(x;\lambda_k)\theta _l (k) ,\mbox{} \hskip 1cm 1\le l \le m_k , \] are mutually orthogonal, i.e., \[ \int _0^\pi \langle Y(x; \lambda _k ) \theta _i (k) , Y(x; \lambda _k ) \theta _j (k) \rangle dx = 0 , \hskip 0.5cm \mbox{if} \hskip 0.5cm i \neq j. \] \end{c1} \noindent {\bf Proof.} Let $ v_1 , \ldots , v_{m_k} $ be a basis of Null$(W( \lambda _k ))$ and $V_k =[ v_1 ,\ldots , v_{m_k} ]$. Then \[ V= \int_0^\pi V_k^* Y^* (x; \lambda _k)Y(x; \lambda _k )V_k dx \] is an $m_k \times m_k $ positive definite matrix. There exists an $m_k \times m_k $ orthogonal matrix $U$ which diagonalizes $V$, i.e., $U^* V U $ is a diagonal matrix. Let $\theta _l (k) , 1\le l \le m_k $, denote the column vectors of $V_k U$.
Then the $ \theta _l (k) $ fulfill our requirement. $\Box $ According to {\bf Lemma \ref{le1} }, we define the following vector-valued functions \begin{equation} \phi _l (x; \lambda , \lambda _k )= Y(x; \lambda ) \theta _l (k) , \mbox{} \hskip 1.2cm 1 \le l \le m_k. \end{equation} Then the functions \[ \phi _l (x; \lambda _k , \lambda _k )= Y(x; \lambda_k ) \theta _l (k) , \mbox{} \hskip 1.2cm 1 \le l \le m_k, \] form an orthogonal basis of the eigenspace corresponding to the eigenvalue $\lambda _k$. From now on the eigenvalue problem $ (P,A,B,{\cal A,B})$ shall be fixed, and, as mentioned in the introduction, for simplicity we shall assume the completeness of the system of eigenfunctions $\{ \phi _l (x; \lambda_k , \lambda_k ) : 1\le l \le m_k , k =0,1,2,\ldots \} $ subject to the given boundary conditions. Next we extend the idea used by Jodeit and Levitan in \cite{JL} to construct the kernel function for a related integral equation. We shall view an ${\bf R}^N$-vector ${\bf v}$ as an $N \times 1$ matrix. Choose $c_k^i \in {\bf R} $, $1\le i \le m_k, k=0,1,\ldots $, which converge so rapidly to zero that the matrix-valued function ${\cal F}$, defined by the following uniformly convergent series, \begin{equation} \label{F} {\cal F}(x,y)= \sum_{k=0}^{\infty} \sum_{i=1}^{m_k} c_k^i \phi _i (x;\lambda _k , \lambda _k ) \phi _i^* (y;\lambda _k , \lambda _k ), \end{equation} is continuous and has continuous first and second order derivatives. Then we construct the following integral equation: \begin{equation} \label{IE} K(x,y) + {\cal F}(x,y) + \int _0^x K(x,t) {\cal F}(t,y) dt =0 , \mbox{} \hskip 0.5cm 0\le y<x \le \pi. \end{equation} We note that if we choose the sequence $( c_k^i :1 \le i \le m_k, k=0,1,2, \ldots ) $ such that $c_k^i=0$ for all $ k \ge k_\circ $, $ i=1,2,\ldots ,m_k $, where $k_\circ $ is a fixed index, then the series on the right-hand side of (\ref{F}) is a finite series, and the equation (\ref{IE}) clearly makes sense.
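For orientation (a standard computation, sketched here rather than taken from a later section): if only a single coefficient is nonzero, say $c := c_{k_\circ}^1$ with $\phi := \phi _1 (\cdot\, ;\lambda _{k_\circ} , \lambda _{k_\circ})$, then ${\cal F}(x,y) = c\,\phi (x)\phi ^*(y)$ is a degenerate kernel and (\ref{IE}) can be solved in closed form,

```latex
K(x,y) \;=\; -\,\frac{c\,\phi (x)\,\phi ^{*}(y)}
                     {\,1 + c\int_0^x \phi ^{*}(t)\,\phi (t)\,dt\,} ,
\qquad 0 \le y < x \le \pi ,
```

as one verifies by substitution into (\ref{IE}), using that $\phi ^*(t)\phi (t)$ is scalar. In the scalar case $N=1$ this reduces to the familiar single-eigenvalue formula $Q = P - 2\,\frac{d^2}{dx^2}\ln \bigl( 1+c\int_0^x \phi ^2 \, dt \bigr)$ (cf. \cite{JL}).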
This choice of $(c_k^i )$ shall be used in section 4 to construct some concrete examples. The existence of a solution of (\ref{IE}) can easily be proven by iteration. On the other hand, for suitable choices of the real numbers $c_k^i$, $1\le i \le m_k$, $k \ge 0 $, we may prove the following uniqueness theorem. \par \newtheorem{c2}[c1]{Theorem} \begin{c2} \label{Th2.2} Suppose that the sequence $( c_k^i )$ is chosen so that the series in (\ref{F}) is uniformly convergent and has continuous first and second order derivatives, and \begin{equation} \label{Th2-c} 1+c_k^i || \phi _i (\cdot ; \lambda _k , \lambda _k )||^2 > 0 , \mbox{} \hskip 0.5cm 1\le i \le m_k , \hskip 0.5cm \forall k \ge 0 . \end{equation} Then (\ref{IE}) has a unique solution for every $x$ , $ 0<x \le \pi $. \end{c2} \noindent {\bf Proof. } The method for proving this theorem is similar to the one used for treating the scalar case in [3, {\bf Theorem 1.1}]. It suffices to show that the only solution of the integral equation \begin{equation} \label{IE2} \Delta (x,y) + \int _0^x \Delta (x,t) {\cal F}(t,y )dt = 0 \end{equation} is $\Delta (x,y ) \equiv 0 $, where $\Delta (x,y )$ is the difference of two solutions of (\ref{IE}). Denote $\phi_{i,k} (x) = \phi_i (x; \lambda_k, \lambda_k)$ for convenience. Owing to the assumption about the completeness of the eigenfunctions of $(P, A, B, {\cal A,B})$, and the orthogonality ({\bf Lemma \ref{le1}}) of the eigenfunctions $\phi_{i,k} (x)$, we have \begin{equation} \label{Th2.2-1} \Delta^* (x,y) = \sum_{k=0}^\infty \sum_{i=1}^{m_k} \frac {\phi_{i,k} (y)}{||\phi_{i,k}||} ( \int_0^x \frac {\phi_{i,k}^* (t)}{||\phi_{i,k}||} \Delta^* (x,t) dt ) . \end{equation} By (\ref{IE2}), we have \[ \Delta^* (x,y) + \int _0^x {\cal F}^* (t,y )\Delta^* (x,t) dt = 0, \] \begin{equation} \label{Th2.2-2} \Delta (x,y)\Delta^* (x,y) + \int _0^x \Delta (x,y){\cal F}^* (t,y )\Delta^* (x,t) dt = 0.
\end{equation} Integrating (\ref{Th2.2-2}) with respect to the $y$-variable from $0$ to $x$, and using (\ref{F}) and (\ref{Th2.2-1}), we obtain \begin{equation} \label{Th2.2-3} \sum_{k=0}^\infty \sum_{i=1}^{m_k} \frac 1{||\phi_{i,k}||^2}[ 1+ c_k^i ||\phi_{i,k}||^2] \Omega_{i,k}(x)\Omega_{i,k}^*(x)=0, \end{equation} where \begin{equation} \label{Th2.2-4} \Omega_{i,k}(x)= \int_0^x \Delta (x,t) \phi_{i,k} (t) dt. \end{equation} Since $ \Omega_{i,k}(x)\Omega_{i,k}^*(x)$ is nonnegative definite, (\ref{Th2.2-3}) and (\ref{Th2-c}) imply that \[ \Omega_{i,k}(x)\Omega_{i,k}^*(x)=0 , \] and hence $ \Omega_{i,k}(x)=0 $, so by (\ref{Th2.2-4}), \begin{equation} \label{Th2.2-5} \int_0^x \Delta (x,t) \phi_{i,k} (t) dt=0 \end{equation} for $ k=0,1,2, \ldots, i= 1,2, \ldots ,m_k $. Then, by the completeness of the eigenfunctions of $(P, A, B, {\cal A,B})$, (\ref{Th2.2-5}) implies $\Delta (x,y) =0 $. This completes the proof. $\Box $ \vskip 0.5cm Now we face the question: ``Does the matrix-valued function $K(x,y)$ determined by the above theorem also satisfy some familiar wave equation, as in the scalar case?'' The answer is affirmative, as shown below. \par \newtheorem{c3}[c1]{Theorem} \begin{c3} \label{Th3} Assumptions as in {\bf Theorem \ref{Th2.2}}. The solution $K(x,y)$ of (\ref{IE}) satisfies the following partial differential equation \begin{equation} \label{Th3-1} \frac {\partial ^2}{\partial x^2} K -Q(x) K = \frac {\partial ^2}{\partial y^2} K -KP(y) , \end{equation} where $ Q(x)= P(x)+2 d/dx K(x,x) $, and it also satisfies the following conditions: \[ K(x,y) =0 , \mbox{} \hskip 1cm \mbox{ if} \hskip 1cm y > x , \] \begin{equation} \label{Th3-2} K(x,0)A^* + \frac \partial{\partial y}K|_{y=0}B^* = 0 , \end{equation} \begin{equation} \label{Th3-3} K(x,x)= \frac 12 \int _0^x [ Q(t) -P(t)]dt -{\cal F}(0,0) , \end{equation} where \begin{equation} \label{Th3-4} {\cal F}(0,0) = B^* ( \sum_{k=0}^\infty \sum_{i=1}^{m_k} c_k^i \theta _i(k) \theta _i^*(k) )B.
\end{equation} \end{c3} \noindent {\bf Proof. } Denote \[ {\cal J} (x,y)= K(x,y) + {\cal F}(x,y) + \int _0^x K(x,t) {\cal F}(t,y) dt . \] Then by (\ref{IE}), ${\cal J}=0$, hence ${\cal J}_{xx}={\cal J}_{yy}=0$. On the other hand, as \begin{eqnarray*} {\cal J}_{xx} &=& \mbox{} \frac {\partial ^2}{\partial x ^2}K +[ P(x) + ( \frac d{dx} K(x,x) + \frac {\partial }{\partial x} K(x,x))]{\cal F} (x,y) \\ & &\mbox{} - \sum_{k=0}^{\infty} \sum_{i=1}^{m_k} \lambda _k c_k^i \phi _i (x;\lambda _k , \lambda _k ) \phi _i^* (x;\lambda _k , \lambda _k ) \\ & &\mbox{} +K(x,x) \frac {\partial }{\partial x} {\cal F}(x,y) +\int _0^x \frac {\partial ^2}{\partial x^2}K(x,t) {\cal F}(t,y)dt, \end{eqnarray*} \begin{eqnarray*} {\cal J}_{yy}&=&\frac {\partial ^2 }{\partial y^2} K +({\cal F}(x,y)+\int _0^x K(x,t){\cal F}(t,y)dt )P(y) \\ & &\mbox{}+ \sum_{k=0}^{\infty} \sum_{i=1}^{m_k} \lambda _k c_k^i \phi _i (x;\lambda _k , \lambda _k ) \phi _i^* (x;\lambda _k , \lambda _k ) \\ & &\mbox{}-\int _0^x K(x,t)[ \sum_{k=0}^{\infty} \sum_{i=1}^{m_k} \lambda _k c_k^i \phi _i (t;\lambda _k , \lambda _k ) \phi _i^* (y;\lambda _k , \lambda _k )]dt, \end{eqnarray*} we have \begin{eqnarray*} 0 &=&\mbox{} {\cal J}_{xx}-{\cal J}_{yy} +{\cal J}P(y) \\ &=&\mbox{} \frac {\partial ^2}{\partial x^2} K-\frac {\partial ^2}{ \partial y^2}K +[ P(x)+ 2\frac d{dx}K(x,x)- \frac {\partial}{ \partial y} K|_{y=x} ]{\cal F}(x,y) \\ & &\mbox{}+K(x,x) \frac {\partial }{\partial x} {\cal F} + \int _0^x \frac {\partial ^2}{\partial x^2} K(x,t){\cal F} (t,y) dt \\ & &\mbox{} + \int _0^x K(x,t)[ \sum_{k=0}^{\infty} \sum_{i=1}^{m_k} c_k^i ( \lambda _k \phi _i (t;\lambda _k , \lambda _k )) \phi _i^* (y;\lambda _k , \lambda _k )]dt. 
\end{eqnarray*} Replacing $\lambda_k \phi _i (t;\lambda _k , \lambda _k ) $ by $ -\phi_i '' (t;\lambda _k ,\lambda _k ) +P(t) \phi _i (t; \lambda _k , \lambda _k ) $ in the last integral and using integration by parts twice, we obtain \begin{eqnarray*} 0 &=&\mbox{} {\cal J}_{xx} - {\cal J}_{yy}+{\cal J}P(y) \\ &=&\mbox{} \frac {\partial ^2}{\partial x^2} K -\frac {\partial ^2}{\partial y^2} K+(P(x)+2 \frac d{dx}K(x,x) ) {\cal F}(x,y) \\ & &\mbox{} +[ K(x,0) \frac {\partial}{\partial x}{\cal F}(x,y)|_{x=0} -\frac {\partial }{\partial y} K(x,y)|_{y=0}{\cal F} (0,y) ] \\ & &\mbox{} + \int _0^x [ \frac {\partial ^2}{\partial x^2} K(x,t)- \frac {\partial ^2}{\partial t^2} K(x,t) +K(x,t)P(t)]{\cal F}(t,y) dt . \end{eqnarray*} Finally, setting $ Q(x)=P(x)+2d/{dx} K(x,x) $, we have \begin{eqnarray} 0 &=&\mbox{} {\cal J}_{xx} -{\cal J}_{yy} +{\cal J}P(y)-Q(x){\cal J} \nonumber \\ &=&\mbox{} +[ K(x,0) \frac {\partial}{\partial x}{\cal F}(x,y)|_{x=0} -\frac {\partial }{\partial y} K(x,y)|_{y=0}{\cal F} (0,y) ] \nonumber \\ & & \mbox{}+[ \frac {\partial ^2}{\partial x^2} K -\frac {\partial ^2}{\partial y^2} K -Q(x)K+KP(y)] \label{Th3-5} \\ & &\mbox{}+\int _0^x [ \frac {\partial ^2}{\partial x^2} K -\frac {\partial ^2}{\partial t^2} K -Q(x)K+KP(t)] {\cal F}(t,y)dt , \nonumber \end{eqnarray} where the function \[ K(x,0) \frac {\partial}{\partial x}{\cal F}(x,y)|_{x=0} -\frac {\partial }{\partial y} K(x,y)|_{y=0}{\cal F} (0,y) \] vanishes if and only if (\ref{Th3-2}) holds. Then (\ref{Th3-5}) becomes an integral equation of the same type as (\ref{IE2}). Hence we have (\ref{Th3-1}). \par By (\ref{F}) and (\ref{IE}), we see that \[ K(0,0)=-{\cal F}(0,0) =-B^*( \sum_{k=0}^\infty \sum_{i=1}^{m_k} c_k^i \theta _i(k) \theta _i^*(k) )B . \] Then (\ref{Th3-3}) follows from the definition of $Q(x)$ given above and the fundamental theorem of calculus.
$\Box $ \par \newtheorem{c4}[c1]{Theorem} \begin{c4} \label{Th4} If $K(x,y)$ is determined by {\bf Theorem \ref{Th3} }, then, for every complex $\lambda $, $k\ge 0$ and $ 1\le l \le m_k $, the vector-valued function $\psi _l (x; \lambda , \lambda_k )$ defined by \begin{equation} \label{Th2.4-1} \psi _l (x; \lambda , \lambda_k )= \phi _l (x; \lambda , \lambda_k ) + \int _0^x K(x,t) \phi _l (t; \lambda , \lambda_k ) dt \end{equation} is a solution of the vectorial differential equation \begin{equation} \label{Th2.4-2} -\psi '' + Q(x) \psi = \lambda \psi , \mbox{} \hskip 1cm 0\le x\le \pi , \end{equation} where $Q(x)=P(x)+2d/dx K(x,x) $, and $ \psi _1 (x; \lambda_k , \lambda_k ) , \ldots , \psi _{m_k} (x; \lambda_k , \lambda_k )$ are linearly independent. In addition, it satisfies the following initial conditions: \begin{eqnarray} \psi _l (0; \lambda , \lambda_k ) &=&\mbox{} B^* \theta _l (k) , \label{Th2.4-3} \\ \psi _l '(0; \lambda , \lambda_k ) &=& \mbox{} (-A^* + K(0,0)B^* )\theta _l (k) \label{Th2.4-4} \\ &=&\mbox{}(-A^* -B^*( \sum_{r=0}^\infty \sum_{i=1}^{m_r} c_r^i \theta _i(r) \theta _i^*(r) )BB^*) \theta _l (k) , \nonumber \end{eqnarray} or, equivalently, \begin{equation} \label{Th2.4-5} B \psi _l '(0; \lambda , \lambda_k ) +\tilde{A} \psi _l (0; \lambda , \lambda_k )={\bf 0}, \end{equation} where \begin{equation} \label{Th2.4-6} \tilde{A} = A-BK(0,0).
\end{equation} \end{c4} \noindent {\bf Proof.} By (\ref{Th2.4-1}), we have \begin{eqnarray} \nonumber \psi _l '' (x; \lambda , \lambda_k ) &=&\mbox{}\phi _l '' (x; \lambda , \lambda_k )+[ \int _0^x K(x,t) \phi _l (t; \lambda , \lambda_k ) dt ]'' \\ &=&\mbox{}(P(x)-\lambda I)\phi _l (x; \lambda , \lambda_k ) +\int _0^x \frac {\partial ^2}{\partial x^2 }K(x,t) \phi _l (t; \lambda , \lambda_k )dt \label{Th2.4-7} \\ & &\mbox{}+ K(x,x) \phi _l '(x; \lambda , \lambda_k ) + \frac {\partial }{\partial x }K(x,t)|_{t=x} \phi _l (x; \lambda , \lambda_k ) , \nonumber \end{eqnarray} and \begin{eqnarray*} \lambda \psi _l (x; \lambda , \lambda_k ) &=& \mbox{} \lambda \phi _l (x; \lambda , \lambda_k ) + \int _0^x K(x,t) \lambda \phi _l (t; \lambda , \lambda_k ) dt \\ &=&\mbox{} \lambda \phi _l (x; \lambda , \lambda_k ) +\int _0^x K(x,t) \lambda Y (t; \lambda )\theta _l (k) dt \\ &=&\mbox{}\lambda \phi _l (x; \lambda , \lambda_k ) +\int _0^x K(x,t) [-Y ''(t; \lambda )+P(t)Y(t;\lambda ) ]\theta _l (k) dt \\ &=&\mbox{}\lambda \phi _l (x; \lambda , \lambda_k ) +\int _0^x K(x,t) P(t) \phi _l (t; \lambda , \lambda_k ) dt \\ & &\mbox{} -\int _0^x K(x,t)Y''(t;\lambda )\theta _l (k)dt . \end{eqnarray*} Using integration by parts twice on the last integral, we have \begin{eqnarray} \lambda \psi _l (x; \lambda , \lambda_k ) &=& \lambda \phi _l (x; \lambda , \lambda_k ) +\int _0^x K(x,t)P(t) \phi _l (t; \lambda , \lambda_k )dt \nonumber \\ & &\mbox{}+(-K(x,0)A^* - \frac {\partial }{\partial y} K|_{y=0}B^* ) \theta _l (k) \label{Th2.4-8} \\ & &\mbox{}-K(x,x) \phi _l '(x; \lambda , \lambda_k ) +\frac {\partial }{\partial y} K|_{y=x} \phi _l (x; \lambda , \lambda_k ) \nonumber \\ & &\mbox{}-\int _0^x \frac {\partial ^2}{\partial t^2} K(x,t) \phi _l (t; \lambda , \lambda_k ) dt.
\nonumber \end{eqnarray} Then, by {\bf Theorem \ref{Th3}}, we have \begin{eqnarray*} \lefteqn{ -\psi _l '' (x; \lambda , \lambda_k ) + (\lambda I -Q(x)) \psi _l (x; \lambda , \lambda_k )\mbox{} \hskip 1cm } \\ & &\mbox{\hskip 0.75cm}= (-K(x,0)A^* - \frac {\partial }{\partial y} K|_{y=0}B^* ) \theta _l (k) \\ & &\mbox{\hskip 1.25cm} + \int _0^x [ \frac {\partial ^2}{\partial x^2} K -\frac {\partial ^2}{\partial t^2} K -Q(x)K+KP(t)] \phi _l (t; \lambda , \lambda _k ) dt \\ & &\mbox{\hskip 0.75cm}={\bf 0. } \end{eqnarray*} Besides, for $ 1 \le l \le m_k $, \[ \psi _l (0; \lambda , \lambda_k ) =\phi _l (0; \lambda , \lambda_k ) =B^* \theta _l (k) , \] \begin{eqnarray*} \psi _l ' (0; \lambda , \lambda_k )&=&\phi _l ' (0; \lambda , \lambda_k ) +K(0,0)\phi _l (0; \lambda , \lambda_k ) \\ &=& \mbox{} (-A^* + K(0,0)B^* )\theta _l (k) \\ &=&\mbox{}(-A^* -B^*( \sum_{r=0}^\infty \sum_{i=1}^{m_r} c_r^i \theta _i(r) \theta _i^*(r) )BB^*)\theta _l (k) . \end{eqnarray*} If we denote $\tilde{A} =A-BK(0,0)$, then $B \tilde{A}^*=\tilde{A}B^* $, and, by using $BA^*=AB^*$, we have \begin{eqnarray*} \lefteqn{ B \psi _l ' (0; \lambda , \lambda_k ) +\tilde{A} \psi _l(0; \lambda , \lambda_k ) }\\ & &\mbox{ \hskip 0.7cm}= B( -A^* + K(0,0)B^* )\theta _l (k) +(A-BK(0,0))B^* \theta _l (k) \\ & &\mbox{ \hskip 0.7cm}= {\bf 0 } . \end{eqnarray*} The linear independence of the vector-valued functions $\psi _l (x; \lambda_k , \lambda_k ) , 1 \le l \le m_k $, can be proven by using (\ref{Th2.4-1}), Gronwall's lemma, and the linear independence of the functions $\phi _l (x; \lambda_k , \lambda_k ), 1\le l \le m_k $. $\Box $ \vskip 0.5cm At the end of this section, we state a theorem which indicates the possible candidates for the eigenfunctions of the isospectral problem $(Q,\tilde{A},B, \tilde{\cal A}, {\cal B})$ of $(P,A,B,{\cal A,B})$ which shall be described in the next section. Furthermore, we may also use this theorem to construct an isospectral Dirichlet problem from a given Dirichlet problem.
\par \newtheorem{c5}[c1]{Theorem} \begin{c5} \label{Th5} Suppose $\lambda _k \in \sigma ( P ,A ,B, {\cal A}, {\cal B} ) $, $ k \ge 0 $. Then \begin{eqnarray} \label{Th2.5-1} \psi _l (x; \lambda_k , \lambda_k )&=&\phi _l (x; \lambda_k , \lambda_k ) \\ & &\mbox{} -\sum _{r=0}^\infty \sum _{i=1}^{m_r} c_r^i \psi _i (x; \lambda _r , \lambda_r ) \int _0^x \phi _i^* (t; \lambda_r , \lambda_r ) \phi _l (t; \lambda _k , \lambda_k )dt \nonumber \end{eqnarray} for all $ 1 \le l \le m_k $. \end{c5} \noindent{\bf Proof.} The proof is similar to that of [3, {\bf Theorem 1.4}]. Denote $\phi_{i,k} (x) = \phi_i (x; \lambda_k ,\lambda_k )$. By (\ref{F}) and the integral equation (\ref{IE}), we have \begin{eqnarray} K(x,y) &=& - {\cal F}(x,y) - \int _0^x K(x,t){\cal F}(t,y) dt \nonumber \\ &=& - \sum _{r=0}^\infty \sum _{i=1}^{m_r} c_r^i [\phi_{i,r} (x)+ \int_0^x K(x,t)\phi_{i,r} (t)dt]\phi_{i,r}^* (y) . \nonumber \end{eqnarray} Then, by (\ref{Th2.4-1}), we have \begin{equation} \label{Th2.5-2} K(x,t)= -\sum _{r=0}^\infty \sum _{i=1}^{m_r} c_r^i \psi _i (x; \lambda _r , \lambda_r ) \phi _i^* (t; \lambda_r , \lambda_r ). \end{equation} Applying (\ref{Th2.5-2}) to (\ref{Th2.4-1}) yields (\ref{Th2.5-1}). $\Box $ \par \section{ Isospectral problem } \par The theorems in the previous section enable us to construct an isospectral problem from a given eigenvalue problem $(P,A,B,{\cal A,B})$ and a sequence of real numbers $c_k^i, 1\le i \le m_k , k\ge 0 $, where the sequence $(c_k^i )$ satisfies the assumption of {\bf Theorem \ref{Th2.2}}. As {\bf Theorem \ref{Th4}} states, for any $k \ge 0 $ and each $l$, $1\le l \le m_k $, the vector-valued function $\psi _l (x; \lambda_k , \lambda_k )$ satisfies the boundary condition \[ B\psi '(0)+ \tilde{A}\psi (0) = {\bf 0}, \] where $\tilde{A}$ is given by (\ref{Th2.4-6}). Hence, the final step in constructing the isospectral problem is to determine the form of the boundary condition to be satisfied at $x=\pi $.
For this purpose we use formula (\ref{Th2.5-1}). By (\ref{Th2.5-1}), we have \begin{equation} \label{a3.1} \psi _l (\pi ; \lambda_k , \lambda_k )= \frac {\phi _l (\pi ; \lambda_k , \lambda_k )}{1+c_k^l||\phi _l (\cdot ; \lambda_k , \lambda_k )||^2 }. \end{equation} Differentiating (\ref{Th2.5-1}) with respect to $x$ and evaluating at $x=\pi $, we have \begin{eqnarray*} \psi _l ' (\pi ; \lambda_k , \lambda_k )&=& \phi _l '(\pi ; \lambda_k , \lambda_k ) - c_k^l \psi _l '(\pi ; \lambda_k , \lambda_k ) ||\phi _l (\cdot ; \lambda_k , \lambda_k )||^2 \\ & &\mbox{}- [\sum_{r=0}^\infty \sum_{i=1}^{m_r} c_r^i \psi _i (\pi ; \lambda_r , \lambda_r ) \phi _i^* (\pi ; \lambda_r , \lambda_r )] \phi _l (\pi ; \lambda_k , \lambda_k ) , \end{eqnarray*} and hence \begin{eqnarray*} \lefteqn{(1+c_k^l||\phi _l (\cdot ; \lambda_k ,\lambda_k )||^2 ) \psi _l ' (\pi ; \lambda_k , \lambda_k )} \\ &=& \phi _l ' (\pi ; \lambda_k , \lambda_k ) -[\sum_{r=0}^\infty \sum_{i=1}^{m_r} \frac { c_r^i \phi _i (\pi ; \lambda_r, \lambda_r ) \phi _i^* (\pi ; \lambda_r, \lambda_r )}{ 1+c_r^i||\phi _i (\cdot ; \lambda_r ,\lambda_r )||^2}] \phi _l (\pi ; \lambda_k , \lambda_k ) . \end{eqnarray*} Acting on the above identity by ${\cal B}$ and using the condition $ {\cal B} \phi_l' (\pi ; \lambda_k , \lambda_k ) + {\cal A}\phi_l (\pi ; \lambda_k , \lambda_k )=0 $, we have \begin{eqnarray*} \lefteqn{{\cal B}(1+c_k^l||\phi _l (\cdot ; \lambda_k ,\lambda_k )||^2 ) \psi _l ' (\pi ; \lambda_k , \lambda_k )} \\ &=& {\cal B} \phi _l ' (\pi ; \lambda_k , \lambda_k ) -{\cal B} [\sum_{r=0}^\infty \sum_{i=1}^{m_r} \frac { c_r^i \phi _i (\pi ; \lambda_r, \lambda_r ) \phi _i^* (\pi ; \lambda_r, \lambda_r )}{ 1+c_r^i||\phi _i (\cdot ; \lambda_r ,\lambda_r )||^2}] \phi _l (\pi ; \lambda_k , \lambda_k ) \\ &=& -({\cal A}+{\cal B} [\sum_{r=0}^\infty \sum_{i=1}^{m_r} \frac { c_r^i \phi _i (\pi ; \lambda_r, \lambda_r ) \phi _i^* (\pi ; \lambda_r, \lambda_r )}{ 1+c_r^i||\phi _i (\cdot ; \lambda_r ,\lambda_r )||^2}] ) \phi _l (\pi ; \lambda_k , \lambda_k ).
\end{eqnarray*} Then, by (\ref{a3.1}), we have \[ {\cal B} \psi _l ' (\pi ; \lambda_k , \lambda_k ) = -({\cal A+B} [\sum_{r=0}^\infty \sum_{i=1}^{m_r} \frac { c_r^i \phi _i (\pi ; \lambda_r, \lambda_r ) \phi _i^* (\pi ; \lambda_r, \lambda_r )}{ 1+c_r^i||\phi _i (x; \lambda_r ,\lambda_r )||^2}] ) \psi _l (\pi ; \lambda_k , \lambda_k ) , \] and hence \[ {\cal B} \psi _l ' (\pi ; \lambda_k , \lambda_k ) +\tilde{{\cal A}} \psi _l (\pi ; \lambda_k , \lambda_k ) = {\bf 0} , \] where \begin{equation} \label{a3.2} \tilde{{\cal A}}= {\cal A+B} [\sum_{r=0}^\infty \sum_{i=1}^{m_r} \frac { c_r^i \phi _i (\pi ; \lambda_r, \lambda_r ) \phi _i^* (\pi ; \lambda_r, \lambda_r )}{ 1+c_r^i||\phi _i (x; \lambda_r ,\lambda_r )||^2}] . \end{equation} In fact, by (\ref{Th2.5-2}), (\ref{a3.2}) can be simplified as \begin{equation} \label{a3.3} \tilde{\cal A} = {\cal A}- {\cal B} K(\pi ,\pi) . \end{equation} Furthermore, if we can prove that, for any ${\bf R}^N$-valued function $f$ satisfying $ Bf'(0)+ \tilde{A} f(0) ={\bf 0} $, $ {\cal B} f' (\pi ) + \tilde{\cal A} f (\pi ) ={\bf 0} $, and \[ \int_0^\pi \langle f(x), \psi _l ( x ; \lambda _k , \lambda _k ) \rangle dx =0, \hskip 0.5cm k \ge 0, \hskip 0.25cm l=1, \ldots , m_k, \] we have $f \equiv {\bf 0}$, then the set $ \{ \psi _l ( x ; \lambda _k , \lambda _k ): k\ge 0, 1\le l \le m_k \} $ is complete, and, by {\bf Theorem \ref{Th4}}, we have $\Sigma (P, A, B, {\cal A, B}) = \Sigma (Q, \tilde{A} , B, \tilde{\cal A} , {\cal B})$. For our purpose, let \[ T(f)(x)= \int _0^x K(x,t) f(t) dt. \] Then \[ T^* (g)(t)= \int _t^\pi K^*(x,t)g(x)dx. 
\] Writing $ \psi _l ( x ; \lambda _k , \lambda _k ) = (I+T) \phi _l ( x ; \lambda _k , \lambda _k ) $, $ 1\le l \le m_k $, we have \begin{eqnarray*} 0= \int_0^\pi \langle f(x), \psi _l ( x ; \lambda _k , \lambda _k ) \rangle dx &=&\int_0^\pi \langle f(x), (I+T) \phi _l ( x ; \lambda _k , \lambda _k ) \rangle dx \nonumber \\ &=&\int_0^\pi \langle (I+T^*)f(x), \phi _l ( x ; \lambda _k , \lambda _k ) \rangle dx. \end{eqnarray*} Now set $g= (I+T^*)f $. If we show that $ Bg'(0)+ Ag(0) ={\bf 0}$ and ${\cal B}g'(\pi) +{\cal A}g(\pi )={\bf 0} $, then by the completeness of the set $\{ \phi _l ( x ; \lambda _k , \lambda _k ) : k \ge 0, 1\le l \le m_k \} $, we have $g \equiv {\bf 0} $ and hence $f \equiv {\bf 0} $. We only check the identity $Bg'(0) +Ag(0)={\bf 0} $; the other part can be proved by a similar argument. Suppose $Bf'(0)+\tilde{A} f(0) = {\bf 0} $. Then, using (\ref{Th3-2}) and (\ref{Th2.4-6}), we have \begin{eqnarray*} Bg'(0)+Ag(0) &=& B[ f'(0)-K^*(0,0)f(0) \\ & &\mbox{} + \int _0^\pi K^*_t (x,0) f(x)dx ] + A[f(0)+\int _0^\pi K^* (x,0) f(x) dx ] \\ &=& Bf'(0)-BK^* (0,0)f(0) \\ & &\mbox{}+Af(0)+ \int _0^\pi [BK^*_t (x,0) +AK^* (x,0) ]f(x) dx \\ &=& Bf'(0)+ [A-BK(0,0)]f(0)= Bf'(0)+\tilde{A} f(0)= {\bf 0}. \end{eqnarray*} As a conclusion of the previous arguments, we have the following theorem. \newtheorem{c6}{Theorem}[section] \begin{c6} \label{Th6} Let $m_k$ denote the multiplicity of $\lambda _k $ in $\sigma ( P, A, B, {\cal A, B} )$. Suppose $\{ c_k^i , 1\le i \le m_k , k\ge 0 \} $ is a sequence satisfying the condition (\ref{Th2-c}) and making ${\cal F}(x,y)$ in (\ref{F}) a $C^2$-function, $Q(x)$ is as defined in {\bf Theorem 2.3}, and $ \tilde{A} $ and $\tilde{ \cal A} $ are as defined in (\ref{Th2.4-6}) and (\ref{a3.3}). Then $\Sigma (P, A, B, {\cal A , B} ) = \Sigma (Q, \tilde{A}, B, \tilde{\cal A} ,{\cal B}) $. 
\end{c6} As a final remark, we note that if $A=I, B=0, {\cal A}=I$, and ${\cal B}=0 $ in (\ref{a1.1}), then the matrices $\tilde{A}$ and $\tilde{\cal A}$ in {\bf Theorem \ref{Th6} } are equal to $I$, the identity matrix. Hence, for a given Dirichlet problem $(P, I,0,I,0)$, the isospectral problem constructed in {\bf Theorem \ref{Th6}} is also a Dirichlet problem. \section{ Examples } In this section, we use our theory to construct some examples which exhibit phenomena that the scalar case cannot reveal. Suppose $\lambda_\circ $ is an eigenvalue of (\ref{a1.1}) with multiplicity $m_\circ $. Let $ \phi_\circ (x) = \mbox{col} ( \phi _1 (x), \phi_2 (x), \cdots, \phi_N (x)) $ be an eigenfunction corresponding to $\lambda _\circ $. Take \begin{eqnarray*} {\cal F} (x,y) &= & c \phi _\circ (x) \phi_\circ^* (y) \\ &= & c \left ( \begin{array}{cccc} \phi_1 (x) \phi_1 (y) &\phi_1 (x) \phi _2(y) & \cdots & \phi_1 (x) \phi_N (y) \\ \phi_2 (x) \phi_1 (y) &\phi_2 (x) \phi _2(y) & \cdots & \phi_2 (x) \phi_N (y) \\ \vdots & \vdots & \ddots &\vdots \\ \phi_N (x) \phi_1 (y) &\phi_N (x) \phi _2(y) & \cdots & \phi_N (x) \phi_N (y) \\ \end{array} \right ) . \end{eqnarray*} Plugging it into (\ref{IE}), and letting $k_{ij} (x,y) $ denote the $(i,j)$ entry of $K(x,y)$, we have \begin{equation} k_{ij} (x,y) + c \phi_i (x) \phi_j (y) +(c \int _0^x (\sum _{r=1}^N k_{ir} (x,t) \phi _r (t))dt) \phi _j (y) =0 , \end{equation} for $ i=1,\ldots ,N$ and $j=1,\ldots ,N $. We shall show that \begin{equation}\label{a4.1} k_{ij} (x,y) =- \frac {c \phi_i (x) \phi_j (y)}{ 1+ c \int _0^x |\phi _\circ (t)|^2 dt } . \end{equation} For $i$ fixed, consider the equations \begin{equation} \label{a4.2} k_{ij} (x,y) + c \phi_i (x) \phi_j (y) +(c \int _0^x (\sum _{r=1}^N k_{ir} (x,t) \phi _r (t))dt) \phi _j (y) =0, \end{equation} $ 1\le j \le N$. 
Multiplying the $j$th equation by $\phi _j (y) $, integrating it from $0$ to $x$ with respect to $y$, and denoting $ p_{ij} (x) = \int _0^x k_{ij} (x,t) \phi _j (t) dt $ and $ \alpha _j (x) = \int _0^x \phi_j^2 (t)dt $, $ 1\le j \le N $, we have the following linear system of equations with unknowns $p_{ij} (x) $, \begin{equation} p_{ij} (x) + c \phi _i (x) \alpha _j (x) + c \alpha _j (x)( \sum _{r=1}^N p_{ir} (x) ) =0, \quad 1 \le j \le N . \end{equation} Solving this system, we obtain \begin{equation} \label{a.4.4} p_{ij} (x)= - \frac {c \phi _i (x) \alpha _j (x) }{1+ c \int _0^x |\phi _\circ (t)|^2 dt }, \quad 1 \le j \le N. \end{equation} Note that, by (\ref{a4.2}), \[ k_{ij} (x,y) =- c \phi_i (x) \phi_j (y) -(c \int _0^x (\sum _{r=1}^N k_{ir} (x,t) \phi _r (t))dt) \phi _j (y) . \] Hence if we plug (\ref{a.4.4}) into (\ref{a4.2}), we obtain (\ref{a4.1}). It also follows from (\ref{a4.1}) that \[ K(x,y) = -\frac 1{1+ c \int _0^x |\phi _\circ (t)|^2 dt }{\cal F}(x,y). \] Hence, according to {\bf Theorem \ref{Th6}}, by setting \begin{eqnarray} Q(x) &= & P(x) - 2 \frac d{dx} [ \frac {c \phi _\circ (x) \phi_\circ^* (x)} {1+ c \int _0^x |\phi _\circ (t)|^2 dt } ], \nonumber \label{a4.3} \\ \tilde{A} &=& A-cB\phi _\circ (0) \phi_\circ^* (0), \\ \tilde{\cal A}&=& {\cal A} -\frac {c\phi _\circ (\pi) \phi_\circ^* (\pi)}{1+ c ||\phi _\circ ||^2}, \nonumber \end{eqnarray} we have $\Sigma (P,A, B, {\cal A , B} ) = \Sigma (Q, \tilde{A}, B, \tilde{\cal A} ,{\cal B}) $. As an example of the above construction, we construct the following eigenvalue problem which has an eigenvalue of multiplicity $2$. Let $I$ be the $2 \times 2$ identity matrix. Take \[ P(x) = \left ( \begin{array}{cc} -3 & 0 \\ 0 & 0 \end{array} \right ) , \mbox{\hskip 0.25cm} A={\cal A}= I,\mbox{\hskip 0.25cm} B={\cal B}=0. \] Then one can verify that for the eigenvalue problem $ (P,I,0,I,0)$, $1$ is an eigenvalue of multiplicity $2$, and the other eigenvalues are all simple. 
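This last claim is easy to check numerically. The following sketch (our illustration, not part of the paper) assumes the system has the standard Sturm--Liouville form $-y'' + P(x)y = \lambda y$ on $[0,\pi]$ with Dirichlet conditions, and verifies by finite differences that both $(\sin 2x, 0)^*$ and $(0, \sin x)^*$ are eigenfunctions for $\lambda = 1$, so that $1$ has multiplicity $2$:

```python
import numpy as np

# Numerical sanity check of the claim above: for P = diag(-3, 0) with
# Dirichlet conditions on [0, pi], both (sin 2x, 0)^T and (0, sin x)^T
# solve -y'' + P y = 1 * y, so the eigenvalue 1 has multiplicity 2.
P = np.diag([-3.0, 0.0])
x = np.linspace(0.0, np.pi, 2001)
h = x[1] - x[0]

def ode_residual(y):
    """Max abs of -y'' + P y - 1*y at interior points (2nd-order differences)."""
    ypp = (y[:, 2:] - 2.0 * y[:, 1:-1] + y[:, :-2]) / h**2
    return np.abs(-ypp + P @ y[:, 1:-1] - y[:, 1:-1]).max()

y1 = np.vstack([np.sin(2.0 * x), np.zeros_like(x)])
y2 = np.vstack([np.zeros_like(x), np.sin(x)])

for y in (y1, y2):
    assert np.abs(y[:, [0, -1]]).max() < 1e-12  # Dirichlet: y(0) = y(pi) = 0
    print(ode_residual(y))  # small: finite-difference error only
```

The residuals are at the level of the discretization error, confirming that $\lambda=1$ is attained by two linearly independent eigenfunctions.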
The eigenspace corresponding to $1$ is the vector space spanned by the two vector-valued functions $ ( \sin (2x),0 )^*$ and $(0, \sin (x) )^* $. Choosing $ (\sin(2x) ,\sin(x))^*$ as the eigenfunction which plays the role of $\phi_\circ (x)$ in the above construction, taking $c=1$, and using (\ref{a4.3}), we have \begin{eqnarray*} Q(x)= \left ( \begin{array}{cc} -3 & 0 \\ 0 & 0 \end{array} \right )&-& \frac d{dx}[ \frac {2}{1+ \int _0^x (\sin^2 (t) + \sin^2 (2t))dt } \\ & &\cdot \left ( \begin{array}{cc} \sin^2 (2x) & \sin (2x) \sin (x) \\ \sin (x) \sin (2x) & \sin ^2 (x) \end{array} \right ) ] , \end{eqnarray*} and $ \tilde{A} = \tilde{\cal A}=I $. Note that the matrix potential function $Q(x)$ is not simultaneously diagonalizable since, as checked by computation, the matrix-valued functions $Q(x)$ and $Q' (x) $ do not commute. On the other hand, if we take $(0, \sin (x))^* $ instead, then we find that $ Q (x) $ is a diagonal matrix-valued function of the following form \[ Q(x) = \left ( \begin{array}{cc} -3 & 0 \\ \mbox{} & \mbox{} \\ 0 & \frac d{dx} (\frac {-2 \sin ^2 (x)}{ 1+ \int _0^x \sin^2 (t) dt } ) \end{array} \right ). \] There are many interesting phenomena that can be observed from our construction; we shall investigate them in subsequent work. \noindent {\bf Acknowledgements. } (i) The author expresses his gratitude to his Ph.D. adviser, Professor C. L. Shen, for his instruction. (ii) The author became aware that Professor B. M. Levitan and Max Jodeit also obtained analogous results in \cite{JL3}. \vskip 0.25cm
\section{Conclusion} \label{sec:conclusion} Gaze data acquisition is time consuming and prone to inaccuracies. To reduce the impact of these limitations on the field of saliency estimation, we introduce a model that quantifies the uncertainty in training data for saliency prediction, and a new dataset for testing our model, which is the first to offer a video-game context to the video saliency community. We show that NAT consistently leads to an improvement in the quality of the predicted saliency maps, especially when few videos and observers are available. The adoption of NAT has important practical implications, as it prevents overfitting to noise in gaze data while training saliency models, and therefore allows training with a reduced amount of data, both in terms of videos and number of observers, without loss of quality. Beyond being applied for training, our model can also be extended to encompass the case of evaluation in the future. \vspace{-0.05in} \section*{Acknowledgments} We thank Abhishek Badki and Shoaib Ahmed Siddiqui for help with distributed training, Jan Kautz and Arash Vahdat for technical discussions, Michael Stengel for help with gaze-recording software, Paul Weakliem and Nathan Rogers for UCSB computational infrastructure, and the Fortnite players and observers for their help with creating ForGED. This work was supported in part through computational facilities purchased using NSF grant OAC-1925717 and administered by the Center for Scientific Computing (CSC) at UCSB. The CSC is supported by the California NanoSystems Institute and the Materials Research Science and Engineering Center (MRSEC; NSF DMR 1720256). \section{Introduction} \label{sec:introduction} \begin{figure*}[h!] 
\centering \includegraphics[width=0.9\textwidth]{figures/teaser/teaser} \caption{We propose a Noise-Aware Training (NAT) paradigm for video-saliency prediction to address the problem of noise in the training gaze data, which arises out of inaccurate or incomplete human-gaze capture process. We demonstrate the consistent advantage of NAT compared to traditional training (TT) across different DNN architectures (\eg, TASED~\cite{Min19}, SalEMA~\cite{Lin19}), on different datasets (\eg, our newly introduced ForGED, LEDOV~\cite{Jia18}, DIEM~\cite{Mit11}), and using different discrepancy functions (KLD, NSS, a mix of KLD, NSS and CC). Here we show some visual examples. The leftmost images show example frames from each dataset we evaluate NAT on, with an overlay of the saliency maps captured using human gaze data (color scheme shown at the bottom left). Each $2\times2$ array of images on the right shows specific experiments, where we vary the DNN architectures, discrepancy functions, and datasets used for training. The training method (TT or NAT) is mentioned at the bottom, and the performance metrics (KLD, NSS) are reported at the top of each image. Within each $2\times2$ subset of images, the amount of training data is increased when moving from the bottom to the top row. As is evident, NAT is especially advantageous in cases when less training gaze data is available. \small{\textit{ForGED images have been published with the written permission of Epic Games.}}} \label{fig:teaser} \end{figure*} The human eye perceives high-frequency details within a small solid angle around the viewing direction~\cite{Gei98,Dez16}. To form the mental image of a scene, the eyes explore, or ``fixate on,'' different regions. Predicting the spatial distribution of the human-gaze locations has applications including image or video compression~\cite{Bae20} and foveated rendering~\cite{Kim19, Kap19}, among others. 
Early methods achieved this using low- or mid-level~\cite{Itt98,Itt00,Koc87} image features, while recent deep-learning (DL) approaches leverage higher-level priors~\cite{Hua15,Kum17,Kum14,Pan17,Wan17,Jia20,Kru17,Liu18}, to obtain a saliency map: an estimate of the probability distribution of gaze over all the pixels of an image. Capturing gaze data to train DL algorithms is becoming increasingly easy with the recent improvements in gaze-tracking technologies. However, large-scale gaze data acquisition remains a time-consuming, expensive procedure affected by a number of sources of noise, from physiological nystagmus (the involuntary jittering of the eyes), to inaccurate localization of the Purkinje reflection (the reflection of the IR illuminator used to track the eye), to calibration issues~\cite{Fei17}. Collectively, we refer to these sources of noise as \emph{measurement noise}, which result in uncertainty about the spatial localization of the gaze of any single observer. Capturing gaze data on videos is even more challenging: typically, only one gaze location per observer is collected per video frame. This introduces a trade-off. For a given time/cost budget, one can either collect gaze data from many observers for the same video frame to get dense saliency maps for few frames, or capture fewer gaze fixations per frame to get sparse saliency maps for many frames. As a result, the measured saliency maps of large-scale video-saliency datasets are typically based on fewer fixations per frame. We refer to this source of inaccuracies as \emph{incomplete sampling}: we have partial observations of the underlying complete saliency map for each video frame. One effect of incomplete sampling is that the masses of different modes in a measured saliency map may be inaccurate, as they are estimated from few observations. 
The complex interaction of measurement noise and incomplete sampling impacts the reliability of the training gaze data in each frame, depending on the number of observers and the specific saliency map. This is particularly apparent when few observers are available, as in the case of video saliency prediction. We propose a novel training paradigm that leverages this observation by accounting for the reliability of each measured saliency map. To this aim, we introduce a simple model that quantifies the uncertainty in a measured saliency map, given the number of fixations. Instead of directly minimizing the discrepancy $d$ between the measured and predicted saliency maps, as is typically done, we interpret $d$ as a random variable and train the saliency predictor through likelihood maximization. We call this proposed training paradigm \emph{noise-aware training} (NAT). Through several experiments, we show that NAT avoids overfitting towards noisy saliency maps, weighs training frames according to their reliability, and consistently yields improved saliency predictions over traditional training methods, for different datasets, deep neural networks (DNN), and training discrepancy $d$ (Fig.~\ref{fig:teaser}), \emph{especially when few observers or frames are available for training}. Therefore, NAT provides a promising approach to effectively train saliency predictors with limited data. We further notice that existing video-saliency datasets, typically based on natural scenes, often consist of mostly-static content, for which even image-saliency prediction methods can provide a good result~\cite{Tan20}. Consequently, assessing how video saliency prediction methods would fare on aspects specific to \textit{videos}, such as temporally-evolving content, is difficult. 
Therefore, we also introduce the Fortnite Gaze Estimation Dataset (ForGED), a video-saliency dataset containing clips obtained from game-play videos of Fortnite, a third-person-shooter game amassing hundreds of millions of players worldwide. With ForGED, we contribute a novel dataset with unique characteristics: fast temporal dynamics, semantically-evolving content, multiple attractors of attention, and gaming context. \section{The ForGED dataset} \label{sec:method_fortneyetd} When compared to natural videos in LEDOV and DIEM, videogames represent an interesting, diverse, and challenging application area for saliency methods. It is interesting because of its large market value. It is diverse and challenging because of the varying visual appearance, the fast temporal dynamics, the plurality of the objects of interest in a scene, and the dependence of human gaze on temporal semantics --- something that is missing in many natural-scene video datasets~\cite{Tan20}. For these reasons, we introduce ForGED, a dataset with gaze data for video clips of Fortnite, a third-person shooter videogame counting hundreds of millions of players worldwide, and even more YouTube followers. 
\begin{figure}[t] \centering \begingroup \setlength{\tabcolsep}{0pt} \renewcommand{\arraystretch}{0.1} \begin{tabular}{ccc} \scriptsize{Measured saliency} $\tilde{x}_i$ & \scriptsize{Sampled saliency $\tilde{\tilde{x}}_i$} & \scriptsize{Sampled saliency $\tilde{\tilde{x}}_i$}\\ \scriptsize{15 observers} & \scriptsize{5 observers} & \scriptsize{2 observers} \vspace{0.1cm}\\ \includegraphics[width=0.16\textwidth,trim={1.5cm 0.5cm 1.5cm 0.5cm},clip]{figures/forged/figs/fortnite_1_gt.png} & \includegraphics[width=0.16\textwidth,trim={1.5cm 0.5cm 1.5cm 0.5cm},clip]{figures/forged/figs/fortnite_1_a.png} & \includegraphics[width=0.16\textwidth,trim={1.5cm 0.5cm 1.5cm 0.5cm},clip]{figures/forged/figs/fortnite_1_b.png} \vspace{-0.45cm} \\ \textcolor{blue}{$0.197 \pm 0.077$} & \textcolor{blue}{$0.261 \pm 0.136$} & \textcolor{blue}{$0.478 \pm 0.418$} \vspace{0.1cm} \\ \includegraphics[width=0.16\textwidth,trim={1.5cm 0.5cm 1.5cm 0.5cm},clip]{figures/forged/figs/fortnite_2_gt.png} & \includegraphics[width=0.16\textwidth,trim={1.5cm 0.5cm 1.5cm 0.5cm},clip]{figures/forged/figs/fortnite_2_a.png} & \includegraphics[width=0.16\textwidth,trim={1.5cm 0.5cm 1.5cm 0.5cm},clip]{figures/forged/figs/fortnite_2_b.png} \vspace{-0.45cm} \\ \textcolor{blue}{$0.372 \pm 0.106$} & \textcolor{blue}{$0.652 \pm 0.324$} & \textcolor{blue}{$1.079 \pm 0.691$} \vspace{0.1cm} \\ \includegraphics[width=0.16\textwidth,trim={1.5cm 0.5cm 1.5cm 0.5cm},clip]{figures/forged/figs/fortnite_3_gt.png} & \includegraphics[width=0.16\textwidth,trim={1.5cm 0.5cm 1.5cm 0.5cm},clip]{figures/forged/figs/fortnite_3_a.png} & \includegraphics[width=0.16\textwidth,trim={1.5cm 0.5cm 1.5cm 0.5cm},clip]{figures/forged/figs/fortnite_3_b.png} \vspace{-0.45cm} \\ \textcolor{blue}{$0.442 \pm 0.112$} & \textcolor{blue}{$0.585 \pm 0.277$} & \textcolor{blue}{$1.018 \pm 0.630$} \vspace{0.1cm} \\ \includegraphics[width=0.16\textwidth,trim={1.5cm 0.5cm 1.5cm 0.5cm},clip]{figures/forged/figs/fortnite_4_gt.png} & 
\includegraphics[width=0.16\textwidth,trim={1.5cm 0.5cm 1.5cm 0.5cm},clip]{figures/forged/figs/fortnite_4_a.png} & \includegraphics[width=0.16\textwidth,trim={1.5cm 0.5cm 1.5cm 0.5cm},clip]{figures/forged/figs/fortnite_4_b.png} \vspace{-0.45cm}\\ \textcolor{blue}{$0.517 \pm 0.154$} & \textcolor{blue}{$0.937 \pm 0.453$} & \textcolor{blue}{$0.947 \pm 0.722$} \\ \end{tabular} \endgroup \caption{Typical frames from ForGED. The first column shows the measured 15-observer saliency map $\tilde{x}$ in red; the second and third columns show sampled $\tilde{\tilde{x}}_i$ maps with 5 and 2 observers, respectively. We report in each panel the corresponding $\widetilde{\E}[\text{KLD}(\tilde{x}_i||\tilde{\tilde{x}}_i)] \pm\widetilde{\text{Std}}[\text{KLD}(\tilde{x}_i||\tilde{\tilde{x}}_i)]$. As shown here, these quantities increase when the saliency map becomes sparse, and the number of observers shrinks, making frames less reliable for training. \small{\textit{ForGED images have been published with the written permission of Epic Games.}} } \label{fig:datasets} \end{figure} \begin{table*}[h!] \centering \resizebox{0.9\textwidth}{!}{ \begin{tabular}{c|cccccc} Dataset & Videos & Frames & Resolution & Max. Obs. for training & Obs. for testing & Content type \\ \hline DIEM~\cite{Mit11} & 60 & 240,015 & 720p@30Hz & 31 & 51-219 & everyday videos (trailers, ads, ...) \\ LEDOV~\cite{Jia18} & 538 & 179,336 & 720p@24Hz & 32 & 32 & human/animal activities \\ ForGED (ours) & 480 & 374,400 & 1080p@60Hz & 5 - 15 & 15-21 & videogame (Fortnite) \\ \end{tabular}} \vspace{-0.5em} \caption{Characteristics of video-saliency datasets, including the proposed ForGED dataset. We report the maximum number of observers available in the training set for each dataset and the total number of observers available in the test sets.} \label{tab:datasets} \vspace{-0.1in} \end{table*} The gaze acquisition process was twofold. 
The first phase was game-play video recording: 8 players (7 amateurs, 1 professional) volunteered to play in Fortnite Battle Royale mode, seasons 9 and 10, for a total of 12 hours of videos collected through OBS~\cite{OBD}. In the second phase (spectator recording), we split the game-play videos into 480 clips, each 15s long, and shuffled them in random sequences of 48 clips each. The game-play clips were separated by a 3s interval with a central red dot on a grey screen, with the aim of having a consistent gaze starting point during each game-play clip. 102 volunteers, approximately half of whom were familiar with Fortnite, observed the sequences, sitting at approximately 80cm from the display, wearing headsets for the game sounds, while we collected their gaze using a Tobii Eye Tracker 4C at 90Hz. Each session lasted less than 15 minutes, to avoid fatigue. $5-21$ observers were captured for each clip (see Table~\ref{tab:datasets}). After analyzing the typical gaze patterns, we discarded the initial two seconds of each clip, during which observers were mostly familiarizing themselves with the context, obtaining a total of 374,400 video frames with reliable gaze data. The 380 training clips contain 5 observers per frame, while the validation (25) and testing (75) clips have 15 to 21 observers to reduce the effect of noise.\footnote{Although we collected players' gaze data as well, we excluded it from further processing, as game playing and spectating are different visual tasks that generate different fixation patterns. } The main characteristics of ForGED are reported in Table~\ref{tab:datasets}. Fig.~\ref{fig:datasets} shows typical ForGED testing frames. Some measured 15-observer saliency maps $\tilde{x}_i$ are overlaid in red in the first column: they present a large variety. 
Some are unimodal: in the first row, the player is aiming at an enemy; the estimated bias $\widetilde{\E}[\text{KLD}(\tilde{x}_i, \tilde{\tilde{x}}_i)]$ and standard deviation $\widetilde{\text{Std}}[\text{KLD}(\tilde{x}_i, \tilde{\tilde{x}}_i)]$ are small, and they grow slightly if the number of observers shrinks (second and third columns). This kind of frame is considered highly informative in NAT. Other saliency maps are bi- or tri-modal. In the second row, human attention is mostly on other characters and the top-right mini-map; in the third row, the observers are focused on text and the main character. When compared to the first row, these frames have larger bias and standard deviation on $\text{KLD}$, because of the sparsity of the fixations. In the last row, the character is running in an uninteresting scenario: consequently, the fixation map is exploratory, with random locations covered by the observers. The sparse fixation density of such saliency maps is associated with higher levels of bias and standard deviation in $\text{KLD}$, which render such frames less reliable according to the NAT paradigm. \section{Noise-Aware Training (NAT)} \label{sec:method_NAT} \begin{figure} \centering \includegraphics[width=0.9\columnwidth, trim=0cm 0cm 0cm 0cm, clip=true]{figures/NAT_overview/NAT_overview} \vspace{-0.2cm} \caption{NAT vs. traditional training for saliency prediction. A measured saliency map (obtained using gaze recordings) typically captures a noisy version of the true, unobservable ground-truth (GT) saliency. Traditional deep-learning methods train a deep neural network (DNN) to minimize a discrepancy function $d$ between the measured and predicted saliency maps, which can lead to noise overfitting, especially when little gaze data is available. 
With our proposed NAT, we estimate the distribution of $d$ to quantify inaccuracies in measured saliency and optimize the DNN for likelihood of $d$.} \label{fig:NAT_overview} \vspace{-0.1in} \end{figure} \begin{figure*}[h!] \centering \includegraphics[width=0.45\textwidth, trim=0cm 0cm 0cm 0cm, clip=true]{figures/toy_example/toyExample/Slide1.PNG} \includegraphics[width=0.45\textwidth, trim=0cm 0cm 0cm 0cm, clip=true]{figures/toy_example/toyExample/Slide2.PNG} \caption{A toy example to motivate NAT. Saliency maps in training datasets are built by first sampling ``fixations'' (red / blue circles in (a, h)) from the ground truth map (dashed black curves in a,h), then blurring (b, e, i, l), summing, and normalizing to obtain the resulting curves (d, g, j, m). When a limited number of observers is available (\eg, 3, in red), the resulting training map $\tilde{x}_i$ may be shifted or differ in shape from the ground truth $x_i$. Even when using a high number of fixations (\eg, 50, in blue), the expected $\tilde{x}_i$ differs from $x_i$ because of the blurring. Panels d, g, k, n show the expected value and standard deviation for $\tilde{x}_i$, compared to $x_i$. Because of this discrepancy, the statistics $\E[\text{KLD}(x_i,\tilde{x}_i)]$ and $\text{Var}[\text{KLD}(x_i,\tilde{x}_i)]$ are non-zero, and they are larger when few observers are available and also when $x_i$ has a complex shape (\eg, multimodal), which makes $x_i$ more susceptible to inaccurate approximation using $\tilde{x}_i$. The curves considered here are: a Gaussian centered at $\mu=50$, $\sigma=5$; and a mixture with two components at $\mu=[25, 75]$, probabilities $P=[0.3, 0.7]$, and $\sigma=5$.} \label{fig:toy_example_new} \end{figure*} Let $x_i$ be the probability density function that denotes the $i$-th, noise-free, ground truth saliency map in a training set. 
We can train a saliency estimator by minimizing \begin{equation} J^{\text{ideal}} = \sum\nolimits_i d(\hat{x}_i,x_i), \label{eq:ideal_vanilla_cost_function} \end{equation} where $\hat{x}_i$ is the $i$-th predicted map, and $d(\cdot,\cdot)$ a discrepancy measure such as KLD, the correlation coefficient CC, NSS, AUC, or a mix of these. In practice, $x_i$ is not available and existing methods minimize \begin{equation} J^{\text{real}} = \sum\nolimits_i d(\hat{x}_i,\tilde{x}_i), \label{eq:vanilla_cost_function} \end{equation} where $\tilde{x}_i$ is obtained by combining a limited number of measured gaze fixations. It is therefore an approximation of the unobservable $x_i$, affected by measurement noise and incomplete sampling. We show that minimizing $J^{\text{real}}$ with limited training data may lead to noise overfitting. To address this issue, we introduce the first method to: \begin{itemize} \itemsep0em \item[(\emph{i})] quantify the effect of measurement noise and incomplete sampling in $\tilde{x}_i$, \item[(\emph{ii})] propagate the noise from $\tilde{x}_i$ to $d(x_i,\tilde{x}_i)$ and approximate its statistical distribution, $p[d(x_i,\tilde{x}_i)]$, and \item[(\emph{iii})] train a saliency estimator by maximizing $p[d(\hat{x}_i,\tilde{x}_i)]$. \end{itemize} We interpret $d(x_i,\tilde{x}_i)$ as a random variable whose probability density function, $p(d(x_i,\tilde{x}_i))$, depends on the shape of the map and the number of observers in $\tilde{x}_i$, and it is, therefore, \emph{different for each frame}. We reduce the problem of training a saliency estimator to the minimization of the negative log likelihood: \begin{equation} J_{\text{NAT}} = -\text{ln}\prod\nolimits_i p[d(\hat{x}_i,\tilde{x}_i)] = -\sum\nolimits_i \text{ln}\{p[d(\hat{x}_i,\tilde{x}_i)]\}, \label{eq:nat_likelihood} \end{equation} which, as demonstrated experimentally, leads to increased robustness against noise overfitting. 
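To make the role of the discrepancy $d$ concrete, the following sketch (our illustration; the $\epsilon$ stabilizer is an assumption, not part of the formal definition) implements the widely used KLD choice for $d(\hat{x}_i, \tilde{x}_i)$, treating both maps as discrete probability distributions over pixels:

```python
import numpy as np

# A minimal sketch of one common choice for the discrepancy d in the
# training objective: KLD(x_tilde || x_hat) between a measured saliency
# map x_tilde and a predicted map x_hat, both normalized to probability
# distributions over pixels. eps is a numerical-stability guard.
def kld(x_tilde, x_hat, eps=1e-12):
    p = x_tilde / x_tilde.sum()
    q = x_hat / x_hat.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

rng = np.random.default_rng(0)
pred, meas = rng.random((16, 16)), rng.random((16, 16))
print(kld(meas, pred) > 0.0, kld(meas, meas) < 1e-9)  # True True
```

Minimizing $J^{\text{real}}$ drives this quantity towards zero for every frame, which is exactly what makes traditional training vulnerable to noise in $\tilde{x}_i$.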
Fig.~\ref{fig:NAT_overview} provides a visual comparison of the technical differences between traditional training and NAT. \vspace{-0.15in} \paragraph{A toy example.} Assume that a method predicts the (unobservable) distribution exactly, that is $\hat{x}_i = x_i$. Because of measurement noise and incomplete sampling in $\tilde{x}_i$, $d(x_i,\tilde{x}_i) \neq 0$, even though the prediction is perfect. We analyze this in a 1D toy example: Figs.~\ref{fig:toy_example_new}(a,h) show two 1D ground-truth saliency maps $x_i$, one unimodal, and one bimodal. We simulate gaze-data acquisition by sampling 3 (red circles) or 30 (blue) ``fixations'' from $x_i$. The random positions of the fixations mimic the measurement noise, while their finite number simulates incomplete sampling. Following the \emph{de facto} standard to generate saliency maps from single gaze locations, we blur each fixation (Fig.~\ref{fig:toy_example_new}(b)) and combine the resulting curves (Fig.~\ref{fig:toy_example_new}(c)). When few fixations are available, $\tilde{x}_i$ may be shifted with respect to $x_i$ (Fig.~\ref{fig:toy_example_new}(c)), and the number of its modes may not match that of $x_i$ (Fig.~\ref{fig:toy_example_new}(j)). Furthermore, when $x_i$ is multimodal, the mass of each mode in $\tilde{x}_i$ may be imprecisely estimated compared to $x_i$ (Fig.~\ref{fig:toy_example_new}(j)). To demonstrate point \emph{(i)} listed earlier, we collect $1,000$ random realizations of $\tilde{x}_i$ and show that the uncertainty on $\tilde{x}_i$, measured by $\text{Std}[\tilde{x}_i]$, decreases for a large number of fixations (Figs.~\ref{fig:toy_example_new}(d, k) vs. Figs.~\ref{fig:toy_example_new}(g, n)). However, $\E[\tilde{x}_i]$ still differs from $x_i$ because of the blurring operation. Having defined the map generation process and shown the effect of measurement and sampling noise on $\tilde{x}_i$, we can estimate the distribution $p[d(x_i,\tilde{x}_i)]$ (to show point \emph{(ii)}). 
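The sampling-and-blurring process just described can be reproduced in a few lines. This sketch (our illustration; the 100-bin grid is an assumption, while the blur width $\sigma=5$ and the bimodal ground truth with modes at $25$ and $75$ and masses $0.3/0.7$ follow the toy-example figure) estimates the bias and spread of $\text{KLD}(x_i,\tilde{x}_i)$ over $1{,}000$ realizations:

```python
import numpy as np

# Monte-Carlo sketch of the 1-D toy example: sample n "fixations" from a
# bimodal ground truth x_i, blur and normalize them into a measured map
# x_tilde, and estimate E[KLD(x_i, x_tilde)] and Std[...] empirically.
rng = np.random.default_rng(0)
grid = np.arange(100)

def gauss(mu, sigma=5.0):
    g = np.exp(-0.5 * ((grid - mu) / sigma) ** 2)
    return g / g.sum()

x_i = 0.3 * gauss(25) + 0.7 * gauss(75)  # modes at 25 / 75, masses 0.3 / 0.7

def sample_map(n_fix):
    fix = rng.choice(grid, size=n_fix, p=x_i)  # incomplete sampling: n fixations
    m = sum(gauss(f) for f in fix)             # blur each fixation, then combine
    return m / m.sum()

def kld(p, q, eps=1e-12):
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

stats = {n: np.array([kld(x_i, sample_map(n)) for _ in range(1000)])
         for n in (3, 30)}
for n, d in stats.items():
    print(n, d.mean(), d.std())
# The bias E[KLD] stays above zero even with many fixations (blurring),
# and both bias and spread shrink as the number of fixations grows.
```

Running this reproduces the qualitative behavior reported in the figure: both the mean and the standard deviation of the discrepancy are larger with 3 fixations than with 30.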
The analytical expression for $p[d(x_i,\tilde{x}_i)]$ depends on the discrepancy function $d$ and is impractical or infeasible to derive, so we settle for computing $\text{KLD}(x_i, \tilde{x}_i)$ in 1,000 random realizations of $\tilde{x}_i$ and get $\E[\text{KLD}(x_i, \tilde{x}_i)]$, $\text{Std}[\text{KLD}(x_i, \tilde{x}_i)]$. These are reported in Figs.~\ref{fig:toy_example_new}(d, g, k, n). We use $\text{KLD}$ as discrepancy function because of its wide adoption for saliency estimation, but the results presented here hold for other metrics as well. We observe that: \begin{itemize} \itemsep0em \item $\E[\text{KLD}(x_i,\tilde{x}_i)] > 0$, \ie $\text{KLD}(x_i,\tilde{x}_i)$ is biased. The source of the bias is twofold. First, $\text{KLD}(x_i,\tilde{x}_i) > 0$ because $\E[\tilde{x}_i]$ is a smoothed version of $x_i$, independently of the number of observers. Second, $\tilde{x}_i$ is noisy ($\text{Std}[\tilde{x}_i]>0$), which, especially for a limited number of observers, contributes an additional positive term to $\text{KLD}(x_i,\tilde{x}_i)$. \item $\text{Std}[\text{KLD}(x_i,\tilde{x}_i)] > 0$, and it tends to be smaller for a larger number of observers. \item For a given number of observers, $\E[\text{KLD}(x_i,\tilde{x}_i)]$ and $\text{Std}[\text{KLD}(x_i,\tilde{x}_i)]$ are larger for multimodal maps. \end{itemize} We conclude that, when $\tilde{x}_i$ is affected by measurement noise and incomplete sampling, the expected value and variance of the discrepancy $d(x_i,\tilde{x}_i)$ are not zero, depend on the number of observers, and are different for each frame. These properties, which also hold for 2D saliency maps recorded from real human observers, form the basis for the development and interpretation of NAT. \iffalse \paragraph{A toy example.} To provide the intuition behind NAT, we first resort to a toy example. 
We use polinomial interpolation as a proxy problem, where points used to fit the polynomial play the role of single frames used for saliency estimation. We assume the points to be generated by a noisy, linear model $y_i = 0.35 \cdot x_i + 0.2 + \sigma_i \cdot \mathcal{N}$, where $\mathcal{N}$ indicates a normally distributed random variable, and $\sigma_i = 0.05$ if $x_i<0.5$, and $0.1$ otherwise. The likelihood for $y_i$ is $p(y_i) = 1/\sqrt{2\pi}\cdot e^{-0.5(\frac{y_i-\mathbf{a_ip}}{\sigma_i})^2}$, where $a_i=(1+x_i+x_i^2+...+x_i^N)$, when fitting a polynomial of order $N$ with coefficients $p$. Very often, $\sigma_i$ is unknown and thus ignored, giving all the points the same importance in the fitting cost function and leading to the traditional least square solution $\mathbf{p} = (A^TA)^{-1}A^T\mathbf{y}$, where the i-th row of the matrix $A$ and the i-th element of the vector $\mathbf{y}$ are $\mathbf{a_i}$ and $y_i$, respectively. In the context of saliency prediction, this is equivalent to giving all the frames the same importance, indipendently from the specific frame noise level. Fig.~\ref{fig:poly} shows in the leftmost panel the distribution $p(y_i)$ for two points in the set: the gaussian shape is centered in the measured, noisy $y_i$; because of the noise, the maximum likelihood for the single point is achieved when the polynomial passes exactly through $y_i$. The left inset of the central panel shows the fitting of the cyan, ground truth line with a polynomial of degree 3 (in red), using the traditional least squares approach, while the colored map represent the sum of the probabilities for all the points in the set. The left inset in the rightmost panel highlights that overfitting does occur when the model is overparameterized with respect to the size of the training dataset (in this case, a polynomial of degree 40), as oscillations are introduced while the polynomial tries to perfectly fit all the points. 
It is also important noticing that overfitting (shown here as oscillation) is even more dramatic when the level of noise is high, \ie in the rightmost part of the inset in this case. It is worthy noticing that, as in the case of saliency estimation, we are estimating here a low dimensional shape (a line here, a saliency map there) using an overparameterized model (a polynomial with order 40 here, a DNN in the case of saliency estimation), which leads to sever overfitting, especially when the noise on the points (frames) is high. Since noise is Gaussian, the maximum likelihood estimate of the polynomial coefficients $\mathbf{p}$ is obtained by minimization of the usual quadratic cost function, $\sum_i (y_i-\mathbf{a_ip})^2$, where $a_i=(1xX_i+x_i^2+...+x_i^N)$, when fitting a polynomial of order $N$. \begin{figure*} \includegraphics[width=0.33\textwidth]{figures/poly/poly_isolated_order_3.png} \includegraphics[width=0.33\textwidth]{figures/poly/poly_full_order_3.png} \includegraphics[width=0.33\textwidth]{figures/poly/poly_full_order_40.png} \caption{Fitting of a line with a polynomial of order 3 and 40, using the traditional least square cost function and NAT.} \label{fig:poly} \end{figure*} \fi \vspace{-0.14in} \paragraph{The noise-aware training (NAT) procedure.} NAT modifies the traditional training cost function in Eq.~\ref{eq:vanilla_cost_function} to take into account the presence of noise in the training data; it can be applied to different discrepancy measures $d$. NAT is inspired by discrepancy principles proposed in other contexts, like image denoising~\cite{Ber10, Fro18}, and based on the idea that $\hat{x}_i$ should not perfectly match the noisy $\tilde{x}_i$, as this may lead to noise overfitting. Instead of minimizing $J^{\text{real}}$ in Eq.~\ref{eq:vanilla_cost_function}, we assume that $d(x_i,\tilde{x}_i)$ follows a Gaussian distribution. 
After taking the negative logarithm and removing the constant terms in Eq.~\ref{eq:nat_likelihood} (see Supplementary), we have: \begin{equation} J_{\text{NAT}}^{\text{ideal}} = \sum\nolimits_i \frac{[d(\hat{x}_i, \tilde{x}_i) - \E[d(x_i,\tilde{x}_i)]]^2}{\text{Var}[d(x_i,\tilde{x}_i)]}. \label{eq:nat_cost_function_ideal} \end{equation} Like traditional cost functions, $J_{\text{NAT}}^{\text{ideal}}$ penalizes predicted $\hat{x}_i$ that lie too far from $\tilde{x}_i$. However, it also introduces the idea that estimates should not be \emph{too close} to $\tilde{x}_i$, which helps prevent overfitting. Furthermore, the penalization is inversely proportional to the variance of $d(x_i,\tilde{x}_i)$, \ie it is strong only for those frames whose discrepancy can be accurately measured. In practice, since $\E[d(x_i,\tilde{x}_i)]$ and $\text{Var}[d(x_i,\tilde{x}_i)]$ are large for multimodal, sparse maps with few fixations (as seen in the toy example), NAT strongly penalizes errors in $\hat{x}_i$ for focused, unimodal saliency maps with many observers. On the other hand, it weakly penalizes errors in $\hat{x}_i$ for widespread, sparse gaze maps with few observers in $\tilde{x}_i$, which are considered unreliable for training. Eq.~\ref{eq:nat_cost_function_ideal} cannot be implemented in practice, as the ground truth $x_i$ is unknown. All we have access to is $\tilde{x}_i$, a noisy realization of $x_i$. We therefore approximate the statistics of $d(x_i,\tilde{x}_i)$ as follows: \begin{eqnarray} \E[d(x_i,\tilde{x}_i)] \approx \E[d(\tilde{x}_i,\tilde{\tilde{x}}_i)],\label{eq:approx1}\\ \text{Var}[d(x_i,\tilde{x}_i)] \approx \text{Var}[d(\tilde{x}_i,\tilde{\tilde{x}}_i)], \label{eq:approx2} \end{eqnarray} where $\tilde{\tilde{x}}_i$ is the map obtained by sampling and combining fixations from $\tilde{x}_i$ instead of $x_i$. Intuitively, we use the spatial noise introduced by sampling from $\tilde{x}_i$ as a proxy for the non-ideality introduced by the gaze-capturing process.
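The resampling approximation above can be sketched in a few lines (our illustrative code, in 1D; `blur_sigma`, the KLD discrepancy, and the realization count are assumed choices, not the paper's exact settings):

```python
# Sketch (ours) of the NAT building blocks: resample fixations from the
# noisy map x_tilde to build x_tilde_tilde, estimate the per-frame mean
# and variance of the discrepancy, and weigh the training penalty by them.
import numpy as np

rng = np.random.default_rng(1)

def kld(p, q, eps=1e-12):
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def sample_and_blur(density, n_fix, blur_sigma=3.0):
    """Sample n_fix 'fixations' from a 1D density, blur, and renormalize."""
    grid = np.arange(density.size)
    idx = rng.choice(density.size, size=n_fix, p=density / density.sum())
    m = np.zeros(density.size)
    for i in idx:
        m += np.exp(-0.5 * ((grid - i) / blur_sigma) ** 2)
    return m / m.sum()

def nat_frame_stats(x_tilde, n_fix, n_realizations=200):
    """Sample estimates of E[d(x_tilde, x_tilde_tilde)] and Var[...]."""
    d = [kld(x_tilde, sample_and_blur(x_tilde, n_fix))
         for _ in range(n_realizations)]
    return float(np.mean(d)), float(np.var(d))

def nat_loss_term(d_pred, mean_d, var_d, eps=1e-8):
    """One summand of the NAT cost: the prediction's discrepancy from
    x_tilde should match the expected noise-induced discrepancy."""
    return (d_pred - mean_d) ** 2 / (var_d + eps)
```

With these building blocks, a prediction whose discrepancy from $\tilde{x}_i$ matches the expected noise-induced discrepancy incurs zero penalty, and frames with a large estimated variance contribute only weakly to the cost.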
Eqs.~\ref{eq:approx1} and~\ref{eq:approx2} are our best option, but they still introduce an approximation. However, we observe empirically that these approximations hold well (see Supplementary for a detailed analysis), which is why NAT outperforms traditional training, as we show in Sec.~\ref{sec:results}. We resort to sample estimation to compute $\E[d(\tilde{x}_i,\tilde{\tilde{x}}_i)]$ and $\text{Var}[d(\tilde{x}_i,\tilde{\tilde{x}}_i)]$: for each frame, we sample the prescribed number of observers from $\tilde{x}_i$ to generate multiple $\tilde{\tilde{x}}_i$, and compute the sample mean $\widetilde{\E}[d(\tilde{x}_i,\tilde{\tilde{x}}_i)]$ and variance $\widetilde{\text{Var}}[d(\tilde{x}_i,\tilde{\tilde{x}}_i)]$ of $d(\tilde{x}_i , \tilde{\tilde{x}}_i)$. Training with the NAT paradigm then reduces to minimizing: \begin{equation} J_{\text{NAT}}^{\text{real}} = \sum\nolimits_i \frac{[d(\hat{x}_i,\tilde{x}_i) - \widetilde{\E}[d(\tilde{x}_i,\tilde{\tilde{x}}_i)]]^2}{\widetilde{\text{Var}}[d(\tilde{x}_i, \tilde{\tilde{x}}_i)]}, \label{eq:vat_training_cost_function_real} \end{equation} where all the terms are now well-defined. Any DNN can be trained through NAT using the cost function in Eq.~\ref{eq:vat_training_cost_function_real}. \section{Related work} \label{sec:related_work} \paragraph{Saliency prediction methods.} From the early works that looked at low-level color, intensity, or orientation features to compute saliency maps~\cite{Itt98,Itt00,Koc87}, the field of visual saliency prediction for images has witnessed significant advancement over the past decades. DL approaches have accelerated this progress. Popular existing methods explore combinations of learned features and pre-trained DNNs at multiple scales~\cite{Hua15,Kum17,Kum14}, generative adversarial networks~\cite{Pan17}, encoder-decoder architectures~\cite{Wan17,Jia20}, dilated convolutions~\cite{Kru17}, and recurrent networks~\cite{Liu18}.
\vspace{-0.02in} Despite the advances in visual saliency for images~\cite{Bor19}, video saliency prediction remains more challenging because of issues with capturing gaze data (Section~\ref{sec:introduction}), and the complexity of the gaze behavior for videos. Existing video-saliency methods are based on architectures such as: (i) 3D CNNs that observe a short sub-sequence of frames~\cite{Bel20,Min19}; (ii) architectures that leverage temporal aggregation or memory such as LSTMs~\cite{Lin19,Wan18,Gor2018,Wu20}; or (iii) a combination of both~\cite{Baz16}. TASED~\cite{Min19}, a 2019 state-of-the-art method on DHF1K~\cite{Wan18}, belongs to the first class: it uses a 3D CNN encoder-decoder, where the decoder combines temporal features to estimate the saliency. To address the scarcity of training data, the encoder is pre-trained using action recognition as an auxiliary task~\cite{Xie18}. SalEMA~\cite{Lin19} (also state-of-the-art in 2019) belongs to the second class: it uses an encoder-decoder and deploys temporal aggregation for memory by combining features from previous frames and the current one. The overarching aim with these approaches is to leverage spatial and temporal information from videos. For both categories, some methods also hand-code input features that inform spatial and temporal saliency, such as ``object-ness'' and motion in a frame~\cite{Jia18}, or spatio-temporal feature combination~\cite{Bak17}. Other methods combine image and video saliency prediction into a single framework to enable training with a variety of data~\cite{Dro20}. For an in-depth discussion of existing saliency methods, we refer the readers to a recent survey~\cite{Bor19}. None of these existing methods explicitly account for noise in the training data. 
\vspace{-0.1in} \paragraph{Saliency metrics for training with noisy gaze data.} Typically, saliency-prediction algorithms are trained using density-based metrics (such as the Kullback-Leibler divergence (KLD), the correlation coefficient (CC), and SIM~\cite{Swa91}), fixation-based metrics (such as NSS~\cite{Pet05} and AUC~\cite{Byl18,Byl15}), or a combination of these~\cite{Dro20, Xie18}. Fixation-based metrics, by definition, do not require the entire measured saliency map to evaluate the quality of predicted saliency~\cite{Kum18}. However, when very few fixation locations are available in a small training dataset, we observe that training with either kind of metric shows poor performance. A number of methods have tried to account for measurement noise and incomplete sampling when computing statistics on a population~\cite{Cha03,Hol98}. Their key insight is to compensate for the noise by introducing correction terms to the population's statistics. Attempts to adapt these concepts to gaze data (e.g., when estimating KLD) have been made only at very low spatial resolution~\cite{Wil11}: when working at reasonable resolutions for saliency prediction, gaze data tends to be too sparse to collect sufficient statistics in each histogram bin, a prerequisite for applying these methods. This limits their applicability for designing noise-robust training procedures. In contrast, NAT can be used to train for video saliency in the presence of noise at any resolution. \vspace{-0.1in} \paragraph{Video-Saliency Datasets.} The progress of deep-learning video-saliency methods would not be possible without the introduction of large training datasets. Some of these target specific classes of videos (\eg, movies~\cite{Wig12}, sports~\cite{Rod08}, faces~\cite{Liu17}), while others (like DHF1K~\cite{Wan18}, LEDOV~\cite{Jia18}, and DIEM~\cite{Mit11}) capture natural visual content~\cite{Bor19}, and are often used as benchmarks.
Among these, DHF1K, a well-known public dataset, contains inconsistencies that make it difficult to access per-subject gaze data (a requirement to test saliency methods with limited data), and is reported to contain artifacts~\cite{Tan20}; thus, we do not consider it. However, DHF1K is comparable in scope to the LEDOV and DIEM datasets (see Table~\ref{tab:datasets}), on which we perform extensive evaluation. For these datasets, a large portion of the video-saliency information can be learned effectively by static, image-based saliency models~\cite{Tan20}. We posit that the fast-changing imagery of a videogame offers a richer source of video-specific saliency problems, including the large and continuous motion of the camera. The only existing saliency dataset for videogames is the Atari-Head dataset~\cite{Zha19}. However, it does not capture \textit{video} saliency, but fixations on still frames in a sequence. Furthermore, visual features and actions in an Atari game are elementary. This is in contrast with ForGED, the dataset we introduce: ForGED presents gaze data over long and highly dynamic sequences of a first-person videogame. A task-specific video-saliency dataset is the Dr(eye)VE dataset, which captures the gaze of the driver in a car~\cite{All16}. It is an extreme case in which capturing more than one observer per frame is simply not possible. The existence of tasks for which very limited gaze data can be captured further motivates the need to account for measurement noise and incomplete sampling. \section{Results} \label{sec:results} To compare NAT (Eq.~\ref{eq:vat_training_cost_function_real}) and traditional training (Eq.~\ref{eq:vanilla_cost_function}), we experiment with: 1) a varying number of training videos and observers on ForGED and two existing datasets (LEDOV~\cite{Jia18} and DIEM~\cite{Mit11}); 2) density-based or fixation-based discrepancy functions; and 3) two different DNN architectures.
We report an entire set of saliency metrics (KLD, CC, SIM, NSS, and AUC-J) to make the evaluation complete, as every metric penalizes artifacts differently~\cite{Byl18,Kum18}. To limit the effect of noise in evaluation, we use test sets that contain saliency maps from a \textit{larger} number of observers when compared to training (Table~\ref{tab:datasets}). Both training and testing saliency maps are obtained by the widely-used practice of blurring binary fixation maps with a Gaussian kernel of size $ \sim 1^{\circ}$ viewing angle. We discuss alternative strategies as well~\cite{Kum14, Kum17, Kum18}. \begin{table}[h!] \centering \resizebox{0.93\columnwidth}{!}{ \input{sections/tables/fortnite_tased_kld.tex}} \caption{NAT vs. traditional training on ForGED for TASED, $d = \text{KLD}$, various number of training videos and observers. Best metrics for each pair of experiments in bold. } \label{tab:tased-kld-fortnite} \end{table} \begin{table}[h!] \centering \resizebox{0.93\columnwidth}{!}{ \input{sections/tables/ledov_tased_kld.tex}} \caption{NAT vs. traditional training on LEDOV for TASED, $d = \text{KLD}$, various number of training videos and observers. The last two rows show the case of an unbalanced dataset having frames with 2, 5, 15, or 30 observers. Best metrics for each pair of experiments in bold.} \label{tab:tased-kld-ledov} \end{table} \subsection{Dataset type and size} \label{subsec:results:datasize} We first compare NAT (Eq.~\ref{eq:vat_training_cost_function_real}) and traditional training (Eq.~\ref{eq:vanilla_cost_function}) on all three datasets using a variable number of training videos and observers per video. We adopt the 3D-CNN-based TASED architecture~\cite{Min19} and the prescribed hyperparameters ($d = \text{KLD}$; action-recognition pretraining) with one exception: we use RMSprop~\cite{Hin12}, since it provides better results for traditional training (see Supplementary). 
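For reference, simplified versions of the density-based and fixation-based metrics listed above can be written as follows (our own minimal implementations, not the official benchmark code; `fixation_mask` denotes a binary fixation map):

```python
# Simplified reference implementations (ours) of three common saliency
# metrics: KLD and CC compare density maps; NSS scores the prediction
# at the measured binary fixation locations.
import numpy as np

def kld(gt, pred, eps=1e-12):
    gt, pred = gt / gt.sum(), pred / pred.sum()
    return float(np.sum(gt * np.log((gt + eps) / (pred + eps))))

def cc(gt, pred):
    g, p = gt - gt.mean(), pred - pred.mean()
    denom = np.sqrt(np.sum(g ** 2) * np.sum(p ** 2)) + 1e-12
    return float(np.sum(g * p) / denom)

def nss(pred, fixation_mask):
    """Mean of the standardized prediction at the fixated pixels."""
    z = (pred - pred.mean()) / (pred.std() + 1e-12)
    return float(z[fixation_mask.astype(bool)].mean())
```

Lower KLD and higher CC/NSS indicate better predictions; a perfect density match yields $\text{KLD}=0$ and $\text{CC}=1$.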
Table~\ref{tab:tased-kld-fortnite} shows the results on ForGED when using $30$, $100$, or $379$ training videos and $2$, $5$, or $15$ observers. NAT consistently outperforms traditional training when the number of training videos is $\leq 100$, \ie, when the effect of noise is large or overfitting may occur. The gap between NAT and traditional training tends to shrink when using the entire dataset, which makes overfitting less likely. NAT on 30 videos / 15 observers and on 100 videos / 5 observers is comparable or superior to traditional training with 379 videos / 5 observers, roughly corresponding to at least a $3\times$ saving in terms of data required for training. Similar conclusions can be drawn when training with the LEDOV (Table~\ref{tab:tased-kld-ledov}) and DIEM (see Supplementary) datasets. We also test an interesting case of practical importance where, for whatever reason, an \textit{unbalanced} training dataset is available, with an uneven number of observers across the training videos. In this case, NAT accounts for the varying reliability of the gaze data in training frames. The last two rows of Table~\ref{tab:tased-kld-ledov} report metrics for TASED trained on the unbalanced LEDOV dataset, and we note significantly better quality metrics with NAT compared to traditional training.
In these experiments, NAT again achieves better metrics than traditional training (with one exception discussed below), independently of the training discrepancy metric being optimized. The advantage is more evident for a limited number of observers and videos, while the gap between traditional training and NAT tends to shrink when using the full dataset. Similar results for the LEDOV and DIEM datasets, for TASED and an additional DNN architecture, are reported in the Supplementary and further demonstrate the generality of these conclusions. In Table~\ref{tab:tased-nss-fortnite}, we notice that NAT outperforms traditional training in terms of NSS only for 2 or 5 observers and 30 training videos. Recall that, by design, NSS optimizes the predicted saliency map only at the measured fixation locations. Consequently, when \textit{few} fixations per frame are available for training, a high NSS score may not generalize well to other evaluation metrics that measure different aspects of the quality of a predicted saliency map. This can be alleviated by additional regularization, such as using additional metrics: with $d = \text{KLD}-0.1\text{CC}-0.1\text{NSS}$ in Table~\ref{tab:tased-kldccnss-fortnite}, we observe that high NSS scores generalize to good performance in terms of the other metrics. In other words, for few-observer training, optimizing for NSS alone may not constrain the predicted saliency map sufficiently --- which shows up as poor generalization to other metrics. This is what we observe in Table~\ref{tab:tased-nss-fortnite}, where the regularizing effect of NAT leads to worse NSS values compared to traditional training, but \textit{all} of the other evaluation metrics indicate that NAT is better. \begin{table}[h!] \centering \resizebox{0.93\columnwidth}{!}{ \input{sections/tables/fortnite_tased_nss.tex}} \caption{NAT vs.
traditional training (TT) on ForGED for TASED, $d = -0.1\text{NSS}$ (a fixation-based discrepancy), various number of training videos and observers. Best metrics for each pair of experiments in bold.} \label{tab:tased-nss-fortnite} \end{table} \begin{table}[h!] \centering \resizebox{0.93\columnwidth}{!}{ \input{sections/tables/fortnite_tased_kld_cc_nss.tex}} \caption{NAT vs. traditional training on ForGED for TASED, $d = \text{KLD}-0.1\text{CC}-0.1\text{NSS}$, various number of training videos and observers. Due to training instability, for 30 videos / 2 observers we use $d=\text{KLD}-0.1\text{CC}$. Best metrics for each pair of experiments in bold.} \label{tab:tased-kldccnss-fortnite} \end{table} \begin{table}[h!] \centering \resizebox{0.93\columnwidth}{!}{ \input{sections/tables/ledov_salema_kld.tex}} \caption{NAT vs. traditional training on LEDOV for SalEMA, $d = \text{KLD}$, various number of training videos and observers. Best metrics for each pair of experiments in bold.} \label{tab:salema-kld-ledov} \end{table} \subsection{DNN architectures} Next, we show that NAT applies to DNN architectures other than TASED by training SalEMA~\cite{Lin19} on LEDOV and ForGED. Table~\ref{tab:salema-kld-ledov} shows that NAT outperforms traditional training and that the gap shrinks for a large number of training videos. The Supplementary shows more results. \subsection{Discussion} \paragraph{NAT for images.} Image-based saliency datasets (\eg, CAT2000~\cite{Bor15}, SALICON~\cite{Jia17}) typically have many fixations per image. Measurement noise and incomplete sampling are less of a concern with such datasets, as increasing the number of fixations rapidly generates fairly accurate ground-truth maps (e.g., $>90\%$ accuracy at $20$ fixations~\cite{Jud12}). It is, however, fair to ask whether one can apply NAT to images, to train with less data in this case as well.
To answer this question, we simulate a high-noise, incompletely sampled image dataset by sampling a subset of the total available fixations (which are at minimum 139) for each image in SALICON\footnote{Mouse clicks are used as a proxy for gaze in SALICON.}. Then we train EML-Net~\cite{Jia20}, using the hyperparameters, EML discrepancy, and training procedure proposed by the authors (see Supplementary), with traditional training and NAT, and evaluate it on the test set of the official SALICON benchmark. Table~\ref{tab:eml-kldccnss-salicon} shows the results for a varying number of fixations and confirms the advantage of NAT over traditional training for images as well. \begin{table}[h!] \centering \resizebox{0.93\columnwidth}{!}{ \input{sections/tables/eml_kldccnss_salicon.tex} } \caption{Performance of EML-Net, trained with traditional training and NAT on the SALICON training set with a reduced number of fixations. Results shown on the full SALICON test set.} \label{tab:eml-kldccnss-salicon} \end{table} \paragraph{Alternative methods to estimate \boldmath{$\tilde{x}$}.} Estimating $\tilde{x}$ using a Gaussian-blur kernel is the prevalent practice in video-saliency research, but more principled strategies that promise to automatically handle data noise and sparsity also exist, \eg, based on Kernel Density Estimation (KDE). For instance, the saliency map $\tilde{x}$ can be estimated from the measured fixations using Gaussian kernel functions in KDE with a uniform regularization, where the optimal KDE bandwidth and the weight of the uniform regularization term are set to maximize a gold-standard model for saliency prediction~\cite{Kum15,Tan20}. We adopt this approach as an alternative method to estimate $\tilde{x}$. We find similarities between KDE and NAT: KDE captures the gaze uncertainty by generating a smooth, almost-uniform map for frames with few, sparse gaze locations.
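In 1D, this KDE-with-uniform-regularization construction can be sketched as follows (our own illustration; here the bandwidth `sigma` and weight `w` are fixed, whereas the cited approach tunes them against a gold-standard model):

```python
# Sketch (ours) of a KDE-based saliency map: a Gaussian kernel per
# fixation plus a uniform regularization term, 1D for brevity.
import numpy as np

def kde_saliency(fixations, size, sigma=3.0, w=0.1):
    """fixations: integer positions in [0, size). Returns a density."""
    grid = np.arange(size)
    kde = np.zeros(size, dtype=float)
    for f in fixations:
        kde += np.exp(-0.5 * ((grid - f) / sigma) ** 2)
    kde /= kde.sum()
    uniform = np.full(size, 1.0 / size)
    m = (1.0 - w) * kde + w * uniform
    return m / m.sum()
```

The uniform term guarantees a non-zero floor everywhere, so frames with few, scattered fixations yield smooth, nearly uniform maps.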
NAT, instead, assigns low importance to such frames during training, and allows $\hat{x}_i$ to deviate from the training saliency map according to the expected statistical distribution. Thus, we believe it is important to ask whether processing fixations through KDE, instead of taking noise into account in the cost function as in NAT, is equally effective. Table~\ref{tab:kde} compares traditional training with a fixed-size blur, KDE target saliency maps~\cite{Tan20}, and NAT for ForGED, $5$ observers, $30$ videos. While estimating the training saliency maps with KDE instead of a fixed-size blur leads to an improvement in the metrics, NAT still yields the best results. \begin{table}[h!] \centering \resizebox{0.93\columnwidth}{!}{ \input{sections/tables/kde.tex} } \caption{Comparing different strategies for estimating $\tilde{x}$ for traditional training. While KDE-estimated $\tilde{x}$ for training yields an improvement over fixed-size blurring, NAT still outperforms the traditional training approach. Training dataset: ForGED, 5 observers, 30 videos.} \label{tab:kde} \vspace{-0.2in} \end{table} \paragraph{Limitations and future work.} We base the practical derivation of NAT on two main approximations: 1) we assume that $d(x_i, \tilde{x}_i)$ has a Gaussian distribution, and 2) we sample from $\tilde{x}_i$ (instead of $x_i$) in Eqs.~\ref{eq:approx1}-\ref{eq:approx2}. For the first approximation, we note that although $d(x_i, \tilde{x}_i)$ is typically strictly positive (\eg, in the case of KLD) and its distribution is consequently asymmetric, we observe that, most often, $\widetilde{\E}[d(\tilde{x}_i,\tilde{\tilde{x}}_i)] \geq 2 \cdot \widetilde{\text{Std}}[d(\tilde{x}_i,\tilde{\tilde{x}}_i)]$ (e.g., see Fig.~\ref{fig:datasets}). This makes our approximation a good first guess for modeling the distribution of $d(x_i, \tilde{x}_i)$. We expect that better models would further improve NAT.
As for our second approximation (Eqs.~\ref{eq:approx1}-\ref{eq:approx2}), where we sample from $\tilde{x}_i$ instead of $x_i$, we report a detailed analysis of its accuracy in the Supplementary. The intuition is that, if $x_i$ and $\tilde{x}_i$ have a similar shape (e.g., see Fig.~\ref{fig:NAT_overview}), the approximations $\E[d(x_i,\tilde{x}_i)] \approx \widetilde{\E}[d(\tilde{x}_i,\tilde{\tilde{x}}_i)]$ and $\text{Var}[d(x_i,\tilde{x}_i)] \approx \widetilde{\text{Var}}[d(\tilde{x}_i,\tilde{\tilde{x}}_i)]$ can be leveraged. A second limitation, which is common to video-saliency research, is that, even if test sets are generally derived from a large number of fixations, measurement noise and incomplete sampling can still affect them and add uncertainty to the conclusions one can draw. The problem of evaluating saliency methods has been widely studied for images~\cite{Byl18, Kum18}, and a more principled approach, such as deriving metric-specific saliency from the probabilistic output of a saliency predictor, can give a clearer understanding. However, we note that all the metrics in our evaluation are generally in agreement with one another regarding the ranking of the methods, which provides strong evidence of the advantages of NAT over traditional training. A potential direction for future work on evaluating saliency methods can be based on the design principles of NAT, where frames in a test set are given more or less importance depending on their noise level. \section{Derivation of the NAT cost function} Let us assume that $d(x_i, \tilde{x}_i)$ is a random variable with a Gaussian distribution, $d(x_i, \tilde{x}_i) \sim G(\mu, \sigma^2)$, where $\mu = \E[d(x_i, \tilde{x}_i)]$ indicates its average value, whereas $\sigma^2 = \text{Var}[d(x_i, \tilde{x}_i)]$ is its variance.
When the predicted saliency map $\hat{x}_i$ is optimal, \ie when $\hat{x}_i = x_i$, $d(\hat{x}_i, \tilde{x}_i)$ has the same statistical distribution as $d(x_i, \tilde{x}_i)$; therefore, for a perfectly working saliency-map predictor, we can write $d(\hat{x}_i, \tilde{x}_i) \sim G(\mu, \sigma^2)$. Notice that, in the context of NAT, $\mu$ and $\sigma$ are assumed to be known, whereas $\hat{x}_i$ is the unknown. The likelihood of measuring $d(\hat{x}_i, \tilde{x}_i)$ is given by: \begin{equation} p[d(\hat{x}_i, \tilde{x}_i)] = \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{[d(\hat{x}_i, \tilde{x}_i)-\mu]^2}{2\sigma^2}}, \label{eq:likelihood_single} \end{equation} whereas for an entire dataset containing $N$ saliency maps, we can write the negative log-likelihood function as: \begin{eqnarray} J(\hat{x}_0, \hat{x}_1, ..., \hat{x}_N) = -\text{ln}\prod\nolimits_i \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{[d(\hat{x}_i, \tilde{x}_i)-\mu]^2}{2\sigma^2}} = \nonumber \\ \sum\nolimits_i -\text{ln}\{\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{[d(\hat{x}_i, \tilde{x}_i)-\mu]^2}{2\sigma^2}}\} = \nonumber \\ \sum\nolimits_i \{ \text{ln}(\sqrt{2\pi}\sigma) +\frac{[d(\hat{x}_i, \tilde{x}_i)-\mu]^2}{2\sigma^2} \}.
\label{eq:loglikelihood} \end{eqnarray}% To maximize the likelihood of the predicted maps while training the saliency-map predictor, one then optimizes: \begin{eqnarray} (\hat{x}_0, \hat{x}_1, ..., \hat{x}_N) = \underset{(\hat{x}_0, \hat{x}_1, ..., \hat{x}_N)}{\mathrm{argmin}} J(\hat{x}_0, \hat{x}_1, ..., \hat{x}_N) = \nonumber \\ \underset{(\hat{x}_0, \hat{x}_1, ..., \hat{x}_N)}{\mathrm{argmin}} \sum\nolimits_i \{\text{ln}(\sqrt{2\pi}\sigma) +\frac{[d(\hat{x}_i, \tilde{x}_i)-\mu]^2}{2\sigma^2} \}, \end{eqnarray} which, after removing the terms that do not depend on $(\hat{x}_0, \hat{x}_1, ..., \hat{x}_N)$, leads to: \begin{eqnarray} (\hat{x}_0, \hat{x}_1, ..., \hat{x}_N) = \underset{(\hat{x}_0, \hat{x}_1, ..., \hat{x}_N)}{\mathrm{argmin}} \sum\nolimits_i \{\frac{[d(\hat{x}_i, \tilde{x}_i)-\mu]^2}{\sigma^2} \}, \end{eqnarray} and thus to the definition of the NAT cost function: \begin{eqnarray} J_{\text{NAT}}^{\text{ideal}} = \sum\nolimits_i \{\frac{[d(\hat{x}_i, \tilde{x}_i)-\mu]^2}{\sigma^2} \} = \nonumber \\ \sum\nolimits_i \{\frac{[d(\hat{x}_i, \tilde{x}_i)-\E[d(x_i, \tilde{x}_i)]]^2}{\text{Var}[d(x_i, \tilde{x}_i)]} \}. \end{eqnarray} \section{Analysis of the approximations in Eqs.~5 and~6} To analyze the accuracy of Eqs.~5 and~6 in the main paper, we select a subset of the video frames from the DIEM dataset that contain gaze data from more than $200$ observers. Given the very large number of gaze fixations for these frames, we anticipate that the estimated human-saliency map $\tilde{x}$ is very close to the ground-truth saliency $x$~\cite{Jud12}, and therefore analyze the accuracy of Eqs.~5 and~6 under the assumption that the $>200$-observer gaze maps of these frames represent $x$. From these $200$-observer gaze maps ($x$), we sample a certain number (denoted as $M$) of gaze-fixation locations followed by blurring to compute $\tilde{x}$. Then, we compute $\tilde{\tilde{x}}$ by sampling $M$ locations from $\tilde{x}$ followed by blurring.
Using multiple realizations of $\tilde{x}$ and $\tilde{\tilde{x}}$, we estimate $\mathbb{E}[d(x, \tilde{x})]$, $\mathbb{E}[d(\tilde{x}, \tilde{\tilde{x}})]$, $\text{Var}[d(x, \tilde{x})]$, and $\text{Var}[d(\tilde{x}, \tilde{\tilde{x}})]$. We find that the mean absolute percentage error (MAPE) in the approximation of $\mathbb{E}[d(x, \tilde{x})]$ (Eq.~5 in the main paper) goes from $21\%$ for $M=5$, to $13\%$ for $M=15$, and down to $10\%$ for $M=30$. Similarly, the MAPE in the approximation of $\text{Var}[d(x, \tilde{x})]$ (Eq.~6 in the main paper) goes from $13\%$ for $M=5$, to $6\%$ for $M=15$, and down to $5\%$ for $M=30$. Note that a large under/over-estimation of $\mathbb{E}[d(x,\tilde{x})]$ and $\text{Var}[d(x, \tilde{x})]$ in Eqs.~5 and~6 (main paper) may lead to overfitting to noisy data or to sub-optimal convergence, respectively, when using Eq.~7 (main paper) for training. This would result in poor performance of NAT compared to traditional training -- which, as shown by the results, is not the case. \section{Additional Results (mentioned in Sec.~5)} We now report the additional experiments performed to compare NAT (Eq.~7 in the main paper) to traditional training (Eq.~2 in the main paper). Furthermore, we show typical gaze maps obtained through traditional training and NAT compared to the ground truth for TASED on the ForGED dataset in Fig.~\ref{fig:results}. \subsection{Dataset type and size} In this section, we continue reporting the results from Sec.~5.1 of the main paper, where we compared NAT vs. traditional training for different dataset types and sizes. Table~\ref{tab:tased-kld-diem} compares the performance of traditional training to NAT on an additional dataset, the DIEM dataset~\cite{Mit11}, for the TASED architecture~\cite{Min19}, using KLD as discrepancy.
As done throughout Sec.~5 of the main paper, the evaluation is performed on videos with gaze data from \textit{all} of the available observers (in contrast to training, for which a subset of observers is used; see Table~1 in the main paper). In the case of the DIEM dataset, given that only $84$ videos are available, we use $30$ or $60$ videos for training and report the results on the remaining $24$ videos, which are also used as the validation set. The number of observers for these videos ranges from $51$ to $219$, which makes DIEM a very low-noise evaluation set~\cite{Jud12}. Results on DIEM are consistent with those reported in the main paper, with NAT providing better evaluation metrics than traditional training when less training data (e.g., $30$ videos) is available. \begin{table}[h!] \centering \resizebox{0.93\columnwidth}{!}{ \input{sections/tables/diem_tased_kld.tex}} \caption{Saliency metrics on DIEM, for TASED Net, training with KLD as discrepancy, and various number of training videos and observers. The best metrics between traditional training (Eq.~{2} in main paper) and NAT are in bold.} \label{tab:tased-kld-diem} \end{table} \begin{figure*}[h!]
\centering \begingroup \setlength{\tabcolsep}{0pt} \renewcommand{\arraystretch}{0.1} \begin{tabular}{ccccc} \rotatebox{90}{\scriptsize{\hspace{2em}Ground truth $\tilde{x}_i$}} & \includegraphics[width=0.19\textwidth,trim={1.5cm 0.5cm 1.5cm 0.5cm},clip]{figures/forged_output/figs/fortnite_1_gt_play10_clip41_downsamp4_714.png} & \includegraphics[width=0.19\textwidth,trim={1.5cm 0.5cm 1.5cm 0.5cm},clip]{figures/forged_output/figs/fortnite_5_gt_play17_clip5_downsamp4_256.png} & \includegraphics[width=0.19\textwidth,trim={1.5cm 0.5cm 1.5cm 0.5cm},clip]{figures/forged_output/figs/fortnite_6_gt_play20_clip1_downsamp4_326.png} & \includegraphics[width=0.19\textwidth,trim={1.5cm 0.5cm 1.5cm 0.5cm},clip]{figures/forged_output/figs/fortnite_7_gt_play39_clip8_downsamp4_861.png} \\ % & KLD=1.48, CC=0.52 & KLD=0.88, CC=0.78 & KLD=1.40, CC=0.47 & KLD=1.22, CC=0.52 \\ \rotatebox{90}{\scriptsize{\hspace{1.5em}Traditional training}} & \includegraphics[width=0.19\textwidth,trim={1.5cm 0.5cm 1.5cm 0.5cm},clip]{figures/forged_output/figs/fortnite_1_oth_play10_clip41_downsamp4_714.png} & \includegraphics[width=0.19\textwidth,trim={1.5cm 0.5cm 1.5cm 0.5cm},clip]{figures/forged_output/figs/fortnite_5_oth_play17_clip5_downsamp4_256.png} & \includegraphics[width=0.19\textwidth,trim={1.5cm 0.5cm 1.5cm 0.5cm},clip]{figures/forged_output/figs/fortnite_6_oth_play20_clip1_downsamp4_326.png} & \includegraphics[width=0.19\textwidth,trim={1.5cm 0.5cm 1.5cm 0.5cm},clip]{figures/forged_output/figs/fortnite_7_oth_play39_clip8_downsamp4_861.png} \\ % & KLD=0.82, CC=0.77 & KLD=0.59, CC=0.69 & KLD=0.54, CC=0.75 & KLD=1.28, CC=0.51 \\ \rotatebox{90}{\scriptsize{\hspace{3em}NAT training}} & \includegraphics[width=0.19\textwidth,trim={1.5cm 0.5cm 1.5cm 0.5cm},clip]{figures/forged_output/figs/fortnite_1_our_play10_clip41_downsamp4_714.png} & \includegraphics[width=0.19\textwidth,trim={1.5cm 0.5cm 1.5cm 0.5cm},clip]{figures/forged_output/figs/fortnite_5_our_play17_clip5_downsamp4_256.png} & 
\includegraphics[width=0.19\textwidth,trim={1.5cm 0.5cm 1.5cm 0.5cm},clip]{figures/forged_output/figs/fortnite_6_our_play20_clip1_downsamp4_326.png} & \includegraphics[width=0.19\textwidth,trim={1.5cm 0.5cm 1.5cm 0.5cm},clip]{figures/forged_output/figs/fortnite_7_our_play39_clip8_downsamp4_861.png} \\ \end{tabular} \endgroup \caption{Typical gaze maps obtained through traditional training (second row -- Eq.~{2} in main paper) and NAT (third row) compared to the ground truth (first row) for TASED on the ForGED dataset, training with KLD loss, 30 training videos and 5 observers per frame (see Table~2 in main paper). Each panel reports in the title the corresponding KLD and CC values. The last column shows a failure case where the metrics KLD and CC indicate that NAT is worse than traditional training, although a visual inspection might indicate otherwise. \small{\textit{ForGED images have been published with the written permission of Epic Games.}}} \label{fig:results} \end{figure*} \subsection{Discrepancy functions} To further verify that NAT generalizes to different discrepancy functions (continuing Sec.~{5.2} from main paper), we train and test TASED on LEDOV~\cite{Jia18} with the fixation-based discrepancy function, $d=-0.1\text{NSS}$, and the combination of fixation and density-based discrepancy functions, $d=\text{KLD}-0.1\text{CC}-0.1\text{NSS}$ (which is a popular discrepancy function used in video-saliency research~\cite{Dro20,Wan18}). The LEDOV test set is used for all reported evaluations on this dataset, which contains gaze data from $32$ observers per video. Table~\ref{tab:tased-nss-ledov} shows NAT vs. traditional training (Eq.~{2} in main paper) using $d=-0.1\text{NSS}$. For this specific experiment, we observe that with traditional training, adopting RMSprop as the optimizer (as done for all experiments in the paper) leads to very fast convergence to very high NSS values.
While this property of fast and optimal convergence of the discrepancy function has proven useful for all experiments in the paper (see Sec.~\ref{sec:hyperparam} for details), for this specific experiment the solution provided by RMSprop optimization shows poor generalization to all other saliency metrics. This behavior is alleviated to some extent by replacing RMSprop with Stochastic Gradient Descent (SGD) for traditional training -- but at the cost of poor convergence in terms of NSS. To show this, in Table~\ref{tab:tased-nss-ledov}, we report two sets of experiments for traditional training for each size of training dataset (one with SGD and another with RMSprop). With NAT, however, we observe consistent optimal convergence due to the regularizing effect of the NAT formulation, which prevents overfitting to dataset noise. We further observe that using additional terms with NSS in the discrepancy function, such as in $d=\text{KLD}-0.1\text{CC}-0.1\text{NSS}$, overcomes some of the issues of training with NSS alone. Tables~\ref{tab:tased-kldccnss-ledov} and~\ref{tab:tased-kldccnss-diem} show the comparison of traditional training vs. NAT for this combined discrepancy function. A high NSS performance in this case is well-correlated with good performance in terms of the other metrics. Furthermore, we note that the performance of NAT is superior to traditional training when less gaze data is available, with the gap between the two approaches closing as more gaze data becomes available. Given our analyses of all of the experiments with various discrepancy functions and dataset types, we conclude that models trained with density-based discrepancy functions (e.g., KLD) perform better for both traditional training and NAT, with NAT consistently outperforming traditional training. \begin{table}[h!]
\centering \resizebox{0.93\columnwidth}{!}{ \input{sections/tables/ledov_tased_nss_hyperparam.tex}} \caption{Comparison of traditional training (Eq.~{2} in main paper) vs. NAT on LEDOV testing set, for TASED Net, trained with $-0.1\text{NSS}$ as discrepancy, and various numbers of training videos and observers. The best metric between each set of 3 experiments for a given dataset size (videos and observers) is in bold and the second-best is italicized. Given the strong overfitting behavior of NSS with traditional training using RMSprop for this particular set of experiments, we report traditional training optimized with SGD as well.}% \label{tab:tased-nss-ledov} \end{table} \begin{table}[h!] \centering \resizebox{0.93\columnwidth}{!}{ \input{sections/tables/ledov_tased_kld_cc_nss.tex} } \caption{Saliency quality metrics on LEDOV testing set, for TASED Net, training with KLD-0.1CC-0.1NSS as discrepancy, and various numbers of training videos and observers. The best metrics between traditional training (Eq.~{2} in main paper) and NAT are in bold. * Because of instability in training, in the 30 videos / 5 observers case we use KLD-0.1CC.} \label{tab:tased-kldccnss-ledov} \end{table} \begin{table}[h!] \centering \resizebox{0.93\columnwidth}{!}{ \input{sections/tables/diem_tased_kld_cc_nss.tex}} \caption{Saliency quality metrics on DIEM testing set, for TASED Net, training with KLD-0.1CC-0.1NSS as discrepancy, and various numbers of training videos and observers. The best metrics between traditional training (Eq.~{2} in main paper) and NAT are in bold.} \label{tab:tased-kldccnss-diem} \end{table} \subsection{DNN architectures} To further verify that NAT works effectively on different DNN architectures, independently of the adopted dataset, we train SalEMA~\cite{Lin19} on the ForGED dataset.
We use KLD as the discrepancy function, with RMSprop as the optimizer and a learning rate of $1e^{-5}$, rather than Adam with a learning rate of $1e^{-7}$ and binary cross entropy as the discrepancy function, as suggested by the authors (an analysis of this hyperparameter choice is discussed later). Consistent with the other cases analyzed here, the results in this case also suggest that NAT achieves better metrics than traditional training when the number of observers or videos is limited (Table~\ref{tab:salema-kld-fortnite}). \begin{table}[h!] \centering \resizebox{0.93\columnwidth}{!}{ \input{sections/tables/fortnite_salema_kld.tex}} \caption{Saliency quality metrics on ForGED testing set, for SalEMA, training with KLD as discrepancy, and various numbers of training videos and observers. The best metrics between traditional training (Eq.~{2} in main paper) and NAT are in bold.} \label{tab:salema-kld-fortnite} \end{table} \section{Additional training details} \label{sec:hyperparam} \paragraph{Hyperparameters for TASED training on LEDOV.} To ensure a fair comparison against traditional training and guarantee that the best performance is achieved for the given architecture and dataset, we first perform some hyperparameter tuning of TASED on LEDOV with traditional training (Eq.~2 in main paper). We found that using RMSprop with a learning rate of $0.001$ for KLD optimization gives better performance than the default settings originally proposed for training on DHF1K (\ie, SGD with momentum $0.9$ and learning rate $0.1$ for the decoder, stepping down by a factor of $0.1$ at iterations $750$ and $950$, and $0.001$ for the encoder), as shown in Table~\ref{tab:tased-hyperparam} and in Fig.~\ref{fig:tased-hyperparam}. Thus, we adopt RMSprop with a learning rate of $0.001$ to train TASED for both traditional training and NAT in all the experiments.
An exception to this rule is the traditional training with SGD reported in Table~\ref{tab:tased-nss-ledov}, where we adopt SGD with a learning rate of $0.0001$ (any higher leads to training instabilities due to data noise) and momentum $0.9$. \begin{figure}[h!] \centering \includegraphics[width=0.4\textwidth,trim={0cm 0cm 0cm 0.5cm}, clip]{figures/supp/tased_hyperparam.png} \caption{Validation-set performance plots (KLD vs. training iterations) during training of TASED on the LEDOV dataset with KLD as the loss function, using: SGD, initial learning rate $0.1$ for decoder and $0.001$ for encoder, momentum $0.9$ (default used by authors); and RMSprop, learning rate $0.001$. Based on this experiment, we choose RMSprop with a learning rate of $0.001$ for our experiments.} \label{fig:tased-hyperparam} \end{figure} \begin{table}[h!] \centering \resizebox{0.93\columnwidth}{!}{ \input{sections/tables/tased_hyperparam.tex}} \caption{Performance on LEDOV for TASED trained traditionally using KLD with original settings, and those used in the main paper (RMSprop, learning rate $0.001$) on the full LEDOV training set. We adopted the best hyperparameter setting (best metrics in bold) for all experiments. *Original settings: SGD, initial learning rate $0.1$ for decoder and $0.001$ for encoder, momentum $0.9$.} \label{tab:tased-hyperparam} \end{table} \paragraph{Hyperparameters for SalEMA training on LEDOV.} We train SalEMA~\cite{Lin19} on the full LEDOV dataset with the default choice for loss function and optimizer (Adam optimizer, binary cross entropy, with learning rate $1e^{-7}$), and compare against the adoption of the RMSprop optimizer with KLD as the loss function and two learning rates: $1e^{-5}$ and $1e^{-7}$ (see Table~\ref{tab:salema-hyperparam}). We train on the LEDOV training set and choose the best hyperparameter setting based on the LEDOV test-set performance for all of the experiments in the paper. \begin{table}[h!]
\centering \resizebox{0.93\columnwidth}{!}{ \input{sections/tables/salema_hyperparam.tex}} \caption{Performance comparisons on LEDOV test set for SalEMA trained with the original hyperparameter settings and the ones used in this paper (RMSprop optimizer with $1e^{-5}$ learning rate) after training on the LEDOV training set. Best metrics are in bold.} \label{tab:salema-hyperparam} \end{table} \paragraph{Details of training EML-Net.} We train EML-Net~\cite{Jia20} for image-saliency on our noisy version of the SALICON train set~\cite{Jia17} (generated by randomly selecting a subset of $5$ or $15$ fixations per image; see Sec.~5.4 in main paper). To do so, we select the ResNet50 backbone~\cite{He16}. Consistent with the recommendations of the authors, we train two versions of the encoder: first, we finetune starting from ImageNet-pretrained weights~\cite{Rus15}, and second, we finetune from Places365-pretrained weights~\cite{Zho17}. The two saliency models obtained from the encoder-training stage are input to the decoder-training pipeline to give the final image-saliency predictor for the EML-Net approach. We adopt the EML loss (which is a combination of KLD, CC and NSS losses described by the authors) for training both traditionally (Eq.~{2} in main paper) and using NAT. After searching through learning rates and optimizers, we find the author-specified choices to work best: SGD with momentum, with an initial learning rate of $0.01$ multiplied by $0.1$ after every epoch. We train both encoder and decoder for $10$ epochs. After training, the best model for each experiment in Table~7 of the main paper is selected based on validation-set performance (using \textit{all} available fixations on images in the validation set), and submitted to the SALICON benchmark for evaluation on the test set~\cite{Jia17}.
Note that even though the training is performed on few-fixation images to simulate a noisy version of the SALICON dataset, the evaluation on the test set and validation set contains \textit{all} of the available fixations. \section{Alternative methods to estimate \boldmath{$\tilde{x}$}: additional details} In Sec.~{5.4} of the main paper, we discuss an alternative strategy using Gaussian kernel density estimation (KDE) with a uniform regularizer to estimate $\tilde{x}$ for training, instead of the common practice of blurring human gaze fixation locations using a Gaussian blur kernel of size approximately $1^{\circ}$ viewing angle. We provide further details here. We estimate the optimal KDE bandwidth for \textit{each} video frame, mixed with a uniform regularizer whose coefficient is also a parameter to be estimated. We perform a per-frame estimation of the optimal KDE bandwidth and mixing coefficient to account for the general case where each frame can have a different variety of gaze-attracting points of interest, which cannot be explained by the optimal KDE bandwidth of another frame. The alternative to this is to estimate an optimal KDE bandwidth independent of the video frames, which amounts to the trivial case of obtaining a universal Gaussian-blur kernel of a different size. In this case, the treatment of the underlying gaze data for obtaining the measured saliency maps remains the same, in principle, as in our experiments with a $\sim1^{\circ}$ viewing-angle Gaussian-blur kernel (which amounts to $36$ pixels at $1920\times1080$ resolution for ForGED). To demonstrate this for completeness, in Table~\ref{tab:tased-forged-kld-smallblur}, we show some of the results for TASED trained with ForGED and KLD as discrepancy. For this experiment, the training gaze maps are estimated using a Gaussian-blur kernel of size $27$ pixels (at resolution $1920\times1080$), which amounts to $\sim 0.75^{\circ}$ viewing angle.
We note in Table~\ref{tab:tased-forged-kld-smallblur} that NAT outperforms traditional training, consistent with our experiments with the $\sim1^{\circ}$ viewing-angle Gaussian-blur kernel reported in the main paper. \begin{table}[h!] \centering \resizebox{0.93\columnwidth}{!}{ \input{sections/tables/fortnite_tased_kld_small_blur.tex}} \caption{Performance comparisons on ForGED test set for TASED trained with KLD as discrepancy. Instead of computing gaze maps for the train set with a Gaussian blur kernel of size approximately $1^{\circ}$ viewing angle (which amounts to $36$ pixels at $1920\times1080$ resolution), we use a Gaussian blur kernel of size approximately $0.75^{\circ}$ viewing angle ($27$ pixels). As we can see, the conclusion regarding the superior performance of NAT compared to traditional training applies independently of the blur-kernel size.} \label{tab:tased-forged-kld-smallblur} \end{table} To estimate the optimal bandwidth using KDE, we optimize a gold-standard model for saliency prediction, which predicts the probability of fixation for one observer, given the gaze data from the remaining observers for the video frame (leave-one-out cross-validation)~\cite{Kum15,Tan20}. We observe that, when gaze fixation locations are sparsely distributed across a frame, the optimal bandwidth for KDE is high, which would result in high-spread, almost-uniform saliency maps. Independent of the estimation strategy for $\tilde{x}$, we posit that there is an underlying uncertainty/noise in the measured saliency map, which is accounted for during training using NAT, to obtain improved performance over traditional training. \begin{figure*}[h!]
\centering \begingroup \setlength{\tabcolsep}{0pt} \renewcommand{\arraystretch}{2} \begin{tabular}{ccc} \includegraphics[width=0.3\textwidth,trim={0cm 0cm 0cm 0.5cm}, clip]{figures/supp/tased_ledov_30obs_461vid.png} & \includegraphics[width=0.3\textwidth,trim={0cm 0cm 0cm 0.5cm}, clip]{figures/supp/tased_ledov_5obs_461vid.png} & \includegraphics[width=0.3\textwidth,trim={0cm 0cm 0cm 0.5cm}, clip]{figures/supp/tased_ledov_2obs_461vid.png} \\ \includegraphics[width=0.3\textwidth,trim={0cm 0cm 0cm 0.5cm}, clip]{figures/supp/tased_ledov_30obs_100vid.png} & \includegraphics[width=0.3\textwidth,trim={0cm 0cm 0cm 0.5cm}, clip]{figures/supp/tased_ledov_5obs_100vid.png} & \includegraphics[width=0.3\textwidth,trim={0cm 0cm 0cm 0.5cm}, clip]{figures/supp/tased_ledov_2obs_100vid.png} \\ \includegraphics[width=0.3\textwidth,trim={0cm 0cm 0cm 0.5cm}, clip]{figures/supp/tased_ledov_30obs_30vid.png} & \includegraphics[width=0.3\textwidth,trim={0cm 0cm 0cm 0.5cm}, clip]{figures/supp/tased_ledov_5obs_30vid.png} & \includegraphics[width=0.3\textwidth,trim={0cm 0cm 0cm 0.5cm}, clip]{figures/supp/tased_ledov_2obs_30vid.png} \\ \multicolumn{3}{c}{\includegraphics[width=0.8\textwidth,trim={0cm 0cm 0cm 0.5cm}, clip]{figures/supp/legend.PNG}} \\ \end{tabular} \endgroup \caption{Training-set and validation-set KLD as a function of training iterations for TASED trained on LEDOV (``GT'' in the legend indicates ``ground-truth''). In contrast to the traditional training (Eq.2 in main paper), NAT does not overfit. } \label{fig:training-behavior} \end{figure*} \section{Overfitting behavior with NAT} Figure~\ref{fig:training-behavior} shows the training and validation set performance (in terms of KLD) as a function of the training iteration when training TASED on LEDOV dataset with KLD discrepancy, for different number of observers and videos in the training set. 
For both the traditional approach (dashed orange line) and NAT (dashed purple line), the training-set curves decrease regularly, as expected in a smooth optimization process. However, the validation-set curves for traditional training (continuous orange line) quickly reach a minimum and then start diverging towards a higher asymptotic value, which is a clear sign of overfitting. On the other hand, the validation curves for NAT (continuous purple line) are always lower (suggesting better performance) and tend to stabilize around asymptotic values without growing anymore --- a clear sign, in this case, that overfitting is avoided. Note that for the training-set curves (dashed lines), the human saliency map used for KLD computation is derived using the limited number of observers available during the specific training experiment. As an additional check for the overfitting behavior of traditional training, we plot the training-set performance when compared against human saliency maps obtained from \textit{all} the observers available in the training videos ($32$); these are indicated with dash-dotted lines. For few-observer experiments, the performance of traditional training on all-observer evaluations gets worse with increasing iterations. On the contrary, the performance on the validation set, training set, and all-observer training set does not generally show signs of overfitting for NAT. Only in a few cases are the NAT plots unstable at the beginning of training (see the peaks in the validation curves in the leftmost panels for the $30$-observer trainings), but the curves then stabilize to an asymptotic value. The only exception is the upper-right panel in the figure ($2$-observer training with $461$ videos), where we believe that the slight increase in the validation-set performance value is due to the approximation introduced in NAT to make it computable in practice. We observed a similar behavior when training on other datasets.
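For concreteness, the per-frame estimation of the KDE bandwidth and uniform mixing coefficient described in the previous section can be sketched in a few lines. The function names and grid-search ranges below are illustrative assumptions, and a plain leave-one-out fixation likelihood stands in for the full gold-standard model of~\cite{Kum15,Tan20}:

```python
import numpy as np

def loo_log_likelihood(fix, bw, eps, frame_area):
    """Leave-one-out log-likelihood of the per-frame model
    p(x) = (1 - eps) * GaussianKDE(x; bw) + eps / frame_area,
    evaluated at each fixation with that fixation left out."""
    n = len(fix)
    d2 = ((fix[:, None, :] - fix[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
    k = np.exp(-d2 / (2.0 * bw ** 2)) / (2.0 * np.pi * bw ** 2)  # 2-D Gaussian kernel
    np.fill_diagonal(k, 0.0)                                  # leave one out
    p = (1.0 - eps) * k.sum(axis=1) / (n - 1) + eps / frame_area
    return float(np.log(p).sum())

def fit_frame(fix, frame_area, bandwidths, mix_weights):
    """Grid-search the bandwidth and uniform mixing weight for one frame."""
    ll, bw, eps = max((loo_log_likelihood(fix, b, e, frame_area), b, e)
                      for b in bandwidths for e in mix_weights)
    return bw, eps
```

Running \texttt{fit\_frame} independently per frame lets tightly clustered fixations select a small bandwidth, while sparse fixations push the fit toward larger bandwidths and a stronger uniform component.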
\section{Introduction} In a series of recent publications we studied the extraction of the ground-state parameters from SVZ sum rules~\cite{svz} (see also, e.g., \cite{nsvz,rad,ck}). We made use of a quantum-mechanical potential model since this is essentially the only case where the standard procedures adopted in the method of sum rules may be tested: the estimates for the ground-state parameters obtained by these procedures may be compared with the actual values of the ground-state parameters calculated from the Schr\"odinger equation, thus providing an unambiguous check of the reliability of the method. The main results of our papers may be summarized as follows: (i) The standard approximation of a constant effective continuum threshold does not allow one to probe the accuracy of the extracted hadron parameter \cite{lms_2ptsr,lms_3ptsr,m_lcsr,lms_scalar}. (ii) Allowing for a Borel-parameter-dependent effective continuum threshold (we denote the Borel parameter $\tau$ in QCD and $T$ in the potential model) and fixing this quantity by using the information on the ground-state mass leads to a considerable improvement of the accuracy of the method \cite{lmss}. The goal of this letter is to demonstrate that the results obtained at each step of the extraction procedure both in QCD and in potential models follow the same pattern. This similarity gives a strong argument that all our findings concerning the extraction of bound-state parameters from correlators obtained in potential model apply also to QCD. In particular, it points out a way of improving the results for bound-state parameters obtained from various correlators in QCD. The paper is organized as follows: In Section \ref{sect:qcd} we recall the QCD results from Ref.~\cite{jamin} for the vacuum correlator of two pseudoscalar currents --- the basic object for the extraction of $f_B$ within the framework of QCD sum rules. 
Section \ref{sect:qm} provides the analogous results for a quantum-mechanical model for the case of a potential containing confining and Coulomb interactions. Section \ref{sect:sr} compares the procedures of extracting the decay constant. Section \ref{sect:conclusions} summarizes our conclusions. \section{\label{sect:qcd}Correlator and sum rule in QCD} Let us consider the correlator \begin{eqnarray} \label{Pi_QCD} \Pi(p^2)=i \int d^4x e^{ipx}\langle 0|T\left(j_5(x)j_5^\dagger(0)\right)| 0\rangle \end{eqnarray} of two pseudoscalar currents $j_5(x)=(m_b+m_u)\bar q(x) i\gamma_5 b(x)$. The Borel-transformed operator product expansion (OPE) series for this correlator has the form \begin{eqnarray} \label{OPE_QCD} \Pi(\tau)=\int\limits^\infty_{(m_b+m_u)^2}e^{-s\tau}\rho_{\rm pert}(s,\mu)ds + \Pi_{\rm power}(\tau,\mu), \end{eqnarray} where the perturbative spectral density reads \begin{eqnarray} \label{rhopert} \rho_{\rm pert}(s,\mu)=\rho^{(0)}(s,\mu)+\frac{\alpha_s(\mu)}{\pi}\rho^{(1)}(s,\mu)+ \left(\frac{\alpha_s(\mu)}{\pi}\right)^2\rho^{(2)}(s,\mu)+\cdots, \end{eqnarray} $\mu$ being the renormalization scale. We make use of the results for $\rho_{\rm pert}$ reported in \cite{jamin} and do not reproduce the explicit expression for this quantity here. Following the argument of \cite{jamin} we work in terms of the running masses in the $\overline{\rm MS}$ scheme. Therefore, in all expressions in this section the quark masses $m_b$ and $m_u$, and $\alpha_s$ are the $\overline{\rm MS}$ running quantities at the scale~$\mu$. Recall that the full Borel-transformed correlator (\ref{OPE_QCD}) does not depend on the renormalization scale $\mu$; however, both the perturbative expansion truncated to a fixed order in $\alpha_s$ and the truncated power corrections depend on $\mu$. We provide numerical estimates for $\mu=m_b$; for this choice of the scale the known terms of the perturbative expansion exhibit a good hierarchy. 
We set $m_b(m_b)=4.2$ GeV and for the other QCD parameters make use of the central values reported in Table I of \cite{jamin}.\footnote{It is well known that the numerical value of the correlator depends sizeably on the values of $m_b(m_b)$ and on the specific choice of the renormalization scale $\mu$. However, a discussion of this dependence is far beyond the scope of this paper. A detailed analysis of $f_B$ in QCD is deferred to a separate publication.} The power corrections have also been considered in \cite{jamin}: \begin{eqnarray} \label{power_QCD} &&\Pi_{\rm power}(\tau,\mu=m_b)= (m_b+m_u)^2e^{-m_b^2\tau} \\ \nonumber &&\hspace{1cm}\times\left\{-m_b\langle \bar qq\rangle \left[ 1+\frac{2C_F\alpha_s}{\pi}\left(1-\frac{m_b^2\tau}2\right) -(1+m_b^2\tau)\frac{m_u}{2m_b}+\frac{m_u^2}{2} m_b^2\tau^2 +\frac{m_0^2\tau}{2}\left(1-\frac{m_b^2\tau}{2}\right) \right]+ \frac{1}{12}\langle{\frac{\alpha_s}{\pi} FF}\rangle \right\}. \end{eqnarray} The parameter $m_0^2$ describes the contribution of the four-quark condensate. Notice that radiative corrections to the condensates increase rather fast with $\tau$. The correlator (\ref{Pi_QCD}) may be calculated in terms of hadron intermediate states: \begin{eqnarray} \label{hadron} \Pi(\tau)=\Pi_{\rm g}(\tau)+\mbox{contributions of excited states}, \qquad \Pi_{\rm g}(\tau)\equiv {f_B^2M_B^4}e^{-M_B^2\tau}, \end{eqnarray} where $f_B$ is the decay constant of the $B$-meson, defined by \begin{eqnarray} \label{decay_constant} (m_b+m_u)\langle 0 |\bar u i\gamma_5 b| B \rangle = f_B M_B^2. \end{eqnarray} For large values of $\tau$ the contributions of the excited states decrease faster than the ground-state contribution and $\Pi(\tau)$ is dominated by the ground state. Unfortunately, the truncated OPE does not allow one to evaluate the correlator at sufficiently large $\tau$, so the excited states give a sizeable contribution to $\Pi(\tau)$ for the considered values of $\tau$.
According to the duality assumption, the contribution of the excited states is described by the perturbative contribution above some effective continuum threshold $s_{\rm eff}$. Then one obtains the following relation: \begin{eqnarray} \label{SR_QCD} \Pi_{\rm g}(\tau)=\Pi_{\rm dual}(\tau, s_{\rm eff}) \end{eqnarray} with \begin{eqnarray} \Pi_{\rm dual}(\tau, s_{\rm eff})\equiv \int\limits^{s_{\rm eff}(\tau)}_{(m_b+m_u)^2} e^{-s\tau}\rho_{\rm pert}(s,\mu)\,ds + \Pi_{\rm power}(\tau,\mu). \end{eqnarray} In the region near the physical continuum threshold at $s=(M_{B^*}+m_\pi)^2$, the perturbative spectral density and the hadron spectral density are very different. Consequently, the effective continuum threshold as defined in (\ref{SR_QCD}) turns out to be necessarily a function of the Borel parameter $\tau$. The necessity of the $\tau$-dependence of $s_{\rm eff}$ may be understood by comparing the left-hand side (l.h.s.) and right-hand side (r.h.s.) of (\ref{SR_QCD}): the only way to obtain a single exponential on the l.h.s.\ for a given spectral density of the integral representation on the r.h.s.\ is to have a $\tau$-dependent $s_{\rm eff}$. The $\tau$-dependence of $s_{\rm eff}$ may also be demonstrated explicitly: for any value of the ground-state parameters on the l.h.s.\ of (\ref{SR_QCD}) one can obtain $s_{\rm eff}$ numerically and see that it does depend on $\tau$. One should be aware of the fact that the $\tau$-dependence of $s_{\rm eff}$ cannot and does not contradict any principles of field theory: the dual correlator is a {\it hand-made} object; such an object does not emerge in field theory. Therefore, the properties of the dual correlator (e.g., its analytic properties) are very different from the properties of the field-theoretic correlators. Clearly, the standard assumption of a $\tau$-independent $s_{\rm eff}$ is a possible assumption.
We shall demonstrate, however, that relaxing this assumption leads to a visible improvement of the obtained results. We define the dual decay constant and the dual invariant mass by the relations \begin{eqnarray} \label{fdual} f_{\rm dual}^2(\tau)=M_B^{-4} e^{M_B^2\tau}\Pi_{\rm dual}(\tau, s_{\rm eff}(\tau)), \qquad \label{mdual} M_{\rm dual}^2(\tau)=-\frac{d}{d\tau}\log \Pi_{\rm dual}(\tau, s_{\rm eff}(\tau)). \end{eqnarray} Notice that the deviation of the dual mass from the actual mass of the ground state gives an indication of the excited-state contributions picked up by the dual correlator. \section{\label{sect:qm}Correlator and sum rule in potential models} In parallel to QCD, let us consider a quantum-mechanical model with a potential containing a confining part, for which we take the harmonic-oscillator (HO) form, and an attractive Coulomb interaction: \begin{eqnarray} \label{H} H=\frac{k^2}{2m}+\frac{m\omega^2 r^2}{2}-\frac{\alpha}{r}. \end{eqnarray} A quantum-mechanical analogue of the Borelized two-point function has the form \cite{nsvz} \begin{eqnarray} \Pi(T)=\langle \vec r'=0|\exp(-H T)|\vec r=0\rangle. \end{eqnarray} We construct the analogue of the OPE series for this correlator by retaining, similarly to the QCD case, the perturbative contributions up to $O(\alpha^2)$ (three loops of the non-relativistic field theory) and two power corrections, including $O(\alpha)$ corrections to them. The resulting expression reads (the correlator for a pure Coulomb potential can be found in~\cite{voloshin}):\footnote{Interestingly, at small values of $T$ the system behaves like a free system, since the contribution of the confining potential as well as~that of the radiative corrections vanishes for small $T$. According to \cite{svz} this is a signature of asymptotic freedom.
So, a non-relativistic potential model (\ref{H}) even with a constant $\alpha$ behaves like an asymptotically free theory.} \begin{eqnarray} \Pi_{\rm OPE}(T)&=&\Pi_{\rm pert}(T)+\Pi_{\rm power}(T), \nonumber\\ \Pi_{\rm pert}(T)&=& \left(\frac{m}{2\pi T}\right)^{3/2} \left[1+\sqrt{2\pi mT}\alpha+\frac{1}{3}m\pi^2 T \alpha^2\right], \nonumber\\ \Pi_{\rm power}(T)&=& \left(\frac{m}{2\pi T}\right)^{3/2} \left[-\frac{1}{4}\omega^2 T^2\left(1+\frac{11}{12}\sqrt{2\pi m T}\alpha\right) +\frac{19}{480}\omega^4 T^4\left(1+\frac{1541}{1824}\sqrt{2\pi m T}\alpha\right)\right]. \end{eqnarray} Now, according to the standard procedures of the method of sum rules, the dual correlator is obtained as follows: we represent the perturbative contribution as a single spectral representation in the relative kinetic energy $z$ of the interacting quarks and cut this representation at $z_{\rm eff}$: \begin{eqnarray} \Pi_{\rm dual}(T,z_{\rm eff})= \left(\frac{m}{2\pi}\right)^{3/2}\int\limits_0^{z_{\rm eff}} dz \exp(-z T)\left[2\sqrt{\frac{z}{\pi}}+\sqrt{2\pi m}\alpha+\frac{\pi^{3/2} m\alpha^2}{3\sqrt{z}}\right] +\Pi_{\rm power}(T). \end{eqnarray} By construction, the dual correlator is related to the ground-state contribution by \begin{eqnarray} \label{Pidual} \Pi_{\rm dual}(T,z_{\rm eff})=\Pi_{\rm g}(T)\equiv R_{\rm g}\exp(-E_{\rm g}T),\qquad R_{\rm g}=|\psi_{\rm g}(r=0)|^2. \end{eqnarray} As we have shown in our previous studies of potential models, the effective continuum threshold defined according to (\ref{Pidual}) is a function of the Borel time parameter $T$. For our numerical analysis, we adopt the following parameter values: a reduced quark mass of $m=0.175$~GeV,~which corresponds to a constituent quark mass of 0.350~GeV relevant for nonrelativistic computations; $\omega=0.5$~GeV, which leads to a realistic radius of the $q\bar q$ system, and $\alpha=0.3$. 
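With these parameter values, the ground-state characteristics of the Hamiltonian (\ref{H}) are straightforward to cross-check numerically. A minimal finite-difference sketch of the $l=0$ radial Schr\"odinger equation (natural units, $\hbar=1$; the grid step and cutoff are illustrative choices, not taken from the text):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Potential-model parameters in GeV units (hbar = 1)
m, w, alpha = 0.175, 0.5, 0.3

# Uniform radial grid for the reduced wave function u(r) = r R(r),
# with boundary conditions u(0) = u(R_max) = 0
h, n = 0.01, 2500                    # step size and number of interior points
r = h * np.arange(1, n + 1)          # R_max = 25 GeV^-1

# l = 0 radial equation:  -u''/(2m) + (m w^2 r^2 / 2 - alpha / r) u = E u
diag = 1.0 / (m * h ** 2) + 0.5 * m * w ** 2 * r ** 2 - alpha / r
off = -0.5 / (m * h ** 2) * np.ones(n - 1)
E, v = eigh_tridiagonal(diag, off, select='i', select_range=(0, 0))

u = v[:, 0] / np.sqrt(h)                          # normalize: int u^2 dr = 1
psi0 = abs(u[0] / r[0]) / np.sqrt(4.0 * np.pi)    # psi(0) = R(0) / sqrt(4 pi)
print(E[0], psi0)                                 # approx 0.647 and 0.078
```

Diagonalizing the tridiagonal Hamiltonian gives the lowest eigenvalue and, from the normalized reduced wave function, the value of the wave function at the origin, in agreement with the numbers quoted below.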
The energy and wave function of the ground state are found by solving numerically the Schr\"odinger equation with the help of the Mathematica code provided in \cite{franz}: the ground-state energy is $E_g=0.6473$~GeV [for comparison, a pure HO model yields $E^{\rm HO}_g=0.75$~GeV]; the ground-state wave function at the origin has the value $\psi(r=0)=0.0783$~GeV$^{3/2}$ [$\psi^{\rm HO}(r=0)=(m \omega/\pi)^{3/4}= 0.068$ GeV$^{3/2}$]. Obviously, the effect of the Coulomb interaction is not small. \section{\label{sect:sr}Extraction of the decay constant} Whether a $\tau$-independent or some $\tau$-dependent effective continuum threshold is considered, the crucial problem~is~the choice of the criterion for fixing this quantity. We shall proceed as follows: \subsection{The Borel window} First, we must fix the working $\tau$ window where, on the one hand, the OPE gives an accurate description of the exact correlator (i.e., the higher-order radiative and power corrections are small) and, on the other hand, the ground~state gives a sizeable contribution to the correlator. In QCD, we set the window as follows: \begin{eqnarray} \label{our_window} 0.05\, {\rm GeV}^{-2} < \tau < 0.18\, {\rm GeV}^{-2}. \end{eqnarray} In the region $\tau < 0.18$ GeV$^{-2}$ the $O(\alpha_s)$ and $O(\alpha_s^2)$ contributions to $\Pi(\tau)$ amount to less than 10\% and 3\% of the leading term, respectively. Power corrections give about 20\% of the leading term. We point out that the radiative corrections to the condensates increase rather fast with $\tau$, so it is preferable to stay at relatively low values of $\tau$. Therefore our window is located at lower values of $\tau$ compared to the window adopted in \cite{jamin}; in the region (\ref{our_window}) the accuracy of the truncated OPE is higher. It is known that the experimental value of the $B$ meson decay constant is $f_B\approx 200$ MeV. Adopting this value, we may calculate the relative contribution of the ground state to the correlator.
In our window it does not exceed 50\%. In the potential model, we choose the window as \begin{eqnarray} 0.2\, {\rm GeV}^{-1}< T < 0.8\, {\rm GeV}^{-1}. \end{eqnarray} For these values of $T$ the omitted unknown higher-order power corrections are negligible, so the correlator is known with good accuracy. The relative contribution of the ground state to the correlator amounts to 10\% at $T=0.2\, {\rm GeV}^{-1}$ and to 50\% at $T=0.8\,{\rm GeV}^{-1}$. We shall see that for a relative ground-state contribution of this size our procedure allows one to extract the decay constant with a reasonable accuracy. \subsection{Fixing the effective continuum threshold} Widely used is the so-called stability criterion: one looks for that constant value of $s_{\rm eff}$ for which the extracted~decay constant is most stable in the window. This Borel stability is an implementation of a self-evident statement that the physical observable cannot depend on $\tau$, an auxiliary parameter of the method. The problem is, however, that the independence of a hadron decay constant of $\tau$, being a necessary condition, is not sufficient to guarantee the extraction of the right value. We have given several examples for potential models \cite{lms_2ptsr,lms_3ptsr} which nicely demonstrate that assuming~a $\tau$-independent effective continuum threshold and fixing its value by requiring maximal stability in the Borel window may lead to the extraction of a very inaccurate value. In this paper, we consider a different algorithm for the extraction of $f_B,$ which makes use of the knowledge of the ground-state mass \cite{jamin}. In parallel to QCD we present also the results for a quantum-mechanical model (\ref{H}). This is done in order to demonstrate the way our algorithm works in the case where the exact value of the decay constant is known. A comparison makes~clear that, with respect to the extraction procedure, there are no essential differences between QCD and quantum mechanics. 
The algorithm developed in our previous works and established to work well for different correlators in the potential model is very simple: we consider a set of $\tau$-dependent Ans\"atze for the effective continuum threshold (for the case of the potential model one just replaces $\tau\to T$ and $M\to E$): \begin{eqnarray} \label{zeff} s^{(n)}_{\rm eff}(\tau)= \sum\limits_{j=0}^{n}s_j^{(n)}\tau^{j}. \end{eqnarray} Obviously, the standard $\tau$-independent effective continuum threshold is also taken into account by (\ref{zeff}). Now, we fix the parameters on the r.h.s.\ of (\ref{zeff}) as follows: we calculate the dual mass squared according to (\ref{mdual}) for the $\tau$-dependent $s_{\rm eff}$ of Eq.~(\ref{zeff}). We then evaluate $M^2_{\rm dual}(\tau)$ at several values of $\tau = \tau_i$ ($i = 1,\dots, N$, where $N$ can be taken arbitrarily large) chosen uniformly in the window. Finally, we minimize the squared difference between $M^2_{\rm dual}$ and the known value $M^2_B$: \begin{eqnarray} \label{chisq} \chi^2 \equiv \frac{1}{N} \sum_{i = 1}^{N} \left[ M^2_{\rm dual}(\tau_i) - M_B^2 \right]^2. \end{eqnarray} This gives us the parameters of the effective continuum thresholds. As soon as the latter are fixed, it is straightforward to obtain the decay constant. Figure~\ref{Plot:1} shows the results for QCD and for our potential model; in the latter the actual value of the decay constant has been found from the Schr\"odinger equation, so that we may control each step of the extraction procedure. \begin{figure} \begin{tabular}{cc} \includegraphics[width=6cm]{HO_RB.eps} & \includegraphics[width=6cm]{QCD_RB.eps}\\ (a) & (b) \\ \includegraphics[width=7cm]{HO_MB.eps} & \includegraphics[width=7cm]{QCD_MB.eps}\\ (c) & (d) \\ \includegraphics[width=7cm]{HO_fB.eps} & \includegraphics[width=7cm]{QCD_fB.eps}\\ (e) & (f) \end{tabular} \caption{\label{Plot:1} Left column: potential model (\ref{H}); right column: QCD.
First line: relative contribution of the ground state to the correlator; second line: fitted dual mass; third line: corresponding dual decay constant. The dashed line in Fig.~(e) corresponds to the~true value of the decay constant obtained by solving the Schr\"odinger equation. The index $n$ is the power of the polynomial Ansatz~for the Borel-parameter-dependent effective continuum threshold.} \end{figure} First, let us notice that the Borel-parameter-dependent effective thresholds corresponding to $n=1$ and $n=2$ lead to a visible improvement of the stability of the dual mass [Figs.~\ref{Plot:1}(c) and (d)] compared to the constant threshold. This means that the dual correlator for $n>0$ is less contaminated by the excited states; according to the philosophy of QCD sum rules, the better stability of $M_{\rm dual}$ with $n>0$ is an important achievement for the reliability of the results. According to Fig.~\ref{Plot:1}, in the potential model the true value of the decay constant lies in the band provided by the linear ($n=1$) and the quadratic ($n=2$) fits. We have checked that this result holds in a broad range of the parameters of the potential model. The similarities of each step of the extraction procedure in QCD and in the potential model~are evident.\footnote{One may observe that outside the window the behavior of the dual mass in QCD and in the potential model is not exactly the same. This is related to the fact that the quark condensate in QCD is negative, whereas the corresponding power correction in potential models (for any confining potential) has a positive sign.} Therefore, it is tempting to expect that also in QCD the decay constant lies in the range provided by the linear and the quadratic fits. In any case, the difference between the results obtained for $n=1$ and $n=2$ constitutes a realistic estimate of the intrinsic uncertainty of the extracted decay constant.
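The fitting procedure of Eq.~(\ref{chisq}) is easy to reproduce in the potential model, where $E_{\rm g}$ and $R_{\rm g}=|\psi_{\rm g}(0)|^2$ are known exactly; the sketch below (starting values, grid, and numerical settings are our own choices) fits a linear Ansatz $z_{\rm eff}(T)=z_0+z_1 T$ by minimizing the deviation of the dual energy from $E_{\rm g}$ in the window:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

m, omega, alpha = 0.175, 0.5, 0.3   # model parameters from the text
E_g = 0.6473                        # exact ground-state energy (GeV)

def pi_power(T):
    s = np.sqrt(2 * np.pi * m * T) * alpha
    return (m / (2 * np.pi * T))**1.5 * (
        -0.25 * omega**2 * T**2 * (1 + 11.0 / 12.0 * s)
        + 19.0 / 480.0 * omega**4 * T**4 * (1 + 1541.0 / 1824.0 * s))

def pi_dual(T, z_eff):
    # perturbative spectral integral cut at z_eff; substituting z = u^2
    # removes the integrable 1/sqrt(z) endpoint singularity
    g = lambda u: np.exp(-u * u * T) * (4 * u**2 / np.sqrt(np.pi)
        + 2 * u * np.sqrt(2 * np.pi * m) * alpha
        + 2 * np.pi**1.5 * m * alpha**2 / 3)
    I, _ = quad(g, 0.0, np.sqrt(max(z_eff, 1e-8)))
    return (m / (2 * np.pi))**1.5 * I + pi_power(T)

def e_dual(T, p, eps=1e-4):
    # E_dual(T) = -d/dT log Pi_dual(T, z_eff(T)), total derivative
    z = lambda t: p[0] + p[1] * t
    return -(np.log(pi_dual(T + eps, z(T + eps)))
             - np.log(pi_dual(T - eps, z(T - eps)))) / (2 * eps)

Ts = np.linspace(0.2, 0.8, 10)      # the Borel window, GeV^-1

def chi2(p):
    vals = [e_dual(T, p) for T in Ts]
    if not np.all(np.isfinite(vals)):
        return 1e6                  # penalize unphysical thresholds
    return float(np.mean([(v - E_g)**2 for v in vals]))

fit = minimize(chi2, x0=[1.5, 0.0], method='Nelder-Mead')
# "decay constant" analogue: R_dual = Pi_dual * exp(E_g T)
R = np.mean([pi_dual(T, fit.x[0] + fit.x[1] * T) * np.exp(E_g * T) for T in Ts])
print(fit.x, fit.fun, R)   # compare R with |psi(0)|^2 ~ 0.0061 GeV^3
```

Extending the Ansatz to the quadratic case amounts to one more parameter in `z`; the spread between the linear and quadratic results then plays the role of the uncertainty band discussed above.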
If one considers only the standard constant Ansatz ($n=0$) for the effective continuum threshold, the accuracy of the extracted decay constant cannot be probed. \section{\label{sect:conclusions}Discussion and conclusions} We have presented a detailed analysis of the extraction of the decay constant from the two-point function in QCD and in a potential model. Our results may be summarized as follows: \begin{itemize} \item[(i)] The comparison presented in this work makes obvious that, with respect to the extraction of the ground-state parameters, there are no essential differences between QCD and quantum mechanics: as soon as the parameters of the Lagrangian are fixed, and the truncated OPE is calculated with a reasonable accuracy (taking into account also the relevant choice of the renormalization scale in QCD), the extraction procedures are very similar. At first glance, this similarity might look surprising since we know that the structures of bound states in potential models and in QCD are rather different. However, the method of dispersive sum rules does not make use of (and does not provide) information about the details of the ground-state structure. What really matters for extracting the ground-state parameters in this method is the structure of the OPE for a given correlator. Since the structure of the OPE in QCD and of its analogue in potential models is rather similar, it should not be surprising at all that the extraction procedures in potential models and in QCD are similar, too. In view of the above similarity, our previous results for the extraction of the ground-state parameters (including also the form factors \cite{lmss}) obtained in potential models have direct implications for the corresponding analyses~in QCD and should be taken quite seriously.
\item[(ii)] Allowing for $\tau$-dependent Ans\"atze for the effective continuum threshold leads to two essential improvements: (a) The stability of both the dual mass and the dual decay constant in the window is considerably improved if one proceeds from the standard $\tau$-independent to the $\tau$-dependent Ansatz for the effective continuum threshold. (b) In the potential model, where the exact decay constant has been calculated from the Schr\"odinger equation, allowing for a $\tau$-dependent effective continuum threshold and fixing its parameters according to (\ref{chisq}) --- i.e., by minimizing the deviation of the dual mass from the known ground-state mass in the window --- leads to the extraction of a more accurate value. As follows from our analysis, a realistic band of values of the decay constant is provided by the numerical results obtained with the linear and quadratic Ans\"atze for the effective continuum threshold. The intrinsic uncertainty (i.e., the one related to the extraction procedure) of the decay constant found in this way is expected to be at the level of a few percent. Although not rigorous in the mathematical sense, this estimate for the systematic uncertainty may be considered as a realistic educated guess supported by findings in models where the true value of the decay constant is known. Moreover, we have~doubts that a more rigorous estimate of the intrinsic error of the method of sum rules may be obtained in principle. \end{itemize} \vspace{.5cm} \noindent {\it Acknowledgements}: We are grateful to Matthias Jamin for providing us with his Mathematica code for the calculation of $f_B$. D.~M.\ gratefully acknowledges financial support from the Austrian Science Fund (FWF) under project P20573 and from the Alexander von Humboldt-Stiftung. D.~M. expresses his gratitude to the Institute of Theoretical Physics of the Heidelberg University for hospitality during his visit, where this work was started. 
The work was supported in part by Federal Agency for Science and Innovation of Russian Federation under state contract 02.740.11.0244 and by EU Contract No. MRTN-CT-2006-035482, ``FLAVIAnet''. \newpage
\section{Introduction}\label{sec.intro} Argumentation has contributed to the AI community with a human-like mechanism for the formalization of commonsense reasoning. Briefly speaking, argumentation can be associated with the interaction of arguments for and against a claim supported by some form of reasoning from a set of premises, with the purpose of ascertaining if that conclusion is acceptable~\cite{Bench-CaponD07,rahwan2009argumentation}. The study of this process suggested several argument-based formalisms dealing with applications in many areas such as legal reasoning, autonomous agents and multi-agent systems. In such environments, an agent may use argumentation to perform individual reasoning to reach a resolution over contradictory evidence or to decide between conflicting goals, while multiple agents may use dialectical argumentation to identify and settle differences interacting via diverse processes such as negotiation, persuasion, or joint deliberation. Many of such accounts of argumentation are based on Dung's foundational work characterizing \emph{Abstract Argumentation Frameworks} (\emph{AF})~\cite{Dung93,Dung95} where arguments are considered as atomic entities and their interaction is represented solely through an attack relation. In many cases, commonsense reasoning requires some representation of time since a notion of ``change" is relevant in the modeling of the argumentation capabilities of intelligent agents~\cite{AugustoS99,AugustoSimari01}. In particular, in~\cite{CoboMS11,CoboMartinezSimariNMR10,CoboMS10} a novel framework is proposed, called \emph{Timed Abstract Framework} (\emph{TAF}), combining arguments and temporal notions. In this formalism, arguments are relevant only in a period of time, called its availability interval. This framework maintains a high level of abstraction in an effort to capture intuitions related with the dynamic interplay of arguments as they become available and cease to be so. 
The notion of \textit{availability interval} refers to an interval of time in which the argument can be legally used for the particular purpose of an argumentation process. Thus, this kind of timed argument has a limited influence in the system, given by the temporal context in which these arguments are taken into account. In \emph{TAF}, a skeptical timed interval-based semantics is proposed, using admissibility notions. As arguments may get attacked during a certain period of time, the notion of defense is also time-dependent, requiring a proper adaptation of classical acceptability. Furthermore, algorithms for the characterization of defenses between timed arguments are presented, used to specify the acceptability status of an argument varying over time~\cite{CoboMS10,CoboMartinezSimariNMR10}. In most existing argumentation frameworks, only a conflict interaction between arguments is considered. In recent years, however, studies on argumentation have shown that a support interaction may also exist between arguments; this kind of relation captures many real-world situations. In this sense, several formal approaches were considered, such as deductive support, necessary support and evidential support~\cite{cayrol2005acceptability,polberg2014,CohenGGS14,cayrol2015}, where a classical argumentative framework is enhanced to model a positive and a negative interaction between arguments. In particular, a simple abstract formalization of argument support is provided in the framework proposed by Cayrol and Lagasquie-Schiex in~\cite{cayrol2005acceptability}, called \emph{Bipolar Argumentation Framework} (\emph{BAF}), where they extend Dung's notion of acceptability by distinguishing two independent forms of interaction between arguments: support and attack. Besides the classical semantic consequences of attack, new semantic considerations are introduced that rely on the support of an attack and the attack of a support.
In this work, we provide a timed argumentation framework to analyze the effect of attacks and supports in a dynamic discussion, giving rise to a refined \emph{BAF}. In this sense, the resulting framework provides a suitable model for different time-dependent issues. The main contribution here is to provide an enhanced framework for modeling positive (support) and negative (attack) interactions varying over time, which are relevant in many real-world situations. The aim of our work is to advance the integration of temporal argumentation in different, time-related application domains and contribute to the successful integration of argumentation in different artificial intelligence applications, such as Knowledge Representation, Autonomous Agents in Decision Support Systems, and others of similar importance. Next, in order to state the relevance of our formalization, we analyse a classical bipolar argumentation example introduced in~\cite{amgoud2008bipolarity} about editorial publishing. Our formalism helps to represent a model that analyses the temporal effects, as follows:\\ \emph{Suppose a scenario where an Editorial is considering presenting an important note related to a public person $\tt{P}$. For that, the chief editorial writer considers the following arguments, which are related to the importance and legality of the note.} \begin{itemize}\itemsep 4pt \item[\argu{I}:] \emph{Information $\tt{I}$ concerning person $\tt{P}$ should be published.} \item[\argu{P}:] \emph{Information $\tt{I}$ is private, so $\tt{P}$ denies publication.} \item[\argu{S}:] \emph{$\tt{I}$ is important information concerning $\tt{P}$'s son.} \item[\argu{M}:] \emph{$\tt{P}$ is the new prime minister, so everything related to $\tt{P}$ is public.} \end{itemize} \emph{Controversies arise during the above discussion. That is the case of the conflict between arguments \argu{P} and \argu{I}, and between arguments \argu{M} and \argu{P}.
On the other hand, there is a relation between arguments \argu{P} and \argu{S}, which is clearly not a conflict. Moreover, \argu{S} provides a new piece of information reinforcing argument \argu{P}.}\\ Although this is a proper example to introduce positive argument relations, it does not consider the evolution of time in an explicit way. Argumentation is a process by nature, so it is interesting to evaluate arguments and conflicts in different stages of this process. In the previous example, from a temporal perspective, the analysis is made at a point in time where all arguments are available. We want to take into account the fact that arguments are introduced at different moments. Moreover, some arguments may become invalid or unusable over time. Thus, the editorial publishing example can be adapted in order to consider the evolution of information in time by making explicit the moments where those arguments can be used.\\ \emph{Based on the arguments presented previously, \argu{I} and \argu{P} can both be considered as general information applicable at any moment, a sort of editorial rules. However, the argument \argu{M} is available during the period of time where $\tt{P}$ is prime minister. Before that, argument \argu{M} does not apply. And after leaving the Prime Minister's office, the information about $\tt{P}$ may be less relevant for publication. Then, a new prime minister $\tt{P}_2$ may be a more important public person than $\tt{P}$, at least for media purposes. The publisher may dismiss information about $\tt{P}$.
Consider now that the chief editorial writer analyses a more complex scenario, taking some additional information into account in order to make a proper evaluation: the periods of time where $\tt{P}_2$ and $\tt{P}$ are prime ministers as well as the birth dates of their children, as follows:} \begin{itemize}\itemsep 4pt \item[\argu{I}:] \emph{Information $\tt{I}$ concerning person $\tt{P}$ should be published.} \item[\argu{P}:] \emph{Information $\tt{I}$ is private, so $\tt{P}$ denies publication.} \item[\argu{S}:] \emph{$\tt{I}$ is important information concerning $\tt{P}$'s son.} \item[\argu{T}:] \emph{$\tt{I}$ is important information concerning $\tt{P}_2$'s son.} \item[\argu{M}:] \emph{$\tt{P}$ is the new prime minister, so everything related to $\tt{P}$ is public.} \item[\argu{N}:] \emph{$\tt{P}_2$ is the new prime minister, so everything related to $\tt{P}_2$ is public.} \end{itemize} \begin{center} $\begin{array}{|c|c|} \hline Argument & Temporal \ Availability\\ \hline \hline \argu{I} & [0, \infty) \\ \argu{P} & [0, \infty) \\ \argu{S} & [2013, Apr - 2013, Oct] \\ \argu{T} & [2012, Feb - 2012, Jun] \\ \argu{M} & [2012, Oct - 2014, Oct] \\ \argu{N} & [2010, Jun - 2012, Oct) \\ \hline \end{array} $ \label{tab.temporalexample} \end{center} \emph{The information about time is depicted in Figure \ref{GraphExample1}, where we show the time intervals in which every argument is \textit{available} or \textit{relevant} in a particular argumentative discussion. As we can see, the attack relation between arguments \argu{I} and \argu{P} is available at any moment of time, while the conflict between arguments \argu{M} and \argu{P} is active only in the time interval where \argu{M} is available, $[2012, Oct - 2014, Oct]$. On the other hand, argument \argu{M} reinforces argument \argu{I} in the time interval $[2012, Oct - 2014, Oct]$, giving additional information about the person $\tt{P}$.} \begin{figure}[ht!]
\begin{center}\leavevmode \xymatrix @R=0pc @C=0pc{ &&&&&&&&&&&&&&&&&&&&&&&&&\\ \ar@{->}[rrrrrrrrrrrrrrrrrrrrrrrr]^{time}&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&\\ \ar@{..}[r]&\ar@{=}^{2010}[rrrr]&&&&\ar@{=}^{2011}[rr]&&\ar@{=}^{2012}[rrrrrrr]&&&&&&&\ar@{=}^{2013}[rrrrr]&&&&&\ar@{=}^{2014}[rrr]&&&\ar@{..}[r]&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&\\ \mbox{\footnotesize 0}\ar@{-}[uuuu]\ar@{..}[rrr]&\ar@{-}[uuuu]&&\mbox{\footnotesize Jun}\ar@{-}[uuu]\ar@{..}[rr]&&\mbox{\footnotesize Jan}\ar@{-}[uuuu]\ar@{..}[rr]&&\mbox{\footnotesize Jan}\ar@{-}[uuuu]&\mbox{\footnotesize Feb}\ar@{-}[uuu]\ar@{..}[rr]&&\mbox{\footnotesize Jun}\ar@{-}[uuu]\ar@{..}[rr]&&\mbox{\footnotesize Oct}\ar@{-}[uuu]\ar@{..}[rr]&&\mbox{\footnotesize Jan}\ar@{-}[uuuu]\ar@{..}[rr]&&\mbox{\footnotesize Apr}\ar@{-}[uuu]\ar@{..}[rr]&&\mbox{\footnotesize Oct}\ar@{-}[uuu]&\mbox{\footnotesize Jan}\ar@{-}[uuuu]\ar@{..}[rr]&&\mbox{\footnotesize Oct}\ar@{-}[uuu]&\mbox{\footnotesize Jan}\ar@{-}[uuuu]&&\\\\ &&&&&&&&&&&&&&&&&&&&&&&&\\ \ar@{-}[rrrrrrrrrrrrrrrrrrrrrrrr]_{\argu{I}}&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&\\ \ar@{-}[rrrrrrrrrrrrrrrrrrrrrrrr]_{\argu{P}}&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&\ar@{-}[rr]_{\argu{T}}&&&&&&&&\ar@{-}[rr]_{\argu{S}}&&&&&&&&\\ &&&&&&&&\ar@{-}[uu]&&\ar@{-}[uu]&&&&&&\ar@{-}[uu]&&\ar@{-}[uu]&&&&&&\\ &&&&&&&&&&&&\ar@{-}[rrrrrrrrr]_{\argu{M}}&&&&&&&&&&&&\\ &&&&&&&&&&&&\ar@{-}[uu]&&&&&&&&&\ar@{-}[uu]&&&\\ &&&\ar@{-}[rrrrrrrrr]_{\argu{N}}&&&&&&&&&&&&&&&&&&&&&\\ &&&\ar@{-}[uu]&&&&&&&&&\ar@{-}[uu]&&&&&&&&&&&&\\ } \caption{Availability Distribution for the Arguments.} \label{GraphExample1} \end{center} \end{figure} In this example the time dimension is necessary to create a proper argumentation model that describes the evolution of the argumentative discussion. 
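The availability intervals in the table above translate directly into code. The sketch below (data structures and helper names are ours, not part of \emph{TAF} or \emph{T-BAF}) computes the interval in which an attack is active as the intersection of the availability intervals of its endpoints; half-open ends are treated as closed for simplicity:

```python
import math

def ym(year, month):
    """Fractional-year encoding of a 'year, month' point."""
    return year + (month - 1) / 12.0

# Availability intervals of the example, as (start, end) pairs; math.inf
# encodes open-ended availability.
avail = {
    'I': (0.0, math.inf), 'P': (0.0, math.inf),
    'S': (ym(2013, 4), ym(2013, 10)), 'T': (ym(2012, 2), ym(2012, 6)),
    'M': (ym(2012, 10), ym(2014, 10)), 'N': (ym(2010, 6), ym(2012, 10)),
}
attacks = [('P', 'I'), ('M', 'P')]

def intersect(a, b):
    """Intersection of two intervals, or None if they are disjoint."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def active_attacks(t):
    """Attacks whose attacker and target are both available at time t."""
    return [(x, y) for x, y in attacks
            if avail[x][0] <= t <= avail[x][1]
            and avail[y][0] <= t <= avail[y][1]]

print(intersect(avail['M'], avail['P']))   # interval where M attacks P
print(active_attacks(ym(2013, 6)))
```

At a time point before Oct~2012 only the attack of \argu{P} on \argu{I} is active, while inside \argu{M}'s availability interval both attacks are, matching the discussion of the example.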
In this representation, we can analyse the different relationships between the arguments from a new perspective, such as: the specification of time intervals where an argument is accepted, the determination of moments in which an argument is strengthened by its supports (providing more information about a particular point), or the establishment of \textit{when} the supporting arguments provide extra conflict points for the supported argument. Furthermore, the proposed formalism allows the study of certain temporal properties associated with the arguments, such as their acceptability status over time. This work is organized as follows: Section 2 presents a brief review of the classical bipolar abstract argumentation framework, which allows the representation of support and conflict defined over arguments. In Section 3, we present some intuition to model time notions in the argumentative process. In Section 4, we introduce a concrete extension of the bipolar argumentation formalism where the temporal notion associated with the arguments is taken into account, and different temporal acceptability semantics are presented. In Section 5, we present a real-world example where the \emph{T-BAF}'s notions are applied in order to analyze a dynamic argumentation model. Finally, Section 6 and Section 7 are devoted to related work, concluding remarks and further issues. \section{Bipolar Abstract Argumentation}\label{sec.bipolar} The basic idea behind argumentation is to construct arguments for and against a conclusion, analyse the general scenario, and then select the \textit{acceptable} ones. Arguments play different roles with respect to each other. One might then say that arguments are presented in a ``bipolar'' way, since arguments in favour of a conclusion can be considered as positive while arguments against it can be considered as negative. Based on this intuition, when representing the essential mechanism of argumentation, the notion of bipolarity is a natural one.
Abstracting away from the inner structure of the arguments, the Abstract Bipolar Argumentation Framework proposed by Cayrol and Lagasquie-Schiex in~\cite{cayrol2005acceptability} extends Dung's notion of acceptability by distinguishing two independent forms of interaction between arguments: support and attack. This new relation is assumed to be totally independent of the attack relation (i.e., it is not defined in terms of the attack relation), providing a positive relation between arguments. \begin{Definition}[Bipolar Argumentation Framework]\label{Def.Bipolar} A Bipolar Argumentation Framework (\baf) is a 3-tuple $\Theta = \bipolar$, where \ard is a set of arguments, \atts and \supp are disjoint binary relations on \ard called attack relation and support relation, respectively. \end{Definition} In order to represent a \baf, Cayrol and Lagasquie-Schiex extended the notion of graph presented by Dung in~\cite{Dung95} by adding the representation of support between arguments. This argumentative model provides a starting point to analyse an argumentative discussion enriched by the bipolarity of human reasoning. This notion is defined as follows. \begin{Definition}[Bipolar Argumentation Graph]\label{Def.BipolarGraph} Let $\Theta = \bipolar$ be a \baf. We define a directed graph for $\Theta$, denoted as \graphbipolar{\Theta}, taking as nodes the elements in \ard, and two types of arcs: one for the attack relation (represented by plain arrows), and one for the support relation (represented by dotted arrows). \end{Definition} In order to consider the interaction between supporting and defeating arguments, Cayrol and Lagasquie-Schiex in~\cite{cayrol2005acceptability} introduce the notions of \textit{supported} and \textit{secondary} defeat, which combine a sequence of supports with a direct defeat. These notions are presented in the following definition.
\begin{Definition}[Supported and Secondary Defeat]~\label{def.defeat} Let $\Theta= \bipolar$ be a \baf, and $\argum{A}, \argum{B} \in \ard$ two arguments. \begin{itemize} \item[--] A supported defeat from \argum{A} to \argum{B} is a sequence $\argum{A}_1 \ \R_1 \ ... \ \R_{n-1} \ \argum{A}_n$, with $n \geq 3$, where $\argum{A}_1 = \argum{A}$ and $\argum{A}_n = \argum{B}$, such that $\forall i = 1 ... n-2$, $\R_i = \supp$, and $\R_{n-1} = \atts$. \item[--] A secondary defeat from \argum{A} to \argum{B} is a sequence $\argum{A}_1 \ \R_1 \ ... \ \R_{n-1} \ \argum{A}_n$, with $n \geq 3$, where $\argum{A}_1 = \argum{A}$ and $\argum{A}_n = \argum{B}$, such that $\R_{1} = \atts$ and $\forall i = 2 ... n-1$, $\R_i = \supp$. \end{itemize} \end{Definition} Note that, in \emph{BAF}, a sequence reduced to two arguments $\argum{A} \ \atts \ \argum{B}$ (a direct defeat $\argum{A} \to \argum{B}$) is also considered as a supported defeat from \argum{A} to \argum{B}. \begin{Example}\label{ex.bipolarframework} Given a \baf $\Theta =\bipolar$, where: \begin{itemize} \item[] $\ard = \{\argum{A}; \argum{B}; \argum{C}; \argum{D}; \argum{E}; \argum{F}; \argum{G}; \argum{H}; \argum{I}; \argum{J} \}$, \item[] $\atts = \{(\argum{B},\argum{A}); (\argum{A},\argum{H});(\argum{C},\argum{B}); (\argum{G},\argum{I});(\argum{J},\argum{I});(\argum{F},\argum{C})\}$, and \item[] $\supp = \{(\argum{D},\argum{C}); (\argum{H},\argum{G});(\argum{I},\argum{F}); (\argum{E},\argum{B})\}$.
\end{itemize} \begin{figure}[ht] \begin{center}\leavevmode \xymatrix @R=0pc @C=0pc{ &{\argu{D}}&&&&&{\argu{C}}&&&&&{\argu{B}}&&&&&{\argu{E}} \\ &{\blacktriangle}\ar@{..>}[rrrrr]&&&&&{\blacktriangle}\ar@{->}[rrrrr]&&&&&{\blacktriangle}\ar@{->}[rrddd]&&&&&{\blacktriangle}\ar@{..>}[lllll] \\ &&&&&&&&&&&&&&&& \\ &&&&&&&&&&&&&&&& \\ &&&{\argu{F}}&{\blacktriangle}\ar@{->}[rruuu]&&&&&&&&{\argu{A}}&{\blacktriangle}\ar@{->}[rrrddd]&&& \\ &&&&&&&&&&&&&&&& \\ &&&&&&&&&&&&&&&& \\ &{\blacktriangle}&&&&&{\blacktriangle}\ar@{<-}[lllll]\ar@{..>}[lluuu]&&&&&{\blacktriangle}\ar@{->}[lllll]&&&&&{\blacktriangle}\ar@{..>}[lllll] \\ &{\argu{J}}&&&&&{\argu{I}}&&&&&{\argu{G}}&&&&&{\argu{H}} \\ } \caption{Bipolar argumentation graph.} \label{Graph.Bipolar} \end{center} \vspace*{-15pt} \end{figure} We analyze the bipolar argumentation framework $\Theta$ characterized by the bipolar interaction graph depicted in \emph{Figure}~\ref{Graph.Bipolar}. For instance, there are supported defeats from both \argum{J} and \argum{H} to \argum{I}: \argum{H} supports \argum{G}, which attacks \argum{I}, while \argum{J} attacks \argum{I} directly. In addition, \argum{J} and \argum{G} secondary defeat \argum{F}, because \argum{I} supports \argum{F} and is attacked by both \argum{J} and \argum{G}. Moreover, \argum{A} secondary defeats \argum{G} through the support of \argum{H}, and thus \argum{G} is defeated; also, there is a supported defeat from \argum{B} to \argum{A} (a direct attack), and a supported defeat from \argum{D} to \argum{B} through \argum{C}. Note that the secondary defeat from \argum{G} to \argum{F} is invalidated, since \argum{A} attacks \argum{H}, which is a supporter of \argum{G}. \end{Example} Cayrol and Lagasquie-Schiex in~\cite{cayrol2005acceptability} argued that a set of arguments must be in some sense coherent to model one side of an intelligent dispute.
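The defeat notions illustrated in the example above can be computed mechanically; the following is a small sketch over the relations of Example~\ref{ex.bipolarframework} (the function names are ours, not from~\cite{cayrol2005acceptability}):

```python
# Attack and support relations of the example; a pair (x, y) means x R y.
atts = {('B', 'A'), ('A', 'H'), ('C', 'B'), ('G', 'I'), ('J', 'I'), ('F', 'C')}
supp = {('D', 'C'), ('H', 'G'), ('I', 'F'), ('E', 'B')}

def support_closure(x):
    """Arguments reachable from x through a (possibly empty) support chain."""
    seen, stack = {x}, [x]
    while stack:
        a = stack.pop()
        for s, t in supp:
            if s == a and t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def supported_defeat(a, b):
    # a supp ... supp c with c att b; the direct attack (c == a) is included
    return any((c, b) in atts for c in support_closure(a))

def secondary_defeat(a, b):
    # a att c with c supp ... supp b (at least one support step)
    return any(b != c and b in support_closure(c)
               for x, c in atts if x == a)

def conflict_free(S):
    """No supported or secondary defeat between two members of S."""
    return not any(supported_defeat(a, b) or secondary_defeat(a, b)
                   for a in S for b in S)

print(supported_defeat('H', 'I'), secondary_defeat('J', 'F'))
print(conflict_free({'J', 'C', 'D', 'A'}))
```

Running it confirms, for instance, the supported defeat from \argum{H} to \argum{I} and the secondary defeats on \argum{F} discussed above.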
The coherence of a set of arguments is analyzed \emph{internally} (a set of arguments in which an argument attacks another in the same set is not acceptable), and \emph{externally} (a set of arguments which contains both a supporter and an attacker for the same argument is not acceptable). The internal coherence is captured by extending the definition of \textit{conflict free set} proposed in~\cite{Dung95}, and external coherence is captured with the notion of \textit{safe} set. \begin{Definition}[Conflict-free and Safe]\label{Def.ConflictSafe} Let $\Phi= \bipolar$ be a \baf, and $S \subseteq \ard$ be a set of arguments. \begin{itemize} \item[--] $S$ is \emph{Conflict-free} iff $\nexists \argum{A}, \argum{B} \in S$ such that there is a supported or a secondary defeat from \argum{A} to \argum{B}. \item[--] $S$ is \emph{Safe} iff $\nexists \argum{A} \in \ard$ and $\nexists \argum{B}, \argum{C} \in S$ such that there is a supported defeat or a secondary defeat from \argum{B} to \argum{A}, and either there is a sequence of support from \argum{C} to \argum{A}, or $\argum{A} \in S$. \end{itemize} \end{Definition} The notion of conflict-freeness in the above definition requires taking supported and secondary defeats into account, becoming a more restrictive definition than the classical version of conflict-freeness proposed by Dung. In addition, Cayrol and Lagasquie-Schiex show that the notion of safety is powerful enough to encompass the notion of conflict-freeness (\ie, if a set is safe, then it is also conflict-free). Furthermore, another requirement has been considered in~\cite{cayrol2005acceptability}, which concerns only the support relation, namely the closure under \supp. \begin{Definition}[Closure in BAF] Let $\Phi= \bipolar$ be a \baf, and let $S \subseteq \ard$ be a set of arguments. $S$ is closed under \supp iff $\forall \ \argum{A} \in S$, $\forall \ \argum{B} \in \ard$, if $\argum{A} \ \supp \ \argum{B}$ then $\argum{B} \in S$.
\end{Definition} \begin{Example}[Continued Example~\ref{ex.bipolarframework}]~\label{ex.conflictsafe} The set $S_1 = \{\argum{I}; \argum{F}; \argum{D}; \argum{B}\}$ is conflict-free but not safe, since \argum{I} supported defeats \argum{C} through \argum{F}, while \argum{D} supports \argum{C}. The set $S_2 = \{\argum{J}; \argum{C}; \argum{D}; \argum{A}\}$ is conflict-free and closed under \supp, and therefore safe. \end{Example} Based on the previous concepts, Cayrol and Lagasquie-Schiex in~\cite{cayrol2005acceptability} extend the notion of defence of an argument with respect to a set of arguments, taking into account the relations of support and conflict between arguments. \begin{Definition}[Defence of $A$ from $B$ by $S$] Let $S \subseteq \ard$ be a set of arguments, and $\argum{A} \in \ard$ be an argument. $S$ collectively defends \argum{A} iff $\forall \ \argum{B} \in \ard$, if there is a supported or secondary defeat from \argum{B} to \argum{A}, then $\exists \ \argum{C} \in S$ such that there is a supported or secondary defeat from \argum{C} to \argum{B}. In this case, it can be interpreted that \argum{C} defends \argum{A} from \argum{B}. \end{Definition} The authors proposed three different definitions of admissibility, from the most general to the most specific. The most general is based on Dung's admissibility definition; then, they extended the d-admissibility notion by taking external coherence into account. Finally, external coherence is strengthened by requiring that admissible sets be closed under \supp. \begin{Definition}[Admissibility in BAF]\label{Def.AdmissibilityBipolar} Let $\Phi = \bipolar$ be a \baf. Let $S \subseteq \ard$ be a set of arguments. The admissibility of a set $S$ is defined as follows: \begin{itemize} \item[--] $S$ is d-admissible if $S$ is conflict-free and defends all its elements. \item[--] $S$ is s-admissible if $S$ is safe and defends all its elements.
\item[--] $S$ is c-admissible if $S$ is conflict-free, closed under \supp, and defends all its elements. \end{itemize} \end{Definition} \begin{Example}[Continued Example~\ref{ex.bipolarframework}]~\label{ex.admissible} The set $S_1 = \{\argum{J}; \argum{C};$ $\argum{D}; \argum{A}; \argum{E}\}$ is d-admissible, since it is conflict-free and defends all its elements; however, it is not s-admissible, because \argum{C} and \argum{E} both belong to $S_1$, where \argum{C} supported defeats \argum{B} and \argum{E} supports \argum{B}, so $S_1$ is not safe. Note that if a set of arguments does not satisfy s-admissibility, then it does not satisfy c-admissibility; hence $S_1$ is not c-admissible. The set $S_2 = \{\argum{J}; \argum{C}; \argum{D}; \argum{A}\}$ is s-admissible, since it is safe and defends all its elements; in addition, it is closed under \supp, so $S_2$ is c-admissible too. \end{Example} From the notions of coherence and admissibility, and extending the propositions introduced in~\cite{Dung95}, Cayrol and Lagasquie-Schiex in~\cite{cayrol2005acceptability} proposed several new acceptability semantics. \begin{Definition}[Stable extension]\label{Def.StableBipolar} Let $\Phi = \bipolar$ be a \baf. Let $S \subseteq \ard$ be a set of arguments. $S$ is a {\em stable extension} of $\Phi$ if $S$ is conflict-free and for all $\argum{A} \notin S$, there is a supported or a secondary defeat of $\argum{A}$ in $S$. \end{Definition} \begin{Definition}[Preferred extension]\label{Def.PreferredBipolar} Let $\Phi = \bipolar$ be a \baf. Let $S \subseteq \ard$ be a set of arguments. $S$ is a d-preferred (resp. s-preferred, c-preferred) extension if $S$ is maximal (for set-inclusion) among the d-admissible (resp. s-admissible, c-admissible) subsets of \ard.
\end{Definition} \begin{Example}[Continued Example~\ref{ex.bipolarframework}]~\label{ex.extensions} In our example, the set of arguments $S_1= \{\argum{J}; \argum{C}; \argum{D}; \argum{A}; \argum{E}\}$ is a stable extension, since there exists a defeater in $S_1$ for each of the arguments \argum{B}, \argum{I}, \argum{F}, \argum{G} and \argum{H} (as explained in \emph{Example~\ref{ex.bipolarframework}}). However, as we saw in \emph{Example~\ref{ex.admissible}}, this extension is not safe. On the other hand, based on \emph{Definition~\ref{Def.PreferredBipolar}}, we analyze the bipolar argumentation graph and determine the following preferred extensions: $S_1$ is a maximal d-admissible set, so $S_1$ is a d-preferred extension; $S_2 = \{\argum{J}; \argum{C}; \argum{D}; \argum{A}\}$ is a maximal s-admissible set, so $S_2$ is an s-preferred extension; and $S_2$ is a maximal c-admissible set, therefore $S_2$ is a c-preferred extension. \end{Example} In the following section, the attack and support relations are considered in a timed context. Later, time-dependent semantics are presented. \section{Towards a Temporal Argumentation Framework} Our interest is to provide bipolar argumentation frameworks with a time-based notion of argument interaction. The focus is on an abstract notion of \textit{availability} of arguments, which is a metaphor for a dynamic relative importance. Throughout this paper we mainly use the term ``available'', meaning that an argument will be considered just for a specific interval of time. However, availability can be interpreted in different ways. It may be the period of time in which an argument is relevant, strong enough, appropriate, or any other suitable notion of \textit{relative importance} among arguments. The premise is that this availability is not persistent. In such a dynamic scenario, defeat and support may be sporadic, and proper time-based semantics need to be elaborated.
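Before formalizing availability, the untimed coherence notions of the previous section can be checked mechanically. The following brute-force Python sketch is purely illustrative, with the attack and support relations of \emph{Figure}~\ref{Graph.Bipolar} hard-coded; it verifies the properties claimed for $S_2$ in \emph{Example~\ref{ex.conflictsafe}}.

```python
# Purely illustrative: brute-force checks of conflict-freeness, safety
# and closure under support for S2 = {J, C, D, A}.
ATTACKS  = {("B", "A"), ("A", "H"), ("C", "B"), ("G", "I"), ("J", "I"), ("F", "C")}
SUPPORTS = {("D", "C"), ("H", "G"), ("I", "F"), ("E", "B")}
ARGS = set("ABCDEFGHIJ")

def supp_chain(x, y):
    """True iff a non-empty sequence of supports leads from x to y."""
    frontier, seen = {x}, set()
    while frontier:
        step = {b for (a, b) in SUPPORTS if a in frontier}
        if y in step:
            return True
        seen |= frontier
        frontier = step - seen
    return False

def defeats(x, y):
    """Supported or secondary defeat from x to y."""
    if (x, y) in ATTACKS:
        return True
    if any(supp_chain(x, z) and (z, y) in ATTACKS for z in ARGS):
        return True  # supported defeat: supports ... then one attack
    return any((x, z) in ATTACKS and supp_chain(z, y) for z in ARGS)

def conflict_free(S):
    return not any(defeats(x, y) for x in S for y in S)

def safe(S):
    return not any(
        defeats(b, a) and (a in S or any(supp_chain(c, a) for c in S))
        for a in ARGS for b in S
    )

def closed_under_support(S):
    return all(b in S for (a, b) in SUPPORTS if a in S)

S2 = {"J", "C", "D", "A"}
```

For $S_2$, all three checks succeed, in line with the example: `conflict_free(S2)`, `safe(S2)` and `closed_under_support(S2)` are all `True`.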
\begin{figure}[ht] \begin{center}\leavevmode \xymatrix @R=0pc @C=0pc{ &&&&&&&& \\ &&&&&&{\argu{A}}&{\blacktriangle}& \\ &&&&&&&& \\ &&&&&&&& \\ &{\argu{C}}&{\blacktriangle}\ar@{..>}[dddrr]&&&&&& \\ &&&&&&&& \\ &&&&&&&& \\ &&&&{\blacktriangle}\ar@{->}[rrruuuuuu]&{\argu{B}}&&& \\ &&&&&&&& \\ &&&&(a)&&&& \\ } \xymatrix @R=0pc @C=0pc{ &&&&&&&& \\ &&&&&&{\argu{A}}&{\blacktriangle}& \\ &&&&&&&& \\ &&&&&&&& \\ &{\argu{C}}&{\blacktriangle}\ar@{->}[dddrrr]&&&&&& \\ &&&&&&&& \\ &&&&&&&& \\ &&&&&{\blacktriangle}\ar@{..>}[uuuuuurr]&{\argu{B}}&& \\ &&&&&&&& \\ &&&&(b)&&&& \\ } \caption{Arguments Relations} \label{Graph.Relation} \end{center} \end{figure} Let \argu{A}, \argu{B} and \argu{C} be three arguments such that \argu{B} \atts \argu{A} and \argu{C} \supp \argu{B}, as shown in Figure \ref{Graph.Relation}(a). This is a minimal example of supported defeat. In the classical definition of bipolar argumentation framework, the set $S = \{\argu{C},\argu{B}\}$ is conflict-free. When considering availability of arguments, different conflict-free situations may arise. Suppose at moment $t_1$ arguments $\argu{C}$ and $\argu{A}$ are available while $\argu{B}$ is not. Then the set $S_1 = \{\argu{C},\argu{A}\}$ is conflict-free, since the attacker of $\argu{A}$ is not available, \ie, not relevant or strong at this particular moment. Suppose later at moment $t_2$ argument $\argu{B}$ becomes available. Then $S_1$ is no longer conflict-free, since $\argu{C}$ supports a (now available) defeater of $\argu{A}$. Suppose that later, at moment $t_3$, argument $\argu{B}$ becomes unavailable again. Then the set $S_1$ regains its conflict-free quality. Hence, in a timed context a set of arguments is not conflict-free by itself, but only with respect to certain moments in time. The set $S_1$ is conflict-free at $t_1$ and at $t_3$ and, more generally, in intervals of time in which the availability of the related arguments does not change.
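The three moments just discussed can be replayed programmatically. The following minimal sketch is illustrative only; it assumes availability is given as plain sets of available arguments per moment (the interval machinery is introduced later).

```python
# Minimal timed sketch of Figure "Arguments Relations" (a):
# B attacks A, C supports B; availability varies over three moments.
ATTACKS  = {("B", "A")}
SUPPORTS = {("C", "B")}

AVAILABLE = {            # hypothetical availability at three moments
    "t1": {"C", "A"},
    "t2": {"C", "A", "B"},
    "t3": {"C", "A"},
}

def defeats_at(t):
    """Supported defeats active at moment t: direct attacks plus one
    support step (chains of length one suffice for this example)."""
    avail = AVAILABLE[t]
    active = {(x, y) for (x, y) in ATTACKS if x in avail and y in avail}
    for (c, b) in SUPPORTS:
        for (b2, a) in ATTACKS:
            if b == b2 and {c, b, a} <= avail:
                active.add((c, a))
    return active

def conflict_free_at(S, t):
    return not any((x, y) in defeats_at(t) for x in S for y in S)

S1 = {"C", "A"}
```

As in the discussion above, $S_1$ is conflict-free at $t_1$ and $t_3$ but not at $t_2$, when the supported defeat from $\argu{C}$ to $\argu{A}$ becomes active.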
In a similar fashion, consider the scenario of Figure \ref{Graph.Relation}(b), where \argu{B} \supp \argu{A} and \argu{C} \atts \argu{B}. Suppose at moment $t_1$ arguments $\argu{B}$ and $\argu{A}$ are available while $\argu{C}$ is not. Then the set $S_1 = \{\argu{A},\argu{B}\}$ is conflict-free. Suppose at moment $t_2$ argument $\argu{C}$ is available, while $\argu{B}$ is not. Given the nature of the support relation, argument \argu{A} is not available either. Then, the set $S_2 = \{\argu{C}\}$ is conflict-free. Now, suppose that all the arguments are available at time $t_3$; then there is an underlying conflict in $\{\argu{C},\argu{A}\}$. In this case, argument $\argu{B}$ provides support to $\argu{A}$, but at some moments $\argu{B}$ is attacked by argument $\argu{C}$, producing a conflict for $\argu{A}$. This leads to the intuition that $\argu{B}$ is a weak support for $\argu{A}$ when $\argu{C}$ is available. An interesting aspect of these situations is that, in a timed context, the concept of \textit{argument extension} must be revised. Now the question is not \textit{whether} an argument is accepted (or rejected), but \textit{when}. Hence, a semantic analysis of the status of an argument must refer to intervals of time. We define a specific structure for this notion, called \textit{t-profile}, to be presented later. It is clear that in a dynamic environment, the set of conflict-free sets changes through time. Thus, the notion of acceptability in a bipolar argumentation scenario must be adapted when properly considered in a timed context. In this sense, we reformulate the attack and support notions defined in the classical bipolar argumentation framework, modeling the positive and negative effects of the arguments over time. In the following section, the formal model of Timed Bipolar Argumentation Framework is introduced and the corresponding argument semantics are presented.
\section{Modeling Temporal Argumentation with T-BAF}\label{sec.taf} The \textit{Timed Bipolar Argumentation Framework} (\tbaf) is an argumentation formalism where arguments are valid only during specific intervals of time, called \textit{availability intervals}. Attacks and supports are not permanent, since they are considered only when the involved arguments are \textit{available}. Thus, when identifying the set of acceptable arguments, the outcome associated with a \tbaf may vary in time. In order to represent time, we assume that a correspondence has been defined between the time line and the set of real numbers. A time interval, representing a period of time without interruptions, is defined as follows. For legibility reasons, we use a different symbol as separator in the definition of intervals: ``$-$'' instead of the traditional ``,''. \begin{Definition}[Time Interval]\label{TimeInterval} A time interval $I$ represents a continuous period of time, identified by a pair of time-points. The initial time-point is called the startpoint of $I$, and the final time-point is called the endpoint of $I$. The intervals can be: \begin{itemize} \item[] Closed: defines a period of time that includes both definition points (startpoint and endpoint). Closed intervals are noted as $[a-b]$. \item[] Open: defines a period of time that includes neither the startpoint nor the endpoint. Open intervals are noted as $(a-b)$. \item[] Semi-Closed: defines a period of time that includes exactly one of the definition points. Depending on which one is included, they are noted as $(a-b]$ (includes the endpoint) or $[a-b)$ (includes the startpoint). \end{itemize} \end{Definition} As usual, any of the previous intervals is considered empty if $b < a$, and the interval $[a-a]$ represents the point in time $\{a\}$.
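The intervals just defined, together with the operations on groups of intervals used throughout the rest of the paper (intersection, and merging into disjoint maximal intervals), can be sketched as follows. This illustration is a simplification: every interval is treated as a closed pair $[a-b]$, dropping the open and semi-closed variants.

```python
# Illustrative interval-set helpers; intervals are closed (lo, hi) pairs.

def normalize(ts):
    """Return the disjoint, maximal representation of an interval set."""
    merged = []
    for (a, b) in sorted(ts):
        if merged and a <= merged[-1][1]:
            # Overlapping or adjacent: extend the last interval.
            merged[-1] = (merged[-1][0], max(merged[-1][1], b))
        else:
            merged.append((a, b))
    return merged

def intersect(ts1, ts2):
    """Pairwise intersection of two time interval sets."""
    out = []
    for (a, b) in ts1:
        for (c, d) in ts2:
            lo, hi = max(a, c), min(b, d)
            if lo <= hi:
                out.append((lo, hi))
    return normalize(out)
```

For example, `intersect([(1, 3), (4.5, 8)], [(2, 5)])` yields `[(2, 3), (4.5, 5)]`, mirroring the set-of-sets notation used in the text.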
For unbounded intervals, we use the symbols $+\infty$ and $-\infty$, as in $[a-{+\infty})$ or $({-\infty}- a]$, to indicate that there is no upper or lower bound for the interval, respectively; the side containing such a symbol is always open, written $``)"$ or $``("$. It will be necessary to group different intervals; to this end, we introduce the notion of \emph{time intervals set}. \begin{Definition}[Time Intervals Set] A time intervals set, or just intervals set, is a finite set $\mathcal{T}$ of time intervals. \end{Definition} Semantic elaborations are focused on the maximality of intervals. For instance, two adjacent intervals may be joined and considered as one interval. When convenient, we will use the set-of-sets notation for time intervals sets. Concretely, a time interval set $\mathcal{T}$ will be denoted as the set of all disjoint and $\subseteq$-maximal individual intervals included in the set. For instance, we will use $\{(1-3],\ [4.5-8)\}$ to denote the time interval set $(1-3]\ \cup\ [4.5-8)$. Now we formally introduce the notion of Timed Bipolar Argumentation Framework (\tbaf), which extends the \baf of Cayrol and Lagasquie-Schiex by incorporating an additional component, the availability function, used to characterize those time intervals where arguments are available. \begin{Definition}[Timed Bipolar Argumentation Framework]\label{def.TAF} A Timed Bipolar Argumentation framework (or simply \tbaf) is a 4-tuple $\Omega = \timebipolar$, where \ard is a set of arguments, \atts is a binary relation defined over \ard (representing attack), \supp is a binary relation defined over \ard (representing support), and $\av : \ard \longrightarrow \wp(\mathds{R})$ is an availability function for timed arguments, such that $\av(A)$ is the set of availability intervals of an argument $A$.
\end{Definition} Note that, since the arguments are only available during certain periods of time (their availability intervals), it is rational to think that a relationship between arguments is relevant only when the arguments involved are available at the same time. \begin{Definition} For any arguments \argum{A} and \argum{B}, we denote as \timedefeat{A}{B} and \timesupport{A}{B} the periods of time in which the attack and the support between \argum{A} and \argum{B} are available, respectively. \end{Definition} \begin{Example} The corresponding timed bipolar argumentation framework of the introductory example is $\Omega_{intro}=\timebipolar$ where \begin{itemize} \item \ard = $\{ {\cal I,P,S,T,M} \}$ \item \atts = $\{ (\cal P,I) , (\cal M,P)\} $ \item \supp = $\{ (\cal S,P)\} $ \item $\av({\cal I})= [0-{+\infty})$, $\av({\cal P})= [0-{+\infty})$, $\av({\cal S})= [201304-201309]$,\\ $\av({\cal T})= [201202-201206]$, $\av({\cal M})= [201209-201409]$ \end{itemize} Months and years are encoded as integers in order to preserve order. \end{Example} Some definitions are needed towards the formalization of the notion of \textit{acceptability} of arguments in \tbaf, which is a time-based adaptation of the acceptability notions presented in Section 2 for \emph{BAF}, with some new intuitions. First, we present the notion of \textit{t-profile}, binding an argument to a set of time intervals. This set represents intervals for special semantic consideration of the corresponding argument. It is a structure that formalizes the phrase ``\textit{this argument, in those intervals}". It is not necessarily the total availability of the argument as defined in the framework, so the reference has a special meaning when applied in appropriate contexts. T-profiles constitute a fundamental component for the formalization of time-based acceptability, since they are the basic unit of timed reference for an argument.
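Under a simple closed-interval representation, a t-profile is just a pair of an argument name and an interval set, and the availability of a direct relation $(X, Y)$ reduces to $\av(X) \cap \av(Y)$. The following sketch is purely illustrative: the arguments, relations and intervals are hypothetical, and endpoint openness is ignored.

```python
# Illustrative T-BAF with closed (lo, hi) availability intervals.
TBAF = {
    "args": {"A", "B", "C"},
    "attacks": {("B", "A")},
    "supports": {("C", "B")},
    "av": {"A": [(0, 100)], "B": [(90, 150)], "C": [(30, 180)]},
}

def intersect(ts1, ts2):
    """Pairwise intersection of two interval sets."""
    out = []
    for (a, b) in ts1:
        for (c, d) in ts2:
            lo, hi = max(a, c), min(b, d)
            if lo <= hi:
                out.append((lo, hi))
    return out

def relation_time(tbaf, x, y):
    """Interval set in which the attack or support (x, y) is available."""
    assert (x, y) in tbaf["attacks"] | tbaf["supports"]
    return intersect(tbaf["av"][x], tbaf["av"][y])
```

Here the attack $(B, A)$ is available only on $[90-100]$, and the support $(C, B)$ on $[90-150]$; outside those intervals the relations have no effect.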
\begin{Definition}[T-Profile]\label{def.T-profile} Let $\Omega = \timebipolar$ be a \tbaf. A timed argument profile for \argum{A} in $\Omega$, or just \tprotxt for \argum{A}, is a pair $\tpro{A}$ where $\argum{A} \in \ard$ and $\tiempo{A}$ is a set of time intervals where \argum{A} is available, \ie, $\tiempo{A} \subseteq \av(\argum{A})$. The t-profile \tprobasic{A} is called the basic t-profile of \argum{A}. \end{Definition} The basic t-profile of an argument \argum{X} may be interpreted as the reference ``\argum{X}, whenever it is available". Note that, as discussed previously, this argument may be attacked and defended as time evolves, and then its basic t-profile will probably be fragmented under different semantics. Since argument extensions are a collective construction based on arguments and interactions, several t-profiles will be considered. \begin{Definition}[Collection of T-Profiles]\label{def.Set-t-profile} Let $\Omega = \langle \ard, \atts,$ $ \supp,\av \rangle$ be a \tbaf. Let \tpron{X}{1}, \tpron{X}{2}, $\cdots$ , \tpron{X}{n} be \tprostxt. The set $C = \{$\tpron{X}{1},\tpron{X}{2}, $\cdots$ , \tpron{X}{n}$\}$ is a collection of \tprostxt iff it verifies the following conditions: \begin{itemize} \item[i\emph{)}] $\argum{X}_i \neq \argum{X}_j$ for all $i, j$ such that $i \neq j$, $1 \leq i,j \leq n$. \item[ii\emph{)}] $\tiempon{X}{i} \neq \emptyset$, for all $i$ such that $1 \leq i \leq n$. \end{itemize} \end{Definition} Given a collection of t-profiles, it will sometimes be necessary to reference all the arguments involved in those t-profiles. \begin{Definition}[Arguments from a Collection of T-profiles]\label{def.cutprofiles} Let $C$ be a collection of t-profiles. The function \cutprofiles{Args}{C}, defined as $\cutprofiles{Args}{C} = \{\argum{X} \mid \tpro{X} \in C\}$, returns the set of arguments involved in the collection of t-profiles $C$.
\end{Definition} Since arguments interact with each other, and every argument will be related to several intervals of time, it is necessary to introduce some basic operations: the intersection and inclusion of collections of t-profiles, denoted \textit{t-intersection} and \textit{t-inclusion}, formalized below. \begin{Definition}[t-intersection]\label{def.tinterBudan} Let $\Omega = \timebipolar$ be a \tbaf. Let $C_1$ and $C_2$ be two collections of \tprostxt. We define the t-intersection of $C_1$ and $C_2$, denoted $C_1 \cap_t C_2$, as the collection of t-profiles such that:\\ \begin{small} \noindent$$C_1 \cap_t C_2 = \{(X, \tiempo{X} \cap \tiempoprima{X})\,|\, \tpro{X} \in C_1, \tproprima{X} \in C_2, \mathrm{and}\, \tiempo{X} \cap \tiempoprima{X} \neq \emptyset \}$$ \end{small} \end{Definition} \begin{Definition}[t-inclusion]\label{def.tinclBudan} Let $C_1$ and $C_2$ be two collections of \tprostxt. We say that $C_1$ is \emph{t-included} in $C_2$, denoted as $C_1 \subseteq_t C_2$, if for any t-profile $\tpro{X} \in C_1$ there exists a t-profile $\tproprima{X} \in C_2$ such that $\tiempo{X} \subseteq \tiempoprima{X}$. \end{Definition} In a \tbaf, given a collection of t-profiles, it is possible to generate a sequence of t-profiles from the existing relations between the arguments involved in them. \begin{Definition}[Sequence of T-profiles]\label{def.secuenceprofiles} Let $\Omega = \langle \ard, \atts,$ $ \supp,\av \rangle$ be a \tbaf. Let $C = \{$ \tpron{X}{1}, \tpron{X}{2}, $\cdots$, \tpron{X}{n} $\}$ be a collection of t-profiles, and let $Args = \cutprofiles{Args}{C}$ be the set of arguments involved in the collection $C$. We say that the t-profiles of $C$ compose a sequence of t-profiles iff for all $i = 1 \dots n-1$ it holds that $(\argum{X}_i,\argum{X}_{i+1}) \in \supp$ or $(\argum{X}_i,\argum{X}_{i+1}) \in \atts$, where $\argum{X}_i, \argum{X}_{i+1} \in Args$, and $\bigcap_{i=1}^{n} \tiempon{X}{i} \neq \emptyset$.
\end{Definition} The following definitions reformulate \baf formalizations considering t-profiles instead of arguments. First, we define the notions of supported and secondary defeat over time in \tbaf. \begin{Definition}[Supported Defeat over Time]~\label{def.timesuppoerteddefeat} Let $\Omega= \timebipolar$ be a \tbaf. Let \tpro{A} and \tpro{B} be two t-profiles. Let \tpron{A}{1} \tpron{A}{2} $\cdots$ \tpron{A}{n-1} \tpron{A}{n} be a sequence of t-profiles, with $n \geq 3$, $\tpron{A}{1} = \tpro{A}$ and \linebreak $\tpron{A}{n} = \tpro{B}$, such that $\forall i = 1 \dots n-2$, $(A_i,A_{i+1}) \in \supp$, and $(A_{n-1},A_n) \in \atts$. The time interval in which \tpro{A} supported defeats \tpro{B}, denoted as \tsuppdefeat{A}{B}, is defined as $\tsuppdefeat{A}{B} = \cap^n_{i=1} \tiempon{A}{i}$. \end{Definition} A sequence reduced to two arguments $\argum{A} \ \atts \ \argum{B}$ (a direct defeat $\argum{A} \to \argum{B}$) is also considered as a supported defeat from $\argum{A}$ to $\argum{B}$. \begin{Definition}[Secondary Defeat over Time]~\label{def.timesecundarydefeat} Let $\Omega= \timebipolar$ be a \tbaf. Let \tpro{A} and \tpro{B} be two t-profiles. Let \tpron{A}{1} \tpron{A}{2} $\cdots$ \tpron{A}{n-1} \tpron{A}{n} be a sequence of t-profiles, with $n \geq 3$, $\tpron{A}{1} = \tpro{A}$ and \linebreak $\tpron{A}{n} = \tpro{B}$, such that $(A_1,A_2) \in \atts$ and $\forall i = 2 \dots n-1$, $(A_i,A_{i+1}) \in \supp$. The time interval in which \tpro{A} secondary defeats \tpro{B}, denoted as \tsecdefeat{A}{B}, is defined as $\tsecdefeat{A}{B} = \cap^n_{i=1} \tiempon{A}{i}$. \end{Definition} \begin{Example} We now introduce an abstract example to clarify the concepts presented so far.
In this case we introduce the notion of time availability into the arguments presented in Example~\ref{ex.bipolarframework}.\\ \noindent Given a \tbaf $\Omega =\timebipolar$, where: \begin{itemize} \item[] $\ard = \{\argum{A}; \argum{B}; \argum{C}; \argum{D}; \argum{E}; \argum{F}; \argum{G}; \argum{H}; \argum{I}; \argum{J} \}$, \item[] $\atts = \{(\argum{B},\argum{A}); (\argum{A},\argum{H}); (\argum{C},\argum{B}); (\argum{G},\argum{I});(\argum{J},\argum{I});(\argum{F},\argum{C})\}$, \item[] $\supp = \{(\argum{D},\argum{C}); (\argum{H},\argum{G});(\argum{I},\argum{F}); (\argum{E},\argum{B})\}$, and \item[] $\av = \{ \tproins{A}{[0-100]}; \tproins{B}{(90-150]}; \tproins{C}{[30-180]}; \tproins{D}{[0-60]};$\linebreak $\tproins{E}{[100-160)}; \tproins{F}{[50-90]}; \tproins{G}{[60-120]}; \tproins{H}{[40-80]};$ \linebreak $\tproins{I}{(70-110]}; \tproins{J}{[0-90)}\}$. \end{itemize} \begin{figure}[ht] \begin{center}\leavevmode \xymatrix @R=0pc @C=0pc{ &{\argu{D}}_{\left[0-60\right]}&&{\argu{C}}_{\left[30-180\right)}&&{\argu{B}}_{\left(90-150\right]}&&{\argu{E}}_{\left[100-160\right)} \\ &{\blacktriangle}\ar@{..>}[rr]&&{\blacktriangle}\ar@{->}[rr]&&{\blacktriangle}\ar@{->}[rddd]&&{\blacktriangle}\ar@{..>}[ll] \\ &&&&&&& \\ &&&&&&& \\ &{\argu{F}}_{\left[50-90\right]}&{\blacktriangle}\ar@{->}[ruuu]&&&{\argu{A}}_{\left[0-100\right]}&{\blacktriangle}\ar@{->}[lddd]& \\ &&&&&&& \\ &&&&&&& \\ &{\blacktriangle}&&{\blacktriangle}\ar@{->}[ll]\ar@{..>}[luuu]&&{\blacktriangle}\ar@{->}[ll]&&{\blacktriangle}\ar@{..>}[ll] \\ &{\argu{J}}_{\left[0-90\right)}&&{\argu{I}}_{\left(70-110\right]}&&{\argu{G}}_{\left[60-120\right]}&&{\argu{H}}_{\left[40-80\right)} \\ } \caption{Bipolar argumentation graph with Timed Availability} \label{Graph.TimeBipolar} \end{center} \vspace*{-15pt} \end{figure} \begin{figure}[ht] \begin{center}\leavevmode \xymatrix @R=0pc @C=0pc{ &&&&&&&&&&&&&&&&&&&\\ \ar@{->}[rrrrrrrrrrrrrrrrrrr]^{time}&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&\\ \mbox{\tiny 
0}\ar@{-}[uuu]&\mbox{\tiny 10}\ar@{-}[uuu]&\mbox{\tiny 20}\ar@{-}[uuu]&\mbox{\tiny 30}\ar@{-}[uuu]&\mbox{\tiny 40}\ar@{-}[uuu]&\mbox{\tiny 50}\ar@{-}[uuu]&\mbox{\tiny 60}\ar@{-}[uuu]&\mbox{\tiny 70}\ar@{-}[uuu]&\mbox{\tiny 80}\ar@{-}[uuu]&\mbox{\tiny 90}\ar@{-}[uuu]&\mbox{\tiny 100}\ar@{-}[uuu]&\mbox{\tiny 110}\ar@{-}[uuu]&\mbox{\tiny 120}\ar@{-}[uuu]&\mbox{\tiny 130}\ar@{-}[uuu]&\mbox{\tiny 140}\ar@{-}[uuu]&\mbox{\tiny 150}\ar@{-}[uuu]&\mbox{\tiny 160}\ar@{-}[uuu]&\mbox{\tiny 170}\ar@{-}[uuu]&\mbox{\tiny 180}\ar@{-}[uuu]\\ &&&&&&&&&&&&&&&&&&&&\\ \ar@{-}[rrrrrrrrrr]_{\argu{A}}&&&&&&&&&&&&&&&&&&&&\\ \ar@{-}[uu]&&&&&&&&&&\ar@{-}[uu]&&&&&&&&&\\ &&&&&&&&&\ar@{-}[rrrrrr]_{\argu{B}}&&&&&&&&&&\\ &&&&&&&&&\ar@{..}[uu]&&&&&&\ar@{-}[uu]&&&&\\ &&&\ar@{-}[rrrrrrrrrrrrrrr]_{\argu{C}}&&&&&&&&&&&&&&&&\\ &&&\ar@{..}[uu]&&&&&&&&&&&&&&&\ar@{-}[uu]&\\ \ar@{-}[rrrrrr]_{\argu{D}}&&&&&&&&&&\ar@{-}[rrrrrr]_{\argu{E}}&&&&&&&&&\\ \ar@{-}[uu]&&&&&&\ar@{-}[uu]&&&&\ar@{-}[uu]&&&&&&\ar@{..}[uu]&&&\\ &&&&&\ar@{-}[rrrr]_{\argu{F}}&&&&&&&&&&&&&&\\ &&&&&\ar@{-}[uu]&&&&\ar@{-}[uu]&&&&&&&&&&\\ &&&&&&\ar@{-}[rrrrrr]_{\argu{G}}&&&&&&&&&&&&&\\ &&&&&&\ar@{-}[uu]&&&&&&\ar@{-}[uu]&&&&&&&\\ &&&&\ar@{-}[rrrr]_{\argu{H}}&&&&&&&&&&&&&&&\\ &&&&\ar@{-}[uu]&&&&\ar@{..}[uu]&&&&&&&&&&&\\ &&&&&&&\ar@{-}[rrrr]_{\argu{I}}&&&&&&&&&&&&\\ &&&&&&&\ar@{..}[uu]&&&&\ar@{-}[uu]&&&&&&&&\\ \ar@{-}[rrrrrrrrr]_{\argu{J}}&&&&&&&&&&&&&&&&&&&\\ \ar@{-}[uu]&&&&&&&&&\ar@{..}[uu]&&&&&&&&&&\\ } \caption{Temporal Distribution} \label{Graph.DistTime} \end{center} \vspace*{-15pt} \end{figure} Next, we analyze the timed bipolar argumentation framework $\Omega$ characterized by the bipolar interaction graph depicted in \emph{Figure~\ref{Graph.TimeBipolar}}, and temporal distribution depicted in \emph{Figure~\ref{Graph.DistTime}}. In particular, we will pay attention to the relations that arise from the leaf nodes of the graph, in order to clarify the \emph{Definitions}~\ref{def.timesuppoerteddefeat} and~\ref{def.timesecundarydefeat}. 
On one hand, \argum{J} supported defeats \argum{I} in the time interval set $\tsuppdefeat{J}{I} = \tiempo{J} \cap \tiempo{I} = \{(70-90)\}$, and \argum{J} secondary defeats \argum{F} in the time interval set $\tsecdefeat{J}{F} = \tiempo{J} \cap \tiempo{I} \cap \tiempo{F} = \{(70-90)\}.$ Also, analyzing the leaf argument \argum{D}, the supported defeat from \argum{D} to \argum{B} is invalidated since the corresponding time interval set is empty: $\tsuppdefeat{D}{B} = \tiempo{D} \cap \tiempo{C} \cap \tiempo{B} = \emptyset.$ On the other hand, from the argument \argum{E}, there exists a supported defeat from \argum{E} to \argum{A} in the time interval set $\tsuppdefeat{E}{A} = \tiempo{E} \cap \tiempo{B} \cap \tiempo{A} = \{[100-100]\}.$ \end{Example} Having defined the defeat relations over time using t-profiles, we are able to adapt the notions of conflict-freeness and safety used in \baf, now considering time. \begin{Definition}[Conflict-free and Safe]\label{Def.ConflictSafetime} Let $\Omega= \langle \ard,$ $\atts,$ $ \supp,\av \rangle$ be a \tbaf, and $S$ be a collection of t-profiles defined for $\Omega$. \begin{itemize} \item[--] $S$ is \emph{Conflict-free} iff $\nexists \tpro{A}, \tpro{B} \in S$ such that $\tsuppdefeat{A}{B} \neq \emptyset$ or $\tsecdefeat{A}{B} \neq \emptyset$. \item[--] $S$ is \emph{Safe} iff $\nexists \tpro{A}, \tpro{B} \in S$ and $\nexists \tpro{C}$, where $\tpro{C}$ is a valid t-profile of $\Omega$, such that $\tsuppdefeat{A}{C} \neq \emptyset$ or $\tsecdefeat{A}{C} \neq \emptyset$, and either there is a sequence of support from \tpro{B} to \tpro{C}, or $\tpro{C} \in S$. \end{itemize} \end{Definition} In addition, another requirement has been considered in~\cite{cayrol2005acceptability}, which concerns only the support relation, namely the \textit{closure under} \supp. This is presented in a timed context as follows.
\begin{Definition}[Closure in T-BAF] Let $\Omega= \langle \ard,$ $\atts,$ $ \supp,\av \rangle$ be a \tbaf, and $S$ be a collection of t-profiles defined for $\Omega$. The set $S$ is closed under \supp iff $\forall \ \tpro{A} \in S$, $\forall \ \tpro{B}$, where $\tpro{B}$ is a valid t-profile of $\Omega$: if $\tpro{A} \ \supp \ \tpro{B}$, where $\tiempo{A} \cap \tiempo{B} \neq \emptyset$, then at least $ \langle\argum{B}, \tiempo{A} \cap \tiempo{B}\rangle \in S$. \end{Definition} \begin{Example} The collection $C_1 = \{\tproins{A}{[0-100]} ; \tproins{C}{[30-50) , (90-180)} ; \linebreak \tproins{D}{[0-60]} ; \tproins{E}{[100-160)} ; \tproins{F}{[50-90]} ;$ $\tproins{G}{[80-120]} ; \linebreak \tproins{J}{[0-90)}\}$ is conflict-free but not safe, since argument \argum{D} supports \argum{C} while \argum{F} attacks \argum{C} in the time interval set $\{[50-60]\}$; in another case, argument \argum{B} is supported by \argum{E} and attacked by \argum{C} in the time interval set $\{[100-150]\}$. The collection $C_2 = \{\tproins{A}{[0-100]} ; \tproins{D}{[0-50)} ; \tproins{C}{[30-50),(90-100), \linebreak (150-180)}; \tproins{E}{(150-160)} ;\tproins{F}{(60-90]} ; \tproins{G}{[80-120]} ; \tproins{J}{[0-90)}\}$ is conflict-free and safe. \end{Example} \begin{Proposition} Let $S$ be a collection of t-profiles: \begin{itemize} \item[--] If $S$ is safe, then any collection $S' \subseteq_t S$ is conflict-free. \item[--] If $S$ is conflict-free and closed under \supp, then $S$ is safe. \end{itemize} \end{Proposition} In the same fashion, the following definitions reformulate \baf notions considering t-profiles instead of arguments. We define the \textit{defense} of an argument over time, taking into account the corresponding supported and secondary defeats. \begin{Definition}[Defense of \argum{A} from \argum{B} by a collection $C$] Let $\Omega= \timebipolar$ be a \tbaf, and $C$ be a conflict-free collection of t-profiles.
Let \tpro{A} and \tpro{B} be two t-profiles, where \argum{B} defeats \argum{A} through a supported or secondary defeat, such that $\tsecdefeat{B}{A} \neq \emptyset$ and/or $\tsuppdefeat{B}{A} \neq \emptyset$. The defense t-profile of \argum{A} from \argum{B} with respect to $C$, denoted as \tdefence{A}{B}, is defined as follows: \begin{center} $\tdefence{A}{B} =_{\mathit{def}} \tiempobasic{A} \cap \left( \tsupdefence{A}{B} \cup \tsecdefence{A}{B}\right) $ \end{center} \noindent where $\tsupdefence{A}{B} =_{\mathit{def}} \bigcup_{\tiny \argum{C}\in{\{\argum{X}\,|\,\tpro{X} \in C,\,\tsuppdefeat{X}{B} \neq \emptyset\}}} \tsuppdefeat{C}{B} $\\ and $\tsecdefence{A}{B} =_{\mathit{def}} \bigcup_{\tiny \argum{C}\in{\{\argum{X}\,|\,\tpro{X} \in C,\,\tsecdefeat{X}{B}\neq \emptyset\}}} \tsecdefeat{C}{B}$. \end{Definition} Intuitively, \argum{A} is defended from the attack of \argum{B} (a) in those intervals where \argum{B} is not available, and (b) in those intervals where the attacker \argum{B} is available but is in turn attacked by an available argument \argum{C} in the collection $C$. \begin{Definition}[Acceptable t-profile of \argum{A} w.r.t. $C$] Let $\Omega= \timebipolar$ be a \tbaf. The acceptable t-profile for \argum{A} w.r.t. $C$, denoted as \tdefences{A}, is defined as follows: \begin{center} $\tdefences{A} =_{\mathit{def}}\bigcap_{\tiny \argum{B}\in{\{\argum{X}\,|\,\tsuppdefeat{X}{A} \neq \emptyset \ \vee \ \tsecdefeat{X}{A} \neq \emptyset\}}}$\\ $ \left( \tiempobasic{A} \setminus \left( \tsuppdefeat{B}{A} \cup \tsecdefeat{B}{A}\right) \right) \cup \tdefence{A}{B} $ \end{center} where \tdefence{A}{B} is the time interval where \argum{A} is defended from its attacker \argum{B} by $C$. Then, the intersection of all time intervals in which \argum{A} is defended from each of its attackers by the collection $C$ is the time interval where \argum{A} is available and acceptable with respect to $C$.
\end{Definition} \begin{Example} In this example, we show how the acceptable t-profile of \argum{I} w.r.t. the collection $C_3 = \{\tproins{A}{[0-100]}\}$ is calculated. $\tdefencesBis{I}{C_3} = \left( \tiempobasic{I} \setminus\left( \tsuppdefeat{G}{I} \cup \tsuppdefeat{H}{I}\right) \right) \cup \left( \tdefenceBis{A}{G}{C_3} \cup \tdefenceBis{A}{H}{C_3}\right) = \left( (70-110] \setminus \left( (70-110] \cup (70-80) \right) \right) \cup \left( (70-110] \cup (70-80) \right) = (70-110]$ \end{Example} Next, we adapt the three admissibility definitions proposed by Cayrol and Lagasquie-Schiex in~\cite{cayrol2005acceptability}, using the timed versions of conflict-freeness and safety. \begin{Definition}[Admissibility in T-BAF]\label{Def.admissibilityovertime} Let $\Omega= \langle \ard, \atts, $ $\supp,\av \rangle$ be a \tbaf. Let $C$ be a collection of t-profiles. The admissibility of a collection $C$ is defined as follows: \begin{itemize}\small\itemsep 4pt \item[--] $C$ is td-admissible if $C$ is conflict-free and defends all its elements. \item[--] $C$ is ts-admissible if $C$ is safe and defends all its elements. \item[--] $C$ is tc-admissible if $C$ is conflict-free, closed under \supp, and defends all its elements. \end{itemize} \end{Definition} \begin{Example} The collection $C_4 = \{\tproins{A}{[0-100)} ; \tproins{C}{[30-50) , (70-180)} ;$\linebreak $\tproins{D}{[0-60]} ; \tproins{E}{[100-160)} ; \tproins{F}{[50-70]} ;$ $\tproins{G}{(80-120]} ;$\linebreak $\tproins{J}{[0-90)}\}$ is td-admissible since it is conflict-free and defends all its elements over time. However, $C_4$ is not a ts-admissible or tc-admissible collection of t-profiles. $C_5 = \{\tproins{A}{[0-100]} ; \tproins{C}{[30-50) , (70-180)} ; \tproins{D}{[0-60]} ;$ \linebreak$\tproins{E}{(150-160)} ; \tproins{F}{[50-70]} ;\tproins{G}{(80-120]} ; \tproins{J}{[0-90)}\}$ is ts-admissible because it is safe and defends all its elements. In addition, $C_5$ is closed under \supp, so it is tc-admissible too.
\end{Example} \begin{Proposition} Let $\Omega= \langle \ard,$ $ \atts, \supp,\av \rangle$ be a \tbaf; then: \begin{itemize} \item[--] A td-admissible extension is t-included in a ts-admissible extension. \item[--] A ts-admissible extension is t-included in a tc-admissible extension. \end{itemize} \end{Proposition} Now we can define the classic argument semantics for \tbaf. First, we present the stable extension with a dynamic intuition. Then we introduce an adapted, timed version of the preferred extension. Each one enjoys a particular property based on the admissibility notion. \begin{Definition}[Stable extension over Time]\label{Def.StableBipolartime} Let $\Omega= \langle \ard,$ $ \atts, \supp,\av \rangle$ be a \tbaf. Let $C$ be a collection of t-profiles. $C$ is a {\em t-stable extension} of $\Omega$ if $C$ is conflict-free and every $\tpro{A} \notin C$ verifies that $\tiempo{A} \ \setminus \ \bigcup_{\tpro{B} \in C}\left(\tsecdefeat{B}{A} \ \cup \ \tsuppdefeat{B}{A}\right) = \emptyset$. \end{Definition} \begin{Definition}[Preferred extension over Time]\label{Def.PreferredBipolartime} Let $\Omega= \timebipolar$ be a \tbaf. Let $C$ be a collection of t-profiles. $C$ is a td-preferred (resp. ts-preferred, tc-preferred) extension if $C$ is maximal (for set-t-inclusion) among the td-admissible (resp. ts-admissible, tc-admissible) collections. \end{Definition} The relations between t-preferred extensions and t-stable extensions are stated in the following proposition. Note that these are consistent with the classical \baf. \begin{Proposition} Let $\Omega= \langle \ard,$ $ \atts, \supp,\av \rangle$ be a \tbaf; then: \begin{itemize} \item[-] A td-preferred extension is t-included in a ts-preferred extension. \item[-] A ts-preferred extension is t-included in a tc-preferred extension. \item[-] A ts-preferred extension closed under \supp is also tc-preferred. \item[-] A td-preferred extension is t-included in a t-stable extension.
\item[-] A ts-preferred extension is t-included in a safe t-stable extension. \item[-] A tc-preferred extension is t-included in a safe t-stable extension. \end{itemize} \end{Proposition} Given a \tbaf $\Omega= \timebipolar$ and an argument $\argum{A} \in \ard$, we will use $t\mbox{-}PR_d(\argum{A})$, $t\mbox{-}PR_s(\argum{A})$, $t\mbox{-}PR_c(\argum{A})$ and $t\mbox{-}ES(\argum{A})$ to denote the set of intervals on which \argum{A} is acceptable in $\Omega$ according to the td-preferred, ts-preferred, tc-preferred and t-stable semantics respectively, again using the skeptical approach where appropriate. The following property establishes a connection between acceptability in our extended temporal framework T-BAF and acceptability in Cayrol and Lagasquie-Schiex's frameworks. \begin{Theorem} Let $\Omega= \timebipolar$ be a \tbaf and let $\alpha$ represent a point in time. Let $\Theta'_{\alpha} = \langle \ard'_{\alpha}, \atts^{\alpha}, \supp^{\alpha} \rangle$ be a bipolar abstract framework obtained from $\Omega$ in the following way: $\ard'_{\alpha} = \{\argum{A} \in \ard \mid \alpha \in \tiempo{A}\}$, $\atts^{\alpha} = \{(\argum{A}, \argum{B})\in \atts \mid \alpha \in \timedefeat{A}{B}\}$ and $\supp^{\alpha} = \{(\argum{A}, \argum{B})\in \supp \mid \alpha \in \timesupport{A}{B}\}$. Let $C$ be a collection of t-profiles in $\Omega$, and $C'_{\alpha} = \{\argum{A} \mid \tdefencese{A} \in C$ and $\alpha \in \tdefencese{A}\}$. It holds that, if $C$ is a td-preferred extension (resp. ts-preferred, tc-preferred, and t-stable) w.r.t. $\Omega$, then $C'_{\alpha}$ is a d-preferred extension (resp. s-preferred, c-preferred, and stable) w.r.t. $\Theta'_{\alpha}$. \end{Theorem} Intuitively, the BAF $\Theta'_{\alpha}$ represents a snapshot of the T-BAF framework $\Omega$ at the time point $\alpha$, where the arguments and attacks in $\Theta'_{\alpha}$ are those that are available at the time point $\alpha$ in $\Omega$. Then, this theorem states that a td-preferred extension (resp.
ts-preferred, tc-preferred, and t-stable) $C$ for T-BAF at the time point $\alpha$ coincides with a d-preferred extension $C'_{\alpha}$ (resp. s-preferred, c-preferred, and stable) of $\Theta'_{\alpha}$.\\ In addition, we formally establish that two arguments related by an attack path cannot coincide in time when both belong to the same extension under a given semantics. \begin{Proposition} Let $\Omega= \langle \ard,$ $ \atts, \supp,\av \rangle$ be a \tbaf, and \tpro{A} and \tpro{B} be two t-profiles, where \argum{B} defeats \argum{A} through support or secondary attacks; then it holds that: \begin{itemize} \item[--] $t\mbox{-}ES(\argum{A}) \cap t\mbox{-}ES(\argum{B}) = \emptyset$; \item[--] $t\mbox{-}PR_d(\argum{A}) \cap t\mbox{-}PR_d(\argum{B}) = \emptyset$; \item[--] $t\mbox{-}PR_s(\argum{A}) \cap t\mbox{-}PR_s(\argum{B}) = \emptyset$; and \item[--] $t\mbox{-}PR_c(\argum{A}) \cap t\mbox{-}PR_c(\argum{B}) = \emptyset$. \end{itemize} \end{Proposition} \begin{Example} In our example, the collection $C_4$ is a t-stable extension, since there exists a defeater for each t-profile that does not belong to $C_4$. In addition, $C_4$ is a td-preferred extension since it is the maximal td-admissible collection of t-profiles that defends all of its elements. On the other hand, $C_5$ is a ts-preferred extension because it is the maximal ts-admissible collection of t-profiles that defends all of its elements. Also, $C_5$ is closed under \supp; hence, it is a tc-preferred extension. \end{Example} It is worthwhile to notice that when time becomes irrelevant, \ie\ it is reduced to a particular instant or all arguments are available in exactly the same periods of time, the behavior of $\tbaf$ is equivalent to that of the original $\baf$. \section{Application Example}\label{sec.exampletaf} As stated before, the aim of this work is to increase the representational capability of \emph{BAF}s by adding a temporal dimension, towards a model of dynamic argumentation discussion.
To illustrate the usefulness of this direction in the context of agent and multi-agent systems, we discuss an example where the formalism provides a better characterization of the overall situation.\smallskip \emph{Consider the following scenario where an agent is looking for an apartment to rent. As expected, while considering a candidate she analyzes different arguments for and against renting it. These arguments are subject to availability or relevance in time. The task is to determine in the present (time 0) if the property is a good option in the future, having 150 days to make such a decision. The arguments and their availability intervals follow.} \begin{itemize}\itemsep 0pt \item[\argum{A}] \textit{She should rent it; the apartment has a good location since it is near her work.}$\small{[0-150]}$ \item[\argum{B}] \textit{The apartment is located in a well-illuminated and safe area.}$\small{[0-150]}$ \item[\argum{C}] \textit{The property is in a quiet area, because most of the neighbors are retirees and peaceful people.}$\small{[0-150]}$ \item[\argum{D}] \textit{The apartment is small; therefore, she should not rent it.}$\small{[0-150]}$ \item[\argum{E}] \textit{Despite the apartment size, the spaces are well distributed.}$\small{[0-150]}$ \item[\argum{F}] \textit{She should not rent it, since the apartment seems to have humidity problems.}$\small{[0-150]}$ \item[\argum{G}] \textit{There are rumours that a nightclub will open in the area in 50 days, so the area will not be quiet anymore.}$\small{[50-150]}$ \item[\argum{H}] \textit{The humidity problems are difficult and costly to resolve.}$\small{[0-150]}$ \item[\argum{I}] \textit{Laws forbid the opening of a nightclub in this urban area, but this will be revised in the next Town Hall meeting.}$\small{[0-80]}$ \item[\argum{J}] \textit{The person responsible for maintenance is committed to fixing the humidity problem at a low cost.}$\small{[0-150]}$ \end{itemize} Next, we instantiate a \tbaf $\Omega
=\timebipolar$ in order to represent and analyze this example, where: \begin{itemize} \item[] $\ard = \{\argum{A}; \argum{B}; \argum{C}; \argum{D}; \argum{E}; \argum{F}; \argum{G}; \argum{H}; \argum{I}; \argum{J} \}$, \item[] $\atts = \{(\argum{B},\argum{A}); (\argum{C},\argum{A}); (\argum{H},\argum{F})\}$, \item[] $\supp = \{(\argum{E},\argum{D}); (\argum{D},\argum{A});(\argum{I},\argum{G}); (\argum{G},\argum{C});(\argum{F},\argum{A}); (\argum{J},\argum{H})\}$, and \item[] $\av = \{ \tproins{A}{[0-150]}; \tproins{B}{[0-150]}; \tproins{C}{[0-150]}; \tproins{D}{[0-150]};$\linebreak $\tproins{E}{[0-150]}; \tproins{F}{[0-150]}; \tproins{G}{[50-150]}; \tproins{H}{[0-150]};$ \linebreak $\tproins{I}{[0-80]}; \tproins{J}{[0-150]}\}$. \end{itemize} \begin{figure}[ht] \begin{center}\leavevmode \xymatrix @R=0pc @C=0pc{ &{\argu{B}}_{\left[0-150\right]}&&{\argu{I}}_{\left[0-80\right]}&&{\argu{G}}_{\left[50-150\right]}&&{\argu{C}}_{\left[0-150\right]} \\ &{\blacktriangle}\ar@{..>}[ddddrrrrr]&&{\blacktriangle}\ar@{->}[rr]&&{\blacktriangle}\ar@{->}[rr]&&{\blacktriangle}\ar@{..>}[ddddl] \\ &&&&&&& \\ &&&&&&& \\ &&&&&&& \\ &{\argu{D}}_{\left[0-150\right]}&{\blacktriangle}\ar@{->}[rrrr]&&&&{\blacktriangle}&{\argu{A}}_{\left[0-150\right]} \\ &&&&&&& \\ &&&&&&& \\ &{\blacktriangle}\ar@{->}[uuur]&&{\blacktriangle}\ar@{->}[rr]&&{\blacktriangle}\ar@{..>}[rr]&&{\blacktriangle}\ar@{->}[uuul] \\ &{\argu{E}}_{\left[0-150\right]}&&{\argu{J}}_{\left[0-150\right]}&&{\argu{H}}_{\left[0-150\right]}&&{\argu{F}}_{\left[0-150\right]} \\ } \caption{Bipolar argumentation graph with Timed Availability} \label{Graph.TimeBipolar2} \end{center} \vspace*{-15pt} \end{figure} \begin{figure}[ht] \begin{center}\leavevmode \xymatrix @R=0pc @C=0pc{ &&&&&&&&&&&&&&&&&&&\\ \ar@{->}[rrrrrrrrrrrrrrrrrrr]^{time}&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&\\ \mbox{\tiny 0}\ar@{-}[uuu]&\mbox{\tiny 10}\ar@{-}[uuu]&\mbox{\tiny 20}\ar@{-}[uuu]&\mbox{\tiny 30}\ar@{-}[uuu]&\mbox{\tiny 40}\ar@{-}[uuu]&\mbox{\tiny 
50}\ar@{-}[uuu]&\mbox{\tiny 60}\ar@{-}[uuu]&\mbox{\tiny 70}\ar@{-}[uuu]&\mbox{\tiny 80}\ar@{-}[uuu]&\mbox{\tiny 90}\ar@{-}[uuu]&\mbox{\tiny 100}\ar@{-}[uuu]&\mbox{\tiny 110}\ar@{-}[uuu]&\mbox{\tiny 120}\ar@{-}[uuu]&\mbox{\tiny 130}\ar@{-}[uuu]&\mbox{\tiny 140}\ar@{-}[uuu]&\mbox{\tiny 150}\ar@{-}[uuu]&\mbox{\tiny 160}\ar@{-}[uuu]&\mbox{\tiny 170}\ar@{-}[uuu]&\mbox{\tiny 180}\ar@{-}[uuu]\\ &&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&\\ \ar@{-}[rrrrrrrrrrrrrrr]_{\argu{A}, \ \argu{B}, \ \argu{C}, \ \argu{D}, \ \argu{E}, \ \argu{F}, \ \argu{H}, \ \argu{J}}&&&&&&&&&&&&&&&&&&&&\\ \ar@{-}[uu]&&&&&&&&&&&&&&&\ar@{-}[uu]&&&&\\ &&&&&&&&&&&&&&&&&&&\\ &&&&&\ar@{-}[rrrrrrrrrr]_{\argu{G}}&&&&&&&&&&&&&&\\ &&&&&\ar@{-}[uu]&&&&&&&&&&\ar@{-}[uu]&&&&\\ &&&&&&&&&&&&&&&&&&&\\ \ar@{-}[rrrrrrrr]_{\argu{I}}&&&&&&&&&&&&&&&&&&&\\ \ar@{-}[uu]&&&&&&&&\ar@{-}[uu]&&&&&&&&&&&\\ } \caption{Temporal Distribution} \label{Graph.DistTime2} \end{center} \vspace*{-15pt} \end{figure} \pagebreak Let's analyze a couple of collections of t-profiles in order to determine which of them are conflict-free and safe. On the one hand, we have the collection {\footnotesize $C_1 = \{\tproins{C}{[0-80]} ; \linebreak \tproins{G}{(80-150]} ; \tproins{E}{[0-80)} ; \tproins{F}{[0-150]} \}$} which is conflict-free but not safe, since the argument \argum{C} supports \argum{A} and \argum{F} attacks \argum{A} in the time interval set {\footnotesize$\{[0-80]\}$}. On the other hand, the collection {\footnotesize$C_2 = \{\tproins{C}{[0-80]} ; \tproins{G}{(80-150]} ; \tproins{E}{[0-80)} ; \linebreak \tproins{F}{(80-150]} ; \tproins{A}{[0-80]} \}$} is conflict-free and safe, since the argument \argum{C} that supports \argum{A} is available when \argum{F} (which attacks \argum{A}) is not (\argum{C} is available in the time interval {\footnotesize$[0-80]$}, while \argum{F} is available in the interval {\footnotesize$(80-150]$}). Let's determine the collection of t-profiles that represents a t-stable extension.
In this case, the collection {\footnotesize$C_3 = \{\tproins{C}{[0-80]} ; \tproins{G}{(80-150]} ; \tproins{E}{[0-150]} ; \linebreak \tproins{A}{[0-80]} ; \tproins{I}{[0-80]} ; \tproins{J}{[0-150]} ; \tproins{B}{[0-150]} \}$} is a safe t-stable extension, since it is conflict-free and attacks all the t-profiles not considered in $C_3$. In this case these t-profiles are: \\ $ \begin{array}{ll} \tproins{C}{(80-150]} & \mbox{support defeated by } \tproins{G}{(80-150]} \\ \tproins{G}{[50-80]} & \mbox{support defeated by } \tproins{I}{[0-80]} \\ \tproins{D}{[0-50]} & \mbox{support defeated by } \tproins{E}{[0-150]} \\ \tproins{H}{[0-150]} & \mbox{support defeated by } \tproins{J}{[0-150]} \\ \tproins{F}{[0-150]} & \mbox{secondary defeated by } \tproins{J}{[0-150]} \mbox{ and } \tproins{H}{[0-150]} \\ \tproins{A}{(80-150]} & \mbox{secondary defeated by } \tproins{G}{(80-150]} \mbox{ and } \tproins{C}{(80-150]} \\ \end{array}$\\ \\ \ Finally, using the results of \emph{Proposition}~3, which relates the td-preferred, ts-preferred and tc-preferred extensions by t-inclusion, we can conclude that the collection of t-profiles $C_3$, which is a safe t-stable extension, is also td-preferred, ts-preferred and tc-preferred. We only need to show that it is td-preferred. This means that $C_3$ should be maximal and td-admissible (\ie\ conflict-free and defending all its elements). We have already shown that $C_3$ is conflict-free, and by attacking all the t-profiles that do not belong to $C_3$ we can ensure its maximality in acceptability. In this particular case, $C_3$ is a safe t-stable, td-preferred, ts-preferred and tc-preferred extension. This situation occurs because the bipolar argumentation framework $\Omega$ does not include cycles. The graph representing our example is acyclic \textit{in every moment of time}.
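The interval checks carried out in this example can be mechanized. The following Python sketch is only an illustration (the names `interval` and `conflict_free`, and the toy arguments X and Y, are ours and not part of the formalism): availability t-profiles are modeled as finite sets of integer time points, and a collection is conflict-free over time when no attacking pair coincides in availability.

```python
# A hedged sketch: t-profiles as finite sets of integer time points.
# An attack (X, Y) yields a defeat only at instants where both X and Y
# are available, so timed conflict-freeness amounts to checking that
# no attacking pair of the collection overlaps in availability.

def interval(a, b):
    """Closed integer interval [a-b] as a set of time points."""
    return set(range(a, b + 1))

def conflict_free(availability, attacks):
    """True when no attack pair of the collection coincides in time."""
    return all(
        availability[x].isdisjoint(availability[y])
        for (x, y) in attacks
        if x in availability and y in availability
    )

# Toy instance (hypothetical arguments): X attacks Y, but their
# availability intervals [0-80] and [81-150] are disjoint.
avail = {"X": interval(0, 80), "Y": interval(81, 150)}
print(conflict_free(avail, [("X", "Y")]))  # True
```

Safety could be checked analogously by also testing, for every supporter of an argument, its temporal overlap with that argument's attackers.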
As we see in the abstract example, the difference is produced by cycles of supports and attacks, as was proved elsewhere for classical, non-bipolar argumentation frameworks. \section{Related Work}\label{sec.RelatedWork} As discussed in the introduction, reasoning about time is an important concern in commonsense reasoning. Thus, its consideration becomes relevant when modeling argumentation capabilities of intelligent agents~\cite{rahwan2009argumentation}. There have been recent advances in modeling time in argumentation frameworks. Mann and Hunter~\cite{MannHunterComma} propose a calculus for representing temporal knowledge, which is defined in terms of propositional logic. The use of this calculus is then considered with respect to argumentation, where an argument is defined in the standard way: an argument is a pair constituted by a minimally consistent subset of a database together with the conclusion it entails. Briefly speaking, the authors discuss a way of encoding temporal information into propositional logic, and examine its impact on a coherence argumentation system. The central idea is that arguments draw heavily on temporal knowledge due to its day-to-day nature (what is true today may well not be true tomorrow), as well as on information concerning periods of time. In order to represent the time variable, the authors propose a calculus built upon the ideas of Allen's interval logic~\cite{Allen83} using abstract intervals and, in keeping with the desire for a practical system, they restrict the system to using specific timepoints. This work is related to the works proposed by Hunter in~\cite{hunter2001ramification} and Augusto and Simari in~\cite{AugustoSimari01}.
Hunter's system is based on maximally consistent subsets of the knowledge base, which are now not normally regarded as representative of arguments, while Augusto and Simari's contribution is based upon a many-sorted logic with defeasible formulas, and hence falls into a different category of argumentation; moreover, the use of many-sorted logic raises similar concerns to those of using first-order logic. Barringer \emph{et al.} present two important approaches that share elements with our research. In the first one~\cite{barringer2010modal}, the authors present a temporal argumentation approach, where they extend traditional Dung's networks using temporal and modal language formulas to represent the structure of arguments. To do that, they use the concept of usability of arguments, defined as a function that determines whether an argument is usable in a given context, a status that changes over time as the dynamic context changes. In addition, they improve the representational capability of the formalism by exploiting the ability of modal logic to represent accessibility between different argumentative networks; in this way, the modal operator is treated as a fibring operator to obtain a result from another argumentation network context, which is then applied to the local argumentation network context. In the second~\cite{barringer2012temporal}, they study the relationships of support and attack between arguments through a numerical argumentation network, where both the strength of the arguments and the strengths carried by the attacks and supports between them are considered. This work pays close attention to the relations of support and attack between arguments, and to the treatment of cycles in an argumentative network. Furthermore, they offer different motivations for modeling domains in which the strengths can be time-dependent, presenting a brief explanation of how to deal with this issue in a numerical argumentation network.
Finally, Godo and Pardo in~\cite{pardo2011t} and Bud\'an~\emph{et al.} in~\cite{budan2012approach} explored the possibility of expressing the uncertainty or reliability of temporal rules and events, and how these features may change over time. In the first one~\cite{pardo2011t}, the authors propose an argumentation-based defeasible logic, called \emph{t-DeLP}, that focuses on forward temporal reasoning for causal inference. They extend the language of the \emph{DeLP} logical framework by associating temporal parameters to literals. As usual, a dialectical procedure determines which arguments are undefeated, and hence which literals are warranted, or defeasibly follow from the program. \emph{t-DeLP}, though, slightly differs from \emph{DeLP} in order to accommodate temporal aspects, like the persistence of facts. The output of a t-DeLP program is a set of warranted literals, which is first shown to be non-contradictory and closed under sub-arguments. This basic framework is then modified to deal with programs whose strict rules encode mutex constraints. The resulting framework is shown to satisfy stronger logical properties like indirect consistency and closure. In the second~\cite{budan2012approach}, the authors present a different extension of \emph{DeLP} introducing the possibility of formalizing arguments and the corresponding defeat relations among them by combining both temporal criteria and belief strength criteria. This extension is based on the \emph{Extended Temporal Argumentation Framework} (\emph{E-TAF})~\cite{cobo2010admissibility,budan2015modeling}, which has the capability of modeling different time-dependent properties associated with arguments. Briefly speaking, this extension of \emph{DeLP} incorporates the representation of temporal availability and strength factors of arguments varying over time, associating these characteristics with the DeLP language elements.
The strength factors are used to model different, more concrete measures such as reliability, priorities, etc.; this information is propagated to the level of arguments, and then the \emph{E-TAF} definitions are applied, establishing their temporal acceptability. Analyzing these research lines, it is quite difficult to establish a proper comparison between the works mentioned above and \tbaf. This complication arises from the different levels of abstraction used: some of them are not abstract formalisms, while others use different temporal representations (based on events or modalities). There is a clear relation with non-temporal approaches~\cite{CohenGGS14}; these aspects are explored throughout the paper. \section{Conclusions and Future Work}\label{sec.conclu} In this work we expanded temporal argumentation frameworks (\emph{TAF}) to include an argument support relation, as in classical bipolar argumentation frameworks. In this formalization, arguments are only valid for consideration (\textit{available} or \textit{relevant}) in a given period of time, which is defined for every individual argument. Hence, the support and defeat relations are sporadic, and proper argument semantics are defined. We bring admissibility-based extensions for bipolar scenarios to the context of timed argumentation, providing new formalizations of argument semantics with time involved. Future work has several directions. We view temporal information as an additional dimension that can be applied to several argumentation models. We are interested in the formalization of other timed argument relations, especially the ones defined in the backing-undercutting argumentation framework of \cite{CohenGS11}. Also, we will investigate how the approach could be developed by considering a timed version of Caminada's labelling, where an argument has a particular label for a specified period of time.
Besides the interval-based semantics defined in the present work, we are also interested in new integrations of timed notions in argumentation, such as temporal modal logic~\cite{gabbay2003many,barringer2012temporal}. We are developing a framework combining the representation capabilities of \baf with an algebra of argumentation labels \cite{budanWL4AI13} to represent timed features of arguments in dynamic domains. \bibliographystyle{elsarticle-num}
\section{Introduction} As is well known, the ordinary Bernoulli polynomials $B_{n}(x)$ and Euler polynomials $E_{n}(x)$ are respectively defined by \begin{equation}\label{1} \frac{t}{e^{t}-1}e^{xt}=\sum_{n=0}^{\infty}B_{n}(x)\frac{t^{n}}{n!}, \end{equation} and \begin{equation}\label{2} \frac{2}{e^{t}+1}e^{xt}=\sum_{n=0}^{\infty}E_{n}(x) \frac{t^{n}}{n!} ,\qquad \textrm{(see [1-20]).} \end{equation} For any nonzero $\lambda\in\mathbb{R}$, the degenerate exponential function is defined by \begin{equation}\label{3} e_{\lambda}^{x}(t)=(1+\lambda t)^{\frac{x}{\lambda}},\quad e_{\lambda}(t)=e_{\lambda}^{1}(t),\quad (\mathrm{see}\ [8]). \end{equation} In [1,2], Carlitz considered the degenerate Bernoulli and Euler polynomials which are given by \begin{equation}\label{4} \frac{t}{e_{\lambda}(t)-1}e_{\lambda}^{x}(t)=\frac{t}{(1+\lambda t)^{\frac{1}{\lambda}}-1}(1+\lambda t)^{\frac{x}{\lambda}}=\sum_{n=0}^{\infty}\beta_{n,\lambda}(x) \frac{t^{n}}{n!} \end{equation} and \begin{equation}\label{5} \frac{2}{e_{\lambda}(t)+1}e_{\lambda}^{x}(t)=\frac{2}{(1+\lambda t)^{\frac{1}{\lambda}}+1}(1+\lambda t)^{\frac{x}{\lambda}}=\sum_{n=0}^{\infty}\mathcal{E}_{n,\lambda}(x) \frac{t^{n}}{n!}. \end{equation} Note that \begin{displaymath} \lim_{\lambda\rightarrow 0}\beta_{n,\lambda}(x)=B_{n}(x),\quad \lim_{\lambda\rightarrow 0}\mathcal{E}_{n,\lambda}(x)=E_{n}(x). \end{displaymath} The falling factorial sequence is defined as \begin{displaymath} (x)_{0}=1,\quad (x)_{n}=x(x-1)\cdots(x-n+1),\ (n\ge 1),\quad (\mathrm{see\ [17]}). \end{displaymath} The Stirling numbers of the first kind are defined by the coefficients in the expansion of $(x)_n$ in terms of powers of $x$ as follows: \begin{equation}\label{6} (x)_{n}=\sum_{l=0}^{n}S^{(1)}(n,l)x^{l},\quad (\mathrm{see}\ [7,11,17]).
\end{equation} The Stirling numbers of the second kind are defined by \begin{equation}\label{7} x^{n}=\sum_{l=0}^{n}S^{(2)}(n,l)(x)_{l},\ (n\ge 0),\ (\mathrm{see}\ [9,10,17]). \end{equation} In [9], the degenerate Stirling numbers of the second kind are defined by the generating function \begin{equation}\label{8} \frac{1}{k!}\big(e_{\lambda}(t)-1\big)^{k}=\sum_{n=k}^{\infty}S_{\lambda}^{(2)}(n,k)\frac{t^{n}}{n!},\ (k\ge 0). \end{equation} Note that $\displaystyle\lim_{\lambda\rightarrow 0}S_{\lambda}^{(2)}(n,k)=S^{(2)}(n,k),\ (n,k\ge 0)\displaystyle$. \\ ~~\\ Recently, Masjed-Jamei, Beyki and Koepf introduced the new type Euler polynomials which are given by \begin{equation}\label{9} \frac{2e^{pt}}{e^{t}+1}\cos qt=\sum_{n=0}^{\infty}E_{n}^{(c)}(p,q)\frac{t^{n}}{n!}, \end{equation} \begin{equation}\label{10} \frac{2e^{pt}}{e^{t}+1}\sin qt=\sum_{n=0}^{\infty}E_{n}^{(s)}(p,q)\frac{t^{n}}{n!},\quad (\mathrm{see}\ [15]). \end{equation} They also considered the cosine-polynomials and sine-polynomials defined by \begin{equation}\label{11} e^{pt}\cos qt=\sum_{n=0}^{\infty}C_{n}(p,q)\frac{t^{n}}{n!}, \end{equation} and \begin{equation}\label{12} e^{pt}\sin qt=\sum_{n=0}^{\infty}S_{n}(p,q)\frac{t^{n}}{n!},\quad (\mathrm{see}\ [15]). \end{equation} In [15], the authors deduced many interesting identities and properties for those polynomials. \\ ~~\\ It is well known that \begin{equation}\label{13} e^{ix}=\cos x+i\sin x,\quad\mathrm{where}\,\,x \in \mathbb{R},\ i=\sqrt{-1},\quad (\mathrm{see}\ [20]). \end{equation} From \eqref{1} and \eqref{2}, we note that \begin{equation}\label{14} \frac{t}{e^{t}-1}e^{(x+iy)t}=\sum_{n=0}^{\infty}B_{n}(x+iy)\frac{t^{n}}{n!}, \end{equation} and \begin{equation}\label{15} \frac{2}{e^{t}+1}e^{(x+iy)t}=\sum_{n=0}^{\infty}E_{n}(x+iy)\frac{t^{n}}{n!}.
\end{equation} By \eqref{14} and \eqref{15}, we get \begin{align}\label{16} \frac{t}{e^{t}-1}e^{xt}\cos yt &=\sum_{n=0}^{\infty}\frac{B_{n}(x+iy)+B_{n}(x-iy)}{2}\frac{t^{n}}{n!}=\sum_{n=0}^{\infty}B_n^{(c)}(x,y)\frac{t^n}{n!}, \\ \frac{t}{e^{t}-1}e^{xt}\sin yt &=\sum_{n=0}^{\infty}\frac{B_{n}(x+iy)-B_{n}(x-iy)}{2i}\frac{t^{n}}{n!}=\sum_{n=0}^{\infty}B_n^{(s)}(x,y)\frac{t^n}{n!},\nonumber\\ \frac{2}{e^{t}+1}e^{xt}\cos yt &=\sum_{n=0}^{\infty}\frac{E_{n}(x+iy)+E_{n}(x-iy)}{2}\frac{t^{n}}{n!}=\sum_{n=0}^{\infty}E_n^{(c)}(x,y)\frac{t^n}{n!},\nonumber \end{align} and \begin{displaymath} \quad\frac{2}{e^{t}+1}e^{xt}\sin yt =\sum_{n=0}^{\infty}\frac{E_{n}(x+iy)-E_{n}(x-iy)}{2i}\frac{t^{n}}{n!}=\sum_{n=0}^{\infty}E_n^{(s)}(x,y)\frac{t^n}{n!},\,(\mathrm{see}\ [12]). \end{displaymath} \vspace{0.1in} In view of \eqref{4} and \eqref{5}, we study the degenerate Bernoulli and Euler polynomials with complex variable and investigate some identities and properties for those polynomials. The outline of this paper is as follows. In Section 1, we will briefly recall the degenerate Bernoulli and Euler polynomials of Carlitz and the degenerate Stirling numbers of the second kind. Then we will introduce the so-called new type Euler polynomials, and the cosine-polynomials and sine-polynomials recently introduced in [15]. Then we indicate that the new type Euler polynomials and the corresponding Bernoulli polynomials can be expressed by considering Euler and Bernoulli polynomials of complex variable and treating the real and imaginary parts separately. In Section 2, the degenerate cosine-polynomials and degenerate sine-polynomials were introduced and their explicit expressions were derived. The degenerate cosine-Euler polynomials and degenerate sine-Euler polynomials were expressed in terms of degenerate cosine-polynomials and degenerate sine-polynomials and vice versa.
Further, some reflection identities were found for the degenerate cosine-Euler polynomials and degenerate sine-Euler polynomials. In Section 3, the degenerate cosine-Bernoulli polynomials and degenerate sine-Bernoulli polynomials were introduced. They were expressed in terms of degenerate cosine-polynomials and degenerate sine-polynomials and vice versa. Reflection symmetries were deduced for the degenerate cosine-Bernoulli polynomials and degenerate sine-Bernoulli polynomials. \section{Degenerate Euler polynomials of complex variable} Here we will consider the degenerate Euler polynomials of complex variable and, by treating the real and imaginary parts separately, introduce the degenerate cosine-Euler polynomials and degenerate sine-Euler polynomials. They are degenerate versions of the new type Euler polynomials studied in [15]. The degenerate sine and cosine functions are defined by \begin{equation}\label{17} \cos_{\lambda}t=\frac{e_{\lambda}^{i}(t)+e_{\lambda}^{-i}(t)}{2},\quad \sin_{\lambda}t= \frac{e_{\lambda}^{i}(t)-e_{\lambda}^{-i}(t)}{2i}. \end{equation} From \eqref{13}, we note that \begin{displaymath} \lim_{\lambda\rightarrow 0}\cos_{\lambda}t=\cos t,\quad \lim_{\lambda\rightarrow 0}\sin_{\lambda}t=\sin t. \end{displaymath} By \eqref{5}, we get \begin{equation}\label{18} \frac{2}{e_{\lambda}(t)+1}e_{\lambda}^{x+iy}(t)=\sum_{n=0}^{\infty}\mathcal{E}_{n,\lambda}(x+iy)\frac{t^{n}}{n!}. \end{equation} and \begin{equation}\label{19} \frac{2}{e_{\lambda}(t)+1}e_{\lambda}^{x-iy}(t)=\sum_{n=0}^{\infty}\mathcal{E}_{n,\lambda}(x-iy)\frac{t^{n}}{n!}. \end{equation} Now, we define the degenerate cosine and degenerate sine function as \begin{equation}\label{20} \cos_{\lambda}^{(y)}(t)=\frac{e_{\lambda}^{iy}(t) + e_{\lambda}^{-iy} (t)}{2}=\cos\bigg(\frac{y}{\lambda}\log(1+\lambda t)\bigg), \end{equation} \begin{equation}\label{21} \sin_{\lambda}^{(y)}(t)=\frac{e_{\lambda}^{iy}(t) - e_{\lambda}^{-iy} (t)}{2i}=\sin\bigg(\frac{y}{\lambda}\log(1+\lambda t)\bigg).
\end{equation} Note that $\displaystyle\lim_{\lambda\rightarrow 0}\cos_{\lambda}^{(y)}(t)=\cos yt,\ \lim_{\lambda\rightarrow 0}\sin_{\lambda}^{(y)}(t)=\sin yt\displaystyle$. \\ \noindent From \eqref{18} and \eqref{19}, we note that \begin{equation}\label{22} \frac{2}{e_{\lambda}(t)+1}e_{\lambda}^{x}(t)\cos_{\lambda}^{(y)}(t)=\sum_{n=0}^{\infty}\bigg(\frac{\mathcal{E}_{n,\lambda}(x+iy)+\mathcal{E}_{n,\lambda}(x-iy)}{2}\bigg)\frac{t^{n}}{n!}, \end{equation} and \begin{equation}\label{23} \frac{2}{e_{\lambda}(t)+1}e_{\lambda}^{x}(t)\sin_{\lambda}^{(y)}(t)=\sum_{n=0}^{\infty}\bigg(\frac{\mathcal{E}_{n,\lambda}(x+iy)-\mathcal{E}_{n,\lambda}(x-iy)}{2i}\bigg)\frac{t^{n}}{n!}. \end{equation} In view of \eqref{9} and \eqref{10}, we define the degenerate cosine-Euler polynomials and degenerate sine-Euler polynomials respectively by \begin{equation}\label{24} \frac{2}{e_{\lambda}(t)+1}e_{\lambda}^{x}(t)\cos_{\lambda}^{(y)}(t)=\sum_{n=0}^{\infty}\mathcal{E}_{n,\lambda}^{(c)}(x,y)\frac{t^{n}}{n!}, \end{equation} and \begin{equation}\label{25} \frac{2}{e_{\lambda}(t)+1}e_{\lambda}^{x}(t)\sin_{\lambda}^{(y)}(t)=\sum_{n=0}^{\infty}\mathcal{E}_{n,\lambda}^{(s)}(x,y)\frac{t^{n}}{n!}. \end{equation} Note that $\displaystyle\lim_{\lambda\rightarrow 0}\mathcal{E}_{n,\lambda}^{(c)}(x,y)=E_{n}^{(c)}(x,y),\quad\lim_{\lambda\rightarrow 0}\mathcal{E}_{n,\lambda}^{(s)}(x,y)=E_{n}^{(s)}(x,y),\ (n\ge 0)\displaystyle$, where $E_{n}^{(c)}(x,y)$ and $E_{n}^{(s)}(x,y)$ are the new type of Euler polynomials of Masjed-Jamei, Beyki and Koepf (see [15]). \\ ~~~\\ From \eqref{22}-\eqref{25}, we note that \begin{equation}\label{26} \mathcal{E}_{n,\lambda}^{(c)}(x,y)=\frac{\mathcal{E}_{n,\lambda}(x+iy)+\mathcal{E}_{n,\lambda}(x-iy)}{2}, \end{equation} and \begin{equation}\label{27} \mathcal{E}_{n,\lambda}^{(s)}(x,y)=\frac{\mathcal{E}_{n,\lambda}(x+iy)-\mathcal{E}_{n,\lambda}(x-iy)}{2i},\quad (n\ge 0). 
\end{equation} We recall here that the generalized falling factorial sequence is defined by \begin{displaymath} (x)_{0,\lambda}=1,\quad (x)_{n,\lambda}=x(x-\lambda)(x-2\lambda)\cdots(x-(n-1)\lambda),\quad (n\ge 1). \end{displaymath} Note that $\displaystyle\lim_{\lambda\rightarrow 1}(x)_{n,\lambda}=(x)_{n},\quad \lim_{\lambda\rightarrow 0}(x)_{n,\lambda}=x^{n}\displaystyle$.\\ We observe that \begin{align}\label{28} e_{\lambda}^{iy}(t)&=(1+\lambda t)^{\frac{iy}{\lambda}}=e^{\frac{iy}{\lambda}\log(1+\lambda t)}\\ &=\sum_{k=0}^{\infty}\bigg(\frac{iy}{\lambda}\bigg)^{k}\frac{1}{k!}\big(\log(1+\lambda t)\big)^{k}\nonumber\\ &=\sum_{k=0}^{\infty}\lambda^{-k}(iy)^{k}\sum_{n=k}^{\infty}S^{(1)}(n,k)\frac{\lambda^{n}}{n!}t^{n}\nonumber\\ &=\sum_{n=0}^{\infty}\bigg(\sum_{k=0}^{n}\lambda^{n-k}i^{k}y^{k}S^{(1)}(n,k)\bigg)\frac{t^{n}}{n!}.\nonumber \end{align} From \eqref{20}, we can derive the following equation. \begin{align}\label{29} \cos_{\lambda}^{(y)}(t)&=\frac{e_{\lambda}^{iy}(t)+e_{\lambda}^{-iy}(t)}{2}\\ &=\frac{1}{2}\sum_{n=0}^{\infty}\bigg(\sum_{k=0}^{n}\lambda^{n-k}(i^{k}+(-i)^{k})y^{k}S^{(1)}(n,k)\bigg)\frac{t^{n}}{n!}\nonumber\\ &=\sum_{n=0}^{\infty}\bigg( \sum_{k=0}^{[\frac{n}{2}]}\lambda^{n-2k}(-1)^{k}y^{2k}S^{(1)}(n,2k)\bigg)\frac{t^{n}}{n!}\nonumber\\ &=\sum_{k=0}^{\infty}\bigg(\sum_{n=2k}^{\infty}\lambda^{n-2k}(-1)^{k}S^{(1)}(n,2k)\frac{t^{n}}{n!}\bigg)y^{2k}.\nonumber \end{align} Note that \begin{displaymath} \lim_{\lambda\rightarrow 0}\cos_{\lambda}^{(y)}(t)=\sum_{k=0}^{\infty}(-1)^{k}y^{2k}\frac{t^{2k}}{(2k)!}=\cos yt.
\end{displaymath} By \eqref{21}, we get \begin{align}\label{30} \sin_{\lambda}^{(y)}(t)&=\frac{e_{\lambda}^{iy}(t)-e_{\lambda}^{-iy}(t)}{2i}\\ &=\frac{1}{2i}\sum_{n=0}^{\infty}\bigg(\sum_{k=0}^{n}\lambda^{n-k}(i^{k}-(-i)^{k})y^{k}S^{(1)}(n,k)\bigg)\frac{t^{n}}{n!}\nonumber\\ &=\sum_{n=1}^{\infty} \bigg(\sum_{k=0}^{[\frac{n-1}{2}]}\lambda^{n-2k-1}(-1)^{k}y^{2k+1}S^{(1)}(n,2k+1)\bigg)\frac{t^{n}}{n!} \nonumber \\ &=\sum_{k=0}^{\infty}\bigg(\sum_{n=2k+1}^{\infty}(-1)^{k}\lambda^{n-2k-1}S^{(1)}(n,2k+1)\frac{t^{n}}{n!} \bigg)y^{2k+1},\nonumber \end{align} where $[x]$ denotes the greatest integer $\leq x$. \\ ~~\\ Note that \begin{displaymath} \lim_{\lambda\rightarrow 0}\sin_{\lambda}^{(y)}(t)=\sum_{k=0}^{\infty}(-1)^{k}y^{2k+1}\frac{t^{2k+1}}{(2k+1)!}=\sin(yt). \end{displaymath} From \eqref{18}, we note that \begin{align}\label{31} \sum_{n=0}^{\infty}\mathcal{E}_{n,\lambda}(x+iy)\frac{t^{n}}{n!}&=\frac{2}{e_{\lambda}(t)+1}e^{x}_{\lambda}(t)\cdot e^{iy}_{\lambda}(t)\\ &=\sum_{l=0}^{\infty}\mathcal{E}_{l,\lambda}(x)\frac{t^{l}}{l!}\sum_{j=0}^{\infty}(iy)_{j,\lambda}\frac{t^{j}}{j!}\nonumber\\ &=\sum_{n=0}^{\infty}\bigg(\sum_{l=0}^{n}\binom{n}{l}(iy)_{n-l,\lambda}\mathcal{E}_{l,\lambda}(x)\bigg)\frac{t^{n}}{n!}.\nonumber \end{align} On the other hand, \begin{align}\label{32} \frac{2}{e_{\lambda}(t)+1}e_{\lambda}^{x+iy}(t)&=\sum_{l=0}^{\infty}\mathcal{E}_{l,\lambda}\frac{t^{l}}{l!}\sum_{j=0}^{\infty}(x+iy)_{j,\lambda}\frac{t^{j}}{j!}\\ &=\sum_{n=0}^{\infty}\bigg(\sum_{l=0}^{n}\binom{n}{l}(x+iy)_{n-l,\lambda}\mathcal{E}_{l,\lambda}\bigg)\frac{t^{n}}{n!}\nonumber. \end{align} Therefore, by \eqref{31} and \eqref{32}, we obtain the following theorem. \begin{theorem} For $n\ge 0$, we have \begin{align*} \mathcal{E}_{n,\lambda}(x+iy)&=\sum_{l=0}^{n}\binom{n}{l}(iy)_{n-l,\lambda}\mathcal{E}_{l,\lambda}(x)\\ &=\sum_{l=0}^{n}\binom{n}{l}(x+iy)_{n-l,\lambda}\mathcal{E}_{l,\lambda}.
\end{align*} Also, we have \begin{align*} \mathcal{E}_{n,\lambda}(x-iy)&=\sum_{l=0}^{n}\binom{n}{l}(-1)^{n-l}\langle iy\rangle_{n-l,\lambda}\mathcal{E}_{l,\lambda}(x)\\ &=\sum_{l=0}^{n}\binom{n}{l}(-1)^{n-l}\langle iy-x\rangle_{n-l,\lambda}\mathcal{E}_{l,\lambda}, \end{align*} where $\langle x\rangle_{0,\lambda}=1$, $\langle x\rangle_{n,\lambda}=x(x+\lambda)\cdots(x+\lambda(n-1)),\ (n\ge 1)$. \end{theorem} \noindent By \eqref{29}, we get \begin{equation}\label{33} e_{\lambda}^{x}(t)\cos_{\lambda}^{(y)}(t)=\sum_{l=0}^{\infty}(x)_{l,\lambda}\frac{t^{l}}{l!}\sum_{m=0}^{\infty}\sum_{k=0}^{[\frac{m}{2}]}\lambda^{m-2k}(-1)^{k}y^{2k}S^{(1)}(m,2k)\frac{t^{m}}{m!} \end{equation} \begin{displaymath} =\sum_{n=0}^{\infty}\bigg(\sum_{m=0}^{n}\sum_{k=0}^{[\frac{m}{2}]}\binom{n}{m}\lambda^{m-2k}(-1)^{k}y^{2k}S^{(1)}(m,2k)(x)_{n-m,\lambda}\bigg)\frac{t^{n}}{n!}, \end{displaymath} and \begin{equation}\label{34} e_{\lambda}^{x}(t)\sin_{\lambda}^{(y)}(t)=\sum_{l=0}^{\infty}(x)_{l,\lambda}\frac{t^{l}}{l!}\sum_{m=1}^{\infty}\sum_{k=0}^{[\frac{m-1}{2}]}\lambda^{m-2k-1}(-1)^{k}y^{2k+1}S^{(1)}(m,2k+1)\frac{t^{m}}{m!} \end{equation} \begin{displaymath} =\sum_{n=1}^{\infty}\bigg(\sum_{m=1}^{n}\sum_{k=0}^{[\frac{m-1}{2}]} \binom{n}{m}\lambda^{m-2k-1}(-1)^{k}y^{2k+1}S^{(1)}(m,2k+1)(x)_{n-m,\lambda}\bigg)\frac{t^{n}}{n!}. \end{displaymath} Now, we define the degenerate cosine-polynomials and degenerate sine-polynomials respectively by \begin{equation}\label{35} e_{\lambda}^{x}(t)\cos_{\lambda}^{(y)}(t)=\sum_{k=0}^{\infty}C_{k,\lambda}(x,y)\frac{t^{k}}{k!}, \end{equation} and \begin{equation}\label{36} e_{\lambda}^{x}(t)\sin_{\lambda}^{(y)}(t)=\sum_{k=0}^{\infty}S_{k,\lambda}(x,y)\frac{t^{k}}{k!}. \end{equation} Note that \begin{displaymath} \lim_{\lambda\rightarrow 0}C_{k,\lambda}(x,y)=C_{k}(x,y),\quad\lim_{\lambda\rightarrow 0}S_{k,\lambda}(x,y)=S_{k}(x,y), \end{displaymath} where $C_{k}(x,y)$ and $S_{k}(x,y)$ are the cosine-polynomials and sine-polynomials of Masjed-Jamei, Beyki and Koepf.
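For illustration, the first few of these polynomials can be read off from \eqref{35} and \eqref{36}; since $e_{\lambda}^{x}(t)\cos_{\lambda}^{(y)}(t)+ie_{\lambda}^{x}(t)\sin_{\lambda}^{(y)}(t)=e_{\lambda}^{x+iy}(t)$, we have $C_{n,\lambda}(x,y)=\mathrm{Re}\,(x+iy)_{n,\lambda}$ and $S_{n,\lambda}(x,y)=\mathrm{Im}\,(x+iy)_{n,\lambda}$ for real $x,y,\lambda$, whence
\begin{displaymath}
C_{0,\lambda}(x,y)=1,\quad C_{1,\lambda}(x,y)=x,\quad C_{2,\lambda}(x,y)=x(x-\lambda)-y^{2},
\end{displaymath}
and
\begin{displaymath}
S_{0,\lambda}(x,y)=0,\quad S_{1,\lambda}(x,y)=y,\quad S_{2,\lambda}(x,y)=2xy-\lambda y.
\end{displaymath}
Letting $\lambda\rightarrow 0$ recovers $C_{2}(x,y)=x^{2}-y^{2}$ and $S_{2}(x,y)=2xy$, the real and imaginary parts of $(x+iy)^{2}$.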
\par ~~\\ Therefore, by \eqref{33}-\eqref{36}, we obtain the following theorem. \begin{theorem} For $n\ge 0$, we have \begin{align*} C_{n,\lambda}(x,y)&=\sum_{m=0}^{n}\sum_{k=0}^{[\frac{m}{2}]}\binom{n}{m}\lambda^{m-2k}(-1)^{k}y^{2k}S^{(1)}(m,2k)(x)_{n-m,\lambda}\\ &= \sum_{k=0}^{[\frac{n}{2}]}\sum_{m=2k}^{n}\binom{n}{m} \lambda^{m-2k}(-1)^{k}y^{2k}S^{(1)}(m,2k)(x)_{n-m,\lambda}. \end{align*} Also, for $n\in\mathbb{N}$, we have \begin{align*} S_{n,\lambda}(x,y)&=\sum_{m=1}^{n}\sum_{k=0}^{[\frac{m-1}{2}]}\binom{n}{m}\lambda^{m-2k-1}(-1)^{k}y^{2k+1}S^{(1)}(m,2k+1)(x)_{n-m,\lambda}\\ &=\sum_{k=0}^{[\frac{n-1}{2}]}\sum_{m=2k+1}^{n} \binom{n}{m}\lambda^{m-2k-1}(-1)^{k}y^{2k+1}S^{(1)}(m,2k+1)(x)_{n-m,\lambda}. \end{align*} Moreover, $S_{0,\lambda}(x,y)=0$. \end{theorem} \noindent From \eqref{24}, we note that \begin{align}\label{37} \sum_{n=0}^{\infty}\mathcal{E}_{n,\lambda}^{(c)}(x,y)\frac{t^{n}}{n!}&=\frac{2}{e_{\lambda}(t)+1}e_{\lambda}^{x}(t)\cos_{\lambda}^{(y)}(t)\\ &=\sum_{m=0}^{\infty}\mathcal{E}_{m,\lambda}\frac{t^{m}}{m!}\sum_{l=0}^{\infty}C_{l,\lambda}(x,y)\frac{t^{l}}{l!}\nonumber\\ &=\sum_{n=0}^{\infty}\bigg(\sum_{m=0}^{n}\binom{n}{m}\mathcal{E}_{m,\lambda}C_{n-m,\lambda}(x,y)\bigg)\frac{t^{n}}{n!}\nonumber.
\end{align} On the other hand, \begin{align}\label{38} \frac{2}{e_{\lambda}(t)+1}e_{\lambda}^{x}(t)\cos_{\lambda}^{(y)}(t)&=\sum_{m=0}^{\infty}\mathcal{E}_{m,\lambda}(x)\frac{t^{m}}{m!}\sum_{l=0}^{\infty}\sum_{k=0}^{[\frac{l}{2}]}\lambda^{l-2k}(-1)^{k}y^{2k}S^{(1)}(l,2k)\frac{t^{l}}{l!}\\ &=\sum_{n=0}^{\infty}\bigg(\sum_{l=0}^{n}\sum_{k=0}^{[\frac{l}{2}]}\binom{n}{l}\lambda^{l-2k}(-1)^{k}y^{2k}S^{(1)}(l,2k)\mathcal{E}_{n-l,\lambda}(x)\bigg)\frac{t^{n}}{n!}\nonumber\\ &=\sum_{n=0}^{\infty}\bigg(\sum_{k=0}^{[\frac{n}{2}]}\sum_{l=2k}^{n}\binom{n}{l}\lambda^{l-2k}(-1)^{k}y^{2k}S^{(1)}(l,2k)\mathcal{E}_{n-l,\lambda}(x)\bigg)\frac{t^{n}}{n!}.\nonumber \end{align} By \eqref{30}, we get \begin{align}\label{39} \frac{2}{e_{\lambda}(t)+1}e_{\lambda}^{x}(t)\sin_{\lambda}^{(y)}(t)&=\sum_{m=0}^{\infty}\mathcal{E}_{m,\lambda}(x)\frac{t^{m}}{m!}\sum_{l=1}^{\infty}\sum_{k=0}^{[\frac{l-1}{2}]}(-1)^{k}\lambda^{l-2k-1}y^{2k+1}S^{(1)}(l,2k+1)\frac{t^{l}}{l!}\\ &=\sum_{n=1}^{\infty}\bigg(\sum_{l=1}^{n}\sum_{k=0}^{[\frac{l-1}{2}]}\binom{n}{l}\lambda^{l-2k-1}(-1)^{k}y^{2k+1}S^{(1)}(l,2k+1)\mathcal{E}_{n-l,\lambda}(x)\bigg)\frac{t^{n}}{n!}\nonumber\\ &=\sum_{n=1}^{\infty}\bigg(\sum_{k=0}^{[\frac{n-1}{2}]} \sum_{l=2k+1}^{n}\binom{n}{l}\lambda^{l-2k-1}(-1)^{k}y^{2k+1}S^{(1)}(l,2k+1)\mathcal{E}_{n-l,\lambda}(x)\bigg)\frac{t^{n}}{n!}.\nonumber \end{align} Therefore, by \eqref{24}, \eqref{25}, and \eqref{37}-\eqref{39}, we obtain the following theorem. \begin{theorem} For $n\ge 0$, we have \begin{align*} \mathcal{E}_{n,\lambda}^{(c)}(x,y)&=\sum_{k=0}^{n}\binom{n}{k}\mathcal{E}_{k,\lambda}C_{n-k,\lambda}(x,y)\\ &=\sum_{k=0}^{[\frac{n}{2}]}\sum_{l=2k}^{n}\binom{n}{l}\lambda^{l-2k}(-1)^{k}y^{2k}S^{(1)}(l,2k)\mathcal{E}_{n-l,\lambda}(x).
\end{align*} Also, for $n\in\mathbb{N}$, we obtain \begin{align*} \mathcal{E}_{n,\lambda}^{(s)}(x,y)&=\sum_{k=0}^{n}\binom{n}{k}\mathcal{E}_{k,\lambda}S_{n-k,\lambda}(x,y)\\ &=\sum_{k=0}^{[\frac{n-1}{2}]}\sum_{l=2k+1}^{n}\binom{n}{l}\lambda^{l-2k-1}(-1)^{k}y^{2k+1}S^{(1)}(l,2k+1)\mathcal{E}_{n-l,\lambda}(x). \end{align*} \end{theorem} By \eqref{24}, we get \begin{align}\label{40} 2e_{\lambda}^{x}(t)\cos_{\lambda}^{(y)}(t)&=\sum_{l=0}^{\infty}\mathcal{E}_{l,\lambda}^{(c)}(x,y)\frac{t^{l}}{l!}(e_{\lambda}(t)+1)\\ &=\sum_{l=0}^{\infty}\mathcal{E}_{l,\lambda}^{(c)}(x,y)\frac{t^{l}}{l!}\sum_{m=0}^{\infty}(1)_{m,\lambda}\frac{t^{m}}{m!}+\sum_{n=0}^{\infty}\mathcal{E}_{n,\lambda}^{(c)}(x,y)\frac{t^{n}}{n!}\nonumber\\ &=\sum_{n=0}^{\infty}\bigg(\sum_{l=0}^{n}\binom{n}{l}(1)_{n-l,\lambda}\mathcal{E}_{l,\lambda}^{(c)}(x,y)+\mathcal{E}_{n,\lambda}^{(c)}(x,y)\bigg)\frac{t^{n}}{n!}.\nonumber \end{align} Therefore by comparing the coefficients on both sides of \eqref{35} and \eqref{40}, we obtain the following theorem. \begin{theorem} For $n\ge 0$, we have \begin{displaymath} C_{n,\lambda}(x,y)=\frac{1}{2}\bigg(\sum_{l=0}^{n}\binom{n}{l}(1)_{n-l,\lambda}\mathcal{E}_{l,\lambda}^{(c)}(x,y)+\mathcal{E}_{n,\lambda}^{(c)}(x,y)\bigg), \end{displaymath} and \begin{displaymath} S_{n,\lambda}(x,y)=\frac{1}{2}\bigg(\sum_{l=0}^{n}\binom{n}{l}(1)_{n-l,\lambda}\mathcal{E}_{l,\lambda}^{(s)}(x,y)+\mathcal{E}_{n,\lambda}^{(s)}(x,y)\bigg). 
\end{displaymath} \end{theorem} \noindent From \eqref{24}, we have \begin{align}\label{41} \sum_{n=0}^{\infty}\mathcal{E}_{n,\lambda}^{(c)}(x+r,y)\frac{t^{n}}{n!}&=\frac{2}{e_{\lambda}(t)+1}e_{\lambda}^{x+r}(t)\cos_{\lambda}^{(y)}(t)\\ &=\frac{2}{e_{\lambda}(t)+1}e_{\lambda}^{x}(t)\cos_{\lambda}^{(y)}(t) e_{\lambda}^{r}(t)\nonumber\\ &=\sum_{l=0}^{\infty}\mathcal{E}_{l,\lambda}^{(c)}(x,y)\frac{t^{l}}{l!}\sum_{m=0}^{\infty}(r)_{m,\lambda}\frac{t^{m}}{m!}\nonumber\\ &=\sum_{n=0}^{\infty}\bigg(\sum_{l=0}^{n}\binom{n}{l}\mathcal{E}_{l,\lambda}^{(c)}(x,y)(r)_{n-l,\lambda}\bigg)\frac{t^{n}}{n!}.\nonumber \end{align} Therefore, by comparing the coefficients on both sides of \eqref{41}, we obtain the following proposition. \begin{proposition} For $n\ge 0$, we have \begin{displaymath} \mathcal{E}_{n,\lambda}^{(c)}(x+r,y)=\sum_{l=0}^{n}\binom{n}{l}\mathcal{E}_{l,\lambda}^{(c)}(x,y)(r)_{n-l,\lambda}, \end{displaymath} and \begin{displaymath} \mathcal{E}_{n,\lambda}^{(s)}(x+r,y)=\sum_{l=0}^{n}\binom{n}{l}\mathcal{E}_{l,\lambda}^{(s)}(x,y)(r)_{n-l,\lambda}, \end{displaymath} where $r$ is a fixed real (or complex) number. \end{proposition} \noindent Now, we consider the reflection symmetric identities for the degenerate cosine-Euler polynomials. 
\\ ~~~\ By \eqref{24}, we get \begin{align}\label{42} \sum_{n=0}^{\infty}\mathcal{E}_{n,\lambda}^{(c)}(1-x,y)\frac{t^{n}}{n!}&=\frac{2}{e_{\lambda}(t)+1}e_{\lambda}^{1-x}(t)\cos_{\lambda}^{(y)}(t)\\ &=\frac{2}{1+e_{\lambda}^{-1}(t)}e_{\lambda}^{-x}(t)\cos_{\lambda}^{(y)}(t)\nonumber\\ &=\frac{2}{e_{-\lambda}(-t)+1}e_{-\lambda}^{x}(-t)\cos_{-\lambda}^{(y)}(-t)\nonumber\\ &=\sum_{n=0}^{\infty}\mathcal{E}_{n,-\lambda}^{(c)}(x,y)\frac{(-1)^{n}t^{n}}{n!}, \nonumber \end{align} and \begin{align}\label{43} \sum_{n=0}^{\infty}\mathcal{E}_{n,\lambda}^{(s)}(1-x,y)\frac{t^{n}}{n!}&=\frac{2}{e_{\lambda}(t)+1}e_{\lambda}^{1-x}(t)\sin_{\lambda}^{(y)}(t)\\ &=\frac{2}{1+e_{\lambda}^{-1}(t)}e_{\lambda}^{-x}(t)\sin_{\lambda}^{(y)}(t)\nonumber\\ &=\frac{2}{e_{-\lambda}(-t)+1}e_{-\lambda}^{x}(-t)\sin_{-\lambda}^{(y)}(-t)\nonumber\\ &=-\sum_{n=0}^{\infty}\mathcal{E}_{n,-\lambda}^{(s)}(x,y)\frac{(-1)^{n}t^{n}}{n!}. \nonumber \end{align} Therefore, by \eqref{42} and \eqref{43}, we obtain the following theorem. \begin{theorem} For $n\ge 0$, we have \begin{displaymath} \mathcal{E}_{n,\lambda}^{(c)}(1-x,y)=(-1)^{n}\mathcal{E}_{n,-\lambda}^{(c)}(x,y), \end{displaymath} and \begin{displaymath} \mathcal{E}_{n,\lambda}^{(s)}(1-x,y)=(-1)^{n+1}\mathcal{E}_{n,-\lambda}^{(s)}(x,y). \end{displaymath} \end{theorem} Now, we observe that \begin{align}\label{44} \sum_{n=0}^{\infty}\mathcal{E}_{n,\lambda}^{(c)}(x,y)\frac{t^{n}}{n!}&=\frac{2}{e_{\lambda}(t)+1}(e_{\lambda}(t)-1+1)^{x}\cos_{\lambda}^{(y)}(t)\\ &=\frac{2}{e_{\lambda}(t)+1}\sum_{l=0}^{\infty}\binom{x}{l}(e_{\lambda}(t)-1)^{l}\cos_{\lambda}^{(y)}(t)\nonumber\\ &=\frac{2}{e_{\lambda}(t)+1}\cos_{\lambda}^{(y)}(t)\sum_{l=0}^{\infty}(x)_{l}\sum_{k=l}^{\infty}S_{\lambda}^{(2)}(k,l)\frac{t^{k}}{k!}\nonumber\\ &=\sum_{j=0}^{\infty}\mathcal{E}_{j,\lambda}^{(c)}(y)\frac{t^{j}}{j!}\sum_{k=0}^{\infty}\bigg(\sum_{l=0}^{k}(x)_{l}S_{\lambda}^{(2)}(k,l)\bigg)\frac{t^{k}}{k!}\nonumber\\
&=\sum_{n=0}^{\infty}\bigg(\sum_{k=0}^{n}\sum_{l=0}^{k}\binom{n}{k}(x)_{l}S_{\lambda}^{(2)}(k,l)\mathcal{E}_{n-k,\lambda}^{(c)}(y)\bigg)\frac{t^{n}}{n!}.\nonumber \end{align} Therefore, by \eqref{44}, we obtain the following theorem. \begin{theorem} For $n\ge 0$, we have \begin{displaymath} \mathcal{E}_{n,\lambda}^{(c)}(x,y)=\sum_{k=0}^{n}\sum_{l=0}^{k}\binom{n}{k}(x)_{l}S_{\lambda}^{(2)}(k,l)\mathcal{E}_{n-k,\lambda}^{(c)}(y). \end{displaymath} Also, for $n\in\mathbb{N}$, we have \begin{displaymath} \mathcal{E}_{n,\lambda}^{(s)}(x,y)=\sum_{k=0}^{n}\sum_{l=0}^{k}\binom{n}{k}(x)_{l}S_{\lambda}^{(2)}(k,l)\mathcal{E}_{n-k,\lambda}^{(s)}(y). \end{displaymath} \end{theorem} \section{Degenerate Bernoulli polynomials of complex variable} In this section, we will consider the degenerate Bernoulli polynomials of complex variable and, by treating the real and imaginary parts separately, introduce the degenerate cosine-Bernoulli polynomials and degenerate sine-Bernoulli polynomials. \noindent From \eqref{4}, we have \begin{equation}\label{45} \frac{t}{e_{\lambda}(t)-1}e_{\lambda}^{x+iy}(t)=\sum_{n=0}^{\infty}\beta_{n,\lambda}(x+iy)\frac{t^{n}}{n!}, \end{equation} and \begin{equation}\label{46} \frac{t}{e_{\lambda}(t)-1}e_{\lambda}^{x-iy}(t)=\sum_{n=0}^{\infty}\beta_{n,\lambda}(x-iy)\frac{t^{n}}{n!}. \end{equation} Thus, by \eqref{45} and \eqref{46}, we get \begin{equation}\label{47} \sum_{n=0}^{\infty}\big(\beta_{n,\lambda}(x+iy)+\beta_{n,\lambda}(x-iy)\big)\frac{t^{n}}{n!}=2\frac{t}{e_{\lambda}(t)-1}e_{\lambda}^{x}(t)\cos_{\lambda}^{(y)}(t), \end{equation} and \begin{equation}\label{48} \sum_{n=0}^{\infty}\big(\beta_{n,\lambda}(x+iy)-\beta_{n,\lambda}(x-iy)\big)\frac{t^{n}}{n!}=2i\frac{t}{e_{\lambda}(t)-1}e_{\lambda}^{x}(t)\sin_{\lambda}^{(y)}(t).
\end{equation} In view of \eqref{24} and \eqref{25}, we define the degenerate cosine-Bernoulli polynomials and degenerate sine-Bernoulli polynomials respectively by \begin{equation}\label{49} \frac{t}{e_{\lambda}(t)-1}e^{x}_{\lambda}(t)\cos_{\lambda}^{(y)}(t)=\sum_{n=0}^{\infty}\beta_{n,\lambda}^{(c)}(x,y)\frac{t^{n}}{n!}, \end{equation} and \begin{equation}\label{50} \frac{t}{e_{\lambda}(t)-1}e^{x}_{\lambda}(t)\sin_{\lambda}^{(y)}(t)=\sum_{n=0}^{\infty}\beta_{n,\lambda}^{(s)}(x,y)\frac{t^{n}}{n!}. \end{equation} Note that $\beta_{0,\lambda}^{(s)}(x,y)=0$.\\ From \eqref{47}-\eqref{50}, we have \begin{equation}\label{51} \beta_{n,\lambda}^{(c)}(x,y)=\frac{\beta_{n,\lambda}(x+iy)+\beta_{n,\lambda}(x-iy)}{2}, \end{equation} and \begin{equation}\label{52} \beta_{n,\lambda}^{(s)}(x,y)=\frac{\beta_{n,\lambda}(x+iy)-\beta_{n,\lambda}(x-iy)}{2i},\quad (n\ge 0). \end{equation} Note that \begin{displaymath} \lim_{\lambda\rightarrow 0}\beta_{n,\lambda}^{(c)}(x,y)=B_{n}^{(c)}(x,y),\quad\lim_{\lambda\rightarrow 0}\beta_{n,\lambda}^{(s)}(x,y)=B_{n}^{(s)}(x,y), \end{displaymath} where $B_{n}^{(c)}(x,y),\ B_{n}^{(s)}(x,y)$ are cosine-Bernoulli polynomials, and sine-Bernoulli polynomials (see [12,16]). 
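As a quick check of \eqref{51} and \eqref{52} at $n=1$, note that $\beta_{0,\lambda}=1$ and $\beta_{1,\lambda}=\frac{\lambda-1}{2}$, so that $\beta_{1,\lambda}(x)=x+\frac{\lambda-1}{2}$. Hence
\begin{displaymath}
\beta_{1,\lambda}^{(c)}(x,y)=\frac{\beta_{1,\lambda}(x+iy)+\beta_{1,\lambda}(x-iy)}{2}=x+\frac{\lambda-1}{2},\quad \beta_{1,\lambda}^{(s)}(x,y)=\frac{\beta_{1,\lambda}(x+iy)-\beta_{1,\lambda}(x-iy)}{2i}=y,
\end{displaymath}
which reduce to $B_{1}^{(c)}(x,y)=x-\frac{1}{2}$ and $B_{1}^{(s)}(x,y)=y$ as $\lambda\rightarrow 0$.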
~~\\ ~~\\ By \eqref{49}, we get \begin{align}\label{53} \sum_{n=0}^{\infty}\beta_{n,\lambda}^{(c)}(x,y)\frac{t^{n}}{n!}&=\frac{t}{e_{\lambda}(t)-1}e_{\lambda}^{x}(t)\cos_{\lambda}^{(y)}(t)\\ &=\sum_{l=0}^{\infty}\beta_{l,\lambda}\frac{t^{l}}{l!}\sum_{m=0}^{\infty}C_{m,\lambda}(x,y)\frac{t^{m}}{m!}\nonumber\\ &=\sum_{n=0}^{\infty}\bigg(\sum_{l=0}^{n}\binom{n}{l}\beta_{l,\lambda}C_{n-l,\lambda}(x,y)\bigg)\frac{t^{n}}{n!}.\nonumber \end{align} On the other hand, \begin{align}\label{54} \frac{t}{e_{\lambda}(t)-1}&e_{\lambda}^{x}(t)\cos_{\lambda}^{(y)}(t)\\ &=\sum_{m=0}^{\infty}\beta_{m,\lambda}(x)\frac{t^{m}}{m!}\sum_{l=0}^{\infty}\sum_{k=0}^{[\frac{l}{2}]}\lambda^{l-2k}(-1)^{k}y^{2k}S^{(1)}(l,2k)\frac{t^{l}}{l!}\nonumber\\ &=\sum_{n=0}^{\infty}\bigg(\sum_{l=0}^{n}\sum_{k=0}^{[\frac{l}{2}]}\binom{n}{l}\lambda^{l-2k}(-1)^{k}y^{2k}S^{(1)}(l,2k)\beta_{n-l,\lambda}(x)\bigg)\frac{t^{n}}{n!}\nonumber \\ &=\sum_{n=0}^{\infty}\bigg(\sum_{k=0}^{[\frac{n}{2}]}\sum_{l=2k}^{n} \binom{n}{l}\lambda^{l-2k}(-1)^{k}y^{2k}S^{(1)}(l,2k)\beta_{n-l,\lambda}(x)\bigg)\frac{t^{n}}{n!}.\nonumber \end{align} Therefore, by \eqref{53} and \eqref{54}, we obtain the following theorem. \begin{theorem} For $n\ge 0$, we have \begin{align*} \beta_{n,\lambda}^{(c)}(x,y)&=\sum_{k=0}^{n}\binom{n}{k}\beta_{k,\lambda}C_{n-k,\lambda}(x,y)\\ &=\sum_{k=0}^{[\frac{n}{2}]}\sum_{l=2k}^{n}\binom{n}{l}\lambda^{l-2k}(-1)^{k}y^{2k}S^{(1)}(l,2k)\beta_{n-l,\lambda}(x). \end{align*} Also, for $n\in\mathbb{N}$, we have \begin{align*} \beta_{n,\lambda}^{(s)}(x,y)&=\sum_{k=0}^{n}\binom{n}{k}\beta_{k,\lambda}S_{n-k,\lambda}(x,y)\\ &=\sum_{k=0}^{[\frac{n-1}{2}]}\sum_{l=2k+1}^{n}\binom{n}{l}\lambda^{l-2k-1}(-1)^{k}y^{2k+1}S^{(1)}(l,2k+1)\beta_{n-l,\lambda}(x). \end{align*} Moreover, \begin{displaymath} \beta_{0,\lambda}^{(s)}(x,y)=0.
\end{displaymath} \end{theorem} \noindent From \eqref{49}, we have \begin{align} \label{55} \sum_{n=0}^{\infty}\beta_{n,\lambda}^{(c)}(1-x,y)\frac{t^{n}}{n!} &=\frac{t}{1-e_{\lambda}^{-1}(t)} e_{\lambda}^{-x}(t)\cos_{\lambda}^{(y)}(t)\\ &=\frac{-t}{e_{-\lambda}(-t)-1}e_{-\lambda}^{x}(-t)\cos_{-\lambda}^{(y)}(-t)\nonumber\\ &= \sum_{n=0}^{\infty}\beta_{n,-\lambda}^{(c)}(x,y)\frac{(-1)^{n}}{n!}t^{n}.\nonumber \end{align} Therefore, by \eqref{55}, we obtain the following theorem. \begin{theorem} For $n\ge 0$, we have \begin{displaymath} \beta_{n,\lambda}^{(c)}(1-x,y)=(-1)^{n}\beta_{n,-\lambda}^{(c)}(x,y), \end{displaymath} and \begin{displaymath} \beta_{n,\lambda}^{(s)}(1-x,y)= (-1)^{n+1}\beta_{n,-\lambda}^{(s)}(x,y). \end{displaymath} \end{theorem} \noindent By \eqref{49}, we easily get \begin{align}\label{56} \sum_{n=0}^{\infty}\beta_{n,\lambda}^{(c)}(x+r,y)\frac{t^{n}}{n!}&=\frac{t}{e_{\lambda}(t)-1}e_{\lambda}^{x+r}(t)\cos_{\lambda}^{(y)}(t)\\ &=\frac{t}{e_{\lambda}(t)-1}e_{\lambda}^{x}(t)\cos_{\lambda}^{(y)}(t)e_{\lambda}^{r}(t)\nonumber\\ &=\sum_{l=0}^{\infty}\beta_{l,\lambda}^{(c)}(x,y)\frac{t^{l}}{l!}\sum_{m=0}^{\infty}(r)_{m,\lambda}\frac{t^{m}}{m!}\nonumber\\ &=\sum_{n=0}^{\infty}\bigg(\sum_{l=0}^{n}\binom{n}{l}\beta_{l,\lambda}^{(c)}(x,y)(r)_{n-l,\lambda}\bigg)\frac{t^{n}}{n!}\nonumber. \end{align} By comparing the coefficients on both sides of \eqref{56}, we get \begin{equation}\label{57} \beta_{n,\lambda}^{(c)}(x+r,y)=\sum_{l=0}^{n}\binom{n}{l}\beta_{l,\lambda}^{(c)}(x,y)(r)_{n-l,\lambda}, \end{equation} and \begin{equation}\label{58} \beta_{n,\lambda}^{(s)}(x+r,y)=\sum_{l=0}^{n}\binom{n}{l}\beta_{l,\lambda}^{(s)}(x,y)(r)_{n-l,\lambda}, \end{equation} where $r$ is a fixed real (or complex) number.
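For instance, taking $n=1$ in \eqref{57} gives
\begin{displaymath}
\beta_{1,\lambda}^{(c)}(x+r,y)=\beta_{1,\lambda}^{(c)}(x,y)+(r)_{1,\lambda}\beta_{0,\lambda}^{(c)}(x,y)=\beta_{1,\lambda}^{(c)}(x,y)+r,
\end{displaymath}
since $\beta_{0,\lambda}^{(c)}(x,y)=1$ and $(r)_{1,\lambda}=r$; this agrees with the direct computation $\beta_{1,\lambda}^{(c)}(x,y)=x+\frac{\lambda-1}{2}$ from \eqref{51}, for which replacing $x$ by $x+r$ simply adds $r$.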
\\ ~~\\ From \eqref{49}, we note that \begin{align}\label{59} te_{\lambda}^{x}(t)\cos_{\lambda}^{(y)}(t)&=\sum_{l=0}^{\infty}\beta_{l,\lambda}^{(c)}(x,y)\frac{t^{l}}{l!}(e_{\lambda}(t)-1)\\ &= \sum_{l=0}^{\infty}\beta_{l,\lambda}^{(c)}(x,y)\frac{t^{l}}{l!}\sum_{m=0}^{\infty}(1)_{m,\lambda}\frac{t^{m}}{m!}-\sum_{n=0}^{\infty}\beta_{n,\lambda}^{(c)}(x,y)\frac{t^{n}}{n!}\nonumber \\ &=\sum_{n=0}^{\infty}\bigg(\sum_{l=0}^{n}\binom{n}{l}\beta_{l,\lambda}^{(c)}(x,y)(1)_{n-l,\lambda}-\beta_{n,\lambda}^{(c)}(x,y)\bigg)\frac{t^{n}}{n!}\nonumber \\ &=\sum_{n=1}^{\infty}\bigg(\beta_{n,\lambda}^{(c)}(x+1,y)-\beta_{n,\lambda}^{(c)}(x,y)\bigg)\frac{t^{n}}{n!}\nonumber\\ &=\sum_{n=0}^{\infty}\bigg(\frac{\beta_{n+1,\lambda}^{(c)}(x+1,y)-\beta_{n+1,\lambda}^{(c)}(x,y)}{n+1}\bigg)\frac{t^{n+1}}{n!}.\nonumber \end{align} By \eqref{59}, we get \begin{equation}\label{60} \sum_{n=0}^{\infty}\bigg(\frac{\beta_{n+1,\lambda}^{(c)}(x+1,y)-\beta_{n+1,\lambda}^{(c)}(x,y)}{n+1}\bigg)\frac{t^{n}}{n!}=e_{\lambda}^{x}(t)\cos_{\lambda}^{(y)}(t)=\sum_{n=0}^{\infty}C_{n,\lambda}(x,y)\frac{t^{n}}{n!}. \end{equation} Therefore, by comparing the coefficients on both sides of \eqref{60}, we obtain the following theorem. \begin{theorem} For $n\ge 0$, we have \begin{displaymath} C_{n,\lambda}(x,y)=\frac{1}{n+1}\big\{\beta_{n+1,\lambda}^{(c)}(x+1,y)-\beta_{n+1,\lambda}^{(c)}(x,y)\big\}, \end{displaymath} and \begin{displaymath} S_{n,\lambda}(x,y)=\frac{1}{n+1}\big\{\beta_{n+1,\lambda}^{(s)}(x+1,y)-\beta_{n+1,\lambda}^{(s)}(x,y)\big\}. \end{displaymath} \end{theorem} \begin{corollary} For $n\ge 1$, we have \begin{displaymath} C_{n,\lambda}(x,y)=\frac{1}{n+1}\sum_{l=0}^{n}\binom{n+1}{l}\beta_{l,\lambda}^{(c)}(x,y)(1)_{n+1-l,\lambda}, \end{displaymath} and \begin{displaymath} S_{n,\lambda}(x,y)= \frac{1}{n+1}\sum_{l=0}^{n}\binom{n+1}{l}\beta_{l,\lambda}^{(s)}(x,y)(1)_{n+1-l,\lambda}. 
\end{displaymath} \end{corollary} \noindent When $x=0$, let $\beta_{n,\lambda}^{(c)}(0,y)=\beta_{n,\lambda}^{(c)}(y),\ \beta_{n,\lambda}^{(s)}(0,y)=\beta_{n,\lambda}^{(s)}(y)$, $\mathcal{E}_{n,\lambda}^{(c)}(0,y)=\mathcal{E}_{n,\lambda}^{(c)}(y)$, and\\ $\mathcal{E}_{n,\lambda}^{(s)}(0,y)=\mathcal{E}_{n,\lambda}^{(s)}(y)$. \\ ~~~\\ For $n\ge 0$, we have \begin{equation}\label{61} \beta_{n,\lambda}^{(c)}(y)=\sum_{k=0}^{[\frac{n}{2}]}\sum_{l=2k}^{n}\binom{n}{l}\lambda^{l-2k}(-1)^{k}y^{2k}S^{(1)}(l,2k)\beta_{n-l,\lambda}. \end{equation} Also, for $n\in\mathbb{N}$, we get \begin{equation}\label{62} \beta_{n,\lambda}^{(s)}(y)=\sum_{k=0}^{[\frac{n-1}{2}]}\sum_{l=2k+1}^{n}\binom{n}{l}\lambda^{l-2k-1}(-1)^{k}y^{2k+1}S^{(1)}(l,2k+1)\beta_{n-l,\lambda}. \end{equation} \noindent By \eqref{49}, we get \begin{align}\label{63} \sum_{n=0}^{\infty}\beta_{n,\lambda}^{(c)}(x,y)\frac{t^{n}}{n!}&=\frac{t}{e_{\lambda}(t)-1}\cos_{\lambda}^{(y)}(t)\big(e_{\lambda}(t)-1+1\big)^{x}\\ &=\sum_{m=0}^{\infty}\beta_{m,\lambda}^{(c)}(y)\frac{t^{m}}{m!}\sum_{l=0}^{\infty}(x)_{l}\sum_{k=l}^{\infty}S_{\lambda}^{(2)}(k,l)\frac{t^{k}}{k!}\nonumber\\ &=\sum_{m=0}^{\infty}\beta_{m,\lambda}^{(c)}(y)\frac{t^{m}}{m!}\sum_{k=0}^{\infty}\sum_{l=0}^{k}(x)_{l}S_{\lambda}^{(2)}(k,l)\frac{t^{k}}{k!}\nonumber\\ &=\sum_{n=0}^{\infty}\bigg(\sum_{k=0}^{n}\sum_{l=0}^{k}\binom{n}{k}(x)_{l}S_{\lambda}^{(2)}(k,l)\beta_{n-k,\lambda}^{(c)}(y)\bigg)\frac{t^{n}}{n!}.\nonumber \end{align} Comparing the coefficients on both sides of \eqref{63}, we have \begin{displaymath} \beta_{n,\lambda}^{(c)}(x,y)=\sum_{k=0}^{n}\sum_{l=0}^{k}\binom{n}{k}(x)_{l}S_{\lambda}^{(2)}(k,l)\beta_{n-k,\lambda}^{(c)}(y). \end{displaymath} Also, for $n\in\mathbb{N}$, we get \begin{displaymath} \beta_{n,\lambda}^{(s)}(x,y)=\sum_{k=0}^{n}\sum_{l=0}^{k}\binom{n}{k}(x)_{l}S_{\lambda}^{(2)}(k,l)\beta_{n-k,\lambda}^{(s)}(y), \end{displaymath} and \begin{displaymath} \beta_{0,\lambda}^{(s)}(x,y)=0.
\end{displaymath} \section{Conclusions} In [15], the authors introduced the so-called new type Euler polynomials by means of generating functions (see \eqref{9}, \eqref{10}) and deduced several properties and identities for these polynomials. Hac\`ene Belbachir, the reviewer of the paper [15], asked the following question in Mathematical Reviews (MR3808565) of the American Mathematical Society: Is it possible to obtain their results by considering the classical Euler polynomials of complex variable $z$, and treating the real part and the imaginary part separately?\\ \indent Our result gives an affirmative answer to the question (see \eqref{16}). In this paper, we considered the degenerate Euler and Bernoulli polynomials of complex variable and, by treating the real and imaginary parts separately, were able to introduce degenerate cosine-Euler polynomials, degenerate sine-Euler polynomials, degenerate cosine-Bernoulli polynomials, and degenerate sine-Bernoulli polynomials. They are degenerate versions of the new type Euler polynomials studied by Masjed-Jamei, Beyki and Koepf [15] and of the ``new type Bernoulli polynomials.''\\ \indent In Section 2, the degenerate cosine-polynomials and degenerate sine-polynomials were introduced and their explicit expressions were derived. The degenerate cosine-Euler polynomials and degenerate sine-Euler polynomials were expressed in terms of degenerate cosine-polynomials and degenerate sine-polynomials and vice versa. Further, some reflection identities were found for the degenerate cosine-Euler polynomials and degenerate sine-Euler polynomials. In Section 3, the degenerate cosine-Bernoulli polynomials and degenerate sine-Bernoulli polynomials were introduced. They were expressed in terms of degenerate cosine-polynomials and degenerate sine-polynomials and vice versa. Reflection symmetries were deduced for the degenerate cosine-Bernoulli polynomials and degenerate sine-Bernoulli polynomials.
Further, some expressions involving the degenerate Stirling numbers of the second kind were derived for them.\\ \indent It was Carlitz [1,2] who initiated the study of degenerate versions of some special polynomials, namely the degenerate Bernoulli and Euler polynomials. Studying degenerate versions of some special polynomials and numbers has turned out to be very fruitful and promising (see [3,5-11,13-14,19] and references therein). In fact, this idea of considering degenerate versions is not limited to polynomials but can be extended even to transcendental functions like the gamma function [8].
\section{Conclusion}\label{sec:conc} In this paper, we proposed a novel progressive face aging framework based on generative adversarial networks~(PFA-GAN) to model the age progression by a progressive neural network. In doing so, PFA-GAN aged input young faces progressively to mimic human face age progression. We also introduced a novel age estimation loss and an aging smoothness metric. PFA-GAN can be optimized in an end-to-end manner to eliminate the accumulative error. Experimental results on two benchmark datasets demonstrated the superiority of PFA-GAN over the state-of-the-art cGAN-based methods in terms of image quality, aging accuracy, aging smoothness, and identity preservation. \section{Experiments}\label{sec:exp} \begin{figure*}[t!] \centering \includegraphics[width=1.0\linewidth]{aged_faces_morph_optimized.pdf} \caption{The generated aged faces by PFA-GAN on the MORPH dataset. We show the input faces with their real ages in the first column, and different aged faces of the same subject in the next three columns corresponding to three age groups $31-40$, $41-50$ and $51+$, respectively.} \label{fig:aged_faces_morph} \end{figure*} \subsection{Data Collection} We conducted extensive experiments on two benchmark age datasets: {MORPH}~\cite{ricanek2006morph} and {CACD}~\cite{chen2015face}. The MORPH dataset~\cite{ricanek2006morph} is the most popular benchmark for face aging, which contains 55,134 colorful face images with strictly controlled conditions such as the near-frontal pose, neutral expression, moderate illumination, and simple background. The CACD dataset~\cite{chen2015face} contains 163,446 face images of 2,000 celebrities, which were collected via Google Image Search with few controlled conditions. Therefore, CACD has large variations in pose, illumination, and expression, making it much more challenging than the MORPH dataset.
For the raw face images in both datasets, we first extend the transformation matrix used in~\cite{zhang2016joint} to have an output size of $256\times 256$ with additional 20\% margin on all sides of the faces, doubling the image size of most previous works such as~\cite{zhang2017age,wang2018face}. We then align and crop the faces with five facial landmarks detected by Face++ API~\cite{faceplusplus.com} using an affine transformation to make the faces in the center of the input images. Before being fed into the network, the original intensity range of all cropped faces, $[0, 255]$, is linearly normalized into $[-1, 1]$. For the interpretation of the outputs, all the outputs of the network are first truncated into $[-1, 1]$ and then rescaled back to $[0, 255]$ for the metrics calculation and display. Following the convention~\cite{yang2018learning,liu2019attribute,li2019age}, we divided the face images into four age groups; \textit{i}.\textit{e}.\xspace, $30-$, $31-40$, $41-50$, $51+$. For each dataset, we randomly select $80\%$ images for training and the rest for testing, and ensure that there is no overlap in identities between these two sets. Here, we also adopted FG-NET~\cite{lanitis2002toward} as the testing set for a fair comparison with more prior works. Specifically, FG-NET is popular in face aging analysis but only contains 1,002 images from 82 individuals with ages ranging from 0 to 69, and we used the model trained on CACD to age the faces from FG-NET. \subsection{Implementation Details} During training, we trained all models with a maximum of $200,000$ iterations on a single NVIDIA GTX 2080Ti GPU. PFA-GAN was initialized with He initialization~\cite{he2015delving}, and optimized by the Adam optimization method~\cite{kingma2014adam}. The age estimation network was pre-trained on each dataset from scratch for $50$ epochs with a mini-batch size of $128$, an initial learning rate of $1.0\times10^{-4}$, and a learning rate decay factor of $0.7$ after every $15$ epochs.
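The intensity normalization described above, together with the truncation applied to network outputs at test time, can be sketched as follows. This is a minimal NumPy illustration (function names are ours, not from the released code); the scaling by $127.5$ is one common convention for mapping $[0,255]$ linearly onto $[-1,1]$:

```python
import numpy as np

def to_network_range(img):
    """Linearly map uint8 pixel intensities in [0, 255] to [-1, 1]."""
    return img.astype(np.float32) / 127.5 - 1.0

def to_pixel_range(out):
    """Truncate network outputs to [-1, 1], then rescale back to [0, 255]."""
    out = np.clip(out, -1.0, 1.0)
    return ((out + 1.0) * 127.5).round().astype(np.uint8)
```

Clipping before rescaling mirrors the truncation step used at test time, so any out-of-range generator output is mapped to a valid pixel value rather than wrapping around during the uint8 cast.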
The generator and discriminator used the same training hyperparameters with learning rates of $1.0\times10^{-4}$, exponential decay rates for the first moment estimates $\beta_{1}$ of $0.5$, exponential decay rates for the second moment estimates $\beta_{2}$ of $0.99$, and mini-batch sizes of $12$. $G$ and $D$ were trained alternately. Input images were randomly sampled from the training set and normalized into the range of $[-1, 1]$. Note that in the testing phase, the final output images were clipped into the normal pixel range. The hyperparameters in the loss functions were empirically set as follows: $\lambda_{\mathrm{adv}}$ was $100$; $\lambda_{\mathrm{ide}}$ was $0.02$; $\lambda_{\mathrm{age}}$ was $0.4$; $\alpha_{\mathrm{ssim}}$ was $0.15$; and $\alpha_{\mathrm{fea}}$ was $0.025$. We implemented PFA-GAN based on PyTorch v1.3.1, and used Face++ APIs v3 to evaluate all methods. \begin{figure*}[t!] \centering \includegraphics[width=1.0\linewidth]{aged_faces_cacd_optimized.pdf} \caption{The generated aged faces by PFA-GAN on the CACD dataset. We showcase the input faces with their real ages in the first column, and different aged faces of the same subject in the next three columns corresponding to three age groups $31-40$, $41-50$ and $51+$, respectively.} \label{fig:aged_faces_cacd} \end{figure*} \begin{figure*}[t!] \centering \includegraphics[width=1.0\linewidth]{quality_comparsion_optimized.pdf} \caption{Performance comparison with prior work on the MORPH and CACD datasets. We showcase the input young faces in the first row with their real age labels below the image. The second row presents two sample results of prior work of seven recently published face aging methods. The third row shows our results of the same input faces in the same age groups as the prior work. 
Zoom in for a better view of image details.} \label{fig:method_comparison} \end{figure*} \subsection{Qualitative Evaluations} Figs.~\ref{fig:aged_faces_morph} and~\ref{fig:aged_faces_cacd} showcase the aged faces from the MORPH and CACD datasets, respectively, which were generated by our PFA-GAN from the age group $30-$ to the other three old age groups. Although input faces cover a wide range of the population in terms of race, gender, pose, makeup, and expression, those aged faces are photo-realistic, with natural details in the skin, muscles, wrinkles, etc. For example, the beard turns into gray, and the skin gets wrinkles. Besides, all faces, even at a large age gap, can preserve their original identities. Although hair color generally turns white as the face ages, it varies from person to person and depends on the race and the training data, which explains why some generated faces in Figs.~\ref{fig:aged_faces_morph} and~\ref{fig:aged_faces_cacd} have few aging effects. Note that we have to compress the presented images for a reduced file size, which may cause some chessboard artifacts when zoomed in. \begin{figure*}[t!] \centering \includegraphics[width=1.0\linewidth]{rejuvenated_faces_optimized.pdf} \caption{The rejuvenated faces by PFA-GAN on the MORPH (left two columns) and CACD (right two columns) datasets. 
We showcase the input faces with their real ages in the first column, and different rejuvenated faces of the same subject in the next three columns corresponding to three age groups $41-50$, $31-40$, and $30-$, respectively.} \label{fig:rejuvenated_faces} \end{figure*} To demonstrate the superiority of our progressive face aging framework, we compare the PFA-GAN with the most recently published state-of-the-art methods: GLCA-GAN~\cite{li2018global}, ldGAN~\cite{sun2020facial}, Pyramid-architectured GAN~(PAG-GAN)~\cite{yang2018learning}, IPCGAN~\cite{wang2018face}, CAAE~\cite{zhang2017age}, Dual AcGAN~\cite{li2019age}, Attribute-aware GAN~(A3GAN)~\cite{liu2019attribute}, WGLCA-GAN~\cite{li2019global}, and RFA~\cite{wang2016recurrent}. Note that we directly compared with the published results in their own papers, which is widely adopted in the face aging literature such as~\cite{yang2018learning,song2018dual,liu2019attribute,li2019global,li2019age,sun2020facial}. This can form a fair comparison for image quality and avoid any bias or error caused by our implementation. Fig.~\ref{fig:method_comparison} shows the comparison between PFA-GAN and the other baseline methods in generating high-quality aged faces. In contrast to previous cGANs-based methods including GLCA-GAN, ldGAN, IPCGAN, and CAAE, PFA-GAN achieves a better performance in aging faces of higher resolution ($2\times$) with enhanced aging details and realistic aging effects. Although PAG-GAN, Dual AcGAN, and A3GAN can generate high-quality face images, their aged faces usually contain ghost artifacts and unexpected changes of the background area. For instance, PAG-GAN fails to keep the background color where the clothes are also strongly ghosted. In addition, when the age gap becomes large, PFA-GAN is better at suppressing ghost artifacts and color distortion than Dual AcGAN and A3GAN. 
This benefits from the progressive aging framework, where the whole age progression is split into several small steps rather than the single step of previous (c)GANs-based methods, so that each sub-network only learns the aging translation patterns between adjacent age groups, which is a relatively easy task. Although we mainly focus on face age progression, the proposed method can also be applied to face age regression. In face rejuvenation, the input faces come from the $51+$ age group and are rejuvenated into the three younger age groups. Fig.~\ref{fig:rejuvenated_faces} shows the rejuvenated faces produced by our PFA-GAN. As the age decreases from old to young, the face skin tightens, and the hair becomes thick and luxuriant, as expected. Moreover, PFA-GAN is not limited to face aging; it can also be easily extended to other image translation tasks; see Appendix Fig.~\ref{fig:expression_translation} for sample results of applying PFA-GAN to smile-like facial expression translation. \subsection{Quantitative Evaluations} In this subsection, we use two widely-used and two auxiliary quantitative metrics to evaluate the performance of face aging methods: 1) \textbf{Aging Accuracy} measures the difference between the age distributions of generic and synthetic faces in each age group; 2) \textbf{Aging Smoothness} evaluates the capability of different methods to generate smooth aging results; 3) \textbf{Inception Score} quantitatively evaluates image quality; and 4) \textbf{Identity Preservation} checks whether identities are preserved during face aging. Following convention~\cite{li2019age,liu2019attribute,yang2018learning}, we adopted the online face analysis tool developed by Face++ to objectively estimate face aging accuracy and identity preservation. Moreover, we also conducted a double-blinded user study to subjectively evaluate the image quality of the generated faces.
We compared the proposed model with previous state-of-the-art methods, including CAAE~\cite{zhang2017age}, IPCGAN~\cite{wang2018face}, WGLCA-GAN~\cite{li2019global}, and PAG-GAN~\cite{yang2018learning}, to demonstrate its effectiveness. Note that we tried our best to reproduce the performance of these baseline methods as closely as possible to that reported in their original papers. Specifically, the pre-trained AlexNet in IPCGAN and the pre-trained LightCNN~\cite{wu2018light} in WGLCA-GAN were replaced with the VGG-Face descriptor for a fair comparison. For CAAE and PAG-GAN, we followed the settings suggested in their original papers. \subsubsection{{Aging Accuracy}} \begin{table*}[ht] \caption{Quantitative results of the age estimation error for three age groups, the Pearson correlation coefficient (PCC), and the inception score (IS) on the MORPH and CACD datasets. For the age estimation error, we report the absolute difference between the mean estimated ages of real and fake faces in each age group. The IS is calculated over all aged test faces. We also report the results of applying IPCGAN and PAG-GAN sequentially to age faces, denoted as IPCGAN$^\sharp$ and PAG-GAN$^\sharp$, respectively.
The best results are highlighted in bold.} \label{tab:performance_age} \centering \begin{spacing}{1.0} \begin{tabular}{lcccccclccccc} \toprule \multicolumn{6}{c}{\textbf{MORPH}} & & \multicolumn{6}{c}{\textbf{CACD}} \\ \cmidrule{1-6} \cmidrule{8-13} \multirow{2}{*}{Method} & \multicolumn{3}{c}{Age Estimation Error} & \multirow{2}{*}{PCC} & \multirow{2}{*}{IS} && \multirow{2}{*}{Method} & \multicolumn{3}{c}{Age Estimation Error} & \multirow{2}{*}{PCC} & \multirow{2}{*}{IS} \\ \cmidrule{2-4} \cmidrule{9-11} & 31 - 40 & 41 - 50 & 51+ & & && & 31 - 40 & 41 - 50 & 51+ & & \\ \midrule CAAE & 4.41 & 5.84 & 6.07 & 0.937 & 2.38 && CAAE & 3.55 & 5.07 & 5.32 & 0.946 & \hspace{1.75mm}6.26\\ IPCGAN & 2.04 & 2.68 & 2.01 & 0.978 & 3.09 && IPCGAN & 1.78 & 0.50 & 2.64 & 0.972 & 29.07\\ IPCGAN$^\sharp$ & 2.04 & 1.05 & 1.57 & 0.982 & 2.71 && IPCGAN$^\sharp$ & 1.78 & 3.00 & 1.66 & 0.978 & 22.58\\ WGLCA-GAN & 3.52 & 1.41 & 2.98 & 0.974 & 3.15 && WGLCA-GAN & 0.63 & 1.75 & 2.33 & 0.981 & 29.60\\ PAG-GAN & 0.77 & 0.43 & 1.56 & 0.955 & 3.67 && PAG-GAN & 0.94 & 1.09 & 1.27 & 0.951 & 26.43\\ PAG-GAN$^\sharp$ & 0.77 & 1.01 & 1.34 & 0.968 & 2.87 && PAG-GAN$^\sharp$ & 0.94 & 1.12 & 0.72 & 0.971 & 19.20\\ \cmidrule{1-6} \cmidrule{8-13} PFA-GAN & \textbf{0.38} & \textbf{0.14} & \textbf{1.11} & \textbf{0.989} & \textbf{3.90} && PFA-GAN & \textbf{0.41} & \textbf{0.11} & \textbf{0.37} & \textbf{0.986} & \textbf{33.39}\\ \hspace{1.5mm}w/o DEX & 1.74 & 1.55 & 1.40 & 0.983 & 3.49 && \hspace{1.5mm}w/o DEX & 1.13 & 0.38 & 1.38 & 0.983 & 32.28\\ \hspace{1.5mm}w/o PFA & 0.69 & 1.27 & 1.49 & 0.979 & 3.01 && \hspace{1.5mm}w/o PFA & 0.72 & 0.47 & 1.04 & 0.976 & 30.37\\ \bottomrule \end{tabular} \end{spacing} \end{table*} In the mainstream works of face aging~\cite{yang2018learning,li2019global,liu2019attribute,li2019age,sun2020facial,zhu2020look}, the discrepancy between the age distributions of both generic and synthetic faces in each age group, also referred to as {age estimation error}, is a widely-used 
evaluation metric to measure the aging accuracy of different face aging methods. Specifically, the ages of both real and fake faces in each age group are first estimated by the Face++ APIs for a fair comparison; the age estimation error is then the discrepancy between the mean ages of real and fake faces from the same age group, where a lower value indicates a more accurate simulation of aging effects. Following convention, only young faces from the age group $30-$ are used as testing samples, and their aged faces in the other three age groups are produced by different methods. Table~\ref{tab:performance_age} presents the age estimation errors of different methods for each age group on the MORPH and CACD datasets. Notably, our PFA-GAN consistently outperforms the other baseline methods by a large margin in all three age groups, even when the age gap becomes large. CAAE fails to produce sufficiently strong aging effects; its synthetic faces are over-smoothed with only subtle changes, leading to large errors in estimated ages. Compared to IPCGAN, PAG-GAN performs better in aging faces with enhanced details due to its pyramid architecture. Modeling face age progression in a progressive way enables PFA-GAN to achieve the best aging accuracy among all methods, since the difficulty of learning several aging translation patterns within one network, as in cGANs-based methods, is significantly reduced. \subsubsection{{Aging Smoothness}} Although aging accuracy, i.e., the age estimation error, can evaluate the aging accuracy of different methods, it cannot reflect the aging smoothness of individual faces, which is another important property of face aging methods. While achieving satisfactory aging accuracy, face aging models should also produce a smooth aging process.
Intuitively, given the ages of the input face and the aged faces belonging to the $N-1$ older age groups, their relative positions to the mean ages in the age distribution should be the same for all age groups. Therefore, a linear correlation exists between the age sequence of one subject and the generic age sequence. To quantitatively measure this linear correlation, we propose to use the Pearson correlation coefficient~(PCC) as a novel metric for the aging smoothness of different methods. Specifically, the PCC takes values between $-1$ and $+1$, where $1$ indicates a total positive linear correlation, $0$ no linear correlation, and $-1$ a total negative linear correlation. We further calculate the mean PCC over all testing samples, defined as follows: \begin{align} \mathrm{PCC} = \frac{1}{m}\sum_{i=1}^{m} \rho(\mathcal{Y}_i, \widebar{\mathcal{Y}}) \end{align} where $\rho$ is the Pearson correlation coefficient function, $m$ the total number of samples, $\widebar{\mathcal{Y}}$ the generic age sequence containing the mean ages of each age group from real data, and $\mathcal{Y}_i$ the $i$-th age sequence containing the $N$ ages of the input face and the other $N-1$ aged faces. Note that the face ages were estimated by the Face++ APIs~\cite{faceplusplus.com}. A higher PCC indicates not only a smoother aging result but also higher aging accuracy. Table~\ref{tab:performance_age} shows that CAAE fails to produce smooth aging results and has lower aging accuracy. Although PAG-GAN performs better than IPCGAN in aging accuracy, IPCGAN generates smoother aging results, because PAG-GAN learns each mapping separately and ignores the age distribution of the whole training data. In contrast, with the progressive face aging framework, PFA-GAN learns each adjacent age mapping in an end-to-end manner and thus achieves the best aging smoothness.
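To make the metric concrete, the mean PCC above can be sketched in a few lines of numpy. The age values below are hypothetical stand-ins for Face++ estimates, not data from our experiments:

```python
import numpy as np

def mean_pcc(per_subject_ages, generic_ages):
    """Mean Pearson correlation between each subject's age sequence
    (input face plus N-1 aged faces) and the generic age sequence."""
    g = np.asarray(generic_ages, dtype=float)
    rhos = [np.corrcoef(np.asarray(y, dtype=float), g)[0, 1]
            for y in per_subject_ages]
    return float(np.mean(rhos))

# Hypothetical estimated ages for N = 4 age groups.
generic = [27.0, 35.5, 45.2, 56.8]      # mean age of each group (real data)
subjects = [[25.0, 34.0, 44.0, 55.0],   # a smooth aging sequence
            [29.0, 33.0, 47.0, 58.0]]
pcc = mean_pcc(subjects, generic)        # close to +1 for smooth aging
```

A method whose aged faces sometimes regress in apparent age would yield sequences that correlate poorly with the generic sequence, lowering the mean PCC.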
\subsubsection{{Inception Score}} As a standard metric in image generation and translation, the inception score (IS)~\cite{salimans2016improved} is widely used to evaluate the quality and diversity of generated images. Following~\cite{wang2018face}, which advises against using an inception score based on a network pre-trained on ImageNet to evaluate the image quality of faces, we use the OpenAI source code to compute the inception score based on the pre-trained VGG-Face descriptor~\cite{parkhi2015deep}. Table~\ref{tab:performance_age} presents the inception scores for all methods. Due to the limited variation in the MORPH dataset, its inception scores are much lower than those on the CACD dataset. Note that all results are calculated over aged test faces for a fair comparison; hence, the results of IPCGAN are not comparable to those reported in the original paper. PFA-GAN clearly achieves the best inception scores, indicating that it tends to produce faces of higher image quality than the other methods. \subsubsection{{Identity Preservation}} \begin{table*}[ht] \caption{Quantitative results of face verification confidence and rate on the MORPH and CACD datasets. For the face verification rate, the false accept rate~(FAR) and threshold in the Face++ APIs were set to $10^{-5}$ and $76.5$, respectively. We also report the results of applying IPCGAN and PAG-GAN sequentially to age faces, denoted as IPCGAN$^\sharp$ and PAG-GAN$^\sharp$, respectively.
The best results are highlighted in bold.} \label{tab:performance_id} \centering \begin{spacing}{1.0} \begin{tabular}{lcccclccc} \toprule \multicolumn{4}{c}{\textbf{MORPH}} & & \multicolumn{4}{c}{\textbf{CACD}} \\ \cmidrule{1-4} \cmidrule{6-9} Age Group & 31 - 40 & 41 - 50 & 51+ && Age group & 31 - 40 & 41 - 50 & 51+ \\ \midrule \multicolumn{4}{c}{Verification Confidence} & & \multicolumn{4}{c}{Verification Confidence} \\ \cmidrule{1-4} \cmidrule{6-9} 30 - & 95.36 & 93.23 & 88.28 && 30 - & 96.21 & 94.50 & 89.80 \\ 31 - 40 & -- & 96.20 & 92.87 && 31 - 40 & -- & 95.52 & 91.70 \\ 41 - 50 & -- & -- & 95.47 && 41 - 50 & -- & -- & 94.59 \\ \multicolumn{4}{c}{Verification Rate~(\%)} & & \multicolumn{4}{c}{Verification Rate~(\%)} \\ \cmidrule{1-4} \cmidrule{6-9} CAAE & \hspace{1.75mm}44.02 & \hspace{1.75mm}27.87 & \hspace{1.75mm}5.97 && CAAE & 13.59 & \hspace{1.75mm}8.75 & \hspace{1.75mm}2.67\\ IPCGAN & \textbf{100.00} & \textbf{100.00} & 99.21 && IPCGAN & 99.76 & 99.72 & 99.07\\ IPCGAN$^\sharp$ & \textbf{100.00} & \hspace{1.75mm}99.92 & 77.12 && IPCGAN$^\sharp$ & 99.76 & 99.01 & 96.34\\ WGLCA-GAN & \textbf{100.00} & \textbf{100.00} & 98.82 && WGLCA-GAN & 99.90 & 99.88 & 98.89\\ PAG-GAN & \textbf{100.00} & \hspace{1.75mm}98.97 & 91.51 && PAG-GAN & 99.93 & 99.38 & 97.87\\ PAG-GAN$^\sharp$ & \textbf{100.00} & \hspace{1.75mm}93.69 & 59.16 && PAG-GAN$^\sharp$ & 99.93 & 98.01 & 89.24\\ \cmidrule{1-4} \cmidrule{6-9} PFA-GAN & \textbf{100.00} & \textbf{100.00} & \textbf{99.70} && PFA-GAN & \textbf{99.97} & \textbf{99.89} & \textbf{99.69} \\ \hspace{1.5mm}w/o DEX & \textbf{100.00} & \textbf{100.00} & 99.44 && \hspace{1.5mm}w/o DEX & 99.89 & 99.80 & 99.44\\ \hspace{1.5mm}w/o PFA & \textbf{100.00} & \textbf{100.00} & 99.32 && \hspace{1.5mm}w/o PFA & 99.95 & 99.85 & 99.37\\ \bottomrule \end{tabular} \end{spacing} \end{table*} Face verification experiments are conducted with Face++ APIs to examine identity preservation during face age progression. 
For each input young face, we checked whether the aged face and the input face share the same identity. Following~\cite{yang2018learning,liu2019attribute}, we report the verification confidence between the synthetic aged images of the same subject from different age groups in the top portion of Table~\ref{tab:performance_id}; the high verification confidence demonstrates that the identity information is consistently preserved. We also report the face verification rates in the bottom portion of Table~\ref{tab:performance_id}, for which the false accept rate (FAR) and threshold used in the Face++ APIs were set to $10^{-5}$ and $76.5$, respectively, as suggested by~\cite{yang2018learning,liu2019attribute}. Together with the great improvement in face aging accuracy on these two benchmark datasets, PFA-GAN achieves the highest verification rate for all three age groups and outperforms previous state-of-the-art methods by a large margin, especially in the challenging case from $30-$ to $51+$. CAAE fails to preserve identity permanence since it maps the faces into a latent vector. Compared to IPCGAN, which outperforms PAG-GAN thanks to its identity-preserving module, PFA-GAN still performs better in both aging accuracy and identity preservation. The main reason is that IPCGAN re-estimates all pixels in the output images, leading to a drop in performance, especially for the in-the-wild images in the CACD dataset. Benefiting from the progressive face aging framework, each sub-network learns a residual image---the aging effects---which greatly improves the identity preservation of PFA-GAN. Note that it is reasonable for the verification rate and confidence to decrease slightly as more changes appear in the face when the age gap becomes large. Overall, the face verification results show that our method is robust in preserving the identity information of input faces regardless of attributes such as race and sex.
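As an illustration, the verification rate in Table~\ref{tab:performance_id} reduces to thresholding the per-pair confidences returned by the verification API. The confidence values below are hypothetical; only the threshold of $76.5$ comes from the Face++ setting described above:

```python
import numpy as np

def verification_rate(confidences, threshold=76.5):
    """Percentage of aged faces whose verification confidence against
    the input face meets the threshold (FAR = 1e-5 setting)."""
    c = np.asarray(confidences, dtype=float)
    return 100.0 * float(np.mean(c >= threshold))

# Hypothetical per-pair confidences for one age group: 3 of 4 pass.
rate = verification_rate([95.4, 88.1, 74.2, 91.0])
```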
\subsubsection{{Double-Blinded User Study}} We further conducted a double-blinded user study on Amazon Mechanical Turk~(AMT) to quantitatively evaluate the generated faces from a human perspective. First, we randomly selected $20$ real young faces from each dataset and collected the corresponding faces generated by the baseline methods as well as our method for the age group $51+$. Second, $50$ volunteers evaluated all the aged faces, presented in a randomly shuffled order, with the input real young faces as reference. Finally, we asked them to select the most realistic aged face at the age of over 51 years, which should 1) have the same identity as the input face, 2) show natural aging effects, and 3) be free of ghost artifacts. The odds of an aged face being selected as the most realistic are $6.6\%$ for CAAE, $19.0\%$ for IPCGAN, $30.0\%$ for PAG-GAN, and $44.4\%$ for PFA-GAN. Among all methods, our PFA-GAN achieves the best performance in this double-blinded user study. CAAE fails to produce photo-realistic faces with preserved identities, while IPCGAN produces lower-resolution images~($2\times$ lower) with blurry aging effects in contrast to PAG-GAN and PFA-GAN. Additionally, PAG-GAN fails to maintain identity consistency as well as PFA-GAN does, making its results worse. In summary, our PFA-GAN not only generates visually photo-realistic faces but also outperforms the other methods in aging accuracy and identity preservation. \subsection{Comparison with Sequential (c)GANs} \begin{figure}[t!] \centering \includegraphics[width=1.0\linewidth]{sequential_vis_optimized.pdf} \caption{ The generated aged faces of different methods, investigating the impact of sequential use of the (c)GANs-based methods IPCGAN~(cGANs-based) and PAG-GAN~(GANs-based), denoted as IPCGAN$^\sharp$ and PAG-GAN$^\sharp$, respectively.
The input faces from the age group $30-$ are aged into the age group $51+$. } \label{fig:sequential_vis} \end{figure} As discussed in~Sec.~\ref{sec:PFA}, the advantage of our PFA-GAN over sequential (c)GANs is its end-to-end training. To demonstrate the difference between our PFA-GAN and sequential (c)GANs through both quantitative and qualitative comparisons, we implemented the sequential IPCGAN and sequential PAG-GAN as IPCGAN$^\sharp$ and PAG-GAN$^\sharp$, respectively, and report their results in Fig.~\ref{fig:sequential_vis} and Tables~\ref{tab:performance_age} and~\ref{tab:performance_id}. Fig.~\ref{fig:sequential_vis} shows that when (c)GANs-based methods are applied sequentially to age faces, the aged faces are strongly ghosted and blurry, with unsatisfactory image quality. Quantitatively, the inception scores in Table~\ref{tab:performance_age} also drop as expected. It should be noted that sequential use can improve the aging accuracy for some age groups and improve the aging smoothness, but at the cost of compromising identity preservation, as shown in Table~\ref{tab:performance_id}, especially when the age gap becomes large. In contrast to sequential (c)GANs, our PFA-GAN trains its sub-networks simultaneously in an end-to-end manner. Consequently, PFA-GAN is able to remove accumulative errors and produce satisfactory faces through back-propagation. Besides, benefiting from end-to-end training, the latter sub-networks gain the capability of sensing the generated faces, eliminating the potential domain shift and focusing only on those areas of the face image relevant to face aging. As a result, PFA-GAN not only further improves image quality and aging accuracy with smooth aging results, but also maintains identity consistency better than sequential (c)GANs. \subsection{Ablation Study} \label{sec:ablation} \begin{figure}[t!]
\centering \includegraphics[width=1.0\linewidth]{ablation_study_optimized.pdf} \caption{The ablation study results of PFA-GAN. The input faces from the age group $30-$ are aged into the age group $51+$ by PFA-GAN without DEX (w/o DEX), PFA-GAN without PFA (w/o PFA), and the full PFA-GAN.} \label{fig:ablation_study} \end{figure} In this subsection, an ablation study of PFA-GAN is conducted to fully explore the importance of the DEX term and the progressive face aging framework~(PFA) in simulating accurate age translations. We investigate the impact of these two modules by removing the DEX term from the loss function~(w/o DEX) and by training only one sub-network that uses $\mat{C}_t$ to control the aging process, like cGANs~(w/o PFA). When replacing the DEX term with an age group classification loss, we increased $\lambda_{\mathrm{age}}$ from $0.4$ to $8$ for a fair comparison. Therefore, without the DEX term, PFA-GAN only optimizes a single age group classification task in the age estimation loss, and without the progressive face aging framework, PFA-GAN reduces to a common cGANs-based method. Fig.~\ref{fig:ablation_study} shows a visual comparison of the face images generated by different variants of the proposed model. As highlighted by the hair and beard of the generated faces, PFA-GAN without the DEX term fails to produce enhanced aging effects, and PFA-GAN without PFA suffers from severe ghost artifacts. On the contrary, the integration of DEX and PFA suppresses the ghost artifacts and produces realistic aging effects. Specifically, DEX serves to improve the aging accuracy, while PFA divides the whole aging process into several small steps for better image quality, especially when the age gap becomes large. Tables~\ref{tab:performance_age} and~\ref{tab:performance_id} show the quantitative results of the ablation study.
The results in Table~\ref{tab:performance_age} indicate that although introducing DEX greatly improves the aging accuracy, it cannot generate aging results as smooth as PFA does. Besides, the progressive face aging framework achieves better image quality than cGANs-based methods. With the improved image quality and aging accuracy, PFA-GAN achieves the best face verification rate in Table~\ref{tab:performance_id}, since the identity-related features are preserved by the identity consistency loss. \subsection{Robustness Analysis} \begin{figure}[t!] \centering \includegraphics[width=1.0\linewidth]{robustness_analysis_optimized.pdf} \caption{The generated aged faces under three extreme face conditions on the CACD dataset:~(a) illumination,~(b) occlusion,~(c) low quality. Each input face was aged into three old age groups by our PFA-GAN~(first row), IPCGAN~(second row), and PAG-GAN~(third row).} \label{fig:robustness_analysis} \end{figure} The qualitative results above show that PFA-GAN is robust to various conditions, such as pose, expression, and occlusion, during face aging and rejuvenation. For example, occlusions such as glasses and jewelry on the faces are well preserved during face rejuvenation. Here, we further investigate the robustness of the face aging model under extreme uncontrolled conditions. Fig.~\ref{fig:robustness_analysis} showcases aged faces under three representative extreme conditions---illumination, occlusion, and low quality---demonstrating the strong robustness of PFA-GAN over IPCGAN and PAG-GAN. We revisit previous works to better explain why our PFA-GAN outperforms the others under these extreme conditions. Specifically, cGANs-based methods such as IPCGAN~\cite{wang2018face} achieve face aging with a single generator, while GANs-based methods such as PAG-GAN~\cite{yang2018learning} train several generators separately to learn each mapping from young faces to older ones.
Due to the intrinsic complexities of face aging, they cannot generate high-quality aged faces with smooth aging effects, especially when the age gap becomes large, which makes them less robust under extreme conditions. On the contrary, the generator of PFA-GAN consists of several sub-generators, and the output of one sub-generator is fed into the next. Once a sub-generator amid the aging process produces ghosted faces, the following sub-generators can detect such anomalies and force that sub-generator to produce satisfactory faces through back-propagation. In a sense, the latter sub-generators provide an attention mechanism for the earlier ones to age faces effectively. Therefore, PFA-GAN is capable of focusing on those areas of the face image relevant to face aging by learning only the residual images---the aging effects. \begin{figure}[t!] \centering \includegraphics[width=1.0\linewidth]{generalization_ability_study_optimized.pdf} \caption{Sample results of both face aging and rejuvenation by applying our PFA-GAN model trained on the CACD dataset to three external datasets: (a) FG-NET~\cite{lanitis2002toward},~(b) CelebA~\cite{liu2015deep},~(c) IMDB-WIKI~\cite{Rothe-IJCV-2018}. Red boxes indicate input faces.} \label{fig:generalization_ability_study} \end{figure} \subsection{Generalization Ability} To evaluate the generalization ability of PFA-GAN, we applied the model trained on the CACD dataset to external images from the FG-NET~\cite{lanitis2002toward}, CelebA~\cite{liu2015deep}, and IMDB-WIKI~\cite{Rothe-IJCV-2018} datasets for face aging and rejuvenation. For images without ground-truth age labels, face ages were estimated by the Face++ APIs. Fig.~\ref{fig:generalization_ability_study} presents encouraging results, demonstrating that PFA-GAN generalizes well to face images from different sources for both face age progression and regression.
Noticeably, occlusions on the input faces, such as makeup, scars, and glasses, are also well preserved in the aged faces. \begin{figure*}[t] \centering \includegraphics[width=1\linewidth]{limitation_study_optimized.pdf} \caption{Sample results of PFA-GAN on the MORPH~(left two columns) and CACD~(right two columns) datasets for both face aging and rejuvenation. To examine the performance of PFA-GAN when the source age is unavailable, we estimate the age of a given face by the trained age estimation network $A$. Red boxes indicate input faces with the ground-truth age label below them. Note that not all ages are estimated correctly.} \label{fig:limitation_study} \end{figure*} \subsection{Limitations} Although PFA-GAN achieves state-of-the-art performance both qualitatively and quantitatively, it has some limitations. First, compared to cGANs-based methods, the major limitation of GANs-based methods, including PFA-GAN, PAG-GAN~\cite{yang2018learning}, and A3GAN~\cite{liu2019attribute}, is that the networks require the source age label as an input to perform the aging process. To address this concern about the effectiveness of PFA-GAN, we removed the source age label in the testing phase and instead estimated it with the trained age estimation network $A$. Fig.~\ref{fig:limitation_study} shows sample results for both face age progression and regression. Even though some faces are misclassified into other age groups, the results verify that our PFA-GAN is robust even with noisy age labels. Moreover, since faces age progressively, PFA-GAN produces smoother aging results than PAG-GAN~\cite{yang2018learning}. Second, PFA-GAN requires re-training a separate model for rejuvenation, although this only involves reversing the order of the age groups during training.
However, PFA-GAN could achieve face progression and rejuvenation in one model if it used an invertible neural network such as~\cite{kingma2018glow} as the sub-network. Since this paper mainly presents the progressive face aging framework, the use of invertible neural networks for joint face progression and rejuvenation is left as future work. Third, with more age groups in face aging, the difficulty of end-to-end training of our network from scratch would increase, and the patterns between two adjacent age groups would become less distinct. Possible solutions are, respectively, to first train each sub-network independently and then fine-tune the sub-networks in an end-to-end manner, and to use more data to characterize the aging patterns between two adjacent age groups. Last, PFA-GAN may produce a worse background compared to the variant without the progressive module, as shown in Fig.~\ref{fig:ablation_study}. This may be caused by the simple backgrounds present in the dataset and could be addressed with more diverse backgrounds, such as those in the CACD dataset. \section{Introduction} Face aging aims to render a given young face to predict its future appearance with natural aging effects while preserving his/her personalized features. It has broad applications ranging from digital entertainment to information forensics and security; \textit{e}.\textit{g}.\xspace, predicting the future appearance of lost children and cross-age face verification~\cite{ling2009face,park2010age,li2011discriminative,wu2012age}. Given the appealing application value of face aging, numerous methods have been proposed to address this problem over the past two decades~\cite{lanitis2002toward,suo2010compositional}, especially the supervised deep neural networks explored in~\cite{Duong_2016_CVPR,wang2016recurrent,Duong_2017_ICCV}. However, these methods require massive numbers of paired faces of the same subject over a long period for training, which is impractical and cumbersome.
To alleviate the need for paired faces, generative adversarial networks (GANs)~\cite{goodfellow2014generative} and their variant, conditional GANs (cGANs)~\cite{mirza2014conditional}, have in recent years been widely used to train face aging models with unpaired face aging data~\cite{antipov2017face,zhang2017age,wang2018face,yang2018learning,song2018dual,liu2019attribute,li2019global,li2019age,zhu2020look,sun2020facial}, achieving better aging performance than conventional methods such as physical model-based methods~\cite{suo2010compositional} and prototype-based methods~\cite{kemelmacher2014illumination}. The resulting methods can be roughly divided into two categories: cGANs-based and GANs-based methods. The cGANs-based methods~\cite{zhang2017age,wang2018face,li2019global,li2019age,zhu2020look,sun2020facial} typically train a single network to learn the various aging effects between any two different age groups, with the target age group as the condition. However, the faces generated by these methods cannot simultaneously meet three important requirements for face aging: image quality, aging accuracy, and identity preservation. For example, the conditional adversarial autoencoder (CAAE)~\cite{zhang2017age} was proposed to learn a face manifold, traversing which achieves smooth age progression and regression simultaneously; however, its generated faces cannot preserve identity well and are prone to blurriness. In contrast, the GANs-based methods~\cite{yang2018learning,liu2019attribute} attempt to improve the performance from different perspectives. For instance, Yang~\textit{et al}.\xspace designed a pyramid-architecture discriminator to estimate high-level age-related details~\cite{yang2018learning}, and Liu~\textit{et al}.\xspace fed facial attribute vectors into both the generator and the discriminator to keep facial attributes consistent~\cite{liu2019attribute}. However, these existing methods learn the aging effects between each pair of age groups with a separate network.
That is, each network is trained independently. A major drawback is that they cannot guarantee a smooth aging result due to the intrinsic complexities of face aging, such as various expressions and races; the aged faces are sometimes even younger than those of previous age groups; see Appendix Fig.~\ref{fig:not_smooth_vis}. Besides, when the age gap between the source and target ages becomes large, these methods usually generate ghosted or blurry faces. \begin{figure} \centering \includegraphics[width=1\linewidth]{mainfold.pdf} \caption{Illustration of face age progression on a face manifold. Faces traversing from light to dark exhibit different dominant aging effects at different age stages, which motivates us to model face aging in a progressive way.} \label{fig:mainfold} \end{figure} Inspired by the fact that faces gradually age over time, we are motivated to model the face aging process in a progressive way~\cite{karras2017progressive,shan2019competitive,zhang2019progressive}. Therefore, we propose a novel progressive face aging framework based on generative adversarial networks~(PFA-GAN). More specifically, our PFA-GAN consists of multiple small sub-networks, each of which only deals with the specific aging effects between two adjacent age groups. The rationale behind this idea is that the aging effects differ at different ages; \textit{e}.\textit{g}.\xspace, hair turning white typically happens from 50 to 60 years old but is rare from 20 to 30 years old. To facilitate the understanding of this rationale, Fig.~\ref{fig:mainfold} depicts face age progression on a face manifold, in which the dominant aging effects vary along the aging process. In addition, the proposed framework can be trained in an end-to-end manner to alleviate accumulative artifacts and blurriness and to generate smooth aging results.
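The progressive composition can be illustrated with a toy sketch: sub-network $k$ maps age group $k$ to group $k+1$ by adding a residual image (its learned aging effects), and aging across a large gap chains the sub-networks. The stand-in functions below are hypothetical placeholders, not the actual sub-network architecture, whose design, gating, and training are detailed in Sec.~\ref{sec:method}:

```python
import numpy as np

def age_progressively(face, sub_networks, source, target):
    """Chain sub-networks from source to target age group; each one
    adds a residual image (its aging effects) to its input face."""
    x = face
    for k in range(source, target):   # sub-network k: group k -> k+1
        x = x + sub_networks[k](x)    # residual formulation
    return x

# Toy stand-ins for trained sub-networks, acting on a 1-pixel "image".
subnets = {1: lambda x: 0.1 * np.ones_like(x),
           2: lambda x: 0.2 * np.ones_like(x),
           3: lambda x: 0.3 * np.ones_like(x)}
young = np.zeros(1)
aged = age_progressively(young, subnets, source=1, target=4)
```

Because each sub-network only contributes a residual, aging from group $1$ to group $4$ passes through all intermediate groups, which is what yields smooth, ordered aging sequences.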
The main difference between PFA-GAN and previous GANs-based methods~\cite{yang2018learning,liu2019attribute} is that PFA-GAN trains several sub-networks \emph{simultaneously}, each of which aims at learning the aging effects between \emph{two adjacent} age groups, while previous GANs-based methods train several networks \emph{independently}, each of which learns the aging effects between \emph{any two} age groups. Experimental results on two large-scale age datasets empirically demonstrate the superiority of PFA-GAN over state-of-the-art methods in generating high-quality faces with natural aging effects while preserving identity information. Here, we highlight the importance of modeling face aging in a progressive way in the following four aspects. First, progressive face aging focuses on modeling the aging effects between two adjacent age groups, which decreases the learning complexity of face aging compared to traditional GANs-based or cGANs-based models that suffer from ghost artifacts. Second, training progressive face aging in an end-to-end manner forces the network to produce smooth face aging results, which further improves the image quality, aging accuracy, and identity preservation. Third, the ordinal relationship between age groups can be utilized to enhance the aging smoothness, so that faces in a young group are younger than those in an older group. Last, progressive face aging can improve the performance of cross-age verification, which is very important for security; as a by-product, it can produce a sequence of aged faces for reference, facilitating cross-age verification in practice. The main contributions of this paper are summarized as follows. \begin{enumerate} \item[1)] We propose a progressive face aging framework based on generative adversarial networks (PFA-GAN) to model face age progression in a progressive way.
\item[2)] Unlike the traditional way that uses the target age group as a conditional input, we propose a novel age encoding scheme for PFA-GAN by adding binary gates to control the aging flow. \item[3)] We introduce an age estimation loss to take into account the age distribution for an improved aging accuracy. \item[4)] We propose to use the Pearson correlation coefficient (PCC) as an evaluation metric to measure the aging smoothness of face aging methods. \item[5)] We conduct extensive experiments on two benchmarked datasets to demonstrate the effectiveness and robustness of the proposed method in rendering accurate aging effects while preserving identity through both qualitative and quantitative comparisons. \end{enumerate} The rest of the paper is organized as follows. Sec.~\ref{related_work} surveys the development of face aging methods. In Sec.~\ref{sec:method}, we first formulate the face age progression into a progressive neural network, and then present the network architectures as well as the loss functions for PFA-GAN. This is followed by a comprehensive comparison of the proposed framework with four recently published state-of-the-art methods on two benchmarked age datasets in Sec.~\ref{sec:exp}. Finally, Sec.~\ref{sec:conc} presents a concluding summary. \section{Methodology}\label{sec:method} Assuming that the faces lie on a face manifold residing in a high-dimensional space as shown in Fig.~\ref{fig:mainfold}, traversing from light to dark achieves age progression. The change direction on the face manifold corresponds to the natural aging effects in the pixel space at different age stages. However, directly modeling the face manifold is complicated by the different aging processes across races and sexes, resulting in low-quality faces and unexpected changes of identity.
Therefore, we formulate this complicated aging process into a progressive neural network consisting of several sub-networks, each of which only learns the specific aging effects between two adjacent age groups in the image domain. The geodesic distance between any two age groups on the face manifold can be approximated by several locally linear Euclidean distances, similar to~\cite{tenenbaum2000global,he2018multi}. Following~\cite{yang2018learning,liu2019attribute,li2019age}, we divide all ages into $N$ non-overlapping age groups, where $N=4$ in this paper. More specifically, the age ranges in age groups $1$, $2$, $3$, and $4$ correspond to $30-$, $31-40$, $41-50$, and $51+$, respectively. We interchange the age range and the age group index depending on the context to simplify the expression without causing confusion. The following subsections present our novel progressive face aging framework, network architectures, and loss functions, respectively. \subsection{Progressive Face Aging Framework} \label{sec:PFA} Prior to introducing our framework, we first briefly describe the cGAN-based methods, which employ one-hot encoding to represent the target age group $t$ as a vector $\vct{c}_{t} \in \mathbb{R}^{1\times N}$, whose elements are all $0$s except for a single $1$ indicating the target age group. When given an input face image $\mat{X}_{s}\in \mathbb{R}^{w \times h \times 3}$ from source age group $s$, they first pad the vector $\vct{c}_{t}$ into a tensor $\mat{C}_t \in \mathbb{R}^{w\times h \times N}$, and then concatenate $\mat{X}_s$ and $\mat{C}_t$ along the channel dimension as the input to an aging network $G$. Formally, the aged face $\mat{X}_t$ belonging to target age group $t$ can be expressed as: \begin{align}\label{Eq:traditional} \mat{X}_t = G\Big([\mat{X}_s; \mat{C}_t]\Big), \end{align} where $[\mat{X}_s; \mat{C}_t]$ denotes the concatenation of the two tensors $\mat{X}_s$ and $\mat{C}_t$ along the channel dimension.
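As a concrete illustration of this one-hot conditioning, the following NumPy sketch (our own hypothetical helper, not the paper's code) builds the input $[\mat{X}_s; \mat{C}_t]$ of the equation above:

```python
import numpy as np

def concat_condition(x_s, t, n_groups=4):
    """Build the cGAN input [X_s; C_t]: pad the one-hot target-age vector
    c_t into a (w, h, N) tensor and concatenate it with the image.

    x_s: face image of shape (w, h, 3); t: target age group in 1..n_groups.
    """
    c_t = np.zeros(n_groups)
    c_t[t - 1] = 1.0                              # one-hot target age group
    w, h, _ = x_s.shape
    C_t = np.broadcast_to(c_t, (w, h, n_groups))  # pad c_t into a (w, h, N) tensor
    return np.concatenate([x_s, C_t], axis=-1)    # concatenate along channels

x = np.zeros((8, 8, 3))
print(concat_condition(x, t=3).shape)  # (8, 8, 7): 3 image + N condition channels
```

This is exactly the encoding that, as discussed next, forces a single network to cover all pairwise age mappings.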
Although this encoding scheme is convenient to some extent, it enforces a single network to learn various aging effects between any two age groups, failing to generate high-quality faces when the age gap becomes large. To address these issues, we propose a novel progressive aging framework inspired by the fact that faces gradually age over time. We formulate the aging process into a progressive neural network comprising several sub-networks, each of which only learns specific aging effects between two adjacent age groups. Fig.~\ref{fig:framework} shows that the $i$-th sub-network $\widebar{G}_i$ ages faces from age group $i$ to $i+1$; \textit{i}.\textit{e}.\xspace, $\mat{X}_{i+1}=\widebar{G}_i(\mat{X}_i)$. Therefore, the progressive aging framework from the source age group $s$ to the target one $t$ can be formulated as follows: \begin{align} \mat{X}_t = \widebar{G}_{t-1}\circ \widebar{G}_{t-2}\circ\cdots\circ \widebar{G}_s (\mat{X}_s), \end{align} where the symbol $\circ$ denotes function composition. To prevent the network from memorizing an exact copy of the input face through several sub-networks, we employ a residual skip connection~\cite{he2016deep} to perform identity mapping from input to output in each sub-network, enabling each sub-network to learn only the aging effects. By introducing the skip connection, we can easily recast the target age group into a sequence of binary gates that control the aging flow. The changes from age group $i$ to $i+1$ can be rewritten as: \begin{align} \mat{X}_{i+1} = \widebar{G}_i(\mat{X}_i) = \mat{X}_i + \lambda_i G_i(\mat{X}_i). \end{align} To be clear, each sub-network $\widebar{G}_i$ consists of a residual skip connection, a binary gate $\lambda_i$, and a light sub-network $G_i$. Here, $\lambda_i \in \{0, 1\}$ is the binary gate controlling whether the sub-network $G_i$ is involved in the path to the target age group.
Specifically, $\lambda_i = 1$ if the sub-network $G_i$ is between source age group $s$ and target age group $t$, \textit{i}.\textit{e}.\xspace, $s \leq i < t$; otherwise $\lambda_i = 0$. Put differently, we recast the tensor $\mat{C}$ used in the cGANs-based methods into a binary gate vector $\vct{\lambda}=(\lambda_1, \lambda_2, \ldots, \lambda_{N-1})$ controlling the aging flow for the proposed PFA-GAN framework. With the proposed framework, the age progression, for example from age group $1$ to $4$ as shown in Fig.~\ref{fig:framework}, can be expressed as: \begin{align} \mat{X}_4 & = \mat{X}_3 + \underbrace{\lambda_3 G_3(\mat{X}_3)}_{\mathsf{aging\mbox{ }effects} :3\rightarrow 4} \\ & = \mat{X}_2 + \underbrace{\lambda_2 G_2(\mat{X}_2) + \lambda_3 G_3(\mat{X}_3)}_{\mathsf{aging\mbox{ }effects} :2\rightarrow 4} \notag\\ & = \mat{X}_1 + \underbrace{\lambda_1 G_1(\mat{X}_1) + \lambda_2 G_2(\mat{X}_2) + \lambda_3 G_3(\mat{X}_3)}_{\mathsf{aging\mbox{ }effects} :1\rightarrow 4}. \notag \end{align} When one wants to predict the aged face from age group $2$ to $3$, the above equation reduces to $\mat{X}_3 = \mat{X}_2 + G_2(\mat{X}_2)$ since the gate vector for this aging mapping is $(0,1,0)$, which consequently bypasses sub-networks $G_1$ and $G_3$. It can be seen that the proposed framework is quite flexible in modeling the age progression between any two different age groups with this novel age encoding scheme.
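To illustrate the gating mechanism, here is a minimal NumPy sketch with toy sub-networks of our own (the real $G_i$ are convolutional networks); it reproduces the gate vector $(0,1,0)$ for the mapping from age group $2$ to $3$ and the gated residual composition:

```python
import numpy as np

def gate_vector(s, t, n_groups=4):
    """Binary gates: lambda_i = 1 iff G_i lies on the path s -> t (s <= i < t)."""
    return [1 if s <= i < t else 0 for i in range(1, n_groups)]

def progressive_aging(x, s, t, sub_nets):
    """Compose X_{i+1} = X_i + lambda_i * G_i(X_i) over all sub-networks."""
    for g_i, lam_i in zip(sub_nets, gate_vector(s, t, len(sub_nets) + 1)):
        if lam_i:                 # gated residual skip connection
            x = x + g_i(x)
    return x

# toy sub-networks: G_i simply adds the constant "aging effect" i
subs = [lambda x, i=i: i * np.ones_like(x) for i in (1, 2, 3)]
print(gate_vector(2, 3))                                      # [0, 1, 0]
print(progressive_aging(np.zeros((2, 2)), 1, 4, subs)[0, 0])  # 0 + 1 + 2 + 3 = 6.0
```

The mapping from age group $2$ to $3$ bypasses `subs[0]` and `subs[2]`, mirroring the $(0,1,0)$ example above.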
Finally, given a young face $\mat{X}_{s}$ of source age group $s$, the aging process of $\mat{X}_{s}$ from $s$ into an old age group $t$ can be formulated as: \begin{align} \mat{X}_{t} = G\Big(\mat{X}_{s}, \vct{\lambda}_{s:t}\Big), \end{align} where $G=\widebar{G}_{N-1}\circ \widebar{G}_{N-2}\circ\cdots\circ \widebar{G}_1$ denotes the entire progressive face aging network in PFA-GAN, and $\vct{\lambda}_{s:t}$ controls the aging process; the subscript $s\!:\!t$ indicates that the elements of $\vct{\lambda}_{s:t}$ are all $0$s except those at indices $s$ through $t-1$, which are $1$s. Note that PFA-GAN can also be applied to face rejuvenation by reversing the order of the age groups, which is detailed in Sec.~\ref{sec:exp}. Considering that the (c)GAN-based models can also be used sequentially for face aging just like our PFA-GAN, we call such variants of (c)GANs-based models sequential (c)GANs. Since the GANs and cGANs are used differently for face aging, we discuss the sequential (c)GANs separately. First, the cGANs-based methods typically learn different age mappings by one network with a target age group as the condition. Once trained, the cGANs can be used sequentially to age a given face by changing the condition accordingly. The disadvantages of the sequential cGANs can be summarized as follows. 1) Using one single network to learn various face aging effects between different age groups could produce ghosted artifacts, especially when the gap between age groups becomes large. 2) The intermediate generated faces are unseen by the network during training; the potential domain shift between generated and real faces in the later cGAN steps could lead to inferior performance. Second, the GANs-based methods train several models for different age mappings separately. To be used sequentially, we consider the case that GANs-based methods train several sub-networks for adjacent age groups separately, which is similar to our proposed PFA-GAN.
The drawback of the sequential GANs is that those sub-networks are trained separately, so the latter sub-networks never sense the faces generated by earlier ones during training. The artifacts produced by earlier sub-networks could be amplified by the latter ones, leading to poor image quality. Different from the sequential (c)GANs, the unique advantage of PFA-GAN is that it optimizes these sub-networks in an end-to-end training, which can eliminate the accumulative errors and sense the generated faces from previous sub-networks. \subsection{Network Architecture} As described above, our PFA-GAN consists of several sub-networks to model the age progression between different age groups. Each sub-network only learns specific aging effects between two adjacent age groups. We use the GAN framework~\cite{goodfellow2014generative} to optimize each sub-network due to the lack of paired age datasets. A GAN has two components: a generator and a discriminator. Without ambiguity, a sub-network is also called a sub-generator, as the progressive face aging network serves as the generator in the GAN framework. Fig.~\ref{fig:overall} describes the GAN framework used for the proposed PFA-GAN, which comprises a generator $G$, a discriminator $D$, and an age estimation network $A$. The discriminator aims to distinguish the fake aged faces from the real faces, while the generator attempts to fool the discriminator by producing more realistic faces. Each sub-generator should learn the aging patterns between two adjacent age groups. Therefore, compared to previous (c)GANs-based methods, our method can generate smooth aging results due to its nature of modeling the progressive aging. Here we describe the network architectures of the generator, discriminator, and age estimation network. The detailed network architectures can be found in Appendix Table~\ref{tab:network_architecture}. \begin{figure}[t!] \centering \includegraphics[width=1.0\linewidth]{overall.pdf} \caption{The GAN framework for PFA-GAN.
The generator $G$ ages a young face $\mat{X}_{s}$ into an aged face $G(\mat{X}_{s}, \vct{\lambda}_{s:t})$ that is indistinguishable from a real face $\mat{X}_{t}$ by the discriminator $D$. $\vct{\lambda}_{s:t}$ is the binary gate vector to achieve progressive face aging from age group $s$ to $t$ for the generator, while $\mat{C}_t$ is the age condition to align the target age with the generated images for the discriminator. The pre-trained age estimation network $A$ is used to compute the age estimation loss---comprising an age group classification term and an age regression term---for improved age accuracy and smoothness.} \label{fig:overall} \end{figure} \subsubsection{{Generator}} When given $N$ age groups, our proposed PFA-GAN consists of $N-1$ sub-generators $\{\widebar{G}_i\}_{i=1}^{N-1}$. The input to each sub-generator is a color facial image of size $256\times 256\times 3$. Similar to~\cite{zhu2017unpaired}, our network structure is a residual encoder-decoder network. The encoder has three convolutional layers with $32$, $64$, and $128$ filters of sizes $9\times 9$, $4\times 4$, and $4\times 4$, respectively. Likewise, the decoder has two deconvolutional layers and one convolutional layer, whose filter sizes and numbers mirror those of the encoder in reverse order. We employ 4 residual blocks to convey the information from encoder to decoder. The skip connection from input to output enables the sub-network to learn aging effects without memorizing an exact copy of the input face. All the convolutional layers except the last one are followed by instance normalization and a leaky rectified linear unit (LReLU) activation with a slope of 0.2 for negative input.
Unlike~\cite{liu2019attribute}, which uses the tanh activation after the final layer to constrain the final output images within the range of $[-1,1]$, we found that the tanh activation is not appropriate in our framework, as it limits the updates of the earlier sub-generators of $G$ during training. Instead, we use no activation function after the final layer. \subsubsection{{Discriminator}} \label{sec:discriminator} We adopt the PatchDiscriminator from~\cite{isola2017image} as our discriminator $D$ thanks to its impressive results in a number of image-to-image translation tasks. The discriminator has a series of 6 convolutional layers with an increasing number of $4 \times 4$ filters. A spectral normalization layer~\cite{miyato2018spectral} and a LReLU activation with a slope of $0.2$ for negative input follow each convolutional layer except the first and last ones. Besides, similar to~\cite{wang2018face}, $\mat{C}_t$ is concatenated with the feature maps after the first convolutional layer to align the conditions with the generated images. \subsubsection{{Age Estimation Network}} \label{sec:classifier} To better characterize the face age distribution for improved aging accuracy, an age estimation network $A$ with 6 convolutional layers and 1 fully-connected layer is incorporated into the progressive face aging framework. The last layer has 101 neurons corresponding to all possible ages from $0$ to $100$. Unlike~\cite{li2019age}, which shares part of the parameters between the age estimation network and the discriminator, we found that this sharing scheme could make the GAN training more difficult. Therefore, we separate them in PFA-GAN. It is important to note that the age estimation network has far fewer parameters than the pre-trained model adopted in~\cite{wang2018face,yang2018learning}.
Previous works typically employ either an age classification~\cite{wang2018face} term or an age regression~\cite{li2019age} term to check whether the generated face belongs to the target age group, which may be insufficient to characterize the face age distribution. In PFA-GAN, we adopt a deep expectation (DEX)~\cite{Rothe-IJCV-2018} term to learn the age distribution by computing a softmax expected value for age estimation. Formally, the estimated age $\widehat{\vct{y}}$ for an input face $\mat{X}$ can be computed as follows: \begin{align} \widehat{\vct{y}}=\sum_{i=0}^{100} i \times \sigma\big[{A}(\mat{X})\big]_i, \end{align} where $\sigma$ represents a softmax function. The number of output neurons of the age estimation network ${A}(\mat{X})$ was empirically set to 101 following~\cite{Rothe-IJCV-2018}, where each neuron is expected to respond to one certain age. This covers the wide age range from 0 to 100 and thus includes the age ranges of the two datasets used in this paper. To regularize the learned age distribution for improved learning efficiency, we train this age estimation network in a multi-task framework addressing the age regression and classification tasks simultaneously. We append another fully-connected layer with $N$ neurons on top of ${A}$ for the age group classification task. By training the age estimation in a multi-task framework, we found that the number of output neurons has no significant impact on the performance of the age estimation network, as the age classification loss is able to regularize the learned age distribution. In addition, this setting enables our framework to be adapted to other age estimation datasets such as IMDB-WIKI~\cite{Rothe-IJCV-2018} and FG-NET~\cite{lanitis2002toward} with different age ranges.
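The DEX-style softmax expectation can be sketched in a few lines of NumPy (a simplified illustration under our own naming, not the paper's implementation):

```python
import numpy as np

def dex_expected_age(logits):
    """Softmax expected value over 101 age bins (ages 0..100), as in DEX."""
    p = np.exp(logits - logits.max())
    p /= p.sum()                        # softmax over the 101 output neurons
    return float(np.sum(np.arange(101) * p))

# uniform logits give a uniform age distribution, whose expectation is 50
print(dex_expected_age(np.zeros(101)))  # 50.0
```

Compared with a hard argmax over age bins, this expectation is differentiable and uses the full predicted distribution, which is what makes it suitable as a training signal.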
Rather than training the age estimation network $A$ jointly within our proposed framework, we found it better to pre-train this network on the training data. Once trained, the age estimation network is frozen and regularizes the generator for improved aging accuracy. \subsection{Loss Functions} The overall loss function used for PFA-GAN contains three components in order to meet the three requirements of face aging: 1) the adversarial loss aims to produce high-quality aged faces indistinguishable from real ones; 2) the age estimation loss expects to improve the aging accuracy; and 3) the identity consistency loss seeks to preserve the same identity. These three loss components are detailed as follows. \subsubsection{{Adversarial Loss}} The training of GANs describes a competing game between a generator $G$ and a discriminator $D$, where $D$ aims to distinguish fake images from real ones, and $G$ attempts to fool $D$ by producing more realistic fake images. Once a balance is reached, $G$ can generate faithful images indistinguishable from real ones. However, conventional GANs~\cite{goodfellow2014generative} struggle to maintain a healthy competition between $G$ and $D$, leading to unsatisfactory performance. Therefore, we employ the least-squares GANs~\cite{mao2017least} for the discriminator to improve the quality of generated images and stabilize the training process. More specifically, least-squares GANs adopt the least-squares loss function, rather than the negative log-likelihood loss function used in conventional GANs, to force the generator to generate samples toward the decision boundary. Given a young face $\mat{X}_{s}$ from age group $s$, the output of $G$ from $s$ to an old age group $t$ is $G(\mat{X}_{s}, \vct{\lambda}_{s:t})$.
In the context of least-squares GANs, the adversarial loss for the generator $G$ is thus defined as: \begin{align} \mathcal{L}_{\mathrm{adv}}=\frac{1}{2} \mathbb{E}_{\mat{X}_{s}}\Big[\Big(D\big([G(\mat{X}_s, \vct{\lambda}_{s:t}); \mat{C}_t]\big)-1\Big)^{2}\Big]. \end{align} \subsubsection{{Age Estimation Loss}} Apart from being photo-realistic, synthetic face images are also expected to satisfy the target age condition. Therefore, we include an age estimation network ${A}$ in our progressive face aging framework to regularize the face age distribution by minimizing the age estimation loss---including an age regression loss and an age group classification loss. The age estimation network $A$ is pre-trained on the training data and frozen in our framework, regularizing the generator towards more accurate aging. Formally, the age estimation loss between the estimated age $\widehat{\vct{y}}$ and the target age $\vct{y}$ for the generator $G$ is defined as: \begin{align} \label{eq:age_estimation_loss} \mathcal{L}_{\mathrm{age}}=\mathbb{E}_{\mat{X}_{s}}\Big[\big\|\vct{y}-\widehat{\vct{y}}\big\|_2 + \ell\big({A}(\mat{X})\mat{W}, \vct{c}_t\big)\Big], \end{align} where $\mat{W}\in \mathbb{R}^{101\times N}$ denotes the final fully-connected layer for the age group classification task outputting the age group logits, and $\ell$ is the cross-entropy loss for age group classification. Note that Eq.~\eqref{eq:age_estimation_loss} is also the loss function used to pre-train the age estimation network ${A}$ and $\mat{W}$. \subsubsection{{Identity Consistency Loss}} To preserve the identity-related information of the face and keep the identity-irrelevant information such as the background unchanged, we adopt a mixed identity consistency loss between the input face and the generated one, which includes a pixel-wise loss, a structural similarity (SSIM) loss~\cite{wang2004image}, and a feature-level loss.
These three losses for identity preservation are defined as follows: \begin{align} \mathcal{L}_{\mathrm{pix}} &= \mathbb{E}_{\mat{X}_{s}}\Big\|G(\mat{X}_{s}, \vct{\lambda}_{s:t})-\mat{X}_{s}\Big\|_1, \\ \mathcal{L}_{\mathrm{ssim}} &= \mathbb{E}_{\mat{X}_{s}}\Big[1-\mathrm{SSIM}\big(G(\mat{X}_{s}, \vct{\lambda}_{s:t}),\mat{X}_{s}\big)\Big], \\ \mathcal{L}_{\mathrm{fea}}&=\mathbb{E}_{\mat{X}_{s}}\Big\|\phi(G(\mat{X}_{s}, \vct{\lambda}_{s:t}))-\phi(\mat{X}_{s})\Big\|^2_F.\label{Loss:IPL:fea} \end{align} The feature-level loss $\mathcal{L}_{\mathrm{fea}}$ is employed to keep identity consistency in a high-level feature space, where $\|\cdot\|_F$ represents the Frobenius norm. Here, $\phi$ in Eq.~\eqref{Loss:IPL:fea} denotes the activation output of the $10$th convolutional layer of the VGG-Face descriptor~\cite{parkhi2015deep}, which is adopted to extract the identity-related semantic representation of a face image. However, this loss alone could lead to unexpected changes in the identity-irrelevant information. Therefore, the image reconstruction loss, \textit{i}.\textit{e}.\xspace, the mean absolute error~(MAE) between the input and output images, is adopted so that sparse aging effects are produced and the background remains unchanged, although it may introduce some outliers into the resultant aging effects. We leverage the SSIM loss to balance the identity-related information captured by the feature-level loss against the identity-irrelevant information captured by the MAE, because the SSIM loss measures local structural similarity and is a trade-off between the MAE in the image space and the feature-level loss in a high-level feature space.
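Two of these components can be sketched in NumPy as follows (the SSIM term is omitted for brevity; the feature extractor $\phi$, VGG-Face in the paper, is assumed to be given):

```python
import numpy as np

def pixel_loss(x_gen, x_src):
    """L_pix: mean absolute error between generated and input faces."""
    return float(np.mean(np.abs(x_gen - x_src)))

def feature_loss(f_gen, f_src):
    """L_fea: squared Frobenius distance between identity features phi(.)."""
    return float(np.sum((f_gen - f_src) ** 2))

x = np.random.rand(4, 4, 3)
print(pixel_loss(x, x))                        # 0.0 for a perfect reconstruction
print(feature_loss(np.ones(3), np.zeros(3)))   # 3.0
```

Both losses vanish when the generated face equals the input, which is exactly the behavior the residual skip connection exploits: only the sparse aging effects are penalized.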
Finally, the identity consistency loss for the generator $G$ is defined as: \begin{align} \mathcal{L}_{\mathrm{ide}}=(1 - \alpha_{\mathrm{ssim}})\mathcal{L}_{\mathrm{pix}} + \alpha_{\mathrm{ssim}} \mathcal{L}_{\mathrm{ssim}} + \alpha_{\mathrm{fea}} \mathcal{L}_{\mathrm{fea}}, \end{align} where $\alpha_{\mathrm{ssim}}$ and $\alpha_{\mathrm{fea}}$ are the hyperparameters controlling the balance between these three losses. \subsubsection{{Final Loss}} To generate a photo-realistic face that belongs to the target age group and has the same identity as the input one, the final loss function to optimize the generator $G$ is expressed as: \begin{align} \mathcal{L}_{G}=\lambda_{\mathrm{adv}} \mathcal{L}_{\mathrm{adv}} + \lambda_{\mathrm{age}} \mathcal{L}_{\mathrm{age}} + \lambda_{\mathrm{ide}} \mathcal{L}_{\mathrm{ide}}. \end{align} The discriminator $D$ is optimized by minimizing the following loss: \begin{align} \mathcal{L}_D =& \frac{1}{2} \mathbb{E}_{\mat{X}} \Big[\Big(D\big([\mat{X};\mat{C}]\big)-1\Big)^{2}\Big] +\notag\\ & \frac{1}{2} \mathbb{E}_{\mat{X}_{s}} \Big[D\big([G(\mat{X}_{s},\vct{\lambda}_{s:t});\mat{C}_t]\big)^{2}\Big], \end{align} where the first term is computed over all real faces from all age groups and the second term over all generated fake faces from all target age groups. During the training phase, the generator $G$ and discriminator $D$ are updated alternately until the training converges. By feeding the output of sub-generator $\widebar{G}_i$ as the input to the next sub-generator $\widebar{G}_{i+1}$, we can train our whole model in an end-to-end manner to eliminate the accumulative error. Note that there is only one discriminator in our PFA-GAN. We inject the age condition $\mat{C}_t$ into the discriminator so that one discriminator can be adapted to different age groups with different conditions. Besides, the error from the latter sub-generators is backpropagated to the former ones.
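The least-squares objectives for $G$ and $D$ reduce to a few lines; this NumPy sketch (our own illustration) assumes the discriminator outputs raw real-valued scores:

```python
import numpy as np

def lsgan_g_loss(d_fake):
    """Generator loss: 0.5 * E[(D(G(x)) - 1)^2], pushing fakes to label 1."""
    return 0.5 * float(np.mean((d_fake - 1.0) ** 2))

def lsgan_d_loss(d_real, d_fake):
    """Discriminator loss: 0.5 * E[(D(x) - 1)^2] + 0.5 * E[D(G(x))^2]."""
    return 0.5 * float(np.mean((d_real - 1.0) ** 2)) \
         + 0.5 * float(np.mean(d_fake ** 2))

# a perfect discriminator (real -> 1, fake -> 0) has zero loss,
# while the generator is then maximally penalized
print(lsgan_d_loss(np.ones(4), np.zeros(4)))  # 0.0
print(lsgan_g_loss(np.zeros(4)))              # 0.5
```

Unlike the log-likelihood loss, these quadratic penalties keep non-zero gradients even for samples the discriminator classifies confidently, which is the stabilizing property cited above.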
Because we train PFA-GAN in an end-to-end manner, there may be leakage of aging effects between sub-generators. On the one hand, the aging effects between two adjacent age groups only reflect the primary pattern transformation; some faces do not fit this transformation and may be better suited to that between another two adjacent age groups. In this case, the leakage of aging effects between sub-generators is unavoidable. On the other hand, our end-to-end training can eliminate the accumulative error. For example, once one sub-generator amid the whole aging process produces ghosted faces, the following sub-generators can detect such anomalies and enforce that sub-generator to produce satisfactory faces through back-propagation. In a sense, the latter sub-generators provide an attention mechanism for earlier ones to age faces effectively, which is more important than the leakage of aging effects. \section{Related Work} \label{related_work} Traditional methods of modeling face age progression can be roughly divided into two categories: physical model- and prototype-based methods. Physical model-based methods~\cite{lanitis2002toward,suo2010compositional} mechanically simulate the changes of facial appearance over time, such as muscles and facial skin, via a set of parameters. However, physical model-based methods are usually computationally expensive and do not generalize well due to their mechanical aging rules. On the contrary, prototype-based methods~\cite{rowland1995manipulating,suo2010compositional,kemelmacher2014illumination} compute the average faces of people in the same age group as prototypes. As a result, a testing face can be aged by adding the differences between the prototypes of any two age groups. The main problem of prototype-based methods is that the personalized features cannot be preserved well due to the use of average faces.
Recently, deep neural networks have shown their potential for face aging~\cite{Duong_2016_CVPR,Duong_2017_ICCV,wang2016recurrent}. For example, Wang~\textit{et al}.\xspace proposed a recurrent face aging framework that first maps the faces into an eigenface subspace~\cite{turk1991face} and then utilizes a recurrent neural network to model the transformation patterns across different ages smoothly~\cite{wang2016recurrent}. As a supervised face aging method, it requires massive paired faces of the same subject over a long period for training, which is impractical. In contrast, PFA-GAN uses several sub-networks to learn aging effects from unpaired faces. To address this issue, many recent works utilize the availability of unpaired faces to train face aging models, either based on generative adversarial networks~(GANs)~\cite{goodfellow2014generative} or conditional GANs~(cGANs)~\cite{mirza2014conditional}. Most unsupervised face aging methods are cGANs-based~\cite{antipov2017face,zhang2017age,wang2018face,song2018dual,li2019global,li2019age,zhu2020look,sun2020facial}, with a target age group as the condition. For example, Zhang~\textit{et al}.\xspace proposed a conditional adversarial autoencoder~(CAAE)~\cite{zhang2017age} to achieve both age progression and regression simultaneously by traversing on a low-dimensional face manifold. Nevertheless, projecting faces into a latent subspace often compromises the image quality due to reconstruction errors and fails to preserve the identity-related information~\cite{wang2016recurrent,zhang2017age}. To overcome the above deficiencies, Wang~\textit{et al}.\xspace introduced an identity-preserved cGAN~(IPCGAN) that utilizes a perceptual loss based on a pre-trained model to preserve identity information~\cite{wang2018face}.
Considering that face rejuvenation is the reverse process of face aging, Song~\textit{et al}.\xspace extended dual GANs~\cite{yi2017dualgan} into a dual conditional GANs~(Dual cGAN) framework to achieve both face age progression and regression~\cite{song2018dual}, and then Li~\textit{et al}.\xspace combined this framework with a spatial attention mechanism to better keep the identity information and reduce ghost artifacts~\cite{li2019age}. There are a few works directly using GANs to model the aging progression between any two age groups~\cite{yang2018learning,liu2019attribute}. For example, Yang~\textit{et al}.\xspace designed a discriminator with a pyramid architecture~(PAG-GAN)~\cite{yang2018learning}, which can estimate high-level age-related details through a pre-trained neural network. In addition to preserving identity information, Liu~\textit{et al}.\xspace further found that the inconsistency of facial attributes still exists in previous works, and they fed the facial attribute vectors into both the generator and discriminator to suppress unnatural changes of facial attributes~\cite{liu2019attribute}. In summary, cGANs-based methods typically learn different age mappings by one or two models with a target age group as the condition, while the GANs-based methods train several models for different age mappings separately. The trade-off between these two kinds of methods is that cGANs-based methods are more flexible, while GANs-based methods produce better face aging results. A significant difference from previous works is that our proposed PFA-GAN enjoys the best of both worlds: it is a GAN-based method but with a novel age encoding scheme, is more flexible than current (c)GAN-based methods, and, most importantly, achieves the best performance in image quality, aging accuracy, and identity preservation. \begin{figure*}[t!]
\centering \includegraphics[width=1.0\linewidth]{generator.pdf} \caption{The proposed progressive face aging framework for a face aging task with 4 age groups. Each sub-network $\overline{G}_i$ aims at aging faces from age group $i$ to $i + 1$ and consists of a residual skip connection, a binary gate $\lambda_i$, and a light sub-network $G_i$ outputting aging effects. The residual skip connection from input to output prevents the sub-network from memorizing an exact copy of the input face. The binary gate $\lambda_i$ controls the aging flow and determines whether the sub-network $G_i$ is involved in the aging mapping.} \label{fig:framework} \end{figure*}
\section{Introduction} The European Space Agency's space astrometry mission \emph{Gaia}, aiming to determine astrometric parameters for at least one billion stars with accuracies reaching 10~microarcseconds \citep{2012Ap&SS.341...31D}, was launched in December 2013 \citep{2016A&A...595A...1G}. \emph{Gaia} is based on similar observation principles as the highly successful pioneering astrometric mission {\sc Hipparcos} \citep{1997ESASP1200.....E}. In particular, both satellites use an optical system providing two viewing directions separated by a wide angle, referred to as the basic angle \citep{2001A&A...369..339P}. The goal of the present paper is to show how certain time-dependent variations of the basic angle can bias the parallax zero point of an astrometric solution derived from observations by such a scanning astrometric satellite. The data processing for an astrometric satellite should, as far as possible, be based on the principle of self-calibration \citep{2011EAS....45..109L}. This means that the same observational data are used to determine both the scientifically interesting astrometric parameters and the so-called ``nuisance parameters'' that describe the instrument calibration, satellite attitude, and other relevant parts of the observation model \citep{2012A&A...538A..78L}. The self-calibration is, however, of limited applicability when variations of different parameters do not produce fully independent effects in the observables. Such situations can occur when the variation of certain parameters leads to changes in the observables that resemble the changes produced by the variation of some other parameters. The more similar the changes in the observables are, the stronger the correlation between the parameters. If the changes are identical, the problem of parameter estimation is degenerate: the same set of observables is equally well described by different sets of parameter values. 
As long as the degeneracy involves only nuisance parameters, it has no effect on the astrometric solution and only leads to an arbitrary but unimportant shift of the respective nuisance parameters. However, if there is a degeneracy between the astrometric and nuisance parameters, the self-calibration process will in general lead to biased astrometry. The celestial reference frame is an example of a complete degeneracy between the astrometric and attitude parameters, which can only be lifted by means of external data, in this case the positions and proper motions of a number of extragalactic objects \citep{1997A&A...323..620K,2012A&A...538A..78L}. Concerning the instrument calibration parameters, it is possible to formulate the calibration model in such a way that (near-)degeneracies are avoided among its parameters, as well as between the calibration and attitude parameters. However, it is still possible that the actual physical variations of the instrument contain components that are degenerate with the astrometric parameters. By definition, such variations cannot be detected internally by the astrometric solution (e.g., from an analysis of the residuals), but only through a comparison with external data, e.g.\ astrophysical information or independent metrology. In \emph{Gaia} the latter approach is chosen, as will be detailed in Sect.~\ref{sec:relevance}. It turns out that the basic angle is an important example of a quantity that could vary in a way that cannot be fully calibrated from observations. Already in the early years of the {\sc Hipparcos} project it was realised that certain periodic variations of the basic angle, caused by a non-uniform heating of the satellite by the solar radiation, lead to a global shift of the parallaxes \citep{lindegren77}.
Subsequent analyses \citep{1995A&A...304...52A,2005A&A...439..805V} concluded that the possible effect on the {\sc Hipparcos} parallaxes was negligible, suggesting a very good short-term stability of the basic angle in that satellite. For \emph{Gaia} the situation is different. The much higher accuracy targeted by this mission necessitates a very careful consideration of possible biases introduced by uncalibrated instrumental effects, including basic angle variations. This is even more evident in view of the very significant ($\sim$1~mas amplitude) basic angle variations measured by the on-board metrology system of \emph{Gaia} \citep{2016A&A...595A...1G,2016A&A...595A...4L}. In this context the near-degeneracy between a global parallax zero point error and a possible basic-angle variation induced by solar radiation is particularly relevant. The theoretical analysis of the problem presented here expands and clarifies earlier analytical results by \citet{lindegren77,LL:GAIA-LL-057} and \citet{2005A&A...439..805V}. An analytical treatment of the problem is given in Sect.~\ref{s:derivations}. Section~\ref{s:simulations} presents the results of numerical experiments confirming the theoretical expectations. In Sect.~\ref{sec:disc} we consider the practical implications of these results. Some concluding remarks are given in Sect.~\ref{s:conclusion}. \section{Theory} \label{s:derivations} In this section we consider how small perturbations of various parameters change the observed quantities. We first demonstrate that, to first order in the small angles, arbitrary variations of observables are equivalent to certain variations of the basic angle and attitude (Sects.~\ref{sec-reference-system}--\ref{sec-attitude}). Then we find the changes of observables due to a common shift of all parallaxes (Sect.~\ref{sec-parallax}). 
Combining these results, we derive in Sects.~\ref{sec-parallax-ba-attitude-general}--\ref{sec-parallax-ba-attitude-harmonic} the specific variations of the basic angle and attitude that are observationally indistinguishable from a common shift of the parallaxes. \subsection{Reference system} \label{sec-reference-system} To study the coupling between the instrument parameters and parallax, it is convenient to make use of the rotating reference system aligned with the fields of view. This system, known as the Scanning Reference System (SRS) in the \emph{Gaia} nomenclature \citep{2012A&A...538A..78L}, is represented by the instrument axes $\vec{x}$, $\vec{y}$, $\vec{z}$ (Fig.~\ref{fig:system}), with $\vec{z}$ directed along the nominal spin axis of the satellite, $\vec{x}$ bisecting the two viewing directions separated by the basic angle $\Gamma$, and $\vec{y}=\vec{z}\times\vec{x}$. The direction towards an object is specified by the unit vector \begin{equation}\label{u_pqr} \vec{u}=\vec{x}\cos\varphi\cos h+\vec{y}\sin\varphi\cos h+\vec{z}\sin h\,, \end{equation} with the instrument angles $\varphi$ and $h$ describing the position of the object with respect to the SRS (Fig.~\ref{fig:system}). For a star in the preceding field of view (PFoV) $\varphi\simeq +\Gamma/2$, while in the following field of view (FFoV) $\varphi\simeq -\Gamma/2$. \subsection{Field angles} An observation consists of a measurement of the coordinates of a stellar image in the focal plane at a particular time. In practice the measurement is expressed in detector coordinates (e.g.\ pixels), but we consider here an idealised system providing a direct measurement of the two field angles $g$ and $h$ in the relevant field of view. While the across-scan field angle $h$ coincides with the corresponding instrument angle, the along-scan field angle $g$ is reckoned from the centre of the corresponding field of view in the direction of the satellite rotation (Fig.~\ref{fig:system}). 
Projected on the sky, the field-of-view centre defines two viewing directions separated by the basic angle $\Gamma$. Thus, \begin{equation}\label{eq:gpf} \left. \begin{aligned} g_\mathrm{p}&=\varphi-\Gamma/2\quad \text{in the preceding field of view}\\ g_\mathrm{f}&=\varphi+\Gamma/2\quad \text{in the following field of view} \quad\end{aligned} \right\}\,, \end{equation} where subscripts p and f denote values for the preceding and following field of view, respectively. We assume that the instrument is ideal except for the basic angle $\Gamma$, which can deviate from its nominal (conventional) value $\Gamma_\mathrm{c}$ by a time-dependent variation: \begin{equation}\label{eq:g} \Gamma(t)=\Gamma_\mathrm{c}+\delta\Gamma(t) \, . \end{equation} It is important to note that the along-scan field angle $g$, as defined here, is not the same as the along-scan field angle $\eta$ normally used in the context of the \emph{Gaia} data processing \citep{2012A&A...538A..78L}. While $\eta$ is measured from a fixed, conventional origin at $\varphi=\pm\Gamma_\mathrm{c}/2$, our $g$ is measured from the actual, variable field centre at $\varphi=\pm\Gamma(t)/2$. This difference motivates the change in notation from $\eta$ to $g$. For consistency, a corresponding change is made in the across-scan direction, although our $h$ is the same as the across-scan field angle $\zeta$ used in the \emph{Gaia} data processing. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{fig_refSystem.pdf}} \caption{Definition of the instrument axes $\vec{x}$, $\vec{y}$, $\vec{z}$ of the Scanning Reference System (SRS), the basic angle $\Gamma$, and the field angles $g$ and $h$ specifying the observed direction to a star ($\vec{u}$) in either field of view. $\varphi$ is the along-scan instrument angle of the star. 
In the SRS the direction to the solar system barycentre, $\vec{b}$, is specified by the angles $\xi$ and $\Omega$.} \label{fig:system} \end{figure} \subsection{Variations of the field angles due to a change in the basic angle} \label{ss_basangle} Any increase or decrease of the basic angle makes the fields of view move further from each other or closer together. This, in turn, changes the observed field angle $g$ for a given stellar image. However, since the attitude (celestial pointing of the SRS axes) is unchanged, the value of $\varphi$ for a given star is not affected by the basic angle. For example, let us consider the preceding field of view. An increase of the basic angle causes the observed image to be shifted with respect to the centre of the field of view so that the observed along-scan field angle $g_\mathrm{p}$ is decreased. The opposite effect takes place in the following field of view. The across-scan field angles $h_\mathrm{p}$ and $h_\mathrm{f}$ are obviously not affected. The variations of the field angles caused by the basic-angle variation $\delta\Gamma$ are therefore \begin{equation}\label{eq:alpf_gamma} \left. \begin{aligned} \delta g_\mathrm{p}&=-{\textstyle\frac{1}{2}}\,\delta\Gamma\\ \delta g_\mathrm{f}&=+{\textstyle\frac{1}{2}}\,\delta\Gamma\\ \delta h_\mathrm{p}&=0\\ \delta h_\mathrm{f}&=0\\ \end{aligned} \quad\right\}\,. \end{equation} \noindent This agrees with Eq.~(\ref{eq:gpf}) taking into account that $\delta\varphi=0$. \subsection{Variations of the field angles due to a change in the attitude} \label{sec-attitude} A quaternion representation is used to parametrise the attitude of \emph{Gaia} \citep[Appendix A]{2012A&A...538A..78L}. Here it is more convenient to describe small changes in the attitude by means of three small angles $\delta_x$, $\delta_y$, and $\delta_z$ representing the rotations around the corresponding SRS axes. 
Since the direction $\vec{u}$ to the star is regarded here as fixed, the corresponding changes in the observed field angles are found to be \begin{equation}\label{eq:alac_att} \left. \begin{aligned} \delta g_\mathrm{p}&=-\delta_z\\ \delta g_\mathrm{f}&=-\delta_z\\ \delta h_\mathrm{p}&=\cos(\Gamma_\mathrm{c}/2)\,\delta_y-\sin(\Gamma_\mathrm{c}/2)\,\delta_x\\ \delta h_\mathrm{f}&=\cos(\Gamma_\mathrm{c}/2)\,\delta_y+\sin(\Gamma_\mathrm{c}/2)\,\delta_x \end{aligned} \quad\right\}\,. \end{equation} In these and following equations, we neglect terms of the second and higher orders in $\delta_x$, $\delta_y$, $\delta_z$, and $\delta\Gamma$. To this approximation we can use $\Gamma_\mathrm{c}$ instead of $\Gamma$ in the trigonometric factors. \subsection{Combined changes in the basic angle and attitude} \label{sec-combined} Combining Eqs.~(\ref{eq:alpf_gamma}) and (\ref{eq:alac_att}) we see that a simultaneous change of the basic angle by $\delta\Gamma$ and of the attitude by $\delta_x$, $\delta_y$, $\delta_z$ gives the following total change of the field angles: \begin{equation}\label{eq:gamma_att} \left. \begin{aligned} \delta g_\mathrm{p}&=-{\textstyle\frac{1}{2}}\,\delta\Gamma-\delta_z\\ \delta g_\mathrm{f}&=+{\textstyle\frac{1}{2}}\,\delta\Gamma-\delta_z\\ \delta h_\mathrm{p}&=\cos(\Gamma_\mathrm{c}/2)\,\delta_y-\sin(\Gamma_\mathrm{c}/2)\,\delta_x\\ \delta h_\mathrm{f}&=\cos(\Gamma_\mathrm{c}/2)\,\delta_y+\sin(\Gamma_\mathrm{c}/2)\,\delta_x \end{aligned} \quad\right\}\,. \end{equation} An exact inversion of this system of equations gives \begin{equation}\label{eq:gamma_att_inv} \left. 
\begin{aligned} \delta_x&=\frac{1}{2\sin\left(\Gamma_\mathrm{c}/2\right)}\left(\delta h_\mathrm{f} -\delta h_\mathrm{p}\right)\\ \delta_y&=\frac{1}{2\cos(\Gamma_\mathrm{c}/2)}\left(\delta h_\mathrm{p} +\delta h_\mathrm{f}\right)\\ \delta_z&=-\frac{1}{2}\left(\delta g_\mathrm{p}+\delta g_\mathrm{f}\right)\\ \delta\Gamma&=\delta g_\mathrm{f}-\delta g_\mathrm{p} \end{aligned} \quad\right\}\,. \end{equation} The first two equations in (\ref{eq:gamma_att_inv}) show that arbitrary small changes in the across-scan field angles $\delta h_\mathrm{p}$ and $\delta h_\mathrm{f}$ can be represented, to first order, as changes in the attitude by $\delta_x$ and $\delta_y$. Similarly, the last two equations show that arbitrary changes in the along-scan field angles $\delta g_\mathrm{p}$ and $\delta g_\mathrm{f}$ can be represented as a combination of a change of the basic angle $\delta\Gamma$ and a change in the attitude by $\delta_z$. In general, an arbitrary perturbation of the observed stellar positions, being a smooth function of time and stellar position, clearly results in a smooth, time-dependent variation of $\delta g_\mathrm{p}$, $\delta g_\mathrm{f}$, $\delta h_\mathrm{p}$, and $\delta h_\mathrm{f}$. From Eq.~(\ref{eq:gamma_att_inv}) it follows that such a perturbation is observationally indistinguishable from a certain time-dependent variation of $\delta_x$, $\delta_y$, $\delta_z$, and $\delta\Gamma$. \subsection{Variations of the field angles due to a change in the parallax} \label{sec-parallax} The position of the barycentre of the solar system with respect to the instrument can be specified by a distance $R$ (in au) and two angular coordinates. We take the angular coordinates to be $\xi$ and $\Omega$ defined as in Fig.~\ref{fig:system}. According to the scanning law, $\xi$ is nearly constant while $\Omega$ is increasing with time as the satellite spins. 
The barycentric position of the satellite is \begin{equation}\label{R-vector} \vec{R}=R\left(-\vec{x}\cos\Omega\sin\xi+\vec{y}\sin\Omega\sin\xi-\vec{z}\cos\xi\right)\,. \end{equation} The observed direction $\vec{u}$ to a star is given by Eq.~(4) of \citet{2012A&A...538A..78L} as a function of the astrometric parameters of the star. Linearisation yields the change in the direction caused by a small change of the parallax $\delta\varpi$: \begin{equation}\label{delta_u} \delta\vec{u}=\vec{u}\times\left(\vec{u}\times\vec{R}\,\delta\varpi\right)\,. \end{equation} We now assume that the direction to a star is changed only from a change of its parallax, while the basic angle and attitude are kept constant. The fixed basic angle implies $\delta\varphi=\delta g$. The fixed attitude means that $\vec{x}$, $\vec{y}$, and $\vec{z}$ are constant, so that Eq.~(\ref{u_pqr}) gives the change in direction \begin{equation}\label{eq:du} \delta\vec{u} = \vec{v}\cos h\,\delta g + \vec{w}\,\delta h\,, \end{equation} where \begin{equation}\label{eq:vw} \left.\begin{aligned} \vec{v}&= -\vec{x}\sin\varphi+\vec{y}\cos\varphi\\ \vec{w}&= -\vec{x}\cos\varphi\sin h-\vec{y}\sin\varphi\sin h+\vec{z}\cos h \end{aligned} \quad\right\} \end{equation} are unit vectors in the directions of increasing $\varphi$ and $h$, respectively. They are evidently orthogonal to each other and to $\vec{u}$. Equating $\delta\vec{u}$ from (\ref{delta_u}) and (\ref{eq:du}) and taking the dot product with $\vec{v}$ and $\vec{w}$ gives \begin{equation}\label{eq:eta_pi} \left. \begin{aligned} \cos h\,\delta g&=-\vec{v}^\prime\vec{R}\,\delta\varpi\\ \delta h&=-\vec{w}^\prime\vec{R}\,\delta\varpi \end{aligned} \quad\right\}\,. \end{equation} Substituting Eqs.~(\ref{R-vector}) and (\ref{eq:vw}) we find \begin{equation}\label{eq:alac_par} \left. 
\begin{aligned} \cos h\,\delta g&=-\sin\left(\Omega+\varphi\right)\sin\xi\, R\,\delta\varpi\\ \delta h&=\left[\cos h\cos\xi-\cos(\Omega+\varphi)\sin h\sin\xi\right]\,R\,\delta\varpi \end{aligned} \quad\right\}\,. \end{equation} Up to this point the derived formulae are valid throughout the field of view to first order in the (very small) variations denoted with a $\delta$. To proceed, we now consider a star observed at the centre of either field of view, so that $g=h=0$ and $\varphi=\pm\,\Gamma_\mathrm{c}/2$. In this case the variations of the field angles caused by $\delta\varpi$ become \begin{equation}\label{eq:gh_par} \left. \begin{aligned} \delta g_\mathrm{p}&=-\sin\left(\Omega+\Gamma_\mathrm{c}/2\right)\sin\xi\, R\,\delta\varpi\\ \delta g_\mathrm{f}&=-\sin\left(\Omega-\Gamma_\mathrm{c}/2\right)\sin\xi\, R\,\delta\varpi\\ \delta h_\mathrm{p}&=\cos\xi\, R\,\delta\varpi\\ \delta h_\mathrm{f}&=\cos\xi\, R\,\delta\varpi \end{aligned} \quad\right\}\,. \end{equation} Considering only stars at the centre of either field of view effectively means that we neglect the finite size of the field of view. In both {\sc Hipparcos} and \emph{Gaia} the half-size of the field of view is $\Phi < 10^{-2}$~rad. Since $|g|$, $|h|<\Phi$, neglected terms in Eq.~(\ref{eq:gh_par}) are of the order of $\Phi\times\delta$, where $\delta$ represents any of the quantities $\delta\Gamma$, $\delta_x$, etc. Equation~(\ref{eq:gh_par}) is therefore expected to be accurate to $<1$\% at any point in the field of view. The implications of this approximation are further discussed in Sect.~\ref{sec:fov}. \subsection{Relation between the changes in parallax, basic angle, and attitude} \label{sec-parallax-ba-attitude-general} Substituting Eq.~(\ref{eq:gh_par}) into Eq.~(\ref{eq:gamma_att_inv}) we readily obtain a relation between the change in parallax and the corresponding changes in basic angle and attitude: \begin{equation}\label{eq:delta_par} \left. 
\begin{aligned} \delta_x&=0\\ \delta_y&=\cos\xi\sec(\Gamma_\mathrm{c}/2)\,R\,\delta\varpi\\ \delta_z&=\sin\Omega\sin\xi\cos(\Gamma_\mathrm{c}/2)\,R\,\delta\varpi\\ \delta\Gamma&=2\cos\Omega\sin\xi\sin(\Gamma_\mathrm{c}/2)\,R\,\delta\varpi \end{aligned} \quad\right\}\,. \end{equation} These equations should be interpreted as follows: a change in parallax by $\delta\varpi$ is observationally indistinguishable (to order $\Phi\times\delta$) from a simultaneous change of the attitude by $\delta_x$, $\delta_y$, $\delta_z$ and of the basic angle by $\delta\Gamma$. The formulae were derived for one star, but if $\delta\varpi$ is the same for all stars, they hold for all observations of all stars. Equation~(\ref{eq:delta_par}) therefore defines the specific variations of the attitude and basic angle that mimic a global shift in parallax. Remarkably, the rotation around the $\vec{x}$ axis is not affected by the global parallax change, while the rotation around the $\vec{y}$ axis is independent of $\Omega$ and therefore, in practice, almost constant. In a global astrometric solution all the attitude and stellar parameters (including $\varpi$) are simultaneously fitted to the observations of $g$ and $h$. A specific variation in the basic angle of the form $\delta\Gamma(t)\propto\cos\Omega\sin\xi\,R$ will then lead to a global shift of the fitted parallaxes (together with some time-dependent attitude errors $\delta_y$, $\delta_z$). Since the effects of such a basic-angle variation are fully absorbed by the attitude parameters and parallaxes, the variation is completely degenerate with the stellar and attitude model and cannot be detected from the residuals of the fit. For a satellite in orbit around the Earth (as {\sc Hipparcos}) or near L$_2$ (as \emph{Gaia}), $R$ is approximately constant. 
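As a cross-check (not part of the original derivation), the chain leading to Eq.~(\ref{eq:delta_par}) can be verified numerically: the field-angle changes produced by a parallax shift, Eq.~(\ref{eq:gh_par}), are mapped through the exact inversion of Eq.~(\ref{eq:gamma_att_inv}) and compared with the closed-form result. The following is a minimal sketch; the parameter values are the nominal \emph{Gaia} ones quoted in the text.

```python
from math import sin, cos, radians

# Numerical check of Eq. (delta_par): feed the parallax-induced field-angle
# changes of Eq. (gh_par) through the exact inversion Eq. (gamma_att_inv)
# and compare with the closed-form expressions.
Gc = radians(106.5)   # basic angle (Gaia value)
xi = radians(45.0)    # solar aspect angle
R = 1.01              # barycentric distance [au]
dpi = 1.0             # parallax shift [arbitrary angular units]

def delta_par_check(Omega):
    # Eq. (gh_par): field-angle changes from a common parallax shift
    dg_p = -sin(Omega + Gc/2) * sin(xi) * R * dpi
    dg_f = -sin(Omega - Gc/2) * sin(xi) * R * dpi
    dh_p = cos(xi) * R * dpi
    dh_f = cos(xi) * R * dpi
    # Eq. (gamma_att_inv): equivalent attitude and basic-angle changes
    dx = (dh_f - dh_p) / (2*sin(Gc/2))
    dy = (dh_p + dh_f) / (2*cos(Gc/2))
    dz = -(dg_p + dg_f) / 2
    dG = dg_f - dg_p
    # Eq. (delta_par): predicted closed form
    dx_pred = 0.0
    dy_pred = cos(xi) / cos(Gc/2) * R * dpi
    dz_pred = sin(Omega) * sin(xi) * cos(Gc/2) * R * dpi
    dG_pred = 2*cos(Omega) * sin(xi) * sin(Gc/2) * R * dpi
    return (dx - dx_pred, dy - dy_pred, dz - dz_pred, dG - dG_pred)

# The residuals vanish to machine precision for any spin phase Omega.
for Omega in [0.0, 1.0, 2.5, 4.0]:
    assert all(abs(e) < 1e-12 for e in delta_par_check(Omega))
```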
In order to achieve a stable thermal regime of the instrument for a scanning astrometric satellite one typically chooses a scanning law with a constant angle between the direction to the Sun and the spin axis -- the so-called solar aspect angle. This means that angle $\xi$ is nearly constant as well (see Sect.~\ref{sec:helio}). In the next section we consider the case when $R$ and $\xi$ are exactly constant. Nevertheless, in reality both $R$ and $\xi$ are somewhat time-dependent, and this case is discussed in Sect.~\ref{s:time}. Since $\xi$ and $R$ are nearly constant, the degenerate component of the basic-angle variation is essentially of the form $\cos\Omega$, which is periodic with the satellite spin period relative to the solar-system barycentre. This leads to a fundamental design requirement for a scanning astrometry satellite, namely that the basic angle should not have significant periodic variations with a period close to the period of rotation of the satellite, and especially not of the form $\cos\Omega$. \subsection{Harmonic representation of the variations} \label{sec-parallax-ba-attitude-harmonic} From Eq.~(\ref{eq:delta_par}) it is seen that a global parallax shift corresponds to variations of $\delta\Gamma$ and $\delta_z$ proportional to $\cos\Omega$ and $\sin\Omega$, respectively, while $\delta_y$ and $\delta_x$ are constant. These quantities correspond to terms of order $k=0$ and 1 in the more general harmonic series \begin{equation}\label{eq:har_gamma} \left. \begin{aligned} \delta_x &= \sum_{k\ge0}a_k^{(x)}\cos k\Omega+b_k^{(x)}\sin k\Omega\\ \delta_y &= \sum_{k\ge0}a_k^{(y)}\cos k\Omega+b_k^{(y)}\sin k\Omega\\ \delta_z &= \sum_{k\ge0}a_k^{(z)}\cos k\Omega+b_k^{(z)}\sin k\Omega\\ \delta\Gamma &= \sum_{k\ge0}a_k^{(\Gamma)}\cos k\Omega+b_k^{(\Gamma)}\sin k\Omega\\ \end{aligned} \quad\right\}\,. 
\end{equation} Specifically, if $a_k^{(\Gamma)}=b_k^{(\Gamma)}=0$ except for $a_1^{(\Gamma)}\ne 0$, we find the following relations between the amplitude of the basic angle variation, the global shift of the parallaxes, and the non-zero harmonics of the attitude errors: \begin{equation}\label{eq:har_varpi} \left. \begin{aligned} \delta\varpi &=\frac{1}{2R\sin\xi\sin(\Gamma_\mathrm{c}/2)}\,a_1^{(\Gamma)} =0.8738\,a_1^{(\Gamma)}\\ a_0^{(y)} &=\frac{1}{\tan\xi\sin\Gamma_\mathrm{c}}\,a_1^{(\Gamma)} =1.0429\,a_1^{(\Gamma)}\\ b_1^{(z)} &=\frac{1}{2\tan(\Gamma_\mathrm{c}/2)}\,a_1^{(\Gamma)} =0.3734\,a_1^{(\Gamma)}\\ \end{aligned} \quad\right\}\,. \end{equation} The numerical values correspond to the mean parameters relevant for \emph{Gaia}, that is, $\Gamma_\mathrm{c}=106\fdg5$, $\xi=45\degr$, and $R=1.01$~au. It is not the purpose of this paper to investigate the possible effects of other harmonics of the basic-angle variation. However, it can be mentioned that $a_1^{(\Gamma)}$ is the only harmonic parameter that is degenerate with the attitude and stellar parameters. This means that $a_0^{(\Gamma)}$, $b_1^{(\Gamma)}$, and $a_k^{(\Gamma)}$, $b_k^{(\Gamma)}$ for $k>1$ can be determined as parameters in the calibration model. \section{Results of numerical simulations} \label{s:simulations} In this section we present the results of numerical tests performed to check the above conclusions. To study different aspects of the problem, we make use of two different, though complementary, solutions: a direct solution, where inversion of the normal matrix provides full covariance information, and an iterative solution, with separate updates for different groups of unknowns, similar to the method used in the actual processing of \emph{Gaia} data \citep{2012A&A...538A..78L}. 
While the direct solution can only handle a relatively small number of stars, and therefore is of limited practical use, it enables us to investigate important mathematical properties of the problem and to examine the correlations between all the parameters. By contrast, the iterative solution cannot provide this kind of information, but is more realistic in terms of the number of stars and has been successfully employed in the processing of real \emph{Gaia} data. \subsection{Small-scale direct solutions} \label{sec:small} For the direct solutions, special simulation software was developed. It simulates the observations of a small number of stars and the reconstruction of their astrometric parameters based on conventional least-squares fitting. The normal equations for the unknown parameters are accumulated and the normal matrix is inverted using singular value decomposition \citep{golub}. This decomposition allows us to study mathematical properties of the problem, especially details of its degeneracy. The simulations included $10^4$ stars uniformly distributed over the celestial sphere. Observations were generated with a harmonic perturbation of the basic angle as in Eq.~(\ref{eq:har_gamma}d) with $a_1^{(\Gamma)}=1$~mas. No noise was added to the observations in order to study the effects of the basic angle variations in pure form. The solutions always include five astrometric parameters per star: two components of the position, the parallax, and two components of the proper motion. Additional parameters representing the variations in attitude and basic angle were introduced as required by various types of solutions. The first test is to check the theoretical predictions in Eq.~(\ref{eq:har_varpi}). 
To this end, a solution was made including only the star (S) and attitude (A) parameters, where the latter were taken to be the harmonic amplitudes of $\delta_x$, $\delta_y$, and $\delta_z$ as given by the first three equations of (\ref{eq:har_gamma}) for $k\le 1$, i.e.\ with a total of nine attitude parameters. The results of this solution, summarised in Table~\ref{t_amplitudes}, are in very good agreement with the theoretical expectations. Small deviations from the predicted values may be caused e.g. by the limited number of stars, the finite size of the field of view, and numerical rounding errors. Another test is the singular value analysis of the normal matrix. In addition to the solution described above (of type SA), we computed three other solutions with different sets of unknowns but always using the same observations. Solution S included only the star parameters as unknowns, solution SB included also the harmonic coefficients of $\delta\Gamma$ in the last equation of (\ref{eq:har_gamma}) for $k\le 1$, and solution SBA included all three sets of unknowns. The results, summarised in Table~\ref{t_sin_val}, again confirm the theoretical predictions. The problem is close to being degenerate only in case SBA where all three kinds of parameters (star, basic angle, attitude) are fitted simultaneously. In this case there is one singular value (${\sim}10^{-5}$) much smaller than the second smallest value (${\sim}10^{-2}$). This indicates that the problem has a rank deficiency of one, which is clearly caused by the (near-)degeneracy between the global parallax shift and the specific basic-angle and attitude variations described by Eq.~(\ref{eq:har_varpi}). In all other cases (SB, SA, S) no isolated small singular values appear: the problem is then formally well-conditioned, although (as we have seen in case SA) the solutions may be biased by the basic-angle variations. 
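As an independent arithmetic check (not part of the original analysis), the numerical coefficients in Eq.~(\ref{eq:har_varpi}) can be reproduced directly from the quoted mean \emph{Gaia} parameters; the sketch below assumes only those values.

```python
from math import sin, tan, radians

# Recompute the coefficients of Eq. (har_varpi) from the mean Gaia
# parameters quoted in the text: Gamma_c = 106.5 deg, xi = 45 deg, R = 1.01 au.
Gc = radians(106.5)  # basic angle
xi = radians(45.0)   # solar aspect angle
R = 1.01             # mean barycentric distance [au]

c_varpi = 1.0 / (2*R*sin(xi)*sin(Gc/2))  # parallax shift per unit a1^(Gamma)
c_a0y   = 1.0 / (tan(xi)*sin(Gc))        # constant y-attitude harmonic
c_b1z   = 1.0 / (2*tan(Gc/2))            # z-attitude harmonic

# The three coefficients come out close to 0.8738, 1.0429, and 0.3734,
# the values quoted in Eq. (har_varpi) and tested in Table (t_amplitudes).
```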
\begin{table} \caption{The parallax shift and the attitude harmonics obtained in the small-scale direct solution (of type SA) with $a_1^{(\Gamma)}=1$~mas basic-angle variation.} \label{t_amplitudes} \centering \begin{tabular}{ccc} \hline\hline \noalign{\smallskip} Quantity & Predicted & Computed \\ & [mas] & [mas] \\ \noalign{\smallskip} \hline \noalign{\smallskip} $\delta\varpi$ & 0.8738 & 0.8735 \\[5pt] $a_0^{(x)}$ & 0 & $ 4\times 10^{-6}$ \\ $a_1^{(x)}$ & 0 & $ 9\times 10^{-6}$ \\ $b_1^{(x)}$ & 0 & $ 2\times 10^{-6}$ \\[7pt] $a_0^{(y)}$ & 1.0429 & 1.0423 \\ $a_1^{(y)}$ & 0 & $ 2\times 10^{-5}$ \\ $b_1^{(y)}$ & 0 & $-2\times 10^{-6}$ \\[7pt] $a_0^{(z)}$ & 0 & $-8\times 10^{-6}$ \\ $a_1^{(z)}$ & 0 & $-2\times 10^{-5}$ \\ $b_1^{(z)}$ & 0.3734 & 0.3733 \\ \noalign{\smallskip} \hline \end{tabular} \end{table} \begin{table} \caption{The singular values $\sigma_i$ of the normal matrix for the different types of direct solutions.} \label{t_sin_val} \centering \begin{tabular}{ccccc} \hline\hline \noalign{\smallskip} & SBA & SB & SA & S \\ \noalign{\smallskip} \hline \noalign{\smallskip} $\sigma_1$ & $5.6\times 10^{-6}$ & 0.015 & 0.017 & 0.017 \\ $\sigma_2$ & 0.017 & 0.017 & 0.017 & 0.017 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ $\sigma_\mathrm{max}$ & 1549 & 387 & 1549 & 0.886 \\ \noalign{\smallskip} \hline \end{tabular} \tablefoot{ The singular values are sorted lowest to highest. The different solutions are denoted by the parameters included in the fit: S, B and A stand respectively for the astrometric (star) parameters, basic angle, and attitude angles. } \end{table} The circumstance that the smallest singular value in case SBA is not zero is partly attributable to rounding errors in a solution involving $\ge 50\,000$ unknowns. 
However, even in exact arithmetic the small-scale direct solution does not represent a fully degenerate problem because (i) the strict degeneracy only occurs if one neglects the finite size of the fields of view, and (ii) some parameters of the mission are slightly time-dependent, but were assumed to be constant by the harmonic representations in Eq.~(\ref{eq:har_gamma}). The question of the time dependence is further addressed in Sect.~\ref{s:time}. The attitude parameters used in the small-scale solutions are not representative of any practically useful attitude model, but were chosen solely to verify the expected degeneracy with the basic angle variation and parallax zero point. In particular, the harmonic model of $\delta_x$, $\delta_y$, $\delta_z$ in Eq.~(\ref{eq:har_gamma}) cannot describe a solid rotation of the reference frame, which explains why Table~\ref{t_sin_val} does not show the six-fold degeneracy between the attitude and stellar parameters normally expected from the unconstrained reference frame. This simplification is removed in the large-scale simulations described below, which use a fully realistic attitude model. \subsection{Large-scale iterative solutions} To test the effect of the basic angle variation in an iterative solution, we make use of the \emph{Gaia} AGISLab simulation software \citep[Appendix B]{2012A&A...543A..15H}. This tool allows us to simulate independent astrometric solutions in a reasonable time, based on the same principles as the astrometric global iterative solution \citep[AGIS;][]{2012A&A...538A..78L} used for \emph{Gaia} but employing a smaller number of primary stars and several other time-saving simplifications. To investigate the parallax zero point we performed a set of tests for different values of the basic angle in the range from $30\degr$ to $150\degr$, including the {\sc Hipparcos} and \emph{Gaia} values of $58\degr$ and $106\fdg5$, respectively. 
The nominal \emph{Gaia} scanning law and realistic geometry of the \emph{Gaia} fields of view are assumed in these simulations, which include one million bright ($G=13$) stars uniformly distributed on the sky and observed during 5~years with no dead time. The nominal along-scan observation noise of 95~$\mu$as per CCD is assumed, based on the estimated centroiding performance of \emph{Gaia} for stars of $G=13$~mag. According to \citet{2012Ap&SS.341...31D} this corresponds to an expected end-of-mission precision of around 10~$\mu$as for the parallaxes. Additionally, basic-angle variations with an amplitude $a_1^{\left(\Gamma\right)} = 1$~mas are included. The unknowns consist of the five astrometric parameters per star and the attitude parameters based on a B-spline representation of the quaternion components \citep{2012A&A...538A..78L} using a knot interval of 30~s. The AGISLab simulations therefore incorporate many detailed features from the observations and data reductions of an actual scanning astrometric mission, including a number of higher-order effects neglected in our analytical approach. The parallaxes obtained in the iterative solutions are offset from their ``true'' values (assumed in the simulation) by random and systematic errors. Results for the mean offset $\delta\varpi$ are shown by the points in Fig.~\ref{fig:parBA}. The curve is the theoretical relation from the first equation in (\ref{eq:har_varpi}). The points deviate from the curve by at most 1~$\mu$as, or $\lesssim0.1$\%. The random errors, measured by the sample standard deviation of the offsets of the individual parallaxes, were 7.3~$\mu$as in all the experiments, practically independent of the basic angle and in rough agreement with the expected end-of-mission precision. \begin{figure} \resizebox{\hsize}{!}{\includegraphics{fig_parShift.pdf}} \caption{Parallax zero point shift for a basic angle variation of the form $\cos\Omega$ with amplitude 1~mas. 
The dots show results of the large-scale iterative solution for several different basic angles, including the {\sc Hipparcos} and \emph{Gaia} values; the solid curve shows the theoretical relation from Eq.~(\ref{eq:har_varpi}).} \label{fig:parBA} \end{figure} \section{Discussion} \label{sec:disc} In the preceding sections it was shown that a global shift of the parallaxes is observationally indistinguishable from a certain time-variation of the basic angle. The relevant relation, strictly valid at the centre of the field of view, is given by the last identity in Eq.~(\ref{eq:delta_par}). Here we proceed to discuss some practical implications of this result. \subsection{Physical relevance of the cos~$\Omega$ dependence} \label{sec:relevance} Elementary design principles have led to the choice of a nearly constant solar aspect angle $\xi$ (Fig.~\ref{fig:system}) for both {\sc Hipparcos} (43\degr) and \emph{Gaia} (45\degr). Moreover, for a satellite in orbit around the Earth or close to the second Lagrange (L$_2$) point of the Sun-Earth-Moon system, the barycentric distance $R$ is always close to 1~au. The form of basic-angle variation that is degenerate with parallax is then essentially proportional to $\cos\Omega$. This result is highly significant in relation to expected thermal variations of the instrument. The oblique solar illumination of the rotating satellite may produce basic-angle variations that are periodic with the spin period relative to the Sun, i.e.\ of the general harmonic form of the last line in Eq.~(\ref{eq:har_gamma}). A nearly constant, non-zero coefficient $a_1^{(\Gamma)}$ could therefore be a very realistic physical consequence of the way the satellite is operated. Knowledge of the close coupling between a possible thermal impact on the instrument and the parallax zero point has resulted in very strict engineering specifications for the acceptable amplitude of short-term variations of the basic angle in both {\sc Hipparcos} and \emph{Gaia}. 
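The mission dependence of these specifications can be illustrated by evaluating the parallax sensitivity in the first line of Eq.~(\ref{eq:har_varpi}) for both geometries. The sketch below is ours, not part of the original analysis; the {\sc Hipparcos} values $\xi=43\degr$ and $R=1$~au are assumptions based on the figures quoted in this section.

```python
from math import sin, radians

# Parallax bias [mas] per mas of cos(Omega) basic-angle amplitude,
# from the first line of Eq. (har_varpi).
def parallax_shift_per_mas(Gamma_deg, xi_deg, R_au):
    return 1.0 / (2*R_au*sin(radians(xi_deg))*sin(radians(Gamma_deg)/2))

gaia = parallax_shift_per_mas(106.5, 45.0, 1.01)  # ~0.87
hip  = parallax_shift_per_mas(58.0, 43.0, 1.0)    # ~1.5
# The smaller Hipparcos basic angle makes its parallax zero point
# roughly 1.7 times more sensitive to a given cos(Omega) amplitude.
assert hip > gaia
```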
In the case of \emph{Gaia} it was known already at an early design phase that the basic angle variations cannot be fully avoided passively and need to be measured. Therefore \emph{Gaia} includes a dedicated laser-interferometric metrology system, the basic angle monitor \citep[BAM;][]{2014SPIE.9143E..0XM}, to measure the short-term variations. According to BAM measurements during the first year of the nominal operations of \emph{Gaia}, the amplitude of the $\cos\Omega$ term, referred to 1.01~au and epoch 2015.0, was about 0.848~mas \citep{2016A&A...595A...4L}. Uncorrected, such a large variation would lead to a parallax bias of 0.741~mas according to Eq.~(\ref{eq:har_varpi}). For \emph{Gaia} Data Release~1 \citep{2016A&A...595A...2G} the observations were corrected for the basic angle variations based on a harmonic model fitted to the BAM measurements. \subsection{Dependence on $\Gamma_\mathrm{c}$} \label{s:smallGamma} From Eq.~(\ref{eq:har_varpi}) it is seen that the parallax shift is inversely proportional to $\sin(\Gamma_\mathrm{c}/2)$. For a constant amplitude of the $\cos\Omega$ term in the basic angle variation, the parallax shift therefore decreases with increasing basic angle (cf.\ Fig.~\ref{fig:parBA}). From this point of view the optimum basic angle would be 180\degr. This value would, however, be detrimental to the overall conditioning and precision of the astrometric solution \citep{1997ESASP.402..823M,1998A&A...340..309M,2011EAS....45..109L}, which instead favours a value around 90\degr. Moreover, the sensitivity to the $\cos\Omega$ variation is only a factor 1.4 smaller at 180{\degr} than at 90{\degr}. The \emph{Gaia} value $\Gamma_\mathrm{c}=106\fdg5$ is therefore a reasonable choice. That the sensitivity to the basic angle variation increases with decreasing $\Gamma_\mathrm{c}$ can be understood from simple arguments. Consider the along-scan effects of the parallax shift $\delta\varpi$. 
As long as the nominal basic angle $\Gamma_\mathrm{c}$ is large, the effects in the two fields of view are significantly different. However, the smaller the basic angle is, the more similar are the effects in the two fields of view. This can be seen from Eq.~(\ref{eq:gh_par}) but is also obvious without any formula. We emphasise that it is the basic angle variations that induce field angle perturbations and that the solution tries to find parallaxes and attitude parameters that fit the perturbed field angles. It is then obvious that a smaller basic angle will require a larger parallax shift to absorb a basic-angle variation of given amplitude. \subsection{Time dependence of $R$ and $\xi$} \label{s:time} As noted above, the barycentric distance $R$ and the angle $\xi$ are not strictly constant but functions of time. In this case Eq.~(\ref{eq:delta_par}) gives the particular time dependences of $\delta\Gamma$, $\delta_x$, $\delta_y$, and $\delta_z$ that are degenerate with a global parallax shift. In particular, \begin{equation}\label{eq:gamma_var} \delta\Gamma(t) = C \, R(t)\sin\xi(t)\cos\Omega(t) \, , \end{equation} where $C$ is constant, is indistinguishable from a parallax shift of $\delta\varpi=\frac{1}{2}C/\sin(\Gamma_\mathrm{c}/2)$. Any other form of the variation $\delta\Gamma(t)$ is not completely degenerate with $\delta\varpi$ and may therefore contain components that can be detected by analysis of the residuals and subsequently eliminated by means of additional calibration terms. However, an arbitrary variation $\delta\Gamma(t)$ in general also contains a component of the form (\ref{eq:gamma_var}), which will result in some parallax shift. 
This shift can be estimated by projecting the variation onto the function on the right-hand side of Eq.~(\ref{eq:gamma_var}) in the least-squares sense: \begin{equation}\label{eq:mean} \delta\varpi={1\over 2\sin(\Gamma_\mathrm{c}/2)}\, { \bigl\langle\,\delta\Gamma(t)\,R(t)\,\sin\xi(t)\,\cos\Omega(t)\,\bigr\rangle \over \bigl\langle\,R(t)^2\,\sin^2\xi(t)\,\cos^2\Omega(t)\,\bigr\rangle }\,, \end{equation} where the angular brackets denote averaging over time. If $R$ and $\xi$ are constant, the factor $(R\sin\xi)^{-1}$ can be taken out of the averages. If, in addition, $\delta\Gamma(t)$ is strictly periodic in $\Omega$, we recover the first equality in Eq.~(\ref{eq:har_varpi}). For \emph{Gaia}, which operates in the vicinity of L$_2$, the barycentric distance $R$ varies between approximately 0.99 and 1.03~au as a combination of the eccentric heliocentric orbit of the Earth, the Lissajous orbit around L$_2$, and the time-dependent offset between the Sun and the solar system barycentre. As discussed in Sect.~\ref{sec:helio} below, $\xi$ varies by about 1\% from its nominal value of 45\degr. \subsection{Effect of the finite size of the fields of view} \label{sec:fov} In order to derive Eqs.~(\ref{eq:gh_par}) and (\ref{eq:delta_par}) we neglected the finite size of the field of view by considering only observations at the field centre ($g=h=0$). This was necessary in order to obtain an exact relation between the parallax shift and the basic-angle and attitude variations. In a finite field of view, additional terms appear due to the variation of the parallax factor across the field, which cannot be represented by a unique set of basic-angle variations. These terms are smaller than the basic-angle variations by a factor of the order of $\Phi\simeq 10^{-2}$, where $\Phi$ is half the size of the field of view.
As a consequence, a basic-angle variation of the form (\ref{eq:gamma_var}) is not strictly degenerate with $\delta\varpi$ and the attitude angles when a finite field of view is considered. However, if the instrument also has periodic optical distortions separately in each field of view that need to be calibrated, the corresponding, more complex calibration model may contribute to the degeneracy and, in the worst case, restore complete degeneracy. \subsection{Dependence on heliotropic coordinates} \label{sec:helio} The spherical coordinates $R$, $\xi$, $\Omega$ introduced in Sect.~\ref{sec-parallax} define the position of the solar system barycentre in the scanning reference system (SRS). This is what matters for calculating the parallax effect, which depends on the observer's displacement from the barycentre. On the other hand, the physical relevance of the $\cos\Omega$ modulation is connected with the changing illumination of the satellite by the Sun, which depends on the heliocentric distance $R_\mathrm{h}$ and the (heliotropic) angles $\xi_\mathrm{h}$, $\Omega_\mathrm{h}$ representing the proper direction towards the centre of the Sun at the time of observation. The difference between the heliotropic and barytropic coordinates is at most about 0.01~au and 0.01~rad, respectively. This is small but should be taken into account for an accurate modelling of the basic-angle variations. In this context it can be noted that the expected thermal impact on the satellite scales as $R_\mathrm{h}^{-2}$, while the parallax factor scales as $R$. On the other hand, the scanning law is chosen to keep $\xi_\mathrm{h}$ as constant as possible, while $\xi$ can vary at the level of 1\%. If the basic angle varies as $R_\mathrm{h}^{-2}\sin\xi_\mathrm{h}\cos\Omega_\mathrm{h}$ it is no longer strictly of the form (\ref{eq:gamma_var}). The resulting parallax bias can be estimated by means of Eq.~(\ref{eq:mean}).
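Such an estimate amounts to a weighted time average. A discrete-time sketch of the projection in Eq.~(\ref{eq:mean}), with illustrative (not mission) values for the spin rate, the orbit modulation, and an added non-degenerate $\cos 2\Omega$ harmonic, is:

```python
import numpy as np

# Discrete-time evaluation of the least-squares projection: an arbitrary
# basic-angle variation dGamma(t) is projected onto the degenerate component
# R(t) sin(xi) cos(Omega(t)), then scaled by 1/(2 sin(Gamma_c/2)).
Gamma_c = np.radians(106.5)                # basic angle
xi = np.radians(45.0)                      # solar aspect angle (held constant here)

t = np.linspace(0.0, 1.0, 200_000)         # one year
Omega = 2.0 * np.pi * 1461.0 * t           # spin phase (illustrative rate)
R = 1.0 + 0.017 * np.cos(2.0 * np.pi * t)  # barycentric distance, au (illustrative)

C = 1.0                                    # mas, amplitude of the degenerate part
dGamma = C * R * np.sin(xi) * np.cos(Omega) + 0.3 * np.cos(2.0 * Omega)

basis = R * np.sin(xi) * np.cos(Omega)
delta_varpi = np.mean(dGamma * basis) / np.mean(basis**2) / (2.0 * np.sin(Gamma_c / 2.0))
print(delta_varpi)   # close to C / (2 sin(Gamma_c/2)) ~ 0.624: cos(2*Omega) projects out
```

The non-degenerate harmonic averages out of the projection, leaving only the parallax shift produced by the degenerate $\cos\Omega$ component.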
\subsection{Breaking the degeneracy?} \label{sec:breaking} If the basic angle varies as a consequence of the changing solar illumination of the rotating satellite, we expect to see a parallax bias according to Eq.~(\ref{eq:mean}). However, as discussed above, the degeneracy with the parallax zero point is not perfect, and in principle this opens up the possibility of calibrating the basic angle variations from the observations. At least three different effects that contribute to breaking the degeneracy could be used: the finite size of the field of view (Sect.~\ref{sec:fov}), the time dependence of $R$ due to the eccentricity of the Earth's orbit (Sect.~\ref{s:time}), and the difference between the barytropic and heliotropic angles (Sect.~\ref{sec:helio}). Unfortunately, all three effects only come in at a level of a few per cent of the variation, or less, which makes the result very sensitive to small errors in the calibration model. Moreover, the finite field of view is of little use if we have to calibrate complex periodic variations of optical distortions independently in each field of view. The best chance may be offered by the time variation of $R$, where the ratio of the parallax effect to the illumination strength goes as $R^3$ and consequently varies by $\pm 5$\% over the year. Thus the hope to break the degeneracy purely from the observations themselves, i.e.\ based on the self-calibration principle, is rather limited. \section{Conclusions} \label{s:conclusion} We have presented an analysis of the effect of basic angle variations on the global shift of parallaxes derived from observations by a scanning astrometric satellite with two fields of view. The method of small perturbations was used to derive the changes in the four observables (the across- and along-field angles in both fields of view) resulting from perturbations of four instrument parameters (the basic angle and three components of the attitude).
Conversely, any given perturbation of the four observables can equally be represented by a specific combination of the instrument parameters. Applying this technique to the perturbations induced by a change in the parallax, we derived the time-dependent variations of the instrument parameters that exactly mimic a global shift of the parallaxes. These relations confirm previous findings that an uncorrected variation of the basic angle of the form $a_1^{(\Gamma)}\cos\Omega$, with $\Omega$ being the barycentric spin phase, leads to a global shift of the parallax zero point of ${\simeq\,}0.87a_1^{(\Gamma)}$ for the parameters of the \emph{Gaia} design. Results of numerical simulations are in complete agreement with the analytical formulae. In general, periodic variations of the basic angle can be expected from the thermal impact of solar radiation on the spinning satellite \citep{lindegren77,LL:GAIA-LL-057}. Those periodic variations are typically related to the heliotropic spin phase $\Omega_\mathrm{h}$, which is close to the barycentric spin phase $\Omega$. If the thermally-induced variations contain a significant component that is proportional to $\cos\Omega_\mathrm{h}$, their effect on the observations is practically indistinguishable from a global shift of the parallaxes. Although the degeneracy is not perfect, it is difficult to break except by using other kinds of data or external information. In the case of \emph{Gaia} this includes, in particular, direct measurement of the basic angle variations by means of laser metrology (BAM). The use of astrophysical information such as parallaxes of pulsating stars \citep{2011A&A...530A..76W,2016arXiv160900728G} and quasars is vital for verifying the successful determination of the parallax zero point.
\begin{acknowledgements} The authors acknowledge useful discussions with many colleagues within the \emph{Gaia} community, of which we especially wish to mention Ulrich Bastian, Anthony Brown, Jos de Bruijne, Uwe Lammers, Fran\c{c}ois Mignard, and Timo Prusti. The authors warmly thank the anonymous referee for valuable comments and suggestions. The work at Technische Universit\"at Dresden was partially supported by the BMWi grants 50\,QG\,0601, 50\,QG\,0901 and 50\,QG\,1402 awarded by the Deutsche Zentrum f\"ur Luft- und Raumfahrt e.V. (DLR). \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} \label{sec:intro} Matrix factorization is a popular machine learning technique, with applications in a variety of domains, such as recommendation systems~\citep{lawrence09:non-linear,salakhutdinov08:probabilistic}, natural language processing~\citep{riedel13:relation}, and computer vision~\citep{huang03:learning}. Due to the widespread use of these models, there has been considerable theoretical analysis of the various properties of low-rank approximations of real-valued matrices, including approximation rank~\citep{alon13:the-approximate,davenport14:1-bit} and sample complexity~\citep{balcan2017optimal}. Rather than assume real-valued data, a number of studies (particularly ones on practical applications) focus on more specific data types, such as binary data~\citep{nickel13:logistic}, integer data~\citep{lin2009integer}, and ordinal data~\citep{koren2011ordrec,udell14:generalized}. For such matrices, existing approaches have used different \emph{link} functions, applied in an element-wise manner to the low-rank representation~\citep{neumann16:what}, i.e. the output $\hat{Y}$ is $\ensuremath{\psi}(\mathbf{U}^T\mathbf{V})$ instead of the conventional $\mathbf{U}^T\mathbf{V}$. These link functions have been justified from a probabilistic point of view \citep{collins01:a-generalization,salakhutdinov08:bayesian}, and have provided considerable success in empirical settings. However, theoretical results for linear factorization do not apply here, and thus neither the expressive power of factorization models with non-linear link functions nor the relation of the rank of a matrix to the link function used is clear. In this paper, we first define a generalized notion of rank based on the link function $\ensuremath{\psi}$, as the rank of the latent matrix before the link function is applied.
We focus on a link function that applies to the factorization of integer-valued matrices: the generalized round function ($\text{GRF}$), and define the corresponding generalized round-rank ($\text{GRR}$). After providing background on $\text{GRR}$, we show that there are many low-$\text{GRR}$ matrices that are full rank\footnote{We will refer to the rank of a matrix as its \emph{linear} rank, and refer to the introduced generalized rank as \emph{link}-rank.}. Moreover, we also study the approximation limitations of linear rank by showing, for example, that low-$\text{GRR}$ matrices often cannot be approximated by low-rank linear matrices. We define uniqueness for $\text{GRR}$-based matrix completion, and derive its necessary and sufficient conditions. These properties demonstrate that many full linear-rank matrices can be factorized using low-rank matrices if an appropriate link function is used. We also present an empirical evaluation of factorization with different link functions for matrix reconstruction and completion. We show that using link functions is efficient compared to linear rank, in that a gradient-based optimization approach learns more accurate reconstructions using a lower-rank representation and fewer training samples. We also perform experiments on matrix completion on two recommendation datasets, and demonstrate that appropriate link functions outperform linear factorization and thus can play a crucial role in accurate matrix completion. \section{Link Functions and Generalized Matrix Rank} \label{sec:link} Here we introduce our notation for matrix factorization, and use it to introduce link functions and the \emph{generalized link-rank}. We will focus on the round function and round-rank, introduce their generalized versions, and present their properties. \para{Rank Based Factorization} Matrix factorization, broadly defined, is a decomposition of a matrix as a multiplication of two matrices.
Accordingly, the rank of a matrix $\mathbf{Y}\in {\mathbb{R}}^{n\times m}$ is defined as the smallest natural number $r$ such that $ \mathbf{Y} = \mathbf{U} \mathbf{V}^T$, or equivalently, $\mathbf{Y}_{ij} = \sum_k \mathbf{U}_{ik}\mathbf{V}_{jk} $, where $\mathbf{U}\in {\mathbb{R}}^{n\times r}$ and $\mathbf{V}\in {\mathbb{R}}^{m\times r}$. We use $\mathrm{r}(\mathbf{Y})$ to indicate the rank of a matrix $\mathbf{Y}$. \para{Link Functions and Link-Rank} Since the matrix $\mathbf{Y}$ may be from a domain $\mathbb{V}^{n\times m}$ different from real matrices, link functions can be used to define an alternate factorization: \begin{align} \mathbf{Y} = \ensuremath{\psi}_{\boldsymbol{\tau}}(\ensuremath{\mathbf{X}}), \ensuremath{\mathbf{X}} = \mathbf{U} \mathbf{V}^T, \end{align} where $\mathbf{Y} \in \mathbb{V}^{n\times m}$, $\ensuremath{\psi}:{\mathbb{R}}\rightarrow\mathbb{V}$ (applied element-wise), $\ensuremath{\mathbf{X}}\in {\mathbb{R}}^{n\times m}$, $\mathbf{U}\in {\mathbb{R}}^{n\times r}$, $\mathbf{V}\in {\mathbb{R}}^{m\times r}$, and ${\boldsymbol{\tau}}$ represents the parameters of the link function, if any. Examples of link functions that we will study in this paper include the \emph{round} function for binary matrices, and its generalization to ordinal-valued matrices. Link functions were introduced for matrix factorization by \citet{singh08:a-unified}, and subsequently \citet{udell14:generalized} presented their generalization to loss functions and regularization for \emph{abstract data types}.
\begin{thm:def}\label{thm:rank} Given a matrix $\mathbf{Y}$ and a link function $\ensuremath{\psi}_{\boldsymbol{\tau}}$ parameterized by ${\boldsymbol{\tau}}$, the \textbf{link-rank} $\mathrm{r}_\ensuremath{\psi}$ of $\mathbf{Y}$ is defined as the minimal rank of a real matrix $\ensuremath{\mathbf{X}}$ such that $\mathbf{Y} = \ensuremath{\psi}_{\boldsymbol{\tau}}(\ensuremath{\mathbf{X}})$: \begin{equation} \label{eq:rank} \mathrm{r}_\ensuremath{\psi}(\mathbf{Y}) = \min_{\ensuremath{\mathbf{X}}\in{\mathbb{R}}^{n\times m}, {\boldsymbol{\tau}}} \left\{ \mathrm{r}(\ensuremath{\mathbf{X}}); \mathbf{Y} = \ensuremath{\psi}_{\boldsymbol{\tau}}(\ensuremath{\mathbf{X}}) \right\} \end{equation} \end{thm:def} Note that with $\ensuremath{\psi}\equiv I$, i.e. $\ensuremath{\psi}(x)=x$, $\mathrm{r}_\ensuremath{\psi}(\mathbf{Y})=\mathrm{r}(\mathbf{Y})$. \para{Sign and Round Rank} If we consider the $\text{sign}$ function as the link function, where $ \text{sign}(x)= \{ 0 \text{ if } x<0, 1 \text{ o.w.} \}$ (applied element-wise to the entries of the matrix), the link-rank defined above corresponds to the well-known $\text{sign-rank}$ for binary matrices~\citep{neumann2015some}: \[ \text{sign-rank}(\mathbf{Y}) = \min_{\ensuremath{\mathbf{X}}\in{\mathbb{R}}^{n\times m}} \left\{ \mathrm{r}(\ensuremath{\mathbf{X}}); \mathbf{Y} = \text{sign}(\ensuremath{\mathbf{X}}) \right\}. \] A variation of the $\text{sign}$ function that uses a threshold $\tau$, $\text{Round}_\tau(x)=\{ 0 \text{ if } x<\tau, 1 \text{ o.w.} \}$, when used as a link function results in the $\text{round-rank}$ for binary matrices, i.e. \[ \text{round-rank}_{\tau}(\mathbf{Y}) = \min_{\ensuremath{\mathbf{X}}\in{\mathbb{R}}^{n\times m}} \left\{ \mathrm{r}(\ensuremath{\mathbf{X}}); \mathbf{Y} = \text{Round}_{\tau}(\ensuremath{\mathbf{X}}) \right\}, \] as shown in~\citet{neumann2015some}.
Thus, our notion of \emph{link-rank} not only unifies existing definitions of rank, but can be used for novel ones, as we will do next. \para{Generalized Round-Rank ($\text{GRR}$)} \label{sec:rrabk} Many matrix factorization applications use ordinal values, i.e.\ $\mathbb{V}=\{0,1,\ldots,N\}$. For these, we define the generalized round function ($\text{GRF}$): \begin{align} \text{GRF}_{\tau_1,...,\tau_N}(x)= \begin{cases} 0 & x \leq \tau_1\\ 1 & \tau_1 < x \leq \tau_2\\ \vdots\\ N-1 & \tau_{N-1} < x \leq \tau_{N}\\ N & \text{o.w.} \end{cases} \end{align} where its parameters ${\boldsymbol{\tau}}\equiv\{\tau_1,...,\tau_N\}$ are thresholds (sorted in ascending order). Accordingly, we define the \emph{generalized round-rank} ($\text{GRR}$) for any ordinal matrix $\mathbf{Y}$ as: \[ \text{GRR}_{{\boldsymbol{\tau}}}(\mathbf{Y})=\min_{\ensuremath{\mathbf{X}}\in{\mathbb{R}}^{n\times m}} \left\{ \mathrm{r}(\ensuremath{\mathbf{X}}); \mathbf{Y} = \text{GRF}_{{\boldsymbol{\tau}}}(\ensuremath{\mathbf{X}}) \right\} . \] Here, we are primarily interested in exploring the utility of $\text{GRR}$ and, in particular, comparing the representation capabilities of low-$\text{GRR}$ matrices to low-linear-rank matrices. To this end, we present the following interesting property of $\text{GRR}$. \iffalse \begin{thm:lemma} For $A ,B \in \{0,...,N\}^{n \times m}$: \begin{align} \text{GRR}_{\tau_1,...\tau_N}(A) &\leq min(n,m)\\ \text{GRR}_{\tau_1,...\tau_N}(A) &=\text{GRR}_{\tau_1,...\tau_N}(A^T)\\ \text{GRR}_{\tau_1,...\tau_N}(A+B) &\leq \text{GRR}_{\tau_1,...\tau_N}(A)+\text{GRR}_{\tau_1,...\tau_N}(B) \end{align} Where $+$ is in the real numbers and $A+B \in \{0,...,N\}^{n \times m}$.
\end{thm:lemma} \begin{thm:lemma} the following decomposition holds for $\text{GRF}$: \begin{align} Round_{\tau_1,...\tau_N}(A)=\sum\limits_{i=1}^{N}Round_{\tau_i}(A) \end{align} \end{thm:lemma} \begin{thm:lemma} For any arbitrary subset of thresholds $T=\{\tau_{i_1},...,\tau_{i_r}\}$: \begin{align} \text{GRR}_{\tau_1,...\tau_N}(A)\geq \text{GRR}_{T}(\bar A) \end{align} Where $\bar A$ attained by the following transformation in matrix $A$: \begin{align} \bar A & =[b_{ij}]_{n\times m}\\ b_{ij} & = \begin{cases} 0, & \text{if } a_{ij} \in \{0,..,i_{1}-1\} \\ 1, & \text{if } a_{ij} \in\{i_{1},..,i_{2}-1\} \\ \vdots\\ r-1, & \text{if } a_{ij} \in\{i_{r},..,N-1\} \end{cases} \end{align} \end{thm:lemma} \begin{thm:lemma} Following inequality holds for GRR: \begin{align} GRR_{\tau_{1},...\tau_{N}}(A)\leq GRR_{\tau_1,...\tau_N,\tau_{N+1}}(A) \end{align} \end{thm:lemma} \begin{thm:lemma} Lets define the function $F: R^N \rightarrow N$ as follows: \begin{align} F(\tau_{1},...\tau_{N})= GRR_{\tau_1,...\tau_N}(A) \end{align} Where $A$ is a matrix in $\{0,...,N\}^{n\times m}$. Then we have the following inequality: \begin{align} F((\tau_{1}+\tau\textquotesingle_{1})/2,...\tau_{N}) \leq F(\tau_{1},...\tau_{N})+F(\tau\textquotesingle_{1},...\tau_{N}) \end{align} \end{thm:lemma} \begin{thm:lemma} We have the following inequality: \begin{align} F(\tau_{1}+\tau\textquotesingle_{1},...,\tau_{N}+\tau\textquotesingle_{N}) \leq F(\tau_{1},...\tau_{N})+F(\tau\textquotesingle_{1},...,\tau\textquotesingle_{N}) \end{align} \end{thm:lemma} \fi \begin{thm:thm} \label{thm:thresh} For a given matrix $\mathbf{Y} \in \{0,\ldots,N\}^{n \times m}$, let's assume ${\boldsymbol{\tau}}^*$ is the set of optimal thresholds, i.e. 
$\text{GRR}_{{\boldsymbol{\tau}}^{\star}}(\mathbf{Y})=\min_{{\boldsymbol{\tau}}}\text{GRR}_{{\boldsymbol{\tau}}}(\mathbf{Y})$, then for any other ${\boldsymbol{\tau}}'$: \begin{align} \label{thresh} \text{GRR}_{{\boldsymbol{\tau}}'}(\mathbf{Y}) \leq N\times\text{GRR}_{{\boldsymbol{\tau}}^{\star}}(\mathbf{Y})+1 \end{align} \begin{proof} We provide a sketch of the proof here, and include the details in the appendix. We can show that the GRR can change at most by $1$ if we add a constant to all the thresholds, and does not change at all if all the thresholds are multiplied by a constant. Further, we show that there exist $\epsilon_i$ for every $i \in \{1,...,N-1\}$ such that shifting $\tau_i$ by $\epsilon_i$ does not change the GRR. These properties provide a bound on the change in GRR between any two sets of thresholds. \cut{ \begin{thm:lemma} \label{thm:thr} We have the following property for \text{GRR}: \begin{align} \text{GRR}_{\tau_{1}+c,...,\tau_{N}+c}(\mathbf{Y})\leq \text{GRR}_{\tau_1,...,\tau_N}(\mathbf{Y})+1 \end{align} Where $c$ is a real number. \begin{proof} We define $\mathbf{B}$ and $\mathbf{B'}$ as follows: \begin{align} \mathbf{B} &=\{B|\text{GRF}_{\tau_1,..,.\tau_N}(B)=\mathbf{Y}\}\\ \mathbf{B'} &=\{B'|\text{GRF}_{\tau_1+c,...,\tau_N+c}(B')=\mathbf{Y}\} \end{align} For an arbitrary $B\in \mathbf{B}$ let's assume we have matrix $\mathbf{U}$ and $\mathbf{V}$ such that $B=\mathbf{U} \times \mathbf{V}^T$. If we add a column to the end of $\mathbf{U}$ and $\mathbf{V}$ and call them $\mathbf{U}'$ and $\mathbf{V}'$ as follows: \begin{align} U'&=\begin{bmatrix} & c\\ U & \vdots \\ & c \end{bmatrix} ,& V'&=\begin{bmatrix} & 1\\ V & \vdots \\ & 1 \end{bmatrix} \end{align} It is clear that $B'=\mathbf{U}'\times \mathbf{V}'^T \in \mathbf{B'}$. Furthermore, using the fact that $\mathrm{r}(B')\leq \mathrm{r}(B)+1$ we can complete the proof.
\end{proof} \end{thm:lemma} \begin{thm:lemma} For any $k \in \ensuremath{{\mathbb{R}}}$, the following holds: \begin{align} \text{GRR}_{k\tau_{1},...,k\tau_{N}}(\mathbf{Y}) = \text{GRR}_{\tau_{1},...\tau_{N}}(\mathbf{Y}) \end{align} \begin{proof} If we define $\mathbf{B}$ as before, and $\mathbf{B'}$ as follows: \begin{align} \mathbf{B'} &=\{B'|\text{GRF}_{k\tau_1,...,k\tau_N}(B')=\mathbf{Y}\} \end{align} For any $B\in \mathbf{B}$ it is clear that $kB\in \mathbf{B'}$. On the other hand, for any $B'\in \mathbf{B'}$ we know that $ B'/k\in \mathbf{B}$. By using the fact that $\mathrm{r}(kB) = \mathrm{r}(B)$, we can complete the proof. \end{proof} \end{thm:lemma} Based on these lemmas and the fact that for any $i \in \{1,...,N-1\}$, there exist an $\epsilon_i$ which will satisfies the following equality: \begin{align} \label{op} \text{GRR}_{\tau_{1}^{\star},...,\tau_{i}^{\star}-\epsilon_i,...,\tau_{N}^{\star}}(\mathbf{Y}) = \text{GRR}_{\tau_{1}^{\star},...\tau_{N}^{\star}}(\mathbf{Y}) \end{align} We can show that there exists a set of $\epsilon_i$ $(i \in \{1,...,N-1\})$, that transform $(\tau_{1}^{\star},...\tau_{N}^{\star})$ in to $(\tau\textquotesingle_{1},...,\tau\textquotesingle_{N})$ with a set of linear combinations and a constant shift in the thresholds. In another word, it means we have $k_0,...,k_{N-1}$ in a way that (effect of constant shift will appear as the plus one in the inequality~\ref{thresh}): \hspace{5mm} $ T'=k_0T_0^{\star}+...+k_{N-1}T_{N-1}^{\star} $, Where $T'=(\tau_{1}',...\tau_{N}')$, $T_0=(\tau_{1}^{\star},...\tau_{N}^{\star})$ and $T_i=(\tau_{1}^{\star},...,\tau_{i}^{\star}-\epsilon_i,...,\tau_{N}^{\star})$. 
Therefore, if we define $\mathbf{B_i}$ as follows: \begin{align} \mathbf{B_i}=\{B_i|\text{GRF}_{{T_i}^{\star}}(B_i)=\mathbf{Y}\} \end{align} And considering the fact that: \begin{align} \mathrm{r}(k_0B_0+...+k_{N-1}B_{N-1})&\leq \sum_{j=0}^{N-1} \mathrm{r}(k_{j}B_{j})\\ &=\sum_{j=0}^{N-1} \mathrm{r}(B_{j}) \end{align} Finally, with Lemma~\ref{thm:thr} and equation~\ref{op} we can complete the theorem. } \end{proof} \end{thm:thm} This theorem shows that even though using a fixed set of thresholds is not optimal, the rank is still bounded in terms of $N$, and does not depend on the size of the matrix ($n$ or $m$). Other complementary lemmas are provided in the appendix. \begin{thm:rmk} The upper bound in Theorem~\ref{thm:thresh} matches the upper bound found in \citet{neumann16:what} for the case where $N=1$, $ \text{GRR}_{\tau '}(\mathbf{Y}) \leq \text{GRR}_{\tau^*}(\mathbf{Y})+1 $. \end{thm:rmk} \section{Comparing Generalized Round Rank to Linear Rank} \label{sec:GRR vs LR} Matrix factorization (MF) based on linear rank has been widely used in many machine learning problems, such as matrix completion, matrix recovery, and recommendation systems. The primary advantage of matrix factorization is its ability to model data in a compact form. Being able to represent the same data accurately in an even more compact form, especially when dealing with high-rank matrices, is thus quite important. Here, we study specific aspects of exact and approximate matrix reconstruction with $\text{GRR}$. In particular, we introduce matrices with high linear rank but low $\text{GRR}$, and demonstrate the inability of linear factorization to approximate many low-$\text{GRR}$ matrices. \subsection{Exact Low-Rank Reconstruction} To compare linear and $\text{GRR}$ matrix factorization, here we identify families of matrices that have high (or full) linear rank but low (or constant) $\text{GRR}$.
Such matrices demonstrate the primary benefit of GRR over linear rank: they admit factorizations that are significantly more compact. As provided in \citet{neumann2015some} for round-rank (a special case of $\text{GRR}$), $\text{GRR}_{{\boldsymbol{\tau}}}(\mathbf{Y})\leq r(\mathbf{Y})$ for any matrix $\mathbf{Y}\in\mathbb{V}^{n\times m}$. More importantly, there are many structures that lower bound the linear rank of a matrix. For example, if we define the upper triangle number $n_U$ for matrix $\mathbf{Y}\in \mathbb{V}^{n\times n}$ as the size of the biggest square block which is in the form of an upper triangle matrix, then $ \mathrm{r}(\mathbf{Y})\geq n_U $. If we define the identity number $n_I$ similarly, then $ \mathrm{r}(\mathbf{Y})\geq n_I $, and similarly for matrices with a band diagonal submatrix. None of these lower bounds based on identity, upper-triangle, and band-diagonal structures applies to $\text{GRR}$. In particular, as shown in \citet{neumann2015some}, identity matrices (of any size) have a constant round-rank of $2$, upper triangle matrices have round-rank of $1$, and band diagonal matrices have round-rank of $2$ (which also holds for GRR). Moreover, we provide another lower bound for the linear rank of a matrix, which is again not applicable to $\text{GRR}$. \begin{thm:thm} If a matrix $\mathbf{Y}\in\ensuremath{{\mathbb{R}}}^{n\times m}$ contains $k$ rows $R=\{Y_{R_1},...,Y_{R_k}\}$ ($k\leq n$, $k\leq m$) and two columns $C=\{j_0,j_1\}$ such that: \begin{enumerate}[nosep] \item rows in $R$ are distinct from each other, i.e, $\forall i,i'\in R, \exists j,Y_{ij}\neq Y_{i'j}$, \item columns in $C$ are distinct from each other, i.e, $\exists i,Y_{ij_0}\neq Y_{ij_1}$, and \item the entries spanning $R$ and $C$ are non-zero constants, w.l.o.g. $\forall i\in R,Y_{ij_0}=Y_{ij_1}=1$, \end{enumerate} then $\mathrm{r}(\mathbf{Y})\geq k$. (See the appendix for the proof.) \iffalse Let us assume $\mathrm{r}(\mathbf{Y})<k$, i.e.
$\exists k'<k, \mathbf{U}\in\ensuremath{{\mathbb{R}}}^{k'\times n}, \mathbf{V}\in\ensuremath{{\mathbb{R}}}^{k'\times m}$ such that $\mathbf{Y}=\mathbf{U}^T\times\mathbf{V}$. Since the rows $R$ and the columns in $C$ are distinct, their factorizations in $\mathbf{U}$ and $\mathbf{V}$ have to also be distinct, i.e. $\forall i,i'\in R, i\neq i', \mathbf{U}_{i}\neq\mathbf{U}_{i'}$ and $\mathbf{V}_{j_0}\neq\mathbf{V}_{j_1}$. Furthermore, $\forall i,i'\in R, i\neq i', \not\exists a,\mathbf{U}_{i}=a\mathbf{U}_{i'}$ and $\not\exists a,\mathbf{V}_{j_0}=a\mathbf{V}_{j_1}$ for $a\neq0$, it is clear that $\mathbf{U}_i\cdot\mathbf{V}_{j_0}=\mathbf{U}_i\cdot\mathbf{V}_{j_1}=1$ (and similarly for $i,i'\in R$). Now consider a row $i\in R$. Since $\forall j\in C, \mathbf{Y}_{ij}=1$, then $\mathbf{U}_i\cdot\mathbf{V}_{j}=1$. As a result, $\mathbf{V}_j$ are distinct vectors that lie in the hyperplane spanned by $\mathbf{U}_i\cdot\mathbf{V}_j=1$. In other words, the hyperplane $\mathbf{U}_i\cdot\mathbf{V}_j=1$ defines a $k'$-dimensional hyperplane tangent to the unit hyper-sphere. Going over all the rows in $R$, we obtain constraints that $\mathbf{V}_j$ are distinct vectors that lie in the intersection of the hyperplanes spanned by $\mathbf{U}_i\cdot\mathbf{V}_j=1$ for all $i\in R$. Since all $\mathbf{U}_i$s are distinct, there are $k$ distinct $k'$-dimensional hyperplanes, all tangent to the unit sphere, that intersect at more than one point (since $\mathbf{V}_j$s are distinct). Since $k$ hyper-planes tangent to unit sphere can intersect at at most one point in $k'<k$ dimensional space, $\mathbf{V}_j$ cannot be distinct vectors. Hence, our original assumption $k'<k$ is wrong, therefore, $\mathrm{r}(\mathbf{Y})\geq k$. \fi \end{thm:thm} So far, we provide examples of high linear-rank structures that do not impose any constraints on $\text{GRR}$. 
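These structural claims are straightforward to verify numerically. The latent-matrix constructions below (a geometric progression for the upper-triangle case, unit vectors at distinct angles for the identity) are our illustrative choices, not taken from \citet{neumann2015some}:

```python
import numpy as np

n = 8

# Upper-triangle: a rank-1 latent matrix whose rounding (threshold tau = 1)
# is exactly the upper-triangular all-ones matrix.
X1 = np.outer(2.0 ** -np.arange(n), 2.0 ** np.arange(n))   # X1_ij = 2^(j-i)
T = (X1 >= 1.0).astype(int)                                # Round_{tau=1}(X1)
assert np.array_equal(T, np.triu(np.ones((n, n), dtype=int)))

# Identity: a rank-2 latent matrix X2_ij = cos(theta_i - theta_j) with distinct
# angles; any threshold between the largest off-diagonal value and 1 rounds
# it to the identity.
theta = np.linspace(0.0, np.pi / 2, n, endpoint=False)
U = np.column_stack([np.cos(theta), np.sin(theta)])        # n x 2 factor
X2 = U @ U.T
tau = (1.0 + X2[~np.eye(n, dtype=bool)].max()) / 2.0
I = (X2 >= tau).astype(int)
assert np.array_equal(I, np.eye(n, dtype=int))

print(np.linalg.matrix_rank(X1), np.linalg.matrix_rank(T))  # 1 8
print(np.linalg.matrix_rank(X2), np.linalg.matrix_rank(I))  # 2 8
```

Both rounded matrices have full linear rank $n$, while their latent factorizations have rank $1$ and $2$, matching the round-rank bounds above.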
We now provide the following lemma that, in conjunction with the above results, indicates that lower bounds on the linear rank can be very high for matrices that contain low-GRR structures (such as identity or upper-triangle blocks), while the lower bound on $\text{GRR}$ remains low. \begin{thm:lemma} For any matrix $A$, if there exists a submatrix $A'$ such that $\mathrm{r}(A')=R$ and $\text{GRR}_{{\boldsymbol{\tau}}}(A')=r$, then $\text{GRR}_{{\boldsymbol{\tau}}}(A)\geq r$ and $\mathrm{r}(A)\geq R$. \begin{proof} Viewing the linear rank as the number of linearly independent rows (columns) of a matrix, a rank of $R$ for the submatrix $A'$ implies that there exist at least $R$ linearly independent rows in $A$, so $\mathrm{r}(A)\geq R$. The same restriction argument applied to any latent matrix $\ensuremath{\mathbf{X}}$ with $\text{GRF}_{{\boldsymbol{\tau}}}(\ensuremath{\mathbf{X}})=A$ gives the bound on $\text{GRR}$. \end{proof} \end{thm:lemma} \subsection{Approximate Low-Rank Reconstruction} Apart from examples of high linear-rank matrices that have low GRR, we can further show that many of these matrices cannot even be \emph{approximated} by a linear factorization. In other words, there exist many matrices whose linear rank is high and whose low-rank linear approximations are poor, while their low-GRR reconstruction is exact. In order to measure whether a matrix can be approximated well, we describe the notion of approximate rank (introduced by \citet{alon13:the-approximate}; we rephrase it here in our notation). \begin{thm:def} Given $\epsilon$, the \textbf{approximate rank} of a matrix $\ensuremath{\mathbf{X}}$ is:\\ $ \epsilon\text{-rank}(\ensuremath{\mathbf{X}}) = \min\{\mathrm{r}(\ensuremath{\mathbf{X}}'): \ensuremath{\mathbf{X}}'\in{\mathbb{R}}^{n\times m}, ||\ensuremath{\mathbf{X}}-\ensuremath{\mathbf{X}}'||^2_F\le\epsilon\} $ \end{thm:def} We extend this definition to introduce the generalized form of approximate rank as follows: \begin{thm:def} Given $\epsilon$ and a link function $\ensuremath{\psi}$ (e.g.
GRF), the \textbf{generalized approximate rank} of a matrix $\mathbf{Y}$ is defined as: \hspace{5mm} $ \epsilon\text{-rank}_\ensuremath{\psi}(\mathbf{Y}) = \min\{\mathrm{r}_\ensuremath{\psi}(\mathbf{Y}'): \mathbf{Y}'\!\in\!\mathbb{V}^{n\times m}, ||\mathbf{Y}-\mathbf{Y}'||^2_F\le\epsilon\} $. \end{thm:def} For an arbitrary matrix, we can evaluate how well a linear factorization can approximate it using the SVD: \begin{thm:thm} \label{thm:pca} For a matrix $\ensuremath{\mathbf{X}}=\mathbf{U}\Sigma\mathbf{V}^T$, where $\mathrm{diag}(\Sigma)$ are the singular values, and $\mathbf{U}$ and $\mathbf{V}$ are orthogonal matrices, we have $\sum_{i=k+1}^n|\Sigma_{ii}|^2 = \min_{\mathbf{Y},\mathrm{r}(\mathbf{Y})=k}||\ensuremath{\mathbf{X}}-\mathbf{Y}||^2_F$. \begin{proof} This was first shown in~\citet{eckart1936approximation}, and recently presented again in~\citet{udell14:generalized}. We omit the detailed proof, but the primary intuition is that the rank-$k$ truncated SVD minimizes the Frobenius norm, with $\mathbf{Y}=\mathbf{U}'\mathbf{V}'$, $\mathbf{U}'=\mathbf{U}\Sigma^{\frac{1}{2}}$ and $\mathbf{V}'=\Sigma^{\frac{1}{2}}\mathbf{V}^T$. \end{proof} \end{thm:thm} For an arbitrary binary matrix $\mathbf{Y}$, recall that $\text{round-rank}_{\tau=0}(\mathbf{Y})$ is equal to $\text{sign-rank}(\mathbf{Y})$. Using the above theorem, we want to show that there are binary matrices that cannot be approximated by low linear-rank matrices (for non-trivial $\epsilon$), but can be approximated well by low round-rank matrices. Clearly, these results extend to ordinal matrices and their $\text{GRR}$ approximations, the generalized form of the binary case. Let us consider $\mathbf{Y}$, the identity binary matrix of size $n$, for which the singular values of $\mathbf{Y}$ are all $1$s. By Theorem~\ref{thm:pca}, any linear factorization $\mathbf{Y}'$ of rank $k$ will have $||\mathbf{Y}-\mathbf{Y}'||_F^2\geq n-k$.
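This bound is easy to check directly, since the best rank-$k$ approximation in Frobenius norm is given by the truncated SVD:

```python
import numpy as np

n, k = 50, 10
Y = np.eye(n)                               # identity matrix, all singular values 1
U, s, Vt = np.linalg.svd(Y)
Yk = (U[:, :k] * s[:k]) @ Vt[:k, :]         # best rank-k approximation (Eckart-Young)
err = np.linalg.norm(Y - Yk, "fro") ** 2
print(err)                                  # equals n - k (= 40) up to float error
```

No rank-$10$ linear factorization of the $50\times 50$ identity can do better than a squared Frobenius error of $40$.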
As a result, the identity matrix cannot be approximated by \emph{any} rank-$k$ linear factorization for $\epsilon< n-k$. On the other hand, such a matrix can be reconstructed exactly by a rank-$2$ factorization under the round link function, since $\text{round-rank}(\mathbf{Y})=2$. In Figure~\ref{fig:pca_plot}, we illustrate a number of other such matrices: each can be represented exactly by a factorization with $\text{GRR}$ of $2$, yet cannot be approximated by any compact linear factorization. \begin{figure} \centering \includegraphics[width=0.7\columnwidth]{pca_plot.pdf} \caption{Comparison of the optimal linear factorization approximation as the rank $k$ is varied for a number of matrices (of size $n\times n$), demonstrating that linear factorization is unable to approximate these matrices with low rank. All of these matrices have a \emph{constant} generalized round-rank ($\leq 2$).} \label{fig:pca_plot} \end{figure} \section{Matrix Completion with Generalized Round-Rank Factorization} \label{sec:SC} So far, we have shown that there are many matrices that cannot be represented compactly by conventional (linear) matrix factorization, either exactly or approximately, whereas they can be reconstructed from compact factors when $\text{GRF}$ is used as the link function. In this section, we study properties of the \emph{completion} of ordinal-valued matrices based on $\text{GRF}$ (with rank measured by $\text{GRR}$). In particular, given a number of noise-free observations $\Omega$ from $\mathbf{Y}\in\{0,\ldots,N\}^{n\times m}$ with $\text{GRR}(\mathbf{Y})=r$, $r\ll\min(n,m)$, the goal is to identify $\mathbf{U}\in{\mathbb{R}}^{n\times r}$ and $\mathbf{V}\in{\mathbb{R}}^{m\times r}$ such that $\text{GRF}(\mathbf{U}\mathbf{V}^T)$ completes the unobserved entries of $\mathbf{Y}$ accurately.
\subsection{Theoretical Results for Uniqueness} Uniqueness in matrix completion concerns the minimum number of entries required to recover the matrix $\mathbf{Y}$ with high probability, assuming the set of observed entries is sampled from a specific distribution. To obtain uniqueness for $\text{GRR}$-based factorization, we first introduce the interval matrix $\mathbf{\bar{X}}$. Based on the definition of the generalized round function ($\text{GRF}$) and a set of fixed thresholds, we define $\mathbf{\bar X}$ to be the matrix whose interval entries are determined by the entries of $\mathbf{Y}$ and the thresholds $(\tau_1,\ldots,\tau_N)$. For example, if an entry $\mathbf{Y}_{ij}$ equals $k \in \{0,\ldots,N\}$, then $\mathbf{\bar X}_{ij}$ is the interval $[\tau_k,\tau_{k+1}]$. When entries of $\mathbf{Y}$ are equal to $0$ or $N$, w.l.o.g. we assume the corresponding entries of $\mathbf{\bar{X}}$ are bounded. Thus, each entry of $\mathbf{\bar X}$ is one of the $N+1$ possible intervals determined by the $\text{GRF}$ thresholds. \begin{thm:def} A target matrix $\mathbf{Y} \in \{0,\ldots,N\}^{n \times m}$ with 1)~observed set of entries $\Omega=\{(i,j): \mathbf{Y}_{ij} \text{ is observed}\}$, 2)~a set of known thresholds $(\tau_1,\ldots,\tau_N)$, and 3)~$\text{GRR}_{\tau_1,\ldots,\tau_N}(\mathbf{Y})=r$, is called uniquely recoverable if we can recover its unique interval matrix $\mathbf{\bar X}$ with high probability. \end{thm:def} Alongside $\mathbf{\bar X}$, we introduce $\mathcal{X}^{\star}$, the set of all matrices $\ensuremath{\mathbf{X}}^{\star}$ satisfying the following two conditions: 1)~for the observed entries $\Omega$ of $\mathbf{Y}$, $\mathbf{Y}_{ij}=\text{GRF}_{\tau_1,\ldots,\tau_N}(\ensuremath{\mathbf{X}}^{\star}_{ij})$, and 2)~the linear rank of $\ensuremath{\mathbf{X}}^{\star}$ is $r$.
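The interval matrix $\mathbf{\bar X}$ is mechanical to construct from $\mathbf{Y}$ and the thresholds; a minimal sketch, where the finite caps `lo`/`hi` that bound the extreme labels $0$ and $N$ are our illustrative choices:

```python
import numpy as np

def interval_matrix(Y, taus, lo=-10.0, hi=10.0):
    """Map each ordinal entry Y_ij = k to the interval [tau_k, tau_{k+1}].

    `taus` holds the GRF thresholds (tau_1, ..., tau_N); the extreme labels
    0 and N are bounded by the caps `lo` and `hi`, matching the assumption
    in the text that the entries of X-bar are bounded.
    """
    bounds = np.concatenate(([lo], np.asarray(taus, dtype=float), [hi]))
    lower = bounds[Y]        # tau_k   (or lo when k = 0)
    upper = bounds[Y + 1]    # tau_{k+1} (or hi when k = N)
    return lower, upper

# Ordinal matrix with N = 2 (labels 0, 1, 2) and thresholds tau_1, tau_2.
Y = np.array([[0, 1], [2, 1]])
lower, upper = interval_matrix(Y, taus=[0.5, 1.5])
assert lower[0, 0] == -10.0 and upper[0, 0] == 0.5   # label 0 -> [lo, tau_1]
assert lower[0, 1] == 0.5 and upper[0, 1] == 1.5     # label 1 -> [tau_1, tau_2]
assert lower[1, 0] == 1.5 and upper[1, 0] == 10.0    # label 2 -> [tau_2, hi]
```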
For any matrix $\ensuremath{\mathbf{X}} \in \mathcal{X}^{\star}$ and any entry, we must have $\ensuremath{\mathbf{X}}_{ij} \in \mathbf{\bar X}_{ij}$. Given a matrix $\ensuremath{\mathbf{X}}\in\mathcal{X}^{\star}$, the uniqueness conditions ensure that we can recover $\mathbf{\bar X}$, from which we can uniquely recover the matrix $\mathbf{Y}$. In the next theorems, we first find the necessary condition on the entries of $\ensuremath{\mathbf{X}}$ for uniqueness of $\mathbf{Y}$, and then derive the sufficient condition accordingly. Throughout, we assume that the thresholds are fixed, that the target matrix $\mathbf{Y}$ is noiseless, and that every row and column of $\mathbf{Y}$ contains at least one observed entry. \begin{thm:thm} (Necessary Condition) For a target matrix $\mathbf{Y} \in \mathbb{V}^{n \times m}$ with few observed entries and $\text{GRR}(\mathbf{Y})=r$, let $\{\mathbf{Y}_{i_1j},\ldots,\mathbf{Y}_{i_rj}\}$ be $r$ observed entries in an arbitrary column $j$ of $\mathbf{Y}$. Given any matrix $\ensuremath{\mathbf{X}} \in \mathcal{X}^{\star}$ with $\ensuremath{\mathbf{X}}=\mathbf{U}\mathbf{V}^{T}$, and an unobserved entry $\mathbf{Y}_{ij}$, we define the coefficients $a_{i_kj}$ by $ \mathbf{U}_i=\sum_{k=1}^{r} a_{i_kj} \mathbf{U}_{i_k} $, where $\mathbf{U}_d$ ($d \in \{1,\ldots,n\}$) is the $d^\text{th}$ row of $\mathbf{U}$ and $i_k$ indexes the observed entries in the $j$th column. Then the necessary condition for uniqueness of $\mathbf{Y}$ is: \begin{align} \sum_{k=1}^{r}\left | a_{i_kj} \right | \leq \epsilon\left(\frac{T_\text{min}}{T_\text{max}}\right) \end{align} where $r=\text{GRR}(\mathbf{Y})$, $T_\text{min}$ and $T_\text{max}$ are the lengths of the smallest and largest intervals, and $\epsilon$ is a small constant.
\begin{proof} We provide only a sketch of the proof here; the details are included in the appendix. To achieve uniqueness, we need a condition guaranteeing that, for any column of $\ensuremath{\mathbf{X}}$, perturbing the corresponding row of $\mathbf{V}$ while keeping the observed entries inside their intervals cannot move any unobserved entry into a different interval. To this end, we compute the maximum possible change of an arbitrary unobserved entry of column $j$ of $\mathbf{Y}$. For any unobserved entry $\mathbf{Y}_{ij}$, we write the row $\mathbf{U}_i$ as a linear combination of the linearly independent rows of $\mathbf{U}$ corresponding to the observed entries of column $j$. Then, bounding the maximum possible change of the observed entries in column $j$ in terms of their respective intervals yields the stated bound for uniqueness. \iffalse To better understand the concept of uniqueness in $\text{GRR}$ benchmark, let's first look at the uniqueness in fixed value linear matrix factorization. In fixed value matrix factorization, it is proved that to achieve uniqueness, we need at least $r=\mathrm{r}(\ensuremath{\mathbf{X}})$ observation in each column (other than the linearly independent columns). Therefore, if we decompose $\ensuremath{\mathbf{X}}$ as $\ensuremath{\mathbf{X}}=\mathbf{U}\mathbf{V}^T$, and decide to change only unobserved entries of $\ensuremath{\mathbf{X}}$ in column $j$ (in opposed to uniqueness), we need to change the $jth$ row of matrix $\mathbf{V}$. To do so, let's assume we change the $jth$ row to: $ [\mathbf{V}_{j1}+c_1,...,\mathbf{V}_{jr}+c_r] $.
Now since we know $\mathrm{r}(U)=r$ and assume the respective rows of $\mathbf{U}$ to observed entries of column $j$ in matrix $\ensuremath{\mathbf{X}}$ are independent (as a consequence of uniqueness), we can see that only possible value for $c_1,..., c_r$ that does not change the observed entries of $\ensuremath{\mathbf{X}}$ is equal to $0$ (using $\forall q\in\{1,..r\}, \sum_{j=1}^{r}U_{i_qj}\times C_j =0 $). The biggest difference between factorization based on $\text{GRR}$ and linear factorization is the fact that the observed entries of matrix $\ensuremath{\mathbf{X}}$ ($\mathbf{Y}=\text{GRF}(\ensuremath{\mathbf{X}})$) are not fixed in $\text{GRR}$ version, and can change through the respective interval. In result, to achieve uniqueness we need to find a condition in which for any column of $\ensuremath{\mathbf{X}}$, by changing respective row of $\mathbf{V}$, while the value of observed entries stay in the respected intervals, the value of unobserved ones wouldn't change dramatically which result in moving to other intervals. To do so, we will calculate the maximum of the possible change for an arbitrary unobserved entry of column $j$ in matrix $\mathbf{Y}$. Let's call the $r$ observed entries of column's $j$ of matrix $\mathbf{Y}$, $\mathbf{Y}_{i_{1}j},...,\mathbf{Y}_{i_{r}j}$. Similar to linear factorization, we assume that the respective rows of $\mathbf{U}$ to these entries are linearly independent. In result, if we represent the change in entries of $j^\text{th}$ rows of $\mathbf{V}$ by $c_i$, we should have: \begin{align} \begin{bmatrix} \mathbf{U}_{i_1}\\ \vdots \\ \mathbf{U}_{i_r} \end{bmatrix} \times \begin{bmatrix} c_1\\ \vdots \\ c_r \end{bmatrix} = \begin{bmatrix} \epsilon_{i_1j}\\ \vdots \\ \epsilon_{i_rj} \end{bmatrix} \end{align} Where $\mathbf{U}_{i_k}$ is the $i_k th$ row of $\mathbf{U}$, and $\epsilon_{i_kj}$ is the possible change for $\ensuremath{\mathbf{X}}_{i_{k}j}$, based on the observed interval. 
Therefore: \begin{align} \epsilon_{i_kj} \in (\tau_{i_kj}\downarrow-\ensuremath{\mathbf{X}}_{i_kj},\tau_{i_kj}\uparrow-\ensuremath{\mathbf{X}}_{i_kj})=(\epsilon_{i_kj}^-,\epsilon_{i_kj}^+) \end{align} Where $\tau_{i_kj}\downarrow$ and $\tau_{i_kj}\uparrow$ are lower bound and upper bound of respective interval of $\ensuremath{\mathbf{X}}_{i_kj}$ calculated based on $\mathbf{Y}_{i_kj}$. Now let's assume we want to find the maximum possible change for $\ensuremath{\mathbf{X}}_{sj}$ considering that $\mathbf{Y}_{sj}$ is an unobserved entry. Since $\mathbf{U}_{i_k}$'s are independent, there exist $a_1,..a_r$ such that $ \mathbf{U}_s=\sum_{k=1}^{r}a_{i_kj}\mathbf{U}_{i_k} $. Therefore, we can show the change in entry $\ensuremath{\mathbf{X}}_{sj}$ as $ A=\sum_{k=1}^{r}a_{i_kj}\epsilon_{i_kj} $. In result, for the maximum possible change we have: \begin{align} \max|A|=\max(\sum_{k=1}^{r}a_{i_kj}\epsilon_{i_kj}^{\text{sign}(a_{i_kj})},|\sum_{k=1}^{r}a_{i_kj}\epsilon_{i_kj}^{-\text{sign}(a_{i_kj})}|) \end{align} Where $\text{sign}(.)$ is the sign function. On the other hand: \begin{align} \sum_{k=1}^{r}a_{i_kj}\epsilon_{i_kj}^{\text{sign}(a_{i_kj})}+|\sum_{k=1}^{r}a_{i_kj}\epsilon_{i_kj}^{-\text{sign}(a_{i_kj})}|=\sum_{k=1}^{r}|a_{i_kj}|T_{i_kj} \end{align} \begin{align} \Rightarrow \max|A| \geqslant \frac{1}{2}\sum_{k=1}^{r}|a_{i_kj}|T_{i_kj} \end{align} where $T_{i_kj}$ is the length of $\bold{\bar X}_{i_kj}$ (an interval entry). Clearly, to achieve the uniqueness we need $\max|A|\leq T_{sj}$. But, since the entry $\mathbf{Y}_{sj}$ is unobserved we do not know the value of $T_{sj}$. In result, for uniqueness in the worst case we need: \begin{align} \sum_{k=1}^{r}|a_{i_kj}|T_{\max} &\leq \epsilon T_{\min}\\ \Rightarrow\sum_{k=1}^{r}|a_{i_kj}| &\leq\epsilon \frac{T_{\min}}{T_{\max}} \end{align} Where $T_{\min}$ and $T_{\max}$ are the smallest and the biggest interval, and $\epsilon$ is a small real constant. 
\fi \end{proof} \end{thm:thm} The same condition is necessary for the matrix $\mathbf{V}$ as well, and the necessary condition must be satisfied for all columns of $\ensuremath{\mathbf{X}}$. Moreover, if the necessary condition is not satisfied, we cannot find a unique matrix $\ensuremath{\mathbf{X}}$, and hence no unique completion $\mathbf{Y}=\text{GRF}_{\tau_1,\ldots,\tau_N}(\ensuremath{\mathbf{X}})$ with $\ensuremath{\mathbf{X}} \in \mathcal{X}^{\star}$. \begin{thm:thm} (Sufficient Condition) In the setting of the necessary condition above, for any unobserved entry $\mathbf{Y}_{ij}$ of $\mathbf{Y}$ we define $\bar\epsilon$ as the minimum distance of $\ensuremath{\mathbf{X}}_{ij}$ to the boundaries of its interval. Then the following inequality is a sufficient condition for uniqueness: \begin{align} \bar\epsilon \geq \max \left(\sum_{k=1}^{r}a_{i_kj}\epsilon_{i_kj}^{\text{sign}(a_{i_kj})} , \left | \sum_{k=1}^{r}a_{i_kj}\epsilon_{i_kj}^{-\text{sign}(a_{i_kj})} \right |\right) \end{align} where $r$ and $a_{i_kj}$ are defined as before, $\epsilon_{i_kj}^{+}$ is the distance of $\ensuremath{\mathbf{X}}_{i_kj}$ to its upper bound, and $\epsilon_{i_kj}^{-}$ is the negative of the distance of $\ensuremath{\mathbf{X}}_{i_kj}$ to its lower bound. \end{thm:thm} The sufficient condition above follows directly from the proof of the necessary condition. Although not tight, it guarantees the existence of a unique $\mathbf{\bar X}$, and thus of the completed matrix $\mathbf{Y}$. \subsection{Gradient-Based Algorithm for $\text{GRR}$ Factorization} Although previous studies have used many different paradigms for matrix factorization, such as alternating minimization \citep{hardt2014understanding,jain2013low} and adaptive sampling~\citep{krishnamurthy2013low}, stochastic gradient descent (SGD) based approaches have gained widespread adoption, in part due to their flexibility, scalability, and theoretical properties~\citep{de2014global}.
For linear matrix factorization, one minimizes the squared-error loss $L_{\text{linear}}=\sum (Y_{ij}- U_iV_j)^2$, where the summation is over the observed entries; $L_2$ regularization is often incorporated to prevent over-fitting. \para{Round} We extend this framework to support $\text{GRR}$-based factorization by defining an alternative loss function. In particular, for each observed entry $Y_{ij}$ and the current estimate of ${\boldsymbol{\tau}}$, we compute $b_{ij}^\downarrow$ and $b_{ij}^\uparrow$, the lower and upper bounds for $X_{ij}$ with respect to the $\text{GRF}$. Given these, we use the loss $L_{\text{Round}}=\sum (b_{ij}^\downarrow - U_iV_j)_++( U_iV_j - b_{ij}^\uparrow)_+$, where $(x)_+=\max(x,0)$. Including the regularization term, we apply stochastic gradient descent as before, computing gradients with respect to $\mathbf{U}$, $\mathbf{V}$, and ${\boldsymbol{\tau}}$ using a differentiable surrogate of $\max$. \para{Multi-Sigmoid} Although the above loss captures the goal of $\text{GRR}$-based factorization exactly, it contains both discontinuities and flat regions, and is thus difficult to optimize. We therefore also propose a smoother, noise-tolerant approximation of the $\text{GRF}$ function. The sigmoid function $\sigma(x)=\frac{1}{1+e^{-x}}$, for example, is often used to approximate the $\text{sign}$ function. When used as a link function in factorization, we can further show that it approximates the $\text{sign-rank}$ well. \begin{thm:thm} For any $\epsilon>0$ and matrix $\mathbf{Y}$, $\text{sign-rank}(\mathbf{Y}) = \epsilon\text{-rank}_\sigma(\mathbf{Y})$. (See appendix for the proof.) \iffalse \begin{proof} By introducing $\mathcal{B}_\sigma^\epsilon(k)=\{\mathbf{B}\in\{0,1\}^{n\times m}; \epsilon\text{-rank}_\sigma(\mathbf{B})=k\}$, i.e. the set of binary matrices whose $\epsilon\text{-rank}_\sigma$ is $k$, we prove the theorem by showing both directions.
$\mathcal{B}_+\subseteq\mathcal{B}_\sigma$: Any $\mathbf{U},\mathbf{V}$ that works for $+$ should work with $\sigma$ if multiplied by a very large number, i.e. take a sufficiently large $\eta$, and $\mathbf{U}_\sigma=\eta\mathbf{U}_+,\mathbf{V}_\sigma=\eta\mathbf{V}_+$. Then, $\ensuremath{\mathbf{X}}_\sigma=\eta^2\ensuremath{\mathbf{X}}_+$ and if we set $\mathbf{\theta}_\sigma=\eta^2\mathbf{\theta}_+$, then $(\ensuremath{\mathbf{X}}_\sigma-\mathbf{\theta}_\sigma)=\eta^2(\ensuremath{\mathbf{X}}_+-\mathbf{\theta}_+)$, therefore will have the same sign, and $\mathbf{Y}_\sigma=\sigma(\ensuremath{\mathbf{X}}_\sigma)$ will be arbitrarily close to $0$ and $1$ in $\mathbf{Y}_+$. $\mathcal{B}_\sigma\subseteq\mathcal{B}_+$: Any $\mathbf{U},\mathbf{V}$ that works for $\sigma$ will directly work with $+$. \end{proof} \fi \end{thm:thm} We can similarly approximate $\text{GRF}$ using a sum of sigmoid functions that we call $\text{Multi-sigmoid}$ defined as $\ensuremath{\psi}^{m\sigma}_{{\boldsymbol{\tau}}}(x)=\sum_{d=1}^{N}\sigma(x-\tau_d)$, for which the above properties also hold. The resulting loss function that minimizes the squared error is $L_{\text{multi-sigmoid}}=\sum(Y_{ij}-\ensuremath{\psi}^{m\sigma}_{\boldsymbol{\tau}}(U_iV_j))^2$. In our experiments, we evaluate both of our proposed loss functions, and compare their relative performance. We study variations in which the thresholds ${\boldsymbol{\tau}}$ are either pre-fixed or updated (using $\frac{\partial}{\partial{\boldsymbol{\tau}}}L$) during training. All the parameters of the optimization, such as learning rate and early stopping, and the hyper-parameters of our approaches, such as regularization, are tuned on validation data. 
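The two proposed losses can be sketched in a few lines of numpy. The binary example, the interval caps $\pm 10$, and the helper names are our illustrative choices; regularization and the SGD loop are omitted:

```python
import numpy as np

def round_loss(X, lower, upper, mask):
    """L_Round: hinge penalty (b_lo - x)_+ + (x - b_hi)_+ over observed entries.

    The loss is zero exactly when every observed reconstruction x_ij lies
    inside its GRF interval [b_lo, b_hi], i.e. rounding recovers Y there.
    """
    pen = np.maximum(lower - X, 0.0) + np.maximum(X - upper, 0.0)
    return float(np.sum(pen * mask))

def multi_sigmoid(x, taus):
    """Smooth GRF surrogate: psi(x) = sum_d sigmoid(x - tau_d)."""
    x = np.asarray(x, dtype=float)
    return sum(1.0 / (1.0 + np.exp(-(x - t))) for t in taus)

def multi_sigmoid_loss(X, Y, taus, mask):
    """L_multi-sigmoid: squared error between labels and psi(U V^T)."""
    return float(np.sum(((Y - multi_sigmoid(X, taus)) ** 2) * mask))

# Tiny example: binary labels (N = 1) with a single threshold tau_1 = 0.5.
Y = np.array([[0.0, 1.0]])
X = np.array([[0.2, 0.9]])            # current reconstruction U V^T
lower = np.array([[-10.0, 0.5]])      # per-entry GRF interval bounds
upper = np.array([[0.5, 10.0]])
mask = np.ones_like(Y)                # all entries observed
assert round_loss(X, lower, upper, mask) == 0.0       # both entries in-interval
assert np.isclose(round_loss(np.array([[0.7, 0.9]]), lower, upper, mask), 0.2)
```

Either loss can then be plugged into a standard SGD loop over the observed entries, with gradients taken with respect to $\mathbf{U}$, $\mathbf{V}$, and, if desired, ${\boldsymbol{\tau}}$.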
\section{Experiments} \label{sec:results} In this section we evaluate the capabilities of our proposed $\text{GRR}$ factorization relative to linear factorization, first through a variety of simulations, and then on the \emph{smallnetflix} and \emph{MovieLens 100K}\footnote{The code is available at: \url{https://github.com/pouyapez/GRR-Matrix-Factorization}} datasets. Unless otherwise noted, all evaluations are based on Root Mean Square Error (RMSE). \paragraph{Matrix Recovery} We first consider the problem of recovering a fully known matrix $\mathbf{Y}$ from its factorization; thus all entries are considered observed. We create three matrices in order to evaluate our approaches for recovery: (a)~a random $10\times10$ matrix with $N=5$ that has $\text{GRR}\leq 2$ (created by randomly generating ${\boldsymbol{\tau}}$, $\mathbf{U}$, and $\mathbf{V}$), (b)~a binary upper-triangular matrix of size 10 ($\text{GRR}$ of 1), and (c)~a band-diagonal matrix of size 10 and bandwidth 3, which has linear rank $8$ and $\text{GRR}$ of 2. Figure~\ref{fig:fullrec} presents the RMSE comparison for these three matrices as training progresses. For the upper-triangular and band-diagonal matrices, we fix the threshold to $\tau=0.5$. The results show that Round works far better than the alternatives, converging to zero error. Moreover, the linear approach is outperformed by Multi-sigmoid with learned thresholds in all three cases, demonstrating that linear factorization cannot recover even these simple matrices.
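As a sanity check on matrix (b), a few lines of numpy confirm that the binary upper-triangular matrix has full linear rank yet admits an explicit rank-1 rounding witness. The particular witness $u_i = 2^{-i}$, $v_j = 2^{j}$ and the threshold $0.75$ are our illustrative choices, not taken from the experiments:

```python
import numpy as np

n = 10
Y = np.triu(np.ones((n, n)))     # binary upper-triangular target, as in (b)

# Full linear rank: the upper-triangular all-ones matrix is invertible.
assert np.linalg.matrix_rank(Y) == n

# GRR of 1: with u_i = 2^{-i} and v_j = 2^{j}, (u v^T)_{ij} = 2^{j-i} is
# >= 1 iff j >= i and <= 1/2 otherwise, so any threshold strictly between
# 1/2 and 1 (here 0.75) rounds the rank-1 product back to Y.
u = 2.0 ** -np.arange(n)
v = 2.0 ** np.arange(n)
X = np.outer(u, v)               # linear rank 1
assert np.array_equal((X > 0.75).astype(float), Y)
```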
\begin{figure*}[tb] \centering \begin{subfigure}{0.31\textwidth} \centering \includegraphics[width=\textwidth]{Random.pdf} \caption{Random matrix, k=2} \end{subfigure} \quad \begin{subfigure}{0.31\textwidth} \centering \includegraphics[width=\textwidth]{uppertriangle.pdf} \caption{Upper Triangle matrix, k=1} \end{subfigure} \quad \begin{subfigure}{0.31\textwidth} \centering \includegraphics[width=\textwidth]{banddiagonal.pdf} \caption{Band Diagonal matrix, k=2} \end{subfigure} \caption{\textbf{Matrix Recovery:} Synthetic matrices that are reconstructed using their $k$-dimensional factorization with different representations. We plot the RMSE of the reconstruction against the number of training iterations, demonstrating the efficiency of $\text{GRR}$-based methods, especially without fixed thresholds.} \label{fig:fullrec} \end{figure*} \paragraph{Matrix Completion} Moving beyond fully observed matrices, we now evaluate completion when only a few of the entries are observed. We consider $50\times50$ upper-triangular and band-diagonal (bandwidth $10$) matrices and sample entries from them, to illustrate how well our approaches can complete them. Results on the held-out 20\% of entries are given in Tables~\ref{tab:MC-UT} and \ref{tab:MC-BD}. In addition, we build a random matrix of size 50 with $\text{GRR}$ 2, and present the results for it in Table~\ref{tab:MC-Ra}. As we can see, linear factorization is outperformed by our proposed approaches in all three cases. For the band-diagonal matrix, Multi-sigmoid performs slightly better because the Round approach over-fits; for the upper-triangular matrix, the best result is achieved by the Round method with fixed $\tau=0.5$. \paragraph{Matrix Completion on Real Data} In this section we use the \emph{smallnetflix} movie-ratings data with $95,526$ users and $3,561$ movies, where the training set contains $3,298,163$ ratings and the validation set contains $545,177$ ratings; each rating is an integer in $\{1,2,3,4,5\}$.
We also evaluate on a second movie recommendation dataset, \emph{Movielens 100k}, with $100,000$ ratings from $1000$ users on $1700$ movies, with the same rating range as \emph{smallnetflix}. For these recommendation tasks, in addition to RMSE we also report an accuracy measure better suited to the task: the fraction of predicted ratings within $\pm0.5$ of the true ratings. As shown in Figure~\ref{fig:smallnet}, for \emph{smallnetflix}, linear factorization achieves better RMSE than the Round approach, probably because it is more robust to noise. On the other hand, Multi-sigmoid achieves better RMSE than the linear method. Furthermore, both Round and Multi-sigmoid outperform linear factorization in accuracy. The \emph{Movielens} results for the percentage metric show similar behavior to \emph{smallnetflix}, demonstrating that $\text{GRR}$-based factorization can provide benefits in real-world applications. Furthermore, a comparison of our models with existing approaches on the \emph{Movielens} dataset is provided in Table~\ref{tab:Ml}; we report the RMSE for the smallest $k$ presented in each work. As we can see, our Multi-sigmoid method is very competitive with existing methods, while our Round approach again suffers from the noise in the dataset.
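The accuracy measure used here is simple to state precisely. A minimal sketch; we assume the $\pm0.5$ bound is inclusive, which the text does not specify:

```python
import numpy as np

def within_half_accuracy(pred, truth):
    """Fraction of predicted ratings within +/-0.5 of the true ratings."""
    pred = np.asarray(pred, dtype=float)
    truth = np.asarray(truth, dtype=float)
    return float(np.mean(np.abs(pred - truth) <= 0.5))

# Predictions 1.2 (vs 1) and 5.0 (vs 5) count as correct; 3.9 (vs 3) does not.
assert within_half_accuracy([1.2, 3.9, 5.0], [1, 3, 5]) == 2 / 3
```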
\begin{table*}[tb] \centering \caption{Matrix completion for \textbf{Upper Triangular Matrices} ($k=1$)} \label{tab:MC-UT} \begin{tabular}{lcccccccc} \toprule Proportion of Observations& 10\%& 20\%& 30\%& 40\%& 50\%& 60\%& 70\%& 80\% \\ \midrule Linear&0.50&0.50&0.50&0.50&0.50&0.50&0.50&0.50\\ \addlinespace Multi-Sigmoid&0.51&0.30&0.25&0.25&0.26&0.25&0.23&0.23\\ Multi-Sigmoid, $\tau=0.5$&0.58&0.37&0.36&0.36&0.35&0.35&0.34&0.34\\ Round&0.46&0.34&0.27&0.25&0.26&0.21&0.20&0.16\\ Round, $\tau=0.5$&\bf{0.38}&\bf{0.26}&\bf{0.23}&\bf{0.19}&\bf{0.15}&\bf{0.13}&\bf{0.15}&\bf{0.13}\\ \bottomrule \end{tabular} \end{table*} \begin{table*}[tb] \centering \caption{Matrix completion for \textbf{Band Diagonal Matrices} ($k=2$)} \label{tab:MC-BD} \begin{tabular}{lcccccccc} \toprule Proportion of Observations& 10\%& 20\%& 30\%& 40\%& 50\%& 60\%& 70\%& 80\% \\ \midrule Linear&0.49&0.46&0.46&0.46&0.46&0.46&0.46&0.46\\ \addlinespace Multi-Sigmoid&\bf{0.39}&\bf{0.26}&\bf{0.23}&\bf{0.23}&\bf{0.22}&\bf{0.21}&\bf{0.20}&\bf{0.20}\\ Multi-Sigmoid, $\tau=0.5$&0.48&0.49&0.33&0.31&0.30&0.29&0.29&0.29\\ Round&0.71&0.41&0.35&0.29&0.29&0.27&0.23&0.22\\ Round, $\tau=0.5$&0.61&0.57&0.39&0.52&0.58&0.30&0.29&0.34\\ \bottomrule \end{tabular} \end{table*} \begin{table*}[tb] \centering \caption{Matrix completion with different number of samples for \textbf{Random low-GRR Matrices}} \label{tab:MC-Ra} \begin{tabular}{lcccccccc} \toprule Proportion of Observations& 10\%& 20\%& 30\%& 40\%& 50\%& 60\%& 70\%& 80\% \\ \midrule Linear&1.73&1.06&0.97&0.90&0.85&0.85&0.87&0.83\\ \addlinespace Multi-Sigmoid&1.92&\bf{0.53}&\bf{0.48}&\bf{0.42}&\bf{0.39}&\bf{0.38}&0.36&0.35\\ Multi-Sigmoid (Fixed $\tau$)&1.96&1.54&1.37&1.32&1.29&1.28&1.25&1.23\\ Round&\bf{1.49}&0.92&0.60&0.48&0.48&0.39&\bf{0.30}&\bf{0.28}\\ Round (Fixed $\tau$)&2.44&1.50&1.50&1.43&1.36&1.39&1.44&1.34\\ \bottomrule \end{tabular} \end{table*} \begin{figure*}[tb] \centering \begin{subfigure}{0.30\textwidth} \centering 
\includegraphics[width=\textwidth]{smallnetflix-percentage-log.pdf} \caption{Percentage, smallnetflix} \end{subfigure} \begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=\textwidth]{smallnetflix-rmse-log.pdf} \caption{RMSE, smallnetflix} \end{subfigure} \begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=\textwidth]{movielens-percentage-log.pdf} \caption{Percentage, movielens} \end{subfigure} \caption{Performance on recommendation datasets, as $k$ is increased} \label{fig:smallnet} \end{figure*} \begin{table*}[tb] \centering \caption{RMSE on Movielens-100k for a variety of models with different low-rank approximation (k).} \label{tab:Ml} \begin{tabular}{lcc} \toprule Models&Low-rank approximation&RMSE\\ \midrule APG~\citep{kwok2015accelerated}& k=70 & 1.037\\ AIS-Impute~\citep{kwok2015accelerated}& k=70 & 1.037\\ CWOCFI~\citep{lu2013second}& k=10 & 1.01\\ our Round& k=10 & 1.007\\ our Linear& k=10 & 0.995\\ UCMF~\citep{zhang2014information}& - & 0.948\\ our Multi-sigmoid& k=10 & 0.928\\ SVDPlusPlus~\citep{gantner2011mymedialite}& k=10 & 0.911\\ SIAFactorModel~\citep{gantner2011mymedialite}& k=10 & 0.908\\ GG~\citep{lakshminarayanan2011robust}& k=30 & 0.907\\ \bottomrule \end{tabular} \end{table*} \section{Related Work} \label{sec:related} There is a rich literature on matrix factorization and its applications. To date, a number of link functions have been used, along with different losses for each; however, we are the first to focus on the expressive capabilities of these link functions, in particular for ordinal-valued matrices~\citep{singh08:a-unified,koren2011ordrec,paquet2012hierarchical,udell14:generalized}. \citet{nickel13:logistic} addressed the tensor factorization problem and showed improved performance when using a sigmoid link function. \citet{marevcek2017matrix} introduced the concept of matrix factorization based on interval uncertainty, which results in an objective similar to our algorithm's.
However, our proposed algorithm goes further by updating the thresholds and supporting sigmoid-based smoothing, and we additionally present results on the representation capabilities of the round link function. A number of methods have approached matrix factorization from a probabilistic view, primarily describing solutions for different forms of noise, resulting, interestingly, in link functions as well. \citet{collins01:a-generalization} introduced a generalization of PCA to loss functions for non-real-valued data, such as binary-valued matrices. \citet{salakhutdinov08:bayesian} focused on a Bayesian treatment of probabilistic matrix factorization, identifying the appropriate priors to encode various \emph{link} functions. \citet{lawrence09:non-linear} analyzed non-linear matrix factorization based on Gaussian processes and used SGD to optimize their model. However, these approaches do not explicitly investigate representation capabilities, in particular the significant difference in \emph{rank} when link functions are taken into account. Sign-rank and its properties have been studied by \citet{nickel14:reducing,bouchard15:on-approximate,davenport14:1-bit}, and more recently, \citet{neumann2015some} provided an in-depth analysis of round-rank. Although these notions are similar to $\text{GRR}$, sign-rank and round-rank are limited to binary matrices, while $\text{GRR}$ is more suitable for most practical applications; further, we present extensions of their results that apply to $\text{GRR}$ as well. Since matrix factorization can be viewed as a simple neural network, research on the complexity of neural networks~\citep{huang03:learning}, in particular with rectifier units~\citep{pan2016expressiveness}, is relevant; however, those results differ significantly from the representational aspects we focus on.
\section{Conclusions and Future Work} \label{sec:conclusions} In this paper, we demonstrated the expressive power of link functions in matrix factorization, specifically the generalized round-rank ($\text{GRR}$) for ordinal-valued matrices. We showed not only that there are full-rank matrices with low $\text{GRR}$, but further, that such matrices cannot even be approximated by low-rank linear factorizations. We also provided uniqueness conditions for this formulation, along with gradient-descent-based algorithms to perform the factorization. Our evaluation on synthetic and real-world datasets demonstrates that $\text{GRR}$-based factorization works significantly better than linear factorization, converging faster while requiring fewer observations. In future work, we will investigate the theoretical properties of our optimization algorithm, in particular exploring convex relaxations to obtain convergence guarantees and analyzing sample complexity. We are also interested in the connection of link-rank with different probabilistic interpretations, in particular robustness to noise, and in practical applications of these ideas to other link functions and domains.
\section{Introduction} \label{_Intro_Section_} A locally conformally K\"ahler (LCK) manifold is an Hermitian manifold $(M,I, g)$ equipped with an atlas $\{U_\alpha\}$ such that the restriction of $g$ to each $U_\alpha$ is conformally equivalent to some K\"ahler metric $g_\alpha$ defined only on $U_\alpha$, \ie $g\restrict{U_\alpha}=e^{f_\alpha}g_\alpha$, where $f_\alpha\in C^\infty U_\alpha$. One can see that in this case the exterior derivatives of the conformal factors agree on intersections: $d f_\alpha=d f_\beta$ on $U_\alpha\cap U_\beta$, thus giving rise to a global closed 1-form $\theta$, called {\em the Lee form}. Then the Hermitian form $\omega(x,y):=g(Ix,y)$ satisfies $d\omega=\theta \wedge \omega$ (see \cite{do} for an introduction to the subject). Forgetting the complex structure, one arrives at the notion of a {\em locally conformally symplectic manifold} (LCS, for short): a $2n$-dimensional real manifold endowed with a non-degenerate 2-form $\omega$ and a closed 1-form $\theta$ (also called the Lee form) such that $d\omega=\theta \wedge \omega$. Two subclasses of LCK manifolds are very important and rather well understood by now: the {\em Vaisman manifolds}, whose universal covers are K\"ahler cones over Sasakian manifolds (see Subsection \ref{_Vaisman_manifolds_}), and {\em LCK manifolds with potential}, whose universal cover admits a K\"ahler metric with global, positive and automorphic potential (see Subsection \ref{_LCK_w_potential_} for the precise definition). The cohomology class of the Lee form, called {\em the Lee class}, is the first cohomological invariant one encounters when dealing with LCK manifolds. Let $(M, \theta,\omega)$ be a compact LCK manifold, and $[\theta] \in H^1(M, \R)$ its Lee class. By Vaisman's theorem (\ref{vailcknotk}), $[\theta]=0$ if and only if $M$ is of K\"ahler type.
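For later reference, the local computation producing the Lee form is short: on a chart $U_\alpha$ where $\omega = e^{f_\alpha}\omega_\alpha$ with $d\omega_\alpha=0$, we have
\[
d\omega = e^{f_\alpha}\,df_\alpha\wedge\omega_\alpha = df_\alpha\wedge\omega,
\]
so $\theta\restrict{U_\alpha}=df_\alpha$; since the differentials $df_\alpha$ agree on overlaps, $\theta$ is a globally defined closed 1-form.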
For a compact K\"ahler manifold $X$, the subset of K\"ahler classes in $H^2(X, \R)$ is the ``K\"ahler cone'', and is one of the most important geometric features of a K\"ahler manifold. Similarly, we would like to have a description of the set of Lee classes on a given compact complex manifold which is known to admit LCK structures. It was already shown that in this case it cannot be a cone: indeed, by A. Otiman (\cite[Theorem 3.11]{oti2}), for an Inoue surface of class $S^0$, the set of Lee classes is a point. For LCS structures, the set of the Lee classes is better understood, due to Eliashberg and Murphy, who proved that on any almost complex manifold with $H^1(M, \Q)\neq 0$, for any non-zero class $\alpha\in H^1(M, \Q)$, there exists $C>0$ such that $C\alpha$ is the Lee class of an LCS structure (\cite[Theorem 1.11]{em}). For complex surfaces with $b_1(M)=1$, the set ${\goth L}$ of Lee classes of LCK structures was studied by Apostolov and Dloussky, who proved that ${\goth L}$ is either open or a point, \cite{ad2}. For higher dimensional LCK manifolds, the first important advance in this direction was due to K. Tsukada, who proved that the set of Lee classes on Vaisman manifolds is an open half-space (\cite[Theorem 5.1]{tsuk}), using the harmonic decomposition on Vaisman manifolds, due to T. Kashiwada, \cite{kashiwada_kodai}. In this paper, we extend Tsukada's theorem to compact LCK manifolds with potential of complex dimension greater than 3, using the following decomposition theorem for the first cohomology (\ref{_LCK_pot_Hodge_decompo_Theorem_}): \begin{equation}\label{_H^1_decompo_Equation_} H^1(M, \C) = H^{1,0}(M) \oplus \overline{H^{1,0}(M)} \oplus \langle \theta \rangle \end{equation} where $H^{1,0}(M)\subset H^1(M, \C)$ is the space of all closed holomorphic 1-forms, identified with a subspace in cohomology by \ref{_H^1_holo_LCK_Lemma_}. 
Tsukada proved this for Vaisman manifolds using the commutation formulae for Laplacians, and the harmonic decomposition for Vaisman manifolds. I. Vaisman conjectured that $b_1(M)$ is odd for any compact LCK manifold (\cite[p. 535]{va_tr}); this famous conjecture was disproven by Oeljeklaus and Toma in \cite{ot}. The decomposition \eqref{_H^1_decompo_Equation_} would imply that $b_1(M)$ is odd, hence the counterexample of Oeljeklaus-Toma does not satisfy \eqref{_H^1_decompo_Equation_}. However, the natural map \begin{equation}\label{_H^1_holom_map_Equation_} H^{1,0}(M) \oplus \overline{H^{1,0}(M)} \oplus \langle \theta \rangle \arrow H^1(M, \C) \end{equation} is always injective (\ref{_H^1_holo_LCK_Lemma_}). For LCK manifolds with potential, we deduce \eqref{_H^1_decompo_Equation_} from a deformation argument, by showing that an LCK manifold with potential $M_1$ obtained as a deformation of a Vaisman manifold $M_2$ satisfies $\dim H^{1,0}(M_1)\geq \dim H^{1,0}(M_2)$. If this inequality were strict, we would have $\dim H^{1,0}(M_1) > \frac{b_1(M_1)-1}{2}$ (because $\dim H^{1,0}(M_2)=\frac{b_1(M_2)-1}{2}$ on a Vaisman manifold, and $b_1(M_1)=b_1(M_2)$), which is impossible because the map \eqref{_H^1_holom_map_Equation_} is injective. Notice that the equality $\dim H^{1,0}(M) = \frac{b_1(M)-1}{2}$ is valid for non-K\"ahler complex surfaces as well. The decomposition \eqref{_H^1_decompo_Equation_} is the cornerstone for the description of the set of Lee classes on an LCK manifold with potential. Consider the linear map $\mu:\; H^1(M, \R)\arrow \R$ vanishing on the codimension 1 subspace $H^{1,0}(M) \oplus \overline{H^{1,0}(M)}\subset H^1(M, \R)$ and positive on the Lee class. We prove that $\xi \in H^1(M, \R)$ is a Lee class if and only if $\mu(\xi) >0$ (\ref{_Lee_cone_on_LCK-pot_Theorem_}). \hfill \noindent{\bf Conventions:} In the sequel, $(M,I)$ is a connected complex manifold of complex dimension $n\geq 2$. For an Hermitian metric $g$, we shall denote by $\omega(x,y):=g(Ix,y)$ the fundamental 2-form.
We extend the action of the complex structure to $k$-forms by $(I\eta)(x_1,\ldots,x_k)=(-1)^k\eta(Ix_1,\ldots,Ix_k)$, and we denote $I\eta$ by $\eta^c$. The complex differential $d^c$ is defined as $d^c=I^{-1}dI$. We let $d^*:\Lambda^kM\arrow \Lambda^{k-1}M$ be the metric adjoint of the exterior derivative. \hfill \section{LCK manifolds} We gather here the necessary background in LCK geometry. For details, please see \cite{do} and \cite{ov_jgp_16,ov_lckpot,ov_jgp_09}. \subsection{Definitions. Examples} \hfill \definition\label{_LCK_def_via_formula_Definition_} $(M,I)$ is of {\bf locally conformally K\"ahler (LCK) type} if it admits an Hermitian metric $g$ whose fundamental form satisfies the equation \begin{equation}\label{_def_LCK_equation_} d\omega=\theta \wedge \omega \end{equation} for a closed 1-form $\theta$ called {\bf the Lee form}. Then $(M,I,\omega,\theta)$ is called an {\bf LCK manifold}. \hfill \remark \begin{enumerate} \item The LCK condition is conformally invariant: if $g$ is LCK with Lee form $\theta$, then $e^fg$ is LCK with Lee form $\theta+df$, hence to each conformal class of LCK metrics there corresponds a Lee class in $H^1(M,\R)$. \item In complex dimension $n\geq 3$, the equation \eqref{_def_LCK_equation_} implies $d\theta=0$. \item Using \eqref{_def_LCK_equation_}, one can prove that the Lee form is determined in terms of $I$ and $\omega$ by $\theta=-I\left(\frac{1}{n-1}d^*\omega\right)$. \item If $\theta$ is exact, the LCK manifold is called {\bf globally conformally K\"ahler (GCK)}. Usually, it is tacitly assumed that $\theta$ is not exact. \end{enumerate} \hfill In the sequel we will mostly use the following definition, equivalent to \ref{_LCK_def_via_formula_Definition_}. \hfill \definition A complex manifold $(M,I)$ is LCK if and only if it admits a cover $(\tilde M, I)$ equipped with a K\"ahler metric $\tilde\omega$ with respect to which the deck group of the cover acts by holomorphic homotheties.
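The passage from \ref{_LCK_def_via_formula_Definition_} to this definition is a one-line computation. Take a cover $\pi:\tilde M\arrow M$ on which $\pi^*\theta$ becomes exact, $\pi^*\theta=df$; then
\begin{equation*}
d\left(e^{-f}\pi^*\omega\right) = e^{-f}\left(-df\wedge\pi^*\omega + \pi^*\theta\wedge\pi^*\omega\right)=0,
\end{equation*}
so $\tilde\omega:=e^{-f}\pi^*\omega$ is K\"ahler, and for every deck transformation $\gamma$ one has $\gamma^*\tilde\omega= e^{f-\gamma^*f}\,\tilde\omega$, where $f-\gamma^*f$ is constant, because $d(f-\gamma^*f)=\pi^*\theta-\gamma^*\pi^*\theta=0$.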
\hfill \definition The {\bf homothety character} associated to a K\"ahler cover with deck group $\Gamma$ is $\chi:\Gamma\to\R^{>0}$, $\chi(\gamma)=\frac{\gamma^*\tilde\omega}{\tilde\omega}$. The rank of $\Im(\chi)$ is the {\bf LCK rank} of $(M,I,\omega)$. \hfill \remark Since $\Gamma$ is a quotient group of $\pi_1(M)$, we can consider $\chi$ as a character on $\pi_1(M)$. Let then $L\arrow M$ be the local system associated to $\chi$. It is a real line bundle, and $\theta$ can be viewed as a connection form in $L$, which is thus flat. The line bundle $L$ is also called {\bf the weight bundle} of the LCK manifold. \hfill \definition {\bf The minimal K\"ahler cover} of an LCK manifold corresponds to a group $\Gamma$ on which $\chi$ is injective ($\Gamma$ contains no non-trivial $\tilde\omega$-isometries). This is the smallest cover admitting a K\"ahler metric which is conformal to the pullback of the LCK metric. \hfill \definition A differential form $\alpha\in\Lambda^*\tilde M$ is called {\bf automorphic} if $\gamma^*\alpha=\chi(\gamma)\alpha$ for all $\gamma\in\Gamma$. \hfill \remark\label{_weight_bundle_remark_} \begin{description} \item[(i)] Automorphic forms on $\tilde M$ can be identified with $L$-valued forms on $M$. In particular, since $\tilde\omega$ is automorphic, $\omega$ can be viewed as a section of $\Lambda^{1,1}(M,L)$, and $\omega^k$ as a section of $\Lambda^{k,k}(M,L^{\otimes k})$ etc. \item[(ii)] Let $d_\theta:=d-\theta\wedge$. Then $d_\theta\omega=0$, hence $\omega$ is a closed $L$-valued form. \item[(iii)] The complex $(\Lambda^*, d_\theta)$ is elliptic, since $d_\theta$ has the same symbol as $d$, and its cohomology $H^*_\theta(M)$ can be identified with the cohomology $H^*(M,L)$ of the local system $L$; it is called {\bf Morse-Novikov cohomology}. \end{description} \example The following manifolds admit an LCK structure: almost all known non-K\"ahler compact complex surfaces (see e.g.
\cite{_ovv:surf_,va_tr, bel, go, _Brunella:Kato_}); Hopf manifolds: $(\C^n\setminus 0)/\langle A\rangle$, $A\in\mathrm{GL}(n,\C)$ with eigenvalues of absolute value $> 1$ (see e.g. \cite{ov_pams})\footnote{$(\C^n\setminus 0)/\langle A\rangle$ is called {\em diagonal Hopf manifold} when $A$ is diagonalizable, and {\em non-diagonal Hopf manifold} when $A$ is not diagonalizable.}; some Oeljeklaus-Toma manifolds (\cite{ot}); Kato manifolds (\cite{iop}) and some ``toric Kato manifolds'' (\cite{iopr}). \subsection{The dichotomy K\"ahler {\em versus} LCK} The next result, proven by Vaisman, shows that on compact complex manifolds, LCK and GCK metrics cannot coexist. For completeness, we provide a proof, slightly different from the original one. \hfill \theorem {\rm (\bf \cite{va_tr})}\label{vailcknotk} Let $(M,\omega, \theta)$ be a compact LCK manifold, not globally conformally K\"ahler. Then $M$ does not admit a K\"ahler structure. \hfill \proof {\bf Step 1:} That $M$ is not globally conformally K\"ahler means that $\theta$ is not cohomologous to zero, that is, $\theta$ is not $d$-exact. We have $d\omega=\omega\wedge\theta$; let $\theta'=\theta+d\phi$. Then $$d(e^\phi\omega)= e^\phi\omega\wedge \theta+ e^\phi\omega\wedge d\phi =e^\phi\omega\wedge \theta'.$$ This means that we can replace the triple $(M,\omega, \theta)$ by $(M,e^\phi \omega, \theta')$ for any 1-form $\theta'$ cohomologous to $\theta$. \hfill {\bf Step 2:} Assume that $M$ admits a K\"ahler structure. Then, by Hodge theory, $\theta$ is cohomologous to the sum of a holomorphic and an antiholomorphic form. After a conformal transformation (which changes $\theta$ into $\theta+d\phi$) as in Step 1, we may assume that $\theta$ itself is the sum of a holomorphic and an antiholomorphic form. \hfill {\bf Step 3:} Then $dd^c\theta= \1 d \bar\6\theta=0$, giving $dd^c(\omega^{n-1})= \omega^{n-1}\wedge \theta\wedge I(\theta)$.
Therefore $0=\int_M dd^c(\omega^{n-1})=\int_M \mathrm{Mass}(\theta\wedge I(\theta))$, hence $\theta\wedge I(\theta)=0$, thus $\Vert\theta\Vert^2=0$ and the initial metric is globally conformally K\"ahler.\footnote{Recall that the mass of a positive $(1,1)$-form $\eta$, denoted $\mathrm{Mass}(\eta)$, is the volume form $\eta \wedge \omega^{n-1}$, \cite{_Demailly:Book_}. } \endproof \hfill Using similar techniques, we can prove: \hfill \lemma\label{_theta_not_d^c_closed_Lemma_} Let $(M, \theta, \omega)$ be a compact LCK manifold. Then the cohomology class $[\theta]\in H^1(M, \R)$ cannot be represented by a form which is $d^c$-closed. \hfill \proof Indeed, each representative of $[\theta]$ can be realized as the Lee form of an LCK metric which is conformally equivalent to $\omega$. Therefore, it would suffice to show that $d^c\theta\neq 0$ for any compact LCK manifold $(M, \theta, \omega)$. If $d^c\theta= 0$, we would have $dd^c(\omega^{n-1})= \omega^{n-1}\wedge \theta\wedge I(\theta)$, giving, as above, $0=\int_M dd^c(\omega^{n-1})=\int_M \mathrm{Mass}(\theta\wedge I(\theta))$, hence $\theta\wedge I(\theta)=0$, implying that $\theta=0$. \endproof \hfill This can be used to prove an important step in our decomposition theorem for $H^1(M)$, where $M$ is an LCK manifold with potential. \hfill \lemma\label{_H^1_holo_LCK_Lemma_} Let $(M, \theta, \omega)$ be a compact LCK manifold, and $H^{1,0}(M)$ denote the space of closed holomorphic 1-forms on $M$. Then the natural map \[ H^{1,0}(M)\oplus \overline{H^{1,0}(M)} \oplus \langle \theta\rangle\arrow H^1(M,\C) \] is injective, where $\langle \theta\rangle$ is the subspace generated by $\theta$. \hfill \proof A closed holomorphic form $\alpha$ belongs to $\ker d \cap \ker d^c$. Indeed, $\bar\6 \alpha=0$ together with $d\alpha=0$ implies $d^c\alpha=0$. Therefore, if $\alpha\in H^{1,0}(M)+ \overline{H^{1,0}(M)}$ is exact, one has $\alpha = d f$ with $dd^c f=0$; by the maximum principle, $f$ is then constant, hence $\alpha=0$, and the map is injective on $H^{1,0}(M)\oplus \overline{H^{1,0}(M)}$.
It remains to rule out $\theta$: if $\theta$ were cohomologous to a sum of holomorphic and antiholomorphic forms, this would contradict \ref{_theta_not_d^c_closed_Lemma_}. Indeed, suppose that $\theta=\alpha+ df$, where $d\alpha=d^c\alpha=0$. Making a conformal change, we obtain another LCK structure which has Lee form equal to $\alpha$. This is impossible, again by \ref{_theta_not_d^c_closed_Lemma_}. \endproof \hfill \corollary\label{_inequa_holo_LCK_Corollary_} Let $M$ be a compact LCK manifold, and $H^{1,0}(M)$ denote the space of closed holomorphic 1-forms on $M$. Then $\dim H^{1,0}(M) \leq \frac{b_1(M)-1}{2}$. \endproof \section{Vaisman manifolds}\label{_Vaisman_manifolds_} The best understood subclass of LCK manifolds consists of those whose Lee form is parallel with respect to the Levi-Civita connection. These are called {\bf Vaisman manifolds}. \hfill If $(M,g,I,\theta)$ is Vaisman, the Lee field $\theta^\sharp$ is Killing and holomorphic; moreover, it commutes with $I\theta^\sharp$ (\cite{va_gd}). Denote by $\Sigma$ the holomorphic 1-dimensional foliation generated by $\theta^\sharp$ and $I\theta^\sharp$. It is called {\bf the canonical foliation} (the motivation is given in the next theorem). \hfill \theorem \label{_Subva_Vaisman_Theorem_} Let $M$ be a compact Vaisman manifold, and $\Sigma\subset TM$ its canonical foliation. Then: \begin{description} \item[(i)] $\Sigma$ is independent from the choice of the Vaisman metric (\cite{tsu}). \item[(ii)] $d^c\theta=\omega-\theta \wedge I\theta$ (\cite{va_gd}) and the exact (1,1)-form $\omega_0:= d^c\theta$ is semi-positive (\cite{_Verbitsky:Vanishing_LCHK_}). Therefore, $\Sigma=\ker\omega_0$, and $\omega_0$ is transversally K\"ahler with respect to $\Sigma$. \end{description} Since $\theta^\sharp$ is Killing and holomorphic, it generates a complex flow of $g$-isometries. These lift to holomorphic non-trivial homotheties of the K\"ahler metric on the universal cover $\tilde M$.
This is in fact an equivalent definition of Vaisman-type manifolds, as the following criterion shows: \hfill \theorem{\bf (\cite{kor})}\label{kami_or} Let $(M,\omega, \theta)$ be a compact LCK manifold equipped with a holomorphic and conformal $\C$-action $\rho$ without fixed points, which lifts to non-isometric homotheties on the K\"ahler cover $\tilde M$. Then $(M,\omega, \theta)$ is conformally equivalent to a Vaisman manifold. \hfill \example\label{_Vaisman_Examples_} A non-exhaustive list of examples of Vaisman manifolds comprises: \begin{description} \item[(i)] Diagonal Hopf manifolds $(\C^n\backslash 0)/\langle A\rangle$, where $A$ is semi-simple with eigenvalues $\alpha_i$ of absolute value $>1$, \cite{go, ov_pams}. \item[(ii)] Elliptic complex surfaces (see \cite{bel} for the complete classification of compact Vaisman surfaces; see also \cite{_ovv:surf_}). \item[(iii)] All compact submanifolds of a Vaisman manifold (\cite{_Verbitsky:Vanishing_LCHK_}). \end{description} \remark The inclusion of Vaisman manifolds among LCK manifolds is strict: neither the LCK Inoue surfaces, nor the non-diagonal Hopf manifolds can bear Vaisman metrics (\cite{bel}, \cite{ov_pams}). \hfill Recall that a form $\eta$ on a foliated manifold $(M, \Sigma)$ is called {\bf basic} if it can be locally obtained as a pullback $\pi^*\eta_0$ from the leaf space of $\Sigma$, which is defined in a sufficiently small neighbourhood of every point $x\in M$. \hfill The following claim is well known (and can be used as a definition of basic forms). \hfill \claim\label{_basic_vanish_Claim_} A form $\eta$ on $M$ is basic with respect to $\Sigma\subset TM$ if and only if for any vector field $X \in \Sigma$, one has $i_X(\eta) = \Lie_X(\eta)=0$, where $i_X$ denotes the contraction with $X$. \endproof \hfill \corollary\label{_closed_basic_Corollary_} A closed form $\eta$ on $M$ is basic with respect to $\Sigma\subset TM$ if and only if $i_X(\eta) = 0$ for any vector field $X\in \Sigma$.
\hfill \proof Follows from the Cartan formula $\Lie_X(\eta) = i_X(d\eta) + d(i_X(\eta))$. \endproof \hfill Further on, we need the following observation. \hfill \proposition\label{_holomo_on_Vaisman_basic_Proposition_} Let $M$ be a compact Vaisman manifold, and $\eta$ a closed holomorphic 1-form on $M$. Then $\eta$ is basic with respect to the canonical foliation $\Sigma$ on $M$. \hfill \proof Let $n=\dim_\C M$, and $\omega_0\in \Lambda^{1,1}(M)$ the transversal K\"ahler form defined above. Since $\eta$ is closed and $\omega_0$ is exact, one has $\int_M \omega_0^{n-1}\wedge \eta\wedge\bar \eta=0$. However, $-\1 \eta\wedge\bar \eta$ is a semi-positive form, and $\omega_0$ is strictly positive in the directions transversal to $\Sigma$. This implies that $-\1 \omega_0^{n-1}\wedge \eta\wedge\bar \eta$ is a positive volume form at every point $x\in M$ such that $\eta\restrict {T_x M}$ does not vanish on $\Sigma\restrict{T_x M}$. Since $\int_M \omega_0^{n-1}\wedge \eta\wedge\bar \eta=0$, it follows that $\eta \restrict\Sigma=0$ everywhere. By \ref{_closed_basic_Corollary_}, $\eta$ is basic. \endproof \section{LCK manifolds with potential}\label{_LCK_w_potential_} We now introduce the main object of study of this paper. \hfill \definition An LCK manifold has {\bf LCK potential} if it admits a K\"ahler covering on which the K\"ahler metric has a global and positive potential function $\psi$ such that the deck group multiplies $\psi$ by a constant. In this case, $M$ is called {\bf LCK manifold with potential}. \hfill \example All Vaisman manifolds are LCK manifolds with potential. Indeed, if $\pi:\tilde M\arrow M$ is the universal cover and $\theta$ is the Lee form on $M$, then $\Vert\pi^*\theta\Vert^{-2}$, the norm being taken with respect to the K\"ahler metric $\tilde\omega$, is an automorphic global K\"ahler potential for $\tilde\omega$. Also, the structure is inherited by all complex submanifolds of an LCK manifold with potential. Among the non-Vaisman examples, we mention the non-diagonal Hopf manifolds, \cite{ov_jgp_16}.
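On the simplest diagonal Hopf manifold the potential can be written explicitly. For $M=(\C^n\setminus 0)/\langle \lambda\Id\rangle$ with $|\lambda|>1$, take $\tilde M=\C^n\setminus 0$ with the flat K\"ahler metric and $\psi(z):=|z|^2$; then, up to a positive constant factor depending on conventions,
\begin{equation*}
\tilde\omega = dd^c\psi, \qquad (\lambda\Id)^*\psi = |\lambda|^2\,\psi,
\end{equation*}
so $\psi$ is a global positive K\"ahler potential, multiplied by the constant $|\lambda|^2$ under the action of the deck group.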
\hfill \remark LCK Inoue surfaces, blow-ups of LCK manifolds and OT-manifolds cannot be LCK manifolds with potential, \cite{oti2,_Vuletescu:blowups_}. \hfill A wealth of examples is provided by the following fundamental result: \hfill \theorem {\bf (\cite{ov_lckpot})} Let $(M,I,\omega,\theta)$ be a compact LCK manifold with potential. Then any small deformation $(M,I_t)$, $t\in\C$, $|t|<\epsilon$, admits an LCK metric with potential. In particular, non-diagonal Hopf manifolds are LCK with potential. \hfill \remark \label{_d_theta_c_equation_} By \cite{ov_imrn_10} (see also \cite{_Istrati:LCK-pot_}), an LCK manifold with potential admits a conformal gauge such that \begin{equation}\label{dctheta} d\theta^c=\omega - \theta\wedge\theta^c, \quad \text{where}\quad \theta^c(X)=-\theta(IX). \end{equation} We shall tacitly assume that the LCK metric is chosen in such a way that \eqref{dctheta} holds. \hfill We are especially interested in the LCK manifolds with potential of LCK rank 1, that is, LCK manifolds with potential admitting a K\"ahler $\Z$-covering. We showed in \cite{ov_lckpot} that this is equivalent to the LCK potential being a proper function on the minimal K\"ahler cover. The minimal cover of a compact LCK manifold with proper potential is very nice from the complex and algebraic viewpoint: \hfill \theorem {\bf (\cite{ov_lckpot,ov_pams})}\label{potcon} Let $M$ be a compact LCK manifold with proper potential, and $\tilde M$ its K\"ahler $\Z$-covering. If $\dim_\C M\geq 3$, then the metric completion $\tilde M_c$\index{completion!metric} admits a structure of a complex variety, compatible with the complex structure on $\tilde M \subset\tilde M_c$, and the complement $\tilde M_c\setminus \tilde M$ is just one point. Moreover, $\tilde M_c$ is an affine algebraic variety obtained as an affine cone over a projective orbifold. \hfill \remark Notice that $\tilde M_c$ is indeed the {\bf Stein completion} of $\tilde M$ in the sense of \cite{andreotti_siu}.
In the proof of our theorem, we used the filling theorem by Rossi and Andreotti-Siu (\cite{rossi,andreotti_siu}), which imposes the restriction $\dim_\C M>2$. \hfill At first sight, assuming that the potential is a proper function is a restrictive condition. However, this is not entirely true: as long as one is interested in the complex geometry (and not in the Riemannian one), one can always assume the LCK potential is proper. \hfill \theorem {\em \bf(\cite{ov_jgp_09, ov_jgp_16})} \label{defor_improper_to_proper} Let $(M, \omega, \theta, \phi)$ be a compact LCK manifold with improper LCK potential. Then $(\omega, \theta, \phi)$ can be approximated in the ${C}^\infty$-topology by an LCK structure with proper LCK potential on the same complex manifold. \hfill The following is one of the most important features of compact LCK manifolds with proper potential of complex dimension at least 3. \hfill \theorem\label{_Embedding_LCK_pot_in_Hopf_} Any compact LCK manifold with potential $M^n$, $n\geq 3$, admits a holomorphic embedding into a Hopf manifold $(\C^N\backslash 0)/\langle A\rangle$. A manifold $M^n$ is Vaisman if and only if it admits an embedding into $(\C^N\backslash 0)/\langle A\rangle$ with the matrix $A$ diagonalizable. \hfill \proof In \cite[Theorem 3.4]{ov_lckpot} it is shown that an LCK manifold with potential is embeddable into a Hopf manifold, and in \cite[Theorem 3.6]{ov_lckpot} it is shown that a Vaisman manifold is embeddable into a diagonal Hopf manifold. Conversely, in \cite[Section 2.5]{ov_pams} we show that a diagonal Hopf manifold is Vaisman, and in \cite{_Verbitsky:Vanishing_LCHK_} it is shown that a positive-dimensional compact submanifold of a Vaisman manifold is Vaisman. \endproof \hfill One of the most useful properties of compact LCK manifolds with potential in complex dimension at least 3 is that their complex structure can be deformed to a complex structure that supports Vaisman metrics.
\hfill \theorem {\bf (\cite{ov_imrn_10})} \label{def_lckpot2Vai} Let $(M,\omega,\theta)$, $\dim_\C M \geq 3$, be a compact LCK manifold with proper potential. Then there exists a complex analytic deformation of $M$ which admits a Vaisman metric. \hfill \remark A refinement of this result will be given in \ref{_Vaisman_limit_of_LCK_pot_Theorem_}. \section{Algebraic cones and LCK manifolds with potential} \subsection{Jordan-Chevalley decomposition} Further on, all algebraic groups are considered over $\C$. For the definition and more references on algebraic groups, please see \cite{hum}. \hfill \definition An element of an algebraic group $G$ is called {\bf semisimple} if its image is semisimple for some faithful algebraic representation of $G$, and is called {\bf unipotent} if its image is unipotent (that is, the exponential of a nilpotent) for some faithful algebraic representation of $G$. \hfill \remark For any algebraic representation of an algebraic group $G$, the image of any semisimple element is a semisimple operator, and the image of any unipotent element is a unipotent operator (\cite[\S 15.3]{hum}). \hfill \theorem {\bf (Jordan-Chevalley decomposition), \cite[\S 15.3]{hum}} \label{jcdec}\\ Let $G$ be an algebraic group, and $A\in G$. {Then there exists a unique decomposition $A= S U$ of $A$ as a product of commuting elements $S$ and $U$, where $U$ is unipotent and $S$ semisimple.} \hfill \subsection{Algebraic cones} To better describe the K\"ahler $\Z$-cover of a compact LCK manifold with potential, we need to introduce the closed and open algebraic cones. \hfill \definition A {\bf closed algebraic cone} is an affine variety $\cac$ admitting a $\C^*$-action $\rho$ with a unique fixed point $x_0$, called {\bf the origin}, and satisfying the following: \begin{description} \item[(i)] $\cac$ is smooth outside of $x_0$, \item[(ii)] $\rho$ acts on the Zariski tangent space $T_{x_0}\cac$ with all eigenvalues $|\alpha_i|<1$.
\end{description} An {\bf open algebraic cone} is a closed algebraic cone without the origin. \hfill For the sake of completeness, we give a new and self-contained proof of the following basic result. \hfill \theorem {\bf (\cite[Theorem 2.8]{ov_pams})} \label{_cone_cover_for_LCK_pot_Theorem_} Let $M=\tilde M/\langle A\rangle$ be an LCK manifold with potential, with LCK rank 1, and $\tilde M$ its K\"ahler $\Z$-covering. {Then $\tilde M$ is an open algebraic cone.} \hfill \pstep Let $\tilde M_c$ be the Stein completion of $\tilde M$ equipped with an $A$-equivariant embedding into $\C^N$, where $A$ acts as a linear operator with all eigenvalues $|\alpha_i|< 1$. Let $\calo_{\C^N,0}$ denote the ring of germs of holomorphic functions in zero. Call a function $f\in \calo_{\C^N,0}$ {\bf $A$-finite} if the space $\langle f, A^*f, {A^2}^* f, ...\rangle$ is finite-dimen\-sio\-nal. A polynomial function is clearly $A$-finite. The converse is also true: the Taylor decomposition of an $A$-finite function $f$ can only have finitely many components, because otherwise the span $\langle f, A^*f, {A^2}^* f, ...\rangle$ would be infinite-dimensional. \hfill {\bf Step 2:} We want to produce an explicit fundamental domain $U_0$ for the action of $\Z\cong \langle A\rangle = \{ ..., A^{-n}, A^{-n+1}, ... A^{-1}, \Id_{\C^N}, A, A^2, ... \}$ on $\C^N$, in such a way that $U_0 = V \backslash A(V)$, where $V$ is Stein. Let $B\subset \C^N$ be the unit ball. When the operator norm $\|A\|$ of $A$ is less than 1, one has $A(B) \Subset B$, and $B \backslash A(B)$ is the fundamental domain which we can use. This would hold, for example, when $A$ is diagonalizable, after a suitable linear change of coordinates. On the other hand, the operator norm of a contraction can be bigger than 1. Consider for example the matrix $A=\begin{pmatrix} \frac 1 2 & 1000 \\ 0 & \frac 1 2\end{pmatrix}$; its norm is at least 1000. Therefore, one should take more care when choosing the fundamental domain.
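The claim on the norm of the matrix $A$ above is immediate: applying $A$ to the second standard basis vector $e_2$ gives
\begin{equation*}
A e_2 = 1000\, e_1 + \tfrac{1}{2}\, e_2, \qquad \|A\|\;\geq\; \|Ae_2\| = \sqrt{1000^2+\tfrac14}\;>\;1000,
\end{equation*}
even though both eigenvalues of $A$ are equal to $\frac 12$.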
Recall that any matrix over $\C$ admits a Jordan decomposition, and every Jordan cell {\begin{equation}\label{_Jordan_cell_Equation_} \small \begin{pmatrix} \alpha & 1 & 0 & \ldots & 0\\ 0 & \alpha & 1 & \ldots & 0\\ \vdots &\vdots &\vdots & \cdots & \vdots \\ 0&0&0 & \ldots &1\\ 0&0&0 & \ldots &\alpha \end{pmatrix} \end{equation} } is conjugate to {\begin{equation}\label{_Jordan_cell_epsilon_Equation_} \small \begin{pmatrix} \alpha & \epsilon & 0 & \ldots & 0\\ 0 & \alpha & \epsilon & \ldots & 0\\ \vdots &\vdots &\vdots & \cdots & \vdots \\ 0&0&0 & \ldots &\epsilon\\ 0&0&0 & \ldots &\alpha \end{pmatrix} \end{equation}} (see {\em e.g.} \ref{_semisimple_operator_approx_Proposition_} below). Writing $A$ in a Jordan basis and replacing each cell \eqref{_Jordan_cell_Equation_} with \eqref{_Jordan_cell_epsilon_Equation_}, for $\epsilon$ sufficiently small, we obtain a contraction with an operator norm $<1$ conjugate to $A$; then $A(B)\Subset B$, and $B\setminus A(B)$ is a fundamental domain for the action of $\langle A\rangle$. \hfill {\bf Step 3:} Let $U_0$ be a fundamental domain for $A$ acting on $\C^N$. As indicated in Step 2, every linear contraction is conjugate to an operator $A$ with operator norm $<1$. The fundamental domain $U_0$ for $A$ with operator norm $<1$ can be obtained by taking an open ball $B\subset \C^N$ and removing $A(B)$ from $B$. Denote by $U_n$ a copy of this domain obtained as $U_n:= A^{-n}(U_0)$, and let $V_n:= \{0\}\cup \bigcup_{i=-\infty}^n U_i$. Since $V_n= A^{-n}(B)$, it is a Stein domain in $\C^N$. Let $A:\; H^0_b(\calo_{V_n}) \arrow H^0_b(\calo_{V_n})$ be the operator on the ring of bounded holomorphic functions induced by the action of $A$ on $\C^N$. Clearly, this map is compatible with the map $A:\; H^0(\calo_{\tilde M_c}) \arrow H^0(\calo_{\tilde M_c})$ constructed above; this is what allows us to denote them by the same letter.
\hfill {\bf Step 4:} We are going to prove that the operator \[ A:\; H^0_b(\calo_{V_n}) \arrow H^0_b(\calo_{V_n}) \] is compact with respect to the topology defined by the $\sup$-norm.\footnote{Recall that for a complex manifold $X$, the sup-topology on $H^0_b(\calo_X)$ is the topology given by the sup-norm, namely: $|f|_{\sup} := \sup_X |f|$.} Let $\gamma:\; X \arrow X$ be a holomorphic map from a complex manifold $X$ to a precompact subset of $X$. We prove that in this case the map $\gamma^*:\; H^0_b(\calo_{X}) \arrow H^0_b(\calo_{X})$ is compact in the $\sup$-topology. For any $f\in H^0(\calo_X)$ we have \[|\gamma^* f|_{\sup}= \sup_{x\in \overline{\gamma(X)}} |f(x)|. \] This implies that $\gamma^*(f)$ is bounded. Therefore, {for any sequence $\{f_i\in H^0(\calo_X)\}$ converging in the $ C^0$-topology, the sequence $\{\gamma^* f_i\}$ converges in the $\sup$-topology.} The set $B_C:=\{f\in H^0_b(\calo_X)\ \ | \ \ |f|_{\sup} \leq C\}$ is a normal family and hence, by Montel's theorem, it is precompact in the $ C^0$-topology (\cite[Chapter I, Theorem 3.12]{_Demailly:Book_}). Then $\gamma^* B_C$ is precompact in the $\sup$-topology. This proves that the operator $\gamma^*:\; H^0_b(\calo_X)\arrow H^0_b(\calo_X)$ is compact. \hfill {\bf Step 5:} Let $\goth{I}(V_n)$ be the ideal of $\tilde M_c\cap V_n$ in $H^0_b(\calo_{V_n})$. Recall that $H^0_b(\calo_X)$ is a Banach algebra, by Montel's theorem (\cite[Chapter IX, Proposition 4.7]{_Demailly:Book_}). By the Riesz--Schauder theorem (\cite[Section 5.2]{friedman}), a compact endomorphism of a Banach space admits a Jordan decomposition. Then $A$-finite vectors are finite linear combinations of the vectors from the Jordan cells. This implies that the set of $A$-finite functions in $\goth{I}(V_n)$ is dense in $\goth{I}(V_n)$, with respect to the $\sup$-topology. On the other hand, all $A$-finite functions can be holomorphically extended to $\C^N$ by automorphicity.
A base of the $C^0$ (that is, compact-open) topology on $H^0(\calo_{\C^N})$ is formed by translates of the open sets consisting of all functions $f\in H^0(\calo_{\C^N})$ which satisfy $|f|< C< \infty$ on a given compact set, for some positive $C\in \R$. Therefore, the $C^0$-topology is the weakest topology such that all the restriction maps to $H^0_b(\calo_{V_n})$, equipped with the $\sup$-topology, are continuous. This implies that any set of functions ${\goth S}\subset H^0(\calo_{\C^N})$ which is bounded on compacts and dense in $H^0_b(\calo_{V_n})$, for all $n$, is dense in $H^0(\calo_{\C^N})$. Since the space of $A$-finite functions is dense in $\goth{I}(V_n)$, the space $\goth{I}^A$ of $A$-finite functions in the ideal $\goth{I}\subset H^0(\calo_{\C^N})$ of functions vanishing on $\tilde M_c$ is dense in $\goth{I}$ with respect to the $C^0$-topology. In particular, the set of common zeros of $\goth{I}^A$ coincides with $\tilde M_c\subset \C^N$. \hfill {\bf Step 6:} The $A$-finite functions are polynomials, as shown in Step 1. By Hilbert's basis theorem, any ideal in the ring of polynomials is finitely generated. Therefore, the ideal $\goth{I}^A$ is finitely generated; let $f_1, ..., f_m$ be its generators. By Step 5, the set of common zeros of $\goth{I}^A$ is $\tilde M_c\subset \C^N$; therefore, $\tilde M_c\subset \C^N$ is given by the polynomial equations $f_1=0, f_2=0, ..., f_m=0$. \hfill {\bf Step 7:} It remains to show that $\tilde M_c$ admits a holomorphic $\C^*$-action containing a contraction. Let $G$ be the Zariski closure of $\langle A \rangle$ in $\GL(\C^N)$. This is a commutative algebraic group, acting on the variety $\tilde M_c\subset \C^N$. Let $A=SU$ be the Jordan-Chevalley decomposition for $A$, with $S, U\in G$. Since $G$ preserves $\tilde M_c$, the endomorphisms $S, U\in\End(\C^N)$ also act on $\tilde M_c$. Since the eigenvalues of $S$ are the same as the eigenvalues of $A$, it is a contraction. Let $G_S\subset G$, $G_S= e^{\C \log S}$ be a one-parametric subgroup containing $S$.
We prove that $G_S$ can be approximated by subgroups of $G$ isomorphic to $\C^*$; then these subgroups also contain a contraction, and we are done. Consider the map taking any $A_1\in G$ to its unipotent component $U_1$. Since $G$ is commutative, this map is a group homomorphism. Therefore, its kernel $G_s$ (that is, the set of all semisimple elements in $G$) is an algebraic subgroup of $G$. A semisimple commutative algebraic subgroup of $\GL(\C^N)$ is always isomorphic to $(\C^*)^k$ (\cite[Proposition 1.5]{_Borel_Tits:Groupes_Reductifs_}). The one-parametric subgroups $\C^*\subset (\C^*)^k$ suffice for the approximation: one-parametric complex subgroups $\C^*\subset (\C^*)^k$ can be obtained as complexifications of closed subgroups $S^1\subset U(1)^k \subset (\C^*)^k$, and the union of the latter is dense in $U(1)^k$. Therefore, the contraction $S\in G_s=(\C^*)^k$ can be approximated by an element of a subgroup $\C^*$ acting on $\tilde M_c$. \endproof \hfill \section{Hodge decomposition for $H^1(M)$ on LCK manifolds with potential} Any harmonic $r$-form, $r\leq n-1$, on a compact $n$-dimensional Vaisman manifold $(M,\omega,\theta)$ can be uniquely written as a sum $\alpha+\theta\wedge\beta$, where $\alpha$ and $\beta$ are basic harmonic forms (see \cite{va_gd}, or \cite{_ov_super_sas_} for a different proof). In particular, the space of harmonic 1-forms on a compact Vaisman manifold is identified with $\ker d \cap \ker d^c\oplus \langle \theta\rangle$. For LCK manifolds with potential, such a decomposition is no longer available. Instead we can prove: \hfill \theorem\label{_LCK_pot_Hodge_decompo_Theorem_} Let $(M, \theta, \omega)$ be a compact LCK manifold with potential, and $H^{1,0}(M)$ denote the space of closed holomorphic 1-forms on $M$. Using \ref{_H^1_holo_LCK_Lemma_}, we consider $H^{1,0}(M)\oplus \overline{H^{1,0}(M)}\oplus \langle \theta \rangle$ as a subspace in $H^1(M, \C)$. Then $H^1(M,\C) = H^{1,0}(M)\oplus \overline{H^{1,0}(M)} \oplus \langle \theta \rangle$.
\hfill \proof To prove that the map \[ H^{1,0}(M)\oplus \overline{H^{1,0}(M)} \oplus \langle \theta\rangle \arrow H^1(M,\C) \] is surjective, it would suffice to show that $\dim_\C H^{1,0}(M)= \frac{b_1(M)-1}{2}$. We prove it by deforming $M$ to a Vaisman manifold $M_0$ and showing that $\dim H^{1,0}(M)= \dim H^{1,0}(M_0)$. We first deform the LCK metric on $M$ to an LCK metric of LCK rank 1 (\ref{defor_improper_to_proper}). This operation does not affect the complex structure on $M$, hence $\dim H^{1,0}(M)$ does not change, and it will suffice to prove that $\dim_\C H^{1,0}(M)= \frac{b_1(M)-1}{2}$ when $M$ is an LCK manifold with proper potential. Let $\tilde M$ be the open algebraic cone associated with $M$ as in \ref{_cone_cover_for_LCK_pot_Theorem_}, and $A:\; \tilde M \arrow \tilde M$ the generator of the deck group. Applying the Jordan-Chevalley decomposition $A=SU$ as in \ref{def_lckpot2Vai}, we can deform $\tilde M/\langle A\rangle$ to the Vaisman manifold $M_0:=\tilde M/\langle S\rangle$. To prove that $\dim_\C H^{1,0}(M) = \frac{b_1(M)-1}{2}$ it would suffice to show that all holomorphic, $S$-invariant 1-forms on $\tilde M$ are also $U$-invariant. Consider $U$ as an automorphism of $M_0$. This automorphism is homotopic to the identity because $U= e^{N}$, where $N$ commutes with $S$. Since $U$ is a unipotent element of the group of automorphisms of the algebraic cone $\tilde M$, the action of $U_t:= e^{tN}$ preserves $\tilde M$ and commutes with $S$, hence it is well defined on $M_0$. This gives a homotopy of $U=U_1$ to $\Id=U_0$. Since $U$ is homotopic to the identity, it acts trivially on $H^1(M_0)$; since closed holomorphic 1-forms embed into $H^1(M_0,\C)$ by \ref{_H^1_holo_LCK_Lemma_}, all $S$-invariant closed holomorphic forms on $\tilde M$ are also $SU$-invariant. This implies that $\dim_\C H^{1,0}(M) \geq \dim_\C H^{1,0}(M_0) =\frac{b_1(M_0)-1}{2}$, where $b_1(M_0)=b_1(M)$ because $M_0$ is diffeomorphic to $M$. The inequality in this expression is in fact an equality by \ref{_inequa_holo_LCK_Corollary_}. We thus proved \ref{_LCK_pot_Hodge_decompo_Theorem_}.
\endproof \section{Approximating LCK with potential structures by Vaisman structures} We start with a linear algebra result which will be used in the proof of the main theorem of this section: \hfill \proposition\label{_semisimple_operator_approx_Proposition_} Let $p\in \GL(n,\C)$ be a linear operator, and $p=su$ its Jordan decomposition, with $s$ semisimple, $u$ unipotent, and $su=us$. Then there exists a sequence $r_i\in \GL(n,\C)$ of operators commuting with $s$ and satisfying $\lim_{i\to\infty} r_i p r_i^{-1}=s$. \hfill \proof Since any operator is a direct sum of Jordan cells, it would suffice to prove \ref{_semisimple_operator_approx_Proposition_} when $p$ is a single $k\times k$ Jordan cell, {\[\small p =\begin{pmatrix} \alpha & 1 & 0 & \ldots & 0\\ 0 & \alpha & 1 & \ldots & 0\\ \vdots &\vdots &\vdots & \cdots & \vdots \\ 0&0&0 & \ldots &1\\ 0&0&0 & \ldots &\alpha \end{pmatrix} \]} In this case, $s= \const\Id$, hence it commutes with everything. Take {\[\small r_i =\begin{pmatrix} 1 & 0 & 0 & \ldots & 0\\ 0 & \epsilon_i^{-1} & 0 & \ldots & 0\\ \vdots &\vdots &\vdots & \cdots & \vdots \\ 0&0&0 & \ldots &0\\ 0&0&0 & \ldots &\epsilon_i^{1-k} \end{pmatrix} \]} Then {\[\small r_i p r_i^{-1} =\begin{pmatrix} \alpha & \epsilon_i & 0 & \ldots & 0\\ 0 & \alpha & \epsilon_i & \ldots & 0\\ \vdots &\vdots &\vdots & \cdots & \vdots \\ 0&0&0 & \ldots &\epsilon_i\\ 0&0&0 & \ldots &\alpha \end{pmatrix} \]} Taking a sequence $\epsilon_i$ converging to 0, we obtain $\lim_{i\to\infty} r_i p r_i^{-1}=s$. \endproof \hfill The main result of this section, \ref{_Vaisman_limit_of_LCK_pot_Theorem_}, gives a more precise description of the approximation in \ref{def_lckpot2Vai}. In order to state it, we need to recall the notion of Teichm\"uller space. Recall first that the {\bf $C^k$-topology} on the space of sections of a bundle $B\arrow M$ is the topology of uniform convergence of $b$, $\nabla b$, $\nabla^2 b$, ..., $\nabla^k b$ on compacts, for some connection $\nabla$ on $B$. 
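As a sanity check (an aside, not part of the argument), the conjugation trick of \ref{_semisimple_operator_approx_Proposition_} can be verified numerically on a single Jordan cell; the sketch below uses a diagonal scaling chosen so that conjugation multiplies every superdiagonal entry by $\epsilon$:

```python
import numpy as np

def jordan_cell(alpha, k):
    """k x k Jordan cell with eigenvalue alpha."""
    return alpha * np.eye(k) + np.diag(np.ones(k - 1), 1)

k, alpha, eps = 4, 2.0, 1e-3
p = jordan_cell(alpha, k)
s = alpha * np.eye(k)                       # semisimple part of p
# diagonal scaling diag(1, eps^-1, ..., eps^(1-k)); it commutes with s
r = np.diag(eps ** -np.arange(k, dtype=float))
conj = r @ p @ np.linalg.inv(r)
# conjugation shrinks every superdiagonal entry from 1 to eps ...
assert np.allclose(np.diag(conj, 1), eps)
# ... so the conjugates converge to the semisimple part as eps -> 0
assert np.linalg.norm(conj - s) < 2 * eps
```

Letting $\epsilon\to 0$ reproduces $\lim_i r_i p r_i^{-1}=s$ entry by entry.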
The {\bf $C^\infty$-topology} is the topology of uniform convergence of {\em all} derivatives. In other words, a set is open in the $C^\infty$-topology if it is open in all $C^k$-topologies. Now we can give: \hfill \definition Let $\Comp$ be the set of all integrable complex structures on $M$, equipped with the $C^\infty$-topology, and $\Diff_0$ the group of isotopies of $M$, that is, the connected component of the group of diffeomorphisms of $M$. {\bf The Teichm\"uller space} of complex structures on $M$ is the quotient $\Teich:=\Comp/\Diff_0$ equipped with the quotient topology. \hfill \theorem\label{_Vaisman_limit_of_LCK_pot_Theorem_} Let $(M, J)$ be an LCK manifold with potential, $\dim_\C M \geq 3$. Then there exists a Vaisman-type complex structure $(M, J_\infty)$ such that the point $[J_\infty]$ in the Teichm\"uller space $\Teich(M)$ of complex structures on $M$ belongs to the closure of $[J]\in \Teich(M)$. In other words, there exists a sequence of diffeomorphisms $\nu_i \in \Diff_0(M)$ such that $\lim_i \nu_i(J)=J_\infty$, where the limit is taken with respect to the $C^\infty$-topology on the space $\Comp$ of complex structures. \hfill \proof Without restricting the generality, we may assume that $(M,J)$ has LCK rank 1, and its $\Z$-cover is K\"ahler (\ref{defor_improper_to_proper}). Fix an embedding of $(M,J)$ into the Hopf manifold $H= \frac{\C^n\backslash 0}{\langle A \rangle}$ (\ref{_Embedding_LCK_pot_in_Hopf_}). Let $A=us$ be the Jordan-Chevalley decomposition for $A$. By \ref{_semisimple_operator_approx_Proposition_}, there exists a sequence $A_i= u_i s= r_i A r_i^{-1}$ of operators conjugated to $A$ such that $u_i$ converges to $\Id$. Denote by $(H, I_i)$ the Hopf manifold $(H, I_i):= \frac{\C^n\backslash 0}{\langle A_i \rangle}$. Since $(H, I_i)$ are all naturally isomorphic to $H$, one obtains the embedding $\phi_i:\; (M,J)\arrow (H, I_i)$. 
Since the operators $A_i= u_i s$ converge to $s$, the sequence $I_i\in \Comp(H)$ converges to $I_\infty$, where $(H, I_\infty):= \frac{\C^n\backslash 0}{\langle s \rangle}$. Denote by $\gamma$ the generator of the monodromy acting on the Stein completion $\tilde M_c$ (\ref{potcon}), and $\phi:\; \tilde M_c\arrow \C^n$ the embedding making the following diagram commutative \[ \begin{CD} \tilde M_c@>{\phi}>> \C^n\\ @V{\gamma}VV @V{A}VV\\ \tilde M_c@>{\phi}>> \C^n. \end{CD} \] Consider the map $\phi_i := r_i\circ \phi$ as an embedding from $\tilde M_c$ to $\C^n$ making the following diagram commutative \[ \begin{CD} \tilde M_c@>{\phi_i}>> \C^n\\ @V{\gamma}VV @V{A_i}VV\\ \tilde M_c@>{\phi_i}>> \C^n. \end{CD} \] We use the same letter $\phi_i$ to denote the embedding $(M, J) \arrow \frac{\C^n\backslash 0}{\langle A_i \rangle}$ associated with $\phi_i$. Since $\phi(\tilde M_c)$ is $s$-invariant, and $A_i$ converge to $s$, the sequence $\phi_i\restrict{\tilde M_c}$ converges to $\phi_\infty:=\phi\restrict {\tilde M_c}$, giving an embedding $\frac{\tilde M}{\langle s\rangle} \arrow (H,I_\infty)$. The limit manifold $(M, J_\infty)=\frac{\tilde M}{\langle s\rangle}$ is of Vaisman type, because it is embedded into a diagonal Hopf manifold (\ref{_Embedding_LCK_pot_in_Hopf_}). The maps $\phi_i$ do not converge to $\phi_\infty$ smoothly, because the sequence $\{r_i^{-1}\}$ is not bounded. However, the sequence $\{(\phi_i(\tilde M_c), A_i)\}$ $C^\infty$-converges to $(\phi_\infty(\tilde M_c), s)$ as a sequence of pairs \[ \text{(algebraic subvariety $Z\subset \C^n$, an automorphism $\psi\in \Aut(Z)$);} \] hence the corresponding points in $\Teich$ also converge. This is what we are going to show. Let $S\subset \calo_{\C^n}$, $\dim S=m$, be a finite-dimensional space generating the ideal of $\phi_1(\tilde M_c)$ (\ref{_cone_cover_for_LCK_pot_Theorem_}). 
By Step 1 of the proof of \ref{_cone_cover_for_LCK_pot_Theorem_}, we may assume that all elements of $S$ are polynomials of degree at most $d$. Denote by $V\subset \calo_{\C^n}$ the space of polynomials of degree $\leq d$, and let $X\subset \Gr_m(V)$ be a subset of the Grassmannian of $m$-dimensional planes in $V$ consisting of all subspaces $W\subset V$ which generate an ideal $J_W\subset \C[\C^n]$ in the polynomial ring such that $\C[\C^n]/J_W$ is isomorphic to the algebraic cone $\tilde M_c$. \footnote{Interpreting $X$ as a piece of the relevant Hilbert scheme, we obtain that it is an algebraic subvariety in $\Gr_m(V)$; we are not going to prove or use this observation.} The sequence $\{\sigma_i :=\phi_i(\tilde M_c)\}$ corresponds to points in $X$ converging to $\sigma_\infty:=\phi_\infty(\tilde M_c)$. This gives the convergence of the submanifolds $\phi_i(\tilde M)$ to $\phi_\infty(\tilde M)$ in $\C^n \backslash 0$. Indeed, consider the ``universal fibration'' over $X$, with the fiber over $W\in X$ being the algebraic cone associated with the ideal $J_W\subset \calo_{\C^n}$ generated by $W$. The associated open cone fibration has smooth fibers. Indeed, any open algebraic cone is the total space of a $\C^*$-bundle over a projective manifold. To finish the proof, we need to prove that the manifolds $(M,J_i)=\frac{\phi_i(\tilde M)}{\langle u_i s\rangle}$ smoothly converge to $(M, J_\infty)= \frac{\phi(\tilde M)}{\langle s\rangle}$. This would follow if we prove that the corresponding cones in $\C^n$ converge smoothly in each annulus $B_R \backslash B_r$ around 0 (we need to restrict to the annulus, because the cone itself is singular around zero, hence it makes no sense to speak of $C^\infty$-convergence unless we remove a neighbourhood of the origin). Then $(M, J_i)$ and $(M, J_\infty)$ are quotients of the respective cones by the $A_i$- and $s$-actions respectively, and $A_i$ converges to $s$ in $\GL(n, \C)$. 
However, the cones $\phi_i(\tilde M_c)$ are smooth in each annulus, and they converge to $\phi(\tilde M_c)$ in the $C^0$-topology (or in the Hausdorff metric) by construction. For smooth families of compact manifolds, the $C^\infty$-convergence of their fibers is automatic. To finish the proof, we replace the cone fibration over $X$ by the corresponding fibration of compact complex orbifolds, which also converges to the central fiber. The fibers of a locally trivial fibration of compact orbifolds converge to the central fiber with all derivatives by Ehresmann's theorem. Let $P\arrow X$ be the fibration with projective fibers over $X$, obtained by taking $\C^*$-quotients of the tautological open cone fibration $U\arrow X$. The fibration $P\arrow X$ is locally trivial, because it is smooth and all its fibers are isomorphic projective orbifolds. The fibers of $U$ are total spaces of the $\C^*$-bundles associated with $\calo(1)$ over fibers of $P$. Then $U\arrow X$ is smoothly locally trivial. To obtain the convergence of the corresponding LCK manifolds, we notice that $\lim_i A_i=s$, hence $\lim_i \phi_i(\tilde M) /\langle A_i \rangle = \phi(\tilde M) /\langle s \rangle= (M, J_\infty)$. \endproof \hfill \corollary Let $(M,J)$ be a compact complex manifold, $\dim_\C M\geq 3$, admitting an LCK structure with potential, and $J_\infty$ the Vaisman-type complex structure on $M$ obtained as in \ref{_Vaisman_limit_of_LCK_pot_Theorem_}. Then any Vaisman-type Lee form on $(M, J_\infty)$ can be realized as the Lee form of an LCK structure with potential on $(M, J)$. \hfill \proof Let $(M, I, \omega, \theta)$ be a Vaisman structure on $M$, with $\omega= d^c \theta + \theta \wedge \theta^c$, and $I_i$ a sequence of complex structures on $M$ converging to $I$, such that all $(M,I_i)$ are isotopic to $(M,J)$ as complex manifolds. Then the sequence $\omega_i= I_i d I_i^{-1} \theta + \theta \wedge I_i\theta$ converges to \[ d^c \theta + \theta \wedge \theta^c= I d I^{-1} \theta + \theta \wedge I\theta. 
\] Since positivity is an open condition, the (1,1)-form $\omega_i$ is positive for $i$ sufficiently big. Then $(M, I_i, \omega_i, \theta)$ is LCK with potential, and $\theta$ its Lee form. However, $I_i$ is mapped to $J$ by an isotopy which preserves the cohomology class of $\theta$, hence $\theta$ is a Lee class on $(M, J)$. \endproof \section{The set of Lee classes} \subsection{Opposite Lee forms on LCK manifolds with potential} As another preliminary result, we need the following non-existence claim. For Vaisman manifolds, it was obtained by K. Tsukada (\cite{tsuk}). \hfill \proposition\label{_Lee_cannot_be_opposite_Proposition_} Let $(M, \theta, \omega)$ and $(M, \theta_1, \omega_1)$ be two LCK structures on the same compact complex manifold. Suppose that $(M, \theta, \omega)$ is an LCK structure with potential. Then $\theta+\theta_1$ cannot be cohomologous to 0. \hfill \pstep If $[\theta]$ is the Lee class for an LCK structure with potential on $M$, then $a[\theta]$ is also a Lee class for one, for any $a>1$. To see this, consider the expression $\omega=d^c \theta + \theta \wedge\theta^c$ (\ref{_d_theta_c_equation_}) corresponding to the K\"ahler potential $\phi$ on the K\"ahler cover $(\tilde M, \tilde \omega)\arrow (M,\theta,\omega)$, with $\pi^*\theta = -d\log\phi$. Then $\phi^a$ is also a K\"ahler potential on $\tilde M$: since $d^c\phi^a = a\phi^{a-1}d^c\phi$, the Leibniz rule gives \[ dd^c \phi^{a} = \phi^{a-2} (a \cdot \phi dd^c \phi + a(a-1) d\phi\wedge d^c\phi). \] Indeed, the first summand $a\phi^{a-1} dd^c \phi$ is Hermitian, because $dd^c\phi$ is Hermitian, and the second summand $a(a-1)d\phi\wedge d^c\phi$ is positive. The function $\phi^a$ is automorphic, hence it defines an LCK structure with potential on $M$, and the corresponding Lee form is $- d \log (\phi^a)=a\theta$. \hfill {\bf Step 2:} Let $\omega, \omega_1$ be LCK forms, and $\theta, \theta_1$ the corresponding Lee forms. Suppose that $k \theta + l \theta_1=0$. 
Then \begin{equation*} d(\omega^k \wedge \omega_1^l)= d\omega^k \wedge \omega_1^l + \omega^k \wedge d\omega_1^l= k\theta \wedge \omega^k \wedge \omega_1^l + l\theta_1 \wedge \omega^k \wedge \omega_1^l=0. \end{equation*} This computation can be interpreted as follows. Let $L$ be the weight bundle for $(M, \omega, \theta)$ and $L_1$ the weight bundle for $(M, \omega_1, \theta_1)$. Recall (\ref{_weight_bundle_remark_} (ii)) that $\omega$, $\omega_1$ are viewed as closed $L$- and $L_1$-valued forms. Then $\omega^k$ is a closed $L^{\otimes k}$-valued form, $\omega_1^l$ is a closed $L_1^{\otimes l}$-form, and $\omega^k \wedge \omega_1^l$ is a closed form with coefficients in the flat bundle $L^{\otimes k}\otimes L_1^{\otimes l}$, which is trivial. Return now to the situation described in the assumptions of \ref{_Lee_cannot_be_opposite_Proposition_}. Let $n=\dim_\C M$. Using Step 1, we replace the LCK structure $(\omega, \theta)$ by another LCK structure with potential in such a way that $\theta$ is replaced by $(n-1)\theta$. Then $(n-1)\theta_1 = -\theta$, and the volume form $\omega \wedge \omega_1^{n-1}$ is closed. However, $\omega$ is actually an exact $L$-valued form, because $\omega= d_\theta (\theta^c)$, hence $\omega \wedge \omega_1^{n-1}$ is an exact $L \otimes L_1^{\otimes (n-1)}$-valued form. However, $L \otimes L_1^{\otimes (n-1)}$ is a trivial local system, which implies that $\omega \wedge \omega_1^{n-1}$ is exact. We verify this with an explicit computation: \begin{equation*} \begin{split} d(\theta^c \wedge \omega_1^{n-1})&= d\theta^c \wedge \omega_1^{n-1}- \theta^c \wedge d\omega_1^{n-1}\\ &=(\omega - \theta\wedge \theta^c)\wedge \omega_1^{n-1} - (n-1)\theta_1 \wedge \theta^c \wedge \omega_1^{n-1}\\ &=\omega \wedge \omega_1^{n-1} - (\theta\wedge \theta^c+(n-1)\theta_1 \wedge \theta^c)\wedge \omega_1^{n-1}= \omega \wedge \omega_1^{n-1}. 
\end{split} \end{equation*} We have shown that the positive volume form $\omega \wedge \omega_1^{n-1}$ on $M$ is exact, which is impossible. \endproof \subsection{The set of Lee classes on Vaisman manifolds} To proceed, we need the following preliminary result, which might be of separate interest. \hfill \proposition\label{_Lee_form_on_Vaisman_is_Vaisman_Proposition_} Let $(M,\theta, \omega)$ be an LCK structure on a compact Vaisman manifold. Then $\theta$ is cohomologous to a Lee form of a Vaisman structure. \hfill \proof Let $X$ be the Lee field of a Vaisman structure $(M, \omega^V, \theta^V)$ on $M$, and $G$ the closure of the group generated by exponents of $X$ and $I(X)$. Since $X$ and $IX$ are Killing and commute, $G$ is a compact commutative Lie group, hence it is isomorphic to a compact torus. This group acts on $M$ by holomorphic isometries with respect to the Vaisman metric. Averaging $\theta$ with the $G$-action, we obtain a $G$-invariant 1-form $\theta$, corresponding to another LCK structure in the same conformal class. Without restricting the generality, we may assume from the beginning that the form $\theta$ is $G$-invariant. Now, the equation $d\omega=\omega\wedge\theta$ is invariant under the action of $G$, because $\theta$ is $G$-invariant; in other words, $d(g^*\omega)= g^*\omega\wedge \theta$, for all $g\in G$. This implies that $\omega$ averaged with $G$ gives a form $\omega^G$ which satisfies $d(\omega^G)= \omega^G\wedge \theta$. We have constructed a $G$-invariant LCK structure $(M, \omega^G, \theta)$. After lifting it to the K\"ahler cover $\tilde M$ of $(M, \omega^G, \theta)$, the group $G$ becomes non-compact. Indeed, if it remained compact, it would act by isometries on the universal cover $\tilde M_U$ of $M$ as well, hence the action of $G$ on $M$ is lifted to the action of $G$ on $\tilde M_U$. 
This is impossible, however, because the lift of $G$ to the K\"ahler cover associated with $(M, \omega^V, \theta^V)$ acts by non-trivial homotheties, hence $G$ is lifted to an infinite cover $\tilde G\arrow G$ effectively acting on $\tilde M_U$. We obtain that $G$ acts by non-isometric homotheties on the K\"ahler cover associated with $(M, \omega^G, \theta)$. By \ref{kami_or}, $(M, \omega^G, \theta)$ is actually Vaisman. \endproof \hfill The following result was obtained by K. Tsukada (\cite{tsuk}). We provide a new, simpler proof. \hfill \theorem \label{_Lee_cone_on_Vaisman_Theorem_} Let $M$ be a compact Vaisman manifold, and $H^1(M)= H^{1,0}(M) \oplus \overline{H^{1,0}(M)} \oplus \langle \theta\rangle$ be the decomposition established in \ref{_LCK_pot_Hodge_decompo_Theorem_}. Consider a linear functional $\mu\in H^1(M)^*$ vanishing on $H^{1,0}(M) \oplus \overline{H^{1,0}(M)} \subset H^1(M)$ and satisfying $\mu([\theta])>0$. Then a class $\alpha\in H^1(M,\R)$ is a Lee class for some LCK structure if and only if $\mu(\alpha) >0$. \hfill \pstep We start by proving that any $\alpha\in H^1(M,\R)$ satisfying $\mu(\alpha) >0$ can be realized as a Lee class. From \ref{_d_theta_c_equation_}, we have $\omega=d^c\theta+\theta\wedge I\theta$. By \ref{_Subva_Vaisman_Theorem_} (ii), the form $\omega_0:=d^c\theta$ is semi-positive: it vanishes on the canonical foliation $\Sigma$ and is strictly positive in the transversal directions. Let $u\in H^{1,0}(M) \oplus \overline{H^{1,0}(M)}$. Then \[ d^c(\theta+ u) + (\theta+u) \wedge (\theta^c + u^c) = \omega_0 + (\theta+u) \wedge (\theta^c + u^c) \] is the sum of two semi-positive forms. Indeed, since $u$ is $d^c$-closed, $d^c(\theta+ u) =\omega_0$ is semi-positive; the form $(\theta+u) \wedge (\theta^c + u^c)$ is semi-positive of rank 1 by definition. By \ref{_holomo_on_Vaisman_basic_Proposition_}, $u$ is basic. 
Since $\theta+u$ is the sum of $\theta$ and a basic form, the restriction of $(\theta+u) \wedge (\theta^c + u^c)$ to $\Sigma$ satisfies \[ (\theta+u) \wedge (\theta^c + u^c)\restrict \Sigma = \theta \wedge \theta^c \restrict \Sigma. \] The sum $\omega_0 + (\theta+u) \wedge (\theta^c + u^c)$ is strictly positive on all tangent vectors $x\notin \Sigma$ because $\omega_0$ is positive on these vectors,\footnote{When we say ``a positive (1,1)-form $\alpha$ is positive on a vector $v$'', we mean that $\1\alpha(v, I(v))>0$; a form is Hermitian if it is positive on all non-zero vectors.} and positive on $x\in \Sigma$ because $\theta \wedge \theta^c \restrict \Sigma$ is positive on such $x$. \hfill {\bf Step 2:} It remains to show that none of the classes $\alpha$ with $\mu(\alpha)\leq 0$ can be realized as a Lee class of an LCK structure. By \ref{_Lee_form_on_Vaisman_is_Vaisman_Proposition_}, any Lee class on $M$ is the Lee class of a Vaisman metric. If $\mu(\alpha)=0$, we can represent $\alpha$ by a $d, d^c$-closed form $\alpha_0$. This is impossible by \ref{_theta_not_d^c_closed_Lemma_}. \hfill {\bf Step 3:} It remains to show that there are no Lee classes which satisfy $\mu(\alpha)< 0$. Suppose that such a class $\alpha$ exists; by \ref{_Lee_form_on_Vaisman_is_Vaisman_Proposition_}, it is the Lee class of a Vaisman metric, hence it has an LCK potential. Since $\mu(-\alpha)>0$, Step 1 shows that $-\alpha$ is also a Lee class, hence, again by \ref{_Lee_form_on_Vaisman_is_Vaisman_Proposition_}, the Lee class of a Vaisman metric, which admits an LCK potential as well. This is impossible, because two Lee classes for LCK structures with potential cannot sum to zero, by \ref{_Lee_cannot_be_opposite_Proposition_}. \endproof \subsection{The set of Lee classes on LCK manifolds with potential} Now we can prove the main result of this paper. \hfill \theorem\label{_Lee_cone_on_LCK-pot_Theorem_} Let $(M, \theta, \omega)$ be a compact LCK manifold with potential, $\dim_\C M\geq 3$, and $\mu:\; H^1(M, \R)\arrow \R$ a non-zero linear map vanishing on the space $H^{1,0}(M)\oplus \overline{H^{1,0}(M)}$ which has codimension 1 by \ref{_LCK_pot_Hodge_decompo_Theorem_}. Assume that $\mu(\theta) >0$. 
Then $\xi\in H^1(M, \R)$ is the Lee class of an LCK structure with potential on $M$ if and only if $\mu(\xi)>0$. \hfill \proof Let $(M, I_\infty)$ be a Vaisman manifold, and $\{I_k\}$ the sequence of complex structures converging to $I_\infty$, such that all manifolds $(M, I_k)$ are isomorphic to $(M,I)$ (\ref{_Vaisman_limit_of_LCK_pot_Theorem_}). Given an LCK metric with potential, choose the conformal gauge in which its LCK form satisfies $\omega_\infty= d^c\theta_\infty + \theta_\infty \wedge \theta^c_\infty$ on $(M,I_\infty)$ (\ref{_d_theta_c_equation_}). Then the form $ I_kdI_k^{-1}\theta_\infty + \theta_\infty \wedge I_k(\theta_\infty)$ remains strictly positive for almost all manifolds $(M, I_k)$, because $\lim_k I_k = I_\infty$, and positivity is an open condition. This implies that $\theta_\infty$ is a Lee form on $(M,I_k)$, for $k$ sufficiently big. By \ref{_Lee_cone_on_Vaisman_Theorem_} the set ${\goth L}$ of Lee classes on $(M,I)$ contains the half-space $\{u\in H^1(M, \R)\ \ |\ \ \mu_0(u)>0\}$ for some linear map $\mu_0:\; H^1(M, \R)\arrow \R$. By \ref{_Lee_cannot_be_opposite_Proposition_}, ${\goth L}$ cannot be bigger than a closed half-space. However, ${\goth L}$ is open, because the condition ``$d^c\theta + \theta \wedge \theta^c$ is Hermitian'' is open in $\theta$, hence ${\goth L}$ is an open half-space. It remains only to show that $\mu_0$ is proportional to $\mu$. This would follow if we prove that $\ker \mu=\ker \mu_0$. The space $\ker \mu_0$ is the set of all classes $\alpha \in H^1(M, \R)$ such that neither $\alpha$ nor $-\alpha$ are Lee classes, and $\ker \mu$ consists of the classes represented by $d, d^c$-closed forms. By \ref{_theta_not_d^c_closed_Lemma_}, a Lee class of an LCK manifold cannot be represented by a $d^c$-closed form, which gives $\ker \mu=\ker \mu_0$. \endproof \hfill \noindent{\bf Acknowledgment:} L.O. thanks Massimiliano Pontecorvo for very useful discussions during the preparation of this work, during his visit at Universit\`a di Roma Tre in June 2021. 
Both authors thank Victor Vuletescu for very useful comments on a first version of the paper. \hfill {\small
\section{Introduction} \begin{figure}[t] \setlength{\belowcaptionskip}{-0.2cm} \centering \includegraphics[width=0.48\textwidth]{bing-google.eps} \caption{Example QA feature in Web search engines.} \label{Goole&Bing} \vspace{-5pt} \end{figure} Traditional web search results consist of a list of hyperlinks to the most relevant documents with respect to the user query. In recent years, most commercial Web search engines provide question answering (QA) service as an important component in the search result page (SERP): if a query bears question intent, the search engine will extract the most relevant passage from web documents and place it in an individual block. Figure~\ref{Goole&Bing} shows a screenshot of QA features on \url{google.com} and \url{bing.com}. Usually a QA block consists of the passage answering the question, the source Url from which the passage is extracted, and the links to collect user feedback. Sometimes a QA block may also contain relevant images. The QA feature is appreciated by search engine users since it saves their efforts of clicking on the hyperlinks, scanning the web documents, and looking for the answers. It has become even more popular as voice search on mobile devices is adopted by more and more users. A critical issue in web question answering is {\em passage relevance}, i.e., deciding whether a passage is able to answer the given question or not. Table~\ref{table:example} shows an example of this task. Traditional methods apply linguistic rules or patterns~\cite{radev2002probabilistic,echihabi2008select,kaisser2004question}. However, such rule-based methods can hardly generalize to unseen cases. To handle the complexity of web-scale, open-domain question answering, statistical machine learning models are often the choice in practice (e.g.,~\cite{radev2002probabilistic,echihabi2008select,kaisser2004question,shen2006exploring}). 
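To make the passage-relevance task concrete, a deliberately naive baseline can score a (question, passage) pair by lexical overlap. This toy sketch is purely illustrative (the stopword list and normalization are our own assumptions) and is not the model studied in this paper:

```python
def relevance_score(question: str, passage: str) -> float:
    """Fraction of non-stopword question tokens that appear in the passage."""
    stop = {"the", "a", "an", "is", "are", "what", "what's", "for", "of"}
    norm = lambda s: [w.strip("?.,'").lower() for w in s.split()]
    q = [w for w in norm(question) if w and w not in stop]
    p = set(norm(passage))
    return sum(w in p for w in q) / max(len(q), 1)

q = "What's the normal body temperature for child?"
p = ("The average normal body temperature for children is about 37 degree. "
     "A child's temperature usually averages from around 36.3 degree in the "
     "morning to 37.6 degree in the afternoon.")
score = relevance_score(q, p)  # high overlap, yet blind to intent mismatches
```

Such surface matching cannot distinguish, e.g., a question about children from one about adults, which is precisely why learned models, and large amounts of labeled data, are needed.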
In recent years, deep neural networks have shown outstanding performance on various natural language processing tasks, and they have also become the state-of-the-art approach for passage relevance \cite{mitra2017learning, nogueira2019passage, DBLP:journals/corr/abs-1904-09636}. However, a great challenge for machine learning models is the requirement for large amounts of training data. Usually the model size (in terms of the number of parameters) increases in accordance with the complexity of the target task, and the size of the required training data increases in proportion to the model size. In practice, the number of parameters of a deep learning model for the task of passage relevance in web QA may reach the magnitude of 10-100 million. To train such a huge model, millions of training examples are needed. Labeling the training data with human judges is hugely costly. Moreover, a commercial search engine often provides services in multiple countries and languages; it is unrealistic to manually label millions of training examples for each language. In addition, the judges may not truly understand the user intent: for some tech- or health-related queries, judges without the relevant background can hardly produce correct labels, so judging quality cannot be guaranteed. Therefore, how to collect large amounts of high-quality training data in different languages becomes the bottleneck of web question answering. A straightforward way to collect training data is asking users to provide explicit feedback on the relevance of passages. For example, on both Google and Bing, there are feedback links or voting flags under the QA block (see Figure~\ref{Goole&Bing}). However, very few users would take the effort to click on the feedback links. In our empirical study, less than 0.001\% of the total QA impressions receive users' explicit feedback. Moreover, users strongly tend to give negative feedback. The ratio of positive over negative feedback is about 1:17. 
After sampling from such a skewed label distribution, the usable data is even less. \begin{table}[htbp] \renewcommand{\arraystretch}{1.5} \centering \caption{\label{t:example}An example of the QA relevance task.} \begin{tabular}{lp{5cm}p{7cm}} \hline \textbf{Question}: &\emph{What's the normal body temperature for child?}\\ \hline \textbf{Passage}: &\emph{The average normal body temperature for children is about 37 degree. A child's temperature usually averages from around 36.3 degree in the morning to 37.6 degree in the afternoon.} \\ \hline \textbf{Label}: &\emph{Relevant} \\ \hline \end{tabular} \label{table:example} \end{table} To address the limitation of explicit feedback, one natural idea is to mine implicit feedback from user behavior in search engine logs. There has been a rich literature in this direction. For example, Joachims et al.~\cite{DBLP:journals/sigir/JoachimsGPHG17} analyzed users' decision process through eye-tracking and inferred implicit relevance feedback from click-through data. Several authors built more complicated click models over whole search sessions and derived implicit feedback from the models~\cite{DBLP:conf/sigir/GaoTY11,DBLP:conf/cikm/HuangHGDAH13,DBLP:journals/sigir/AgichteinBD18}. Other works further combined searching and browsing behavior and estimated the topical relevance as well as the domain authority of the visited web pages~\cite{}. However, all previous approaches target the relevance of web documents, rather than passages. Due to the following two observations, we argue that the previous user behavior models cannot be applied to infer passage relevance. 
\begin{table}[htbp] \small \renewcommand{\arraystretch}{1.5} \centering \caption{\label{t:passage_behaviour}Example cases of user behaviour for web QA.} \begin{tabular}{lp{5cm}p{7cm}} \hline \textbf{Question (a)}: &\emph{What's the normal body temperature for \textbf{child}?}\\ \hline \textbf{Passage (a)}: &\emph{The average normal body temperature for children is about \textbf{37 degree}. A child's temperature usually averages from around \textbf{36.3 degree} in the morning to \textbf{37.6 degree} in the afternoon.} \\ \hline \textbf{Url (a)}: &\emph{Human Body Temperature: Fever - Normal - Low \url{https://www.disabled-world.com/calculators-charts/degrees.php}} \\ \hline \textbf{Label}: &\emph{Relevant} \\ \hline \textbf{User Behaviour}: &\emph{\textbf{No Click}} \\ \hline \hline \textbf{Question (b)}: &\emph{What's the normal body temperature for \textbf{adult}?}\\ \hline \textbf{Passage (b)}: &\emph{The average normal body temperature for children is about \textbf{37 degree}. A child's temperature usually averages from around \textbf{36.3 degree} in the morning to \textbf{37.6 degree} in the afternoon.} \\ \hline \textbf{Url (b)}: &\emph{Human Body Temperature: Fever - Normal - Low \url{https://www.disabled-world.com/calculators-charts/degrees.php}} \\ \hline \textbf{Label}: &\emph{Irrelevant} \\ \hline \textbf{User Behaviour}: &\emph{\textbf{Click}} \\ \hline \end{tabular} \end{table} Table~\ref{t:passage_behaviour} shows two cases of question answering. In the first case ``{\sl What's the normal body temperature for \textbf{child}?}'', there is no click in the QA block, while for the second case ``{\sl What's the normal body temperature for \textbf{adult}?}'', a user click is observed. When we further examine these two cases, we can see that the passage in the first case perfectly answers the user question. Through browsing the content of the passage, a user can already get satisfactory information without any follow-up action. 
For the second case, in contrast, the information in the passage (about child body temperature) does not accurately match the user intent (adult body temperature). The user may want to explore more information in the source page from which the passage is extracted. In this case, the title ``human body temperature'' of the source page may trigger the user's interest to click on the Url and read more in that page. This example illustrates a unique characteristic of question answering, i.e., the content of the passage is already presented to the user in the QA block; therefore, the user may not need to click on the Url to get the answer. Consequently, the correlation between user clicks and passage relevance may be even weaker than the correlation of clicks with page relevance. We will report more detailed analysis in Section~\ref{sec:user_feedback}. The QA block also differs in quantity from the traditional web search results. Given a user question, there is often only a single QA block, versus a list of links to web documents. Most previous click models leverage the relative rank order of documents to reduce position bias and gain more reliable implicit feedback. Unfortunately, this idea cannot be directly applied to the QA block. The two major differences discussed above call for a new study of user behavior, with a particular focus on the interaction with the QA block. To be more specific, we are interested in the following questions. What types of user behavior should we consider for the QA block? How strongly do different user behaviors correlate with passage relevance? Is it still possible to extract reliable implicit feedback for passage relevance from the noisy behavior data, analogous to the previous works for page relevance? How can such implicit feedback be leveraged in QA model training? How much gain in model performance can we achieve through this approach? And how much labeling cost can we save in practice? 
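As a toy illustration of what mining implicit feedback could look like, one can aggregate noisy per-impression behavior into weak labels per (question, passage) pair. The signal name, the support threshold, and the 0.5 cut-off below are hypothetical assumptions for illustration; they are not the method developed in this paper:

```python
from collections import defaultdict

def aggregate_feedback(impressions, min_support=5):
    """Turn raw QA-block impressions into weak relevance labels.

    Each impression is a dict with a question, a passage_id, and one
    illustrative binary behavior signal: whether the user reformulated
    the query after seeing the QA block.
    """
    stats = defaultdict(lambda: [0, 0])  # key -> [impressions, re-queries]
    for imp in impressions:
        key = (imp["question"], imp["passage_id"])
        stats[key][0] += 1
        stats[key][1] += imp["reformulated_query"]
    labels = {}
    for key, (n, requeries) in stats.items():
        if n < min_support:       # too few users: signal too noisy, skip
            continue
        # frequent reformulation suggests the passage did not satisfy users
        labels[key] = 0 if requeries / n > 0.5 else 1
    return labels
```

A pair shown to many users with a low re-query rate receives a weak positive label, while pairs with too little traffic are dropped rather than mislabeled.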
In this paper, we make a thorough study of user behavior for web QA, and propose a novel approach to mining implicit relevance feedback from noisy user behavior data for passage relevance. To the best of our knowledge, this is the first work in this area. We make the following contributions. \begin{itemize} \item We consider three types of user behavior in the interaction with the QA block: click behavior, re-query behavior, and browse behavior. To reduce the randomness from individual users and individual actions, we aggregate the behaviors of many users and analyze the sequences of user actions within the context of complete search sessions. We gain interesting insights into the correlation between user behavior and passage relevance. We report our analysis in Section \ref{sec:user_feedback}. \item We develop several methods to automatically extract users' feedback signals from their behavior data by learning from a small amount of manual relevance judgements. We verify the feasibility of learning implicit feedback with reasonable accuracy. Through the analysis of the learned model, we reconstruct users' decision process when they interact with the QA block as well as with the remaining components of the search result page. The methods and analysis are reported in Section \ref{sec:imp_feedback_modeling}. \item We incorporate the mined implicit feedback in a weakly-supervised approach to QA model training, and carry out extensive experiments on several QA datasets (including both an open benchmark dataset and a dataset from a commercial QA system). The experimental results verify the effectiveness of our proposed approach, and are reported in Section \ref{sec:experiment}. \end{itemize} The remainder of the paper is organized as follows. We first review related work in Section~\ref{sec:related-work}. We then present our approach in Section~\ref{sec:method}. 
The extensive experimental results are presented in Section~\ref{sec:experiment}, and the paper is concluded in Section~\ref{sec:conclusion}. \section{Related Work} \label{sec:related-work} \subsection{Question Answering} The purpose of web QA is to find answers in collections of unstructured documents in web-based content such as Wikipedia or other online encyclopedias \cite{chen2017reading,ahn2004using,buscaldi2006mining}. Web QA systems first select relevant candidate passages, and then extract the most appropriate answers directly from the underlying corpus of unstructured passages. Various methods for web QA have been proposed in the literature, including linguistic rules, pattern matching, and machine learning models. Moldovan et al. \cite{DBLP:conf/acl/MoldovanHPMGGR00} proposed a window-based word scoring technique to rank potential answer pieces for web QA. AskMSR \cite{DBLP:conf/emnlp/BrillDB02}, a search-engine-based QA system, used a Bayesian Neural Network relying on data redundancy to find short answers. Cui et al. \cite{DBLP:conf/sigir/CuiSLKC05} learned transformations of dependency paths from questions to answers to improve passage ranking. Yao et al. \cite{DBLP:conf/naacl/YaoDCC13} performed the matching using minimal edit sequences between dependency parse trees. In recent years, deep neural networks have achieved excellent performance in the QA area \cite{wang2018r}. Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), such as the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), were applied to learn the representations of questions and answers \cite{DBLP:journals/corr/TanXZ15, DBLP:conf/acl/TanSXZ16}. Later on, attention mechanisms were employed to model the interaction between questions and answers \cite{DBLP:journals/corr/SantosTXZ16,DBLP:conf/cikm/YangAGC16,DBLP:conf/ijcai/WangHF17, DBLP:journals/corr/abs-1806-00778}, which resulted in better performance than modeling the query and answer separately.
Most recently, deep pre-trained models, such as ELMo \cite{DBLP:conf/naacl/PetersNIGCLZ18}, OpenAI GPT \cite{radford2018improving}, BERT \cite{DBLP:journals/corr/abs-1908-06780,DBLP:journals/corr/abs-1908-08167}, ERNIE \cite{DBLP:conf/acl/ZhangHLJSL19} and XLNet \cite{DBLP:journals/corr/abs-1906-08237}, have become the new state-of-the-art approaches in the QA area. Due to the complexity of web-scale, open-domain question answering, statistical machine learning models for this task require a large amount of training data. In this paper, we do not aim to develop novel QA models; instead, we aim to find a model-agnostic approach for training data collection. \subsection{Learning from User Feedback} User feedback has been widely used in web page ranking to improve search quality \cite{DBLP:conf/kdd/Joachims02}. We may consider two types of user feedback. Explicit (or shallow) feedback means the user takes extra effort to proactively express her satisfaction with the search results, e.g., through a simple up-voting or down-voting button. Implicit feedback means the inference of user satisfaction from the user's search and/or browse sessions, without burden on the user. Rocchio et al. \cite{rocchio1971relevance} presented pioneering work on leveraging relevance feedback for information retrieval, which explicitly gathered feedback through a button for up-voting or down-voting. Another means of collecting explicit feedback is through side-by-side comparisons \cite{ali2006relationship, thomas2006evaluation}. Explicit feedback has the drawback of disturbing users in their normal interaction with search engines. Compared with explicit feedback, implicit feedback has the advantage that it can be collected at much lower cost, in much larger quantities, and without burden on the user of the search system \cite{DBLP:journals/sigir/JoachimsGPHG17}.
Various features have been extracted from user behavior data, such as click-through information, the average dwell time, the number of page visits in post-search browsing sessions, and so on~\cite{DBLP:journals/sigir/AgichteinBD18}. Gao et al. \cite{DBLP:conf/sigir/GaoTY11} and Huang et al. \cite{DBLP:conf/cikm/HuangHGDAH13} used click-through data for deep semantic model training to learn semantic matching between queries and documents in page ranking. A major drawback of implicit feedback, however, is that it is inherently noisy or even biased \cite{DBLP:conf/sigir/JoachimsGPHG05}, which makes interpretation challenging. To address this challenge, various methods have been proposed. For example, Craswell et al. \cite{DBLP:conf/wsdm/CraswellZTR08} proposed four simple hypotheses about how position bias might arise; Dupret and Piwowarski \cite{DBLP:conf/sigir/DupretP08} proposed a set of assumptions on user browsing behavior in order to estimate the probability that a document would be seen; Chapelle and Zhang \cite{DBLP:conf/www/ChapelleZ09} proposed a dynamic Bayesian Network model to indicate whether a user was satisfied by a clicked result and thus left the page. Although user feedback for web page ranking has been well studied, there is little work on user feedback for web QA. The closest work to our study is Kratzwald et al. \cite{kratzwald2019learning}, which designs feedback buttons to explicitly ask users to assess the overall quality of the QA result. In this work, we mainly focus on mining implicit relevance feedback for web QA. To the best of our knowledge, this is the first study in this direction.
\begin{figure*}[ht] \setlength{\belowcaptionskip}{-0.2cm} \centering \includegraphics[scale=0.65, viewport=85 180 695 460, clip=true]{framework_final.pdf} \caption{\label{figure:3_framework} The overall framework of our Feedback QA approach.} \label{fig:framkework} \end{figure*} \section{Our Approach} \label{sec:method} Figure \ref{fig:framkework} shows the overall framework of our approach. For web QA, leveraging user behavior data is a promising direction to continuously improve quality when insufficient labeled data is available. Various user behaviors arise when users naturally interact with a web QA system. However, the implicit feedback contained in these behaviors can be extremely noisy. In this section, we first investigate the relevant user behaviors and extract useful features for implicit feedback modeling. We then leverage the implicit feedback model to auto-label large-scale training data for QA model pre-training, leading to performance improvements in our web QA system. \subsection{Taxonomy of User Behavior} \label{sec:user_feedback} We propose a taxonomy of user behavior as summarized in Table \ref{table:behaviour}. At the highest level, we distinguish two types of user behavior, which correspond to explicit and implicit feedback to the web QA system. For implicit feedback, we further recognize three types of user behavior, namely, re-query, click, and browsing. In the following, we first show an empirical study of explicit feedback in a commercial search engine, and explain why it is not very helpful for collecting training data. We then describe user implicit feedback in more detail. To collect explicit feedback, search engines such as Google and Bing provide links underneath the QA block. However, only a very small fraction of users take the effort to send explicit feedback.
In a real commercial web QA system, the coverage of explicit feedback (clicks on the feedback links) is less than \emph{0.001\%} of the total QA impressions. Moreover, we find that users strongly tend to send negative feedback; the positive-to-negative ratio is about 1:17. To form a training set with a balanced percentage of positive and negative examples, we have to sample evenly from the skewed label distribution. This further reduces the amount of valid training data that can be derived from explicit feedback. Consequently, explicit feedback may not be a good source for collecting training data for a web QA model. We then cast our attention to implicit feedback, which is easy to collect and abundant in quantity. We consider the following three types of user behavior. \noindent \textbf{\emph{Re-query Behavior}}: We use the term \emph{reformulation} to denote re-query behavior, i.e., the user modifies the original query and issues the new query to the search engine. \noindent \textbf{\emph{Click Behavior}}: We categorize four types of clicks, depending on the component being clicked on (see Figure~\ref{figure:user_behavior}). \begin{itemize} \item \emph{Answer Click} is the behavior that the user clicks on the source page Url of the answer passage (indicated by {\small \textcircled{1}} in Figure~\ref{figure:user_behavior}). \item \emph{Answer Expansion Click} means that the user clicks on the special button (indicated by {\small \textcircled{2}} in Figure~\ref{figure:user_behavior}) to expand the QA answer, which is folded due to the maximum display length limit. \item \emph{Outside Answer Click} means that the user clicks on document hyperlinks in the SERP (indicated by {\small \textcircled{3}} in Figure~\ref{figure:user_behavior}) other than the source page Url of the web QA answer.
\item \emph{Related Click} is the behavior that the user clicks on the related queries (indicated by {\small \textcircled{4}} in Figure~\ref{figure:user_behavior}) to explore more information. \end{itemize} \noindent \textbf{\emph{Browsing Behavior}}: the user reads the content of the QA passage or other components in the SERP, without any input action to the search engine. \begin{table} \small \caption{\label{table:behaviour} Taxonomy of user behaviour in a web QA system} \begin{center} \begin{tabular}{ c|cc} \hline \textbf{Feedback} & \textbf{Type} & \textbf{Behaviour} \\ \hline \textbf{Explicit} & \emph{Click} & \emph{Up-vote/Down-vote} \\ \hline \multirow{8}{*}{\textbf{Implicit}} & \emph{Re-query} & \emph{Reformulation} \\ \cline{2-3} & \multirow{4}{*}{\emph{Click}} & \emph{Answer Click} \\ & & \emph{Answer Expansion Click} \\ & & \emph{Outside Answer Click} \\ & & \emph{Related Click} \\ \cline{2-3} & \multirow{1}{*}{\emph{Browsing}} & \emph{Browse} \\ \hline \end{tabular} \label{table:user-feedback} \end{center} \vspace{-5pt} \end{table} \begin{figure}[tbp] \setlength{\belowcaptionskip}{-0.2cm} \centering \includegraphics[scale=0.80, viewport=240 160 500 462, clip=true] {user_behaviour_clear.pdf} \caption{\label{figure:user_behavior} User click behavior illustration, including \emph{Answer Click}, \emph{Answer Expansion Click}, \emph{Outside Answer Click}, \emph{Related Click}.} \label{fig:impact_size} \end{figure} \subsection{Feature Extraction from User Behavior} \begin{table}[] \small \caption{\label{table:feature_description} User behaviour feature description.
Raw features are extracted from the behaviour of a single impression, while Aggregation features are aggregated across all search impressions of the same QA pair.} \begin{tabular}{l|lp{3.8cm}} \hline \textbf{Name} & \textbf{Type} & \textbf{Description} \\ \hline \hline \textbf{Raw Features} \\ \hline \emph{HasRFAfterClick} & \emph{Re-query} & 1 if has reformulation after click on SERP, 0 otherwise \\ \emph{HasRFNoClick} & \emph{Re-query} & 1 if has reformulation after no click on SERP, 0 otherwise \\ \hline \emph{AnswerClick} & \emph{Click} & 1 if clicked on answer, 0 otherwise \\ \emph{AnswerClickOnly} & \emph{Click} & 1 if only clicked on answer, 0 otherwise \\ \emph{AnswerSatClick} & \emph{Click} & 1 if sat-clicked on answer, 0 otherwise \\ \emph{AnswerExpClick} & \emph{Click} & 1 if clicked on answer expansion, 0 otherwise \\ \emph{OTAnswerClick} & \emph{Click} & 1 if clicked outside of answer, 0 otherwise \\ \emph{OTAnswerClickOnly} & \emph{Click} & 1 if only clicked outside of answer, 0 otherwise \\ \emph{OTAnswerSatClick} & \emph{Click} & 1 if sat-clicked outside of answer, 0 otherwise \\ \emph{BothClick} & \emph{Click} & 1 if clicked both in/outside of answer, 0 otherwise \\ \emph{RelatedClick} & \emph{Click} & 1 if clicked on related queries, 0 otherwise \\ \hline \emph{NoClick} & \emph{Browsing} & 1 if there is no click, 0 otherwise \\ \emph{SourcePageDwellTime} & \emph{Browsing} & dwell time of source page \\ \emph{SERPDwellTime} & \emph{Browsing} & dwell time of SERP \\ \emph{IsAbandoned} & \emph{Browsing} & 1 if page is abandoned, 0 otherwise \\ \hline \hline \textbf{Aggregation Features} \\ \hline \emph{RFAfterClickRate} & \emph{Re-query} & rate of re-query after click \\ \emph{RFNoClickRate} & \emph{Re-query} & rate of re-query with no click \\ \hline \emph{AnswerCTR} & \emph{Click} & CTR of QA answer \\ \emph{AnswerOnlyCTR} & \emph{Click} & CTR with only click on QA answer \\ \emph{AnswerSatCTR} &
\emph{Click} & satisfied CTR of QA answer \footnote{A satisfied click means the dwell time is longer than 30s.} \\ \emph{AnswerExpRate} & \emph{Click} & CTR of answer expansion \\ \emph{OTAnswerCTR} & \emph{Click} & CTR outside of QA answer \\ \emph{OTAnswerOnlyCTR} & \emph{Click} & CTR with only click outside of QA answer \\ \emph{OTAnswerSatCTR} & \emph{Click} & satisfied CTR outside of QA answer \\ \emph{BothClickCTR} & \emph{Click} & CTR of both click \\ \emph{RelatedClickRate} & \emph{Click} & CTR of related queries \\ \hline \emph{NoClickRate} & \emph{Browsing} & no click rate \\ \emph{AbandonRate} & \emph{Browsing} & abandonment rate \\ \emph{AvgSourcePageDwellTime} & \emph{Browsing} & average source page dwell time \\ \emph{AvgSERPDwellTime} & \emph{Browsing} & average SERP dwell time \\ \hline \end{tabular} \end{table} After defining the various types of user behavior, the next step is to convert them into Boolean or numerical features. Table~\ref{table:feature_description} summarizes the raw features as well as the aggregated features. Most features are straightforward; please refer to the ``Description'' column in Table~\ref{table:feature_description}. We explain a few selected features below. \begin{itemize} \item \emph{Source Page Dwell Time} records the time from the moment the user clicks into the source page of the web QA answer to the moment the user leaves the source page. \item \emph{SERP Dwell Time} records the time from when the SERP finishes loading to when the search session finishes. \item \emph{Abandonment}: no click occurs on the SERP; the user just browses the SERP and the search session finishes.
\end{itemize} Moreover, the click-through rate (\emph{CTR}) for a component (which can be a QA answer, an answer expansion, a related search, or a web document outside the QA block) is defined as \begin{equation} CTR = \frac{N_{click}}{N_{impression}}, \end{equation} where $N_{impression}$ denotes the total number of impressions of the component and $N_{click}$ denotes the number of clicks on the component. A satisfied click (\emph{SatClick}) on a component means a click on the component followed by a dwell time greater than or equal to a predefined threshold. The satisfied click-through rate (SatCTR) is then defined as \begin{equation} SatCTR = \frac{N_{SatClick}}{N_{impression}}, \end{equation} where $N_{SatClick}$ denotes the number of SatClicks on the component. \subsection{Implicit Feedback Modeling} \label{sec:imp_feedback_modeling} After we define the set of user behavior features, the next question is whether it is possible to mine implicit relevance feedback from the noisy user behavior data. To answer this question, we first prepare a data set with 18k QA pairs in Section~\ref{sec:feedback_dataset}, where each QA pair is augmented with the set of user behavior features as well as a human-judged label. Intuitively, a single feature, such as AnswerClick, may be too weak a signal to infer user satisfaction. We verify this assumption as a baseline in Section~\ref{sec:click_baseline}. We then consider various machine learning models, including Logistic Regression (LR)~\cite{menard2002applied}, Decision Tree (DT)~\cite{safavian1991survey}, Random Forest (RF)~\cite{liaw2002classification}, and Gradient Boost Decision Tree (GBDT)~\cite{ke2017lightgbm}. To fit the models with human-judged labels, we design different strategies to aggregate multiple users' behaviors and compare their performance in Section~\ref{sec:ml_models}.
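The CTR and SatCTR definitions above can be computed directly from impression-level logs. The following minimal sketch is our own illustration, not the paper's production code; the \texttt{Impression} record is an assumed data shape, and the 30-second default threshold follows the footnote in Table~\ref{table:feature_description}.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Impression:
    """One display of a component (e.g., the QA block) to a user (assumed shape)."""
    clicked: bool
    dwell_time: float  # seconds spent after the click; 0.0 if no click


def ctr(impressions: List[Impression]) -> float:
    """CTR = N_click / N_impression."""
    if not impressions:
        return 0.0
    return sum(i.clicked for i in impressions) / len(impressions)


def sat_ctr(impressions: List[Impression], threshold: float = 30.0) -> float:
    """SatCTR = N_SatClick / N_impression, where a satisfied click is a click
    whose subsequent dwell time is greater than or equal to the threshold."""
    if not impressions:
        return 0.0
    sat = sum(i.clicked and i.dwell_time >= threshold for i in impressions)
    return sat / len(impressions)
```

The same two functions apply to any clickable component (answer Url, answer expansion, related query, or outside document) by feeding in that component's impressions.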
We also analyze feature importance and derive rules from the learned models, which help us gain insights into users' decision process when they interact with the QA block. \subsubsection{Dataset} \label{sec:feedback_dataset} We create a data set consisting of 18k QA pairs, where each QA pair is augmented with the set of user behavior features as well as a human-judged label. More specifically, the data set is a table, where each row is of the form <Question, Passage, User behaviour features, Label>. The QA pairs are sampled from a half-year log (i.e., January to June 2019) of one commercial search engine. We sample QA pairs whose number of impressions in the log is around 50. This is due to two considerations. First, we want to aggregate multiple users' behavior to reduce the noise from individual users. Second, we want to avoid overly popular queries, which tend to be short and too easy to answer. Each QA pair is sent to three crowd-sourcing judges and the final label is derived by majority voting. This 18k data set is further randomly split into 14k/2k/2k training, dev, and test sets. \subsubsection{Baseline}\label{sec:click_baseline} Click-through rate (CTR) and satisfied click-through rate (SatCTR) have been widely adopted in previous works as indicators of the relevance of a web page with respect to a given user query. Analogously, in our study of passage relevance, we start with users' clicks on the QA block. We first investigate the feature $AnswerCTR$ in Table~\ref{table:feature_description} by plotting a precision-recall curve in Figure \ref{figure:pr} (a). \begin{itemize} \item For QA pairs whose $CTR$ > 0, the recall is less than 0.33. In other words, when the passage is relevant to the question, in more than two thirds of cases users do not make a single click on the source Url. Note that clicks are counted over all impressions of that question-passage pair.
This observation is very different from page ranking. However, considering the nature of question answering, this result is not surprising: users may simply browse the content of the passage and obtain the information, so no further click into the source Url is needed. \item The highest precision is less than 0.77, across the full range of recall values. This suggests that clicking into the source Url does not necessarily indicate a relevant passage. We find that in most clicked cases, the passage is partially relevant to the question; the users may therefore want to click into the source Url to explore more information. \end{itemize} We further investigate the correlation between SatCTR and passage relevance. Similarly, we plot the precision-recall curves in Figure~\ref{figure:pr} (b). $CTR\_t$ means we consider a click followed by dwell time on the source page longer than $t$ seconds as a satisfied click. We experiment with dwell time thresholds of \{0s, 5s, 15s, 25s\}, and observe a similar trend to that in Figure~\ref{figure:pr} (a). The experiments with CTR and SatCTR verify that the single feature of user clicks into the source Url is not a good indicator of passage relevance: clicks do not necessarily indicate relevant passages, and the absence of clicks does not necessarily suggest irrelevant passages. Therefore, we need to consider more complex models that combine the sequences of user actions in search sessions.
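A precision-recall curve over a single score feature such as \emph{AnswerCTR} can be obtained by sweeping a decision threshold over the observed scores. The sketch below is our own illustration of that procedure under the convention that a QA pair is predicted relevant when its score exceeds the threshold; it is not the paper's evaluation code.

```python
from typing import List, Tuple


def pr_curve(scores: List[float], labels: List[int]) -> List[Tuple[float, float, float]]:
    """Sweep thresholds over a single feature (e.g., AnswerCTR) and report
    (threshold, precision, recall) points for predicting passage relevance.
    A pair is predicted relevant when score > threshold."""
    points = []
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s > t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s > t and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s <= t and y == 1)
        precision = tp / (tp + fp) if tp + fp else 1.0  # no positive predictions
        recall = tp / (tp + fn) if tp + fn else 0.0
        points.append((t, precision, recall))
    return points
```

Plotting the (precision, recall) pairs from such a sweep yields curves of the kind shown in Figure~\ref{figure:pr}.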
\begin{figure}[t] \setlength{\belowcaptionskip}{-0.2cm} \centering \includegraphics[scale=0.32, viewport=10 0 800 330, clip=true]{pr_curve_ctr.eps} \caption{\label{figure:pr}Precision-recall curves of \emph{AnswerCTR} (a) and \emph{AnswerSatCTR} (b).} \label{PR_Curve} \end{figure} \begin{table} \begin{center} \caption{Comparison of feedback modeling methods, including click-based baselines, Label Aggregation, and Feature Aggregation.} \begin{tabular}{l|ccc} \hline \textbf{Method} & \textbf{AUC} & \textbf{ACC} & \textbf{F1} \\ \hline \emph{AnswerCTR} & 58.28 & 53.87 & 18.11 \\ \emph{AnswerSatCTR\textsubscript{5s}} & 57.73 & 53.50 & 16.52 \\ \emph{AnswerSatCTR\textsubscript{15s}} & 57.75 & 52.73 & 13.43 \\ \emph{AnswerSatCTR\textsubscript{25s}} & 57.79 & 52.63 & 12.23 \\ \hline \hline \textbf{LR\textsubscript{LA}} & 70.96 & 63.54 & 65.31 \\ \textbf{DT\textsubscript{LA}} & 71.65 & 64.33 & 66.88 \\ \textbf{RF\textsubscript{LA}} & 71.61 & 64.33 & \textbf{66.95} \\ \textbf{GBDT\textsubscript{LA}} & 71.12 & 59.03 & 66.73 \\ \hline \textbf{LR\textsubscript{FA}} & 71.41 & 66.00 & 66.78 \\ \textbf{DT\textsubscript{FA}} & 61.75 & 63.43 & 60.92 \\ \textbf{RF\textsubscript{FA}} & 71.14 & 67.47 & 65.66 \\ \textbf{GBDT\textsubscript{FA}} & \textbf{73.69} & \textbf{68.00} & 66.08 \\ \hline \end{tabular} \label{table:metrics for different modeling method} \end{center} \vspace{-3pt} \end{table} \subsubsection{Our Approach}\label{sec:ml_models} We apply machine learning models to combine the various types of user behavior features in Table~\ref{table:feature_description}. The training target is to fit the human-judged labels. We evaluate model performance using common binary classification metrics, including area under the curve (AUC), accuracy (ACC), and F1 score. \noindent \textbf{\emph{Two Aggregation Strategies}}: For each QA pair, we collect multiple impressions.
Therefore, there are two strategies to aggregate the impressions, as described in the following. \begin{itemize} \item {\bf Label Aggregation (LA)}. The model makes a prediction for each impression based on the raw features in Table~\ref{table:feature_description}. The labels from all the impressions of the same question-passage pair are then aggregated to give the final label. \item {\bf Feature Aggregation (FA)}. For each question-passage pair, we first calculate the aggregated features in Table~\ref{table:feature_description} from all the impressions of that pair. We then train the model to predict the label based on the aggregated features. \end{itemize} \noindent \textbf{\emph{Comparison of Different Models}}: Based on raw features or aggregated features, both linear and non-linear models are applied, including Logistic Regression (LR) \cite{menard2002applied}, Decision Tree (DT) \cite{safavian1991survey}, Random Forest (RF) \cite{liaw2002classification}, and Gradient Boost Decision Tree (GBDT) \cite{ke2017lightgbm}. \noindent \textbf{\emph{Results Analysis}}: As summarized in Table \ref{table:metrics for different modeling method}, regardless of whether raw or aggregated features are used, and regardless of the model used for feature combination, our proposed methods significantly outperform the baseline methods (i.e., \emph{AnswerCTR} and \emph{AnswerSatCTR}) on all metrics. In terms of AUC and ACC, the GBDT model with aggregated features achieves the best performance; in terms of F1 score, the DT and RF models based on raw features achieve the top two results. \begin{figure}[tbp] \setlength{\belowcaptionskip}{-0.2cm} \centering \includegraphics[scale=0.5, viewport=14 150 400 350, clip=true]{single_feature_importance.eps} \caption{\label{figure:Feature_importance} Relative weights of the top 8 features in user feedback models for web QA.
The GBDT\textsubscript{FA} model is used for analysis since it shows the best evaluation results.} \label{fig:impact_size} \vspace{-5pt} \end{figure} \noindent \textbf{\emph{Model Interpretation}}: To gain more insights about user behavior on QA, we first investigate the impact of individual features based on the best model, GBDT\textsubscript{FA}. The top 8 features of the model are shown in Figure \ref{figure:Feature_importance}. From the result, we can see that \emph{AvgSERPDwellTime} and \emph{OTAnswerOnlyCTR} have the highest feature importance, followed by \emph{AnswerOnlyCTR}, \emph{AnswerSatCTR}, \emph{AnswerExpRate} and \emph{AbandonRate}. Reformulation-related features such as \emph{RFNoClickRate}, as well as \emph{RelatedClickRate}, have relatively low importance. We can also obtain the following insights about user behavior on web QA: \begin{itemize} \item Click features related to the web QA answer itself, such as \emph{AnswerClick} and \emph{AnswerSatClick}, are not the most important features. This aligns with our earlier analysis: an answer click does not necessarily suggest relevance, and the absence of an answer click does not necessarily suggest poor relevance. \item SERP dwell time reflects the duration of the search session, which correlates with user satisfaction with the SERP. The correlation with web QA is straightforward to see, since the QA answer is placed in the most prominent position of the SERP. \end{itemize} To gain further insights, we pick the DT\textsubscript{LA} model in Table \ref{table:metrics for different modeling method} and analyze it through visualization of the decision tree \footnote{We pick the decision tree model because it is easy to visualize as a tree for interpretation.}. Here are a couple of insights we find: \begin{itemize} \item When the QA answer is not clicked, it does not necessarily suggest bad relevance. If there is no click on the SERP and the SERP dwell time is long, this can also suggest relevance.
The user may simply browse the QA passage for a while and obtain the needed information. However, a click outside the QA answer is a strong signal that passage relevance is poor, since the QA answer occupies the top position and the user is trying other places on the SERP to look for an answer. \item When the user clicks only on the QA answer and the SERP dwell time is long enough (e.g., longer than 20s), the user is very likely satisfied with the answer. \item When the user clicks on both the QA answer and outside answers, such as web documents below the QA answer, it is hard to determine whether the user is satisfied or not. The intuition is that the user may think the QA answer does not answer her question and wants to click on more sites to search for the correct answer. However, it is also possible that the user just wants to check more web documents to confirm the answer. For example, in health domains, users tend to be more cautious and thus click more sites for verification purposes. \end{itemize} \noindent \textbf{\emph{Summary}}: QA-level user behavior differs from page-level user behavior. Simply reusing the click features from page ranking does not model users' satisfaction with web QA very well. By combining more comprehensive user behavior features with machine learning, we are able to model user satisfaction with web QA reasonably well. In the next sections, we discuss how to leverage these user behavior models to generate large-scale weakly supervised data and further boost the quality of web QA relevance models.
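The LA and FA strategies above differ only in where the aggregation over impressions happens: LA aggregates per-impression predictions, while FA aggregates features before a single prediction. The following sketch is our own illustration of this distinction; the \texttt{predict} callable stands in for any of the trained models (LR, DT, RF, GBDT), and the feature names are hypothetical.

```python
from statistics import mean
from typing import Callable, Dict, List

Features = Dict[str, float]


def label_aggregation(impressions: List[Features],
                      predict: Callable[[Features], int]) -> int:
    """LA: score every impression of a QA pair, then majority-vote the labels."""
    votes = [predict(f) for f in impressions]
    return int(sum(votes) >= len(votes) / 2)


def feature_aggregation(impressions: List[Features],
                        predict: Callable[[Features], int]) -> int:
    """FA: average each feature across impressions first, then predict once."""
    keys = impressions[0].keys()
    aggregated = {k: mean(f[k] for f in impressions) for k in keys}
    return predict(aggregated)
```

Majority voting is one reasonable choice for the LA label-combination step; the paper does not commit to a specific aggregation function, so this detail is an assumption.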
\begin{table} \begin{center} \small \caption{\label{t:statistic} Statistics of experimental datasets.} \begin{tabular}{l|cccc} \hline \textbf{Dataset} & \textbf{Train} & \textbf{Dev} & \textbf{Test} & \textbf{Labels}\\ \hline \textbf{FeedbackQA\textsubscript{log}} & 22M & - & - & - \\ \textbf{FeedbackQA\textsubscript{\{ctr, winrate, gbdt\}}} & 4M & 10k & 10k & 50\%+/50\%- \\ \textbf{DeepQA\textsubscript{factoid}} & 30k & 2k & 2k & 55.7\%+/44.3\%- \\ \textbf{DeepQA\textsubscript{general}} & 30k & 2k & 2k & 57.6\%+/42.4\%- \\ \textbf{Marco} & 30k & 2k & 2k & 50\%+/50\%- \\ \hline \end{tabular} \label{dataset} \end{center} \vspace{-2pt} \end{table} \subsection{Feedback as Weak Supervision for QA Pre-training} As shown in Figure \ref{fig:framkework}, we propose a two-stage approach to integrate the implicit user feedback signals into the QA relevance model training process. In the first stage, a large-scale search log is mined from the commercial web QA system, consisting of tuples \emph{<Query, Passage, User feedback>}. Leveraging the feedback models developed in Table \ref{table:metrics for different modeling method}, each \emph{<Query, Passage>} pair ($\left \langle Q, P \right \rangle$) is auto-labeled with a simulated relevance score. \begin{align} &score_{QP}=F_{FeedbackModel}(X_{1},...,X_{m}) \\ &label_{QP}=\begin{cases} 1 & score_{QP} > threshold_{1},\\ 0 & score_{QP} < threshold_{2} \end{cases} \end{align} where $X_{i}$ represents the features described in the previous section, $m$ represents the number of features, and $threshold_{1}$ and $threshold_{2}$ are two hyper-parameters. Using the large-scale auto-labeled data as weak supervision, we first pre-train a web QA relevance model. In the second stage, with the human-labeled QA data, the pre-trained QA model is further fine-tuned to obtain an enhanced web QA relevance model.
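The thresholding step above can be sketched as follows. This is an illustration only: the threshold values 0.7 and 0.3 are assumed defaults, since the paper treats $threshold_{1}$ and $threshold_{2}$ as tuned hyper-parameters, and pairs with scores between the two thresholds are assumed to be discarded from the weakly supervised set.

```python
from typing import List, Optional, Tuple


def auto_label(score: float,
               threshold_pos: float = 0.7,
               threshold_neg: float = 0.3) -> Optional[int]:
    """Convert the feedback model's relevance score in [0, 1] into a weak
    label: 1 above threshold_1, 0 below threshold_2, and None (ambiguous,
    dropped) in between. The 0.7/0.3 values are illustrative assumptions."""
    if score > threshold_pos:
        return 1
    if score < threshold_neg:
        return 0
    return None


def build_weak_dataset(scored_pairs: List[Tuple[str, str, float]]) -> List[Tuple[str, str, int]]:
    """Keep only confidently auto-labeled <Query, Passage> pairs for pre-training."""
    out = []
    for query, passage, score in scored_pairs:
        label = auto_label(score)
        if label is not None:
            out.append((query, passage, label))
    return out
```

The resulting weakly labeled pairs serve as the first-stage pre-training corpus; the second stage then fine-tunes on human-labeled data.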
Let $Q={\{w_1, w_2, w_3, ..., w_m\}}$ be a question with $m$ word pieces and $P={\{w_1, w_2, w_3, ..., w_n\}}$ be a passage with $n$ word pieces, where $w_i$ is the bag-of-words representation of the $i$-th word piece. We use Cross Entropy (CE) as the loss function for the two-stage training, defined as: \begin{align} &y=F_{QAModel}(\left \langle Q, P \right \rangle) \\ &L_{CE}=-\frac{1}{N}\sum_{i}^{N}[\hat{y}_{i}\log(y_{i})+ (1-\hat{y}_{i})\log(1-y_{i})] \end{align} where $F_{QAModel}(\cdot)$ will be described in Section 4.2, ${y}_{i}$ represents the QA model output, $\hat{y}_{i}$ represents the true label, and $N$ represents the number of training samples. \section{Experiments} \label{sec:experiment} \subsection{Dataset and Metrics} We conduct experiments on several datasets as follows, with their statistics shown in Table \ref{t:statistic}. \begin{itemize} \item \textbf{FeedbackQA\textsubscript{log}}: An English QA dataset collected from the latest half year's web QA system log of one commercial search engine. Each item of the log consists of a tuple <query, passage, user behavior>. \item \textbf{FeedbackQA\textsubscript{\{ctr, gbdt, ...\}}}: For each QA pair in FeedbackQA\textsubscript{log}, the feedback models in Table \ref{table:metrics for different modeling method} are leveraged to predict a simulated relevance label (the original score is in [0, 1] and is converted to a binary relevance label according to the different training objectives discussed in Section \ref{subsec:objective impact}). For example, \textbf{FeedbackQA\textsubscript{ctr}} indicates a feedback QA dataset labeled by the \emph{AnswerCTR} model. Each dataset is further sampled to make it balanced (i.e., the positive/negative ratio is about 1:1). \item \textbf{DeepQA\textsubscript{general}}: An English QA dataset from one commercial QA system including 0.1 million human-labeled cases. Each case consists of three parts, i.e., question, passage, and a binary label (i.e.
0 or 1) assigned by crowd-sourcing judges indicating whether the question can be answered by the passage. The data collection process is briefly as follows. First, for each question, the top 10 relevant documents returned by the search engine are used to form <Question, Url> pairs; then passages are extracted from these documents to form <Question, Url, Passage> triples; next, the <Query, Passage> pairs are sampled and sent to crowd-sourcing judges. Each <Query, Passage> pair is judged by three judges. A case with no less than 2/3 positive labels gets a final positive label (i.e., 1), otherwise negative (i.e., 0). \item \textbf{DeepQA\textsubscript{factoid}}: This dataset is collected in a similar manner to DeepQA\textsubscript{general}. The main difference is that the queries of this dataset are factoid queries, e.g., with \{what, who, why, how\} intents. \item \textbf{Marco}: An English open-source QA dataset \cite{DBLP:conf/nips/NguyenRSGTMD16}. The dataset contains tuples of a query with relevant and non-relevant passages extracted from web documents. We choose the relevant and non-relevant passages as our positive and negative samples, respectively. \end{itemize} AUC and ACC are used as evaluation metrics for the QA relevance models. AUC represents the model's capability of distinguishing between relevant and non-relevant answers. The higher the AUC or ACC, the better the model. \begin{table*}[t!] \small \caption{\label{table:result}Performance comparison between our methods and baselines on the QA\textsubscript{general} dataset.
ACC denotes accuracy and AUC denotes Area under Curve (all ACC, AUC metrics in the table are percentage numbers with \% omitted)}\label{t:main} \subtable{(a)}{ \begin{tabular}{ccccccccccc} \hline \multirow{2}{*}{\textbf{Model}} & \multirow{2}{*}{\textbf{Method}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Pretraining\\ Data Size\end{tabular}}} & \multicolumn{4}{c}{\textbf{Performance (AUC/ACC)}} \\ \textbf{} & \textbf{} & \textbf{} & \textbf{5k} & \textbf{10k} & \textbf{20k} & \textbf{30k} \\ \hline \multirow{7}{*}{\textbf{BiLSTM}} & \multirow{1}{*}{\textbf{Original}} & \textbf{-} & 60.45/58.21 & 61.30/59.92 & 61.55/61.99 &\multicolumn{1}{c}{62.40/61.74} \\ & \multirow{3}{*}{\textbf{FBQA\textsubscript{ctr}}} & \textbf{0.5m} & 59.90/57.60(-0.55/-0.61) & 61.25/58.25(-0.05/-1.67) & 61.40/60.50(-0.15/-1.49) & \multicolumn{1}{c}{60.65/59.29(-1.75/-2.45)} \\ & & \textbf{1m} & 60.25/58.45(-0.20/-0.24) & 61.35/58.12(+0.05/-1.80) & 62.65/57.43(+1.10/-4.56) & \multicolumn{1}{c}{61.35/60.69(-1.05/-1.05)} \\ & & \textbf{4m} & 60.50/56.99(+0.05/-1.22) & 59.75/58.39(-1.55/-1.53) & 60.90/59.15(-0.65/-2.84) & \multicolumn{1}{c}{62.25/61.73(-0.15/-0.01)} \\ & \multirow{3}{*}{\textbf{FBQA\textsubscript{LA}}} & \textbf{0.5m} & 61.95/59.66(+1.50/+1.45) & 62.50/60.96(+1.20/+1.04) & 62.85/62.74(+1.30/+0.75) & \multicolumn{1}{c}{64.23/62.50(+1.83/+0.76)} \\ & & \textbf{1m} & 62.80/60.44(+2.35/+2.23) & 63.20/61.20(+1.90/+1.28) & 63.45/63.00(+1.90/+1.01) & \multicolumn{1}{c}{65.57/63.05(+3.17/+1.31)} \\ & & \textbf{4m} & \textbf{64.13/62.15(+3.68/+3.94)} & \textbf{65.45/63.33(+4.15/+3.41)} & \textbf{65.46/64.17(+3.91/+2.18)} & \multicolumn{1}{c}{\textbf{67.35/64.35(+4.95/+2.61)}} \\ \hline \hline \multirow{7}{*}{\textbf{BERT}} & \multirow{1}{*}{\textbf{Original}} & \textbf{-} & 69.31/64.86 & 71.81/67.76 & 72.47/67.07 & \multicolumn{1}{c}{75.28/68.26} \\ & \multirow{3}{*}{\textbf{FBQA\textsubscript{ctr}}} & \textbf{0.5m} & 67.35/62.76(-1.96/-2.10) & 72.96/66.66(+1.15/-1.10) & 
75.11/68.26(+2.64/+1.19) & \multicolumn{1}{c}{77.76/71.07(+2.48/+2.81)} \\ & & \textbf{1m} & 72.33/67.06(+3.02/+2.20) & 73.76/67.36(+1.95/-0.40) & 76.16/69.16(+3.69/+2.09) & \multicolumn{1}{c}{77.42/68.26(+2.14/+0.00)} \\ & & \textbf{4m} & 72.19/65.66(+2.88/+2.90) & 73.92/67.96(+2.11/+0.20) & 76.81/67.96(+4.34/+0.89) & \multicolumn{1}{c}{77.94/69.36(+2.66/+1.10)} \\ & \multirow{3}{*}{\textbf{FBQA\textsubscript{LA}}} & \textbf{0.5m} & 72.26/65.27(+2.95/+0.41) & 76.03/68.87(+4.22/+1.11) & 77.79/69.47(+5.32/+2.40) & \multicolumn{1}{c}{77.92/69.47(+2.34/+1.21)} \\ & & \textbf{1m} & 73.53/66.37(+4.22/+1.51) & 76.29/68.97(+4.48/+1.15) & 78.63/68.77(+6.16/+1.70) & \multicolumn{1}{c}{79.82/70.17(+4.54/+1.91)} \\ & & \textbf{4m} & \textbf{76.53/68.57(+7.22/+3.71)} & \textbf{78.17/68.57(+6.36/+0.81)} & \textbf{79.79/71.17(+7.32/+4.10)} & \multicolumn{1}{c}{\textbf{81.03/71.57(+5.78/+3.31)}} \\ \hline \end{tabular} } \subtable{(b)}{ \begin{tabular}{ccccccccccc} \hline \multirow{2}{*}{\textbf{Model}} & \multirow{2}{*}{\textbf{Method}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Pretraining\\ Data Size\end{tabular}}} & \multicolumn{4}{c}{\textbf{Performance (AUC/ACC)}} \\ \textbf{} & \textbf{} & \textbf{} & \textbf{5k} & \textbf{10k} & \textbf{20k} & \textbf{30k} \\ \hline \multirow{7}{*}{\textbf{BiLSTM}} & \multirow{1}{*}{\textbf{Original}} & \textbf{-} & 64.02/60.80 & 64.73/59.00 & 65.18/60.80 & 64.03/58.95 \\ & \multirow{3}{*}{\textbf{FBQA\textsubscript{ctr}}} & \textbf{0.5m} & 63.62/60.35(-0.40/-0.45) & 65.51/59.00(+0.78/+0.00) & 64.56/60.40(-1.24/-0.40) & 61.79/58.05(-2.24/-0.90) \\ & & \textbf{1m} & 63.67/60.30(-0.35/-0.50) & 64.86/58.65(+0.15/-0.35) & 65.53/60.40(+0.35/-0.40) & 61.35/57.40(-2.68/-1.55) \\ & & \textbf{4m} & 62.83/60.20(-1.19/-0.60) & 65.09/59.05(+0.36/+0.05) & 64.67/59.70(-0.51/-1.10) & 65.24/60.95(+1.21/+2.00) \\ & \multirow{3}{*}{\textbf{FBQA\textsubscript{LA}}} & \textbf{0.5m} & 65.66/62.05(+1.64/+1.25) & 67.87/63.00(+3.14/+4.00) & 
68.84/63.80(+3.66/+3.00) & 69.80/64.75(+5.77/+5.80) \\ & & \textbf{1m} & 66.36/64.00(+2.34/+3.2) & 67.81/63.45(+3.08/+4.45) & 69.23/64.70(+4.05/+3.90) & 72.16/66.40(+8.13/+7.45) \\ & & \textbf{4m} & \textbf{67.22/65.25(+3.20/+4.45)} & \textbf{70.26/65.10(+5.53/+6.10)} & \textbf{70.79/64.95(+5.61/+4.15)} & \textbf{73.31/67.15(+9.28/+8.20)} \\ \hline \hline \multirow{7}{*}{\textbf{BERT}} & \multirow{1}{*}{\textbf{Original}} & \textbf{-} & 68.88/65.46 & 71.43/66.77 & 73.87/67.06 & 78.37/69.56 \\ & \multirow{3}{*}{\textbf{FBQA\textsubscript{ctr}}} & \textbf{0.5m} & 71.44/66.27(+2.56/+0.81) & 75.17/67.06(+3.74/+0.29) & 77.78/70.78(+3.91/+3.72) & 80.00/72.87(+1.63/+3.31) \\ & & \textbf{1m} & 71.73/66.17(+2.85/+0.71) & 74.19/67.26(+2.76/+0.49) & 77.01/69.46(+3.14/+2.40) & 78.30/72.07(-0.07/+2.51) \\ & & \textbf{4m} & 70.55/66.06(+1.67/+0.60) & 75.51/68.26(+4.08/+1.49) & 78.10/69.77(+4.23/+2.71) & 79.54/70.97(+1.17/+1.41) \\ & \multirow{3}{*}{\textbf{FBQA\textsubscript{LA}}} & \textbf{0.5m} & 74.00/67.77(+5.12/+2.31) & 76.57/68.87(+5.14/+2.10) & 78.78/70.67(+4.91/+3.61) & 80.91/72.37(+2.54/+2.81) \\ & & \textbf{1m} & 74.33/68.67(+5.45/+3.21) & 75.77/68.77(+4.34/+2.00) & 78.40/71.17(+4.53/+4.11) & 79.31/71.77(+0.94/+2.21) \\ & & \textbf{4m} & \textbf{76.13/69.77(+7.25/+4.31)} & \textbf{77.93/71.27(+6.50/+4.50)} & \textbf{79.94/72.77(+6.07/+5.71)} & \textbf{81.21/73.27(+2.84/+3.71)} \\ \hline \end{tabular} } \end{table*} \subsection{Baselines and Models} To verify the effectiveness of our proposed Feedback QA approach, we have the following two baseline methods for comparison: \begin{itemize} \item \textbf{Original}: We only use the task specific training data (i.e. QA human labeled data) to train the QA model. \item \textbf{FBQA\textsubscript{ctr}}: The FeedbackQA\textsubscript{ctr} data is used for pre-training the QA model at the first stage. At the second stage, the QA model is further fine-tuned using the task specific training data. 
\end{itemize} To prove the generalization capability of our approach, we conduct experiments using two kinds of model architectures: \begin{itemize} \item \textbf{BiLSTM}: It consists of three parts. The first is an embedding layer which maps each token to a fixed-dimension vector. The second is a multi-layered BiLSTM employed to encode these embeddings; we apply separate BiLSTMs to the question and the passage, i.e., $H^q = BiLSTM_1(Q)$ and $H^p = BiLSTM_2(P)$, where $H^q$ and $H^p$ are the representations of the question and passage respectively. Notably, $BiLSTM_1$ and $BiLSTM_2$ are different models and do not share any parameters. Following the BiLSTM layers is a prediction layer, which concatenates $H^q$ and $H^p$ and applies a fully connected layer to predict the question-passage relevance probability. \item \textbf{BERT\textsubscript{base}}\footnote{Our goal is to prove the effectiveness of our approach, so we do not use BERT\textsubscript{large}, which is time- and resource-consuming.}: It contains 12 bidirectional transformer encoders. We concatenate the question text and the passage text as a single input to the BERT encoder. We then feed the final hidden state of the first token (the \emph{[CLS]} token embedding) into a two-layer feed-forward neural network. The final output is the relevance score between the input question and passage. In all cases, the hidden size is set to 768, the number of self-attention heads is set to 12, and the feed-forward/filter size is set to 3072. \end{itemize} \subsection{Experimental Setup} We set the hyper-parameters $threshold_{1}$ and $threshold_{2}$ to 0.6 and 0.4 respectively, according to our best experimental results. For experiments with BiLSTM, we use two BiLSTM layers with a hidden size of 128. We set the maximum question and passage lengths to 30 and 200 respectively for the DeepQA and Marco datasets.
We set the batch size to 256 and the word embedding dimension to 300. During pre-training, we search the learning rate over \{1e\textsuperscript{-5}, 3e\textsuperscript{-5}, 5e\textsuperscript{-5}\} and the dropout over $\{0.1, 0.2, 0.3\}$, train for 10 epochs, and choose the best model for fine-tuning based on the evaluation metric on the dev set; during fine-tuning, we set the learning rate to 3e\textsuperscript{-5}, the dropout to 0.1, and train for 10 epochs. For experiments with BERT\textsubscript{base}, we use the huggingface version of the pre-trained BERT\textsubscript{base} model\footnote{\url{https://github.com/huggingface/pytorch-transformers}}. We set the maximum sequence length to 200 for the DeepQA and Marco datasets. We set the batch size to 128, the gradient accumulation steps to 2, and the learning rate warmup ratio to 0.1. During pre-training, we search the learning rate over \{1e\textsuperscript{-5}, 3e\textsuperscript{-5}, 5e\textsuperscript{-5}\}, train for 3 epochs, and choose the best model for fine-tuning based on the evaluation metric on the dev set; during fine-tuning, we set the learning rate to 3e\textsuperscript{-5} and train for 1 epoch. \subsection{Results and Discussions} \subsubsection{Overall Comparison Results} Table \ref{table:result} compares the discussed settings on all datasets. We make the following observations. \begin{itemize} \item Compared with the two baselines, Original and FBQA\textsubscript{ctr}, our feedback method FBQA\textsubscript{LA} achieves significant improvements on both experiment sets, across different pre-training data sizes $\{0.5m, 1m, 4m\}$ and different QA fine-tuning data sizes $\{5k, 10k, 20k, 30k\}$. When the user feedback pre-training data size is 4 million, our model gets the best results on both experiment sets across all training data size settings, with a more than 6-point AUC increase on average. \item Especially in low-resource settings with 5k or 10k QA fine-tuning examples, our approach shows excellent results and saves labeling cost.
Taking the DeepQA\textsubscript{general} and BERT setting as an example, when the pre-training data size is $4m$ and the fine-tuning data size is $5k$, our model achieves 76.53 AUC, which is even higher than the Original result with $30k$ (75.28). With only $1/6$ of the labeled data, our model still outperforms the model trained on the full data size, which can save a lot of labeling cost. \item When we increase the feedback pre-training data size from $0.5m$ to $1m$ and $4m$, our model obtains consistent gains across all experimental settings, while for FBQA\textsubscript{ctr} the gains are less consistent: increasing the pre-training data size does not necessarily improve the metrics, which aligns with our findings in Section \ref{sec:user_feedback}. \item With our approach, both BiLSTM and BERT show significant gains. It is expected that BERT-based QA models outperform BiLSTM-based models, since BERT benefits from its large parameter size as well as its large-scale unsupervised pre-training stage. It is interesting that even on top of a deep pre-trained model like BERT, further significant gains can be observed, which confirms the model-agnostic nature of our FeedbackQA approach. \end{itemize} \subsubsection{Impact of pre-training data size} To further analyze the impact of user implicit feedback on web QA, we explore model performance with different feedback pre-training data sizes. The experiments are conducted on the DeepQA\textsubscript{general} dataset using both the BiLSTM and BERT\textsubscript{base} models. The pre-training data size is set to \{0, 1, 2, 3, 4, 5, 6\} million. The results are shown in Figure \ref{figure:feedbackdatasize_auc}. The figure shows that increasing the FeedbackQA pre-training data size further improves QA model performance on the DeepQA dataset. When the data size reaches a certain scale, around 4 million, the performance on the test set slowly flattens out.
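For reference, the AUC and ACC numbers reported in these tables can be computed from raw model scores without any external library. A minimal, self-contained sketch (rank-based AUC; tie handling is omitted for brevity, and the example labels/scores below are made up for illustration):

```python
def auc(labels, scores):
    """Rank-based AUC: probability that a random relevant (positive) example
    receives a higher score than a random non-relevant (negative) one."""
    ranked = sorted(zip(scores, labels))  # ascending by score
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    # sum of 1-based ranks of the positive examples
    pos_rank_sum = sum(r + 1 for r, (_, y) in enumerate(ranked) if y == 1)
    return (pos_rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def acc(labels, scores, threshold=0.5):
    """Accuracy of thresholded relevance scores against binary labels."""
    correct = sum((s >= threshold) == bool(y) for s, y in zip(scores, labels))
    return correct / len(labels)

# Made-up model outputs for four question-passage pairs
labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(auc(labels, scores), acc(labels, scores))  # 0.75 0.75
```

The rank-based formula is equivalent to counting, over all positive-negative pairs, how often the positive example outscores the negative one.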
\begin{figure}[H] \centering \small \includegraphics[scale=0.3, viewport=0 0 600 400, clip=true]{new_impact_training_data_size.eps} \caption{\label{figure:feedbackdatasize_auc} Performance on the DeepQA\textsubscript{general} dataset with different FeedbackQA pre-training data sizes.} \label{fig:impact_size} \end{figure} \subsubsection{Impact of Feedback for QA Pre-training Stage} Our feedback QA approach consists of two stages: (1) feedback as weak supervision for QA pre-training; (2) supervised QA fine-tuning. To further evaluate the impact of the first stage, we take the best pre-trained QA models in Table \ref{table:result} and fine-tune them on the Marco QA dataset. Although the distribution of Marco QA differs from that of our pre-training dataset FeedbackQA\textsubscript{log}, we want to evaluate whether our pre-trained QA model is able to learn generalized QA knowledge from user implicit feedback. The results are shown in Table \ref{table:marco_test}. We can see that models fine-tuned based on FBQA\textsubscript{LA} continue to outperform the baseline methods. This shows that generalized QA knowledge can be obtained in the QA pre-training stage by learning from implicit feedback. \begin{table}[H] \caption{\label{table:marco_test} Performance comparison between our methods and baselines on the Marco dataset.
AUC denotes Area under Curve (all AUC metrics in the table are percentage numbers with \% omitted)} \begin{tabular}{cccccc} \hline \multirow{2}{*}{\textbf{Model}} & \multirow{2}{*}{\textbf{Method}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Pretraining\\ Data Size\end{tabular}}} & \multicolumn{3}{c}{\textbf{Performance (AUC)}} \\ & & & \textbf{10k} & \textbf{20k} & \textbf{30k} \\ \hline \multirow{3}{*}{\textbf{BiLSTM}} & \textbf{Original} & \textbf{-} & 58.40 & 60.89 & 61.17 \\ & \textbf{FBQA\textsubscript{ctr}} & \textbf{4m} & 60.11 & 61.05 & 62.09 \\ & \textbf{FBQA\textsubscript{LA}} & \textbf{4m} & \textbf{61.39} & \textbf{63.04} & \textbf{64.64} \\ \hline \multirow{3}{*}{\textbf{BERT}} & \textbf{Original} & \textbf{-} & 94.01 & 95.19 & 95.31 \\ & \textbf{FBQA\textsubscript{ctr}} & \textbf{4m} & 94.47 & 94.96 & 95.38 \\ & \textbf{FBQA\textsubscript{LA}} & \textbf{4m} & \textbf{94.81} & \textbf{95.48} & \textbf{95.72} \\ \hline \end{tabular} \end{table} \subsubsection{Impact of pre-training task} \label{subsec:objective impact} To understand which pre-training task better learns from the FeedbackQA data derived from user implicit feedback, we also try a regression task with Mean Squared Error (MSE) as the loss function, in comparison with the classification task using cross entropy loss. The MSE loss is defined as: \begin{align} &L_{MSE}=\frac{1}{n}\sum_{i=1}^{n}({y}_{i}-\hat{y}_{i})^{2} \end{align} where $y_{i}$ represents our QA model output, $\hat{y}_{i}$ represents the auto-labeled relevance score output by our feedback model, and $n$ represents the number of training samples. The results are shown in Figure \ref{figure:impact_obj}. We can see that for both the BERT and BiLSTM models, the performance of the regression pre-training task is consistently lower than that of the classification task.
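Concretely, the two pre-training objectives consume the auto-labeled feedback score differently: the classification task first discretizes the score in $[0, 1]$ using the thresholds from our experimental setup ($threshold_{1}=0.6$, $threshold_{2}=0.4$), dropping the uncertain middle band, while the regression task fits the raw noisy score. A minimal sketch of this difference (plain Python; the function names and example scores are ours, for illustration only):

```python
import math

T1, T2 = 0.6, 0.4  # threshold_1 / threshold_2 from the experimental setup

def to_binary_label(score):
    """Discretize a feedback-model score; None means 'uncertain, drop it'."""
    if score >= T1:
        return 1
    if score <= T2:
        return 0
    return None  # the noisy middle band is filtered out before pre-training

def ce_loss(y, y_hat):
    """Cross entropy for one (model output, binary label) pair."""
    return -(y_hat * math.log(y) + (1 - y_hat) * math.log(1 - y))

def mse_loss(y, score):
    """Squared error against the raw (noisy) feedback score."""
    return (y - score) ** 2

feedback_scores = [0.9, 0.55, 0.1]                      # auto-labeled scores
binary = [to_binary_label(s) for s in feedback_scores]  # [1, None, 0]
kept = [(s, b) for s, b in zip(feedback_scores, binary) if b is not None]
# Classification pre-trains only on `kept`; regression would fit all raw scores.
```

The filtering step is what shields the classification objective from the noisiest examples, whereas the regression objective has no such escape hatch.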
This indicates that the user feedback data is noisy, and directly learning its distribution with a regression task may bring a further negative impact to the fine-tuning stage. Our classification pre-training task performs better because the noisy cases, i.e., those where our feedback model is uncertain whether the user is satisfied, are filtered out by the thresholds. \begin{figure}[H] \setlength{\belowcaptionskip}{-0.2cm} \centering \includegraphics[scale=0.15, viewport=60 50 1700 790, clip=true]{impact_obj.eps} \caption{\label{figure:impact_obj} Comparison between the Cross Entropy and MSE objective functions during the QA pre-training stage on the DeepQA\textsubscript{general} dataset.} \end{figure} \section{Conclusion and Future work} \label{sec:conclusion} This paper presents Feedback QA, a new framework that leverages user implicit feedback from web QA systems to boost web QA model performance when only limited training data is available. Instead of using explicit feedback such as up-voting/down-voting buttons (which is very limited in real web QA systems), we leverage the abundant implicit feedback from users' natural interaction behaviours with the web QA system. In this paper, we first investigate and analyze real user behaviour on a web QA system, such as clicks, browsing, and re-querying, through which we find that QA-level user behaviour is quite different from document-level behaviour. Then, two types of feedback modeling approaches for web QA are proposed to correlate user behaviour with QA relevance labels. Next, the trained feedback model is leveraged to auto-label large-scale QA pairs from the user interaction log and generate new training data. Finally, these auto-labeled training data are leveraged as weak supervision to pre-train our web QA model for performance improvement. Extensive experiments on several datasets demonstrate the effectiveness of our approach.
In the future, we plan to scale our approach out to more markets and languages, and to continuously refresh the QA model with newly collected user feedback. \bibliographystyle{ACM-Reference-Format} \section{Introduction} Question Answering (QA) has become a popular feature in the search result page (SERP) of most commercial search engines in recent years. If a query bears question intent, the search engine will extract the most relevant passage from web documents and place it in an individual block at the top of the SERP. Figure~\ref{Goole&Bing} shows a screenshot of the QA feature on a commercial search engine. Usually a {\em QA block} consists of the passage answering the question, the Url of the source page from which the passage is extracted, and links to collect user feedback. The QA block is appreciated by search engine users since it saves them the time of looking for answers in the web documents. It has become even more popular as voice search on mobile devices is adopted by more and more users. To answer user questions with the most relevant passages, various machine learning algorithms, including recent deep neural networks, have been proposed (e.g.,~\cite{radev2002probabilistic,echihabi2008select,kaisser2004question,shen2006exploring,simmons1964answering,DBLP:journals/corr/abs-1904-09636}). However, one great challenge in applying those models in industry is the requirement for a large amount of training data. In practice, a commercial search engine receives extremely diverse open-domain questions at web scale. To handle such a complex question space, the QA models for search engines often involve tens of millions of parameters, which makes the models easily overfit small training data. Therefore, we usually need millions of training examples to train the model with less bias. However, it is very expensive to have the training data labeled by human judges. Moreover, a commercial search engine often provides services in global markets with various languages. It is unrealistic to manually label millions of training examples for each language.
In this paper, we target the challenge of collecting a large amount of high-quality training data in multiple languages at a low cost. Our approach is to build an implicit relevance feedback model from user behaviors mined from a large volume of search logs. Although there has been a rich literature in this direction, e.g., ~\cite{DBLP:journals/sigir/JoachimsGPHG17,DBLP:conf/sigir/GaoTY11,DBLP:conf/cikm/HuangHGDAH13,DBLP:journals/sigir/AgichteinBD18,Bilenko:2008:MST:1367497.1367505, White:2007:SUP:1277741.1277771}, all existing works target the relevance of web {\em documents}, rather than {\em passages}. We argue that document-level user behavior models cannot be applied to infer passage relevance, due to the following observations. \begin{figure}[t] \setlength{\belowcaptionskip}{-0.2cm} \centering \includegraphics[scale=0.5, viewport=80 250 600 470, clip=true]{QA-New.pdf} \caption{Example QA features in web search engines.} \label{Goole&Bing} \end{figure} \begin{table}[htbp] \small \renewcommand{\arraystretch}{1.5} \centering \caption{\label{t:passage_behaviour}Example cases of user behaviour for web QA.} \begin{tabular}{lp{5cm}p{7cm}} \hline \textbf{Question (a)}: &\emph{What's the normal body temperature for \textbf{child}?}\\ \hline \textbf{Passage (a)}: &\emph{The average normal body temperature for children is about \textbf{37 degree}. A child's temperature usually averages from around \textbf{36.3 degree} in the morning to \textbf{37.6 degree} in the afternoon.} \\ \hline \textbf{Url (a)}: &\emph{Human Body Temperature: Fever - Normal - Low \url{https://www.disabled-world.com/calculators-charts/degrees.php}} \\ \hline \textbf{Label}: &\emph{Relevant} \\ \hline \textbf{User Behaviour}: &\emph{\textbf{No Click}} \\ \hline \hline \textbf{Question (b)}: &\emph{What's the normal body temperature for \textbf{adult}?}\\ \hline \textbf{Passage (b)}: &\emph{The average normal body temperature for children is about \textbf{37 degree}.
A child's temperature usually averages from around \textbf{36.3 degree} in the morning to \textbf{37.6 degree} in the afternoon.} \\ \hline \textbf{Url (b)}: &\emph{Human Body Temperature: Fever - Normal - Low \url{https://www.disabled-world.com/calculators-charts/degrees.php}} \\ \hline \textbf{Label}: &\emph{Irrelevant} \\ \hline \textbf{User Behaviour}: &\emph{\textbf{Click}} \\ \hline \end{tabular} \label{table:example} \vspace{-8pt} \end{table} Table~\ref{t:passage_behaviour} shows two cases of question answering. In the first case, ``{\sl What's the normal body temperature for \textbf{child}?}'', there is no click in the QA block, while in the second case, ``{\sl What's the normal body temperature for \textbf{adult}?}'', a user click on the Url in the QA block is observed. When we examine these two cases further, we can see that the passage in the first case perfectly answers the user question. Therefore, a user can get satisfactory information by simply reading the content of the passage; no follow-up action is needed. In the second case, the information in the passage (about child body temperature) does not accurately match the user intent (adult body temperature). The user may want to explore more information in the source page from which the passage is extracted. In this case, the title ``human body temperature'' of the source page may trigger the user's interest to click on the Url and read more in that page. This example illustrates a unique characteristic of question answering: the content of the passage is already presented to the user in the QA block; therefore, the user may not need to click on the Url to get the answer. Consequently, the correlation between user clicks and passage relevance may be much weaker than that between user clicks and page relevance. We will report more analysis in Section~\ref{sec:click_baseline}.
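The two cases in Table~\ref{t:passage_behaviour} can be restated as a tiny sanity check: a naive click-as-relevance rule (a deliberately simple stand-in we introduce here for illustration, not our feedback model) mislabels both of them.

```python
def naive_click_label(clicked):
    """Naive heuristic: a click on the QA block's Url implies a relevant passage."""
    return 1 if clicked else 0

# (true relevance label, click observed) for the two Table 1 cases
cases = [
    (1, False),  # case (a): relevant passage, no click needed after reading it
    (0, True),   # case (b): irrelevant passage, user clicks through to read more
]
mislabeled = sum(naive_click_label(clicked) != label for label, clicked in cases)
print(mislabeled, "of", len(cases), "cases mislabeled")  # 2 of 2
```

This is exactly why the heuristic that works for document ranking cannot be transplanted to the QA block unchanged.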
Another major difference between the QA block and web document results is the number of results presented in the SERP. Given a user question, the search engine usually returns a list of web documents, but only a single QA block. Most previous click models leverage the relative rank order of documents to gain more reliable implicit feedback. However, this idea cannot be directly applied to the single QA block. The above observations call for a new study of user behavior, with a particular focus on the interaction with the QA block. In this paper, we present our study of user behavior and propose a novel approach to mining implicit relevance feedback from noisy user behavior data for passage relevance. To the best of our knowledge, this is the first work in this area. We make the following contributions. \begin{itemize} \item We categorize three types of user behavior in interactions with the QA block, including click behaviour, re-query behaviour, and browsing behaviour. By analyzing the aggregated sequences of user actions within the context of complete search sessions, we gain interesting insights into the correlation between user behavior and passage relevance. \item We compare several methods to automatically extract users' feedback signals from their behavior data by learning from a small amount of manual judgements. We verify the feasibility of learning implicit feedback with reasonable accuracy. \item We incorporate the mined implicit feedback in a weakly-supervised approach for QA model training, and carry out extensive experiments on several QA datasets in English. The experimental results show that our approach can greatly improve QA performance on all datasets, especially under low-resource conditions. \item We apply our approach in a commercial search engine in two non-English markets. We find that users speaking different languages uniformly follow similar behavior patterns when they interact with QA blocks.
Consequently, the implicit relevance feedback model trained in the en-US market can be successfully transferred to foreign markets without any tuning. In the de-DE (German) and fr-FR (French) markets, our approach significantly improves the QA service by around 3.0\% in AUC. Moreover, this approach can automatically refresh the QA model by continuously collecting relevance feedback from users, which further saves labeling cost. We expect our approach to save millions of dollars in labeling cost when scaling out to more markets. \end{itemize} The remainder of the paper is organized as follows. We first review the related work in Section~\ref{sec:related-work}. We then present our approach in Section~\ref{sec:method}. The extensive experimental results are reported in Section~\ref{sec:experiment}, and the paper is concluded in Section~\ref{sec:conclusion}. \section{Related Work} \label{sec:related-work} \subsection{Question Answering} The purpose of web QA is to offer an efficient information access mechanism by directly presenting an answer passage to web search engine users~\cite{chen2017reading,ahn2004using,buscaldi2006mining}. Various methods for web QA have been proposed in the literature. For example, Moldovan et al.~\cite{DBLP:conf/acl/MoldovanHPMGGR00} proposed a window-based word scoring technique to rank potential answer pieces for web QA. Cui et al.~\cite{DBLP:conf/sigir/CuiSLKC05} learned transformations of dependency paths from questions to answers to improve passage ranking. Yao et al.~\cite{DBLP:conf/naacl/YaoDCC13} fulfilled the matching using minimal edit sequences between dependency parse trees. AskMSR~\cite{DBLP:conf/emnlp/BrillDB02}, a search-engine-based QA system, used a Bayesian Neural Network relying on data redundancy to find short answers. In recent years, deep neural networks have achieved excellent performance in the QA area~\cite{chen2017reading,wang2018r}.
Convolutional Neural Networks (CNN), as well as Recurrent Neural Networks (RNN) such as the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), were applied to learn the representations of questions and answers~\cite{DBLP:journals/corr/TanXZ15, DBLP:conf/acl/TanSXZ16}. Later on, attention mechanisms were employed to model the interaction between questions and answers~\cite{DBLP:journals/corr/SantosTXZ16,DBLP:conf/cikm/YangAGC16,DBLP:conf/ijcai/WangHF17, DBLP:journals/corr/abs-1806-00778}, which resulted in better performance than modeling the query and answer separately. Most recently, deep pre-trained models, ELMo \cite{DBLP:conf/naacl/PetersNIGCLZ18}, OpenAI GPT \cite{radford2018improving}, BERT \cite{DBLP:journals/corr/abs-1908-06780,DBLP:journals/corr/abs-1908-08167}, ERNIE \cite{DBLP:conf/acl/ZhangHLJSL19}, and XLNet \cite{DBLP:journals/corr/abs-1906-08237}, have become the new state-of-the-art approaches in the QA area. Due to the complexity of web-scale, open-domain question answering, the statistical machine learning models for this task require a large amount of training data. In this paper, we do not target developing novel QA models; instead, we aim to find a model-agnostic approach to training data collection. \subsection{Learning from User Feedback} User feedback has been intensively studied in web page ranking to improve search quality \cite{DBLP:conf/kdd/Joachims02, DBLP:conf/sigir/JoachimsGPHG05}. There are two types of user feedback. Explicit (or shallow) feedback means the user takes extra effort to proactively express her satisfaction with the search results, e.g., through a simple up-voting or down-voting button. Implicit feedback means inferring user satisfaction from the user's search and/or browse sessions, without extra effort from the user.
Rocchio \cite{rocchio1971relevance} did pioneering work on leveraging relevance feedback for information retrieval, explicitly gathering feedback through a button for up-voting or down-voting. Another means of collecting explicit feedback was through side-by-side comparisons \cite{ali2006relationship, thomas2006evaluation}. In practice, the chances of receiving explicit feedback from users are low, since it disturbs users in their normal interaction with search engines. Compared with explicit feedback, implicit feedback has the advantage that it can be collected at a much lower cost, in much larger quantities, and without burden on the user of the search system \cite{DBLP:journals/sigir/JoachimsGPHG17}. Various features have been extracted from user behavior data, such as click-through information, average dwell time, the number of page visits in post-search browsing sessions, and so on \cite{DBLP:journals/sigir/AgichteinBD18,Bilenko:2008:MST:1367497.1367505}. For example, Joachims et al. \cite{DBLP:journals/sigir/JoachimsGPHG17} derived relative preferences from click-through information; Agichtein et al. \cite{DBLP:journals/sigir/AgichteinBD18} explored page clicks and page visits as features to improve the ordering of top results in web search; Gao et al.~\cite{DBLP:conf/sigir/GaoTY11} and Huang et al. \cite{DBLP:conf/cikm/HuangHGDAH13} used click-through data to train deep semantic models that learn the semantic matching between queries and documents in page ranking. A major issue with implicit feedback is that it is inherently noisy or even biased \cite{DBLP:conf/sigir/JoachimsGPHG05}, which makes its interpretation challenging. To address this challenge, various methods have been proposed. For example, Craswell et al. \cite{DBLP:conf/wsdm/CraswellZTR08} proposed four simple hypotheses about how position bias might arise.
Dupret and Piwowarski \cite{DBLP:conf/sigir/DupretP08} proposed a set of assumptions on user browsing behavior in order to estimate the probability that a document would be viewed. Chapelle and Zhang \cite{DBLP:conf/www/ChapelleZ09} proposed a dynamic Bayesian Network model to indicate whether a user was satisfied with a clicked document and then left the page. Although user feedback for web page ranking has been well studied, there is little work on user feedback for web QA. The closest work to our study is by Kratzwald and Feuerriegel \cite{kratzwald2019learning}, who designed feedback buttons to explicitly ask users to assess the overall quality of the QA result. Different from Kratzwald and Feuerriegel, our work mainly focuses on mining implicit relevance feedback for web QA. To the best of our knowledge, this is the first study in this direction. \nop{ \begin{figure*}[ht] \setlength{\belowcaptionskip}{-0.2cm} \centering \includegraphics[scale=0.68, viewport=20 180 700 470, clip=true]{framework_final.pdf} \caption{\label{figure:3_framework} The overall framework of our FeedbackQA approach.} \label{fig:framkework} \end{figure*} } \section{Our Approach} \label{sec:method} Our goal is to derive training data from online user behaviors to train QA models. To achieve this goal, our basic idea is to learn an implicit relevance feedback model. Given a question $q$ and the passage $p$ served by the search engine, the feedback model extracts features from user behaviors and predicts the relevance $r$ between $q$ and $p$. The predicted results $(q, p, r)$ are then used as training data to train the QA models. Based on this idea, we first conduct a comprehensive analysis of user behaviors in QA sessions in Section~\ref{sec:user_feedback} and propose a systematic categorization covering all types of user behaviors.
We then design a rich set of user behavior features in Section~\ref{sec:feature_extraction} to make sure we do not miss any useful implicit feedback signals. In Section~\ref{sec:imp_feedback_modeling}, we carefully compare various algorithms for learning the implicit feedback models, and apply the best model to a huge volume of user behavior data to derive a large amount of training data. Section~\ref{sec:two_stage} elaborates how we leverage the derived training data in a weakly-supervised approach to train the QA model. \begin{table} \caption{\label{table:behaviour} Taxonomy of user behaviour in a web QA system} \small \begin{center} \begin{tabular}{ ccc} \hline \textbf{Feedback} & \textbf{Type} & \textbf{Behaviour} \\ \hline \textbf{Explicit} & \emph{Click} & \emph{Up-vote/Down-vote} \\ \hline \multirow{8}{*}{\textbf{Implicit}} & \emph{Re-query} & \emph{Reformulation} \\ \cdashline{2-3} & \multirow{4}{*}{\emph{Click}} & \emph{Answer Click} \\ & & \emph{Answer Expansion Click} \\ & & \emph{Outside Answer Click} \\ & & \emph{Related Click} \\ \cdashline{2-3} & \multirow{1}{*}{\emph{Browsing}} & \emph{Browse} \\ \hline \end{tabular} \end{center} \vspace{-8pt} \end{table} \subsection{Taxonomy of User Behavior} \label{sec:user_feedback} We propose a taxonomy of user behavior as summarized in Table \ref{table:behaviour}. At the top level, we distinguish two types of user behavior, which correspond to explicit and implicit feedback to the web QA system. For implicit feedback, we further recognize three types of user behavior, namely, re-query, click, and browsing. In the following, we first present an empirical study of explicit feedback in a commercial search engine and explain why it is not very helpful for collecting training data. We then give more detailed descriptions of implicit user feedback. To collect explicit feedback, search engines such as Google and Bing provide links at the bottom of the QA block, as shown in Figure \ref{Goole&Bing}.
However, only a very small fraction of users take the effort to send explicit feedback. In a real commercial web QA system, the coverage of explicit feedback (clicking on the feedback links) is less than \emph{0.001\%} of the total QA impressions. Moreover, we find that users strongly tend to send negative feedback: the ratio of positive to negative feedback is about 1:17. To form balanced training data, we have to sample almost equal amounts of positive and negative examples from the skewed label distribution. This further reduces the size of valid training data that can be derived from explicit feedback. Consequently, explicit feedback may not be a good source of training data for web QA models. We then turn our attention to implicit feedback. A thorough examination of various user behaviors suggests we can categorize them into three types. \noindent \textbf{\emph{Re-query Behavior}}: We use the term \emph{reformulation} to denote re-query behaviour, i.e., the user modifies the original query and issues the new query to the search engine. \noindent \textbf{\emph{Click Behavior}}: We categorize four types of clicks, depending on the component being clicked on (see Figure~\ref{figure:user_behavior}). \begin{itemize} \item \emph{Answer Click} is the behavior in which the user clicks on the source page Url of the answer passage (indicated by {\small \textcircled{1}} in Figure~\ref{figure:user_behavior}). \item \emph{Answer Expansion Click} means that the user clicks on the special button (indicated by {\small \textcircled{2}} in Figure~\ref{figure:user_behavior}) to expand the QA answer, which is folded due to the maximum display length limit. \item \emph{Outside Answer Click} means that the user clicks on links to web documents in the SERP (indicated by {\small \textcircled{3}} in Figure~\ref{figure:user_behavior}) other than the source page Url of the web QA passage.
\item \emph{Related Click} is the behavior in which the user clicks on related queries (indicated by {\small \textcircled{4}} in Figure~\ref{figure:user_behavior}) to explore more information. \end{itemize} \noindent \textbf{\emph{Browsing Behavior}}: the user reads the content of the QA passage or other components in the SERP, without any input to the search engine. \begin{figure}[tbp] \setlength{\belowcaptionskip}{-0.2cm} \centering \includegraphics[scale=0.78, viewport=250 160 480 462, clip=true] {user_behaviour_clear.pdf} \caption{\label{figure:user_behavior} Illustration of user click behaviors, including \emph{Answer Click}, \emph{Answer Expansion Click}, \emph{Outside Answer Click}, and \emph{Related Click}.} \vspace{-8pt} \end{figure} \subsection{Feature Extraction from User Behavior}\label{sec:feature_extraction} \begin{table}[] \small \caption{\label{table:feature_description} Descriptions of user behaviour features. Here ``Answer'' means the source Url of the passage in web QA.} \begin{tabular}{p{2.0cm}lp{4.3cm}} \hline \textbf{Name} & \textbf{Type} & \textbf{Description} \\ \hline \emph{RFRate} & \emph{Re-query} & rate of re-query \\ \hdashline \emph{AnswerCTR} & \emph{Click} & CTR of answer \\ \emph{AnswerOnlyCTR} & \emph{Click} & CTR with only click on answer \\ \emph{AnswerSatCTR} & \emph{Click} & satisfied CTR of answer\\ \emph{AnswerExpRate} & \emph{Click} & CTR of answer expansion \\ \emph{OTAnswerCTR} & \emph{Click} & CTR outside of answer \\ \emph{OTAnswerOnlyCTR} & \emph{Click} & CTR with only click outside of answer \\ \emph{OTAnswerSatCTR} & \emph{Click} & satisfied CTR outside of answer \\ \emph{BothClickCTR} & \emph{Click} & CTR of both click on/outside of answer \\ \emph{RelatedClickRate} & \emph{Click} & CTR of related queries \\ \hdashline \emph{NoClickRate} & \emph{Browsing} & no click rate \\ \emph{AbandonRate} & \emph{Browsing} & abandonment rate \\ \emph{AvgSourcePage- DwellTime} & \emph{Browsing} &
average source page dwell time \\ \emph{AvgSERPDwellTime} & \emph{Browsing} & average SERP dwell time \\ \hline \end{tabular} \vspace{-5pt} \end{table} To learn implicit feedback models, we need to design user behavior features that are both sensitive and robust enough to capture users' relevance feedback signals. To meet this goal, we follow two principles in our feature design. First, our feature set exhaustively covers all types of user behaviors discussed in Section~\ref{sec:user_feedback}; this helps prevent missing useful relevance signals. Second, we design aggregated features which summarize the behaviors of multiple users in multiple sessions; aggregation effectively reduces the noise and bias in individual users and individual actions. The features are listed in Table~\ref{table:feature_description}. Most features are straightforward; please refer to the ``Description'' column for their meanings. We explain a few selected features below. \begin{itemize} \item \emph{AvgSourcePageDwellTime}: the average time from the moment the user clicks into the source page of the web QA answer to the moment the user leaves the source page. \item \emph{AvgSERPDwellTime}: the average time from the moment the SERP is loaded successfully to the moment the search session finishes. \item \emph{AbandonRate}: the percentage of sessions with no click on the SERP, i.e., the user just browses the SERP and leaves the search session. \end{itemize} The click-through rate (\emph{CTR}) of a component, which can be a QA passage, an answer expansion, a related search, or a web document outside of the QA block, is defined as \begin{equation} CTR = \frac{N_{click}}{N_{impression}} \end{equation} where $N_{impression}$ denotes the total number of impressions of the component and $N_{click}$ denotes the number of clicks on the component.
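For concreteness, the aggregation of per-impression behavior into these CTR-style features can be sketched as follows. The record fields and the 15-second satisfaction threshold are illustrative assumptions, not the production log schema; \emph{AnswerSatCTR} here counts clicks whose source-page dwell time meets the threshold, following the \emph{SatClick} notion defined in the text.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Impression:
    """One impression of a (question, passage) pair; field names are illustrative."""
    answer_clicked: bool      # click on the source page Url of the answer
    source_dwell_time: float  # seconds spent on the source page (0 if no click)
    serp_dwell_time: float    # seconds spent on the SERP
    any_click: bool           # any click anywhere on the SERP

def aggregate_features(impressions: List[Impression],
                       sat_threshold: float = 15.0) -> dict:
    """Aggregate per-impression behavior into CTR-style features."""
    n = len(impressions)
    clicks = sum(s.answer_clicked for s in impressions)
    sat_clicks = sum(s.answer_clicked and s.source_dwell_time >= sat_threshold
                     for s in impressions)
    return {
        "AnswerCTR": clicks / n,                 # N_click / N_impression
        "AnswerSatCTR": sat_clicks / n,          # satisfied clicks only
        "AbandonRate": sum(not s.any_click for s in impressions) / n,
        "AvgSERPDwellTime": sum(s.serp_dwell_time for s in impressions) / n,
    }
```

In line with the aggregation principle above, a production system would compute these quantities per question-passage pair over all of its impressions in the log.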
A satisfied click (\emph{SatClick}) on a component is a click followed by a dwell time greater than or equal to a pre-defined threshold. The satisfied click-through rate (\emph{SatCTR}) is then defined as \begin{equation} SatCTR = \frac{N_{SatClick}}{N_{impression}} \end{equation} where $N_{SatClick}$ denotes the number of SatClicks on the component. \subsection{Implicit Feedback Modeling} \label{sec:imp_feedback_modeling} In this section, we aim to build an effective implicit feedback model that predicts the relevance between a question and a passage based on the features designed in Section~\ref{sec:feature_extraction}. We first prepare a data set of 18k QA pairs, where each QA pair is augmented with the set of user behavior features as well as a human-judged label (Section~\ref{sec:feedback_dataset}). Intuitively, a single feature, such as AnswerCTR or AnswerSatCTR, may be too weak a signal to infer user satisfaction. We verify this assumption as a baseline in Section~\ref{sec:click_baseline}. We then consider various machine learning models, including Logistic Regression (LR)~\cite{menard2002applied}, Decision Tree (DT)~\cite{safavian1991survey}, Random Forest (RF)~\cite{liaw2002classification}, and Gradient Boosted Decision Tree (GBDT)~\cite{ke2017lightgbm}, and conduct an empirical study to choose the best model (Section~\ref{sec:ml_models}). We also analyze the feature importance and derive rules from the learned models, which help us gain insights into users' decision process when they interact with the QA block. \subsubsection{Dataset} \label{sec:feedback_dataset} We create a data set of 18k QA pairs, where each QA pair is augmented with the set of user behavior features as well as a human-judged label. More specifically, the data set is a table where each row is of the form \emph{<Question, Passage, User behaviour features, Label>}. The QA pairs are sampled from a half-year log (i.e.
from January to June 2019) of a commercial search engine. We sample QA pairs whose number of impressions in the log is around 50. This number is chosen for two reasons. First, we want to aggregate multiple users' behavior to reduce the noise from individual users. Second, we want to avoid overly popular queries, which tend to be short and too easy to answer. Each QA pair is sent to three crowd-sourcing judges and the final label is derived by majority voting. This 18k data set is further randomly split into 14k/2k/2k training, dev and test sets. \subsubsection{Baseline}\label{sec:click_baseline} Click-through rate (\emph{CTR}) and satisfied click-through rate (\emph{SatCTR}) have been widely adopted in previous work as indicators of the relevance of a web page with respect to a given user query. Analogously, in our study of passage relevance, we start with users' clicks on the source Url of the answer passage. We first investigate the \emph{AnswerCTR} feature in Table~\ref{table:feature_description} by plotting a precision-recall curve in Figure \ref{figure:pr} (a). \begin{itemize} \item For QA pairs with $CTR > 0$, the recall is less than 0.33. In other words, when the passage is relevant to the question, in more than two thirds of the cases users do not make a single click on the source Url. Note that clicks are counted across all impressions of a question-passage pair. This observation is very different from the page ranking case. However, it is not surprising for question answering: users may simply read the content of the passage and obtain the information they need, so no further click into the source Url is necessary. \item The highest precision is less than 0.77 across the full range of recall values. This suggests that clicking into the source Url does not necessarily indicate a relevant passage.
We find that in most clicked cases, the passage is partially relevant to the question; the users may therefore click into the source Url to explore more information. \end{itemize} We further investigate the correlation between \emph{SatCTR} and passage relevance. Similarly, we plot the precision-recall curves in Figure~\ref{figure:pr} (b). $CTR\_t$ means that a click followed by a dwell time on the source page longer than $t$ seconds is counted as a satisfied click. We experiment with dwell time thresholds of \{5s, 15s, 25s\} and observe a similar trend to that in Figure~\ref{figure:pr} (a). The experiments with \emph{CTR} and \emph{SatCTR} verify that the single feature of user clicks into the source Url is not a good indicator of passage relevance: clicks do not necessarily indicate relevant passages, and vice versa. Therefore, we need to consider more complex models that combine the sequences of user actions in search sessions. \begin{figure}[t] \setlength{\belowcaptionskip}{-0.2cm} \centering \includegraphics[scale=0.3, viewport=10 10 700 350, clip=true]{pr_curve_ctr.eps} \caption{\label{figure:pr}Precision-recall curves of \emph{AnswerCTR} (a) and \emph{AnswerSatCTR} (b).} \vspace{-5pt} \end{figure} \begin{table} \begin{center} \small \caption{Comparison of feedback modeling methods.
The best results are indicated in bold, and the second-best results are underlined.} \begin{tabular}{llll} \hline \textbf{Method} & \textbf{AUC} & \textbf{ACC} & \textbf{F1} \\ \hline \emph{AnswerCTR} & 58.28 & 53.87 & 18.11 \\ \emph{AnswerSatCTR\textsubscript{5s}} & 57.73 (-0.55) & 53.50 (-0.37) & 16.52 (-1.59) \\ \emph{AnswerSatCTR\textsubscript{15s}} & 57.75 (-0.53) & 52.73 (-1.14) & 13.43 (-4.68) \\ \emph{AnswerSatCTR\textsubscript{25s}} & 57.79 (-0.49) & 52.63 (-1.24) & 12.23 (-5.88) \\ \hline \textbf{LR} & \underline{71.41 (+13.13)} & 66.00 (+12.13) & {\bf 66.78 (+48.67)} \\ \textbf{DT} & 61.75 (+3.37) & 63.43 (+9.56) & 60.92 (+42.81) \\ \textbf{RF} & 71.14 (+12.86) & \underline{67.47 (+13.60)} & 65.66 (+47.55) \\ \textbf{GBDT} & \textbf{73.69 (+15.41)} & \textbf{68.00 (+14.13)} & \underline{66.08 (+47.97)} \\ \hline \end{tabular} \label{table:metrics for different modeling method} \end{center} \vspace{-10pt} \end{table} \subsubsection{Machine Learning Models}\label{sec:ml_models} We apply machine learning models to combine the various types of user behavior features. The training objective is to fit the human-judged labels. We evaluate model performance with common binary classification metrics, including area under the curve (AUC), accuracy (ACC), and F1 score. \noindent \textbf{\emph{Different Machine Learning Models}}: We apply and evaluate various models, including Logistic Regression (LR), Decision Tree (DT), Random Forest (RF), and Gradient Boosted Decision Tree (GBDT). \noindent \textbf{\emph{Results}}: As summarized in Table \ref{table:metrics for different modeling method}, regardless of which model is used for feature combination, the machine learning approach significantly outperforms the baseline methods (i.e., \emph{AnswerCTR} and \emph{AnswerSatCTR}) on all metrics.
In terms of AUC and ACC, the GBDT model achieves the best performance; in terms of F1 score, the performance of the GBDT model (66.08) is very close to the best result (66.78). Overall, we consider GBDT the best model. \begin{figure}[tbp] \setlength{\belowcaptionskip}{-0.2cm} \centering \includegraphics[scale=0.45, viewport=10 10 450 230, clip=true]{single_feature_importance.eps} \caption{\label{figure:Feature_importance} Relative weights of the top 8 features in the GBDT model.} \end{figure} \noindent \textbf{\emph{Model Interpretation}}: To gain more insights into user behaviours on QA, we first investigate the impact of individual features based on the best model, GBDT. The top 8 features of the model are shown in Figure \ref{figure:Feature_importance}. We can see that \emph{AvgSERPDwellTime} and \emph{OTAnswerOnlyCTR} have the highest feature importance, followed by \emph{AnswerOnlyCTR}, \emph{AnswerSatCTR}, \emph{AnswerExpRate} and \emph{AbandonRate}. Reformulation-related features such as \emph{RFRate}, as well as \emph{RelatedClickRate}, have relatively low importance. We can also draw the following insights about user behaviour on web QA: \begin{itemize} \item Click features related to the web QA answer itself, such as \emph{AnswerClick} and \emph{AnswerSatClick}, are not the most important features. This aligns with our previous observations. \item SERP dwell time measures how long the user lingers on the search result page. Since the content of the passage is presented in the QA block (in the SERP) as the answer to the user's question, the SERP dwell time may be a good indicator of the relevance of the passage. \end{itemize} To further reveal users' decision process in a search session, we examine the decision tree model DT in Table \ref{table:metrics for different modeling method}. Some interesting insights gained from the paths in the tree are listed below.
\begin{itemize} \item When \emph{AvgSERPDwellTime} is long and \emph{NoClickRate} is large (i.e., the SERP is often abandoned), the passage usually has good relevance. Users may just browse the QA passage for a while and obtain the information they need. \item When \emph{AnswerCTR} is small and \emph{OTAnswerCTR} is large, it is often a strong signal that the passage has poor relevance. In such cases, users may not be satisfied with the passage answer and then click more on other documents. \item When \emph{AnswerOnlyCTR} is large and \emph{AvgSERPDwellTime} is long, this is also a positive signal for passage relevance. The passage is relevant to the question, but due to the space limit of the QA block, the displayed content cannot fully answer the user's question; therefore, the user clicks on the source Url. \item When \emph{NoClickRate} is large and \emph{RelatedClickRate} is large, it suggests the passage is not relevant to the user's question. The user revises the query to express her information need. \end{itemize} \noindent \textbf{\emph{Summary}}: Unlike the case of page relevance, user clicks (including satisfied clicks) are not a good indicator of passage relevance in web QA. However, by using machine learning models to combine various user behaviors, it is still feasible to extract relevance feedback from search sessions. \subsection{Pre-training QA models} \label{sec:two_stage} Through the implicit feedback model, we can derive a large amount of training data. However, user behavior data can be very noisy. Although we develop aggregated features to reduce the bias in individual sessions, the prediction accuracy of the best model is only 68\% (see Table~\ref{table:metrics for different modeling method}). How to leverage such noisy training data becomes an interesting problem. Inspired by the pre-training idea in deep neural network learning, we apply the large-scale, automatically derived implicit feedback data to pre-train the QA models, such as BiLSTM and BERT, as weak supervision.
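A minimal sketch of this two-stage strategy is given below. TinyQAModel is a hypothetical stand-in (a single sigmoid logit) for the real BiLSTM/BERT relevance models; it exists only to make the control flow concrete and runnable, and the learning rates are illustrative.

```python
import math

class TinyQAModel:
    """Toy stand-in for a real relevance model (BiLSTM/BERT); illustrative only.
    It learns a single bias logit by stochastic gradient descent on
    binary cross-entropy."""
    def __init__(self):
        self.bias = 0.0

    def predict(self, question, passage):
        return 1.0 / (1.0 + math.exp(-self.bias))

    def step(self, question, passage, label, lr):
        # gradient of binary cross-entropy w.r.t. the logit: (y_pred - y_true)
        self.bias -= lr * (self.predict(question, passage) - label)

def train(model, data, epochs, lr):
    """Minimize cross-entropy over (question, passage, label) triples."""
    for _ in range(epochs):
        for q, p, label in data:
            model.step(q, p, label, lr)

def two_stage_training(model, feedback_data, human_data):
    """Stage 1: pre-train on labels predicted by the implicit feedback model
    (weak supervision); stage 2: fine-tune on human-labeled data."""
    train(model, feedback_data, epochs=3, lr=0.5)
    train(model, human_data, epochs=3, lr=0.1)
    return model
```

The key design choice is that the noisy, feedback-derived labels are used only in the first stage; the smaller human-labeled set always has the last word on the final parameters.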
Intuitively, although the derived training data is noisy, it still contains valuable relevance signals that roughly guide the parameters of the QA model to a region close to the optimal solutions. In the second stage, we apply the human-labeled data to fine-tune the parameters and obtain the final model. As verified in our experiments (Section~\ref{sec:experiment}), the strategy of pre-training plus fine-tuning is better than training the model on human-labeled data alone. \nop{ As shown in Figure \ref{fig:framkework}, a two-stage approach is proposed to integrate the user implicit feedback signals to the QA relevance model training process. At the first stage, a huge number of tuples \emph{<Query, Passage, User feedback>} can be mined from the search logs of a commercial search engine. By applying the feedback models developed in Table \ref{table:metrics for different modeling method}, each \emph{<Query, Passage>} pair ($\left \langle Q, P \right \rangle$) is auto-labeled with a predicted relevance score. \begin{align} &score_{<Q,P>}=F_{FeedbackModel}(x_{1}, ..., x_{m}) \\ &label_{<Q,P>}=\left\{\begin{matrix} 1 & score_{<Q,P>} \geq \tau_1,\\ 0 & score_{<Q,P>} \leq \tau_2 \end{matrix}\label{eq:label}\right. \end{align} where $F_{FeedbackModel}(\cdot)$ denote the Feedback models, $x_{i}$ represents the feature described in the above section, $m$ represents the number of features, $\tau_1$ and $\tau_2$ are two thresholds. } Let $Q={\{w_1, w_2, w_3, ..., w_{|Q|}\}}$ be a question with $|Q|$ words (or word pieces), and $P={\{w_1, w_2, w_3, ..., w_{|P|}\}}$ be a passage with $|P|$ words (or word pieces). We use cross-entropy (CE) as our loss function for both pre-training and fine-tuning, defined as: \begin{align} &y=F_{QAModel}(\left \langle Q, P \right \rangle) \\ &L_{CE}=-\frac{1}{n}\sum_{i=1}^{n}[\hat{y}_{i}\log(y_{i})+ (1-\hat{y}_{i})\log(1-y_{i})].
\label{eq:cross_entropy} \end{align} $F_{QAModel}(\cdot)$ denotes the QA model, which will be described in Section 4.2; ${y}_{i}$ represents the output of our QA relevance model, $\hat{y}_{i}$ represents the true label, and $n$ represents the number of training samples. \section{Experiments} \label{sec:experiment} \subsection{Datasets and Metrics} We conduct experiments on the following datasets, whose statistics are shown in Table \ref{t:statistic}. \begin{itemize} \item \textbf{FeedbackQA\textsubscript{log}}: An English QA dataset collected from the most recent half-year log of the web QA system of a commercial search engine. Each item of the log consists of a tuple <query, passage, user behavior>. \item \textbf{FeedbackQA\textsubscript{\{ctr, gbdt\}}}: For each QA pair in FeedbackQA\textsubscript{log}, the feedback models in Table \ref{table:metrics for different modeling method} are employed to predict a relevance label. For example, \textbf{FeedbackQA\textsubscript{gbdt}} denotes the FeedbackQA dataset labeled by the \emph{GBDT} feedback model. Each dataset is further sampled to make it balanced (i.e., the positive/negative ratio is about 1:1). \item \textbf{DeepQA\textsubscript{general}}: A human-labeled English QA dataset from a commercial QA system. Each case consists of three parts: question, passage, and a binary label (i.e., 0 or 1) assigned by crowd-sourcing judges. The data collection process can be briefly described as follows. First, for each question $Q$, all the passages $P$ extracted from the top 10 relevant web documents returned by the search engine are collected to form a candidate set of <$Q$, $P$> pairs. Next, each <$Q$, $P$> pair is sent to three crowd-sourcing judges, and a label (1 for relevant and 0 otherwise) is derived by majority voting. \item \textbf{DeepQA\textsubscript{factoid}}: This dataset is collected by the same process as DeepQA\textsubscript{general}.
The main difference is that the queries of DeepQA\textsubscript{factoid} are mainly factoid queries, i.e., queries asking about ``what'', ``who'', ``where'', and ``when''. \item \textbf{MS Marco}: An open-source QA dataset \cite{DBLP:conf/nips/NguyenRSGTMD16} which contains questions generated from real anonymized Bing user queries. Each question is associated with multiple passages extracted from the Bing web search results. Well-trained judges read the question and its related passages; if an answer is present, the supporting passages are annotated as relevant, while the others are labeled as irrelevant. To obtain a dataset for the low-resource setting that this paper targets, the dataset is further sub-sampled into a positive/negative balanced set with 30k/2k/2k training, dev and test sets. \item \textbf{WikiPassageQA}: A Wikipedia-based open-source QA dataset \cite{DBLP:conf/sigir/CohenYC18} targeting the non-factoid passage retrieval task. It contains thousands of questions with annotated answers. Each question is associated with multiple passages in the document. To obtain a low-resource dataset as well, the dataset is further sub-sampled into a positive/negative balanced set with 10k/1k/1k training, dev and test sets. \item \textbf{FrenchGermanQA}: This dataset is collected by the same process as DeepQA\textsubscript{general}. The main difference is that it targets the French and German languages. \end{itemize} We use AUC and ACC as the evaluation metrics for the QA relevance models.
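Both metrics can be computed directly from predicted relevance scores and binary labels. The following is a minimal sketch: AUC via the rank-sum (Mann-Whitney) formulation, and ACC with an assumed 0.5 decision threshold for illustration.

```python
def auc(scores, labels):
    """Area under the ROC curve: the probability that a randomly chosen
    positive example is scored above a randomly chosen negative one
    (ties count as half a win)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def acc(scores, labels, threshold=0.5):
    """Accuracy of relevance predictions thresholded at `threshold`."""
    correct = sum((s >= threshold) == bool(l) for s, l in zip(scores, labels))
    return correct / len(labels)
```

In practice a library routine (e.g., scikit-learn's `roc_auc_score` and `accuracy_score`) would be used; the sketch only pins down what the reported numbers measure.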
\begin{table} \begin{center} \small \caption{\label{t:statistic} Statistics of experiment datasets.} \begin{tabular}{lcccc} \hline \textbf{Dataset} & \textbf{Train} & \textbf{Dev} & \textbf{Test} & \textbf{Labels}\\ \hline \textbf{FeedbackQA\textsubscript{log}} & 22M & - & - & - \\ \textbf{FeedbackQA\textsubscript{\{ctr, gbdt\}}} & 4M & 10k & 10k & 50\%+/50\%- \\ \hdashline \textbf{DeepQA\textsubscript{factoid}} & 30k & 2k & 2k & 55.7\%+/44.3\%- \\ \textbf{DeepQA\textsubscript{general}} & 30k & 2k & 2k & 57.6\%+/42.4\%- \\ \textbf{MS Marco} & 10k & 1k & 1k & 50\%+/50\%- \\ \textbf{WikiPassageQA} & 10k & 1k & 1k & 50\%+/50\%- \\ \textbf{FrenchGermanQA} & 50k & 2k & 2k & 50\%+/50\%- \\ \hline \end{tabular} \label{dataset} \end{center} \vspace{-8pt} \end{table} \begin{table*}[t!] \small \caption{\label{table:result}Performance comparison between our methods and baselines on QA datasets. ACC denotes accuracy and AUC denotes Area under Curve (all ACC, AUC metrics in the table are percentage numbers with \% omitted).}\label{t:main} \subtable{\textbf{(a) Results on DeepQA\textsubscript{general}}} { \begin{tabular}{cccllll} \hline \multirow{2}{*}{\textbf{Model}} & \multirow{2}{*}{\textbf{Method}} & \multirow{2}{*}{ \textbf{\begin{tabular}[c]{@{}c@{}}Pre-training\\ Data Size\end{tabular}}} & \multicolumn{4}{c}{\textbf{Performance on Different Fine-tuning Data Size (AUC/ACC)}} \\ \textbf{} & \textbf{} & \textbf{} & \textbf{5k} & \textbf{10k} & \textbf{20k} & \textbf{30k} \\ \hline \multirow{7}{*}{\textbf{BiLSTM}} & \multirow{1}{*}{\textbf{Original}} & \textbf{-} & 60.45/58.21 & 61.30/59.92 & 61.55/61.99 &62.40/61.74 \\ \cdashline{3-7} & \multirow{3}{*}{\textbf{FBQA\textsubscript{ctr}}} & \textbf{0.5m} & 59.90/57.60 (-0.55/-0.61) & 61.25/58.25 (-0.05/-1.67) & 61.40/60.50 (-0.15/-1.49) & 60.65/59.29 (-1.75/-2.45) \\ & & \textbf{1.0m} & 60.25/58.45 (-0.20/-0.24) & 61.35/58.12 (+0.05/-1.80) & 62.65/57.43 (+1.10/-4.56) & 61.35/60.69 (-1.05/-1.05) \\ & & \textbf{4.0m} & 
60.50/56.99 (+0.05/-1.22) & 59.75/58.39 (-1.55/-1.53) & 60.90/59.15 (-0.65/-2.84) & 62.25/61.73 (-0.15/-0.01) \\ \cdashline{3-7} & \multirow{3}{*}{\textbf{FBQA\textsubscript{FA}}} & \textbf{0.5m} & 61.95/59.66 (+1.50/+1.45) & 62.50/60.96 (+1.20/+1.04) & 62.85/62.74 (+1.30/+0.75) & 64.23/62.50 (+1.83/+0.76) \\ & & \textbf{1.0m} & 62.80/60.44 (+2.35/+2.23) & 63.20/61.20 (+1.90/+1.28) & 63.45/63.00 (+1.90/+1.01) & 65.57/63.05 (+3.17/+1.31) \\ & & \textbf{4.0m} & \textbf{64.13/62.15 (+3.68/+3.94)} & \textbf{65.45/63.33 (+4.15/+3.41)} & \textbf{65.46/64.17 (+3.91/+2.18)} & \textbf{67.35/64.35 (+4.95/+2.61)} \\ \hline \multirow{7}{*}{\textbf{BERT}} & \multirow{1}{*}{\textbf{Original}} & \textbf{-} & 69.31/64.86 & 71.81/67.76 & 72.47/67.07 & 75.28/68.26 \\ \cdashline{3-7} & \multirow{3}{*}{\textbf{FBQA\textsubscript{ctr}}} & \textbf{0.5m} & 67.35/62.76 (-1.96/-2.10) & 72.96/66.66 (+1.15/-1.10) & 75.11/68.26 (+2.64/+1.19) & 77.76/71.07 (+2.48/+2.81) \\ & & \textbf{1.0m} & 72.33/67.06 (+3.02/+2.20) & 73.76/67.36 (+1.95/-0.40) & 76.16/69.16 (+3.69/+2.09) & 77.42/68.26 (+2.14/+0.00) \\ & & \textbf{4.0m} & 72.19/65.66 (+2.88/+2.90) & 73.92/67.96 (+2.11/+0.20) & 76.81/67.96 (+4.34/+0.89) & 77.94/69.36 (+2.66/+1.10) \\ \cdashline{3-7} & \multirow{3}{*}{\textbf{FBQA\textsubscript{FA}}} & \textbf{0.5m} & 72.26/65.27 (+2.95/+0.41) & 76.03/68.87 (+4.22/+1.11) & 77.79/69.47 (+5.32/+2.40) & 77.92/69.47 (+2.34/+1.21) \\ & & \textbf{1.0m} & 73.53/66.37 (+4.22/+1.51) & 76.29/68.97 (+4.48/+1.15) & 78.63/68.77 (+6.16/+1.70) & 79.82/70.17 (+4.54/+1.91) \\ & & \textbf{4.0m} & \textbf{76.53/68.57 (+7.22/+3.71)} & \textbf{78.17/68.57 (+6.36/+0.81)} & \textbf{79.79/71.17 (+7.32/+4.10)} & \textbf{81.03/71.57 (+5.78/+3.31)} \\ \hline \end{tabular} } \subtable{\textbf{(b) Results on DeepQA\textsubscript{factoid}}} { \begin{tabular}{ccclllll} \hline \multirow{2}{*}{\textbf{Model}} & \multirow{2}{*}{\textbf{Method}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Pre-training\\ Data 
Size\end{tabular}}} & \multicolumn{4}{c}{\textbf{Performance on Different Fine-tuning Data Size (AUC/ACC)}} \\ \textbf{} & \textbf{} & \textbf{} & \textbf{5k} & \textbf{10k} & \textbf{20k} & \textbf{30k} \\ \hline \multirow{7}{*}{\textbf{BiLSTM}} & \multirow{1}{*}{\textbf{Original}} & \textbf{-} & 64.02/60.80 & 64.73/59.00 & 65.18/60.80 & 64.03/58.95 \\ \cdashline{3-7} & \multirow{3}{*}{\textbf{FBQA\textsubscript{ctr}}} & \textbf{0.5m} & 63.62/60.35 (-0.40/-0.45) & 65.51/59.00 (+0.78/+0.00) & 64.56/60.40 (-1.24/-0.40) & 61.79/58.05 (-2.24/-0.90) \\ & & \textbf{1.0m} & 63.67/60.30 (-0.35/-0.50) & 64.86/58.65 (+0.15/-0.35) & 65.53/60.40 (+0.35/-0.40) & 61.35/57.40 (-2.68/-1.55) \\ & & \textbf{4.0m} & 62.83/60.20 (-1.19/-0.60) & 65.09/59.05 (+0.36/+0.05) & 64.67/59.70 (-0.51/-1.10) & 65.24/60.95 (+1.21/+2.00) \\ \cdashline{3-7} & \multirow{3}{*}{\textbf{FBQA\textsubscript{FA}}} & \textbf{0.5m} & 65.66/62.05 (+1.64/+1.25) & 67.87/63.00 (+3.14/+4.00) & 68.84/63.80 (+3.66/+3.00) & 69.80/64.75 (+5.77/+5.80) \\ & & \textbf{1.0m} & 66.36/64.00 (+2.34/+3.2) & 67.81/63.45 (+3.08/+4.45) & 69.23/64.70 (+4.05/+3.90) & 72.16/66.40 (+8.13/+7.45) \\ & & \textbf{4.0m} & \textbf{67.22/65.25 (+3.20/+4.45)} & \textbf{70.26/65.10 (+5.53/+6.10)} & \textbf{70.79/64.95 (+5.61/+4.15)} & \textbf{73.31/67.15 (+9.28/+8.20)} \\ \hline \multirow{7}{*}{\textbf{BERT}} & \multirow{1}{*}{\textbf{Original}} & \textbf{-} & 68.88/65.46 & 71.43/66.77 & 73.87/67.06 & 78.37/69.56 \\ \cdashline{3-7} & \multirow{3}{*}{\textbf{FBQA\textsubscript{ctr}}} & \textbf{0.5m} & 71.44/66.27 (+2.56/+0.81) & 75.17/67.06 (+3.74/+0.29) & 77.78/70.78 (+3.91/+3.72) & 80.00/72.87 (+1.63/+3.31) \\ & & \textbf{1.0m} & 71.73/66.17 (+2.85/+0.71) & 74.19/67.26 (+2.76/+0.49) & 77.01/69.46 (+3.14/+2.40) & 78.30/72.07 (-0.07/+2.51) \\ & & \textbf{4.0m} & 70.55/66.06 (+1.67/+0.60) & 75.51/68.26 (+4.08/+1.49) & 78.10/69.77 (+4.23/+2.71) & 79.54/70.97 (+1.17/+1.41) \\ \cdashline{3-7} & \multirow{3}{*}{\textbf{FBQA\textsubscript{FA}}} 
& \textbf{0.5m} & 74.00/67.77 (+5.12/+2.31) & 76.57/68.87 (+5.14/+2.10) & 78.78/70.67 (+4.91/+3.61) & 80.91/72.37 (+2.54/+2.81) \\ & & \textbf{1.0m} & 74.33/68.67 (+5.45/+3.21) & 75.77/68.77 (+4.34/+2.00) & 78.40/71.17 (+4.53/+4.11) & 79.31/71.77 (+0.94/+2.21) \\ & & \textbf{4.0m} & \textbf{76.13/69.77 (+7.25/+4.31)} & \textbf{77.93/71.27 (+6.50/+4.50)} & \textbf{79.94/72.77 (+6.07/+5.71)} & \textbf{81.21/73.27 (+2.84/+3.71)} \\ \hline \end{tabular} } \end{table*} \subsection{Baselines and Models} To evaluate the effectiveness of our approach to mining implicit feedback, we set up the following two baselines. \begin{itemize} \item \textbf{Original}: We use only the human-labeled data to train the QA model. \item \textbf{FBQA\textsubscript{ctr}}: The FeedbackQA\textsubscript{ctr} data is used to pre-train the QA model at the first stage. At the second stage, the QA model is further fine-tuned using the human-labeled data. \end{itemize} For our approach, the best-performing feedback model, GBDT in Table \ref{table:metrics for different modeling method}, is used to auto-label large-scale pre-training data (i.e., FeedbackQA\textsubscript{gbdt}) for pre-training the QA model. At the second stage, the QA model is further fine-tuned using the human-labeled data. This approach is referred to as \textbf{FBQA\textsubscript{FA}} in the experiment results below. We build the QA relevance models on the following two popular deep neural networks. \begin{itemize} \item \textbf{BiLSTM}: It consists of three parts. The first is an embedding layer that maps each token to a vector of fixed dimension. The second is a multi-layer bidirectional LSTM that encodes both the question and the passage based on the token embeddings, i.e., $H^q = BiLSTM_1(Q)$ and $H^p = BiLSTM_2(P)$, where $H^q$ and $H^p$ are the representations of the question and the passage, respectively. The parameters of $BiLSTM_1$ and $BiLSTM_2$ are not shared.
Following the BiLSTM layer is a prediction layer which includes a combination layer to concatenate $H^q$ and $H^p$, and then a fully connected layer to predict the relevance of the passage and the question. \item \textbf{BERT\textsubscript{base}}\footnote{Our goal is to prove the effectiveness of our approach, so we do not use BERT\textsubscript{large}, which is time- and resource-consuming.}: It contains 12 bidirectional transformer encoders. We concatenate the question text and the passage text together as a single input of the BERT encoder. We then feed the final hidden state of the first token (\emph{[CLS]} token embedding) from the input into a two-layer feed-forward neural network. The final output is the relevance score between the input question and passage. In all cases, the hidden size is set as 768. The number of self-attention heads is set as 12, and the feed-forward filter size is set as 3072. \end{itemize} \subsection{Experimental Setup} For experiments with BiLSTM, we use two BiLSTM layers with a hidden size of 128 for the BiLSTM cells. We set the maximum question and passage length to be 30 and 200 respectively for the DeepQA, MS Marco and WikiPassageQA datasets. We set the batch size as 256 and the dimension of word embedding as 300. During pre-training, the learning rate is selected from \{1e\textsuperscript{-5}, 3e\textsuperscript{-5}, 5e\textsuperscript{-5}\}, dropout from $\{0.1, 0.3, 0.5\}$, and the maximum number of epochs is set as 50. We choose the best model for fine-tuning based on the evaluation metric on the dev set. During fine-tuning, the learning rate is selected from \{1e\textsuperscript{-5}, 3e\textsuperscript{-5}, 5e\textsuperscript{-5}\}, dropout is set as 0.5, and the maximum number of epochs as 50. The best model is selected based on the evaluation metric on the dev set as well. For experiments with BERT\textsubscript{base}, we use the huggingface pre-trained BERT\textsubscript{base} model\footnote{\url{https://github.com/huggingface/pytorch-transformers}}.
We set the maximum sequence length to be 200 for the DeepQA, MS Marco and WikiPassageQA datasets. We set the batch size as 128, the gradient accumulation steps as 2 and the learning rate warmup ratio as 0.1. During pre-training, the learning rate is selected from \{1e\textsuperscript{-5}, 3e\textsuperscript{-5}, 5e\textsuperscript{-5}\} and the maximum number of epochs is set as 3. We choose the best model for fine-tuning based on the evaluation metric on the dev set. During fine-tuning, the learning rate is selected from \{1e\textsuperscript{-5}, 3e\textsuperscript{-5}, 5e\textsuperscript{-5}\} and the maximum number of epochs is set as 3. The best model is selected based on the evaluation metric on the dev set as well. \subsection{Results and Discussions} \subsubsection{Overall Comparison Results} Table \ref{table:result} shows the experimental results across all settings. We can make the following observations. \begin{itemize} \item Compared with the two baselines, Original and FBQA\textsubscript{ctr}, our implicit feedback approach FBQA\textsubscript{FA} achieves significant improvements on both the DeepQA\textsubscript{general} and DeepQA\textsubscript{factoid} sets, across different pre-training data sizes $\{0.5m, 1m, 4m\}$ and different QA fine-tuning data sizes $\{5k, 10k, 20k, 30k\}$. When the size of feedback pre-training data reaches $4m$, our model achieves the best results on both experiment sets: for BiLSTM, there is an increase of about 5 AUC points on average; for BERT, about 6 AUC points on average. \item For low-resource settings in particular, such as $5k$ and $10k$ QA fine-tuning data, our approach is highly effective at reducing the labeling cost. Take the DeepQA\textsubscript{general} and BERT setting as an example: with $4m$ pre-training data and $5k$ fine-tuning data, our model reaches 76.53 AUC, which is even higher than the Original result with $30k$ fine-tuning data. In other words, with only $1/6$ of the human labeled data, our model can still outperform the model trained on the full data.
This experiment verifies that our approach can substantially reduce the labeling cost. \item When we increase the size of implicit feedback pre-training data from $0.5m$ to $1m$ and $4m$, our model achieves consistent gains in all experimental settings. In contrast, for FBQA\textsubscript{ctr}, the gains are not consistent: increasing the pre-training data size does not necessarily increase the metrics, which aligns with our findings in Section \ref{sec:imp_feedback_modeling}. \item Our approach shows good gains over the baselines with both BiLSTM and BERT models, which verifies the model-agnostic nature of our approach. It is expected that BERT-based QA models outperform BiLSTM-based models, since BERT benefits from large-scale unsupervised pre-training as well as a large model parameter size. It is interesting to find that even on top of a powerful deep pre-trained model such as BERT, further significant gains can be observed. This demonstrates the huge potential of cheap, abundant implicit feedback derived from large-scale user behavior data as a complementary data source to the relatively small amounts of expensive human labeled data. \end{itemize} \subsubsection{Impact of pre-training data size} To further analyze the impact of user implicit feedback on web QA, we explore the model performance with respect to the size of the feedback data employed in the pre-training stage. The experiments are conducted on the DeepQA\textsubscript{general} dataset using both BiLSTM and BERT\textsubscript{base} models. The pre-training data size is set as \{0, 1, 2, 3, 4, 5, 6\} million. The results are shown in Figure \ref{figure:feedbackdatasize_auc}. By increasing the size of implicit feedback data in pre-training from 0 to 4 million, the model performance increases accordingly. However, when the data size reaches a certain scale, e.g., 4 million, the AUC metric on the test set slowly flattens out.
This suggests that the noise in the implicit feedback data may limit the upper bound of the improvement. \begin{figure}[htbp] \setlength{\belowcaptionskip}{-0.1cm} \centering \includegraphics[scale=0.35, viewport=2 5 600 410, clip=true]{new_impact_training_data_size.eps} \caption{\label{figure:feedbackdatasize_auc} Performance on DeepQA\textsubscript{general} dataset with different FeedbackQA pre-training data size.} \label{fig:impact_size} \end{figure} \subsubsection{Results on MS Marco and WikiPassageQA datasets} We further apply our model pre-trained on $4m$ FeedbackQA\textsubscript{gbdt} implicit relevance feedback data to two open benchmark QA datasets, MS Marco and WikiPassageQA. The results are reported in Table~\ref{table:marco_test}. We find the queries in the MS Marco set are simpler than those in the DeepQA\textsubscript{general} and DeepQA\textsubscript{factoid} sets; consequently, with 10k human labeled examples and the BERT model, the AUC can reach as high as 94.01\%. Our approach also shows improvement on this dataset, although the gain is not as large as on the two DeepQA sets. For the BiLSTM model, the gain is around 3 AUC points on average over 10k, 20k, and 30k human labeled training data in the fine-tuning stage. For the BERT model, since the baseline is already very high, the improvement is less than 1 point. On the WikiPassageQA dataset, our approach also shows gains: around 1.3 AUC points for BERT and around 1.8 AUC points for BiLSTM on average over 2k, 5k, and 10k labeled fine-tuning data.
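Since AUC is the primary metric in all of these comparisons, the following pure-Python sketch illustrates one standard way to compute it: the fraction of (positive, negative) pairs in which the positive example receives the higher model score, with ties counted as half. The exact evaluation tooling used in the paper is not specified, so this is only an illustrative reference implementation.

```python
from itertools import product

def auc(labels, scores):
    """Pairwise AUC: fraction of (positive, negative) pairs where the
    positive example receives the higher score; ties count as 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative example")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))

# A model that ranks every positive above every negative gets AUC 1.0.
print(auc([1, 0, 1, 0], [0.9, 0.2, 0.6, 0.4]))  # 1.0
```

In practice a library routine (e.g. scikit-learn's `roc_auc_score`) would be used instead of this quadratic-time version; the pairwise definition is shown here only because it makes the metric's meaning explicit.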
\begin{table}[htbp] \small \caption{\label{table:marco_test} Results on MS Marco and WikiPassageQA datasets (all AUC metrics in the table are percentage numbers with \% omitted).} \begin{tabular}{ccccc|ccc} \hline \multirow{2}{*}{\textbf{Model}} & \multirow{2}{*}{\textbf{Method}} & \multicolumn{3}{c|}{\textbf{MS Marco}} & \multicolumn{3}{c}{\textbf{WikiPassageQA}} \\ & & \textbf{2k} & \textbf{5k} & \textbf{10k} & \textbf{2k} & \textbf{5k} & \textbf{10k} \\ \hline \multirow{3}{*}{\textbf{BiLSTM}} & \textbf{Original} & 64.70 & 64.25 & 65.61 & 55.39 & 58.37 & 60.72 \\ & \textbf{FBQA\textsubscript{ctr}} & 62.52 & 65.50 & 66.46 & 53.80 & 58.95 & 61.16 \\ & \textbf{FBQA\textsubscript{FA}} & \textbf{65.65} & \textbf{66.24} & \textbf{68.66} & \textbf{57.23} & \textbf{60.38} & \textbf{62.30} \\ \hline \multirow{3}{*}{\textbf{BERT}} & \textbf{Original} & 87.93 & 93.02 & 94.01 & 78.03 & 83.70 & 84.70 \\ & \textbf{FBQA\textsubscript{ctr}} & 88.70 & 93.42 & 94.47 & 78.04 & 81.18 & 85.66 \\ & \textbf{FBQA\textsubscript{FA}} & \textbf{88.75} & \textbf{94.02} & \textbf{94.81} & \textbf{80.10} & \textbf{84.14} & \textbf{86.16} \\ \hline \end{tabular} \vspace{-8pt} \end{table} \nop{ \subsubsection{Comparison of objective functions in the pre-training stage} \label{subsec:objective impact} Once we learn an implicit relevance feedback model, we can apply it to a query passage pair to predict a relevance score. In Equation~\ref{eq:label}, we use two thresholds to convert this continuous relevance score into a Boolean relevance label. In this section, we would like to explore whether it is better to fit the Boolean relevance label in the pre-training stage, or alternatively, fit the original continuous relevance score (between 0 and 1). To fit the Boolean relevance label, we carry out a classification task and use the loss of cross-entropy (CE) as in Equation~\ref{eq:cross_entropy}. 
Instead, to fit the original continuous relevance score, we conduct a regression task and use the mean squared error (MSE) as the loss function: \begin{align} &L_{MSE}=\frac{1}{n}\sum_{i=1}^{n}({y}_{i}-\bar{y}_{i})^{2} \end{align} where $y_{i}$ represents our QA model output, $\bar{y}_{i}$ represents the auto-labeled relevance score output by the feedback model in Table \ref{table:metrics for different modeling method}, and $n$ represents the number of training samples. We compare the results from the two objective functions in Figure \ref{figure:impact_obj}. It is clear that for both BERT and BiLSTM, the performance of using the regression loss in the pre-training stage is consistently worse than that with the cross-entropy loss. This indicates that the user feedback data is noisy, and learning the fine-grained data distribution with a regression task may propagate the noise to the fine-tuning stage. Instead, the Boolean label approach treats the implicit relevance feedback as a rough indicator, and throws away the uncertain cases falling between the two thresholds. Consequently, the Boolean label approach is more robust to the noise. \begin{figure}[H] \setlength{\belowcaptionskip}{-0.2cm} \centering \includegraphics[scale=0.22, viewport=20 20 680 540, clip=true]{impact_obj.eps} \caption{\label{figure:impact_obj} Comparison of objective functions during the QA pre-training stage on DeepQA\textsubscript{general} dataset.} \end{figure} } \subsection{Application to non-English QA} We further apply our approach in a commercial search engine for several non-English markets. We find user behaviors in different countries are very consistent. Consequently, the implicit relevance feedback model trained in the en-US market can be successfully transferred to foreign markets without any tuning. The results are shown in Table \ref{t:frde_qa}.
In the de-DE (German) and fr-FR (French) markets, our approach significantly improves the QA service in terms of the AUC metric, saving a huge amount of human labeling cost. Take fr-FR as an example: our approach shows consistent gains of around 3.2 AUC points across all training data sizes. Meanwhile, the FBQA\textsubscript{FA} model achieves 76.43 AUC with only 5k training samples, while the Original model needs 30k training samples to achieve similar results. \begin{table}[htbp] \small \caption{\label{t:frde_qa} Results on French and German QA.} \centering \begin{tabular}{@{}ccccc@{}} \hline \textbf{Model} & \multicolumn{4}{c}{\textbf{AUC of fr-FR \& de-DE}} \\ & \textbf{5k} & \textbf{10k} & \textbf{30k} & \textbf{50k} \\ \midrule \textbf{Original} & 73.05/71.46 & 73.99/73.15 & 76.23/75.84 & 76.82/77.11 \\ \textbf{FBQA\textsubscript{FA}} & {\ul \textbf{76.43/76.64}} & {\ul \textbf{77.26/76.22}} & {\ul \textbf{79.28/78.83}} & {\ul \textbf{80.31/79.76}} \\ \hline \end{tabular} \vspace{-10pt} \end{table} \section{Conclusion and Future Work} \label{sec:conclusion} This paper presents a study of user interaction with the web QA block, and proposes a novel framework to mine implicit relevance feedback from user behavior. The implicit feedback models are further applied to generate weak supervision data to train QA models. Extensive experiments verify the effectiveness of this approach in improving the performance of QA models and thus reducing the human labeling cost. Mining implicit feedback from user behavior data for the web QA task is an interesting area to explore. In this study, we mainly focus on users' search behavior, while in the future, we may combine users' search behavior with browse behavior. Moreover, we may also conduct deeper analysis on the question types and compare the effectiveness of implicit feedback on different types of queries. Understanding when to trigger the QA block from user feedback is another interesting problem.
Finally, the application of our approach to more languages is also in our future plan. \bibliographystyle{ACM-Reference-Format} \section{Introduction} Question answering (QA) has become a de facto feature in \emph{search result pages} (\emph{SERP} for short) in most commercial search engines. For a query bearing some question intent, such as a noun phrase like ``symptoms of coronavirus'', a search engine can extract the most relevant passage from web documents and put it in an individual block at the top of a SERP. Figure~\ref{Goole&Bing} shows a screenshot of the QA feature of a commercial search engine, where the query is ``normal temperature for children in Celsius''. Typically, a {\em QA block} is composed of a question, the passage to answer the question, the URL of the source web document from which the passage is extracted, and the links to collect user feedback (e.g., ``Was the QA block helpful?''). Clearly, a well-designed QA block can deliver informative answers to search engine users in an intuitive and straightforward manner, save user time, and improve user experience. QA blocks have become even more popular on mobile devices as voice search is adopted by more and more users. The magic behind a QA block is powered by various machine learning algorithms, including the latest deep neural networks~\cite{radev2002probabilistic,echihabi2008select,kaisser2004question,DBLP:journals/corr/abs-1908-06780,Yang2020ModelCW,Yuan2020EnhancingAB,Huang2019UnicoderAU}. While machine learning algorithms have attracted extensive attention, people often overlook a critical challenge in making QA blocks industry-scale commercial products -- the need for huge amounts of training data. In practice, a commercial search engine receives extremely diverse open-domain questions at web scale.
To handle such a complex and huge question space, the QA models for search engines often have to involve tens of millions of parameters, which makes the models prone to overfitting on small training data. Consequently, we usually have to use millions of training examples to train a model in order to overcome overfitting and biases. It is well recognized that obtaining large amounts of high-quality training data is a bottleneck for commercial search engines. Using human judges to label training data is very expensive in both cost and time. To make the challenge even tougher, a commercial search engine often provides services in global markets with various languages. It is unrealistic to manually label millions of training samples for each language. A practical approach for search engines to collect massive training data for search tasks is to exploit implicit relevance feedback from user behavior mined from search logs. A rich body of literature exists on this topic~\cite{DBLP:journals/sigir/JoachimsGPHG17,Gao2011ClickthroughbasedLS,DBLP:conf/cikm/HuangHGDAH13,DBLP:journals/sigir/AgichteinBD18,Bilenko:2008:MST:1367497.1367505, White:2007:SUP:1277741.1277771}. Can we simply extend the existing best practice to collect user implicit relevance feedback data to train QA models? Unfortunately, all existing works target the relevance of web {\em documents}, rather than {\em passages}. Collecting and understanding implicit relevance feedback for QA blocks is much more sophisticated and demands innovation beyond existing approaches. While we will develop our novel approach and present our best engineering practice later in the paper, let us illustrate some challenges in collecting and understanding implicit relevance feedback for QA blocks using a real example.
\begin{figure}[t] \setlength{\belowcaptionskip}{-0.2cm} \centering \includegraphics[scale=0.54, viewport=100 270 600 470, clip=true]{QA-New.pdf} \caption{Example QA features in web search engines.} \label{Goole&Bing} \end{figure} Figure~\ref{t:passage_behavior} shows two QA examples. In the first case, ``{\sl What's the normal body temperature for \textbf{child}?}'', there is no click in the QA block. In the second case, ``{\sl What's the normal body temperature for \textbf{adult}?}'', user clicks on the URL in the QA block are observed. We further examine these two cases, and find that the passage in the first case perfectly answers the question. Therefore, a user can obtain satisfactory information by simply going through the content of the passage. No follow-up action is needed. For the second case, the information in the passage (about child body temperature) does not accurately match the user intent (for adult body temperature). A user may have to explore more information in the source page from which the passage is extracted. In this case, the title ``human body temperature'' of the source page may trigger a user's interest to click on the URL and read more in that page. This example illustrates a unique characteristic of QA. As the content of the passage is already presented to users in a QA block, users may not need to click on the URL to get the answer. Consequently, the correlation between user clicks and passage relevance may be much weaker than the correlation between user clicks and page relevance in web search results. We will provide more insights in Section~\ref{sec:click_baseline}. \begin{figure}[t] \small \centering \begin{tabular}{lp{5cm}p{7cm}} \hline \textbf{Question (a)}: &\emph{What's the normal body temperature for \textbf{child}?}\\ \hline \textbf{Passage (a)}: &\emph{The average normal body temperature for children is about \textbf{37 degree}. 
A child's temperature usually averages from around \textbf{36.3 degree} in the morning to \textbf{37.6 degree} in the afternoon.} \\ \hline \textbf{URL (a)}: &\emph{Human Body Temperature: Fever - Normal - Low \url{https://www.disabled-world.com/calculators-charts/degrees.php}} \\ \hline \textbf{Label}: &\emph{Relevant} \\ \hline \textbf{User behavior}: &\emph{\textbf{No Click}} \\ \hline \\ \hline \textbf{Question (b)}: &\emph{What's the normal body temperature for \textbf{adult}?}\\ \hline \textbf{Passage (b)}: &\emph{The average normal body temperature for children is about \textbf{37 degree}. A child's temperature usually averages from around \textbf{36.3 degree} in the morning to \textbf{37.6 degree} in the afternoon.} \\ \hline \textbf{URL (b)}: &\emph{Human Body Temperature: Fever - Normal - Low \url{https://www.disabled-world.com/calculators-charts/degrees.php}} \\ \hline \textbf{Label}: &\emph{Irrelevant} \\ \hline \textbf{User behavior}: &\emph{\textbf{Click}} \\ \hline \end{tabular} \label{table:example} \caption{\label{t:passage_behavior}Examples of user behavior for web QA.} \vspace{-8pt} \end{figure} Another major difference between QA blocks and web document results is the number of results presented in a SERP. Given a user question, a search engine usually returns a list of web documents, but only a single QA block. Most previous click models leverage the relative rank order of documents to gain more reliable implicit feedback. However, this idea cannot be applied to QA blocks, since a SERP contains only one QA block per question. In this paper, we investigate user behavior in QA blocks and propose a novel approach to mine implicit relevance feedback from noisy user behavior data for passage relevance. To the best of our knowledge, this is the first systematic study to address the data collection challenges for QA blocks. We make the following contributions.
First, we capture three types of user behavior when users interact with QA blocks, namely \emph{click behavior}, \emph{re-query behavior}, and \emph{browsing behavior}. By analyzing the aggregated sequences of user actions in the context of complete search sessions, we obtain interesting insights about the correlation between user behavior and passage relevance. Second, we examine several possible methods that automatically extract user feedback signals from user behavior data. With a small amount of human labeled data as ground truth, we reveal a strong correlation between extracted feedback signals and passage relevance, and further assess the feasibility of learning implicit feedback with reasonable accuracy. Third, we incorporate implicit feedback mined from user behavior data into a weakly-supervised approach for QA model training, and carry out extensive experiments on several QA datasets in English. The experimental results clearly show our approach greatly improves the QA performance on all datasets, especially under low-resource conditions. Last, we deploy our approach in a commercial search engine in two non-English markets. We find users speaking different languages uniformly follow similar behavior patterns when they interact with QA blocks. Consequently, the implicit relevance feedback model trained in the en-US (English) market can be successfully transferred to foreign markets without any tuning. In the de-DE (German) and fr-FR (French) markets, our approach significantly improves the QA service by around 3.0\% in the AUC metric. Moreover, this approach can automatically refresh the QA model by continuously collecting relevance feedback from users, which further saves the labeling cost. We expect our approach to save millions of dollars of labeling cost when scaling out to more markets. The rest of the paper is organized as follows. We first review the related work in Section~\ref{sec:related-work}.
We then present our approach in Section~\ref{sec:method}. We report the extensive experimental results in Section~\ref{sec:experiment}, and conclude the paper in Section~\ref{sec:conclusion}. \section{Related Work} \label{sec:related-work} Our study is mainly related to the previous work on QA and learning from user feedback. We provide a brief review on those topics here. \subsection{Question Answering (QA)} The purpose of web QA is to offer users an efficient information access mechanism by directly presenting an answer passage to web search engine users~\cite{chen2017reading,ahn2004using,buscaldi2006mining}. There are various methods for web QA in the literature. For example, Moldovan~\textit{et al.}~\cite{DBLP:conf/acl/MoldovanHPMGGR00} proposed a window-based word scoring technique to rank potential answer pieces for web QA. Cui~\textit{et al.}~\cite{DBLP:conf/sigir/CuiSLKC05} learned transformations of dependency paths from questions to answers to improve passage ranking. Yao~\textit{et al.}~\cite{DBLP:conf/naacl/YaoDCC13} performed matching using minimal edit sequences between dependency parse trees. AskMSR~\cite{DBLP:conf/emnlp/BrillDB02}, a search-engine-based QA system, used a Bayesian neural network relying on data redundancy to find short answers. In recent years, deep neural networks have achieved excellent performance in QA~\cite{chen2017reading,DBLP:conf/aaai/WangYGWKZCTZJ18}. Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), such as the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), were applied to learn the representations of questions and answers~\cite{DBLP:journals/corr/TanXZ15, DBLP:conf/acl/TanSXZ16}. Attention mechanisms were employed to model the interaction between questions and answers~\cite{DBLP:conf/cikm/YangAGC16,DBLP:conf/ijcai/WangHF17}, which led to better performance than simply modeling query and answer separately.
Most recently, deep pre-trained models, such as BERT~\cite{DBLP:journals/corr/abs-1908-08167} and XLNet~\cite{DBLP:journals/corr/abs-1906-08237}, have become the new state-of-the-art approaches for QA. To tackle web-scale open-domain question answering, statistical machine learning models require large amounts of training data. In this paper, we do not aim to develop another QA model. Instead, we aim to find a model-agnostic approach for training data collection. \subsection{Learning from User Feedback} User feedback has been intensively explored in web page ranking to improve search quality~\cite{DBLP:conf/kdd/Joachims02, DBLP:conf/sigir/JoachimsGPHG05}. There are two types of user feedback. \emph{Explicit (or shallow) feedback} means that a user takes extra effort to proactively express her satisfaction with the search results, e.g., through a simple up-voting or down-voting button. \emph{Implicit feedback} is the inference of user satisfaction from a user's search and/or browse sessions, without extra effort from the users. Rocchio~\cite{rocchio1971relevance} pioneered the use of relevance feedback for information retrieval by explicitly gathering feedback through a button for up-voting or down-voting. Another means of collecting explicit feedback was through side-by-side comparisons~\cite{ali2006relationship, thomas2006evaluation}. In practice, the chances of receiving explicit feedback from users are very low, since explicit feedback disturbs users in their normal interaction with search engines. Compared with explicit feedback, implicit feedback can be collected at much lower cost and in much larger quantity, without putting any burden on users of search systems~\cite{DBLP:journals/sigir/JoachimsGPHG17}.
Various features have been extracted from user behavior data, such as click-through information, average dwell time, and number of page visits in post-search browsing sessions~\cite{DBLP:journals/sigir/AgichteinBD18,Bilenko:2008:MST:1367497.1367505}. For example, Joachims~\textit{et al.}~\cite{DBLP:journals/sigir/JoachimsGPHG17} derived relative preferences from click-through information. Agichtein~\textit{et al.}~\cite{DBLP:journals/sigir/AgichteinBD18} explored page clicks and page visits as features to improve the ordering of top results in web search. Gao~\textit{et al.}~\cite{DBLP:conf/sigir/GaoTY11} and Huang~\textit{et al.}~\cite{DBLP:conf/cikm/HuangHGDAH13} used click-through data for deep semantic model training to learn semantic matching between queries and documents in page ranking. A major challenge in exploiting implicit feedback is that it is inherently noisy or even biased~\cite{DBLP:conf/sigir/JoachimsGPHG05}. To address the challenge, various methods have been proposed. For example, Craswell~\textit{et al.}~\cite{DBLP:conf/wsdm/CraswellZTR08} proposed four simple hypotheses about how position bias may arise. Dupret and Piwowarski~\cite{DBLP:conf/sigir/DupretP08} proposed a set of assumptions on user browsing behavior in order to estimate the probability that a document would be viewed. Chapelle and Zhang~\cite{DBLP:conf/www/ChapelleZ09} proposed a dynamic Bayesian network model to indicate whether a user was satisfied with a clicked document and then left the page. Although user feedback for web page ranking has been well studied, there is little work on user feedback for web QA. The closest work to our study is by Kratzwald and Feuerriegel~\cite{kratzwald2019learning}, who designed feedback buttons to explicitly ask users to assess the overall quality of the QA result. In contrast, our work focuses on mining implicit relevance feedback for web QA.
To the best of our knowledge, this is the first study on this frontier. \nop{ \begin{figure*}[ht] \setlength{\belowcaptionskip}{-0.2cm} \centering \includegraphics[scale=0.68, viewport=20 180 700 470, clip=true]{framework_final.pdf} \caption{\label{figure:3_framework} The overall framework of our FeedbackQA approach.} \label{fig:framkework} \end{figure*} } \section{Our Approach} \label{sec:method} Our goal is to derive training data from online user behavior to train QA models. To achieve this goal, our basic idea is to learn an implicit relevance feedback model. Given a question $q$ and the passage $p$ served by the search engine, the feedback model extracts features from user behavior and predicts the relevance $r$ between $q$ and $p$. The predicted results $(q, p, r)$ are then used as training data to train QA models. Based on this idea, we first conduct a comprehensive analysis of user behavior in QA sessions in Section~\ref{sec:user_feedback} and propose a systematic categorization to cover all types of user behavior. We then design a rich set of user behavior features in Section~\ref{sec:feature_extraction} to make sure we do not miss any useful implicit feedback signals. In Section~\ref{sec:imp_feedback_modeling}, we carefully compare various algorithms that learn implicit feedback models, and apply the best model to a huge volume of user behavior data to derive a large amount of training data. Section~\ref{sec:two_stage} elaborates how we leverage the derived training data in a weakly-supervised approach for QA model training.
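The auto-labeling idea described above can be sketched in a few lines: a learned feedback model scores each $(q, p)$ pair from aggregated behavior features, and the scores are converted into weak relevance labels for QA pre-training, discarding uncertain cases. The feature names follow the taxonomy used in this paper, but the linear scoring rule (a toy stand-in for the actual GBDT feedback model), the feature weights, and the thresholds are purely illustrative assumptions.

```python
def feedback_score(features):
    # Toy stand-in for the learned feedback model (a GBDT in the paper):
    # low re-query rate and low outside-answer CTR suggest a relevant passage.
    # The weights 0.7/0.3 are illustrative, not the paper's parameters.
    return 0.7 * (1.0 - features["RFRate"]) + 0.3 * (1.0 - features["OTAnswerCTR"])

def weak_label(score, low=0.3, high=0.7):
    """Two-threshold conversion: keep confident positives/negatives,
    discard uncertain middle cases (returned as None)."""
    if score >= high:
        return 1
    if score <= low:
        return 0
    return None

# Aggregated behavior features per (question, passage) pair; values invented.
sessions = [
    {"q": "normal temperature for children", "p": "passage-a",
     "features": {"RFRate": 0.05, "OTAnswerCTR": 0.10}},
    {"q": "normal temperature for adults", "p": "passage-a",
     "features": {"RFRate": 0.80, "OTAnswerCTR": 0.90}},
]

# Build (q, p, r) weak-supervision triples for QA pre-training.
pretraining_data = []
for s in sessions:
    label = weak_label(feedback_score(s["features"]))
    if label is not None:  # drop uncertain cases
        pretraining_data.append((s["q"], s["p"], label))

print(pretraining_data)
```

The resulting triples would then feed the first (pre-training) stage of the two-stage scheme, with human labeled data used for fine-tuning in the second stage.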
\subsection{Taxonomy of User Behavior} \label{sec:user_feedback} \begin{table}[t] \caption{\label{table:behavior} Taxonomy of user behavior in a web QA system} \begin{center} \begin{tabular}{ ccc} \hline \textbf{Feedback} & \textbf{Category} & \textbf{Behavior} \\ \hline \textbf{Explicit} & \emph{Click} & \emph{Up-vote/Down-vote} \\ \hline \multirow{8}{*}{\textbf{Implicit}} & \emph{Re-query} & \emph{Reformulation} \\ \cdashline{2-3} & \multirow{4}{*}{\emph{Click}} & \emph{Answer Click} \\ & & \emph{Answer Expansion Click} \\ & & \emph{Outside Answer Click} \\ & & \emph{Related Click} \\ \cdashline{2-3} & \multirow{1}{*}{\emph{Browsing}} & \emph{Browse} \\ \hline \end{tabular} \label{table:user-feedback} \end{center} \vspace{-10pt} \end{table} We propose a taxonomy of user behavior summarized in Table~\ref{table:behavior}. At the top level, we distinguish two types of user behavior, which correspond to explicit and implicit feedback to web QA systems. In the following, we first show an empirical study of explicit feedback in a commercial search engine, and explain why it is not efficient or effective to collect training data through explicit feedback. We then describe user implicit feedback in detail. To collect explicit feedback, commercial search engines, such as Google and Bing, provide links at the bottom of a QA block, as illustrated in Figure~\ref{Goole&Bing}. However, only a very small fraction of users send their explicit feedback. In a real commercial web QA system, the coverage of explicit feedback, i.e., clicking on the feedback links, is less than \emph{0.001\%} of the total QA impressions. Moreover, we find that users strongly tend to send negative feedback -- the positive-to-negative ratio is about 1:17. To form a balanced training dataset, we have to sample almost equal amounts of positive and negative examples from the skewed label distribution. This further reduces the size of valid training data that can be derived from explicit feedback.
Consequently, explicit feedback may not be a good source of training data for the web QA model. We then cast our attention to implicit feedback. Basically, all actions related to QA blocks recorded in search logs are either queries or clicks. Therefore, the first two categories in our taxonomy correspond to these two types of actions. We further refine the categories of actions related to QA blocks into sub-groups. For example, we distinguish the types of clicks based on the components that are clicked on. Finally, we also model the information about general SERP browsing that may be useful to QA blocks, which is the last category in our taxonomy. The details of the taxonomy are introduced in the following. \noindent \textbf{\emph{Re-query Behavior}}: we consider the sequence of user queries in a session and particularly note whether a user issues a new query by modifying the previous one in the session. We also refer to this behavior interchangeably as \emph{reformulation}. \noindent \textbf{\emph{Click Behavior}}: we distinguish four types of clicks, depending on the components being clicked on (see Figure~\ref{figure:user_behavior} for illustration). \begin{itemize} \item \emph{Answer Click}: a user clicks on the source page URL of the answer passage (indicated by {\small \textcircled{1}} in Figure~\ref{figure:user_behavior}). \item \emph{Answer Expansion Click}: a user clicks on a special button (indicated by {\small \textcircled{2}} in Figure~\ref{figure:user_behavior}) to expand the folded QA answer due to the maximum length limit for display. \item \emph{Outside Answer Click}: a user clicks on the links to the web documents in the SERP (indicated by {\small \textcircled{3}} in Figure~\ref{figure:user_behavior}) other than the source page URL for the web QA passage. \item \emph{Related Click}: a user clicks on the related queries (indicated by {\small \textcircled{4}} in Figure~\ref{figure:user_behavior}) to explore more information.
\end{itemize} \noindent \textbf{\emph{Browsing Behavior}}: a user reads the content of the QA passage or other components in the SERP without giving any input to the search engine. \begin{figure}[t] \setlength{\belowcaptionskip}{-0.2cm} \centering \includegraphics[scale=0.78, viewport=250 160 480 462, clip=true] {user_behaviour_clear.pdf} \caption{\label{figure:user_behavior} Illustration of user click behavior, including \emph{Answer Click}, \emph{Answer Expansion Click}, \emph{Outside Answer Click}, and \emph{Related Click}.} \end{figure} \subsection{Feature Extraction from User Behavior}\label{sec:feature_extraction} \begin{table}[t] \small \caption{\label{table:feature_description} User behavior features. Here ``Answer'' means the source URL for the passage in web QA.} \begin{tabular}{p{2.0cm}lp{4.3cm}} \hline \textbf{Name} & \textbf{Type} & \textbf{Description} \\ \hline \emph{RFRate} & \emph{Re-query} & rate of re-query \\ \hdashline \emph{AnswerCTR} & \emph{Click} & CTR of answer \\ \emph{AnswerOnlyCTR} & \emph{Click} & CTR with only click on answer \\ \emph{AnswerSatCTR} & \emph{Click} & satisfied CTR of answer\\ \emph{AnswerExpRate} & \emph{Click} & CTR of answer expansion \\ \emph{OTAnswerCTR} & \emph{Click} & CTR outside of answer \\ \emph{OTAnswerOnlyCTR} & \emph{Click} & CTR with only click outside of answer \\ \emph{OTAnswerSatCTR} & \emph{Click} & satisfied CTR outside of answer \\ \emph{BothClickCTR} & \emph{Click} & CTR of both click on/outside of answer \\ \emph{RelatedClickRate} & \emph{Click} & CTR of related queries \\ \hdashline \emph{NoClickRate} & \emph{Browsing} & no click rate \\ \emph{AbandonRate} & \emph{Browsing} & abandonment rate \\ \emph{AvgSourcePage- DwellTime} & \emph{Browsing} & average source page dwell time \\ \emph{AvgSERPDwellTime} & \emph{Browsing} & average SERP dwell time \\ \hline \end{tabular} \end{table} To learn implicit feedback models, we need to design user behavior features, which should
be sensitive and robust in capturing relevance feedback signals from users. To meet this goal, we follow two principles in our feature design. First, our feature set exhaustively covers all types of user behavior discussed in Section~\ref{sec:user_feedback}, so that we do not miss any useful relevance signals. Second, we design aggregated features that summarize the behavior of multiple users in multiple sessions; aggregation effectively reduces the noise and biases in individual users and individual actions. The features are listed in Table~\ref{table:feature_description}. Most features are straightforward; please refer to the ``Description'' column for their meanings. We explain a few selected features as follows. \begin{itemize} \item \emph{AvgSourcePageDwellTime}: the average time from the user clicking into the source page of the web QA answer to the user leaving the source page. \item \emph{AvgSERPDwellTime}: the average time from the SERP being loaded successfully to the completion of the search session. \item \emph{AbandonRate}: the percentage of sessions with no click on the SERP; in those sessions, a user just browses the SERP and leaves the search session. \end{itemize} The click-through rate (\emph{CTR}) for a component, which can be a QA passage, an answer expansion, a related search, or a web document outside of the QA block, is defined as \begin{equation} CTR = \frac{N_{click}}{N_{impression}} \end{equation} where $N_{impression}$ denotes the total number of impressions of the component and $N_{click}$ the number of clicks on the component. A satisfied click (\emph{SatClick}) on a component is a click on the component followed by a dwell time greater than or equal to a pre-defined threshold. The satisfied click-through rate (\emph{SatCTR}) is then defined as \begin{equation} SatCTR = \frac{N_{SatClick}}{N_{impression}} \end{equation} where $N_{SatClick}$ denotes the number of SatClicks on the component.
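As an illustration, the two aggregated click features defined above can be computed per question-passage pair as in the following sketch. The per-impression record layout is a hypothetical simplification of the search log, not the search engine's actual schema:

```python
from dataclasses import dataclass

# Hypothetical per-impression record for one <question, passage> pair;
# field names are illustrative assumptions, not the real log schema.
@dataclass
class Impression:
    answer_clicked: bool
    source_dwell_time: float  # seconds on the answer's source page (0 if no click)

def ctr(impressions):
    """CTR: fraction of impressions with a click on the answer's source URL."""
    clicks = sum(1 for imp in impressions if imp.answer_clicked)
    return clicks / len(impressions)

def sat_ctr(impressions, dwell_threshold=15.0):
    """SatCTR: fraction of impressions with a click whose source-page
    dwell time meets the pre-defined threshold (a satisfied click)."""
    sat = sum(1 for imp in impressions
              if imp.answer_clicked and imp.source_dwell_time >= dwell_threshold)
    return sat / len(impressions)

# Toy example: 4 impressions, 2 clicks, 1 satisfied click (dwell >= 15s).
log = [Impression(True, 30.0), Impression(True, 5.0),
       Impression(False, 0.0), Impression(False, 0.0)]
print(ctr(log))      # 0.5
print(sat_ctr(log))  # 0.25
```

Aggregating over all impressions of a pair, rather than scoring single sessions, is what makes these features robust to individual-user noise.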
\subsection{Implicit Feedback Modeling} \label{sec:imp_feedback_modeling} In this section, we aim to build an effective implicit feedback model that predicts the relevance between a question and a passage based on the features designed in Section~\ref{sec:feature_extraction}. We first prepare a dataset of 18k QA pairs, where each QA pair is augmented with the set of user behavior features as well as a human-judged label (Section~\ref{sec:feedback_dataset}). Intuitively, a single feature, such as AnswerCTR or AnswerSatCTR, may be too weak a signal to infer user satisfaction. We verify this assumption as a baseline in Section~\ref{sec:click_baseline}. We then consider various machine learning models, including Logistic Regression (LR)~\cite{menard2002applied}, Decision Tree (DT)~\cite{safavian1991survey}, Random Forest (RF)~\cite{liaw2002classification}, and Gradient Boosting Decision Tree (GBDT)~\cite{ke2017lightgbm}, and conduct an empirical study to choose the best model (Section~\ref{sec:ml_models}). We also analyze feature importance and derive rules from the learned models, which help us gain insights into users' decision process when they interact with a QA block. \subsubsection{Dataset} \label{sec:feedback_dataset} We create a dataset of 18k QA pairs, where each QA pair is augmented with the set of user behavior features as well as a human-judged label. More specifically, the dataset is a table where each row has the form \emph{$\langle$Question, Passage, User behavior features, Label$\rangle$}. The QA pairs are sampled from a half-year log (from January to June 2019) of one commercial search engine. We sample QA pairs whose number of impressions in the log is around 50. This number is set due to two considerations. First, we want to aggregate multiple users' behavior to reduce the noise from individual users. Second, we want to avoid overly popular queries, which tend to be short and too easy to answer.
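The sampling criterion above can be sketched as follows. The record layout, the concrete ``around 50'' window of $[40, 60]$, and the split proportions applied to the toy sample are our assumptions for illustration only:

```python
import random

# Hypothetical aggregated log rows: (question, passage, impression_count).
records = [("q1", "p1", 3), ("q2", "p2", 48), ("q3", "p3", 52),
           ("q4", "p4", 50), ("q5", "p5", 900), ("q6", "p6", 49)]

# Keep only pairs with a moderate number of impressions: enough users to
# aggregate over, but not so many that the query is overly popular.
sampled = [r for r in records if 40 <= r[2] <= 60]

# Random 14k/2k/2k-style split (14:2:2 proportions) on the filtered pairs.
random.seed(0)
random.shuffle(sampled)
n_train = round(len(sampled) * 14 / 18)
n_dev = round(len(sampled) * 2 / 18)
train = sampled[:n_train]
dev = sampled[n_train:n_train + n_dev]
test = sampled[n_train + n_dev:]
```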
Each QA pair is sent to three crowd-sourcing judges, and the final label is derived by majority voting. This 18k dataset is further randomly split into 14k/2k/2k training, dev, and test sets, respectively. We plan to make this dataset public if this paper is accepted. \subsubsection{Baseline}\label{sec:click_baseline} Click-through rate (\emph{CTR}) and satisfied click-through rate (\emph{SatCTR}) have been widely adopted in existing work as indicators of the relevance of a web page with respect to a given user query. Analogously, in our study of passage relevance, we start with users' clicks on the source URL of the answer passage. We first investigate the \emph{AnswerCTR} feature in Table~\ref{table:feature_description} by plotting a precision-recall curve in Figure~\ref{figure:pr}(a). \begin{itemize} \item For QA pairs whose $CTR > 0$, the recall is less than 0.33. In other words, when the passage is relevant to the question, in more than two thirds of cases users do not make a single click on the source URL. Note that the number of clicks is counted throughout all the impressions of that question-passage pair. This observation is very different from the page ranking case. However, considering the nature of question answering, the result is not surprising: users may simply browse the content of the passage and obtain the information they need, so no further click into the source URL is required. \item The highest precision is less than 0.77 across the full range of recall values. This indicates that clicking into the source URL does not necessarily imply a relevant passage. We find that in most clicked cases, the passage is only partially relevant to the question; users may therefore click into the source URL to explore more information. \end{itemize} We further investigate the correlation between \emph{SatCTR} and passage relevance. Similarly, we plot the precision-recall curves in Figure~\ref{figure:pr}(b).
Here $SatCTR_t$ means that clicks followed by a dwell time on the source page longer than $t$ seconds are counted as satisfied clicks. We experiment with dwell time thresholds of 5s, 15s, and 25s, and observe a trend similar to that in Figure~\ref{figure:pr}(a). The experiments with \emph{CTR} and \emph{SatCTR} verify that the single feature of user clicks into the source URL is not a good indicator of passage relevance. Clicks do not necessarily indicate relevant passages, and vice versa. Thus, we have to consider more complex models that combine the sequences of user actions in search sessions. \begin{figure}[t] \setlength{\belowcaptionskip}{-0.2cm} \centering \includegraphics[scale=0.32, viewport=10 10 700 350, clip=true]{pr_curve_ctr.pdf} \caption{\label{figure:pr}Precision-recall curves of \emph{AnswerCTR}(a) and \emph{AnswerSatCTR}(b).} \label{PR_Curve} \vspace{-5pt} \end{figure} \begin{table} \begin{center} \small \caption{Comparison of feedback modeling methods. The best results are highlighted in bold, and the second-best results are underlined.} \begin{tabular}{llll} \hline \textbf{Method} & \textbf{AUC} & \textbf{ACC} & \textbf{F1} \\ \hline \emph{AnswerCTR} & 58.28 & 53.87 & 18.11 \\ \emph{AnswerSatCTR\textsubscript{5s}} & 57.73 (-0.55) & 53.50 (-0.37) & 16.52 (-1.59) \\ \emph{AnswerSatCTR\textsubscript{15s}} & 57.75 (-0.53) & 52.73 (-1.14) & 13.43 (-4.68) \\ \emph{AnswerSatCTR\textsubscript{25s}} & 57.79 (-0.49) & 52.63 (-1.24) & 12.23 (-5.88) \\ \hline \textbf{LR} & \underline{71.41 (+13.13)} & 66.00 (+12.13) & \textbf{66.78 (+48.67)} \\ \textbf{DT} & 61.75 (+3.37) & 63.43 (+9.56) & 60.92 (+42.81) \\ \textbf{RF} & 71.14 (+12.86) & \underline{67.47 (+13.60)} & 65.66 (+47.55) \\ \textbf{GBDT} & \textbf{73.69 (+15.41)} & \textbf{68.00 (+14.13)} & \underline{66.08 (+47.97)} \\ \hline \end{tabular} \label{table:metrics for different modeling method} \end{center} \end{table} \subsubsection{Machine Learning Models}\label{sec:ml_models} We apply
machine learning models to combine the various types of user behavior features. The training target is to fit the human-judged labels. We evaluate model performance with common binary classification metrics, including area under the curve (AUC), accuracy (ACC), and F1 score. \noindent \textbf{\emph{Machine Learning Models Considered}}. We apply various models and evaluate the results, including Logistic Regression (LR), Decision Tree (DT), Random Forest (RF), and Gradient Boosting Decision Tree (GBDT). \noindent \textbf{\emph{Results}}. As summarized in Table~\ref{table:metrics for different modeling method}, the machine learning approach significantly outperforms the baseline methods (i.e., \emph{AnswerCTR} and \emph{AnswerSatCTR}) on all metrics. In terms of AUC and ACC, the GBDT model achieves the best performance. In terms of F1 score, the performance of the GBDT model (66.08) is very close to the best result (66.78). Overall, we consider GBDT the best model. \noindent \textbf{\emph{Model Interpretation}}. To gain more insights into user behavior on QA, we first investigate the impact of individual features based on the best model, GBDT. The top 8 features of the model are shown in Figure~\ref{figure:Feature_importance}. \begin{figure}[t] \setlength{\belowcaptionskip}{-0.2cm} \centering \includegraphics[scale=0.45, viewport=10 10 450 230, clip=true]{single_feature_importance.pdf} \caption{\label{figure:Feature_importance} Relative weights of the top 8 features in the GBDT model.} \vspace{-5pt} \end{figure} The results indicate that \emph{AvgSERPDwellTime} and \emph{OTAnswerOnlyCTR} have the highest feature importance, followed by \emph{AnswerOnlyCTR}, \emph{AnswerSatCTR}, \emph{AnswerExpRate}, and \emph{AbandonRate}. Reformulation-related features such as \emph{RFRate}, as well as \emph{RelatedClickRate}, have relatively low importance. We can also obtain the following insights about user behavior on web QA.
\begin{itemize} \item Click features related to the web QA answer itself, such as \emph{AnswerOnlyCTR} and \emph{AnswerSatCTR}, are not the most important features. This aligns well with our previous observations. \item SERP dwell time denotes the time period during which a user lingers on the search result page. Since the content of the passage is presented in the QA block (in the SERP) as the answer to the user's question, the length of SERP dwell time may be a good indicator of the relevance of the passage. \end{itemize} To further reveal users' decision process in a search session, we examine the decision tree model DT in Table~\ref{table:metrics for different modeling method}. Some interesting insights gained from the paths of the tree are listed in the following. \begin{itemize} \item If ``\emph{AvgSERPDwellTime} is long $\wedge$ \emph{NoClickRate} is large'', i.e., the SERP is abandoned, the passage usually has good relevance. Users may just browse the QA passage for a while and then obtain the information they need. \item ``\emph{AnswerCTR} is small $\wedge$ \emph{OTAnswerCTR} is large'' is often a strong signal that the passage has poor relevance. In such cases, users may not be satisfied with the passage answer and then click more on other documents. \item ``\emph{AnswerOnlyCTR} is large $\wedge$ \emph{AvgSERPDwellTime} is long'' is also a positive signal for passage relevance: the passage is relevant to the question, but due to the space limit of the QA block, the displayed content cannot fully answer the user's question. Therefore, the user clicks on the source URL. \item ``\emph{NoClickRate} is large $\wedge$ \emph{RelatedClickRate} is large'' suggests that the passage is not relevant to the user question; the user revises the query to express her information need. \end{itemize} \noindent \textbf{\emph{Summary}}: Unlike the case of page relevance, user clicks (including satisfied clicks) are not a good indicator of passage relevance in web QA.
However, by using machine learning models to combine various user behavior signals, it is still feasible to extract relevance feedback from search sessions. \subsection{Pre-training QA models} \label{sec:two_stage} Through an implicit feedback model, we can derive a large amount of training data. However, user behavior data may be very noisy. Although we develop aggregated features to reduce the biases in individual sessions, the prediction accuracy of the best model is only 68\% (see Table~\ref{table:metrics for different modeling method}). How to leverage such noisy training data becomes an interesting problem. Inspired by the pre-training idea in learning deep neural networks, we apply large-scale, automatically derived implicit feedback data to pre-train QA models, such as BiLSTM and BERT, as weak supervision. Intuitively, although the derived training data is noisy, it still contains valuable relevance signals and can roughly guide the parameters of QA models to a region close to the optimal solutions. In the second stage, we apply the human-labeled data to fine-tune the parameters and obtain the final model. As verified in our experiments (Section~\ref{sec:experiment}), the strategy of pre-training plus fine-tuning is remarkably better than training models with only human-labeled data and no pre-training. \nop{ As shown in Figure~\ref{fig:framkework}, a two-stage approach is proposed to integrate the user implicit feedback signals into the QA relevance model training process. At the first stage, a huge number of tuples \emph{<Query, Passage, User feedback>} can be mined from the search logs of a commercial search engine. By applying the feedback models developed in Table~\ref{table:metrics for different modeling method}, each \emph{<Query, Passage>} pair ($\left \langle Q, P \right \rangle$) is auto-labeled with a predicted relevance score.
\begin{align} &score_{<Q,P>}=F_{FeedbackModel}(x_{1}, ..., x_{m}) \\ &label_{<Q,P>}=\left\{\begin{matrix} 1 & score_{<Q,P>} \geq \tau_1,\\ 0 & score_{<Q,P>} \leq \tau_2 \end{matrix}\label{eq:label}\right. \end{align} where $F_{FeedbackModel}(\cdot)$ denotes the feedback model, $x_{i}$ represents the features described in the above section, $m$ is the number of features, and $\tau_1$ and $\tau_2$ are two thresholds. } Technically, let $Q={\{w_1, w_2, w_3, ..., w_{|Q|}\}}$ be a question with $|Q|$ words (or word pieces) and $P={\{w_1, w_2, w_3, ..., w_{|P|}\}}$ be a passage with $|P|$ words (or word pieces). We use cross-entropy (CE) as the loss function for both pre-training and fine-tuning, defined by \begin{align} &y=F_{QAModel}(\left \langle Q, P \right \rangle) \\ &L_{CE}=-\frac{1}{n}\sum_{i}^{n}[\hat{y}_{i}\log(y_{i})+ (1-\hat{y}_{i})\log(1-y_{i})] \label{eq:cross_entropy} \end{align} where $F_{QAModel}(\cdot)$ denotes the QA model, which will be described in Section 4.2, ${y}_{i}$ represents the QA relevance model output, $\hat{y}_{i}$ represents the true label, and $n$ is the number of training samples. \section{Experiments} \label{sec:experiment} In this section, we report extensive experiments that verify our proposed approach using real data from a commercial search engine. \subsection{Datasets and Metrics}\label{sec:exp_data} We conduct experiments on several datasets, with their statistics shown in Table~\ref{t:statistic}. A more detailed description of these datasets, as well as one extra dataset, is presented in Appendix~\ref{sec:append_data}. AUC and ACC are used as the evaluation metrics for the QA relevance models. \noindent \textbf{FeedbackQA\textsubscript{log}}: An English QA dataset collected from the latest half-year's web QA system log of a commercial search engine. Each item of the log consists of a tuple $\langle$query, passage, user behavior$\rangle$.
\noindent \textbf{FeedbackQA\textsubscript{\{ctr, gbdt\}}}: For each QA pair in FeedbackQA\textsubscript{log}, the feedback models in Table~\ref{table:metrics for different modeling method} are employed to predict a relevance label. The subscript indicates the model employed. \noindent \textbf{DeepQA}: An English QA dataset where each case consists of three parts: a question, a passage, and a binary label (i.e., 0 or 1) given by crowd-sourcing human judges. The queries and passages are sampled from a commercial search engine. \noindent \textbf{MS Marco}: An open-source QA dataset~\cite{DBLP:conf/nips/NguyenRSGTMD16} that contains questions generated from real anonymized Bing user queries. To evaluate the effectiveness of our approach in a low-resource setting (i.e., only a small amount of labeled data is available), the dataset is sub-sampled to form a positive/negative balanced set with 10k/1k/1k training, dev, and test sets, respectively. \noindent \textbf{WikiPassageQA}: A Wikipedia-based open-source QA dataset~\cite{DBLP:conf/sigir/CohenYC18} targeting non-factoid passage retrieval tasks. To evaluate our approach in the low-resource setting, the dataset is sub-sampled to form a positive/negative balanced set with 10k/1k/1k training, dev, and test sets, respectively. \noindent \textbf{FrenchGermanQA}: This dataset is collected in a process similar to DeepQA. The main difference is that this dataset targets the French and German languages.
\begin{table} \begin{center} \small \caption{\label{t:statistic} Statistics of experiment datasets.} \begin{tabular}{lcccc} \hline \textbf{Dataset} & \textbf{Train} & \textbf{Dev} & \textbf{Test} & \textbf{Labels}\\ \hline \textbf{FeedbackQA\textsubscript{log}} & 22M & - & - & - \\ \textbf{FeedbackQA\textsubscript{\{ctr, gbdt\}}} & 4M & 10k & 10k & 50\%+/50\%- \\ \hdashline \textbf{DeepQA} & 30k & 2k & 2k & 57.6\%+/42.4\%- \\ \textbf{MS Marco} & 10k & 1k & 1k & 50\%+/50\%- \\ \textbf{WikiPassageQA} & 10k & 1k & 1k & 50\%+/50\%- \\ \textbf{FrenchGermanQA} & 50k & 2k & 2k & 50\%+/50\%- \\ \hline \end{tabular} \label{dataset} \end{center} \vspace{-8pt} \end{table} \begin{table*}[t!] \small \caption{\label{table:result}Performance comparison between our methods and baselines on the DeepQA dataset. All ACC and AUC metrics in the table are percentages; the \% sign is omitted.}\label{t:main} \begin{tabular}{cccllll} \hline \multirow{2}{*}{\textbf{Model}} & \multirow{2}{*}{\textbf{Method}} & \multirow{2}{*}{ \textbf{\begin{tabular}[c]{@{}c@{}}Pre-training\\ Data Size\end{tabular}}} & \multicolumn{4}{c}{\textbf{Performance on Different Fine-tuning Data Sizes (AUC/ACC)}} \\ \textbf{} & \textbf{} & \textbf{} & \textbf{5k} & \textbf{10k} & \textbf{20k} & \textbf{30k} \\ \hline \multirow{7}{*}{\textbf{BiLSTM}} & \multirow{1}{*}{\textbf{Original}} & \textbf{-} & 60.45/58.21 & 61.30/59.92 & 61.55/61.99 & 62.40/61.74 \\ \cdashline{3-7} & \multirow{3}{*}{\textbf{FBQA\textsubscript{ctr}}} & \textbf{0.5m} & 59.90/57.60 (-0.55/-0.61) & 61.25/58.25 (-0.05/-1.67) & 61.40/60.50 (-0.15/-1.49) & 60.65/59.29 (-1.75/-2.45) \\ & & \textbf{1.0m} & 60.25/58.45 (-0.20/-0.24) & 61.35/58.12 (+0.05/-1.80) & 62.65/57.43 (+1.10/-4.56) & 61.35/60.69 (-1.05/-1.05) \\ & & \textbf{4.0m} & 60.50/56.99 (+0.05/-1.22) & 59.75/58.39 (-1.55/-1.53) & 60.90/59.15 (-0.65/-2.84) & 62.25/61.73 (-0.15/-0.01) \\ \cdashline{3-7} & \multirow{3}{*}{\textbf{FBQA\textsubscript{FA}}} & \textbf{0.5m} & 61.95/59.66
(+1.50/+1.45) & 62.50/60.96 (+1.20/+1.04) & 62.85/62.74 (+1.30/+0.75) & 64.23/62.50 (+1.83/+0.76) \\ & & \textbf{1.0m} & 62.80/60.44 (+2.35/+2.23) & 63.20/61.20 (+1.90/+1.28) & 63.45/63.00 (+1.90/+1.01) & 65.57/63.05 (+3.17/+1.31) \\ & & \textbf{4.0m} & \textbf{64.13/62.15 (+3.68/+3.94)} & \textbf{65.45/63.33 (+4.15/+3.41)} & \textbf{65.46/64.17 (+3.91/+2.18)} & \textbf{67.35/64.35 (+4.95/+2.61)} \\ \hline \multirow{7}{*}{\textbf{BERT}} & \multirow{1}{*}{\textbf{Original}} & \textbf{-} & 69.31/64.86 & 71.81/67.76 & 72.47/67.07 & 75.28/68.26 \\ \cdashline{3-7} & \multirow{3}{*}{\textbf{FBQA\textsubscript{ctr}}} & \textbf{0.5m} & 67.35/62.76 (-1.96/-2.10) & 72.96/66.66 (+1.15/-1.10) & 75.11/68.26 (+2.64/+1.19) & 77.76/71.07 (+2.48/+2.81) \\ & & \textbf{1.0m} & 72.33/67.06 (+3.02/+2.20) & 73.76/67.36 (+1.95/-0.40) & 76.16/69.16 (+3.69/+2.09) & 77.42/68.26 (+2.14/+0.00) \\ & & \textbf{4.0m} & 72.19/65.66 (+2.88/+2.90) & 73.92/67.96 (+2.11/+0.20) & 76.81/67.96 (+4.34/+0.89) & 77.94/69.36 (+2.66/+1.10) \\ \cdashline{3-7} & \multirow{3}{*}{\textbf{FBQA\textsubscript{FA}}} & \textbf{0.5m} & 72.26/65.27 (+2.95/+0.41) & 76.03/68.87 (+4.22/+1.11) & 77.79/69.47 (+5.32/+2.40) & 77.92/69.47 (+2.34/+1.21) \\ & & \textbf{1.0m} & 73.53/66.37 (+4.22/+1.51) & 76.29/68.97 (+4.48/+1.15) & 78.63/68.77 (+6.16/+1.70) & 79.82/70.17 (+4.54/+1.91) \\ & & \textbf{4.0m} & \textbf{76.53/68.57 (+7.22/+3.71)} & \textbf{78.17/68.57 (+6.36/+0.81)} & \textbf{79.79/71.17 (+7.32/+4.10)} & \textbf{81.03/71.57 (+5.78/+3.31)} \\ \hline \end{tabular} \end{table*} \subsection{Baselines and Models} To evaluate the effectiveness of our approach to mine implicit feedback, we set the following two baselines. \begin{itemize} \item \textbf{Original}: Only the human labeled data is used to train the QA model. \item \textbf{FBQA\textsubscript{ctr}}: The FeedbackQA\textsubscript{ctr} data is used for pre-training the QA model at the first stage. 
At the second stage, the QA model is further fine-tuned using the human-labeled data. \end{itemize} In our approach, the best-performing feedback model, GBDT in Table~\ref{table:metrics for different modeling method}, is used to auto-label large-scale pre-training data (i.e., FeedbackQA\textsubscript{gbdt}) for pre-training the QA model. At the second stage, the QA model is further fine-tuned using the human-labeled data. This approach is referred to as \textbf{FBQA\textsubscript{FA}} in the following experimental results. We build the QA relevance models based on two popular deep neural networks, BiLSTM and BERT\textsubscript{base}\footnote{Our goal is to verify the effectiveness of the approach, so we do not use BERT\textsubscript{large}, which is time- and resource-consuming.}. The detailed description of these two models, as well as the experimental settings, is presented in Appendix~\ref{sec:append_exp_setting}. \subsection{Results and Discussions} \label{sec_result} \subsubsection{Overall Comparison Results} Table~\ref{table:result} shows the experimental results across all settings. We observe the following. \begin{itemize} \item Compared with the two baselines, Original and FBQA\textsubscript{ctr}, our implicit feedback approach FBQA\textsubscript{FA} achieves significant improvements across different pre-training data sizes $\{0.5m$, $1m, 4m\}$ and different QA fine-tuning data sizes $\{5k, 10k, 20k,$ $30k\}$. When the size of the feedback pre-training data reaches $4m$, our model obtains the best results on the experiment set: for BiLSTM, there is an average increase of about 5 AUC points; for BERT, about 6 AUC points. \item Our approach is especially effective in low-resource settings, such as $5k$ and $10k$ QA fine-tuning data, where it substantially reduces labeling cost. Take the BERT setting as an example.
When the size of the pre-training data equals $4m$ and the fine-tuning data equals $5k$, our model achieves an AUC of 76.53, which is even higher than the Original result on $30k$ fine-tuning data. In other words, with only $1/6$ of the human-labeled data, our model can still outperform the model trained on the full dataset. This experiment verifies the effectiveness of our approach in saving substantial labeling cost. \item When we increase the size of the implicit feedback pre-training data from $0.5m$ to $1m$ and $4m$, our model obtains consistent gains in all experimental settings. For FBQA\textsubscript{ctr}, in contrast, the gains are not consistent: increasing the pre-training data size does not necessarily improve the metrics, which aligns with our findings in Section~\ref{sec:imp_feedback_modeling}. \item Our approach shows substantial gains over the baselines with both the BiLSTM and BERT models, which verifies the model-agnostic characteristic of our approach. It is expected that BERT-based QA models outperform BiLSTM-based models, since BERT benefits from large-scale unsupervised pre-training as well as a large number of model parameters. It is interesting to find that even on top of a powerful deep pre-trained model such as BERT, further significant gains can be obtained. This demonstrates the huge potential of inexpensive, abundant implicit feedback derived from large-scale user behavior data as a complementary data source to the expensive human-labeled data of relatively small size. \end{itemize} \subsubsection{Effect of Pre-training Data Size} To further analyze the effect of user implicit feedback on improving web QA, we explore model performance with respect to the size of the feedback data employed in the pre-training stage. The experiments are conducted on the DeepQA dataset using BERT\textsubscript{base} models. The pre-training data size is set to \{0, 1, 2, 3, 4, 5, 6\} million. The results are shown in Figure~\ref{figure:feedbackdatasize_auc}.
By increasing the size of the implicit feedback data used in pre-training from 0 to 4 million, the model performance improves accordingly. However, when the data size reaches a certain scale, e.g., 4 million in our experiments, the AUC metric on the test set slowly flattens out. This suggests that the noise in the implicit feedback data may limit further improvement. \begin{figure}[t] \setlength{\belowcaptionskip}{-0.1cm} \centering \includegraphics[scale=0.36, viewport=2 5 600 410, clip=true]{new_impact_training_data_size.pdf} \caption{\label{figure:feedbackdatasize_auc} Performance on the DeepQA dataset with different FeedbackQA pre-training data sizes.} \label{fig:impact_size} \end{figure} \subsubsection{Results on the MS Marco and WikiPassageQA Datasets} We further apply our pre-trained model (trained on 4m FeedbackQA\textsubscript{gbdt} implicit relevance feedback data) to two open benchmark QA datasets, MS Marco and WikiPassageQA. The results are reported in Table~\ref{table:marco_test}. We find that the queries in the MS Marco dataset are simpler than those in the DeepQA dataset; consequently, with 10k human-labeled data and the BERT model, the AUC can reach as high as 94.01\%. Our approach also shows improvement on this dataset, although the gain is not as large as that on the DeepQA dataset. For the BiLSTM model, the gain is around 2 AUC points on average over 2k, 5k, and 10k human-labeled training data in the fine-tuning stage. For the BERT model, since the baseline is already very strong, the improvement is less than 1 point. On the WikiPassageQA dataset, our approach also shows gains: around 1.3 AUC points for BERT and around 1.8 AUC points for BiLSTM on average over 2k, 5k, and 10k labeled fine-tuning data.
\begin{table}[t] \small \caption{\label{table:marco_test} Comparison Results on MS Marco and WikiPassageQA datasets (all AUC metrics are percentage numbers with \% omitted).} \begin{tabular}{ccccc|ccc} \hline \multirow{2}{*}{\textbf{Model}} & \multirow{2}{*}{\textbf{Method}} & \multicolumn{3}{c|}{\textbf{MS Marco}} & \multicolumn{3}{c}{\textbf{WikiPassageQA}} \\ & & \textbf{2k} & \textbf{5k} & \textbf{10k} & \textbf{2k} & \textbf{5k} & \textbf{10k} \\ \hline \multirow{3}{*}{\textbf{BiLSTM}} & \textbf{Original} & 64.70 & 64.25 & 65.61 & 55.39 & 58.37 & 60.72 \\ & \textbf{FBQA\textsubscript{ctr}} & 62.52 & 65.50 & 66.46 & 53.80 & 58.95 & 61.16 \\ & \textbf{FBQA\textsubscript{FA}} & \textbf{65.65} & \textbf{66.24} & \textbf{68.66} & \textbf{57.23} & \textbf{60.38} & \textbf{62.30} \\ \hline \multirow{3}{*}{\textbf{BERT}} & \textbf{Original} & 87.93 & 93.02 & 94.01 & 78.03 & 83.70 & 84.70 \\ & \textbf{FBQA\textsubscript{ctr}} & 88.70 & 93.42 & 94.47 & 78.04 & 81.18 & 85.66 \\ & \textbf{FBQA\textsubscript{FA}} & \textbf{88.75} & \textbf{94.02} & \textbf{94.81} & \textbf{80.10} & \textbf{84.14} & \textbf{86.16} \\ \hline \end{tabular} \end{table} \nop{ \subsubsection{Comparison of objective functions in the pre-training stage} \label{subsec:objective impact} Once we learn an implicit relevance feedback model, we can apply it to a query passage pair to predict a relevance score. In Equation~\ref{eq:label}, we use two thresholds to convert this continuous relevance score into a Boolean relevance label. In this section, we would like to explore whether it is better to fit the Boolean relevance label in the pre-training stage, or alternatively, fit the original continuous relevance score (between 0 and 1). To fit the Boolean relevance label, we carry out a classification task and use the loss of cross-entropy (CE) as in Equation~\ref{eq:cross_entropy}. 
Instead, to fit the original continuous relevance score, we conduct a regression task and use the mean squared error (MSE) as the loss function: \begin{align} &L_{MSE}=\frac{1}{n}\sum_{i}^{n}({y}_{i}-\bar{y}_{i})^{2} \end{align} where $y_{i}$ represents our QA model output, $\bar{y}_{i}$ represents the auto-labeled relevance score output by the feedback model in Table~\ref{table:metrics for different modeling method}, and $n$ represents the number of training samples. We compare the results from the two objective functions in Figure~\ref{figure:impact_obj}. It is clear that for both BERT and BiLSTM, the performance of using regression loss in the pre-training stage is consistently worse than that adopting the cross-entropy loss. It indicates that the user feedback data is noisy, and learning the fine-grained data distribution with regression task may propagate the noise to the fine-tuning stage. Instead, the Boolean label approach considers the implicit relevance feedback as a rough indicator, and throws away the uncertain cases falling between the two thresholds. Consequently, the Boolean label approach is more robust to the noise. \begin{figure}[H] \setlength{\belowcaptionskip}{-0.2cm} \centering \includegraphics[scale=0.22, viewport=20 20 680 540, clip=true]{impact_obj.eps} \caption{\label{figure:impact_obj} Comparison of objective functions during the QA pre-training stage on DeepQA\textsubscript{general} dataset.} \end{figure} } \subsection{Applications to non-English QA} We further apply our approach to several non-English markets in a commercial search engine. We find that user behavior in different countries is highly consistent. Consequently, the implicit relevance feedback model trained in the en-US market can be successfully transferred to foreign markets without any tuning. The results are shown in Table~\ref{t:frde_qa}.
In the de-DE (German) and the fr-FR (French) markets, our approach significantly improves the QA service in the AUC metric, saving a substantial amount of human labeling cost. Taking fr-FR as an example, our approach shows consistent AUC gains of around 3.2 points across all training data sizes. Meanwhile, the FBQA\textsubscript{FA} model reaches 76.43 AUC with only 5k training samples, while the Original model needs 30k training samples to reach a similar result. \begin{table}[t] \small \caption{\label{t:frde_qa} Results on French and German QnA.} \centering \begin{tabular}{@{}ccccc@{}} \hline \textbf{Model} & \multicolumn{4}{c}{\textbf{AUC of fr-FR \& de-DE}} \\ & \textbf{5k} & \textbf{10k} & \textbf{30k} & \textbf{50k} \\ \midrule \textbf{Original} & 73.05/71.46 & 73.99/73.15 & 76.23/75.84 & 76.82/77.11 \\ \textbf{FBQA\textsubscript{FA}} & {\ul \textbf{76.43/76.64}} & {\ul \textbf{77.26/76.22}} & {\ul \textbf{79.28/78.83}} & {\ul \textbf{80.31/79.76}} \\ \hline \end{tabular} \end{table} \section{Conclusion and Future work} \label{sec:conclusion} This paper proposes a novel framework for mining implicit relevance feedback from user behavior. The implicit feedback models are further applied to generate weakly supervised data to train QA models. Our extensive experiments demonstrate the effectiveness of this approach in improving the performance of QA models and thus reducing the human labeling cost. Mining implicit feedback from user behavior data for the web QA task is an interesting area to explore. In this study, we mainly focus on users' search behavior. As future work, we may combine users' search behavior with their browse behavior. Moreover, we may also conduct deeper analysis of the question types and compare the effectiveness of implicit feedback on different types of queries. Understanding when to trigger the QA block from user feedback is another interesting problem. Finally, the application of our approach to more languages is also part of our future plan. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \label{Section:Introduction} The fountainhead of modern topological quantum field theory can be traced back to the work of Witten~\cite{Witten82}. There he proved that the Morse inequalities~\cite{Milnor63} can be obtained through the use of a certain supersymmetric version of quantum mechanics. The next major milestone in the study of topological quantum field theory was also authored by Witten~\cite{Witten88}. There he showed that the Donaldson polynomials~\cite{Donaldson90} can be interpreted as observables of a certain four-dimensional quantum field theory. Subsequently, Witten~\cite{Witten89} authored \textit{Quantum Field Theory and the Jones Polynomial}, for which, in large part, he received the Fields Medal. There he proved that the Jones polynomial~\cite{Jones85} can be interpreted as an observable of a certain three-dimensional quantum field theory. It was after the publication of this paper that the flood gates opened and the volume of papers dealing with topological quantum field theory began to greatly increase. With this increased volume of work on topological quantum field theory, many mathematicians started to become more interested in the subject. However, due to the methods used, in particular the mathematically ill-defined path-integral~\cite{Ashtekar74}, many of the results were valid to a physicist's level of rigour, but not to a mathematician's. This soon changed when Atiyah~\cite{Atiyah88}, motivated by Witten~\cite{Witten89} and Segal~\cite{Segal88}, axiomatized the foundations of topological quantum field theory. This axiomatization made it possible for mathematicians to obtain rigorous results. However, Atiyah's axiomatization is based on experiences from topological quantum field theories in three or fewer dimensions. The axiomatization is rarely used in four or more dimensions. 
Hence, there is a dearth of results using axiomatic topological quantum field theory in four or more dimensions, and the axioms themselves may contain hidden ``biases'' that ``favor'' three or fewer dimensions. In particular, application of the axiomatization to four dimensions~\cite{Thurston11} tends to yield theories that are ``trivial'' in that they cannot detect changes in smooth structure\footnote{Thurston's MathOverflow answer~\cite{Thurston11} and the subsequent discussion were the original motivation for this article.}. In this article we will take a first step into higher dimensions and examine axiomatic topological quantum field theory in four dimensions. We prove, formalizing the difficulties expressed by Thurston~\cite{Thurston11}, that in four dimensions any unitary, axiomatic topological quantum field theory cannot detect changes in the smooth structure of $M$, a simply connected, closed (compact without boundary), oriented smooth four-manifold. This motivates us to slightly modify the axioms of a topological quantum field theory so that it is possible for an axiomatic topological quantum field theory to detect changes in the smooth structure of such an $M$. Thus, these modified axioms could more accurately be dubbed axioms of a differential quantum field theory. \section{Axiomatic TQFT} \label{Section:AxiomaticTQFT} In his ground-breaking work Witten~\cite{Witten88} introduced an ``informal'' definition of a topological quantum field theory, a quantum field theory on a smooth manifold $M$ that is independent of the metric placed on $M$. Atiyah~\cite{Atiyah88}, motivated by Witten's informal definition and Segal's~\cite{Segal88} axiomatization of two-dimensional conformal field theory, then axiomatized topological quantum field theory. Over the years several authors have explored and refined Atiyah's axiomatization, see~\cite{Quinn:1991kq} and~\cite{Turaev:1994xb}, resulting in the current formulation~\cite{Blanchet2006232}, which we describe below.
\subsection{Axiomatic TQFT} \label{SubSection:AxiomaticTQFT} An $(n + 1)$-dimensional topological quantum field theory, from now on abbreviated TQFT, over a field $\mathbb{F}$ assigns to every closed, oriented $n$-dimensional smooth manifold $X$ a finite dimensional vector space $\mathcal{H}(X)$ over $\mathbb{F}$ and assigns to every $(n+1)$-dimensional cobordism $W$ from $X_-$ to $X_+$ an $\mathbb{F}$ linear map, \begin{equation} Z(W,X_-,X_+): \mathcal{H}(X_-) \rightarrow \mathcal{H}(X_+). \end{equation} Recall that given two closed, oriented $n$-dimensional smooth manifolds $X_\pm$, a \textit{cobordism} from $X_-$ to $X_+$ is a compact, oriented $(n+1)$-dimensional smooth manifold $W$ such that $\partial W = X_- \amalg X_+$, where $\partial W$ is the boundary of $W$ and $\amalg$ denotes disjoint union. The assignments $\mathcal{H}(X)$ and $Z(W,X_-,X_+)$ must satisfy the following axioms. \subsubsection{Naturality} \label{SubSubSection:Naturality} \begin{axiom}[Naturality] \label{Axiom:Naturality} Any orientation-preserving diffeomorphism of closed, oriented $n$-dimensional smooth manifolds $f:X \rightarrow X'$ induces an isomorphism\footnote{Note, we use $f$ to denote the orientation-preserving diffeomorphism and the isomorphism. Context should prevent any confusion in this regard.} $f: \mathcal{H}(X) \rightarrow \mathcal{H}(X')$. For an orientation-preserving diffeomorphism $g$ from the cobordism $(W,X_-,X_+)$ to the cobordism $(W',X_-',X_+')$, the following diagram is commutative. \[ \xymatrixcolsep{5pc} \xymatrix{ \mathcal{H}(X_-) \ar[d]_{Z(W)} \ar[r]^{g_{|_{X_-}}} &\mathcal{H}(X_-')\ar[d]^{Z(W')}\\ \mathcal{H}(X_+) \ar[r]^{g_{|_{X_+}}} &\mathcal{H}(X_+')} \] Note, $Z(W)$ is shorthand for $Z(W,X_-,X_+)$ and $Z(W')$ is shorthand for $Z(W',X_-',X_+')$.
\end{axiom} \subsubsection{Functoriality} \label{SubSubSection:Functoriality} \begin{axiom}[Functoriality] \label{Axiom:Functoriality} If a cobordism $(W,X_-,X_+)$ is obtained by gluing\footnote{The formal definition of \textit{gluing} is given in Chapter VI Section 5 of Kosinski~\cite{Kosinski93}.} two cobordisms $(M,X_-,X)$ and $(M',X',X_+)$ using an orientation-preserving diffeomorphism $f: X \rightarrow X'$, then the following diagram is commutative. \[ \xymatrixcolsep{5pc} \xymatrix{ \mathcal{H}(X_-) \ar[d]_{Z(M)} \ar[r]^{Z(W)} &\mathcal{H}(X_+)\\ \mathcal{H}(X) \ar[r]^{f} &\mathcal{H}(X')\ar[u]_{Z(M')}} \] \end{axiom} \subsubsection{Normalization} \label{SubSubSection:Normalization} \begin{axiom}[Normalization] \label{Axiom:Normalization} For any closed, oriented $n$-dimensional smooth manifold $X$, the $\mathbb{F}$ linear map \begin{equation*} Z(X \times [0,1]): \mathcal{H}(X) \rightarrow \mathcal{H}(X) \end{equation*} is the identity. \end{axiom} \subsubsection{Multiplicativity} \label{SubSubSection:Multiplicativity} \begin{axiom}[Multiplicativity] \label{Axiom:Multiplicativity} There are functorial isomorphisms \begin{equation*} \mathcal{H}(X \amalg Y) \longrightarrow \mathcal{H}(X) \otimes \mathcal{H}(Y) \end{equation*} and \begin{equation*} \mathcal{H}(\emptyset) \longrightarrow \mathbb{F} \end{equation*} such that the diagrams \[ \xymatrix{ \mathcal{H}((X_1 \amalg X_2) \amalg X_3) \ar[d] \ar[r] &(\mathcal{H}(X_1) \otimes \mathcal{H}(X_2)) \otimes \mathcal{H}(X_3)\ar[d]\\ \mathcal{H}(X_1 \amalg (X_2 \amalg X_3)) \ar[r] &\mathcal{H}(X_1) \otimes (\mathcal{H}(X_2) \otimes \mathcal{H}(X_3))} \] and \[ \xymatrix{ \mathcal{H}(X \amalg \emptyset) \ar[d] \ar[r] &\mathcal{H}(X) \otimes \mathbb{F}\ar[d] \\ \mathcal{H}(X) \ar[r]^{id} &\mathcal{H}(X)} \] commute. Note, the vertical maps are induced by the obvious diffeomorphisms and the standard vector space isomorphisms. 
\end{axiom} \subsubsection{Symmetry} \label{SubSubSection:Symmetry} \begin{axiom}[Symmetry] \label{Axiom:Symmetry} The isomorphism \begin{equation*} \mathcal{H}(X \amalg Y) \longrightarrow \mathcal{H}(Y \amalg X) \end{equation*} induced by the obvious diffeomorphism corresponds to the standard isomorphism of vector spaces \begin{equation*} \mathcal{H}(X) \otimes \mathcal{H}(Y) \longrightarrow \mathcal{H}(Y) \otimes \mathcal{H}(X). \end{equation*} \end{axiom} \subsection{Remarks} \label{SubSection:Remarks} Before continuing on with the remainder of this article, there are a few points of note that easily follow from the above axioms and that we will have need of later. First, an axiomatic TQFT defines invariants for closed, oriented $(n+1)$-dimensional smooth manifolds. In more detail, a closed, oriented $(n+1)$-dimensional smooth manifold $W$ can be thought of as a cobordism from $\emptyset$ to $\emptyset$. Thus, $Z(W) \in Hom_{\mathbb{F}}(\mathbb{F}, \mathbb{F}) = \mathbb{F}$, and $Z(W) \in \mathbb{F}$ is simply a numerical invariant of $W$. Second, any compact, oriented $(n+1)$-dimensional smooth manifold $W$ with boundary can be thought of as a cobordism from $\emptyset$ to $\partial W$. Thus, $Z(W) \in Hom_{\mathbb{F}} (\mathbb{F}, \mathcal{H}(\partial W)) = \mathcal{H}(\partial W)$. So, $Z(W)$ in this case is simply a vector in $\mathcal{H} (\partial W)$. This vector $Z(W)$ is called the \textit{vacuum vector} of $W$ and we will find it of great use in what follows. Finally, for a closed, oriented $n$-dimensional smooth manifold $X$ the manifold $X \times [0,1]$ can be considered as a cobordism from $\overline{X} \amalg X$ to $\emptyset$, where $\overline{X}$ is $X$ with its orientation reversed. Hence, $Z(X \times [0,1])$ can be viewed as an $\mathbb{F}$ linear map \begin{equation} Z(X \times [0,1]): \mathcal{H}(\overline{X}) \otimes \mathcal{H}(X) \rightarrow \mathbb{F}. 
\end{equation} This gives a functorial isomorphism $\mathcal{H}(\overline{X}) \cong \mathcal{H}(X)^* = Hom_{\mathbb{F}}(\mathcal{H}(X), \mathbb{F})$. Thus, if a closed, oriented $(n+1)$-dimensional smooth manifold $W$ is obtained by gluing $M$ to $M'$, where $\partial M = \overline{\partial M'}$, then Axiom~\ref{Axiom:Functoriality}, the functoriality axiom, implies $Z(W) = \langle Z(M') | Z(M) \rangle \in \mathbb{F}$, where $Z(M)$ and $Z(M')$ are viewed as vacuum vectors and $\langle Z(M') | Z(M) \rangle$ is defined as the value of $Z(M') \in \mathcal{H}(\partial M)^*$ acting on $Z(M) \in \mathcal{H} (\partial M)$. \subsection{Unitarity} \label{SubSection:Unitarity} An additional axiom that is sometimes used in conjunction with the above set of standard axioms is that of unitarity. \begin{axiom}[Unitarity] \label{Axiom:Unitarity} For any compact, oriented $(n+1)$-dimensional smooth manifold $W$ with non-zero $Z(W) \in \mathcal{H}(\partial W)$, the element $Z(\overline{W} \cup_{id} W) = \langle Z(\overline{W}) | Z(W) \rangle \in \mathbb{F}$ is not zero. \end{axiom} Unitarity is sometimes, but not always, taken as an axiom of TQFT. However, all ``physical'' theories, for example the standard model~\cite{Peskin95} and general relativity~\cite{Hawking05}, are unitary. Thus, we will assume that any axiomatic TQFT that we deal with obeys the unitarity axiom. \section{Akbulut Corks and Exotic Four-Manifolds} \label{Section:AkbulutCorksAndExoticFourManifolds} The wellspring of many an idea related to exotic four-manifolds can be traced back to the work of Akbulut~\cite{Akbulut88}. In this foundational work Akbulut found that for a certain smooth four-manifold $M$ one can make an exotic copy $M'$ of $M$, a manifold homeomorphic but not diffeomorphic to $M$, by cutting out and regluing $A_C$, a certain four-dimensional smooth submanifold of $M$, by an involution of its boundary $\partial A_C$. This smooth four-manifold $A_C$ later became known as an Akbulut cork.
This means of generating exotic four-manifolds was later generalized in a preprint of Curtis and Hsiang. The proofs in this preprint were then simplified and extended through the work of Curtis, Freedman, Hsiang, and Strong~\cite{Curtis96}, Matveyev~\cite{Matveyev95}, Bi\v{z}aca, and Kirby~\cite{Kirby97}. In this section, to place these developments in the proper context, we will review the theorems that built up to the discovery of Akbulut corks, Smale's h-cobordism theorem~\cite{Smale62} and Freedman's h-cobordism theorem~\cite{Freedman82}, as well as reviewing the theorems presented in the above series of papers. These theorems will be presented without proofs. The interested reader can refer to the original works and/or to Chapter 9 of Gompf and Stipsicz~\cite{Gompf99}, where most of this material is covered. \subsection{Smale's h-Cobordism Theorem} \label{SubSection:SmalesHCobordisimTheorem} Classification of four-dimensional smooth manifolds up to diffeomorphism can best be understood, strangely enough, by looking first at the classification of smooth manifolds up to diffeomorphism in greater than four dimensions. Looking at the results in higher dimensions serves to put the results in four dimensions into the proper context. The key result used to classify manifolds up to diffeomorphism in greater than four dimensions is Smale's h-cobordism theorem~\cite{Smale62}. This theorem establishes a criterion through which one can determine if two simply connected, closed, oriented smooth $n$-manifolds, where $n > 4$, are diffeomorphic. It is this theorem which we will now review. However, before presenting Smale's h-cobordism theorem, we must introduce a definition~\cite{Gompf99}. Two simply connected smooth manifolds $X_-$ and $X_+$ are \textit{h-cobordant} if there exists a cobordism $W$ from $X_-$ to $X_+$ such that the inclusions $i_\pm: X_\pm \hookrightarrow W$ are homotopy equivalences.
Given this definition we can now state Smale's h-cobordism theorem~\cite{Gompf99}. \begin{theorem}[Smale's h-Cobordism Theorem] \label{Theorem:SmaleHCobordisimTheorem} If $W$ is an h-cobordism between the $n$-dimensional smooth manifolds $X_-$ and $X_+$, where $n > 4$, then $W$ is diffeomorphic to $X_- \times [0,1]$. In particular $X_-$ is diffeomorphic to $X_+$. \end{theorem} With this one can see that if two $n$-dimensional smooth manifolds are h-cobordant and $n > 4$, then these two manifolds are diffeomorphic. In practice this often simplifies the process of determining if two manifolds are diffeomorphic, as proving two manifolds are h-cobordant is often easier than directly proving they are diffeomorphic. This theorem can be used to classify smooth manifolds up to diffeomorphism in more than four dimensions. However, as we will see, this result fails to be true in four dimensions, where a strictly ``weaker'' result holds. This ``weaker'' result is the subject of Freedman's h-cobordism theorem, to which we now turn. \subsection{Freedman's h-Cobordism Theorem} \label{SubSection:FreedmansHCobordisimTheorem} One may hope that the techniques used to prove Smale's h-cobordism theorem could be generalized to accommodate the case $n = 4$. However, this is not possible\footnote{The main problem is that ``Whitney's Trick'', which works in more than four dimensions, fails in four dimensions~\cite{Gompf99}.}. The best one can do in four dimensions is Freedman's h-cobordism theorem~\cite{Gompf99}. \begin{theorem}[Freedman's h-Cobordism Theorem] \label{Theorem:FreedmanHCobordisimTheorem} If $W$ is an h-cobordism between the four-dimensional smooth manifolds $X_-$ and $X_+$, then $W$ is homeomorphic to $X_- \times [0,1]$. In particular $X_-$ is homeomorphic to $X_+$. \end{theorem} Thus, if two four-dimensional smooth manifolds are h-cobordant, then these two manifolds are homeomorphic. In four dimensions this result cannot be improved upon.
In other words, there exist four-dimensional smooth manifolds $X_-$ and $X_+$ that are h-cobordant and \textit{not} diffeomorphic~\cite{Gompf99}. As they are h-cobordant, Freedman's h-cobordism theorem implies they are homeomorphic. But, as they are not diffeomorphic, $X_+$ is an exotic version of $X_-$, a manifold homeomorphic but not diffeomorphic to $X_-$. In fact, the original results of Akbulut~\cite{Akbulut88} provide such a pair. As it is a result we will require later, we pause here to note that one can strengthen Freedman's h-cobordism theorem in the following manner~\cite{Gompf99}. \begin{theorem}[Strengthened Freedman's h-Cobordism Theorem] \label{Theorem:StrengthenedFreedmanHCobordisimTheorem} Two simply connected, closed, oriented, four-dimensional smooth manifolds $X_-$ and $X_+$ are homeomorphic if and only if they are h-cobordant. \end{theorem} \subsection{Akbulut Corks} \label{SubSection:AkbulutCorks} The results of Akbulut~\cite{Akbulut88}, along with Smale's and Freedman's h-cobordism theorems, lead one to conjecture that it might be possible to ``excise'' a submanifold $A$ from $W$, a five-dimensional h-cobordism from $X_-$ to $X_+$, such that the remainder $W - int(A)$ is diffeomorphic to $(X_- - int(A \cap X_-)) \times [0,1]$. Thus, all of the ``strangeness'' that occurs in four dimensions would be contained in $A$, and $W - int(A)$ would be ``trivial''. This conjecture, and in fact much more, is true, as was found by Curtis, Freedman, Hsiang, and Strong~\cite{Curtis96}, Matveyev~\cite{Matveyev95}, Bi\v{z}aca, and Kirby~\cite{Kirby97}. The formal summary of the flurry of work contained in the above articles is given by the following theorem~\cite{Kirby97}.
\begin{theorem}[Pr\'ecis of Akbulut Corks] \label{Theorem:PrecisOfAkbulutCorks} If $W$ is a five-dimensional h-cobordism between two smooth four-manifolds $X_-$ and $X_+$, then there exists a five-dimensional h-cobordism $A \subset W$ from the smooth four-manifold $A_- \subset X_-$ to the smooth four-manifold $A_+ \subset X_+$ with the following properties: \begin{description} \item[(1)] $A_-$, and hence $A$ and $A_+$, is contractible. \item[(2)] $W - int(A)$ is diffeomorphic to $(X_- - int(A_-)) \times [0,1]$. \item[(3)] $W - A$, and hence $X_- - A_-$ and $X_+ - A_+$, is simply connected. \item[(4)] $A$ is diffeomorphic to $D^5$, the standard five-dimensional disk with boundary. \item[(5)] $A_- \times [0,1]$ and $A_+ \times [0,1]$ are diffeomorphic to $D^5$. \item[(6)] $A_-$ is diffeomorphic to $A_+$ by a diffeomorphism which, when restricted to $\partial A_- = \partial A_+$, is an involution. \end{description} \end{theorem} The manifolds $A_\pm$ identified above are Akbulut corks and are a generalization of the manifolds first discovered by Akbulut~\cite{Akbulut88} in his foundational work. Given $W$, $X_\pm$, and $A_\pm$ as they appear in the previous theorem, one can easily prove the following results. As a result of $(2)$, $X_- - int(A_-)$ is diffeomorphic to $X_+ - int(A_+)$. The definitions of $X_\pm$ and $A_\pm$ imply $X_\pm = (X_\pm - int(A_\pm)) \cup_{id} A_\pm$. Thus, as a result of $(6)$, $X_- = (X_- - int(A_-)) \cup_{id} A_-$ and $X_+ = (X_- - int(A_-)) \cup_{I} A_-$, where $I$ is the involution of $\partial A_-$ from $(6)$ and all equivalences are up to diffeomorphism. Now, assume one has a simply connected, closed, oriented, smooth four-manifold $M$ along with $M'$, a manifold homeomorphic but not diffeomorphic to $M$. (In other words, $M'$ is an exotic version of $M$.) As a result of the strengthened version of Freedman's h-cobordism theorem, $M$ is h-cobordant to $M'$.
Thus, as a result of the argument in the previous paragraph, there exists an Akbulut cork $A_C \subset M$ such that $M = (M - int(A_C)) \cup_{id} A_C$ and $M' = (M - int(A_C)) \cup_{I} A_C$, where $I$ is the involution of $\partial A_C$ given in $(6)$. \section{Axiomatic TQFT and Exotic 4-Manifolds} \label{Section:AxiomaticTQFTAndExotic4Manifolds} This section will be dedicated to proving our main theorem. \begin{theorem} \label{Theorem:MainTheorem} In four dimensions any unitary, axiomatic topological quantum field theory cannot detect changes in the smooth structure of $M$, a simply connected, closed (compact without boundary), oriented smooth four-manifold. \end{theorem} \begin{proof} Assume there exists a smooth manifold $M'$ homeomorphic but not diffeomorphic to $M$; in other words, $M'$ has a different smooth structure than $M$. We will prove that $Z(M) = Z(M')$ for any unitary, axiomatic topological quantum field theory. As $M$ and $M'$ are homeomorphic, Theorem~\ref{Theorem:StrengthenedFreedmanHCobordisimTheorem}, the strengthened Freedman's h-cobordism theorem, implies that there exists an h-cobordism $W$ from $M$ to $M'$. Given such an h-cobordism, Theorem~\ref{Theorem:PrecisOfAkbulutCorks} implies that there exists an Akbulut cork $A_C \subset M$ such that \begin{equation*} M = (M - int(A_C))\cup_{id} A_C \end{equation*} and \begin{equation*} M' = (M - int(A_C)) \cup_{I} A_C, \end{equation*} where $I$ is the involution of $\partial A_C$ given in part (6) of Theorem~\ref{Theorem:PrecisOfAkbulutCorks}.
As $M = (M - int(A_C)) \cup_{id} A_C$, the results of Section~\ref{SubSection:Remarks} imply the equality \begin{equation*} Z(M) = \langle Z(M - int(A_C)) | Z(A_C) \rangle. \end{equation*} Similarly, as $M' = (M - int(A_C)) \cup_{I} A_C$, the results of Section~\ref{SubSection:Remarks} along with Axiom~\ref{Axiom:Functoriality}, the functoriality axiom, imply \begin{equation*} Z(M') = \langle Z(M - int(A_C)) | I(Z(A_C)) \rangle, \end{equation*} where $I$ is the isomorphism of $\mathcal{H}(\partial A_C)$ induced by the involution $I$ of $\partial A_C$. Thus, to prove $Z(M) = Z(M')$ we only need to prove $Z(A_C) = I(Z(A_C))$, or, equivalently, we need to prove $Z(A_C) - I(Z(A_C)) = 0$. If $Z(A_C) - I(Z(A_C)) = 0$, then we are done. We can thus safely assume that $Z(A_C) - I(Z(A_C)) \ne 0$. Hence, Axiom~\ref{Axiom:Unitarity}, the unitarity axiom, implies that if the pairing $\langle Z(\overline{A_C}) - I(Z(\overline{A_C})) | Z(A_C) - I(Z(A_C)) \rangle = 0$, then $Z(A_C) - I(Z(A_C)) = 0$. So, if we can prove that $\langle Z(\overline{A_C}) - I(Z(\overline{A_C})) | Z(A_C) - I(Z(A_C)) \rangle = 0$, we are done. Now, using linearity along with our various definitions we have \begin{align*} \lefteqn{\langle Z(\overline{A_C}) - I(Z(\overline{A_C})) | Z(A_C) - I(Z(A_C)) \rangle} \\ &= \langle Z(\overline{A_C}) | Z(A_C) \rangle - \langle Z(\overline{A_C}) | I(Z(A_C)) \rangle - \\ &\qquad \qquad \qquad {} \langle I(Z(\overline{A_C})) | Z(A_C) \rangle + \langle I(Z(\overline{A_C})) | I(Z(A_C)) \rangle \\ &= Z(\overline{A_C} \cup_{id} A_C) - Z(\overline{A_C} \cup_I A_C) - Z(\overline{A_C} \cup_I A_C) + Z(\overline{A_C} \cup_{I^2} A_C) \\ &= Z(\overline{A_C} \cup_{id} A_C) - Z(\overline{A_C} \cup_I A_C) - Z(\overline{A_C} \cup_I A_C) + Z(\overline{A_C} \cup_{id} A_C) \\ &= 2 ( Z(\overline{A_C} \cup_{id} A_C) - Z(\overline{A_C} \cup_I A_C) ), \end{align*} where in the second to last line we have used the fact that $I$ is an involution and thus $I^2 = id$.
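The collapse of the four pairing terms above can be checked numerically in a toy finite-dimensional model. Here the pairing is the standard inner product on $\mathbb{R}^4$, the involution $I$ is played by a symmetric permutation matrix $P$ (so $P^2 = id$, and $\langle Pu | Pv \rangle = \langle u | v \rangle$ mirrors $Z(\overline{A_C} \cup_{I^2} A_C) = Z(\overline{A_C} \cup_{id} A_C)$). The vectors $u$ and $v$ standing in for the vacuum vectors are arbitrary illustrative data, not derived from any actual TQFT.

```python
import numpy as np

# Toy model of the four-term expansion: the pairing <.|.> is the standard
# inner product on R^4, and the involution I is modelled by a symmetric
# permutation matrix P with P @ P = identity. The vectors u, v are
# arbitrary stand-ins for the vacuum vectors Z(A_C-bar) and Z(A_C).
P = np.array([[0., 1., 0., 0.],
              [1., 0., 0., 0.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])
assert np.allclose(P @ P, np.eye(4))  # P is an involution, P^2 = id

u = np.array([1.0, 3.0, -2.0, 0.5])
v = np.array([2.0, 1.0, 1.0, 4.0])

# <u - Pu | v - Pv> expands into four pairings; since P is symmetric and
# orthogonal, <Pu|v> = <u|Pv> and <Pu|Pv> = <u|v>, so the sum collapses to
# 2(<u|v> - <u|Pv>), exactly as in the computation above.
lhs = (u - P @ u) @ (v - P @ v)
rhs = 2 * (u @ v - u @ (P @ v))
print(lhs, rhs)  # both equal -4.0 for this data
```

Any symmetric matrix squaring to the identity works equally well here; the permutation matrix is chosen only because it makes the involution easy to read off.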
As a result of the previous computation, we find that our desired conclusion follows if we can prove $Z(\overline{A_C} \cup_{id} A_C) - Z(\overline{A_C} \cup_I A_C) = 0$. Now, part (5) of Theorem~\ref{Theorem:PrecisOfAkbulutCorks}, pr\'ecis of Akbulut corks, implies that $A_C \times [0,1]$ is diffeomorphic to $D^5$, the standard five-dimensional disk with boundary. As $\partial (A_C \times [0,1]) = \overline{A_C} \cup_{id} A_C$, this implies that $\overline{A_C} \cup_{id} A_C$ is diffeomorphic to $S^4$, the standard four-dimensional sphere. Thus, $Z(\overline{A_C} \cup_{id} A_C) = Z(S^4)$. Part (4) of Theorem~\ref{Theorem:PrecisOfAkbulutCorks}, pr\'ecis of Akbulut corks, implies that the manifold $A$ of Theorem~\ref{Theorem:PrecisOfAkbulutCorks} is diffeomorphic to $D^5$. As $\partial A = \overline{A_C} \cup_I A_C$ in our case, this implies that $\overline{A_C} \cup_I A_C$ is diffeomorphic to $S^4$. Thus, $Z(\overline{A_C} \cup_I A_C) = Z(S^4)$. Collecting the results of the last two paragraphs, \begin{equation*} Z(\overline{A_C} \cup_{id} A_C) - Z(\overline{A_C} \cup_I A_C) = Z(S^4) - Z(S^4) = 0. \end{equation*} Tracing back through all of the previous steps, we have proven $Z(M) = Z(M')$. \end{proof} \section{Remarks} \label{Section:Remarks} The results of Theorem~\ref{Theorem:MainTheorem} seem somehow unsatisfying. It is well known that Donaldson-Witten theory~\cite{Witten88} is a TQFT, in Witten's informal sense, that is able to detect changes in the smooth structure of $M$, a simply connected, closed, oriented smooth four-manifold. So, it comes as somewhat of a surprise that any unitary, axiomatic TQFT cannot detect changes in the smooth structure of such an $M$. It feels as if axiomatic TQFT is lacking something that is present in Donaldson-Witten theory, and indeed this is the case. However, the modifications that one must make to axiomatic TQFT in order to allow it to detect changes in smooth structure are relatively easy to spot upon thinking a bit about what is happening in the scenario above.
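The endgame of the proof can likewise be seen in the toy linear model: once unitarity forces $Z(A_C) = I(Z(A_C))$, pairing against the cork's vacuum vector is blind to the involution, so the invariants of the two gluings coincide. The sketch below uses hypothetical finite-dimensional data, with the involution again modelled by a symmetric orthogonal matrix $P$: for any vector $v$ fixed by $P$, pairing with an arbitrary $u$ cannot distinguish $v$ from $Pv$.

```python
import numpy as np

# Once Z(A_C) = I(Z(A_C)), i.e. the vacuum vector v is fixed by the
# involution P, the two gluings give the same invariant:
#   Z(M)  = <u | v>   and   Z(M') = <u | P v>,
# where u stands in for Z(M - int(A_C)). Hypothetical data throughout.
P = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 1.]])
assert np.allclose(P @ P, np.eye(3))   # P is an involution

v = np.array([3.0, 3.0, -1.0])          # fixed by P: P @ v == v
assert np.allclose(P @ v, v)

u = np.array([0.5, 2.0, 7.0])           # arbitrary "complement" vector
Z_M = u @ v                             # pairing for the identity gluing
Z_Mprime = u @ (P @ v)                  # pairing for the involution gluing
print(Z_M, Z_Mprime)                    # equal: the model cannot tell M from M'
```

Note that for a generic $v$ not fixed by $P$ the two pairings would differ; it is the unitarity step of the proof, not linear algebra alone, that forces the fixed-vector situation.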
Axiomatic TQFT in four dimensions is, rather unsurprisingly, a four-dimensional theory. So, in particular, all of its symmetries should arise from symmetries that appear naturally in four dimensions. For example, Axiom~\ref{Axiom:Naturality}, the naturality axiom, implies that any axiomatic TQFT in four dimensions is invariant with respect to four-dimensional diffeomorphisms. This makes sense: it is a purely four-dimensional symmetry arising in a purely four-dimensional theory. However, by contrast, Axiom~\ref{Axiom:Naturality} also states that any orientation-preserving diffeomorphism of closed, oriented, three-dimensional smooth manifolds $f:X \rightarrow X'$ induces an isomorphism $f: \mathcal{H}(X) \rightarrow \mathcal{H}(X')$. At first glance this seems harmless, but, in fact, it is not. The involution $I$ of $\partial A_C$ from part (6) of Theorem~\ref{Theorem:PrecisOfAkbulutCorks}, pr\'ecis of Akbulut corks, is a diffeomorphism of $\partial A_C$ that does not arise from a diffeomorphism of $A_C$. In other words, one cannot extend $I$ over $A_C$ as a diffeomorphism. The best one can do is to extend $I$ over $A_C$ as a homeomorphism\footnote{The easiest way to see this is to note that if one could extend $I$ over $A_C$ as a diffeomorphism, then one could prove Smale's h-cobordism theorem in four dimensions, a result known to be false.}. So, the assertion in Axiom~\ref{Axiom:Naturality} that $I$ gives rise to an isomorphism $I: \mathcal{H}(\partial A_C) \rightarrow \mathcal{H}(I(\partial A_C))$ is asserting that there exists a symmetry in the four-dimensional theory that has no natural origin in four dimensions, as there exists no four-dimensional diffeomorphism of $A_C$ that, when restricted to $\partial A_C$, yields $I$. In other words, it is, without any ``physical'' justification, enlarging the symmetry group of the theory. In point of fact, it is just this enlarged symmetry group that we are seeing in Theorem~\ref{Theorem:MainTheorem}.
The modifications that one must make to the TQFT axioms such that they allow for detection of changes in smooth structure are rather straightforward. To wit, one must limit the set of orientation-preserving diffeomorphisms that give rise to isomorphisms of $\mathcal{H}(X)$. More specifically, if $X$ is a closed, oriented $n$-dimensional smooth submanifold of a compact, oriented $(n+1)$-dimensional smooth manifold $W$, then any orientation-preserving diffeomorphism $f$ of $X$ that arises as a restriction of an orientation-preserving diffeomorphism of $W$ induces an isomorphism $f: \mathcal{H}(X) \rightarrow \mathcal{H}(f(X))$. If $f'$ is an orientation-preserving diffeomorphism of $X$ that does not arise in such a manner, then its action on $\mathcal{H}(X)$ is undefined. If we call an orientation-preserving diffeomorphism $f$ that arises in such a manner a \textit{restricted} orientation-preserving diffeomorphism, then the naturality and functoriality TQFT axioms must be modified in the following manner so as to allow for detection of changes in smooth structure\footnote{One immediately sees that if one uses these new axioms, the proof of Theorem~\ref{Theorem:MainTheorem} fails.}. \subsection{Naturality} \label{SubSection:Naturality} \begin{axiom}[Naturality] \label{Axiom:NaturalityII} Any orientation-preserving diffeomorphism $f$ of $X$, a closed, oriented $n$-dimensional smooth submanifold of $W$, a compact, oriented $(n+1)$-dimensional smooth manifold, that arises as a restriction of an orientation-preserving diffeomorphism of $W$ induces an isomorphism $f: \mathcal{H}(X) \rightarrow \mathcal{H}(f(X))$. For an orientation-preserving diffeomorphism $g$ from the cobordism $(W,X_-,X_+)$ to the cobordism $(W',X_-',X_+')$, the following diagram is commutative.
\[ \xymatrixcolsep{5pc} \xymatrix{ \mathcal{H}(X_-) \ar[d]_{Z(W)} \ar[r]^{g_{|_{X_-}}} &\mathcal{H}(X_-')\ar[d]^{Z(W')}\\ \mathcal{H}(X_+) \ar[r]^{g_{|_{X_+}}} &\mathcal{H}(X_+')} \] Note, $Z(W)$ is shorthand for $Z(W,X_-,X_+)$ and $Z(W')$ is shorthand for $Z(W',X_-',X_+')$. \end{axiom} \subsection{Functoriality} \label{SubSection:Functoriality} \begin{axiom}[Functoriality] \label{Axiom:FunctorialityII} If a cobordism $(W,X_-,X_+)$ is obtained by gluing two cobordisms $(M,X_-,X)$ and $(M',X',X_+)$ using an orientation-preserving diffeomorphism $f: X \rightarrow X'$ that can be viewed as the restriction of an orientation-preserving diffeomorphism of $W$, then the following diagram is commutative. \[ \xymatrixcolsep{5pc} \xymatrix{ \mathcal{H}(X_-) \ar[d]_{Z(M)} \ar[r]^{Z(W)} &\mathcal{H}(X_+)\\ \mathcal{H}(X) \ar[r]^{f} &\mathcal{H}(X')\ar[u]_{Z(M')}} \] \end{axiom} \section{Conclusion} \label{Section:Conclusion} \subsection{Remarks} The standard formulation of axiomatic TQFT~\cite{Blanchet2006232} is sufficient for many situations in fewer than four dimensions. However, in four dimensions the standard axiomatic formulation requires some small modifications if it is to detect changes in smooth structure. These modifications are required as there exist orientation-preserving diffeomorphisms of $\partial A_C$ that do not extend to orientation-preserving diffeomorphisms of the smooth four-manifold $A_C$. (In fewer than four dimensions such diffeomorphisms do not exist\footnote{If they existed in fewer than four dimensions, then there would exist exotic manifolds in three or fewer dimensions. There exist no such manifolds.}.) If these small modifications are made, one obtains a set of axioms that allow for the detection of changes in the smooth structure of a four-manifold\footnote{Note, $\mathcal{H}(X)$ may also have to be infinite dimensional in four dimensions.}.
\subsection{Axiomatic DQFT} We call the construct resulting from the modified axioms \textit{axiomatic differential quantum field theory}. In summary, its axioms are as follows. \subsubsection{Naturality} \begin{axiom}[Naturality] Any orientation-preserving diffeomorphism $f$ of $X$, a closed, oriented $n$-dimensional smooth submanifold of a compact, oriented $(n+1)$-dimensional smooth manifold $W$, that arises as a restriction of an orientation-preserving diffeomorphism of $W$ induces an isomorphism $f: \mathcal{H}(X) \rightarrow \mathcal{H}(f(X))$. For an orientation-preserving diffeomorphism $g$ from the cobordism $(W,X_-,X_+)$ to the cobordism $(W',X_-',X_+')$, the following diagram is commutative. \[ \xymatrixcolsep{5pc} \xymatrix{ \mathcal{H}(X_-) \ar[d]_{Z(W)} \ar[r]^{g_{|_{X_-}}} &\mathcal{H}(X_-')\ar[d]^{Z(W')}\\ \mathcal{H}(X_+) \ar[r]^{g_{|_{X_+}}} &\mathcal{H}(X_+')} \] Note, $Z(W)$ is shorthand for $Z(W,X_-,X_+)$ and $Z(W')$ is shorthand for $Z(W',X_-',X_+')$. \end{axiom} \subsubsection{Functoriality} \begin{axiom}[Functoriality] If a cobordism $(W,X_-,X_+)$ is obtained by gluing two cobordisms $(M,X_-,X)$ and $(M',X',X_+)$ using an orientation-preserving diffeomorphism $f$, where $f: X \rightarrow X'$ and $f$ can be viewed as the restriction of an orientation-preserving diffeomorphism of $W$, then the following diagram is commutative. \[ \xymatrixcolsep{5pc} \xymatrix{ \mathcal{H}(X_-) \ar[d]_{Z(M)} \ar[r]^{Z(W)} &\mathcal{H}(X_+)\\ \mathcal{H}(X) \ar[r]^{f} &\mathcal{H}(X')\ar[u]_{Z(M')}} \] \end{axiom} \subsubsection{Normalization} \begin{axiom}[Normalization] For any closed, oriented $n$-dimensional smooth manifold $X$, the $\mathbb{F}$-linear map \begin{equation*} Z(X \times [0,1]): \mathcal{H}(X) \rightarrow \mathcal{H}(X) \end{equation*} is the identity.
\end{axiom} \subsubsection{Multiplicativity} \begin{axiom}[Multiplicativity] There are functorial isomorphisms \begin{equation*} \mathcal{H}(X \amalg Y) \longrightarrow \mathcal{H}(X) \otimes \mathcal{H}(Y) \end{equation*} and \begin{equation*} \mathcal{H}(\emptyset) \longrightarrow \mathbb{F} \end{equation*} such that the diagrams \[ \xymatrix{ \mathcal{H}((X_1 \amalg X_2) \amalg X_3) \ar[d] \ar[r] &(\mathcal{H}(X_1) \otimes \mathcal{H}(X_2)) \otimes \mathcal{H}(X_3)\ar[d]\\ \mathcal{H}(X_1 \amalg (X_2 \amalg X_3)) \ar[r] &\mathcal{H}(X_1) \otimes (\mathcal{H}(X_2) \otimes \mathcal{H}(X_3))} \] and \[ \xymatrix{ \mathcal{H}(X \amalg \emptyset) \ar[d] \ar[r] &\mathcal{H}(X) \otimes \mathbb{F}\ar[d] \\ \mathcal{H}(X) \ar[r]^{id} &\mathcal{H}(X)} \] commute. Note that the vertical maps are induced by the obvious diffeomorphisms and the standard vector space isomorphisms. \end{axiom} \subsubsection{Symmetry} \begin{axiom}[Symmetry] The isomorphism \begin{equation*} \mathcal{H}(X \amalg Y) \longrightarrow \mathcal{H}(Y \amalg X) \end{equation*} induced by the obvious diffeomorphism corresponds to the standard isomorphism of vector spaces \begin{equation*} \mathcal{H}(X) \otimes \mathcal{H}(Y) \longrightarrow \mathcal{H}(Y) \otimes \mathcal{H}(X). \end{equation*} \end{axiom} \section{Afterword} \label{Section:Afterward} After this preprint was distributed, it came to the author's attention that a proof of a result similar to Theorem 4.1 was given as Theorem 4.1 of Freedman et al.~\cite{Freedman05}. In addition, the author was informed of a research program with a focus similar to that of this preprint. This research program was launched by Freedman, Kitaev, Nayak, Slingerland, Walker, and Wang in~\cite{Freedman05}, continued by Kreck and Teichner in~\cite{Kreck08}, and furthered by Calegari, Freedman, and Walker in~\cite{Calegari10}.
\section{Introduction \label{sec:intro}} The detection of cosmic neutrinos in the energy range from $\sim 10$~TeV to $\sim$~PeV by the IceCube Neutrino Observatory~\cite{Aartsen:2013bka,Aartsen:2013jdh,Aartsen:2014gkd,Aartsen:2016xlq} raises interesting questions. The observed energy flux of high-energy neutrinos seems to be comparable to that of ultrahigh-energy cosmic rays (UHECRs) at $\gtrsim 10^{19}$~eV. Is the origin of high-energy neutrinos related to the UHECR sources? Is the comparability of neutrino and UHECR fluxes a consequence of yet-unknown common astrophysical phenomena? Several studies have been reported in the literature to probe these questions, mainly in the framework of hadronuclear ($pp$) collisions inside cosmic-ray reservoirs~\cite{Murase:2013rfa} -- jetted active galactic nuclei (AGN) embedded in clusters and groups of galaxies~\cite{Fang:2017zjf}, starburst galaxies~\cite{Katz:2013ooa,Murase:2016gly}, or phenomenological setups used for UHECR observations~\cite{Kachelriess:2017tvs}. Remarkably, these cosmic-ray reservoir models can even explain the diffuse isotropic gamma-ray background in the sub-TeV range, measured by the {\it Fermi} satellite~\cite{Murase:2016gly,Fang:2017zjf}. High-energy neutrinos can also be produced by photohadronic ($p\gamma$) interactions inside cosmic-ray emitters (e.g., Ref.~\cite{Winter:2013cla}). The photo-meson production process may occur simultaneously with, or subsequent to, the acceleration of cosmic rays. If the power of such cosmic-ray emitters is sufficiently large, they can indeed emit both $\gtrsim 100$~TeV neutrinos and UHECRs.
Various astrophysical models have been investigated, which include classical high-luminosity (HL) gamma-ray bursts (GRBs)~\cite{Waxman:1997ti,Waxman:1999ai}, low-luminosity (LL) gamma-ray bursts~\cite{Murase:2006mm,Gupta:2006jm,Zhang:2018agl}, new-born magnetars~\cite{Murase:2009pg,Fang:2013vla,Fang:2018hjp}, tidal disruption events (TDEs)~\cite{Zhang:2017hom,Guepin:2017abw,Biehl:2017hnb}, and blazars~\cite{Mannheim:1995mm,Atoyan:2001ey,Essey:2009ju,Murase:2011cy,Murase:2014foa}. In this report, we examine a generic unification model to account for the observed neutrinos with energies greater than 100~TeV and UHECRs in the photo-meson production scheme. The cumulative neutrino background flux is estimated analytically using parameters that characterize the sources, such as the photon luminosity and the source number density. The UHECR flux is also estimated semi-analytically by considering collisions with background photons in intergalactic space. We also derive the source requirements for the acceleration of cosmic-ray protons to ultrahigh energies, and translate them into criteria on the parameters relevant to high-energy neutrino emission, such as the optical depth for $p\gamma$ interactions. The estimated fluxes of neutrinos and UHECRs from sources that satisfy these criteria are compared to the measured flux at 100 TeV $\lesssim E_\nu \lesssim10$~PeV and its upper limit at $E_\nu \gtrsim100$~PeV by IceCube, as well as the measurement of UHECRs at $10^{19}$ eV. The resultant constraints on the parameters of general source characteristics are presented. We finally describe a case study for specific astronomical objects such as LL GRBs. In this work, we use $E$ for the observed energy, $\varepsilon=(1+z)E$ for the energy in the engine frame (or the rest frame of the Hubble flow), and $\varepsilon'$ for the energy in the comoving frame of the plasma outflow.
The standard $\Lambda$CDM cosmology with $H_0 = 73.5$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\rm M} = 0.3$, and $\Omega_{\Lambda}=0.7$ is assumed throughout the report. \section{Constraints due to Source Modeling \label{sec:model}} \subsection{UHECR acceleration and survival \label{sec:condition}} In the unification model of UHECRs and IceCube neutrinos, the source to emit $\gtrsim 100~{\rm TeV}$ neutrinos must also be capable of accelerating cosmic rays to UHEs ($E_i\gtrsim 10^{20}~{\rm eV}$) by definition. Some of the required conditions for classification as UHECR emitters are described by relatively simple formulas. Let us consider a source whose acceleration and emission region has a characteristic size $R$ measured in the central-engine frame. We also denote the bulk Lorentz factor of this source by $\Gamma$. For the source to account for the UHECR acceleration, the cosmic-ray acceleration time scale, $t'_{\rm acc}=\eta{\varepsilon'}_i/(Z eB^{'}c )$, must be shorter than the dynamical time scale, $t'_{\rm dyn}\approx R/(\Gamma\beta c)$. Here, ${\varepsilon'}_i$ is the cosmic-ray ion energy in the plasma rest frame, $B'$ is the comoving magnetic field strength, $Z$ is the atomic number of the cosmic-ray ions, $\beta$ is the characteristic velocity in the source, and $\eta^{-1}\leq1$ represents the efficiency of particle acceleration. The condition is transformed to the well-known formula~\cite{Blandford:1999hi,Lemoine:2009pw}, \begin{eqnarray} L'_\gamma &\geq& \frac{1}{2}\xi_B^{-1} c\eta^2\beta^2\left(\frac{\varepsilon_i^{\rm max}}{Z e}\right)^2 \label{eq:hillas}\\ &\simeq&1.7\times 10^{45}~{\rm erg/s}~\xi_B^{-1}\eta^{2}\beta^2\left(\frac{\varepsilon_i^{\rm max}}{Z10^{11}~{\rm GeV}}\right)^2\quad\nonumber, \end{eqnarray} which is equivalent to the Hillas condition in the limit of $\eta\rightarrow\beta^{-2}$. Here, $\varepsilon_i^{\rm max}\approx\Gamma{\varepsilon'}_i^{\rm max}$ is the maximal energy of UHECRs accelerated at the sources.
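As a quick numerical cross-check of the prefactor in Eq.~(\ref{eq:hillas}), the short script below (our own illustration, not part of the original analysis) evaluates the bound in Gaussian (cgs) units for $\xi_B=\eta=\beta=Z=1$ and $\varepsilon_i^{\rm max}=10^{11}$~GeV.

```python
# Numerical cross-check of the Hillas-type luminosity bound,
# L'_gamma >= (1/2) xi_B^-1 c eta^2 beta^2 (eps_max / Ze)^2,
# in Gaussian (cgs) units with xi_B = eta = beta = Z = 1.
C_CGS = 2.998e10          # speed of light [cm/s]
E_ESU = 4.803e-10         # elementary charge [esu]
ERG_PER_EV = 1.602e-12    # unit conversion

eps_max = 1e20 * ERG_PER_EV   # 10^11 GeV = 10^20 eV, in erg
L_min = 0.5 * C_CGS * (eps_max / E_ESU) ** 2   # erg/s
print(f"L'_gamma >= {L_min:.2e} erg/s")  # ~1.7e45 erg/s, as quoted in the text
```

The result reproduces the quoted $1.7\times10^{45}$~erg/s to within rounding of the physical constants.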
For a given comoving radiation luminosity $L'_\gamma$, the magnetic energy density in the plasma rest frame, $U^{'}_{\rm B}$, is given by \begin{eqnarray} U^{'}_{\rm B} &=& \xi_{\rm B}\frac{L'_\gamma}{4\pi R^2c} \nonumber\\ &=& \xi_{\rm B}\frac{L_\gamma}{4\pi \Gamma^2 R^2c} \label{eq:magnetic_energy_density} \end{eqnarray} where $\xi_{\rm B}$ is the equipartition parameter. For example, the modeling of GRBs and blazars typically suggests $\xi_B\sim{10}^{-4}-1$. As a reference value, the maximum ion energy is set to $\varepsilon_i^{\rm max}=10^{11}~{\rm GeV}$ throughout this work. Indeed, the best-fit value for the Auger data is $10^{10.9}$~GeV~\cite{Aab:2016zth}, so our choice is conservative but reasonable. We also assume the most efficient acceleration case ($\eta=1$) and a transrelativistic or relativistic source ($\beta=1$). UHECRs must be accelerated faster than they cool via all energy-loss processes, including synchrotron cooling, i.e., $t'_{\rm acc}<t'_{\rm cool}$. The cooling time (in the plasma rest frame) is given by ${t'}_{\rm cool}^{-1}={t'}_{\rm syn}^{-1}+{t'}_{p\gamma}^{-1}+{t'}_{\rm BH}^{-1}+{t'}_{\rm dyn}^{-1}$, where $t'_{p\gamma}$ and $t'_{\rm BH}$ are the photo-meson production and Bethe-Heitler (BH) energy loss time scales, respectively, and the last term represents adiabatic losses. For a power-law target spectrum, the Bethe-Heitler process is important only if the spectrum is softer than $\alpha_\gamma\sim2.2-2.3$~\cite{Murase:2018iyl}. Therefore, we mainly consider cases wherein the BH process is subdominant.
The synchrotron cooling time in the plasma rest frame is \begin{eqnarray} {t'}_{\rm syn}^{-1} &=& \frac{4}{3}U^{'}_{\rm B}\sigma_Tc\frac{Z^4}{A^4}\frac{1}{m_pc^2}\left(\frac{\varepsilon_i}{\Gamma m_pc^2}\right)\left(\frac{m_e}{m_p}\right)^2\\ &=& \frac{4}{3}\frac{\xi_B\sigma_TL'_\gamma}{4\pi R^2}\frac{Z^4}{A^4}\frac{1}{m_pc^2}\left(\frac{\varepsilon_i}{\Gamma m_pc^2}\right)\left(\frac{m_e}{m_p}\right)^2,\nonumber \label{eq:sync_time} \end{eqnarray} where $A$ is the mass number of cosmic-ray ions. By requiring $t'_{\rm acc}< t'_{\rm syn}$ at the maximum ion energy in the engine frame ($\varepsilon_i^{\rm max}$), we obtain \begin{equation} B'<\frac{A^4 6\pi e m_p^4 c^{4}}{Z^3 \sigma_T m_e^2}\frac{\Gamma^2}{{(\varepsilon_i^{\rm max})}^2}. \label{eq:sync_conditionpre} \end{equation} In addition, we have another condition to ensure the escape of UHECRs. To ensure that UHECRs can leave the sources before losing their energies via synchrotron cooling, the escape time scale $t'_{\rm esc}$ must be shorter than $t'_{\rm syn}$. In general, the escape time is model dependent and can be long at lower energies. For conservative estimates, we hereafter assume that the escape time scale is comparable to the dynamical scale in the relativistic environment of the UHECR acceleration site under consideration. This is possible if the escape boundary is comparable to the system size and the magnetic field decays within the dynamical time (see discussion in Ref.~\cite{Zhang:2017moz}). By regarding this ``survival'' condition as a necessary condition ($t'_{\rm dyn}<t'_{\rm syn}$), we obtain: \begin{equation} B'< \frac{6\pi A^4 m_p^4 c^{9/2}}{Z^4 \sigma_T m_e^2 {(2\xi_B L'_\gamma)}^{1/2}}\frac{\Gamma^2}{\varepsilon_i^{\rm max}}. \label{eq:esc_conditionpre} \end{equation} We utilize Eqs.~(\ref{eq:hillas}), (\ref{eq:sync_conditionpre}), and (\ref{eq:esc_conditionpre}) as theoretical constraints. We focus on the proton case, i.e., $Z=A=1$, and discuss the cases of nuclei later.
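For orientation, the proton ($Z=A=1$) version of Eq.~(\ref{eq:sync_conditionpre}) can be evaluated numerically. The sketch below is our own illustration (the numerical value is not quoted in the text), using the identity $m_p^4c^4/m_e^2=(m_pc^2)^2(m_p/m_e)^2$ and cgs constants.

```python
# Illustrative evaluation of the proton synchrotron condition,
# B' < 6*pi*e*m_p^4*c^4 / (sigma_T * m_e^2) * Gamma^2 / eps_max^2,
# rewritten via m_p^4 c^4 / m_e^2 = (m_p c^2)^2 (m_p/m_e)^2, in cgs units.
import math

E_ESU = 4.803e-10           # elementary charge [esu]
SIGMA_T = 6.652e-25         # Thomson cross-section [cm^2]
MP_C2 = 1.503e-3            # proton rest energy [erg]
MP_OVER_ME = 1836.15        # proton-to-electron mass ratio
ERG_PER_GEV = 1.602e-3

eps_max = 1e11 * ERG_PER_GEV   # 10^11 GeV in erg
b_max = 6 * math.pi * E_ESU * MP_C2**2 * MP_OVER_ME**2 / (SIGMA_T * eps_max**2)
print(f"B' < {b_max:.1f} G x Gamma^2")  # roughly 4 G for Gamma = 1
```

A field of at most a few Gauss (times $\Gamma^2$) is thus required for protons to reach $10^{11}$~GeV before synchrotron losses dominate.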
\subsection{Photo-meson production} Neutrino emission by the photo-meson production process is characterized by the environment of the target photons. We begin building our generic framework by defining the reference energy of photons $\varepsilon_{\gamma0}$ in the engine frame. Given that a major fraction (but not necessarily all) of photo-meson production in $p\gamma$ interactions occurs around the $\Delta$ resonance region (including direct pion production via the $t$-channel), we introduce the reference ``resonating'' energy as \begin{equation} \tilde{\varepsilon}_{p0}(s)\approx \frac{(s-m_p^2)}{4}\frac{\Gamma^2}{\varepsilon_{\gamma0}}, \label{eq:photon_energy_ref} \end{equation} where $s$ is the Mandelstam variable. In particular, we define ${\tilde{\varepsilon}}_{p0}^{\Delta}\equiv{\tilde{\varepsilon}}_{p0}(s_\Delta)$, where $s_\Delta\approx(1.23~{\rm GeV})^2$ is the square of the invariant mass of the $p\gamma$ collisions at the $\Delta(1232)$ resonance. Primed (') characters represent quantities measured in the rest frame of the plasma with the bulk Lorentz factor $\Gamma$. In the present model we approximate the target photon spectrum to be \begin{equation} \frac{dn_\gamma}{d\varepsilon'_\gamma}= \frac{K'_\gamma}{\varepsilon'_{\gamma0}}\left(\frac{\varepsilon'_\gamma}{\varepsilon'_{\gamma0}}\right)^{-\alpha_\gamma}, \label{eq:target_photon} \end{equation} where $\alpha_\gamma$ is the photon index; we focus on $\alpha_\gamma\geq1$. The normalization photon density $K'_\gamma$ is bolometrically connected to the source photon luminosity $L'_\gamma \approx L_\gamma/\Gamma^2$ by \begin{equation} L'_{\gamma} = 4\pi R^2c\int\limits_{{\varepsilon'}_{\gamma}^{\rm min}}^{{\varepsilon'}_{\gamma}^{\rm max}} \frac{dn_\gamma}{d\varepsilon'_\gamma}\varepsilon'_\gamma d\varepsilon'_\gamma.
\label{eq:photon_energy_conversion} \end{equation} We have \begin{equation} K'_\gamma = \frac{L'_{\gamma0}}{4\pi R^2 c \varepsilon'_{\gamma0}}=\left\{ \begin{array}{l} \frac{L'_\gamma}{4\pi R^2 c \varepsilon'_{\gamma0} }\frac{\alpha_\gamma-2}{x_{\rm d}^{-\alpha_\gamma+2}-x_{\rm u}^{-\alpha_\gamma+2}} \ \ (\alpha_\gamma \neq 2) \\ \frac{L'_\gamma}{4\pi R^2 c \varepsilon'_{\gamma0}} \frac{1}{\ln\left(\frac{{\varepsilon'}_\gamma^{\rm max}}{{\varepsilon'}_\gamma^{\rm min}}\right)} \ \ (\alpha_\gamma =2), \\ \end{array} \right. \label{eq:photon_luminosity} \end{equation} where the two parameters $x_{\rm d}=({\varepsilon'}_\gamma^{\rm min}/\varepsilon'_{\gamma0})$ and $x_{\rm u}=({\varepsilon'}_\gamma^{\rm max}/{\varepsilon'}_{\gamma0})$ represent the boundaries of the main photon-emission energy range that appears in the luminosity estimation, Eq.~(\ref{eq:photon_energy_conversion}). These parameters determine the relationship between the bolometric luminosity $L'_\gamma$ and the reference luminosity $L'_{\gamma0}$\footnote{This difference is known to be important for model-dependent constraints on neutrinos from GRBs.}. The optical depth for photo-meson production is given by (see also Eq.~(6) of Ref.~\cite{Yoshida:2014uka}) \begin{equation} \tau_{p\gamma}(\varepsilon'_p)=\frac{2}{1+\alpha_\gamma}\frac{L'_{\gamma0}}{4\pi R\Gamma c \varepsilon'_{\gamma0}} \int ds \frac{\sigma_{p\gamma}(s)}{s-m_p^2} {\left(\frac{\varepsilon'_p}{{\tilde{\varepsilon}_{p0}}^\prime(s)}\right)}^{\alpha_\gamma-1}, \label{eq:optical_depth} \end{equation} where $\varepsilon'_p\approx\varepsilon_p/\Gamma$ and $\sigma_{p\gamma}$ is the photo-meson production cross-section.
Using the approximation $\sigma_{p\gamma}\approx(s_\Delta-m_p^2)\bar{\sigma}_{\Delta}\delta(s-s_{\Delta})$ (where $\bar{\sigma}_{\Delta}\sim3\times{10}^{-28}~{\rm cm}^2$ is the cross-section averaged over the resonance range), we reproduce the known results (e.g., Refs.~\cite{Waxman:1997ti,Dermer:2012rg,Murase:2015xka} with inelasticity taken into account). Using this resonance approximation, the preceding equation is rewritten as \begin{equation} \tau_{p\gamma}(\varepsilon_p)\approx\frac{2}{1+\alpha_\gamma}\frac{L'_{\gamma0}}{4\pi R\Gamma^2 c (\varepsilon'_{\gamma0}/\Gamma)} {\left(\frac{\varepsilon_p}{{\tilde{\varepsilon}_{p0}^{\Delta}}}\right)}^{\alpha_\gamma-1} \int ds \frac{\sigma_{p\gamma}(s)}{s-m_p^2}. \label{eq:optical_depth2} \end{equation} This approximation is valid for $\alpha_\gamma\gtrsim1$, and for $\alpha_\gamma\sim1$ there is an enhancement by a factor of $2-3$ due to multipion production~\cite{Murase:2005hy}. Since $E_\nu\sim1$~PeV neutrinos originate from $\varepsilon_p\approx20(1+\bar{z})$~PeV protons (where $\bar{z}$ is the typical source redshift), we use $\tilde{\varepsilon}_{p0}^{\Delta}$ as the reference proton energy (in the engine frame), which is fixed to $\tilde{\varepsilon}_{p0}^{\Delta}=10$~PeV. Note that this implicitly requires target photons that can resonantly interact with protons with an energy of 10~PeV. As such, $\varepsilon'_{\gamma0}$ has an implicit $\Gamma$ dependence via Eq.~(\ref{eq:photon_energy_ref}). We have \begin{equation} \varepsilon_{\gamma0}\approx16~\Gamma^2{(\tilde{\varepsilon}_{p0}^{\Delta}/10~{\rm PeV})}^{-1}~{\rm eV}. \label{eq:resnuenergy} \end{equation} As a result, one can see from Eq.~(\ref{eq:optical_depth2}) that $\tau_{p\gamma}(\tilde{\varepsilon}_{p0}^{\Delta})\equiv \tau_{p\gamma0}\propto L_\gamma \Gamma^{-2}{(\varepsilon_{\gamma0})}^{-1}R^{-1}\propto L'_\gamma \Gamma^{-1}{(\varepsilon'_{\gamma0})}^{-1}R^{-1}\propto L'_\gamma \Gamma^{-2}R^{-1}\tilde{\varepsilon}_{p0}^{\Delta}$.
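The 16~eV figure in Eq.~(\ref{eq:resnuenergy}) follows directly from Eq.~(\ref{eq:photon_energy_ref}); the one-line check below is our own, using PDG-level mass values.

```python
# Cross-check of the reference target-photon energy,
# eps_gamma0 = (s_Delta - m_p^2) * Gamma^2 / (4 * eps_p0), with eps_p0 = 10 PeV.
S_DELTA = 1.232**2     # (Delta(1232) invariant mass)^2 [GeV^2]
MP2 = 0.938272**2      # proton mass squared [GeV^2]
EPS_P0 = 1e7           # reference proton energy, 10 PeV [GeV]

eps_gamma0_gev = (S_DELTA - MP2) / (4 * EPS_P0)   # per unit Gamma^2
print(f"eps_gamma0 ~ {eps_gamma0_gev * 1e9:.0f} eV x Gamma^2")  # ~16 eV
```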
\begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{./LvsB.pdf} \caption{The relationship between the comoving magnetic field strength ${\rm B'}$ and the comoving photon luminosity $L'_\gamma\approx L_\gamma/\Gamma^2$. The solid line displays the case when $1-\exp{(-\tau_{p\gamma0})}=0.4$ (corresponding to $\tau_{p\gamma0}\sim1$), which is the most optically thick case allowed by the UHECR escape condition. The dashed line shows the $B'-L'_\gamma$ relationship when $1-\exp{(-\tau_{p\gamma0})}=0.1$ (corresponding to $\tau_{p\gamma0}\sim0.1$).} \label{fig:optical_depth} \end{center} \end{figure} The emission radius $R$ appears in Eq.~(\ref{eq:optical_depth2}), but it can be eliminated via Eq.~(\ref{eq:magnetic_energy_density}). For a given value of $\tilde{\varepsilon}_{p0}^{\Delta}$, as $R\propto L'_\gamma/(\tau_{p\gamma0}\Gamma^2)$ and $U^{'}_{\rm B}\propto \xi_B L'_\gamma/R^2$, the magnetic field strength must satisfy \begin{equation} \frac{B'/\Gamma^2}{\tau_{p\gamma 0}\sqrt{\xi_B/ L'_\gamma}}={C(\alpha_\gamma,\tilde{\varepsilon}_{p0}^{\Delta})}^{-1}, \label{eq:optical_depth3} \end{equation} where $C(\alpha_\gamma,\tilde{\varepsilon}_{p0}^{\Delta})$ is a constant that depends on the photon index. For $\bar{\sigma}_{\Delta}\sim3\times{10}^{-28}~{\rm cm}^2$, we have \begin{eqnarray} C(\alpha_\gamma,\tilde{\varepsilon}_{p0}^{\Delta})&\sim&2.4\times{10}^{-24}~{\rm erg}^{-1}~{\rm cm}^{3/2}~{\rm s}^{1/2}\nonumber\\ &\times&\left(\frac{2}{1+\alpha_\gamma}\right)\left(\frac{\tilde{\varepsilon}_{p0}^{\Delta}}{10~{\rm PeV}}\right)\left(\frac{5L'_{\gamma0}}{L'_\gamma}\right). \end{eqnarray} The source model has been constructed such that for a given $L'_\gamma$ and $\Gamma$, the $p\gamma$ interaction site radius $R$ can vary arbitrarily to realize various values of $\tau_{p\gamma0}$ and $B'$ (assuming a value of the equipartition parameter $\xi_B$) via Eqs.~(\ref{eq:optical_depth2}) and (\ref{eq:magnetic_energy_density}).
This enables us to eliminate the dependence on $R$, which is often very uncertain (see Refs.~\cite{Murase:2005hy,Murase:2008mr} for GRBs). Eq.~(\ref{eq:optical_depth3}) can further be combined with the conditions for UHECR acceleration and survival. The explicit independent parameters for this construction are then $L'_\gamma$, $\Gamma$, and $\tau_{p\gamma0}$, as well as the subparameters $\xi_B$ and $\alpha_\gamma$. With Eqs.~(\ref{eq:sync_conditionpre}) and (\ref{eq:optical_depth3}), one of the UHECR acceleration conditions gives the following upper limit on the $p\gamma$ optical depth: \begin{equation} \tau_{p\gamma0}<\frac{C(\alpha_\gamma,\tilde{\varepsilon}_{p0}^{\Delta})6\pi e m_p^4 c^4}{\sigma_T m_e^2}\frac{A^4}{Z^3}\left(\frac{{L'}_\gamma^{1/2}}{\xi_B^{1/2}{(\varepsilon_i^{\rm max})}^2}\right). \label{eq:sync_condition} \end{equation} By applying the UHECR escape condition (\ref{eq:esc_conditionpre}) to the optical depth formula, Eq.~(\ref{eq:optical_depth2}), we obtain a condition on $\tau_{p\gamma0}$ that does not explicitly depend on $L'_{\gamma}$ or $\Gamma$: \begin{eqnarray} \tau_{p\gamma0}&<& \frac{2}{1+\alpha_\gamma}\left(\int ds \frac{\sigma_{p\gamma}}{s-m_p^2}\right)\frac{3A^4m_p^4c^4(L'_{\gamma0}/L'_{\gamma})}{4Z^4\sigma_Tm_e^2(\varepsilon'_{\gamma0}/\Gamma)}\frac{1}{\xi_B\varepsilon_i^{\rm max}} \nonumber \\ &\lesssim& 6\times 10^{-2} \frac{2}{1+\alpha_\gamma}\xi_B^{-1}{\left(\frac{A}{Z}\right)}^4{\left(\frac{\varepsilon_i^{\rm max}}{10^{11}\ {\rm GeV}}\right)}^{-1}. \label{eq:esc_condition} \end{eqnarray} It should be noted that $\varepsilon'_{\gamma0}/\Gamma = (s_\Delta-m_p^2)/(4\tilde{\varepsilon}_{p0}^\Delta)$; thus, this bound is $\Gamma$ independent. Fig.~\ref{fig:optical_depth} displays the resulting $B'$--$L'_\gamma$ relationship for two representative values of $\tau_{p\gamma0}$, the optical depth for protons with energy $\tilde{\varepsilon}_{p0}^{\Delta}=10$~PeV.
It should be noted that the UHECR acceleration and survival conditions require that $t'_{\rm acc}<t'_{p\gamma}$ and $t'_{\rm dyn}<t'_{p\gamma}$ also be satisfied. The latter condition means that the sources should not be calorimetric if they are to simultaneously account for the IceCube neutrino and UHECR fluxes. Therefore, we have \begin{equation} \tau_{p\gamma0}\leq\tau_{p\gamma}(\varepsilon_p^{\rm max})\lesssim 1/\kappa_{p\gamma}\sim5, \label{eq:calorimetric} \end{equation} where $\kappa_{p\gamma}\sim0.2$ is the proton inelasticity. In the case of $\alpha_{\rm CR}\geq2$, this condition is satisfied by the diffuse flux measurements (see below), so that $t'_{\rm acc}<t'_{\rm dyn}<t'_{p\gamma}$ is automatically fulfilled. \section{Constraints due to Diffuse UHECR and Neutrino Fluxes \label{sec:diffuse}} An important observation is that the energy generation rate densities of UHECRs and neutrinos are comparable~\cite{Katz:2013ooa,Murase:2018utn}. A detailed comparison of these fluxes constrains the parameter space of the unification model. \subsection{\label{sec:neutrinoFlux} Neutrino spectra with radiative cooling of mesons and muons} The flux of high-energy neutrinos for a given optical depth $\tau_{p\gamma0}$ has been calculated using various analytical and numerical methods. In this work, based on Ref.~\cite{Yoshida:2014uka}, we outline the analytical formulation and its minor modifications to account for the synchrotron cooling of mesons and muons. The spectrum of UHECRs injected from the sources is assumed to follow a power-law form, \begin{equation} \frac{d\dot{N}_{\rm CR}}{d\varepsilon_i}=\frac{K_{\rm CR}}{\varepsilon_{i0}}\left(\frac{\varepsilon_i}{\varepsilon_{i0}}\right)^{-\alpha_{\rm CR}}e^{-\varepsilon_i/\varepsilon_i^{\rm max}}, \label{eq:UHECRspec} \end{equation} where $\varepsilon_{i0}$ is the reference energy, which can be set to $\tilde{\varepsilon}^\Delta_{p0}$ for protons.
The normalization factor, $K_{\rm CR}$, of the UHECR yield (with a dimension of [s]$^{-1}$) is linked quasi-bolometrically to the photon luminosity $L_\gamma$ with the CR loading factor $\xi_{\rm CR}$: \begin{equation} K_{\rm CR}\approx\left\{ \begin{array}{l} \frac{(\alpha_{\rm CR}-2)\xi_{\rm CR}L_{\gamma}/\varepsilon_{i0}}{(\frac{\varepsilon_i^{\rm min}}{\varepsilon_{i0}})^{-\alpha_{\rm CR}+2}-(\frac{\varepsilon_i^{\rm max}}{\varepsilon_{i0}})^{-\alpha_{\rm CR}+2}} \quad (\alpha_{\rm CR} \neq 2) \\ \\ \frac{\xi_{\rm CR}L_\gamma/\varepsilon_{i0}}{\ln\left(\frac{\varepsilon_i^{\rm max}}{\varepsilon_i^{\rm min}}\right)} \quad (\alpha_{\rm CR} = 2). \end{array} \right. \label{eq:uhecr_luminosity} \end{equation} Assuming that UHECRs are protons, we set $\varepsilon_i^{\rm min}=\tilde{\varepsilon}^\Delta_{p0}=10$~PeV hereafter. In general, if pions and muons decay into gamma rays and leptons without energy loss, the differential neutrino luminosity from a single source, $d\dot{N}_{\nu}/d\varepsilon_{\nu}$ is formally given by~\cite{Murase:2007yt,Murase:2014foa} \begin{equation} \frac{d\dot{N}_{\nu}}{d\varepsilon_{\nu}} \approx \int d\varepsilon_{i} \frac{d\dot{N}_{\rm CR}}{d\varepsilon_i} \int d\varepsilon'_\gamma \frac{dn_\gamma}{d\varepsilon'_\gamma} \left\langle \frac{d \sigma_{p\gamma\rightarrow\nu}}{d\varepsilon_\nu}(\varepsilon_i,\varepsilon'_\gamma) \right\rangle c t'_{\rm cool}, \label{eq:general_yield} \end{equation} where $d\sigma_{p\gamma\rightarrow\nu}/d\varepsilon_\nu$ is the inclusive differential cross-section with the multiplicity of neutrinos taken into account. 
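The piecewise normalization of Eq.~(\ref{eq:uhecr_luminosity}) can be sketched as a small function; this is our own illustration (the symbol names are ours), and the point of interest is that the $\alpha_{\rm CR}\neq2$ branch smoothly approaches the logarithmic $\alpha_{\rm CR}=2$ branch.

```python
# Sketch of the UHECR normalization K_CR as a function of the injection
# index alpha_CR, following the piecewise definition in the text.
import math

def k_cr(alpha, xi_cr_l_gamma, eps0, eps_min, eps_max):
    """K_CR per unit (xi_CR * L_gamma), in units of 1/eps0."""
    if abs(alpha - 2.0) > 1e-9:
        denom = (eps_min / eps0) ** (2 - alpha) - (eps_max / eps0) ** (2 - alpha)
        return (alpha - 2) * xi_cr_l_gamma / eps0 / denom
    return xi_cr_l_gamma / eps0 / math.log(eps_max / eps_min)

# Continuity check at alpha_CR = 2 (energies in GeV: 10 PeV to 10^11 GeV)
k_near2 = k_cr(2.0 + 1e-6, 1.0, 1e7, 1e7, 1e11)
k_at2 = k_cr(2.0, 1.0, 1e7, 1e7, 1e11)
print(k_near2, k_at2)
```

The limit $\alpha_{\rm CR}\rightarrow2$ reproducing the logarithmic branch confirms that the two cases in Eq.~(\ref{eq:uhecr_luminosity}) are mutually consistent.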
Given that we focus on $t'_{\rm cool}\approx t'_{\rm dyn}$, with the energy-dependent optical depth $\tau_{p\gamma}\approx \int d\varepsilon'_\gamma (dn_\gamma/d\varepsilon'_\gamma)\langle \sigma_{p\gamma} \rangle ct'_{\rm dyn}$, the preceding equation is approximated to be~\cite{Yoshida:2014uka} \begin{equation} \frac{d\dot{N}_{\nu}}{d\varepsilon_{\nu}}\approx \int d\varepsilon_i \frac{K_{\rm CR}}{\varepsilon_{i0}} \left( \frac{\varepsilon_i}{\varepsilon_{i0}} \right)^{-\alpha_{\rm CR}} Y(\varepsilon_{\nu};\varepsilon_i) \tau_{p\gamma}(\varepsilon_i). \label{eq:general_yield_approx} \end{equation} Here, $Y(\varepsilon_{\nu};\varepsilon_i)$ denotes the energy distribution of the neutrinos produced by a single interaction of a cosmic-ray proton. The details of the expression for $Y$ are given in Appendix A (see also Refs.~\cite{He:2012tq,Kimura:2017kan} for another analytical approximation). It should be noted that the neutrino spectrum cannot be harder than $\propto \varepsilon_\nu^0$~\cite{Gaisser:1990vg,Murase:2015xka}. The radiative cooling of pions and muons is important when the cooling time becomes shorter than the decay time~\cite{Waxman:1997ti,Razzaque:2004yv}, provided that their escape time from the turbulent magnetic field region is much longer. In general, various processes such as inverse Compton and adiabatic losses can be relevant. We consider the case of synchrotron dominance. The ratio of the synchrotron cooling time to the decay time can be written as \begin{equation} \frac{t'_{\pi/\mu,{\rm syn}}}{t'_{\pi/\mu,\rm dec}}=\left(\frac{\varepsilon_{\nu,\pi/\mu}^{\rm syn}}{\varepsilon_\nu}\right)^2, \label{eq:synchrotron_factor} \end{equation} where $t'_{\pi/\mu,{\rm syn}}$ is the synchrotron time scale of pions (muons), $t'_{\pi/\mu,{\rm dec}}=[\varepsilon'_{\pi/\mu}/(m_{\pi/\mu}c^2)]\tau_{\pi/\mu}$ is the lifetime of pions and muons, and $\tau_{\pi/\mu}$ is their proper lifetime.
In addition, $\varepsilon_{\nu,\pi/\mu}^{\rm syn}$ is the critical neutrino energy of a pion (or muon), above which the suppression due to synchrotron cooling is relevant. The critical energy is given by~\cite{Waxman:1997ti,Murase:2011cx} \begin{equation} \varepsilon_{\nu,\pi/\mu}^{\rm syn} \approx \Gamma\kappa_{\pi,\mu} \sqrt{\frac{6\pi}{\tau_{\pi,\mu}\sigma_TcB'^2}\frac{(m_{\pi/\mu}c^2)^5}{(m_ec^2)^2}}, \label{eq:critical_synchrotron_energy} \end{equation} where $\kappa_{\pi,\mu}$ is the inelasticity from a pion (muon) to a neutrino in the decay process. In this work, $\kappa_{\pi}$ is approximated by $\sim 1-r_{\pi}$~\cite{Gaisser:1990vg}, where $r_{\pi} = m_{\mu}^2 / m_{\pi}^2 \simeq 0.57$ is the muon-to-pion mass-squared ratio. The other fraction goes to a muon, and $\kappa_\mu$ is approximated as $\sim 0.3$. In the synchrotron cooling energy regime, {\it i.e.}, $\varepsilon_\nu\geq\varepsilon_{\nu}^{\rm syn}$, the neutrino yield is suppressed by $t'_{\pi/\mu,\rm syn}/t'_{\pi/\mu,\rm dec}$. Introducing $f_{\rm sup}(\varepsilon_\nu)=1-\exp(-t'_{\pi/\mu,{\rm syn}}/t'_{\pi/\mu,{\rm dec}})$~\cite{He:2012tq,Kimura:2017kan}, the neutrino yield, Eq.~(\ref{eq:general_yield}), is modified as \begin{eqnarray} \frac{d\dot{N}_{\nu}}{d\varepsilon_{\nu}}\approx \int d\varepsilon_i \frac{K_{\rm CR}}{\varepsilon_{i0}} \left( \frac{\varepsilon_i}{\varepsilon_{i0}} \right)^{-\alpha_{\rm CR}} \tau_{p\gamma}(\varepsilon_i)Y(\varepsilon_{\nu};\varepsilon_i)f_{\rm sup}(\varepsilon_\nu).\,\,\,\,\,\,\,\,\, \label{eq:general_yield_wz_sync} \end{eqnarray} The break energy of the neutrino flux due to synchrotron cooling is given by Eq.~(\ref{eq:critical_synchrotron_energy}) and scales as $\varepsilon_{\nu}^{\rm syn}\propto \Gamma R/\sqrt{L'_\gamma}$. Since the optical depth $\tau_{p\gamma0}$ scales as $\propto L'_\gamma/(R\Gamma^2)$, we obtain $\varepsilon_{\nu}^{\rm syn}\propto \sqrt{L'_\gamma}/(\Gamma\tau_{p\gamma0})$.
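As an illustration of Eq.~(\ref{eq:critical_synchrotron_energy}), the sketch below evaluates the pion-channel break energy for a comoving field of $B'=0.91$~G (the example value appearing later in the text) with $\Gamma=1$; the constants and the resulting number are our own, not values quoted in the text. Since $B'\propto\Gamma^2$ in that example, the break scales as $1/\Gamma$.

```python
# Illustrative evaluation of the pion-channel critical neutrino energy,
# eps_syn = Gamma * kappa_pi * sqrt(6 pi (m_pi c^2)^5
#           / (tau_pi sigma_T c B'^2 (m_e c^2)^2)),  in cgs units.
import math

SIGMA_T = 6.652e-25              # Thomson cross-section [cm^2]
C_CGS = 2.998e10                 # speed of light [cm/s]
ERG_PER_GEV = 1.602e-3
MPI_C2 = 0.13957 * ERG_PER_GEV   # charged-pion rest energy [erg]
ME_C2 = 0.511e-3 * ERG_PER_GEV   # electron rest energy [erg]
TAU_PI = 2.603e-8                # charged-pion proper lifetime [s]
KAPPA_PI = 1 - (0.1057 / 0.13957) ** 2   # ~0.43, energy fraction to the neutrino

b_field = 0.91                   # comoving field [G], Gamma = 1
eps_syn = KAPPA_PI * math.sqrt(
    6 * math.pi * MPI_C2**5 / (TAU_PI * SIGMA_T * C_CGS * b_field**2 * ME_C2**2)
)
print(f"eps_syn ~ {eps_syn / ERG_PER_GEV:.1e} GeV (Gamma = 1)")
```

For Gauss-level fields the pion break sits far above 10~PeV, so the suppression of the observable flux is controlled by the $1/\Gamma$ and $1/\tau_{p\gamma0}$ scalings discussed above.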
Thus, the IceCube upper limit on the neutrino flux beyond 10~PeV~\cite{Aartsen:2018vtx} can constrain $L'_\gamma$, $\Gamma$, and $\tau_{p\gamma0}$. \subsection{Calculations of diffuse intensities} Assuming emission from standard candles (i.e., identical sources at all redshifts), the energy flux of diffuse neutrinos from UHECR sources across the universe, $\Phi_\nu\equiv dJ_{\nu}/dE_{\nu}$, is calculated by (e.g.,~\cite{Murase:2015xka}) \begin{equation} E_\nu^2\Phi_\nu(E_\nu)=\frac{c}{4 \pi} \int_0^{z_{\rm max}}\frac{dz}{1+z}\left|\frac{dt}{dz}\right| \left[\varepsilon_\nu^2\frac{d\dot{N}_{\nu}}{d\varepsilon_\nu}(\varepsilon_\nu)\right]n_0\psi(z), \label{eq:general_t} \end{equation} where $d\dot{N}_{\nu}/d\varepsilon_\nu$ is the neutrino spectrum per source, which is calculated in the previous subsection, and $E_\nu=\varepsilon_\nu/(1+z)\approx\Gamma\varepsilon'_\nu/(1+z)$. The comoving number density of UHECR sources is represented by $n_0\psi(z)$ with the local source density at $z=0$, $n_0$, and its cosmological evolution factor $\psi(z)$. For transient sources such as GRBs, $n_0$ is effectively given by $n_0=\rho_0\Delta T$, where $\rho_0$ and $\Delta T$ are the rate density and the duration of neutrino emission at the sources. The evolution factor $\psi(z)$ is parameterized as $(1 + z)^m$ such that the parameter $m$ represents the ``scale'' of the cosmological evolution that is often used in the literature. In this work, the source evolution is assumed to be compatible with the star formation rate, which is consistent with the constraints on cosmogenic neutrinos from the extremely-high-energy (EHE) analysis by IceCube~\cite{Aartsen:2016ngq}. Following Refs.~\cite{Kotera:2010yn,Yoshida:2012gf}, we parameterize $\psi(z)$ as \begin{eqnarray} \psi(z) \propto \left\{ \begin{array}{ll} (1 + z)^{3.4} & ( 0 \leq z \leq 1 ) \\ {\rm constant} & (1 \leq z \leq 4) \end{array} \right. .
\label{eq:sdf} \end{eqnarray} Based on Eq.~(\ref{eq:calorimetric}), $\varepsilon_i^2(d\dot{N}_{\rm CR}/d\varepsilon_i)$ can essentially be regarded as the luminosity of injected cosmic rays. Then, $n_0\varepsilon_i^2(d\dot{N}_{\rm CR}/d\varepsilon_i)$ corresponds to the UHECR luminosity density that is known to be $E_i(dQ_{\rm CR}/dE_i)\approx{10}^{43.8}~{\rm erg}~{\rm Mpc}^{-3}~{\rm yr}^{-1}$~\cite{Katz:2013ooa,Murase:2018utn}. The diffuse neutrino flux measurements suggest that the energy generation rate density of neutrinos is comparable, $E_\nu (dQ_{\nu}/dE_\nu)\approx{10}^{43.3}~{\rm erg}~{\rm Mpc}^{-3}~{\rm yr}^{-1}$~\cite{Murase:2018utn}. Both the UHECR and the neutrino diffuse fluxes scale as $\propto n_0 \varepsilon_{i0} K_{\rm CR}\sim n_0\xi_{\rm CR}L_\gamma \approx n_0\xi_{\rm CR}L'_\gamma \Gamma^2$. It is convenient to introduce the {\it boosted} source number density defined as \begin{eqnarray} \mathcal{N}_\Gamma &\equiv& n_0\xi_{\rm CR}\Gamma^2\nonumber\\ &=& \rho_0\Delta T\xi_{\rm CR}\Gamma^2. \label{eq:boosted_density} \end{eqnarray} The UHECR and neutrino intensities are then proportional to $\mathcal{N}_\Gamma$ for a given comoving photon luminosity $L'_\gamma$. It should be noted that $Q_{\rm CR}(>10~{\rm PeV})=L'_\gamma {\mathcal N}_\Gamma=\xi_{\rm CR}L_\gamma n_0$. The full description of $\Phi_\nu$ with the present analytical formulation is given in Appendix A. \begin{figure}[t] \begin{center} \includegraphics[width=0.45\textwidth]{./NuFlux2_2_4gamma1_0_esc_xiB_1e-9_4pub.pdf} \caption{An example of the UHECR nucleon and the all-flavor-sum neutrino fluxes from UHECR sources calculated using the presented analysis. The case of $\alpha_{\rm CR}=2.2$, $\alpha_\gamma=1.0$, $\xi_{\rm B}=0.1$ is shown, assuming star formation rate-like evolution. The comoving $L'_\gamma$ is set to $4.5\times 10^{46}$ erg/s and the boosted source number density $\mathcal{N}_\Gamma$ (Eq.~\ref{eq:boosted_density}) is $1\times 10^{-9}$ Mpc$^{-3}$. 
The optical depth $\tau_{p\gamma0}$ is 0.30 in this example, which gives a magnetic field of $B'= 0.91\Gamma^2$~G with $\xi_{\rm B}=0.1$. The black points represent the IceCube neutrino measurements~\cite{Aartsen:2015zva} and the shaded region represents the flux space consistent with the IceCube diffuse $\nu_\mu$ data~\cite{Aartsen:2016xlq}. The solid curve labeled (IceCube $\nu$ UL) is the differential EHE bound from IceCube~\cite{Aartsen:2018vtx}. The cosmic ray data measured by IceTop~\cite{Aartsen:2013wda}, PAO~\cite{Fenu:2017hlc} and TA~\cite{AbuZayyad:2012ru} are also displayed.} \label{fig:spectrum_example_1_0} \end{center} \end{figure} Fig.~\ref{fig:spectrum_example_1_0} shows an example of the UHECR and neutrino fluxes derived using the presented generic model. The realization of the fluxes displayed in Fig.~\ref{fig:spectrum_example_1_0} corresponds to a scenario that is consistent with both the UHECR and IceCube data. We discuss the allowed parameter space in the next section. It should be noted that the $1/\Gamma$ dependence of the neutrino cut-off energy due to the pion/muon synchrotron losses is also observed in this plot. The spectrum of the UHECR protons after propagation through intergalactic space to the Earth is calculated using a similar analytical technique. The details are described in Appendix B. \subsection{Observational constraints} The predicted neutrino and UHECR spectra must be consistent with their observations. Qualitatively, the UHECR energy budget constrains the product of $L'_\gamma$ and ${\mathcal N}_\Gamma$, whereas the neutrino energy budget determines the product of $\tau_{p\gamma0}$ and $L'_\gamma{\mathcal N}_\Gamma$. To quantify this consistency, we introduce the following criteria in the present study.
\begin{enumerate} \item[(a)] The integrated UHECR proton flux above 10 EeV, $\int_{10 {\rm EeV}} dE_i dJ_{\rm CR}/dE_i$, is less than the measurement by Auger, $8.5\times 10^{-19}$ /cm$^2$/s/sr~\cite{Fenu:2017hlc}\label{cond:Auger}. Considering the uncertainties associated with the UHECR mass composition, we impose only this bolometric requirement on the UHECR flux so that the resulting bounds on the relevant parameter space are conservative. The results for the required UHECR energy generation rate density are consistent with those obtained from detailed numerical simulations that take these uncertainties into account. \item[(b)] The neutrino flux intensity at 100 TeV and the spectral power-law index are within the 99\% C.L. range obtained from the diffuse $\nu_\mu$ data measured by IceCube~\cite{Aartsen:2016xlq}~\label{cond:diffuseNuMu}. \item[(c)] The all-flavor-sum neutrino flux at 100 PeV is less than $2\times 10^{-8}$ GeV/cm$^2$/s/sr, the limit obtained by the IceCube EHE analysis~\cite{Aartsen:2018vtx}. \item[(d)] The neutrino flux at 6 PeV is above $2\times 10^{-9}$ GeV/cm$^2$/s/sr, as determined by the detection of the 6 PeV neutrino by IceCube~\cite{Aartsen:2018vtx}\label{cond:6PeV}. \end{enumerate} \section{Constraints on UHECR and Neutrino Emitters \label{sec:results}} The four constraints from UHECR acceleration (Eqs.~\ref{eq:hillas} and \ref{eq:sync_condition}) and UHECR escape (Eqs.~\ref{eq:esc_condition} and \ref{eq:calorimetric}), based on the physics of photo-meson production, combined with the diffuse UHECR and neutrino flux measurements, allow us to constrain generic unification models for photohadronic neutrinos. We present the results in the following.
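Conditions (a)--(d) can be encoded as a simple screening function applied to any model realization. A minimal sketch in Python follows, where the threshold numbers are taken from the list above; the 99\% C.L. band used for condition (b) is a placeholder assumption, not the actual IceCube likelihood contour:

```python
# Sketch: screen a model realization against the observational criteria (a)-(d).
# Threshold numbers are from the text; the band used for (b) is a placeholder
# assumption standing in for the actual IceCube 99% C.L. contour.

def passes_criteria(uhecr_flux_above_10EeV,  # integrated proton flux [/cm^2/s/sr]
                    nu_flux_100TeV,          # all-flavor E^2 flux [GeV/cm^2/s/sr]
                    nu_index,                # neutrino spectral index alpha_nu
                    nu_flux_100PeV,          # all-flavor E^2 flux [GeV/cm^2/s/sr]
                    nu_flux_6PeV):           # all-flavor E^2 flux [GeV/cm^2/s/sr]
    a = uhecr_flux_above_10EeV < 8.5e-19               # (a) Auger integral flux
    b = (1e-8 < nu_flux_100TeV < 1e-7) and (2.0 < nu_index < 2.6)  # (b) placeholder band
    c = nu_flux_100PeV < 2e-8                          # (c) IceCube EHE limit
    d = nu_flux_6PeV > 2e-9                            # (d) 6 PeV detection
    return a and b and c and d
```

A realization such as the one shown in Fig.~\ref{fig:spectrum_example_1_0} would pass all four checks, whereas a flux overshooting the EHE limit at 100 PeV fails condition (c).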
\subsection{Cases of fiducial neutrino spectra \label{subsec:parameter_general}} \begin{figure*} \begin{center} \includegraphics[width=0.45\textwidth]{./Intensity2_2_1_0_xiB_1EeVmax_4pub.pdf} \includegraphics[width=0.45\textwidth]{./LgDensity2_2_1_0_esc_xiB_1EeVmax_4pub.pdf} \caption{(Left) The allowed region in the parameter space of luminosity per unit volume, $L'_\gamma\mathcal{N}_\Gamma$, and damping factor $\displaystyle{1-e^{-\tau_{p\gamma0}}}$. The parameters inside the shaded region satisfy the observational consistency criteria of conditions~(a)--(d) described in the text, and the UHECR condition of Eq.~(\ref{eq:sync_condition}). The cases of $\alpha_{\rm CR}=2.2$ and $\alpha_\gamma=1.0$ are shown. We find that these constraints are independent of $\Gamma$. The horizontal belt represented by the darker shade shows the systematic uncertainty of the UHECR energetics that originates from the uncertainties in the mass composition and the Galactic-to-extragalactic transition of UHECRs~\cite{Murase:2018utn}. The vertical line represents the bound on $\tau_{p\gamma0}$ from the UHECR escape condition, Eq.~(\ref{eq:esc_condition}), for $\xi_{\rm B}=0.1$. The upper bound of $L'_\gamma\mathcal{N}_\Gamma$ is determined by the condition that the proton flux from the sources should not exceed the measured flux of UHECRs. The lower bound is driven by the intensity of neutrinos measured by IceCube. (Right) The allowed region on the plane of the source luminosity $L'_\gamma$ and the boosted source density $\mathcal{N}_\Gamma$. The parameters inside the shaded region satisfy the observational consistency criteria as shown in the left plot. The horizontal line represents the condition of $t_p^{\rm acc}\leq t_p^{\rm dyn}$, Eq.~(\ref{eq:hillas}).
} \label{fig:constraints_when_alpha2_2} \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=0.45\textwidth]{./Intensity2_5_1_0_xiB_1EeVmax_4pub.pdf} \includegraphics[width=0.45\textwidth]{./LgDensity2_5_1_0_esc_xiB_1EeVmax_4pub.pdf} \caption{Same as Fig.~\ref{fig:constraints_when_alpha2_2} but for $\alpha_{\rm CR}=2.5$. The constraints on $L'_\gamma$--$\mathcal{N}_\Gamma$ (right) have a small dependence on $\Gamma$ when $\Gamma\gg 1$. The region specified by the dashed line corresponds to the allowed space for $\Gamma=100$. } \label{fig:constraints_lumonisoty_when_alpha2_5} \end{center} \end{figure*} The left plot of Fig.~\ref{fig:constraints_when_alpha2_2} displays the luminosity and the optical depth constraints for the spectral power-law index of UHECRs $\alpha_{\rm CR}=2.2$ and that of the target photons $\alpha_\gamma=1.0$. Given that the neutrino spectrum follows $\propto E_\nu^{-(\alpha_{\rm CR}-\alpha_\gamma+1)}\sim E_\nu^{-2.2}$ (see Eq.~(\ref{eq:onsource_final})), this represents the case of neutrino spectra with $\alpha_\nu>2$, which is close to the index suggested by IceCube observations. The optical depth $\tau_{p\gamma0}\gtrsim 0.1$ is required because the IceCube neutrino energy flux is compatible with the UHECR flux. As seen in Fig.~\ref{fig:spectrum_example_1_0}, the margins for the neutrino fluxes to be consistent with both the neutrino and UHECR observations are small when the primary UHECR spectrum is as hard as $\alpha_{\rm CR}\lesssim 2.2$. Since $L'_\gamma\mathcal{N}_\Gamma\propto n_0K_{\rm CR}$, the range of the luminosity per unit volume, $L'_\gamma\mathcal{N}_\Gamma$, is bounded by the UHECR flux and the IceCube neutrino flux, which are connected by the optical depth $\tau_{p\gamma0}$. This is an extended way of presenting the bounds that lead to the frequently referenced Waxman-Bahcall limit~\cite{Waxman:1998yy}. The tight constraint is also consistent with the results in Ref.~\cite{Yoshida:2014uka}.
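The Waxman-Bahcall-type bound referenced here can be reproduced at the order-of-magnitude level from the UHECR luminosity density quoted earlier, $E_i(dQ_{\rm CR}/dE_i)\approx{10}^{43.8}~{\rm erg}~{\rm Mpc}^{-3}~{\rm yr}^{-1}$. In the sketch below the Hubble time and the effective evolution factor $\xi_z\approx3$ are assumed values, and the prefactor $3/8$ corresponds to the calorimetric limit $\tau_{p\gamma}\to\infty$:

```python
# Back-of-the-envelope Waxman-Bahcall-type bound from the UHECR energy budget.
# Assumed inputs: Hubble time t_H and evolution factor xi_z ~ 3 (SFR-like
# evolution); the luminosity density 10^43.8 erg/Mpc^3/yr is quoted in the text.
import math

MPC_CM = 3.086e24   # cm per Mpc
YR_S   = 3.156e7    # s per yr
C_CMS  = 3.0e10     # cm/s
T_H    = 4.35e17    # Hubble time [s] (~13.8 Gyr), assumed
XI_Z   = 3.0        # effective redshift-evolution factor, assumed

q_cr = 10**43.8 / (MPC_CM**3 * YR_S)   # erg cm^-3 s^-1

# E^2 Phi ~ (3/8) xi_z (c t_H / 4 pi) E dQ/dE in the calorimetric limit
e2phi = 3.0/8.0 * XI_Z * C_CMS * T_H / (4 * math.pi) * q_cr  # erg/cm^2/s/sr
e2phi_gev = e2phi * 624.15                                    # 1 erg = 624.15 GeV
print(f"E^2 Phi_nu (calorimetric bound) ~ {e2phi_gev:.1e} GeV/cm^2/s/sr")
```

The result, a few $\times10^{-8}~{\rm GeV}~{\rm cm}^{-2}~{\rm s}^{-1}~{\rm sr}^{-1}$, is comparable to the measured IceCube flux level, which is why $\tau_{p\gamma0}\gtrsim0.1$ is required in the unification scenario.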
The resultant range of the source luminosity per unit volume is $\sim (3-15)\times 10^{44}~{\rm erg}~{\rm Mpc}^{-3}~{\rm yr}^{-1}$. This is comparable with the integrated UHECR luminosity per unit volume at $z=0$ above $10^{18}$~eV~\cite{Decerprit:2011qe}. However, the UHECR escape condition, Eq.~(\ref{eq:esc_condition}), prevents large optical depths unless the magnetic field is weaker than expected from the equipartition condition $\xi_{\rm B}=1$. The bound of $\tau_{p\gamma0}\lesssim 0.06$ derived from Eq.~(\ref{eq:esc_condition}) with the equipartition condition $\xi_{\rm B}=1$ is clearly inconsistent with the shaded region in the left plot of Fig.~\ref{fig:constraints_when_alpha2_2}. Relaxing the criterion for proton synchrotron cooling by setting $\xi_{\rm B}\sim 0.1$ can open an allowed space for the parameters $L'_\gamma\mathcal{N}_\Gamma$ and the optical depth $\tau_{p\gamma0}$. We found that the cases of an even harder UHECR source spectrum, {\it i.e.}, $\alpha_{\rm CR}\lesssim 2.1$, are nearly excluded for the reasonable range of magnetic field strengths expected for $\xi_{\rm B}\gtrsim 0.1$. Given that the upper bound of $\tau_{p\gamma 0}$ required by the UHECR escape condition scales as $1/\varepsilon_i^{\rm max}$ ({\it c.f.} Eq.~(\ref{eq:esc_condition})), setting $\varepsilon_i^{\rm max}\ll 10^{11}$~GeV relaxes these constraints. We also found that the allowed range of optical depths is limited, yielding $0.1\lesssim \tau_{p\gamma0}\lesssim 0.6$ for a given value of $\xi_B\sim0.1$, and it is even more severely constrained if $\xi_B\gg 0.1$. This is a nearly universal bound regardless of the UHECR spectral index if $\alpha_{\rm CR}\lesssim 2.3$. The right plot of Fig.~\ref{fig:constraints_when_alpha2_2} shows the allowed parameter space of the source luminosity in the plasma rest frame $L'_\gamma$ and the boosted source number density $\mathcal{N}_\Gamma$.
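The interplay between the escape condition and the required optical depth can be summarized in a small numerical sketch. The anchor value $\tau_{p\gamma0}\lesssim 0.06$ at $\xi_{\rm B}=1$ and $\varepsilon_i^{\rm max}=10^{11}$~GeV is quoted above, and the $1/\varepsilon_i^{\rm max}$ scaling is stated in the text; the $1/\xi_{\rm B}$ scaling used below is inferred from the quoted numbers and is an assumption, not a derived result:

```python
# Sketch of how the escape-condition upper bound on tau_pgamma0 scales.
# Anchor: tau_max ~ 0.06 at xi_B = 1 and eps_max = 1e11 GeV (quoted in the
# text). The 1/eps_max scaling is stated there; the 1/xi_B scaling is
# inferred from the quoted numbers (0.06 at xi_B=1 vs ~0.6 at xi_B=0.1).

def tau_escape_max(xi_B, eps_max_gev=1e11):
    return 0.06 * (1.0 / xi_B) * (1e11 / eps_max_gev)

print(tau_escape_max(1.0))   # ~0.06: inconsistent with the required tau >~ 0.1
print(tau_escape_max(0.1))   # ~0.6 : opens the allowed window 0.1-0.6
```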
Owing to the luminosity condition, Eq.~(\ref{eq:hillas}), the unified sources must be relatively rare, $\mathcal{N}_\Gamma\lesssim 10^{-9}~{\rm Mpc}^{-3}$. This is a well-known consequence of the UHECR energy budget argument. The minimal value of $L'_\gamma$ in the shaded region is determined by the synchrotron cooling condition, $t'_{\rm acc}<t'_{\rm syn}$, Eq.~(\ref{eq:sync_condition}), but the lower bound of $L'_\gamma$ demanding $t'_{\rm acc}<t'_{\rm dyn}$, Eq.~(\ref{eq:hillas}), is more stringent. We note that these constraints, in the plane of luminosity per unit volume and optical depth and in the plane of $L'_\gamma$--$\mathcal{N}_\Gamma$, are nearly independent of the plasma bulk Lorentz factor $\Gamma$. Thus, they are universal conditions that any class of sources in a unification scheme should satisfy. The constraints on the source luminosity per unit volume $L'_\gamma\mathcal{N}_\Gamma$ can be relaxed in the case of {\it soft} UHECR (and thus neutrino) spectra. Fig.~\ref{fig:constraints_lumonisoty_when_alpha2_5} displays an example, $\alpha_{\rm CR}=2.5$. Since the margin between the UHECR and neutrino fluxes increases if the UHECR proton spectrum is steeper, the luminosity per unit volume can be $\gtrsim 3\times 10^{45}~{\rm erg}~{\rm Mpc}^{-3}~{\rm yr}^{-1}$. The sources that satisfy this requirement for CRs include galaxies, AGNs, and galaxy clusters~\cite{Murase:2018utn}. \subsection{Cases of hard neutrino spectra \label{subsec:parameter_relativistic}} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{./NuFlux2_3_4gamma1_5_esc_xiB_1e-9_4pub.pdf} \caption{An example of a hard neutrino flux scenario with $\alpha_{\rm CR}=2.3$ and $\alpha_\gamma=1.5$. The comoving $L'_\gamma$ is set to $5.0\times 10^{48}$ erg/s and the boosted source number density $\mathcal{N}_\Gamma$ (Eq.~\ref{eq:boosted_density}) is $1\times 10^{-9}$ Mpc$^{-3}$.
The optical depth $\tau_{p\gamma0}$ is 0.10 in this particular example, which gives a magnetic field of $B'= 0.26\Gamma^2$~G with $\xi_{\rm B}=0.1$.} \label{fig:hard_flux_scenario} \end{center} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=0.45\textwidth]{./Intensity2_3_1_5_xiB_1EeVmax_4pub.pdf} \includegraphics[width=0.45\textwidth]{./LgDensity2_3_1_5_esc_xiB_1EeVmax_4pub.pdf} \caption{Same as Fig.~\ref{fig:constraints_when_alpha2_2} but with $\alpha_{\rm CR}=2.3$ and $\alpha_\gamma=1.5$. Only a relativistic plasma flow, $\Gamma\gtrsim 30$, can be consistent with the observations, and the resultant allowed region depends only weakly on $\Gamma$. In these plots, the allowed parameter spaces for $\Gamma=100$, $\Gamma=300$, and $\Gamma=1000$ are represented using different shades.} \label{fig:constraints_hard_flux_scenario} \end{center} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{./LorentzBulkContour_4pub.pdf} \caption{The allowed region in the plane of $\alpha_{\rm CR}$ and $\Gamma$. The regions for $\alpha_\gamma=1.3$, $\alpha_\gamma=1.4$, and $\alpha_\gamma=1.5$ are represented by different shades. The cases of $\alpha_\gamma=1.0$, 1.1, and 1.2 are represented by the solid curves. The region above each of the lines is allowed. $\xi_{\rm B}=0.1$ is assumed. } \label{fig:constraints_on_lorentz_factor} \end{center} \end{figure} Although the cases of harder UHECR spectra, {\it i.e.}, $\alpha_{\rm CR}\lesssim 2.1$, are nearly excluded, a scenario that predicts hard {\it neutrino} spectra with $\alpha_\nu\lesssim2.0$ is more realistic if the target photon spectrum is softer, {\it i.e.}, $\alpha_\gamma\gtrsim 1.3$. It should be noted that the neutrino spectrum follows $\sim E_\nu^{-(\alpha_{\rm CR}-\alpha_\gamma+1)}$ according to Eq.~(\ref{eq:onsource_final}).
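The index mapping used throughout this section, $\alpha_\nu=\alpha_{\rm CR}-\alpha_\gamma+1$, can be spelled out explicitly; the following one-liner reproduces the two benchmark cases discussed in the text:

```python
# Neutrino spectral index from the CR and target-photon indices,
# alpha_nu = alpha_CR - alpha_gamma + 1 (Eq. onsource_final in the text).

def alpha_nu(alpha_cr, alpha_gamma):
    return alpha_cr - alpha_gamma + 1.0

print(alpha_nu(2.2, 1.0))  # 2.2: soft neutrino spectrum, close to the IceCube fit
print(alpha_nu(2.3, 1.5))  # 1.8: the hard neutrino spectrum case
```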
A hard neutrino spectrum like $\sim E_\nu^{-2}$ cannot extend well above 100 PeV and should attenuate below this energy, given that a spectral extension to $\gg~{\rm PeV}$ with an $E_\nu^{-2}$-like power-law flux has been excluded by the IceCube EHE limit (see Fig.~3 of Ref.~\cite{Aartsen:2016ngq}). The spectral fall-off behavior of the neutrino spectrum is naturally expected when strong synchrotron cooling occurs. As discussed earlier, since the characteristic synchrotron cut-off energy of neutrinos is $\varepsilon_\nu^{\rm syn}\sim \sqrt{L'_\gamma}/(\Gamma\tau_{p\gamma0})$, a lower-energy cut-off via synchrotron cooling is realized in relativistic plasma flows, {\it i.e.}, $\Gamma\gg 1$. A scenario of harder neutrino spectra (but softer UHECR spectra) is, therefore, a natural consequence of the unified UHECR/neutrino model for ultra-relativistic sources. An example of the ultra-relativistic scenario is shown in Fig.~\ref{fig:hard_flux_scenario}. The hard neutrino spectrum $\propto E_\nu^{-(\alpha_{\rm CR}-\alpha_\gamma+1)}\sim E_\nu^{-1.8}$ falls off at $\sim 500~(50)~{\rm PeV}$ for sources with $\Gamma=100~(1000)$. These spectra are consistent with the IceCube EHE limit~\cite{Aartsen:2018vtx} based on the null detection of $\gtrsim 10~{\rm PeV}$ neutrinos. They represent a scenario of ultra-relativistic sources with unified UHECR and neutrino emission. Since the cut-off energy of the neutrino spectrum depends explicitly on $\Gamma$ for a given optical depth, the constraints on $L'_\gamma$, $\mathcal{N}_\Gamma$, and $\tau_{p\gamma0}$ exhibit a weak dependence on $\Gamma$ in the case of extremely relativistic sources that yield hard neutrino fluxes. Fig.~\ref{fig:constraints_hard_flux_scenario} displays the allowed regions of the parameter space in the hard neutrino spectrum case.
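The $1/\Gamma$ behavior of the cut-off can be checked numerically. The sketch below normalizes $\varepsilon_\nu^{\rm syn}\propto\sqrt{L'_\gamma}/(\Gamma\tau_{p\gamma0})$ to the quoted value of $\sim$500 PeV at $\Gamma=100$ for the parameters of the hard-spectrum example ($L'_\gamma=5.0\times10^{48}$ erg/s, $\tau_{p\gamma0}=0.10$); the absolute prefactor is an assumption, and only the scaling is taken from the text:

```python
# Synchrotron cut-off energy of the neutrino spectrum,
# eps_syn ~ sqrt(L'_gamma) / (Gamma * tau_pgamma0), normalized to the quoted
# ~500 PeV at Gamma=100 (the prefactor is assumed; only the scaling matters).
import math

REF = math.sqrt(5.0e48) / (100 * 0.10)   # reference combination at Gamma=100

def eps_nu_syn_pev(gamma, lum=5.0e48, tau=0.10):
    """Cut-off energy in PeV for the stated scaling."""
    return 500.0 * (math.sqrt(lum) / (gamma * tau)) / REF

print(eps_nu_syn_pev(100))    # 500 PeV
print(eps_nu_syn_pev(1000))   # 50 PeV: the 1/Gamma dependence
```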
Since $E_\nu^{\rm syn}\propto \sqrt{L'_\gamma}/(\Gamma\tau_{p\gamma0})$, a lower $\Gamma$ excludes super-luminous sources, because the neutrino intensity at $\gg~{\rm PeV}$ would overshoot the IceCube EHE limit. Given that the spectral indices $\alpha_{\rm CR}$ and $\alpha_\gamma$ characterize the emission environments, it is important to understand their allowed space in the unified source model. Fig.~\ref{fig:constraints_on_lorentz_factor} shows the constraints in the plane of $\alpha_{\rm CR}$ and $\Gamma$ for various values of the photon spectral power-law index $\alpha_\gamma$. The rapid fall-off structures observed at $\Gamma\sim 20$ result from the spectral cut-off due to synchrotron cooling. A higher $\Gamma$ allows a larger parameter space of $\alpha_{\rm CR}$ and $\alpha_\gamma$ as it avoids the EHE neutrino limit. The figure also indicates that extremely relativistic cases, $\Gamma\sim 10^3$, would further extend the allowed parameter space. This is because strong synchrotron cooling softens a fairly hard spectrum of neutrinos, which would otherwise be inconsistent with the IceCube observations. Fig.~\ref{fig:constraints_on_lorentz_factor} also indicates that harder UHECR proton emission with $\alpha_{\rm CR}\lesssim 2.1$ is nearly excluded, as discussed earlier. This bound depends on the photon spectral index $\alpha_\gamma$ in a non-trivial way. The situation is illustrated in Fig.~\ref{fig:hard_UHECR_flux_scenario}. The cases of $\alpha_\gamma=1.1$ and $1.2$ are allowed, but $\alpha_\gamma=1.0$ is inconsistent because the resultant neutrino spectrum is too soft to be allowed by the diffuse $\nu_\mu$ observations (condition~(b) described in Sec.~\ref{sec:condition}). The IceCube data favor a harder spectrum when we assume the lower side of the allowed intensity region, $E_\nu^2 dJ_{\nu_e+\nu_\mu+\nu_\tau}/dE_\nu\sim 1\times 10^{-8}~{\rm GeV}~{\rm cm}^{-2}~{\rm s}^{-1}~{\rm sr}^{-1}$, which is the only possibility that allows for consistency with the UHECR flux.
\begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{./NuFlux_gamma_GammaBound_esc_xiB_4pub.pdf} \caption{An example of neutrino energy spectra from the hard UHECR flux with $\alpha_{\rm CR}=2.1$. In this example, $L'_\gamma$, $\mathcal{N}_\Gamma$, and $\tau_{p\gamma0}$ are $8.4\times 10^{46}~{\rm erg}~{\rm s}^{-1}$, $3.0\times 10^{-10}$~Mpc$^{-3}$, and 0.42, respectively, which give a magnetic field of $B'= 0.91\Gamma^2$~G with $\xi_{\rm B}=0.1$. The case of $\alpha_\gamma=1.0$ is not consistent with the IceCube diffuse $\nu_\mu$ analysis~\cite{Aartsen:2016xlq}.} \label{fig:hard_UHECR_flux_scenario} \end{center} \end{figure} \subsection{Cases of UHECR nuclei} The recent observations by Auger indicate that UHECRs are likely to have a mixed composition that is dominated by intermediate to heavy nuclei at the highest energies. This adds important conditions to the possible classes of the sources. That is, we require that nuclei with $A>1$ and $Z>1$ are accelerated and survive. The maximum proton energy can be lower, but the survival conditions constrain the source environments more strongly, as investigated for GRBs~\cite{Murase:2008mr,Horiuchi:2011zz} and AGNs~\cite{Murase:2011cy,Pe'er:2009rc}. The luminosity requirement (Eq.~\ref{eq:hillas}) is significantly relaxed in the case of heavy nuclei. The condition $t'_{\rm acc}<t'_{\rm syn}$ is similarly imposed via Eq.~(\ref{eq:sync_condition}). The new requirements originate from the photodisintegration of nuclei. As in the proton case, we focus on the situation wherein the system is effectively optically thin to photodisintegration and photo-meson production, in which case $t'_{\rm acc}<t'_{\rm dis}$ is automatically satisfied, where $t'_{\rm dis}$ is the photodisintegration energy-loss time. After the nuclei are accelerated, they must survive photodisintegration as they escape from the sources. The survival condition is more severe~\cite{Murase:2008mr,Murase:2010gj}.
The photodisintegration cross-section is larger than that for photo-meson production, which gives the optical depth \begin{equation} \tau_{A\gamma}(\varepsilon_i)\approx\frac{2}{1+\alpha_\gamma}\frac{L'_{\gamma0}}{4\pi R\Gamma c \varepsilon'_{\gamma0}} \left(\int ds \frac{\sigma_{A\gamma}(s)}{s-m_A^2}\right) {\left(\frac{\varepsilon_i}{\tilde{\varepsilon}^{\rm GDR}_{i0}}\right)}^{\alpha_\gamma-1}, \label{eq:optical_depth_nuclei} \end{equation} where $\tilde{\varepsilon}^{\rm GDR}_{i0}$ is introduced as \begin{eqnarray} \tilde{\varepsilon}^{\rm GDR}_{i0}&=&\frac{s_{\rm GDR}-m_A^2}{4}\frac{\Gamma}{\varepsilon'_{\gamma0}}\nonumber\\ &=&\frac{s_{\rm GDR}-m_A^2}{s_{\Delta}-m_p^2}{\tilde{\varepsilon}^{\Delta}_{p0}}, \end{eqnarray} where $s_{\rm GDR}=m_A^2+2 m_A \bar{\varepsilon}_{\rm GDR}$ is the Mandelstam variable at the giant dipole resonance and $\bar{\varepsilon}_{\rm GDR}\approx42.65A^{-0.21}$~MeV is the resonance energy. The photodisintegration process is dominated by the giant dipole resonance. This relates $\tau_{A\gamma}$ to $\tau_{p\gamma}$ as \begin{equation} \tau_{p\gamma0}\approx\tau_{A\gamma}(\varepsilon_i^{\rm max})\frac{\int ds \frac{\sigma_{p\gamma}(s)}{s-m_p^2}} {\int ds \frac{\sigma_{A\gamma}(s)}{s-m_A^2}} {\left[\left(\frac{s_{\rm GDR}-m_A^2}{s_\Delta-m_p^2}\right) \left(\frac{\tilde{\varepsilon}^{\Delta}_{p0}}{\varepsilon_{i}^{\rm max}}\right)\right]}^{\alpha_\gamma-1}. \label{eq:relation_between_pgamma_dis} \end{equation} The importance of this relationship was highlighted in Ref.~\cite{Murase:2008mr} (see also Eq.~6 of Ref.~\cite{Murase:2010gj}). The survival condition is imposed by $t'_{\rm dyn}<t'_{\rm dis}$, which leads to \begin{equation} \tau_{A\gamma}(\varepsilon_i^{\rm max})\lesssim A, \end{equation} which is analogous to Eq.~(\ref{eq:calorimetric}).
We get \begin{equation} \tau_{p\gamma0}\lesssim A\frac{\int ds \frac{\sigma_{p\gamma}(s)}{s-m_p^2}} {\int ds \frac{\sigma_{A\gamma}(s)}{s-m_A^2}} {\left[\left(\frac{s_{\rm GDR}-m_A^2}{s_\Delta-m_p^2}\right) \left(\frac{\tilde{\varepsilon}^{\Delta}_{p0}}{\varepsilon_{i}^{\rm max}}\right)\right]}^{\alpha_\gamma-1}. \label{eq:suvival_condition} \end{equation} In particular, for $\alpha_\gamma=1.0$, this leads to $\tau_{p\gamma}\sim\tau_{p\gamma0}\lesssim0.4~{(A/56)}^{0.79}$, which is equivalent to Eq.~10 of Ref.~\cite{Murase:2010gj}. (Note that the value itself can be enhanced by the quasi-deuteron process, baryon resonances, and photofragmentation.) We require this survival condition in addition to Eqs.~(\ref{eq:hillas}), (\ref{eq:sync_condition}), and (\ref{eq:esc_condition}). It should be noted that this constraint is stronger for $\alpha_\gamma>1$. The aforementioned requirements on the sources apply independently of the details of the UHECR composition. However, the constraints from the diffuse UHECR and neutrino fluxes depend on the composition. Even if UHECRs are dominated by nuclei, the lower-energy cosmic rays that are responsible for IceCube neutrinos may be proton dominated, in which case the diffuse constraints remain unchanged from those obtained in the previous subsections. However, if the cosmic rays are dominated by heavy nuclei even at lower energies, the constraints are modified. We now consider such cases. The astrophysical neutrino flux from UHECR nuclei can be approximately described using a treatment similar to that of the proton-dominated case, if the UHECR sources are effectively transparent to the photodisintegration process.
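The numbers entering the nuclear case can be evaluated directly from the formulas above; the sketch below computes the giant-dipole-resonance energy $\bar{\varepsilon}_{\rm GDR}\approx42.65A^{-0.21}$~MeV and the survival bound $\tau_{p\gamma0}\lesssim0.4\,{(A/56)}^{0.79}$ (valid for $\alpha_\gamma=1.0$) for iron and silicon:

```python
# Numerical sketch for the nucleus case: the giant-dipole-resonance energy
# and the survival bound on tau_pgamma0 for alpha_gamma = 1.0,
# tau_pgamma0 <~ 0.4 (A/56)^0.79 (both formulas from the text).

def eps_gdr_mev(A):
    return 42.65 * A**(-0.21)       # GDR resonance energy [MeV]

def tau_survival_max(A):
    return 0.4 * (A / 56.0)**0.79   # survival bound for alpha_gamma = 1.0

print(f"Fe (A=56): eps_GDR ~ {eps_gdr_mev(56):.1f} MeV, tau_max ~ {tau_survival_max(56):.2f}")
print(f"Si (A=28): eps_GDR ~ {eps_gdr_mev(28):.1f} MeV, tau_max ~ {tau_survival_max(28):.2f}")
```

For silicon the survival bound tightens to $\tau_{p\gamma0}\lesssim0.2$, which is why the allowed optical-depth window shrinks relative to the proton-dominated case.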
The neutrino flux due to photomeson production via secondary nucleons and primary nuclei is given by \begin{eqnarray} E_\nu^2\frac{d J_\nu}{d E_\nu}&\approx&\frac{3}{8}[1-{(1-\kappa_{p\gamma})}^{\tau_{p\gamma}}][1-{(1-\kappa_{\rm dis})}^{\tau_{A\gamma}}]E_i^2\frac{d J_{\rm CR}}{d E_i}\nonumber\\ &+&\frac{3}{8}[1-{(1-\kappa_{\rm mes})}^{\tau_{\rm mes}}] {(1-\kappa_{\rm dis})}^{\tau_{A\gamma}}E_i^2\frac{d J_{\rm CR}}{d E_i}. \end{eqnarray} Note that in the limit of $\kappa_{p\gamma}\ll 1$ and $\kappa_{\rm dis}\ll 1$, keeping $\kappa_{p\gamma}\tau_{p\gamma}<1$ and $\kappa_{\rm dis}\tau_{A\gamma}<1$, we have \begin{eqnarray} E_\nu^2\frac{d J_\nu}{d E_\nu}&\approx&\frac{3}{8}\kappa_{p\gamma}\tau_{p\gamma}[E_i/A] \kappa_{\rm dis}\tau_{A\gamma}E_i^2\frac{d J_{\rm CR}}{d E_i}\nonumber\\ &+&\frac{3}{8}\kappa_{\rm mes}\tau_{\rm mes}[E_i] (1-\kappa_{\rm dis}\tau_{A\gamma})E_i^2\frac{d J_{\rm CR}}{d E_i}, \end{eqnarray} which is similar to Eq.~(11) of Ref.~\cite{Murase:2010gj}. The first term on the right-hand side represents the contribution from secondary nucleons, while the second term is the contribution from photomeson production on nuclei. With $\tau_{\rm mes}[E_i]\sim A\tau_{p\gamma}[E_i/A]$ (because of the approximation $\sigma_{\rm mes}[E_i]\sim A\sigma_{p\gamma}[E_i/A]$) and $\kappa_{\rm mes}[E_i]\sim \kappa_{p\gamma}[E_i/A]/A$, we approximately obtain~\cite{Murase:2010gj} \begin{equation} E_\nu^2\frac{d J_\nu}{d E_\nu}\approx \frac{3}{8}\kappa_{p\gamma}\tau_{p\gamma}[E_i/A] E_i^2\frac{d J_{\rm CR}}{d E_i}. \end{equation} We stress that this formula is derived assuming that all UHECRs are nuclei. Then, noting that $E_i\approx A E_p$, we have \begin{equation} E_\nu^2\frac{d J_\nu}{d E_\nu}\approx \frac{3}{8}\kappa_{p\gamma}\tau_{p\gamma}[E_p] E_p^2\frac{d J_{\rm CR}}{d E_p}A^{2-\alpha_{\rm CR}}.
\end{equation} Finally, the result for such a nuclear case is obtained by introducing the following ``correction'' to the proton case considered before: \begin{equation} E_\nu^2\frac{d J_\nu}{d E_\nu}\approx E_\nu^2\frac{d J_\nu^{(p)}}{d E_\nu}A^{2-\alpha_{\rm CR}}. \label{eq:neutrini_intensity_from_nuclei} \end{equation} This is simply because a neutrino with $E_\nu$ mainly originates from nuclei with $E_A\sim20A E_\nu$. Thus, the diffuse constraints derived assuming a proton composition can be regarded as conservative. We ``require'' that the sources be effectively transparent to the photodisintegration process, and we assume that the flux of escaping UHECRs is the same as that of the accelerated UHECRs up to $E_i^{\rm max}$. As previously discussed, the spectrum of escaping cosmic rays can be significantly different. This is usually expected in radiation-rich environments such as GRBs~\cite{Murase:2008mr} and blazars~\cite{Murase:2011cy}. However, diffuse environments such as galaxy clusters are also plausible examples~\cite{Fang:2017zjf}. In general, such a case requires detailed analyses, but the analytical formulas are adequate for this work. \begin{figure*} \begin{center} \includegraphics[width=0.45\textwidth]{./Intensity2_3_1_0_xiB_Si_1EeVmax_4pub.pdf} \includegraphics[width=0.45\textwidth]{./LgDensity2_3_1_0_xiB_Si_4pub.pdf} \caption{Same as Fig.~\ref{fig:constraints_when_alpha2_2} but for the case of primary silicon nuclei. In the left plot, the constraints for the silicon case are overlaid with those of the proton case for comparison. The horizontal belt represented by the darker shade shows the systematic uncertainty of the UHECR energetics that originates from the uncertainties in the mass composition and the Galactic-to-extragalactic transition of UHECRs~\cite{Murase:2018utn}.
The darker shaded region in the right panel represents the allowed space when the nuclear-survival condition is required.} \label{fig:constraints_nuclei} \end{center} \end{figure*} Fig.~\ref{fig:constraints_nuclei} shows the resultant constraints (for $\alpha_\gamma=1.0$). In this case, we consider silicon ($A=28$) UHECRs as a benchmark. Both the acceleration and escape conditions are considered. The allowed region in the $L'_\gamma$--$\mathcal{N}_\Gamma$ plane is similar to but wider than that of the proton-dominated case. The allowed region for $\tau_{p\gamma0}$ is smaller in the nuclei case because of the nucleus-survival condition -- a photon field that facilitates the survival of nuclei implies a low efficiency of photo-meson production. With the nucleus-survival condition imposed, the constraints become even more stringent, as indicated by the vertical line in the figure. This may suggest that fine-tuning is needed to build a viable model of UHECR nuclei sources. When the target photon spectrum is softer, the resultant parameter space is even smaller compared to the proton-dominated case. \section{\label{sec:source}Candidate Sources} In this section, we consider different source classes. The list of candidate sources for the unified photohadronic scenario is given in Table~\ref{tb1}. \begin{table*}[t] \begin{center} \caption{Characteristic parameters of the candidate sources of UHECRs and high-energy neutrinos.
\label{tb1} } \scalebox{1.2}{ \begin{tabular}{|c|c|c|c|c|c|c|} \hline & HL GRB & LL GRB & Newborn magnetar & Jetted TDEs & Blazar Flares & Jetted AGN\\ \hline $L_\gamma$ [${\rm erg}~{\rm s}^{-1}$] & ${10}^{51-53}$ & ${10}^{46-48}$ & ${10}^{42-44}$ & ${10}^{45-48}$ & ${10}^{45-48}$ & ${10}^{43-48}$ \\ \hline $\Gamma$ & $100-1000$ & $2-30$ & $?$ & $3-100$ & $3-100$ & $3-100$ \\ \hline $\rho$ [${\rm Gpc}^{-3}~{\rm yr}^{-1}$] & $0.1-1$ & $100-1000$ & $1000-10000$ & $0.01-0.1$ & $100-1000$ & --- \\ \hline $\Delta T$ [${\rm s}$] & $10-1000$ & $100-10000$ & ${10}^{2-5}$ & ${10}^{5-7}$ & ${10}^{5-7}$ & --- \\ \hline \end{tabular} } \end{center} \end{table*} \subsection{High-luminosity gamma-ray bursts} HL GRBs are among the most powerful gamma-ray transient sources, whose emission is classically attributed to radiation from nonthermal electrons. They are also potential candidate sources of UHECRs because of their high luminosities and large Lorentz factors~\cite{Milgrom:1995um,Waxman:1995vg,Vietri:1995hs} (see also Refs.~\cite{Murase:2008mr,Globus:2014fka,Biehl:2017zlw} for applications to nuclei). With $L_\gamma\sim10^{51-53}~{\rm erg}~{\rm s}^{-1}$ and $\Gamma\sim300$~\cite{Meszaros:2006rc}, we have the comoving (isotropic-equivalent) luminosity $L_\gamma'\sim10^{46-48}~{\rm erg}~{\rm s}^{-1}$. The magnetic energy density is assumed to be comparable to the radiation energy density if the synchrotron peak is near the observed peak energy at $\varepsilon_\gamma^b\approx\Gamma\hbar{\gamma'_b}^2\frac{eB'}{m_ec}\sim300$~keV. This implies $B'\sim10^3-10^5$~G for the electron Lorentz factor $\gamma'_b\sim10^3-10^4$. This can be compatible with $\xi_B\sim0.01-100$. UHECR acceleration is allowed based on the luminosity argument~\cite{Murase:2008mr,Samuelsson:2018fan}.
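The quoted field strengths follow from inverting the synchrotron peak relation above; a short numerical sketch in CGS units, where the fiducial values $\varepsilon_\gamma^b=300$~keV and $\Gamma=300$ are as given in the text:

```python
# Invert the synchrotron peak relation quoted in the text,
# eps_b ~ Gamma * hbar * gamma_b'^2 * e B' / (m_e c), to estimate B'
# for HL GRB parameters (eps_b ~ 300 keV, Gamma ~ 300). CGS constants.
HBAR = 1.0546e-27              # erg s
E_ESU = 4.803e-10              # esu
ME_C = 9.109e-28 * 2.998e10    # g cm/s
KEV_ERG = 1.602e-9             # erg per keV

def b_prime_gauss(eps_b_kev=300.0, gamma_bulk=300.0, gamma_e=1e4):
    eps_b = eps_b_kev * KEV_ERG
    return eps_b * ME_C / (gamma_bulk * gamma_e**2 * HBAR * E_ESU)

print(f"{b_prime_gauss(gamma_e=1e4):.0f} G")   # ~1e3 G for gamma_b' ~ 1e4
print(f"{b_prime_gauss(gamma_e=1e3):.0f} G")   # ~1e5 G for gamma_b' ~ 1e3
```

This reproduces the quoted range $B'\sim10^3$--$10^5$~G across $\gamma'_b\sim10^3$--$10^4$.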
The low-energy index of the target photon spectrum (below the peak energy near $\varepsilon_\gamma^b\sim1$~MeV) is relevant for UHECRs, typically $\alpha_\gamma\sim1$, in which case the photo-meson production optical depth is approximately energy independent (although multipion production enhances it by a factor of $3$~\cite{Murase:2005hy}). The apparent rate density of HL GRBs and the duration are $\rho\sim1~{\rm Gpc}^{-3}~{\rm yr}^{-1}$~\cite{Wanderman:2014eza} and $\Delta T\sim30$~s, respectively. This gives $n_0\sim{10}^{-15}~{\rm Mpc}^{-3}$. The constraint shown in Fig.~\ref{fig:constraints_when_alpha2_2} indicates that ${\mathcal N}_\Gamma\sim{10}^{-9}~{\rm Mpc}^{-3}$ (see also Figs.~\ref{fig:spectrum_example_1_0} and \ref{fig:hard_flux_scenario} for the cases with $\Gamma\sim100-1000$), which leads to $\xi_{\rm CR}\sim10{(\Gamma/300)}^{-2}$. This is consistent with the value required by the GRB-UHECR hypothesis~\cite{Murase:2008mr}. One of the advantages of HL GRB models is that the steepening of neutrino spectra above a few PeV energies can readily be explained (see Fig.~2). This is because the strong cooling of pions and muons suppresses the high-energy neutrino spectrum~\cite{Waxman:1997ti}. However, the photo-meson production optical depth required for the unification model is $\tau_{p\gamma}\sim0.1-0.6$, which strongly constrains HL GRB models. HL GRBs are so bright that stacking analyses are very powerful, and the recent IceCube analyses give the stringent limit $\tau_{p\gamma}\lesssim0.05$~\cite{Abbasi:2012zw,Aartsen:2014aqy,Aartsen:2017wea}, which challenges the GRB-UHECR models~\cite{Bustamante:2014oka,Bustamante:2016wpu}. The null detection of cosmogenic neutrinos by IceCube has also substantially constrained the possibility that HL GRBs are a significant population of UHECR sources~\cite{Aartsen:2016ngq}.
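The HL GRB numbers quoted above can be cross-checked directly from the rate density, the duration, and the required boosted density:

```python
# Check the HL GRB numbers quoted in the text: effective source density
# n_0 = rho * DeltaT and the cosmic-ray loading xi_CR implied by the
# required boosted density N_Gamma = n_0 * xi_CR * Gamma^2 ~ 1e-9 Mpc^-3.
YR_S = 3.156e7      # s per yr

rho = 1e-9          # Mpc^-3 yr^-1 (apparent HL GRB rate, ~1 Gpc^-3 yr^-1)
dt = 30.0           # s (typical duration)
n0 = rho * dt / YR_S
print(f"n_0 ~ {n0:.1e} Mpc^-3")              # ~1e-15, as quoted

N_gamma_req = 1e-9  # Mpc^-3, from the allowed region
gamma = 300.0
xi_cr = N_gamma_req / (n0 * gamma**2)
print(f"xi_CR ~ {xi_cr:.0f} at Gamma=300")   # ~10, as quoted
```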
Thus, although the allowed parameter space may be compatible with the GRB models, we conclude that HL GRBs are unlikely to provide a unified explanation for UHECRs and PeV neutrinos. \subsection{Low-luminosity gamma-ray bursts and transrelativistic supernovae} Engine-driven supernovae with a Lorentz factor of $\Gamma\beta\gtrsim0.1-1$ have been proposed as the main sources of UHECRs~\cite{Murase:2006mm,Gupta:2006jm,Wang:2007ya,Murase:2008mr,Zhang:2018agl}. Note that this category includes LL GRBs like GRB 060218~\cite{Soderberg:2006vh}, peculiar hypernovae like SN 2009bb~\cite{Margutti:2018rri}, and fast-rising blue optical transients such as AT2018cow~\cite{Margutti:2018rri,Coppejans:2020nxp}. If the jet scenario is assumed, with $L_\gamma\sim10^{46-48}~{\rm erg}~{\rm s}^{-1}$ and $\Gamma\sim3$, we have $L_\gamma'\sim10^{45-47}~{\rm erg}~{\rm s}^{-1}$. The luminosity requirement can be satisfied only for optimistic parameters, e.g., $L_\gamma\sim10^{48}~{\rm erg}~{\rm s}^{-1}$, but it can be more readily fulfilled if UHECRs are heavy nuclei rather than protons~\cite{Murase:2008mr,Samuelsson:2020upt}. The rate density and duration are $\rho\sim100-1000~{\rm Gpc}^{-3}~{\rm yr}^{-1}$ and $\Delta T\sim3000$~s, respectively~\cite{Campana:2006qe,Soderberg:2006vh,Liang:2006ci}, which should be compared to $\mathcal{N}_\Gamma\sim{10}^{-9}~{\rm Mpc}^{-3}$ from Fig.~3. The effective number density is $n_0\sim{10}^{-11}-10^{-10}~{\rm Mpc}^{-3}$, and the condition can be satisfied if $\xi_{\rm CR}\sim(1-10){(\Gamma/3)}^{-2}$. The peak energy of GRB 060218 and GRB 100316D is $\varepsilon_\gamma^b\sim1-10$~keV~\cite{Campana:2006qe}. The magnetic field strength is not well understood, but $\xi_B\sim0.1-10$ is expected in the synchrotron scenario. For $\alpha_\gamma\sim1$, the optical depth required for photo-meson production is estimated to be $\tau_{p\gamma}\sim0.01-1$~\cite{Murase:2006mm}.
It should be noted that such a hard photon spectrum is necessary to maintain consistency with optical observations~\cite{Murase:2006mm,Samuelsson:2020upt}. Thus, we conclude that LL GRBs could be viable sources of high-energy neutrinos and UHECRs if the luminosity is higher and/or if cosmic rays are nuclei, which is consistent with previous works~\cite{Murase:2008mr,Biehl:2017qen}. However, it should be considered that the mechanism of prompt emission from LL GRBs is still under debate, and another (more promising) possibility is the shock breakout scenario~\cite{Campana:2006qe}, in which gamma rays are attributed to shock breakout from a mildly relativistic outflow (that may be driven by a jet). In this scenario, UHECRs are unlikely to be generated during the prompt phase~\cite{Kashiyama:2013ata}. Although IceCube neutrinos are explained by choked jets or transrelativistic shocks in a dense wind~\cite{Senno:2015tsn}, UHECR acceleration is attributed to a later transrelativistic component that is decelerated over a time scale of weeks or months~\cite{Zhang:2017moz}. \subsection{Newborn magnetars} Some supernovae are more powerful than ordinary ones and are referred to as hypernovae. Their ejecta are either nonrelativistic or transrelativistic (i.e., the Lorentz factor is $\Gamma\beta\gtrsim0.1-1$), which may be driven by some central engine with possible candidates that include a newborn magnetar (e.g.,~\cite{Thompson:2004wi}), a fallback disk around a black hole (e.g.,~\cite{Dexter:2012xk}), and collisions with dense circumstellar material (e.g.,~\cite{Smith:2007cb}). We discuss the newborn magnetar scenario that has been widely discussed in the recent literature. The spin-down luminosity is $L_{\rm sd}\sim3\times{10}^{49}~{\rm erg}~{\rm s}^{-1}$ for a millisecond rotating magnetar with a dipole magnetic field of $\sim10^{15}$~G. 
Efficient ion acceleration could occur inside a relativistic wind~\cite{Arons:2002yj}, in which the square of the additional factor $\theta_{\rm mag}=2\pi R_s/(cP)\sim0.2\,{(P/1~{\rm ms})}^{-1}$ should be included as part of the luminosity requirement. Although UHECR acceleration is possible in this magnetar scenario~\cite{Arons:2002yj}, the photons associated with the dissipation of Poynting-dominated winds should be thermalized inside the supernova ejecta. Therefore, our power-law assumption for the photon spectrum may not hold. Furthermore, the model typically predicts neutrino emission in the EeV range rather than in the PeV range~\cite{Murase:2009pg,Fang:2013vla}. As such, it is difficult for the fiducial model to explain the IceCube neutrinos in the PeV range. Thus, this model is not discussed in further detail. Finally, we also note that the IceCube EHE neutrino limit in the EeV range has already started to strongly constrain the magnetar scenario~\cite{Aartsen:2016ngq}. \subsection{Tidal disruption events} TDEs originate from the disruption of a main-sequence star or white dwarf by a supermassive black hole or an intermediate-mass black hole, respectively. Some TDEs have powerful jets, and the X-ray luminosity of Sw J1644+57 was $L_\gamma\sim10^{47-48}~{\rm erg}~{\rm s}^{-1}$~\cite{Burrows:2011dn}. For $\Gamma\sim10$, we have $L_\gamma'\sim10^{46-47}~{\rm erg}~{\rm s}^{-1}$. Thus the luminosity requirement can be satisfied~\cite{Farrar:2008ex}. The apparent rate density and duration are $\rho\sim0.01-0.1~{\rm Gpc}^{-3}~{\rm yr}^{-1}$ and $\Delta T\sim3\times{10}^6$~s, respectively. Thus the effective number density becomes $n_0\sim{10}^{-12}-10^{-11}~{\rm Mpc}^{-3}$. In comparison to $N_\Gamma\sim{10}^{-9}~{\rm Mpc}^{-3}$ from Fig.~3, the condition for the unification of UHECRs and PeV neutrinos can be satisfied if $\xi_{\rm CR}\sim(1-10){(\Gamma/10)}^{-2}$. 
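The size of the factor $\theta_{\rm mag}$ quoted in the magnetar discussion above can be checked numerically; the sketch below assumes a fiducial neutron-star radius $R_s=10^6$~cm (our assumption, not stated in the text):

```python
# theta_mag = 2*pi*R_s / (c*P): ratio of the stellar radius R_s to the
# light-cylinder radius R_LC = c*P/(2*pi) of a rotating magnetar.
# R_s = 1e6 cm (a fiducial neutron-star radius) is an assumption here.
import math

C_CGS = 3.0e10   # speed of light [cm/s]
R_NS = 1.0e6     # neutron-star radius [cm] (assumed)

def theta_mag(period_s):
    return 2.0 * math.pi * R_NS / (C_CGS * period_s)

print(f"theta_mag(P = 1 ms) ~ {theta_mag(1e-3):.2f}")  # ~0.2
```

Note that $\theta_{\rm mag}\propto P^{-1}$, so slower rotators have a smaller factor.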
The most common explanation for the X-rays from Sw J1644+57 is non-thermal synchrotron emission, and the peak energy is $\varepsilon_\gamma^b\approx\Gamma\hbar{\gamma'_b}^2\frac{e B'}{m_e c}\sim100$~keV~\cite{Burrows:2011dn}, which implies that $B'\sim10^2-10^4$~G for the electron Lorentz factor $\gamma'_b\sim10^4-10^5$. These values can be compatible with $\xi_B\sim0.01-100$. However, provided that we consider UHECR production inside jets of TDEs such as Sw J1644+57, strong radiation fields lead to $\tau_{p\gamma0}\gg1$~\cite{Senno:2016bso}, which makes it difficult to find parameters that satisfy the constraints in Fig.~\ref{fig:constraints_when_alpha2_2}. The problem is worse if we require the nucleus-survival condition because nuclei are disintegrated in the presence of such intense radiation fields~\cite{Zhang:2017moz,Guepin:2017abw}. It has been suggested that hypothetical low-luminosity or low-state TDEs with $L_\gamma\sim10^{45-46}~{\rm erg}~{\rm s}^{-1}$ are necessary for nuclei to survive, based on which the UHECR flux could be explained~\cite{Zhang:2017moz,Guepin:2017abw}. Alternatively, cosmic-ray acceleration at external shocks formed by jets or winds is also possible~\cite{Farrar:2014yla,Zhang:2017moz}, although efficient PeV neutrino production is not expected in these scenarios. Our results imply that low-luminosity TDEs that allow $\tau_{p\gamma0}\lesssim1$ can satisfy the required conditions for the unification model, but nuclei rather than protons are required to obtain the highest energies. Correspondingly, the required cosmic-ray loading factors would be larger. With $N_\Gamma\sim3\times{10}^{-8}~{\rm Mpc}^{-3}$, we obtain $\xi_{\rm CR}\sim(30-300){(\Gamma/10)}^{-2}$ (see also~\cite{Zhang:2017moz,Biehl:2017hnb,Guepin:2017abw}). However, it is unlikely that TDEs are the common sources of IceCube neutrinos and UHECRs for several different reasons. It has been shown that it is difficult for TDEs to be the dominant population in the diffuse IceCube flux. 
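The field strengths quoted at the start of this subsection follow from inverting the synchrotron peak relation; a rough Python sketch with standard CGS constants (the Lorentz factors $\gamma'_b\sim10^4-10^5$ and $\Gamma\sim10$ are the values assumed in the text):

```python
# Invert the observed synchrotron peak energy
#   eps_b = Gamma * hbar * gamma_b'^2 * e * B' / (m_e * c)
# for the comoving magnetic field B', in CGS units.
HBAR = 1.0546e-27     # erg s
E_CHARGE = 4.803e-10  # esu
M_E = 9.109e-28       # g
C = 2.998e10          # cm/s
ERG_PER_KEV = 1.602e-9

def b_field(eps_b_kev, gamma_b, lorentz=10.0):
    """Comoving field B' [G] for a given peak energy and electron gamma_b'."""
    eps_b = eps_b_kev * ERG_PER_KEV
    return eps_b * M_E * C / (lorentz * HBAR * gamma_b**2 * E_CHARGE)

for g in (1e4, 1e5):
    print(f"gamma_b' = {g:.0e}: B' ~ {b_field(100.0, g):.0f} G")
```

For $\varepsilon_\gamma^b=100$~keV this gives $B'$ from roughly $10^2$~G ($\gamma'_b=10^5$) up to $10^4$~G ($\gamma'_b=10^4$), consistent with the range quoted above.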
TDEs are so rare that the limits from the absence of multiple neutrino sources in the IceCube data are stringent~\cite{Senno:2016bso}. Furthermore, there is no evidence of positive neutrino signals from Sw J1644+57 and other TDEs~\cite{Stein:2019ivm}. Recently, it has been claimed that IceCube-191001A could coincide with TDE AT2019dsg~\cite{Stein:2020xhk,Winter:2020ptf}, but the physical association is still questionable~\cite{Murase:2020lnu}, although AT2019dsg is thought to belong to a rare, luminous class of TDEs. \subsection{Blazar flares and jetted active galactic nuclei}% Some active galactic nuclei (AGNs) have relativistic jets, and such jetted AGNs are considered as promising candidate sources of UHECRs and high-energy neutrinos. Recent studies have argued that steady emission of jetted AGNs is unlikely to be the source of UHECRs, especially if the UHECR composition is dominated by protons. Fanaroff-Riley II (FR II) galaxies and flat-spectrum radio quasars (FSRQs) can satisfy the Hillas condition, Eq.~(\ref{eq:hillas}), but they are too rare in the local universe within 100~Mpc~\cite{Takami:2008rv,Fang:2016ewe}. This difficulty can be overcome if the UHECRs are accelerated during the active/flaring phase, for which the luminosity requirement is satisfied~\cite{Murase:2008sa,Nizamov:2018sbd}. A typical AGN luminosity is $L_j\sim10^{44}~{\rm erg/s}$, and the isotropic-equivalent luminosity can be enhanced by $2/\theta_j^2$. The importance of flaring emission has been strengthened by the recent discovery of IceCube-170922A, which coincided with the flaring blazar TXS 0506+056~\cite{Aartsen2018blazar1}, although this blazar was not favored as an UHECR accelerator~\cite{Amon2018}. The magnetic field strength can be estimated from the Compton dominance parameter. 
The leptonic modeling of FSRQs often suggests $U'_\gamma\gtrsim U'_B$, and $B'\sim0.1-10$~G is typically expected for FSRQs~\cite{Ghisellini:2009fj,Murase:2014foa}, which corresponds to $\xi_B\lesssim0.01-1$. However, the survival of heavy nuclei is typically difficult in FSRQs, whereas low-luminosity BL Lacs allow nuclei to survive, although the photo-meson production optical depth is expected to be low~\cite{Murase:2011cy,Murase:2014foa}. In the leptohadronic scenario (which includes the proton synchrotron scenario), higher magnetic fields, $B'\sim 10-100$~G, may be required~\cite{Petropoulou:2016ujj,Liodakis:2020dvd}, but such highly magnetized environments may be highly demanding for jet physics and may be contradictory to the nucleus-survival condition (see Eq.~\ref{eq:esc_condition}). Furthermore, UHECR emission from FR II galaxies/FSRQs is disfavored, given that strongly evolved UHECR sources are disfavored by the IceCube EHE limit~\cite{Aartsen:2016ngq} as well as by constraints from the absence of small-scale anisotropies. Thus, it is unlikely that the jetted AGNs are responsible for the observed UHECRs if they are dominated by protons. Ref.~\cite{Murase:2014foa} proposed the scenario whereby EeV neutrinos are dominated by FSRQs, whereas UHECRs are dominated by BL Lac objects (see also Ref.~\cite{Rodrigues:2017fmu}). However, the spectrum of neutrinos is typically expected in the EeV range, so the IceCube neutrino flux is not accounted for simultaneously. The reason is as follows. The photo-meson production efficiency cannot decrease with increasing energy. Even for FSRQs, where external radiation fields are usually dominant as target photons, $\tau_{p\gamma}$ has an energy-independent behavior beyond the pion production threshold due to multipion production~\cite{Murase:2014foa}. For BL Lacs, radiation from inner jets is typically more important, and the rectangular approximation around the $\Delta$ resonance can be justified. 
Then, from Eq.~(\ref{eq:resnuenergy}), 1~PeV neutrinos typically originate from photons with $\sim0.8~{(\Gamma/10)}^2~{\rm keV}$. Except for extremely high synchrotron peaked BL Lacs, the spectral index in the X-ray range is around $\alpha_\gamma\sim1.5-3$, so the number of target photons is larger at lower energies. As a result, for both BL Lacs and FSRQs, the spectrum of neutrinos is predicted to be hard in the PeV range since $\Phi_\nu \propto E_\nu^{-(\alpha_{\rm CR}+1-\alpha_\gamma)}$ as shown in Eq.~(\ref{eq:onsource_final}) (e.g.,~\cite{Mannheim:1995mm,Atoyan:2001ey,Murase:2014foa,Tavecchio:2014xha,Petropoulou:2015upa,Padovani:2015mba} for model-dependent numerical calculations). This contradicts the diffuse limits~\cite{Aartsen:2016ngq} if the cosmic-ray spectrum is extended to ultrahigh energies with a simple power law~\cite{Dermer:2014vaa,Amon2018}. For example, the conclusion determined based on the model-dependent calculations for BL Lacs (that may allow the survival of nuclei) can be interpreted using Figs.~\ref{fig:hard_flux_scenario} and \ref{fig:constraints_hard_flux_scenario} considering our generic, model-independent constraints. To compensate for a soft target spectrum with $\alpha_\gamma >1$, a softer UHECR spectrum is required to supply the substantial amount of PeV energy cosmic rays as discussed in Section~\ref{subsec:parameter_relativistic}. For $\alpha_\gamma=1.5$ and $\alpha_{\rm CR}=2.3$, we see that the model violates the IceCube EHE limit unless $\Gamma$ is very large. Given that $\Gamma\lesssim10-100$ is expected for blazars, the cosmic-ray spectral index $\alpha_{\rm CR}$ must be larger than $2.3$ (see Fig.~\ref{fig:constraints_on_lorentz_factor}). Such cases are not excluded but the required energetics is more demanding. 
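The target photon energy quoted at the beginning of this paragraph can be recovered from $\Delta$-resonance kinematics; the sketch below uses approximations we introduce here (head-on collisions, a resonance photon energy of $\sim0.3$~GeV in the proton rest frame, and $E_\nu\approx E_p/20$), which give $\sim0.7$~keV, close to the $\sim0.8$~keV quoted above:

```python
# Rough Delta-resonance estimate of the target photon energy producing a
# neutrino of energy E_nu in a jet with Lorentz factor Gamma. Assumptions
# (not from the text): head-on collisions, resonance photon energy
# ~0.3 GeV in the proton rest frame, and E_nu ~ E_p / 20.
M_P_GEV = 0.938          # proton mass [GeV]
EPS_DELTA = 0.3          # resonance photon energy, proton rest frame [GeV]

def target_photon_kev(e_nu_gev, gamma):
    e_p = 20.0 * e_nu_gev                               # parent proton energy
    eps_gamma_gev = EPS_DELTA * M_P_GEV * gamma**2 / (2.0 * e_p)
    return eps_gamma_gev * 1e6                          # GeV -> keV

print(f"{target_photon_kev(1e6, 10.0):.2f} keV for Gamma = 10")
```

The quadratic $\Gamma$ dependence of the result matches the ${(\Gamma/10)}^2$ scaling in the text.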
\section{\label{sec:summary}Summary and Discussion} We explored the viability of the unification model for UHECRs and IceCube neutrinos considering photohadronic scenarios, in which neutrinos are produced by the interactions between high-energy ions and low-energy photons. The results are summarized as follows. \begin{itemize} \item By requiring necessary conditions for UHECR sources, including those for acceleration (i.e., the Hillas condition) and survival, we obtained constraints on the photo-meson production optical depth in the UHECR sources. We further combined these source constraints with observational constraints imposed by the neutrino data from IceCube as well as the UHECR data from Auger. \item We found the viable parameter space required to explain the diffuse high-energy neutrino flux above 100~TeV energies and the UHECR flux above 10~EeV, simultaneously. The allowed regions of $\tau_{p\gamma0}$ and $Q_{\rm CR}=N_\Gamma L'_\gamma$ depend on $\alpha_{\rm CR}$, $\alpha_\gamma$, and $\Gamma$. For $\alpha_{\rm CR}=2.2$ and $\alpha_{\gamma}=1.0$, we found $0.1\lesssim\tau_{p\gamma}\lesssim0.6$ regardless of $\Gamma$, which can be shifted to lower values for larger $\alpha_{\rm CR}$ and/or smaller $\alpha_\gamma$. We also suggested the cooling break scenario, wherein the observed softness of the neutrino spectrum in the multi-PeV range can be explained by the suppression due to the cooling of mesons and muons. \item The Auger data on the UHECR composition have suggested that the UHECRs are likely to be dominated by intermediate to heavy nuclei above the ankle. The existence of nuclei imposes an additional condition on their survival due to the photodisintegration process. We showed that the allowed parameter space is narrower than the case of only protons. 
This is mainly because the nucleus-survival condition results in tighter upper limits on the photo-meson production optical depth; therefore, it is more difficult for hard CR spectra and/or soft photon spectra to match the IceCube data. This situation is even more prominent if the observed neutrinos originate from nuclei rather than protons because the neutrino intensity is suppressed by $A^{\alpha_{\rm CR}-2}$ compared to the proton case (see Eq.~\ref{eq:neutrini_intensity_from_nuclei}). For example, with $\alpha_{\rm CR}\sim2.3$ and $\alpha_\gamma\sim1.0$ in the silicon composition case, we obtained $\tau_{p\gamma}\sim0.1-0.2$, which is consistent with the nucleus-survival bound derived by Ref.~\cite{Murase:2010gj}. The allowed parameter space is almost unique for $\alpha_{\rm CR}\sim2.3$ and $\alpha_{\gamma}\sim1.0$, which can be used as one of the critical tests for the unification model with cosmic-ray accelerators. \item In general, we derived more conservative constraints that are imposed by matching the IceCube data without overshooting the Auger data. The allowed parameter space is extended, especially for steeper cosmic-ray spectra, because larger values of the photo-meson production optical depths are possible. It should be noted that in this case, the proton component is subdominant, so UHECRs should be dominated by nuclei for a viable unification model. \item Based on the conditions derived in this work, we examined different classes of astrophysical sources that could be viable as the sources of $p\gamma$ neutrinos for the unification model. We found that among the known source classes, LL GRBs and jetted TDEs can be viable, but the results of recent studies suggest that the latter source class is likely to be subdominant as the origin of the diffuse neutrino flux. However, we stress that our constraints are generic, and we do not exclude the possibility of other unknown source candidates. 
\end{itemize} The grand-unification model that accounts for the gamma-ray data has been discussed, especially for the hadronuclear scenario~\cite{Murase:2016gly,Fang:2017zjf}. We did not explicitly calculate the extragalactic gamma-ray background that is expected in the photohadronic scenario for the unification model, because it is highly model-dependent. In our case, gamma rays produced inside the sources are likely to be cascaded inside the sources. There is a correspondence between the optical depth to the $\gamma\gamma\rightarrow e^+e^-$ process and the $p\gamma$ optical depth $\tau_{p\gamma}$. Lower limits of the $p\gamma$ optical depth~\cite{Yoshida:2014uka} suggest that it is more natural for the sources to be optically thick to GeV-TeV gamma rays~\cite{Murase:2015xka}. However, there is an unavoidable contribution of cosmogenic gamma rays induced by UHECRs, which can give rise to a significant contribution to the extragalactic gamma-ray background, especially in GRB and AGN models that have strong redshift evolution. We note that the main purpose of this work is to obtain necessary constraints for the unification model with photohadronic neutrinos. As shown in this work, even the necessary conditions impose strict constraints, and can allow us to determine some implications for various types of possible candidate sources. We expect that the quantitative fitting of the data is possible but detailed analyses are left for future work. In this case, we note that there is a large uncertainty that originates from the UHECR escape mechanism. In the cosmic-ray accelerator models that are considered in this work, the parameter $\alpha_{\rm CR}$ should be regarded as the spectral index of the accelerated cosmic rays, which can be significantly different from that of the escaping UHECRs, especially for transient sources~\cite{Zhang:2017moz,Zhang:2017hom,Zhang:2018agl}. 
As a result, the spectrum of UHECRs injected into intergalactic space can be harder.\footnote{However, cosmic-ray reservoir models use the spectral index of cosmic rays that are injected into the environment after they escape from the sources~\cite{Fang:2017zjf}.} \acknowledgements The authors are grateful to Markus Ahlers and Francis Halzen for their valuable comments on the manuscript. The work of S.Y. is supported by JSPS KAKENHI Grant No.~18H05206 and the Institute for Global Prominent Research (IGPR) of Chiba University. The work of K.M. is supported by the Alfred P. Sloan Foundation, NSF Grant No.~AST-1908689, and JSPS KAKENHI No.~20H01901.
\section{Introduction} Birth-and-death processes are continuous-time Markov chains on $S = \{ 0,1,2,\cdots \}$ which only jump to neighboring points. Let us consider a birth-and-death process $X$ with positive birth and death rates on $S \setminus \{ 0 \}$ and stopped at $0$. Let $\nu$ be its initial distribution and $T_0$ be its first hitting time of $0$. Our aim is to reproduce the initial distribution $\nu$ from the first hitting time distribution $\bP_{\nu}[T_0 \in dt]$, where $\bP_{\nu}$ denotes the underlying probability measure of $X$ under the initial distribution $\nu$. We show that the reproduction of the initial distribution can be done via a differential operator obtained from the eigenfunctions of the generator. A key tool is the spectral theory. As we will see, there always exist the first hitting time densities $f_i(t) \ (i \geq 1, t > 0)$, that is, there are functions such that $\bP_{i}[T_0 \in dt] = f_{i}(t)dt$. In addition, the density $f_i$ has a spectral representation: \begin{align} f_i(t) = \pi_{i}\int_{0}^{\infty}\mathrm{e}^{-\theta t} \psi_{-\theta }(i)\rho(d\theta), \label{eq130} \end{align} where $\pi$ is the speed measure, $\rho$ is the spectral measure and $\psi_{-\theta}$ is a Dirichlet $(-\theta)$-eigenfunction of the generator, whose precise definitions will be given in Section \ref{section:BD}. As is well-known (e.g., Karlin and McGregor \cite{KarlinMcGregor}), the transition probability has the spectral representation: \begin{align} \bP_{i}[X_t = j] = \pi_{i}\int_{0}^{\infty}\mathrm{e}^{-\theta t} \psi_{-\theta}(i)\psi_{-\theta}(j)\rho(d\theta). \label{eq131} \end{align} Comparing the representations in \eqref{eq130} and \eqref{eq131} and changing the order of the integration and the differentiation formally, we see the following result. We will introduce a matrix $C$ in Proposition \ref{matrix_C} whose columns are generalized $0$-eigenfunctions of the generator $Q$ in the sense that $C_0 = 0$ and $C_{j-1} = QC_{j}$ for $j \geq 1$. 
We write $\partial_t = \frac{d}{dt}$ and define $\psi_{\partial_t}(j)$ by the differential polynomial \begin{align} \psi_{\partial_t}(j) = \sum_{k = 1}^{j}C(j,k)\partial_t^{k-1} \quad (j \geq 1). \label{eq111} \end{align} \begin{Thm} \label{rep_transition} Let $\nu$ be a probability measure on $S \setminus \{0\}$. Then it holds \begin{align} \bP_{\nu}[X_t = j] = \psi_{\partial_t}(j)f_{\nu}(t) \quad (j \geq 1), \label{eq113} \end{align} where \begin{align} \bP_{\nu}[X_t = j] := \sum_{i = 1}^{\infty}\nu\{ i \} \bP_{i}[X_t = j] \quad \text{and} \quad f_{\nu}(t) := \sum_{i = 1}^{\infty} \nu\{i\} f_i(t). \label{} \end{align} \end{Thm} By taking $t \to 0$ in \eqref{eq113}, we obtain the reproduction of the initial distribution: \begin{Cor} \label{rep_init} Let $\nu$ be a probability measure on $S \setminus \{0\}$. Then it holds \begin{align} \nu\{j\} = \lim_{t \to + 0}\psi_{\partial_t}(j)f_\nu(t) \quad (j \geq 1). \label{eq114} \end{align} \end{Cor} To obtain the reproduction formula in the explicit form, we need to compute the spectral measure of the generator. For this purpose, we look at birth-and-death processes as {\it generalized diffusions}, and apply the spectral theory for the generalized second-order ordinary differential operators. Using Doob's $h$-transform, we compute the matrix $C$ for asymmetric random walks. In the appendix, we study reproduction of the initial distributions for one-dimensional diffusions. We will show that, under the existence of the Laplace transform of the spectral measure, reproduction is possible for initial distributions with square integrable densities with respect to the speed measure. The main tool is the spectral theory, especially the generalized Fourier transform and the spectral representation of the first hitting time densities. \subsection*{Background of the study} In \cite{preprintQSD}, we have studied quasi-stationary distributions of one-dimensional diffusions. 
Let $X$ be a $\frac{d}{dm}\frac{d}{ds}$-diffusion on $S := [0,b) \ (0 < b \leq \infty)$ stopped at $0$. Let us denote the set of probability distributions on $S$ by $\cP(S)$ or $\cP S$. For a set of initial distributions $\cP \subset \cP(S \setminus \{0\})$, we say that the {\it first hitting uniqueness} holds on $\cP$ if the following holds: \begin{align} \cP \ni \mu \mapsto \bP_\mu[T_0 \in dt] \quad \text{is injective}, \label{eq133} \end{align} where $T_x$ denotes the first hitting time of $x$ for $x \in S$. Recall a probability distribution $\nu$ is called a {\it quasi-stationary distribution} of $X$ when the following holds: \begin{align} \bP_{\nu}[X_t \in dx \mid T_0 > t] = \nu(dx) \quad (t > 0). \label{} \end{align} Define \begin{align} \cP_{\mathrm{exp}} := \{ \mu \in \cP(0,b) \mid \bP_\mu[T_0 \in dt] = \lambda \mathrm{e}^{-\lambda t}dt \quad \text{for some } \lambda > 0 \}. \end{align} One of the main results in \cite{preprintQSD} was the following: \begin{Thm}[{\cite[Theorem 1.1]{preprintQSD}}]\label{main-theorem-03} Let $X$ be a $\frac{d}{dm}\frac{d}{ds}$-diffusion on $[0,b) \ (0 < b \leq \infty)$ stopped at $0$ satisfying \begin{align} \bP_x[T_0 < \infty] = 1 \quad \text{and} \quad \bP_x[T_y < \infty] > 0 \quad (x \in (0,b), y \in [0,b)). \label{} \end{align} Set \begin{align} \mu_{t}(dx) := \bP_{\mu}[X_t \in dx \mid T_0 > t]. \label{} \end{align} Assume the first hitting uniqueness holds on $\cP_{\mathrm{exp}}$ and \begin{align} {\bP_{\nu}}[T_0 \in dt] = \lambda \mathrm{e}^{-\lambda t}dt \quad \text{for some} \ \lambda > 0 \ \text{and some}\ {\nu} \in \cP(0,b). \end{align} Then for $\mu \in \cP(0,b)$ and $\lambda > 0$, the following are equivalent: \begin{enumerate} \item $ \lim_{t \to \infty}\frac{\bP_{\mu}[T_0 > t + s]}{\bP_{\mu}[T_0 > t]} = \mathrm{e}^{-\lambda s} \ (s > 0)$. \item $\bP_{\mu_{t}}[T_0 \in ds] \xrightarrow[t \to \infty]{} \lambda \mathrm{e}^{-\lambda s}ds$. \item $\mu_t \xrightarrow [t \to \infty]{} {\nu}$. 
\end{enumerate} \end{Thm} Here the convergence of probability distributions is in the sense of the weak convergence. From Theorem \ref{main-theorem-03}, we can see that if there is a quasi-stationary distribution $\nu_\lambda$ with $\bP_{\nu_\lambda}[T_0 > t] = \mathrm{e}^{-\lambda t}$, the convergence $\mu_{t} \xrightarrow[t \to \infty]{w}\nu_\lambda$ is reduced to the tail behavior of $T_0$. Rogers \cite{RogersFPP} has studied the first hitting uniqueness on $\cP(0,b)$ for one-dimensional diffusions and gave a sufficient condition for it by a condition on the resolvent density. His condition was, however, too strong; he gave in the paper an example of a diffusion satisfying the first hitting uniqueness but not his condition. Reproduction of initial distributions for a set $\cP \subset \cP(S)$ obviously implies the first hitting uniqueness on $\cP$. More precisely, we can see that the reproduction formula \eqref{eq114} provides the inverse of the map \eqref{eq133}. \subsection*{Outline of the paper} The remainder of the present paper is organized as follows. In Section \ref{section:BD}, we will recall some basic notion and setup notation for birth-and-death processes. In Section \ref{section:0-eigenfunc}, we will explain the scale function and speed measure for birth-and-death processes and construct the matrix $C$. In Section \ref{section:theta_eigenfunc}, we will show the equality \eqref{eq111} and see that the first hitting time densities have the spectral representations. In Section \ref{section:proof_rep_formula}, we will prove Theorem \ref{rep_transition}. In Section \ref{section:GeneralizedDiff}, we will see birth-and-death processes as generalized diffusions and apply the spectral theory and $h$-transforms for the processes. In Section \ref{section:exBD}, we will discuss symmetric and asymmetric random walks. In Appendix \ref{section:1-dimDiff}, we will study the reproduction of the initial distributions for one-dimensional diffusions. 
We will also give examples. \section{Birth-and-death processes} \label{section:BD} Let us consider a {\it birth-and-death process} on $S = \{ 0,1,2,\cdots \}$, that is, a continuous-time Markov chain with neighboring jumps for which $0$ is a trap. We denote the birth rates by $\{\lambda_i\}_{i \geq 1}$ and the death rates by $\{ \mu_i \}_{i \geq 1}$: \begin{align} \bP_{i}[X_{\tau} = i+1] = \frac{\lambda_{i}}{\lambda_{i}+ \mu_{i}}, \quad \bP_{i}[X_{\tau} = i-1] = \frac{\mu_{i}}{\lambda_{i}+ \mu_{i}} \quad \text{and} \quad \bP_{i}[\tau > t] = \mathrm{e}^{-(\lambda_{i} + \mu_{i})t} \label{} \end{align} where $\tau = \inf \{ t > 0 \mid X_{t} \neq X_0 \}$ denotes the first exit time from the initial state. We set $\mu_0 = \lambda_0 = 0$ according to the assumption that $0$ is a trap. We assume the birth and death rates are positive on $S \setminus \{ 0 \}$: $\lambda_{i} > 0,\ \mu_{i} > 0 \ (i \geq 1)$. Its $Q$-matrix \begin{align} Q = (Q(i,j))_{i,j \geq 0} = \begin{pmatrix} Q(0,0) & Q(0,1) & \cdots \\ Q(1,0) & Q(1,1) & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix} \end{align} is given by \begin{align} Q(i,i-1) = \mu_i \quad (i \geq 1) \quad \text{and} \quad Q(i,i+1) = \lambda_i, \quad Q(i,i) = -(\lambda_i + \mu_i) \quad (i \geq 0). \label{q-matrix} \end{align} We denote the transition probability by $P_t$, that is, \begin{align} \bP_i[X_t = j] = P_t(i,j) \quad (t \geq 0,\ i,j \geq 0). \label{} \end{align} For this process, the transition probability $P_t(i,j)$ satisfies the forward and backward equations: $\partial_t P_t = P_t Q = Q P_t$, that is, \begin{align} \partial_t P_t(i,j) &= \lambda_{j-1}P_t(i,j-1) - (\lambda_j + \mu_j)P_t(i,j) + \mu_{j+1} P_t(i,j+1) \quad (i,j \geq 0), \label{F-eq} \\ \partial_t P_t(i,j) &= \mu_{i}P_t(i-1,j) - (\lambda_i + \mu_i)P_t(i,j) + \lambda_{i} P_t(i+1,j) \quad (i,j \geq 0), \label{B-eq} \end{align} where we understand $\lambda_{-1}P_t(i,-1) = 0$ (see e.g., Anderson \cite[Chapter 2, Theorem 2.2]{Anderson}). 
Define the {\it speed measure} $\pi = (\pi_i)_{i \in S \setminus \{0\}}$ on $S \setminus \{0\}$ by \begin{align} \pi_1 = 1, \quad \pi_{i} = \frac{\lambda_1 \cdots \lambda_{i-1}}{\mu_2 \cdots \mu_{i}} \quad (i \geq 2). \label{speed-measure} \end{align} Note that $\pi$ is characterized by $\pi_1 = 1$ and the following balancing condition: \begin{align} \pi_{i+1} \mu_{i+1} = \pi_{i} \lambda_i \quad (i \geq 1). \label{balancing_condition} \end{align} Define the operator $Q$ by \begin{align} Qf(i) = \sum_{j = 0}^{\infty}Q(i,j)f(j) = \mu_i f(i-1) - (\lambda_i + \mu_i)f(i) + \lambda_i f(i+1) \quad (i \geq 0) \label{Q-operator} \end{align} for a function $f: S \to \bR$, where we understand $\mu_0 f(-1) = 0$. Then $Q$ is symmetric w.r.t.\ $\pi$ in the sense that for functions $f,g: S \to \bR$ which are finitely supported and satisfy $f(0) = g(0) = 0$ the following holds: \begin{align} \sum_{i = 1}^{\infty}(Qf)(i)g(i)\pi_{i} = \sum_{i = 1}^{\infty}f(i)(Qg)(i)\pi_{i}. \label{} \end{align} Indeed, by \eqref{balancing_condition} it holds \begin{align} &\sum_{i = 1}^{\infty}(Qf)(i)g(i)\pi_{i} \label{} \\ = &\sum_{i = 1}^{\infty}(\mu_i f(i-1) - (\lambda_i + \mu_i)f(i) + \lambda_i f(i+1))g(i)\pi_{i} \label{} \\ = &\sum_{i = 1}^{\infty}\mu_i f(i-1)g(i)\pi_{i} - \sum_{i = 1}^{\infty}(\lambda_i + \mu_i)f(i)g(i)\pi_{i} + \sum_{i = 1}^{\infty}\lambda_i f(i+1)g(i)\pi_{i} \label{} \\ = &\sum_{i = 1}^{\infty}\lambda_i f(i)g(i+1)\pi_{i} - \sum_{i = 1}^{\infty}(\lambda_i + \mu_i)f(i)g(i)\pi_{i} + \sum_{i = 1}^{\infty}\mu_i f(i)g(i - 1)\pi_{i} \label{} \\ = &\sum_{i = 1}^{\infty}f(i)(Qg)(i)\pi_{i}. \label{} \end{align} \section{Generalized $0$-eigenfunctions} \label{section:0-eigenfunc} Let us recall the spectral representation of the transition probability $P_t$ studied in Karlin and McGregor \cite{KarlinMcGregor}. 
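The symmetry computation above is easy to verify numerically; the following Python sketch uses arbitrary positive test rates (not tied to any model in the paper) and random finitely supported $f,g$ vanishing at $0$:

```python
# Numerical check that Q is symmetric w.r.t. the speed measure pi:
#   sum_i (Qf)(i) g(i) pi_i == sum_i f(i) (Qg)(i) pi_i
# for finitely supported f, g with f(0) = g(0) = 0.
import random

n = 20                                   # support cutoff
lam = [0.0] + [random.uniform(0.5, 2.0) for _ in range(n + 1)]
mu = [0.0] + [random.uniform(0.5, 2.0) for _ in range(n + 1)]
pi = [0.0, 1.0]                          # pi_1 = 1
for i in range(2, n + 2):                # balancing: pi_i mu_i = pi_{i-1} lam_{i-1}
    pi.append(pi[i - 1] * lam[i - 1] / mu[i])

def Q(f, i):
    # (Qf)(i) = mu_i f(i-1) - (lam_i + mu_i) f(i) + lam_i f(i+1)
    return mu[i] * f[i - 1] - (lam[i] + mu[i]) * f[i] + lam[i] * f[i + 1]

f = [0.0] + [random.uniform(-1, 1) for _ in range(n)] + [0.0, 0.0]
g = [0.0] + [random.uniform(-1, 1) for _ in range(n)] + [0.0, 0.0]
lhs = sum(Q(f, i) * g[i] * pi[i] for i in range(1, n + 2))
rhs = sum(f[i] * Q(g, i) * pi[i] for i in range(1, n + 2))
assert abs(lhs - rhs) < 1e-9 * (1 + abs(lhs))
```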
We introduce the scale function of $Q$: \begin{Prop} \label{prop:scale_function} The scale function of $Q$ defined by \begin{align} s(i) := \left\{ \begin{aligned} &0 & (i = 0), \\ &\frac{1}{\mu_1} + \sum_{j = 1}^{i-1}\frac{1}{\pi_{j}\lambda_{j}} & (i \geq 1) \end{aligned} \right. \label{scale-function} \end{align} satisfies $Qs = 0$ and $s(0) = 0$. Conversely, if $u$ is a function satisfying $Qu = 0$ and $u(0) = 0$, then $u$ is a constant multiple of $s$. \end{Prop} \begin{proof} From the definition of $s$, it holds \begin{align} Qs(i) &= \mu_i s(i-1) - (\lambda_i + \mu_i)s(i) + \lambda_i s(i+1) \label{} \\ &= -\frac{\mu_{i}}{\pi_{i-1}\lambda_{i-1}} + \frac{\lambda_{i}}{\pi_{i}\lambda_{i}} = 0. \label{} \end{align} Let $u$ be another function satisfying $u(0) = 0$ and \begin{align} Qu(i) = \mu_i u(i-1) - (\lambda_i + \mu_i)u(i) + \lambda_i u(i+1) = 0 \quad (i \geq 1). \label{eq125} \end{align} Set $c = u(1)$. If $c = 0$, from \eqref{eq125}, it inductively holds $u(i) = 0 \ (i \geq 0)$. If $c \neq 0$, set $v = s - (1/c)u$. Then $v$ satisfies $Qv = 0$ and $v(0) = v(1) = 0$. Thus it follows that $v(i) = 0 \ (i \geq 0)$, and we obtain $u= cs$. \end{proof} We introduce two difference operators $D_\pi$ and $D_s$ whose composition is the operator $Q$. \begin{Prop} Define \begin{align} D_{\pi} f(i) := \frac{f(i) - f(i-1)}{\pi_{i}} \quad(i \geq 1)\quad \text{and} \quad D_s f(i) := \frac{f(i+1) - f(i)}{s(i+1) - s(i)} \quad (i \geq 0). \label{dmds} \end{align} Then it holds $Qf(i) = D_{\pi} D_sf(i) \ (i\geq 1)$ for every function $f : S \to \bR$. \end{Prop} \begin{proof} It holds from \eqref{balancing_condition} \begin{align} D_{\pi} D_sf(i) &= \frac{D_sf(i) - D_sf(i-1)}{\pi_{i}} \label{} \\ &= \frac{\pi_{i}\lambda_i(f(i+1) - f(i)) - \pi_{i-1}\lambda_{i-1}(f(i) - f(i-1))}{\pi_{i}} \label{} \\ &= \lambda_i(f(i+1) - f(i)) - \mu_{i}(f(i) - f(i-1)) \label{} \\ &= Qf(i). 
\label{} \end{align} \end{proof} We introduce a matrix $C$ which we will use in Proposition \ref{eigen} to represent an eigenfunction of $Q$. \begin{Prop} \label{matrix_C} There exists a unique matrix \begin{align} C = (C(i,j))_{i,j \geq 0} = (C_0, C_1, \cdots) \quad \text{with} \quad C_j = (C(i,j))_{i \geq 0} = \begin{pmatrix} C(0,j) \\ C(1,j) \\ \vdots \end{pmatrix} \quad (j \geq 0) \label{matrixC} \end{align} which satisfies the relation \begin{align} Q C_{j} = C_{j-1} \quad (j \geq 1) \label{rec-rel} \end{align} with \begin{align} C_0 = 0,\quad C_1(i) = s(i) \quad (i \geq 1) \quad \text{and} \quad C(i,j) = 0 \quad (i < j), \label{} \end{align} where $s$ is the scale function given in \eqref{scale-function}. \end{Prop} \begin{Rem} Note that $C_j$ is a generalized $0$-eigenfunction of rank $j$ in the sense that $Q^{j}C_{j} = 0$ and $Q^{j-1}C_{j} = s \neq 0$. \end{Rem} \begin{proof}[Proof of Proposition \ref{matrix_C}] The matrix $C$ must satisfy \begin{align} C(i,j-1) &= \sum_{k=0}^{\infty}Q(i,k)C(k,j) \label{} \\ &= \mu_i C(i-1,j) - (\lambda_i + \mu_i) C(i,j) + \lambda_i C(i+1,j) \quad (i \geq 1, j \geq 0). \label{} \end{align} Let us determine $C_j$ recursively. For each fixed $j = 2,3,4,\cdots$, suppose $C_{j-1}$ has been determined. For $i \geq j - 1$, we have \begin{align} C(i+1,j) = \frac{1}{\lambda_i} (C(i,j-1) - \mu_i C(i-1,j) + (\lambda_i + \mu_i) C(i,j)). \label{eq120} \end{align} Substituting $i=j-1, j$ in \eqref{eq120}, we see that $C(j,j)$ and $C(j+1,j)$ are determined as \begin{align} C(j,j) &= \frac{C(j-1,j-1)}{\lambda_{j-1}}, \label{eq110} \\ C(j+1,j) &= \frac{1}{\lambda_j} (C(j,j-1) + (\lambda_j + \mu_j) C(j,j)), \label{eq122} \end{align} respectively. By \eqref{eq120}, the entries $C(i,j)$ for $i \geq j$ are determined recursively. \end{proof} \section{$\theta$-eigenfunctions and spectral representations} \label{section:theta_eigenfunc} The eigenfunctions for $Q$ can be obtained as the generating functions corresponding to the matrix $C$. 
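Before turning to the eigenfunctions, the recursion \eqref{eq120} is straightforward to implement; the following Python sketch uses arbitrary positive test rates (not from any example in the paper) and verifies the defining relation $QC_j = C_{j-1}$:

```python
# Build C(i,j) for 0 <= i,j <= n from the recursion
#   C(i+1,j) = (C(i,j-1) - mu_i C(i-1,j) + (lam_i + mu_i) C(i,j)) / lam_i,
# with column 0 zero, column 1 the scale function s, and C(i,j) = 0 for
# i < j; then check Q C_j = C_{j-1} entrywise.
n = 8
lam = [0.0] + [1.0 + 0.3 * i for i in range(1, n + 1)]   # test birth rates
mu = [0.0] + [0.7 + 0.2 * i for i in range(1, n + 1)]    # test death rates
pi = [0.0, 1.0]
for i in range(2, n + 1):
    pi.append(pi[i - 1] * lam[i - 1] / mu[i])
s = [0.0, 1.0 / mu[1]]
for i in range(2, n + 1):
    s.append(s[i - 1] + 1.0 / (pi[i - 1] * lam[i - 1]))

C = [[0.0] * (n + 1) for _ in range(n + 1)]              # C[i][j]
for i in range(n + 1):
    C[i][1] = s[i]                                       # column 1 is s
for j in range(2, n + 1):
    for i in range(j - 1, n):                            # fills C(j,j),...,C(n,j)
        C[i + 1][j] = (C[i][j - 1] - mu[i] * C[i - 1][j]
                       + (lam[i] + mu[i]) * C[i][j]) / lam[i]

for j in range(1, n + 1):
    for i in range(1, n):
        QCj = mu[i] * C[i - 1][j] - (lam[i] + mu[i]) * C[i][j] + lam[i] * C[i + 1][j]
        assert abs(QCj - C[i][j - 1]) < 1e-8             # Q C_j = C_{j-1}
```

The case $j = 1$ of the final check also confirms $Qs = 0$, since column $0$ is identically zero.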
\begin{Prop} \label{eigen} For $\theta \in \bR$, define \begin{align} \psi_{\theta}(i) = \sum_{j=1}^{\infty}C(i,j)\theta^{j-1} = \sum_{j=1}^{i}C(i,j)\theta^{j-1}. \label{eq103} \end{align} Then $u = \psi_{\theta}$ is the unique solution of the following equation: \begin{align} Qu = \theta u \quad \text{with} \ u(0) = 0, \ D_s u(0) = 1. \label{eq107} \end{align} \end{Prop} \begin{proof} It is obvious that $\psi_{\theta}(0) = 0$. It holds \begin{align} D_s\psi_{\theta}(0) = \mu_{1}(\psi_{\theta}(1) - \psi_{\theta}(0)) = \mu_1 C(1,1) = 1. \label{} \end{align} Note that for \begin{align} \psi_\theta = (\psi_{\theta}(i))_{i \geq 0} = \begin{pmatrix} \psi_{\theta}(0) \\ \psi_{\theta}(1) \\ \vdots \end{pmatrix} \quad \text{and} \quad r_{\theta} := (r_{\theta}(i))_{i \geq 0} = \begin{pmatrix} 0 \\ 1 \\ \theta \\ \theta^2 \\ \vdots \end{pmatrix}, \label{} \end{align} it holds \begin{align} \psi_\theta = C r_{\theta} = \sum_{j=1}^{\infty}C_j\theta^{j-1}. \label{} \end{align} Thus from \eqref{rec-rel} we have \begin{align} Q\psi_{\theta} = \sum_{j=1}^{\infty}QC_{j} \theta^{j-1} = \sum_{j=1}^{\infty}C_{j-1}\theta^{j-1} = \theta\sum_{j=1}^{\infty}C_j\theta^{j-1} = \theta \psi_{\theta}, \label{} \end{align} where we note that $C_0 = 0$. Hence the function $\psi_{\theta}$ is the solution of \eqref{eq107}. We check the uniqueness. Suppose $u = f$ satisfies the equation \eqref{eq107}. Then $v := f - \psi_{\theta}$ satisfies $Qv = \theta v$ with $v(0) = 0$ and $D_sv(0) = 0$, which implies $v(1) = 0$. It follows from the definition of $Q$ that $v(i) = 0 \ (i \geq 2)$. Thus it follows $f = \psi_{\theta}$.
\end{proof} From the spectral theory (see e.g., Karlin and McGregor \cite[p.501]{KarlinMcGregor}), there exists a measure $\rho$ on $[0,\infty)$, which we call the spectral measure of $Q$, such that \begin{align} \sum_{i = 1}^{\infty}f(i)^2\pi_i = \int_{0}^{\infty}\hat{f}(\theta)^2\rho(d\theta), \label{} \end{align} for every finitely supported function $f$ with $f(0) = 0$, where $\hat{f}$ is the {\it generalized Fourier transform} of $f$: \begin{align} \hat{f}(\theta) = \sum_{i = 1}^{\infty}f(i)\psi_{-\theta}(i)\pi_i. \label{} \end{align} The map $f \mapsto \hat{f}$ can be extended to a unitary map between $L^2(\pi)$ and $L^2(\rho)$, and the functions $\{ \psi_{-\theta}(i)\}_{i \geq 1}$ comprise an orthogonal basis of $L^2(\rho)$ and satisfy \begin{align} \frac{\delta_{ij}}{\pi_{j}} = \int_{0}^{\infty}\psi_{-\theta}(i)\psi_{-\theta}(j)\rho(d\theta) \quad (i,j \geq 1), \label{Orthogonal} \end{align} where $\delta_{ij}$ is the Kronecker delta. We now have the spectral representation of the transition probability $P_t$: \begin{align} \frac{1}{\pi_{j}}P_t(i,j) = \int_{0}^{\infty}\mathrm{e}^{-\theta t}\psi_{-\theta}(i)\psi_{-\theta}(j)\rho(d\theta) \quad (i,j \geq 1, t \geq 0). \label{BD-density} \end{align} From \eqref{BD-density}, we show the spectral representation of the first hitting time densities at $0$ as follows: \begin{Prop} \label{BD-hitting_density} For $i \geq 1$, it holds \begin{align} (f_i(t) :=)\ \partial_t\bP_i[T_0 \leq t] = \pi_{i}\int_{0}^{\infty}\mathrm{e}^{-\theta t} \psi_{-\theta}(i)\rho(d\theta). \label{hitting-density_BD} \end{align} \end{Prop} \begin{proof} Note that $\bP_i[T_0 \leq t] = \bP_i[X_t = 0]$.
From \eqref{F-eq} with $\lambda_0 = \mu_0 = 0$, \eqref{BD-density} and $\psi_{-\theta}(1) = C(1,1) = s(1) = 1/\mu_{1}$, we have for $i \geq 1$ \begin{align} \partial_t P_t(i,0) &= \mu_1 P_t(i,1) \label{eq116} \\ &= \pi_{i}\mu_1\int_{0}^{\infty}\mathrm{e}^{-\theta t}\psi_{-\theta}(i)\psi_{-\theta}(1)\rho(d\theta) \label{} \\ &= \pi_{i}\int_{0}^{\infty}\mathrm{e}^{-\theta t}\psi_{-\theta}(i)\rho(d\theta). \label{BD-hitting-density} \end{align} \end{proof} \section{Proof of the reproduction formula} \label{section:proof_rep_formula} We prove Theorem \ref{rep_transition}. \begin{proof}[Proof of Theorem \ref{rep_transition}] Let us first consider the case where $\nu$ is a point mass. From Propositions \ref{eigen} and \ref{BD-hitting_density}, we can see for $i \geq 1$ that \begin{align} \psi_{\partial_t}(j)f_{i}(t) &= \sum_{k = 1}^{j}C(j,k){\partial_t}^{k-1}\int_{0}^{\infty}\mathrm{e}^{-\theta t}\psi_{-\theta}(i)\rho(d\theta) \label{} \\ &= \sum_{k = 1}^{j}C(j,k)\int_{0}^{\infty}\mathrm{e}^{-\theta t}(-\theta)^{k-1}\psi_{-\theta}(i)\rho(d\theta) \label{} \\ &= \int_{0}^{\infty}\mathrm{e}^{-\theta t}\psi_{-\theta}(i)\psi_{-\theta}(j)\rho(d\theta). \label{} \end{align} Thus \eqref{eq111} holds when $\nu$ is a point mass. Let us consider the general case. Suppose we may find some constants $\{\alpha_k\}_{k \geq 0}$ such that \begin{align} \max_{i \geq 1}|\partial^k_t f_i (t)| \leq \alpha_k \quad (t > 0). \label{eq112} \end{align} Then we have \begin{align} \sum_{i=1}^{\infty}\nu\{i\}|\partial^k_t f_i (t)| \leq \alpha_k, \label{} \end{align} and we can change the order of the differentiation w.r.t.\ $t$ and the integration by $\nu$: \begin{align} \sum_{i=1}^{\infty}\nu\{i\}\partial^k_t f_i (t) = \partial^k_t \sum_{i=1}^{\infty}\nu\{i\} f_{i} (t) = \partial^k_t f_{\nu}(t). \label{} \end{align} Then it follows that \begin{align} \psi_{\partial_t}(j)f_{\nu}(t) = \sum_{i = 1}^{\infty}\nu\{i\}\psi_{\partial_t}(j)f_{i}(t) = \bP_{\nu}[X_t = j].
\label{} \end{align} Here the last equality follows from \eqref{eq113} for a point mass. Let us now find a sequence $\{\alpha_k\}_{k \geq 0}$ satisfying \eqref{eq112}. We construct it recursively. From \eqref{eq116}, it holds \begin{align} f_i(t) = \mu_{1}P_t(i,1) \leq \mu_{1}, \label{} \end{align} and we may take $\alpha_{0} := \mu_{1}$. Let $k \geq 0$ and assume we have constants $\alpha_{0},\alpha_{1}, \cdots, \alpha_{k}$ satisfying \eqref{eq112}. Then from \eqref{eq113} in the case when $\nu$ is a point mass, we have \begin{align} C(k+2,k+2){\partial_t}^{k+1}f_i(t) = P_t(i,k+2) -\sum_{l=1}^{k+1}C(k+2,l){\partial_t}^{l-1}f_i(t) \label{} \end{align} for every $i \geq 1$. Thus it follows \begin{align} C(k+2,k+2)|{\partial_t}^{k+1}f_i(t)| \leq 1 + \sum_{l=1}^{k+1}C(k+2,l)\alpha_{l-1}. \label{} \end{align} From \eqref{eq110} it holds $C(k+2,k+2) > 0$ and hence it holds \begin{align} |{\partial_t}^{k+1}f_i(t)| \leq \frac{1 + \sum_{l=1}^{k+1}C(k+2,l)\alpha_{l-1}}{C(k+2,k+2)}, \label{} \end{align} and we may take \begin{align} \alpha_{k+1} := \frac{1 + \sum_{l=1}^{k+1}C(k+2,l)\alpha_{l-1}}{C(k+2,k+2)}. \label{} \end{align} The proof is complete. \end{proof} \section{Looking at birth-and-death processes as generalized diffusions} \label{section:GeneralizedDiff} Generalized diffusions are processes which unify birth-and-death processes and one-dimensional diffusions. Here we recall some results on generalized diffusions and consider their application to birth-and-death processes. A main reference for the subject is Kotani and Watanabe \cite{KotaniWatanabe}. We say that $w: [0,\infty) \to [0,\infty]$ is a string when it is non-decreasing and right-continuous. Set $\ell(w) := \inf \{x \in \bR \mid w(x) = \infty\}$. For a string $w$, we define the measure $dw$ on $[0,\ell(w))$ by $dw(a,b] = w(b) - w(a) \ (0 \leq a < b)$ and $dw\{0\} = w(0)$.
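As a toy illustration of these definitions (the particular step string below is an arbitrary assumption for illustration), a string, its value $\ell(w)$, and the induced measure $dw$ can be encoded directly:

```python
import math

# a step string: jumps of size 1 at the integers 1, 2, 3, 4, and w = +∞ beyond x = 5
def w(x):
    if x >= 5:
        return math.inf
    return float(math.floor(x)) if x >= 1 else 0.0

ell = 5.0                       # ℓ(w) = inf{x : w(x) = ∞}

def dw(a, b):
    """dw(a, b] = w(b) - w(a) for 0 <= a < b."""
    return w(b) - w(a)

assert w(0.3) == 0.0 and w(2.7) == 2.0
assert dw(0.5, 3.5) == 3.0      # the atoms at 1, 2, 3 lie in (0.5, 3.5]
assert dw(0.0, 0.9) == 0.0      # and dw{0} = w(0) = 0 for this string
```

For this $w$, $dw$ consists of unit atoms at $1,\dots,4$, the discrete situation that corresponds to birth-and-death processes below.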
Let $w$ be a string, let $B$ be a standard Brownian motion, and let $L(t,x)$ be the jointly continuous local time of $B$, that is, for every non-negative measurable function $f$ it holds \begin{align} \int_{0}^{t}f(B_s)ds = 2\int_{\bR}f(x)L(t,x)dx. \label{} \end{align} Define \begin{align} A(t) := \left\{ \begin{aligned} &\int_{0}^{\ell(w)}L(t,x)dw(x) & (t < T_0 \wedge T_b), \\ &\infty & (t \geq T_0 \wedge T_b). \end{aligned} \right. \label{time-change} \end{align} Then the process $X(t) := B(A^{-1}(t))$ is a strong Markov process on $I$ stopped at the boundaries. We call $X$ the $\frac{d}{dw}\frac{d}{dx}${\it -generalized diffusion} and $dw$ the {\it speed measure} of $X$. Note that if $dw$ has full support in $(0,b)$, the process $X$ is a diffusion, and if $dw$ is supported on $\bN$, the process $X$ is a birth-and-death process. It is known that the transition density of $X$ admits a spectral representation: there exists a jointly continuous function $p(t,x,y)$ and a Radon measure $\rho_{w}$ on $(0,\infty)$ such that \begin{align} \bP_x[X_t \in dy] = p(t,x,y)dw(y) \quad (t > 0,x,y \in (0,b)), \label{} \end{align} and \begin{align} p(t,x,y) = \int_{0}^{\infty}\mathrm{e}^{-\lambda t}\psi_{-\lambda}(x)\psi_{-\lambda}(y)\rho_{w}(d\lambda) \quad (t > 0,x,y \in (0,b)), \label{diffusion_transition} \end{align} (see e.g., McKean \cite{McKean:elementary}). We call the measure $\rho_{w}$ the spectral measure of $\frac{d}{dw}\frac{d}{dx}$. \subsection{Strings and spectral measures} \label{section:krein_theory} For a string $w$, its dual string $w^{-1}$ defined by \begin{align} w^{-1}(x) := \inf \{ y > 0 \mid w(y) > x \} \quad (x \geq 0) \label{} \end{align} is also a string. We denote by $\rho_{w}$ (resp. $\sigma_{w}$) the spectral measure of $\frac{d}{dw}\frac{d}{dx}$ with Dirichlet (resp. Neumann) boundary condition at $0$, and if the boundary $\ell(w)$ is regular, we assume the Dirichlet boundary condition at $\ell(w)$.
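The dual string can be computed mechanically from the jump data of a step string; a small sketch (the jump locations and sizes below are arbitrary illustrative values):

```python
import bisect

xs = [0.5, 1.2, 2.0, 3.5]        # jump locations of a step string w (assumptions)
hs = [1.0, 0.7, 2.3, 0.4]        # jump sizes
cum = []
tot = 0.0
for h in hs:
    tot += h
    cum.append(tot)              # cum[k] = w(xs[k])

def w(x):
    """right-continuous step string with the jumps above (w(0) = 0)."""
    k = bisect.bisect_right(xs, x)
    return cum[k - 1] if k else 0.0

def w_inv(y):
    """dual string w^{-1}(y) = inf{x > 0 : w(x) > y}."""
    k = bisect.bisect_right(cum, y)
    return xs[k] if k < len(xs) else float("inf")

assert w_inv(0.9) == 0.5         # w exceeds 0.9 from the first jump on
assert w_inv(1.0) == 1.2         # at a plateau value, the infimum is the next jump
assert w_inv(10.0) == float("inf")
```

One can check on such examples that $w^{-1}$ is again non-decreasing and right-continuous, i.e. a string, as stated.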
Then it is known that the following relation holds: \begin{align} \rho_{w}(d\lambda) = \lambda \sigma_{w^{-1}}(d\lambda) \quad \text{on} \ [0,\infty) \label{spectral-correspondence} \end{align} (see Yano \cite[Theorem 2.2]{Yano:Excusionmeasure}). Thus we can obtain the spectral measure from the one corresponding to the dual string. This relation is useful because $\sigma_{w}$ can sometimes be obtained by spectral theory, as we will explain below. Let $w$ be a string. For $\lambda \in \bC$, define $u = \varphi_\lambda$ as the unique solution of the following equation: \begin{align} u(x) = 1 + \lambda \int_{0}^{x}dy\int_{0-}^{y}u(z)dw(z) \quad (x \geq 0), \label{phi} \end{align} and define $u = \psi_{\lambda}$ as the unique solution of the following equation: \begin{align} u(x) = x + \lambda \int_{0}^{x}dy\int_{0}^{y}u(z)dw(z) \quad (x \geq 0). \label{psi} \end{align} Define the spectral characteristic function $h$ by \begin{align} h(\lambda) := \int_{0}^{\ell(w)}\frac{dx}{\varphi_\lambda(x)^2} \quad (\lambda > 0). \label{} \end{align} Note that the following equality holds: \begin{align} \int_{0}^{\ell(w)}\frac{dx}{\varphi_\lambda(x)^2} = \lim_{x \to \ell(w)}\frac{\psi_{\lambda}(x)}{\varphi_\lambda(x)}. \label{eq115} \end{align} Indeed, since the functions $\varphi_\lambda$ and $\psi_{\lambda}$ are $\lambda$-eigenfunctions for $\frac{d}{dw}\frac{d^+}{dx}$, it holds \begin{align} d\left(\left(\frac{d^+}{dx}\psi_{\lambda}\right)\varphi_\lambda - \left(\frac{d^+}{dx}\varphi_\lambda\right)\psi_{\lambda}\right) = 0, \label{} \end{align} and it is clear that $\left.\left(\frac{d^+}{dx}\psi_{\lambda}\right)\varphi_\lambda - \left(\frac{d^+}{dx}\varphi_\lambda\right)\psi_{\lambda}\right|_{x=0} = 1$. Thus $\left(\frac{d^+}{dx}\psi_{\lambda}\right)\varphi_\lambda - \left(\frac{d^+}{dx}\varphi_\lambda\right)\psi_{\lambda} = 1$.
Thus it follows \begin{align} d\left(\frac{\psi_{\lambda}}{\varphi_\lambda}\right) = \frac{(\frac{d^+}{dx}\psi_{\lambda})\varphi_\lambda - (\frac{d^+}{dx}\varphi_\lambda)\psi_{\lambda}}{\varphi_\lambda^2}dx = \frac{dx}{\varphi_\lambda^2}, \label{} \end{align} and \eqref{eq115} holds. By the spectral theory for generalized second-order differential operators, it is known that the function $h$ is represented by \begin{align} h(\lambda) = c + \int_{0-}^{\infty}\frac{\sigma_{w}(d\xi)}{\lambda + \xi} \quad (\lambda > 0) \label{char_func} \end{align} for $c = \inf \Supp dw$. Here we note that $\sigma_{w}\{ 0 \} = 1 / w(\infty)$ (see \cite[p.239]{KotaniWatanabe}). Thus we can obtain $\sigma_{w}$ by the Stieltjes inversion formula: \begin{align} \sigma_{w}(I) = -\frac{1}{\pi}\lim_{\eps \to + 0} \int_{I}\Im h(-\lambda + i \eps)d \lambda \quad \text{for }I \subset [0,\infty) \text{ with } \sigma_{w}(\partial I) = 0. \label{} \end{align} \subsection{Spectral measures of birth-and-death processes} \label{section:dual_process} Let us consider a birth-and-death process $X$ whose $Q$-matrix is given by \eqref{q-matrix}. To apply the theory of generalized diffusions, we need a string which defines the generalized diffusion equivalent to $X$. For the scale function $s$ defined in \eqref{scale-function}, we denote its linear interpolation by the same symbol: \begin{align} s(x) := (i+1-x)s(i) + (x-i)s(i+1) \quad \text{for} \quad i \geq 0 \quad \text{such that} \quad i \leq x < i+1. \label{} \end{align} Define a string $m$ on $(0,\infty)$ by \begin{align} m(x) = \sum_{i=1}^{\infty}\pi_{i} 1 \{ i \leq x \} \quad (x > 0), \label{} \end{align} where $\pi_{i}$ is given in \eqref{speed-measure}. Then \begin{align} w := m \circ s^{-1} \label{stringmBD} \end{align} defines a string and the generalized diffusion $\tilde{X}$ corresponding to $w$ has its state space $\{ s(i) \}_{i \geq 0}$. We check that the generalized diffusion $s^{-1}(\tilde{X})$ is a realization of $X$. 
Recalling that the generalized diffusions are obtained by the time change of a Brownian motion by $A^{-1}$ for $A$ in \eqref{time-change}, we can see \begin{align} \bP_{s(i)}[\tilde{X}_{\tau} = s(i+1)] &= \frac{1/(\pi_{i-1}\lambda_{i-1})}{1/(\pi_{i}\lambda_{i}) + 1/(\pi_{i-1}\lambda_{i-1})} = \frac{\lambda_{i}}{\lambda_{i} + \mu_{i}} \quad (i \geq 1), \label{eq127} \\ \bP_{s(i)}[\tilde{X}_{\tau} = s(i-1)] &= 1 - \bP_{s(i)}[\tilde{X}_{\tau} = s(i+1)] = \frac{\mu_{i}}{\lambda_{i} + \mu_{i}} \quad (i \geq 1), \label{eq128} \end{align} where $\tau := \inf \{t > 0 \mid \tilde{X}_t \neq \tilde{X}_0\}$ denotes the time of the first jump (see e.g., \cite[Theorem 23.7]{Kallenberg} and \cite[Proposition II.3.8]{RevusYor}). Since $\tau$ is exponentially distributed, its distribution is specified by its mean. Recall the following well-known formula (see e.g., \cite[Lemma 23.10]{Kallenberg}): \begin{align} \bE_{x}\left[\int_{0}^{T_{a}\wedge T_{b}}f(\tilde{X}_t)dt\right] = \int_{\Supp dw \cap (a,b)}\frac{(x \wedge y - a)(b - x \vee y ) }{b-a}f(y)dw(y) \label{} \\ (a,b,x \in \Supp dw, a < x < b), \nonumber \end{align} where $f$ is a non-negative measurable function. Applying this formula for $x = s(i),\ a = s(i-1),\ b = s(i+1)$ and $f = 1$, we have \begin{align} \bE_{s(i)}\tau &= \frac{(s(i) - s(i-1))(s(i+1)- s(i)) }{s(i+1) - s(i-1)}\pi_i = (\lambda_{i} + \mu_{i})^{-1} \quad (i \geq 1). \label{eq129} \end{align} Thus from \eqref{eq127}, \eqref{eq128} and \eqref{eq129}, we see that the process $s^{-1}(\tilde{X})$ is equivalent to $X$. From \eqref{spectral-correspondence} and \eqref{char_func}, we can obtain the spectral measure $\rho_{w}$ of $\tilde{X}$ (or $X$) through the eigenfunctions of $\frac{d}{dw^{-1}}\frac{d^{+}}{dx}$. Since the eigenfunctions of $\frac{d}{dw^{-1}}\frac{d^{+}}{dx}$ can be computed from those of $\frac{d}{dw}\frac{d^{+}}{dx}$, we have a formula to compute $\rho_{w}$ from the eigenfunctions of $\frac{d}{dw}\frac{d^+}{dx}$. Here we go into the details.
It is not difficult to see that the dual string $w^{-1}$ of $w$ is equal to $s \circ m^{-1}$ and for $x \geq 0$ we have \begin{align} w^{-1}(x) = s(i) \quad \text{for} \quad i \geq 1 \quad \text{such that} \quad m(i-1) \leq x < m(i). \label{} \end{align} Let $\varphi^{(-)}_\lambda$ and $\psi^{(-)}_{\lambda}$ be the functions given in \eqref{phi} and \eqref{psi} for $w = w^{-1}$, respectively. Then as we have seen in the previous section, it holds \begin{align} \lim_{x \to \ell(w^{-1})}\frac{\psi^{(-)}_{\lambda}(x)}{\varphi^{(-)}_{\lambda}(x)} &= \int_{0-}^{\infty}\frac{\sigma_{w^{-1}}(d\xi)}{\lambda + \xi} \label{} \\ &= \frac{1}{\lambda\ell(w)} + \int_{0}^{\infty}\frac{\rho_{w}(d\xi)}{\xi(\lambda + \xi)} \quad (\lambda > 0), \label{} \end{align} where we used the obvious fact $w^{-1}(\infty) = \ell(w)$. Note that $\inf \Supp dw^{-1} = 0$ since $w^{-1}(0) = s(1) > 0$. We consider the eigenfunctions of $\frac{d}{dx}\frac{d}{dw}$, which we denote by $u=\varphi^{d}_{\lambda}$, $v=\psi^{d}_{\lambda} \ (\lambda \in \bC)$, defined as the unique solutions of the following equations, respectively: \begin{align} u(x) = 1 + \lambda \int_{0}^{x}dw(y)\int_{0}^{y}u(z)dz \quad (x \geq 0) \label{} \end{align} and \begin{align} v(x) = w(x) + \lambda \int_{0}^{x}dw(y)\int_{0}^{y}v(z)dz \quad (x \geq 0). \label{} \end{align} It is not difficult to check that \begin{align} \varphi^{(-)}_{\lambda}(w(x)) = \varphi^{d}_{\lambda}(x) \quad \text{and} \quad \psi^{(-)}_{\lambda}(w(x)) = \psi^{d}_{\lambda}(x) \quad (x \geq 0). \label{} \end{align} We may also easily see that \begin{align} \varphi^{d}_{\lambda}(x) = \psi^{+}_{\lambda}(x) \quad \text{and} \quad \psi^{d}_{\lambda}(x) = \frac{1}{\lambda}\varphi^{+}_{\lambda}(x) \quad (x \geq 0), \label{} \end{align} where $\varphi_\lambda$ and $\psi_{\lambda}$ are the solutions of \eqref{phi} and \eqref{psi} for $w$ in \eqref{stringmBD}, respectively.
Eventually, it follows for $\lambda> 0$ \begin{align} \lim_{x \to \ell(w)}\frac{\varphi^{+}_{\lambda}(x)}{\psi^{+}_{\lambda}(x)} = \lim_{x \to \ell(w)}\frac{\lambda\psi^{d}_\lambda(x)}{\varphi^{d}_{\lambda}(x)} = \lim_{x \to \ell(w)}\frac{\lambda\psi^{(-)}_{\lambda}(w(x))}{\varphi^{(-)}_{\lambda}(w(x))} =\frac{1}{\ell(w)} + \lambda \int_{0}^{\infty}\frac{\rho_{w}(d\xi)}{\xi(\lambda+\xi)}. \label{exitKrein} \end{align} Thus, we obtain the Stieltjes transform of $\xi^{-1}\rho_{w}(d\xi)$ through the eigenfunctions $\varphi_{\lambda}$ and $\psi_{\lambda}$. By the Stieltjes inversion formula, we can compute $\rho_{w}$ in some cases. See Section \ref{section:exBD} for such examples. \subsection{Doob's $h$-transform} Here we recall some basic properties of the $h$-transform for generalized diffusions (see e.g., Takemura and Tomisaki \cite{TakemuraTomisaki:htransform} for details). Since we are interested in the first hitting time of $0$, we restrict our attention to the case of generalized diffusions corresponding to a string $w$ with $\ell(w) = \infty$. Let $w$ be a string and let $\gamma \geq 0$. Let $k_{\gamma}$ be a positive $\gamma$-eigenfunction for $\frac{d}{dw}\frac{d^+}{dx}$, that is, $k_{\gamma}(x) > 0 \ (x > 0)$ and $u = k_{\gamma}$ is the unique solution of the following equation: \begin{align} u(x) = a + b(x - c) + \gamma \int_{c}^{x}dy\int_{c-}^{y}u(z)dw(z) \quad (x > 0) \label{integral-eq} \end{align} for some $a, b \in \bR$ and $c > 0$. Here we note that for $\gamma \geq 0$ the function \begin{align} g_\gamma(x) := \psi_{\gamma}(x)\int_{x}^{\infty} \frac{dy}{\psi_{\gamma}(y)^2} \quad (x \geq 0) \label{} \end{align} is the unique non-increasing solution of \eqref{integral-eq} satisfying \begin{align} u(0) = 1, \quad \lim_{x \to \infty}\frac{d^+}{dx}u(x) = 0 \label{} \end{align} (see e.g., It\^o \cite{Ito_essentials} for details).
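For a purely atomic string $dw = \sum_i m_i\delta_{x_i}$, the function $\psi_{\gamma}$ of \eqref{psi} is piecewise linear, with its right derivative jumping by $\gamma m_i \psi_{\gamma}(x_i)$ at each atom, so $g_\gamma$ can be evaluated segment by segment. A numerical sketch (the atom locations, masses, and $\gamma$ below are arbitrary illustrative assumptions) checking that $g_\gamma$ is non-increasing and that its derivative jumps by $\gamma m_i g_\gamma(x_i)$ at each atom:

```python
gamma = 0.8                       # assumed eigenvalue γ
xs = [0.5, 1.0, 1.8, 2.5]         # atom locations of dw (assumptions)
ms = [1.0, 0.4, 0.9, 1.2]         # atom masses

# propagate ψ_γ: ψ(0) = 0, initial slope 1; the slope jumps by γ·m·ψ at each atom
pts = []                          # (x, ψ(x), right slope, left slope)
psi, d, x_prev = 0.0, 1.0, 0.0
for x, m in zip(xs, ms):
    psi += d * (x - x_prev)
    d_left = d
    d += gamma * m * psi
    pts.append((x, psi, d, d_left))
    x_prev = x

# I[i] = ∫_{x_i}^{∞} ψ(y)^{-2} dy, summed over the linear pieces of ψ
I = [0.0] * len(pts)
xN, pN, dN, _ = pts[-1]
I[-1] = 1.0 / (dN * pN)           # tail: ψ is linear with slope dN beyond x_N
for i in range(len(pts) - 2, -1, -1):
    _, p0, d0, _ = pts[i]
    p1 = pts[i + 1][1]
    I[i] = I[i + 1] + (1.0 / d0) * (1.0 / p0 - 1.0 / p1)

g = [p * Ii for (_, p, _, _), Ii in zip(pts, I)]
assert all(g[i] >= g[i + 1] for i in range(len(g) - 1))      # g_γ is non-increasing

# kink condition: d⁺g − d⁻g = γ m g at each atom, since g'(y) = ψ'(y)I(y) − 1/ψ(y)
for (x, p, dr, dl), Ii, m, gi in zip(pts, I, ms, g):
    assert abs((dr - dl) * Ii - gamma * m * gi) < 1e-12
```

The kink condition is exactly the statement that $g_\gamma$ solves \eqref{integral-eq} for this atomic $dw$.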
The $h$-transform $\cL^{[k_{\gamma}]}$ of $ \cL = \frac{d}{dw}\frac{d}{dx}$ by $k_{\gamma}$ is defined by \begin{align} \cL^{[k_{\gamma}]} := \frac{1}{k_{\gamma}}\left(\frac{d}{dw}\frac{d}{dx} - \gamma \right)k_{\gamma}, \label{} \end{align} and we can easily check that \begin{align} \cL^{[k_{\gamma}]} = \frac{d}{dw^{[k_{\gamma}]}}\frac{d}{ds^{[k_{\gamma}]}} \label{} \end{align} for \begin{align} dw^{[k_{\gamma}]} = k_{\gamma}^2 dw, \quad ds^{[k_{\gamma}]} = k_{\gamma}^{-2} dx. \label{eq117} \end{align} Note that when $k_{\gamma}(0) > 0$, the boundary classification for the boundary $0$ is not changed by the transform. As for the boundary $\infty$, it is entrance or natural depending on whether \begin{align} \int_{1}^{\infty}k_{\gamma}(x)^{-2}dx\int_{x}^{\infty}k_{\gamma}(y)^2dw(y) \label{} \end{align} is finite or not. Let us consider the case of $k_{\gamma}(0) = 1$. When we denote the functions $\psi_{\lambda}$ and $g_\lambda$ for $\cL^{[k_{\gamma}]} \ ( \gamma \geq 0)$ by $\psi_{\lambda}^{[k_{\gamma}]}$ and $g_\lambda^{[k_{\gamma}]}$, it holds \begin{align} \psi_{\lambda}^{[k_{\gamma}]} = \frac{\psi_{\lambda + \gamma}}{k_{\gamma}}, \quad g_\lambda^{[k_{\gamma}]} = \frac{g_{\lambda + \gamma}}{k_{\gamma}} \quad (\lambda \in \bC). \label{eigen_shift} \end{align} Moreover, the spectral measure $\rho_{w^{[k_{\gamma}]}}$ of $\cL^{[k_{\gamma}]}$ is given by \begin{align} \rho_{w^{[k_{\gamma}]}}(d\lambda) = \rho_{w}(d(\lambda - \gamma)) \quad \text{on} \ (\gamma, \infty). \label{spectral_shift} \end{align} Thus the $h$-transformed transition probability subject to the absorbing boundary at $0$ and the first hitting time densities are \begin{align} P^{[k_\gamma]}_t(x,dy) = \mathrm{e}^{-\gamma t}\frac{k_{\gamma}(y)}{k_\gamma(x)} P_t(x,dy) \quad \text{and} \quad f^{[k_{\gamma}]}_x(t) = \frac{\mathrm{e}^{-\gamma t}}{k_{\gamma}(x)}f_x(t).
\label{h-transformed_density} \end{align} \subsection{$h$-transform for birth-and-death processes} Let $w$ be a string such that $dw$ is supported on a discrete countable set $\{a_i\}_{i \geq 1} \quad (0 < a_1 < a_2 < \cdots)$; $dw = \sum_{i = 1}^{\infty}\pi_{i}\delta_{a_i} \quad (\pi_{i} > 0)$. As we have seen in Section \ref{section:dual_process}, the generalized diffusion corresponding to $w$ is equivalent to a birth-and-death process. For $\gamma \geq 0$, let $k_\gamma$ be a $\gamma$-eigenfunction for $\frac{d}{dw}\frac{d}{dx}$: \begin{align} k_\gamma(x) = a + bx + \gamma \int_{0}^{x}dy\int_{0}^{y}k_\gamma(z)dw(z) \quad (x > 0) \label{eq126} \end{align} for some constants $a, b\in \bR$. Note that since the boundary $0$ for birth-and-death processes is always regular, every $\gamma$-eigenfunction may be represented in the form \eqref{eq126}. Set $a_0 := 0$. The function $k_\gamma$ is linear on each interval $[a_i,a_{i+1}] \ (i = 0,1,2,\cdots)$. Indeed, for $i = 0,1,2,\cdots,$ and $\delta \in [0, a_{i+1} - a_i)$, since $dw$ is supported on $\{ a_i \}_{i \in \bN}$, it holds \begin{align} k_{\gamma}(a_i + \delta) - k_{\gamma}(a_i) &= b \delta + \gamma \int_{a_i}^{a_i+\delta}dy\int_{0}^{y}k_\gamma(z)dw(z) \label{} \\ &=b\delta + \gamma \int_{a_i}^{a_i+\delta}dy\int_{0}^{a_i}k_\gamma(z)dw(z) \label{} \\ &=b\delta + \gamma \delta \int_{0}^{a_i}k_\gamma (z)dw(z) \label{} \\ &=\left( b + \gamma \int_{0}^{a_i}k_\gamma(z)dw(z) \right)\delta. \label{} \end{align} Suppose $k_{\gamma}(x) > 0$ for all $x \geq 0$. For the scale function $s^{[k_\gamma]}$ defined by \eqref{eq117}, we have \begin{align} s^{[k_{\gamma}]}(a_{i+1}) - s^{[k_{\gamma}]}(a_i) &= \int_{a_i}^{a_{i+1}}k_{\gamma}(x)^{-2}dx \label{} \\ &= \int_{a_i}^{a_{i+1}}\frac{(a_{i+1} - a_{i})^2}{(x(k_{\gamma}(a_{i+1}) - k_{\gamma}(a_i)) + a_{i+1}k_{\gamma}(a_i) - a_ik_{\gamma}(a_{i+1}))^2}dx \label{} \\ &= \frac{a_{i+1} - a_{i}}{ k_{\gamma}(a_i)k_{\gamma}(a_{i+1})}.
\label{BD-h-transformed_s} \end{align} For the birth-and-death process corresponding to the $h$-transformed generalized diffusion, let us compute the birth and death rates $\{\lambda_i^{[k_\gamma]}\}$ and $\{ \mu^{[k_\gamma]}_i \}$. By the same argument as in Section \ref{section:dual_process}, we see that \begin{align} \frac{\lambda_i^{[k_\gamma]}}{\mu_i^{[k_\gamma]}} = \frac{s^{[k_\gamma]}(a_{i}) - s^{[k_\gamma]}(a_{i-1})}{s^{[k_\gamma]}(a_{i+1}) - s^{[k_\gamma]}(a_{i})} = \frac{a_{i} - a_{i-1}}{a_{i+1} - a_{i}}\cdot\frac{k_{\gamma}(a_{i+1})}{k_\gamma(a_{i-1})} \label{} \end{align} and \begin{align} (\lambda_i^{[k_\gamma]} + \mu_i^{[k_\gamma]})^{-1} &= \frac{(s^{[k_\gamma]}(a_{i}) - s^{[k_\gamma]}(a_{i-1}))(s^{[k_\gamma]}(a_{i+1}) - s^{[k_\gamma]}(a_{i}))}{s^{[k_\gamma]}(a_{i+1}) - s^{[k_\gamma]}(a_{i-1})} k_\gamma(a_{i})^2\pi_{i}. \label{} \end{align} Solving this, we obtain \begin{align} \lambda_i^{[k_\gamma]} = \frac{1}{\pi_{i}(a_{i+1} - a_{i})}\cdot\frac{k_{\gamma}(a_{i+1})}{k_{\gamma}(a_{i})} \quad \text{and} \quad \mu_i^{[k_\gamma]} = \frac{1}{\pi_{i}(a_{i} - a_{i-1})}\cdot\frac{k_{\gamma}(a_{i-1})}{k_{\gamma}(a_{i})}. \label{} \end{align} We will apply this formula in Section \ref{section:exBD}. We can also see how the matrix $C$ is changed under an $h$-transform. Using \eqref{eigen_shift}, we can compute the $\theta$-eigenfunction $\psi_{\theta}^{[k_\gamma]}$ as \begin{align} \psi^{[k_\gamma]}_{\theta}(i) &= \frac{\psi_{\theta + \gamma}(i)}{k_{\gamma}(i)} \label{} \\ &= \frac{1}{k_{\gamma}(i)}\sum_{l=1}^{i}C(i,l)(\theta+\gamma)^{l-1} \label{} \\ &= \frac{1}{k_{\gamma}(i)}\sum_{l=1}^{i}C(i,l)\sum_{j = 1}^{l} \binom{l-1}{j-1} \theta^{j-1}\gamma^{l-j} \label{} \\ &= \frac{1}{k_{\gamma}(i)}\sum_{j = 1}^{i}\theta^{j-1} \sum_{l=j}^{i} \binom{l-1}{j-1} C(i,l) \gamma^{l-j}.
\label{} \end{align} Thus we now determine the matrix $C$ corresponding to the $h$-transformed birth-and-death process, which we denote by $C^{[k_\gamma]}$: \begin{align} C^{[k_\gamma]}(i,j) = \frac{1}{k_{\gamma}(i)}\sum_{l=j}^{i} \binom{l-1}{j-1} C(i,l) \gamma^{l-j}. \label{h-transformed_C-matrix} \end{align} \section{Examples} \label{section:exBD} Here we give examples of birth-and-death processes and compute the corresponding matrices $C$. \subsection{Symmetric random walk} \label{ex:sym-RW} Let us consider the case $\lambda_{i} = \mu_i = \kappa > 0$ for every $i \geq 1$. In this case, \begin{align} \pi_{i} = 1 \quad (i \geq 1), \quad s(i) = i/\kappa \quad (i \geq 0) \label{} \end{align} and \begin{align} Qf(i) = \kappa (f(i+1) - 2 f(i) + f(i-1) ) \quad (i \geq 1). \label{} \end{align} Solving the recurrence relation $Qf = \theta f $ for $\theta \in \bR$, we have the following linearly independent solutions $k_\pm$: \begin{align} k_\pm(i) := \alpha^{\kappa}_{\pm}(\theta)^{i} \quad \text{for} \quad \alpha^{\kappa}_{\pm}(\theta) = \left( 1 + \frac{\theta}{2\kappa} \right) \pm \sqrt{\left( 1 + \frac{\theta}{2\kappa} \right)^2 -1} \quad (i \geq 0), \label{eq118} \end{align} where we note that $\alpha^{\kappa}_\pm(\theta)$ are the solutions of the quadratic equation: $\kappa(\alpha^2 - 2\alpha + 1) = \theta \alpha$. We note that $\alpha^{\kappa}_+(\theta)\alpha^{\kappa}_{-}(\theta) = 1$. Thus, it follows \begin{align} \varphi_\theta(i) = \left(\frac{1}{2} - \frac{\theta}{2\sqrt{\theta^2 + 4\kappa \theta}}\right) \alpha^{\kappa}_+(\theta)^i + \left(\frac{1}{2} + \frac{\theta}{2\sqrt{\theta^2 + 4\kappa \theta}}\right) \alpha^{\kappa}_-(\theta)^i \label{} \end{align} and \begin{align} \psi_{\theta}(i) = \frac{1}{\sqrt{\theta^2 + 4\kappa \theta}}(\alpha^{\kappa}_+(\theta)^i - \alpha^{\kappa}_-(\theta)^i). 
\label{} \end{align} Since it holds \begin{align} \lim_{i \to \infty}\frac{\varphi^{+}_{\theta}(i)}{\psi^{+}_\theta(i)} = \lim_{i \to \infty} \frac{\varphi_{\theta}(i+1) - \varphi_{\theta}(i)}{\psi_{\theta}(i+1) - \psi_{\theta}(i)} = \frac{2\kappa \theta}{\theta + \sqrt{\theta^2 + 4\kappa \theta}} \quad (\theta > 0), \label{} \end{align} it follows from \eqref{exitKrein} that \begin{align} \frac{2\kappa}{\theta + \sqrt{\theta^2 + 4\kappa \theta}} = \int_{0}^{\infty}\frac{\rho_{w}(d\xi)}{\xi(\theta + \xi)} \quad (\theta > 0). \label{} \end{align} By the Stieltjes inversion formula, we have \begin{align} \rho_{w}(d\theta) = \frac{\sqrt{\theta(4\kappa - \theta)}}{2\pi}1\{ 0 < \theta < 4\kappa\}d\theta. \label{} \end{align} For $\theta \in (0,4\kappa)$ we can easily see \begin{align} \psi_{-\theta}(i) = \frac{\sin (\beta(\theta) i)}{\kappa \sin (\beta(\theta))} = \frac{1}{\kappa}U_{i-1}(\cos (\beta(\theta))), \label{} \end{align} where $\beta(\theta) = \arctan \left( \frac{\sqrt{4\kappa\theta - \theta^2}}{2\kappa - \theta}\right)$ and $U_{i-1}$ is the $(i-1)$-th Chebyshev polynomial of the second kind. Note that the Chebyshev polynomials are characterized by \begin{align} U_{k}(\cos t) = \frac{\sin ((k+1)t)}{\sin t} \quad (k \geq 0, \ t \in \bR) \label{} \end{align} (see e.g., \cite[p.218]{Specialfunction}). Note that $\cos (\beta(\theta)) = 1 - \frac{\theta}{2\kappa}$. When we write \begin{align} U_{k}(\theta) = \sum_{l = 0}^{k}u(k,l)\theta^l, \label{} \end{align} it holds \begin{align} \psi_{\theta}(i) &= \frac{1}{\kappa}U_{i-1}\left(1 + \frac{\theta}{2\kappa}\right) \label{} \\ &= \frac{1}{\kappa}\sum_{l=1}^{i}u(i-1,l-1)(1 + \theta /2\kappa)^{l-1} \label{} \\ &= \frac{1}{\kappa}\sum_{l=1}^{i}u(i-1,l-1)\sum_{j=1}^{l}\binom{l-1}{j-1} \left(\frac{\theta}{2\kappa}\right)^{j-1} \label{} \\ &= \frac{1}{\kappa}\sum_{j=1}^{i} \left(\frac{\theta}{2\kappa}\right)^{j-1} \sum_{l=j}^{i}u(i-1,l-1)\binom{l-1}{j-1}.
\label{} \end{align} Thus it follows \begin{align} C(i,j) = \frac{1}{2^{j-1}\kappa^{j}} \sum_{l=j}^{i} \binom{l-1}{j-1} u(i-1,l-1). \label{} \end{align} Note that from \cite[p.219]{Specialfunction}, it holds \begin{align} u(i,j) = \begin{cases} (-1)^{n-k} \binom{n+k}{n-k}2^{2k} & (i = 2n,\ j = 2k, \ 0 \leq k \leq n), \\ (-1)^{n-k} \binom{n+k+1}{n-k}2^{2k+1} & (i = 2n+1,\ j = 2k+1, \ 0 \leq k \leq n), \\ 0 & \text{otherwise.} \end{cases} \label{} \end{align} \subsection{Asymmetric random walk} \label{Ex:asymRW} The asymmetric random walk with birth rate $\lambda$ and death rate $\mu$ can be obtained from the symmetric one by an $h$-transform. We keep the notation of Section \ref{ex:sym-RW}. Recall that for $\theta > 0$, $k_+(i)$ and $k_-(i)$ are positive increasing and decreasing $\theta$-eigenfunctions given in \eqref{eq118}, respectively. Fix $\gamma \geq 0$. From \eqref{BD-h-transformed_s}, the speed measure $\pi^{\pm}$ and scale function $s^{\pm}$ of the symmetric random walk of the previous section transformed by $k_{\pm}(i) := \alpha^{\kappa}_{\pm}(\gamma)^{i} $ are \begin{align} \pi^{\pm}_i = \alpha^{\kappa}_{\pm}(\gamma)^{2i}, \quad s^{\pm}(i) - s^{\pm}(i-1) = \kappa^{-1}\alpha^{\kappa}_{\pm}(\gamma)^{-2i+1} \quad (i \geq 1). \label{h-ms} \end{align} Thus its birth and death rates are given by \begin{align} \lambda^{\pm}_{i} = \kappa \alpha^{\kappa}_{\pm}(\gamma), \quad \mu^{\pm}_i = \kappa \alpha^{\kappa}_{\pm}(\gamma)^{-1} = \kappa\alpha^{\kappa}_{\mp}(\gamma) \quad (i \geq 1). \label{h-lm} \end{align} The transition probability and the first hitting time densities can be easily obtained by \eqref{h-transformed_density}. We can also compute the matrix $C$ corresponding to these processes, which we denote by $C^{\pm}_{\gamma}$. By \eqref{h-transformed_C-matrix}, we have \begin{align} C^{\pm}_{\gamma}(i,j) &= \frac{1}{\alpha^{\kappa}_{\pm}(\gamma)^i}\sum_{l=j}^{i}\binom{l-1}{j-1}C(i,l)\gamma^{l-j}
\label{} \\ &= \frac{1}{\alpha^{\kappa}_{\pm}(\gamma)^i}\sum_{l=j}^{i}\binom{l-1}{j-1}\frac{\gamma^{l-j}}{2^{l-1}\kappa^{l}}\sum_{m = l}^{i}\binom{m-1}{l-1}u(i-1,m-1) \label{} \\ &= \frac{1}{\alpha^{\kappa}_{\pm}(\gamma)^i} \sum_{m = j}^{i} u(i-1,m-1) \sum_{l=j}^{m} \binom{l-1}{j-1} \binom{m-1}{l-1} \frac{\gamma^{l-j}}{2^{l-1}\kappa^{l}}. \label{} \end{align} We may represent every asymmetric random walk as an $h$-transform of a symmetric one. Let $\lambda, \mu > 0$. Set $\kappa = \sqrt{\lambda \mu}$. When $\lambda > \mu$, we can take $\gamma > 0$ so that \begin{align} \alpha^{\kappa}_{+}(\gamma)^2 = \frac{\alpha^{\kappa}_+(\gamma)}{\alpha^{\kappa}_-(\gamma)} = \frac{\lambda}{\mu} \label{} \end{align} since $\alpha^{\kappa}_+ :[0,\infty) \to [1,\infty)$ is an increasing homeomorphism. Then it holds \begin{align} \lambda^+_i = \kappa \alpha^{\kappa}_+(\gamma) = \lambda \quad \text{and} \quad \mu^+_i = \kappa \alpha^{\kappa}_+(\gamma)^{-1} = \mu. \label{} \end{align} When $\lambda < \mu$, in the same way as above, we can take $\kappa = \sqrt{\lambda \mu}$ and $\gamma > 0$ such that $\alpha^{\kappa}_-(\gamma)^2 = \lambda / \mu$. Then it holds \begin{align} \lambda^-_i = \kappa \alpha^{\kappa}_{-}(\gamma) = \lambda \quad \text{and} \quad \mu^-_i = \kappa \alpha^{\kappa}_{-}(\gamma)^{-1} = \mu. \label{} \end{align}
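The closed-form coefficients $C(i,j)$ of Section \ref{ex:sym-RW} can be cross-checked against the general recursion of Proposition \ref{matrix_C}; a sketch in exact arithmetic (the value of $\kappa$ is an arbitrary illustrative assumption):

```python
from fractions import Fraction as F
from math import comb

kappa = F(3, 2)                   # illustrative value of κ
N = 9

# recursion of Proposition (matrix_C) for λ_i = μ_i = κ (π_i = 1, s(i) = i/κ)
C = {}
for i in range(N + 1):
    C[(i, 0)] = F(0)
    C[(i, 1)] = F(i) / kappa
for j in range(2, N + 1):
    for i in range(j):
        C[(i, j)] = F(0)
    C[(j, j)] = C[(j - 1, j - 1)] / kappa
    for i in range(j, N):
        C[(i + 1, j)] = (C[(i, j - 1)] - kappa * C[(i - 1, j)]
                         + 2 * kappa * C[(i, j)]) / kappa

# Chebyshev coefficients u(k, l): U_0 = 1, U_1 = 2x, U_k = 2x U_{k-1} - U_{k-2}
U = [[1], [0, 2]]
for k in range(2, N):
    prev = [0] + [2 * c for c in U[-1]]
    pad = U[-2] + [0] * (len(prev) - len(U[-2]))
    U.append([a - b for a, b in zip(prev, pad)])

def u(k, l):
    return U[k][l] if l < len(U[k]) else 0

# closed form: C(i,j) = 2^{1-j} κ^{-j} Σ_{l=j}^{i} binom(l-1, j-1) u(i-1, l-1)
for i in range(1, N):
    for j in range(1, i + 1):
        coeff = sum(comb(l - 1, j - 1) * u(i - 1, l - 1) for l in range(j, i + 1))
        assert C[(i, j)] == F(coeff, 2 ** (j - 1)) / kappa ** j
```

The two computations of $C(i,j)$ agree exactly, which also checks the tabulated coefficient expansion of $U_{i-1}$.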
\section{Introduction} The last two decades have witnessed great progress in the QCD description of hard exclusive processes, in terms of generalized parton distributions (GPDs) \cite{GPD} describing the 3-dimensional content of hadrons \cite{impact}. Much hope rests on a meaningful extraction of the dominant (i.e. chiral-even) GPDs from near-future JLab12 experiments. To increase our confidence in this extraction, however, one needs to probe various processes, thus verifying the universality of the GPDs. On the other hand, access to the chiral-odd transversity GPDs~\cite{defDiehl}, noted $H_T$, $E_T$, $\tilde{H}_T$, $\tilde{E}_T$, which decouple from deeply virtual Compton scattering and deeply virtual meson production at leading order, has turned out to be even more challenging~\cite{DGP} than that of the usual transversity distributions: one-photon or one-meson electroproduction leading twist amplitudes are insensitive to transversity GPDs. Quark mass effects \cite{PS2015} or the production of a meson described by a twist-3 distribution amplitude \cite{liuti} are two ways to evade this difficulty. The alternative strategy followed in Refs.~\cite{IPST,PLB} was to study the leading twist contribution to processes where two mesons (denoted $A$ and $B$) are present in the final state. The hard scale which allows one to probe the short distance structure of the nucleon is the invariant squared mass of the meson pair $s=M_{A,B}^2\, \sim |t'|$ in the fixed angle regime. A similar strategy has also been advocated in Ref.~\cite{kumano} for chiral-even GPDs. We study the process: \begin{equation} \gamma ^{(*)}(q)+ N (p_1) \rightarrow \gamma(k) + \rho (p_\rho, \epsilon_\rho) + N' (p_2)\,, \label{process} \end{equation} where $\epsilon_\rho$ is the polarization vector of the $\rho$ meson. This process is sensitive to both chiral-even and chiral-odd GPDs due to the chiral-even (resp. chiral-odd) character of the leading twist distribution amplitude (DA) of $\rho_L$ (resp. $\rho_T$).
To study this process, we closely follow the method described in Ref.~\cite{PLB}. \begin{figure}[h] \psfrag{TH}{$\Large T_H$} \psfrag{Pi}{$\pi$} \psfrag{P1}{$\,\phi$} \psfrag{P2}{$\,\phi$} \psfrag{Phi}{$\,\phi$} \psfrag{Rho}{$\rho$} \psfrag{tp}{$t'$} \psfrag{s}{$s$} \psfrag{x1}{$\!\!\!\!\!\!x+\xi$} \psfrag{x2}{$\!x-\xi$} \psfrag{RhoT}{$\rho_T$} \psfrag{t}{$t$} \psfrag{N}{$N$} \psfrag{Np}{$N'$} \psfrag{M}{$M^2_{\gamma \rho}$} \psfrag{GPD}{$\!GPD$} \centerline{ \raisebox{1.6cm}{\includegraphics[width=14pc]{factorisation-2to2ter.eps}}~~~~~~~~~~~~~~ \psfrag{TH}{$\,\Large T_H$} \includegraphics[width=14pc]{factorisation2.eps}} \caption{\label{fig:process}The wide angle Compton scattering process (left) and its generalization to the photoproduction of a $\gamma \rho$ pair (right). } \end{figure} To factorize the amplitude of this process we use the now classical proof of the factorization of exclusive scattering at fixed angle and large energy~\cite{LB}. The amplitude for the wide angle Compton scattering process $\gamma + \pi \rightarrow \gamma + \rho $ is written \cite{Nizic} as the convolution of mesonic DAs and a hard scattering subprocess amplitude $\gamma +( q + \bar q) \rightarrow \gamma + (q + \bar q) $ with the final meson states replaced by collinear quark-antiquark pairs. From the factorization procedure of the deeply virtual Compton scattering amplitude near the forward region, we then extract the justification for replacing one incoming meson DA by a $N \to N'$ GPD, and thus obtain Fig.~1 (right panel). The needed skewness parameter $\xi$ is written in terms of the final photon--meson squared invariant mass $M^2_{\gamma\rho}$ as \begin{equation} \label{skewedness} \xi = \frac{\tau}{2-\tau} ~~~~,~~~~\tau = \frac{M^2_{\gamma\rho}-t}{S_{\gamma N}-M^2}\,. \end{equation} Indeed, the same collinear factorization property underlies the validity of the leading twist approximation which either replaces the meson wave function by its DA or the $N \to N'$ transition by nucleon GPDs.
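As a quick numerical check of Eq.~(\ref{skewedness}), the following sketch (in Python; the function name and the nucleon mass value $M=0.938$~GeV are our own assumptions) reproduces the skewness quoted later in the text for $S_{\gamma N}=20$~GeV$^2$ and $M^2_{\gamma\rho}=6$~GeV$^2$, neglecting the small $|t|$:

```python
M = 0.938  # nucleon mass in GeV (assumed value)

def skewness(M2_gamma_rho, t, S_gammaN):
    """xi = tau / (2 - tau), with tau = (M^2_{gamma rho} - t) / (S_{gamma N} - M^2)."""
    tau = (M2_gamma_rho - t) / (S_gammaN - M ** 2)
    return tau / (2.0 - tau)

# Kinematics used later in the text, neglecting the small |t| ~ |t_min|:
xi = skewness(6.0, 0.0, 20.0)
print(round(xi, 3))  # 0.186
```

Including the small $t=t_{\rm min}$ instead of $t=0$ shifts $\xi$ only at the level of a few per mil.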
A slight difference is that the light-cone fractions ($z, 1- z$) leaving the DA are positive, while the corresponding fractions ($x+\xi,\xi-x$) may be positive or negative in the case of the GPD. Our Born-order calculation shows that this difference does not ruin the factorization property. In order for the leading twist factorization of a partonic amplitude to be valid, one should avoid the dangerous kinematical regions where a small momentum transfer is exchanged in the upper blob, namely small $t' =(k-q)^2$ or small $u'=(p_\rho-q)^2$, and the resonance regions for each of the invariant squared masses $(p_\rho +p_{N'})^2$, $(k+p_\rho)^2\,.$ Let us finally stress that our discussion applies as well to the case of electroproduction, where a moderate virtuality of the initial photon may help to access the perturbative domain with a lower value of the hard scale $M_{\gamma \rho}$. \section{Kinematics} Our conventions are the following. We decompose all momenta on a Sudakov basis as $ v^\mu = a \, n^\mu + b \, p^\mu + v_\bot^\mu $, with $p$ and $n$ the light-cone vectors $ p^\mu = \frac{\sqrt{s}}{2}(1,0,0,1), n^\mu = \frac{\sqrt{s}}{2}(1,0,0,-1), $ $v_\bot^\mu = (0,v^x,v^y,0) $ and $v_\bot^2 = -\vec{v}_t^2\,. $ The particle momenta read \begin{equation} \label{impini} p_1^\mu = (1+\xi)\,p^\mu + \frac{M^2}{s(1+\xi)}\,n^\mu~, \quad p_2^\mu = (1-\xi)\,p^\mu + \frac{M^2+\vec{\Delta}^2_t}{s(1-\xi)}n^\mu + \Delta^\mu_\bot\,, \quad q^\mu = n^\mu ~, \end{equation} \begin{eqnarray} \label{impfinc} k^\mu = \alpha \, n^\mu + \frac{(\vec{p}_t-\vec\Delta_t/2)^2}{\alpha s}\,p^\mu + p_\bot^\mu -\frac{\Delta^\mu_\bot}{2},~ ~p_\rho^\mu = \alpha_\rho \, n^\mu + \frac{(\vec{p}_t+\vec\Delta_t/2)^2+m^2_\rho}{\alpha_\rho s}\,p^\mu - p_\bot^\mu-\frac{\Delta^\mu_\bot}{2},\nonumber \end{eqnarray} with $\bar{\alpha} = 1 - \alpha$, and where $M$ and $m_\rho$ are the masses of the nucleon and of the $\rho$ meson.
The total center-of-mass energy squared of the $\gamma$-N system is \begin{equation} \label{energysquared} S_{\gamma N} = (q + p_1)^2 = (1+\xi)s + M^2\,. \end{equation} From these kinematical relations it follows that: \begin{equation} \label{2xi} 2 \, \xi = \frac{(\vec{p}_t -\frac{1}2 \vec{\Delta}_t)^2 }{s \, \alpha} +\frac{(\vec{p}_t +\frac{1}2 \vec{\Delta}_t)^2 + m_\rho^2}{s \, \alpha_\rho}\,, \end{equation} and \begin{equation} \label{exp_alpha} 1-\alpha-\alpha_\rho = \frac{2 \, \xi \, M^2}{s \, (1-\xi^2)} + \frac{\vec{\Delta}_t^2}{s \, (1-\xi)}\,. \end{equation} On the nucleon side, the transferred squared momentum is \begin{equation} \label{transfmom} t = (p_2 - p_1)^2 = -\frac{1+\xi}{1-\xi}\vec{\Delta}_t^2 -\frac{4\xi^2M^2}{1-\xi^2}\,. \end{equation} The other Mandelstam invariants read \begin{eqnarray} \label{M_pi_rho} s'&=& ~(k +p_\rho)^2 = ~M_{\gamma\rho}^2= 2 \xi \, s \left(1 - \frac{ 2 \, \xi \, M^2}{s (1-\xi^2)} \right) - \vec{\Delta}_t^2 \frac{1+\xi}{1-\xi}\,, \\ \label{t'} - t'&=& -(k -q)^2 =~\frac{(\vec p_t-\vec\Delta_t/2)^2}{\alpha} \;,\\ \label{u'} - u'&=&- (p_\rho-q)^2= ~\frac{(\vec p_t+\vec\Delta_t/2)^2+(1-\alpha_\rho)\, m_\rho^2}{\alpha_\rho} \; . \end{eqnarray} Let us remind the reader that we are interested in the kinematical domain where $s', -t', -u'$ are large (as compared to $\Lambda^2_{QCD}$) and that $0 < \alpha, \alpha_\rho < 1$. \section{The scattering amplitude} \label{Sec:scattering} The scattering amplitude of the process (\ref{process}) is written in the factorized form: \begin{equation} \label{ampl} \mathcal{A}(t,M^2_{\gamma\rho},u') = \sum\limits_{q,i} \int_{-1}^1dx\int_0^1dz\ T_i^q(x,\xi,z) \, H_i^{q}(x,\xi,t)\Phi_{\rho_{L,T}}(z)\,, \end{equation} where $T_i^q$ is the hard part of the amplitude and $H_i^{q}$ the corresponding (chiral-even and chiral-odd) GPDs of a parton $q$ in the nucleon target, and $\Phi_{\rho_{L,T}}(z)$ the leading twist chiral-even (resp.
chiral-odd) distribution amplitude of the $\rho_L$ (resp. $\rho_T$) meson. \begin{figure}[h] \centerline{\includegraphics[width=32pc]{AllDiagramsGathered.eps}} \caption{\label{diagrams}The Feynman diagrams describing the subprocess at leading order; in Feynman gauge, only the 4 diagrams on the right contribute to the $\rho_T$ case.} \end{figure} The scattering sub-process is described by 20 Feynman diagrams, but an interesting (quark-antiquark interchange) symmetry allows one to deduce the contribution of half of the diagrams from the 10 diagrams shown in Fig.~\ref{diagrams} through a ($x \leftrightarrow -x \,; \,z \leftrightarrow 1-z$) interchange. Moreover, in Feynman gauge, only the 4 diagrams on the right of Fig.~\ref{diagrams} contribute to the chiral-odd case. The scattering amplitude acquires both real and imaginary parts. Focusing on the chiral-odd amplitude (since accessing transversity GPDs was the first motivation of our study), we get the following results. The $z$ and $x$ dependence of this amplitude can be factorized as \begin{equation} T_i^q = e_q^2 \,\alpha_{em}\, \alpha_s \,{\mathcal{N} (z,x)}\, \mathcal{T}^i \end{equation} with (in the gauge $p \cdot \epsilon_k = 0$): \begin{eqnarray} \mathcal{T}^i &=& (1-\alpha) \left[ \left( \epsilon_{q\bot} . p_\bot \right) \left( \epsilon_{k\bot}.\epsilon_{\rho\bot} \right) - \left( \epsilon_{k\bot} . p_\bot \right) \left( \epsilon_{q\bot}.\epsilon_{\rho\bot} \right) \right] p_\bot^i \nonumber \\ &-& (1+\alpha) \left(\epsilon_{\rho\bot}.p_\bot\right) \left( \epsilon_{k\bot}.\epsilon_{q\bot}\right) p_\bot^i + \alpha \left( \alpha^2 -1\right) \xi s \left(\epsilon_{q\bot}.\epsilon_{k\bot}\right)\epsilon_\rho^i \\ &-&\alpha \left( \alpha^2 -1 \right) \xi s \left[ \left(\epsilon_{q\bot}.\epsilon_{\rho\bot}\right) \epsilon_{k\bot}^i - \left(\epsilon_{k\bot}.\epsilon_{\rho\bot}\right) \epsilon_{q\bot}^i \right]\,.
\nonumber \end{eqnarray} Using the asymptotic form of the $\rho$-meson distribution amplitude as a first estimate, we perform the integration over $z$ analytically. Inserting a model for the transversity GPDs \cite{PLB}, we carry out the integration over $x$ numerically. Starting from the expression (\ref{ampl}) of the scattering amplitude, the differential cross section as a function of $t$, $M^2_{\gamma\rho}$ and $-u'$ reads \begin{equation} \label{difcrosec} \left.\frac{d\sigma}{dt \,du' \, dM^2_{\gamma\rho}}\right|_{\ t=t_{min}} = \frac{|\mathcal{M}|^2}{32S_{\gamma N}^2M^2_{\gamma\rho}(2\pi)^3}. \end{equation} \noindent We show in Fig.~\ref{resultS20} this cross section (\ref{difcrosec}) as a function of $-u'$ at $S_{\gamma N}$ = 20 GeV$^2$ for $M^2_{\gamma\rho}$ = 6 GeV$^2$, i.e. $\xi = 0.186$, with cuts in $-u'$ corresponding to the constraints $-t'>1$ GeV$^2$ and $-u'> 1$ GeV$^2$. The cross section grows with $(-u')$ but its normalization is rather small. We expect a larger cross section for the longitudinal $\rho$ case, where chiral-even GPDs contribute; this will not help to disentangle the transverse $\rho$ cross section, although a complete analysis of the angular distribution of the emerging $\pi^+ \pi^-$ pair allows, in principle, access to the chiral-odd sensitive contribution at the amplitude level.
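To make the normalization of such cross-section plots easier to reproduce, the phase-space prefactor of the cross section (\ref{difcrosec}) and the conversion from natural units to pb$\,$GeV$^{-6}$ can be sketched as follows. This is a minimal Python sketch; the function name is ours, the placeholder value of $|\mathcal{M}|^2$ is purely illustrative (the physical value requires the GPD convolution), and the conversion uses $(\hbar c)^2 = 0.3894$ mb$\,$GeV$^2$:

```python
import math

HBARC2_MB_GEV2 = 0.3894  # (hbar c)^2 in mb * GeV^2
MB_TO_PB = 1.0e9         # 1 mb = 10^9 pb

def dsigma_pb_per_gev6(M_squared, S_gammaN, M2_gamma_rho):
    """dsigma/(dt du' dM^2) = |M|^2 / (32 S_{gamma N}^2 M^2_{gamma rho} (2 pi)^3),
    converted from natural units (GeV^-8, for |M|^2 given in GeV^-2)
    to pb GeV^-6."""
    dsig = M_squared / (32.0 * S_gammaN ** 2 * M2_gamma_rho * (2.0 * math.pi) ** 3)
    return dsig * HBARC2_MB_GEV2 * MB_TO_PB

# Illustrative call with a placeholder |M|^2 = 1 GeV^-2 (not a model prediction):
value = dsigma_pb_per_gev6(1.0, 20.0, 6.0)
```

For the kinematics of the text ($S_{\gamma N}=20$ GeV$^2$, $M^2_{\gamma\rho}=6$ GeV$^2$) the prefactor alone gives of order $20$ pb$\,$GeV$^{-6}$ per unit $|\mathcal{M}|^2$ (in GeV$^{-2}$), so the small cross sections of Fig.~\ref{resultS20} reflect the smallness of the chiral-odd amplitude itself.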
\begin{figure}[h] \vspace{.5cm} \psfrag{U}{\raisebox{-.15cm}{$1$}} \psfrag{D}{\raisebox{-.15cm}{$2$}} \psfrag{T}{\raisebox{-.15cm}{$3$}} \psfrag{Q}{\raisebox{-.15cm}{$4$}} \psfrag{C}{\raisebox{-.15cm}{$5$}} \psfrag{V}{\raisebox{0cm}{\hspace{-.4cm}$0.1$}} \psfrag{W}{\raisebox{0cm}{\hspace{-.4cm}$0.2$}} \psfrag{B}{\raisebox{.3cm}{\hspace{-.9cm}$\left.\frac{d \sigma}{dt du' dM_{\gamma \rho}^2}\right|_{|t|_{\rm min}}$(pb.GeV$^{-6}$)}} \psfrag{A}{\raisebox{-.3cm}{$-u'$ (GeV$^2$)}} \centerline{\hspace{-.5cm}\includegraphics[width=12cm]{dsigma_dt_dup_dM2-20-6-GPD-DD-final-car.eps}} \caption{\label{resultS20}The differential cross section (\ref{difcrosec}) for the production of $\gamma \rho_T$ involving chiral-odd GPDs, as a function of $-u'$ at $S_{\gamma N}$ = 20 GeV$^2$ for $M^2_{\gamma\rho}$ = 6 GeV$^2$, i.e. $\xi = 0.186$.} \end{figure} The quest for an easy extraction of chiral-odd GPDs is obviously not solved by our proposal for $\gamma \rho_T$ photoproduction. \ack This work is partly supported by the Polish Grant NCN No DEC-2011/01/B/ST2/03915 and the French grant ANR PARTONS (Grant No.~ANR-12-MONU-0008-01). L.~S. was partially supported by a French Government Scholarship. \section*{References}
\section{INTRODUCTION} This paper is the second in a series devoted to studying the metal content of high-redshift galaxies and their progenitors. Our primary objectives are \indent (1) to record the emergence of metals in galaxies, \indent (2) to trace the mean cosmic metallicity from $z$ $\approx$ 4.5 to the present, \indent (3) to determine the kinematic state of galaxies from $z$ $\approx$ 4.5 to the present. \noindent We are implementing this study using HIRES, the echelle spectrograph on the Keck 10m telescope (\cite{vgt92}), to obtain high-resolution spectra of QSOs with foreground damped {Ly$\alpha$ } systems. The damped {Ly$\alpha$ } systems are a population of neutral gas layers exhibiting properties indicating they are either galaxy progenitors or well-formed galaxies detected during an early evolutionary phase. Recent studies indicate that the comoving density of neutral gas in damped systems at $z \approx 3.3$ is comparable to the density of visible stars in current galaxies. At lower redshifts, the comoving density of neutral gas decreases with time in a manner consistent with gas consumption by star formation (\cite{wol95}). Therefore, studies of the metal content of the damped Ly$\alpha$ systems enable one to trace the chemical evolution of representative galaxies from a presumably metal-poor gaseous progenitor phase to metal-rich epochs when most of the baryons are in stars. As a result, the age-metallicity relation, kinematic conditions, etc., deduced from the damped Ly$\alpha$ systems should tell us more about the history of galaxies at large redshifts than analogous relations deduced from old stars found in the solar neighborhood (\cite{evd93}). In a previous paper we presented echelle spectra of the $z$ = 2.309 damped system toward PHL 957 at a spectral resolution of $\approx$ 8 {km~s$^{-1}$ } (FWHM) and signal-to-noise ratio of $\approx$ 35:1 (\cite{wol94}).
By fitting multiple Voigt velocity components to low-ion transitions such as Zn II 2026, Ni II 1741, and Cr II 2062 we obtained accurate abundances for Zn, Ni, and Cr in the neutral gas. The Zn and Cr abundances were accurate because we resolved Zn II 2062.664 from Cr II 2062.234 for the first time in a QSO absorption system. We found that the abundances relative to solar were low: [Zn/H] = $-$1.55$\pm$0.11, [Cr/H] = $-$1.79$\pm$0.10, and [Ni/H] = $-$2.13$\pm$0.08. The Zn abundance is especially significant because Zn is relatively undepleted by grains in the ISM of the Galaxy (\cite{sem95}) and is presumably unaffected by dust which may be present in damped Ly$\alpha$ systems (\cite{fal93}). We also found the line profiles to be asymmetric in the sense that the low column-density gas was found in absorption only at velocities higher than those of the high column-density gas. The kinematics can be explained by the passage of the line of sight through a rotating disk in which the density of clouds decreases with radius and with perpendicular distance from the midplane. The purpose of this paper is to present HIRES spectra for Q0201+365, a $V$ = 17.5 QSO with emission redshift $z_{em}$ = 2.49. While we identify more than $80\%$ of the absorption features and find a total of 13 metal-line redshift complexes, the focus of this paper is on the damped Ly$\alpha$ system at $z$ = 2.462 (\cite{lzwt91,lwt93}). \cite{lwt93} studied this system at a resolution of $\approx$ 50 {km~s$^{-1}$ }. They fitted a Voigt damping profile to the Ly$\alpha$ absorption trough and found $\N{HI} = 2.4 \sci{20} \, \rm cm^{-2}$. They also identified the metal transitions Si II 1190, Si II 1193, Si III 1206, Si II 1260, and Fe II 1144. Because these transitions are (a) saturated and (b) in the Ly$\alpha$ forest, neither the abundances nor the kinematics were accurately measured. The present study represents a major improvement over the previous work for the following reasons.
First, we obtain spectra at a resolution of $\approx 8$ {km~s$^{-1}$ } and a typical signal-to-noise ratio of 33:1. Second, in contrast to the previous work, we focus on metal lines redward of Ly$\alpha$ emission, where confusion with Ly$\alpha$ forest lines is absent. Third, because of the higher accuracy of the data, we focus on weak unsaturated transitions of ions expected to dominate the ionization state of gas in neutral clouds. In fact, we establish accurate element abundances for Fe, Si, Ni, and Cr, and a lower limit on the abundance of Zn. Furthermore, we use computer simulations to investigate the ionization of this system. We also analyze the relative metal abundances and comment on the characteristics of dust grains in this system. In addition, we examine a Ly$\alpha$ absorption system at $z$ = 1.955 and use kinematic and abundance arguments to suggest it may be a damped Ly$\alpha$ system. Finally, we discuss the kinematics of the $z$=2.462 damped absorption system, contrasting its features with several other systems toward Q0201+365, as well as other damped Ly$\alpha$ systems measured with HIRES (\cite{wol94}). The paper is organized as follows. In $\S$ 2 we describe the data acquisition and reduction techniques, present the spectra (Figure~\ref{sptra}) and give a nearly complete absorption line list in Table~\ref{orders}. We detail the analytic methods utilized throughout the paper in $\S$ 3. In $\S$ 4 we present velocity profiles of the most significant metal line systems along with the VPFIT package solutions where applicable. $\S$ 5 presents the ionic column densities of the two systems associated with the damped Ly$\alpha$ profile at $z$=2.46. In $\S$ 6 we argue that the degree of photoionization of the damped Ly$\alpha$ system is low. $\S$ 7 gives the results of the abundance measurements and discusses the possible depletion of the gas-phase metals by dust grains. We also describe the kinematics of the Ly$\alpha$ system.
Finally, $\S$ 8 summarizes the results and gives concluding remarks. \section{DATA} In this section we present the HIRES spectra, detailing the techniques used for the acquisition and reduction of the data. Table~\ref{orders} gives an absorption line list with measured equivalent widths and 1$\sigma$ errors and identifies over 80$\%$ ($\geq 88\%$ redward of the Ly$\alpha$ forest) of the features. \subsection{Acquisition} We observed Q0201+365 with the HIRES echelle spectrograph on the 10m W.M. Keck Telescope on three separate nights for a total integration time of 9.7 hrs. Table~\ref{obs} presents a journal of the observation dates, exposure times, wavelength coverages and resolution of the data. Unfortunately, the signal-to-noise ratio (SNR) of the data was significantly limited by clouds. We used the kv38 filter to block out second-order blue wavelengths, the C5 decker plate with 1.1$''$ slit, and standard 2$\times 1$ binning on the 2048 $\times$ 2048 Tektronix CCD. This setup afforded a resolution ranging from $7.2 - 8.0$ km~s$^{-1}$ and wavelength coverage from $4720-7180 {\rm \, \AA}$ over 26 orders. Gaps are evident between orders redward of $\approx 5100 {\rm \, \AA}$ where the free spectral range of the echelle exceeds the format of the CCD. For reduction and calibration, we took 300s exposures of the nearby standard star BD+28411 and images of quartz and Th-Ar arc lamps. \subsection{Reduction} The 2-D CCD images were reduced to 1-D uncalibrated spectra with a software package kindly provided by T. Barlow (1995). In short, the package subtracts the baseline and bias from the 2-D frames, determines the gain from dark images converting digital numbers (DN) to e$^-$ counts, and extracts the data from the 2-D images by tracing the bright standard-star profile. It also performs sky subtraction and the removal of cosmic ray events.
We wavelength calibrated the data of each night by fitting low-order Legendre polynomials to our Th-Ar arc frames, properly correcting to vacuum heliocentric wavelengths. We obtained continuum fits to the spectra by fitting high-order Legendre polynomials to the spectral flux in each order. The continuum fits were then used to normalize the flux to unity. We also calculated a 1$\sigma$ error array assuming Poisson statistics, ignoring the errors associated with sky subtraction and continuum fitting (i.e. $\sigma = \sqrt{N_{\rm obj} + N_{\rm sky}}$). Finally, we coadded the spectra, weighting by the calculated SNR of each image while rejecting spurious values assumed to arise from cosmic rays or poor sky subtraction. The final average SNR is $\approx$ 30 with the exception of Order 75 (SNR $\approx 22$), where only one night of data was acquired due to a difference in the alignment of the CCD on the first night. The spectra are presented along with 1$\sigma$ error arrays in Figure~\ref{sptra}. \subsection{Absorption Lines} Table~\ref{orders} lists the wavelengths, equivalent widths and 1$\sigma$ errors for all absorption line features which exceed the 5$\sigma$ limit in equivalent width as measured by techniques similar to those of Lanzetta et al. (1991). We believe 5$\sigma$ is a conservative but appropriate limit for these data. The reported wavelengths represent rough estimates of the centroids of complex line profiles and should be taken only as guides for differentiating between features. Because almost all of the transitions are resolved, an accurate determination of the wavelength of every feature would require months of laborious Voigt profile fitting. Such an effort bears little scientific merit for the purposes of the present paper, which is to examine specific systems. For this reason, we carried out detailed profile fits for only a selected subset of the lines in Table~\ref{orders}.
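The Poisson error estimate and SNR-weighted coaddition with outlier rejection described in $\S$ 2.2 can be sketched as follows. This is a minimal illustration of the weighting scheme, not the reduction package actually used; the function name and the clipping threshold are our own choices:

```python
import numpy as np

def coadd(fluxes, sigmas, nsig_clip=5.0):
    """SNR^2 (inverse-variance) weighted coaddition of normalized spectra.

    fluxes, sigmas: arrays of shape (n_exposures, n_pixels) on a common
    wavelength grid. Pixels deviating by more than nsig_clip * sigma from
    the median across exposures are rejected (cosmic rays, poor sky
    subtraction); at least one good exposure per pixel is assumed.
    """
    fluxes = np.asarray(fluxes, float)
    sigmas = np.asarray(sigmas, float)
    med = np.median(fluxes, axis=0)
    good = np.abs(fluxes - med) < nsig_clip * sigmas
    w = np.where(good, 1.0 / sigmas ** 2, 0.0)   # inverse-variance weights
    flux = (w * fluxes).sum(axis=0) / w.sum(axis=0)
    sigma = 1.0 / np.sqrt(w.sum(axis=0))         # propagated 1-sigma error
    return flux, sigma
```

For $n$ equally good exposures, this reduces the per-pixel error by $\sqrt{n}$, consistent with the final SNR $\approx 30$ being built up over the three nights.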
We do not report equivalent widths for those features identified as sky absorption lines or complicated multi-system blends. Table~\ref{orders} also includes the transition names and approximate redshifts of those features we successfully identified. Identification proceeds in a largely {\it ad hoc} fashion, with the initial emphasis placed on finding C IV and Si IV doublets. Once we had composed a list of redshifts for the metal-line systems, we attempted to match the remaining features with the strongest metal-line transitions. Finally, we compared the line profiles of the transitions of a given redshift system in velocity space for conclusive identification. Although fairly laborious, this approach is highly effective and essentially error-proof. By comparing the object frames with the similarly reduced standard star images, we identified night-sky emission and absorption features in the spectra. These features are labeled appropriately in Table~\ref{orders} along with all other identified spurious features. Table~\ref{fval} lists the rest wavelengths and oscillator strengths for all of the metal transitions analyzed in this paper. Almost all of the values are taken from Morton (1991). We state these values explicitly in order to allow for consistent abundance comparisons. \section{ANALYTIC METHODS} This section describes the least-squares line-profile fitting method and the apparent optical depth method used to analyze our metal line systems. \subsection{VPFIT (Least-Squares Line-Profile Fitting Method)} With the aid of the VPFIT fitting package kindly provided by R.F. Carswell, we performed least-squares fits of Voigt profiles, produced by individual Gaussian components, to our metal lines for several of the absorption systems toward Q0201+365.
The VPFIT package fits Voigt profiles to an absorption system, simultaneously determining the redshift, column density, $b$ values (where the Doppler velocity $b$ and velocity dispersion $\sigma$ are related by $b = \sqrt{2} \sigma$) and associated errors of the individual components while minimizing the $\chi^2$ statistic. When performing a fit, one can tie several transitions together, forcing the redshift and $b$ values of the different transitions to match while allowing the column densities to vary individually. In performing the fits, we first isolated the broadest resolved transition in a given absorption complex. We found we could always obtain a reasonably accurate profile fit to this transition. We then tied it together with the other associated transitions. Given the inherent differences between low and high-ion line profiles, we chose to treat the two types separately. If necessary, we added or removed a velocity component to those transitions with line profiles that have features not evident in the other transitions. In several cases, this significantly lowered the final $\chi^2$ value. In all our fits we assumed that bulk motion dominates thermal motion because damped Ly$\alpha$ systems are relatively cool ($T <$ 1000 K) and the transitions arise in metals that are comparatively massive. We also set a minimum value for the $b$ parameter at 3 km~s$^{-1}$ to prevent the package from fitting features narrower than $\approx$ 4 pixels, the FWHM of the line spread function. \subsection {Apparent Optical Depth Method and Hidden Component Analysis} Savage and Sembach (1991) have stressed that measuring column densities with the line profile method does not always account for hidden saturated components. These saturated components may be underrepresented in the line profile analysis, leading to significant errors in the measured ionic column densities.
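The apparent optical depth construction of Savage and Sembach (1991), which we use below as a saturation check, is straightforward to implement. A minimal Python sketch, assuming continuum-normalized flux; the function name is ours, and the numerical constant is $m_e c/(\pi e^2)$ expressed so that $N_a(v)$ comes out in cm$^{-2}$ per km~s$^{-1}$ with $\lambda$ in \AA:

```python
import numpy as np

AOD_CONST = 3.768e14  # m_e c / (pi e^2) in cm^-2 (km/s)^-1 Angstrom

def apparent_column_density(flux, f_osc, wavelength_A):
    """N_a(v) = AOD_CONST * tau_a(v) / (f * lambda), tau_a = ln(I_i / I_a),
    for continuum-normalized flux (I_i = 1). Summing N_a(v) * dv over the
    profile gives the total apparent ionic column density."""
    flux = np.asarray(flux, float)
    tau_a = np.log(1.0 / flux)  # apparent optical depth per pixel
    return AOD_CONST * tau_a / (f_osc * wavelength_A)
```

Comparing $N_a(v)$ from a weak and a strong transition of the same ion pixel by pixel then reveals hidden saturation wherever the stronger line gives systematically smaller $N_a(v)$.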
Therefore, as a check on our VPFIT results we performed a hidden component analysis of those ions where multiple transitions were observed (e.g. Ni$^+$ for the damped Ly$\alpha$ system at $z$ = 2.462). The analysis involves calculating $N_a (v)$, the apparent column density per unit velocity, for each pixel from the optical depth equation \begin{equation} N_a(v) = {m_e c \over \pi e^2} {\tau_a(v) \over f \lambda} , \end{equation} \noindent where $\tau_a(v) = \ln [I_i (v) / I_a (v)]$, $f$ is the oscillator strength, $\lambda$ is the rest wavelength and $I_i$ and $I_a$ are the incident and measured intensities. Comparing $N_a (v)$ deduced from two or more transitions of the same ion, one finds that the stronger transition will have smaller values of $N_a (v)$ in those features where hidden saturation is present. Thus, one can ascertain the likelihood of saturated components for ions with multiple transitions. Furthermore, summing $N_a(v)$ over the proper velocity intervals serves as an excellent check on ionic column densities measured with the VPFIT package, and as a further check on the existence of saturation. \section{VELOCITY PLOTS AND VPFIT SOLUTIONS} This section presents the velocity profiles of the most complex metal line absorption systems toward Q0201+365. For several of the systems, we superimpose the solutions of our least-squares fits from the VPFIT package. In all plots, a dashed vertical line is drawn for reference, usually identifying the strongest feature present. For clarity, we have plotted features not related to the given system with dotted lines. \subsection{Damped Ly$\alpha$ System ($z$ = 2.462)} Figure~\ref{2462V} presents the line profiles and fits of the transitions associated with the damped Ly$\alpha$ system at $z$=2.462\footnote{ Note there are two `subsystems' within the velocity space of the damped Ly$\alpha$ profile: the present subsystem and the $z$=2.457 subsystem discussed in $\S$4.2}.
The velocity centroids for the Gaussian components of the fit to the low-ion profiles are denoted by the short vertical lines above the Fe II 1608 profile. The velocity centroids for the high-ion fits are the vertical lines above the Si IV 1393 profile. We found 13 individual components were required for an optimal fit to the low-ions. The multi-component fit to the transitions Fe II 1608, Ni II 1741, Ni II 1751, Cr II 2056 and Si II 1808 has a reduced $\chi_\nu^2 = 1.05$ with a probability $P_{\chi^2} = 0.216$. Table~\ref{2462TL} lists the redshift, $b$ value and column density along with the 1$\sigma$ error of every velocity component in the VPFIT solution. The quality of the fit reflects the high degree to which the low-ion profiles track one another. On the other hand, 10 components (only 8 over the same region as the low-ion profiles) were necessary for an optimal fit to the high-ion transitions C IV 1550 and Si IV 1393. These components are significantly broader (higher $b$ values) than those of the low-ions. This fit is not as accurate as the low-ion fit ($\chi^2_\nu = 1.805$) because it is much more difficult to fit broad, shallow components such as those around $\approx -260$ km~s$^{-1}$ . The results of this fit are presented in Table~\ref{2462H}. Figure~\ref{2462Vb} shows velocity profiles of 4 transitions we chose not to fit with the VPFIT package. Because the Al II 1670 profile is so highly saturated, we were unable to fit it accurately together with the other low-ion profiles. In particular, the VPFIT package could not properly model the heavy absorption at $\approx -140$ km~s$^{-1}$ . The Al III 1862 profile exhibits characteristics of both the low and high-ion transitions and was therefore difficult to fit with either. Instead, we measured its column density with the apparent optical depth method.
Finally, the 3 transitions Zn II 2026, Cr II 2062 and Zn II 2062 all suffer from significant blending: Zn II 2026 is blended with both Fe II 2374 and Fe II 2600 from 2 other systems, and Zn II 2062 and Cr II 2062 are blended with each other. As a result all three transitions were excluded from the fit. \subsection{Companion System at $z$=2.457} The velocity profiles and least-squares fits of the other subsystem (found at $z$=2.457) associated with the damped Ly$\alpha$ system at $z$=2.46 are shown in Figure~\ref{2457V}. A 12-component fit to the low-ion transitions Fe II 1608, Si II 1526 and Al II 1670 was optimal and yielded a reduced $\chi_\nu^2 = 1.50$. Difficulties arose in fitting the Fe II 1608 profile because the second component appears to have a significantly lower $b$ value than the same component in the other low-ion profiles. This may be a result of thermal broadening. We also fitted the high-ions Si IV 1393, 1402 and C IV 1548 and again found that fewer, broader components were required in the best fit. Tables~\ref{2457L} and \ref{2457H} list the results of the fits. \subsection{Possible Damped System at $z$=1.955} Figure~\ref{1955V} presents the velocity profiles for a redshift system with $z$=1.955 which is a possible damped Ly$\alpha$ system. We chose not to fit this system because significant blending with other systems prevented us from determining which features are associated only with the $z$=1.955 system. In particular, we have been unable to determine whether the absorption seen at positive relative velocities in Figure~\ref{1955V} is due to the $z$=1.955 system. Lu et al. (1994) could not confidently classify the system as damped because the noisy Ly$\alpha$ profile could not be fitted with a Voigt damping profile. The metal lines have many resolved components and exhibit a large velocity interval ($\approx$ 250 km~s$^{-1}$ ). Figure~\ref{1955L} plots the observed Ly$\alpha$ profile, a Voigt damped Ly$\alpha$ profile, and the Fe II 1608 line profile.
The Voigt Ly$\alpha$ profile was derived by letting $\N{HI} = 1.5 \sci{20} \, \rm cm^{-2}$ at $z$=1.955. It appears the low-ions satisfy the metal criteria associated with damped Ly$\alpha$ systems; i.e., the low-ion metal profile is significantly narrower than the Ly$\alpha$ profile and its velocity centroid is near that of the Ly$\alpha$ profile (Wolfe et al. 1993). An apparent optical depth measurement of the Cr II and Si II ions gives $\log[\N{Cr^+}] = 12.73 \pm 0.07$ and $\log[\N{Si^+}] = 15.25 \pm 0.03$, suggesting lower limits of $\log[\N{H}] \geq 19.0$ and $19.7$, respectively, assuming cosmic abundances and no dust depletion. This analysis indicates the $z$=1.955 system may be damped. Furthermore, Figure~\ref{1955V} reveals evidence for C I 1656, a transition previously seen only in damped Ly$\alpha$ systems. However, even with our high resolution data the proper classification remains inconclusive. \subsection{System at $z$=2.325} The velocity profiles and VPFIT profile solutions for the absorption system at $z$=2.325 are shown in Figure~\ref{2325V}. Note the feature at $-80$ km~s$^{-1}$ in the C IV 1548 transition is due to a blend with the C IV 1550 transition from the system at $z$=2.320. By contrast with the damped system, the Al III 1854 transition was best fitted with the low-ion solution. We tied the Si II 1526, Fe II 1608, Al II 1670, and Al III 1854 transitions together and fitted the C IV doublet separately. Tables~\ref{2325TL} and \ref{2325H} present the results of the two fits. Unlike the damped system, the high and intermediate ions more closely follow the low-ion profiles. Because the high-ion profiles have several components with higher $b$ values, however, it is still impossible to fit them together. Figure~\ref{2325L} plots the Si II 1526, C IV 1550 and Ly$\alpha$ profiles for the $z$=2.325 system. All three transitions span nearly the same velocity interval and track one another moderately well.
As expected, the Ly$\alpha$ profile tracks the low-ion profiles (in particular the component at $v \approx 20$ km~s$^{-1}$ ) more closely than the high-ion profiles. Although we did not fit the Ly$\alpha$ profile, Figure~\ref{2325L} is consistent with the profiles of Lyman limit systems. \subsection{Mg II Systems at $z$=1.476, 1.699} Figure~\ref{MgII} shows the velocity profiles of two Fe II and Mg II transitions for metal line systems found at the redshifts $z$=1.476 (a) and $z$=1.699 (b). Both sets of transitions are relatively complex and most of the absorption spans moderate velocity intervals ($\approx 100$ km~s$^{-1}$ ). However, the Fe II 2374 and Fe II 2586 transitions in the $z = 1.476$ system may exhibit an additional feature at $\approx -275$ km~s$^{-1}$ . Because the predicted wavelengths for the other Fe II lines fall in the inter-order gaps, we cannot verify the reality of this feature. Figures 10a and 10b are hidden component analyses of the Fe transitions from the two systems. Fe II 2586 was not plotted in either hidden component analysis because it closely traces the Fe II 2374 profile in each system and would clutter the figures. In Figure 10a, the weakest Fe transition (Fe II 2374) has significantly larger $N_a (v)$ values, while the strongest Fe transition (Fe II 2600) has the smallest $N_a (v)$ profile. This is direct evidence of hidden saturated components. In fact, because of these hidden components, we found it impossible to fit all of the transitions together in the $z$=1.476 system. We were successful in fitting the Fe II 2344, 2586 transitions, however, and the results are presented in Table~\ref{1476}. There is little evidence of hidden saturated components in the $z$=1.699 system.
What the hidden component analysis does reveal, however, is a blend in the Fe II 2374 profile with Al III 1854 from the $z = 2.457$ system at velocities greater than $\approx$ 0 km~s$^{-1}$ and a blend in the Fe II 2600 profile with Fe II 2374 from the $z$ = 1.955 system at velocities lower than $\approx$ $-70$ km~s$^{-1}$. Unfortunately, these blends and the low SNR in the Fe II 2586 and Fe II 2374 profiles prevented accurate profile fits. \section{IONIC COLUMN DENSITIES} This section presents the ionic column densities of the two systems associated with the damped Ly$\alpha$ profile at $z=2.46$. We perform hidden component analyses where applicable, and use both the line profile and apparent optical depth methods to measure column densities. \subsection{Damped Ly$\alpha$ System at $z$ = 2.462} The hidden component analysis of the Ni II 1741, 1751 transitions for the damped Ly$\alpha$ system at $z$ = 2.462 is shown in Figure~\ref{HCA-Ni}. With a few minor exceptions, the $N_a (v)$ curves for the two Ni II transitions match within $1 \sigma$, suggesting no hidden saturated components. Therefore, abundances based on column densities inferred from profile fitting or the apparent optical depth method should not suffer from significant hidden saturation effects. Table~\ref{2462I} lists the measured ionic column densities and 1$\sigma$ errors for Fe$^+$, Cr$^+$, Si$^+$, Al$^+$ and Ni$^+$ for the damped Ly$\alpha$ system at $z$=2.462 as measured by both the line profile (VPFIT) and apparent optical depth methods. For the line profile method we summed the column densities of the individual components of each transition and calculated the 1$\sigma$ error in the total value with standard least squares techniques. We adopt a final value for the ionic column density for all of our measurements by averaging the two values and adopting the VPFIT errors. We found the apparent optical depth method underestimates the error, particularly for transitions which are nearly saturated (e.g.
Fe II 1608). For nearly saturated or very weak lines, errors associated with sky subtraction and continuum fitting will play a significant role, yet these errors are not included in the 1$\sigma$ error array. The VPFIT package estimates errors based on the fit of all profiles and is influenced more by the deviation of the data from the fit. Therefore, we chose to adopt the VPFIT errors for all of the ions. Note that in almost every case the calculated values from the two methods match, further indicating no hidden saturated components. As noted above, we can place only a lower limit on the Zn$^+$ column density because of blending. We find the Zn II 2026 transition to be dominated by Fe II 2374 associated with the $z$ = 1.955 absorption system, and Zn II 2062 to be partially blended with the stronger Cr II 2062 transition. Although Zn II 2026 cannot be extracted from the Fe II profile, the Zn II 2062 profile can be used to estimate a lower limit on the Zn$^+$ abundance. Figure~\ref{HCA-Cr} is a hidden component analysis of the Cr II 2056 and Cr II 2062 transitions. Note that the Cr II 2062 (dotted) profile dominates the Cr II 2056 profile over the entire velocity interval, as expected if Zn II 2062 is present. This analysis reveals two features around 40 and 60 km~s$^{-1}$ evident only in the Cr II 2062 profile. We suggest that these two components are not due to Cr$^+$ absorption, but are components of Zn II 2062 at the high velocity edge of the profile. In the velocity space of Zn II 2062 these components correspond exactly to the strongest components of the other low-ion profiles at $z$=2.46258 and $z$=2.46280 (i.e. at $v = -20$ and 0 km~s$^{-1}$ in the Zn II velocity space). Our hidden component analysis of Cr II reveals a third component of Zn II at 7139.66 ${\rm \, \AA}$ ($-60$ km~s$^{-1}$ in Figure~\ref{HCA-Cr}), also corresponding to a significant feature in the other low-ion profiles.
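For reference, the apparent optical depth quantities used in this analysis follow the standard prescription (e.g. Savage \& Sembach 1991); in the notation of the optical depth equation referred to as Equation 1, \[ \tau_a(v) = \ln \left [ \, {I_i(v) \over I_a(v)} \, \right ] , \qquad N_a(v) = {m_e c \over \pi e^2} \, {\tau_a(v) \over f \lambda} = 3.768 \sci{14} \; {\tau_a(v) \over f \, \lambda({\rm \AA})} \; \rm cm^{-2} \, (km \, s^{-1})^{-1} , \] where $I_i(v)$ and $I_a(v)$ are the continuum and observed intensities at velocity $v$, $f$ is the oscillator strength, and the total apparent column density follows from integrating $N_a(v)$ over the velocity extent of the profile.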
Summing the column densities over velocity space with the apparent optical depth method, we find $\log \N{Zn^+}_i = 11.753 \pm 0.116, 12.087 \pm 0.066,$ and $12.060 \pm 0.074$ respectively for the three components (blue to red). The resulting lower limit for the total Zn$^+$ abundance is $\log \N{Zn^+}_t = 12.468 \allowbreak \pm 0.046$, where the error reported is only useful as an indication of the uncertainty in this lower limit value. Table~\ref{2462I} also lists ionic column densities for the $s$-process transitions Pb II 1682 and Ge II 1602. Because these transitions are so weak, we used the linear curve of growth to infer the column densities from the respective rest-frame equivalent widths. We report the following values for the rest-frame equivalent width of the transitions: $W({\rm Pb}) = 1.78 \pm 0.47 \sci{-2} {\rm \, \AA}$ and $W({\rm Ge}) = -3.4 \pm 4.6 \sci{-3} {\rm \, \AA}$. The negative equivalent width value for Ge II 1602 is almost certainly a consequence of the continuum error and is consistent with a null detection (less than even a $1 \sigma$ detection). We have chosen, therefore, to report its column density and abundance in terms of a $3 \sigma$ upper limit. The Pb II 1682 value is significant at the $3 \sigma$ level, but could also be explained through a sizable continuum error, improper sky subtraction, or an unidentified blend. Although Sn II 1400 and Ga II 1414 both lie within our wavelength coverage, they are overwhelmed by blends with transitions from other metal line systems and could not be analyzed. \subsection{Companion Subsystem at $z$=2.457} Table~\ref{2457I} presents the measured ionic column densities for the transitions of the companion system at $z$=2.457. Final values are adopted according to the criteria described above. In nearly all of the low-ion transitions the measured column densities are $\approx$ 20 times lower than those from the $z$=2.462 subsystem.
On the other hand, the column densities of the high-ions are nearly the same, possibly indicating that this system is in a higher state of ionization, that the two ion groups are kinematically disjoint, or that the same high-ion gas envelopes two dissimilar low-ion configurations. \section{IONIZATION} This section investigates the photoionization of the damped Ly$\alpha$ system at $z$=2.462. \subsection{Ly$\alpha$ Profile} Figure~\ref{2462L} is a velocity plot of the Ly$\alpha$ profile together with the velocity profiles of the Fe II 1608 transitions associated with the two absorption systems at $z$=2.462 and $z$=2.457. Unrelated features at $\approx -800$ and 500 km~s$^{-1}$ have been removed for clarity. We derived the Voigt profile by distributing the total HI column density, $\log \N{H^0}$ = 20.38, into the 23 Fe II 1608 velocity components weighted by their corresponding column density fractions ($\N{Fe^+}_i / \N{Fe^+}_{\rm tot}$). This treatment is proper provided (a) the Fe II 1608 line profiles accurately trace the HI gas, as one would predict for a sufficiently neutral system, and (b) [Fe/H] is constant across the entire velocity profile, which is predicted for a well-mixed system. As Figure~\ref{2462L} demonstrates, the resulting Voigt profile is well fitted to the low-resolution data. Table~\ref{HI} lists the column density, redshift and $b$ values of the 23 components as adopted. Here, about 5$\%$ of the HI gas is located in the companion system at $z$=2.457. We found one could place no more than 15$\%$ of the HI gas in the companion system before significantly distorting the left wing of the Ly$\alpha$ line profile. \subsection{Ionization Models} \subsubsection{Neutral Hydrogen Model} Table~\ref{HI} shows that all but the most abundant components of our damped system have $\log[\N{HI}] \leq 19.5$. If these clouds were isolated structures in the IGM, they would be highly ionized.
This would markedly reduce the accuracy of abundance determinations based on the assumption that most of the hydrogen is neutral and most of the metals are singly ionized. In order to address this problem we ran several ionization simulations with the aid of a program developed by Vincent Virgilio to investigate the predicted neutral hydrogen fraction (H$^0$/H) in the damped system. Figure 14a shows the neutral hydrogen fraction plotted against $\log[\N{H^0}]$ measured from either face of a plane-parallel layer with constant H volume density, $n_H = 0.1 \> \rm cm^{-3}$. The layer is subjected on both sides to attenuated power law continuum radiation as calculated by Madau (1992) for a redshift of $z$=2.46 with a mean intensity of $J_\nu = 0.195 \sci{-21} \> \rm ergs \> s^{-1} \, \rm cm^{-2} \> Hz^{-1} \> sr^{-1}$ at 1 Rydberg. The simulation assumes a temperature of $10^4$ K (purely for determining the recombination rate of H$^+$) while satisfying the ionization and transfer equations in a large number (100) of parallel discrete cells. Each face of the plane-parallel layer is illuminated by uniform radiation with incidence angles covering $2 \pi$ steradians. The calculation also assumes zero source function (i.e. no reemission). Although this model is rather simplified, we find similar CLOUDY (version 84.12: Ferland 1991) calculations are in good agreement with our results. Figure~\ref{I-Comp} compares the neutral fraction predictions of our model with the corresponding predictions by CLOUDY. The 2-sided illumination simulation predicts a lower neutral fraction than CLOUDY even though CLOUDY assumes only perpendicular incidence. Thus, our simulations reveal that the 1-sided assumptions inherent in CLOUDY probably underestimate the degree of ionization in optically thick absorption systems. Our simulations predict a uniform layer with $\N{H^0} = 2.4 \sci{20} \, \rm cm^{-2}$ and $n_H = 0.1 \; \rm cm^{-3}$ will be $96\%$ neutral.
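The balance solved cell by cell in this simulation is the standard photoionization equilibrium; schematically (our summary of the calculation described above, under its stated assumptions of fixed temperature and zero source function), \[ n({\rm H^0}) \int_{\nu_0}^{\infty} {4 \pi J_\nu \over h \nu} \, \sigma_\nu \, d\nu = \alpha(T) \, n_e \, n({\rm H^+}) , \] where $\sigma_\nu$ is the hydrogen photoionization cross-section, $\nu_0$ is the Lyman limit frequency, $J_\nu$ is the local mean intensity after attenuation through the intervening cells, and $\alpha(T)$ is the recombination coefficient evaluated at $10^4$ K. The neutral fraction H$^0$/H in each cell then follows directly once the transfer equations fix $J_\nu$.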
The results should be similar for a more realistic multicomponent system, since we expect the majority of the 23 components in the $z$=2.46 system to be shielded from ionizing radiation by gas both above and below the location of the cloud in the layer, thereby maintaining neutrality (Figure 14b). Therefore, we measure the metal abundances of our damped Ly$\alpha$ system by assuming the abundance of element X to equal $\N{X^+} / \N{H^0}$, where X$^+$ is the singly ionized state of element X. Note, however, that this result is very dependent on the value of $n_H$. For instance, we find a neutral fraction of $\approx 37\%$ for $n_H = 10^{-3} \; \rm cm^{-3}$ and $\N{H^0} = 2.4 \sci{20} \, \rm cm^{-2}$. Of course, physically this would imply a very large system with dimensions exceeding 100 kpc. \subsubsection{CLOUDY simulations} We have performed CLOUDY photoionization simulations with the aim of using the relative abundances of different ionization states of the same element to estimate the degree of ionization of our system. The analysis parallels the treatment presented by Lu et al. (1995). For the input radiation we used an attenuated power law ionizing spectrum computed by Madau (1992). The spectrum was normalized to a mean intensity of $J_\nu = 0.195 \sci{-21} \> \rm ergs \> s^{-1} \, \rm cm^{-2} \> Hz^{-1} \> sr^{-1}$ at 1 Rydberg calculated at an average redshift of $z$=2.46. The calculations assume a plane-parallel geometry with only one face illuminated and with incident radiation perpendicular to the surface, in contrast to the more realistic assumptions used to calculate H$^0$/H in the previous section. The metallicity is fixed at [Z/H] = $-0.5$, the hydrogen volume density is varied, and the program terminates at a column density $\log [\N{H^0}] = 20.38$. Thus the neutral hydrogen column density is fixed at the observed value $\log [\N{H^0}] = 20.38$ while the total hydrogen column density is allowed to vary.
Although this simulation is overly simplified (e.g. single-side illumination, normal incidence, no dust depletion model), it does provide a good estimate of the degree of ionization. Figure~\ref{I-Comp} presents the results of the simulations. Note that no conclusions concerning the degree of ionization can be drawn by comparing the relative abundances of the low-ions because their column densities track $\N{H^0}$, which is held constant. Given the CLOUDY results, we can use the observed Si$^+$ to Si$^{3+}$ and Al$^+$ to Al$^{++}$ ratios to determine the ionization level of the gas. Since the Al II profile is heavily saturated, we take $\log N_a(v) = 12.00$ per pixel, corresponding to $I_a(v)/I_i(v) = 0.05$ in the optical depth equation (Equation 1). Given the degree of saturation, this value yields a {\em very conservative} lower limit for [Al$^+$/Al$^{++}$] $\equiv \log[\N{Al^+}] - \log[\N{Al^{++}}]$. Figure~\ref{Rtio} is a plot of (a) [Si$^+$/Si$^{3+}$] and (b) [Al$^+$/Al$^{++}$] for 5 pixel bins over the entire low-ion region. The large dots represent the average value over the velocity regions defined by the vertical dashed lines and the borders of the plot. Table~\ref{rtio_tab} gives the values of [Si$^+$/Si$^{3+}$] and [Al$^+$/Al$^{++}$] as well as the corresponding H$^0$/H$^+$ ratio derived from the CLOUDY results for the velocity regions. With the exception of the region $-40$ km~s$^{-1}$ $< v <$ 40 km~s$^{-1}$, we find [Al$^+$/Al$^{++}$] $>$ 0.4 dex. This yields a neutral hydrogen fraction H$^0$/H $>$ 0.5. The velocity region $v$ = [-100, 40] {km~s$^{-1}$ } corresponds to the strongest feature in all of the low-ion profiles and we expect the abundance of Al II to greatly exceed the value assumed above. The [Si$^+$/Si$^{3+}$] results are more difficult to interpret because the Si IV line-profile does not closely trace the low-ion or Al III profiles.
According to the Si II to Si IV ratio, the most highly ionized region is $-218$ km~s$^{-1}$ $< v < -140$ km~s$^{-1}$, which is in clear contradiction with the [Al$^+$/Al$^{++}$] results. We believe that a significant portion of the Si IV absorption is due to gas which is physically separate from the low-ion and Al$^{++}$ gas. As such, we consider the [Si$^+$/Si$^{3+}$] values to be lower limits. To summarize, our results suggest that H$^0$/H $>$ 0.5 is a conservative lower limit on the neutral fraction of the gas. Therefore, it is highly likely that the gas in the $z$ = 2.46 damped system is mainly neutral. It is also likely that the neutrality of the gas is unaffected by the presence of {\em internal} sources of ionizing radiation such as OB stars. We checked this possibility by performing CLOUDY calculations with input from a black body with T = 4$\times$10$^{4}$ K. The resulting Al$^{+}$/Al$^{++}$ and Si$^{+}$/Si$^{3+}$ ratios are indistinguishable from the ratios predicted by the external radiation field considered above. The results are similar because the sharp drop in ionizing flux at the Lyman limit in the attenuated external spectrum (\cite{mad92}) mimics the exponential fall off of a black body. As a result the metal-line ratios indicate H is mainly neutral in the $z$ = 2.46 absorber, for plausible sources of external and internal radiation. \section{RESULTS} This section presents the results from our abundance and kinematic analyses of the damped Ly$\alpha$ system at $z$=2.462. We discuss the evidence for dust in this system and remark on the nearly constant abundances relative to Zn for three of the velocity features in our system. Finally, we compare and contrast the kinematics of the damped Ly$\alpha$ system with several other systems, including another published HIRES damped Ly$\alpha$ system (\cite{wol94}).
\subsection{Abundances of the $z$=2.462 System} Table~\ref{abnd} lists the column density $\log[\N{X}]$, and the logarithmic abundance of element X relative to hydrogen normalized to solar abundances, [X/H] $\equiv \log[\N{X}/\N{H}] - \log[\N{X}/\N{H}]_\odot$, for the damped Ly$\alpha$ system at $z$=2.462. The abundance is derived assuming $\log [\N{H}] = 20.38 \pm 0.045$ (10$\%$ error) and standard solar abundances (\cite{and89}). Our results indicate a relatively metal-rich system. If the three Zn$^+$ features (discussed in $\S 5.1$) comprise 50$\%$ of the total Zn$^+$ abundance (an analysis of these components in the other low-ions suggests $\approx 45\%$), we find [Zn/H] $= -0.262$. This is the most metal-rich of any damped Ly$\alpha$ system for which accurate abundances have been determined at redshift $z \geq 2.0$. Furthermore, it has a higher metallicity (assuming Zn predicts metallicity; see below) than all but one of the systems measured in the most extensive survey of metallicity in damped Ly$\alpha$ systems carried out so far (\cite{ptt94}). Figure~\ref{RelAbd} and Table~\ref{depl} present the abundances of the low-ions relative to Zn (assuming cosmic abundances) in the three velocity features where we could reliably measure the Zn abundance. The error bars are not entirely accurate (they are derived with the apparent optical depth method), but do serve as valuable guides. The overall variation of the abundances relative to Zn is very small for the 3 features and all of the elements are depleted with respect to Zn. The values of [X/Zn], however, do show a slight increase from the first to the third feature. We believe the most likely explanation lies in our measurements of the Zn abundance, particularly since features 1 and 2 are more likely to be blended with absorption from Cr II 2062 and because the trend is relatively systematic. 
Even given these minor differences, the relative abundances between Fe, Ni, Si, Cr and Zn are essentially constant over the 3 features (corresponding to nearly 200 km~s$^{-1}$ in velocity space). Therefore, we see little evidence for gas-phase abundance variations throughout our system. This observation indicates damped Ly$\alpha$ systems are chemically well-mixed, which further suggests they are detected at ages large compared to their internal dynamical time scales (i.e. the rotation period). \subsection{Dust Depletion} In this section we analyze variations of the gas-phase element abundances in damped {Ly$\alpha$ } systems with (a) velocity and (b) condensation temperature. Such variations have been observed in the ISM of the Galaxy and have been used to infer properties of the dust responsible for element depletion. We wish to see whether similar effects are present at high redshifts. \subsubsection{Abundance Variations} The absence of variations in [X/Zn] with respect to velocity in the $z$ = 2.462 system is contrary to the presence of such variations in the ISM. Spitzer \& Fitzpatrick (1993) and Spitzer \& Fitzpatrick (1995) recently used GHRS spectra to infer these variations for the sightlines to the Galactic stars HD 93521 and HD 149881. These sightlines are relevant because the measured $N$(H$^{0}$) are similar to the value inferred for the damped system. Moreover, the depletion levels are similar, {\em provided one interprets negative values of {\rm [X/Zn]} in damped Ly$\alpha$ systems to result from grain depletion.} From Figure~\ref{RelAbd} and Table~\ref{depl} we infer an upper limit of $\approx$ 0.3 dex for the variation of [Si/Zn], [Fe/Zn], [Ni/Zn], or [Cr/Zn] across the 3 velocity features in the $z$ = 2.46 damped system, while the variation for [Fe/Zn] in HD 93521 is $\approx$ 0.9 dex and for [Cr/Zn] in HD 149881 is 0.6 dex (\cite{spz93,spz95}).
Fitzpatrick and Spitzer attribute the changes in [X/Zn] to the increase in dust destruction for clouds with increasing random velocity with respect to galactic rotation speed. Variations in [X/Zn] could also be due to density variations along the line of sight. Studies of the ISM (\cite{jen87}) have shown correlations between the average hydrogen volume density $n_H$ and the degree of depletion $D$ relative to cosmic abundances for a variety of elements, including Si, Cr, and Fe. Jenkins fitted the ISM data with the following depletion curve: \begin{equation} D = d_0 + m \, \left [ \, \log n_H + 0.5 \, \right ] \end{equation} \noindent where $D$ is the logarithmic depletion relative to cosmic abundances (i.e., [X/H]), $m$ is the slope, $d_0$ is the value of $D$ at log $n_H = -0.5$ and $n_H$ is the hydrogen volume density averaged over the line of sight to a given star. It is straightforward to show that the difference of two depletion values corresponding to two measurements ($\Delta D \equiv D_2 - D_1$) can be related to the ratio of the $n_H$ values of each measurement: \begin{equation} {(n_H)_2 \over (n_H)_1} = 10^{\big | {\Delta D \over m} \big |} \quad . \end{equation} As stated above the variation of Si, Cr, and Fe relative to Zn over the 3 features is only $\approx$ 0.3 dex, where the largest variation is between the first and third features (the bluest and reddest). As suggested in $\S 7.1$ this variation is almost certainly systematic, most likely a result of blending with Cr II 2062. Therefore, the value can be considered an \underline{upper limit} to the actual abundance variation over these 3 components. Table~\ref{nH} lists the depletion variations, the ISM slopes cited in Jenkins (1987), the corresponding predicted $n_H$ variation, and 1$\sigma$ errors for Fe, Si, and Cr. We note the Cr measurements place the tightest constraints on the predicted variation of $n_H$.
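The second relation follows directly from the first: writing the depletion curve for two measurements and differencing eliminates $d_0$, \[ \Delta D = D_2 - D_1 = m \left [ \, \log \, (n_H)_2 - \log \, (n_H)_1 \, \right ] \;\; \Longrightarrow \;\; {(n_H)_2 \over (n_H)_1} = 10^{\Delta D / m} , \] and the absolute value in the equation above simply quotes the ratio of the denser to the less dense region irrespective of the sign of $\Delta D$.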
Of course, the errors are rather large, but we can confidently predict that the variations of $n_H$ are no larger than $\approx 1$ dex provided this technique is applicable. What suppresses variations of [X/Zn] in the damped system? If dust is present, then the sightline through the damped system must encounter an ISM in which $n_{H}$ varies by less than a factor of 10 along the line of sight. Moreover, grain destruction by shocks must be less efficient than in the ISM. Because the frequency of supernova explosions accounts for the inhomogeneous nature of the ISM (\cite{mck77}) and because supernovae are the main contributors of shocks in the ISM, a lower frequency of supernova explosions could account for the lack of variations in the damped system. However, in the ISM a sightline of a few kpc typically encounters 4 orders of magnitude variation in $n_{H}$ (\cite{kul87}). A drastic reduction in the rate of supernova explosions would be required to reduce this to one order of magnitude. But models for the evolution of disk galaxies suggest that supernova input in the past was stronger, not weaker, than at present (\cite{bru92}). Furthermore, nucleosynthetic considerations suggest that supernova rates in the past were at least as high as they are at present (\cite{tru95}). Therefore, while we cannot rule out this explanation altogether, it seems implausible. The null detection of variations in [X/Zn] thus requires a weaker coupling of the depletion rate to the gas density, and a lower grain destruction rate by interstellar shocks, than in the ISM. As a result, dust in the damped system is either absent or has significantly different properties from dust in the ISM.
\subsubsection{Condensation Temperature} Figure~\ref{Tcond} plots the gas-phase abundances versus condensation temperature, $T_C$, for the observed low-ions from the damped Ly$\alpha$ system (solid dots) and from the line-of-sight toward the Galactic star $\zeta$ Oph (\cite{crd94}; open squares\footnote{The error bars for the ISM data are on the order of the size of the squares or smaller.}). The idea is to determine whether the anti-correlation between [X/H] and $T_C$ observed in the ISM (\cite{jen87}) is also present in high-$z$ damped Ly$\alpha$ systems. Because $T_C$ is the temperature at which half the gas-phase atoms in a stellar atmosphere condense to solid form, the ISM anti-correlation is taken as evidence for dust formation in stellar atmospheres (\cite{fie74}). Comparison between the ISM and damped diagrams shows some similarities. Specifically, the relative rankings of [Cr/H], [Si/H], [Fe/H], and [Ni/H] are about the same. However, the absolute values are significantly higher in the damped system. In addition [Zn/H] in the ISM greatly exceeds all the other [X/H], but in the damped system [Zn/H] does not greatly exceed [Cr/H], [Fe/H], and [Ni/H], and may in fact be comparable to [Si/H]. The differences between the ISM and damped [X/H] may indicate the presence of gas in which the metallicity (inferred from undepleted [Zn/H]) is $> -0.5$ and which has a lower dust-to-gas ratio, accounting for the smaller difference between [Zn/H] and [X/H] (cf. \cite{ptt94}). On the other hand, Lu et al.\ (1995, 1996a, 1996b) interpret these patterns in terms of nucleosynthetic yields from type II supernovae. They point out that the damped Ly$\alpha$ abundance patterns of N/O, Si/Fe, Cr/Fe, and Mn/Fe are consistent with the abundance patterns observed for population II halo stars which have been primarily enriched by type II supernovae (\cite{whe89}).
They also stress that Mn/Fe for damped Ly$\alpha$ systems and halo stars disagrees with Mn/Fe inferred for the ISM, which is influenced by grain depletion. The only difficulties with the Lu et al.\ hypothesis stem from Zn/Fe. The quantity [Zn/Fe] $>$ 0 in damped Ly$\alpha$ systems, while [Zn/Fe] $\approx 0$ for stars with $-3 <$ [Fe/H] $< 0$ (\cite{sne91}). Theorists working on chemical evolution have long been puzzled as to why [Zn/Fe] is independent of [Fe/H]. The closed box model by Malaney and Chaboyer (1996) predicts [Zn/Fe] to increase with [Fe/H] owing to the metallicity-dependent yield computed for type II supernova explosions. These authors suggest that the constant [Zn/Fe] could be peculiar to the chemical evolution of the Galaxy, due to non-LTE effects in stellar atmospheres, or errors in the calculated yields. In addition, while the Malaney-Chaboyer model predicts [Zn/Fe] $< 0$, Hoffman et al.\ (1996) suggest neutrino driven winds in type II supernovae might result in [Zn/Fe] $> 0$. This effect may help explain the overabundance of Zn relative to Fe in damped Ly$\alpha$ systems. Therefore, while some questions remain to be addressed, nucleosynthetic yields and grain depletion are equally plausible explanations for the abundance patterns observed in damped Ly$\alpha$ systems. These competing explanations lead to at least two different ways of interpreting the metallicity of damped Ly$\alpha$ systems. First, if the Lu et al.\ interpretation is correct and Zn is somehow overproduced with respect to Fe, then the metallicity of damped Ly$\alpha$ systems is significantly lower ($\approx 0.7$ dex) than estimated from the Zn abundances. In terms of the system analyzed in this paper, the metallicity would be $-0.83$ dex, which would still be the highest metallicity observed in a damped Ly$\alpha$ system with $z > 2.0$.
On the other hand, if we accept depletion, the metallicity of these systems is at least as high as the Zn abundance, and the Malaney-Chaboyer results indicate it could be even higher (possibly requiring an even greater level of depletion). \subsection{Kinematics of the Damped Ly$\alpha$ System ($z$=2.462)} In this subsection we discuss the kinematics of the damped Ly$\alpha$ system at $z$=2.462. We stress again the observed differences between the high and low-ion profiles and intercompare the velocity profiles of the $z$=2.462 system and several other damped systems. \subsubsection{Comparison of the Low and High Ion Profiles} Although all of the transitions from the $z = 2.462$ system span approximately the same velocity interval ($\approx$ 200 km~s$^{-1}$), there are marked differences between the low and high-ion line profiles. As in the damped Ly$\alpha$ system toward PHL 957 discussed by Wolfe et al. (1994), the velocity profiles of all of the low-ions are visibly asymmetric, with the strongest component at one edge of the profile. In this case the strongest component is at the red edge, whereas in PHL 957 it is at the blue edge. Figure~\ref{Per-fig} and Table~\ref{per} give the column density by percent of total in 5 binned intervals in velocity space corresponding to visible features in the line profiles. This serves as quantitative evidence of the asymmetry in the low-ions and highlights the inherent differences between the low and high-ion profiles. The overall profile of the high-ions has nearly the opposite shape of the low-ion profiles, with the strongest feature on the blue edge. In addition, the profile of the high-ions is smoother, with less structure than that of the low-ions. We contend these differences, particularly those evident in the Si II and Si IV profiles, indicate the absorption associated with the high-ions does not originate in the same region as that associated with the low-ions.
The most compelling evidence for this hypothesis lies in the extreme difficulty encountered in obtaining an accurate fit to the Si IV profile by altering the column densities of the calculated Si II components while holding the redshift and $b$ values constant. Both the smoothness and the overall shape of the Si IV profile require significantly different $b$ values and argue against the inclusion of so many thin, resolved components. \subsubsection{The $z$=2.457 subsystem} The neighboring system at $z$=2.457 was unidentified in the lower resolution data. Its line profile structure is similar to that of the $z$=2.462 system, with a large velocity interval, smoother high-ion profiles, and low-ion profiles with more velocity structure. It does not, however, exhibit the edge-leading asymmetry apparent in the $z$=2.462 system. In fact, the low-ions appear relatively symmetric about $v = 70$ km~s$^{-1}$. We contend these differences in the kinematic characteristics result from the fact that the $z$=2.457 system is not a damped Ly$\alpha$ system but most likely an ionized Lyman limit system in which the gas kinematics are not determined by rotation. \subsubsection{Comparison with PHL 957} Contrasting the damped Ly$\alpha$ metal transitions with those toward PHL 957 at $z$ = 2.309, we note several important differences. First, the velocity interval of the Q0201+365 damped system is significantly larger (200 km~s$^{-1}$ vs. 50 km~s$^{-1}$). Second, the low-ions of the Q0201+365 system exhibit more velocity structure, an effect which is not due to differences in resolution. Finally, the asymmetric shape is not as prevalent in our damped system, possibly because of its larger velocity interval. That is, the gas is more evenly distributed in Q0201+365 than in the PHL 957 absorber. In terms of the thick rotating disk model, these differences may be explained by differences in the inclination angle of the line-of-sight with respect to the disk.
\subsubsection{Comparison with System at $z$ = 2.325} A comparison of the damped Ly$\alpha$ kinematics with those of the absorption system at $z$ = 2.325 toward Q0201+365 further highlights the characteristic damped Ly$\alpha$ signatures. The $z$ = 2.325 system (a possible Lyman limit system) also spans $\approx$ 200 km~s$^{-1}$ in velocity space, but does not possess the same edge-leading asymmetric profile observed in the damped systems. Furthermore, the high-ion profiles at $z$=2.325 trace the low-ion profiles more closely than in the damped Ly$\alpha$ systems. For instance, Al III was successfully tied to the low-ion VPFIT solution. These characteristics are all consistent with an explanation depicting the $z$=2.325 system as a multi-cloud system in which the velocities are random rather than systematic as in the case of rotation. Moreover, the gas is more evenly distributed. In short, the $z=2.325$ system more closely resembles the $z$=2.457 system, which is also a likely Lyman limit system. Perhaps we are observing kinematic characteristics which differentiate damped Ly$\alpha$ from Lyman limit systems. However, small-number statistics make this observation a speculation rather than a conclusion. \section{CONCLUSIONS} This paper presented HIRES spectra obtained with the Keck 10m telescope of absorbing gas toward Q0201+365. We identified over $80\%$ of the absorption features and have analyzed several of the more interesting metal-line systems. We have focused on the damped Ly$\alpha$ system at $z$=2.462 as part of an ongoing program to investigate the chemical content and kinematics of damped systems within the redshift interval $z \approx 2 - 4$. We summarize our results as follows. (1) Based on the analysis of ionization simulations, we predict the damped Ly$\alpha$ system to be significantly neutral.
Although it is possible the system is partially ionized, our analysis predicts the metals are all essentially in the singly ionized state, and that the total hydrogen column density is well within a factor of two of the adopted value from the $\N{H^0}$ measurement. (2) A hidden component analysis of the Ni II 1741, 1751 transitions did not reveal any significant hidden saturated components. We expect this to hold true for the other low-ion transitions. (3) With the VPFIT least squares line profile fitting package we have measured ionic column densities for the damped Ly$\alpha$ system at $z$=2.462 (as well as several other systems). We performed similar measurements with the apparent optical depth method and found the two results to be in agreement, further eliminating the possibility of hidden line saturation. (4) We measured the following abundances of Si, Fe, Cr, and Ni for the damped Ly$\alpha$ system: [Si/H] = $-0.376 \pm 0.052$, [Fe/H] = $-0.830 \pm 0.051$, [Cr/H] = $-0.902 \pm 0.064$, and [Ni/H] = $-1.002 \pm 0.054$. We placed limiting values on the abundances of the s-process elements Pb (3$\sigma$ detection) and Ge (upper limit), [Pb/H] = $2.233 \pm 0.121$ and [Ge/H] $<$ 0.664, and a lower limit value on the abundance of Zn, [Zn/H] $> -0.562$. Based on the VPFIT solution of the low-ions, we expect the metallicity is [Z/H] $\approx -0.262$. This damped Ly$\alpha$ system has the highest metallicity measured to date at $z \geq 2.0$. (5) Comparing individual features of the damped Ly$\alpha$ system, we find the relative abundance between Si, Fe, Cr, Ni and Zn remains nearly constant throughout our system (Figure~\ref{depl}). This suggests a well mixed system with an age large compared to the internal dynamical time scale at the epoch of detection. 
(6) We have used the relatively minor variations observed in the Si, Cr, and Fe abundances relative to Zn to place limits on the expected variation in the hydrogen volume density throughout the damped Ly$\alpha$ system, having assumed the presence of dust grains and the Jenkins relation (\cite{jen87}). Our measurements of [Cr/Zn] place a maximum variation of $n_H$ at $\approx 1$ dex. The lack of $n_H$ (and [X/Zn]) variations could be evidence of weaker supernova input in the past, but we believe they are more likely due to the absence of grains with the properties of dust found in the ISM. (7) Plotting the measured abundances versus condensation temperature (Figure~\ref{Tcond}), we do find evidence for a depletion pattern, but the overall depletion level of Si, Fe, Cr and Ni with respect to Zn is indicative of a relatively dust-free ISM cloud. Although gas with a lower dust-to-gas ratio than evident in the ISM can account for the pattern, one can also explain the pattern in terms of nucleosynthetic yields from type II supernovae (\cite{lu95b}). Both explanations are problematic and are under continued debate. Determining the proper explanation is particularly important, as the two predict different metallicities for damped Ly$\alpha$ systems, which will significantly affect the investigation of galactic chemical evolution. (8) The low-ion profiles of the damped system exhibit an edge-leading asymmetry as predicted by a simple model of rotation. The shape is similar to the other damped system observed with HIRES (PHL 957; \cite{wol94}), though the velocity interval is significantly greater.

\acknowledgments The authors would like to thank Bob Carswell for providing the line-profile fitting package VPFIT as well as Tom Barlow for his excellent HIRES data reduction software. We would also like to thank Vincent Virgilio for his help in developing the neutral hydrogen model and Piero Madau for providing the ionizing spectrum.
Finally, we would like to thank Ed Jenkins and Edward Fitzpatrick for helpful discussions. AMW and JXP were partially supported by NASA grant NAGW-2119 and NSF grant AST 86-9420443. \clearpage \begin{table*} \begin{center} \begin{tabular}{lccc} UT Date & Exposure & Wavelength & Resolution\\ & Time (s) & Coverage ($\rm {\rm \, \AA}$) & (km~s$^{-1}$ ) \\ \tableline 1994 Sep 15 & 9600 & 4720 - 7130 & 7.2 - 8.0 \cr 1994 Sep 30 & 11950 & 4790 - 7180 & 7.0 - 8.0 \cr 1994 Oct 1 & 13030 & 4790 - 7180 & 7.2 - 8.1 \cr \end{tabular} \end{center} \caption{JOURNAL OF OBSERVATIONS} \label{obs} \end{table*} \clearpage \include{Q0201_tbl2} \include{Q0201_tbl3} \include{Q0201_tbl4a} \begin{table} \dummytable\tablenum{2}\label{orders} \end{table} \begin{table} \dummytable\tablenum{3}\label{fval} \end{table} \begin{table} \dummytable\tablenum{4a}\label{2462TL} \end{table} \begin{table*} \begin{center} \begin{tabular}{ccccclcc} Comp & $z$ & $\sigma_z$ & b & $\sigma_b$ & Ion & log $N$ & $\sigma_{\rm{log} {\it N}}$\cr & & ($\sci{-5}$) & (km~s$^{-1}$ ) & (km~s$^{-1}$ ) & & ($\, \rm cm^{-2}$) & ($\, \rm cm^{-2}$) \cr \tableline 1 & 2.459346 & 2.5 & 18.33 & 3.03 & Si$^{+3}$ & 11.72 & 0.53 \cr & & & & & C$^{+3}$ & 13.47 & 0.07 \cr 2 & 2.459713 & 4.4 & 43.14 & 8.77 & Si$^{+3}$ & 12.81 & 0.12 \cr & & & & & C$^{+3}$ & 13.95 & 0.04 \cr 3 & 2.460433 & 1.0 & 9.43 & 0.99 & Si$^{+3}$ & 12.77 & 0.07 \cr & & & & & C$^{+3}$ & 13.45 & 0.10 \cr 4 & 2.460753 & 1.2 & 16.65 & 1.50 & Si$^{+3}$ & 13.45 & 0.04 \cr & & & & & C$^{+3}$ & 14.19 & 0.13 \cr 5 & 2.461035 & 0.8 & 11.63 & 0.64 & Si$^{+3}$ & 13.54 & 0.04 \cr & & & & & C$^{+3}$ & 13.10 & 0.24 \cr 6 & 2.461647 & 22.3 & 82.54 & 22.64 & Si$^{+3}$ & 13.36 & 0.14 \cr & & & & & C$^{+3}$ & 12.85 & 0.38 \cr 7 & 2.461863 & 0.9 & 15.01 & 1.61 & Si$^{+3}$ & 12.81 & 0.08 \cr & & & & & C$^{+3}$ & 13.41 & 0.48 \cr 8 & 2.462229 & 3.5 & 11.01 & 5.97 & Si$^{+3}$ & 11.67 & 0.52 \cr 9 & 2.462762 & 9.1 & 38.26 & 11.23 & Si$^{+3}$ & 12.81 & 0.31 \cr & & & & & C$^{+3}$ & 
13.35 & 0.11 \cr 10 & 2.462908 & 2.1 & 5.38 & 5.93 & Si$^{+3}$ & 11.59 & 0.34 \cr & & & & & C$^{+3}$ & 13.63 & 0.12 \cr \end{tabular} \end{center} \tablenum{4b} \caption{FIT FOR $z$=2.462 -- HIGH IONS} \label{2462H} \end{table*} \clearpage \include{Q0201_tbl5a} \begin{table} \dummytable\tablenum{5a}\label{2457L} \end{table} \clearpage \begin{table*} \begin{center} \begin{tabular}{ccccclcc} Comp & $z$ & $\sigma_z$ & b & $\sigma_b$ & Ion & log $N$ & $\sigma_{\rm{log} {\it N}}$\cr & & ($\sci{-5}$) & (km~s$^{-1}$ ) & (km~s$^{-1}$ ) & & ($\, \rm cm^{-2}$) & ($\, \rm cm^{-2}$) \cr \tableline 1 & 2.456343 & 0.7 & 24.36 & 0.78 & Si$^{+3}$ & 12.98 & 0.02 \cr & & & & & C$^{+3}$ & 13.86 & 0.02 \cr 2 & 2.457273 & 0.5 & 34.58 & 0.70 & Si$^{+3}$ & 13.65 & 0.01 \cr & & & & & C$^{+3}$ & 14.54 & 0.02 \cr 3 & 2.457873 & 0.4 & 8.77 & 0.61 & Si$^{+3}$ & 12.98 & 0.02 \cr & & & & & C$^{+3}$ & 13.75 & 0.05 \cr 4 & 2.458106 & 0.9 & 7.53 & 1.17 & Si$^{+3}$ & 12.45 & 0.06 \cr & & & & & C$^{+3}$ & 13.32 & 0.07 \cr 5 & 2.458454 & 1.5 & 30.67 & 1.98 & Si$^{+3}$ & 12.67 & 0.04 \cr & & & & & C$^{+3}$ & 13.77 & 0.03 \cr \end{tabular} \end{center} \tablenum{5b} \caption{FIT FOR $z$=2.457 -- HIGH IONS} \label{2457H} \end{table*} \clearpage \include{Q0201_tbl6a} \begin{table} \dummytable\tablenum{6a}\label{2325TL} \end{table} \clearpage \begin{table*} \begin{center} \begin{tabular}{ccccclcc} Comp & $z$ & $\sigma_z$ & b & $\sigma_b$ & Ion & log $N$ & $\sigma_{\rm{log} {\it N}}$\cr & & ($\sci{-5}$) & (km~s$^{-1}$ ) & (km~s$^{-1}$ ) & & ($\, \rm cm^{-2}$) & ($\, \rm cm^{-2}$) \cr \tableline 1 & 2.319744 & 0.2 & 9.11 & 0.29 & C$^{+3}$ & 13.64 & 0.01 \cr 2 & 2.323508 & 4.4 & 15.48 & 3.55 & C$^{+3}$ & 13.13 & 0.13 \cr 3 & 2.323736 & 1.2 & 9.13 & 1.83 & C$^{+3}$ & 13.47 & 0.16 \cr 4 & 2.324048 & 1.4 & 24.68 & 2.38 & C$^{+3}$ & 14.25 & 0.04 \cr 5 & 2.324569 & 2.4 & 15.07 & 3.74 & C$^{+3}$ & 13.56 & 0.13 \cr 6 & 2.324825 & 1.9 & 11.15 & 2.60 & C$^{+3}$ & 13.41 & 0.18 \cr 7 & 2.325100 & 3.8 & 16.40 & 4.30 & 
C$^{+3}$ & 13.25 & 0.12 \cr 8 & 2.325624 & 0.7 & 18.12 & 1.23 & C$^{+3}$ & 13.47 & 0.02 \cr 9 & 2.326123 & 0.2 & 14.21 & 0.25 & C$^{+3}$ & 14.23 & 0.01 \cr \end{tabular} \end{center} \tablenum{6b} \caption{ FIT FOR $z$=2.325 -- HIGH IONS} \label{2325H} \end{table*} \begin{table*} \begin{center} \begin{tabular}{ccccclcc} Comp & $z$ & $\sigma_z$ & b & $\sigma_b$ & Ion & log $N$ & $\sigma_{\rm{log} {\it N}}$\cr & & ($\sci{-5}$) & (km~s$^{-1}$ ) & (km~s$^{-1}$ ) & & ($\, \rm cm^{-2}$) & ($\, \rm cm^{-2}$) \cr \tableline 1 & 1.475760 & 1.5 & 1.21 & 5.88 & Fe$^+$ & 12.24 & 0.18 \cr 2 & 1.475858 & 0.7 & 4.27 & 1.87 & Fe$^+$ & 12.71 & 0.07 \cr 3 & 1.476039 & 0.4 & 3.59 & 0.52 & Fe$^+$ & 13.65 & 0.03 \cr 4 & 1.476115 & 0.8 & 2.44 & 1.37 & Fe$^+$ & 13.22 & 0.06 \cr 5 & 1.476182 & 0.9 & 2.05 & 1.86 & Fe$^+$ & 12.92 & 0.08 \cr 6 & 1.476301 & 0.1 & 2.70 & 0.22 & Fe$^+$ & 13.64 & 0.04 \cr \end{tabular} \end{center} \tablenum{7} \caption{FIT FOR $z$=1.476} \label{1476} \end{table*} \begin{table*} \begin{center} \begin{tabular}{lccc} Transition & Apparent & VPFIT & Adopted \cr \tableline Si IV 1393 & $14.039 \pm 0.006$ & $14.050 \pm 0.038$ & $14.045 \pm 0.038$ \cr C IV 1550 & $14.617 \pm 0.005$ & $14.615 \pm 0.062$ & $ 14.616 \pm 0.062$ \cr \cr C I 1560 & $13.228 \pm 0.059$ & & $13.228 \pm 0.059$ \cr Fe II 1608 & $15.033 \pm 0.004$ & $15.086 \pm 0.024$ & $15.060 \pm 0.024$ \cr Al II 1671 \cr Ni II 1709 & $13.785 \pm 0.020$ & & $13.785 \pm 0.020$ \cr Ni II 1741 & $13.638 \pm 0.013$ & $13.631 \pm 0.030$ & $ 13.628 \pm 0.030 $ \cr Ni II 1751 & $13.610 \pm 0.020$ & & \cr Si II 1808 & $15.561 \pm 0.010$ & $15.546 \pm 0.026$ & $15.554 \pm 0.026$ \cr Al III 1862 & $13.606 \pm 0.008$ & & $13.606 \pm 0.008$ \cr Cr II 2056 & $13.158 \pm 0.030$ & $13.138 \pm 0.040$ & $13.148 \pm 0.040$ \cr Cr II 2062 & $13.401 \pm 0.025$ & & $13.401 \pm 0.025$ \cr Zn II 2062 & $12.468 \pm 0.046$ & & $12.468 \pm 0.046$ \cr \cr Pb II 1682 & $12.66 \pm 0.114$ & & $12.66 \pm 0.114$ \cr Ge II 1602 & $< 12.65$ & & 
$< 12.65$ \cr \end{tabular} \end{center} \tablenum{8} \caption{IONIC COLUMN DENSITIES FOR $z$ = 2.462} \label{2462I} \tablecomments{Values reported in logarithmic space have deceptively small errors. For instance, the value for $\N{Pb}$ is not a $5 \sigma$ detection and that for $\N{Ge}$ is not even a $1 \sigma$ detection.} \end{table*} \begin{table*} \begin{center} \begin{tabular}{lccc} Transition & Apparent & VPFIT & Adopted \cr \tableline Si IV 1393 & $13.839 \pm 0.003$ & $13.85 \pm 0.01$ & $ 13.85 \pm 0.01$ \cr Si IV 1403 & $13.851 \pm 0.007$ & & $13.85 \pm 0.01$ \cr C IV 1550 & $14.686 \pm 0.041$ & $14.74 \pm 0.01$ & $ 14.737 \pm 0.010 $ \cr \cr Si II 1526 & $14.129 \pm 0.006$ & $14.18 \pm 0.02$ & $ 14.18 \pm 0.02$ \cr Fe II 1608 & $13.839 \pm 0.022$ & $13.85 \pm 0.03$ & $ 13.843 \pm 0.018 $ \cr Al II 1671 & $13.321 \pm 0.004$ & $13.36 \pm 0.02$ & $ 13.36 \pm 0.02 $ \cr Al III 1862 & $12.920 \pm 0.028$ & & $12.920 \pm 0.028$ \cr \end{tabular} \end{center} \tablenum{9} \caption{IONIC COLUMN DENSITIES FOR $z$ = 2.457} \label{2457I} \end{table*} \begin{table*} \begin{center} \begin{tabular}{cccl} Component & log$_{10} \> N$(HI) & $z_{\rm abs}$ & $b$ (km~s$^{-1}$ ) \cr \tableline 1 & 18.38 & 2.460470 & 4.59 \cr 2 & 19.38 & 2.460598 & 4.44 \cr 3 & 19.26 & 2.460807 & 5.91 \cr 4 & 19.07 & 2.460944 & 12.47 \cr 5 & 19.26 & 2.461347 & 13.94 \cr 6 & 19.01 & 2.461426 & 3.00 \cr 7 & 19.47 & 2.461735 & 8.56 \cr 8 & 19.18 & 2.461905 & 3.24 \cr 9 & 18.95 & 2.462091 & 8.23 \cr 10 & 18.78 & 2.462266 & 6.22 \cr 11 & 19.26 & 2.462494 & 11.31 \cr 12 & 19.15 & 2.462594 & 3.00 \cr 13 & 19.70 & 2.462818 & 14.58 \cr \cr 14 & 17.61 & 2.456041 & 3.95 \cr 15 & 18.46 & 2.456295 & 9.58 \cr 16 & 17.57 & 2.456540 & 5.44 \cr 17 & 18.05 & 2.456679 & 3.30 \cr 18 & 18.22 & 2.456844 & 8.95 \cr 19 & 18.31 & 2.457079 & 12.83 \cr 20 & 18.05 & 2.457336 & 3.00 \cr 21 & 17.90 & 2.457460 & 3.00 \cr 22 & 18.38 & 2.457858 & 6.37 \cr 23 & 17.44 & 2.458100 & 4.65 \cr \end{tabular} \end{center} \tablenum{10} 
\caption{HI COMPONENTS IN Ly$\alpha$ PROFILE ($z$=2.46)}
\label{HI}
\end{table*}

\begin{table*}
\begin{center}
\begin{tabular}{cccccc}
Velocity (km~s$^{-1}$ ) & [Si$^+$/Si$^{3+}$] & [H$^0$/H$^+$]$_{\rm Si}$ & [Al$^+$/Al$^{++}$] & [H$^0$/H$^+$]$_{\rm Al}$ & [H$^0$/H$^+$]$_{\rm Al \; + \; Si}$ \cr
\tableline
$-218 < v < -140$ & 1.0 & $-0.16$ & 0.42 & $-0.06$ & $-0.06$ \cr
$-140 < v < -110$ & 1.7 & $0.12$ & 0.47 & $0.00$ & $0.12$ \cr
$-110 < v < -40$ & 1.8 & $0.13$ & 0.43 & $-0.04$ & $0.13$ \cr
$-40 < v < 40$ & 2.1 & $0.31$ & 0.27 & $-0.30$ & $0.31$ \cr
\end{tabular}
\end{center}
\tablenum{11}
\caption{IONIZATION LIMITS}
\label{rtio_tab}
\tablecomments{All values are conservative lower limits}
\end{table*}

\begin{table*}
\begin{center}
\begin{tabular}{lccc}
Metal & log$_{10} N$(X) (cm$^{-2}$) & [X/H] \hfil \cr
\tableline
Fe & $15.060 \pm 0.024$ & $-0.830 \pm 0.051$ \cr
Ni & $13.628 \pm 0.030$ & $-1.002 \pm 0.054$ \cr
Al & SATU \cr
Si & $15.554 \pm 0.026$ & $-0.376 \pm 0.052$ \cr
Cr & $13.158 \pm 0.030$ & $-0.902 \pm 0.064$ \cr
Zn & $> 12.468 \pm 0.046$ & $> -0.562 \pm 0.064$ \cr
\cr
C & $14.616 \pm 0.062$ & $-2.324 \pm 0.077$ \cr
Si & $14.045 \pm 0.038$ & $-1.885 \pm 0.059$ \cr
\cr
Pb & $12.66 \pm 0.114$ & $2.23 \pm 0.121$ \cr
Ge & $< 12.65$ & $< 0.644$ \cr
\end{tabular}
\end{center}
\tablenum{12}
\caption{ABUNDANCES FOR $z$ = 2.462}
\label{abnd}
\end{table*}

\begin{table*}
\begin{center}
\begin{tabular}{cccc}
[X/Zn] & Feature 1 & Feature 2 & Feature 3 \cr
 & $-144 \leftrightarrow -110$ km/s & $-40 \leftrightarrow -9$ km/s & $-9 \leftrightarrow 35$ km/s \cr
\tableline
Fe & $-0.856 \pm 0.185$ & $-0.679 \pm 0.175$ & $-0.508 \pm 0.243$ \cr
Ni & $-0.939 \pm 0.204$ & $-0.831 \pm 0.186$ & $-0.638 \pm 0.249$ \cr
Si & $-0.388 \pm 0.197$ & $-0.157 \pm 0.180$ & $-0.035 \pm 0.246$ \cr
Cr & $-0.833 \pm 0.261$ & $-0.729 \pm 0.220$ & $-0.534 \pm 0.274$ \cr
\end{tabular}
\end{center}
\tablenum{13}
\caption{DEPLETION RELATIVE TO Zn FOR $z$ = 2.462}
\label{depl}
\end{table*}
\begin{table*} \begin{center} \begin{tabular}{cccc} Metal & $\Delta D$ & $|m|$\tablenotemark{a} & ${(n_H)_2 \over (n_H)_1}$ \cr \tableline Fe & $0.348 \pm 0.305 $ & $ 0.38 \pm 0.05 $ & $ 8.2 \pm 9.7 $ \cr Si & $0.353 \pm 0.315 $ & $ 0.49 \pm 0.15 $ & $ 5.3 \pm 6.6 $ \cr Cr & $0.299 \pm 0.378 $ & $ 0.50 \pm 0.11 $ & $ 4.0 \pm 5.3 $ \cr \end{tabular} \end{center} \tablenotetext{a}{$m$ values are taken from Jenkins 1987} \tablenum{14} \caption{$n_H$ VARIATIONS FOR $z$=2.462} \label{nH} \end{table*} \begin{table*} \begin{center} \begin{tabular}{lccccc} Metal & Comp 1 & Comp 2 & Comp 3 & Comp 4 & Comp 5 \hfil\cr & $-211 \leftrightarrow -181$ & $-181 \leftrightarrow -144$ & $-144 \leftrightarrow -110$ & $-110 \leftrightarrow -40 $ & $-40 \leftrightarrow 35$ \hfil\cr \tableline Fe & 0.11 & 0.14 & 0.11 & 0.25 & 0.39 \cr Ni & 0.09 & 0.14 & 0.13 & 0.27 & 0.38 \cr Si II & 0.10 & 0.11 & 0.11 & 0.28 & 0.42 \cr Cr & 0.11 & 0.15 & 0.14 & 0.23 & 0.40 \cr Si IV & 0.17 & 0.46 & 0.09 & 0.14 & 0.10 \cr \end{tabular} \end{center} \tablenum{15} \caption{PERCENT OF TOTAL ABUNDANCES FOR $z$ = 2.462} \label{per} \end{table*} \clearpage
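The $n_H$ ratios in Table~\ref{nH} follow from the depletion differences via the Jenkins-slope relation $\Delta D = |m| \, \log_{10}\left[(n_H)_2/(n_H)_1\right]$. The short script below is an illustrative recomputation of the tabulated values (not the authors' code):

```python
# Sketch: reproduce the n_H ratios of Table 14 from the depletion
# differences Delta D and the Jenkins (1987) slopes |m|, using
# (n_H)_2 / (n_H)_1 = 10 ** (Delta D / |m|).

def nH_ratio(delta_D, m):
    """Volume-density ratio implied by a depletion difference delta_D
    and a Jenkins slope |m|."""
    return 10 ** (delta_D / abs(m))

# (Delta D, |m|) pairs for Fe, Si, Cr taken from Table 14
table14 = {"Fe": (0.348, 0.38), "Si": (0.353, 0.49), "Cr": (0.299, 0.50)}

for metal, (dD, m) in table14.items():
    print(f"{metal}: (n_H)_2/(n_H)_1 = {nH_ratio(dD, m):.1f}")
# Fe -> 8.2, Si -> 5.3, Cr -> 4.0, matching the table's central values
```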
\section{Introduction} Dimension reduction techniques have been widely used for inferring and explaining an underlying structure in high dimensional data. One of these techniques is \emph{factor analysis}, which linearly maps high dimensional data onto a lower dimensional subspace. This is achieved by finding a set of latent variables, known as \emph{factors}, such that the observed variables may be represented by linear combinations of these factors. The aim of dimension reduction is realised by using a number of factors much smaller than the number of observed variables. In some applications, it is desirable for each factor to be associated with only a subset of the observed variables. In other words, the factor loadings, which quantify the weighting of each variable on each factor, are expected to be sparse. \emph{Sparse factor analysis} is an extension of factor analysis that allows such sparsity to be captured. The benefit of sparse factor analysis is its increased interpretability of the inferred factors, as each factor is encouraged to have only a few significant loadings. Sparse factor analysis has been applied to the analysis of gene expression data~\citep{West2003, sabatti2006, fatf}. One of the aims of such analyses is to infer gene regulatory networks, i.e. to identify sets of genes each regulated by a shared biological pathway. Thus, the use of sparse factor models is appropriate, as it allows the interpretation of factors as biological pathways, which each regulate a small number of the genes. Recent extensions of sparse factor analysis in genomics include~\citet{gao2016, hore2016, fsclvm, mofa, ebmf}. Bayesian approaches have modelled the sparsity of factor loadings by using sparsity-inducing priors such as a ``spike and slab prior'' \cite{West2003}. Markov chain Monte Carlo (\textsc{mcmc}), which relies on sampling from the posterior distribution, has been typically employed for Bayesian inference in sparse factor analysis. 
On the other hand, the recent extensions of sparse factor models in \citet{hore2016, fsclvm, mofa, ebmf} have used variational inference (\textsc{vi}). \textsc{vi} reformulates the inference problem as an optimisation problem of finding an approximate distribution that resembles the posterior distribution. It is known that \textsc{vi} tends to be faster than \textsc{mcmc}, but it does not provide theoretical guarantees of finding the exact posterior distribution, which \textsc{mcmc} provides \citep{viblei}. We aim to investigate the relative strengths and weaknesses of \textsc{mcmc} and \textsc{vi} when applied to sparse factor models with a spike and slab prior. We derive and implement \textsc{mcmc} and \textsc{vi} algorithms and assess the trade-off between accuracy and computational efficiency using both simulated and biological data. \citet{manu} performed a similar comparison, but they used a relaxed sparsity prior for their \textsc{vi} algorithm, instead of the exact spike and slab prior. Our work differs from \citet{manu} as we consider a slightly more flexible sparse factor model, and derive a \textsc{vi} algorithm for the exact spike and slab prior. Our comparison results show that the higher computational efficiency of \textsc{vi} is desirable over the small gain in accuracy when using \textsc{mcmc}, provided that sufficient \textsc{vi} trials are run. Our implementation of the \textsc{mcmc} and \textsc{vi} algorithms for sparse factor models is available at \url{https://github.com/ysfoo/sparsefactor}.
\section{The sparse factor model}

Given $N$ observations $\mathbf{Y} = [\boldsymbol{y}_1, \boldsymbol{y}_2, \ldots, \boldsymbol{y}_N]$, each with $G$ features, the sparse factor model describes the data using $K$ factors with a loading matrix $\mathbf{L}\in \mathbb{R}^{G\times K}$ and an activation matrix $\mathbf{F}\in \mathbb{R}^{K\times N}$ such that $\mathbf{Y} = \mathbf{L}\mathbf{F} + \mathbf{E}$, where $\mathbf{E}\in \mathbb{R}^{G\times N}$ is a matrix of random errors. In the context of gene expression, $\mathbf{Y}$ represents gene expression data across $N$ samples, each measured on $G$ genes. A possible interpretation of the $K$ factors is to view them as biological pathways which regulate gene expression. By assuming independent normal errors with feature-specific variance, the distribution of $\mathbf{Y}$ is given by \begin{equation} p\giventhat*{\boldsymbol{y}_{\cdot j}}{\mathbf{L}, \mathbf{F}, \boldsymbol{\tau}} = \mathcal{N}\giventhat*{\boldsymbol{y}_{\cdot j}}{\mathbf{L}\boldsymbol{f}_{\cdot j},\text{diag}{\left(\left\{\tau_i^{-1}\right\}_{i=1}^G\right)}}, \end{equation} where $\boldsymbol{y}_{\cdot j}$ and $\boldsymbol{f}_{\cdot j}$ indicate the $j$-th column of $\mathbf{Y}$ and $\mathbf{F}$ respectively, and $\tau_i$ is the precision of the normal errors for observations on feature~$i$.

\textbf{Prior specifications.} To induce sparsity in the loading matrix $\mathbf{L}$, we introduce a binary matrix $\mathbf{Z}\in \{0,1\}^{G\times K}$ whose entries are 1 when the corresponding loading is nonzero.
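The generative model $\mathbf{Y} = \mathbf{L}\mathbf{F} + \mathbf{E}$ is straightforward to simulate, which is useful for testing inference code. A minimal sketch with hypothetical dimensions (dense loadings for now, before any sparsity prior is imposed):

```python
import numpy as np

# Sketch (illustrative dimensions, not the paper's experiments): simulate
# Y = L F + E, with independent normal errors whose precision tau_i is
# feature-specific, matching the likelihood in the text.
rng = np.random.default_rng(0)
G, N, K = 50, 30, 4                    # features, samples, factors

L = rng.normal(size=(G, K))            # loading matrix
F = rng.normal(size=(K, N))            # activation matrix
tau = rng.gamma(2.0, 1.0, size=G)      # per-feature error precisions
E = rng.normal(scale=tau[:, None] ** -0.5, size=(G, N))  # heteroscedastic noise

Y = L @ F + E
print(Y.shape)  # (50, 30)
```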
We then specify the following spike-and-slab prior: \begin{equation} p\giventhat*{l_{ik}}{z_{ik}, \alpha_k} = \begin{cases} \delta_0{\left(l_{ik}\right)} &\text{if }z_{ik} = 0\\ \mathcal{N}\giventhat*{l_{ik}}{0, \alpha_k^{-1}} &\text{if }z_{ik} = 1 \end{cases}, \end{equation} where $\delta_0$ is the Dirac delta distribution, $l_{ik}$ is the loading of factor $k$ on feature $i$, $z_{ik}$ is a binary variable which indicates whether feature~$i$ is related to factor~$k$, and $\alpha_k$ is the factor-specific normal precision of the nonzero values of $l_{ik}$. Independent Bernoulli priors are placed on the connectivity matrix $\mathbf{Z}$: \begin{equation} p{\left(z_{ik}\right)} = \text{Bernoulli}\giventhat*{z_{ik}}{\pi_{k}}, \end{equation} where $\boldsymbol{\pi} = \left\{\pi_{k}\right\}_{k=1}^K$ are hyperparameters to be specified. Note that $\pi_{k}$ controls the sparsity of column $k$ of $\mathbf{Z}$, which corresponds to factor $k$. A gamma prior (shape-rate parametrisation) is imposed on the precisions of the loading matrix $\mathbf{L}$: \begin{equation} p{\left(\alpha_k\right)} = \Gamma\giventhat*{\alpha_k}{a_\alpha,b_\alpha}, \end{equation} where $a_\alpha$ and $b_\alpha$ are hyperparameters to be specified. To avoid non-identifiability issues caused by scaling \citep{fatf, manu}, a unit variance normal prior is used for the activation matrix $\mathbf{F}$: \begin{equation} p{\left(\boldsymbol{f}_{\cdot j}\right)} = \mathcal{N}\giventhat*{\boldsymbol{f}_{\cdot j}}{\mathbf{0},\mathbf{I}}, \end{equation} where $\mathbf{I}$ is the identity matrix of size $K$. Lastly, a gamma prior is placed on the precision parameters of the error model: \begin{equation} p{\left(\tau_i\right)} = \Gamma\giventhat*{\tau_i}{a_\tau,b_\tau}, \end{equation} where $a_\tau$ and $b_\tau$ are hyperparameters to be specified. 
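The prior hierarchy above can be sampled directly. The sketch below (illustrative hyperparameter values, not the paper's settings) draws $\mathbf{Z}$, $\boldsymbol\alpha$, and $\mathbf{L}$, and checks that the realised sparsity of each column of $\mathbf{Z}$ matches $\pi_k$:

```python
import numpy as np

# Sketch (hypothetical hyperparameters): draw the loading matrix from the
# spike-and-slab hierarchy. z_ik ~ Bernoulli(pi_k) selects the slab, and
# nonzero loadings are N(0, 1/alpha_k) with alpha_k ~ Gamma(a_alpha, rate b_alpha).
rng = np.random.default_rng(1)
G, K = 2000, 3
pi = np.array([0.1, 0.3, 0.5])          # per-factor sparsity levels
a_alpha, b_alpha = 2.0, 2.0

alpha = rng.gamma(a_alpha, 1.0 / b_alpha, size=K)   # numpy's gamma takes a scale
Z = rng.random((G, K)) < pi                          # connectivity matrix
L = np.where(Z, rng.normal(scale=alpha ** -0.5, size=(G, K)), 0.0)

print(Z.mean(axis=0))   # empirical column sparsity, close to pi for large G
```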
\textbf{Bayesian inference.} Bayesian inference aims to find the posterior distribution $p\giventhat*{\mathbf{L}, \mathbf{F}, \mathbf{Z}, \boldsymbol{\tau}, \boldsymbol{\alpha}}{\mathbf{Y}}$. An exact calculation of the posterior distribution is intractable, so we resort to approximate methods to obtain the posterior distribution. The next two sections describe two possible Bayesian inference techniques for the sparse factor model, namely Markov chain Monte Carlo and variational inference.

\section{Markov chain Monte Carlo}

\emph{Markov chain Monte Carlo} (\textsc{mcmc}) is a family of algorithms which simulate the posterior distribution $p{(\boldsymbol\theta| \mathbf{Y})}$, where $\boldsymbol\theta$ and $\mathbf{Y}$ denote model parameters and data respectively. In particular, \textsc{mcmc} simulates samples from $p{(\boldsymbol\theta| \mathbf{Y})}$ by constructing a Markov chain $\left\{\boldsymbol\theta^{(n)}\right\}_{n=1}^{\infty}$ that converges to $p{(\boldsymbol\theta| \mathbf{Y})}$. The \emph{Gibbs sampler} is an \textsc{mcmc} sampler for a multivariate $\boldsymbol\theta = (\theta_1,\ldots, \theta_m)$ which uses full conditional distributions to construct the Markov chain. Specifically, the transition probability of the chain (assuming a fixed ordering) can be written as \begin{equation} p\giventhat*{\boldsymbol\theta^{(n)}}{\boldsymbol\theta^{(n-1)}} = \prod_{i=1}^m p\giventhat*{\theta^{(n)}_i}{ \theta^{(n)}_1, \ldots, \theta^{(n)}_{i - 1}, \theta^{(n-1)}_{i + 1}, \ldots, \theta^{(n-1)}_{m}, \mathbf{Y}}. \end{equation} That is, the Gibbs sampler cycles through sampling each parameter (or parameter block) from its full conditional posterior distribution.

\textbf{Collapsed Gibbs sampler for the sparse factor model.} In the sparse factor model, there is a strong dependence between the parameters $l_{ik}$ and $z_{ik}$, as they must be either both zero or both nonzero. Hence, applying a standard Gibbs sampler to the sparse factor model will lead to slow mixing.
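The effect of strong parameter dependence on Gibbs mixing can be seen in a standard toy target (an illustration of the generic sampler, not the paper's model): a bivariate normal with correlation $\rho$, whose full conditionals are themselves normal, $\theta_1 \mid \theta_2 \sim \mathcal{N}(\rho\theta_2, 1-\rho^2)$ and symmetrically.

```python
import numpy as np

# Toy Gibbs sampler for a standard bivariate normal with correlation rho.
# Each sweep draws each coordinate from its full conditional; with rho
# close to 1 the conditionals are narrow, so successive sweeps move slowly
# through the target (slow mixing).
rng = np.random.default_rng(2)
rho, T = 0.9, 20000
s = np.sqrt(1 - rho ** 2)

theta = np.zeros(2)
samples = np.empty((T, 2))
for t in range(T):
    theta[0] = rng.normal(rho * theta[1], s)   # draw from p(theta_1 | theta_2)
    theta[1] = rng.normal(rho * theta[0], s)   # draw from p(theta_2 | theta_1)
    samples[t] = theta

print(np.corrcoef(samples[5000:].T)[0, 1])     # close to rho = 0.9
```

Despite the slow per-sweep movement, the long-run samples recover the target correlation; collapsing out a strongly coupled parameter, as done below for $\mathbf{L}$, reduces this coupling.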
To improve the mixing of the chain, a collapsed Gibbs sampler is used, following the approach of \cite{manu}. Specifically, $\mathbf{L}$ is marginalised out from the conditional distribution of $\mathbf{Z}$, so that $z_{ik}$ is sampled from $p\giventhat*{z_{ik}}{\mathbf{Y}, \mathbf{F}, \mathbf{Z}_{-ik}, \boldsymbol{\tau}, \boldsymbol{\alpha}}$ instead of the full conditional $p\giventhat*{z_{ik}}{\mathbf{Y}, \mathbf{L}, \mathbf{F}, \mathbf{Z}_{-ik}, \boldsymbol{\tau}, \boldsymbol{\alpha}}$, where $\mathbf{Z}_{-ik}$ denotes the elements of $\mathbf{Z}$ excluding $z_{ik}$. Algorithm~\ref{cgs} describes this sampler in full, and the derivations of the conditional distributions can be found in Appendix~\ref{dg}. \begin{algorithm}[h] \KwIn{$T, \mathbf{Y}, \boldsymbol{\pi}, a_\tau, b_\tau, a_\alpha, b_\alpha$} \KwOut{$T$ samples approximating the posterior distribution} randomly initialise $\mathbf{L}', \mathbf{F}', \mathbf{Z}', \boldsymbol{\tau}', \boldsymbol{\alpha}'$ (most recent sample)\; \For{$t\leftarrow 1$ \KwTo $T$}{ \For{$i\leftarrow 1$ \KwTo $G$}{ \For{$k\leftarrow 1$ \KwTo $K$}{ $z'_{ik}\leftarrow z_{ik}^{(t)}\sim p\giventhat*{z_{ik}}{\mathbf{Y}, \mathbf{F}', \mathbf{Z}'_{-ik}, \boldsymbol{\tau}', \boldsymbol{\alpha}', \boldsymbol{\pi}}$\; } } $\mathbf{L}'\leftarrow \mathbf{L}^{(t)}\sim p\giventhat*{\mathbf{L}}{\mathbf{Y}, \mathbf{F}', \mathbf{Z}', \boldsymbol{\tau}', \boldsymbol{\alpha}'}$\; $\mathbf{F}'\leftarrow \mathbf{F}^{(t)}\sim p\giventhat*{\mathbf{F}}{\mathbf{Y}, \mathbf{L}', \mathbf{Z}', \boldsymbol{\tau}', \boldsymbol{\alpha}'}$\; $\boldsymbol{\tau}'\leftarrow \boldsymbol{\tau}^{(t)}\sim p\giventhat*{\boldsymbol{\tau}}{\mathbf{Y}, \mathbf{L}', \mathbf{F}', \mathbf{Z}', \boldsymbol{\alpha}', a_\tau, b_\tau}$\; $\boldsymbol{\alpha}'\leftarrow \boldsymbol{\alpha}^{(t)}\sim p\giventhat*{\boldsymbol{\alpha}}{\mathbf{Y}, \mathbf{L}', \mathbf{F}', \mathbf{Z}', \boldsymbol{\tau}', a_\alpha, b_\alpha}$\; } \KwRet{$\left\{\mathbf{L}^{(t)}, \mathbf{F}^{(t)}, 
\mathbf{Z}^{(t)}, \boldsymbol{\tau}^{(t)}, \boldsymbol{\alpha}^{(t)}\right\}_{t=1}^T$} \caption{Collapsed Gibbs sampler for the sparse factor model}\label{cgs} \end{algorithm}

\textbf{Handling the symmetry of the sparse factor model.} Given a mode of the posterior distribution, if factors (of equal $\pi_k$) are permuted, or if the signs of the entries of $\mathbf{L}$ and $\mathbf{F}$ corresponding to a factor are switched, one obtains another equivalent mode. These symmetries result in up to $2^K K!$ equivalent modes in the posterior distribution, implying that the model is non-identifiable. An \textsc{mcmc} sampler for this model potentially suffers from the label-switching or sign-switching issue. If this occurs, posterior averages will not provide meaningful summaries of the information available; see \citet{relabel} for more discussion. For our Gibbs sampler, label-switching or sign-switching rarely happens within a chain, and each chain usually explores only one of the equivalent modes. This is because we simulate $\mathbf{L}$ and $\mathbf{F}$ in separate steps. For example, when sampling $\mathbf{L}$, it is unlikely (for large enough $G$) that the signs of one of its columns flip while $\mathbf{F}$ is held constant. Similar behaviour of \textsc{mcmc} samplers has been previously noted in \citet{structure}. Exploring a single mode corresponding to a particular labelling of the factors is not a major problem, because the equivalent modes from permuted factors are the same from the point of view of inferring a set of factors. The ambiguity of the sign could be resolved later based on domain-specific knowledge, such as genes known to be up-regulated in a particular pathway~\citep{manu}. Nevertheless, model non-identifiability is still an issue when it is desired to combine multiple chains from different starting values, as each chain may explore a different mode.
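The symmetry can be verified numerically: the likelihood depends on $(\mathbf{L}, \mathbf{F})$ only through the product $\mathbf{L}\mathbf{F}$, which is unchanged when a factor relabelling and a sign flip are applied jointly to both matrices. A small self-contained check (arbitrary illustrative dimensions):

```python
import numpy as np

# Numerical check of the model's symmetry: L F is invariant under jointly
# permuting factors and flipping their signs in both L and F.
rng = np.random.default_rng(3)
G, K, N = 8, 3, 5
L = rng.normal(size=(G, K))
F = rng.normal(size=(K, N))

perm = np.array([2, 0, 1])            # a factor relabelling
signs = np.diag([1.0, -1.0, -1.0])    # a sign flip of two factors

L2 = L[:, perm] @ signs               # transform the columns of L
F2 = signs @ F[perm, :]               # apply the inverse transform to F

print(np.allclose(L @ F, L2 @ F2))    # True: the likelihood cannot tell them apart
```

Any of the $2^K K!$ such transformations leaves the likelihood unchanged, which is why samples from different chains must first be mapped to a common labelling before they can be combined.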
Thus, we implemented a relabelling algorithm \cite{relabel} to deal with this issue, as well as any potential label-switching or sign-switching issue during sampling. See Appendix~\ref{rs} for details of our relabelling algorithm. \section{Variational inference} \emph{Variational inference} (\textsc{vi}) is a method from machine learning that approximates probability distributions using optimisation \citep{viblei}, serving as an alternative approach to \textsc{mcmc}. We first review \textsc{vi} in Section~\ref{subsec:vi}, and then describe its application to the sparse factor model in Section~\ref{subsec:vi_sfa}. Further background on \textsc{vi} can be found in \citet{viblei}. \subsection{Variational inference as a Bayesian inference technique} \label{subsec:vi} Let $\boldsymbol\theta$ and $\mathbf{Y}$ denote the model parameters and data, respectively. Instead of sampling from the posterior distribution $p\giventhat*{\boldsymbol\theta}{\mathbf{Y}}$, \textsc{vi} approximates the posterior distribution by recasting the inference problem into an optimisation problem. Given a family of probability distributions $\mathcal{D}$, \textsc{vi} aims to find the member of $\mathcal{D}$ (called the variational approximation) which minimises its Kullback-Leibler (\textsc{kl}) divergence to the exact posterior, \begin{equation} \label{eq:vi_op} q^*{(\boldsymbol\theta)} = \argmin_{q{(\boldsymbol\theta)}\in\mathcal{D}} \kl*{ q{(\boldsymbol\theta)} }{ p\giventhat*{\boldsymbol\theta}{\mathbf{Y}} } = \argmin_{q{(\boldsymbol\theta)}\in\mathcal{D}} \E{\log q{(\boldsymbol\theta)} - \log p\giventhat*{\boldsymbol\theta}{\mathbf{Y}}}, \end{equation} where the expectation is taken with respect to $q$. \textsc{kl} divergence penalises choices of $q$ which place significant probability mass on areas where $p$ has little probability mass, thus coercing the density of $q$ to match that of $p$. 
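This zero-forcing behaviour is visible in closed form for univariate zero-mean Gaussians, where $\mathrm{KL}\left(\mathcal{N}(0, s_q^2) \,\|\, \mathcal{N}(0, s_p^2)\right) = \log(s_p/s_q) + s_q^2/(2 s_p^2) - 1/2$ (an illustrative aside, not part of the sparse factor model):

```python
import math

# Illustration of the asymmetry of KL(q || p) for zero-mean Gaussians:
# a q that is too wide (placing mass where p has little) is penalised far
# more than a q that is too narrow, which is why VI under-disperses.
def kl_gauss(s_q, s_p):
    """KL( N(0, s_q^2) || N(0, s_p^2) )."""
    return math.log(s_p / s_q) + s_q ** 2 / (2 * s_p ** 2) - 0.5

print(kl_gauss(0.5, 1.0))   # narrow q: modest penalty (~0.32)
print(kl_gauss(2.0, 1.0))   # wide q: much larger penalty (~0.81)
```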
The \textsc{kl} divergence does not, however, penalise as heavily choices of $q$ which place little probability mass on areas where $p$ has substantial probability mass. Therefore, \textsc{vi} tends to underestimate the variance of the posterior distribution \citep{viblei}. Another implication of this penalisation is that \textsc{vi} attempts to match the most significant modes of $p$ and $q$, potentially disregarding other modes of $p$ that are further away. Thus, when applied to the sparse factor model, $q$ tends to capture only one of the modes. Finally, a choice of $\mathcal{D}$ that is too restrictive may result in a variational approximation that does not capture the posterior distribution accurately.

\textbf{Evidence lower bound.} In practice, the \textsc{kl} divergence cannot be computed directly, but is related to the \emph{evidence lower bound} $\textsc{elbo}{(q)} = \E{\log p{(\mathbf{Y}, \boldsymbol\theta)} - \log q{(\boldsymbol\theta)}}$ by the equation \begin{equation} \kl*{ q{(\boldsymbol\theta)} }{ p\giventhat*{\boldsymbol\theta}{\mathbf{Y}} } = \E{\log q{(\boldsymbol\theta)} - \log p{(\mathbf{Y}, \boldsymbol\theta)} + \log p{(\mathbf{Y})}} = -\textsc{elbo}{(q)} + \log p{(\mathbf{Y})}. \end{equation} Since $\log p{(\mathbf{Y})}$ is constant, minimising the \textsc{kl} divergence is equivalent to maximising the \textsc{elbo}. As the \textsc{kl} divergence is always nonnegative \citep{kl}, it follows that $\textsc{elbo}{(q)}\le \log p{(\mathbf{Y})}$, hence the name evidence lower bound. Provided that the family of distributions $\mathcal{D}$ is simple enough, the \textsc{elbo} is a tractable quantity to compute.

\textbf{Mean-field approximation and coordinate ascent variational inference.} A common choice of $\mathcal{D}$ is the \emph{mean-field variational family}, where the model parameters $\boldsymbol\theta = \left\{\theta_i\right\}^{m}_{i=1}$ are mutually independent in $q$.
In other words, the variational approximation can be written as a product of variational factors, \begin{equation} q{(\boldsymbol\theta)} = \prod_{i=1}^m q_i{(\theta_i)}. \end{equation} One of the most commonly used algorithms for solving the optimisation problem in equation~\eqref{eq:vi_op} with the mean-field family is coordinate ascent variational inference (\textsc{cavi}) \citep{bishop}. The \textsc{cavi} algorithm iterates through the variational factors, updating each $q_i{(\theta_i)}$ while holding the other variational factors fixed: \begin{equation} \label{eq:cavi} q_i^*{(\theta_i)} \propto \exp\left\{\Eover{-i}{\log p\giventhat*{\theta_i}{\mathbf{Y},\boldsymbol\theta_{-i}}}\right\} \propto \exp\left\{\Eover{-i}{\log p{\left(\mathbf{Y},\boldsymbol\theta\right)}}\right\}, \end{equation} where the expectation $\mathbb{E}_{-i}$ is taken with respect to the currently fixed variational factors, $\prod_{j\neq i}^m q_j{(\theta_j)}$. This update maximises the \textsc{elbo} given the currently fixed variational factors \citep{viblei}. This enables the algorithm to monotonically optimise the \textsc{elbo}, eventually reaching a local optimum. 
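A classical worked example of these updates (a textbook illustration, not the sparse factor model) is the mean-field approximation of a correlated bivariate Gaussian: for a target $\mathcal{N}(\boldsymbol\mu, \boldsymbol\Lambda^{-1})$ with precision matrix $\boldsymbol\Lambda$, each \textsc{cavi} update is available in closed form, the fixed point recovers the true mean, and the variational variances $1/\Lambda_{ii}$ underestimate the true marginal variances.

```python
import numpy as np

# CAVI for a mean-field Gaussian approximation q(x1) q(x2) of a correlated
# bivariate normal with mean mu and precision matrix Lam. The closed-form
# update for each variational mean is m_i = mu_i - (Lam_ij / Lam_ii)(m_j - mu_j),
# and each update can only increase the ELBO.
mu = np.array([1.0, -1.0])
Lam = np.array([[2.0, 1.2], [1.2, 2.0]])    # precision (inverse covariance)

m = np.zeros(2)                              # variational means, arbitrary start
for _ in range(50):
    m[0] = mu[0] - Lam[0, 1] / Lam[0, 0] * (m[1] - mu[1])
    m[1] = mu[1] - Lam[1, 0] / Lam[1, 1] * (m[0] - mu[0])

print(m)                   # converges to mu = [1, -1]
print(1 / np.diag(Lam))    # variational variances, smaller than the true marginals
```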
\subsection{Variational inference for the sparse factor model} \label{subsec:vi_sfa} For the sparse factor model, we choose the following mean-field variational family to approximate the posterior distribution: \begin{equation} q{\left(\mathbf{L},\mathbf{F},\mathbf{Z},\boldsymbol{\tau},\boldsymbol\alpha\right)} = \prod_{i=1}^{G} \left[ q{(\tau_i)} \prod_{k=1}^K q{\left(l_{ik},z_{ik}\right)} \right] \times \prod_{k=1}^K \left[ q{\left(\alpha_k\right)} \prod_{j=1}^{N} q{\left(f_{kj}\right)} \right], \end{equation} where \begin{align} q{\left(l_{ik},z_{ik}\right)} &= q{\giventhat*{l_{ik}}{z_{ik}}}q{\left(z_{ik}\right)}\\ &= \mathcal{N}\giventhat*{l_{ik}}{\mu_{l_{ik}},\sigma^2_{l_{ik}}}^{z_{ik}} \times\delta_0{\left(l_{ik}\right)}^{1-z_{ik}}\times \text{Bernoulli}\giventhat*{z_{ik}}{\eta_{ik}}\\ q{\left(f_{kj}\right)} &= \mathcal{N}\giventhat*{f_{kj}}{\mu_{f_{kj}},\sigma^2_{f_{kj}}} \\ q{\left(\tau_i\right)} &= \Gamma\giventhat*{\tau_i}{\hat{a}_{\tau_i},\hat{b}_{\tau_i}} \\ q{\left(\alpha_k\right)} &= \Gamma\giventhat*{\alpha_k}{\hat{a}_{\alpha_k},\hat{b}_{\alpha_k}}. \end{align} Each variational factor we choose is conjugate to the distribution in the likelihood function, so the variational family satisfies the update rule in equation~\eqref{eq:cavi}, and an analytic computation of the expectation on the right is possible. We use \textsc{cavi} to optimise the \textsc{elbo}. Algorithm~\ref{cavi} shows our \textsc{cavi} for the sparse factor model; see Appendix~\ref{dc} for details of the \textsc{cavi} updates and the derivation of the \textsc{elbo}. Note that the variational factor $q{\left(l_{ik},z_{ik}\right)}$ does not factorise into $q{\left(l_{ik}\right)}q{\left(z_{ik}\right)}$, as the dependency between $l_{ik}$ and $z_{ik}$, namely that they are either both zero or both nonzero, cannot be removed. Moreover, we derived the variational factor for the exact spike and slab prior instead of the relaxed sparsity prior used in \citet{manu}.
\begin{algorithm}[h] \KwIn{$\mathbf{Y}, \boldsymbol{\pi}, a_\tau, b_\tau, a_\alpha, b_\alpha$} \KwOut{variational factors which approximate the posterior distribution} randomly initialise $ q{\left(l_{ik},z_{ik}\right)}, q{\left(f_{kj}\right)}, q{\left(\tau_i\right)}, q{\left(\alpha_k\right)} \;\forall i,j,k$\; \While{\textsc{elbo} has not converged}{ \For{$i\leftarrow 1$ \KwTo $G$}{ \For{$k\leftarrow 1$ \KwTo $K$}{ $q{\left(l_{ik},z_{ik}\right)} \propto \exp{\left\{\Eover{\mathbf{L}_{-ik},\mathbf{F},\mathbf{Z}_{-ik},\tau_i,\boldsymbol{\alpha}}{\log p\giventhat*{l_{ik},z_{ik}}{\mathbf{Y},\mathbf{L}_{-ik},\mathbf{F},\mathbf{Z}_{-ik},\boldsymbol{\tau},\boldsymbol\alpha}}\right\}}$\; } } \For{$k\leftarrow 1$ \KwTo $K$}{ \For{$j\leftarrow 1$ \KwTo $N$}{ $q{\left(f_{kj}\right)} \propto \exp{\left\{\Eover{\mathbf{L},\mathbf{F}_{-kj},\mathbf{Z},\boldsymbol{\tau},\boldsymbol\alpha}{\log p\giventhat*{f_{kj}}{\mathbf{Y},\mathbf{L},\mathbf{F}_{-kj},\mathbf{Z},\boldsymbol{\tau},\boldsymbol\alpha}}\right\}}$\; } } \For{$i\leftarrow 1$ \KwTo $G$}{ $q{\left(\tau_i\right)} \propto \exp{\left\{\Eover{\mathbf{L},\mathbf{F},\mathbf{Z},\boldsymbol\alpha}{\log p\giventhat*{\tau_i}{\mathbf{Y},\mathbf{L},\mathbf{F},\mathbf{Z},\boldsymbol\alpha}}\right\}}$\; } \For{$k\leftarrow 1$ \KwTo $K$}{ $q{\left(\alpha_k\right)} \propto \exp{\left\{\Eover{\mathbf{L},\mathbf{F},\mathbf{Z},\boldsymbol\tau}{\log p\giventhat*{\alpha_k}{\mathbf{Y},\mathbf{L},\mathbf{F},\mathbf{Z},\boldsymbol\tau}}\right\}}$\; } } \KwRet{$ q{\left(l_{ik},z_{ik}\right)}, q{\left(f_{kj}\right)}, q{\left(\tau_i\right)}, q{\left(\alpha_k\right)} \;\forall i,j,k$} \caption{\textsc{cavi} for the sparse factor model}\label{cavi} \end{algorithm} \textbf{Initialisation.} \textsc{cavi} is a hill-climbing algorithm that may find only a local optimum of the \textsc{elbo}. In practice, we run multiple \textsc{vi} trials with different initialisations, and select the trial that converges to the largest \textsc{elbo} for inference. 
To reduce computation, trials may be stopped early, and only the trial corresponding to the largest \textsc{elbo} (at early stopping) is run until convergence. \section{Numerical comparisons} We compare the performance of \textsc{mcmc} and \textsc{vi}, focusing on accuracy and computational efficiency. It is expected that \textsc{vi} will converge faster than \textsc{mcmc}, but that \textsc{mcmc} will provide more accurate inference in the long run. The comparison is carried out for simulated datasets and a real biological dataset. \subsection{Simulated data} The simulated datasets each consist of $G$~=~800 features over $N$~=~100 samples, explained by $K$~=~6 factors. We simulated three datasets with varying amounts of noise to evaluate the robustness of each inference technique. All three datasets share the same underlying connectivity structure $\mathbf{Z}$ (the first panel of Figure~\ref{sim_zmat}), consisting of 5 factors with sparse loadings and 1 factor with full loadings, corresponding to the sparsity hyperparameters $\boldsymbol\pi$~=~(0.075, 0.15, 0.25, 0.375, 0.5, 1). The entries of $\mathbf{L}$ (that correspond to $z_{ik} = 1$) and $\mathbf{F}$ were simulated from independent standard normal distributions. The random errors present in each dataset were controlled by varying the signal-to-noise ratio (snr~=~1, 5, 25). We quantified the signal for feature~$i$ using the sample variance $V_i$ of the entries in row $i$ of $\mathbf{LF}$ (the expectation of the data for feature $i$). The precision of the error is then given by \begin{equation} \tau_i = \frac{\text{snr}}{V_i}. \end{equation} \begin{figure*}[t] \centering \includegraphics[width=6in]{imgs/sim_zmat.png} \caption{True connectivity structure for $\mathbf{Z}$ (simulated with snr = 5), and inferred structures (posterior mean of $\mathbf{Z}$).
Results from a \textsc{mcmc} chain with the best accuracy of $\mathbf{Z}$ and a \textsc{vi} trial with the largest converged \textsc{elbo} are shown.}\label{sim_zmat} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=5in]{imgs/sim_perf_timing.png} \caption{Performance over computation time on a simulated dataset with a moderate amount of noise (snr = 5), based on the posterior mean of the connectivity structure $\mathbf{Z}$, loading matrix $\mathbf{L}$, activation matrix $\mathbf{F}$, and low-dimensional structure $\mathbf{LF}$.}\label{sim_perf_timing} \end{figure*} We applied $\textsc{mcmc}$ and \textsc{vi} to each of these datasets, assuming \emph{a priori} that we know the correct number of sparse factors and dense factors (5 and 1 respectively). The sparsity hyperparameters $\boldsymbol\pi$ were set to be 0.1 and 0.9 for sparse factors and dense factors respectively. The remaining hyperparameters for the gamma priors were set to be $a_\tau = b_\tau = a_\alpha = b_\alpha = 10^{-3}$, corresponding to vague priors. For $\textsc{mcmc}$, we discarded the first 100 iterations as a burn-in, and then ran 200,000 iterations, keeping one out of every 10 successive samples for inference. We ran $\textsc{mcmc}$ 5 times with different initial values, giving 5 chains of 20,000 samples each. We ran 10 \textsc{vi} trials until the \textsc{elbo} converged (up to absolute difference of $10^{-10}$ or relative difference of $10^{-14}$). 
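The simulation design above, in particular the per-feature noise precision $\tau_i = \text{snr}/V_i$, can be sketched as follows. This is an illustrative re-implementation in Python, not the code actually used for the experiments:

```python
import numpy as np

rng = np.random.default_rng(1)
G, N, K, snr = 800, 100, 6, 5.0
pi = np.array([0.075, 0.15, 0.25, 0.375, 0.5, 1.0])   # sparsity per factor

# Connectivity Z, loadings L (nonzero only where z_ik = 1), activations F
Z = rng.random((G, K)) < pi                 # pi broadcasts across the G rows
L = np.where(Z, rng.standard_normal((G, K)), 0.0)
F = rng.standard_normal((K, N))
M = L @ F                                   # LF, the expectation of the data

# Per-feature precision tau_i = snr / V_i, with V_i the sample variance
# of row i of LF; noisy observations Y = LF + error
V = M.var(axis=1, ddof=1)
tau = snr / V
Y = M + rng.standard_normal((G, N)) / np.sqrt(tau)[:, None]
```

The last factor has $\pi_K = 1$, so every feature loads on it and each $V_i$ is strictly positive, keeping every $\tau_i$ finite.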
\begin{figure*}[p] \centering \includegraphics[width=5.1in]{imgs/sim_robust.png} \caption{Performance over computation time across different simulated datasets with varying amounts of noise (snr = 1, 5, 25), based on the posterior mean of the connectivity structure $\mathbf{Z}$, loading matrix $\mathbf{L}$, activation matrix $\mathbf{F}$, and low-dimensional structure $\mathbf{LF}$.}\label{sim_robust} \end{figure*} \textbf{Comparison of accuracy and speed.} We first compare the performance of \textsc{mcmc} and \textsc{vi} using a dataset with a moderate amount of noise (snr = 5). Performance is evaluated via the accuracy of the inferred connectivity structure $\mathbf{Z}$, loading matrix $\mathbf{L}$, activation matrix $\mathbf{F}$, and low-dimensional structure $\mathbf{LF}$. The accuracy of $\mathbf{Z}$ is defined as the proportion of correctly inferred entries after rounding the posterior means of $\mathbf{Z}$ to 0 or 1. The accuracy of $\mathbf{L}$, $\mathbf{F}$, and $\mathbf{LF}$ is quantified by the relative root mean squared error (\textsc{rrmse}). As an example, the \textsc{rrmse} for $\mathbf{L}$ is \begin{equation} \textsc{rrmse}{(\hat{\mathbf{L}}, \mathbf{L})} = \sqrt{\frac{\sum_{i,k}(\hat{l}_{ik} - l_{ik})^2}{\sum_{i,k} l_{ik}^2}}, \end{equation} where $\hat{\mathbf{L}}$ is the posterior mean of $\mathbf{L}$. We included the performance measures of the prior mean as a baseline for comparison. These performance measures were calculated after the inferred model parameters had been permuted and scaled appropriately to match the simulation parameters. Figure~\ref{sim_perf_timing} shows the accuracy of each method over computation time. Three out of the five \textsc{mcmc} chains captured the underlying structure well, as evidenced by the high accuracy of $\mathbf{Z}$ and small \textsc{rrmse} of $\mathbf{L}$, $\mathbf{F}$ and $\mathbf{LF}$. Two chains failed to converge, even after more than two hours of running 200,000 \textsc{mcmc} iterations for each chain.
In contrast, all \textsc{vi} trials converged in about 10 seconds, although the performance varied across trials. This is expected, as each trial climbs the \textsc{elbo} to a different local optimum. Moreover, the trial which converged to the largest \textsc{elbo} does not display any significant loss in accuracy when compared to the best accuracy achieved by \textsc{mcmc}. Figure~\ref{sim_zmat} (the second and third panels) presents a visualisation of the inferred connectivity structure $\mathbf{Z}$ from the \textsc{mcmc} chain with the best accuracy and the \textsc{vi} trial with the largest \textsc{elbo}, showing that both techniques are capable of discovering $\mathbf{Z}$. False negatives observed in the results from both methods most likely correspond to small factor loadings that were shrunk to zero. \textbf{Robustness against noise.} Now we compare the performance of \textsc{mcmc} and \textsc{vi} when applied to datasets with different amounts of noise (snr = 1, 5, 25). The best \textsc{vi} trial (best in the sense of largest converged \textsc{elbo}) achieved better performance as the amount of noise decreased (Figure~\ref{sim_robust}). In all cases, its accuracy of $\mathbf{Z}$ roughly matched that of the most accurate result from a \textsc{mcmc} chain. The only case where \textsc{mcmc} may have an advantage is the dataset with snr = 1, where 2 out of the 5 \textsc{mcmc} chains achieved a lower error on $\mathbf{L}$, $\mathbf{F}$, and $\mathbf{LF}$ than the best \textsc{vi} trial. In fact, these 2 chains managed to accurately infer factor 1, which is the sparsest factor. The remaining 3 chains and all 10 \textsc{vi} trials did not find this factor. \textsc{mcmc} may be more capable than \textsc{vi} of inferring sparse factors from noisy data, but it does not do so consistently. The accuracy measures for \textsc{mcmc} took a longer time to converge for the dataset with the least noise (snr = 25).
A possible explanation is that stronger signals make the dependency structure in the posterior distribution stronger, leading to less efficient convergence of the Gibbs sampler. In the noisier datasets, some \textsc{mcmc} chains were clearly stuck in non-optimal modes that do not match the underlying structure. \subsection{Biological data} In this section, we compare the performance of \textsc{mcmc} and \textsc{vi} when applying the sparse factor model to a real dataset. To this end, we used {\it GTEx eQTL summary data} from \cite{ebmf}, which consists of $Z$-scores measuring the associations of $G$ = 16069 genetic variants with gene expression measured in $N$ = 44 human tissues. In other words, $y_{ij}$ indicates the strength of effect of genetic variant~$i$ on gene expression in tissue~$j$. This dataset originates from the Genotype Tissue Expression (GTEx) Project \citep{gtex}, which \cite{ebmf} used as part of their evaluation of \emph{flash}, a \textsc{vi}-based method they developed for an empirical Bayes approach to matrix factorisation. See \cite{ebmf} for a further description of the GTEx eQTL summary data. \emph{flash} is capable of automatically selecting the number of factors $K$, which \cite{ebmf} report to be $K$ = 26 when applied to this dataset. We used the same number of factors as inferred by \emph{flash}, and treated all 26 factors as sparse factors, each with a sparsity hyperparameter of $\pi_k = 0.1$. The remaining hyperparameters for the gamma priors remained at $10^{-3}$. The first 2,000 $\textsc{mcmc}$ iterations were discarded as a burn-in. After the burn-in period, 16,000 iterations were run, where one out of every 10 successive samples was kept for inference. We ran $\textsc{mcmc}$ 5 times with different starting points, giving 5 chains of 1,600 samples each. We ran 10 \textsc{vi} trials until the \textsc{elbo} converged up to a tolerance of $10^{-3}$.
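The \textsc{rrmse} defined earlier, together with a hold-out masking step of the kind used in the evaluation below, can be sketched as follows. The data matrix and the column-mean "model" here are stand-ins for illustration only, not the sparse factor fit:

```python
import numpy as np

def rrmse(est, truth):
    """Relative root mean squared error between an estimate and the truth."""
    est, truth = np.asarray(est, float), np.asarray(truth, float)
    return float(np.sqrt(np.sum((est - truth) ** 2) / np.sum(truth ** 2)))

rng = np.random.default_rng(2)
Y = rng.standard_normal((200, 44))           # stand-in data matrix

# Hold out ~10% of the entries, "fit" on the rest, predict the held-out values
mask = rng.random(Y.shape) < 0.10            # True marks a held-out entry
Y_obs = np.where(mask, np.nan, Y)
col_means = np.nanmean(Y_obs, axis=0, keepdims=True)  # trivial baseline model
pred = np.broadcast_to(col_means, Y.shape)
score = rrmse(pred[mask], Y[mask])
```

In the actual comparison, the predictions come from the posterior mean of the fitted factor model rather than column means; the scoring step is the same.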
\begin{figure*}[t] \centering \includegraphics[width=3in]{imgs/gtex_perf.png} \caption{Performance on GTEx eQTL summary data over computation time, based on the posterior mean of predictions on held-out data (10\% of full data).}\label{gtex_perf} \end{figure*} \textbf{Fill-in test.} As the ground truth for an underlying structure is not available, we assessed the performance of each method using a \emph{fill-in test}, following \cite{manu} and \cite{ebmf}. We first held out (masked) 70704 data entries in $\mathbf{Y}$, corresponding to 10\% of the data entries. Then, we inferred the model parameters using the remaining 90\% of the data and predicted (filled in) these 70704 missing values using the inferred parameters. Finally, we assessed the performance of each method using the \textsc{rrmse} of the posterior mean of predictions on the held-out data, against the observed held-out data. The idea is that model parameters which better capture the true underlying structure will predict the held-out entries more accurately \citep{manu}. As expected, \textsc{vi} is computationally more efficient than \textsc{mcmc} (Figure~\ref{gtex_perf}), and its \textsc{rrmse} is only slightly worse than that of \textsc{mcmc}. \section{Conclusion} We have compared two Bayesian inference techniques, \textsc{mcmc} and \textsc{vi}, when applied to the sparse factor model. We have derived and implemented \textsc{mcmc} and \textsc{vi} algorithms, and investigated the relative strengths and weaknesses of the two methods in terms of accuracy and computational efficiency using both simulated and biological data. Our empirical investigation showed that \textsc{mcmc} gives slightly more accurate inference than \textsc{vi}; however, the difference is outweighed by the much faster speed of \textsc{vi}. After taking into account the need to run multiple \textsc{vi} trials to select the trial with the best \textsc{elbo}, \textsc{vi} achieves similar accuracy to \textsc{mcmc} in significantly less time.
\textbf{Acknowledgements.} Special thanks to Matthew Stephens for sharing the GTEx data used in the numerical comparison and allowing us to make them publicly available. The GTEx Project was supported by the Common Fund of the Office of the Director of the National Institutes of Health, and by NCI, NHGRI, NHLBI, NIDA, NIMH, and NINDS. We thank Yao-ban Chan for helpful comments on a draft manuscript. This research used the Spartan High Performance Computing system at the University of Melbourne. This work was supported by a Vacation Research Scholarship provided by the Australian Mathematical Sciences Institute to Y.S.F. \textbf{Availability of data and materials.} The GTEx eQTL summary data and our implementation of \textsc{mcmc} and \textsc{vi} algorithms for sparse factor analysis are publicly available at \url{https://github.com/ysfoo/sparsefactor}. \newpage
\section{Introduction} Radial variables have several key advantages compared with static stars, making them good stellar tracers. They can be easily identified even in crowded stellar fields using differential photometry. They are typically good distance indicators, and individual distances can be estimated with an accuracy better than a few percent. Classical Cepheids and Miras provide a unique opportunity to estimate individual ages, since their periods are anti-correlated with their ages. This implies the opportunity to trace radial gradients across the main Galactic components (thin disk: \citealt{dasilva16}; bulge: \citealt{kunder13,zoccali16}; halo: \citealt{fiorentino15a}) and in nearby stellar systems \citep{martinezvazquez16b}. Miras play a crucial role in this context, since their parent population covers a broad range in stellar ages: from a few hundred Myr up to the age of globular clusters (GCs). This means they are ubiquitous, because they are present in intermediate-age to old stellar environments. We are interested in cluster Miras, since they allow us to have {\it a priori} robust information concerning the chemical composition, the environment and the evolutionary channel where they come from. Moreover, they allow us to develop a homogeneous metallicity scale between Miras and other stars in GCs, mainly red giants, which have been widely investigated \citep[][and references therein]{carretta09}. We focused our attention on V1 in NGC~5927, since this is a well known metal-rich GC \citep{pancino10}. It should be noted that V1 is listed as an irregular variable (Lb class) in \cite{clement01}, but we identified this object as a Mira, or an intermediate type between a Mira and a semi-regular variable, according to its periodic variation with a large infrared amplitude (Fig. 1 in \citealt{sloan10}, and see also Section~\ref{sec:parameters}).
Its amplitude, {$\sim$}0.4~mag, is around the lower end of the infrared amplitudes of Miras (e.g., \citealt{matsunaga09}). As illustrated in Fig.~9 of \citet{sloan10}, V1 lies on the period-luminosity relation of Miras (and relatively large-amplitude semi-regulars). The selection of a metal-rich GC was mainly driven by the fact that the occurrence of Miras appears to be correlated with iron abundance \citep{frogel98}. The reasons why we decided to collect NIR high-resolution, high signal-to-noise ratio spectra with WINERED are manifold: a) we are mainly interested in Miras located in the bulge (field, globulars); this means stellar environments that are crowded and heavily reddened. b) WINERED covers a substantial wavelength range (0.91--1.35 $\mu$m) and is characterized by a high spectral resolution ($R \sim 28,000$, WIDE mode). Miras are late-type stars, which means that they are intrinsically brighter in the quoted wavelength range. Thus, we have the opportunity to identify many iron and $\alpha$--element lines. Moreover, WINERED is also characterized by a very high sensitivity and impressive throughputs ---from $\sim$30\% in the $z$-band to more than 50\% in the $J$-band--- when compared with similar NIR spectrographs \citep{ikeda16}. c) WINERED can also collect spectra with very high spectral resolution ($R \sim 68,000$, HIRES mode, \citealt{otsubo16}), covering either the $Y$ or the $J$ band. We present in this {\it Letter} the first spectroscopic characterisation of a cluster Mira done by using a high-resolution near-infrared spectrum ($z$, $Y$, $J$ bands), and report its abundances for iron, $\alpha$-elements and sodium. \section{Observations and data reduction} We observed the Mira V1 in NGC~5927 with the WIDE mode, $R \sim 28,000$, of WINERED, a PI instrument attached to the 3.58-m New Technology Telescope (NTT) at La Silla observatory, ESO, Chile.
The observation was done at around 08:25 on 2017 Feb 13 (UT), and the weather conditions were fairly stable. We obtained two 300-second integrations of the target, and the co--added spectrum is expected to give an S/N higher than 200. The spatial spread function shows a FWHM of about 1.4~arcsec including the seeing and the tracking accuracy. The two integrations were done with the target at different positions within the slit (i.e.\ AB positions). The reduction was performed by using the automated pipeline developed by the WINERED team \citep[see e.g.][]{taniguchi18}. This pipeline produces continuum-normalized spectra after standard analysis steps including bad pixel masking, sky subtraction, flat-fielding, scattered light subtraction, spectrum extraction, wavelength calibration and continuum normalization. We used ThAr lamp data for the wavelength calibration and the wavelengths were corrected to the standard air scale. \subsection{Tellurics subtraction} The main spurious features affecting every stellar spectrum are caused by the Earth's atmosphere. Molecular absorption bands are observed at fixed and well known wavelengths, but their strength depends on the current atmospheric conditions. In particular, NIR bands are more affected by tellurics than the optical bands. These lines must be removed from the raw spectrum before performing any kind of abundance analysis, to avoid possible mis-identification and systematics in the estimate of the equivalent widths. The most common approach relies on the use of telluric standard stars. An early-type star with few and weak metallic lines is observed, close in time and in airmass to the target star, and its spectrum is subtracted from the target \citep[][and references therein]{sameshima18}.
This technique faces three main problems: a) atmospheric conditions can change rapidly during the night, thus it is not trivial to observe a telluric standard close in time and in sky position to the individual targets; b) it requires a significant investment in telescope time; c) telluric lines and stellar photospheric lines might be blended, thus limiting the accuracy of the correction \citep{sameshima18}. We decided to adopt a different approach and to use the synthetic sky modeller {\sc TelFit} by \citet{gullikson14} to compute the telluric spectra for individual target spectra. The synthetic sky was modelled independently for the 20 spectral orders of WINERED ($\Delta \lambda \simeq 300$~\AA). This approach allows us to properly trace the variation in spectral resolution when moving from the blue ($\lambda \simeq 9,200$~\AA, $R \sim 28,000$) to the red ($\lambda \simeq 13,400$~\AA, $R \sim 30,000$) regime of WINERED (see Fig. 5 in \citealt{ikeda16}). A comparison between {\sc TelFit} and the standard telluric approach is shown in Fig.~\ref{fig:telfit} for the range 12,600--12,900~\AA. The subtractions of tellurics based on the synthetic sky spectrum and on the standard star agree quite well; indeed, both sets of residuals are of the order of 3\%. Note, however, that the standard star shows a disturbing hydrogen absorption feature at 12,818~\AA\ that compromises the identification of some useful absorption lines and is completely absent with the synthetic sky approach (see Fig.~\ref{fig:synth1}). The approach based on synthetic sky spectra appears very promising, especially considering that the spectrum of the telluric standard was collected under ideal conditions, i.e.\ 26~min after the Mira spectrum and with a minimal difference in airmass (1.04 vs 1.19).
\begin{figure*} \includegraphics[width=\textwidth]{winered_tellurics2.eps} \caption{Left column: comparison between the original WINERED spectrum of the Mira V1 in NGC~5927 (black line) and the synthetic sky modelled with {\sc TelFit} (red line). Residuals are shown in the bottom panel. Right column: as before, but the spectrum of the standard telluric star HD~118054 was used instead of {\sc TelFit}. Note that the residuals show intrinsic lines of the target.} \label{fig:telfit} \end{figure*} \section{Results and discussions} \subsection{Stellar atmospheric parameters}\label{sec:parameters} As a first step, we derived the radial velocity (RV) of our target through cross-correlation with a grid of synthetic spectra in selected wavelength regions, from 11,700 to 13,000~\AA. We determined a heliocentric velocity of RV~$= -105.2 \pm 2.0$~km~s$^{-1}$ (based on 34 spectral lines), which agrees quite well with the cluster average value given by Harris's catalogue (\citeyear{harris96}, 2010 update\footnote{\url {http://physwww.mcmaster.ca/~harris/mwgc.dat}}) of $-107.5 \pm 0.9$~km~s$^{-1}$ and by \citet{simmerer13b} of $-104.03 \pm 5.03$~km~s$^{-1}$. Note that the velocity amplitude of Miras minimally affects this finding, since their typical variation is {$\sim$}10~km~s$^{-1}$ \citep{wood79}. Since our spectral coverage does not grant the inclusion of a sufficient number of Fe~{\sc i} and (most crucially) Fe~{\sc ii} lines, the atmospheric parameters were adopted from photometric properties. More specifically, the effective temperature ($T_{\rm eff}$) was obtained using the $J-K_{\rm s}$ colour and the calibration by \citet{alonso99}, assuming the reddening value from \citet{harris96} of $E(B-V)=0.45$, which corresponds to $E(J-K_{\rm s})=0.23$ based on the extinction law of \citet{cardelli89}.
In order to estimate the pulsation phase, we used the ASAS-SN light curve \citep{shappee14,kochanek17}, which covers the epoch of our spectroscopic observation well (330 phase points from 2016 Mar to 2017 Jul; period of the Mira P=202 days from \citealt{sloan10}). Although the angular resolution of the ASAS-SN is low (15~arcsec) for our target in the GC, its light curve clearly indicates that the target was near a minimum, and the $V$-band magnitude is estimated at $15.3 \pm 0.1$~mag. Unfortunately, we have no recent infrared photometry for the target, and thus we used a light curve obtained at the 1.4-m Infrared Survey Facility about ten years ago. \citet{matsunaga06b} obtained 49 photometric points which show periodic variation from 2002 Mar to 2005 Aug with an amplitude of {$\sim$}0.4~mag in $K_{\rm s}$. Assuming that the phase lag between the $V$-band and $K$-band light curves is 0.0--0.2 (with $V$ preceding, see e.g. \citealt{smith06}), the $K$-band phase for the spectroscopic data is 0.3--0.1~cycles before the minimum, leading to $J-K_{\rm s}=1.3\pm 0.05$~mag and $K_{\rm s}=8.9 \pm 0.15$~mag from the IRSF light curve. $V-K_{\rm s}$ is then $6.4\pm 0.2$~mag, which corresponds to $(V-K_{\rm s})_0=5.1\pm 0.4$~mag, while $(J-K_{\rm s})_0=1.05\pm 0.05$~mag, with the reddening corrected. We obtained $T_{\rm eff}=3600$~K using the $J-K_{\rm s}$ colour, and 3500~K using the $V-K_{\rm s}$ colour and the calibration by \citet{bessell98}. We adopted the former, since the NIR photometry was collected simultaneously. The $J-K_{\rm s}$ colour is also less prone to reddening uncertainties than $V-K_{\rm s}$, since $E(J-K_{\rm s})/E(V-K_{\rm s})=0.19$ \citep{cardelli89}. An error of 100~K is thus a plausible uncertainty. We also applied the temperature scale based on the reddening-free method of line-depth ratios constructed by \citet{taniguchi18}.
Some lines of their 81 line pairs cannot be measured in the crowded spectrum of V1 in NGC~5927; nevertheless, we estimated $T_{\rm eff} =3665 \pm 63$~K. The current value is consistent with the estimate based on the colour--temperature transformations, thus suggesting that they are minimally affected by a possible reddening variation and/or dust formation in warm Miras. Note that this temperature estimate was slightly extrapolated, since the temperatures of the calibrating stars used by \citet{taniguchi18} range from 3780 to 5400~K. From the photometric $T_{\rm eff}$, assuming a mass of $M=0.6~M_\odot$, a true distance modulus of $\mu=14.44$~mag (\citealt{harris96}), and the bolometric correction for $K$ magnitudes by \citet{buzzoni10}, we estimated a surface gravity of $\log g=0.0\pm 0.2$, where the error comprises contributions from all the different sources of uncertainty (i.e.\ temperature, luminosity). A microturbulence of $\xi =2.0$~km~s$^{-1}$ was set, following prescriptions from the literature for cool giant stars of this kind \citep[e.g.][]{origlia13}; note also that \citet{nowotny10} and \citet{lebzelter14} imposed a value of $\xi=2.5$~km~s$^{-1}$ for Miras, in agreement, within the errors, with our value (see Table~\ref{tab:abundances}). \subsection{Abundance analysis} The determination of elemental abundances was carried out via spectral synthesis calculations using the driver {\it synth} in {\sc moog} by C.~Sneden (\citeyear{sneden73}, 2017 version) and the MARCS grid of spherical model atmospheres \citep{gustafsson08}, with $\alpha$ enhancements. The above-mentioned atmospheric parameters were adopted, along with a global metallicity in the model atmosphere of [M/H]$=-0.5$\footnote{We adopt the standard notation for abundances, whereby [X/H]$=A({\rm X})-A({\rm X})_\odot$ and $A({\rm X})=\log (N_{\rm X}/N_{\rm H})+12$.} (see Harris's catalogue). The following crucial step is the building of the line list.
We carefully selected only atomic lines that are proven to be relatively isolated, unblended and not affected by departures from local thermodynamical equilibrium (LTE). Our spectrum encompasses several K~{\sc i} lines, but we discarded this species since non-LTE corrections are not available for the lines under scrutiny (i.e.\ $\lambda = 11,772.838, 12,432.27$ and 12,522.134~\AA). Moreover, we only kept lines that provide abundances for the Sun ($T_{\rm eff}=5770$~K, $\log g=4.44$, $\xi=0.9$~km~s$^{-1}$, [M/H]$=0$; \citealt{dorazi17}), and Arcturus ($T_{\rm eff}=4286$~K, $\log g=1.66$, $\xi=1.74$~km~s$^{-1}$, [M/H]$=-0.52$; \citealt{ramirez11}) in compliance with literature values: all our measurements are in agreement within 0.1~dex with \citet{asplund09} and \citet{ramirez11}, respectively. Our choice, though limiting the number of lines and species that can be measured, allows us to infer reliable abundance measurements, with no major systematics affecting our values. Our final line list includes Na~{\sc i}, Fe~{\sc i}, Si~{\sc i}, Ca~{\sc i} and Ti~{\sc i} lines and is given in Table~\ref{tab:linelist}, where we report for each line the atomic parameters, i.e.\ excitation potential and $\log gf$. The latter come from different literature sources, including values by Kurucz\footnote{\url{http://kurucz.harvard.edu/linelists.html}} and the most recent computations for Ti~{\sc i} lines by \citet{lawler13}. In order to perform the comparison between the observed and synthetic spectra, we have selected six wavelength regions with each interval covering {$\sim$}200~\AA: this means synthetic calculations over more than 1000~\AA, covering all the spectral lines of interest. An example of a spectral region that we have selected for our chemical analysis is shown in Fig.~\ref{fig:synth1}, whereas a zoom on the Ti line at 12,671~\AA\ is displayed in Fig.~\ref{fig:zoom}.
Our target has a low effective temperature, and to properly locate the continuum we included molecular line lists for CH, CN, CO and OH from B.~Plez (private communication). The determination of C, N, O abundances is a tricky task because of their inter-dependency and because they change during the star's evolution. To add further complications, since our star is a GC member, all the three elements under discussion are involved in the hot hydrogen burning that is commonly accepted to happen in a fraction of the cluster first generation stars (the so-called {\it multiple population scenario}, see \citealt{gratton12} for an extensive review). Moreover, the WINERED spectral coverage does not grant the inclusion of the CO bandhead and/or OH features located in the $H$- and $K$-band, which are commonly used to derive abundances for carbon and oxygen. Conversely, our spectra are populated with a large number of CN features. Thus, it is not straightforward to get insights into the initial content of C, N, O and into the amount of depletion/enhancement that has occurred as the star evolves. For the current purpose we computed a grid of different synthetic spectra assuming different CNO abundances, and found the best fit that minimises the $\chi^2$. Note that this approach does not allow us to derive C, N and O abundances, since different combinations can provide similar $\chi^2$ values. We take these molecular features into account to improve the continuum determination. \begin{figure*} \includegraphics[width=\textwidth,height=0.45\textheight]{synth1.eps} \caption{Example of a spectral window exploited to compare synthetic (solid line) and observed (dot-dashed line) spectrum. Key diagnostics for abundances are marked (Fe, Ca and Ti).} \label{fig:synth1} \end{figure*} \begin{figure} \includegraphics[width=0.5\textwidth]{zoom.eps} \caption{Zoom on the Ti~{\sc i} line at 12,671~\AA.
Different spectral syntheses (solid lines) are shown for [Ti/Fe]$=0.00\pm 0.2$, compared with the observed spectrum (dot-dashed line).} \label{fig:zoom} \end{figure} Chemical abundances are affected by internal uncertainties due to two main sources of error: (1) uncertainties in the best-fit determination (which takes into account continuum displacement and line measurements) and (2) errors related to the adopted set of stellar parameters. For the first kind of error we adopted the standard deviation (r.m.s.) of the mean abundances as given by different spectral lines: typical values are in the range 0.07--0.10~dex. To estimate errors due to stellar parameters ($T_{\rm eff}$, $\log g$, $\xi$ and [M/H]) we proceeded in the standard way, changing each parameter one by one and evaluating the corresponding variation in the resulting abundances. Thus, temperature, gravity, microturbulence and global metallicity were changed by $\pm 100$~K, $\pm 0.2$~dex, $\pm 0.5$~km~s$^{-1}$ and $\pm 0.1$~dex, respectively; we found errors on the [X/Fe] ratios of 0.10--0.12~dex. We then added in quadrature the different error contributions and calculated the final error related to the best fit and stellar parameters as: \begin{equation}\label{eq:sensi} \sigma= \sqrt{\sigma^2_{\rm best} + \sigma_{T_{\rm eff}}^2 + \sigma_{\log g}^2 +\sigma_{\xi}^2+ \sigma_{\rm [M/H]}^2}, \end{equation} with the results given in Table~\ref{tab:abundances}. We note that, given the very good agreement for benchmark stars such as the Sun and Arcturus, major systematic uncertainties should not affect our abundance values at a level larger than {$\sim$}0.1~dex. \subsection{Results and concluding remarks} Our results are given in Table~\ref{tab:abundances}, along with the corresponding total uncertainty (best-fit procedure and errors due to stellar parameters). 
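The quadrature sum in Eq.~(\ref{eq:sensi}) is straightforward to evaluate numerically; the sketch below uses illustrative per-parameter sensitivities (placeholder numbers, not our measured values) to show how the individual contributions combine.

```python
import math

# Illustrative error budget for one abundance ratio. The individual
# sensitivities below are placeholders, not the values from the analysis.
sigma_best = 0.08   # r.m.s. of line-by-line abundances (dex)
sigma_teff = 0.06   # from Teff +/- 100 K
sigma_logg = 0.03   # from log g +/- 0.2 dex
sigma_xi   = 0.05   # from xi +/- 0.5 km/s
sigma_mh   = 0.02   # from [M/H] +/- 0.1 dex

# Quadrature sum, as in the equation above.
sigma_total = math.sqrt(sigma_best**2 + sigma_teff**2 +
                        sigma_logg**2 + sigma_xi**2 + sigma_mh**2)
print(round(sigma_total, 3))  # ~0.117 dex, dominated by the best-fit term
```

Because the terms combine in quadrature, the largest single contribution (here the best-fit r.m.s.) dominates the total.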
The metallicity, [Fe/H]$=-0.55 \pm 0.15$, is in good agreement, within the observational uncertainties, with previous determinations from optical spectroscopy of GC giant members. \citet{harris96} gives for NGC 5927 a value of [Fe/H]$=-0.49$, whereas \citet{pancino17} found a slightly larger metal content, [Fe/H]$=-0.39\pm 0.04$. Very recently, \citet{mura18} presented high-resolution FLAMES/UVES observations for a sample of seven red giants in this cluster, reporting a mean metallicity of [Fe/H]$=-0.47\pm 0.02$ (error of the mean, with r.m.s.$=$0.06~dex). Concerning the other chemical species, the cluster is included in the Gaia-ESO survey, but \citet{pancino17} published abundances only for Mg and Al (see their Table~2). On the other hand, \citet{mura18} derived abundances for iron-peak, $\alpha$ and heavy elements (e.g.\ Ba and Eu). In the last column of Table~\ref{tab:abundances} we show their [X/Fe] ratios for the species in common with the present study. The two sets of elemental abundances agree quite well. Titanium and silicon abundances are slightly higher in \citet{mura18}, but still compatible within the uncertainties, whereas there is excellent agreement between the two Ca measurements. The current findings suggest a modest $\alpha$-enhancement, less than {$\sim$}0.2~dex. RGB stars in the Bulge display a steady decrease in $\alpha$-enhancement as a function of the iron content \citep{gonzalez11}, approaching solar abundances ([$\alpha$/Fe]$\sim$0) in the metal-rich regime ([Fe/H]$\ge 0$). The trend for GCs ---targets that are old ($t \ge 10$~Gyr) and almost coeval--- at iron abundances higher than $-0.7$~dex is not well established, owing to their paucity and to the limited number that have been spectroscopically investigated \citep{zoccali16}. However, the current estimate suggests that NGC~5927 is located in the lower envelope of the $\alpha$-enhancements typical of GCs \citep{pritzl05b,mura18}. 
\begin{table} \centering \caption{Stellar parameters ($T_{\rm eff}$, $\log g$ and $\xi$) and abundances for our target star. The corresponding uncertainties are given (see text for details). The last column gives the cluster average abundances along with the standard deviation by \citet{mura18}.} \label{tab:abundances} \small \begin{tabular}{lrc} \hline\hline & Mira V1 & Cluster average\\ \hline $\alpha$ & $15^h28^m15^s.2$ & --- \\ $\delta$ & $-$50\arcdeg 38\arcmin 09\farcs0 & --- \\ $K_{\rm s}^a$ (mag) & $8.9 \pm 0.15$ & --- \\ $\Delta K_{\rm s}^a$ (mag) & $\sim$0.4 & --- \\ P$^b$ (days) & 202 & --- \\ $T_{\rm eff}$ (K) & $3600 \pm 100$ & --- \\ $\log g$ (cgs) & $0.00 \pm 0.20$ & --- \\ $\xi$ (km~s$^{-1}$) & $2.0 \pm 0.5$ & --- \\ $[$Fe/H$]$ & $-0.55 \pm 0.15$ & $-0.47 \pm 0.06$ \\ $[$Na/Fe$]^{c}$ & $+0.35 \pm 0.20$ & $+0.18 \pm 0.13$ \\ $[$Si/Fe$]$ & $+0.14 \pm 0.15$ & $+0.24 \pm 0.08$ \\ $[$Ca/Fe$]$ & $+0.13 \pm 0.20$ & $+0.15 \pm 0.04$ \\ $[$Ti/Fe$]$ & $+0.17 \pm 0.13$ & $+0.32 \pm 0.06$ \\ \hline\hline \end{tabular} \begin{tablenotes} \item $^a$~Mean $K_{\rm s}$-band magnitude and amplitude \citep{matsunaga06b}. \item $^b$~\cite{sloan10}. \item $^c$~Element affected by p-capture reactions. \end{tablenotes} \end{table} As for Na, we obtained [Na/Fe]$=0.35 \pm 0.20$, to be compared with the cluster average of $0.18 \pm 0.13$. The sodium content deserves a specific discussion. There is a debate in the literature as to whether second-generation (i.e.\ Na-rich) AGB stars do exist \citep[see e.g.][]{campbell13,lapenna15,wang16,maclean16}. The Na abundances reported by \citet{mura18} are corrected for departures from LTE following prescriptions given in the INSPECT database\footnote{\url{http://inspect.coolstars19.com/index.php?n=Main.HomePage}} that are not available for our Na~{\sc i} line at 12,679~\AA. Thus, this could in principle explain part of the discrepancy with our value; however, there is another critical point that has to be considered. 
Na is one of the species involved in the proton-capture reaction processes that occur in GCs. All the GCs with a sufficient number of analysed stars display internal variations in Na \citep[e.g.][]{gratton12}. In particular, while the first-generation stars have Na in agreement with field stars (at the corresponding metallicity), the second-generation GC stars exhibit a significant Na enhancement. At the present stage, we cannot confirm (or disprove) that Mira V1 in NGC 5927 belongs to the cluster second generation, because of the low precision and also because we lack a control sample of red giants acquired with the same instrument. The abundance analysis of Mira stars has been affected by a number of long-standing problems: the incompleteness of atomic and molecular line lists \citep{uttenthaler15}, inhomogeneous atmospheres and complex circumstellar envelopes \citep{hron15}, together with nonlinear phenomena in the cool molecular region located between the photosphere and the expanding molecular shell. These issues and the impact of both hydrostatic and dynamical models have been addressed in detail by \citet{lebzelter15}. These difficulties are at least partly reduced here because we are dealing with a Mira that is on average warmer than typical Miras. The interesting finding of the current approach is the similarity between the optical and NIR abundance scales in spite of the difference in the adopted spectroscopic diagnostics. However, a more quantitative analysis of the impact of 1D versus 3D and static versus dynamical atmosphere models \citep{chiavassa18} would be highly desirable in view of the unprecedented opportunity to observe Mira stars in Local Volume galaxies with the next generation of ELTs \citep{bono17}. 
\begin{table} \centering \caption{Atomic line list for Na~{\sc i} (Species$=$11.0), Si~{\sc i} (14.0), Ca~{\sc i} (20.0), Ti~{\sc i} (22.0) and Fe~{\sc i} (26.0) lines that we used for the abundance analysis. The [X/H] ratios are given in the last column.} \label{tab:linelist} \small \begin{tabular}{lcccr} \hline\hline Wavelength & Species & E.P. & $\log gf$ & [X/H]\\ (\AA) & & (eV) & & \\ \hline 12,679.144 & 11.0 & 3.614 & $-$0.04 & $-$0.20\\ 12,679.144 & 11.0 & 3.614 & $-$1.34 & ---\\ 12,679.224 & 11.0 & 3.614 & $-$2.65 & ---\\ 11,984.198 & 14.0 & 4.926 & $+$0.19 & $-$0.55\\ 11,991.568 & 14.0 & 4.916 & $-$0.16 & $-$0.35\\ 12,816.046 & 20.0 & 3.907 & $-$0.63 & $-$0.40\\ 12,823.868 & 20.0 & 3.907 & $-$0.85 & $-$0.45\\ 11,780.542 & 22.0 & 1.442 & $-$2.17 & $-$0.55\\ 11,797.186 & 22.0 & 1.429 & $-$2.28 & $-$0.15\\ 11,892.768 & 22.0 & 4.175 & $-$2.17 & $-$0.15\\ 11,949.547 & 22.0 & 1.442 & $-$1.57 & $-$0.55\\ 12,569.571 & 22.0 & 2.173 & $-$2.05 & $-$0.45\\ 12,671.092 & 22.0 & 1.429 & $-$2.52 & $-$0.55\\ 12,738.370 & 22.0 & 4.803 & $-$2.35 & $-$0.25\\ 12,738.477 & 22.0 & 4.728 & $-$1.25 & ---\\ 12,811.480 & 22.0 & 2.159 & $-$1.39 & $-$0.55\\ 12,821.672 & 22.0 & 1.459 & $-$1.19 & $-$0.40\\ 12,831.442 & 22.0 & 1.429 & $-$1.49 & $-$0.55\\ 12,840.607 & 22.0 & 4.660 & $-$2.85 & $-$0.25\\ 12,847.033 & 22.0 & 1.442 & $-$1.55 & $-$0.35\\ 11,882.846 & 26.0 & 2.196 & $-$2.17 & $-$0.51\\ 12,190.100 & 26.0 & 3.632 & $-$2.73 & $-$0.60\\ 12,648.943 & 26.0 & 6.395 & $-$2.69 & $-$0.54\\ \hline\hline \end{tabular} \end{table} \acknowledgments We thank the staff of La Silla Observatory, European Southern Observatory, in Chile for their support during our observations. The development and operation of WINERED have been supported by MEXT Programs for the Strategic Research Foundation at Private Universities (Nos.~S0801061 and S1411028) and Grants-in-Aid, KAKENHI, from Japan Society for the Promotion of Science (JSPS; Nos.~16684001, 20340042, 2184005 and 2628028). 
We thank the anonymous referee for his/her positive and pertinent suggestions on an early draft of this letter.
\section{Introduction} The first detection of positrons from the Galactic bulge dates from the seventies, when a balloon experiment by \cite{Johnson1972} showed a gamma-ray line at the energy of $476\pm 26$~keV from the Galactic Center. However, due to the low energy resolution, the physical origin of this emission was unclear. A few years later, in 1977, thanks to the advent of high-resolution spectrometers, the radiation was identified as the 511~keV line produced by electron-positron annihilation. Observations of the Galactic annihilation line have continued progressively until the present day, with the most recent morphological and spectral study by the spectrometer SPI \citep{Vedrenne2003} on board the INTEGRAL gamma-ray observatory, as discussed by Jean in these proceedings. Among all possible interpretations of the origin of Galactic positrons \citep[see e.g.][]{Diehl2009}, the population of low-mass X-ray binaries (LMXBs) could give a relevant contribution; indeed, the spatial distribution of these sources within the Galaxy could explain two observed properties of the 511~keV diffuse emission. First, LMXBs are old systems concentrated in the Galactic bulge, which is where we detect the main flux of annihilation radiation. Second, the recent SPI results show an asymmetric 511~keV emission that can be reproduced by a spatial distribution consistent with the LMXB population \citep{Weidenspointner2008,Weidenspointner2008a}. However, it is not obvious that the source distribution must be correlated with the diffuse emission, as this correlation depends on how long the positrons propagate in the interstellar medium before annihilation. In this sense, $\beta^{+}$ decay from supernovae has been proposed as a full solution to explain the SPI observations \citep{Higdon2009}. Historically, transient broad 511~keV lines were observed in two Galactic point sources. 
One of these sources is the X-ray binary 1E~1740.7-2942, which is well monitored by INTEGRAL \citep{Delsanto2007}. Discovered by the {\em Einstein} soft X-ray telescope \citep{Hertz1984}, this source is situated $\approx 48^{\prime}$ away from the radio source Sgr~A*. On October 13, 1990, the SIGMA telescope, aboard the GRANAT space observatory, measured a remarkable feature in its emission spectrum, which appeared as a bump reaching a maximum intensity around 500~keV, followed by a cutoff at $\approx$700~keV. This very broad line contained almost 50~\% of the energy radiated by the source in the 35~keV--1~MeV interval. The transient feature appeared clearly during a 13-hour observation and then possibly on two further occasions, but at a less significant level, leading to controversial conclusions \citep{Jung1995}. Moreover, among the 9~Ms devoted to the Galactic Center region during the SIGMA mission, these 3 episodes of 511~keV activity represent a very small fraction of time, pointing to a low duty cycle. A 511~keV feature was also reported in Nova Muscae \citep{Gilfanov1991}. The IBIS imager on the INTEGRAL satellite gave us the opportunity to search for possible 511~keV point sources, either associated with known objects such as X-ray binaries or supernovae, or new ones. Analyzing 5 years of IBIS data, we searched for possible point-like 511~keV sources on time scales of a day, a month and a year. In the next sections we report the data analysis, the results in terms of flux upper limits, and finally a short discussion. \section{Data analysis} \input{exp} We use in our analysis the events detected by ISGRI, the lower-energy position-sensitive detector in the IBIS coded-mask telescope \citep{Ubertini2003}. We estimate that the sensitivity that could be obtained using the higher-energy layer PICsIT should be about 2 times better. However, the PICsIT instrument response is strongly affected by systematic artifacts due to strong detector non-uniformities \citep{Lubinski2008}. 
The correction of these effects has not yet been fully implemented in the current PICsIT data analysis software release. For this reason we focused on the ISGRI data, while PICsIT will be considered in the future. The data reported in this work have been reduced with the OSA 7.0 software release. The data set comprises all the IBIS data available in April 2008, when we started the analysis, i.e.\ about 5 years of observations: from October 17, 2002, when INTEGRAL was launched, until April 2007, plus the Core Program data until April 2008. All the selected data correspond to 39413 science windows\footnote{INTEGRAL/IBIS data are organized in short pointings (science windows) of $\sim$ 2000 s} (ScWs). The data sample was filtered to remove periods, typically occurring during solar flares, affected by a strong background or a bad detector response. The maximum exposure of about 10~Ms (Fig.~\ref{fig:exp}) corresponds to the Galactic Center region, where the bulk of the positron emission is detected by the SPI spectrometer. The data set also includes high-latitude observations, since some 511~keV emission might possibly come from low-mass X-ray binaries in globular clusters. Moreover, if the emission line detected in our Galaxy is due to Dark Matter annihilations, then one should also detect a 511~keV line from nearby dwarf spheroidals \citep{Boehm2004}. \input{fwhm} We made the IBIS sky mosaics in the 431--471~keV, 491--531~keV and 551--591~keV energy bands. The width of these bands takes into account the distribution of the 511~keV line $FWHM$ (measured by fitting the ISGRI background spectra) among the IBIS pointings (Fig.~\ref{fig:fwhm}). \section{Results} We do not detect any significant 511~keV signal on daily and monthly time scales, nor in the 5-year IBIS all-sky map. From the estimated ISGRI effective area, we are able to put upper limits on the 511~keV flux from point sources. 
As this limit depends on the square root of the exposure, the best constraint is achieved in the Galactic Center region with an exposure of 10~Ms: \begin{equation} S_{2\sigma}(\mathrm{Sgr\,A^*}) = 1.6 \times 10^{-4}~\mathrm{ph\,cm^{-2}\,s^{-1}} \label{eq:sgralimit} \end{equation} The flux limits for the hard X-ray ($E > 20$~keV) microquasars detected by IBIS are shown in Table~\ref{tab:microquasars}. A better sensitivity for such a sample can be obtained, under some hypotheses, by stacking the signals of all the sources; however, this method also yields no signal at 511~keV. In Table~\ref{tab:gcvis} we show the flux limits on shorter (roughly two-month) time scales during the IBIS Galactic Center visibility periods. \input{table_microquasars} \input{table_sgra} \section{Discussion} We have approached our problem in an empirical way: using all available IBIS data for the best sensitivity and searching for possible excesses at 511~keV on any time scale. The lack of any detection at 511~keV from point sources with IBIS is in agreement with the SPI spectral analysis. Indeed, the SPI data suggest that the electron-positron annihilation takes place in a warm interstellar medium: the positrons should travel in the interstellar medium before interacting with electrons and, as a consequence, the gamma-ray emission produced by the annihilation must be diffuse, and therefore not detectable by IBIS with the standard analysis. The very broad line features that were detected by SIGMA in 1E~1740.7-2942 and Nova Muscae were explained as being due to electron-positron annihilation in hot pair plasma in the framework developed by \cite{Ramaty1981}. If this were to happen again, we should detect this kind of transient phenomenon with IBIS. We can actually confirm with our data that these events have a low duty cycle. 
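Since the $2\sigma$ limit scales inversely with the square root of the exposure, limits at other exposures follow directly from the Sgr~A* anchor value above; a minimal numerical sketch (the alternative exposure value is illustrative):

```python
import math

S_REF = 1.6e-4   # 2-sigma limit at Sgr A* (ph cm^-2 s^-1), from the equation above
T_REF = 10.0     # corresponding exposure (Ms)

def flux_limit_2sigma(exposure_ms):
    """2-sigma 511 keV point-source flux limit, scaled as 1/sqrt(exposure)."""
    return S_REF * math.sqrt(T_REF / exposure_ms)

# A ~2.5 Ms visibility window has 4x less exposure than the full 10 Ms,
# hence a factor-2 looser limit: 3.2e-4 ph cm^-2 s^-1.
print(flux_limit_2sigma(2.5))
```

This is why the per-visibility-period limits in Table~\ref{tab:gcvis}, built from much shorter exposures, are necessarily looser than the 5-year limit.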
For the future, if the next gamma-ray mission EXIST \citep{Grindlay2009} is definitively approved, the 511~keV line sensitivity for point sources should improve by a factor of 10 or even more. This progress will be achieved thanks to the larger collecting area and detector thickness made possible by technological advances in solid-state detectors. \newpage
\section{Introduction} The demand for increased capacity in cellular networks continues to grow, which is driving the deployment of spectrally-efficient small cells \cite{4623708,6768783,6171992,anpalagan_bennis_vannithamby_2015}. While the deployment of small cells leads to significant capacity gains over macrocell-only systems, the proximity of small cell base stations (BSs) to one another can cause severe interference between them. This interference must be managed carefully to maximize the overall network capacity. Thus, powerful interference mitigation methods as well as optimal resource allocation schemes that involve multiple cells must be developed for 5G networks. In this work we investigate a flexible network structure for cellular systems where, instead of each BS serving all users within its own cell independently, several BSs act cooperatively to create a ``virtual cell'' with joint resource allocation. In order to design cellular networks that are composed of virtual cells, we address the following two design challenges: 1) creating the virtual cells, i.e., clustering the BSs and users into virtual cells; 2) allocating the resources in each virtual cell. Specifically, we address the uplink problem of joint channel and power allocation in the single-user detection scenario. We also address the resource allocation problem for coordinated multi-point decoding scenarios in which BSs in a virtual cell jointly decode the signals that they receive. BS and user clustering as part of a resource allocation strategy is discussed in the Coordinated Multi-Point (CoMP) literature, see for example \cite{6530435,6707857,6181826,6555174,5594575,5502468,6655533,4533793,5285181,6786390,8260866}. The work \cite{7839266} presents an extensive literature survey of cell clustering for CoMP in wireless networks. 
The clustering of BSs and users can be divided into three groups: 1) Static clustering, which considers a cellular network whose cells are clustered statically; hence, the clustering does not adapt to network changes. Examples of static clustering algorithms are presented in \cite{6530435,6181826,6707857,6555174}. 2) Semi-dynamic clustering, in which static clusters are formed but the cluster affiliation of users adapts to network changes. Examples of such algorithms are presented in \cite{5594575,5502468,6655533}. 3) Dynamic clustering, in which the clustering of both BSs and users adapts to changes in the network. Examples of dynamic clustering algorithms are presented in \cite{4533793,5285181,6786390,8260866}. Resource allocation in virtual cells is closely related to cloud radio access networks \cite{5594708,CIT-048,6924850,7487951,6601765}, in which several cells act cooperatively. The coordination between the cells can be divided into the following categories: 1) Interference coordination, in which only channel states are available at the coordinated BSs. 2) Full cooperation, in which BSs share not only channel states but also the data signals they receive. 3) Rate-limited coordination, in which the BSs exchange data via a limited-capacity backhaul. 4) Relay-assisted cooperation, in which cooperation is carried out by dedicated relay nodes that connect users from different cells and BSs. In addition, resource allocation in virtual cells is also closely related to the interference mitigation paradigm called Coordinated Multi-Point (CoMP) (see \cite{5706317}) that encompasses several cooperation models. Two such models are the Uplink Interference Prediction model, in which cooperation is allowed in the resource allocation stage only, and the Uplink Joint Detection model, which allows BS cooperation in both the resource allocation and decoding stages. 
In this work we investigate a flexible cooperative resource allocation structure for cellular systems where, instead of each BS serving all users within its own cell independently, several BSs act cooperatively to create a ``virtual cell''. We consider two BS cooperation models for the uplink communication in virtual cells. The first model allows for cooperation in the resource allocation stage only, whereas the second model allows for cooperation in both the resource allocation and the decoding stages. We refer to the first model as the interference coordination model and to the second as the coordinated multi-point model. Our work \cite{YeminiGoldsmith2} considers the coordinated multi-point decoding model in which BSs jointly decode their messages, assuming infinite-capacity backhaul links between BSs in the same virtual cell. Additionally, in \cite{YeminiGoldsmith1} we propose channel and power allocation schemes for the interference coordination model. This manuscript presents a unified framework that evaluates both cooperation models analyzed in \cite{YeminiGoldsmith2} and \cite{YeminiGoldsmith1}. It extends the analysis of the resource allocation schemes presented in \cite{YeminiGoldsmith1}, and also further evaluates and compares the network optimization schemes presented in both \cite{YeminiGoldsmith2} and \cite{YeminiGoldsmith1}. Clustering as part of a resource allocation strategy in wireless networks is also investigated in the ultra-dense networks literature, see for example \cite{7008373,7579583,7794900,6786390,7248710,8110665,8496818}. These works can be categorized into two groups: cell clustering (see \cite{7008373,7579583,7794900}), in which the existing cells of a cellular network are merged, and user-centric clustering (see \cite{6786390,7248710,8110665}), in which each user chooses a subset of BSs to communicate with. The work presented in this manuscript differs from these works in several key aspects. 
First, our channel state information model differs from that of the aforementioned works, which assume that the inter-cluster interference is either perfectly known for all the channels in the network \cite{7008373,7579583,7794900,8496818}, or strictly statistical for all the channels in the network \cite{7248710,8110665,6786390}. In our setup we assume perfect channel state information inside each virtual cell but no channel information regarding users in different virtual cells. We note that our resource allocation schemes can be adapted to statistical knowledge of the inter-cluster interference. Second, in addition to proposing a clustering scheme to create virtual cells, we also address both the channel and power allocation problems. In contrast, the analyses presented in the aforementioned works are limited to the channel allocation problem and do not address the power allocation problem within the clusters. Instead, it is assumed that the power allocation is fixed. A fixed power allocation can significantly degrade the performance of cooperative models, such as the coordinated multi-point decoding model, in which BSs jointly decode the signals that they receive. Additionally, to the best of our knowledge, prior works optimizing performance based on cell clustering or CoMP did not consider how performance varies with the number of clusters or with the user affiliation rules. Our work is also related to the concept of Software Defined Networks (SDN), introduced in \cite{6994333,6739370,7000974,1237143,7473831}. The underlying idea behind SDN is the separation of the data plane, which carries the data in the network, and the control plane, which determines how packets in the network are forwarded. Theoretically, the concept of SDN can be harnessed to limit the interference in the network by allocating the network resources centrally \cite{6385040,6385039}. 
However, the very thing that makes SDN's centralized control plane attractive also renders its implementation challenging due to the required flexibility. These complexity issues are more severe in wireless communication networks employing SDN because of their time-varying nature, which requires fast updating rules for the control plane. Creating virtual cells that are composed of several cells can assist in managing wireless networks and close the gap between the promising concept of SDN and the difficulties that arise in its implementation. \subsection{Main Contributions} This work extends the concept of cellular networks while preserving several of its key desirable properties, such as simple user association rules and dividing the network into independent cells that may cooperate to suppress interference. We call this network paradigm a cellular network with virtual cells. A cellular network design with virtual cells has the following benefits: \begin{enumerate} \item it improves network performance while balancing the computational complexity of optimal resource allocation; \item it uses both local and global network information; \item it ensures that local changes in the network do not cause a ``butterfly effect'' in which the allocation of resources across the whole network must be recalculated due to a local change. \end{enumerate} We create the virtual cells by clustering the BSs, instead of the users, in the network, and then associating users with the clustered BSs. We cluster BSs based on the hierarchical clustering method with the minimax linkage criterion, which creates a dendrogram. The dendrogram shows which clusters are merged when the number of clusters is decreased and which are separated when this number is increased. We propose using this clustering approach since it enjoys the unique property that decreasing or increasing the number of clusters affects only the clusters that are being merged or separated, while leaving all others unchanged. 
By contrast, in other clustering methods, such as K-means or spectral clustering, even a small variation in the number of clusters requires reclustering the whole network, which may cause a global change. This is undesirable behavior for wireless communication networks, since the channel state information between all users in the new virtual cells and the new virtual BSs must then be estimated. Thus, we propose using hierarchical clustering, in which the number of clusters can adapt efficiently to the current state of the network without requiring a network-wide update. Additionally, the method we propose requires only local channel state information, which is used in the user association rule and in computing the resource allocation scheme inside the virtual cells. The BS clustering which constructs the ``backbone'' of the network does not require knowledge of the channel states between all the users and BSs in the network. To optimize the performance of cellular networks with virtual cells we also develop resource allocation schemes for virtual cells in the single-user detection scenario, and compare them to previously proposed resource allocation schemes for heterogeneous cells. Interestingly, numerical results show that the performance of these resource allocation schemes depends on the number of virtual cells in the network. Additionally, we address resource allocation for the coordinated multi-point decoding scenario. The resource allocation in both setups uses local channel state information; that is, we assume that the BSs in a virtual cell acquire the channel state information between them and all the users in the virtual cell. Finally, we note that, while we do not suppress interference between virtual cells in the resource allocation stage, as we decrease the number of virtual cells the interference becomes dominated by interference within the virtual cell, so that our resource allocation scheme mitigates this dominant interference. 
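To make the BS clustering step concrete, the following pure-Python sketch implements a naive agglomerative procedure with the minimax linkage criterion: the cost of merging two clusters is the smallest radius of a ball, centered at one of the member BSs, that covers the merged cluster. The toy BS coordinates, the Euclidean distance, and the $O(n^3)$ brute-force search are illustrative assumptions; this is a sketch, not the production implementation.

```python
import math
from itertools import combinations

def dist(a, b):
    """Euclidean distance between two 2-D BS positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def minimax_radius(group, points):
    """Minimax linkage value of a cluster: the smallest radius of a ball
    centered at a member point that covers the whole cluster."""
    return min(max(dist(points[c], points[q]) for q in group) for c in group)

def agglomerate(points, n_clusters):
    """Naive agglomerative clustering with minimax linkage: repeatedly
    merge the pair of clusters whose union has the smallest minimax radius."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        i, j = min(combinations(range(len(clusters)), 2),
                   key=lambda ij: minimax_radius(
                       clusters[ij[0]] + clusters[ij[1]], points))
        merged = clusters[i] + clusters[j]
        clusters = [c for k, c in enumerate(clusters)
                    if k not in (i, j)] + [merged]
    return clusters

# Toy example: two well-separated groups of three BSs each.
bs = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
cells = agglomerate(bs, 2)
print(sorted(sorted(c) for c in cells))  # [[0, 1, 2], [3, 4, 5]]
```

Recording the merge order instead of stopping at a fixed `n_clusters` yields the dendrogram discussed above: moving between $V$ and $V\pm1$ clusters touches only the merged or split cluster, leaving all others unchanged.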
\subsection{Outline and Notation} The remainder of this paper is organized as follows. Section \ref{sec:problem_formualtion} presents the problem formulation that we analyze in this work. Section \ref{sec:virtual_cell_create} describes the method for forming the virtual cells. Sections \ref{sec:joint_power_allocation} and \ref{sec:alternating_optimization} present several algorithms for allocating resources in the interference coordination model. In particular, Section \ref{sec:joint_power_allocation} proposes a joint channel and power allocation scheme. Section \ref{sec:alternating_optimization} proposes channel and power allocation algorithms based on alternating optimization, in which the resource allocation is calculated by alternating between a channel allocation problem and a power allocation problem. Section \ref{sec:alternating_optimization} presents three channel allocation schemes that we evaluate: a user-centric one that we propose and two existing ones, a BS-centric scheme and a sum-rate maximization matching scheme. Section \ref{sec:joint_decoding} presents an optimal resource allocation scheme in virtual cells for the coordinated multi-point decoding model. Section \ref{se:simulation} presents numerical results of the average system sum rate for all of our proposed clustering and resource allocation methods. Finally, Section \ref{sec:conclusion} summarizes and concludes this work. \textit{Notation:} The following notation is used throughout this paper. Vectors are denoted by boldface lowercase letters, whereas matrices are denoted by boldface uppercase letters. We denote the transpose of a vector $\boldsymbol a$ by $\boldsymbol a'$, and the conjugate transpose of a matrix $\boldsymbol A$ by $\boldsymbol A^{\dagger}$. The expected value of a random variable $x$ is denoted by $E(x)$. Additionally, we denote the covariance matrix of a random vector $\boldsymbol x$ by $\text{cov}(\boldsymbol x)$. $\det(\boldsymbol A)$ denotes the determinant of a square matrix $\boldsymbol A$. 
Finally, $\mathbbm{1}_{\mathcal{E}}$ denotes the indicator function; it is equal to one if the event $\mathcal{E}$ is true and zero otherwise. The cardinality of a set $\mathcal{S}$ is denoted by $|\mathcal{S}|$. \section{Problem Formulation}\label{sec:problem_formualtion} We consider a communication network that comprises a set of base stations (BSs) $\mathcal{B}$, a set of users $\mathcal{U}$ and a set of frequency bands $\mathcal{K}$. The users communicate with their BSs and these transmissions interfere with one another. Each user $u\in\mathcal{U}$ has a maximal transmission power of $\overline{P}_u$ dBm. The BSs and users are clustered into virtual cells that must fulfill the following characteristics. \subsection{Virtual Cells}\label{sec:virtual_cell_requirements} \begin{definition}[Virtual BS] Let $b_1,\ldots,b_n$ be $n$ BSs in the set of BSs $\mathcal{B}$; we call the set $\{b_1,\ldots,b_n\}$ a virtual BS. \end{definition} \begin{definition}[Proper clustering] Let $\mathcal{B}$ be a set of BSs and $\mathcal{U}$ be a set of users. Denote $\mathcal{V}=\{1,\ldots,V\}$. For every $v$, define the sets $\mathcal{B}_v\subset \mathcal{B}$ and $\mathcal{U}_v\subset \mathcal{U}$. We say that the set $\mathcal{V}$ is a proper clustering of the sets $\mathcal{B}$ and $\mathcal{U}$ if $\{\mathcal{B}_v\}_{v\in\mathcal{V}}$ and $\{\mathcal{U}_v\}_{v\in\mathcal{V}}$ are partitions of $\mathcal{B}$ and $\mathcal{U}$, respectively. That is, $\bigcup_{v\in\mathcal{V}}\mathcal{B}_v = \mathcal{B}$, $\bigcup_{v\in\mathcal{V}}\mathcal{U}_v = \mathcal{U}$. Additionally, $\mathcal{B}_{v_1}\cap\mathcal{B}_{v_2}=\emptyset$ and $\mathcal{U}_{v_1}\cap\mathcal{U}_{v_2}=\emptyset$ for all $v_1,v_2\in\mathcal{V}$ such that $v_1\neq v_2$. \end{definition} \begin{definition}[Virtual cell] Let $\mathcal{B}$ be a set of BSs, $\mathcal{U}$ be a set of users, and $\mathcal{V}$ be a proper clustering of $\mathcal{B}$ and $\mathcal{U}$. 
For every $v\in\mathcal{V}$ the virtual cell $\mathcal{C}_v$ is composed of the virtual BS $\mathcal{B}_v$ and the set of users $\mathcal{U}_v$. \end{definition} The proper clustering condition ensures that every BS and every user belongs to exactly one virtual cell. This implies that all the transmission power of a user is dedicated to communicating with BSs in the same virtual cell; thus power allocation can be optimized within each virtual cell. Let $\mathcal{V}$ be a proper clustering of the set of BSs $\mathcal{B}$ and the set of users $\mathcal{U}$, and let $\{\mathcal{C}_v\}_{v\in\mathcal{V}}$ be the set of virtual cells that $\mathcal{V}$ creates. In each virtual cell $\mathcal{C}_v$ we assume that the BSs that compose the virtual BS $\mathcal{B}_v$ jointly allocate their resources. \subsection{The Uplink Resource Allocation Problem for the Interference Coordination Model}\label{subsection:uplink_interference_coordination_problem} In each virtual cell we consider the uplink resource allocation problem in which all the BSs in the virtual cell jointly optimize the channel allocation and the transmission power of the users within the virtual cell. Further, we consider single-user detection in which every BS $b$ decodes each of its codewords separately. That is, suppose that users $u_1$ and $u_2$ are both served by BS $b$; then $b$ decodes the codeword of $u_1$ treating the codeword of $u_2$ as noise, and decodes the codeword of $u_2$ treating the codeword of $u_1$ as noise. We refer to this model as the interference coordination model. While each user can communicate with all the BSs in its virtual cell, it follows from \cite{1237143} that, given a power allocation scheme, the maximal communication rate for each user is achieved when the message is decoded by the BS with the highest SINR for this user. Recall that $\mathcal{K}$ is the set of frequency bands. 
Denote by $h_{u,b,k}$ the coefficient of the channel from user $u\in\mathcal{U}$ to BS $b$ over frequency band $k$, and let $P_{u,k}$ be the transmit power of user $u$ over frequency band $k$. Further, let $\sigma^2_{b,k}$ denote the noise power at BS $b$ over frequency band $k$, and let $W_k$ denote the bandwidth of band $k$. The uplink resource allocation problem in each virtual cell $\mathcal{C}_v$, ignoring interference from other virtual cells, is given by: \begin{flalign}\label{eq:no_decoding_cooperation_single_discrete} \max & \sum_{b\in\mathcal{B}_v}\sum_{u\in\mathcal{U}_v}\sum_{k\in\mathcal{K}} \gamma_{u,b,k}W_k\log_2\left(1+\frac{|h_{u,b,k}|^2P_{u,k}}{\sigma^2_{b,k}+J_{u,b,k}}\right)\nonumber\\ \text{s.t.: } & 0\leq P_{u,k},\quad \sum_{k\in\mathcal{K}}P_{u,k} \leq \overline{P}_u,\quad \forall\: u\in \mathcal{U}_v,k\in\mathcal{K},\nonumber\\ &\hspace{-0.15cm} \sum_{\substack{\tilde{u}\in\mathcal{U}_v,\\ \tilde{u}\neq u}} |h_{\tilde{u},b,k}|^2P_{\tilde{u},k}= J_{u,b,k},\: \forall u\in\mathcal{U}_v,b\in \mathcal{B}_v,k\in\mathcal{K} \nonumber\\ &\gamma_{u,b,k}\in\{0,1\},\quad \sum_{b\in\mathcal{B}_v}\gamma_{u,b,k}\leq 1,\quad \forall\:u\in \mathcal{U}_v,b\in \mathcal{B}_v,k\in\mathcal{K}. \end{flalign} This is a mixed-integer programming problem that is NP-hard. Sections \ref{sec:joint_power_allocation} and \ref{sec:alternating_optimization} present two different approaches to approximate this problem for a given virtual cell. The first approach, presented in Section \ref{sec:joint_power_allocation}, converts this mixed-integer programming problem into an equivalent problem with continuous variables. The second approach, presented in Section \ref{sec:alternating_optimization}, approximates the optimal solution by solving a user-centric channel allocation problem and a power allocation problem alternately. 
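To make the objective concrete, the following sketch of ours (the array layout and function name are assumptions, not part of the formulation) evaluates the sum rate in (\ref{eq:no_decoding_cooperation_single_discrete}) for a given channel allocation $\gamma$ and power allocation $P$:

```python
import numpy as np

def sum_rate(h, P, gamma, sigma2, W):
    # h[u, b, k]: channel coefficients; P[u, k]: transmit powers;
    # gamma[u, b, k]: 0/1 channel allocation; sigma2[b, k]: noise powers;
    # W[k]: bandwidth of frequency band k.
    U, B, K = h.shape
    rate = 0.0
    for u in range(U):
        for b in range(B):
            for k in range(K):
                if gamma[u, b, k]:
                    # J_{u,b,k}: interference at BS b from all other users.
                    J = sum(abs(h[t, b, k]) ** 2 * P[t, k]
                            for t in range(U) if t != u)
                    sinr = abs(h[u, b, k]) ** 2 * P[u, k] / (sigma2[b, k] + J)
                    rate += W[k] * np.log2(1.0 + sinr)
    return rate
```

Note that the constraint $\sum_{b}\gamma_{u,b,k}\leq 1$ is per user, so several users may legitimately be assigned to the same BS on the same band, each then appearing in the other's interference term $J_{u,b,k}$.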
\subsection{The Uplink Resource Allocation Problem for Coordinated Multi-Point Decoding}\label{subsection:uplink_joint_decoding_problem} In the coordinated multi-point decoding model BSs jointly decode the signals that they receive. This model can be realized, for example, by cloud decoding of the signals received by all BSs, under the assumption that the BS communication to the cloud has unconstrained capacity. This model is equivalent to a multiple access channel (MAC) with a single transmitting antenna at each user and multiple antennas, corresponding to all BS antennas, at the receiver. Recalling that $\mathcal{K}$ is the set of frequency bands, denote by $x_{u,k}$ the signal of user $u$ on frequency band $k$, and by $y_{b,k}$ the received signal at BS $b$ for band $k\in\mathcal{K}$. For the sake of clarity, we label the BSs in the cluster $v$ by $b_1,\ldots, b_{|\mathcal{B}_v|}$, and label the users in cluster $v$ by $u_1,\ldots,u_{|\mathcal{U}_v|}$. Denote $\boldsymbol y_{v,k}\triangleq(y_{b_1,k},\ldots,y_{b_{|\mathcal{B}_v|},k})'$ and let $\boldsymbol x_{v,k}\triangleq(x_{u_1,k},\ldots,x_{u_{|\mathcal{U}_v|},k})'$. The received signal at BS $b\in \mathcal{B}_v$ over frequency band $k$, ignoring the interference from other clusters, is \begin{flalign} y_{b,k} = \sum_{i=1}^{|\mathcal{U}_v|}h_{u_i,b,k} x_{u_i,k}+n_{b,k}, \end{flalign} where $h_{u_i,b,k}$ is the channel coefficient from user $u_i$ in $v$ to the BS $b$ in $v$ over frequency band $k$, and $n_{b,k}$ is white Gaussian noise at BS $b$ over frequency band $k$. Let $\boldsymbol h_{u_i,k} = (h_{u_i,b_1,k},\ldots,h_{u_i,b_{|\mathcal{B}_v|},k})'$ be the channel coefficient vector between user $u_i$ in $v$ and all the BSs in cluster $v$. 
Then the received signal vector at the BSs in $v$ is \begin{flalign} \boldsymbol y_{v,k} &= \sum_{i=1}^{|\mathcal{U}_v|}\boldsymbol h_{u_i,k} x_{u_i,k}+\boldsymbol n_{v,k}, \end{flalign} where $\boldsymbol n_{v,k}=(n_{b_1,k},\ldots,n_{b_{|\mathcal{B}_v|},k})'$ is a white noise vector at the BSs. Let $\boldsymbol N_{v,k} = \text{cov}(\boldsymbol n_{v,k})$, and let $p_{u,k}=E\left(|x_{u,k}|^2\right)$ denote the transmit power of user $u$ over frequency band $k$; the sum capacity of the uplink in the virtual cell is then: \begin{flalign}\label{eq:uplink_problem_clean} \max &\sum_{k\in\mathcal{K}}W_k\log_2\det\left(\boldsymbol I+\sum_{u\in\mathcal{U}_v}p_{u,k}\boldsymbol h_{u,k} \boldsymbol h_{u,k}^{\dagger}\boldsymbol{N}_{v,k}^{-1}\right)\nonumber\\ \text{s.t.: } & \sum_{k\in\mathcal{K}} p_{u,k}\leq \overline{P}_u,\quad p_{u,k}\geq 0. \end{flalign} We note that while interference between virtual cells is not addressed in this work, as the number of virtual cells decreases, each virtual cell becomes larger and the interference inside the virtual cells becomes the dominant interference. This interference is mitigated in (\ref{eq:no_decoding_cooperation_single_discrete}) and (\ref{eq:uplink_problem_clean}) to improve network performance. Additionally, we note that if an approximated inter-cluster interference $i_{b,k}$ is known at BS $b$ over frequency band $k$, then the term $\sigma_{b,k}^2$ can be replaced with $\sigma_{b,k}^2+i_{b,k}$ in the interference coordination model. Similarly, in coordinated multi-point decoding, the noise covariance matrix $\boldsymbol N_{v,k}$ can be replaced with the term $\boldsymbol N_{v,k}+\boldsymbol I_{v,k}$, where $\boldsymbol I_{v,k}$ is some approximation of the covariance matrix of the inter-cluster interference in the virtual cell $v$. 
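As an illustration, the objective of (\ref{eq:uplink_problem_clean}) can be evaluated numerically as follows (a sketch of ours; the data layout and function name are assumptions):

```python
import numpy as np

def mac_sum_rate(H, p, N, W):
    # H[k]: (|B_v| x |U_v|) channel matrix whose columns are the vectors h_{u,k};
    # p[k]: transmit powers of the users on band k;
    # N[k]: noise covariance matrix on band k; W[k]: bandwidth of band k.
    rate = 0.0
    for Hk, pk, Nk, Wk in zip(H, p, N, W):
        # S = sum_u p_{u,k} * h_{u,k} h_{u,k}^dagger
        S = sum(pu * np.outer(hu, hu.conj()) for hu, pu in zip(Hk.T, pk))
        M = np.eye(Hk.shape[0]) + S @ np.linalg.inv(Nk)
        rate += Wk * np.log2(np.linalg.det(M).real)
    return rate
```

For a single band with one BS, one user, unit channel, power $4$, and unit noise, this reduces to $\log_2 5$, matching the scalar capacity formula.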
\section{Forming the Virtual Cells}\label{sec:virtual_cell_create} This section presents the clustering approach that creates the virtual cells within which the resource allocation schemes we present in Sections \ref{sec:joint_power_allocation}-\ref{sec:joint_decoding} operate. \subsection{Base Station Clustering via Hierarchical Clustering with Minimax Linkage Criterion} A hierarchical clustering algorithm creates a linkage tree, using a linkage criterion, that shows which clusters are merged when the number of clusters is decreased, and which are separated when this number is increased. This linkage tree is called a dendrogram. We propose using hierarchical clustering to cluster BSs since it enjoys the unique property that decreasing or increasing the number of clusters only affects the clusters that are being merged or separated. Thus, the number of clusters can adapt efficiently to the current state of the network without requiring a full clustering update. By contrast, in other clustering methods, such as K-means or spectral clustering, even a small variation in the number of clusters requires a full clustering update. This is undesirable in wireless networks since each reclustering incurs a large setup time and overhead for information acquisition and other message passing. Furthermore, we propose using the hierarchical clustering algorithm with the minimax linkage criterion proposed in \cite{BienTibshirani2011}, which we depict in Algorithm \ref{algo:hierarchical_clustering}. This algorithm takes as input a set of points $S$ and produces the clusterings $B_1,\ldots,B_n$, where $B_m$ is the clustering with $m$ clusters. The algorithm defines the center of a cluster to be the member of the cluster with the minimal maximal distance to all other members in the cluster. This minimal maximal distance is the cluster radius. 
Then, in every step, the minimax linkage criterion merges the two clusters whose union has the smallest radius out of all merging possibilities. Since interference tends to increase on average as the distance between interferers decreases, at each stage the minimax linkage criterion merges the two clusters of BSs whose merger maximizes the weakest anticipated interference, caused by the BSs of the new cluster, at its center. In addition, the minimax linkage criterion fulfills several desirable properties in cluster analysis, as discussed in \cite{BienTibshirani2011}, that other linkage criteria, such as the centroid linkage criterion, do not. Next, we formally depict the hierarchical clustering algorithm with the minimax linkage criterion. Let $d:\mathbb{R}^2\times\mathbb{R}^2\rightarrow\mathbb{R}$ be the Euclidean distance function, and let $S$ be a set of points in $\mathbb{R}^2$. We then define the following: \begin{definition}[Radius of a set around a point] The radius of $S$ around $s_i \in S$ is defined as $r(s_i,S)=\max_{s_j\in S}\:d(s_i,s_j)$. \end{definition} \begin{definition}[Minimax radius] The minimax radius of $S$ is defined as $r(S) = \min_{s_i\in S}\: r(s_i,S)$. \end{definition} \begin{definition}[Minimax linkage] The minimax linkage between two sets of points $S_1$ and $S_2$ in $\mathbb{R}^2$ is defined as $d(S_1,S_2) = r(S_1\cup S_2)$. \end{definition} Let $S=\{s_1,\ldots,s_n\}$ be the set of locations of the BSs in $\mathcal{B}$. We use Algorithm \ref{algo:hierarchical_clustering} below with input $S$ to create the virtual BSs for each number of clusters $m$. This produces the dendrogram, which shows which clusters are merged as the number of clusters is decreased. 
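The clustering procedure of Algorithm \ref{algo:hierarchical_clustering} can be sketched in Python as follows (a simplified illustration of ours; unlike the algorithm, it naively recomputes all linkages at every merge instead of updating them incrementally):

```python
import itertools
import math

def minimax_radius(points):
    # r(S): minimal, over candidate centers, maximal distance to all members.
    return min(max(math.dist(c, p) for p in points) for c in points)

def minimax_linkage_clusterings(S):
    # Returns a dict mapping m to the clustering B_m with m clusters.
    clusters = [(s,) for s in S]
    out = {len(clusters): list(clusters)}
    while len(clusters) > 1:
        # Merge the pair whose union has the smallest minimax radius.
        g, h = min(itertools.combinations(clusters, 2),
                   key=lambda pair: minimax_radius(pair[0] + pair[1]))
        clusters = [c for c in clusters if c not in (g, h)] + [g + h]
        out[len(clusters)] = list(clusters)
    return out
```

Each cluster is stored as a tuple of points, so the returned dictionary directly mirrors the sequence $B_n,\ldots,B_1$ of the dendrogram.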
\setlength{\textfloatsep}{.7cm} \begin{algorithm} \caption{}\label{algo:hierarchical_clustering} \begin{algorithmic}[1] \State Input: A set of points $S=\{s_1,\ldots,s_n\}$; \State Set $B_n = \left\{\{s_1\},\dots,\{s_n\}\right\}$; \State Set $d(\{s_i\},\{s_j\})=d(s_i,s_j),\:\forall s_i,s_j\in S$; \For {$m = n-1,\ldots,1$} \State Find $(S_1,S_2) = \arg\min_{\stackrel{G,H\in B_{m+1}:}{G\neq H}} d(G,H)$; \State Update $B_{m} = B_{m+1} \bigcup \{S_1\cup S_2\} \setminus \{S_1,S_2\}$; \State Calculate $d(S_1\cup S_2,G)$ for all $G\in B_m$; \EndFor \end{algorithmic} \end{algorithm} \subsection{Users' Affiliation with Clusters}\label{sec_user_affil} To create the virtual cells, we consider two affiliation rules: \begin{enumerate} \item Closest BS rule, in which each user is affiliated with its closest BS. \item Best channel rule, in which each user is affiliated with the BS to which it has the best channel (the largest absolute value of the channel coefficient). \end{enumerate} Then each user is associated with the virtual BS that its affiliated BS is part of. This way, every virtual BS and its associated users compose a virtual cell. It is easy to verify that the formation of the virtual cells we propose fulfills the requirement presented in Section \ref{sec:virtual_cell_requirements}. The combination of creating virtual cells by using global network information for BS clustering and local network information to associate users with virtual cells creates an easy-to-manage network architecture that does not require a global update when local changes in the network occur. \section{Channel and Power Allocation for the Interference Coordination Model}\label{sec:joint_power_allocation} This section introduces the first resource allocation scheme we propose for the interference coordination model. 
This scheme is found by converting the problem (\ref{eq:no_decoding_cooperation_single_discrete}) to an equivalent continuous variable problem and then solving the new problem via a convex approximation. \subsection{An Equivalent Continuous Variable Resource Allocation Problem} We can represent the problem (\ref{eq:no_decoding_cooperation_single_discrete}) by an equivalent problem with continuous variables. Suppose that, instead of sending a message to at most one BS at each frequency band, a user sends messages to all BSs. The signal of user $u\in\mathcal{U}_v$ over frequency band $k$ is then given by $x_{u,k}=\sum_{b\in\mathcal{B}_v}x_{u,b,k}$, where $x_{u,b,k}$ is the part of the signal of user $u$ that is transmitted over frequency band $k$ and is intended to be decoded by BS $b$. Let $P_{u,b,k}$ be the power allocated to the part of the signal of user $u$ that is transmitted over frequency band $k$ and is intended to be decoded by BS $b$; i.e., $P_{u,b,k}=E\left( x_{u,b,k}^2\right)$. We next prove that (\ref{eq:no_decoding_cooperation_single_discrete}) can in fact be written in the following equivalent form: \begin{flalign}\label{eq:no_decoding_cooperation_single_continuous} \max & \sum_{b\in\mathcal{B}_v}\sum_{u\in\mathcal{U}_v}\sum_{k\in\mathcal{K}} W_k\log_2\left(1+\frac{|h_{u,b,k}|^2P_{u,b,k}}{\sigma^2_{b,k}+J_{u,b,k}}\right)\nonumber\\ \text{s.t.: } & 0\leq P_{u,b,k},\quad \sum_{b\in\mathcal{B}_v}\sum_{k\in\mathcal{K}} P_{u,b,k}\leq \overline{P}_u,\quad \forall \: u\in\mathcal{U}_v,b\in\mathcal{B}_v,k\in\mathcal{K},\nonumber\\ & \hspace{-0.55cm}\sum_{\substack{(\tilde{u},\tilde{b})\in\mathcal{U}_v\times \mathcal{B}_v,\\(\tilde{u},\tilde{b})\neq (u,b)}}\hspace{-0.5cm} |h_{\tilde{u},b,k}|^2P_{\tilde{u},\tilde{b},k}= J_{u,b,k},\: \forall\: u\in\mathcal{U}_v,b\in \mathcal{B}_v,k\in\mathcal{K}. 
\end{flalign} \begin{theorem}\label{theorem:equivalence:discrete_continuous} The mixed-integer programming problem (\ref{eq:no_decoding_cooperation_single_discrete}) and the continuous variables problem (\ref{eq:no_decoding_cooperation_single_continuous}) are equivalent. \end{theorem} \begin{IEEEproof} The equivalence of (\ref{eq:no_decoding_cooperation_single_discrete}) and (\ref{eq:no_decoding_cooperation_single_continuous}) is argued as follows. First, the solution of (\ref{eq:no_decoding_cooperation_single_discrete}) can be achieved by the solution of (\ref{eq:no_decoding_cooperation_single_continuous}) by setting $x_{u,b,k}=0$ whenever $\gamma_{u,b,k}=0$, and $E \left(x_{u,b,k}^2\right) = P_{u,k}$ whenever $\gamma_{u,b,k}=1$. Thus the maximal sum rate that is found by solving (\ref{eq:no_decoding_cooperation_single_continuous}) upper bounds the maximal sum rate that is found by solving (\ref{eq:no_decoding_cooperation_single_discrete}). On the other hand, suppose that the optimal transmission power of user $u$ using frequency band $k$, given the transmission power of all other users, is $P_{u,k}$, that is $P_{u,k} = \sum_{b\in\mathcal{B}_v}P_{u,b,k}$. It follows by the duality between the multiple-access channel and the broadcast channel that is proved in \cite{1237143} that the optimal power allocation $(P_{u,b,k})_{b\in\mathcal{B}_v}$ for user $u$ in frequency band $k$, given the power allocation of all other users, is to allocate all its transmission power $P_{u,k}$ over frequency band $k$ to the transmission to the BS with the highest SINR. It follows that the maximal sum rate of (\ref{eq:no_decoding_cooperation_single_continuous}) cannot be larger than that of (\ref{eq:no_decoding_cooperation_single_discrete}). Thus, the two problems (\ref{eq:no_decoding_cooperation_single_discrete}) and (\ref{eq:no_decoding_cooperation_single_continuous}) are equivalent. 
\end{IEEEproof} \subsection{Solving an Approximation of the Continuous Variable Resource Allocation Problem Optimally}\label{sec:continuous_HSINR_gradient} In the following, we solve problem (\ref{eq:no_decoding_cooperation_single_continuous}). Denote: \begin{flalign}\label{eq:SINR_def} \text{SINR}_{u,b,k}(\boldsymbol P) =\frac{|h_{u,b,k}|^2 P_{u,b,k}}{\sigma^2_{b,k}+\sum_{\substack{(\tilde{u},\tilde{b})\in\mathcal{U}_v\times \mathcal{B}_v,\\(\tilde{u},\tilde{b})\neq (u,b)}} |h_{\tilde{u},b,k}|^2P_{\tilde{u},\tilde{b},k}}, \end{flalign} where $\boldsymbol P = (P_{u,b,k})_{(u,b,k)\in\mathcal{U}_{v}\times\mathcal{B}_{v}\times\mathcal{K}}$ is the array of transmission powers. Using the high-SINR approximation \cite{5165179} \begin{flalign}\label{eq:high_SINR_approx_improved} \log(1+z)\geq \alpha(z_0)\log z+\beta(z_0), \end{flalign} where \begin{flalign}\label{eq:alpha_beta_def} \alpha(z_0) = \frac{z_0}{1+z_0},\qquad\beta(z_0) =\log(1+z_0)-\frac{z_0}{1+z_0}\log{z_0}, \end{flalign} and the bound holds for every $z>0$ with equality at $z=z_0$, we obtain the approximated iterative problem (\ref{eq:iterative_alpha_approx}) where $\alpha_{u,b,k}^{(m)}=\alpha(\text{SINR}_{u,b,k}(\boldsymbol P^{(m-1)}))$, $\beta_{u,b,k}^{(m)}=\beta(\text{SINR}_{u,b,k}(\boldsymbol P^{(m-1)}))$ and $\alpha_{u,b,k}^{(0)}=1$, $\beta_{u,b,k}^{(0)}=0$ for all $u\in\mathcal{U}_v$, $b\in\mathcal{B}_v$ and $k\in\mathcal{K}$. 
\begin{flalign}\label{eq:iterative_alpha_approx} \boldsymbol P^{(m)} =& \arg\max_{\boldsymbol P} \sum_{b\in\mathcal{B}_v}\sum_{u\in\mathcal{U}_v}\sum_{k\in\mathcal{K}} W_k\left[\alpha_{u,b,k}^{(m)}\log_2\left(\frac{|h_{u,b,k}|^2P_{u,b,k}}{\sigma^2_{b,k}+ \sum_{\substack{(\tilde{u},\tilde{b})\in\mathcal{U}_v\times \mathcal{B}_v,\\(\tilde{u},\tilde{b})\neq (u,b)}} |h_{\tilde{u},b,k}|^2P_{\tilde{u},\tilde{b},k}}\right)+\beta_{u,b,k}^{(m)}\right]\nonumber\\ &\text{s.t.: } \: 0\leq P_{u,b,k},\quad \sum_{b\in\mathcal{B}_v}\sum_{k\in\mathcal{K}} P_{u,b,k}\leq \overline{P}_u,\quad \forall \: u\in\mathcal{U}_v,b\in\mathcal{B}_v,k\in\mathcal{K}\nonumber\\ & \sum_{\substack{(\tilde{u},\tilde{b})\in\mathcal{U}_v\times \mathcal{B}_v,\\(\tilde{u},\tilde{b})\neq (u,b)}} |h_{\tilde{u},b,k}|^2P_{\tilde{u},\tilde{b},k}= J_{u,b,k},\quad \forall \: u\in\mathcal{U}_v,b\in \mathcal{B}_v,k\in\mathcal{K}. \end{flalign} It is left to solve the problem (\ref{eq:iterative_alpha_approx}). By transforming the variables of the problem using $P_{u,b,k}=\exp(g_{u,b,k})$ and noticing that the terms $\beta_{u,b,k}^{(m)}$ do not affect the optimal power allocation, we get the equivalent convex problem: \begin{flalign}\label{sol_continuous_power_approx} &\ln(\boldsymbol P^{(m)}) = \arg\max \sum_{b\in\mathcal{B}_v}\sum_{u\in\mathcal{U}_v}\sum_{k\in\mathcal{K}} W_k\alpha_{u,b,k}^{(m)}\cdot\log_2\left(\frac{|h_{u,b,k}|^2\exp(g_{u,b,k})}{\sigma^2_{b,k}+ \sum_{\substack{(\tilde{u},\tilde{b})\in\mathcal{U}_v\times \mathcal{B}_v,\\(\tilde{u},\tilde{b})\neq (u,b)}} |h_{\tilde{u},b,k}|^2\exp(g_{\tilde{u},\tilde{b},k})}\right)\nonumber\\ &\text{s.t.: } \sum_{b\in\mathcal{B}_v}\sum_{k\in\mathcal{K}} \exp(g_{u,b,k})\leq \overline{P}_u,\quad \forall\: u\in \mathcal{U}_v. 
\end{flalign} The Lagrangian of (\ref{sol_continuous_power_approx}) is given by \begin{flalign}\label{eq:Lagrangian_dual_prob_continuous} &L(\boldsymbol g,\boldsymbol\lambda;m) = \sum_{b\in\mathcal{B}_v}\sum_{u\in\mathcal{U}_v}\sum_{k\in\mathcal{K}} W_k\alpha_{u,b,k}^{(m)}\cdot\log_2\left(\frac{|h_{u,b,k}|^2\exp(g_{u,b,k})}{\sigma^2_{b,k}+ \sum_{\substack{(\tilde{u},\tilde{b})\in\mathcal{U}_v\times \mathcal{B}_v,\\(\tilde{u},\tilde{b})\neq (u,b)}} |h_{\tilde{u},b,k}|^2\exp(g_{\tilde{u},\tilde{b},k})}\right)\nonumber\\ &\hspace{5cm}- \sum_{u\in\mathcal{U}_v} \lambda_u\left(\sum_{b\in\mathcal{B}_v}\sum_{k\in\mathcal{K}} \exp(g_{u,b,k})-\overline{P}_u\right), \end{flalign} where $m$ denotes the $m$th time (\ref{sol_continuous_power_approx}) is solved. Furthermore, the dual function is given by \begin{flalign}\label{eq:maximizer_Lagrangian_continuous} q(\boldsymbol \lambda;m) = \sup_{\boldsymbol g} L(\boldsymbol g,\boldsymbol\lambda;m). \end{flalign} Thus the dual problem of (\ref{sol_continuous_power_approx}) is \begin{flalign}\label{eq:prob_dual_ptob_continuous} &\min q(\boldsymbol\lambda;m),\nonumber\\ & \text{s.t.: } \lambda_u\geq 0,\:\forall u\in\mathcal{U}_v. \end{flalign} Since the problem (\ref{sol_continuous_power_approx}) is convex with a non-empty interior, its duality gap is zero. Additionally, since (\ref{sol_continuous_power_approx}) has a compact domain in terms of $P_{u,b,k}$, it follows from \cite[Proposition 6.1.1]{Bertsekas/99} that we can solve the dual problem (\ref{eq:prob_dual_ptob_continuous}) using the projected gradient method, that is: \begin{flalign}\label{eq:grad_ascend} \lambda_u^{(m,n+1)} = \left[\lambda_u^{(m,n)}+\epsilon_{\lambda}\left(\sum_{b\in\mathcal{B}_v}\sum_{k\in\mathcal{K}} \exp(g_{u,b,k}^{(m,n)})-\overline{P}_u\right)\right]^+, \end{flalign} where $\boldsymbol g^{(m,n)} = (g_{u,b,k}^{(m,n)})_{u\in\mathcal{U}_v,b\in\mathcal{B}_v,k\in\mathcal{K}}$ is the maximizer of $L(\boldsymbol g,\boldsymbol\lambda^{(m,n)};m)$. 
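For instance, one step of the dual update (\ref{eq:grad_ascend}) can be written as follows (an illustrative sketch of ours; the dictionary-based data layout is an assumption):

```python
import math

def dual_step(lmbda, g, P_max, eps):
    # lambda_u <- [lambda_u + eps * (sum_{b,k} exp(g_{u,b,k}) - Pbar_u)]^+
    # lmbda[u]: current dual variable; g[u]: values g_{u,b,k} over all (b, k);
    # P_max[u]: power budget Pbar_u; eps: step size epsilon_lambda.
    new = {}
    for u, lam in lmbda.items():
        used_power = sum(math.exp(x) for x in g[u])
        # Projection onto lambda_u >= 0 via max(0, .).
        new[u] = max(0.0, lam + eps * (used_power - P_max[u]))
    return new
```

The dual variable $\lambda_u$ thus increases when user $u$'s power budget is violated, which penalizes power use in the next inner maximization, and is clipped at zero otherwise.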
Recall that $P_{u,b,k}=\exp(g_{u,b,k})$. It remains to solve the subproblem (\ref{eq:maximizer_Lagrangian_continuous}). Since its objective function is a strictly concave and differentiable function of $\boldsymbol g$, a solution is attained at the point: \begin{flalign}\label{eq:fixed_point_prob_continuous} &P_{u,b,k}=\frac{W_k\alpha_{u,b,k}^{(m)}}{\lambda_u\ln 2+W_k\sum_{\substack{(\tilde{u},\tilde{b})\in\mathcal{U}_v\times \mathcal{B}_v,\\(\tilde{u},\tilde{b})\neq (u,b)}}\alpha_{\tilde{u},\tilde{b},k}^{(m)}\frac{\text{SINR}_{\tilde{u},\tilde{b},k}(\boldsymbol P^{(m)})}{P_{\tilde{u},\tilde{b},k}^{(m)}|h_{\tilde{u},\tilde{b},k}|^2 }|h_{u,\tilde{b},k}|^2}. \end{flalign} By \cite{5165179} and \cite{414651} we can solve the fixed-point problem (\ref{eq:fixed_point_prob_continuous}) iteratively: \begin{flalign}\label{update_rule_continuous_orig} &P_{u,b,k}^{(m,n,s+1)}=\frac{W_k\alpha_{u,b,k}^{(m)}}{\lambda_u^{(n)}\ln 2+W_k\sum_{\substack{(\tilde{u},\tilde{b})\in\mathcal{U}_v\times \mathcal{B}_v,\\(\tilde{u},\tilde{b})\neq (u,b)}}\alpha_{\tilde{u},\tilde{b},k}^{(m)}\frac{\text{SINR}_{\tilde{u},\tilde{b},k}(\boldsymbol P^{(m,n,s)})}{P_{\tilde{u},\tilde{b},k}^{(m,n,s)}|h_{\tilde{u},\tilde{b},k}|^2 }|h_{u,\tilde{b},k}|^2} \end{flalign} to achieve the optimal power allocation of the subproblem (\ref{eq:maximizer_Lagrangian_continuous}), where $m$ denotes the iteration number of the high-SINR approximation, $n$ denotes the iteration number of the gradient method used to solve the dual problem, and $s$ denotes the iteration number of the fixed-point iteration. The existence of the solution is guaranteed by the strict concavity of the objective of (\ref{eq:maximizer_Lagrangian_continuous}). 
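The high-SINR bound (\ref{eq:high_SINR_approx_improved}) that underlies this iteration is easy to check numerically; the sketch below (ours) implements $\alpha$, $\beta$ from (\ref{eq:alpha_beta_def}) and the resulting lower bound on $\log(1+z)$:

```python
import math

def alpha(z0):
    # alpha(z0) = z0 / (1 + z0)
    return z0 / (1.0 + z0)

def beta(z0):
    # beta(z0) = log(1 + z0) - alpha(z0) * log(z0)
    return math.log(1.0 + z0) - alpha(z0) * math.log(z0)

def lower_bound(z, z0):
    # Lower bound on log(1 + z) that is tight at z = z0 and is
    # concave in log(z), which is what makes (10) a convex problem.
    return alpha(z0) * math.log(z) + beta(z0)
```

Evaluating `lower_bound` around a reference SINR $z_0$ confirms both tightness at $z_0$ and validity of the bound elsewhere.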
\subsection{Solving an Approximation of the Continuous Variable Resource Allocation Problem Efficiently}\label{sec:continuous_HSINR_fixed_point} Since the problem (\ref{sol_continuous_power_approx}) is convex with a non-empty interior, its duality gap is zero, and the Karush–Kuhn–Tucker (KKT) conditions are sufficient for a point to be primal and dual optimal. The KKT conditions for (\ref{sol_continuous_power_approx}), after substituting $P_{u,b,k}=\exp(g_{u,b,k})$, are \begin{flalign} &P_{u,b,k}=\frac{W_k\alpha_{u,b,k}^{(m)}}{\lambda_u\ln 2+W_k\sum_{\substack{(\tilde{u},\tilde{b})\in\mathcal{U}_v\times \mathcal{B}_v,\\(\tilde{u},\tilde{b})\neq (u,b)}}\alpha_{\tilde{u},\tilde{b},k}^{(m)}\frac{\text{SINR}_{\tilde{u},\tilde{b},k}(\boldsymbol P^{(m)})}{P_{\tilde{u},\tilde{b},k}^{(m)}|h_{\tilde{u},\tilde{b},k}|^2 }|h_{u,\tilde{b},k}|^2},\quad \forall u\in\mathcal{U}_v,b\in\mathcal{B}_v,k\in\mathcal{K},\\ &0=\lambda_u\left(\sum_{b\in\mathcal{B}_v}\sum_{k\in\mathcal{K}} P_{u,b,k}- \overline{P}_u\right),\quad \forall u\in\mathcal{U}_v,\\ & \sum_{b\in\mathcal{B}_v}\sum_{k\in\mathcal{K}} P_{u,b,k}\leq \overline{P}_u,\qquad\lambda_u\geq 0, \quad \forall u\in\mathcal{U}_v. 
\end{flalign} Define the following iterative update rule \begin{flalign}\label{update_rule_continuous} &P_{u,b,k}^{(m,s+1)}=\frac{W_k\alpha_{u,b,k}^{(m)}}{\lambda_u^{(s+1)}\ln 2+W_k\sum_{\substack{(\tilde{u},\tilde{b})\in\mathcal{U}_v\times \mathcal{B}_v,\\(\tilde{u},\tilde{b})\neq (u,b)}}\alpha_{\tilde{u},\tilde{b},k}^{(m)}\frac{\text{SINR}_{\tilde{u},\tilde{b},k}(\boldsymbol P^{(m,s)})}{P_{\tilde{u},\tilde{b},k}^{(m,s)}|h_{\tilde{u},\tilde{b},k}|^2 }|h_{u,\tilde{b},k}|^2}, \end{flalign} where $\lambda_u^{(s+1)}=0$ if \begin{flalign} \sum_{b\in\mathcal{B}_v}\sum_{k\in\mathcal{K}} \frac{\alpha_{u,b,k}^{(m)}}{\sum_{\substack{(\tilde{u},\tilde{b})\in\mathcal{U}_v\times \mathcal{B}_v,\\(\tilde{u},\tilde{b})\neq (u,b)}}\alpha_{\tilde{u},\tilde{b},k}^{(m)}\frac{\text{SINR}_{\tilde{u},\tilde{b},k}(\boldsymbol P^{(m,s)})}{P_{\tilde{u},\tilde{b},k}^{(m,s)}|h_{\tilde{u},\tilde{b},k}|^2 }|h_{u,\tilde{b},k}|^2}\leq \overline{P}_u. \end{flalign} Otherwise $\lambda_u^{(s+1)}$ is chosen such that $\sum_{b\in\mathcal{B}_v}\sum_{k\in\mathcal{K}}P_{u,b,k}^{(m,s+1)}=\overline{P}_u$. If this update rule converges, it converges to a KKT point, which is therefore globally optimal. While there is no known proof that guarantees convergence, in practice convergence is observed in simulations. \section{Solving the Resource Allocation Problem via Alternating Optimization}\label{sec:alternating_optimization} A more traditional approach to solving the resource allocation problem (\ref{eq:no_decoding_cooperation_single_discrete}) separates it into two subproblems: a channel allocation problem that sets the value of $\gamma_{u,b,k}$ to be either zero or one, and a power allocation problem that optimizes the transmission power. We then solve these two problems alternately until a stopping criterion is fulfilled. A resource allocation scheme of this type is depicted by Algorithm \ref{algo:Altenating_general}. 
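The structure of this alternating scheme can be sketched as follows (an illustrative skeleton of ours; \texttt{channel\_alloc}, \texttt{power\_alloc}, and \texttt{sum\_rate} stand in for the subroutines described in this section):

```python
def alternating_allocation(channel_alloc, power_alloc, sum_rate, P0,
                           delta=1e-3, n_max=50):
    # Alternate between channel allocation (for fixed powers) and power
    # allocation (for fixed channels) until the sum-rate improvement
    # drops to delta or below, or n_max iterations are reached.
    P = P0
    gamma = channel_alloc(P)
    best = sum_rate(P, gamma)
    for _ in range(n_max):
        gamma = channel_alloc(P)
        P = power_alloc(gamma, P)
        current = sum_rate(P, gamma)
        if current - best <= delta:
            break
        best = current
    return P, gamma
```

Since each stage can only keep or improve the achieved sum rate for the current allocation, the stopping rule on the improvement $\delta_n$ terminates the loop once progress stalls.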
\setlength{\textfloatsep}{.7cm} \begin{algorithm} \caption{}\label{algo:Altenating_general} \begin{algorithmic}[1] \State Notations: $\boldsymbol P^{(n)} = (P^{(n)}_{u,b,k})_{(u,b,k)\in\mathcal{U}_v\times\mathcal{B}_v\times\mathcal{K}}$, $\boldsymbol \gamma^{(n)} = (\gamma^{(n)}_{u,b,k})_{(u,b,k)\in\mathcal{U}_v\times\mathcal{B}_v\times\mathcal{K}}$; \State Input: $\delta>0, N_{\max}\in\mathbb{N}$; \State Set $n=0$, $\delta_0 = 2\delta$; \State Set $P^{(0)}_{u,b,k}=\overline{P}_u/(|\mathcal{B}_v||\mathcal{K}|)$ and $\gamma^{(0)}_{u,b,k}=0$ for all $u\in\mathcal{U}_v$, $b\in\mathcal{B}_v$ and $k\in\mathcal{K}$; \While{ $\delta_n>\delta$ and $n<N_{\max}$} \State $n=n+1$; \State \textbf{Channel allocation:} Given the power allocation $\boldsymbol P^{(n-1)}$, set $\gamma^{(n)}_{u,b,k}$ to be either zero or one for every $u\in\mathcal{U}_v$, $b\in\mathcal{B}_v$ and $k\in\mathcal{K}$. \State \textbf{Power allocation:} Given $\boldsymbol\gamma^{(n)}$, calculate $\boldsymbol P^{(n)}$ by solving the iterative problem (\ref{sol_continuous_power_approx}) starting with some initial values $\alpha_{u,b,k}^{(0)}$, $(u,b,k)\in\mathcal{U}_v\times\mathcal{B}_v\times\mathcal{K}$. 
\State Calculate the sum rate \[R(\boldsymbol P^{(n)},\boldsymbol \gamma^{(n)})\hspace{-0.1cm} =\hspace{-0.1cm}\sum_{b\in\mathcal{B}_v}\hspace{-0.05cm}\sum_{u\in\mathcal{U}_v}\hspace{-0.05cm}\sum_{k\in\mathcal{K}} \gamma^{(n)}_{u,b,k}W_k\log_2\left(1+\frac{|h_{u,b,k}|^2P^{(n)}_{u,b,k}}{\sigma^2_{b,k}+J^{(n)}_{u,b,k}}\right); \] \State Calculate $\delta_n = R(\boldsymbol P^{(n)},\boldsymbol \gamma^{(n)})-R(\boldsymbol P^{(n-1)},\boldsymbol \gamma^{(n-1)})$; \EndWhile \end{algorithmic} \end{algorithm} For the sake of depicting the channel allocation schemes and the initial values of $\alpha^{(0)}_{u,b,k}$ we use the notation \begin{flalign}\label{SINR_single_receiver} \overline{\text{SINR}}_{u,b,k}(\boldsymbol P) = \frac{|h_{u,b,k}|^2\sum_{\tilde{b}\in\mathcal{B}_v}P_{u,\tilde{b},k}}{\sigma^2_{b,k}+\sum_{\substack{\tilde{u}\in\mathcal{U}_v,\tilde{u}\neq u,\\\tilde{b}\in\mathcal{B}_v}} |h_{\tilde{u},b,k}|^2P_{\tilde{u},\tilde{b},k}}. \end{flalign} The interference term in the denominator of (\ref{SINR_single_receiver}) incorporates the constraint that each user communicates with at most one BS at each frequency band. This constraint does not appear in the interference term of the SINR expression (\ref{eq:SINR_def}). This follows since the channel allocation is a by-product of the power allocation scheme presented in Section \ref{sec:joint_power_allocation}. That is, a user is allocated a channel only when the power allocation scheme allocates strictly positive power to the transmission of the user over that channel. Next we present three channel allocation schemes. The first of these channel allocation schemes is a user-centric (UC) one in which, at each frequency band, every user chooses its receiving BS to be the one with the maximal SINR for this user given an initial power allocation. The second and third channel allocation schemes are existing approaches that we also consider for comparison. 
In particular, the second scheme is a BS-centric (BSC) one used, for example, in \cite{6678362,6815733}. In this scheme, in each frequency band every BS chooses its transmitting user to be the one with the maximal SINR. The third and final channel allocation scheme we consider is presented in \cite{7873307}. In this scheme, given a power allocation, channels are allocated to maximize the sum rate for that given power allocation using the Hungarian method. We refer to this approach as the maximum sum rate matching (MSRM) approach. Interestingly, numerical results show that, as the number of virtual cells decreases and their size increases, both the UC channel allocation and the equivalent continuous problem approach outperform both the BSC approach and the MSRM approach. We remark that this work only considers a single power allocation scheme in Algorithm \ref{algo:Altenating_general}. This is due to the results presented in \cite{6678362}, where different power allocation schemes coupled with channel allocation yielded virtually the same average throughput. Hence we believe that different power allocation schemes would yield little difference in the system sum rate relative to that obtained with the power allocation algorithm used in Algorithm \ref{algo:Altenating_general}. \subsection{User-Centric (UC) Channel Allocation}\label{sec:alternating_power_allocation_user} This section presents the first channel allocation scheme, depicted in Algorithm \ref{algo:Altenating_single_set_power_UCB}, for the interference coordination model. This scheme is a UC one in that every user chooses the receiving BS to be the one with the maximal SINR for this user. 
\setlength{\textfloatsep}{.7cm} \begin{algorithm} \caption{}\label{algo:Altenating_single_set_power_UCB} \begin{algorithmic}[1] \State Input: Power allocation $\boldsymbol P = (P_{u,b,k})_{u\in\mathcal{U}_v,b\in\mathcal{B}_v,k\in\mathcal{K}}$; \State For every $u\in\mathcal{U}_v$, $b\in\mathcal{B}_v$ and $k\in\mathcal{K}$ calculate $\overline{\text{SINR}}_{u,b,k}(\boldsymbol P)$; \State For every $u\in\mathcal{U}_v$ and $k\in\mathcal{K}$, calculate: $b_{u,k} = \arg\max_{b\in\mathcal{B}_v} \overline{\text{SINR}}_{u,b,k}(\boldsymbol P)$; \State For every $(u,b,k)\in\mathcal{U}_v\times\mathcal{B}_v\times\mathcal{K}$ set $\gamma_{u,b,k}=\mathbbm{1}_{\{b = b_{u,k}\}}$; \end{algorithmic} \end{algorithm} The motivation behind this approach is to allow the power allocation stage more flexibility in choosing the users who transmit to a given BS. More specifically, in the previously proposed channel allocation schemes discussed in Sections \ref{sec:alternating_power_allocation_BS} and \ref{sec:MSRM_channel_allocation}, at most one user is allocated to a BS in each frequency band. In the UC approach, however, in each frequency band each BS has a list of users that chose it as their receiving BS; the power allocation stage then chooses the user in that list who actually transmits to the BS by allocating that user a positive transmission power. We next discuss the two previously proposed channel allocation schemes. \subsection{Base Station (BS) Centric Resource Allocation}\label{sec:alternating_power_allocation_BS} This section presents the second channel allocation scheme for the interference coordination model. This scheme is a BS-centric one in that every BS chooses its transmitting user to be the one with the maximal SINR for this BS. 
This scheme is inspired by the works \cite{6678362} and \cite{6815733}; however, unlike those works, we do not restrict users to transmit to the same BS over all frequency bands, but allow them to communicate with different BSs in the virtual cell across different frequency bands. We depict the BS-centric channel allocation scheme in Algorithm \ref{algo:Altenating_single_set_power_BCU}. \setlength{\textfloatsep}{.7cm} \begin{algorithm} \caption{BS-centric (BSC) channel allocation.}\label{algo:Altenating_single_set_power_BCU} \begin{algorithmic}[1] \State Input: Power allocation $\boldsymbol P = (P_{u,b,k})_{u\in\mathcal{U}_v,b\in\mathcal{B}_v,k\in\mathcal{K}}$; \State For every $u\in\mathcal{U}_v$, $b\in\mathcal{B}_v$ and $k\in\mathcal{K}$ calculate $\overline{\text{SINR}}_{u,b,k}(\boldsymbol P)$; \State For every $b\in\mathcal{B}_v$ and $k\in\mathcal{K}$, calculate: $u_{b,k} = \arg\max_{u\in\mathcal{U}_v} \overline{\text{SINR}}_{u,b,k}(\boldsymbol P)$; \State For every $u\in\mathcal{U}_v$, $b\in\mathcal{B}_v$ and $k\in\mathcal{K}$ set $\gamma_{u,b,k}=\mathbbm{1}_{\{u = u_{b,k}\}}$; \end{algorithmic} \end{algorithm} The motivation behind this approach is interference reduction: if the SINR at two or more BSs is maximized by the same user, then a transmission of this user intended for one of these BSs strongly interferes with the communication of the others. To reduce this interference, the same user is chosen as the transmitting user by all of these BSs, and the power allocation stage then chooses the receiving BS among them in accordance with the global objective function of the power allocation stage. We remark that even though in this approach several BSs can choose the same user, it can be proved, following the argument presented in the proof of Theorem \ref{theorem:equivalence:discrete_continuous}, that an optimal power allocation allocates positive power to the transmission to at most one of these BSs. In practice, this behavior is observed using the high SINR approximation. 
If a power allocation scheme that does not display this behavior is used, that is, after the power allocation stage some user has a positive transmission power over the same frequency band to two or more BSs, one can improve the sum rate by devoting all the allocated transmit power of that user over that frequency band to the communication with the BS that has the highest SINR on that frequency band. \subsection{Maximum Sum Rate Matching (MSRM) Channel Allocation}\label{sec:MSRM_channel_allocation} This section presents the third and final channel allocation scheme for the interference coordination model. This scheme allocates the channels in a virtual cell optimally for a given power allocation by solving the maximum sum rate matching problem; this approach is presented in \cite{7873307}. Next we cast the channel allocation problem as a matching problem. Let $B_k=(\mathcal{U}_v,\mathcal{B}_v,E,\boldsymbol{P},k)$ denote the bipartite graph that connects the set of users $\mathcal{U}_v$ to the set of BSs $\mathcal{B}_v$, where the set $E$ is the set of all pairs $\{u,b\}$ such that $u\in\mathcal{U}_v$ and $b\in\mathcal{B}_v$. Each edge $\{u,b\}$ is assigned a weight that is equal to the transmission rate from $u$ to $b$ over frequency band $k$, given the power allocation $\boldsymbol{P}$. We allocate the channels in each frequency band $k$ by solving the sum rate maximization matching problem of $B_k$ optimally. This optimal matching can be found, for example, by using the Hungarian method \cite{doi:10.1002/nav.3800020109} for every $B_k$. This channel allocation scheme is depicted in Algorithm \ref{algo:Altenating_single_set_power_assignment}. 
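For intuition, on a tiny virtual cell the per-band maximum-weight matching can even be found by brute force over all assignments. The sketch below is our own illustrative code, with hypothetical rates, and exhaustive search standing in for the Hungarian method:

```python
import itertools

def max_sum_rate_matching(rates):
    """rates[u][b]: rate of user u to BS b on one band. Returns the
    user -> BS assignment maximizing the band sum rate by brute force
    (assumes #users <= #BSs; a stand-in for the Hungarian method)."""
    U, B = len(rates), len(rates[0])
    best_val, best_match = float("-inf"), None
    for perm in itertools.permutations(range(B), U):
        val = sum(rates[u][perm[u]] for u in range(U))
        if val > best_val:
            best_val, best_match = val, {u: perm[u] for u in range(U)}
    return best_match, best_val

# Hypothetical per-band rates for 2 users and 2 BSs.
rates = [[5.0, 2.0],
         [4.0, 3.0]]
match, total = max_sum_rate_matching(rates)
```

Brute force scales factorially with the cell size, which is why \cite{7873307} uses the Hungarian method; the sketch is only meant to make the matching objective concrete.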
\setlength{\textfloatsep}{.7cm} \begin{algorithm} \caption{Maximum sum rate matching (MSRM) channel allocation.}\label{algo:Altenating_single_set_power_assignment} \begin{algorithmic}[1] \State Input: Power allocation $\boldsymbol P = (P_{u,b,k})_{u\in\mathcal{U}_v,b\in\mathcal{B}_v,k\in\mathcal{K}}$; \State For every $u\in\mathcal{U}_v$, $b\in\mathcal{B}_v$ and $k\in\mathcal{K}$ calculate $\overline{\text{SINR}}_{u,b,k}(\boldsymbol P)$ and \[R_{u,b,k}=W_k\log_2\left(1+\overline{\text{SINR}}_{u,b,k}(\boldsymbol P)\right);\] \State For every $k\in\mathcal{K}$ find the optimal matching of $B_k=(\mathcal{U}_v,\mathcal{B}_v,E,\boldsymbol{P},k)$, then set $\gamma_{u,b,k}=1$ if user $u$ was matched with BS $b$ in frequency band $k$ and $\gamma_{u,b,k}=0$ otherwise; \end{algorithmic} \end{algorithm} We note that, as stated in \cite{7873307}, given a power allocation $\boldsymbol P$, Algorithm \ref{algo:Altenating_single_set_power_assignment} finds the optimal channel allocation that maximizes the sum rate for that power allocation. However, since the power allocation itself may not be optimal, the overall solution is not necessarily optimal. Interestingly, as noted above, numerical results show that as the number of virtual cells decreases and their size increases, both the user-centric channel allocation and the equivalent continuous problem approach outperform this scheme. \subsection{Convergence of Algorithm \ref{algo:Altenating_general}} The convergence of Algorithm \ref{algo:Altenating_general} depends on the channel allocation scheme used and the initial values $\alpha_{u,b,k}^{(0)}$. Since the system sum rate is bounded, convergence must occur whenever there is an $N_0\in\mathbb{N}$ such that $R(\boldsymbol P^{(n)},\boldsymbol\gamma^{(n)})\geq R(\boldsymbol P^{(n-1)},\boldsymbol\gamma^{(n-1)})$ for all $n\geq N_0$. 
This, in turn, must occur if $R(\boldsymbol P^{(n-1)},\boldsymbol\gamma^{(n)})\geq R(\boldsymbol P^{(n-1)},\boldsymbol\gamma^{(n-1)})$ and $R(\boldsymbol P^{(n)},\boldsymbol\gamma^{(n)})\geq R(\boldsymbol P^{(n-1)},\boldsymbol\gamma^{(n)})$ for every $n\geq N_0$. This condition holds when allocating channels using Algorithm \ref{algo:Altenating_single_set_power_UCB} or Algorithm \ref{algo:Altenating_single_set_power_assignment} and choosing the initial values $\alpha_{u,b,k}^{(0)}$ at time $n$ to be $\gamma^{(n)}_{u,b,k}\overline{\text{SINR}}_{u,b,k}(\boldsymbol P^{(n-1)})$, since Algorithm \ref{algo:Altenating_single_set_power_UCB} and Algorithm \ref{algo:Altenating_single_set_power_assignment} cannot decrease the sum rate of a virtual cell, and since the high SINR approximation (\ref{eq:high_SINR_approx_improved}) is achieved with equality for $z=z_0$. In practice, convergence was observed in simulations for all channel allocation algorithms presented in this work for both choices $\alpha_{u,b,k}^{(0)}=\gamma^{(n)}_{u,b,k}\overline{\text{SINR}}_{u,b,k}(\boldsymbol P^{(n-1)})$ and $\alpha_{u,b,k}^{(0)}=\gamma^{(n)}_{u,b,k}$. The latter choice provided a small improvement over the former and was used in our simulations. \section{Resource Allocation for Coordinated Multi-Point Decoding in Virtual Cells}\label{sec:joint_decoding} This section is dedicated to solving the problem (\ref{eq:uplink_problem_clean}) presented in Section \ref{subsection:uplink_joint_decoding_problem}, in which BSs use cloud decoding with backhaul links of infinite capacity. Note that this setup is equivalent to a multiple access channel (MAC) with a single transmitting antenna at each user and multiple antennas at the receiver. 
Using the identity $\det(\boldsymbol{AB})=\det(\boldsymbol{A})\det(\boldsymbol{B})$ we have that problem (\ref{eq:uplink_problem_clean}), which depicts the capacity of the virtual cell, can be written as follows: \begin{flalign}\label{problem_joint_decode_ininite} \max &\sum_{k\in\mathcal{K}}W_k\left[\log_2\det\left(\boldsymbol{N}_{v,k}+\sum_{u\in\mathcal{U}_v}p_{u,k}\boldsymbol h_{u,k} \boldsymbol h_{u,k}^{\dagger}\right)-\log_2\det\left(\boldsymbol{N}_{v,k}\right)\right],\nonumber\\ \text{s.t.: } & \sum_{k\in\mathcal{K}} p_{u,k}\leq \overline{P}_{u},\quad p_{u,k}\geq 0. \end{flalign} Since the terms $\log_2\det\left(\boldsymbol{N}_{v,k}\right)$ are constants, hereafter we omit them from the objective function. Denote $\boldsymbol{p}_u = (p_{u,1},\ldots,p_{u,K})$ and let: \begin{flalign} &f\left(\boldsymbol{p}_{u_1},\ldots,\boldsymbol{p}_{u_{|\mathcal{U}_v|}}\right) =\sum_{k\in\mathcal{K}}W_k\log_2\det\left(\boldsymbol{N}_{v,k}+\sum_{u\in\mathcal{U}_v}p_{u,k}\boldsymbol h_{u,k} \boldsymbol h_{u,k}^{\dagger}\right). \end{flalign} In order to solve the problem (\ref{problem_joint_decode_ininite}) optimally and iteratively using the cyclic coordinate ascent algorithm \cite[Chapter 2.7]{Bertsekas/99}, the following three conditions must hold: \begin{enumerate} \item The function $f\left(\boldsymbol{p}_{u_1},\ldots,\boldsymbol{p}_{u_{|\mathcal{U}_v|}}\right)$ is concave. \item Define \begin{flalign} \mathcal{P}&\triangleq \left\{\left(\boldsymbol{p}_{u_1},\ldots,\boldsymbol{p}_{u_{|\mathcal{U}_v|}}\right):\sum_{k\in\mathcal{K}} p_{u,k}\leq \overline{P}_u,\:\: p_{u,k}\geq 0 \:\: \forall\: u\in\mathcal{U}_v,\,k\in\mathcal{K}\right\},\nonumber\\ \mathcal{P}_u&\triangleq\left\{\boldsymbol{p}_u:\sum_{k\in\mathcal{K}}p_{u,k}\leq \overline{P}_u,\:p_{u,k}\geq0\right\}, \end{flalign} then $\mathcal{P} = \mathcal{P}_{u_1}\times\ldots\times\mathcal{P}_{u_{|\mathcal{U}_v|}}$. 
\item The problem \begin{flalign}\label{problem_joint_decode_ininite_single} \max_{\tilde{\boldsymbol{p}}_{u_i}}\: &f\left(\boldsymbol{p}_{u_1},\ldots,\boldsymbol{p}_{u_{i-1}},\tilde{\boldsymbol{p}}_{u_i},\boldsymbol{p}_{u_{i+1}},\ldots,\boldsymbol{p}_{u_{|\mathcal{U}_v|}}\right)\nonumber\\ \text{s.t.: } & \tilde{\boldsymbol{p}}_{u_i}\in\mathcal{P}_{u_i}, \end{flalign} has a unique maximizing solution. \end{enumerate} Next we solve problem (\ref{problem_joint_decode_ininite_single}) and show that the optimal solution is uniquely attained. Denote $\boldsymbol\Sigma_{i,k} = \boldsymbol{N}_{v,k}+\sum_{j\neq i}p_{u_j,k}\boldsymbol h_{u_j,k}\boldsymbol h_{u_j,k}^{\dagger}$. Problem (\ref{problem_joint_decode_ininite_single}) is then \begin{flalign}\label{problem_joint_decode_ininite_single_eq} \max &\sum_{k\in\mathcal{K}}W_k\log_2\det\left(\boldsymbol\Sigma_{i,k}+p_{u_i,k}\boldsymbol h_{u_i,k} \boldsymbol h_{u_i,k}^{\dagger}\right)\nonumber\\ \text{s.t.: } & \sum_{k\in\mathcal{K}} p_{u_i,k}\leq \overline{P}_{u_i},\quad p_{u_i,k}\geq 0. \end{flalign} The Lagrangian of (\ref{problem_joint_decode_ininite_single_eq}) is: \begin{flalign*} &L(\boldsymbol p_{u_i},\lambda,\boldsymbol\mu) = \sum_{k\in\mathcal{K}}W_k\log_2\det\left(\boldsymbol\Sigma_{i,k}+p_{u_i,k}\boldsymbol h_{u_i,k} \boldsymbol h_{u_i,k}^{\dagger}\right)-\lambda_{u_i}\left(\sum_{k\in\mathcal{K}} p_{u_i,k}-\overline{P}_{u_i}\right) +\sum_{k\in\mathcal{K}}\mu_{u_i,k}p_{u_i,k}. 
\end{flalign*} Next, we calculate the derivative of the Lagrangian with respect to $p_{u_i,k}$: \begin{flalign} \frac{\partial L(\boldsymbol p_{u_i},\lambda,\boldsymbol\mu)}{\partial p_{u_i,k}}&= W_k\boldsymbol h_{u_i,k}^{\dagger}\left(\boldsymbol\Sigma_{i,k}+p_{u_i,k}\boldsymbol h_{u_i,k} \boldsymbol h_{u_i,k}^{\dagger}\right)^{-1}\boldsymbol h_{u_i,k} - \lambda_{u_i}+\mu_{u_i,k} \nonumber\\ & = W_k\frac{\boldsymbol h_{u_i,k}^{\dagger}\boldsymbol\Sigma_{i,k}^{-1}\boldsymbol h_{u_i,k}}{1+\boldsymbol h_{u_i,k}^{\dagger}\boldsymbol\Sigma_{i,k}^{-1}\boldsymbol h_{u_i,k}p_{u_i,k}}-\lambda_{u_i}+\mu_{u_i,k}. \end{flalign} The KKT conditions for (\ref{problem_joint_decode_ininite_single_eq}) are \begin{flalign} & W_k\frac{\boldsymbol h_{u_i,k}^{\dagger}\boldsymbol\Sigma_{i,k}^{-1}\boldsymbol h_{u_i,k}}{1+\boldsymbol h_{u_i,k}^{\dagger}\boldsymbol\Sigma_{i,k}^{-1}\boldsymbol h_{u_i,k}p_{u_i,k}}-\lambda_{u_i}+\mu_{u_i,k} = 0,\nonumber\\ & \lambda_{u_i}\left(\sum_{k\in\mathcal{K}} p_{u_i,k}-\overline{P}_{u_i}\right) = 0,\quad \mu_{u_i,k}p_{u_i,k}=0,\nonumber\\ & \mu_{u_i,k}\geq 0,\quad \lambda_{u_i}\geq0. \end{flalign} Since $\mu_{u_i,k}$ is nonnegative for all $k$, and the matrix $\boldsymbol\Sigma^{-1}_{i,k}$ is positive definite for all $k$, in order to fulfill the first KKT condition, $\lambda_{u_i}$ must be strictly positive. Now, if $p_{u_i,k}>0$, then $\mu_{u_i,k} =0 $ and by the first KKT condition we have \begin{flalign} p_{u_i,k}=\frac{W_k}{\lambda_{u_i}}-\frac{1}{\boldsymbol h_{u_i,k}^{\dagger}\boldsymbol\Sigma_{i,k}^{-1}\boldsymbol h_{u_i,k}}. \end{flalign} Also, if $p_{u_i,k}=0$, then by the first KKT condition we have \begin{flalign} W_k\boldsymbol h_{u_i,k}^{\dagger}\boldsymbol\Sigma_{i,k}^{-1}\boldsymbol h_{u_i,k}+\mu_{u_i,k} = \lambda_{u_i}. 
\end{flalign} It follows that \begin{flalign} p_{u_i,k} = \left(\frac{W_k}{\lambda_{u_i}}-\frac{1}{\boldsymbol h_{u_i,k}^{\dagger}\boldsymbol\Sigma_{i,k}^{-1}\boldsymbol h_{u_i,k}}\right)^+ \end{flalign} where $\lambda_{u_i}$ is chosen such that $\sum_{k\in\mathcal{K}}p_{u_i,k} = \overline{P}_{u_i}$. \section{Numerical Results}\label{se:simulation} This section presents Monte Carlo simulation results for the resource allocation and user affiliation schemes presented in this paper. In these simulations there are $8$ frequency bands, each of bandwidth $20$ kHz, and the carrier frequency is set to $1800$ MHz. The noise power received by each BS is $-174$ dBm/Hz, and the maximal power constraint for each user is $23$ dBm. Finally, in each frequency band the channel exhibits Rayleigh fading, log-normal shadowing with standard deviation $8$ dB, and a path loss of $PL(d)= 35\log_{10}(d)+34$ dB, where $d$ denotes the distance between the transmitter and the receiver in meters (see \cite{4138008}). The network comprises $15$ BSs and $100$ users, uniformly located in a square of side $2000$ meters. The results are averaged over $1000$ system realizations. The numerical results depict the average system sum rate achieved by the BS clustering, resource allocation methods, and user affiliation scheme we propose in this paper. To evaluate the performance of our BS clustering, we compare the average system sum rate achieved by hierarchical clustering with the minimax linkage criterion to that of other popular clustering algorithms, namely, the K-means clustering algorithm and the spectral clustering algorithm \cite{Ng:2001:SCA:2980539.2980649} for the choices $\sigma=\sqrt{2000}$ and $\sigma=2000$. The simulation results for the system setup stated above are shown in Figures \ref{Best_Channel_Average_Sum_Rate_fig_single_all}-\ref{Several_Joint_decoding_exhaustive_hierarchial_fig}. 
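As an aside, the water-filling update derived in Section \ref{sec:joint_decoding} lends itself to a simple numerical procedure: since the total allocated power $\sum_{k}p_{u_i,k}$ is decreasing in $\lambda_{u_i}$, the multiplier can be found by bisection. A Python sketch follows, where $W$ and $g$ stand for the bandwidths $W_k$ and the effective gains $\boldsymbol h_{u_i,k}^{\dagger}\boldsymbol\Sigma_{i,k}^{-1}\boldsymbol h_{u_i,k}$; the function and variable names are ours, and this is not the simulation code used for the figures:

```python
def water_filling(W, g, P_total, tol=1e-9):
    """Solve p_k = max(W_k / lam - 1 / g_k, 0) with sum_k p_k = P_total
    by bisecting on lam > 0; W_k are bandwidths, g_k effective gains."""
    def total_power(lam):
        return sum(max(Wk / lam - 1.0 / gk, 0.0) for Wk, gk in zip(W, g))

    lo, hi = 1e-12, 1.0
    while total_power(hi) > P_total:  # enlarge hi until the budget is met
        hi *= 2.0
    while hi - lo > tol:              # total_power is decreasing in lam
        mid = 0.5 * (lo + hi)
        if total_power(mid) > P_total:
            lo = mid                  # too much power allocated: raise lam
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return [max(Wk / lam - 1.0 / gk, 0.0) for Wk, gk in zip(W, g)]

# Two equal bands: the budget splits evenly.
p_equal = water_filling([1.0, 1.0], [1.0, 1.0], 2.0)
# A strong and a very weak band: the weak band is switched off.
p_skewed = water_filling([1.0, 1.0], [10.0, 0.1], 1.0)
```

Any monotone root-finding method would do here; bisection is used only for simplicity.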
An additional figure, Fig.~\ref{Several_Comparison_Clustering_Max_Average_Sum_Rate_max_single2}, presents numerical results that evaluate the clustering choice for a system setup with $10$ BSs and $80$ users uniformly located in a square of side $1000$ meters; all the other system parameters remain the same. The line descriptions in the figures have the structure $F1 - F2 - F3$, where \begin{itemize} \item The $F1$ field describes the BS clustering method used. This field can take one of the following options: \textit{Hierarchical}, which stands for hierarchical clustering with the minimax linkage criterion; \textit{K-means}, which stands for the K-means clustering algorithm; and \textit{Spectral clustering $\sigma=x$}, which stands for spectral clustering where $\sigma$ takes the value $x$. \item The $F2$ field describes the resource allocation scheme. This field can take one of the following options: \begin{itemize} \item \textit{JD}, which stands for Joint Decoding, refers to the resource allocation scheme for the coordinated multi-point model presented in Section \ref{sec:joint_decoding}. \item \textit{Continuous}, which refers to the resource allocation presented in Section \ref{sec:joint_power_allocation}. \item \textit{UC}, which refers to the resource allocation presented in Section \ref{sec:alternating_power_allocation_user}. \item \textit{BSC}, which refers to the resource allocation presented in Section \ref{sec:alternating_power_allocation_BS}. \item \textit{MSRM}, which refers to the resource allocation presented in Section \ref{sec:MSRM_channel_allocation}. \item \textit{Max SUD}, which refers to the maximal average sum rate achieved over all of the above resource allocation schemes for the interference coordination model. \end{itemize} \item The $F3$ field describes the user affiliation criterion. This field can either be \textit{``best channel"} or \textit{``closest BS"}. 
\end{itemize} \subsection{Average System Sum Rate} Figures \ref{Best_Channel_Average_Sum_Rate_fig_single_all}-\ref{Best_Channel_Average_Sum_Rate_fig_all} depict the average system sum rate as a function of the number of virtual cells. We clustered the BSs in the network according to the hierarchical clustering algorithm with the minimax linkage criterion that is depicted in Algorithm \ref{algo:hierarchical_clustering}. We considered both of the user affiliation rules we propose in Section \ref{sec_user_affil}, i.e., the ``closest BS" criterion and the ``best channel" criterion. We examined the average system sum rate of both cooperation models discussed in this paper: the interference coordination model whose resource allocation schemes are discussed in Sections \ref{sec:joint_power_allocation}-\ref{sec:alternating_optimization}, and the coordinated multi-point decoding model whose resource allocation scheme is discussed in Section \ref{sec:joint_decoding}. Fig.~\ref{Best_Channel_Average_Sum_Rate_fig_single_all} depicts the average system sum rate of the interference coordination model for each of the resource allocation schemes and each of the user affiliation schemes we propose in this paper. Fig.~\ref{Best_Channel_Average_Sum_Rate_joint_decoding} depicts the average system sum rate of the coordinated multi-point decoding for each of the user affiliation schemes we propose. Finally, Fig.~\ref{Best_Channel_Average_Sum_Rate_fig_all} depicts the average system sum rate achieved by each of the cooperation models we consider. 
\begin{figure} \centering \includegraphics[scale=0.67]{Figure1.png} \vspace{-0.4cm} \caption{Comparison of the average system sum rate of the interference coordination model as a function of the number of virtual cells using hierarchical BS clustering with minimax linkage criterion.} \label{Best_Channel_Average_Sum_Rate_fig_single_all} \vspace{-0.3cm} \end{figure} \begin{figure} \centering \includegraphics[scale=0.67]{Figure2.png} \vspace{-0.4cm} \caption{Comparison of the average system sum rate of the coordinated multi-point decoding as a function of the number of virtual cells using hierarchical BS clustering with minimax linkage criterion.} \label{Best_Channel_Average_Sum_Rate_joint_decoding} \vspace{-0.3cm} \end{figure} \begin{figure} \centering \includegraphics[scale=0.67]{Figure3.png} \vspace{-0.4cm} \caption{Comparison between the average sum rate of the interference coordination model and the coordinated multi-point decoding as a function of the number of virtual cells using hierarchical BS clustering with minimax linkage criterion.} \label{Best_Channel_Average_Sum_Rate_fig_all} \vspace{-0.4cm} \end{figure} Figures \ref{Best_Channel_Average_Sum_Rate_fig_single_all}-\ref{Best_Channel_Average_Sum_Rate_fig_all} lead to several interesting insights and conclusions. First, they confirm the expectation that, as the number of virtual cells decreases, the average sum rate increases. Second, they show that the best channel affiliation rule outperforms the closest BS one when the number of virtual cells is large. However, as Fig.~\ref{Best_Channel_Average_Sum_Rate_fig_single_all} shows, this changes in the interference coordination model when the number of virtual cells decreases. In this case the closest BS affiliation rule either outperforms or is on par with the best channel one, depending on the resource allocation scheme. 
Additionally, Fig.~\ref{Best_Channel_Average_Sum_Rate_fig_single_all} shows that it is best to use the BSC or MSRM channel allocation methods, which yielded similar performance, for allocating channels and power in virtual cells, except when there is a single virtual cell (fully centralized optimization). In this case the two new resource allocation techniques that we propose outperform these other methods. This can be explained by the fact that our new schemes provide more freedom in the power allocation stage to choose which users have a positive transmission power compared with existing methods. However, since the power allocation problem is solved approximately, its solution may not be optimal. When the virtual cells are small (i.e., there are many virtual cells), the channel allocation choice of the existing methods is good, whereas the new methods suffer a loss in performance due to the suboptimality of the power allocation stage. However, as the virtual cells grow (as their number decreases), the ability of the new methods to consider more channel allocation combinations in the power allocation stage improves the resource allocation performance, even though the solution of the power allocation problem is only approximately optimal. Overall, the average sum rate increase of the fully centralized scenario (a single virtual cell) over the fully distributed scenario is approximately $20\%$ when considering the best achieved average sum rate at each point. Fig.~\ref{Best_Channel_Average_Sum_Rate_joint_decoding} depicts the average system sum rate of the coordinated multi-point decoding as a function of the number of virtual cells comprising the network. It shows a monotonic and significant improvement in average system sum rate as the number of virtual cells decreases; the overall improvement in average system sum rate is $330\%$. 
Fig.~\ref{Best_Channel_Average_Sum_Rate_fig_all} compares the average system sum rate achieved by the coordinated multi-point decoding with that achieved by the interference coordination model. It shows that coordinated multi-point decoding can achieve a significantly higher average system sum rate than single user decoding. However, single user decoding may yield a higher sum rate when the number of virtual cells is large. In that regime, where the limited coordination between BSs is similar to having no coordination at all, ignoring out-of-cell interference affects the joint decoding scheme more severely than the interference coordination model, since joint decoding depends on the exact second-order statistics of the interference. In this case the loss in performance caused by using an inexact interference covariance matrix is not compensated by the gain from joint decoding within the virtual cell. \subsection{Comparison with Other Clustering Algorithms} We also compared the average system sum rate using the hierarchical clustering algorithm with minimax linkage criterion with that of two other popular clustering algorithms, namely, the K-means clustering algorithm and the spectral clustering algorithm \cite{Ng:2001:SCA:2980539.2980649} for the choices $\sigma=\sqrt{2000}$ and $\sigma=2000$. Fig.~\ref{Several_Comparison_Clustering_Max_Average_Sum_Rate_max_single} depicts the maximal average system sum rate achieved by each of the clustering algorithms, where the maximization is taken over the resource allocation schemes for the interference coordination model presented in this work. 
Additionally, Fig.~\ref{Several_Joint_decoding_exhaustive_hierarchial_fig} depicts the average system sum rate achieved by coordinated multi-point decoding. Figs.~\ref{Several_Comparison_Clustering_Max_Average_Sum_Rate_max_single}-\ref{Several_Joint_decoding_exhaustive_hierarchial_fig} show that the hierarchical algorithm consistently outperforms both the K-means and the spectral clustering algorithms for both user affiliation rules and both cooperation models. \begin{figure} \centering \includegraphics[scale=0.67]{Figure4.png} \vspace{-0.4cm} \caption{Comparison of the maximal average sum rate of several BSs clustering algorithms as a function of the number of virtual cells for the interference coordination model.} \label{Several_Comparison_Clustering_Max_Average_Sum_Rate_max_single} \vspace{-0.4cm} \end{figure} \begin{figure} \centering \includegraphics[scale=0.67]{Figure5.png} \vspace{-0.4cm} \caption{Comparison of the maximal average sum rate of several BSs clustering algorithms as a function of the number of virtual cells for coordinated multi-point decoding.} \label{Several_Joint_decoding_exhaustive_hierarchial_fig} \vspace{-0.4cm} \end{figure} We considered an additional network setup comprising $10$ BSs and $80$ users uniformly located in a square of side $1000$ meters. Fig.~\ref{Several_Comparison_Clustering_Max_Average_Sum_Rate_max_single2} presents the average system sum rate as a function of the number of virtual cells for the interference coordination model. The results were averaged over $1000$ system realizations. Fig.~\ref{Several_Comparison_Clustering_Max_Average_Sum_Rate_max_single2} shows that a proper choice of the clustering algorithm is crucial for improving network performance. This is evident in the plot of the spectral clustering algorithm, in which the network performance monotonically decreases as the number of virtual cells is decreased from 10 to 5. 
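For reference, the minimax linkage between two clusters is the smallest radius of a ball, centered at one of the merged points, that covers the entire merged cluster; hierarchical clustering merges the pair of clusters with the smallest such radius. A toy sketch follows (our own illustrative code, with a hypothetical one-dimensional layout of BS coordinates, not the clustering implementation used in the simulations):

```python
def minimax_linkage(c1, c2, dist):
    """Minimax linkage between clusters c1 and c2 (lists of points):
    the smallest radius r such that some point of the merged cluster
    is within distance r of every other merged point."""
    merged = list(c1) + list(c2)
    return min(max(dist(p, q) for q in merged) for p in merged)

# Hypothetical 1-D BS coordinates and the usual absolute-value metric.
d = lambda a, b: abs(a - b)
r_ab = minimax_linkage([0.0, 1.0], [4.0], d)   # point 1.0 covers all within 3
r_bc = minimax_linkage([4.0], [10.0], d)       # either point covers all within 6
```

Because the linkage is defined by a representative point of the merged cluster, changing the number of clusters only affects clusters near the merge, which is the locality property exploited in the paper.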
\begin{figure} \centering \includegraphics[scale=0.6]{Figure6.png} \vspace{-0.4cm} \caption{Comparison of the maximal average sum rate of several BSs clustering algorithms as a function of the number of virtual cells for the interference coordination model.} \label{Several_Comparison_Clustering_Max_Average_Sum_Rate_max_single2} \vspace{-0.4cm} \end{figure} \section{Conclusion}\label{sec:conclusion} This work addressed the role of virtual cells in resource allocation and network management for future wireless networks. It proposed methods for two design aspects of this network optimization, namely, forming the virtual cells and allocating the communication resources in each virtual cell so as to maximize the total system sum rate. We considered two cooperation models within virtual cells. The first model used interference coordination, where the resource allocation in each virtual cell is performed jointly for all users and BSs in the virtual cell but there is no decoding cooperation. The second cooperation model we considered was the coordinated multi-point decoding model, whereby BSs in a virtual cell allocate the communication resources jointly and also decode their signals cooperatively. We presented two types of resource allocation schemes for the interference coordination model. The first scheme converted the NP-hard mixed-integer resource allocation problem into a continuous resource allocation problem and then found an approximate solution. The second scheme alternated between the power allocation and channel allocation problems. We proposed a new channel allocation scheme that is carried out in a user-centric manner, and also considered a BS-centric approach. We additionally considered a maximum sum rate matching approach, in which an optimal channel assignment is found for a given power allocation; since this power allocation may not be optimal, the overall solution may be sub-optimal as well. 
We also solved the joint decoding resource allocation problem for the coordinated multi-point decoding model in each virtual cell optimally. All of these schemes assume the BSs have been assigned to virtual cells via clustering. For this clustering we proposed hierarchical clustering of the BSs with the minimax linkage criterion to form the virtual cells, since changing the number of virtual cells only causes local changes and does not force a reclustering of all the BSs in the network. We presented numerical results for all of the aforementioned models. Our numerical results demonstrate the increase in system sum rate that our neighborhood-based optimization yields. This increase is monotonic as the neighborhood-based optimization moves from distributed to centralized optimization. Additionally, our numerical results indicate that coordinated multi-point communication systems show a greater increase in system sum rate as the number of virtual cells decreases, in comparison with interference coordination communication systems. Finally, they show that hierarchical clustering with the minimax linkage criterion yields a higher system sum rate than both K-means and spectral clustering. \bibliographystyle{IEEEtran}
\section{Dataset} \begin{figure}[ht] \centering \includegraphics[width=0.98\textwidth] {./img/dataset_7x7.png} \caption[The dataset of topology optimization process]{Samples from the dataset of the topology optimization process.} \label{fig:dataset} \end{figure} \newpage \section{Results} \begin{figure}[ht] \centering \includegraphics[width=0.9 \textwidth] {./img/pred_big_4x4.png} \caption{Results of the application of the proposed model to the prediction of the final structure.} \label{fig:pred_appendix_big} \end{figure} \section{Conclusion} \label{sec:conclusion} In this paper, we proposed a neural network as an effective tool for the acceleration of the topology optimization process. Our model learned the mapping from an intermediate result of the iterative method to the final structure of the design domain. This allowed us to stop the SIMP method early and significantly decrease the total time consumption. We demonstrated that the model trained on a dataset of minimal compliance problems can produce a rough approximation of the solution for other types of topology optimization problems. Various experiments showed that the proposed neural network transfers successfully from a dataset with a small resolution to problems defined on grids with higher resolution. \section{Introduction} \label{sec:intro} Topology optimization solves the layout problem with the following formulation: how should one distribute the material inside a design domain such that the obtained structure has optimal properties and satisfies the prescribed constraints? The most challenging formulation of the problem requires the solution to be binary, i.e., it should state whether there is material or void in each part of the design domain. One common example of such an optimization is the minimization of the elastic strain energy of a body for a given total weight and boundary conditions. 
Initiated by the demands of the automotive and aerospace industries in the $20^{th}$ century, topology optimization has spread to a wide range of other disciplines: e.g., fluids, acoustics, electromagnetics, optics, and combinations thereof \cite{topopt_application}. All modern approaches to topology optimization used in commercial and academic software are based on finite element methods. SIMP (Solid Isotropic Material with Penalization), which was introduced in 1989 \cite{Bendsoe1989}, is currently a widely used, simple, and efficient technique. It penalizes intermediate values of the material density, which improves the convergence of the solution to a binary one. Alternatively, the topology optimization problem can be solved using BESO (Bi-directional Evolutionary Structural Optimization) \cite{beso_1, beso_2}. The key idea of this method is to remove material where the stress is lowest and add material where the stress is highest. A more detailed review is given in Section \ref{sec:topopt}. For all of the above-described methods, the process of optimization can be roughly divided into two stages: a general redistribution of the material and a refinement. During the first stage, the material layout varies significantly from iteration to iteration. During the second stage, the material distribution converges to the final result: the global structure remains unchanged and only local alterations are observed. In this paper, we propose a deep learning based approach for speeding up the most time-consuming part of traditional topology optimization methods. The main novelty of this work is to state the problem as an image segmentation task. We leverage the power of deep learning methods as an efficient pixel-wise image labeling technique to accelerate modern topology optimization solvers. 
The key features of our approach are the following: \begin{itemize} \item acceleration of the optimization process; \item excellent generalization properties; \item a fully scalable technique. \end{itemize} \section{Learning Topology Optimization} \label{sec:learn_topopt} As illustrated in Section \ref{sec:topopt}, it is enough for the solver to perform a small number $N_0$ of iterations to obtain a preliminary view of the structure. The fraction of non-binary densities could be close to 1; however, the global layout pattern is close to the final one. The obtained image $I$ can be interpreted as a blurred image of the final structure, or an image distorted by other factors. Crucially, there are just two types of objects in this image: the material and the void. The image $I^*$, obtained as the result of topology optimization, does not contain intermediate values and can therefore be interpreted as the mask of image $I$. According to this notation, starting from iteration $N_0$ the process of optimization $I \rightarrow I^*$ mimics the process of image segmentation for two classes, i.e. foreground-background segmentation. We propose the following pipeline for topology optimization: use the SIMP method to perform the initial iterations and obtain a distribution with non-binary densities; then use the neural network to perform the segmentation of the obtained image and converge the distribution to a $\{0, 1\}$ solution. \subsection{Architecture} \begin{figure} \centering \includegraphics[width=\textwidth] {./img/architecture.png} \caption[Architecture of the neural network for topology optimization]{The architecture of the proposed neural network for topology optimization. All kernels are of size $3 \times 3$. The number of kernels is represented by the number at the bottom of the layer.
Blue arrows and opaque boxes represent the concatenation of the features from different layers.} \label{fig:nn_arch} \end{figure} Here we introduce the \textbf{\textit{Neural Network for Topology Optimization}} --- a deep fully-convolutional neural network designed to perform the convergence of densities during the topology optimization process. The input to the model consists of two grayscale images (or a two-channel image). The first one is the density distribution $X_n$ inside the design domain, obtained after the last performed iteration of the topology optimization solver. The second input is the last performed update (gradient) of the densities $\delta X = X_n - X_{n-1}$, i.e. the difference between the densities after the $n$-th and the $(n-1)$-th iterations. The output of the proposed model is a grayscale image of the same resolution as the input, which represents the predicted final structure. The architecture of our model mimics the hourglass shape common in image segmentation. The proposed model has an encoder network and a corresponding decoder network, followed by a final pixel-wise classification layer. The architecture is illustrated in Figure \ref{fig:nn_arch}. The encoder network consists of 6 convolutional layers. Each layer has kernels of size $3 \times 3$ and is followed by a ReLU nonlinearity. The first two layers have 16 convolutional kernels. This block is followed by max pooling over windows of size $2 \times 2$. The next two layers have 32 kernels and are also followed by a MaxPooling layer. The last block consists of 2 layers with 64 kernels each. The decoder network copies the architecture of the encoder part and reverses it. The MaxPooling layers are replaced with Upsampling layers followed by concatenation with the features from the corresponding low-level layer, as is done in U-Net \cite{RonnebergerU-Net:Segmentation}.
The pooling operation makes the subsequent network invariant to small translations of the input. The concatenation of features from different layers allows one to benefit from both the raw low-level representation and the strongly encoded parametrization from the higher levels. The decoder is followed by a Convolutional layer with 1 kernel and a sigmoid activation function. We included 2 Dropout layers \cite{dropout} as regularization for our network. The width and height of the input image can vary; however, they must be divisible by 4 in order to guarantee the coherence of the tensor shapes in the computational graph. The proposed neural network has just 192,113 parameters. \subsection{Dataset} \label{sect:dataset} To train the above-described model, we need example solutions of System \ref{nn_topopt:mechanical_problem}. Collecting a large dataset of real-life examples is difficult or even impossible. Therefore, we use synthetic data generated with Topy \cite{topy} --- an open source solver for 2D and 3D topology optimization, based on the SIMP approach. To generate the dataset we sampled pseudo-random problem formulations and performed 100 iterations of the standard SIMP method. Each problem is defined by its constraints and loads. The sampling strategy is the following: \begin{itemize} \item The number of nodes with fixed $x$ and $y$ translations and the number of loads are sampled from the Poisson distribution: \begin{gather*} N_{x} \sim P(\lambda = 2),\\ N_{y}, N_{\text{L}} \sim P(\lambda = 1) \end{gather*} \item The nodes for each of the above-described constraints are sampled from a distribution defined on the grid. The probability of choosing a boundary node is 100 times higher than that of an inner node. \item The load values are chosen as $-1$.
\item The volume fraction is sampled from the Normal distribution $f_0 \sim \mathcal{N}(\mu = 0.5, \sigma=0.1)$. \end{itemize} The obtained dataset\footnote{\label{note_dataset}The dataset and the related code are available at \url{https://github.com/ISosnovik/top}} has 10,000 objects. Each object is a tensor of shape $100 \times 40 \times 40$: 100 iterations of the optimization process for a problem defined on a regular $40 \times 40$ grid. \subsection{Training} We used the dataset described in Section \ref{sect:dataset} to train our model. During the training process we ``stopped'' the SIMP solver after $k$ iterations and used the obtained design variables as an input for our model. The input images were augmented with transformations from the group D4: horizontal and vertical flips and rotation by 90 degrees. $k$ was sampled from a certain distribution $\mathcal{F}$; the Poisson distribution $P(\lambda)$ and the discrete uniform distribution $U[1, 100]$ are of interest to us. For training the network we used an objective function of the following form: \begin{equation}\label{nn_topopt:loss} \begin{split} \mathcal{L} = \mathcal{L}_{\text{conf}} (X_{\text{true}}, X_{\text{pred}}) + \beta \mathcal{L}_{\text{vol}} (X_{\text{true}}, X_{\text{pred}}) \end{split} \end{equation} where the confidence loss is a binary cross-entropy: \begin{equation}\label{nn_topopt:loss_conf} \mathcal{L}_{\text{conf}} (X_{\text{true}}, X_{\text{pred}}) = - \frac{1}{NM} \sum \limits_{i = 1}^{N} \sum \limits_{j = 1}^{M} \Big[ X_{\text{true}}^{ij} \log ( X_{\text{pred}}^{ij}) + ( 1 - X_{\text{true}}^{ij}) \log ( 1 - X_{\text{pred}}^{ij}) \Big] \end{equation} where $N \times M$ is the resolution of the image.
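Both summands of the objective in Equation (\ref{nn_topopt:loss}) are straightforward to implement; the sketch below is a NumPy rendering of the confidence term of Equation (\ref{nn_topopt:loss_conf}) together with the volume term of Equation (\ref{nn_topopt:loss_const}). The function name, the default $\beta$, and the clipping constant are our illustrative choices, not taken from the released code:

```python
import numpy as np

def topopt_loss(x_true, x_pred, beta=1.0, eps=1e-7):
    """Binary cross-entropy (confidence) loss plus the squared
    difference of mean densities (volume-fraction) penalty."""
    x_pred = np.clip(x_pred, eps, 1.0 - eps)        # avoid log(0)
    l_conf = -np.mean(x_true * np.log(x_pred)
                      + (1.0 - x_true) * np.log(1.0 - x_pred))
    l_vol = (x_pred.mean() - x_true.mean()) ** 2    # (mean pred - mean true)^2
    return l_conf + beta * l_vol
```

For a perfect binary prediction the loss is near zero, while a uniform prediction of 0.5 on a balanced target gives $\mathcal{L} = \ln 2$.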
The second summand in Equation (\ref{nn_topopt:loss}) represents the volume fraction constraint: \begin{equation}\label{nn_topopt:loss_const} \mathcal{L}_{\text{vol}} (X_{\text{true}}, X_{\text{pred}}) = (\bar{X}_{\text{pred}} - \bar{X}_{\text{true}})^2 \end{equation} We used the ADAM \cite{KingmaADAM:OPTIMIZATION} optimizer with default parameters. We halved the learning rate once during the training process. All code is written in Python\footnote{\label{note_code}The implementation is available at \url{https://github.com/ISosnovik/nn4topopt}}. For neural networks, we used Keras \cite{chollet2015keras} with the TensorFlow \cite{tensorflow2015-whitepaper} backend. An NVIDIA Tesla K80 was used for deep learning computations. The training of a neural network from scratch took about 80-90 minutes. \section{Related work} \label{sec:related} To our knowledge, the current research is the first to utilize a deep learning approach for the topology optimization problem. It is inspired by the recent successful applications of deep learning to problems in computational physics. Greff et al. \cite{Greff2016} used a fully-connected neural network as a mapping function from the nano-material configuration and the input voltage to the output current. The adaptation of a restricted Boltzmann machine for solving the Quantum Many-Body Problem was demonstrated in \cite{CarleoSolvingNetworks}. Mills et al. \cite{MillsK.SpannerM.DeepEquation} used the machinery of deep learning to learn the mapping between potential and energy, bypassing the need to numerically solve the Schr\"{o}dinger equation and the need for computing wave functions. Tompson et al. \cite{Tompson2016AcceleratingNetworks} and Um et al. \cite{Um2017LiquidNetworks} accelerated the modeling of liquids by the application of neural networks.
The paper \cite{SmithANI-1:Cost} demonstrates how a deep neural network trained on quantum mechanical density functional theory calculations can learn an accurate and transferable potential for organic molecules. The cutting-edge research \cite{Paganini2017CaloGAN:Networks} shows how generative adversarial networks can be used for simulating 3D high-energy particle showers in multi-layer electromagnetic calorimeters. \section{Results} \label{sec:results} \begin{figure} \centering \includegraphics[width=0.9\textwidth] {./img/pred_2.png} \caption[Results of the application of the proposed model to the prediction of the final structure.]{Top: SIMP is stopped after 8 iterations, binary accuracy 0.96, mean IoU 0.92; Bottom: the solver is stopped after 5 iterations, binary accuracy 0.98, mean IoU 0.95.} \label{fig:pred_0} \end{figure} The goal of our experiments is to demonstrate that the proposed model and the overall pipeline are useful for solving topology optimization problems. We compare the performance of our approach with the standard SIMP solver \cite{topy} in terms of the accuracy of the obtained structure and the average time consumption. We report two metrics common in image segmentation evaluation: Binary Accuracy and Intersection over Union (IoU). Let $n_l,\;l=0,1$ be the total number of pixels of class $l$, and let $\omega_{tp}, \; t, p = 0,1$ be the total number of pixels of class $t$ predicted to belong to class $p$. Therefore: \begin{equation}\label{nn_topopt:metrics} \text{Bin. Acc.} = \frac{\omega_{00} + \omega_{11}}{n_0 + n_1}; \quad \text{IoU} = \frac{1}{2} \Big[ \frac{\omega_{00}}{n_0 + \omega_{10}} + \frac{\omega_{11}}{n_1 + \omega_{01}} \Big] \end{equation} We examine 4 neural networks with the same architecture but trained with different policies. The number of iterations after which we ``stopped'' the SIMP algorithm was sampled from different distributions.
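Both metrics of Equation (\ref{nn_topopt:metrics}) follow directly from the confusion counts $\omega_{tp}$; a minimal NumPy sketch (the function name and the 0.5 binarization threshold are illustrative choices):

```python
import numpy as np

def segmentation_metrics(x_true, x_pred, thr=0.5):
    """Binary accuracy and mean IoU for a two-class segmentation.
    omega[t, p] counts pixels of true class t predicted as class p."""
    t = (np.asarray(x_true) > thr).astype(int)
    p = (np.asarray(x_pred) > thr).astype(int)
    omega = np.zeros((2, 2))
    for ti in (0, 1):
        for pi in (0, 1):
            omega[ti, pi] = np.sum((t == ti) & (p == pi))
    n0, n1 = np.sum(t == 0), np.sum(t == 1)
    acc = (omega[0, 0] + omega[1, 1]) / (n0 + n1)
    # n_l + omega_{1-l, l} is the size of the union of the two class-l sets
    iou = 0.5 * (omega[0, 0] / (n0 + omega[1, 0])
                 + omega[1, 1] / (n1 + omega[0, 1]))
    return acc, iou
```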
We trained one neural network with the discrete uniform distribution $U[1, 100]$, and another three models were trained with the Poisson distribution $P(\lambda)$ with $\lambda=5, 10, 30$. \subsection{Accuracy and performance} We conducted several experiments to illustrate the results of applying the proposed pipeline and model to mechanical problems. Figure \ref{fig:pred_0} demonstrates that our neural network restores the final structure even when used after only 5 iterations. The output of the model is close to that of the SIMP algorithm: the overall topology of the structure is the same. Furthermore, the time consumption of the proposed method, in this case, is almost 20 times smaller. Neural networks trained with different policies produce close results: the models preserve the final structure up to some rare pixel-wise changes. However, the accuracy of these models depends on the number of initial iterations performed by the SIMP algorithm. Tables \ref{nn_topopt:table_acc_mech}, \ref{nn_topopt:table_iou_mech} summarize the results obtained in this series of experiments. The trained models demonstrate significantly more accurate results compared to thresholding applied after the same number of iterations of the SIMP method. Some models benefit when they are applied after 5-10 iterations, while others demonstrate better results in the middle or at the end of the process. The proposed pipeline can significantly accelerate the overall algorithm with minimal reduction in accuracy, especially when the CNN is used at the beginning of the optimization process. The neural network trained with the discrete uniform distribution does not demonstrate the highest binary accuracy and IoU compared to the other models until the latest iterations. However, this model outperforms the SIMP algorithm with thresholding throughout the optimization process.
\begin{table} \begin{center} \caption{Binary Accuracy of the proposed method and the standard one on the mechanical dataset.} \begin{tabular}{ |c|ccccccccc| } \hline &\multicolumn{9}{ |c| }{Iteration}\\ \hline Method & 5 & 10 & 15 & 20 & 30 & 40 & 50 & 60 & 80 \\ \hline Thresholding & 92.9 & 95.4 & 96.5 & 97.1 & 97.7 & 98.1 & 98.4 & 98.6 & 98.9\\ CNN $P(5)$ & \textbf{ 95.8} & 97.3 & 97.7 & 97.9 & 98.2 & 98.4 & 98.5 & 98.6 & 98.7\\ CNN $P(10)$ & 95.4 & \textbf{ 97.6} & \textbf{ 98.1} & 98.4 & 98.7 & 98.9 & 99.0 & 99.0 & 99.0\\ CNN $P(30)$ & 92.7 & 96.3 & 97.8 & \textbf{ 98.5} & \textbf{ 99.0} & \textbf{ 99.2} & \textbf{ 99.4} & \textbf{ 99.5} & 99.6\\ CNN $U[1, 100]$ & 94.7 & 96.8 & 97.7 & 98.2 & 98.7 & 99.0 & 99.3 & 99.4 & \textbf{ 99.6}\\ \hline \end{tabular} \label{nn_topopt:table_acc_mech} \end{center} \end{table} \begin{table} \begin{center} \caption{Intersection over Union (IoU) of the proposed method and the standard one on the mechanical dataset.} \begin{tabular}{ |c|ccccccccc| } \hline &\multicolumn{9}{ |c| }{Iteration}\\ \hline Method & 5 & 10 & 15 & 20 & 30 & 40 & 50 & 60 & 80 \\ \hline Thresholding & 86.8 & 91.2 & 93.3 & 94.3 & 95.6 & 96.3 & 96.8 & 97.3 & 97.9\\ CNN $P(5)$ & \textbf{ 92.0} & 94.7 & 95.4 & 96.0 & 96.5 & 96.9 & 97.1 & 97.3 & 97.4\\ CNN $P(10)$ & 91.1 & \textbf{ 95.3} & \textbf{ 96.4} & 96.9 & 97.4 & 97.8 & 98.0 & 98.0 & 98.1\\ CNN $P(30)$ & 86.4 & 92.9 & 95.7 & \textbf{ 97.0} & \textbf{ 98.1} & \textbf{ 98.5} & \textbf{ 98.8} & \textbf{ 99.0} & 99.2\\ CNN $U[1, 100]$ & 90.0 & 93.9 & 95.5 & 96.4 & 97.5 & 98.1 & 98.6 & 98.8 & \textbf{ 99.2}\\ \hline \end{tabular} \label{nn_topopt:table_iou_mech} \end{center} \end{table} \subsection{Transferability} This research is dedicated to the application of neural networks to the topology optimization of minimal compliance problems. Nevertheless, the proposed model does not rely on any prior knowledge of the nature of the problem. 
Despite the fact that we used a mechanical dataset during training, other types of problems from the topology optimization framework can be solved by using the proposed pipeline. To examine the generalization properties of our model, we generated a small dataset of heat conduction problems defined on a $40 \times 40$ regular grid. The exact solutions and the intermediate densities for the problems were obtained in exactly the same way as described in Section \ref{sec:learn_topopt}. The conducted experiments are summarized in Tables \ref{nn_topopt:table_acc_heat}, \ref{nn_topopt:table_iou_heat}. During the initial part of the optimization process, the results of the pre-trained CNNs are more accurate than those of thresholding. Our model approximates the mapping to the final structure precisely when the training dataset and the validation dataset are from the same distribution. However, it mimics the updates of the SIMP method during the initial iterations even when the CNN is applied to another dataset. Therefore, this pipeline could be useful for the fast prediction of the rough structure in various topology optimization problems. The neural network described in Section \ref{sec:learn_topopt} is fully-convolutional, i.e. it consists of Convolutional, Pooling, Upsampling and Dropout layers. The architecture itself does not place any constraints on the size of the input data. In this experiment, we checked the scaling properties of our method. The model we examined had been trained on the original dataset with square images of size $40 \times 40$. Figure \ref{fig:res_resolution} visualizes the result of applying the CNN to problems defined on grids with different resolutions. Here we can see that changes in the aspect ratio and reasonable changes in the resolution of the input data do not affect the accuracy of the model. The pre-trained neural network successfully reconstructs the final structure for a given problem.
Significant changes in the size of the input data require additional training of the model, because the typical size of common patterns changes as the resolution of an image increases. Nevertheless, the demonstrated cases did not require any tuning of the neural network and allowed the model to transfer from one resolution to another. \begin{table} \begin{center} \caption{Binary Accuracy of the proposed method and the standard one on the heat conduction dataset. Models were trained on the minimal compliance dataset.} \begin{tabular}{ |c|ccccccccc| } \hline &\multicolumn{9}{ |c| }{Iteration}\\ \hline Method & 5 & 10 & 15 & 20 & 30 & 40 & 50 & 60 & 80 \\ \hline Thresholding & 97.5 & 98.4 & 98.8 & 99.1 & 99.4 & \textbf{ 99.6} & \textbf{ 99.7} & \textbf{ 99.8} & \textbf{ 99.9}\\ CNN $P(5)$ & 98.1 & 98.7 & 99.0 & 99.2 & 99.4 & 99.5 & 99.6 & 99.7 & 99.7\\ CNN $P(10)$ & \textbf{ 98.1} & 98.8 & 99.0 & 99.2 & 99.4 & 99.5 & 99.6 & 99.7 & 99.8\\ CNN $P(30)$ & 97.3 & \textbf{ 99.0} & \textbf{ 99.2} & \textbf{ 99.4} & \textbf{ 99.5} & 99.6 & 99.7 & 99.7 & 99.8\\ CNN $U[1, 100]$ & 97.8 & 98.8 & 99.1 & 99.3 & 99.5 & 99.6 & 99.7 & 99.7 & 99.8\\ \hline \end{tabular} \label{nn_topopt:table_acc_heat} \end{center} \end{table} \begin{table} \begin{center} \caption{IoU of the proposed method and the standard one on the heat conduction dataset.
Models were trained on the minimal compliance dataset.} \begin{tabular}{ |c|ccccccccc| } \hline &\multicolumn{9}{ |c| }{Iteration}\\ \hline Method & 5 & 10 & 15 & 20 & 30 & 40 & 50 & 60 & 80 \\ \hline Thresholding & 95.1 & 96.8 & 97.6 & 98.1 & 98.8 & \textbf{ 99.2} & \textbf{ 99.4} & \textbf{ 99.6} & \textbf{ 99.9}\\ CNN $P(5)$ & 96.2 & 97.5 & 98.0 & 98.4 & 98.8 & 99.0 & 99.2 & 99.3 & 99.5\\ CNN $P(10)$ & \textbf{ 96.3} & 97.6 & 98.1 & 98.4 & 98.9 & 99.1 & 99.3 & 99.4 & 99.5\\ CNN $P(30)$ & 94.8 & \textbf{ 98.0} & \textbf{ 98.5} & \textbf{ 98.7} & \textbf{ 99.0} & 99.2 & 99.3 & 99.4 & 99.5\\ CNN $U[1, 100]$ & 95.7 & 97.7 & 98.2 & 98.6 & 98.9 & 99.2 & 99.3 & 99.4 & 99.6\\ \hline \end{tabular} \label{nn_topopt:table_iou_heat} \end{center} \end{table} \begin{figure} \includegraphics[width=1.0\textwidth] {./img/res_resolution.png} \caption[]{Results of the application of the proposed CNN to the problems defined on grids with resolution and aspect ratio different from that of the training dataset.} \label{fig:res_resolution} \end{figure} \section{Topology Optimization Problem} \label{sec:topopt} Current research is devoted to topology optimization of mechanical structures. Consider a design domain $\Omega : \{\omega_j\}_{j=1}^N$, filled with a linear isotropic elastic material and discretized with square finite elements. The material distribution is described by the binary density variable $x_j$ that represents either absence (0) or presence (1) of the material at each point of the design domain. 
Therefore, the problem that we seek to solve can be written in mathematical form as: \begin{equation}\label{nn_topopt:mechanical_problem} \begin{split} \begin{cases} \min\limits_{\bm{x}} & c(\bm{u}(\bm{x}), \bm{x}) = \sum\limits_{j=1}^{N} E_j(x_j)\bm{u}^T_j \bm{k_0} \bm{u}_j\\ \hfill \text{s.t.} & V(\bm{x}) / V_0 = f_0 \\ \hfill & \bm{KU} = \bm{F} \\ \hfill & x_j \in \{ 0; 1\}, \quad j = 1 \dots N \end{cases} \end{split} \end{equation} where $c$ is the compliance, $\bm{u_j}$ is the element displacement vector, $\bm{k_0}$ is the element stiffness matrix for an element with unit Young's modulus, $\bm{U}$ and $\bm{F}$ are the global displacement and force vectors, respectively, and $\bm{K}$ is the global stiffness matrix. $V(\bm{x})$ and $V_0$ are the material volume and the design domain volume, respectively, and $f_0$ is the prescribed volume fraction. The discrete nature of the problem makes it difficult to solve. Therefore, the last constraint in (\ref{nn_topopt:mechanical_problem}) is replaced with the following one: $x_j \in [ 0; 1], \; j = 1 \dots N$. The most common method for the topology optimization problem with continuous design variables is the so-called SIMP or power-law approach \cite{Bendsoe1989,Mlejnek1992}. This is a gradient-based iterative method with penalization of non-binary solutions, which is achieved by choosing Young\textquotesingle s modulus of a simple but very efficient form: \begin{equation}\label{nn_topopt:simp_young} E_j(x_j) = E_{\text{min}} + x^p_j (E_0 - E_{\text{min}}) \end{equation} The exact implementation of the SIMP algorithm is outside the scope of the current paper. The updating schemes, as well as different heuristics, can be found in the excellent papers \cite{bendsoe1995optimization,sigmund1997,bourdin2001,Svanberg2013,Groenwold2009}. The topology optimization code in Matlab is described in detail in \cite{topopt99lines,topopt88lines}, and a Python implementation of the SIMP algorithm is presented in \cite{topy}.
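In code, the interpolation of Equation (\ref{nn_topopt:simp_young}) is a one-liner. The defaults $E_0 = 1$, $E_{\text{min}} = 10^{-9}$ and $p = 3$ below are the commonly used choices (e.g. in the 88-line Matlab code), shown here only for illustration:

```python
import numpy as np

def simp_young(x, e0=1.0, e_min=1e-9, p=3.0):
    """Modified SIMP interpolation: E(x) = E_min + x^p (E_0 - E_min).
    Penalization p > 1 makes intermediate densities structurally
    inefficient, pushing the optimizer toward a 0/1 design; the small
    E_min keeps the stiffness matrix non-singular for void elements."""
    x = np.asarray(x, dtype=float)
    return e_min + x**p * (e0 - e_min)
```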
\begin{figure} \centering \includegraphics[width=0.5\linewidth] {./img/mbb_beam_statement.pdf} \caption{The design domain, boundary conditions, and external load for the optimization of a half MBB beam.} \label{nn_topopt:mbb_beam} \end{figure} \begin{figure} \centering \begin{subfigure}[b]{0.4\textwidth} \includegraphics[width=\textwidth] {./img/simp_iter_3.pdf} \caption{Iteration 3} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \includegraphics[width=\textwidth] {./img/simp_iter_13.pdf} \caption{Iteration 13} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \includegraphics[width=\textwidth] {./img/simp_iter_30.pdf} \caption{Iteration 30} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \includegraphics[width=\textwidth] {./img/simp_iter_80.pdf} \caption{Iteration 80} \end{subfigure} \caption{The process of optimization of a half MBB beam with the SIMP method. 120 $\times$ 40 mesh. Black --- 1, white --- 0} \label{nn_topopt:simp_iters} \end{figure} The standard half MBB beam problem is used to illustrate the process of topology optimization. The design domain, constraints, and loads are represented in Figure \ref{nn_topopt:mbb_beam}. The optimization of this problem is demonstrated in Figure \ref{nn_topopt:simp_iters}. During the initial iterations, a general redistribution of the material inside the design domain is performed. The rest of the optimization process amounts to a filtering of the pixels: the densities with intermediate values converge to binary values, while the silhouette of the obtained structure remains almost unchanged.
\section{Introduction} \label{sec.int} Binaries have long been thought to have a crucial impact on globular cluster dynamics and evolution (Hut et al. 1992), but only recently (especially with the use of {\it HST}) have large numbers of them been found in globular cluster cores where they are expected to act as the central energy source that drives cluster expansion. Discoveries of binary millisecond pulsars (e.g. Manchester et al. 1991) and multiple low-luminosity X-ray sources (Hertz \& Grindlay 1983) have recently been supplemented by discoveries of large numbers of eclipsing binaries in globulars (e.g. 47 Tuc; Edmonds et al. 1996 and Kaluzny et al. 1998) and a significant population of main sequence--main sequence binaries in NGC 6752 (Rubenstein and Bailyn 1997). Another recent breakthrough has been the discovery of cataclysmic variables (CVs) in the cores of globular clusters, using either dwarf nova (DN) outbursts in M5 (Oosterhoff 1941), 47 Tuc (Paresce \& De Marchi 1994) and NGC 6624 (Shara, Zurek \& Rich 1996), UV excess to recover an old nova in M80 (Shara and Drissen 1995) or narrow-band H$\alpha$ emission. Using the latter technique 3 CVs have been reported in NGC 6397 by Cool et al. (1995) and Grindlay et al. (1995; hereafter GC95), and a fourth CV candidate by Cool et al. (1998; hereafter CG98). Also, 2 probable CVs have been reported in NGC 6752 by Bailyn et al. (1996). These CVs appear to be the long-sought optical counterparts of the low-luminosity X-ray sources found in globular cluster cores. In particular, the 3 brightest optical emission line objects in NGC 6397 (GC95) are the probable counterparts of the 3 brightest X-ray sources found by Cool et al. (1993; see also Cool et al. 1995). Observations of CVs in clusters can be used for a variety of studies including: (1) CV formation and evolution in low-metallicity environments, (2) stellar interactions in high-density environments, and (3) cluster dynamical evolution. 
Probable formation mechanisms for globular cluster CVs include tidal capture and exchange collisions between main sequence (MS) stars and white dwarfs (WDs), complementing studies of the MS star - MS star interactions that produce blue stragglers. Since these formation mechanisms differ from those for field CVs, and the stellar environment is different, it would not be surprising to find systematic differences between globular cluster and field CVs. In particular, GC95 and Grindlay (1996) have suggested that the CVs in NGC 6397 might have a much higher percentage of magnetic WDs than field CVs. The dense, collapsed core of NGC 6397 is a prime region to study the effects of stellar interactions because its high central density makes interaction rates large and its relative proximity at 2.2 kpc makes it possible to probe the core with high spatial resolution (CG98). This paper presents new {\it HST}/FOS spectra of CV 1 (from GC95) and CV 4 (from CG98) in NGC 6397. By combining all available spectra for CVs 1--4 with the photometry of CG98 and comparing with field CVs we show that the CV disks are consistent with those of faint quiescent DNe, given their expected periods, but that their \ion{He}{2}\ \la4686 lines are unusually strong for DNe (such systems would probably have long recurrence times between outbursts). Instead, we argue that CVs 1--3 may be magnetic CVs (or perhaps old novae), and that CV 4 is either a low accretion rate DN or a magnetic CV. It is possible that these objects may even be quiescent LMXBs, although this is unlikely based on detailed comparisons with the X-ray and optical properties (Grindlay 1996, 1998). In any case we present good evidence that there {\it are} systematic differences between populations of globular cluster and field CVs. Along with the 3 previously known classes of UV bright stars in NGC 6397 (blue stragglers, CVs and WDs), CG98 have discovered another class of UV bright stars.
Three faint, hot stars have been found within only $\sim$16$''$ of the cluster center, all of them non-variable (unlike the flickering CVs). CG98 have argued that these non-flickering (NF) stars are unlikely to be CVs, ``normal'' CO WDs (recently evolved from single red giants), extended horizontal branch stars or field stars, but instead that they are good candidates for low-mass helium WDs. Helium WDs have masses $\lesssim$ 0.49$M_{\odot}$, and in the field are usually found in binary systems containing either another WD or a neutron star (Marsh, Dhillon \& Duck 1995, and Rappaport et al. 1995). These double degenerates are thought to form by Roche lobe overflow (and usually common envelope events) in primordial binaries containing red giants, if He ignition in the red giant core, a proto helium WD, is avoided (Iben, Tutukov and Yungelson 1997 discuss detailed formation scenarios). Several low-mass WDs have been found or inferred in open and globular clusters including the helium WD -- red giant binary S1040 in M67 (Landsman et al. 1997), and the ultra-short period X-ray binary systems 4U 1820-30 in NGC 6624 (Anderson et al. 1997) and Star S in NGC 6712 (Anderson et al. 1993). S1040 in M67 probably formed after a subgiant underwent Roche lobe overflow in a primordial binary (Landsman et al. 1997), but in denser globular clusters, primordial binary evolution may be less important than interactions involving subgiants or red giants. For example, red giant/WD or red giant/MS star direct collisions should cause a helium WD to be left behind in a binary system (Davies, Benz \& Hills 1991). Systems such as 4U 1820-30 and Star S probably result from either neutron star/red giant collisions (Verbunt 1987) or neutron star/MS star capture and delayed mass transfer (Bailyn \& Grindlay 1987). Here, we report the first spectrum of one of the NGC 6397 NFs.
The lack of emission lines in the spectrum provides extra evidence against the CV possibility and the log g value presented here argues against a CO WD or extreme horizontal branch identification. By comparing with published model atmospheres we determine log g and $T_{\scriptsize {\mbox{eff}}}$\ for the NF and we then compare these parameters (along with the luminosity) with WD evolutionary models to show that a low-mass helium WD is, indeed, a plausible explanation for the NF. We also present evidence that the NF spectrum is significantly Doppler shifted from the expected wavelength, suggesting that the NF is in a binary system with a massive dark companion. \section{Observations and Analysis} \label{sec.obs} \subsection{CV 1} \label{sec.cv1} Spectroscopic observations with {\it HST} were made of CV 1, the brightest CV of the 3 studied by GC95, on October 2nd, 1996. Ultraviolet and optical observations were obtained with the FOS/PRISM, giving a spectral coverage from 1850--8950\AA\ with variable dispersion and resolution decreasing from the blue to the red. Ultraviolet observations were obtained with the FOS/G160L grating, with a spectral coverage from 1150--2510\AA\ (at a spectral resolution of 9.2\AA), including a small contribution from second-order geocoronal Ly$\alpha$ near the red end. The data were originally reduced using the normal pipeline processing but were then recalibrated using updated flat-fields and inverse sensitivity files (removing some flat-field features at around the 5\% level). Figure \ref{fig.allcv1} summarizes all of the FOS data available for CV 1. The full G160L spectrum is shown and then the PRISM spectrum from 2510\AA\ red-ward, along with the red G570H spectrum (with a resolution of 4.5 \AA) from Cycle 4. No scaling was used for either the G160L or PRISM spectra, showing good self-consistency in the spectral calibration and the photometric states of (variable) CV 1 between the two separate observations. 
Despite the low spectral resolution of the PRISM in the blue, the H$\gamma$ and H$\delta$ emission lines are visible, along with some absorption lines from the secondary (for example MgII \la2800 and the Ca H/K doublet). Since NGC 6397 lies close to the galactic plane its reddening is significant and therefore Figure \ref{fig.allcv1} also shows a dereddened version of the smoothed PRISM spectrum (using $E(B-V)=0.17$ from Alcaino et al. 1997). Using the WFPC2 photometry from CG98 we estimated the UV and optical contribution from the secondary component in CV 1. We began by noting that CV 1 is almost on the MS in the $V$ vs $V-I$ CMD, which implies a bright secondary and relatively faint disk\footnote{Note that: (1) we use the term ``disk'' loosely for any hot, accretion related element of the binary system, and (2) in general the total CV light minus the secondary will have contributions from both the disk and WD, however the WD component is likely to be negligible, particularly in the $B$ and $V$ passbands.} for this system. Then, assuming that all of the flux in $I$ comes from the secondary we used the position of the MS in CG98 to estimate $B$ and $V$ for the secondary (showing the advantage of observing binaries in a cluster). We then chose the reddened Kurucz atmosphere (log Z = --2.0) which best matched the $BVI$ photometry for the secondary. Figure \ref{fig.allcv1} shows the Kurucz stellar atmosphere with a dotted line. A close--up of the G160L spectrum is shown in Figure \ref{fig.g160}. The dashed line shows a reddened blackbody fit to the UV spectrum (temperature = 12850 K, without removing the small contribution of the secondary) and the solid line shows dereddening of this fit using $E(B-V)=0.17$. The detection of Ly$\alpha$ in second order and a marginal (3.5$\sigma$) detection of \ion{He}{2}\ \la1640 are labeled. 
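For reference, dereddening a monochromatic flux is a multiplicative correction in magnitudes. The sketch below assumes the standard diffuse-ISM ratio $R_V = 3.1$; the per-wavelength extinction $A_\lambda$ must be taken from an adopted extinction curve and is passed in as an argument here (the code is illustrative, not the calibration actually used):

```python
import numpy as np

E_BV = 0.17        # E(B-V) toward NGC 6397 (Alcaino et al. 1997)
R_V = 3.1          # assumed ratio of total to selective extinction
A_V = R_V * E_BV   # extinction in magnitudes at V

def deredden(flux, a_lambda):
    """Remove a_lambda magnitudes of extinction from an observed flux:
    F_intrinsic = F_observed * 10^(0.4 * A_lambda)."""
    return np.asarray(flux) * 10 ** (0.4 * a_lambda)
```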
No other UV lines are detected, and we set 3$\sigma$ upper limits on the equivalent width (EW) of NV \la1246 (449\AA), SiIV/OIV \la1402 (109\AA) and CIV \la1553 (58\AA). The most striking feature of the G160L and PRISM spectra is the relatively low UV flux. We have compared the measured flux distribution of CV 1 with that of various subclasses of field CVs studied by Verbunt (1987) with IUE. This comparison has limitations, as pointed out by the referee, since Verbunt's sample may not include the full range of CV properties (including highly magnetic WD primaries and a possible lack of disks in some DQ Hers). However, the data set and analysis are homogeneous and cover most CV classes, with the notable exception of AM Her systems (see below). Verbunt (1987) determined the fluxes of CVs in 80\AA-wide bins centered on 1460, 1800, 2140 and 2880 \AA. We first normalized each (quiescent) system in Verbunt's study to have the same flux at 1460\AA\ and then averaged the fluxes at other wavelengths over each class. We then predicted, for each CV class, the average UV fluxes expected for $V$=18.27, the magnitude of CV 1 ($V$ magnitudes are also given in Verbunt 1987). We found reddened fluxes of $F_{1460} = 4.3 \times 10^{-16}$ergs cm$^{-2}$s$^{-1}$\AA$^{-1}$ (DQ Hers), $4.0 \times 10^{-16}$ (U Gems) and $4.5 \times 10^{-16}$ (Z Cams), and even brighter values for other CV classes (for example $F_{1460} = 1.2 \times 10^{-15}$ for old novae). These fluxes are all over ten times brighter than the observed (reddened) $F_{1460}$ for CV 1 (see Figures \ref{fig.allcv1} and \ref{fig.g160}). However, scaling by the $V$ magnitude of the system has limited usefulness because the secondary is more massive and brighter than those of most field CVs. A better comparison is to calculate relative fluxes in the UV for CV 1, where the contribution from the secondary is less important.
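The scaling step described above, normalizing each system to a common $F_{1460}$ and then predicting fluxes at CV 1's magnitude, amounts to simple magnitude arithmetic. Here is a sketch with made-up fluxes standing in for Verbunt's measurements:

```python
import numpy as np

def scale_flux_to_v(flux, v_catalog, v_target=18.27):
    """Scale a flux measured for a star of magnitude v_catalog to the
    level expected at v_target, using f proportional to 10^(-0.4 m)."""
    return flux * 10.0 ** (-0.4 * (v_target - v_catalog))

# Hypothetical quiescent systems of one CV class: (F_1460, F_2880, V).
# These numbers are illustrative, not Verbunt's measurements.
systems = [
    (3.0e-14, 1.0e-14, 14.0),
    (1.5e-14, 0.6e-14, 15.0),
]

# Normalize each system to a common F_1460, then average over the class.
normalized = np.array([[1.0, f2880 / f1460] for f1460, f2880, _ in systems])
class_mean = normalized.mean(axis=0)   # mean relative fluxes at 1460, 2880 A

# Predict the absolute flux level expected at V = 18.27, the magnitude
# of CV 1, from one reference system in the class.
f1460_ref, _, v_ref = systems[0]
predicted_f1460 = scale_flux_to_v(f1460_ref, v_ref)
```

The $10^{-0.4\,\Delta V}$ factor is the standard magnitude-to-flux conversion; a 2.5 mag difference corresponds to exactly a factor of ten in flux.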
The dereddened UV fluxes for CV 1 are shown in Figure \ref{fig.uv} along with fluxes for various CV classes (the error bars combine uncertainties in the continuum level estimation and the reddening). Not surprisingly, the reddest flux distributions are found for quiescent dwarf novae (DNe), characterized by low accretion rates, and DQ Her systems, with inner disks truncated by the magnetic field of the WD. Clearly, CV 1 has a much redder UV flux distribution than the field CVs in Verbunt's sample. Figure \ref{fig.uv} also plots the flux distribution for the CV 1 disk after subtracting off the secondary and shows that the disk is marginally redder, given the large errors, than all classes of field CV (without the secondaries subtracted). \subsection{CV 4} \label{sec.cv4} We have obtained an optical spectrum of a fourth CV candidate discovered near the center of NGC 6397 (CG98). Figure \ref{fig.cv4} shows the G570H spectrum of this star, after correcting for diffuse light (from a combination of extended PSF halos from bright stars such as giants and diffuse light from faint cluster stars). For comparison, we also show an average of the spectra of CVs 1--3 from GC95, where the continua of CVs 2 and 3 were normalized to that of CV 1. Strong Balmer emission lines are present in the spectrum of the new CV candidate along with several HeI lines and \ion{He}{2}\ \la4686. The spectrum confirms that this star is a CV, as suggested by its UV excess and flickering (CG98). The relative weakness of the \ion{He}{2}\ line is the most obvious difference between CV 4 and the average spectrum of CVs 1--3 (see a quantitative analysis below). The spectrum of CV 4 has the highest signal to noise (S/N) ratio of the 4 CV spectra, enabling easy detection of relatively weak lines such as HeI \la4713. The dotted line shows the estimated contribution of the secondary, using the procedure outlined above for CV 1. 
To improve the spectra, we applied boxcar smoothing over 7 channels (this improved the S/N ratio per independent channel by a factor of 2.6 at H$\beta$, but in simulations had a negligible effect on measured line parameters). We found that a Voigt profile gave optimal fits to the emission lines, after also experimenting with Gaussians and Lorentzians. Before deriving EWs (see Section \ref{sec.he2}) and integrated magnitudes from the spectrum of this faint star ($V$=20.81), we considered 2 possible additional sources of light in the 0.43$''$ aperture used: (1) possible light from individual neighboring stars and (2) the diffuse light contribution. By examining the high S/N WFPC2 images of CG98 we found that light from neighbors is negligible, since CV 4 has no measurable companions $< 0.93''$ away, but that diffuse light is an important factor for this star. To measure the spectral contribution of the diffuse light we used Kurucz (1993) stellar atmospheres to fit a single-temperature model to the diffuse light components in $B$, $V$ and $I$ derived by CG98. A 6,000 K Kurucz model gave an excellent fit and showed that relatively hot stars dominated the diffuse light at the position of CV 4. Around 45\% of the flux at 5500\AA\ comes from the diffuse light, showing the importance of the above correction for accurate EWs. By comparing the CV 4 spectrum with the Kurucz spectrum of the secondary (see Figure \ref{fig.cv4}) we see some evidence for a small residual red component. This component is too red to be caused by the CV disk (or WD), and is perhaps caused by earthshine (a red component also appears to be present, for data taken near the Earth limb, in the spectrum of the NF; see below). \subsection{\ion{He}{2}\ \la4686 line strength} \label{sec.he2} Line fluxes, especially the relative strength of \ion{He}{2}\ \la4686, are an important diagnostic for CVs.
Table 1 gives values of the wavelength, line flux, EW and FWHM (Gaussian and Lorentzian) of the H$\beta$, H$\alpha$, HeI \la5876 and \ion{He}{2}\ \la4686 emission lines for CVs 1--4 (1$\sigma$ errors are quoted). We have not undertaken a complete analysis for CVs 1--3, which requires taking account of light contaminants in the FOS aperture (CV 2, for example, has considerable contamination from a bright neighboring star). However, the line ratio between \ion{He}{2}\ \la4686 and H$\beta$\ will be much less sensitive to contamination errors than the EWs and so we derive these ratios from the uncorrected spectra, after taking into account blends from HeI \la4713. Since multiline fitting in IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.}/SPLOT was unable to separate the heavily blended \ion{He}{2}\ \la4686 and HeI \la4713 lines in CVs 1--3, we used the relative strength of HeI \la4713 compared to HeI \la5876 for CV 4 (where the blend from the weak \ion{He}{2}\ \la4686 line was small; see Figure \ref{fig.cv4}) to remove the HeI \la4713 line from these systems. The \ion{He}{2}\ \la4686/H$\beta$\ line ratio for CV 4 is only 0.07 $\pm$ 0.01, compared to 0.32 $\pm$ 0.04, 0.34 $\pm$ 0.05 and 0.25 $\pm$ 0.03 for CVs 1--3 respectively (these more accurate determinations replace those given in GC95). Two other notable differences are that the lines of CV 4 are narrower (smaller FWHMs) than those of CVs 1--3 and that the EWs of most of the CV 4 lines are greater than those of CVs 1--3. This latter result is part of a very clear trend that the EWs increase going from CV 1 to CV 4, mainly because the secondaries (which dominate the optical flux) get fainter, without significant changes in the line fluxes.
For example, comparing CV 4 with CV 1, the ratio of EWs of H$\beta$\ is 8.8, with a factor of 0.08 difference in the continuum levels and only a factor of 0.7 difference in line fluxes. To compare the measured \ion{He}{2}\ \la4686/H$\beta$\ line ratios with those of field CVs we used the emission line data of Williams (1983) and Echevarria (1988) for various CV classes (after confirming the CV classification using Ritter \& Kolb 1998). Because \ion{He}{2}\ \la4686 is often quite a weak line and is highly variable, only a subset of the CVs in Echevarria's sample include \ion{He}{2}\ \la4686, but even with this relatively small sample some clear trends emerge. The average \ion{He}{2}\ \la4686/H$\beta$\ line ratios for the non-magnetic CV classes are 0.16 $\pm$ 0.05 (17 DNe), 0.94 $\pm$ 0.50 (8 old novae) and 0.76 $\pm$ 0.51 (13 nova--likes, excluding magnetic systems). These values are probably overestimates (although we lack information about upper limits), especially for DNe where only 17 out of 39 systems in the above sample have measurable \ion{He}{2}\ \la4686 (the completeness for the other systems is 8/10 for old novae and 13/17 for nova--likes). For magnetic systems we used the classifications by Patterson (1994) and Ritter \& Kolb (1998) and the spectra of Williams (1983) to calculate an average \ion{He}{2}\ \la4686/H$\beta$\ line ratio of 0.57 $\pm$ 0.46 (measurable \ion{He}{2}\ \la4686 for 7/8 DQ Her systems) and 0.59 $\pm$ 0.21 (4/4 AM Her systems). The high accretion rate novae and nova--likes can have relatively strong \ion{He}{2}\ \la4686 (probably because of very hot inner disks), while the magnetic systems also usually have strong \ion{He}{2}\ \la4686, thought to be because the hot accretion ``curtains'' along the magnetic field lines of the WD are directly visible. Only the lower accretion rate DNe have relatively weak \ion{He}{2}\ \la4686 compared to H$\beta$.
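The EW comparison between CV 4 and CV 1 quoted above follows directly from the fact that an EW is a line-to-continuum flux ratio; using the two factors given in the text:

```python
# EW ~ F_line / F_continuum, so the ratio of EWs between two systems
# factors into a line-flux ratio divided by a continuum ratio.
line_flux_ratio = 0.7        # CV 4 relative to CV 1 (from the text)
continuum_ratio = 0.08       # CV 4 continuum relative to CV 1
ew_ratio = line_flux_ratio / continuum_ratio
print(f"EW(CV4)/EW(CV1) ~ {ew_ratio:.1f}")   # ~8.8, matching the measured ratio
```

This makes explicit why the EW trend from CV 1 to CV 4 is driven almost entirely by the fainter secondaries (the continuum) rather than by changes in line flux.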
\subsection{Disk brightness} \label{sec.disk} For comparison with field CVs we have also estimated the brightness of the disk in CVs 2--4, using the techniques described in Section \ref{sec.cv1} to subtract off the secondary. Table 2 shows the total absolute magnitudes of CVs 1--4, plus estimated absolute magnitudes and masses ($M_2$) of the secondaries (from Bergbusch and Vandenberg 1992), and absolute magnitudes of the disks ($M_V$ (disk)). Since Smith and Dhillon (1998) have shown that secondaries in CVs with orbital periods below $\sim$7 h (as is likely here; see discussion below) are typically not evolved, the mass estimates for main-sequence stars should be reasonable. Note that, using this technique, we found that the $V$ magnitude of the secondary in CV 1 was 0.02 mag {\it brighter} than the $V$ magnitude of the total system, clearly an impossibility. Variability is the likely explanation, and errors will also play a part; to investigate the sensitivity of these results to overestimating the brightness of the secondary (and other errors) we made the estimated $I$ magnitude of the secondary fainter by 0.1 mag and 0.2 mag and rederived the expected $M_V$ (disk) (see Table 2). We also show in Table 2 estimates of upper limits on the periods of CVs 1--4, using the relationship given in Warner (1995) between the mass of a Roche-lobe filling secondary and the CV period, namely $M_2 = 0.065\,P_{orb}^{5/4}$, with $M_2$ in solar masses and $P_{orb}$ in hours. Since this equation applies to Pop I stars and Pop II stars have smaller radii for a given mass than Pop I stars (implying a smaller period for Roche-lobe overflow), we have used the study by Stehle, Kolb \& Ritter (1997) to estimate the period correction required for Pop II stars ($<$ 1 h for all masses $<$ 0.9$M_{\odot}$). As a guide to the validity of this procedure for Pop II stars we note that the two CV candidates in NGC 6752 (Bailyn et al. 1996) have absolute $V$ and $I$ magnitudes only a few tenths of a magnitude different from CVs 2 and 3 in NGC 6397.
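The period upper limits above come from inverting the Warner (1995) relation $M_2 = 0.065\,P_{orb}^{5/4}$. A minimal sketch (the masses below are illustrative, not the Table 2 values, and no Pop II correction is applied):

```python
def roche_period_hours(m2_solar):
    """Invert Warner's (1995) Pop I relation M2 = 0.065 * P^(5/4)
    (M2 in solar masses, P in hours) to get the orbital period at which
    a main-sequence star of mass m2_solar fills its Roche lobe."""
    return (m2_solar / 0.065) ** 0.8

# Illustrative secondary masses (not the Table 2 values)
for m2 in (0.3, 0.5, 0.7):
    print(f"M2 = {m2} Msun -> P ~ {roche_period_hours(m2):.1f} h")
```

The Pop II correction from Stehle, Kolb \& Ritter (1997) would then subtract up to $\sim$1 h from each value, since Pop II secondaries are more compact at a given mass.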
The estimated periods of 4.4 h and 3.8 h for CVs 2 and 3 compare well with the observed periods of 5.1 h and 3.7 h for the NGC 6752 CV candidates. \subsection{Non Flickerer} The G570H spectrum of the NF is shown in Figure \ref{fig.allnf}. The blue continuum and broad H$\beta$\ line suggest that this star is a high gravity object (see below). The spectrum suffers from both diffuse light contamination and from having a much brighter ($\Delta$V = 3.5 mag) neighbor only 0.3$''$ distant (a star near the MS turnoff). With perfect pointing during the FOS observations we would simply be able to use the high S/N WFPC2 images from CG98 to determine the exact amount of turnoff star contamination. However, there are 0.08$''$ uncertainties in the pointing of {\it HST} which can potentially make a significant difference to the amount of light contamination. There is also a source of variable light, since the apparent flux of the star increases by about 15\% towards the end of an orbit (probably due to increased scattered light or earthshine as noted above). Since it is difficult to estimate the ``clean'', uncontaminated spectrum of this star we forced its continuum to equal that of a reddened Kurucz atmosphere constrained by the $B$, $V$ and $I$ measurements of CG98. A log Z = --2.0 Kurucz spectrum was normalized so that the equivalent $V$ magnitude agreed with Cool's $V$ magnitude, and the temperature was varied to give the best agreement at $B$ and $I$. The best fit spectrum had a temperature of 22,000 $\pm$ 7,000 K, where the error is 1$\sigma$. Use of this technique means that errors in the continuum determination are dominated by errors in the photometry only (normalization from a separate observation is valid because CG98 have shown that the star does not vary) without incurring larger (unknown) errors by attempting a poorly constrained independent measurement. Having determined the continuum level, the shape of the H$\beta$\ line for the NF must be determined. 
Since both the neighboring turnoff star and diffuse light contributions have their own spectral components, estimates of these aperture contaminants are required before the decontaminated H$\beta$\ line profile can be determined (the contribution of the neighboring turnoff star is most important because its relatively hot temperature implies a reasonably strong H$\beta$\ line). Therefore, we still made an approximate determination of the telescope pointing during the FOS observations. We compared the flux in the FOS spectrum with the flux derived from a 0.43$''$ test aperture centered over the NF in the $V$ image from CG98 (thus including all the aperture light, not just that from the NF). We first subtracted away from the original spectrum the earthshine component derived earlier ($V$=21.7), leaving only constant components. The equivalent $V$ magnitude of this ``constant'' FOS component is 19.44 mag, in excellent agreement with the $V$ magnitude derived from the WFPC2 image (19.45). Since the NF does not vary and the pointing of {\it HST} was effectively constant during the FOS observations (no drifts above the $\sim$0.001$''$ level), the contamination from the turnoff star in the FOS aperture must be approximately the same as in the test WFPC2 aperture. Using Cool's photometry we decomposed the FOS spectrum into its separate components of NF, turnoff star and diffuse light (all fitted by Kurucz atmospheres). The resulting residual is shown in Figure \ref{fig.allnf} and the close agreement with zero shows we made a reasonable, self--consistent determination of the various aperture components. Since the residual is so red, its contribution to the NF H$\beta$\ line profile should be negligible (the source of this residual may be incomplete removal of the earthshine component mentioned earlier). As noted above, the high temperature and the broad H$\beta$\ line appear consistent with the hypothesis that this star is a high gravity object such as a WD. 
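The earthshine subtraction above is ordinary magnitude-to-flux bookkeeping. A minimal sketch, assuming the quoted $V$ magnitudes and a common (arbitrary) zero point:

```python
import math

def mag_to_flux(m):
    """Relative flux from a magnitude (arbitrary zero point)."""
    return 10.0 ** (-0.4 * m)

def flux_to_mag(f):
    return -2.5 * math.log10(f)

# Constant aperture component (V = 19.44) plus the variable
# earthshine component (V = 21.7) seen near the end of an orbit.
f_total = mag_to_flux(19.44) + mag_to_flux(21.7)
print(f"combined V = {flux_to_mag(f_total):.2f}")

# Relative brightening contributed by the earthshine; roughly
# consistent with the ~15% orbit-to-orbit increase quoted in the text.
boost = mag_to_flux(21.7) / mag_to_flux(19.44)
```

Fluxes add linearly while magnitudes are logarithmic, which is why the decomposition must be carried out in flux space before converting back to equivalent $V$ magnitudes.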
To test this hypothesis we used the pure hydrogen atmosphere models of Wesemael et al. (1980). Modern refinements of these models do not offer any significant advantages in the analysis of this low S/N ratio spectrum. The advantages of using these models are that they include detailed line profiles and cover a large range in log g (4 $<$ log g $<$ 9) and $T_{\scriptsize {\mbox{eff}}}$\ (20,000 K $<$ $T_{\scriptsize {\mbox{eff}}}$\ $<$ 150,000 K). Since the only obvious line present (H$\beta$\ in absorption) is broader than the emission lines for CVs 1--4, and because the S/N ratio in this line is low, we increased the length scale of the smoothing to 11 channels and applied the same smoothing factor to the models. We resampled the Wesemael line profiles to the resolution of the FOS data, applied smoothing and then experimented with different model profile fits (Lorentzians and Voigt profiles gave almost identical results). The ICFIT algorithm within IRAF/SPLOT was used to fit the continuum, and a Lorentzian was used to fit the line profile. The line depth and EW were found to be 0.55 $\pm$ 0.05 and 23.9 $\pm$ 2.4 \AA\ respectively, where the errors are a combination of random errors in the parameter measurements (determined by Monte Carlo experiments) and systematic errors in the continuum and line profile measurements. For simplicity we defined each model H$\beta$\ line with two parameters, the line depth and the EW. We generated a complete grid of these two parameters for all of the available Wesemael models, interpolated the log g grid from its original spacing of 1 down to a spacing of 0.25 and interpolated the temperature scale where necessary (for some log g values fewer models were available). Since the Wesemael models are for $T_{\scriptsize {\mbox{eff}}}$\ $>$ 20,000 K, we used the Kurucz models (with 4 $<$ log g $<$ 5 and $T_{\scriptsize {\mbox{eff}}}$\ $<$ 20,000 K) to extend our line depth/EW grid below 20,000 K down to 10,000 K.
(Encouragingly, excellent agreement was found between the Wesemael and Kurucz models in the overlap region of $T_{\scriptsize {\mbox{eff}}}$\ = 20,000 K, 4 $<$ log g $<$ 5; differences of at most 2\% were found in the two line parameters.) We extrapolated the Kurucz models to higher log g values (from log g = 5 to log g = 9), constrained by the functional form of the Wesemael models for the line depth and EW at 20,000 K, using the same smoothing and interpolation as before (see comments below about the validity of this extrapolation). Finally, the measured line depth and EW were used to search for a solution in log g and $T_{\scriptsize {\mbox{eff}}}$. The optimal solution was found to be $T_{\scriptsize {\mbox{eff}}}$\ = 17,500 $\pm$ 5,000 K and log g = 6.25 $\pm$ 1.0 (1$\sigma$ errors). The large errors in the gravity and temperature are caused by the limited information present in one noisy line, in particular the inability of the spectrum to trace possible narrow line cores. Figure \ref{fig.contnf} shows the total $\chi^2$ for the above solution as a function of $T_{\scriptsize {\mbox{eff}}}$\ and log g. The optimal solution with $\chi^2 = 0.08$ is marked (``L''). The first, second and fourth contour levels correspond roughly to 1$\sigma$, 2$\sigma$ and 3$\sigma$ respectively. Figure \ref{fig.contnf} also shows the regions where the Wesemael and Kurucz models have been used, and where they were extrapolated. Although extrapolation may incur extra uncertainties, Figure \ref{fig.contnf} shows that a Wesemael model having $T_{\scriptsize {\mbox{eff}}}$\ = 20,000 and log g = 6.5 (with $\chi^2 = 0.8$) is close to an optimal solution, without requiring any extrapolation. The Kurucz models are useful because they show that a low gravity solution is probably not consistent with the data (besides independently checking the low temperature, low gravity Wesemael models). 
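The grid search described above can be sketched as follows. The `model_line` function here is a toy stand-in for the tabulated model predictions; the real grid comes from the interpolated Wesemael and Kurucz profiles:

```python
import numpy as np

# Measured H-beta line parameters for the NF (from the text)
depth_obs, depth_err = 0.55, 0.05
ew_obs, ew_err = 23.9, 2.4          # EW in Angstroms

# Model grid in (Teff, log g), with log g interpolated to 0.25 spacing
teff_grid = np.arange(10000.0, 40000.0, 2500.0)
logg_grid = np.arange(4.0, 9.25, 0.25)

def model_line(teff, logg):
    """Toy stand-in for the tabulated model line depth and EW."""
    depth = 0.3 + 0.04 * logg + 5e-6 * (25000.0 - teff)
    ew = 10.0 + 2.5 * logg - 4e-4 * abs(teff - 17500.0)
    return depth, ew

# Chi-square of each grid point against the two measured parameters
chi2 = np.empty((teff_grid.size, logg_grid.size))
for i, teff in enumerate(teff_grid):
    for j, logg in enumerate(logg_grid):
        d, ew = model_line(teff, logg)
        chi2[i, j] = ((d - depth_obs) / depth_err) ** 2 \
                   + ((ew - ew_obs) / ew_err) ** 2

i_best, j_best = np.unravel_index(np.argmin(chi2), chi2.shape)
best = (teff_grid[i_best], logg_grid[j_best])
```

With only two measured quantities constraining two parameters, the chi-square surface is shallow along a correlated ridge, which is exactly why the quoted errors on $T_{\scriptsize {\mbox{eff}}}$\ and log g are so large and why only high-temperature/high-gravity or low-temperature/low-gravity alternatives survive.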
Since the errors for log g and $T_{\scriptsize {\mbox{eff}}}$\ from the H$\beta$\ line measurement are considerable, we briefly discuss extra constraints on these two parameters. First, we note from Figure \ref{fig.contnf} that relative to the optimal solution (``L'') only high temperature/high gravity or low temperature/low gravity solutions are allowed. The high gravity solution with log g = 7.25 seems ruled out by the photometry of Cool, Sosin \& King (1997), since the NFs clearly have lower gravities than log g = 7, the low gravity limit of the models used. The low gravity solution with log g = 5.25 and $T_{\scriptsize {\mbox{eff}}}$\ = 12,500 K has a temperature that is probably inconsistent with the determination from the photometry of CG98 (22,000 $\pm$ 7,000 K). To determine the wavelength of the H$\beta$\ line we adopted a two-step procedure: (1) we used the Lorentzian fit to the H$\beta$\ line as a first-order solution and then (2) used a spectral model (template) with fixed continuum, line depth and EW [determined in (1)] but with variable wavelength to refine the initial estimate. Using template shifts of up to 10\AA\ in 0.1\AA\ steps we selected the shift which minimized the difference between the template and measured spectrum. This procedure is formally similar to a cross correlation, but it gives sub-pixel resolution without polynomial fitting and allows us to easily weight the first-order solution to optimize sensitivity to Doppler shifts. Using this procedure the H$\beta$\ line was determined to have a wavelength of 4865.8\AA, noticeably redder than the laboratory value of 4861.3\AA\ [step (1) alone gave 4864.5\AA]. The random error given by IRAF/SPLOT is only 0.4\AA, so we investigated the possibility of systematic effects. According to the {\it HST} Data Handbook, the overall 1$\sigma$ random uncertainty is 0.7\AA\ (or 43 km s$^{-1}$) for the accuracy with which the wavelength scale is known in an individual FOS spectrum. 
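The template-shift search in step (2) can be sketched as below. The spectrum here is synthetic, with a known shift injected into a Lorentzian absorption line, and the conversion to a velocity uses the first-order Doppler formula $v = c\,\Delta\lambda/\lambda$:

```python
import numpy as np

C_KMS = 299792.458            # speed of light, km/s
LAB_HBETA = 4861.3            # laboratory H-beta wavelength, Angstroms

def best_shift(wave, flux, template_flux, max_shift=10.0, step=0.1):
    """Slide a fixed template in wavelength and return the shift (A)
    minimizing the summed squared residuals against the spectrum."""
    shifts = np.arange(-max_shift, max_shift + step, step)
    resid = [np.sum((flux - np.interp(wave, wave + s, template_flux)) ** 2)
             for s in shifts]
    return shifts[int(np.argmin(resid))]

# Synthetic check: a Lorentzian absorption template, red-shifted by a
# known amount (the depth and width are illustrative)
wave = np.linspace(4700.0, 5000.0, 3001)
template = 1.0 - 0.55 / (1.0 + ((wave - LAB_HBETA) / 12.0) ** 2)
true_shift = 4.5
spectrum = 1.0 - 0.55 / (1.0 + ((wave - LAB_HBETA - true_shift) / 12.0) ** 2)

shift = best_shift(wave, spectrum, template)
velocity = C_KMS * shift / LAB_HBETA   # Doppler shift in km/s
```

With a 0.1\AA\ step the recovered shift lands on the grid point nearest the injected value, illustrating how the template method resolves sub-pixel displacements without polynomial fitting.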
However, the possibility of filter grating wheel (FGW) displacements means that a worst--case disagreement in wavelength of over 4\AA\ is possible unless we have independent confirmation of the wavelength scale (when the wavelength scale is fixed the overall 1$\sigma$ random uncertainty falls to 0.5\AA\ or 31 km s$^{-1}$). Our only independent constraints on the wavelength scale are the G570H spectra of CV 4 (obtained over 3 orbits), since no movement of the FGW was made between the observations of the NF and CV 4. The wavelength of the H$\beta$\ line for CV 4 was determined to be 4861.8\AA. The 0.5\AA\ shift from the laboratory value is likely to consist of the velocity of the emission region in the binary, the cluster radial velocity of 21 km s$^{-1}$\ (0.34\AA), orbital motion of the Earth around the Sun, and the motion of the telescope itself (the last 2 effects being negligible given the resolution), plus systematic effects because of the FGW position. While CV emission lines are hardly ideal radial velocity standards in general, we believe that CV 4 provides a useful wavelength reference for several reasons: (1) the measured wavelength for H$\beta$\ is only 0.15\AA\ red-ward of the laboratory value, taking the cluster radial velocity into account, (2) the CV 4 H$\beta$\ wavelength measurement appears very stable since the 3 sub-observations for the 3 separate orbits (separated in time by almost 2.5 hours, well over half the expected period of CV 4) give wavelengths of 4861.7\AA, 4861.8\AA, and 4862.0\AA, (3) the line is very symmetrical (unlike CVs 1 and 2) so that the wavelength measurement is unlikely to have been skewed, and (4) the wavelength of H$\alpha$ for CV 4 is 0.4\AA\ red-ward of the laboratory value, in excellent agreement with the 0.5\AA\ red--shift for H$\beta$. Also, two of the 3 strongest HeI lines give consistent Doppler shifts red-ward of the laboratory value: HeI 4921, shift = 0.35 $\pm$ 0.45\AA\ and HeI 5876, shift = 0.32 $\pm$ 0.21\AA. 
Only HeI 6678 gives a different shift, --0.72 $\pm$ 0.25\AA\ relative to the laboratory value, however this line is over 1800\AA\ red-ward of H$\beta$\ and therefore has less value as a velocity reference. Using the wavelength of H$\beta$\ for CV 4 as a reference, the Doppler shift of the NF was measured to be 247 $\pm$ 50 km s$^{-1}$. To derive the error we added together in quadrature the random error (25 km s$^{-1}$), the systematic instrumental error quoted above (31 km s$^{-1}$) and an estimate of other systematic errors, including the use of CV 4 as a velocity reference (0.5\AA\ or 31 km s$^{-1}$). A closeup of the H$\beta$\ line profile is shown in Figure \ref{fig.closenf}. Note, in particular, the vertical lines showing the difference in central wavelengths between the average NF and CV 4 H$\beta$\ lines. The significance of the apparent shift of H$\beta$\ for the NF is also clearly shown by comparing the two Lorentzians (the astrophysical significance of this 247 km s$^{-1}$\ shift will be discussed in Section \ref{sec.hewd}). The variable flux but constant wavelength of the H$\beta$\ line for CV 4 is clearly shown by the 3 emission profiles (the apparently constant velocity will be discussed in a future publication). \section{Discussion} \subsection{Cataclysmic variables} \label{sec.discv} One of the crucial advantages of studying cluster CVs, besides their known distance, is the opportunity to study CVs with different metallicities from those of field CVs (see la Dous, 1991 for examples of UV spectra of field CVs). Although the bluest portion of the G160L spectrum is noisy, the detection of \ion{He}{2}\ \la1640 coupled with the non-detection of CIV \la1550 is probably a direct reflection of the low metallicity for NGC 6397. The lack of obvious emission at MgII \la2800 may be another measure of the low cluster metallicity, but we defer detailed spectral modeling incorporating the cluster's low metallicity to a future publication. 
We now discuss the significance of the UV distribution of CV 1, the disk brightnesses of CVs 1--4 and their \ion{He}{2}\ line ratios. Regarding the UV flux distribution, one obvious explanation for the redness of the disk discussed in Section \ref{sec.cv1}, given our sample size of one (for G160L data), is an inclination effect. A second possible explanation is the relatively large contribution from the bright secondary. This explanation seems plausible in that: (1) our predicted period for CV 1 is 5.1 h, while the average period of the 8 DQ Hers in Verbunt's sample is only 3.8 h (also, Pop II secondaries tend to be brighter than Pop I secondaries at a given mass; Stehle, Kolb \& Ritter 1997), and (2) if we replaced CV 1 by any one of the other CVs the flux distribution would be much bluer. For example, while the estimated secondary for CV 2 is $\sim$1.3 mag fainter than the secondary for CV 1, the disk in CV 2 is {\it brighter} (in $U_{336}$) than the disk in CV 1. If we then ignore the contribution of the secondary (see Figure \ref{fig.uv}) a simple interpretation of the flux distribution for CV 1 compared to the field CVs is that it has either: (1) a relatively faint and/or cool disk because of a low accretion rate, or (2) a WD with a moderately strong magnetic field. As noted earlier, AM Her systems (with strong magnetic fields) are not included in Verbunt's sample. These systems, lacking disks, can be quite red; however, the likely periods of the NGC 6397 CVs are longer than those of most AM Her systems (see Ritter \& Kolb 1998). Also, the line emission of the NGC 6397 CVs differs from that of field AM Her systems (see below). To investigate (1) we compared $M_V$ (disk) and the CV periods from Table 2 with Figures 3.5 and 9.8 of Warner (1995), plots of $M_V$ (disk) versus orbital period for {\it non}-magnetic field CVs.
(The two principal error sources for field CVs are estimates of the secondary component and distance estimates, errors that are considerably reduced by studying cluster CVs). Clearly, the NGC 6397 CVs fall near the faint limit for CV disks when their expected period is considered. Among field CVs only faint DNe in quiescence have disks this faint, and these systems have much longer recurrence times for outbursts than brighter DNe (Warner 1987), possibly explaining the observed paucity of DN outbursts in globular clusters (Shara et al. 1996). Generally CVs with periods above the period gap have disks that are brighter than $M_V \sim 8$. Indeed, a large number of observational and theoretical studies have concluded that CVs with periods above the period gap generally have high accretion rates, with correspondingly bright disks while CVs with periods below the period gap have lower accretion rates with fainter disks (Patterson 1984, di Stefano \& Rappaport 1994 and Stehle, Kolb \& Ritter 1997, the latter two studies specifically for PopII CVs). Another way to emphasize the unusually faint disks of the NGC 6397 CVs is to compare globular cluster and open cluster CVs (minimizing uncertainties in $M_V$ (disk) estimates). There are now 6 probable CVs in globular clusters with well-measured $VI$ magnitudes {\it all} lying at most $\sim$0.2 magnitudes away from the MS in the $V$ vs $V-I$ CMD (4 in NGC 6397 and 2 in NGC 6752). We contrast this result with the 3 CVs detected in open clusters that have quiescent $V-I$ colors blue-ward of the MS by $\gtrsim 0.6$ mag (M67; Gilliland et al. 1991), $\sim$0.7 mag and $\sim$1.0 mag (both NGC 6791; Kaluzny et al. 1997). Since M67 and NGC 6791 are much less dense and dynamically evolved than NGC 6397, their CVs are expected to have evolved from primordial binary systems, just as with field CVs. 
Hence, the detection of a nova--like and Z Cam (relatively high accretion rate DN) system in NGC 6791 is not surprising, since in any sample of field CVs these are among the brightest, intrinsically, because of their high accretion rate (resulting in blue $V-I$ colors). The much lower metallicity of NGC 6397 compared to metal-rich field or open cluster CVs should, in general, mean that the NGC 6397 CVs have {\it higher} accretion rates (for given binary parameters), according to Stehle, Kolb \& Ritter (1997), implying that even brighter disks should be present. A possible explanation we have already suggested (GC95), is that these CVs are mostly magnetic systems with truncated and thus relatively faint disks. Our original suggestion was based on the relative strength of \ion{He}{2}\ \la4686 compared to H$\beta$\ for CVs 1--3. To summarise the results in Section \ref{sec.he2}, for the AM Her systems (with their strong magnetic fields), a strong \ion{He}{2}\ \la4686 line is a well known feature (Warner 1995). However, known DQ Hers show a large range in \ion{He}{2}\ \la4686/H$\beta$\ line ratios with values ranging from over one (V533 Her) to zero (AE Aqr). To discriminate between magnetic and non-magnetic systems (and eliminate high accretion systems like nova--likes), Silber (1992) has shown that line ratios of \ion{He}{2}\ \la4686/H$\beta$\ $>$ 0.4 and EW(H$\beta) >$ 20\AA\ are reasonable signatures of magnetic systems. By combining our knowledge of $M_V$ (disk) and the \ion{He}{2}\ line ratios of CVs 1--4, we can attempt identification of these systems. With its weak \ion{He}{2}\ line and faint disk, CV 4 is a reasonable candidate for a quiescent DN system. Since there is remarkable similarity between the \ion{He}{2}\ \la4686/H$\beta$\ line ratios for CVs 1--3 and also similarities in the Balmer line fluxes themselves (see Table 1), CVs 1--3 may have very similar properties, as originally suggested by GC95. 
They do not appear to be recent old novae or nova--likes because of their faint disks (with extra evidence from their \ion{He}{2}\ \la4686 line ratios), nor do they appear to be DNe because they have moderately strong \ion{He}{2}\ lines. The final option is magnetic systems. CVs 1--3 do not have \ion{He}{2}\ ratios as large as AM Her systems, but while their ratios are also weaker than an ``average'' DQ Her system, there is considerable scatter for the DQ Her systems, as pointed out above. For example, CVs 1--3 have \ion{He}{2}\ ratios stronger than 4 of the 8 DQ Her systems in the sample of Williams (1983), and CVs 1 and 2 only just fail the magnetic criteria of Silber (1992). To conclude, CVs 1--3 do not appear to be DNe, but they could be DQ Her type systems. Do DQ Her systems tend to have disks as faint as those found in CVs 1--4? Extra information about DQ Her disks compared to other CV classes comes from the continuum slopes in Williams (1983). An estimate of the disk contribution (and temperature) relative to the secondary comes from analyzing the continuum ratio between two different wavelengths (equivalent to a color). The two most convenient continuum points for both CVs 1--4 and the spectra of Williams (1983) are at H$\beta$\ and H$\alpha$. Figure \ref{fig.he2disk} shows plots of this continuum ratio versus the \ion{He}{2}/H$\beta$\ line ratio for several different classes of field CV (from Williams 1983, updated by Ritter \& Kolb 1998) along with these ratios for CVs 1--4. We examined the linear correlation between the continuum and line ratios for the individual CV classes shown, finding a correlation coefficient with absolute value $>$ 0.5 in 3 cases: DQ Her systems (correlation coefficient = 0.77), nova--likes (--0.69) and old novae (0.57). For DQ Hers, this correlation implies that systems with small \ion{He}{2}\ line ratios have continuum ratios of $\sim$1.0 (meaning that the secondary dominates unless the disk is very cool).
A notable element of Figure \ref{fig.he2disk} is that all of CVs 1--4 lie close to the best fit line for DQ Hers, and all of them have continuum ratios of $\sim$1.0. The dominance of the secondary for CVs 1--4 is therefore exactly what we expect for field DQ Her systems with weak \ion{He}{2}\ lines. Using estimates of $V$ (disk) and distances for DQ Her systems (from Patterson 1994) we have estimated $M_V$ (disk) for DQ Hers, neglecting reddening (the $\sim 50\%$ errors in the distances dominate the errors) and assuming that the systems are in their faint, low accretion state. We found a strong linear correlation (--0.83) between the \ion{He}{2}/H$\beta$\ line ratio and $M_V$ (disk), as shown in the upper panel of Figure \ref{fig.he2disk}, in the sense that higher \ion{He}{2}/H$\beta$\ implies a brighter disk (consistent with the bluer colors given above). We then used this correlation to {\it predict} $M_V$ (disk) using the measured \ion{He}{2}/H$\beta$\ line ratios for CVs 1--4 given in Section \ref{sec.cv4}. The results are shown in the final column of Table 2, and agree nicely with the $M_V$ (disk) values given in Table 2 for CVs 1--4, especially given the large uncertainties in $M_V$ for the field CVs. (We excluded GK Per from the sample, with its $\sim$2 day period, evolved companion and thus high accretion rate. Including GK Per gave a linear correlation of --0.59 between $M_V$ (disk) and \ion{He}{2}/H$\beta$\ and $M_V$ (disk) values $\sim$0.6 mag brighter.) These results are the best evidence that these cluster CVs are indeed largely DQ Her type systems. A possible alternative to the DQ Her hypothesis is that some of the 6397 CVs are old novae (possibly in deep hibernation between outbursts; see Shara et al. 1986). This explanation is consistent with the correlation noted above between the continuum ratios and \ion{He}{2}/H$\beta$\ line ratios for old novae, plus the proximity of the 6397 CVs to the best-fit line for old novae. 
Although the hibernation scenario is difficult to test (and yet to be confirmed), upcoming {\it HST} spectra of the probable old nova in M80 will provide a critical comparison with the spectra of the NGC 6397 CVs. It is also possible that some of the cluster CVs fall in both categories, since, for example, 3 of the DQ Hers in the sample from Williams (1983) are also old novae. \subsection{Helium White Dwarfs} \label{sec.hewd} The photometry of the NFs (CG98) and the spectroscopy presented above place important constraints on the possible properties and evolution of these systems, which appear to be helium WDs. From the spectroscopy we have already determined $T_{\scriptsize {\mbox{eff}}}$\ and log g and from the photometry we calculated the luminosity of the NF (log($L/L_{\odot})=-0.6$) using the bolometric corrections presented in Bergeron, Wesemael \& Beauchamp (1995). Armed with these three quantities we have examined evolutionary models of WDs to estimate masses and lifetimes for the NFs. We began by using the WD evolutionary models presented by Bergeron et al. (1995). Models with log g = 7 (the limit of Bergeron's models), $T_{\scriptsize {\mbox{eff}}}$\ = 17,500 K and with thick hydrogen layers have $M \approx 0.3 $$M_{\odot}$. Extrapolating to log g = 6.25, models should have $M \approx 0.2-0.25$$M_{\odot}$. Using the study by Webbink (1975) of the evolution of helium WDs in close binaries, we used the measured log($L/L_{\odot}$) and $T_{\scriptsize {\mbox{eff}}}$\ to derive a mass between 0.2 and 0.25 $M_{\odot}$\ for the NF, with cooling ages for these models between $\sim 2 \times 10^8$ yr (0.25$M_{\odot}$) and $\sim 5 \times 10^8$ yr (0.20$M_{\odot}$). Using log g, $T_{\scriptsize {\mbox{eff}}}$\ and log($L/L_{\odot}$), a similar mass is determined from the He WD study of Althaus \& Benvenuto (1997), although they do not include a hydrogen envelope. 
Since the only significant line present in the NF is H$\beta$, a hydrogen envelope must be present, and therefore we have made comparisons with the models of Benvenuto \& Althaus (1998) on the effects of hydrogen envelopes on the structure and evolution of low-- and intermediate--mass WDs. They computed the evolution of WDs with masses from 0.15 -- 0.5 $M_{\odot}$, while treating the mass of the hydrogen envelope as a free parameter within the range $10^{-8} \leq M_H/M \leq 4 \times 10^{-3}$. Because only a representative sample of the models are presented in Benvenuto \& Althaus (1998), L. Althaus has kindly performed a dedicated search for models consistent with our log g and $T_{\scriptsize {\mbox{eff}}}$. Two good solutions were found for a 0.24$M_{\odot}$\ model with $M_H/M = 1 \times 10^{-3}$ and a 0.235$M_{\odot}$\ model with $M_H/M = 5 \times 10^{-4}$, with both of these solutions having log($L/L_{\odot})=-0.52$, in excellent agreement with the measured luminosity. These solutions are close to the valid bright limit of the models of Benvenuto \& Althaus (1998), so the cooling age for these solutions of $\sim 7 \times 10^7$ yr is highly uncertain (and likely to be an underestimate) since Benvenuto \& Althaus (1998) did not attempt a detailed treatment of the formation of helium WDs in binary systems. A detailed treatment of the evolution of a 0.3 $M_{\odot}$\ WD in a binary system was given by Iben \& Tutukov (1986; hereafter IT86). Their model experiences two hydrogen shell flashes before cooling to $T_{\scriptsize {\mbox{eff}}}$\ $\sim$ 16,000 K at an age of $\sim 1 \times 10^8$ yr (a comparable age to that found by Benvenuto \& Althaus (1998), but smaller than Webbink's values). Using their cooling curve, IT86 construct a number--luminosity distribution function for helium WDs. This function has a similar slope to that for CO WDs, but at most magnitudes is about a factor of 4 smaller. 
One exception to this behavior is the region with $-1 \lesssim $log$(L/L_{\odot}) \lesssim -1.7$. At log$(L/L_{\odot}) \sim -1$ the predicted helium WD number-luminosity function shows a ``bump'' where it increases to be roughly equal to the CO WD function. It then drops quickly with decreasing luminosity by about an order of magnitude before returning to the average value at log$(L/L_{\odot}) \sim -1.7$. This behavior is caused by the very slow rate of decline in luminosity following the first hydrogen shell flash, when t = $10^7 - 10^8$ yr. Although IT86 warn that a complete theoretical distribution function requires superposition of contributions from WDs of many different masses (each experiencing a hydrogen shell flash at a different luminosity), this oscillatory behavior may cause an enhancement in the number of helium WDs at relatively high luminosity and a dearth for $\sim$1.5 mag below this. This could be consistent with the observed CMD distribution in CG98, perhaps explaining the lack of obvious low-mass WDs further down a cooling sequence, particularly since log$(L/L_{\odot}) \sim -1$ is not far from the measured luminosities for the two faintest helium WD candidates (eventually close double degenerate systems should merge after a few Gyr, forming a more massive WD). However, clearly more data from other clusters are needed to test this hypothesis. An obvious question remains: are the possible formation mechanisms listed in Section \ref{sec.int} consistent with the likely significant red--shift of the H$\beta$\ line? We reject the possibility of the red--shift being an ejection velocity, since at 247 km s$^{-1}$, the star would have moved 2.2 pc in only $10^4$ yr, and thus would be well outside the core. 
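The ejection-velocity argument above is a back-of-envelope distance estimate; a quick check (using only standard unit conversions plus the quoted $v = 247$ km s$^{-1}$ and $t = 10^4$ yr) reproduces a parsec-scale displacement, of the order of the quoted 2.2 pc.

```python
KM_PER_PC = 3.0857e13    # kilometers per parsec
SEC_PER_YR = 3.156e7     # seconds per year

v_km_s = 247.0           # red-shift interpreted as an ejection velocity
t_yr = 1.0e4             # assumed travel time

d_pc = v_km_s * t_yr * SEC_PER_YR / KM_PER_PC
print(round(d_pc, 1))    # a few pc, far outside the cluster core
```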
Gravitational red--shift is likely to contribute only a small red--shift for this low-mass object, since $K_R = 0.635 \times M/M_\odot \times R_\odot/R~$ km s$^{-1}$\ (Reid 1996), and using the mass and radius estimates from Benvenuto and Althaus' models, $K_R \approx$ 3 km s$^{-1}$. Instead, we argue that the red--shift may be a Doppler shift from the helium WD orbiting a massive but faint companion, probably a WD. This is consistent with the primary formation mechanism listed above. Also, a binary nature for this star would hardly be surprising for dynamical reasons alone, since the detection of 3 NFs near the center of NGC 6397 is good evidence of a binary origin for these stars (through mass segregation), and most known low-mass WDs are in binary systems. Finally, we note the possibility that these low-mass WDs may have neutron star companions, likely to be binary millisecond pulsars (although none have been found yet in NGC 6397) as found in helium WD/NS binary systems in the field. If we have measured an orbital Doppler shift its size is an important consideration. Of the 8 double--degenerate systems listed in Iben et al. (1997), all with a helium WD as the primary (the brighter component), only 2 have $K > 150$ km s$^{-1}$. Little is known about the masses of the secondaries in these systems, but models by Iben et al. (1997) show that $K \approx 250$ km s$^{-1}$\ should be quite common for high-inclination systems with a helium WD primary and a CO WD secondary (from Cool, Piotto \& King 1996 and CG98, NGC 6397 clearly has a large reservoir of $\sim 0.55$ $M_{\odot}$\ CO WDs, and it should also have many higher mass WDs). For more massive secondaries such as neutron stars, larger values of $K$ can be found, for example $K = 280$ km s$^{-1}$\ for the helium WD -- MSP binary J1012+5307 (van Kerkwijk, Bergeron \& Kulkarni 1996). 
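Reid's formula is linear in $M$ and $1/R$, so the estimate is immediate; in the sketch below the mass is taken from the helium WD models, while the radius $R = 0.05\,R_\odot$ is an assumed value (hypothetical, chosen to be consistent with the quoted $K_R \approx 3$ km s$^{-1}$, not taken from the paper).

```python
def grav_redshift_km_s(m_msun, r_rsun):
    """Gravitational red-shift K_R = 0.635 (M/Msun)(Rsun/R) km/s (Reid 1996)."""
    return 0.635 * m_msun / r_rsun

# M ~ 0.25 Msun from the helium WD models; R = 0.05 Rsun is an assumed value
print(round(grav_redshift_km_s(0.25, 0.05), 1))
```

Even doubling the assumed mass leaves $K_R$ far below the measured 247 km s$^{-1}$, which is why a Doppler origin is favored.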
Adopting a range of possible mass functions for a binary system containing the NF and an unknown companion, and assuming $K$ = 247 km s$^{-1}$\ and helium WD mass = 0.25 $M_{\odot}$, we have derived expected periods for the system as a function of orbital inclination ($i$). Although the NF observations were spread over 3 (96 minute) orbits, $\sim$42 min of spectra were taken on the middle orbit, but only $\sim$10 min at the end of the first orbit and $\sim$14 min at the start of the third. Therefore, we cannot rule out a $\sim$4-5 h period, and for $i < 60 \arcdeg$ we require a WD companion with mass $\gtrsim$ 0.7 $M_{\odot}$, slightly higher than the mass of WDs currently being produced in the cluster (low-mass MS stars are ruled out, consistent with the photometry of CG98). Since we have measured only a lower limit for $K$, we have also experimented with values that are 25\% higher than observed, maintaining the other assumptions. In this case a high mass WD (mass $\gtrsim$ 1.1 $M_{\odot}$) or neutron star companion is required (clearly longer observations may provide powerful constraints on the mass of the companion). To determine whether 3 systems with ages of $\sim 1 - 5 \times 10^{8}$ yr (IT86 and Webbink 1975) are likely to be present in NGC 6397, we used expected encounter rates between various combinations of MS stars, WDs, neutron stars and red giants (Davies \& Benz 1995) to make rough estimates of expected numbers of binaries or merger products. The calculations by Davies \& Benz (1995) are specifically for $\omega$ Cen and 47 Tuc -- we used their numbers for the denser cluster 47 Tuc and divided the interaction rates by 5 to account for the smaller mass of NGC 6397 (using cluster masses from Pryor \& Meylan 1993). 
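The period estimate follows from the standard mass-function relation $f(M) = (M_2 \sin i)^3/(M_1+M_2)^2 = P K^3/(2\pi G)$; a minimal sketch (the 0.7 $M_{\odot}$\ companion and $i = 60\arcdeg$ are the example values discussed above, not a fit) recovers a period in the $\sim$4-5 h range:

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg

def orbital_period_hours(m1_msun, m2_msun, incl_deg, k_m_s):
    """Period implied by the primary's velocity semi-amplitude K via the
    mass function f = (M2 sin i)^3 / (M1 + M2)^2 = P K^3 / (2 pi G)."""
    m1, m2 = m1_msun * M_SUN, m2_msun * M_SUN
    f = (m2 * math.sin(math.radians(incl_deg)))**3 / (m1 + m2)**2
    return 2 * math.pi * G * f / k_m_s**3 / 3600.0

# 0.25 Msun helium WD, 0.7 Msun WD companion, i = 60 deg, K = 247 km/s
print(round(orbital_period_hours(0.25, 0.7, 60.0, 247e3), 1))
```

Lowering the inclination (smaller $\sin i$) or raising $K$ by 25\% forces a larger companion mass for the same period, as stated above.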
Assuming a lifetime of 4 Gyr (Sandquist, Bolte, \& Hernquist 1997) for blue stragglers less than $\sim$1.5 mag brighter than the MS turnoff (where most of the observed blue stragglers are found) and assuming all MS star collisions result in mergers, we used the calculations of Davies \& Benz (1995) to estimate that 75 blue stragglers should currently be found in NGC 6397 (from both 2- and 3-body interactions), comparing favorably with the 54 blue straggler candidates found by Kaluzny (1997). An overestimate of total blue stragglers is not surprising because merged stars with total masses at or below the turnoff are, of course, not included in Kaluzny's sample. For CVs, we assumed that half the collisions between WDs and MS stars result in the formation of binaries (Davies \& Benz 1995), and used a conservative limit on the mass ratio of 1 for stable mass transfer (Davies \& Benz 1995). Then, assuming the average CV lifetime is 1.5 Gyr for CVs above the period gap (Kolb \& Stehle 1996), we expect 6 CVs from 2- and 3-body interactions, close to the observed number. Finally, assuming (1) ages of 0.1 -- 0.5 Gyr for the observed helium WDs, (2) that all of the red giant collisions with either neutron stars or WDs result in the formation of He WDs (see Davies, Benz \& Hills 1991), and (3) that 3-body interactions result in as many helium WDs as 2-body interactions, we predict 0.7 -- 3.5 bright helium WDs to currently be found in NGC 6397. The reasonable agreement between observed and estimated numbers of blue stragglers and CVs is comforting given the large uncertainties in the interaction rates used, the influence of possible {\it differences} in interaction rates between NGC 6397 and 47 Tuc (e.g. NGC 6397 is core-collapsed and 47 Tuc may or may not be), and the effect of the unknown binary content (Davies \& Benz 1995 assume 10\% binaries). 
Since the lifetimes for blue stragglers are known reasonably well and the number of observed systems is probably fairly complete, the possible differences listed above must either be small or cancel each other out. We also have reasonable agreement between the numbers of expected and observed helium WDs when using Webbink's cooling ages. Although our assumptions may appear somewhat generous, there may be other mechanisms for the formation of He WDs: for example, red giant -- MS star collisions, binary -- binary collisions and primordial binary evolution. The expected lifetimes of the helium WDs are among the largest sources of uncertainty in the above number estimates. The (unknown) thickness of the hydrogen envelope can make a significant difference to the lifetimes, especially if hydrogen burning occurs. For example, a model with $M = 0.3$$M_{\odot}$\ and $M_H/M = 2 \times 10^{-3}$ (the thickest envelope considered by Benvenuto \& Althaus (1998), with log g and $T_{\scriptsize {\mbox{eff}}}$\ consistent with our low-mass NF within the errors) has an age $\sim$50\% greater than the 0.3$M_{\odot}$\ model with $M_H/M = 1 \times 10^{-3}$. It has also been suggested (Alberts et al. 1996) that the cooling time--scales of very low-mass WDs (mass $<$ 0.25$M_{\odot}$) can be considerably underestimated by the traditional WD cooling curves of IT86 and others for higher mass WDs (mass $>$ 0.3$M_{\odot}$). Alberts et al. (1996) predict that hydrogen shell flashes do not occur on WDs with mass $<$ 0.2$M_{\odot}$, but that these WDs experience long--lasting phases of hydrogen burning which significantly prolong their lifetimes (however, this difference in behavior may be caused by differences in time resolution between the models of Alberts et al. and IT86). Finally, Sarna, Antipova \& Muslimov (1998) find much greater cooling ages for their 0.16$M_{\odot}$\ WD compared to IT86 because of differences in the formation mechanisms. 
While in IT86 the donor star fills its Roche lobe when it is on the red giant branch, forming a $\sim$0.3$M_{\odot}$\ helium WD with a relatively thin hydrogen envelope, the donor star in Sarna et al.'s calculations fills its Roche lobe while it is evolving through the Hertzsprung gap, resulting in a 0.16$M_{\odot}$\ WD with a much thicker hydrogen envelope. Since bright red giants have much shorter lifetimes than subgiants (and perhaps limited cross-sections, despite larger radii, because of low densities) it is possible that collisions involving subgiants or faint red giants are much more efficient at producing helium WDs than collisions involving brighter red giants. This selection effect would preferentially cause very low-mass ($\lesssim$0.25$M_{\odot}$) WDs to form (since the red giant core mass increases with increasing brightness), giving objects with longer cooling times according to Webbink (1975) and Sarna et al. (1998), and enhancing the number of helium WDs seen. \section{Conclusion} \label{sec.con} We summarize here the results for CVs 1--4: (1) a 4th CV candidate in NGC 6397 has been spectroscopically confirmed, (2) UV data for CV 1 implies that it has a red disk when compared with field CVs, (3) the photometry of CG98 combined with Kurucz spectra for the estimated secondaries provide strong evidence that CVs 1--4 all have faint disks and probably low accretion rates (consistent with faint quiescent DNe), (4) the \ion{He}{2}\ \la4686/H$\beta$\ line ratios (together with the faint disks) imply that CVs 1--3 may be DQ Her systems, (5) the correlations between the \ion{He}{2}/H$\beta$\ line ratios for CVs 1--4 and both (a) the continuum ratios between H$\beta$\ and H$\alpha$ and (b) $M_V$ (disk) provide extra evidence that the 6397 CVs are mainly DQ Her systems. This is consistent with the finding of Verbunt et al. (1997) that CVs 1--3 could be DQ Her systems based on their X-ray and optical fluxes. 
An alternative explanation is that some of the CVs are old novae in hibernating phases between nova eruptions. We conclude that there may be fundamental differences between populations of globular cluster CVs and field/open cluster CVs, perhaps caused by the different formation mechanism of tidal capture and exchange collisions or perhaps because of the different environment in globular clusters. One possible explanation is that interactions cause stars to rotate more quickly, resulting in stronger magnetic fields than in most field stars, as suggested by GC95. Alternatively, Vandenberg, Larson \& De Propris (1998) have suggested that rapidly rotating cores of giant stars in the metal poor globular cluster M30 might reconcile this cluster's luminosity function with stellar evolutionary theory. Similar problems exist in understanding the luminosity function of NGC 6397, although further study of the theory of Vandenberg et al. (1998) is needed. Another possibility is that magnetic WDs are formed in globulars from differentially rotating cores in blue stragglers (Grindlay 1996). Prospects for further work on the CV data include modeling of the disk for CV 1, power spectrum analysis of both the time series obtained by CG98 and the sub-exposures for the FOS spectra, studies of line profile changes with time and detailed spectral modeling incorporating the cluster's low metallicity. A clear test of the hypothesis that most of the 4 CVs are DQ Her systems is to search for a stable optical (or X-ray) period with $P < P_{orb}$ (DQ Hers usually have $P \ll P_{orb}$; Patterson 1994). Because the short FOS observations of the CVs are inadequate for this purpose, we shall propose to obtain simultaneous spectra of CVs 1 and 2 using moderate time-resolution ($\Delta\tau \sim$ 60s) spectroscopy in the blue (using STIS with a long slit), to directly test the magnetic CV hypothesis and place constraints on the hibernating nova scenario. 
The results for the NF are: (1) using detailed comparisons with stellar atmospheres from Wesemael et al. (1980) and Kurucz (1993) we have determined log g = 6.25 $\pm$ 1.0, and $T_{\scriptsize {\mbox{eff}}}$\ = 17,500 $\pm$ 5,000 K (consistent with $T_{\scriptsize {\mbox{eff}}}$\ = 22,000 $\pm$ 7,000 K using the photometry of CG98), (2) by using these line parameters and the luminosity of the NF we have shown that the NF spectrum is consistent with a helium WD having a mass of $\sim$0.25$M_{\odot}$\ and an age between 0.1 and 0.5 Gyr (depending on the models used), and (3) the NF spectrum appears to be significantly Doppler shifted from the expected wavelength, suggesting the presence of a dark, massive companion. The low mass of the NF (and probably similar or lower masses for the others) suggests that interactions between degenerate stars and subgiants or faint red giants are more efficient at producing helium WDs than interactions involving degenerate stars and brighter red giants. Although we have not yet made a rigorous attempt to find evidence for velocity variability of the H$\beta$\ line for the NF, the prospects from subdividing this low S/N spectrum are poor, especially since almost two thirds of the data for the NF was obtained over just one $\sim$42 min time segment. Clearly, observations over a longer time are needed to confirm the Doppler shift evidence presented above and to study radial velocity variations. Use of STIS with the long slit would enable two NFs to be observed simultaneously, along with many cluster stars, providing an ideal radial velocity reference. Observations in the blue would also give excellent coverage of Balmer absorption lines (with the exception of H$\alpha$), giving considerably more accurate line parameters, and helping determine whether the luminosity difference between the bright NF and the two fainter ones is mainly because of mass or age differences. \acknowledgments We thank R. Kurucz, L. Althaus and O. 
Benvenuto for models, B. Hansen, R. Di Stefano and F. Wesemael for discussions and an anonymous referee for helpful comments. This work was partially supported by NASA grants NAG5-3808 and HST grant GO-06742 (PDE and JEG), and NASA grant NAG5-6404 (CDB). \newpage
\section{Introduction} \label{SectIntr} A quantum integrable system corresponds to each solution of the Yang-Baxter equation \be \lb{YB} \mathbb{R}_{12}(u-v) \,\mathbb{R}_{13}(u) \,\mathbb{R}_{23}(v) = \mathbb{R}_{23}(v) \,\mathbb{R}_{13}(u)\, \mathbb{R}_{12}(u-v) \,. \ee Consequently, the classification of its solutions is of major interest for mathematical physics. The linear operator $\mathbb{R}_{ij}(u)$ from Eq. \p{YB} that acts on a tensor product of two linear spaces and depends on a spectral parameter $u \in \C$ is traditionally referred to as $\mathrm{R}$-{\it matrix}. We prefer to call it $\mathrm{R}$-{\it operator}, since we will extensively work with infinite-dimensional linear spaces. The operators in Eq. \p{YB} act on a tensor product of three linear spaces. Each $\mathrm{R}$-operator acts non-trivially on a pair of spaces denoted by its lower indices, and it is extended as an identity operator on the remaining space of the triple. It is well known that solutions of the Yang-Baxter equation can be rather intricate \cite{KuSk,Jim}. Nonetheless, appealing to the Quantum Inverse Scattering Method~\cite{Fad,KuSk1}, one can put forward a reasonable conjecture that they are composite objects having internal structure and that they are constructed out of elementary blocks. A more refined statement is that the $\mR$-operator admits factorization, i.e. it is a product of several simpler operators. This observation enabled the construction of the general solution of the Yang-Baxter equation, Eq. \p{YB}, acting on a tensor product of two infinite-dimensional principal series representations of the group $\mathrm{SL}(N,\C)$ \cite{DM09}. In the case of a rank one algebra this result has been carried over to trigonometric and elliptic deformations. The general $\mR$-operators for Faddeev's modular double and the elliptic modular double have been constructed in a factorized form in \cite{CD14} and \cite{DS1}, respectively. 
In this note we will deal with finite-dimensional representations. We will prove that $\mR$-operators for rank 1 algebras acting on a tensor product of an {\it arbitrary} finite-dimensional and an arbitrary infinite-dimensional representations admit factorization as well. These solutions of the Yang-Baxter equation, Eq. \p{YB}, can be thought of as generalizations of the quantum Lax operator, since the fundamental representation in the auxiliary space $\C^2$ is substituted by a higher-spin representation in $\C^{n+1}$. Let us consider firstly solutions of Eq. \p{YB} that are invariant with respect to the Lie algebra $s\ell_2$. In the following sections we will consider as well its trigonometric deformation that is the modular double (along with $U_q(s\ell_2)$) and its elliptic deformation that is the Sklyanin algebra. The commutation relations between $s\ell_2$ generators are the following \be \lb{sl2} [\,\mathbf{S}^{+}\,,\, \mathbf{S}^{-}\,] = 2 \mathbf{S} \;\; , \ \ \ \ [\,\mathbf{S}\,,\,\mathbf{S}^{\pm}\,] = \pm \mathbf{S}^{\pm}\,. \ee The symmetry restriction implies commutativity of the $\mathrm{R}$-operator and the co-product of the algebra generators $$ [\,\mathbb{R}_{12}(u)\,,\, \mathbf{S}^{\pm}_{1} + \mathbf{S}^{\pm}_2 \,] = 0\ \; , \ \ \ \ [\,\mathbb{R}_{12}(u)\,,\, \mathbf{S}_1 + \mathbf{S}_2 \,] = 0\,. $$ The linear spaces the $\mathrm{R}$-operator acts upon are representation spaces of $s\ell_2$. We will be concerned with representations of $s\ell_2$ that are Verma modules. We realize Verma modules in the space of polynomials $\C[z]$. The generators of $s\ell_2$ are first order differential operators acting on the space of polynomials and depending on a parameter $\ell \in \C$, which we call {\it spin} of the representation, \begin{equation}\label{diff} \mathbf{S} = z\partial -\ell \ ,\ \mathbf{S}^{-} = \partial \ ,\ \mathbf{S}^{+} = -z^2\partial + 2 \ell z\ . 
\end{equation} At generic $\ell$ the action of the generators \p{diff} on the space $\C[z]$ is irreducible, and the representation is infinite-dimensional. At (half)-integer spins $2 \ell = n \in \mathbb{Z}_{\geq 0}$ the representation is reducible, and a $(n+1)$-dimensional irreducible representation with the basis $\{1,z,z^2,\cdots,z^n\}$ decouples. We prefer to gather the basis monomials in the generating function $(z-x)^n$ which depends on an auxiliary parameter $x$. Expanding the generating function with respect to $x$ we recover all basis vectors. An elegant formula for $s\ell_2$-invariant solutions of the Yang-Baxter equation, Eq. \p{YB}, acting on a tensor product of two representations of arbitrary spins $s$ and $\ell$ has been derived in~\cite{KRS81,FTT83}, \be \lb{FTT} \mathbb{R}_{12}(u|s,\ell) = \mathrm{P}_{12} \frac{\Gamma(u-J)}{\Gamma(u+J)}\,, \ee where $J$ is a ``square root'' of the Casimir operator: $J(J+1) \equiv (\vec{S}_1 + \vec{S}_2)^2$; the operator $\mathrm{P}_{12}$ swaps the tensor factors: $\mathrm{P}_{12} \Phi(z_1,z_2) = \Phi(z_2,z_1)$. The formula \p{FTT} is valid for both finite-dimensional and infinite-dimensional representations. The operator $J$ is defined rather formally. Thus the formula \p{FTT} has to be accompanied by a decomposition of the tensor product of two representations into irreducibles, which are eigenspaces of $J$. In \cite{KhT} the universal R-matrix for the Yangian double of $s\ell_2$ has been found in a form of a product of three power series in generators $\mathbf{S},\,\mathbf{S}^{\pm}$, Eq. \p{sl2}. This universal R-matrix taken in the evaluation representation is an alternative to Eq. \p{FTT}. Here we take another route. We will obtain a number of explicit formulae for solutions of the Yang-Baxter equation, Eq. \p{YB}, working with the functional realization of representations, Eq. \p{diff}. 
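The differential-operator realization \p{diff} can be checked directly; the following sympy sketch (a verification aid, not part of the derivation) confirms the commutation relations \p{sl2} on an arbitrary function of $z$.

```python
import sympy as sp

z, ell = sp.symbols('z ell')
f = sp.Function('f')(z)

# Generators of sl(2) as first-order differential operators, spin ell
S  = lambda p: sp.expand(z * sp.diff(p, z) - ell * p)
Sm = lambda p: sp.diff(p, z)
Sp = lambda p: sp.expand(-z**2 * sp.diff(p, z) + 2 * ell * z * p)

# Commutator of two operators applied to a test function
comm = lambda A, B, p: sp.expand(A(B(p)) - B(A(p)))

assert sp.simplify(comm(Sp, Sm, f) - 2 * S(f)) == 0   # [S+, S-] = 2 S
assert sp.simplify(comm(S, Sp, f) - Sp(f)) == 0       # [S, S+] = +S+
assert sp.simplify(comm(S, Sm, f) + Sm(f)) == 0       # [S, S-] = -S-
print('sl(2) relations verified')
```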
Indeed, the $\mathbb{R}$-operator acting on the space of polynomials of two complex variables $\C[z_1]\otimes\C[z_2]$ takes the form \be \lb{Rsl2R} \mathbb{R}_{12}(u|s,\ell)= \mathrm{P}_{12} \frac{\Gamma(z_{21}\dd_2-2s)}{\Gamma(z_{21}\dd_2 -u -s - \ell)} \frac{\Gamma(z_{12}\dd_1 + u -s - \ell)}{\Gamma(z_{12}\dd_1-2s)} \ee where $z_{ij} \equiv z_i - z_j$. We imply that representations of the form \p{diff} specified by spins $s$ and $\ell$ are realized in the spaces $\C[z_1]$ and $\C[z_2]$, respectively. The ratio of two gamma functions of operator argument can be rewritten as an integral operator by means of the Euler integral of the first kind $$ \frac{\Gamma(z_{12}\dd_1 + a)}{\Gamma(z_{12}\dd_1 + b)}\,\Phi(z_1,z_2)= \frac{1}{\Gamma(b-a)}\int_0^1 d\alpha \,\alpha^{a-1}(1-\alpha)^{b-a-1} \Phi(\alpha z_1+(1-\alpha)z_2,z_2)\,. $$ Let us note that the $\mathbb{R}$-operator in Eq. \p{Rsl2R} is factorized. The origin and the meaning of this and other similar factorizations has been clarified in \cite{DKK07}. The equality of $\mathrm{R}$-operators \p{FTT} and \p{Rsl2R} (up to an inessential normalization factor), provided the functional realization of $s\ell_2$, Eq. \p{diff}, is adopted, can be checked by a straightforward calculation \cite{DM09}. The solution \p{Rsl2R} of the Yang-Baxter equation has been constructed in \cite{Der05} for infinite-dimensional representations of Verma module type. The spins $s$ and $\ell$ are assumed to be generic. The case of (half)-integer spins demands an additional refinement. Indeed, the limit $s \to \frac{n}{2}$ in Eq. \p{Rsl2R} has to be calculated carefully, since the divergences arise in both factors. In~\cite{CDS} it has been shown that at (half)-integer spin $2s = n \in \mathbb{Z}_{\geq 0}$ the operator \p{Rsl2R} can be restricted to a finite-dimensional invariant subspace in the first space of the tensor product. 
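The Euler-integral representation can be spot-checked on eigenfunctions of $z_{12}\dd_1$: since $(z_1-z_2)^m$ has eigenvalue $m$, the ratio of gamma functions must act on it simply as $\Gamma(m+a)/\Gamma(m+b)$. A sympy sketch with the sample values $m=2$, $a=1$, $b=3$ (chosen for convergence, $b > a > 0$):

```python
import sympy as sp

z1, z2, al = sp.symbols('z1 z2 alpha')
m, a, b = 2, 1, 3                 # sample values only

phi = (z1 - z2)**m                # eigenfunction of z12*d1, eigenvalue m

# Left side: the Euler-integral form of Gamma(z12 d1 + a)/Gamma(z12 d1 + b)
shifted = phi.subs(z1, al * z1 + (1 - al) * z2)
lhs = sp.integrate(al**(a - 1) * (1 - al)**(b - a - 1) * shifted,
                   (al, 0, 1)) / sp.gamma(b - a)

# Right side: the operator acting by the scalar Gamma(m + a)/Gamma(m + b)
rhs = sp.gamma(m + a) / sp.gamma(m + b) * phi

assert sp.simplify(lhs - rhs) == 0
print('integral representation verified on (z1 - z2)**m')
```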
The restricted operator acts on a tensor product of the $(n+1)$-dimensional space (where spin $s = \frac{n}{2}$ representation is realized) and an infinite-dimensional space (where spin $\ell$ representation is realized). In other words, it is a $(n+1)\times (n+1)$ matrix, whose entries are differential operators acting on the space of polynomials $\C[z]$. In~\cite{CDS} the restriction of the $\mR$-operator has been calculated and the generating formula for its matrix elements has been found. More exactly, the $\mathrm{R}$-operator being applied to the generating function $(z_1-x)^n$ of the finite-dimensional representation in the first space and to a polynomial $\Phi(z)$ from the second space\footnote{ In order to simplify notations we use $z$ instead of $z_2$.} gives the following\footnote{Inessential normalization factors in \p{Rsl2R} and \p{redsl2alg} are different.} \be\lb{redsl2alg} \mathbb{R}_{12}(u|{\textstyle\frac{n}{2}},\ell) \,(z_1-x)^n \,\Phi(z) = \ee $$ = (z-x)^{-u+\frac{n}{2}+\ell} \, (z_1-z)^{u+\frac{n}{2}+\ell+1} \, \dd_{z}^n \, (z_1-z)^{-u + \frac{n}{2} - \ell - 1} \, (z-x)^{u + \frac{n}{2} - \ell} \, \Phi(z)\,. $$ Expanding both sides of Eq. \p{redsl2alg} in powers of the auxiliary parameter $x$ we recover matrix elements of $\mathbb{R}_{12}(u|{\textstyle\frac{n}{2}},\ell)$. If we choose the second spin in Eq. \p{redsl2alg} to be (half)-integer as well, $2\ell = m \in \mathbb{Z}_{\geq 0}$, then we immediately obtain the restriction of the $\mathrm{R}$-operator to a $(m+1)$-dimensional representation in the second space. Indeed, applying $\mathbb{R}_{12}(u|{\textstyle\frac{n}{2}},{\textstyle\frac{m}{2}})$ to the generating function $\Phi(z) = (z - y)^m$ of the finite-dimensional representation in the second space, we find the generating function for matrix elements of $\mathbb{R}_{12}(u|{\textstyle\frac{n}{2}},{\textstyle\frac{m}{2}})$, which is a $(n+1)(m+1)\times (n+1)(m+1)$ matrix solving the Yang-Baxter equation, Eq.~\p{YB}. 
The formula \p{redsl2alg} contains in a compact form all matrix elements of the restricted $\mathrm{R}$-operator. However, the matrix form of the restricted $\mathrm{R}$-operator is still rather obscure. Using the formula \p{redsl2alg} as a starting point, we will infer an explicit formula for the restricted $\mathrm{R}$-operator as a matrix of differential operators. Moreover, we will see that this matrix is organized very simply and that it is much more transparent than \p{redsl2alg}. In order to get accustomed to \p{redsl2alg}, let us consider several examples. In the case of restriction to the two-dimensional space (spin $s = \frac{1}{2}$) the formula~(\ref{redsl2alg}) gives rise to the quantum $\mathrm{L}$-operator \cite{Fad}. In order to see it, let us choose the following basis in $\C^2$: $\mathbf{e}_1 = z_1$, $\mathbf{e}_2 = 1$. In matrix notations $\mathbf{e}_1 = (1,0)$, $\mathbf{e}_2 = (0,1)$. Equating coefficients of equal powers of the auxiliary parameter $x$ on both sides of $$ \mathbb{R}_{12}(u-\textstyle\frac{1}{2}|{\textstyle\frac{1}{2}},\ell) \,(z_1-x) \,\Phi(z) = (z-x)^{-u+\ell+1} \, (z_1-z)^{u+\ell+1} \, \dd_{z} \, (z_1-z)^{-u - \ell} \, (z-x)^{u - \ell} \, \Phi(z)\, $$ yields the action of the $\mR$-operator on the basis elements $\mathbf{e}_1, \mathbf{e}_2$ \begin{align*} \mathbb{R}_{12}(u-\textstyle{\frac{1}{2}}|\textstyle{\frac{1}{2}},\ell) \,\mathbf{e}_1 & = \mathbf{e}_1 \, (z \dd - \ell + u) + \mathbf{e}_2 \,(-z^2 \dd + 2 \ell z) \,, \\ \mathbb{R}_{12}(u-\textstyle{\frac{1}{2}}|\textstyle{\frac{1}{2}},\ell) \,\mathbf{e}_2 & = \mathbf{e}_1 \, \dd + \mathbf{e}_2 \, (u + \ell - z \dd)\,. \end{align*} We tacitly assume that both sides of the previous equalities are applied to an arbitrary polynomial $\Phi(z)$. 
Thus the matrix of the operator $\mathbb{R}_{12}(u-\textstyle{\frac{1}{2}}|\textstyle{\frac{1}{2}},\ell)$ in the chosen basis is the following \be \lb{LaxNonFact} \mathbb{R}_{12}(u-\textstyle{\frac{1}{2}}|\textstyle{\frac{1}{2}},\ell) = \begin{pmatrix} u - \ell + z \dd & \dd \\ -z^2 \dd + 2 \ell z & u + \ell - z\dd \end{pmatrix} = \begin{pmatrix} u + \mathbf{S} & \mathbf{S}^{-} \\ \mathbf{S}^{+} & u - \mathbf{S} \end{pmatrix}\,. \ee It does coincide with the $\mathrm{L}$-operator. The shift of the spectral parameter implemented here simplifies the formula. A straightforward calculation enables one to check that the $\mL$-operator is a product of several upper-triangular and lower-triangular matrices \be \lb{LaxFact} \mathbb{R}_{12}(u-\textstyle{\frac{1}{2}}|\textstyle{\frac{1}{2}},\ell) = \begin{pmatrix} 1 & 0 \\ -z & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & u_2 \end{pmatrix} \begin{pmatrix} 1 & \dd \\ 0 & 1 \end{pmatrix} \begin{pmatrix} u_1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ z & 1 \end{pmatrix} \,, \ee where we introduce linear combinations of the spin and the spectral parameter $$ u_1 \equiv u - \ell -1 \ \,,\ u_2 \equiv u + \ell\,. $$ The factorization formula \p{LaxFact} is rather natural, since both the initial infinite-dimensional $\mR$-operator, Eq. \p{Rsl2R}, and its restriction, Eq. \p{redsl2alg}, have the factorized form. This leads to a reasonable question: does there exist a factorized matrix form of the $\mR$-operator at (half-)integer spin $s=\frac{n}{2}$ which would be analogous to \p{LaxFact}? In order to answer this question let us consider a more intricate example of spin $s = 1$, i.e. the restriction to $\mathbb{C}^3$ ($n=2$).
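As an independent check of \p{LaxFact}, one can model the operator-valued matrix entries as callables (so that matrix multiplication uses operator composition, where $z$ and $\dd$ do not commute) and compare the five-factor product with \p{LaxNonFact} on test monomials. A minimal sketch, assuming Python with sympy; all helper names are ours:

```python
import sympy as sp

z, u, l = sp.symbols('z u ell')

# entries are operators acting on functions of z; model each as a callable
def d(f): return sp.diff(f, z)                    # the operator d/dz
def mul(g): return lambda f: g*f                  # multiplication by g(z)
def scal(c): return lambda f: c*f                 # multiplication by a constant
def add(*ops): return lambda f: sum(op(f) for op in ops)
def comp(a, b): return lambda f: a(b(f))          # composition a after b

def matmul(A, B):
    # matrix product with operator-valued entries (composition instead of *)
    return [[add(*(comp(A[i][k], B[k][j]) for k in range(len(A))))
             for j in range(len(A))] for i in range(len(A))]

I, O = scal(1), scal(0)
u1, u2 = u - l - 1, u + l

# left-hand side: the matrix of eq. (LaxNonFact)
L = [[add(scal(u - l), comp(mul(z), d)), d],
     [add(comp(mul(-z**2), d), mul(2*l*z)), add(scal(u + l), comp(mul(-z), d))]]

# right-hand side: the five-factor product of eq. (LaxFact)
F = [[I, O], [mul(-z), I]]
for G in ([[I, O], [O, scal(u2)]], [[I, d], [O, I]],
          [[scal(u1), O], [O, I]], [[I, O], [mul(z), I]]):
    F = matmul(F, G)

# compare the action of both sides on test monomials
for k in range(5):
    for i in range(2):
        for j in range(2):
            assert sp.expand(L[i][j](z**k) - F[i][j](z**k)) == 0
```

The agreement on enough monomials fixes both sides as differential operators of first order, so this confirms \p{LaxFact} entry by entry.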
Straightforward application of the formula~(\ref{redsl2alg}) yields the following matrix of the operator $\mathbb{R}_{12}(u-1|1,\ell)$ written in the basis $\mathbf{e}_1 = z_1^2$, $\mathbf{e}_2 = z_1$, $\mathbf{e}_3 = 1$ $$ \begin{pmatrix} u^2 + u (2\mathbf{S}-1) + \mathbf{S}(\mathbf{S} - 1) & - u \mathbf{S}^- + \mathbf{S} \mathbf{S}^- & (\mathbf{S}^-)^2 \\ -2 u \mathbf{S}^+ + 2 \mathbf{S}^+ \mathbf{S} & u^2 - u -2 \mathbf{S}^2 + \ell(\ell+1) & -2 u \mathbf{S}^- - 2 \mathbf{S}^- \mathbf{S} \\ (\mathbf{S}^+)^2 & -u \mathbf{S}^+ - \mathbf{S} \mathbf{S}^+ & u^2 - u (2\mathbf{S}+1) + \mathbf{S}(\mathbf{S} + 1) \end{pmatrix}\,. $$ The matrix entries above are expressed through the algebra generators, Eq. \p{diff}. It is easy to check that the matrix can be decomposed into a product of several more simply organized triangular matrices \be \lb{spin1} \begin{pmatrix} 1 & 0 & 0 \\ - 2z & 1 & 0 \\ z^2 & - z & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0\\ 0 & u_2-1 & 0 \\ 0 & 0 & u_2 (u_2-1) \end{pmatrix} \begin{pmatrix} 1 & \dd & \dd^2 \\ 0 & 1 & 2\dd \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} u_1(u_1-1) & 0 & 0\\ 0 & u_1-1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 2z & 1 & 0 \\ z^2 & z & 1 \end{pmatrix}. \ee Considering more examples, we are able to guess the factorization formula for the matrix of the operator $\mathbb{R}_{12}(u-\textstyle\frac{n}{2}|\textstyle{\frac{n}{2}},\ell)$ written in the basis $\mathbf{e}_1 = z_1^{n}\,, \mathbf{e}_2 = z_1^{n-1}\,,\,\ldots\,, \mathbf{e}_{n+1} = 1$ \be\lb{formula1} \mathbb{R}_{12}(u-\textstyle\frac{n}{2}|\textstyle{\frac{n}{2}},\ell) = Z^{-1} \, U^{+}(u_2) \, D \, U^{-}(u_1) \, Z \,.
\ee For the first few triangular matrices $Z$ and $D$ (at spin $s = \frac{1}{2}\,,1\,,\frac{3}{2}\,,2,\cdots$) we obtain \be Z_{(\frac{1}{2})} = \begin{pmatrix} 1 & 0 \\ z & 1 \end{pmatrix} , Z_{(1)} = \begin{pmatrix} 1 & 0 & 0 \\ 2z & 1 & 0 \\ z^2 & z & 1 \end{pmatrix} , Z_{(\frac{3}{2})} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 3z & 1 & 0 & 0\\ 3z^2 & 2z & 1 & 0 \\ z^3 & z^2 & z & 1 \end{pmatrix} , Z_{(2)} = \begin{pmatrix} 1 & 0 & 0 & 0 & 0\\ 4z & 1 & 0 & 0 & 0\\ 6z^2 & 3z & 1 & 0 & 0 \\ 4z^3 & 3z^2 & 2z & 1 & 0\\ z^4 & z^3 & z^2 & z & 1 \end{pmatrix} ,\cdots \lb{Z} \ee \be D_{(\frac{1}{2})} =\begin{pmatrix} 1 & \dd \\ 0 & 1 \end{pmatrix} , D_{(1)} =\begin{pmatrix} 1 & \dd & \dd^2 \\ 0 & 1 & 2\dd \\ 0 & 0 & 1 \end{pmatrix} , D_{(\frac{3}{2})} = \begin{pmatrix} 1 & \dd & \dd^2 & \dd^3\\ 0 & 1 & 2\dd & 3\dd^2 \\ 0 & 0 & 1 & 3\dd \\ 0 & 0 & 0 & 1 \end{pmatrix} , D_{(2)} = \begin{pmatrix} 1 & \dd & \dd^2 & \dd^3 & \dd^4 \\ 0 & 1 & 2\dd & 3\dd^2 & 4\dd^3 \\ 0 & 0 & 1 & 3\dd & 6\dd^2 \\ 0 & 0 & 0 & 1 & 4\dd \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix} ,\cdots \lb{DD} \ee Thus one immediately infers the general pattern. The diagonal matrices $U^{+}(u)$ for the same values of the spin are the following \begin{align} U^{+}_{(\frac{1}{2})} &= \mathrm{diag}(1,u)\;\;\;,\;\;\; U^{+}_{(1)} = \mathrm{diag}(1,u-1,u(u-1))\, \notag \\ U^{+}_{(\frac{3}{2})} &= \mathrm{diag}(1,u-2,(u-1)(u-2),u(u-1)(u-2))\,, \notag \\ U^{+}_{(2)} &= \mathrm{diag}(1,u-3,(u-2)(u-3),(u-1)(u-2)(u-3), u(u-1)(u-2)(u-3))\,, \cdots \lb{U+} \end{align} The eigenvalues of $U^{-}(u)$ are in the reverse order \begin{align} U^{-}_{(\frac{1}{2})} &= \mathrm{diag}(u,1)\;\;\;,\;\;\; U^{-}_{(1)} = \mathrm{diag}(u(u-1),u-1,1)\, \notag \\ U^{-}_{(\frac{3}{2})} &= \mathrm{diag}(u(u-1)(u-2),(u-1)(u-2),u-2,1)\,, \notag \\ U^{-}_{(2)} &= \mathrm{diag}(u(u-1)(u-2)(u-3),(u-1)(u-2)(u-3),(u-2)(u-3),u-3,1)\,\cdots \lb{U-} \end{align} The examined examples lead to a transparent factorization pattern expressed by Eq. \p{formula1}.
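The inferred pattern is explicit: the entries of $Z$ and $D$ are binomial coefficients times powers of $z$ and $\dd$, and the diagonal entries of $U^{\pm}$ are rising factorials of $u-n+1$. A short sketch generating the matrices for arbitrary $n$ and spot-checking them against the examples above (assuming Python with sympy; $\dd$ is modeled by a formal commuting symbol, which is harmless inside a single matrix):

```python
import sympy as sp
from sympy import binomial, rf, Matrix, diag, symbols

z, d, u = symbols('z partial u')   # 'partial' models the derivative symbol

def Z_mat(n):
    # entry (i, j), 0-based: C(n-j, i-j) z^(i-j), lower triangular
    return Matrix(n+1, n+1,
                  lambda i, j: binomial(n-j, i-j)*z**(i-j) if i >= j else 0)

def D_mat(n):
    # entry (i, j), 0-based: C(j, j-i) d^(j-i), upper triangular
    return Matrix(n+1, n+1,
                  lambda i, j: binomial(j, j-i)*d**(j-i) if j >= i else 0)

def U_plus(n):
    # diagonal of rising factorials (u-n+1)_j, shortest product first
    return diag(*[rf(u-n+1, j) for j in range(n+1)])

def U_minus(n):
    # the same eigenvalues in the reverse order
    return diag(*[rf(u-n+1, n-j) for j in range(n+1)])

# spot-check against the spin-3/2 (n = 3) matrices displayed above
assert Z_mat(3) == Matrix([[1, 0, 0, 0], [3*z, 1, 0, 0],
                           [3*z**2, 2*z, 1, 0], [z**3, z**2, z, 1]])
assert D_mat(3) == Matrix([[1, d, d**2, d**3], [0, 1, 2*d, 3*d**2],
                           [0, 0, 1, 3*d], [0, 0, 0, 1]])
assert U_plus(3) == diag(1, u-2, (u-1)*(u-2), u*(u-1)*(u-2))
assert U_minus(3) == diag(u*(u-1)*(u-2), (u-1)*(u-2), u-2, 1)
```

The same functions reproduce the spin $\frac{1}{2}$, $1$ and $2$ matrices as well.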
The factorization formula \p{formula1} offers a considerably more explicit description of finite-dimensional solutions of the Yang-Baxter equation, Eq. \p{YB}, than all other known approaches. The factorization formula \p{formula1} is equivalent to the generating formula for matrix elements of the restricted $\mR$-operator, Eq.~(\ref{redsl2alg}), hence it confirms the efficiency of the result (\ref{redsl2alg}) obtained initially in \cite{CDS}. Let us mention that it would be difficult to guess the factorization formula~\p{formula1} taking the classical result~\p{FTT} as a starting point. The quantum Lax operator (also called $\mL$-operator) is a $2 \times 2$ matrix (the fundamental representation of a rank-1 algebra is two-dimensional), and its matrix entries are linear in generators of the symmetry algebra. The $\mL$-operator can be factorized into a product of several simpler matrices in the case of the $s\ell_2$ symmetry algebra as well as in the case of its trigonometric and elliptic deformations~\cite{BS,KrZa97,DKK07}. This observation helps a lot in solving the $\mathrm{RLL}$-relation, which imposes severe constraints on the infinite-dimensional $\mR$-operator~\cite{DKK07,DM09,DS1,CD14} and eventually enables one to find the general solution of the Yang-Baxter equation, Eq.~\p{YB}. The purpose of this note is to show that more general solutions of Eq. \p{YB} than the $\mL$-operator can be factorized as well. In the next Sect. we will prove the factorization formula~(\ref{formula1}). Besides the reduction formula \p{redsl2alg} for $s\ell_2$, analogous restrictions (to a finite-dimensional representation) of the general $\mR$-operator have been obtained in the paper~\cite{CDS} for the Lie group $\mathrm{SL}(2,\C)$ and for the modular double of $U_q(s\ell_2)$~\cite{F99}.
In the accompanying paper~\cite{CDS2} the analogous restriction of the general $\mathrm{R}$-operator \cite{DS1} has been carried out for elliptic deformations of $s\ell_2$, which are the Sklyanin algebra and the elliptic modular double \cite{AA2008}. The matrix factorization of $\mathrm{SL}(2,\C)$-symmetric $\mathrm{R}$-operators does not differ essentially from the formula (\ref{formula1}), since finite-dimensional representations of $\mathrm{SL}(2,\C)$ are tensor products of two $s\ell_2$ representations. So we will not consider factorization for $\mathrm{SL}(2,\C)$. In Sects. \ref{SecModDub} and \ref{SecDubFact} we deal with the modular double and obtain counterparts of the results presented in the current Sect. The trigonometric factorization is given by the formula \p{trigfactor}. In Sect. \ref{SecSkl} we consider elliptic deformations and find the elliptic factorization formula \p{factellip}. \section{Rational factorization} In this Sect. we will prove the matrix factorization formula for the restricted $s\ell_2$-invariant $\mR$-operator, Eq. \p{formula1}. The reduction formula \p{redsl2alg} obtained in~\cite{CDS} will be a starting point for us. The proof consists of two steps. Firstly, we rewrite the matrix formula \p{formula1} in an operator form. Secondly, we act by this operator on the polynomial $(z_1-x)^n \,\Phi(z)$ and transform the result to the form \p{redsl2alg}. For the sake of simplicity let us first consider the example of spin $s = 1$. We are going to rewrite the factorization formula \p{formula1} at $s = 1$ in an operator form. Recall that the matrix of the operator $\mathbb{R}_{12}(u-1|1,\ell)$ does factorize, Eq. \p{spin1}. It is constructed out of matrices $Z_{(1)}$, $D_{(1)}$, $U^\pm_{(1)}$ (see Eqs. \p{Z}, \p{DD}, \p{U+}, \p{U-}) \be \lb{Rspin1} \mathbb{R}_{12}(u-1|1,\ell) = Z^{-1}_{(1)} \, U^{+}_{(1)}(u_2) \, D_{(1)} \, U^{-}_{(1)}(u_1) \, Z_{(1)} \,.
\ee Each matrix factor in the previous formula has an operator counterpart. The matrices $Z_{(1)}$ and $D_{(1)}$ have a simple exponential form $$ Z_{(1)} = \exp \left(z \mathbf{D}_{(1)}\right) \ \ ;\ \ D_{(1)} = \mathbf{C}_{(1)}\, \exp\left(\dd\, \mathbf{D}_{(1)}\right)\,\mathbf{C}_{(1)}\,, $$ where we introduce the numerical matrices \be \lb{DC} \mathbf{D}_{(1)} \equiv \begin{pmatrix} 0 & 0 & 0 \\ 2 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}\ \ ;\ \ \mathbf{C}_{(1)} \equiv \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}\,. \ee The matrices $U^+_{(1)}$ and $U^-_{(1)}$ are related to each other by the similarity transformation $$ U^+_{(1)}(u_2) = \mathbf{C}_{(1)}\, U^-_{(1)}(u_2)\,\mathbf{C}_{(1)}\,. $$ Substituting the previous expressions in Eq. \p{Rspin1} and taking into account that $\mathbf{C}_{(1)}\,\mathbf{C}_{(1)} = \II$ we rewrite the matrix \p{Rspin1} as follows $$ \mathbb{R}_{12}(u-1|1,\ell) = \exp\left(-z\, \mathbf{D}_{(1)}\right)\, \mathbf{C}_{(1)}\,U^{-}_{(1)}(u_2)\,\exp\left(\dd\, \mathbf{D}_{(1)}\right)\,\mathbf{C}_{(1)}\, U^{-}_{(1)}(u_1)\, \exp\left(z\, \mathbf{D}_{(1)}\right)\,. $$ The matrices that constitute the previous expression are matrices of some operators in the basis $\mathbf{e}_1 = z_1^2$, $\mathbf{e}_2 = z_1$, $\mathbf{e}_3 = 1$: \begin{itemize} \item the lower-triangular matrix $\mathbf{D}_{(1)}$, Eq. \p{DC}, is a matrix of the differential operator $\partial_{z_1}$ with respect to the given basis; \item the matrix $\mathbf{C}_{(1)}$, Eq. \p{DC}, is a matrix of the inversion operator $\hat{\mathrm{C}}_1 \equiv \hat{\mathrm{C}} \otimes \II:\,z_1^k\to z_1^{n-k}$ at $n=2$ with respect to the given basis; \item the diagonal matrix $U^{-}_{(1)}(u)$ is a matrix of the operator $\frac{\Gamma(z_1\dd_1+u+1-n)}{\Gamma(u+1-n)}$ at $n=2$ with respect to the given basis\,. \end{itemize} Straightforward generalization of the considered example $s = 1$ (i.e. 
$n=2$) to arbitrary $n$ yields an equivalent operator form (with respect to the basis $\mathbf{e}_1 = z_1^{n}\,, \mathbf{e}_2 = z_1^{n-1}\,,\,\ldots\,, \mathbf{e}_{n+1} = 1$) of the matrix factorization formula, Eq. \p{formula1}, \be \lb{Rmat} \mathbb{R}_{12}(u-{\textstyle\frac{n}{2}}|{\textstyle\frac{n}{2}},\ell) = \exp\left(-z\, \dd_1\right)\, \hat{\mathrm{C}}_1\, \frac{\Gamma(z_1\dd_1+u_2+1-n)}{\Gamma(u_2+1-n)}\,\exp\left(\dd\, \dd_1\right)\,\hat{\mathrm{C}}_1\, \frac{\Gamma(z_1\dd_1+u_1+1-n)}{\Gamma(u_1+1-n)}\, \exp\left(z\, \dd_1\right). \ee We proceed to the second step of the proof. We are going to act by the operator \p{Rmat} on the polynomial $(z_1-x)^{n}\,\Phi(z)$ and to verify that the result does coincide with the formula for matrix elements of the $\mR$-operator, Eq. \p{redsl2alg}. Thus we act sequentially on $(z_1-x)^{n}\,\Phi(z)$ by the operator factors from Eq. \p{Rmat}. At the first step we perform the shift $z_1\to z_1+z$, \be \lb{step1} \exp\left(z\, \dd_1\right)\,(z_1-x)^{n}\,\Phi(z) = (z_1+z-x)^{n}\,\Phi(z)= \sum_{k=0}^{n} \frac{n!}{k!(n-k)!}\,z_1^k\,(z-x)^{n-k}\,\Phi(z)\,. \ee At the second step we act by the operator $\frac{\Gamma(z_1\dd_1+u_1+1-n)}{\Gamma(u_1+1-n)}$ on the previous expression. The action of this operator on $z_1^k$ is equivalent to the substitution $z_1\dd_1\to k $, \be \lb{step2} \sum_{k=0}^{n} \frac{n!}{k!(n-k)!}\, \frac{\Gamma(k+u_1+1-n)} {\Gamma(u_1+1-n)}\,z_1^k\,(z-x)^{n-k}\,\Phi(z)\,. \ee At the third step we act by the operators $\hat{\mathrm{C}}_1$ and $\exp\left(\dd\, \dd_1\right)$ on $z_1^k$ that is present in Eq. \p{step2}, \be \lb{step3} \exp\left(\dd\, \dd_1\right)\,\hat{\mathrm{C}}_1\,z_1^k = \exp\left(\dd\, \dd_1\right)\,z_1^{n-k} = (z_1+\dd)^{n-k} = \sum_{p=0}^{n-k} \frac{(n-k)!}{p!(n-k-p)!}\,z_1^{n-k-p}\,\dd^{p}\,, \ee and at the last step we apply $\exp\left(-z\, \dd_1\right)\, \hat{\mathrm{C}}_1\, \frac{\Gamma(z_1\dd_1+u_2+1-n)}{\Gamma(u_2+1-n)}$ to $z_1^{n-k-p}$ that is present in Eq.
\p{step3}, \be \lb{step4} \exp\left(-z\, \dd_1\right)\, \hat{\mathrm{C}}_1\, \frac{\Gamma(z_1\dd_1+u_2+1-n)}{\Gamma(u_2+1-n)}\,z_1^{n-k-p} = (z_1-z)^{k+p}\,\frac{\Gamma(u_2+1-k-p)}{\Gamma(u_2+1-n)}\,. \ee Finally, gathering the previous expressions, Eqs. \p{step2}, \p{step3}, \p{step4}, we obtain $$ \mathbb{R}_{12} (u-\textstyle\frac{n}{2}|{\textstyle\frac{n}{2}},\ell) \,(z_1-x)^{n}\,\Phi(z) = $$ \be = \sum_{k=0}^{n} \frac{n!}{k!(n-k)!}\frac{\Gamma(u_1+1-n+k)}{\Gamma(u_1+1-n)} \sum_{p=0}^{n-k} \frac{(n-k)!}{p!(n-k-p)!}\frac{\Gamma(u_2+1-k-p)}{\Gamma(u_2+1-n)} \,(z_1-z)^{k+p}\,\partial_z^p\,(z-x)^{n-k}\,\Phi(z)\,. \lb{Right1} \ee The previous formula is an operator reformulation of the matrix formula \p{formula1}. More exactly, according to Eq. \p{Right1}, the matrix \p{formula1} is applied to the $(n+1)$-dimensional vector $(z_1-x)^{n}$ and the operator entries of the matrix \p{formula1} act on the polynomial $\Phi(z)$. In order to complete the proof we need to show that the right hand side of Eq.~\p{Right1} coincides with the right hand side of Eq.~(\ref{redsl2alg}) where the spectral parameter is shifted $u \to u - \frac{n}{2}$, \be\lb{key} \mathbb{R}_{12}(u-\textstyle\frac{n}{2}|{\textstyle\frac{n}{2}},\ell)\,(z_1-x)^{n}\,\Phi(z) = (z-x)^{-u_1-1+n}\,(z_1-z)^{u_2+1}\,\partial_z^n\, (z_1-z)^{-u_2-1+n}\,(z-x)^{u_1+1}\,\Phi(z)\,. \ee The proof of the needed identity will be based on Cauchy's differentiation formula \be \lb{Cauchy} \partial_z^p\, F(z) = \frac{p!}{2\pi \textup{i}} \oint \frac{d\lambda}{(\lambda-z)^{p+1}} F(\lambda)\,, \ee where the closed contour around $\lambda = z$ does not encircle singularities of an analytic function $F(\lambda)$. Let us consider the right hand side of Eq. (\ref{Right1}).
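Before carrying out the contour-integral computation, the claimed identity between \p{Right1} and \p{key} can be spot-checked symbolically. A sketch for $n=1$ (assuming Python with sympy; the $\Gamma$-ratios are written as rising factorials, $\Gamma(a+k)/\Gamma(a) = (a)_k$):

```python
import sympy as sp
from sympy import binomial, rf, diff

z, z1, x, u1, u2 = sp.symbols('z z1 x u1 u2')
n = 1
Phi = z**3   # an arbitrary polynomial test function

# right-hand side of eq. (Right1)
lhs = sum(binomial(n, k) * rf(u1 + 1 - n, k)
          * binomial(n - k, p) * rf(u2 + 1 - n, n - k - p)
          * (z1 - z)**(k + p) * diff((z - x)**(n - k)*Phi, z, p)
          for k in range(n + 1) for p in range(n - k + 1))

# right-hand side of eq. (key)
rhs = ((z - x)**(-u1 - 1 + n) * (z1 - z)**(u2 + 1)
       * diff((z1 - z)**(-u2 - 1 + n) * (z - x)**(u1 + 1) * Phi, z, n))

assert sp.simplify(sp.expand(lhs - rhs)) == 0
```

After expansion the symbolic powers of $(z-x)$ and $(z_1-z)$ combine, leaving a polynomial identity in $u_1$, $u_2$.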
We reduce fractions canceling $(n-k)!$ and change the summation index in the second sum $p \to n-k-p$ $$ \sum_{k=0}^{n} \frac{n!}{k!}\frac{\Gamma(u_1+1-n+k)}{\Gamma(u_1+1-n)} \sum_{p=0}^{n-k} \frac{1}{p!(n-k-p)!}\frac{\Gamma(u_2+1-n+p)}{\Gamma(u_2+1-n)} \,(z_1-z)^{n-p}\,\underline{\partial_z^{n-k-p}\,(z-x)^{n-k}\,\Phi(z)}\,. $$ Further we exploit the integral representation \p{Cauchy} for the underlined factor, $$ \sum_{k=0}^{n} \frac{n!}{k!}\frac{\Gamma(u_1+1-n+k)}{\Gamma(u_1+1-n)} \sum_{p=0}^{n-k} \frac{1}{p!(n-k-p)!}\frac{\Gamma(u_2+1-n+p)}{\Gamma(u_2+1-n)} \,(z_1-z)^{n-p}\,\frac{(n-k-p)!}{2\pi \textup{i}} \times $$ $$ \cdot \oint \frac{d\lambda \;(\lambda-x)^{n-k}}{(\lambda-z)^{n-k-p+1}} \,\Phi(\lambda) = \frac{n!}{2\pi \textup{i}}\oint \frac{d\lambda}{(\lambda-z)^{n+1}}(z_1-z)^{n}(\lambda-x)^{n}\times $$ \be \lb{lamint} \cdot \sum_{k=0}^{n} \frac{\Gamma(u_1+1-n+k)}{k!\Gamma(u_1+1-n)}\, \left(\frac{\lambda-z}{\lambda-x}\right)^k\, \sum_{p=0}^{n-k} \frac{\Gamma(u_2+1-n+p)}{p!\Gamma(u_2+1-n)} \, \left(\frac{\lambda-z}{z_1-z}\right)^p\,\Phi(\lambda)\,. \ee Both summations in the previous formula can be extended freely to infinity. Indeed, the unwanted terms contain $(\lambda-z)^{m}$, $m \geq n+1$, and, consequently, disappear being integrated over $\lambda$. The emerging power series are of binomial type $$ \left(1-z\right)^{-\alpha} = \sum_{k=0}^{\infty} \frac{\Gamma(\alpha+k)}{k!\Gamma(\alpha)}\, z^k \,, $$ so the sums in Eq. \p{lamint} can be evaluated explicitly $$ \frac{n!}{2\pi \textup{i}}\oint \frac{d\lambda}{(\lambda-z)^{n+1}}\,(z_1-z)^{n}\, (\lambda-x)^{n}\, \left(1-\frac{\lambda-z}{\lambda-x}\right)^{n-u_1-1}\, \left(1-\frac{\lambda-z}{z_1-z}\right)^{n-u_2-1}\,\Phi(\lambda)\,. $$ Further we rearrange the factors in the previous expression and calculate the contour integral according to Eq. 
\p{Cauchy} that produces immediately the desired result \p{key}, $$ (z-x)^{-u_1-1+n}\,(z_1-z)^{u_2+1}\frac{n!}{2\pi \textup{i}}\oint \frac{\mathrm{d}\lambda}{(\lambda-z)^{n+1}}\, \left(\lambda-x\right)^{u_1+1}\, \left(z_1-\lambda\right)^{n-u_2-1}\,\Phi(\lambda) = $$ $$ =(z-x)^{-u_1-1+n}\,(z_1-z)^{u_2+1}\,\partial_z^n\, (z_1-z)^{-u_2-1+n}\,(z-x)^{u_1+1}\,\Phi(z)\,. $$ Thus the identity~(\ref{key}) along with the matrix factorization formula for the operator $\mathbb{R}_{12}(u-\textstyle\frac{n}{2}|{\textstyle\frac{n}{2}},\ell)$, Eq.~(\ref{formula1}), are proven. \section{Modular double} \lb{SecModDub} In this Sect. we consider solutions of the Yang-Baxter equation, Eq.~\p{YB}, that are invariant with respect to the modular double. The modular double of the quantum algebra $U_q(s\ell_2)$ has been introduced by Ludwig Faddeev in~\cite{F99}. This algebra is formed by two sets of generators $\mathbf{E}\,,\mathbf{K}\,,\mathbf{F}$ and $\widetilde{\mathbf{E}}\,,\widetilde{\mathbf{F}}\,,\widetilde{\mathbf{K}}$. The standard commutation relations for the generators $\mathbf{E}\,,\mathbf{K}\,,\mathbf{F}$, which form the quantum algebra $U_q(s\ell_2)$ with the deformation parameter $q = e^{\textup{i} \pi \tau}$ (we assume that $\tau \in \C \backslash \mathbb{Q}$, i.e. $q$ is not a root of unity) \begin{equation} \label{qsl2} \begin{array}{c} [\,\mathbf{E}\,,\,\mathbf{F}\,] = \frac{\mathbf{K}^2 - \mathbf{K}^{-2}}{q-q^{-1}} \;,\;\;\; \mathbf{K} \,\mathbf{E} = q \,\mathbf{E} \,\mathbf{K} \;,\;\;\; \mathbf{K} \,\mathbf{F} = q^{-1}\, \mathbf{F}\, \mathbf{K}\,, \end{array} \end{equation} are supplemented by analogous commutation relations for $\widetilde{\mathbf{E}},\,\widetilde{\mathbf{F}},\,\widetilde{\mathbf{K}}$ with the deformation parameter $\widetilde{q} = e^{\textup{i} \pi / \tau}$. 
In addition, the generators $\mathbf{E}$ and $\mathbf{F}$ commute with $\widetilde{\mathbf{E}}$ and $\widetilde{\mathbf{F}}$; the generator $\mathbf{K}$ anticommutes with $\widetilde{\mathbf{E}}$ and $\widetilde{\mathbf{F}}$; $\widetilde{\mathbf{K}}$ anticommutes with $\mathbf{E}$ and $\mathbf{F}$; $\mathbf{K}$ and $\widetilde{\mathbf{K}}$ commute. The representation theory of the modular double has been elaborated in a number of papers, see for example~\cite{BT02,F99,FKV,Had,Pawelkiewicz:2013wga} and references therein. We will use the following parametrization $\tau = \frac{\omega'}{\omega}$, where $\omega, \,\omega' \in \C$, $\mathrm{Im}\, \omega >0$, $\mathrm{Im}\, \omega' >0$, are constrained by $\omega \omega' = -\frac{1}{4}$. Then $$ q = \exp\left(\textup{i} \pi \omega' / \omega \right) \;,\;\;\; \widetilde{q} = \exp\left(\textup{i} \pi \omega / \omega' \right)\,, $$ so the change $q \rightleftarrows \widetilde{q}$ is equivalent to $\omega \rightleftarrows \omega'$. We will also use the notation $\omega'' = \omega + \omega'$. In what follows we deal with the realization of the modular double generators by finite-difference operators $\mathbf{K}_s = \pi_s(\mathbf{K})\,, \mathbf{E}_s = \pi_s(\mathbf{E})\,, \mathbf{F}_s = \pi_s(\mathbf{F})$ acting on the space of entire functions rapidly decaying at infinity along contours parallel to the real axis. The representation $\pi_s$ is parametrized by a complex number $s$, which we call {\it spin}.
The generators have the following explicit form~\cite{BT02,BT06,CD14} \begin{equation} \label{Gs} \mathbf{K}_s = e^{-\frac{\textup{i} \pi}{2\omega} \hat p} \;\;\;,\;\;\;\; \begin{array}{l} (q-q^{-1})\mathbf{E}_s = e^{\frac{\textup{i} \pi x}{\omega}} \left[ e^{-\frac{\textup{i} \pi}{2 \omega}\left(\hat p -s - \omega''\right)} - e^{\frac{\textup{i} \pi}{2 \omega}\left(\hat p -s - \omega''\right)} \right] \;,\\[0.3 cm] (q-q^{-1})\mathbf{F}_s = e^{-\frac{\textup{i} \pi x}{\omega}} \left[ e^{\frac{\textup{i} \pi}{2 \omega}\left(\hat p + s + \omega''\right)} - e^{-\frac{\textup{i} \pi}{2 \omega}\left(\hat p + s + \omega''\right)} \right]\,, \end{array} \end{equation} where $\hat p$ is a momentum operator in the coordinate representation, $\hat p = \frac{1}{2 \pi \textup{i}}\, \partial_{x}$. The formulae for generators $\widetilde{\mathbf{K}}_s\,, \widetilde{\mathbf{E}}_s\,, \widetilde{\mathbf{F}}_s$ are obtained by a change $\omega \rightleftarrows \omega'$ in Eq. \p{Gs}. \bigskip The noncompact quantum dilogarithm naturally arises in the representation theory of the modular double. In the context of quantum integrable systems it first appeared in \cite{F95}. The properties of this special function have been thoroughly examined in~\cite{FKV,Vol05}. We will need not the quantum dilogarithm itself, but another closely related special function defined by the integral \be \lb{D} D_a(z) = \exp\left(-\frac{\textup{i}}{2}\int\limits^{+\infty}_{-\infty} \frac{d\,t}{t}\,\frac{\sin(a t)\cos(z t)}{\sin(\omega t)\sin(\omega^{\prime} t)}\right)\,, \ee where the contour goes above the singularity at $t = 0$. The R-matrix of the Faddeev-Volkov model is expressed through this function~\cite{VF,Bazhanov:2007mh}. A number of identities for the $D$-function are contained in~\cite{BT06}. It naturally arises as an intertwining operator of equivalent representations of the modular double. Below we list some basic properties of the $D$-function that we will need.
The function $D_{a}(z)$ is even and satisfies \be \lb{Dev} D_a(z) = D_a(-z) \ \ ;\ \ D_a(z) D_{-a}(z) = 1 \ \ ; \ \ D_0 (z) = 1 \,. \ee From the definition \p{D} we infer that it is symmetric with respect to the change $\omega \rightleftarrows \omega'$. The $D$-function satisfies a pair of finite-difference equations of the first order \be \lb{FunEq} \frac{D_a(z-\omega')}{D_a(z + \omega')} = \frac{\cos \frac{\pi}{2 \omega} ( z-a)}{\cos \frac{\pi}{2 \omega} ( z+a)} \;\; ; \;\; \frac{D_a(z-\omega)}{D_a(z + \omega)} = \frac{\cos \frac{\pi}{2 \omega'} ( z-a)}{\cos \frac{\pi}{2 \omega'} ( z+a)}\,. \ee Consequently $2\omega$ and $2\omega'$ have the meaning of its quasi-periods. At generic spin $s$ the representation $\pi_s$ is irreducible and infinite-dimensional. However it is not of a Verma module type, since the representation space does not contain a highest-weight vector $\Omega(x)$, i.e. a vector such that $\mathbf{F}_s \,\Omega(x) = 0$, $\widetilde{\mathbf{F}}_s \,\Omega(x) = 0$, $\mathbf{K}_s \, \Omega(x) = \lambda \,\Omega(x)$, $\widetilde{\mathbf{K}}_s \, \Omega(x) = \widetilde{\lambda}\, \Omega(x)$. The representation $\pi_s$ is a deformed analogue of the principal series representations of the Lie group $\mathrm{SL}(2,\C)$. The situation drastically changes at spin $s = s_{n,m} \equiv - \omega''- n \omega - m \omega'$, where integers $n , m \in \mathbb{Z}_{\geq 0}$ enumerate points of a quarter-infinite lattice on the complex plane (or a line, for real $\omega/\omega'$). In this case the representation $\pi_{s_{n,m}}$ is reducible, and an $(n+1)(m+1)$-dimensional irreducible representation decouples. Since we will need finite-dimensional representations, let us consider their structure in more detail.
The basis of the $(n+1)(m+1)$-dimensional representation is formed by monomials $$ \widetilde{X}^{n-2k} X^{m-2l} \;\;\;\;\;\;\text{at}\;\;\;\; k=0,1,\cdots,n\,,\;\;\; l=0,1,\cdots,m\,, $$ with respect to the variables \be \lb{Xvar} X \equiv X(x) = e^{\frac{\textup{i} \pi}{2\omega}x} \;\;,\;\; \widetilde{X} \equiv \widetilde{X}(x) = e^{\frac{\textup{i} \pi}{2\omega'}x}\,. \ee Therefore any finite-dimensional representation of the modular double is a tensor product of finite-dimensional representations of $U_q(s\ell_2)$ and $U_{\widetilde{q}}(s\ell_2)$. For our purposes finite-dimensional representations of spin $s = s_m \equiv - \omega''- m \omega'$, $m \in \mathbb{Z}_{\geq 0}$, will be sufficient. Thus we will deal with only a half of the modular double. By means of the $D$-function, Eq. \p{D}, the basis vectors $X^{m-2l}$, $l=0,1,\cdots,m$, can be assembled into a single function. Indeed, $D_{m\omega'}(x-y)$ is a generating function of the $(m+1)$-dimensional representation. It reduces to a linear combination of exponentials by means of certain contiguous relations similar to \p{FunEq}, \begin{align} D_{m\omega'}(x-y) = \prod\limits^{m-1}_{l=0} \left( Y^{-1} X \, q^{\frac{m-1}{2}-l} + Y X^{-1} \, q^{-\frac{m-1}{2}+l} \right) \ \ , \ \ Y \equiv Y(y) = e^{\frac{\textup{i} \pi}{2\omega}y} \,, \lb{modGenFun} \end{align} where $y$ is an auxiliary parameter. \section{Trigonometric factorization} \lb{SecDubFact} After the brief introduction to the modular double in the previous Sect. we now consider solutions of the Yang-Baxter equation, Eq. \p{YB}, that are invariant with respect to this quantum algebra. The invariance means that $\mR$-operators commute with the co-product of all six generators. Let us note that the symmetry restriction imposed by $U_q(s\ell_2)$ alone is not sufficient to fix the $\mR$-operator completely (up to an inessential normalization factor).
Initially the R-operator for the modular double acting on a tensor product of two infinite-dimensional representations $\pi_{s_{(1)}} \otimes \pi_{s_{(2)}}$ was constructed in \cite{BT06} in a form similar to Eq. \p{FTT} where the role of the Euler beta function is played by the $D$-function, Eq. \p{D}. The remarks made in Sect. \ref{SectIntr} concerning the formula \p{FTT} are equally valid for this realization of the $\mR$-operator for the modular double. In~\cite{CD14}, relying on the realization \p{Gs} of the algebra generators, a more explicit formula for the $\mR$-operator has been proposed. There we demonstrated that the $\mR$-operator can be realized as an integral operator, and it factorizes into a product of four Faddeev-Volkov type $\mathrm{R}$-matrices~\cite{VF}. In~\cite{Mangazeev:2014gwa,Mangazeev:2014bqa} an explicit hypergeometric formula for the R-matrix of $U_q(s\ell_2)$ acting on a tensor product of two highest-weight representations has been discovered. In~\cite{Khoroshkin:2014hla} a universal factorization formula for the trigonometric $\mathrm{R}$-operator has been obtained by means of the universal $\mathrm{R}$-matrix. In~\cite{CDS} the integral $\mR$-operator for the modular double acting on a tensor product of two infinite-dimensional representations $\pi_{s_{n,m}} \otimes \pi_{s}$ has been used as a tool to produce finite-dimensional solutions of the Yang-Baxter equation, Eq. \p{YB}. There the restriction of the $\mR$-operator to a finite-dimensional representation in the first space at spin $s_{n,m} \equiv - \omega''- n \omega - m \omega'\,$, $n,\,m \in \mathbb{Z}_{\geq 0}\,$, has been found. Let us mention that in~\cite{Pawelkiewicz:2013wga} a similar restriction has been implemented for Racah-Wigner 6j-symbols. As we have already explained in the previous Sect., we are going to deal with finite-dimensional representations only of spin $s_{m} \equiv - m \omega' - \omega''$.
A generic finite-dimensional representation of spin $s_{n,m}$ can be taken into account as well without considerable changes in the following reasoning. The $\mR$-operator acts on the generating function of finite-dimensional representations at spin $s_{m}$, Eq. \p{modGenFun}, according to the following formula\footnote{Here we implemented a shift of the spectral parameter $u$ as compared with \cite{CDS}.} \begin{align} \lb{redmodm} & \mathbb{R}_{12}(u |s_m,s) \cdot D_{m \omega'}(x_{13})\,\Phi(x_2) = D_{u_2}(x_{12})\times \\ & \makebox[1em]{} \cdot \,D_{-u_1+m \omega'}(x_{23})\cdot D_{m \omega'}(\hat p_{2})\cdot D_{-u_2+m \omega'}(x_{12})\, D_{u_1}(x_{23})\,\Phi(x_2)\,, \notag \end{align} where $x_3$ is an auxiliary parameter of the generating function. We recall the shorthand notation $x_{ij} \equiv x_i -x_j$. Instead of the spectral parameter and spin $s$ we prefer another pair of parameters $$ u_1 = u + \frac{s}{2} \;;\;\;\; u_2 = u - \frac{s}{2} \,. $$ In these variables the final result will take a simpler form. $D_{m \omega'}(\hat p_{2})$ from Eq. \p{redmodm} is a finite-difference operator. It factorizes into a product of $m$ finite-difference operators of the first order due to the formula \p{modGenFun}. The generating formula \p{redmodm} uniquely specifies the solution of the Yang-Baxter equation, Eq. \p{YB}, that is a $(m+1)\times (m+1)$ matrix with operator entries. According to Eq. \p{redmodm}, the entries are finite-difference operators of order $m$. The formula \p{redmodm} enables one to obtain a more explicit realization of the restricted $\mR$-operator. With respect to the basis \be \lb{trigbas} \mathbf{e}_j = X_1^{m+2-2j} \ , \ \ \ \ j=1,\ldots,m+1\,, \ee where $X_1 = X_1(x_1)$, Eq. \p{Xvar}, the matrix factorization formula for the restricted $\mR$-operator holds \be \lb{trigfactor} \mathbb{R}_{12}(u |s_m,s) = Z \,M(u_2)\, D \,M(u_1)\, Z^{-1}\,.
\ee The previous formula is a trigonometric counterpart of the rational factorization, Eq.~\p{formula1}. The matrices $Z$ and $D$ are diagonal \be \lb{ZDtrig} (Z)_{kj} = \delta_{kj} \,X_2^{2k-m-2}\ \ ;\ \ (D)_{kj} = \delta_{kj} \,q^{(m-1)(m+2-2k)}\,e^{(m+2-2k)\omega'\partial_2}\,. \ee The coordinate $x_2$ is present only in the matrix $Z$, and the momentum operator $\hat p_2$ is present only in the matrix $D$. The numerical matrix $M(u)$ is given by the following hypergeometric sum \begin{align} \left(M(u)\right)_{kj} = \sum_{p} \frac{(q^{2};q^{2})_{j-1}\,(q^{2};q^{2})_{m-j+1} \,q^{(k-p-1)^2+p(p+2-2j)+(j-1)m - \frac{m^2}{2}}} {(q^{2};q^{2})_{p}(q^{2};q^{2})_{j-1-p} (q^{2};q^{2})_{k-p-1}(q^{2};q^{2})_{m+2-j-k+p}} \,U^{2(2p-j-k+2) + m}\,, \lb{Mtrig} \end{align} where the summation over integer $p$ is from $\mathrm{max}(0,k+j-2-m)$ to $\mathrm{min}(k-1,j-1)$; the q-Pochhammer symbol $(q^2;q^2)_k \equiv (1-q^2)(1-q^4)\cdots(1-q^{2k})$; $U \equiv U(u) = e^{\frac{\textup{i}\pi u}{2\omega}}$. In order to illustrate the formula \p{Mtrig} we indicate the first few matrices $M(u)$, $m = 1, 2 ,3$. Here we use the shorthand notation $M^{(m)} \equiv M(u + m)$, $$ M^{(1)} = \begin{pmatrix} U & U^{-1} \\ U^{-1} & U \end{pmatrix} \ , \ \ M^{(2)} = \begin{pmatrix} U^2 & 1 & U^{-2} \\ q+q^{-1} & q U^2 + q^{-1} U^{-2} & q+q^{-1} \\ U^{-2} & 1 & U^2 \end{pmatrix}, $$ $$ M^{(3)} = \begin{pmatrix} U^3 & U & U^{-1} & U^{-3} \\ (q^2 + 1 + q^{-2}) U & q^2 U^3 + (1 + q^{-2}) U^{-1} & q^{-2} U^{-3} + (1 + q^2) U & (q^2 + 1 + q^{-2}) U^{-1} \\ (q^2 + 1 + q^{-2}) U^{-1} & q^{-2} U^{-3} + (1 + q^2) U & q^2 U^3 + (1 + q^{-2}) U^{-1} & (q^2 + 1 + q^{-2}) U \\ U^{-3} & U^{-1} & U & U^3 \end{pmatrix}\,. $$ In the case $m = 1$ the $\mR$-operator restricted to the fundamental representation turns into the quantum Lax operator~\cite{BT06}. The factorization of the $\mathrm{L}$-operator of the XXZ spin chain was first discovered in~\cite{BaSt} in the context of the chiral Potts models.
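Although the displayed matrices are written with the shifted argument $u+m$, one structural feature of \p{Mtrig} is already visible at generic $u$: the examples suggest that $M(u)$ is centro-symmetric, $M_{kj} = M_{m+2-k,\,m+2-j}$. This can be checked directly from the sum \p{Mtrig}; a sketch assuming Python with sympy, where $r$ stands for $q^{1/2}$ so that all exponents stay integral:

```python
import sympy as sp

r, U = sp.symbols('r U')   # r stands for q^(1/2), U = exp(i*pi*u/(2*omega))
q = r**2

def qpoch(k):
    # (q^2; q^2)_k
    return sp.prod([1 - q**(2*j) for j in range(1, k + 1)])

def M_entry(m, k, j):
    # eq. (Mtrig); the fractional q-power is written as an integer power of r
    total = 0
    for p in range(max(0, k + j - 2 - m), min(k - 1, j - 1) + 1):
        num = (qpoch(j-1) * qpoch(m-j+1)
               * r**(2*((k-p-1)**2 + p*(p+2-2*j) + (j-1)*m) - m**2))
        den = qpoch(p) * qpoch(j-1-p) * qpoch(k-p-1) * qpoch(m+2-j-k+p)
        total += num/den * U**(2*(2*p - j - k + 2) + m)
    return sp.simplify(total)

# check the apparent centro-symmetry M_{kj} = M_{m+2-k, m+2-j}
for m in (1, 2, 3):
    for k in range(1, m + 2):
        for j in range(1, m + 2):
            assert sp.simplify(M_entry(m, k, j)
                               - M_entry(m, m + 2 - k, m + 2 - j)) == 0
```

The same helper reproduces individual entries such as $(M(u))_{21} = q + q^{-1}$ at $m=2$, matching the corresponding entry of $M^{(2)}$ above.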
\vspace{0.3 cm} The rest of this Sect. is devoted to the proof of the trigonometric factorization formula, Eq. \p{trigfactor}. We rewrite the finite-difference operator $D_{m \omega'}(\hat p_{2})$ from Eq. \p{redmodm} as a sum of shift operators \be \lb{djsum} D_{m\omega'}(\hat p_{2}) = \sum^{m+1}_{j=1} d_j\, e^{(m+2-2j)\omega'\partial_2}\,. \ee An explicit expression for numerical coefficients $d_j$ will not be relevant for a while (see Eq. \p{dk}). Then we rearrange the factors in Eq. \p{redmodm}. We collect all functions depending on $u_2$ to the left of the shift operators and all functions depending on $u_1$ to the right of the shift operators, \begin{align} \mathbb{R}_{12}(u | s_m ,s) \cdot D_{m\omega'}(x_{13}) \,\Phi(x_2) = \sum^{m+1}_{j=1} d_{j}\, D_{u_2}(x_{12})\, D_{-u_2+m\omega'}(x_{12}+(2j-m-2)\omega') \times \notag \\ \cdot \,e^{(m +2 -2j)\omega'\partial_2}\, D_{u_1}(x_{23}) \, D_{-u_1+m\omega'}(x_{23}+(2j-m-2)\omega')\,\Phi(x_2)\,. \lb{redmodm2} \end{align} So the coordinates are present in Eq. \p{redmodm2} only in the form of the function $D_{u}(x)D_{-u+m\omega'}(x+(2 j-m-2)\omega')$, where $j=1,\ldots,m+1$. By means of contiguous relations similar to \p{FunEq} one can check that this function is given by the following finite product \begin{align} &D_{u}(x)D_{-u+m\omega'}(x+(2 j-m-2)\omega') = \nonumber\\ & = U^{m+2-2j}\,q^{j-\frac{m}{2}-1}\,X^m\, \prod_{k=0}^{m-j}\left(1+q \,X^{-1}\,U^{-1}\,q^{2k}\right)\, \prod_{k=0}^{j-2}\left(1+q^{3-2j} \,X^{-1}\,U\,q^{2k}\right)\,. \lb{DDprod} \end{align} Expanding the right hand side of Eq. \p{DDprod} we obtain the following sum \begin{align} D_{u}(x)D_{-u+m\omega'}(x+(2j-m-2)\omega') = \sum^{m+1}_{k=1} d_{jk}(u)\,X^{m+2-2k}\,, \lb{DDsum} \end{align} where $d_{jk}(u)$ are some numerical coefficients, which will be calculated afterwards (see Eq. \p{djk}). Now we are ready to calculate the matrix of the operator $\mathbb{R}_{12}(u|s_m,s)$ with respect to the basis~\p{trigbas}. 
We substitute the expansion \p{DDsum} into Eq. \p{redmodm2} to the left and to the right of the shift operators. Then we expand both sides of Eq. \p{redmodm2} in powers of $X_3=X_3(x_3)$, Eq. \p{Xvar}. The generating function $D_{m\omega'}(x_{13})$ can be expanded with respect to the basis \p{trigbas} and simultaneously in powers of $X_3$ according to Eq.~\p{modGenFun}. Equating coefficients of powers of $X_3$ on both sides of Eq.~\p{redmodm2} yields \begin{align} &\mathbb{R}_{12}(u|s_m,s) \cdot d_k\,\mathbf{e}_k \,\Phi(x_2) = \nonumber\\ &= \sum^{m+1}_{i=1} \mathbf{e}_i\,\left(\sum^{m+1}_{j=1} d_{ji}(u_2)\,X_2^{2i-m-2}\, d_{j}\,e^{(m+2-2j)\omega'\partial_2}\, d_{jk}(u_1) \,X_2^{m+2-2k}\right)\,\Phi(x_2)\,. \lb{preMat} \end{align} Matrix entries of the operator $\mathbb{R}$ are coefficients in the expansion of the vector $\mathbb{R}\,\mathbf{e}_k$ with respect to the basis \p{trigbas}: $\mathbb{R}\,\mathbf{e}_k = \sum_{i=1}^{m+1} \mathbf{e}_i\,(\mathbb{R})_{ik}$. Consequently, the formula \p{preMat} immediately produces the matrix entries of $\mathbb{R}_{12}(u | s_m ,s)$, \begin{align} \left(\mathbb{R}_{12}(u | s_m ,s)\right)_{ik} = \left(X_2^{-(m+2-2i)}\,d_{pi}(u_2)\, d_{p}\right)\,\left(\delta_{pj}e^{(m+2-2j)\omega'\partial_2}\right)\, \left(\frac{d_{jk}(u_1)}{d_{k}}\,X_2^{m+2-2k}\right)\,. \lb{matrfact} \end{align} In the previous formula summation over the repeated indices $p,j$ is tacitly implied; the matrix entries are presented in the operator form, and we omit an arbitrary function $\Phi(x_2)$. Thus the right hand side of Eq. \p{matrfact} is factorized. It is a product of three matrices: the diagonal matrix containing shift operators is sandwiched between two matrices made of numerical coefficients $d_{jk}(u)$, $d_{k}$ and coordinates $X_2^{\pm(m+2-2k)}$.
Stripping off the diagonal matrices that contain $X_2$ from the lateral matrices we obtain the factorization formula \be \lb{facttrig2} \mathbb{R}_{12}(u |s_m,s) = Z \,M_2(u_2)\, \overline{D} \,M_1(u_1)\, Z^{-1}\,. \ee The diagonal matrix $Z$ is defined in Eq. \p{ZDtrig}. The diagonal matrix $\overline{D}$ is slightly different from $D$ defined in Eq. \p{ZDtrig}, $$ (\overline{D})_{ik} = \delta_{ik} \,e^{(m+2-2k)\omega'\partial_2}\,. $$ The numerical matrices $M_1(u)$, $M_2(u)$ are constructed out of the expansion coefficients $d_{jk}(u)$, $d_{k}$, \be \lb{M1M2} \left(M_1(u)\right)_{ik} = \frac{d_{ik}(u)}{d_{k}}\ \ ;\ \ \left(M_2(u)\right)_{ik} = d_{k}\,d_{ki}(u)\,. \ee The factorization formula \p{facttrig2} is slightly different from Eq. \p{trigfactor}. In order to recast the formula \p{facttrig2} into \p{trigfactor}, first of all we need to find the coefficients $d_{jk}(u)$, $d_{k}$ defined by expansions \p{djsum} and \p{DDsum}. This goal is easily accomplished by means of the q-binomial theorem \be \lb{q-binom} (-x;q^2)_m \equiv \prod_{k=0}^{m-1}\left(1+x\,q^{2k}\right) = \sum_{k=0}^{m} \frac{(q^2;q^2)_m\, q^{k(k-1)}}{(q^2;q^2)_k(q^2;q^2)_{m-k}}\,x^k \,. \ee Indeed, the function $D_{m\omega'}$, which produces coefficients $d_j$, Eq. \p{djsum}, is just the product \p{modGenFun} of the type~\p{q-binom}. Consequently, \be \lb{dk} d_k = \frac{(q^2;q^2)_m\, q^{(k-1)(k-m-1)}}{(q^2;q^2)_{k-1}(q^2;q^2)_{m-k+1}}\,. \ee The coefficients $d_{jk}(u)$, Eq. \p{DDsum}, can be extracted from the product of two q-binomial sums. We omit the details of the calculation that results in \begin{align} d_{jk}(u) = \sum_{p} \frac{(q^{2};q^{2})_{j-1}\,(q^{2};q^{2})_{m-j+1} \,q^{(k-p-1)^2+p(p+2-2j) + j-\frac{m}{2}-1}} {(q^{2};q^{2})_{p}(q^{2};q^{2})_{j-1-p} (q^{2};q^{2})_{k-p-1}(q^{2};q^{2})_{m-j+2-k+p}} \,U^{2(2p-j-k+2)+m}\,. \lb{djk} \end{align} The summation limits over integer $p$ in the previous formula are the same as in Eq. \p{Mtrig}. 
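All the explicit coefficients above descend from the q-binomial theorem \p{q-binom}, which is easy to test numerically. A minimal Python sketch (ours; the sample values of $q^2$, $x$ and the range of $m$ are arbitrary) compares the product and sum sides, using $q^{k(k-1)}=(q^2)^{k(k-1)/2}$:

```python
def qpoch(q2, n):
    # q-Pochhammer symbol (q^2; q^2)_n = (1 - q^2)(1 - q^4)...(1 - q^(2n)); empty product for n = 0
    p = 1.0
    for k in range(1, n + 1):
        p *= 1.0 - q2**k
    return p

def product_side(x, q2, m):
    # prod_{k=0}^{m-1} (1 + x q^(2k))
    p = 1.0
    for k in range(m):
        p *= 1.0 + x * q2**k
    return p

def sum_side(x, q2, m):
    # sum_{k=0}^{m} (q^2;q^2)_m q^(k(k-1)) / ((q^2;q^2)_k (q^2;q^2)_{m-k}) x^k,
    # with q^(k(k-1)) = (q^2)^(k(k-1)/2), an integer power of q^2
    return sum(qpoch(q2, m) * q2**(k * (k - 1) // 2)
               / (qpoch(q2, k) * qpoch(q2, m - k)) * x**k
               for k in range(m + 1))

q2 = 0.37   # a generic value of q^2
for m in range(7):
    for x in (0.2, -1.3, 2.5):
        assert abs(product_side(x, q2, m) - sum_side(x, q2, m)) < 1e-12
```

The agreement to machine precision for generic parameters is a convenient guard against transcription slips in the coefficients \p{dk} and \p{djk}.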
Let us define $$ \overline{d}_{jk}(u) \equiv d_j\,d_{jk}(u) \,q^{j(m-1) + 1 -\frac{m(m+1)}{2}}\,. $$ Substituting the explicit expressions for the coefficients \p{dk}, \p{djk} in the definition of $\overline{d}_{jk}(u)$, one straightforwardly checks that it is symmetric in the indices $j,k$: $\overline{d}_{jk}(u) = \overline{d}_{kj}(u)$. This observation enables us to simplify Eq.~\p{facttrig2}. We separate the diagonal matrix $\delta_{ik}\,d_k$, move it from $M_2(u)$ towards $M_1(u)$, Eq. \p{M1M2}, and cancel it. Thus, instead of a pair of different matrices $M_1(u)$ and $M_2(u)$, a pair of identical matrices $\left(M(u)\right)_{kj} = d_{jk}(u) \,q^{j(m-1) + 1 -\frac{m(m+1)}{2}}$, Eq. \p{Mtrig}, is present in the factorization formula \p{trigfactor}. The formula \p{trigfactor} is proven. It would be interesting to relate the factorization formula \p{trigfactor} to explicit expressions for $\mathrm{R}$-matrices from~\cite{Mangazeev:2014gwa,Mangazeev:2014bqa} as well as to the universal factorization formula~\cite{Khoroshkin:2014hla}. \section{Sklyanin algebra and elliptic factorization} \lb{SecSkl} In this Sect. we will factorize solutions of the Yang-Baxter equation, Eq.~\p{YB}, whose symmetry is encoded by the Sklyanin algebra~\cite{skl1}. The Sklyanin algebra is a two-parametric deformation of $s\ell_2$ or a one-parametric deformation of $U_q(s\ell_2)$. It serves as a dynamical symmetry algebra of the 8-vertex model~\cite{Baxter1}.
The four generators $\mathbf{S}^0,\,\mathbf{S}^1,\,\mathbf{S}^2,\,\mathbf{S}^3$ of the algebra respect the commutation relations \begin{align} \mathbf{S}^\alpha\,\mathbf{S}^\beta - \mathbf{S}^\beta\,\mathbf{S}^\alpha = \textup{i}\cdot\left(\mathbf{S}^0\,\mathbf{S}^\gamma +\mathbf{S}^\gamma\,\mathbf{S}^0\right)\,, \notag \\ \mathbf{S}^0\,\mathbf{S}^\alpha - \mathbf{S}^\alpha\,\mathbf{S}^0 = \textup{i}\,\mathbf{J}_{\beta \gamma}\cdot \bigl(\mathbf{S}^\beta\,\mathbf{S}^\gamma +\mathbf{S}^\gamma\,\mathbf{S}^\beta \bigr)\,, \lb{SklAlg} \end{align} where the triple $(\alpha,\beta,\gamma)$ is an arbitrary cyclic permutation of $(1,2,3)$. The structure constants $\mathbf{J}_{\alpha\beta}= \frac{\mathbf{J}_{\beta}-\mathbf{J}_{\alpha}}{\mathbf{J}_{\gamma}}$, $\gamma\neq \alpha,\beta$, are expressed through the Jacobi theta functions (we assume $\eta \in \mathbb{C}$ and $\theta_a(\eta)\neq 0,\, a=1,\ldots,4$) \be \lb{strcnst} \mathbf{J}_{1}=\theta_2(2\eta)\theta_2(0) \theta_2^{-2}(\eta)\ ;\quad \mathbf{J}_{2}=\theta_3(2\eta)\theta_3(0) \theta_3^{-2}(\eta)\ ;\quad \mathbf{J}_{3}= \theta_4(2\eta)\theta_4(0) \theta_4^{-2}(\eta)\,. \ee We adopt the shorthand notation $\theta_{a}(z|\tau) \equiv \theta_a(z)$, $a=1,\cdots,4$, for theta functions with modular parameter $\tau \in\mathbb{C}$, Im$(\tau)>0$, $$ \theta_{1}(z|\tau) \equiv \theta_1(z) = -\sum_{n\in\mathbb{Z}} \mathrm{e}^{\pi \textup{i} \left(n+\frac{1}{2}\right)^2\tau}\cdot \mathrm{e}^{2\pi \textup{i} \left(n+\frac{1}{2}\right)\left(z+\frac{1}{2}\right)}\,. $$ The remaining three theta functions are obtained by shifting the argument of $\theta_1$ by half quasi-periods \begin{eqnarray*} \theta_{2}(z|\tau)=\theta_1(z+{\textstyle\frac{1}{2}}|\tau)\,, \quad \theta_{3}(z|\tau)=e^{\frac{\pi \textup{i}\tau}{4}+\pi \textup{i} z}\theta_2(z+{\textstyle \frac{\tau}{2}}|\tau)\,, \quad \theta_4(z|\tau)= \theta_3(z+{\textstyle\frac{1}{2}}|\tau)\,.
\end{eqnarray*} Besides the theta functions $\theta_a(z)$ with modular parameter $\tau$ we will also need theta functions with modular parameter $\frac{\tau}{2}$, which we denote as follows \be \lb{thetahalf} \bar\theta_3(z) = \theta_3(z|{\textstyle\frac{\tau}{2}}) \;\;,\;\; \bar\theta_4(z) = \theta_4(z|{\textstyle\frac{\tau}{2}})\,. \ee The two types of theta functions are related to each other by the identity \be \lb{theta1tothetabar} 2\,\theta_1(x+y)\,\theta_1(x-y) = \bar\theta_4(x)\,\bar\theta_3(y) -\bar\theta_4(y)\,\bar\theta_3(x)\,. \ee In the previous Sect. we have seen that the noncompact quantum dilogarithm (more exactly, the $D$-function) is omnipresent when one deals with representations of the modular double. In the case of the elliptic deformation the same role is played by the elliptic gamma function~\cite{rui} \begin{equation} \Gamma(z)\equiv\Gamma(z|\tau,2\eta)\equiv \prod_{n,m=0}^{\infty} \frac{1-\mathrm{e}^{-2\pi\textup{i}z} p^{n+1}q^{m+1}}{1-\mathrm{e}^{2\pi\textup{i}z} p^ nq^m} \;\;\;,\;\;\; p=e^{2\pi\textup{i}\tau}\;\;,\;\; q=e^{4\pi\textup{i}\eta} \label{egamma} \ee where $|p|,|q|<1$. This function possesses a number of remarkable properties. We will need the reflection formula \be \lb{refl} \Gamma(z) \,\Gamma(-z + 2\eta +\tau) = 1 \ee and its quasi-periodicity under the shift by $2\eta$, \be \lb{shift2eta} \Gamma(z+2\eta) = \mathrm{R}(\tau)\,e^{\textup{i}\pi z}\,\theta_1(z|\tau)\,\Gamma(z)\;\;\;,\;\;\; \mathrm{R}(\tau) \equiv \frac{p^{-\frac{1}{8}}} {\textup{i} (p;p)_\infty}\,. \ee In the following we extensively use the short-hand notation $\Gamma(\pm z \pm x):= \Gamma(z+x) \Gamma(z-x) \Gamma(-z+x) \Gamma(-z-x)$. Various connections between the Sklyanin algebra and elliptic hypergeometric functions were considered in~\cite{Rains,ros:elementary,ros:sklyanin,AA2008}. Let us briefly outline some basic facts about representations of the Sklyanin algebra.
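The identity \p{theta1tothetabar} can be checked numerically using only the series definition of $\theta_1$ and the half-period shift relations above. A minimal Python sketch (ours; the truncation order $N$ and the sample points are arbitrary):

```python
import cmath

def theta1(z, tau, N=30):
    # series definition from the text:
    # theta_1(z|tau) = - sum_n exp(i*pi*(n+1/2)^2*tau) * exp(2*pi*i*(n+1/2)*(z+1/2))
    s = 0j
    for n in range(-N, N + 1):
        nu = n + 0.5
        s += cmath.exp(1j * cmath.pi * (nu * nu * tau + 2.0 * nu * (z + 0.5)))
    return -s

# half-period shift relations from the text
def theta2(z, tau):
    return theta1(z + 0.5, tau)

def theta3(z, tau):
    return cmath.exp(1j * cmath.pi * (tau / 4.0 + z)) * theta2(z + tau / 2.0, tau)

def theta4(z, tau):
    return theta3(z + 0.5, tau)

tau = 1j                              # any modular parameter with Im(tau) > 0
x, y = 0.13 + 0.07j, 0.31 - 0.05j     # generic sample points
lhs = 2.0 * theta1(x + y, tau) * theta1(x - y, tau)
# bars denote modular parameter tau/2, Eq. (thetahalf)
rhs = theta4(x, tau / 2) * theta3(y, tau / 2) - theta4(y, tau / 2) * theta3(x, tau / 2)
assert abs(lhs - rhs) < 1e-10
```

Here the bars are realized simply by passing the modular parameter $\tau/2$, as in Eq. \p{thetahalf}. We now return to the representations of the Sklyanin algebra.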
It admits a highly nontrivial explicit realization of the generators as first order finite-difference operators with elliptic coefficients, found by Sklyanin in his pioneering paper~\cite{skl2}, \begin{equation}\label{SklyanMod} \mathbf{S}^a = e^{\pi\textup{i}z^2/\eta}\frac{\textup{i}^{\delta_{a,2}} \theta_{a+1}(\eta)}{\theta_1(2 z) } \Bigl[\,\theta_{a+1} \left(2 z-g +\eta\right)e^{\eta\partial_z} - \theta_{a+1} \left(-2z-g+\eta\right)e^{-\eta\partial_z}\Bigl]e^{-\pi\textup{i}z^2/\eta}. \end{equation} The operators depend on a parameter $g \in \mathbb{C}$ called the {\it spin}. They act on the space of holomorphic functions of $z$. In Eq. \p{SklyanMod} we use an unconventional similarity transformation by means of $e^{\pm\pi\textup{i}z^2/\eta}$, whose meaning is explained in~\cite{DS1}. At generic $g$ the representation \p{SklyanMod} is infinite-dimensional and irreducible. However, for a discrete set of spin values $g = g_n \equiv (n+1) \eta + \frac{\tau}{2}$, $n \in \mathbb{Z}_{\geq 0}$, an $(n+1)$-dimensional representation decouples. The finite-dimensional representation can be realized in the space $\Theta^+_{2n}$ of even theta functions of order $2n$. It is formed by holomorphic functions that are even, $f(z) = f(-z)$, and have simple quasi-periodicity properties under the shifts of $z$ by $1$ and $\tau$: $$ f(z+1) = f(z) \;\; ,\qquad f(z+\tau) = \mathrm{e}^{-2 n\pi i\tau -4 n\pi i z } f(z)\,. $$ The action of the generators \p{SklyanMod} at spin $g =g_n$ is invariant and irreducible on this space. One can easily check that the monomials constructed out of the theta functions \p{thetahalf} form a basis $\{\varphi_{j}^{(n)}(z)\}_{j = 1}^{n+1}$ in the space $\Theta^+_{2n}$, \be \label{phi} \varphi_{j+1}^{(n)}(z) = \left[\bar\theta_3 \left(z\right)\right]^j \, \left[\bar\theta_4 \left(z\right)\right]^{n-j} \;\; , \;\; j = 0, 1 , \cdots , n\,. \ee The elliptic gamma function, Eq. \p{egamma}, makes it possible to combine the basis elements $\varphi_{j}^{(n)}(z)$ into a single object.
Indeed, $\Gamma\left(\mp z \mp x + g_n\right)$ is a generating function of the $(n+1)$-dimensional representation, and it depends on an auxiliary parameter $x$. Owing to Eqs. \p{theta1tothetabar}, \p{refl}, \p{shift2eta} it reduces to a product of linear combinations of $\bar\theta_3(z)$ and $\bar\theta_4(z)$, \be \lb{genellip} c \cdot\Gamma\left(\mp z \mp x + g_n\right) = \prod_{r = 0}^{n-1} \left[ \,\bar\theta_3(z) \,\bar\theta_4\left(x+(n-1-2r)\eta\right) + \bar\theta_4(z) \,\bar\theta_3\left(x+(n-1-2r)\eta\right) \,\right], \ee where an inessential numerical constant $c = (-2)^{n} \mathrm{R}^{-2n}(\tau) e^{-\frac{i\pi \tau}{2}n}$. The previous product is equivalent to a linear combination of basis vectors $\varphi^{(n)}(z)$, Eq. \p{phi}, with some coefficients $\psi^{(n)}(x)$ depending on the auxiliary parameter $x$. The generating function, Eq. \p{genellip}, is invariant under the change $z \rightleftarrows x$, hence it contains the second natural basis $\{\psi^{(n)}_j(z)\}_{j = 1}^{n+1}$, $$ c \cdot\Gamma\left(\mp z \mp x + g_n\right) = \sum_{j = 1}^{n+1} \psi_{n+2-j}^{(n)}(x)\, \varphi_{j}^{(n)}(z) = \sum_{j = 1}^{n+1} \varphi_{n+2-j}^{(n)}(x)\,\psi_{j}^{(n)}(z)\,, $$ that is formed by products of $\bar\theta_3(z)$ and $\bar\theta_4(z)$ with shifted arguments, \be \lb{psi} \psi_{j+1}^{(n)}(z) = \mathrm{Sym} \prod_{r=0}^{n-1} \bar\theta_{a_r}\left(z+(n-1-2r)\eta\right) \;\; , \;\; a_r \in \{3,4\} \;\; ,\;\; j = 0, 1 , \cdots , n \ee where $\bar\theta_3$ appears $j$ times and $\bar\theta_4$ appears $n-j$ times; symmetrization $\mathrm{Sym}$ is over indices $\{a_r\}$. Let us denote a pair of basis \p{phi}, \p{psi} of the $(n+1)$-dimensional space $\Theta^+_{2n}$ by $\{\mathbf{e}_{j}\}_{j=1}^{n+1}$ and $\{\mathbf{f}_j\}_{j=1}^{n+1}$, $$ \mathbf{e}_{j} = \varphi^{(n)}_j(z)\;\; , \;\; \mathbf{f}_{j} = \psi^{(n)}_j(z) \;\; , \;\; j = 1 , 2, \cdots , n+1 \,. 
$$ At $n = 1$ the representation is $2$-dimensional and the bases coincide, \be \lb{efBasLax} \mathbf{e}_1 = \mathbf{f}_1 = \bar\theta_4(z) \;\;,\;\; \mathbf{e}_2 = \mathbf{f}_2 = \bar\theta_3(z) \,. \ee At higher spins the bases are different. At $n = 2$ the representation is $3$-dimensional, and the pair of bases is $$ \mathbf{e}_1 = \bar\theta^2_4(z)\;,\; \mathbf{e}_2 = \bar\theta_4(z)\bar\theta_3(z)\;,\; \mathbf{e}_3 = \bar\theta^2_3(z) \;; $$ $$ \mathbf{f}_1 = \bar\theta_4(z-\eta)\bar\theta_4(z+\eta) \;,\; \mathbf{f}_2 = \bar\theta_4(z-\eta)\bar\theta_3(z+\eta) + \bar\theta_3(z-\eta)\bar\theta_4(z+\eta) \;,\; \mathbf{f}_3 = \bar\theta_3(z-\eta)\bar\theta_3(z+\eta) \,. $$ These basic facts about the Sklyanin algebra and its representations will be sufficient for our purposes. Let us take a look at the corresponding solutions of the Yang-Baxter equation, Eq. \p{YB}. The symmetry restrictions imposed by the Sklyanin algebra do not allow one to fix uniquely the solution $\mathbb{R}_{12}(u)$ acting on a tensor product of two infinite-dimensional representations specified by spins $g_{(1)}$ and $g_{(2)}$~\cite{DS1}. However, more severe restrictions produced by the elliptic double enable one to fix the $\mathrm{R}$-operator unambiguously (up to an inessential constant). This $\mR$-operator has been constructed in~\cite{DS1} in the form of an integral operator acting on a tensor product of two arbitrary infinite-dimensional representations of the elliptic double. The integral kernel of this operator is given by a product of elliptic gamma functions, Eq.~\p{egamma}. The proof that this integral operator with an elliptic hypergeometric kernel solves the Yang-Baxter equation is based on a number of sophisticated identities: the elliptic beta integral evaluation formula~\cite{spi:umn,spi:essays}, an integral Bailey lemma~\cite{spi:bailey}, and the elliptic Fourier transformation~\cite{spi-war:inversions}.
The elliptic beta integral evaluation formula is equivalent to the star-triangle relation~\cite{BS}. The elliptic double consists of two Sklyanin algebras, whose structure constants, Eq. \p{strcnst}, are parametrized by $2\eta,\tau$ and $\tau,2\eta$, so that their generators commute or anticommute with each other. Finite-dimensional representations of the elliptic double are equivalent (up to a sign) to a tensor product of finite-dimensional representations of the Sklyanin algebras. Since we are interested in finite-dimensional representations and matrix realizations of $\mR$-operators, we will consider only one of the two Sklyanin algebras constituting the elliptic double. In~\cite{CDS2} the integral $\mR$-operator for the elliptic double has been taken as a starting point and restrictions of this operator to finite-dimensional representations have been implemented. In particular, the restriction to an $(n+1)$-dimensional representation in the first space at spin $g_n \equiv (n+1)\eta +\frac{\tau}{2},\, n \in \mathbb{Z}_{\geq 0}$, has been considered. The action of the $\mR$-operator on the generating function of the finite-dimensional representation, Eq. \p{genellip}, is given by the formula\footnote{In order to simplify the notation we denote the complex variable $z_2$ by $z$.} \begin{multline} \lb{redsl2ellip} \mathbb{R}_{12}(u|g_{n} \,,\,g)\,\Gamma(\mp z_1 \mp z_3 + g_{n})\, \Phi(z) = \\ = \frac{\Gamma(\mp z \mp z_3 -\frac{u}{2}+\frac{g_{n}+g}{2})} {\textstyle\Gamma(\mp z_1\mp z -\frac{u}{2}-\frac{g_{n}+g}{2} + \eta+\frac{\tau}{2})}\, \mathrm{M}( n \,\eta)\, \frac{\Gamma(\mp z_1 \mp z -\frac{u}{2}+\frac{g_{n}-g}{2})} {\Gamma(\mp z \mp z_3 -\frac{u}{2}+\frac{g-g_{n}}{2} + \eta + \textstyle\frac{\tau}{2})}\,\Phi(z)\,. \end{multline} An arbitrary holomorphic function $\Phi(z)$ belongs to the second space, where a generic spin-$g$ representation is realized; $z_3$ is an auxiliary parameter of the generating function.
The finite-difference operator from the previous formula \be \lb{Mintw} \mathrm{M}(n \eta) = \sum_{l = 0}^{n} \beta^{(n)}_l(z) \,e^{(n-2l)\eta\dd_z} \ee is an intertwining operator of equivalent representations of the Sklyanin algebra. It was first constructed by A. Zabrodin in~\cite{Z}. In~\cite{DS2} the factorized representation for the intertwiner has been found $$ \mathrm{M}(n \eta) = \mathrm{A}_a(n\eta-\eta)\cdots \mathrm{A}_a(\eta) \mathrm{A}_a(0)\cdot \bar\theta_a^{-n} \left(z\right) \;\;,\;\;\;\;\; a = 3, 4\,. $$ It is a product of finite-difference operators of the first order $$ \mathrm{A}_a(g) = e^{\pi \textup{i}\frac{z^2}{ \eta}}\,\frac{1}{\theta_1(2z | \tau)} \left[ \bar\theta_a \left(z+g+\eta \right)\, e^{\eta \partial_z} - \bar\theta_a \left(z-g-\eta \right)\, e^{-\eta \partial_z} \right]\, e^{-\pi \textup{i}\frac{z^2}{ \eta}} \,. $$ The coefficients $\beta^{(n)}_l(z)$, Eq. \p{Mintw}, can be found in \cite{DS2,Z}. Expanding \p{redsl2ellip} by means of Eq. \p{genellip} and equating the coefficients of the linearly independent functions $\{\varphi^{(n)}_j(z_3)\}$ of the auxiliary parameter $z_3$ on both sides of the formula, we obtain the matrix form of the restricted $\mR$-operator $$ \mathbb{R}_{12}(u|g_{n} \,,\,g)\,\psi^{(n)}_{j}(z_1) = \varphi^{(n)}_{l}(z_1) \bigl(\mathbb{R}_{12}(u|g_{n} \,,\,g)\bigr)_{lj}\, $$ with respect to the pair of bases \p{phi}, \p{psi}: $\{\mathbf{e}_{j}\}_{j=1}^{n+1}$ and $\{\mathbf{f}_j\}_{j=1}^{n+1}$, $$ \mathbf{e}_{j} = \varphi^{(n)}_j(z_1)\;\; , \;\; \mathbf{f}_{j} = \psi^{(n)}_j(z_1) \;\; , \;\; j = 1 , 2 , \cdots , n +1\,. $$ The matrix elements are finite-difference operators of the $n$-th order whose coefficients are constructed out of theta functions. It turns out that the matrix form of the restricted $\mR$-operator is more illustrative than Eq. \p{redsl2ellip}.
Indeed, this matrix solution of the Yang-Baxter equation can be factorized as follows \be \lb{factellip} \mathbb{R}_{12}(u|g_{n} \,,\,g) = V(u_1,z) \,D(z,\dd)\, \mathbf{C} \, V^{T}(u_2,z) \, \mathbf{C}\,. \ee The matrix $D(z,\dd)$ is diagonal, and it is formed by the terms of the intertwining operator $\mathrm{M}(n \eta)$, Eq.~\p{Mintw}, $$ \left(D(z,\dd)\right)_{lj} = \delta_{lj}\, \beta^{(n)}_{l-1}(z)\,e^{(n+2-2l)\eta \dd_z}\,. $$ In the numerical matrix $\mathbf{C}$ only the antidiagonal is nonzero: $\left( \mathbf{C} \right)_{lj} = \delta_{n+2-l , j}$\,. We see that it is convenient to arrange the spectral parameter $u$ and the spin $g$ in the linear combinations $$ u_1 = \frac{u + g}{2} \;\;,\;\; u_2 = \frac{u-g}{2}\,. $$ The matrix $V$ consists of theta functions $\left(V(u,z)\right)_{jl} = V^{(n)}_{jl}(u,z)$ that are specified by the following defining relation $$ \sum_{j = 1}^{n+1} \varphi_j^{(n)}(x)\,V_{jl}^{(n)}(z,u) \equiv \prod_{r = 0}^{n - l}\theta_1\left({\textstyle \pm x + z - u + \frac{g_n}{2} + 2\eta(\frac{n}{2} - l - r) }\right) \prod_{r = 2}^{l}\theta_1\left({\textstyle \pm x + z + u - \frac{g_n}{2} + 2\eta( \frac{n}{2}-l + r) }\right)\,. $$ In view of Eq. \p{theta1tothetabar}, the function $V_{jl}^{(n)}$ is a linear combination of theta functions $\bar\theta_3$ and $\bar\theta_4$, whose arguments are shifted in a certain way. Each monomial contains $j$ times $\bar\theta_4$ and $n-j$ times $\bar\theta_3$, i.e. \begin{align} V_{jl}^{(n)}(z,u) = (-1)^{n+1-j} \,\mathrm{Sym} \sideset{}{_{r=2}^{l}}\prod\bar{\theta}_{a_{r-1}} \left({\textstyle \pm x + z + u - \frac{g_n}{2} + 2\eta( \frac{n}{2}-l + r) }\right) \times \notag\\ \cdot \, \sideset{}{_{r=0}^{n-l}}\prod\bar{\theta}_{a_{r+l}} \left({\textstyle \pm x + z - u + \frac{g_n}{2} + 2\eta(\frac{n}{2} - l - r) }\right), \notag \end{align} where $a_r \in \{3,4\}$. Let us note that an immediate corollary of the defining relation is $V_{jl}^{(n)}(-z,u) = V_{j,n+2-l}^{(n)}(z,u)$, i.e. 
$V(-z,u) = V(z,u)\,\mathbf{C}$. The proof of Eq. \p{factellip} follows the line of reasoning used to prove the factorization formula \p{trigfactor} for the modular double in Sect. \ref{SecDubFact}. It relies on the properties \p{refl}, \p{shift2eta} of the elliptic gamma function. In order to elucidate the formula \p{factellip} we display the matrix factors involved at $n=1$ and $n=2$. Diagonal matrices $D_{(n)}$: \begin{align} D_{(1)} &= e^{\pi\textup{i}z^2/\eta} \,\frac{1}{\theta_1(2z)} \mathrm{diag}( e^{\eta \dd} , - e^{-\eta \dd} ) \,e^{-\pi\textup{i}z^2/\eta} \;, \notag\\ D_{(2)} &= e^{\pi\textup{i}z^2/\eta}\, \frac{1}{\theta_1(2z-2\eta)\theta_1(2z)\theta_1(2z+2\eta)} \mathrm{diag}\left( \theta_1(2z-2\eta) e^{2\eta \dd} , - \frac{\theta_1(4\eta)}{\theta_1(2\eta)}\theta_1(2z) , \theta_1(2z+2\eta) e^{-2\eta \dd} \right) \, e^{-\pi\textup{i}z^2/\eta} \,.\notag \end{align} Matrices $V_{(n)}(u)$: $$ V_{(1)}(u+{\textstyle\frac{\tau}{4}}) = \left( \begin{array}{cc} -\bar\theta_3\left(z - u\right) & -\bar\theta_3\left(z+u \right) \\ \bar\theta_4\left(z - u\right) & \bar\theta_4\left(z+u\right) \end{array} \right), $$ $$ V_{(2)}(u{\textstyle- \frac{\eta}{2}+ \frac{\tau}{4}}) = \left( \begin{array}{ccc} \bar\theta_3\left(z - u\right) \bar\theta_3\left(z - u + 2\eta\right) & \bar\theta_3\left(z - u \right) \bar\theta_3\left(z+u \right) & \bar\theta_3\left(z + u \right) \bar\theta_3\left(z+u- 2\eta\right) \\ \bar\theta_{\{3}\left(z - u\right) \bar\theta_{4\}}\left(z - u + 2\eta\right) & \bar\theta_{\{3}\left(z - u \right) \bar\theta_{4\}}\left(z+u \right) & \bar\theta_{\{3}\left(z + u \right) \bar\theta_{4\}}\left(z+u- 2\eta\right) \\ \bar\theta_4\left(z - u\right) \bar\theta_4\left(z - u + 2\eta\right) & \bar\theta_4\left(z - u \right) \bar\theta_4\left(z+u \right) & \bar\theta_4\left(z + u \right) \bar\theta_4\left(z+u- 2\eta\right) \end{array} \right)\,.
$$ The curly brackets in the second line of the previous formula denote symmetrization with respect to the theta function indices. Let us recall that at $n = 1$ the bases $\mathbf{e}$ and $\mathbf{f}$, Eq. \p{efBasLax}, are identical and the restricted $\mR$-operator coincides with the quantum elliptic Lax operator. The factorization of the elliptic $\mL$-operator appeared before in \cite{KrZa97}. \section*{Acknowledgment} This work is supported by the Russian Science Foundation (project no. 14-11-00598).
\section{Introduction} Metastable states of Bose-Einstein condensates (BECs) made of attractive $^{7}$Li atoms, namely single and multiple bright soliton configurations, have been observed in two different experiments \cite{r1,r2}. These metastable bright solitons, which can travel for long distances without dispersion, have been the subject of various theoretical investigations because of their relevance in nonlinear atom optics \cite{r3,r4}. Recently, repulsive BECs in a quasi-1D ring have been produced and studied \cite{r5}. The case of an attractive BEC in a ring has not yet been experimentally investigated but appears very interesting. For this system a quantum phase transition from a uniform condensate to a bright soliton has been predicted by G.M. Kavoulakis and by R. Kanamoto, H. Saito and M. Ueda \cite{r6}. This prediction is based on mean-field and beyond mean-field numerical results for a 1D Bose gas with contact interaction and periodic boundary conditions \cite{r6}. Later, the same authors have shown that the quantum transition properties of the attractive BEC in a 1D ring are strongly modified if the confining trap is rotating \cite{r7}. \par In this paper we investigate an attractive BEC in a 3D ring, taking into account transverse variations of the BEC width, and show that the phase diagram of the system reveals novel and peculiar structures. In particular, we prove that, contrary to the simple 1D case, the localized soliton has a limited existence and stability domain, which nevertheless extends well beyond the stability domain of the uniform solution. Moreover, we find that the system also supports multi-peak solitons, which are energetically unstable but can be dynamically stable. Finally, we analyze the effect of a rotating ring. In this case the multi-peak solitons are always energetically and dynamically unstable, while the one-peak soliton is stable in a domain that, for a fixed rotation frequency, critically depends on the system parameters.
\section{Toroidal trap} We consider a BEC confined in a toroidal potential given by \begin{equation} U(\rho ,\zeta ) = {1\over 2} m \omega_{\bot}^2 \left( (\rho - R_0)^2 + \zeta^2 \right) \; , \end{equation} where $\rho$ is the cylindrical radial coordinate, $\zeta$ is the cylindrical axial coordinate and $\theta$ is the azimuthal angle. The BEC has transverse harmonic confinement of frequency $\omega_{\bot}$ and the two characteristic lengths of the toroidal trap are $R_0$ and $a_{\bot}=(\hbar/(m\omega_{\bot}))^{1/2}$. In the remaining part of the paper we use scaled units: time in units of $\omega_{\bot}^{-1}$, length in units of $a_{\bot}$ and energy in units of $\hbar \omega_{\bot}$. To simplify the 3D Gross-Pitaevskii equation (GPE) we impose that the order parameter $\Psi (\rho ,\theta ,\zeta ,t)$ of the Bose condensate is the product of a generic azimuthal function $f(\theta ,t)$ and a radial Gaussian function, which has two variational parameters: the width $\sigma(\theta ,t)$ and the coordinate $R$ of the center of mass in the radial direction. The trial wave function is given by \begin{equation} \Psi (\rho ,\theta ,\zeta ,t) = { f(\theta ,t) \over R^{1/2} } {\exp{\left( -{(\rho - R)^2 + \zeta^2 \over 2 \sigma(\theta ,t)^2}\right)} \over \pi^{1/2}\sigma(\theta ,t) } \; . \end{equation} We follow a procedure similar to that described in \cite{r8}, inserting the trial wave function into the GPE Lagrangian density \begin{equation} {\cal L} = \Psi^* \left[i {\partial \over \partial t} + {1\over 2} \nabla^2 - U - {1\over 2} \Gamma |\Psi|^2 \right] \Psi \; , \end{equation} where $\Gamma = 4\pi a_s N/a_{\bot}$, with $N$ the number of condensed atoms and $a_s<0$ the attractive s-wave scattering length. The trapping potential of Eq. (1) has a cusp at the origin, a feature not usually present in experimental traps, but this is expected to have minor effects on physical properties if only a small fraction of particles are near the origin.
For example, if $R/\sigma >3/2$, the fraction of particles belonging to the radial region $[0,R-\sigma ]$ is below $1\%$. In the range $1<R/\sigma <3/2$ the population near the origin is not so small and the results reported below are not fully reliable. We integrate over the $\rho$ and $\zeta$ coordinates \cite{r9}. In this way, from the Euler-Lagrange equations of the resulting effective Lagrangian density, we get the nonpolynomial Schr\"odinger equation (NPSE) \begin{equation} \left[ i\frac{\partial}{\partial t} + \frac{1}{2}\frac{\partial^2}{\partial z^2} - T(n)\right] \psi = 0 \; , \label{dyn} \end{equation} where $\psi(z,t)=f(\theta ,t)/R^{1/2}$ is the azimuthal wave function of the condensate with $z = R \theta$, $n=|\psi|^2$ is the density profile normalized to unity and $T(n)=\frac{dW(n)}{dn}$ with $W(n)=n(1-gn)^{1/2}$. The scaled interaction strength $g$ is given by $g = |\Gamma|/(2\pi) = 2N|a_s|/a_{\bot}$. In toroidal geometry the solution $\psi(z,t)$ must obey the periodic boundary condition $\psi(0,t)=\psi(L,t)$ with $L=2\pi R$. Within our variational approach the transverse width $\sigma$ of the BEC is given by \begin{equation} \sigma^2 = \left( 1 - g n \right)^{1/2} \; . \end{equation} For $g n \ll 1$ one has $\sigma \simeq 1$, $T(n)\simeq 1 - gn$, and the NPSE reduces to the 1D GPE. In addition, we find that the variational parameter $R$ is implicitly given by the equation \begin{equation} R - {1\over R} \int_{0}^{2\pi R} \left|{\partial \psi\over \partial z}\right|^2 dz + {g\over 2 \sigma^2} \int_0^{2\pi R} |\psi|^4 dz = R_0 \; . \label{nuovo} \end{equation} This formula shows that the effective radius $R$ of the BEC ring depends on the interaction strength $g$. We have verified that for a static and attractive BEC the effective radius $R$ decreases very slowly with increasing $g$, while for a repulsive BEC the opposite is true.
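Both the weak-interaction limit of $T(n)$ and the behaviour of the effective radius can be checked numerically. The sketch below (ours; the closed form of $dW/dn$ is our evaluation, and the values $R_0=3$, $g=10$ are illustrative) uses the uniform-condensate form of Eq. (\ref{nuovo}), $R+g/\left(4\pi R^2(1-g/(2\pi R))^{1/2}\right)=R_0$, discussed in the next paragraph:

```python
import math

def W(n, g):
    # NPSE nonlinearity of Eq. (4): W(n) = n (1 - g n)^(1/2)
    return n * math.sqrt(1.0 - g * n)

def T(n, g):
    # our closed-form evaluation of T(n) = dW/dn
    return (1.0 - 1.5 * g * n) / math.sqrt(1.0 - g * n)

# T really is dW/dn (central finite difference)
g, n, h = 0.4, 0.3, 1e-6
assert abs(T(n, g) - (W(n + h, g) - W(n - h, g)) / (2.0 * h)) < 1e-8

# weak-interaction (1D GPE) limit: T(n) ~ 1 - g n for g n << 1
assert abs(T(0.005, 0.4) - (1.0 - 0.4 * 0.005)) < 1e-5

def ring_radius(R0, g, iters=200):
    # uniform-condensate form of Eq. (6), solved by fixed-point iteration:
    # R = R0 - g / (4 pi R^2 (1 - g/(2 pi R))^(1/2))
    R = R0
    for _ in range(iters):
        R = R0 - g / (4.0 * math.pi * R**2 * math.sqrt(1.0 - g / (2.0 * math.pi * R)))
    return R

R0, g = 3.0, 10.0     # illustrative values, safely below collapse (g < 2 pi R)
R = ring_radius(R0, g)
residual = R + g / (4.0 * math.pi * R**2 * math.sqrt(1.0 - g / (2.0 * math.pi * R))) - R0
assert abs(residual) < 1e-10
assert 0.01 < (R0 - R) / R0 < 0.10   # the radial shift is indeed only a few percent
```

For these parameters the self-consistent radius comes out a few percent below $R_0$, in line with the discussion that follows.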
In practice, because for an attractive BEC the strength $g$ cannot exceed the scaled inverse density $n^{-1}$, the effective radius $R$ is always close to $R_0$: for a uniform BEC, where Eq. (\ref{nuovo}) becomes $R+g/\left( 4\pi R^2 (1-g/(2\pi R) )^{1/2} \right)=R_0$, it is easy to check that the relative difference between $R$ and $R_0$ is typically of a few percent and reaches $10\%$ only near the collapse. \par The very good accuracy of the NPSE in approximating the 3D GPE with a transverse harmonic potential has been verified in \cite{r8} for both positive and negative scattering length. In the derivation of the NPSE one neglects the space and time derivatives of the width $\sigma(z,t)$. By including these terms one gets the coupled equations derived by Kamchatnov and Shchesnovich \cite{r10}, but it is not clear whether these terms give an improvement. We have verified that, according to the 3D GPE, the single-peak bright soliton of an attractive BEC in an infinite cylinder collapses at the critical strength $g_c/2=0.676$ (see also \cite{r11}), a value very close to the NPSE prediction: $g_c/2=2/3=0.66{\bar 6}$ \cite{r8}. \section{Uniform and localized solutions} The NPSE conserves both the norm of the wavefunction and the total energy $E$ of the configuration. The stationary solutions follow from Eq. (\ref{dyn}) by looking for solutions of the form $\psi(z,t)=\phi(z) e^{-i\mu t}$ for some chemical potential $\mu$. The resulting non-linear eigenvalue equation is the static NPSE. In toroidal geometry, the uniform solution $\phi(z)=1/\sqrt{L}$ is always present for $g <L$, i.e. with density $N/L < a_{\bot}/(2|a_s|)$, and corresponds to the eigenvalue $\mu=T(1/L)$. In addition, other, less trivial, profiles may be present. Beyond this limit (i.e. for $g>L$) the attraction is too strong and no regular solution is possible, leaving the BEC collapse as the only possibility.
A generic (real) solution $\phi(z)$ of the stationary NPSE may be interpreted as the classical ``time'' evolution of a fictitious particle moving in a potential $V(\phi)=\mu \phi^2 - W(\phi^2)$. As a consequence, the ``energy'' conservation equation for this motion reads: \begin{equation} \frac{1}{2}\left (\frac{d\phi}{dz}\right )^2 + V(\phi) = \epsilon \; . \end{equation} According to the values of the two parameters $\mu$ and $\epsilon$ two kinds of ``trajectories'' may be realized: for $\mu > 0 $ and $\epsilon > 0$ the solution $\phi(z)$ oscillates between a positive and a negative value, thereby crossing zero, while for $\epsilon < 0$ the solution $\phi(z)$ remains always positive. In the first case the solutions are named {\sl nodal solitons} while in the second case {\sl nodeless solitons}. \begin{figure} \centerline{\psfig{file=ring-f1.ps,height=2.5in,clip=}} {FIG. 1 (color online). Existence diagram in the $(R,g)$ plane, where $g=2N|a_s|/a_{\bot}$ is the interaction coupling and $R=L/(2\pi )$ is the azimuthal radius of the ring (in units $a_{\bot}$). Uniform solution: exists for $g < 2\pi R$. One-peak localized nodeless solution: exists between the two solid lines. Nodeless two-peak localized solution: exists between the two dashed lines. Nodal two-peak localized solution: exists between the two dot-dashed lines.} \end{figure} \par For fixed $g$ and $L$, the two parameters $\mu$ and $\epsilon$ are implicitly determined by the two consistency equations: \begin{equation} L=\sqrt{2}N_s\,\int_{\phi_{min}}^{\phi_{max}} d\phi \frac{1}{\sqrt{\epsilon -V(\phi)}} \end{equation} and \begin{equation} 1=\sqrt{2}N_s\,\int_{\phi_{min}}^{\phi_{max}} d\phi \frac{\phi^2}{\sqrt{\epsilon -V(\phi)}} \; , \end{equation} where $N_s$ is the number of maxima of the soliton, i.e. the number of ``periods'' of the corresponding orbit, and $\phi_{min}$ ($\phi_{max}$) is the minimum (maximum) value attained by $\phi(z)$ during the periodic oscillation.
As customary in Newtonian problems, the extremal values of $\phi$ are implicitly defined by the equation $V(\phi)=\epsilon$ in terms of $\epsilon$ and $\mu$. The first equation comes from the commensurability requirement of the solitonic solution, which, after an integer number of periods, must close without discontinuities. The second equation is just the normalization condition of the wavefunction. \par The two consistency equations (8) and (9) have been solved and the domain of existence of soliton solutions of given topology has been determined in the $(R,g)$ plane. Results are shown in Fig. 1. The numerical analysis shows that two solutions of the same symmetry may be present in a portion of the existence domain. For the sake of clarity, in Fig. 1 we report the existence domain only for the one-peak soliton ($N_s=1$) and for the two-peak soliton ($N_s=2$). Note that the existence diagram of the 1D GPE is markedly different from that of the NPSE. In fact, as previously stressed, the 1D GPE does not take into account the transverse dynamics and, as a consequence, no collapse of solitonic solutions is predicted by the 1D GPE. \section{Energetic stability} We now turn to the discussion of the energetic stability \cite{r12} of the previously defined solitonic configurations. One can rigorously prove that the energetic stability can be expressed in terms of the eigenvalues $\lambda_l$ of $H\pm nT'$, where \begin{equation} H=-\frac{1}{2}\frac{d^2}{dz^2} +T(n)+n\frac{dT(n)}{dn}-\mu \end{equation} and $n(z)$ is the density profile of the stationary solution. The stationary solution $\phi(z)$ is energetically stable only if one of the two following conditions is satisfied: 1) all the eigenvalues $\lambda_l$ are non-negative; 2) a single negative eigenvalue $\lambda_0$ is present and $ \frac{dg}{d\mu}\le 0 $.
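The two-condition criterion just stated is easy to transcribe; a minimal sketch (the list-of-eigenvalues encoding is an assumption of this illustration):

```python
def energetically_stable(eigenvalues, dg_dmu):
    """Necessary condition quoted above: stability requires either
    1) all eigenvalues non-negative, or
    2) exactly one negative eigenvalue together with dg/dmu <= 0."""
    negatives = sum(1 for lam in eigenvalues if lam < 0)
    return negatives == 0 or (negatives == 1 and dg_dmu <= 0)
```

In practice the eigenvalues would come from the finite-difference diagonalization of $H\pm nT'$ described below; the helper only encodes the decision rule.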
When we apply this general analysis to the simple case of the uniform stationary solution in toroidal geometry, $\phi(z)=1/\sqrt{L}$, we find that it satisfies the latter of the two conditions previously stated, and the solution is stable until the second eigenvalue becomes negative, triggering the instability. The resulting stability condition $(2\pi /L)^2 + 4nT' \ge 0$ explicitly becomes: \begin{equation} {\pi^2 \over gL} \left ( 1-\frac{g}{L}\right )^{3/2} \ge \left (1-\frac{3g}{4L}\right ) \; . \end{equation} In the large-$L$ (1D) limit the left-hand side reduces to $\pi^2/(gL)$ and the condition becomes $gL\le \pi^2$, which is precisely the result one finds with the 1D GPE \cite{r6}. The stability analysis of the solitonic configurations, however, does not admit a general analytic solution, and the eigenvalue equations of $H\pm nT'$ must be investigated numerically. \begin{figure} \centerline{\psfig{file=ring-f2.eps,height=3.5in,clip=}} {FIG. 2. Energy $E$ and chemical potential $\mu$ of the one-peak solitons as a function of the coupling $g$ for $L=10$. The energetically stable solutions are the solid ones, the unstable solutions are the dashed ones.} \end{figure} \par The operators $H\pm nT'$ may be numerically diagonalized by introducing a finite mesh in the interval $0 \le z < L$ and approximating the differential operator with the corresponding finite difference operator. The two resulting equations then give rise to a matrix eigenvalue problem. The numerical results show that: \begin{itemize} \item in the regions where the uniform solution satisfies the energetic stability condition $\frac{dg}{d\mu}\le 0$ no other solitonic wavefunction can be stabilized; \item only the one-peak soliton is energetically stable in a portion of the domain where it is defined; \item when distinct one-peak solutions exist for the same values of $g$ and $L$, the soliton is stable only in the branch corresponding to the lowest energy.
\end{itemize} The latter remark is illustrated in the upper panel of Fig. 2, where the energy $E$ of the one-peak solution is shown as a function of $g$ for $L=10$. It is also interesting to plot the chemical potential $\mu$ versus $g$ in the stable and unstable branch. The case of $L=10$ previously analyzed is shown in the lower panel of Fig. 2, where it appears that the onset of instability corresponds to an extremum of the coupling constant $g$ as a function of the chemical potential. In fact, this immediately follows from the previous analysis which led to $\frac{dg}{d\mu}\le 0$: if $H+nT'$ admits a single negative eigenvalue, $\frac{dg}{d\mu}=0$ signals the onset of the instability. \begin{figure} \centerline{\psfig{file=ring-f3.ps,height=3.9in,clip=}} {FIG. 3 (color online). Attractive BEC in a ring rotating with frequency $\Omega$. Energetic-stability diagram in the plane $(R,g)$. The uniform solution is the ground-state only below the solid line. The one-peak localized solution exists between the two dashed lines but it is the ground-state only between the solid line and the upper dashed line. Note that for almost all $R=L/(2\pi )$ the solid curve is superimposed on the lowest dashed curve: only for $1<R<1.5$ is the lowest dashed line below the solid line. } \end{figure} \par The energetic stability regions of stationary solutions of the NPSE in a ring are shown in the top panel ($\Omega =0$) of Fig. 3. Below the solid curve the uniform solution is energetically stable. The one-peak soliton, which exists between the two dashed curves, is energetically stable in the domain limited by the solid and the upper dashed line. In the remaining regions of the phase diagram no energetically stable stationary solution is present and the BEC is expected to collapse. Note that for large $R=L/(2\pi )$ the upper dashed line tends to $g = 4/3$, that is the formula one finds for the collapse of a bright soliton in an infinite cylinder (see L.
Salasnich {\it et al.} 2002 in \cite{r3}). It is not difficult to show that for large $R$ the existence domain of an $N_s$-peak bright soliton is instead given by \begin{equation} 0< g < {4\over 3} N_s \; . \end{equation} See for instance the existence domain of the two-peak solitons shown in Fig. 1. As previously stressed, such upper bounds do not exist within the 1D GPE approach: the collapse of single and multiple bright solitons is due to the transverse dynamics of the condensate. \par The other panels of Fig. 3 show the effect of a rotating toroidal trap on the attractive BEC. The analysis is developed by observing that one has to include the centrifugal operator \begin{equation} - \Omega {\hat M_{\theta}} = i \Omega {L\over 2\pi } {\partial \over \partial z} \end{equation} into Eq. (\ref{dyn}), where $\Omega$ is the rotation frequency (in units of the frequency $\omega_{\bot}$ of the harmonic transverse confinement) and ${\hat M_{\theta}}$ is the azimuthal angular momentum. As shown in Ref. \cite{r7} by using the 1D GPE, the uniform state of the attractive BEC is superfluid, i.e. there exists a critical frequency $\Omega_c$ below which the uniform state remains stationary; only above this critical frequency does the uniform state rotate. In general the stationary uniform solution is thus given by $\psi(z) =e^{i 2\pi z j /L}/\sqrt{L}$, where the integer number $j$ is a function of $\Omega$ and $L$, namely \begin{equation} j(\Omega, L) =int[{\Omega L^2 \over 4\pi^2} + {1\over 2}] \; , \end{equation} with $int[x]$ the maximum integer that does not exceed $x$. The localized soliton solution has a different behavior: its angular momentum is not quantized \cite{r7}, and this means that the quantum phase transition from the uniform to the localized state suppresses the superfluidity of the system \cite{r7}. Setting $\psi(z) = \phi(z) e^{i\alpha(z)}$, where both $\phi (z)$ and $\alpha(z)$ are real, from the stationary NPSE with the centrifugal operator of Eq.
(13) one finds \begin{equation} {d\alpha \over dz} = \Omega {L\over 2 \pi} + {c \over \phi^2} \; , \end{equation} where the constant $c$ is given by the equation \begin{equation} \Omega R^2 + {c \over 2\pi } \int_0^L {dz\over \phi(z)^2 } = j \; . \end{equation} In addition, the function $\phi(z)$ is obtained from the two consistency equations (8) and (9) with $V(\phi )$ now given by \begin{equation} V(\phi ) = \mu + {1\over 2} \left({\Omega L\over 2\pi}\right)^2 - W(\phi ) + {c^2\over 2 \phi^2} \; , \end{equation} where $W(\phi ) = \phi^2 (1 - g \phi^2)^{1/2}$. \par Following the previous analysis one finds that the energetic stability condition reads \begin{equation} 1 - 4 \left( {\Omega L^2\over 4\pi^2} - j(\Omega, L) \right)^2 \ge {g L\over \pi^2} {(1-\frac{3g}{4L}) \over ( 1-\frac{g}{L} )^{3/2} } \; . \end{equation} In the 1D limit of large $L$ the previous formula gives $1 - 4 \left( {\Omega L^2\over 4\pi^2} - j(\Omega, L) \right)^2 \ge gL/\pi^2$, that is the result found in Ref. \cite{r7}. Fig. 3 shows the energetic stability diagram in the plane $(R,g)$ for two non-zero values of the rotation frequency $\Omega$. Below the solid line the stability condition holds and the uniform state is energetically stable. The periodic structure for $\Omega\neq 0$ is a consequence of the periodic quantization of the angular momentum $j(\Omega, L)$. In particular, for a fixed $\Omega$, the solid line touches the horizontal axis $g=0$ for discrete values of $R=L/(2\pi )$, which correspond to jumps in the quantum number $j$. \begin{figure} \centerline{\psfig{file=ring-f4.eps,width=3.4in,height=3.6in,clip=}} {FIG.
4 Angular momentum $M_{\theta}=\langle \hat{M}_{\theta} \rangle$ and azimuthal width $\Sigma = \langle z^2 \rangle^{1/2}$ of the one-peak soliton as a function of the coupling $g$ for $\Omega = 0.1$ and two values of $R$: $R=2.2$ solid line, $R=2.3$ dashed line.} \end{figure} The one-peak solitonic solution exists between the two dashed lines and it is energetically stable between the solid line and the upper dashed line. Interestingly, Fig. 3 shows that for large $R$ the lower dashed line and the solid line practically coincide. As in the non-rotating case, we find that also for $\Omega\neq 0$ solutions with more than one peak are not energetically stable. \par In Fig. 4 we plot the angular momentum $M_{\theta}$ and the azimuthal width $\Sigma$ of the rotating one-peak bright soliton. The figure shows that the angular momentum is not quantized: it approaches the value $M_{\theta}=\Omega R^2$ of a ``classical particle'' for $g$ close to the collapse ($g\simeq 4/3$), but it becomes quantized, i.e. $M_{\theta}=j$, where $j$ depends on $\Omega$ and $R$, for the small value of $g$ that gives the transition to the uniform solution. Obviously, when the angular momentum becomes quantized the width $\Sigma$ of the bright soliton coincides with that of the uniform solution. Fig. 4 also shows that the width $\Sigma$ is independent of the ring radius $R$ when the bright soliton is close to collapse; in this case $g\simeq 4/3$ and $\Sigma \simeq 0.85$. \section{Dynamical stability} It is important to stress that energetic stability implies dynamical stability, but the converse is not true \cite{r12}. In order to investigate the dynamical stability of stationary solutions in the ring one can solve the Bogoliubov-de Gennes (BdG) equations which give the elementary excitations $\epsilon_l$ ($l=1,2,3,...$) of the system: the appearance of a complex excitation signals dynamical instability, while a negative excitation implies energetic instability \cite{r12}.
For the uniform solution $\psi(z) =e^{i 2\pi z j /L}/\sqrt{L}$ one finds $$ \epsilon_l = -\left( \Omega -{4\pi^2\over L^2} j(\Omega,L) \right) l $$ \begin{equation} + {1\over 2} \left\{ ({2 \pi l \over L})^2 \left[ ({2 \pi l \over L})^2 - {4g\over L} {(1 - {3g\over 4L})\over (1-{g\over L})^{3/2} } \right] \right\}^{1/2} \; . \end{equation} This result confirms that the uniform solution is energetically stable if the previously written stability condition is satisfied; moreover, it shows that the dynamical stability of the uniform solution is independent of $\Omega$. For localized solutions the BdG equations are computationally rather demanding. For this reason we have analyzed the dynamical stability by numerically solving the time-dependent NPSE, taking as initial condition the stationary localized solution $\phi(z)$ with a very weak perturbation. \begin{figure} \centerline{\psfig{file=ring-f5.eps,width=3.2in,height=3.2in,clip=}} {FIG. 5. Left panels: density profile $\rho(z)$ of the solitonic solutions. Right panels: time-dependence of the mean squared widths $\langle z^2 \rangle - \langle z \rangle^2$ for the weakly perturbed solitonic solutions. Ring axial length: $L=15$. Interaction strength: $g=1$. Rotational frequency: $\Omega=0$. } \end{figure} \par In Fig. 5 we plot the density profile $\rho(z)=|\phi(z)|^2$ of the one-peak and the nodal two-peak solutions, choosing $L=15$, $g=1$ and two values of $\Omega$. In addition, we plot the time-evolution of the mean squared width $\langle z^2\rangle - \langle z \rangle^2$. Its behavior reveals that these solitonic solutions are dynamically stable. We have investigated the dynamical stability for various initial conditions.
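Both the quantization rule for $j$ and the uniform-branch spectrum are easy to evaluate numerically. A minimal sketch (illustrative parameters; the interaction enters through $-(4g/L)(1-3g/(4L))/(1-g/L)^{3/2}$, i.e. the same $4nT'$ combination that appears in the stability condition of the previous section):

```python
import cmath
import math

def j_quantum(Omega, L):
    """Quantum number of the uniform state: j = int[Omega*L^2/(4*pi^2) + 1/2]."""
    return math.floor(Omega * L**2 / (4 * math.pi**2) + 0.5)

def eps(l, g, L, Omega=0.0):
    """Elementary excitation of the uniform solution on the ring; a complex
    value signals dynamical instability."""
    k2 = (2 * math.pi * l / L) ** 2
    four_nT = -(4 * g / L) * (1 - 3 * g / (4 * L)) / (1 - g / L) ** 1.5
    drift = -(Omega - (4 * math.pi**2 / L**2) * j_quantum(Omega, L)) * l
    return drift + 0.5 * cmath.sqrt(k2 * (k2 + four_nT))

# Critical coupling of the uniform branch for L = 10, located by bisection on
# the stability condition pi^2/(g L) (1 - g/L)^{3/2} >= 1 - 3g/(4L).
L = 10.0
lo, hi = 1e-9, L - 1e-9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if math.pi**2 / (mid * L) * (1 - mid / L) ** 1.5 >= 1 - 3 * mid / (4 * L):
        lo = mid
    else:
        hi = mid
g_crit = lo
```

Just below `g_crit` the lowest excitation is real, and just above it becomes complex, so the energetic and dynamical thresholds of the uniform branch coincide at $\Omega=0$.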
For $\Omega =0$ we have found that: \begin{itemize} \item the one-peak soliton is dynamically stable where it exists; \item the nodal two-peak soliton is dynamically stable in the plane $(R,g)$ only below the upper curve of existence of the one-peak soliton; \item the nodeless two-peak soliton is dynamically unstable. \end{itemize} Similar results are found for solitonic solutions with a larger number $N_s$ of peaks. \par The case $\Omega\ne 0$ leads to similar results, keeping in mind, however, that nodal solitons do not exist under rotation. For high rotational frequencies, namely when $\Omega$ approaches the frequency of the transverse harmonic confinement (that is, $1$ in our units), the effect of the centrifugal force on the transverse dynamics becomes relevant. For a given angular momentum $j\simeq R^2 \Omega$, the centrifugal force increases the effective radius $R$ of the BEC ring. For a rotating and uniform ideal BEC one easily finds from Eq. (\ref{nuovo}) that $R=R_0/(1-\Omega^2)$. This formula also holds for an azimuthally localized ideal BEC. At the critical frequency $\Omega =1$ the radius $R$ diverges, and this means that the Bose condensate is no longer confined. As in the non-rotating case, an investigation of Eq. (\ref{nuovo}) shows that the effect of the interaction strength $g$ on the effective radius $R$ is negligible for an attractive BEC. \section{Conclusions} We have predicted novel quantum phases for an attractive Bose condensate in a ring. Our results can be experimentally tested with the optical and magnetic traps recently developed \cite{r5,r13}, where a time-dependent stirring potential can be used to set the system into rotation. For instance, by choosing $10^{3}$ $^7$Li atoms ($a_s=-1.4$ nm) in a toroidal trap with $L\simeq 25$ $\mu$m and $a_{\bot}\simeq 3$ $\mu$m, a sequence of transitions between the uniform state and solitonic configurations takes place when the stirring frequency $\Omega$ is varied between $0$ and $1$ kHz.
These experiments will open the way to the observation of striking quantum phenomena such as the solitonic condensate without superfluidity and the dynamically induced phase transition from uniform to localized states. \section*{Acknowledgements} The authors thank F. Dalfovo, A. Recati, S. Stringari and C. Tozzo for useful suggestions.
\section{Introduction} In the world of fault-tolerance, distributed tasks that admit self-stabilizing solutions have long been studied \cite{dolev}. An algorithm is self-stabilizing if, starting from arbitrary initial values in the registers used by the algorithm, it eventually stabilizes to a correct final value. In particular, when looking at some computed values, the algorithm can output incorrect values as long as it eventually outputs correct ones. In contrast, an algorithm is snap-stabilizing if it can withstand arbitrary initial values and output only correct values \cite{snapstab}. Snap-stabilizing tasks form a subset of self-stabilizing tasks where the algorithm is required to withhold computed values until it is ``sure'' that they are correct. Snap-stabilizing algorithms have very interesting properties: they can withstand arbitrary transient failures while, at the same time, improving on self-stabilizing algorithms on a key point: the stabilization moment is not unknown; when a response is given, it is correct. We present here the first characterization of snap-stabilizing tasks on anonymous networks. Not only do we reuse techniques borrowed from the study of non-stabilizing tasks in anonymous networks and show that they also apply here, we complete the correspondence between self/snap-stabilizing tasks and termination detection. How snap-stabilizing tasks differ from self-stabilizing tasks has not, to the best of our knowledge, been considered so far in anonymous networks. Here we show that, on anonymous networks, there are tasks that admit self-stabilizing solutions but that have no snap-stabilizing ones. We show that the difference between self- and snap-stabilization is actually the same as the one obtained for non-stabilizing tasks when considering implicit vs. explicit termination. This result completes the understanding of the computational power of fault-tolerant and non-fault-tolerant algorithms.
\subsection{Our Result} We give the first characterization of the computability of snap-stabilization. In order to show that it complements known results about self-stabilizing and non-self-stabilizing tasks in anonymous networks, we recall the previous equivalence established by Boldi and Vigna. Solving a task means satisfying a given specification linking input labels to output labels for a given set of graphs. Informally, an algorithm has implicit termination if it is allowed to write a (tentative) solution to the dedicated \out register several times. An algorithm has explicit termination when it is possible to write to \out only once. Whenever the \out register is defined, this means that (locally) the algorithm has terminated its computation. \begin{theorem}[Boldi and Vigna \cite{BVanonymous,BVselfstab}] A task is solvable on a family of anonymous networks by a self-stabilizing algorithm if and only if it is solvable with implicit termination. \end{theorem} The ``only if'' part being obvious, the merit of \cite{BVselfstab} is to show that there is a universal algorithm to solve tasks (that are at all solvable) by a self-stabilizing algorithm on anonymous networks, and that the condition for solvability (informally speaking: stability of the specification by lifting) is exactly the one required by implicit termination. In other words, once a task is solvable with implicit termination, it admits a reliable self-stabilizing solution without any additional condition. \begin{theorem}[this paper]\label{snapequiv} A task is solvable on a family of anonymous networks by a snap-stabilizing algorithm if and only if it is solvable with explicit termination. \end{theorem} As in the Boldi and Vigna result, the ``only if'' part is immediate. We therefore focus on establishing the ``if'' part.
So the main contribution of this paper is a universal snap-stabilizing algorithm that solves the task at hand if this task satisfies the condition for being solvable by an algorithm with explicit termination. This condition is given in Theorem~\ref{CN}. It is the same as the one given in \cite{CGMterm} for solvability with explicit termination. We first prove our results for terminating tasks in the asynchronous model; then we show how to extend the technique to long-lived tasks in the synchronous model (for simplicity of exposition). The roadmap is the following. Section 2 introduces the model of computation and the definition of snap-stabilizing algorithms. Section 3 introduces the algebraic tools that are necessary to express the condition in Theorem~\ref{CN}. Section 4 describes a universal snap-stabilizing algorithm based upon Mazurkiewicz' enumeration algorithm \cite{MazurEnum}. \subsection{Related Work} Given a distributed task, the condition for it to be solvable by an algorithm with explicit termination was first given in \cite{BVanonymous}. The presentation we will use in this paper is the one given in \cite{CGMterm}. Instead of the View algorithm of \cite{YKsolvable,BVanonymous}, we use Mazur\-kie\-wicz' algorithm \cite{MazurEnum}. A variation of Mazur\-kie\-wicz' algorithm was proved to be self-stabilizing in \cite{selfstabenum}, in the Mazurkiewicz model, a model that offers strong synchronization between neighbours. We present here a version for the cellular model. Snap-stabilizing algorithms were introduced in \cite{snapstab}. A more recent exposition can be found in \cite{recentsnapstab}. In \cite{breakingin,recentsnapstab}, a general transformation technique is given to obtain simple snap-stabilizing algorithms from self-stabilizing ones.
The authors expose a snap-stabilizing transformer for non-anonymous networks, which implies that, in networks with identities, the tasks that are solvable by snap-stabilizing algorithms are exactly the ones that are solvable by self-stabilizing algorithms. In this paper, we prove the task equivalence between snap-stabilization and explicit termination in anonymous networks and show that this implies that the expressivity of snap-stabilizing algorithms differs from that of self-stabilizing algorithms in the anonymous context. In \cite{probasnap}, a probabilistic correction condition is proposed for snap-stabilizing algorithms. A Las Vegas algorithm is an algorithm whose termination is not guaranteed but whose output is always correct. The condition of \cite{probasnap} defines, in a sound way, what is a Las Vegas stabilizing algorithm that is robust to arbitrary corruption of the initial memory. Anonymous networks are networks where nodes do not have unique names. They have been the subject of many works since the seminal work of Angluin \cite{angluin}. There have been two main universal algorithms proposed to solve problems in this setting. The first one was proposed by Yamashita and Kameda in \cite{YKsolvable}. Its universality has been extended by Boldi and Vigna in \cite{BVanonymous} (explicit termination) and \cite{BVselfstab} (implicit termination). It computes the (possibly infinite) universal cover of the underlying graph. The second one computes a minimal base of the underlying graph. It was presented by Mazurkiewicz \cite{MazurEnum} to solve enumeration. Its universality has been extended in \cite{GMelection}. Its extension to numerous other models has been done by Chalopin in \cite{theseJC}; its application to the Election problem in the message-passing model has been done in \cite{CGMelection}. Boldi and Vigna have also shown how to derive a minimal base (in a finite time) from the universal covering \cite{BVselfstab}.
One of the main advantages of Mazurkiewicz' algorithm is that it is always stabilizing, contrary to the View algorithm of \cite{BVselfstab}, where it is necessary to know or derive an estimate of the size to make it stabilizing. On the distributed computability side, the first complete characterization of tasks that admit self-stabilizing algorithms has been given in \cite{BVselfstab}. Here, we use a mix of different techniques from the second approach, some of which were first introduced in \cite{CGMterm}. There is an unpublished version of Mazurkiewicz' algorithm in the communication model of this paper, but without transient faults, in \cite[chap. 4]{theseJC}, where the model is coined the ``cellular model''. \section{Definitions and Notations} \subsection{Basic Definition for Computability} A network is represented by a graph or digraph $G$ where vertices correspond to nodes and edges or arcs correspond to (possibly asymmetric) communication links. The set of vertices is denoted by $V(G)$. We consider a fixed set of labels $\Lambda$. Labels are used to represent the local states of parts of the communication network. So we consider labelled graphs in the general sense. Nodes can be labelled (internal state of the nodes), arcs can be labelled (messages in transit, port numbering). We will use $\bG$ to denote a (di)graph with all its associated labels. Since the inputs can be encoded in the labels, we consider all labelled graphs as the possible inputs for distributed algorithms. The set of all labelled graphs is denoted $\allg$. Given a labelled graph $\bG=(G,\lambda)$, where $G$ is the underlying graph and $\lambda:V(G)\mapsto\Lambda$ is the labelling function, we will conveniently denote by $(\bG,\lambda')$ the graph $G$ labelled by $\lambda\times\lambda'$. Given a network $\bG\in\allg$ and a vertex $v$ in $\bG$, we assume that the state of a node $v$ during the execution of any algorithm is of the form $(\lambda(v),\mem(v), \out(v))$.
This tuple of registers has the following semantics: $\lambda(v)$ is a read-only part of the state, $\mem(v)$ is the internal memory of $v$, and $\out(v)$ will contain the output value, i.e. the result of the computation at node $v$. When the register \out is not defined, it contains the value $\bot$. A distributed algorithm is an algorithm that is replicated on every node and operates on the local state of the node $v$ by way of communication with the neighbours of $v$. The communication here is done in the \emph{locally shared variables} model of Dijkstra, which is also called the \emph{cellular} model \cite{theseJC}. A distributed algorithm is a set of rules (pairs of precondition and command) that describe how a node has to change its current state (the command) according to its own state and the state of \emph{all its neighbours} (the precondition or guard). We say that a rule $R$ is activable at a node $v$ if the neighbourhood of $v$ satisfies the precondition of $R$. In this case, the vertex $v$ is also said to be activable. If a rule $R$ is activable at $v$, an atomic move for $v$ consists of reading the states of all its neighbours, computing a new value of its state according to the command of $R$, and writing this value to the register \mem and/or \out. If more than one rule is activable at a node, one is chosen non-deterministically. Of course, it is possible to have priorities for rules, and thus to discard this non-determinism. A daemon is a distributed adversary that chooses at each step a set of activated nodes among the activable ones. If only one node can be chosen at a time, this is called the \emph{central daemon}. If any set can occur, this is called the \emph{asynchronous daemon}. If the set of activated nodes is exactly the set of activable nodes, this is called the \emph{synchronous daemon}. Given a daemon, an execution, or run, is a sequence of atomic moves of activated nodes.
We consider here the asynchronous daemon (whose executions include those of the synchronous daemon). A vertex-relabelling relation is a relation between labelled graphs where the underlying graphs are identical. The evolution of the global system can be seen as a sequence of relabelling steps where only the state part of the labels of the graphs is modified, according to the application of rules prescribed by the algorithm at a set of locations that depends on the kind of daemon that is considered. Under a given execution $\rho$, the evolution of the global configuration of the network \bG is described by the sequence of labelled graphs $\bG,(\bG,\mem_1\times\out_1),(\bG,\mem_2\times\out_2),\cdots$; this is usually abbreviated to $\bG_0,\bG_1,\bG_2,\cdots$ for convenience. If the sequence is finite, that is, if there is a step $t\in\N$ where no rule is applicable, or if there is an infinite suffix starting from step $t\in\N$ where the registers \out are not modified, we say that the execution has stabilized and denote by $\bG^f$ the graph labelled with $\out_t$, $\bG^f=(\bG,\out_t)$. It is the \emph{terminal state} of the computation. A terminating problem is a distributed problem for which it is expected that the nodes have final values. For example, the Election problem is a terminating problem; it should be compared with the Mutual Exclusion problem, where nodes have to solve indefinitely the problem of entering the critical section one node at a time. We now formally define what a terminating distributed problem is. \begin{definition} A \emph{terminating task} is a couple $(S,\gfam)$ where $\gfam\subset\allg$ is a family of labelled graphs and $S$ is a vertex-relabelling relation on \allg. \end{definition} The \emph{specification} $S$ is a general way to describe our distributed problem in terms of a relation between inputs and outputs. This description is independent of the \emph{domain} $\gfam$ where we want to solve our problem.
For example, the well-known Election problem is specified by $S_{LE}$ such that $\bG S_{LE} \bG'$ if $\bG'=(\bG,\lambda')$ has exactly one node labelled by the special label \textsc{Elected}. The Size problem, where the algorithm has to compute the number of nodes of the network, is specified by $S_{size}$ such that $\bG S_{size} (\bG,|V(\bG)|)$. \begin{definition} Given a terminating task $(S,\gfam)$, an algorithm \algo solves $S$ on $\bG\in\gfam$ if for any execution $(\bG_0,\bG_1,\bG_2,\cdots)$ with $\bG_0=\bG$: \begin{description} \item[decision] $\forall v\in V(\bG), \out(v)$ is written exactly once by $v$; \item[stabilization] the execution stabilizes and the terminal state is denoted $\bG^f$; \item[correction] $\bG S \bG^f$. \end{description} \end{definition} \begin{definition} The terminating task $(S,\gfam)$ is solvable if there exists an algorithm \algo such that \algo solves $S$ for all $\bG\in\gfam.$ \end{definition} When the stabilization is obtained with only finite executions, we say the algorithm is silent. When, besides \textbf{correction}, the \textbf{stabilization} property is the only property, we talk about \emph{implicit termination} (or message termination \cite{Tel}). When we have both \textbf{stabilization} and \textbf{decision}, we talk about \emph{explicit termination} (or process termination \cite{Tel}). In the context of this paper, solvability is meant in the explicit-termination setting. Implicit termination is weaker than explicit termination, and for obvious reasons, it is the termination for self-stabilizing algorithms. Note that, in a distant area of Distributed Computing, this is also the termination type of \emph{failure detectors} \cite{ChandraToueg}. Those are the two main termination modes that are classically considered in distributed algorithms. See also \cite{CGMterm,GMTterm} for other types of termination.
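The two example specifications can be transcribed as simple predicates on the output labelling; a minimal sketch (the list-of-output-labels encoding is an assumption of this illustration):

```python
def S_LE(out_labels):
    """Election: exactly one node carries the special label Elected."""
    return sum(1 for x in out_labels if x == "Elected") == 1

def S_size(n_nodes, out_labels):
    """Size: every node outputs the number of nodes of the network."""
    return all(x == n_nodes for x in out_labels)
```

A terminal state $\bG^f$ is $S$-admissible exactly when the corresponding predicate holds on its \out labels.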
\subsection{Self- and Snap-Stabilization} Informally, a distributed algorithm is said to be self-stabilizing if an execution starting from any arbitrary global state has a suffix belonging to the set of legitimate states. Note that when we consider the terminating task $(S,\gfam)$, the set of legitimate states corresponds simply to the set of $S$-admissible outputs for the given input graph, that is the set $\{(\bG,\mem,\out)\in\allg \mid \bG\in\gfam, \bG S(\bG,\out)\}$. So, in the context of terminating tasks, this corresponds to the definition of solvability with implicit termination if we require the domain \gfam to be closed under arbitrary corruption of the initial memory. More formally, it is possible to define self-stabilization in the framework of the previous section. Given a family \gfam, we define $\overline{\gfam}=\{(\bG,mem)\mid \bG\in\gfam, mem:V(\bG)\to\Lambda\}$. The terminating task $(S,\gfam)$ is solvable with self-stabi\-li\-zation if $(S,\overline{\gfam})$ is solvable with implicit termination. \medskip Here we focus on snap-stabilization and give a formal definition only for snap-stabilization. A snap-stabilizing algorithm computes tasks that are initiated by ``requests'' at some nodes of the network. A request is a special event, exterior to the algorithm, that occurs \emph{after} the end of the faults that led to arbitrary incorrect values. Given that the initial memory can be arbitrarily corrupted, the safety requirement of the problem specification has to have a special form that takes into account the fact that starting nodes have seen a request, see \cite{recentsnapstab}. In order to have a unified framework, we chose, in our equivalent presentation, to accept any specification $S$ but to ``implement'' the special form in the definition, independently of the specific specification.
So, since the initial memory can be arbitrarily corrupted, the correction of the \out register is only required to be satisfied by nodes that have been causally influenced by the initial requests, i.e., nodes for which there exists a sequence of atomic moves that follow a path originating from a node where a request has been made. In other words, a distributed algorithm is snap-stabilizing if an execution starting from any arbitrary global state has \emph{all its causal suffixes} belonging to the set of legitimate states. Given a specific daemon and an algorithm, the system evolves according to the daemon and the algorithm: at each step, some nodes are activable and activated (their actions are processed). Given an execution $\rho$ on \bG, that is, a sequence $\bG_0,\bG_1,\bG_2,\cdots$ of relabellings of \bG where $\bG_0=\bG$, we denote by $A_1,A_2,\cdots$ the sequence of activated nodes. We have that $\bG_i$ is obtained by applying to $\bG_{i-1}$ the actions for the nodes of $A_i$. We proceed to the formal definition. One or more external actions, \emph{the requests}, are applied at some nodes $U\subset V(\bG)$. At time $t$, a node $v$ is \emph{causally influenced} by $U$ if there exists a path $u_0,u_1,\cdots,u_k$ such that $u_0\in U$, $u_k=v$, and there exists a strictly increasing function $\sigma:\N\longrightarrow\N$ such that $\forall i\geq 1, u_i\in A_{\sigma(i)}$, and $\sigma(k)\leq t$. \begin{definition} Given a terminating task $(S,\gfam)$, an algorithm \algo is snap-stabilizing to $S$ on $\bG\in\gfam$ if for any request applied to $U\subset V(\bG)$, \begin{description} \item[causal decision] $\forall v\in V(\bG)$, $\out(v)$ is written exactly once after $v$ has been causally influenced by $U$; \item[stabilization] the execution stabilizes and the terminal state is denoted $\bG^f$; \item[correction] $\bG S \bG^f$.
\end{description} \end{definition} \begin{definition} The terminating task $(S,\gfam)$ is solvable by snap-stabilization if there exists an algorithm \algo such that \algo is snap-stabilizing to $S$ for all $\bG\in\overline{\gfam}$. \end{definition} For the sake of simplicity, in the following we always assume that $\overline{\gfam}=\gfam$. \subsection{Examples} To illustrate the various definitions, we present in Fig.~\ref{LCR} an Election algorithm inspired by the well-known LeLann--Chang--Roberts algorithm \cite{lelann,changroberts}. We will show that it is (non-silently) self-stabilizing to the Election task on unidirectional rings, but that it is not snap-stabilizing. We consider a unidirectional ring of known size $N$. The predecessor of a node $v$ is denoted $pred(v)$. Each node $v$ is equipped with a unique identity denoted $id(v)$. The algorithm maintains two variables $min$ and $ttl$. \begin{figure} \begin{rrule}{LCR} \ritem{Initiate\label{initLCR}}{ \item $min(v_0)<min(v)$, \item $min(v_0)<id(v_0)$, }{ \item $min(v_0):=id(v_0)$, \item $ttl(v_0):=N$ } \ritem{Circulate\label{flood}}{ \item $min(v_0)>min(v)$, }{ \item $min(v_0):=min(v)$ \item $ttl(v_0):=ttl(v)-1$ } \ritem{Cleaning\label{clean}}{ \item $min(v_0)\neq min(v)$ or $ttl(v_0)\neq ttl(v)-1$ \item $ttl(v_0)\neq N$ }{ \item $min(v_0):=id(v_0)$, \item $ttl(v_0):=N$ } \ritem{Election\label{elect}}{ \item $id(v_0)=min(v)$, \item $ttl(v)=1$, }{ \item $\out(v_0):=$\textsc{Elected} \item $ttl(v_0):=0$, } \end{rrule} \caption{\label{LCR} An LCR Election algorithm. The center of the cell is denoted $v_0$; $v$ is $pred(v_0)$.} \end{figure} By considering the sequences of consecutive nodes, it is immediate to see that the labels are stable if and only if the sequence starts from a local minimum and the variables follow the semantics of the propagation of this local minimum according to the original LCR algorithm.
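The rules of Fig.~\ref{LCR} can be exercised on a small ring in a few lines. This is a sketch only: the synchronous daemon, the rule priority (Initiate, then Circulate, Cleaning, Election), and the clean initial state are assumptions of this illustration, not requirements stated in the figure.

```python
# Synchronous-daemon sketch of the self-stabilizing LCR-style election
# of Fig. LCR. Guards and actions follow the figure; the priority order
# and the synchronous daemon are assumptions of this sketch.

N = 3                      # ring size, known to every node
ids = [3, 1, 2]            # unique identities; node i has pred (i-1) mod N

def step(mn, ttl, out, events):
    """One synchronous round: every node applies its first enabled rule."""
    nmn, nttl = mn[:], ttl[:]
    for i in range(N):
        p = (i - 1) % N                       # pred(v0)
        if mn[i] < mn[p] and mn[i] < ids[i]:  # Initiate: bogus minimum
            nmn[i], nttl[i] = ids[i], N
        elif mn[i] > mn[p]:                   # Circulate pred's minimum
            nmn[i], nttl[i] = mn[p], ttl[p] - 1
        elif (mn[i] != mn[p] or ttl[i] != ttl[p] - 1) and ttl[i] != N:
            nmn[i], nttl[i] = ids[i], N       # Cleaning: inconsistent chain
        elif ids[i] == mn[p] and ttl[p] == 1: # Election: token came home
            out[i] = "Elected"
            nttl[i] = 0
            events.append(ids[i])
    return nmn, nttl

mn, ttl, out = ids[:], [N] * N, [None] * N    # a clean initial state
events = []
for _ in range(50):
    mn, ttl = step(mn, ttl, out, events)
```

Running this, only the node with the minimum identity ever applies the Election rule; starting instead from a state where the Election guard is corrupted into being enabled reproduces the failure discussed next.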
This algorithm is therefore self-stabilizing but not snap-stabilizing, even when adding a special Initiate rule to deal with the requests as below. \begin{rrule}{snapLCR} \ritem{Initiate\label{init}}{ \item $Request(v_0)$ }{ \item $min(v_0):=id(v_0)$, \item $ttl(v_0):=N$ } \end{rrule} Indeed, any node corrupted in such a way that the Election rule is immediately applicable will incorrectly set its output value to \textsc{Elected} if its predecessor is requested. \section{Computability of Terminating Tasks} We start by considering snap-stabilizing terminating tasks. We show how the general techniques for explicitly terminating non-stabilizing tasks can be extended to the snap-stabilizing case as well. \subsection{Digraphs and Fibrations} \subsubsection{Definitions} In the following, we give the definitions for the tools introduced by Boldi and Vigna, and extensively studied in \cite{BVfibrations}, to characterize self-stabilizing tasks in \cite{BVselfstab}. To introduce the main tool, that is, \emph{fibrations}, we need to consider directed graphs (or digraphs) with multiple arcs and self-loops. A \emph{digraph} $D=(V(D),A(D))$ is defined by a set $V(D)$ of vertices and a set $A(D)$ of arcs. Given an arc $a$, we denote by $s(a)$ and $t(a)$ the source and the target of the arc. An undirected graph $G$ corresponds to the digraph $Dir(G)$ obtained by replacing all edges of $G$ by the two corresponding arcs. In the following, we will not distinguish $G$ and $Dir(G)$ when the context permits. The family of all digraphs with multiple arcs and self-loops is denoted $\mathcal D$. Note that the simple symmetric graphs of $\allg$ have direct counterparts in $\mathcal D$ via $Dir$. A dipath $\pi$ of length $p$ from $u$ to $v$ in $D$ is a sequence of arcs $a_1,a_2,\cdots,a_p$ such that $s(a_1)=u, t(a_p)=v$ and for all $i$, $s(a_{i+1})=t(a_i)$. A digraph is strongly connected if there is a dipath between all pairs of vertices.
We assume all digraphs to be strongly connected. Labelled digraphs will be designated by bold letters like $\bD$, $\bG$, $\bH$, \dots A \emph{homomorphism} $\gamma$ between the digraphs $D$ and $D'$ is a mapping $\gamma: V(D) \cup A(D)\longrightarrow V(D') \cup A(D')$ such that the image of a vertex is a vertex, the image of an arc is an arc, and for each arc $a\in A(D)$, $\gamma(s(a))=s(\gamma(a))$ and $\gamma(t(a))=t(\gamma(a))$. A {homomorphism} $\gamma: V(D) \cup A(D)\longrightarrow V(D') \cup A(D')$ is an \emph{isomorphism} if $\gamma$ is bijective. As previously, we consider labelled graphs and digraphs. We extend the definition of homomorphisms to labelled digraphs by adding the condition that they also preserve the labelling ($\lambda(v)=\lambda(\gamma(v))$ for any vertex $v$). In a digraph $\bG$, given $v_0\in V(\bG)$ and $r\in\N$, we denote by $B^\unaryminus_\bG(v_0,r)$ the in-ball of center $v_0$ and radius $r$, that is, the set of vertices $v$ and arcs $a$ such that there is a dipath of length at most $r$ from $v$ to $v_0$. \subsubsection{Fibrations and Quasi-Fibrations} The notions of fibrations and quasi-fibrations make it possible to describe exactly the ``similarity'' between two anonymous networks that yields ``similar'' executions for any algorithm in the model of this paper. For the model of Angluin (used by Mazurkiewicz), the notions of coverings and quasi-coverings are the graph morphisms to be used, see, e.g., \cite{godard_characterization_2002}. A digraph $\bD'$ is a fibration of a digraph $\bD$ via $\phi$ if $\phi$ is a homomorphism from $\bD'$ to $\bD$ such that for each arc $a\in A(\bD)$ and for each vertex $v\in\phi^{-1}(t(a))$ (resp. $v\in\phi^{-1}(s(a))$), there exists a unique arc $a'\in\phi^{-1}(a)$ such that $t(a')=v$ (resp. $s(a')=v$). The following lemma shows the importance of fibrations when we deal with anonymous networks.
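Before stating it, note that the fibration condition can be checked mechanically on small finite digraphs. The following sketch uses explicit arc identifiers so that multiple arcs and self-loops are representable; the representation and function names are illustrative, not from the text.

```python
# A small checker for the fibration condition on finite digraphs.
# Arcs carry explicit ids, so multiple arcs and self-loops are allowed.

def is_fibration(arcs_total, arcs_base, vmap, amap):
    """arcs_*: dict arc_id -> (source, target); vmap: vertex map and
    amap: arc map, both from the total digraph to the base digraph."""
    # amap must be a homomorphism compatible with vmap on endpoints
    for a, (s, t) in arcs_total.items():
        bs, bt = arcs_base[amap[a]]
        if (vmap[s], vmap[t]) != (bs, bt):
            return False
    # unique-lifting condition: for every base arc a and every vertex v
    # above t(a) (resp. s(a)), exactly one arc above a ends (resp.
    # starts) at v
    for a, (bs, bt) in arcs_base.items():
        for v in vmap:
            if vmap[v] == bt and sum(
                    1 for a2, (s2, t2) in arcs_total.items()
                    if amap[a2] == a and t2 == v) != 1:
                return False
            if vmap[v] == bs and sum(
                    1 for a2, (s2, t2) in arcs_total.items()
                    if amap[a2] == a and s2 == v) != 1:
                return False
    return True

# the directed 3-cycle is a fibration of the one-vertex loop digraph
loop = {"l": (0, 0)}
cycle3 = {i: (i, (i + 1) % 3) for i in range(3)}
vmap = {v: 0 for v in range(3)}
amap = {a: "l" for a in cycle3}
```

For instance, `is_fibration(cycle3, loop, vmap, amap)` holds, while a single arc $0\to 1$ mapped onto the loop fails the lifting condition.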
This is the counterpart of the lifting lemma that Angluin gives for coverings of simple graphs \cite{angluin}, and the proof can be found in \cite{BVelection,BVselfstab,CMelection}. \begin{lemma}[Lifting Lemma \cite{BVelection}] \label{lifting} If $\bD'$ is a fibration of $\bD$ via $\phi$, then for any daemon, any execution $\rho$ of an algorithm \algo on $\bD$ can be lifted up to an execution $\rho'$ of \algo on $\bD'$, such that at any step, for all $v\in V(\bD')$, $(\mem(v),\out(v))=(\mem({\phi(v)}),\out({\phi(v)}))$. In particular, when the execution $\rho$ has stabilized, the execution $\rho'$ has also stabilized and the computed values are the same for $v$ and $\phi(v)$. \end{lemma} In the following, one also needs to express similarity between two digraphs up to a certain distance. The notion of quasi-coverings was introduced as a formal tool in \cite{MMW,GMelection} for this purpose in the Mazurkiewicz model. The next definition is an adaptation of this tool to fibrations. \begin{definition} Given digraphs $\bK$ and $\bH$, an integer $r$, a vertex $v\in V(\bK)$, and a homomorphism $\gamma$ from $B^{-}_\bK(v,r)$ to \bH, \bK is a quasi-fibration of \bH of center $v$ and radius $r$ via $\gamma$ if there exists a finite or infinite digraph \bG such that \bG is a fibration of \bH via a homomorphism $\phi$ and there exists $w\in V(\bG)$ and an isomorphism $\delta$ from $B^\unaryminus_\bK(v,r)$ to $B^\unaryminus_\bG(w,r)$ such that for any $x\in V(B_\bK^-(v,r))\cup A(B_\bK^-(v,r))$, $\gamma(x) = \phi(\delta(x))$. \end{definition} If a digraph \bG is a fibration of \bH, then for any $v \in V(\bG)$ and for any $r\in\N,$ \bG is a quasi-fibration of \bH, of center $v$ and of radius $r$. Conversely, if \bK is a quasi-fibration of \bH of radius $r$ strictly greater than the diameter of \bK, then \bK is a fibration of \bH. The following lemma is the counterpart of the lifting lemma for quasi-fibrations.
\begin{lemma}[Quasi-Lifting Lemma, \cite{CGMterm,CGMelection}] \label{quasilifting} Consider a digraph \bK that is a quasi-fibration of \bH of center $v$ and of radius $r$ via $\gamma$. For any algorithm \algo, any execution $\rho$ of \algo on $\bH$ can be lifted up to an execution $\rho'$ of \algo on $\bK$, such that at any step $t\leq r$, for all $v\in V(\bK)$, $(\mem(v),\out(v))=(\mem({\gamma(v)}),\out({\gamma(v)}))$. In particular, when the execution $\rho$ has stabilized in less than $r$ steps, the execution $\rho'$ has also stabilized and the computed values are the same for $v$ and $\gamma(v)$. \end{lemma} \subsection{Main Result} In this section, we state our main result in Theorem~\ref{CN}. By comparing its statement to that of \cite{CGMterm}, we obtain Theorem~\ref{snapequiv}. It is obvious that the impossibility result of \cite{CGMterm} applies here, as well as its proof. We present the impossibility result integrally here to make the paper self-contained. We recall some technical notations and definitions from \cite{CGMterm}. \macro{\allv}{\mathcal D_\bullet} We denote by $\allv$ the set $\{(\bG,v) \mid \bG\in\mathcal D, v\in V(\bG)\}.$ Given a family $\gfam\subset\allg$, we denote by $\gfam_\bullet$ the set $\{(\bG,v) \mid \bG\in\gfam, v\in V(\bG)\}.$ A function $f:\allv\longrightarrow \Lambda\cup\{\bot\}$ is an \emph{output function} for a task $(S,\gfam)$ if for each network $\bG\in\gfam$ the labelling obtained by applying $f$ on each node $v\in V(\bG)$ satisfies the specification $S$. That is, $\bG S (\bG,\lambda)$, where $\forall v\in V(\bG),$ $\lambda(v)=f(\bG,v)$. In order to give our characterization, we need to formalize the following idea.
When the in-balls at distance $k$ of two processes $v_1$, $v_2$ in two digraphs $\bD_1, \bD_2$ cannot be distinguished (this is captured by the notion of quasi-fibrations and Lemma~\ref{quasilifting}), and $v_1$ computes its final value in $k$ rounds, then $v_2$ computes the same final value.% \begin{definition} Given a function $r : \allv \longrightarrow \N\cup\{\infty\}$ and a function $f : \allv \longrightarrow \Lambda\cup\{\bot\}$, the function $f$ is $r-$lifting closed if for all $\bK,\bH \in \mathcal D$ such that \bK is a quasi-fibration of \bH, of center $v \in V (\bK)$ and of radius $k\in\N$ via the homomorphism $\gamma$, if $k \geq \min\{r(\bK,v), r(\bH, \gamma(v))\}$, then $f(\bK,v) = f(\bH, \gamma(v))$. \end{definition} Intuitively, a function $f$ is $r-$lifting closed if $f(\bG,v)$ depends only on $B_\bG^-(v,r(\bG,v))$, and it is undefined if $r(\bG,v)=\infty$. We now give the characterization of terminating snap-stabilizing tasks. We give the proof of the necessary condition. The converse will be proved in the following section, by describing a snap-stabilizing version of Mazurkiewicz' algorithm. \begin{theorem}\label{CN} A terminating task $(S,\gfam)$ is solvable by snap-stabilization if and only if there exists a function $r : \allv \longrightarrow \N\cup\{\infty\}$ and an output function $f : \allv \longrightarrow \Lambda\cup\{\bot\}$ for $(S,\gfam)$ such that, \begin{theoenum} \item \label{iterm}% for all $(\bG,v)\in\allv$, $r(\bG,v)\neq\infty$ if and only if $f(\bG,v)\neq\bot$; \item \label{rlifting}% $f$ and $r$ are $r-$lifting closed. \end{theoenum} \end{theorem} \begin{qedproof}[of the necessary condition] Consider \algo a distributed algorithm that snap-stabilizes to $S$ on $\gfam$ in $t$ rounds. We construct $r$ and $f$ by considering a subset of the possible executions of \algo. We consider the synchronous execution of \algo on any digraph $\bG\in\mathcal D$.
For any $v \in V (\bG)$, if $\out(v) = \bot$ during the whole execution, then we set $f(\bG, v) = \bot$ and $r(\bG, v) = \infty$. This is possible since it could be that $\gfam\varsubsetneq\mathcal D$ and \algo might not terminate on graphs not in \gfam. Otherwise, let $r_v$ be the first causal step after which $\out(v)\neq\bot$; if $r_v\leq t$, we set $f (\bG, v) = \out(v)$ and $r(\bG, v) = r_v$. If $t<r_v$, then we set $f(\bG, v) = \bot$ and $r(\bG, v) = \infty$. By construction, \ref{iterm} is satisfied. % We also show that $f$ is an output function and that $f$ and $r$ satisfy \ref{rlifting}. Consider two digraphs \bK and \bH such that \bK is a quasi-fibration of \bH, of center $v_0 \in V(\bK)$ and of radius $k$ via $\gamma$ with $k \geq r_0 = \min\{r(\bK, v_0 ), r(\bH, \gamma(v_0 ))\}$. If $r_0 = \infty$, then $r(\bK, v_0 ) = r(\bH, \gamma(v_0)) = \infty$ and $f(\bK, v_0 ) = f(\bH, \gamma(v_0)) = \bot.$ Otherwise, from Lemma~\ref{quasilifting}, we know that after $r_0$ rounds, $\out(v_0) = \out(\gamma(v_0))$. Thus $r_0 = r(\bK, v_0) = r(\bH,\gamma(v_0))$ and $f(\bK, v_0) = f(\bH, \gamma(v_0)).$ Consequently, $f$ and $r$ are $r-$lifting closed. \end{qedproof} The previous proof shows that the output function $f$ can be seen as corresponding to the final values obtained from the deterministic execution of an algorithm solving $(S,\gfam)$ under the synchronous daemon. The value of $r(\bG, v)$ can be understood as the number of steps needed by $v$ to compute its final value in \bG. \section{Main Algorithm} In this section, in order to obtain our sufficient condition, we present a general algorithm $\mathcal M_{f,r}$ in Figure~\ref{mazurSSP} for which we use parameters that depend on functions $f$ and $r$ corresponding, via Theorem~\ref{CN}, to the terminating task $(S,\gfam)$ we are interested in solving.
This algorithm is a combination of a snap-stabilizing enumeration algorithm, adapted from \cite{selfstabenum}, and a generalization of an algorithm of Szymanski, Shy and Prywes (the SSP algorithm for short) \cite{SSP}. The algorithm in \cite{selfstabenum} is described in a different model, where each computation step involves some strong synchronization between adjacent processes. It is a self-stabilizing adaptation of an enumeration algorithm presented by Mazurkiewicz in \cite{mazur}. The SSP algorithm makes it possible to detect the global termination of an algorithm, provided that the processes know a bound on the diameter of the graph. The Mazurkiewicz-like algorithm always stabilizes on any network \bG and, during its execution, each process $v$ can compute an integer $n(v)$ and reconstruct at some computation step $i$ a digraph $\bG_i(v)$ such that \bG is a quasi-fibration of $\bG_i(v)$ and the image of $v$ is $n(v)$. By applying the output function $f$ on $\bG_i(v)$ for $n(v)$, $v$ can compute its \out value. However, the enumeration algorithm does not enable $v$ to effectively compute the radius of this quasi-fibration. We use a generalization of the SSP algorithm to compute a counter that is a lower bound on this radius, as has already been done in Mazurkiewicz' model \cite{GMTterm} and in the message passing model \cite{CGMterm}. When the SSP counter is greater than $r(\bG_i(v),n(v))$, the condition on $f$ and $r$ from Theorem~\ref{CN} implies that the \out value at $v$ is correctly computed for $S$. \subsection{Modifying Mazurkiewicz' Enumeration Algorithm} An enumeration algorithm on a network \bG is a distributed algorithm such that the \out values are integers and the result of any computation is a labelling of the vertices that is a bijection from $V(\bG)$ to $\{1, 2, \cdots, |V (\bG)|\}$. In particular, an enumeration of the vertices where vertices know whether the algorithm has terminated solves the Election Problem.
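The SSP-counter idea described above can be sketched in isolation: a process increases its counter only while it agrees with all its neighbours, and any disagreement resets it. This is a simplified illustration (agreement here abstracts equality of the mailboxes $M(v)$, and the $|a(u)-a(v)|\leq 1$ side condition of the full rule is omitted):

```python
# Toy synchronous round of a simplified SSP counter: agreement with all
# neighbours lets the counter grow, disagreement resets it to -1.

def ssp_round(mailboxes, counters, neighbours):
    new = counters[:]
    for v in range(len(counters)):
        if all(mailboxes[u] == mailboxes[v] for u in neighbours[v]):
            new[v] = 1 + min(counters[u] for u in neighbours[v])
        else:
            new[v] = -1
    return new

# a path of three processes whose mailboxes already agree
nb = [[1], [0, 2], [1]]
c = [-1, -1, -1]
for _ in range(3):
    c = ssp_round(["m", "m", "m"], c, nb)
```

After $k$ rounds of agreement the counter reaches $k-1$, so it is a lower bound on how long the process's neighbourhood has been stable, which is the role it plays for the quasi-fibration radius.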
Since Election is not solvable in all networks, it is not possible to solve the Enumeration problem on all networks. However, even if not solving Enumeration, in any network \bG, the Enumeration algorithm of Mazurkiewicz always stabilizes and yields a digraph $\bG_i(v)$ such that \bG is a quasi-fibration of $\bG_i(v)$. We first give a general description of the Mazurkiewicz algorithm. Every vertex attempts to get its own name in \N\footnote{This name shall be an integer between $1$ and $|V (\bG)|$ to have an actual Enumeration algorithm. Here we would need more work to enforce this; however, since this is not needed for our purpose, these technicalities will be skipped. See \cite{selfstabenum} for a way to get a real Enumeration.}. A vertex chooses a name and broadcasts it together with the names of its adjacent vertices all over the network. If a vertex $u$ discovers the existence of another vertex $v$ with the same name, then it compares its \emph{local view}, i.e., the labelled in-ball of center $u$ and radius $1$, with the \emph{local view} of its rival $v$. If the local view of $v$ is ``stronger'', then $u$ chooses another name. Node $u$ also chooses another name if it appears twice in the view of some other vertex as a result of a corrupted initial state. Each new name is broadcast again over the network. At the end of the computation, it is not guaranteed that every node has a unique name, unless the graph is fibration minimal. However, all nodes with the same name will have the same local view, i.e., isomorphic labelled neighborhoods. The crucial property of the algorithm is based on a total order on local views such that the ``strength'' of the local view of any vertex cannot decrease during the computation. To describe the local view we use the following notation: if $v$ has degree $d$ and its in-neighbors have names $n_1 , \cdots , n_d$, with $n_1\geq\cdots\geq n_d$, then $\overline N(v)$, the local view, is the $d-$tuple $(n_1 , \cdots , n_d)$.
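This decreasing-tuple convention and the comparison of ``strength'' can be sketched as follows (a toy illustration; the function names are not from the text):

```python
# Local views as decreasing tuples of in-neighbour names, compared
# lexicographically: Python's built-in tuple order is exactly the
# lexicographic order used for "strength".

def local_view(names):
    """The d-tuple of in-neighbour names, sorted in decreasing order."""
    return tuple(sorted(names, reverse=True))

def stronger(view_a, view_b):
    """True if view_a is strictly stronger than view_b."""
    return view_a > view_b

v1 = local_view([2, 5, 2])   # -> (5, 2, 2); a name may appear twice
v2 = local_view([5, 3, 1])   # -> (5, 3, 1)
```

Here `stronger(v2, v1)` holds, since the tuples first differ at the second component and $3 > 2$.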
Let $T$ be the set of such ordered tuples. The lexicographic order defines a total order, $\prec$, on $T$. Vertices $v$ are labelled by triples of the form $(n, \overline N , M)$ representing during the computation: \begin{compactitem} \item $n(v)\in\N$ is the name of the vertex $v$, \item $\overline N (v)\in T$ is the latest view of $v$, \item $M(v)\subset\N \times T$ is the mailbox of $v$ and contains all information received at this step of the computation. \end{compactitem} We introduce other notations. We want to count the number of times a given name appears in a local view. For a local view $\overline N$, and $n\in \N$, we define $\delta_{\overline N}(n)$ to be the number of occurrences of $n$ in the tuple $\overline N.$ For a given view $\overline N$, we denote by $sub(\overline N, n, n' )$ the copy of $\overline N$ where any occurrence of $n$ is replaced by $n'$. The complete algorithm is given in Fig.~\ref{mazurSSP}. The rules are given in the \textit{priority order} and $v_0$ denotes the center of the cell (i.e., the in-ball of radius 1). \macro{\compN}{\overline{N}} % \macro{\Pred}{\mathbb P} % \begin{figure}[t] \begin{multicols}{2} \begin{rrule}{Enum} \ritem{Initialization\label{label1}}{ \item $Request(v_0)$ }{ \item $n(v_0):=0$, \item $\compN(v_0):=N(v_0)$, \item $M(v_0):=\emptyset,$ \item $a(v_0):= -1.$ } \ritem{Diffusion rule\label{dif_rule}}{ \item There exists $v \in B(v_0)$ such that $M(v)\neq M(v_0)$, \item[or] $(n(v_0), N(v_0) ) \notin M(v_0)$, \item[or] $\compN(v_0) \neq N(v_0).$ }{ \item $M(v_0):=\mathop{\bigcup}\limits_{w\in B(v_0)}M(w)\cup\{(n(v_0),N(v_0))\}$. \item $\compN(v_0) := N(v_0).$ \item $a(v_0):= -1.$ } \ritem{Renaming rule\label{relab_rule}}{ \item For all $v\in B(v_0), M(v)=M(v_0)$. \item $(n(v_0)=0)$ or $( n(v_0)>0 \mbox{ and there exists }(n(v_0),N)\in M(v_0) \mbox{ such that }((N(v_0)\prec N)))$.
\item $n(v_0) > 0 \text{ and } \exists (n_1 , N_1 ) \in M(v_0 )$ such that $\delta_{N_1}(n(v_0))\geq 2.$ }{ \item $n(v_0) := 1+\max \{n\in\N\mid (n,N)\in M(v_0)\,\,\text{for some}\,\, N \}$. \item $M(v_0) := M(v_0)\cup \{(n(w),N(w)) | w \in B(v_0)\}$, \item $a(v_0):=-1$. } \ritem[gSSPfix]{Fix gSSP counter}{ \item There exists $v\in B(v_0)$ such that $|a(v) - a(v_0)| \geq 2$ or $(M(v)\neq M(v_0)$ and $a(v_0)\neq-1)$ }{ \item $a(v_0) := -1.$ } \ritem[gSSP]{gSSP rule}{ \item $\forall v\in B(v_0), \; M(v) = M(v_0),$ $|a(v) - a(v_0)| \leq 1$ and $\neg\Pred(v_0)$ }{ \item $a(v_0) := 1 + \min\{a(v) \mid v\in B(v_0)\}.$ } \ritem[Decision]{Output rule\label{Mdecision}}{ \item For all $v\in B(v_0), \; M(v) = M(v_0)$ and $\Pred(v_0)$ }{ \item $\out(v_0) := f(\bK(v_0),w(v_0))$ } \end{rrule} \end{multicols} \caption{\label{mazurSSP}Snap-stabilizing algorithm $\mathcal M_{f,r}$. The parameters are the functions $f$ and $r$ from Theorem~\ref{CN}. $\bK$ is defined by a local procedure and the predicate \Pred depends on $r$. } \end{figure} \macro{\strong}{\textsc{Strong}} The labelling function obtained at the end of a run $\rho$ of Mazurkiewicz' algorithm is denoted $\pi_\rho$.
If $v$ is a vertex of \bG, the pair $\pi_\rho(v)$ associated with $v$ is denoted $(n_\rho(v), M_\rho(v)).$ We also denote by $N_\rho(v)$ the final local view of $v$. For a given mailbox $M$ and a given $n \in \N,$ we denote by $\strong_M(n)$ the local view that dominates all $\compN$ with $(n,\compN) \in M$ (\textit{i.e.}\xspace $\compN \prec \strong_M (n)$). Except for the first corrupted stages, $\strong_{M(v)}(n)$ is actually the ``strongest local view'' of $n.$ \begin{theorem} A run $\rho$ of Mazurkiewicz' Enumeration Algorithm on \bG with any initial values finishes and computes a final labelling $\pi_\rho$ verifying the following conditions for all vertices $v, v'$ of $V(\bG)$~: \begin{theoenum} \item $M_\rho(v) = M_\rho(v').$ \item $\strong_{M_\rho(v')}(n_\rho(v)) = \compN(v) = N_\rho(v).$ \item \label{injlabel} $n_\rho(v) = n_\rho(v')$ if and only if $N_\rho(v) = N_\rho(v')$. \end{theoenum} \end{theorem} \begin{qedproof} Even if the model is different, besides technicalities, this can be proved similarly to the proof of \cite{selfstabenum}. % \end{qedproof} Now we explain how it is possible to extract the map of a minimal base. This is usually done by considering the graphs induced by the names and associated local views that have maximal views. However, here, due to the arbitrary initial failures, the mailbox should be cleaned up before use. It is possible to have some maximal $(n,\compN)$ such that $n$ is not actually the name of any $v$. Finally, each vertex shall compute locally the set of actual final names from the final mailbox $M_\rho$. We denote by $\bG_\rho$ the graph defined by \begin{eqnarray*} V_\rho &=& \{n_\rho(v) | v \in V(\bG)\},\\ A_\rho &=& \{(n_\rho(v_1 ), n_\rho (v_2)) | (v_1 , v_2 ) \in A(\bG)\}. \end{eqnarray*} For a mailbox $M$ and an integer $n$, we define the set $V^M(n)$ by induction. \begin{eqnarray*} V^M_0 &=& \{n\},\\ V^M_{i+1} &=& V^M_i \cup\{ t \mid \exists s \in V^M_i , \delta_{\strong_M(s)}(t)= 1 \}.
\end{eqnarray*} If $i_0$ is such that $V^M_{i_0} = V^M_{i_0+1}$, then we define $V^M(n) = V^M_{i_0}$. Finally, we have: \begin{lemma}[\cite{selfstabenum}] For all $v\in V(\bG)$, $V^{M_\rho} (n_\rho (v)) = V_\rho$. \end{lemma} By defining $A^M$ by $\{(n_1,n_2)\mid n_1,n_2\in V^M(n)\mbox{ and }\delta_{\strong_M(n_1)}(n_2)=1\}$, we obtain a graph $\bG_{M(v)}=(V^{M(v)},A^{M(v)})$. We cannot readily use $\bG_{M(v)}$ since it could be that it is not in \gfam. We denote by $\bK(v)$ a digraph that is in \gfam and that is a quasi-fibration of $\bG_{M(v)}$ of radius $a(v)$ and of center $w(v)$. Such a digraph can be found by a local procedure enumerating all graphs and vertices of $\gfam_\bullet$ until one is found. This semi-algorithm will always terminate because of the following property. \begin{proposition} \label{quasifib-base} Let $P$ be the set of requesting processes. Let $v$ be a node that has been causally influenced by $P$ and such that $a(v)\geq 0$. The graph \bG is a quasi-fibration of $\bG_{M(v)}$ of center $v$ and radius $a(v)$. \end{proposition} \begin{qedproof} We add to the statement that every $w\in B(v,a(v))$ has been influenced, and prove this new statement by induction on $i$, the number of steps since $P$ has received the requests. Initially, at step 1, the requests are being processed by the Initialization rule, \textit{i.e.}\xspace the set of influenced nodes is $P$ and the property holds trivially. Assume the property holds at step $i$ and consider $v_0$ a vertex that is activated at round $i+1$. We have to consider two cases: either $v_0$ was already influenced at round $i$, or it is a newly influenced node. If $v_0$ is a newly influenced node, the only rule of interest is gSSP, because the other rules set $a(v_0)$ to $-1$. But we show that $v_0$ cannot apply this rule. Indeed, assume $M(v_0)\neq\emptyset$; then the causality path to $v_0$ starts in a root whose variables have been reset, and from which the causality chain of applications will propagate its new name.
So $M(v_0)$ has to be updated to, at least, this name before being able to apply gSSP. If $v_0$ has already been influenced, then the induction statement applies at the previous round. Denote by $a(v_0)$ the value of the counter at the end of round $i$ and assume that for all $v\in N(v_0), a(v)=a(v_0)$. We prove that the statement holds for $a(v_0)+1$ at round $i+1$. If $a(v_0)=0$ then, by the same argument as in the previous case, the neighbours of $v_0$ have all been influenced and the statement holds with a radius $1$. If $a(v_0)>0$ then the neighbours have been influenced by the induction assumption. Moreover, every $v\in N(v_0)$ is the center of a quasi-fibration of radius $a(v_0)$. Therefore, $v_0$ is the center of a quasi-fibration of radius $a(v_0)+1$. Similarly, every $w\in B(v,a(v_0))$ has been influenced and the ball $B(v_0,a(v_0)+1)$ is totally influenced. The statement holds at round $i+1$. \end{qedproof} The algorithm from Fig.~\ref{mazurSSP} uses the functions $f$ and $r$ given in the necessary condition of Theorem~\ref{CN}. The two functions are used to define a digraph \bK (defined above) and a predicate \Pred defined below. The predicate needs to make the counter $a$ increase when what can be extracted from the mailboxes (that is, the minimum base of \bG) is the same locally. But it must also make the algorithm stop when there is enough information to conclude. This information is sufficient when the value $r$ for the reconstructed base matches the stability counter $a$. \begin{theorem} With $\Pred(v) := (a(v) \geq r(\bK(v),n(v)))$, the algorithm $\mathcal M_{f,r}$ snap-stabilizes to $S$ for any set $P$ of requested nodes.
\end{theorem} \begin{qedproof} Consider a node $v$ just after it has applied rule \ref{Mdecision}: we have $M(v)$ that is constant in the neighbourhood, $r(\bK(v),n(v))\leq a(v)$ and $\out(v)=f(\bK(v),w(v)).$ Since, by construction, $\bK(v)$ is a quasi-fibration of $\bG_{M(v)}$ of radius $a(v)\geq r(\bK(v),n(v))$ and of center $n(v)$, and since $f$ and $r$ are $r-$lifting closed, $\out(v) = f(\bK(v),w(v)) = f(\bG_{M(v)},n(v)),$ and $r(\bK(v),w(v)) = r(\bG_{M(v)},n(v)).$ From Prop.~\ref{quasifib-base}, since $a(v) \geq r(\bG_{M(v)},n(v))$ and since $f$ is $r-$lifting closed, $\out(v) = f(\bG_{M(v)},n(v))=f(\bG,v).$ Since $f$ is an output function for $(S,\gfam)$, the \out labels are correct for $S$ in \bG. \end{qedproof} \subsection{Complexity} The algorithm $\mathcal M_{f,r}$ is a universal algorithm and therefore, for a given $(S,\gfam)$, it can have a higher complexity than a tailored algorithm. However, it should be noted that the complexity of $\mathcal M_{f,r}$ is divided into two components: the stabilization of the Enumeration part and the increase of the SSP counter until it is greater than $r$. Note that the former depends on the graph \bG only and that the latter depends on the family \gfam. The complexity of the Enumeration has been shown in \cite{selfstabenum} to be, in the Angluin model, at most $t |V(\bG)|^2$, where $t$ is the sum of the number of vertices and of the highest name $n$ initially known. The proof can be extended to the model of this paper. \section{Conclusion} We have shown that, for anonymous networks, the terminating tasks that can be solved by a snap-stabilizing algorithm are exactly the ones that can be solved by a distributed algorithm with explicit termination. This complements the already known task-equivalence between self-stabilizing terminating tasks and distributed tasks computed with implicit termination. The important consequence is that the partial knowledge (like a bound on the size, the diameter, etc.)
that could be used to get explicit termination in the non-stabilizing case are also the ones that can be used to have snap-stabilizing solutions. A limit of this result is that it does not give the intrinsic complexity of a problem, and it could be that solving a problem by snap-stabilization is harder than solving it with explicit termination. The computability is equivalent; however, whether the complexity is also equivalent is an open problem. For lack of space, we do not discuss probabilistic snap-stabilization \cite{probasnap}. It is not difficult to see that the techniques presented here make it possible to prove that a task has a probabilistic snap-stabilizing solution if and only if it has a (non-stabilizing) Las Vegas solution. An interesting open question, as in the self-stabilizing case, would be to find a \textit{direct} way to transform any given anonymous algorithm into a snap-stabilizing one. Such a transformation might have benefits regarding the complexity. \medskip The author wishes to thank Jérémie Chalopin for sharing ideas and fruitful discussions about distributed computability in various settings, including some closely related to this paper. \input{snapstab.bbl} \end{document}
\section{Introduction}\label{intro}Perhaps the most fundamental problem hindering a better understanding of SN~Ia\ explosions is the unclear nature of the progenitor system. One way of addressing this problem is to carry out numerical simulations for different scenarios that involve thermonuclear explosions of white dwarfs (WDs) and to compare the results with observations. Obviously, detailed observational data are a prerequisite for this approach. At the same time, the comparison should be based on models that avoid free parameters in the description of the explosion mechanism, as far as possible. Only then the predictive power of theoretical models will be sufficient to discriminate between explosion models and to draw conclusions about progenitor systems. In addition to the possibility of directly constraining the progenitor system from archival data \citep{li2011b,liu2011a} or early observations \citep{brown2011a,nugent2011a, bloom2012a}, the recently discovered nearby SN~Ia\ 2011fe offers a unique opportunity for a comparison with explosion models. Of particular value are the possibility to follow this close object photometrically to extremely late epochs and the exact knowledge of the explosion time. SN~2011fe\ was first detected by the Palomar Transient Factory on 2011 August 24.167 in M101 \citep{nugent2011b}. A preliminary analysis of our data indicates that it reached an apparent $B$-band peak magnitude of $9.9$ on September 11. Combined with the derived explosion date of 2011 August 23.7 \citep{nugent2011a}, the $B$-band rise time of SN~2011fe\ is $\sim$$18.3\,\mathrm{d}$---a typical value for normal SNe~Ia\ \citep{conley2006a,hayden2010a}. Assuming a distance to M101 of $6.4\,\mathrm{Mpc}$ \citep{shappee2011a}, SN~2011fe\ is a normal SN~Ia\ with $M_{B,\mathrm{max}}=-19.13$, having produced $\sim$$0.6 \, M_\odot$ of $^{56}$Ni \citep{stritzinger2006a}. 
The identification of SN~2011fe\ as a prototypical SN~Ia\ is also corroborated by the observed spectra (as shown below). With the development of three-dimensional simulations of thermonuclear explosions in carbon--oxygen WDs and of the subsequent radiative transfer (RT) leading to the formation of the observables, a new generation of models is currently becoming available. These have the advantage that the explosion physics is represented in a far less parameterized manner than in previous one-dimensional models. Due to their improved predictive power, a comparison with observational data would in principle allow us to constrain the explosion scenario of SNe~Ia. However, no currently available multi-dimensional model reaches the level of agreement that can be obtained by fitting one-dimensional semi-empirical models to data. This challenges the interpretation of the comparison between the new models and SN~Ia\ data. Here, we address the question of whether SN~2011fe\ can be explained by models of an exploding Chandrasekhar-mass WD (realized as a delayed detonation) or a violent merger of two WDs. These scenarios can lead to observables that resemble normal SNe~Ia\ \citep{mazzali2007a, kasen2009a, pakmor2012a}, but they differ fundamentally in the explosion mechanism, the mass, and the structure of the ejecta. A discrimination between them based on comparison with observations would help to shed light on the open question of the progenitor system. An explosion of the WD near the Chandrasekhar mass is usually attributed to the single-degenerate progenitor channel in which a carbon--oxygen WD accretes matter from a non-degenerate companion; however, the formation of a Chandrasekhar-mass object in a merger of two WDs cannot be excluded. Our second scenario results from the merger of two WDs with similar and rather high masses adding up to a total of $2\,M_\odot$.
Both models are set up to produce $\sim$$0.6\,M_\odot$ of $^{56}$Ni, but apart from that they follow generic assumptions and are not tuned to fit the data of SN~2011fe. \section{Explosion Models}The most promising way of producing observables in reasonable agreement with observations of normal SNe~Ia\ from an explosion of a Chandrasekhar-mass WD is the delayed detonation mechanism \citep{khokhlov1991a}. We model this scenario using the techniques described by \citet{reinecke1999a,roepke2005b, schmidt2006c} and \citet{roepke2007b}. An isothermal ($T=5\times10^5\,\mathrm{K}$) WD composed of carbon and oxygen in equal parts by mass was set up in hydrostatic equilibrium with a central density of $2.9\times10^9\,\ensuremath{\mathrm{g} \, \mathrm{cm}^{-3}}$ and an electron fraction of $Y_e=0.498864$, corresponding to solar metallicity. The model was discretized on a three-dimensional Cartesian moving grid \citep{roepke2005c} with $512^3$ cells consisting of two nested parts. To reach the intended $^{56}$Ni production, the initial deflagration was ignited in $100$ sparks placed randomly in a Gaussian distribution within a radius of $150\,\mathrm{km}$ from the WD's center on the inner grid, which had a resolution of $1.92\times10^5\,\mathrm{cm}$. After an initial deflagration phase similar to that described by \citet{roepke2007c}, a detonation was triggered at every location on the flame where the fuel density was in the range of $6$ to $7\times10^6 \,\ensuremath{\mathrm{g} \, \mathrm{cm}^{-3}}$ and the grid cell predominantly contained fuel material, provided that the turbulent velocity fluctuations exceeded $10^8\,\ensuremath{\mathrm{cm} \, \mathrm{s}^{-1}}$ at a significant fraction of the flame area and persisted for sufficiently long times. This loosely follows the criteria proposed by \citet{woosley2009a}.
Since the initiation of a detonation proceeds on scales that are not resolved in our simulations, the probability of finding high turbulent velocities on unresolved scales is extrapolated applying the procedure of \citet{roepke2007d}. The evolution was followed to a time of $100\,\mathrm{s}$ after ignition, by which homologous expansion of the ejecta was reached to a good approximation. This model, called N100, is part of a larger set of delayed-detonation simulations (Seitenzahl et al., in preparation). The details of the nucleosynthesis in this explosion were determined from thermodynamic trajectories recorded by $10^6$ tracer particles distributed in the exploding WD \citep{travaglio2004a, seitenzahl2010a}. The characteristics of the model are summarized in Table~\ref{tab:models}. The second simulation we discuss models the inspiral, merger and explosion of two WDs with $1.1\,M_\odot$ and $0.9\,M_\odot$, respectively. Details of the corresponding simulations are given by \citet{pakmor2012a}. While the inspiral and merger phases were followed with a version of the SPH code \textsc{Gadget} \citep{springel2005a}, the subsequent thermonuclear detonation was modeled with techniques similar to those employed in N100. The question of whether a detonation triggers at the interface between the two merging stars is controversial. Simulations with sufficiently high numbers of SPH particles, such as presented here, show the formation of a hot spot, and we assume a detonation to trigger at this location when the temperature exceeds $2.5\times 10^{9}\,\mathrm{K}$ in material of $\rho\approx 2\times 10^{6}\,\ensuremath{\mathrm{g} \, \mathrm{cm}^{-3}}$, relying on the microscopic simulations of detonation initiation by \citet{seitenzahl2009b}. Again, the evolution was followed up to $100\,\mathrm{s}$ and the composition of the ejecta was determined in a post-processing step. The results of this simulation are given in Table~\ref{tab:models}. 
Density and composition of both models in homologous expansion are visualized in Figs.~\ref{fig:deldet_model} and \ref{fig:merger_model}. We note that the ejecta structure resulting from the WD-WD merger differs fundamentally from that of N100. This is because the explosion of the secondary WD happens shortly after that of the primary. Therefore, the outer ejecta material originates from the primary. At the onset of the explosion, the primary had a radius below $0.02\,R_\odot$ making our \emph{violent} merger scenario consistent with the constraint on the radius of the exploding object derived by \citet{bloom2012a}. \begin{figure*} \centerline{ \includegraphics[width=\linewidth]{f1_deg.eps}} \caption{Slices through our delayed-detonation model N100 in the $x$--$z$-plane showing the density (top left) and abundance distribution of selected species at 100\,s after explosion.\label{fig:deldet_model}} \end{figure*} \begin{figure*} \centerline{ \includegraphics[width=\linewidth]{f2_deg.eps}} \caption{Same as Fig.~\ref{fig:deldet_model} but for the merger model.\label{fig:merger_model} } \end{figure*} \begin{table*} \begin{center} \caption{Model characteristics.} \label{tab:models} \hspace{-2.5cm} \begin{tabular}{lll} \hline & delayed detonation (N100) & violent merger\\ \hline total ejecta mass [$M_\odot$] &1.40 & 1.95\footnote{$0.05\,\mathrm{M_\odot}$ are lost during the explosion simulation because of the finite extent of the grid} \\ asymptotic kinetic energy of ejecta [$10^{51}\,\mathrm{erg}$] & 1.45 & 1.7 \\ \hline $^{56}$Ni mass [$M_\odot$] &0.604& 0.616\\ total iron group [$M_\odot$] & 0.839 & 0.697 \\ total intermediate mass elements [$M_\odot$] &0.454& 0.5 \\ carbon mass [$M_\odot$] &0.003& 0.153\\ oxygen mass [$M_\odot$] &0.101& 0.492 \\ combined mass of $^{55}$Fe and $^{55}$Co [$M_\odot$] & $1.33\times10^{-2}$ & $3.73\times10^{-3}$ \\ combined mass of $^{57}$Ni and $^{57}$Co [$M_\odot$] &$1.88\times10^{-2}$&$1.49\times10^{-2}$\\ \hline $B$-band rise time 
[days] & 16.6 & 20.8 \\ $B$-band peak luminosity [mag] & $-19.0$ & $-19.0$ \\ $\Delta m_{15}(B)$ [mag] & 1.34 & 0.95 \\ \hline $\mathrm{D}_\mathrm{late}^{500}\equiv m_{1400\mathrm{d}} - m_{900\mathrm{d}}$ in leptonic light curve [mag] & 2.25 & 2.65 \\ $\mathrm{D}_\mathrm{late}^{1000}\equiv m_{1900\mathrm{d}} - m_{900\mathrm{d}}$ in leptonic light curve [mag] & 3.20 & 3.87 \\ \end{tabular} \end{center} \end{table*} \section{Comparison with spectra of SN~2011\lowercase{fe}}From the nucleosynthesis tracer particles we constructed detailed abundance distributions of the explosion ejecta at 100\,s and mapped them to $50^3$ Cartesian grids. These grids were then used to derive synthetic light curves and spectra with the Monte Carlo RT code \textsc{Artis} \citep{kromer2009a,sim2007b}. To this end, we simulated the propagation of $10^8$ photon packets from 2 to 120 days after explosion using the cd23\_gf-5 atomic dataset of \citet{kromer2009a}, which is based on the lines contained in the CD23 compilation of \citet{kurucz1995a}. To account for higher ionization at early times, we added the ionization stages \textsc{vi} and \textsc{vii} for Sc to Ni, leading to a total of ${\sim}5\times10^5$ atomic lines. Both our models yield a $B$-band peak magnitude of $-19.0$, roughly in agreement with that observed for SN~2011fe. Their rise times, however, differ: while N100 reaches $B$-band maximum after 16.6\,d, the merger takes 20.8\,d (further parameters of our synthetic light curves are given in Table~\ref{tab:models}). Thus, neither of the models gives a perfect match to the light curves of SN~2011fe\ but both are sufficiently close to warrant further investigation. In Fig.~\ref{fig:spectra} we compare synthetic spectra from our models with flux-calibrated spectra of SN~2011fe\ taken by the SNfactory collaboration with the SNIFS instrument \citep{aldering2002a} on the University of Hawaii 2.2\,m telescope on Mauna Kea. 
To our knowledge this is the first direct comparison of consistent three-dimensional SN~Ia\ models with a spectrophotometric time-series. Overall, the spectra of both explosion scenarios reproduce the main features of the observed spectra and the flux level reasonably well (note that these are not fits but predictions from ``first-principle'' models and that \emph{absolute} fluxes are compared). In detail, however, there are problems in both models. \begin{figure*} \centerline{\includegraphics[width=\linewidth]{f3_deg.eps}} \vspace*{4ex} \centerline{\includegraphics[width=0.995\linewidth]{f4.eps}} \caption{Spectral evolution of our delayed detonation N100 (left) and merger model (right) from 6 to 27 days after the explosion. The angle-averaged spectrum is plotted in black while 25 spectra for representative viewing angles are shown in gray (the variability with viewing angle of the earliest spectra is dominated by Monte Carlo noise in both models). For comparison the observed spectra of SN~2011fe\ are over-plotted in red assuming an explosion date at August 23.7 (MJD 55796.7; \citealt{nugent2011a}). The observations were corrected for Galactic reddening assuming $E(B-V)_\mathrm{Gal}=0.009$\,mag \citep{schlegel1998a} and de-redshifted according to a heliocentric radial velocity $v_\mathrm{hel}=241\,\ensuremath{\mathrm{km} \, \mathrm{s}^{-1}}$ given by \citet{devaucouleurs1991a}. Reddening from the host is negligible. The bottom panel compares the observed spectrum of SN~2011fe\ near $B$-band maximum (red) with synthetic spectra from the merger corresponding to three different viewing angles (blue colors) and the angle-average (black).\label{fig:spectra}} \end{figure*} The predicted absorption features are blue-shifted with respect to the observations. Although this effect is stronger in N100, it is also visible for the merger, indicating too high ejecta velocities in the models. 
Since the mass of the exploding object is very different in the two cases, the nuclear energy release is likely too high in both explosion processes. A potential way to cure this problem is to increase the oxygen abundance in the progenitor WDs at the expense of carbon, thus increasing the average nuclear binding energy of the fuel. While N100 is only marginally too bright at the early epochs, the merger is clearly too faint. This corresponds to the shorter/longer rise time of the respective model (see Table~\ref{tab:models}) compared to SN~2011fe. Around $B$-band maximum at ${\sim}18$\,d, the merger compares favorably to the observed spectra. The flux level and the overall shape of its synthetic spectra match the data better than those of N100. After maximum, the agreement with the observations deteriorates for both models, although the effect is more drastic for N100. In particular, the models fail to reproduce the spectral features between $5000\,$\AA\ and $6000\,$\AA\@. Moreover, both models become redder faster than the observation. Again, this trend is more pronounced in N100, but is also visible in the merger. Figs.~\ref{fig:deldet_model} and \ref{fig:merger_model} show that iron-group elements (IGEs) extend significantly beyond velocities of $10,000\,\ensuremath{\mathrm{km} \, \mathrm{s}^{-1}}$ in N100 but also in some directions in the merger. The W7 model of \citet{nomoto1984a}, which is known to reproduce observables of SNe~Ia\ well, does not contain IGEs at such high velocities. However, they are reported in abundance tomographies of the normal SNe 2002bo \citep{stehle2005a} and 2004eo \citep{mazzali2008b}, and \citet{nugent2011a} identify iron in the earliest spectra of SN~2011fe\ at velocities of $16,000\,\ensuremath{\mathrm{km} \, \mathrm{s}^{-1}}$. Our synthetic spectra do not show mismatches with observed lines that can be directly attributed to high-velocity IGEs. It is possible, however, that they contribute to the fast reddening of the models.
As for the high ejecta velocities, a decreased carbon/oxygen ratio in the exploding WDs may alleviate this problem. An important difference between the two models is also visible from Figs.~\ref{fig:deldet_model} and \ref{fig:merger_model}: N100 is---apart from small-scale anisotropies---roughly spherical (see also \citealt{blondin2011a}). In contrast, the merger shows pronounced large-scale asymmetries. This is reflected in a strong viewing-angle dependence in its spectra, which increases after maximum due to the growing asymmetry of the inner ejecta for smaller radii. As demonstrated in the lower plot of Fig.~\ref{fig:spectra}, individual line-of-sight spectra from the merger reproduce the observation at least as well as the angle-average. Nevertheless, the high level of asymmetry in the merger could be in conflict with the observed spectral homogeneity of normal SNe~Ia\ and the low level of continuum polarization observed \citep{wang2008a}. Currently our RT simulations do not include polarization. Note, however, that \citet{smith2011a} report a continuum polarization of 0.2--0.4\% for SN~2011fe, which they interpret as a sign of persistent asymmetry in the last-scattering surface. The fact that the merger reproduces the observed spectra better than N100 at maximum light (and later) suggests that the chemical structure of its deeper ejecta, which dominate the spectrum formation at this epoch, is closer to that of SN~2011fe. Since neither of the models matches the optical data of early epochs perfectly, our comparison gives slight preference to a WD merger scenario over a delayed detonation in a Chandrasekhar-mass WD as an explanation for this object. But as there are major shortcomings in both models, a definitive conclusion cannot be drawn. Observables other than maximum-light spectra may, however, have more discriminating power. We discuss promising possibilities in the following.
\section{Late-time observables}While optical data taken before and around peak brightness probe predominantly the outermost layers, observations at later epochs are sensitive to the core of the ejecta. Since this is the region where the differences between the two models considered here are most pronounced, late-time observables are a very useful diagnostic tool. A fundamental difference between our two explosion scenarios is the density at which the material is burned in the thermonuclear combustion. Due to the high central density of the Chandrasekhar-mass WD, substantial burning proceeds on thermodynamic trajectories with peak densities above $2\times10^8\,\ensuremath{\mathrm{g} \, \mathrm{cm}^{-3}}$ in N100 (especially in its deflagration phase), whereas in the violent merger all the burning occurs at peak densities below $2\times10^8\,\ensuremath{\mathrm{g} \, \mathrm{cm}^{-3}}$. This leads to vastly different degrees of neutronization in the ashes due to electron capture reactions, resulting in higher abundances of stable IGEs (in particular $^{54}$Fe, $^{56}$Fe, and $^{58}$Ni) for N100, which should be reflected in the presence of Ni lines in spectra from the nebular phase \citep{maeda2010c,gerardy2007a}. Moreover, our models differ significantly in the composition and geometry of the innermost ejecta. While the $^{56}$Ni distribution is roughly spherical for N100, most of the $^{56}$Ni is off-center in the merger. Asymmetric and shifted lines can be expected here and could correspond to the effects discussed by \citet{maeda2010c, maeda2010b}. In the merger, the detonation of the secondary WD at low densities produces copious amounts of oxygen in the innermost ejecta. Potentially, this could lead to strong [OI] $\lambda\lambda$6300,6364 emission at late times \citep{kozma2005a}, which is not observed in SNe~Ia. The efficiency of [OI] emission, however, depends strongly on the contamination with other elements and the ionization state of the oxygen-rich zone.
For strong ionization (as might be the case in our low-density core), the presence of oxygen in the core would pose no problem for the merger. Clearly, nebular spectra contain important information, but a firm conclusion can only be drawn upon detailed three-dimensional modeling of the late-time RT, which is beyond the scope of this letter. There is, however, a more direct and perhaps more easily studied effect. Due to the low central densities of the two sub-Chandrasekhar mass WDs, all of the IGEs in the merger are synthesized in either $\alpha$-rich freeze-out from nuclear statistical equilibrium or in incomplete Si-burning. In contrast, much of it is produced under ``normal'' freeze-out conditions in the deflagration phase of N100 (\citealt{thielemann1986a} place the dividing line between $\alpha$-rich and ``normal'' freeze-out at ${\sim}2\times10^8\,\ensuremath{\mathrm{g} \, \mathrm{cm}^{-3}}$ for explosive burning of C+O material in SNe~Ia). This leads to a higher abundance of $^{55}$Co---an isotope mainly synthesized in the ``normal'' freeze-out and in incomplete Si-burning (e.g.\ \citealt{thielemann1986a})---in the ejecta of the Chandrasekhar-mass WD explosion than in those of the merger. Such different isotopic ratios in the IGEs affect the shapes of the predicted late-time light curves \citep{seitenzahl2009d}. Starting at ${\sim}800\,\mathrm{d}$ after the explosion, the \emph{leptonic light curves} that assume full transparency to $\gamma$-rays and pure leptonic heating of the ejecta will be increasingly powered by the decay of isotopes other than $^{56}$Co. This is illustrated in Fig.~\ref{fig:latelc}. At ${\sim}1000\,\mathrm{d}$ after the explosion, the decay of $^{57}$Co to $^{57}$Fe, which (in ${\sim}80\%$ of all decays) emits internal conversion electrons, starts to dominate the light curves. 
Later, the decay of $^{55}$Fe, which is mainly synthesized as $^{55}$Co, to $^{55}$Mn (a ground-state to ground-state transition followed by the emission of Auger electrons) contributes significantly and eventually dominates the radioactive energy generation. \begin{figure} \includegraphics[width=\linewidth]{f5.eps} \caption{Leptonic luminosity as a function of time after the explosion for N100 (black bold line) and the merger model (red bold line). The dashed, dash-dotted and dotted lines give the contribution to the leptonic luminosity due to the decay of selected isotopes. Thin solid lines give the luminosities including decay X-rays.\label{fig:latelc}} \end{figure} The leptonic light curve of N100 will fall off more slowly than that of the merger. For example, the decrease in combined leptonic energy production from $900\,\mathrm{d}$ to $1400\,\mathrm{d}$ ($1900\,\mathrm{d}$) corresponds to a dimming by 2.25 (3.20) magnitudes for N100 and by 2.65 (3.87) magnitudes for the merger (see Table~\ref{tab:models}). Thus, a measurement of the late light curve decline rate would distinguish between an explosion of a Chandrasekhar-mass WD (which in any scenario requires some pre-expansion in a deflagration stage) and alternative models based on detonations in low density material---such as mergers of WDs. For this, neither the correct distance to the object nor the exact $^{56}$Ni production have to be known. Note, however, that the light curves shown are idealized cases assuming that the leptonic energy production rates can be directly translated into UVOIR light curves (which may be precluded by effects such as the infrared catastrophe, ``frozen-in ionization'', CSM interaction, leptonic losses, etc.). \section{Conclusions}Although the nearby SN~2011fe\ offers a unique opportunity to scrutinize explosion models, at present a clear preference of one scenario over the other is hard to establish. 
We therefore discuss two models that are very distinct in the explosion characteristics and in the resulting structure of the ejecta---a delayed detonation of a Chandrasekhar-mass WD and a merger of two WDs with a total of $2\,M_\odot$. Comparing with early and near-maximum optical spectra, both scenarios reproduce the main features but the merger is slightly preferred because it provides a better match to the observations around peak brightness. There are, however, shortcomings in other aspects---such as the too long rise time---and therefore the working hypothesis of SN~2011fe\ resulting from a merger of two WDs requires additional confirmation. As shown here, alternatives to early-phase optical data may have more decisive power. At very late epochs nucleosynthetic effects lead to different characteristics in the photometric evolution that may allow us to discriminate between explosion models. This, however, requires true bolometric measurements, and it is unclear at which wavelengths the maximum emission occurs at those late epochs. Observations of SN~2011fe\ will help to clarify this issue and thus multi-wavelength monitoring of this object is essential. If the maximum emission falls into the optical range, a clear distinction between explosion models (that can then be related to progenitor scenarios) will be possible from photometric measurements at ${\gtrsim}1000\,\mathrm{d}$. Thanks to its proximity, these observations should be feasible for SN~2011fe.\begin{acknowledgements}This work was supported by the Deutsche Forschungsgemeinschaft via the Transregional Collaborative Research Center TRR~33, the Emmy Noether Program (RO 3676/1-1) and the Excellence Cluster EXC~153, DOE Contracts DE-AC02-05CH1123 and DE-AC02-05CH11231, the Gordon \& Betty Moore Foundation, CNRS/IN2P3, CNRS/INSU, and PNC in France, the Max Planck Society, and the Tsinghua University Center for Astrophysics. 
The simulations were performed at JSC (grants PRACE042 and HMU014) and NCI at the ANU.\@ We are grateful to C. Aspin, E.~Gaidos, A.~Mann, M.~Micheli, T.~Riesen, S.~Sonnett, and D.~Tholen, who granted us interrupt time to observe SN~2011fe.\end{acknowledgements}
\section{Introduction} \label{introduction} The VC dimension is a measure of what is called the ``capacity'' of a classification algorithm: that is, roughly speaking, its flexibility or expressive power. The practical interest of this quantity arises from its role in the \emph{Probably Approximately Correct} learning model and related models (see e.g. \citet[ch. 3]{Shais}, \citet{Holden95}), where it appears in results relating the size of the training set to the classifier's accuracy. The VC dimension is defined in terms of the concept of \emph{shattering}~\citep{Vapnik98}. For a two-class problem in $d$-dimensional space $\mathbb{R}^d$, a set $A$ of functions $\mathbb{R}^d \to \{+1,-1\}$ is said to shatter a set of points (in $\mathbb{R}^d$) if any labelling of that set can be given by an element of $A$. The VC dimension of $A$ is the size of the largest set which can be shattered by $A$. The nearest-neighbour (1NN) classification rule is a classic technique of supervised learning, first introduced by \cite{Fix52}. The 1NN rule determines a label for (``classifies'') a given point in a metric space by assigning it the label of the nearest point in a previously determined set of labelled reference points. In the original and simplest algorithm, the reference set is the set of all the training data. If the amount of training data is \emph{a priori} unbounded, then the VC dimension of the set of associated 1NN-rule classifying functions is infinite: trivially, a set of points of any size is labelled correctly by a classifier whose reference set is the same set of labelled points. However, it is often not practical to store all the training data, especially in an era of ``big data''. Therefore many algorithms have been proposed which learn a smaller reference set from the training data: \cite{Garcia12} and \cite{Triguero12} survey more than 75 such algorithms between them. What is the VC dimension associated with these ``editing'' NN algorithms? 
If the reference set may grow without bound, then the VC dimension is infinite, as for the na{\"i}ve classifier described above. But if the reference set is constrained not to exceed a given size, then the VC dimension is finite. The main results of the present work are lower and upper bounds for the VC dimension of the set of all 1NN-rule classifiers which use a reference set of given fixed size, in Euclidean space. It should be noted that, in general, the size of the reference set to be formed by an editing algorithm is not fixed \textit{a priori}. There are exceptions, an important one being the case where a classifier for a data stream is kept current by using a fixed window of the most recent $N$ points as its reference set \citep{GunnNeurocomputing}. But it is very common to impose a \emph{maximum} size limit on the reference set, if only by hand in an \textit{ad hoc} fashion. Our upper bounds will apply to any algorithm for which the size of the reference set is bounded, though our lower bounds will apply only where the size is prescribed, and the algorithm is capable of returning any reference set of that size. We believe that there is significant interest in the VC dimension of prototype classifiers with fixed-size reference sets, and that this is shown by the fact that an incorrect purported result \cite[Proposition 2]{Karacali} has been cited more than 50 times as giving the VC dimension of such classifier sets. Our study corrects the record. We feel there is significant value in bringing together the existing theoretical results, some of which seem otherwise likely to remain obscure to practitioners. 
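The shattering argument used above for the na{\"i}ve classifier also yields the trivial lower bound for the fixed-size case: placing one prototype on each of $m$ distinct sample points, with the desired labels, shows that 1NN with $m$ prototypes shatters some set of $m$ points. A brute-force check of this observation (a minimal sketch; `nn_classify` and the sample points are our own illustration, not taken from the paper):

```python
from itertools import product

def nn_classify(x, prototypes, labels):
    """Label x with the label of its nearest prototype (Euclidean 1NN rule)."""
    dists = [sum((a - b) ** 2 for a, b in zip(x, p)) for p in prototypes]
    return labels[dists.index(min(dists))]

# m distinct sample points in R^2 (arbitrary choice for illustration)
points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 2.0)]
m = len(points)

# Every one of the 2^m labellings is realized by placing one prototype
# on each sample point: each point's nearest prototype is itself.
for labelling in product([+1, -1], repeat=m):
    realized = tuple(nn_classify(x, points, labelling) for x in points)
    assert realized == labelling
print(f"all {2 ** m} labellings of {m} points realized by {m} prototypes")
```

Since each point is at distance zero from its own prototype, the check succeeds for any set of distinct points, in any dimension.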
Beyond our deductions from existing theory, our novel contributions are 1) a new lower bound for the two-dimensional case, higher than that implied by previous results for polytopes (Proposition \ref{OneBetter}), and 2) an upper bound for all dimensions, slightly less tight than the best that can be deduced from existing theory, but avoiding the use of exotic functions and thereby facilitating a discussion of the asymptotic behaviour of the bounds (Corollary \ref{LooseCor}). We will discuss the theoretical framework in section \ref{NNinVC}, and briefly review the relevant literature in section \ref{lit}. In sections \ref{lower2d} and \ref{lowerdd} we derive lower bounds for the VC dimension, and in section \ref{upper} we determine upper bounds. Our results are summarised in section \ref{conclusions}. \section{1NN classifiers in the VC theory} \label{NNinVC} We use ``classifier'' to mean a function from the feature space to the set of classes, which is used to classify unlabelled examples. An algorithm which learns such a function from training data is a ``classification algorithm''. The set of all classifiers which the algorithm might produce in response to all sets of training data is called the ``hypothesis class'' of that algorithm. (For example, the original Rosenblatt Perceptron selects among all functions which map one half-space to one class, and the other half-space to the other class: this set of functions is the hypothesis class of the Perceptron.) When we speak of the VC dimension of a classification algorithm, we mean the VC dimension of the hypothesis class from which that algorithm learns a classifier. NN classification algorithms are not usually thought of as selecting a classifier from a fixed hypothesis class in this way~\citep[see e.g.][ch.\ 19]{Shais}.
This is because the great practical advantage of 1NN classification is that a classifier function does not need to be explicitly evaluated when classifying an unlabelled example; the new point is simply assigned the class of the nearest prototype in the reference set, which can be identified efficiently using a $k$-d tree. However, this process is equivalent to classifying the new point according to a classifier function which maps the Voronoi cell of each prototype to the label of that prototype. Thus, the hypothesis class of the 1NN rule with a reference set of $m$ prototypes in $d$-dimensional space is the set of all labellings of all $m$-cell Voronoi diagrams in the space; it is parameterised by the $md$ co-ordinates of the $m$ prototypes, and the $m$ choices of label. We consider only features which take real values, thus classifiers whose domain is $\mathbb{R}^d$, and consider only the 1NN rule with the Euclidean metric. $\mathrm{1NN}(d,m)$ will denote the set of all classifiers $g:\mathbb{R}^d \to \{+1,-1\}$ which use the (Euclidean) nearest-neighbour rule with a reference set of size $m$. $\mathrm{VCdim}(A)$ will denote the VC dimension of a set $A$ of classifiers. The purpose of this paper is to give lower and upper bounds for $\mathrm{VCdim}(\mathrm{1NN}(d,m))$. \section{Related Work} \label{lit} A number of existing theoretical results have implications for the VC dimension of the NN classifier with arbitrary reference set of fixed size. However, these results are scattered in the literature and their implications have not been made explicit. In particular, the results we use to establish lower bounds were developed for a class of polytope classifiers, without mention of NN classifiers. In the present section we give a brief overview of relevant theoretical work, starting with some brief historical context and going on to include the work whose implications we will directly use in subsequent sections. 
Questions of ``separating capacities'' of families of decision surfaces were already considered before the advent of the Vapnik-Chervonenkis theory (see e.g.\ \cite{Cover65} and references therein). During and shortly after the years of the initial development of the VC theory, several authors used approaches from classical combinatorial geometry to derive results about the VC dimension (or related separability properties) of several simple sets. Readers interested in this literature will need to be aware of the distinction between the VC dimension of a set of classifiers, as we have defined it above, and the VC dimension of a set of subsets of the Euclidean space in question (called by some authors a ``concept class''). See our discussion at the start of section \ref{upper}, and~\citet[pp. 196, 199, and 215]{Devroye96}. To give an example of the results obtained, \citet{Dudley79} reports that the set of balls in $\mathbb{R}^d$ has VC dimension $d+1$. (N.B. Dudley's quantity $V$ is one greater than the quantity defined as the VC dimension in more recent literature.) Similarly, it may be shown that the set of all half-spaces in $\mathbb{R}^d$ has VC dimension $d+1$~\citep[see e.g.][ch. 13]{Devroye96}. More recent authors have considered the intersection or union of half-spaces, which is to say, sets which are the interior or exterior of (possibly unbounded) polytopes. \citet{Blumer89} show that the set of interiors of $N$-gons has VC dimension $2N + 1$. This result is quoted by \citet{TakacsPataki} as the starting point for their work in which they find upper and lower bounds for the VC dimension of convex polytope classifiers, from which we will derive a lower bound in section~\ref{lowerdd}. NN classifiers, like convex polytope classifiers, have decision boundaries which are the union of subsets of hyperplanes. 
However, the decision region for a NN classifier is not in general a simple intersection or union of half-spaces; it may be a complicated union of the interiors of polytopes formed by such intersections. This may explain why the question of the VC dimension of the NN classifier with an arbitrary reference set of fixed size $m$ has not previously, to our knowledge, been addressed, other than in~\cite{Karacali}. (\citet{Devroye96} consider the closely related property of the \emph{shatter coefficient} of such classifiers; we will make explicit the implications of this work in section \ref{upper}.) \citet[Proposition 2]{Karacali} claim that the VC dimension of the NN classifier with reference set of size $m$, $\mathrm{VCdim}(\mathrm{1NN}(d,m))$ in our notation, is exactly $m$. Their argument consists of exhibiting a set of $m+1$ points which cannot be correctly classified by $m$ prototypes. They do not argue that a general set of $m+1$ points cannot be shattered. This apparently reflects a misunderstanding of the definition of VC dimension. The VC dimension is the largest number for which some set of that size can be shattered, not the largest number such that all sets of that size can be shattered. The latter quantity is called the Popper dimension, a quantity which thus far has not found a r\^{o}le in statistical learning theory~\citep{Corfield09}. \section{Lower bounds - two dimensions} \label{lower2d} Tak{\'a}cs and Pataki \citep{Takacs,TakacsPataki} prove bounds for the VC dimension of sets of classifiers whose decision boundaries are convex polytopes. These sets can easily be related to sets of nearest-neighbour classifiers, giving lower bounds for the latter. 
This is because a decision boundary which is an $N$-faceted convex polytope can be obtained as the decision boundary of a 1NN classifier with $N+1$ prototypes, by placing a prototype of one label inside the polytope, and the remaining $N$ prototypes, with the opposite label, as the reflections of the first prototype in the $N$ facets of the desired decision boundary. See Figure \ref{fig:PolygonVoronoi} for an example construction with $N=5$, $d=2$. We formalise this observation as Lemma \ref{subset} below, and give a formal proof in the appendices. \begin{figure}[htb] \centering \includegraphics[width=0.6\linewidth]{PolygonVoronoiRegion} \caption{Illustration of the construction of a data set whose classification region is determined by a given convex polygon $P$. The required prototype set (filled and empty circles) is constructed by reflecting an arbitrary interior point in (the lines containing) the edges of $P$. Voronoi boundaries are shown; the Voronoi cell of the interior point is the decision region of the classifier, coincident with $P$.} \label{fig:PolygonVoronoi} \end{figure} We will use $G(d,N)$ to denote the set of classifiers $g:\mathbb{R}^d \to \{+1,-1\}$ whose decision boundary is a convex $N$-faceted polytope. \begin{lemma} \label{subset} The set of convex $N$-faceted polytope classifiers is a subset of the set of NN classifiers with reference set of size $N+1$, in the same Euclidean space. That is, \begin{equation} G(d,N) \subseteq \mathrm{1NN}(d,N+1). \end{equation} \end{lemma} \begin{corollary} \label{impliesLowerBound} A lower bound for the VC dimension of $G(d,N)$ is also a lower bound for the VC dimension of $\mathrm{1NN}(d,N+1)$. \end{corollary} The corollary follows immediately: any set of points which can be shattered by an element of $G(d,N)$ can be shattered by an element (the same element) of $\mathrm{1NN}(d,N+1)$.
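The reflection construction is straightforward to check numerically. In the sketch below (a regular pentagon with an arbitrarily chosen interior point; all names and values are illustrative), the perpendicular bisector of the interior prototype and each of its reflections is exactly the line containing the corresponding edge, so the Voronoi cell of the interior prototype coincides with the polygon:

```python
import numpy as np

def reflect(q, a, b):
    """Reflect point q across the line through points a and b."""
    d = (b - a) / np.linalg.norm(b - a)
    return 2 * (a + np.dot(q - a, d) * d) - q

# Convex polygon with N = 5 facets: a regular pentagon (CCW vertices).
N = 5
theta = 2 * np.pi * np.arange(N) / N
verts = np.c_[np.cos(theta), np.sin(theta)]

# One interior prototype (label +1) plus its N reflections (label -1).
q = np.array([0.05, 0.1])  # any interior point works
refl = [reflect(q, verts[i], verts[(i + 1) % N]) for i in range(N)]
prototypes = np.vstack([q] + refl)
labels = np.array([1] + [-1] * N)

def classify(x):
    return labels[np.argmin(np.linalg.norm(prototypes - x, axis=1))]

def inside(x):
    """Point-in-convex-polygon test via consistent cross-product signs."""
    e = verts[(np.arange(N) + 1) % N] - verts
    cross = e[:, 0] * (x[1] - verts[:, 1]) - e[:, 1] * (x[0] - verts[:, 0])
    return np.all(cross > 0)

# The decision region of the +1 class coincides with the pentagon.
rng = np.random.default_rng(1)
for x in rng.uniform(-1.5, 1.5, size=(500, 2)):
    assert (classify(x) == 1) == inside(x)
```

The check passes for any interior point, since the Voronoi cell of $q$ is the intersection of the half-planes bounded by the edge lines, which for a convex polygon is the polygon itself.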
Tak{\'a}cs gives the following result for two-dimensional Euclidean space: \begin{lemma} (Tak{\'a}cs) \label{Takacs1} \begin{equation} h(G(2,N)) \geq 2N + 2, \end{equation} for $N \geq 2$. \end{lemma} We give a sketch of the proof given in \cite{Takacs} in the appendices. By a more complicated argument, Tak{\'a}cs establishes that this lower bound is also an upper bound for the VC dimension of polytope classifiers. In general, upper bounds for polytope classifiers are of no relevance to the more general set of NN-rule classifiers. But the $N=2$ case is an exception, and we make the following brief side remark about this case: \begin{remark} With three prototypes, the only decision surfaces an NN-rule classifier can form are an open 2-gon, or a pair of parallel lines. But any finite set of points which can be correctly dichotomised by a pair of parallel lines can also be correctly split by a digon formed by a suitably small adjustment of the lines such that they are non-parallel. So for this case, the set of NN-rule classifiers has no greater separating power than the related set of polygon classifiers. Thus a precise value for the VC dimension of this set of NN-rule classifiers is established: \begin{equation} \mathrm{VCdim}(\mathrm{1NN}(2,3)) = 6. \end{equation} \end{remark} Now, for general $m$, Lemma \ref{Takacs1}, with Corollary \ref{impliesLowerBound}, implies a lower bound for $\mathrm{VCdim}(\mathrm{1NN}(2,m))$: \begin{equation} \mathrm{VCdim}(\mathrm{1NN}(2,m)) \geq 2m \end{equation} However, we can do better than this. For $m \geq 4$, the NN-rule classifier can create a larger class of decision surfaces than a single convex polytope. 
In Appendix~\ref{appendix1} we present a new argument, inspired by the elementary geometrical approach of Tak{\'a}cs but considerably more involved, demonstrating a stronger lower bound for $\mathrm{VCdim}(\mathrm{1NN}(2,m))$, $m \geq 4$: \begin{proposition} \label{OneBetter} \begin{equation} \mathrm{VCdim}(\mathrm{1NN}(2,m)) \geq 2m + 1. \end{equation} \end{proposition} Though this improvement is the smallest possible, it establishes the principle that the VC dimension of 1NN classifiers with $m$ prototypes in $\mathbb{R}^2$ is larger than that of the relevant comparable class of polygon decision boundaries (recall that Tak{\'a}cs' lower bound, Lemma \ref{Takacs1}, is also an upper bound for that class). That is, Proposition \ref{OneBetter} establishes that 1NN classifiers in $\mathbb{R}^2$ gain in expressivity from their ability (for $m \geq 4$) to form boundaries other than convex polygons. \section{Lower bound -- higher dimensions} \label{lowerdd} Tak{\'a}cs' result was extended by Tak{\'a}cs and Pataki to Euclidean spaces of dimension higher than two: \begin{proposition} (\cite{TakacsPataki}) \label{Takacs2} \begin{equation} \mathrm{VCdim}(G(d,N)) \geq dN + 2, \end{equation} for $d \geq 2$, $N \geq 2$. \end{proposition} \begin{proof} The geometrical arguments are less simple than for the two-dimensional case; we refer readers to \cite{TakacsPataki} for the proof. \end{proof} \begin{remark} Tak{\'a}cs and Pataki also offer slightly stronger lower bounds for the special cases $d=3$ and $d=4$: $h(G(3,N)) \geq 3N + 3$ and $h(G(4,N)) \geq 4N + 5$; but these require a higher minimum value for $N$ than Proposition \ref{Takacs2} does. \end{remark} As in the two-dimensional case, we can deduce a lower bound for the VC dimension of the NN classifier: \begin{corollary} \label{LowerBoundProp} \begin{equation} \mathrm{VCdim}(\mathrm{1NN}(d,m)) \geq dm + 2 - d, \end{equation} for $d \geq 2$, $m \geq 3$.
\end{corollary} \begin{remark} The relation does not hold for $d=2$, $m=2$: an NN-rule classifier with two prototypes in the plane has for its decision boundary a line, so cannot shatter four points: linear classifiers are, famously, unable to solve the XOR problem. \end{remark} \section{Upper bounds} \label{upper} The \emph{shatter coefficient} of a family of sets $B$ (for our purposes, $B$ is a set of subsets of $\mathbb{R}^d$) is a number closely related to VC dimension. It is called the ``growth function'' by some authors, but other authors give that term a different definition. The $n$th shatter coefficient of $B$, denoted $S(B,n)$, is the maximum number of different subsets of $n$ points that can be formed by intersection of the $n$ points with elements of $B$: that is, the number of subsets that can be ``picked out'' using elements of $B$. (The maximum is taken over all sets of $n$ points.) The VC dimension of $B$ is then the largest $n$ such that $S(B,n) = 2^n$. This is an expression of the concept of shattering for families of sets $B$ rather than classifiers: if the set of all subsets of the $n$ points which can be formed by intersection of the $n$ points with elements of $B$ is all $2^n$ possible subsets of the $n$ points, then the $n$ points are shattered by $B$. The VC dimension of a family of classifiers, as we defined it in section \ref{introduction}, is equal to the VC dimension of the associated family of decision regions, as just defined, and the shatter coefficient of a family of classifiers is defined to equal the shatter coefficient of the associated family of decision regions \citep[see][ch.~12]{Devroye96}. Devroye et al. give upper bounds for the shatter coefficients of the class $C(d,m)$ of decision regions of classifiers in $\mathrm{1NN}(d,m)$: \begin{lemma} (Devroye, Gy{\"o}rfi, and Lugosi) For $m \geq 3$, \label{DevroyeLemma} \begin{align} S(C,n) &\leq 2^m n^{9(m-2)} &\text{for } d=2\\ S(C,n) &\leq 2^m n^{(d+1)m(m-1)/2} &\text{for } d \geq 3. \end{align} \end{lemma} \begin{proof} See \cite[p.
312]{Devroye96} \end{proof} As before, $d$ is the dimension of the space and $m$ is the number of prototypes used by the classifier. These upper bounds are based simply on the observation that for a reference set with $m$ prototypes there are at most $m(m-1)/2$ Voronoi cell boundaries, so the number of points which can be shattered by the set of Voronoi diagrams with $m$ centres is bounded above by the number of points which can be shattered by $m(m-1)/2$ hyperplanes. The stronger result for $d=2$ comes from a restriction on the number of edges of a planar graph, applied to the Delaunay triangulation which is the dual of the Voronoi diagram. These bounds imply the following result for the VC dimension of the NN classifier: \begin{proposition} \label{Wbound} For $m \geq 3$, \begin{equation} \mathrm{VCdim}(\mathrm{1NN}(d,m)) \leq -\frac{q}{\log{2}} W_{-1} \left(-\frac{\log 2}{q} 2^{-\frac{m}{q}} \right), \end{equation} where \begin{align} q &= 9(m-2) &\text{for } d = 2\\ q &= (d+1)m(m-1)/2 &\text{for } d \geq 3. \end{align} $W$ is the Lambert W function; the $W_{-1}$ branch is the relevant branch. The logarithms are natural logarithms. \end{proposition} \begin{proof} See appendix \ref{LambertResult}. \end{proof} We can obtain from this a looser but more easily interpretable upper bound by using a recent result which gives a lower bound for the $W_{-1}$ branch of the Lambert function: \begin{lemma} (Chatzigeorgiou) \begin{equation} W_{-1}(-e^{-u-1}) > -1 - \sqrt{2u} - u, \end{equation} for $u>0$. \end{lemma} \begin{proof} See \cite{Chatz13}. \end{proof} Using this result with proposition \ref{Wbound} gives the following looser bound on the VC dimension: \begin{corollary} \label{LooseCor} Let $q' = q/\log{2}$, where $q$ is defined as in proposition \ref{Wbound}. Then for $m \geq 3$, \begin{equation} \label{looseupper} \mathrm{VCdim}(\mathrm{1NN}(d,m)) < q' \left( \sqrt{2\left(\frac{m}{q'} + \log{q'} -1\right)} + \frac{m}{q'} + \log{q'} \right). 
\end{equation} \end{corollary} This upper bound enables us to bound the asymptotic rate of growth of $C$. Now, $q'$ grows monotonically with $m$ and with $d$, and grows at least as fast as $O(m)$ for increasing $m$. So whether considering growth with increasing $m$ or growth with increasing $d$, the fastest-growing term within the brackets of (\ref{looseupper}) is $\log{q'}$. So we have \begin{equation} \label{simq} \mathrm{VCdim}(\mathrm{1NN}(d,m)) \lesssim q' \log{q'}, \end{equation} for large $m$ or $d$. Table~\ref{tab:results} summarises the VC dimension bounds derived in this study. \section{Discussion of asymptotic behaviour} \label{discussion} Considering first the rate of growth of the VC dimension with $m$ for fixed $d$, relation (\ref{simq}) implies \begin{equation} \label{Om2} \mathrm{VCdim}(\mathrm{1NN}(D,m)) \in O(m^2 \log(m^2)) = O(m^2 \log(m)), \end{equation} for fixed $d=D>2$, with the better result \begin{equation} \mathrm{VCdim}(\mathrm{1NN}(2,m)) \in O(m \log(m)), \end{equation} for $d = 2$. Figure~\ref{fig:VCBoundsFigure} illustrates the upper bounds as a function of $m$, for the cases $d = 2$ and $d = 3$. \begin{figure}[htb] \centering \includegraphics[width=0.65\linewidth]{VCBoundsFigure} \caption{Upper bounds for the VC dimension of NN-rule classifiers. ``Accurate'' upper bounds are the best we have obtained, given in Proposition~\ref{Wbound}. ``Approximate'' upper bounds are the looser bounds given in Corollary~\ref{LooseCor}.} \label{fig:VCBoundsFigure} \end{figure} It is interesting to compare the log-linear growth in $m^2$ given by equation (\ref{Om2}) with recent results for neural networks given by \cite{Harvey}. Consider a neural network with $d$ input neurons (the real coordinates of the feature space), and $N$ neurons in a single hidden layer with binary threshold activation functions.
Each neuron in the hidden layer of such a network encodes a hyperplane decision boundary: the neuron will be in one or the other of its binary states depending on which side the input vector lies of a plane normal to the vector of weights of that neuron. So a network with $N=\frac{1}{2}m(m-1)$ hidden neurons is comparable to 1NN classifiers with $m$ prototypes, in the sense that both build their decision boundaries from parts of $\frac{1}{2}m(m-1)$ hyperplanes. \cite{Harvey} prove that a piecewise-linear neural network with $W$ parameters has VC dimension with asymptotic growth $O(W \log W)$ in the number of parameters. For the network just described, the number of parameters is $ \sim Nd = \frac{1}{2}m(m-1)d$, meaning the asymptotic growth of its VC dimension is $O(m^2d \log m^2d)$, just as relation (\ref{simq}) gives as an upper bound for our 1NN classifiers. That is, the neural network achieves the (asymptotically) highest value it can for its VC dimension, given the number of hyperplane decision surfaces it has to work with. It is an interesting open question whether the same is true for 1NN classifiers. Turning now to consider the rate of growth with $d$ for fixed $m=M$, equation (\ref{simq}) implies \begin{equation} \mathrm{VCdim}(\mathrm{1NN}(d,M)) \in O(d \log(d)). \end{equation} Thus, the VC dimension grows polynomially in $d$ (asymptotically slower than $d^2$). This has implications for learnability: for example, polynomial growth of the VC dimension of a class with the dimension of the space is a necessary condition for the class to be properly polynomially learnable~\cite[Theorem 3.1.1.]{Blumer89}. \section{Conclusions} \label{conclusions} \begin{table} \caption{Summary of the VC dimension bounds. $m$ denotes the number of prototypes in the reference set, and $d$ denotes dimensionality. 
$W_{-1}$ denotes the $-1$ branch of the Lambert W function.} \label{tab:results} \centering \begin{tabular}{ccc} Type of bound&Dimensionality&Expression\\ \hline Lower&$d = 2$&$2m + 1$\\ Lower&$d \geq 2$&$dm + 2 - d$\\ Upper&$d = 2$&$-\frac{9(m-2)}{\log{2}} W_{-1} \left(-\frac{\log 2}{9(m-2)} 2^{-\frac{m}{9(m-2)}} \right)$\\ Upper&$d \geq 2$&$-\frac{(d+1)m(m-1)}{2\log{2}} W_{-1} \left(-\frac{2\log 2}{(d+1)m(m-1)} 2^{-\frac{2m}{(d+1)m(m-1)}} \right)$ \end{tabular} \end{table} The VC dimension for the set of all NN-rule classifiers in $d$-dimensional Euclidean space with a reference set of size $m$ grows at least as fast as $dm$ and not faster than $O(m^2 \log m)$ as $m$ increases. For the case of two-dimensional Euclidean space, the VC dimension grows not faster than $O(m \log m)$. Considering instead growth with $d$, the VC dimension for the set of all NN-rule classifiers in $d$-dimensional Euclidean space with a reference set of size $m$ grows at least as fast as $d(m-1)$ and not faster than $O(d \log d)$ as $d$ increases. Precise lower and upper bounds for this VC dimension are given in our Corollary \ref{LowerBoundProp} and Proposition \ref{Wbound} respectively, and summarised in Table~\ref{tab:results}. The consequence of these bounds that is of interest to practitioners is the implication for the size of sample needed to learn an accurate classifier. In the Probably Approximately Correct learning model, the \emph{sample complexity} is the number of training examples needed to learn a classifier of given accuracy with given probability. The sample complexity of a family of classifiers is known to depend linearly on the VC dimension~\cite[Theorem 6.8]{Shais}. Therefore, the bounds we give above for the asymptotic growth of the VC dimension are also bounds on the asymptotic growth of the size of the training data set needed to learn (with given probability) an accurate NN classifier with reference set of given size.
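The expressions in Table~\ref{tab:results} are easy to evaluate numerically. The sketch below (a hedged illustration using SciPy's Lambert W implementation; the sample values of $d$ and $m$ are arbitrary) evaluates the lower bound, the accurate upper bound of Proposition \ref{Wbound}, and the looser bound of Corollary \ref{LooseCor}, and checks their ordering:

```python
import numpy as np
from scipy.special import lambertw

def q_exponent(d, m):
    # Exponent q from Proposition \ref{Wbound} (d = 2 vs d >= 3 cases).
    return 9 * (m - 2) if d == 2 else (d + 1) * m * (m - 1) / 2

def upper_accurate(d, m):
    # -(q / log 2) * W_{-1}( -(log 2 / q) * 2^{-m/q} )
    q = q_exponent(d, m)
    arg = -(np.log(2) / q) * 2.0 ** (-m / q)
    return float((-q / np.log(2)) * lambertw(arg, k=-1).real)

def upper_loose(d, m):
    # q' * ( sqrt(2(m/q' + log q' - 1)) + m/q' + log q' ), q' = q / log 2
    qp = q_exponent(d, m) / np.log(2)
    u = m / qp + np.log(qp) - 1
    return float(qp * (np.sqrt(2 * u) + m / qp + np.log(qp)))

def lower(d, m):
    return 2 * m + 1 if d == 2 else d * m + 2 - d

# The lower bound never exceeds the accurate upper bound, which in turn
# never exceeds the looser Chatzigeorgiou-based bound.
for d in (2, 3, 5):
    for m in (4, 10, 100):
        lo, hi, hi_loose = lower(d, m), upper_accurate(d, m), upper_loose(d, m)
        assert lo <= hi <= hi_loose
```

The numerical gap between the lower and upper bounds is substantial even for small $m$, reflecting the looseness of the hyperplane-counting argument behind the upper bound.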
The lower bound applies only to classification algorithms which produce reference sets of given fixed size (and can produce \emph{any} reference set of that size). The upper bound is significantly more broadly applicable: it applies to any NN classification algorithm whose reference set contains at most $m$ points. Thus we may conclude: the size of the training set needed to learn an accurate NN-rule classifier with reference set of size $m$ in $d$-dimensional Euclidean space grows not faster than $O(m^2 \log m)$ as $m$ increases. For the case of two-dimensional Euclidean space, the size of the training set required grows not faster than $O(m \log m)$. Considering instead growth with $d$ for fixed $m$, the size of the training set required grows not faster than $O(d \log d)$. The fact that the growth rate of the upper bound is asymptotically faster (with $m$) for $d \geq 3$ than for $d=2$ raises the interesting possibility that there may be something fundamentally different about the behaviour of NN-rule classifiers in three dimensions and higher from their behaviour in two-dimensional space. If future work were to establish an $O(m^2)$ lower bound for $d \geq 3$, this possibility would be confirmed. If, instead, an $O(m \log(m))$ upper bound were established, implying the same behaviour for the 1NN classifier in higher dimensions as in two dimensions, this would imply instead an interesting discrepancy between the behaviour of 1NN classifiers and the behaviour of neural networks with access to an equal number of hyperplanes from which to construct their decision boundaries, as discussed in section \ref{discussion}. \section*{Acknowledgment} This work was done under project RPG-2015-188 funded by The Leverhulme Trust, UK. While preparing the paper for publication, IG received support from the European Union's Horizon 2020 Research and Innovation programme under grant agreement No 731593. \bibliographystyle{natbib}
\section{Algorithm}\label{sec:algorithm} The proposed SPULTRA algorithm is based on a pre-learned union of sparsifying transforms. The process of learning such a union of transforms from a dataset of image patches has been detailed in~\cite{pwls-ultra2018}. The learning problem in \cite{pwls-ultra2018} simultaneously groups the training patches into $K$ \textcolor{black}{classes }and learns a transform in each \textcolor{black}{class }along with the sparse coefficients (in the transform domain) of the patches. This learning is accomplished by an alternating algorithm (see~\cite{pwls-ultra2018}). This section focuses on describing the algorithm in the reconstruction stage for SPULTRA, i.e., for \eqref{eq:P0}. The data-fidelity term $\mathsf{L}(\x)$ in \eqref{eq:P0} is nonconvex when the electronic noise variance $\sigma^2$ is nonzero. It is challenging to directly optimize such a logarithmic nonconvex function. We propose to iteratively design quadratic surrogate functions for this data-fidelity term $\mathsf{L}(\x)$. In each iteration, we minimize the surrogate cost, i.e., the quadratic data-fidelity term together with the ULTRA regularizer, via alternating minimization between an image update step and a sparse coding and clustering step that has a closed-form solution~\cite{wen:14:sos}. We use the relaxed OS-LALM algorithm for the image update step~\cite{nien:16:rla}. We perform only one alternation between the two steps for each designed surrogate function, which saves runtime and works well in practice.
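To make the majorize-minimize strategy concrete, the following Python sketch applies the same idea to a single scalar line integral, under an assumed mono-energetic shifted-Poisson marginal negative log-likelihood (the values of $I_0$, $Y_i$, and $\sigma^2$ are illustrative; this is a toy one-dimensional analogue, not the full SPULTRA algorithm):

```python
import numpy as np

# Toy scalar majorize-minimize loop for one ray, assuming the
# mono-energetic shifted-Poisson marginal NLL
#   h(l) = (I0*exp(-l) + s2) - (Y + s2) * log(I0*exp(-l) + s2),
# where I0 = incident photons, Y = measurement, s2 = noise variance.
I0, Y, s2 = 1.0e4, 9.0e3, 25.0

def h(l):
    ybar = I0 * np.exp(-l) + s2
    return ybar - (Y + s2) * np.log(ybar)

def hdot(l):
    u = I0 * np.exp(-l)
    return u * ((Y + s2) / (u + s2) - 1.0)

def curvature(ln):
    """Optimum curvature at the expansion point (TMI99-style formula)."""
    if ln > 0:
        return max(2 * (h(0.0) - h(ln) + ln * hdot(ln)) / ln ** 2, 1e-12)
    # at ln = 0, fall back to the second derivative there
    return max(-I0 * ((Y + s2) * s2 / (I0 + s2) ** 2 - 1.0), 1e-12)

# Each iteration exactly minimizes the current quadratic surrogate
# (clipped to the nonnegativity constraint on the line integral).
l = 2.0
for _ in range(50):
    l = max(l - hdot(l) / curvature(l), 0.0)

# The iterates approach the stationary point l* = log(I0 / Y) of this NLL.
assert abs(l - np.log(I0 / Y)) < 1e-4
```

In the full algorithm the scalar update is replaced by a constrained weighted least-squares image update plus the ULTRA regularizer, but the surrogate mechanism per ray is the one sketched here.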
\vspace{-0.15in} \subsection{Surrogate function design} \vspace{-0.03in} We design a series of quadratic surrogate functions for $\mathsf{L}(\x)$ as follows:\\ \begin{equation}\label{eq:surrogate} \begin{aligned} \phi(\x;\x^n) &= \mathsf{L}(\x^n) + \bm{d}_h(l^n)\A(\x - \x^n)\\ &\qquad+\frac{1}{2}(\x - \x^n)^T\A^T\W^n\A(\x - \x^n), \end{aligned} \end{equation} where $(\cdot)^n$ denotes values at the $n$th iteration and ${\bm{d}_h(l^n) \in \mathbb{R}^{N_d}}$ is a row vector capturing the gradient information and is defined as $\bm{d}_h(l^n) \triangleq [\dot{h_i}(l_i^n)]_{i = 1}^{N_d}$. The curvatures of the $n$th updated parabola (surrogate) are described by $\W^n\triangleq \text{diag}\{c_i(l^n_i)\}$. In this paper, we use the optimum curvatures \cite{TMI99} that are defined as follows: \begin{equation} \label{eq:OptCuv} c_i(l^n_i) = \begin{cases} \big[2\frac{h_i(0) - h_i(l_i^n) + (l_i^n)\dot{h_i}(l_i^n)}{(l_i^n)^2}\big]_+, &l_i^n > 0\\ \big[\ddot{h_i}(0)\big]_+, &l_i^n = 0, \end{cases} \end{equation} where $\ddot{h}_i$ is the second-order derivative, and operator $[\cdot]_+$ sets the non-positive values to zero. In practice, we replace negative values with a small positive number so that the diagonal matrix $\W^n$ is invertible. Due to numerical precision, \eqref{eq:OptCuv} might become extremely large when $l_i^n$ is nonzero but small. To avoid this problem, we use an upper bound of the maximum second derivative $\big[\ddot{h_i}(0)\big]_+$ for the curvature $c_i(l^n_i)$ when $l_i^n > 0$ \cite{TMI99}. By ignoring the terms irrelevant to $\x$ in \eqref{eq:surrogate}, we get the following equivalent form of $\phi(\x;\x^n)$: \begin{equation}\label{deduce2} \phi(\x;\x^n) \equiv \frac{1}{2}||\tilde{\y}^n - \A\x||_{\W^n}^2, \end{equation} where ``$\equiv$'' means equal to within irrelevant constants of $\x$, and $\tilde{\y}^n \triangleq \A\x^n -\big(\W^n\big)^{-1}[\bm{d}_h(l^n)]^T$. 
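As a one-dimensional sanity check of the optimum-curvature construction, the sketch below (assuming a mono-energetic shifted-Poisson marginal NLL as a stand-in for $h_i$; all numbers are illustrative) verifies numerically that the parabola of \eqref{eq:surrogate} with curvature \eqref{eq:OptCuv} touches $h_i$ at the expansion point and lies above it on a grid of line-integral values:

```python
import numpy as np

# One-ray majorization check.  The shifted-Poisson form of h below is an
# assumption (mono-energetic source); I0, Y, s2 are illustrative values.
I0, Y, s2 = 1.0e4, 9.0e3, 25.0

def h(l):
    ybar = I0 * np.exp(-l) + s2
    return ybar - (Y + s2) * np.log(ybar)

def hdot(l):
    u = I0 * np.exp(-l)
    return u * ((Y + s2) / (u + s2) - 1.0)

ln = 0.2                                             # expansion point l^n
c = 2 * (h(0.0) - h(ln) + ln * hdot(ln)) / ln ** 2   # optimum curvature

def surrogate(l):
    return h(ln) + hdot(ln) * (l - ln) + 0.5 * c * (l - ln) ** 2

grid = np.linspace(0.0, 3.0, 301)
gap = surrogate(grid) - h(grid)

# The parabola touches h at l^n (and at l = 0 by construction of c)
# and lies above it elsewhere on l >= 0, up to floating-point noise.
assert abs(surrogate(ln) - h(ln)) < 1e-9
assert gap.min() > -1e-6 * np.abs(h(grid)).max()
```

The curvature formula makes the parabola match $h_i$ exactly at both $l=0$ and $l=l^n$, which is what makes it the smallest curvature that still majorizes on $[0,\infty)$.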
The overall surrogate function at the $n$th iteration for the penalized-likelihood objective function \textcolor{black}{$G(\x)$} in \eqref{eq:P0} is then \begin{equation}\label{eq:surr_all} \Phi(\x;\x^n) = \frac{1}{2}||\tilde{\y}^n - \A\x||_{\W^n}^2 + \R(\x),\ \textcolor{black}{\text{s.t.}\ \x \in \mathcal{X}.} \end{equation} We \textcolor{black}{descend} the surrogate function $\Phi(\x;\x^n)$ in \eqref{eq:surr_all} by alternating \textcolor{black}{once} between an image update step, and a sparse coding and clustering step. \vspace{-0.2in} \subsection{Image Update Step}\vspace{-0.05in} In the image update step, we update the image $\x$ with fixed sparse codes $\{\z_j\}$ and \textcolor{black}{class} assignments $\{C_k\}$. The relevant part of the majorizer for this step is \begin{equation}\label{eq:Phi1} \begin{aligned} \Phi_1(\x;\x^n) = \phi(\x;\x^n) + \beta \sum_{k=1}^{K} \sum_{j\in C_k} \tau_j \|\omg_k \P_j \x - \z_j\|^2_2 \end{aligned} \end{equation} \textcolor{black}{Although we have a box} constraint on $\x$\textcolor{black}{, i.e., $\x \in \mathcal{X}$, in practice, the upper bound $x_{\mathrm{max}}$ can be set high such that it will not be active. We applied the }relaxed OS-LALM algorithm~\cite{nien:16:rla} \textcolor{black}{to minimize }\eqref{eq:Phi1} \textcolor{black}{with the constraint. This algorithm is } shown in Algorithm \ref{alg: spultra-alg} (steps 7-10). The OS-LALM method uses majorizing matrices. In particular, the matrix $\A^T\W^n\A$ is majorized by \textcolor{black}{$\D_{\A} \triangleq \text{diag}\{\A^T\W^n \A \mathbf{1}\}$, where $\mathbf{1}$ denotes a vector of ones.} Denoting the regularization term in \eqref{eq:Phi1} as $\R_2(\x)$, its gradient is \begin{equation}\label{eq:grad_r2} \nabla \R_2(\x) = 2\beta\sum_{k=1}^{K} \sum_{j\in C_k} \tau_j \P_j^T \omg_{k}^T (\omg_k \P_j \x - \z_j ). 
\vspace{-0.05in} \end{equation} The Hessian of $\R_2(\x)$ is majorized by the following diagonal matrix:\\ \begin{equation}\label{eq: D_R} \D_{\R} \triangleq 2\beta \bigg\{\max_k \|\omg_{k}^T\omg_{k}\|_2\bigg\}\sum_{k=1}^{K} \sum_{j\in C_k}\tau_j \P^T_j \P_j . \end{equation} The (over-)relaxation parameter satisfies $\alpha \in [1,2)$, and the parameter $\rho_t > 0$ in OS-LALM decreases with the iteration index $t$ according to the following equation \cite{nien:16:rla}: \begin{equation}\label{eq:rho} \rho_t(\alpha)= \begin{cases} 1, & t=0\\ \frac{\pi}{\alpha(t+1)}\sqrt{1-\big(\frac{\pi}{2\alpha(t+1)}\big)^2}, & \text{otherwise.} \end{cases} \vspace{-0.1in} \end{equation} \vspace{-0.1in} \subsection{Sparse Coding and Clustering Step} Here, with $\x$ fixed, we jointly update the sparse codes and the \textcolor{black}{class }memberships of patches. The relevant part of the cost function for the sparse coding and clustering step is \begin{equation}\label{eq:codes-clusters} \min_{\{\z_j, C_k\}} \sum_{k=1}^{K} \bigg\{ \sum_{j\in C_k} \tau_j \{\|\omg_k\P_j \x - \z_{j}\|^2_2 + \gamma_c^2\|\z_j\|_0 \}\bigg\} . \end{equation} \textcolor{black}{Problem \eqref{eq:codes-clusters} is separable in terms of the patches, so each patch is clustered and sparse coded independently in parallel. The optimal sparse code $\z_j$ in \eqref{eq:codes-clusters} is obtained by hard-thresholding, i.e., ${\z_j= \mathit{H}_{\gamma_c}(\omg_{k_j}\P_j\x)}$, where ${\mathit{H}_{\gamma_c}(\cdot)}$ represents a vector hard-thresholding operator that zeros out elements whose magnitudes are smaller than $\gamma_c$, and leaves other entries unchanged. Then the optimized class $\hat{k}_j$ for the $j$th patch is computed as follows~\cite{pwls-ultra2018}:} \begin{equation} \label{eq:clustering} \small{\hat{k}_j = \argmin_{1\leq k \leq K} || \omg_{k} \P_j \x - \mathit{H}_{\gamma_c}(\omg_{k}\P_j\x)||^2_2 + \gamma_c^2\|\mathit{H}_{\gamma_c}(\omg_{k}\P_j\x)\|_0.
} \end{equation} \textcolor{black}{We compute the cost values on the right hand side of \eqref{eq:clustering} for each $k = 1, \cdots, K$, and determine the $\hat{k}_j \in \{1, \cdots, K\}$ that gives the minimal cost value, i.e., patch $\P_j \x$ is grouped with the transform that provides the smallest value of the cost in \eqref{eq:clustering}. Then, the corresponding optimal sparse code is ${\hat{\z}_j = \mathit{H}_{\gamma_c}(\omg_{\hat{k}_j}\P_j\x)}$.} \textbf{Algorithm} \ref{alg: spultra-alg} illustrates the proposed optimization algorithm for Problem \eqref{eq:P0}. \begin{algorithm}[!htp] \caption{SPULTRA Algorithm} \label{alg: spultra-alg} \begin{algorithmic}[1] \Require~~\\ initial image $\hat{\x}^{0}$; $\alpha = 1.999$ ; $\rho_0 = 1$; \\ pre-computed $\D_\R $ according to \eqref{eq: D_R};\\ number of outer iterations $N$, number of inner iterations $P$, and number of ordered-subsets $M$. \Ensure reconstructed image $\hat{\x}^N$. \For {$n=0,1,2,\cdots, N-1$} \State \textbf{(1) Image Update}: Fix $\hat{\z}_j^{n}$ and $\hat{C}_k^n$; \\ \textit{Initializations: }\begin{enumerate} \item $\x^{(0)} = \hat{\x}^{n}$, \item Determine $c_i(l^n_i)$ according to \eqref{eq:OptCuv}, \item $\W^n = \diag\{c_i(l^n_i)\}$, \item $\D_{\A} \triangleq \diag\{\textcolor{black}{\A^T\W^n\A\mathbf{1}}\}$, \item \textcolor{black}{$\bm{d}_h(l^n) = [I_0e^{-f_i(l^n_i)}\dot{f_i}(l_i^n)(\frac{Y_i}{I_0e^{-\textcolor{black}{f_i(l_i^n)}} + \sigma^2} - 1)]_{i = 1}^{N_d}$, }\item $\tilde{\y}^n = \A\x^{(0)} -{(\W^n)}^{-1}[\bm{d}_h(l^n)]^T$, \item $\ze^{(0)} = \g^{(0)} = M\A_M^T\W^n_M(\A_M\x^{(0)}-\tilde{\y}_M^n) $, \item $\bm{\eta}^{(0)} = \D_\A \x^{(0)} - \ze^{(0)}$, \item compute $\nabla \R_2(\x)$ according to \eqref{eq:grad_r2}. 
\end{enumerate} \For {$p=0,1,2,3,\cdots,P-1$} \For {$m=0,1,2,3,\cdots,M-1$} \begin{equation*}\label{eq:rlalm} \begin{aligned} &t ~= ~pM + ~m;\\ &\hspace{-0.1in}\left\{ \begin{aligned} \s^{(t+1)} &= \rho_t(\D_\A \x^{(t)} -\bm{\eta}^{(t)}) + (1-\rho_t)\g^{(t)} \\ \x^{(t+1)} &= [\x^{(t)} - (\rho_t\D_\A+\D_\R)^{-1}(\s^{(t+1)} +\nabla \R_2(\x^{(t)}))]_\mathcal{C} \\ \ze^{(t+1)}& \triangleq M\A_m^T\W^n_m(\A_m\x^{(t+1)}-\tilde{\y}_m^n) \\ \g^{(t+1)} &= \frac{\rho_t}{\rho_t+1}(\alpha \ze^{(t+1)} + (1-\alpha)\g^{(t)}) + \frac{1}{\rho_t+1}\g^{(t)}\\ \bm{\eta}^{(t+1)} &= \alpha(\D_{\A} \x^{(t+1)} -\ze^{(t+1)}) + (1-\alpha)\bm{\eta}^{(t)} \end{aligned} \right.\\ &\text{Decrease $\rho_t$ according to \eqref{eq:rho}}; \end{aligned} \end{equation*} \EndFor \EndFor \State $\hat{\x}^{n+1} = \x^{(t+1)}$; \State \textbf{(2) Sparse Coding and Clustering}: Fix $\hat{\x}^{n+1}$, compute class assignments $\hat{k}_j^{n+1}$ using \eqref{eq:clustering}, and sparse codes $\hat{\z}_{j}^{n+1} = H_{\textcolor{black}{\gamma_c}}(\omg_{\hat{k}_j^{n+1}} \P_j \hat{\x}^{n+1}),\ \forall \ j$. \EndFor \end{algorithmic} \end{algorithm} \vspace{-0.13in} \subsection{Computational Cost} \label{sec:computation analysis} \vspace{-0.02in} The SPULTRA algorithm has a similar structure in each iteration as the recent PWLS-ULTRA \cite{pwls-ultra2018}, except for several initializations in the image update step. Since forward and backward projections are used to compute $\D_\A$ and $\tilde{\y}^n$ during initialization, the image update step of SPULTRA is slightly slower than PWLS-ULTRA. In our experiments, we observed that the initializations took around $20 \%$ of the runtime in each outer iteration. However, in practice, especially for low doses, SPULTRA reconstructs images better than PWLS-ULTRA for a given number of outer iterations. Or alternatively, SPULTRA takes much fewer outer iterations (and runtime) to achieve the same image reconstruction quality as PWLS-ULTRA. These results are detailed in Sec. 
\ref{sec:experiments}. \section{Conclusions}\label{sec:conclusion} \vspace{-0.01in} This paper proposes a new LDCT reconstruction method dubbed SPULTRA that combines the shifted-Poisson statistical model with the union of learned transforms or ULTRA regularizer. To deal with the nonconvex data-fidelity term arising from the shifted-Poisson model, we iteratively designed quadratic surrogate functions for this term in the proposed algorithm. In each surrogate function update iteration, the overall cost function (i.e., majorizer) has a similar structure to that in the very recent PWLS-ULTRA method, and is optimized by performing an image update step with a quadratic cost and a sparse coding and clustering step with an efficient closed-form update. We evaluated the proposed SPULTRA scheme with numerical experiments on the XCAT phantom, synthesized clinical data, \textcolor{black}{and beam-hardened ultra low-dose raw measurement simulations. SPULTRA outperformed }prior methods in terms of eliminating bias and noise in the reconstructed image while maintaining the resolution of the reconstruction under very low X-ray doses. SPULTRA was also much faster than PWLS-ULTRA in achieving a desired reconstruction quality at low doses\textcolor{black}{, and it had better generalization properties than the WavResNet-based denoising scheme. Moreover, we investigated the }convergence guarantees of the proposed surrogate-function-based alternating minimization scheme. \textcolor{black}{We will investigate }variations or generalizations of the SPULTRA model such as exploiting unions of overcomplete or tall transforms, or rotationally invariant transforms in future work. \section{Experimental Results}\label{sec:experiments} \vspace{-0.02in} Here we present numerical experiments demonstrating the behavior of SPULTRA.
We evaluated the proposed SPULTRA method on the \textcolor{black}{3D XCAT phantom \cite{segars:08:rcs} and synthesized clinical data at multiple X-ray doses, as well as an ultra low-dose 2D shoulder phantom scan simulated from real raw data,} and compared its performance with that of the state-of-the-art PWLS-ULTRA \cite{pwls-ultra2018}. We computed the root mean square error (RMSE) and structural similarity index (SSIM) \cite{xu:12:ldx,ssim2} of \textcolor{black}{XCAT} images reconstructed by various methods in a region of interest (ROI). The RMSE is defined as $\sqrt{\sum_{i\in \mathrm{ROI}}(\hat{x}_i - x_i^*)^2/N_{p,ROI}}$, where $N_{p,ROI}$ is the number of pixels in the ROI, $\hat{\x}$ is the reconstructed image, and $\x^*$ is the ground-truth image. We also compared to PWLS reconstruction with an edge-preserving regularizer (PWLS-EP) ${\R(\x) = \sum_{j =1}^{N_p} \sum_{k\in N_{j}}\kappa_{j} \kappa_{k} \varphi(x_j - x_k)}$, where $N_j$ represents the neighborhood of the $j$th pixel, and $\kappa_j$ and $\kappa_k$ are elements of $\bm{\kappa}$ that encourage resolution uniformity \cite{kappa}. \textcolor{black}{The potential function for 3D reconstruction was ${\varphi (t) = \delta^2(|t/\delta| - \log(1+|t/\delta|))}$ with\footnote{``HU" used in this paper is the shifted Hounsfield unit, where air is 0 HU and water is 1000 HU.} ${\delta = 10\text{ HU}}$, and that for 2D shoulder phantom simulations was ${\varphi (t) = \delta^2(\sqrt{1+|t/\delta|^2}-1)}$ with ${\delta = 100 \text{ HU}}$. The results obtained by PWLS-EP were taken as initial images for the other methods compared in this section. } The SPULTRA method shifts uncorrected pre-log data by the variance of electronic noise. Such un-preprocessed pre-log data and the variance of the electronic noise on a CT scanner are proprietary to CT vendors, especially for LDCT.
In our experiments \textcolor{black}{with the XCAT phantom simulations and the synthesized clinical data}, we generated pre-log data $\hat{\y}$ from the XCAT phantom as well as from a clinical image $\tilde{\x}$ reconstructed by the PWLS-ULTRA method as follows: \begin{equation} \hat{\y}_i = \text{Poisson} \{I_0 e^{-[\A\tilde{\x}]_i}\} + \mathcal{N}\{0, \sigma^2\}, \end{equation} where $\mathcal{N}\{\mu, \sigma^2\}$ denotes a Gaussian distribution with mean $\mu$ and variance $\sigma^2$. \textcolor{black}{We refer to the image $\tilde{\x}$ used for generating the synthesized clinical data as the \textit{``true'' clinical image}. We also simulated an ultra low-dose scan from raw (pre-log) measurements of a standard-dose scan of a 2D shoulder phantom as: \vspace{-0.1in} \begin{equation}\label{eq:shoulder_gen} \hat{\y}_i = \text{Poisson} \{\frac{1}{\alpha}\y_{i_{s}}\} + \mathcal{N}\{0, \sigma^2\}, \end{equation} where $\alpha$ is a scale factor we used to lower the dose from standard-dose measurements, and $\y_{i_{s}}$ denotes the raw standard-dose measurements. We set $\sigma = 5$ for all the simulations, as suggested in prior works~\cite{pre-post-log, pwls-ultra2018}. We implemented the system model $\A$ via the separable footprints projector method~\cite{long2010SF}.} \textcolor{black}{MATLAB code to reproduce the results in this work \textcolor{black}{is} released at \url{http://web.eecs.umich.edu/~fessler/}. Some additional results are included in the supplement. } \vspace{-0.11in} \subsection{XCAT phantom results} \subsubsection{Framework} We pre-learned a union of 15 square transforms from $8 \times 8 \times 8$ overlapping patches extracted from a $420\times 420\times 54$ XCAT phantom with a patch stride $2 \times 2 \times 2$. These transforms were initialized during training \cite{pwls-ultra2018} with 3D DCT, and the clusters were initialized randomly.
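The first pre-log data generation model above maps directly to code; the following minimal Python sketch (illustrative only, not the released MATLAB implementation) assumes a vector of noiseless line integrals $[\A\tilde{\x}]_i$ as input:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def simulate_prelog(line_integrals, I0=1e4, sigma=5.0):
    """Simulate noisy pre-log CT measurements.

    y_i = Poisson{ I0 * exp(-[Ax]_i) } + N(0, sigma^2),
    where `line_integrals` plays the role of [Ax]_i.
    """
    mean_counts = I0 * np.exp(-np.asarray(line_integrals))
    # Poisson photon noise plus additive Gaussian electronic noise
    return rng.poisson(mean_counts) + rng.normal(0.0, sigma, size=mean_counts.shape)

y = simulate_prelog(np.linspace(0.0, 6.0, 888))
```

Note that for low $I_0$ and large line integrals the Gaussian term makes some simulated measurements non-positive, which is exactly the regime the shifted-Poisson model targets.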
\begin{figure}[!htp]\vspace{-0.1in} \centering \includegraphics[width = 0.4\textwidth]{figures/xtrue.pdf} \caption{\textcolor{black}{Reconstruction targeted ROI of the true XCAT phantom displayed with central slices along the axial, sagittal and coronal directions. The display window is [800, 1200] HU.}} \label{fig:3d-trueimg} \vspace{-0.15in} \end{figure} We simulated 3D axial cone-beam scans using a $840\times 840\times 96$ XCAT phantom with $\Delta_x = \Delta_y = 0.4883$ mm and $\Delta_z = 0.625$ mm. We generated sinograms of size $888\times 64 \times 984$ using GE LightSpeed cone-beam geometry corresponding to a mono-energetic source with $I_0 = \textcolor{black}{1\times}10^4$, $5\times 10^3$, $3\times 10^3$, and ${2\times 10^3}$ incident photons per ray and no scatter, respectively. Tab.~\ref{tab:non-pos-perc} shows percentages of non-positive measurements under different dose levels. We set these non-positive measurements to $1\times 10^{-5}$ for generating the post-log sinogram that PWLS-based methods rely on \cite{pre-post-log}. We reconstructed the 3D volume with a size of $420\times 420\times 96$ at a coarser resolution of $\Delta_x = \Delta_y = 0.9766$ mm and $\Delta_z = 0.625$ mm. The patch size during reconstruction was $8\times 8 \times 8$ and the stride was $3\times 3 \times 3$. For evaluating reconstruction performance, \textcolor{black}{we chose an ROI that was composed of the central 64 out of 96 axial slices, and refer to it as the \textit{reconstruction targeted ROI}}. Fig.~\ref{fig:3d-trueimg} shows the central slices of the true XCAT phantom \textcolor{black}{inside this ROI }along three directions. In the reconstruction stage of PWLS-ULTRA and SPULTRA, we used 4 iterations for the image update step, i.e., $P = 4$, for a good trade-off between algorithms' convergence and computational costs. We used $12$ ordered subsets, i.e., $M =12$, to speed up the algorithm. 
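The post-log preprocessing described above (replacing non-positive pre-log measurements by $1\times 10^{-5}$ before taking the log, as the PWLS-based methods require) can be sketched in Python; the function name and vectorized form are illustrative:

```python
import numpy as np

def postlog_sinogram(y_prelog, I0):
    """Post-log sinogram for PWLS-based methods.

    Non-positive pre-log measurements are replaced by 1e-5 (as in the
    experiments above) so the line-integral estimate -log(y / I0)
    stays finite.
    """
    y = np.where(np.asarray(y_prelog, dtype=float) <= 0.0, 1e-5, y_prelog)
    return -np.log(y / I0)

sino = postlog_sinogram(np.array([5000.0, 0.0, -3.0]), I0=1e4)
```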
The initial image for the ULTRA methods was reconstructed by PWLS-EP, whose regularization parameter was set empirically to ensure good reconstruction quality as $\beta_{ep} = 2^{13}$ for all the dose cases tested. \textcolor{black}{We used an analytical filtered back-projection (FBP) method FDK \cite{feldkamp1984practical} to initialize PWLS-EP. The FDK images of the XCAT phantom for all the dose levels are shown in the supplement.} Since SPULTRA has a cost function similar to that of PWLS-ULTRA in each outer iteration, we used the same parameter settings for both methods: $\beta = 4\times 10^4$ and $\gamma_c = 4\times 10^{-4}$, which we observed worked well for all the dose levels we tested. \begin{table}[!htp] \centering \caption{Percentages of non-positive measurements under different dose levels for XCAT phantom simulations.} \label{tab:non-pos-perc} \begin{tabular}{c c c c c} \toprule {$I_0$} & $1\times 10^4$ & $5\times 10^3$ & $3\times 10^3$ & $2\times 10^3$ \\ \midrule \begin{tabular}[c]{@{}l@{}}Non-positive\\ Percentage (\%)\end{tabular} &0.06 & 0.20 &0.48 & 0.96 \\ \bottomrule \end{tabular} \vspace{-0.1in} \end{table} \subsubsection{Behavior of the learned ULTRA Models} The learned union of transforms contributes to the clustering and sparsification of image patches. To illustrate the behavior of the learned transforms, we selected 3 of the 15 transforms that capture important structures/features of the reconstructed image \textcolor{black}{(with $I_0 = 1\times 10^4$)} in their \textcolor{black}{classes}. \begin{figure*}[!tbhp] \centering \includegraphics[width=1\textwidth]{figures/sparsecode/stride1_15cluster_1_13_14_spc_fig_v6.pdf} \caption{\textcolor{black}{The three rows correspond to the 1st, 13th, and 14th classes, respectively. The first column displays three voxel-level clustered images of the central axial slice. Each of them is formed by the image patches lying in the corresponding class.
The second column displays part of the transforms for the corresponding classes. The third, fourth and fifth columns show the central axial slice of the sparse coefficient maps obtained by applying specific filters (shown in the top left corner) to patches belonging to the corresponding classes.} The patch stride for plotting these figures was $1\times 1\times 1$.} \label{fig:sparsecode} \end{figure*} \begin{table*}[ht] \centering \caption{\textcolor{black}{RMSE (HU) and SSIM of the reconstruction targeted ROI at various dose levels ($I_0$) using the PWLS-EP, PWLS-ULTRA, and SPULTRA methods for the XCAT phantom simulations.}} \label{tab:numerical} \begin{subtable}{0.45\textwidth} \centering \caption{RMSE (HU)} \begin{tabular}{l c c c} \toprule $\quad I_0$ & PWLS-EP & PWLS-ULTRA & SPULTRA \\ \midrule {$1\times 10^4$ } & 45.3 &29.1 & \textbf{28.9} \\ \midrule {$5\times 10^3$ } &47.1 &33.3 & \textbf{32.8} \\ \midrule {$3\times 10^3$ } & 49.7 & 37.7 &\textbf{36.4} \\ \midrule {$2\times 10^3$} & 53.5 & 43.2 & \textbf{39.9} \\ \bottomrule \end{tabular} \end{subtable} \hspace{0.2in} \begin{subtable}{0.45\textwidth} \centering \caption{SSIM} \begin{tabular}{l c c c} \toprule $\quad I_0$ & PWLS-EP & PWLS-ULTRA & SPULTRA \\ \midrule {$1\times 10^4$ } & 0.941 & 0.974 & \textbf{0.974} \\ \midrule {$5\times 10^3$ } & 0.937 & 0.969 & \textbf{0.970} \\ \midrule {$3\times 10^3$ } &0.927 &0.961 & \textbf{0.963} \\ \midrule {$2\times 10^3$} &0.911 & 0.948 & \textbf{0.956} \\ \bottomrule \end{tabular} \end{subtable} \end{table*} Fig.~\ref{fig:sparsecode} (first column) shows three \textcolor{black}{voxel}-level \textcolor{black}{classes }(\textcolor{black}{voxels} are clustered by majority vote among patches overlapping them) for the reconstructed central axial slice. The top image only contains soft tissues, whereas the middle image shows some edges and bones in the vertical direction, and the bottom image captures some high-contrast structures. 
Fig.~\ref{fig:sparsecode} (second column) shows the transforms for the corresponding classes. Each learned transform has 512 $8\times 8 \times 8$ filters, and we show the first $8\times 8$ slice of 256 of these filters that show gradient-like and directional features. \begin{figure*}[!htbp] \centering \begin{subfigure}[h]{0.245\textwidth} \centering \includegraphics[width=1\textwidth]{figures/rmse_comp/rmse_comp_1e4_bt4e4_gm4e-4.pdf} \caption{$I_0 = \textcolor{black}{1\times}10^4$} \label{fig:SPULTRA-rmse-1e4} \end{subfigure} \hfill \hspace{-0.2in} \begin{subfigure}[h]{0.245\textwidth} \centering \includegraphics[width=1\textwidth]{figures/rmse_comp/rmse_comp_5e3_bt4e4_gm4e-4.pdf} \caption{$I_0 = 5\times 10^3$} \label{fig:SPULTRA-rmse-5e3} \end{subfigure} \hfill \hspace{-0.2in} \begin{subfigure}[h]{0.245\textwidth} \centering \includegraphics[width=1\textwidth]{figures/rmse_comp/rmse_comp_3e3_bt4e4_gm4e-4.pdf} \caption{$I_0 = 3\times 10^3$} \label{fig:SPULTRA-rmse-3e3} \end{subfigure} \hfill \hspace{-0.2in} \begin{subfigure}[h]{0.245\textwidth} \centering \includegraphics[width=1\textwidth]{figures/rmse_comp/rmse_comp_2e3_bt4e4_gm4e-4.pdf} \caption{$I_0 = 2\times 10^3$} \label{fig:SPULTRA-rmse-2e3} \end{subfigure} \caption{RMSE comparison of SPULTRA and PWLS-ULTRA. The cursors indicate the RMSEs (Y) at specific number of outer iterations (X).} \label{fig:rmse_comp} \vspace{-0.16in} \end{figure*} \begin{figure*}[!htp] \centering \begin{subfigure}[h]{1\textwidth} \centering \includegraphics[width=0.9\textwidth]{figures/ROIs_xcat/3e3_xcat_recon_err_v7} \caption{} \label{fig:xcat-3e3} \end{subfigure} \vfill \begin{subfigure}[h]{1\textwidth} \centering \includegraphics[width=0.9\textwidth]{figures/ROIs_xcat/2e3_xcat_recon_err_v7} \caption{} \label{fig:xcat-2e3} \end{subfigure} \caption{\textcolor{black}{Comparison of reconstructions and reconstruction errors at (a) $I_0 = 3\times 10^3$ and (b) $I_0 = 2\times 10^3$ dose levels. 
The 3D images are displayed with the central slices along the axial, sagittal, and coronal directions. The unit of the display windows is HU.}} \label{fig:xcat-recon} \vspace{-0.1in} \end{figure*} Fig.~\ref{fig:sparsecode} also shows the \textcolor{black}{central axial slice of the }sparse coefficient \textcolor{black}{maps (volumes)} for different filters of the transforms in the third, fourth and fifth columns. Each \textcolor{black}{voxel} value in a sparse coefficient \textcolor{black}{map} is obtained by applying the specific 3D filter to a 3D patch (whose \textcolor{black}{front} top left corner is at that \textcolor{black}{voxel}) and hard-thresholding the result. Coefficients for patches \emph{not belonging} to the specific \textcolor{black}{class }are set to zero (masked out). The sparse code \textcolor{black}{maps} capture different types of image features (e.g., edges at different orientations or contrasts) depending on the filters and \textcolor{black}{classes}. \subsubsection{Numerical Results} We compare the RMSE and the SSIM for SPULTRA with those for PWLS-EP and PWLS-ULTRA. Tab.~\ref{tab:numerical} lists the two metrics \textcolor{black}{for the reconstruction targeted ROI} after sufficient iterations (800 iterations) for convergence of PWLS-EP, PWLS-ULTRA, and SPULTRA, for various dose levels. The results show that SPULTRA achieves significant improvements in RMSE and SSIM in low-dose situations. Notably, compared to PWLS-ULTRA, SPULTRA further decreases the RMSE by up to 1.3 HU when $I_0 = 3\times 10^3$, and by around 3.3 HU when $I_0 = 2\times 10^3$. The RMSE improvement of SPULTRA over PWLS-ULTRA can be more clearly observed from Fig.~\ref{fig:rmse_comp} that shows the RMSE evolution with the number of outer iterations under different dose levels. At low-doses, SPULTRA decreases the RMSE more quickly (from the same initial value) and to much lower levels than PWLS-ULTRA. 
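The construction of the sparse coefficient maps described earlier (apply a transform row as a filter to each patch, hard-threshold the result, and mask out patches of other classes) can be sketched in 2D Python for illustration; the paper uses 3D patches and learned 3D filters, and the function name is hypothetical:

```python
import numpy as np

def sparse_code_map(image, filt, labels, cls, gamma):
    """2D sketch of a sparse coefficient map (the paper uses 3D patches).

    For each patch location, apply one transform row ("filter") to the
    patch, hard-threshold at gamma, and zero out patches whose cluster
    label differs from `cls`.
    """
    p = filt.shape[0]
    H, W = image.shape
    out = np.zeros((H - p + 1, W - p + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            if labels[i, j] != cls:
                continue  # coefficients of other classes are masked out
            c = np.sum(filt * image[i:i + p, j:j + p])
            out[i, j] = c if abs(c) > gamma else 0.0  # hard thresholding
    return out

# a vertical-edge filter responds only at the step edge of this toy image
image = np.zeros((6, 6)); image[:, 3:] = 1.0
filt = np.array([[-1.0, 1.0], [-1.0, 1.0]])
labels = np.zeros((5, 5), dtype=int)  # all patches in class 0
m = sparse_code_map(image, filt, labels, 0, gamma=1.0)
```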
Fig.~\ref{fig:rmse_comp} shows that to achieve the same RMSE as PWLS-ULTRA at 600 outer iterations, SPULTRA takes 487, 365, 251, and 133 outer iterations under $I_0 = \textcolor{black}{1\times}10^4, \ 5\times 10^3, \ 3\times 10^3, \text{and } 2\times 10^3$, respectively. \subsubsection{Computational Costs} As discussed in Sec. \ref{sec:computation analysis}, SPULTRA has a similar computational cost per iteration as PWLS-ULTRA, except for computing some initializations for the image update. Fig.~\ref{fig:rmse_comp} shows that the SPULTRA method requires far fewer outer iterations than PWLS-ULTRA to achieve the same RMSE for the reconstruction, especially at low doses. \begin{figure}[!htbp] \centering \begin{subfigure}[h]{0.45\textwidth} \centering \includegraphics[width=1\textwidth]{figures/ROIs_xcat/3e3_ROI1} \caption{\textcolor{black}{$I_0 = 3\times 10^3$}} \label{fig:xcat-3e3-roi1} \end{subfigure} \vfill \begin{subfigure}[h]{0.45\textwidth} \centering \includegraphics[width=1\textwidth]{figures/ROIs_xcat/2e3_ROI1} \caption{\textcolor{black}{$I_0 = 2\times 10^3$}} \label{fig:xcat-2e3-roi1} \end{subfigure} \caption{\textcolor{black}{3D displays of reconstructions of ROI 1 defined in Fig.~\ref{fig:xcat-recon}. The display windows are {[900, 1200]} HU.}} \label{fig:xcat-recon-roi1} \vspace{-0.06in} \end{figure} When the dose is very low, e.g., when $I_0 = 2\times 10^3$, SPULTRA takes only about a quarter as many outer iterations as PWLS-ULTRA to achieve the same RMSE. Thus, the total runtime to achieve a specific reconstruction quality at low doses is typically much lower for SPULTRA than for PWLS-ULTRA. When the dose is not very low, for example when $I_0 = \textcolor{black}{1\times}10^4$, the SPULTRA and the PWLS-ULTRA methods have similar computational costs and runtimes.
To achieve an RMSE of 29.26 HU (see Fig.~\ref{fig:SPULTRA-rmse-1e4}), PWLS-ULTRA requires 600 outer iterations, while SPULTRA requires 487$\times 120\% \approx$ 584 effective outer iterations, where the additional $20\%$ runtime is associated with initializations in each SPULTRA outer iteration. \subsubsection{Visual Results and Image Profiles} Fig.~\ref{fig:xcat-recon} shows the reconstructed images and the corresponding error images for PWLS-EP, PWLS-ULTRA, and SPULTRA, at $I_0 = 3\times 10^3$ and $I_0 = 2\times 10^3$. Compared to the PWLS-EP result, both PWLS-ULTRA and SPULTRA achieved significant improvements in image quality, with sharper reconstruction of anatomical structures such as bones and soft tissues and better noise suppression. However, the PWLS-ULTRA method introduces bias in the reconstructions, which leads to larger reconstruction errors compared to the proposed SPULTRA method. In Fig.~\ref{fig:xcat-recon}, we \textcolor{black}{marked three 3D ROIs in the axial plane, i.e., ROI~1, ROI~2, and ROI~3. Fig.~\ref{fig:xcat-recon-roi1} shows zoomed-in 3D views of ROI~1, and those of ROI~2 and ROI~3 are shown in the supplement. We also plot the evolution of RMSE through the axial slices of the three 3D ROIs in Fig.~\ref{fig:xcat-metric-roi}.
The figures demonstrate that SPULTRA clearly outperforms the competing PWLS-EP and PWLS-ULTRA schemes.} \begin{figure}[!htbp] \centering \begin{subfigure}[h]{0.24\textwidth} \centering \includegraphics[width=1\textwidth]{figures/ROIs_xcat/ROI_metrics/3e3_ROI_metrics_v3.pdf} \caption{\textcolor{black}{$I_0 = 3 \times 10^3$}} \label{fig:xcat-3e3-roi-metric} \end{subfigure} \hfill \begin{subfigure}[h]{0.24\textwidth} \centering \includegraphics[width=1.03\textwidth]{figures/ROIs_xcat/ROI_metrics/2e3_ROI_metrics_v3.pdf} \caption{\textcolor{black}{$I_0 = 2 \times 10^3$}} \label{fig:xcat-2e3-roi-metric} \end{subfigure} \caption{\textcolor{black}{RMSE (HU) for each axial slice of the 3D ROIs (ROI~1, ROI 2, and ROI 3). The X-axis shows slice indices of the central 64 out of 96 axial slices. Left plot: $I_0 = 3 \times 10^3$. Right plot: $I_0 = 2 \times 10^3$.}} \vspace{-0.1in} \label{fig:xcat-metric-roi} \end{figure} The above advantages \textcolor{black}{of SPULTRA} can be seen more clearly when observing the image profiles. Fig.~\ref{fig:xcat-recon-profile} plots the image profiles \textcolor{black}{for} the three methods together with that of the ground-truth image. Fig.~\ref{fig:xcat-recon} shows the horizontal green solid line and the vertical red dashed line, whose intensities are plotted in Fig.~\ref{fig:xcat-recon-profile}. It is obvious that the \textcolor{black}{profiles} for SPULTRA \textcolor{black}{are} closest to the ground-truth among the three compared methods. The gap between the profiles of the PWLS-based methods and the ground-truth shows the bias caused by the compared PWLS methods. 
\begin{figure}[!htbp] \centering \begin{subfigure}[h]{0.24\textwidth} \centering \includegraphics[width=1\textwidth]{figures/profile/3e3_profile_v2} \vspace{-0.15in} \caption{$I_0 = 3 \times 10^3$} \label{fig:xcat-3e3-profile} \end{subfigure} \hfill \begin{subfigure}[h]{0.24\textwidth} \centering \includegraphics[width=1\textwidth]{figures/profile/2e3_profile_v2} \vspace{-0.15in} \caption{$I_0 = 2 \times 10^3$} \label{fig:xcat-2e3-profile} \end{subfigure} \caption{Image profiles along the horizontal and vertical lines indicated in Fig.~\ref{fig:xcat-recon}. \textcolor{black}{Left plot: $I_0 = 3\times 10^3$. Right plot: $I_0 = 2\times 10^3$.}} \vspace{-0.2in} \label{fig:xcat-recon-profile} \end{figure} \subsection{Synthesized Clinical Data} \subsubsection{Framework} We used the pre-learned union of 15 square transforms from the XCAT phantom simulations to reconstruct the synthesized helical chest scan volume of size ${420\times 420\times 222}$ with $\Delta_x = \Delta_y = 1.1667$ mm and ${\Delta_z = 0.625}$ mm. The sinograms were of size $888\times 64\times 3611$. Since the clinical data is synthesized via the PWLS-ULTRA reconstruction, the noise model for the synthesized data is not precisely known, making it difficult to determine appropriate low-dose levels for such data. We tested a radiation dose of $I_0 =1\times 10^4$ with the same electronic noise variance as in the XCAT phantom simulations, i.e., $\sigma^2 = 25$. The percentage of non-positive pre-log measurements for the synthesized clinical data in this case was around $0.14\%$. Such non-positive values were replaced by $1\times 10^{-5}$ for PWLS-based methods. Fig.~\ref{fig:clinic_pwlsultra_zxh} shows the ``true'' clinical image that was reconstructed from a real clinical regular-dose sinogram using the PWLS-ULTRA method.
\begin{figure}[!htp]\vspace{-0.1in} \centering \begin{subfigure}[h]{0.24\textwidth} \centering \includegraphics[width=1\textwidth]{figures/clinical/clinic_pwls_ultra.pdf} \caption{} \label{fig:clinic_pwlsultra_zxh} \end{subfigure} \hfill \begin{subfigure}[h]{0.24\textwidth} \centering \includegraphics[width=1\textwidth]{figures/clinical/init_pwls_ultra/1e4/xrlalm_l2b15_iter100_os12.pdf} \caption{} \label{fig:clinic-pwls-ultra-1e4-l2b15} \end{subfigure} \vspace{-0.05in} \caption{(a) ``true" clinical image \textcolor{black}{(HU)}, (b) the reconstruction \textcolor{black}{(HU) }of the synthesized data with PWLS-EP for $I_0 = 1\times 10^4$ with $\beta_{ep} = 2^{15}$. \textcolor{black}{The central axial, sagittal, and coronal slices of the volume are shown.}} \vspace{-0.15in} \end{figure} \begin{figure*}[!htbp] \centering \includegraphics[width=1\textwidth]{figures/profile/clinic_1e4_horizont_ROI.pdf} \caption{Reconstructed images (columns 1 to 3) and the image profiles (the 4th column) along the green line in the ``true'' clinical image for the synthesized clinical data with $I_0 = \textcolor{black}{1\times}10^4$ and $\sigma^2 = 25$. (a) Results for axial slice No. 67, (b) results for slice No. 90, and (c) results for slice No. 120. \textcolor{black}{We selected one ROI for each of these three slices and the arrows point out some small structures in the image. The display windows for reconstructed images are {[800, 1200]} HU, and those for the zoomed-in ROIs are {[950, 1200]} HU.}} \label{fig:clinic-recon-profile-1e4} \vspace{-0.25in} \end{figure*} Similar to the XCAT phantom simulation, the initial image for both SPULTRA and PWLS-ULTRA was a reconstruction obtained using PWLS-EP. We set the regularizer parameter $\beta_{ep}$ for PWLS-EP \textcolor{black}{to} $2^{15}$ to generate a smoother (with less noise) initial image, which led to good visual image quality for the SPULTRA and PWLS-ULTRA reconstructions.
\textcolor{black}{Since the optimization problem for PWLS-EP is strictly convex, we simply initialized PWLS-EP with a zero image. Fig.~\ref{fig:clinic-pwls-ultra-1e4-l2b15} shows the PWLS-EP reconstructed image for $I_0 = 1\times 10^4$. We set the regularizer parameters for both PWLS-ULTRA and SPULTRA as $\gamma_c = 5\times 10^{-4}$, and $\beta = 1.5 \times 10^{4}$}. \subsubsection{Reconstruction results for the synthesized clinical data} Fig.~\ref{fig:clinic-recon-profile-1e4} shows three axial slices from the 3D reconstructions with SPULTRA and PWLS-ULTRA at $I_0 = 1\times 10^4$: the middle slice (No. 67) and two slices located farther away from the center (No. 90 and No. 120). The image profiles along a horizontal line \textcolor{black}{(shown in green)} in the displayed slices are also shown in Fig.~\ref{fig:clinic-recon-profile-1e4}. The reconstructed slices using PWLS-ULTRA appear darker around the center compared to the ``true'' clinical image and the reconstructions with SPULTRA. This means PWLS-ULTRA produces a strong bias in the reconstruction. The bias can be observed more clearly in the profile plots: the pixel intensities for the SPULTRA reconstruction better follow those of the ``true" clinical image, while those for the PWLS-ULTRA reconstruction deviate much \textcolor{black}{more} from the ``true" values. Moreover, SPULTRA achieves sharper rising and falling edges compared to PWLS-ULTRA. In other words, SPULTRA also achieves better resolution than PWLS-ULTRA. \textcolor{black}{Fig.~\ref{fig:clinic-recon-profile-1e4} also shows a zoomed-in ROI for each of the chosen slices, and highlights some small details with arrows.
It is clear that in addition to reducing the bias, SPULTRA reconstructs image details better than PWLS-ULTRA.} \vspace{-0.12in} \subsection{\textcolor{black}{Ultra Low-dose Experiments with Raw Data}} \subsubsection{\textcolor{black}{Framework}} \textcolor{black}{We obtained from GE a 2D fan-beam raw (pre-log) scan of a shoulder phantom, which included the beam-hardening effect. The provided $200~\text{mA}$, 1-second scan can be viewed as a standard-dose scan and all the raw measurements are positive. Based on this standard-dose scan, we simulated an ultra low-dose scan as shown in \eqref{eq:shoulder_gen} with $\alpha = 200$, and added Poisson and Gaussian noise ($\sigma = 5$) to the measurements. The simulated measurements have about $0.4\%$ non-positive values. The sinograms were of size ${888\times 984}$, and reconstructed images were of size ${512\times 512}$ with ${\Delta_x = \Delta_y = 0.9766}$ mm.} \textcolor{black}{For PWLS-ULTRA and SPULTRA, we pre-learned a union of five square transforms using ${8 \times 8}$ overlapping image patches with stride ${1\times 1}$ from five $512 \times 512$ XCAT phantom slices~\cite{pwls-ultra2018}. Here, we also compared SPULTRA with a recent deep-learning-based low-dose CT denoising framework ``WavResNet" combined with an RNN architecture~\cite{WavResNet18}. The iterative RNN version of WavResNet was pre-trained based on the 2016 Low-Dose CT Grand Challenge data set~\cite{WavResNet18}. During reconstruction, WavResNet, PWLS-ULTRA, and SPULTRA were initialized with the image reconstructed by PWLS-EP with $\beta_{ep}=0.1$. The parameters ${(\beta, \gamma_c)}$ for \textcolor{black}{both} PWLS-ULTRA and SPULTRA were set as $(0.05, 80)$. \textcolor{black}{These values worked well in our experiment.
In the supplement, we discuss in detail the parameter selection procedure of $(\beta,\ \gamma_c)$ for both PWLS-ULTRA and SPULTRA.} \textcolor{black}{Parameters} for testing WavResNet were set according to \cite{WavResNet18}\textcolor{black}{, and the pixel values of the input to WavResNet were converted to match the network's required scaling}. Since WavResNet was trained with images reconstructed with the filtered backprojection (FBP) method~\cite{WavResNet18}, we also tested initializing WavResNet with an FBP reconstructed image on this shoulder phantom. Although initializing WavResNet with an FBP reconstructed image better matches the trained model than the PWLS-EP reconstructed image does, the latter still provided better results. We included in the supplement the denoised image initialized with the FBP reconstruction. } \begin{figure*}[!htp] \centering \includegraphics[width=1\textwidth]{figures/shoulder2d/1ma_profile_imgs3_v2.pdf} \caption{Reconstructions for the ultra low-dose 2D scan simulated from raw measurements. The leftmost image is the PWLS-EP reconstructed image for the $200\text{ mA}$ scan. The second image is the PWLS-EP reconstruction for the simulated ultra low-dose scan, and it is the initial image for WavResNet~\cite{WavResNet18}, PWLS-ULTRA~\cite{pwls-ultra2018}, and SPULTRA.
The display windows are [800,~1400]~HU.} \label{fig:1ma-imgs-recon} \vspace{-0.08in} \end{figure*} \begin{table*} \centering \caption{\textcolor{black}{Mean (HU) and standard deviation (STD) (HU) of the ROIs for ultra low-dose shoulder phantom simulations.}} \begin{subtable}{0.49\textwidth} \centering \caption{\textcolor{black}{Mean (HU)}} \vspace{-0.02in} \begin{tabular}{c r r r} \toprule \textit{\textcolor{black}{Methods}}& \textit{\textcolor{black}{ROI 1}} & \textit{\textcolor{black}{ROI 2}} & \textit{\textcolor{black}{ROI 3}} \\ \midrule \textcolor{black}{Reference}& \textcolor{black}{1052.1}&\textcolor{black}{1060.1}& \textcolor{black}{1053.4}\\ \midrule \textcolor{black}{PWLS-EP}& \textcolor{black}{1032.7}&\textcolor{black}{977.5}& \textcolor{black}{1026.3}\\ \midrule \textcolor{black}{WavResNet\cite{WavResNet18}}& \textcolor{black}{1037.6}& \textcolor{black}{981.1}&\textcolor{black}{1031.2}\\ \midrule \textcolor{black}{PWLS-ULTRA\cite{pwls-ultra2018}}&\textcolor{black}{1031.1}&\textcolor{black}{1043.0}& \textcolor{black}{1024.2}\\ \midrule \textcolor{black}{SPULTRA}& \textbf{\textcolor{black}{1054.7}} & \textbf{\textcolor{black}{1044.0}} & \textbf{\textcolor{black}{1049.6}}\\ \bottomrule \end{tabular} \end{subtable} \hfill \begin{subtable}{0.49\textwidth} \centering \caption{\textcolor{black}{STD (HU) }} \vspace{-0.02in} \begin{tabular}{c r r r} \toprule \textit{\textcolor{black}{Methods}}& \textit{\textcolor{black}{ROI 1}} & \textit{\textcolor{black}{ROI 2}} & \textit{\textcolor{black}{ROI 3}} \\ \midrule \textcolor{black}{Reference}& \textcolor{black}{8.12}&\textcolor{black}{8.81}& \textcolor{black}{6.98}\\ \midrule \textcolor{black}{PWLS-EP}& \textcolor{black}{19.45}&\textcolor{black}{19.45}& \textcolor{black}{30.46}\\ \midrule \textcolor{black}{WavResNet\cite{WavResNet18}}& \textcolor{black}{18.91}& \textcolor{black}{18.91}&\textcolor{black}{30.16}\\ \midrule 
\textcolor{black}{PWLS-ULTRA\cite{pwls-ultra2018}}&\textcolor{black}{14.82}&\textcolor{black}{10.92}& \textcolor{black}{19.29}\\ \midrule \textcolor{black}{SPULTRA}& \textcolor{black}{16.34} & \textcolor{black}{11.42} & \textcolor{black}{11.60}\\ \bottomrule \end{tabular} \end{subtable} \label{tab:shoulder-mean-roi} \vspace{-0.16in} \end{table*} \vspace{-0.05in} \subsubsection{\textcolor{black}{Results}} Fig. \ref{fig:1ma-imgs-recon} shows the reconstructions for the $200\text{ mA}$ scan (reference image) along with the reconstructions for the simulated ultra low-dose scan obtained with PWLS-EP, WavResNet, PWLS-ULTRA, and SPULTRA. Visually, WavResNet improves over the initial PWLS-EP reconstruction but fails to recover the image well, while PWLS-ULTRA and SPULTRA provide better image quality. This indicates that the ULTRA-based methods may generalize better than WavResNet, since they learn more fundamental features of CT images (also see~\cite{pwls-ultra2018}). We selected three smooth ROIs, where the pixel values are approximately constant. Tab.~\ref{tab:shoulder-mean-roi} shows the mean \textcolor{black}{and the standard deviation of} pixel values for these ROIs for various methods and the standard-dose reference. Since the iterative RNN version of WavResNet only has small improvements over PWLS-EP, the pixel values do not change much compared with PWLS-EP. PWLS-ULTRA, however, reduces the bias in the central region of the image (ROI 2), but fails to correct the bias in the regions near the bones (ROI 1 and ROI 3). SPULTRA reduces the bias in the central region of the image, and also significantly corrects the bias near the bone regions.
\textcolor{black}{The standard deviations of the ROIs reconstructed by SPULTRA are comparable to those reconstructed by PWLS-ULTRA, and are close to those of the reference ROIs.} Additionally, SPULTRA reconstructs the bone (indicated by the magenta arrow in the last two subfigures of Fig.~\ref{fig:1ma-imgs-recon}) better than PWLS-ULTRA. \section{Problem Formulation for SPULTRA} \label{sec:formulation} The goal in LDCT image reconstruction is to estimate the linear attenuation coefficients $\x \in \mathbb{R}^{N_p}$ from CT measurements $\y \in \mathbb{R}^{N_d}$. We propose to obtain the reconstructed image by solving an SP model-based penalized-likelihood problem: \begin{equation}\label{eq:P0} \hat{\x} = \arg\min\textcolor{black}{_{\x \in\mathcal{X}}\ G (\x), \textcolor{black}{~~G}(\x) = \mathsf{L}(\x) + \R(\x), } \tag{P0} \end{equation} \textcolor{black}{where $\mathcal{X}=\{\x \,|\, 0 \leq x_j \leq x_{\mathrm{max}}\}$ and $x_{\mathrm{max}}$ is a large constant.} The objective function \textcolor{black}{$ G(\x) $} is composed of a negative log-likelihood function $\mathsf{L}(\x)$ based on the SP model for the measurements, and a penalty term $\R(\x)$ that is based on the \textcolor{black}{ULTRA model \cite{ravishankar:16:tci, pwls-ultra2018}}. The SP model can be described as \textcolor{black}{${Y_i\sim \text{Poisson}\{{I_0e^{-f_i([\A\x]_i)}} + \sigma^2\}}$}, where $Y_i$ is the shifted quantity of the $i$th measurement for $i = 1, \ldots, N_d$, $\sigma^2$ is the variance of the electronic noise, $I_0$ is the incident photon count per ray from the source, \textcolor{black}{${f_i(\cdot) }$ models the beam-hardening effect,} and $\A \in \mathbb{R}^{N_d \times N_p}$ is the CT system matrix.
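The negative log-likelihood induced by the SP model above (formalized next as a sum of per-ray costs over the line integrals) can be sketched in Python; the function name and default parameter values are illustrative assumptions:

```python
import numpy as np

def sp_negloglik(l, Y, I0=1e4, sigma2=25.0, s1=1.0, s2=0.0):
    """Shifted-Poisson data-fidelity term (a sketch), up to constants.

    l  : line integrals l_i = [Ax]_i
    Y  : shifted measurements y_i + sigma^2
    f_i(l) = s1*l + s2*l^2 models beam hardening (2nd-order polynomial).
    Per ray: h_i(l_i) = ybar_i - Y_i * log(ybar_i), ybar_i = I0*exp(-f_i) + sigma^2.
    """
    f = s1 * l + s2 * l ** 2
    ybar = I0 * np.exp(-f) + sigma2          # mean of the shifted-Poisson model
    return np.sum(ybar - Y * np.log(ybar))   # sum_i h_i(l_i)
```

For a single ray, the cost is minimized when the model mean matches the shifted measurement, which is what the quadratic surrogates in the proposed algorithm are designed around.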
Denoting $l_i(\x) \triangleq [\A\x]_i$ (or $l_i$ in short), the data-fidelity term $\mathsf{L}(\x)$ can be written as \vspace{-0.02in} \begin{equation}\label{eq:L} \begin{aligned} \mathsf{L}(\x) &= \sum_{i = 1}^{N_d}h_i(l_i(\x)), \end{aligned} \vspace{-0.1in} \end{equation} where \begin{equation} h_i(l_i) \triangleq (I_0e \textcolor{black}{^{-f_i(l_i)} } + \sigma^2) - Y_i\log(I_0e \textcolor{black}{^{-f_i(l_i)} } + \sigma^2) . \end{equation} \textcolor{black}{The beam-hardening model $f_i(\cdot)$ is usually approximated as a polynomial \cite{pre-post-log}. For simplicity, we use a second order polynomial, i.e., ${f_i(l_i) = s_{1_i} l_i + s_{2_i} l_i^2}$, where $s_{1_i}$ and $s_{2_i}$ are coefficients of the polynomial for the $i$th measurement. } The ULTRA regularizer $\R(\x)$ has the following form \cite{pwls-ultra2018}: \begin{equation}\label{eq:Rx} \begin{aligned} \R(\x) \triangleq &\min_{\{\z_j, C_k\}} \beta \sum_{k=1}^{K} \bigg\{ \sum_{j\in C_k} \tau_j \{\|\omg_k \P_j \x - \z_{j}\|^2_2 + \gamma_c^2\|\z_{j}\|_0 \}\bigg\} \\ &\quad\text{s.t.}\ \{C_k\} \in \bm{\mathcal{G}}, \end{aligned} \end{equation} where $\bm{\mathcal{G}}$ denotes the set consisting of all possible partitions of $\{1, 2, \ldots,N_p\}$ into $K$ disjoint subsets, $K$ is the number of \textcolor{black}{classes} and $C_k$ denotes the set of indices of patches belonging to the $k$th \textcolor{black}{class}. The operator $\P_j \in \mathbb{R}^{v \times N_p}$ is the patch extraction operator that extracts the $j$th patch of $v$ voxels for \textcolor{black}{$j = 1, \ldots, \tilde{N}$, from $\x$, where $\tilde{N}$ is the number of extracted patches.} The learned transform corresponding to the $k$th \textcolor{black}{class} $\omg_k \in \mathbb{R}^{v \times v}$ maps the patches to the transform domain. Vector $\z_j~\in~\mathbb{R}^v$ denotes the sparse approximation of the transformed $j$th patch, with the parameter $\gamma_c^{2}$ ($\gamma_c>0$) controlling its sparsity level. 
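The ULTRA regularizer above admits a closed-form sparse coding and clustering solution: hard-threshold the transformed patch at $\gamma_c$, then assign the patch to the class of minimum cost. A Python sketch for a single vectorized patch (the function name and toy matrices are illustrative):

```python
import numpy as np

def code_and_cluster(patch, transforms, gamma_c, tau=1.0):
    """Closed-form sparse coding and clustering for one patch (a sketch).

    Each class cost is tau * (||W_k p - z_k||^2 + gamma_c^2 * ||z_k||_0),
    with z_k = hard-threshold(W_k p, gamma_c); the patch joins the class
    with the smallest cost.
    """
    best = None
    for k, W in enumerate(transforms):
        b = W @ patch
        z = np.where(np.abs(b) > gamma_c, b, 0.0)  # closed-form sparse code
        cost = tau * (np.sum((b - z) ** 2) + gamma_c ** 2 * np.count_nonzero(z))
        if best is None or cost < best[0]:
            best = (cost, k, z)
    return best  # (cost, class index, sparse code)

# toy example: the identity transform sparsifies this patch more cheaply
cost, k, z = code_and_cluster(np.array([0.5, 0.5]),
                              [np.eye(2), 2 * np.eye(2)], gamma_c=1.0)
```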
We use the $\ell_0$ ``norm" (that counts the number of nonzero elements in $\z_j$) to enforce sparsity. The patch-based weight $\tau_j$ is defined as \textcolor{black}{$\|\P_j\bm{\kappa}\|_1/v$ \cite{pwls-ultra2018, chun:17:svx}}, where $\bm{\kappa} \in \mathbb{R}^{N_p}$ is defined to help encourage resolution uniformity as $\kappa_j \triangleq \sqrt{\sum_{i=1}^{N_d}a_{ij} \textcolor{black}{\tilde{w}_i}\big/\sum_{i=1}^{N_d}a_{ij}}$ \cite[eq(39)]{kappa}, with $a_{ij}$ \textcolor{black}{denoting} the entries of $\A$, and \textcolor{black}{$\tilde{w}_i$ approximated as ${\tilde{w}_i = \dot{f_i}(\tilde{l}_i)^2 \frac{y_i^2}{y_i + \sigma^2}}$ \cite[eq(10)]{pre-post-log}, where $\tilde{l}_i$ is the beam-hardening corrected, post-log sinogram data. }To balance the data-fidelity term and the regularizer in the formulation, $\R(\x)$ is scaled by a positive parameter $\beta$. \vspace{-0.1in} \input{Algorithm_r2} \vspace{-0.05in} \section{Convergence Analysis} \label{sec:convergenceAnalysis} \textcolor{black}{The objective function \eqref{eq:P0} of SPULTRA is highly nonconvex due to the nonconvexity of the data-fidelity term and the regularizer. The proposed algorithm efficiently optimizes it by using surrogate functions and alternating minimization. This section provides a convergence analysis for the general optimization approach. While a recent work~\cite{ravishankar:16:tci} analyzed the convergence of a related optimization method, that method did not use surrogate functions, and it involved adaptive learning of transforms.} \textcolor{black}{In the proposed method, the sparse coding and clustering step is solved exactly. For the image update step, where the cost function is quadratic as in \eqref{eq:Phi1}, many approaches may be used to optimize it, e.g., \cite{nien:16:rla, OGM2016, Momentum}. Our convergence proof in the supplement assumes for simplicity that the image update step is solved exactly. } \textcolor{black}{The convergence result uses the following notation.
We use $\Z$ for the sparse code matrix formed by concatenating the column vectors $\z_{j}$, and use a vector ${\Gamma \in \mathbb{R}^{\tilde{N}}}$ whose elements represent the class indices of the patches, i.e., $\Gamma_j \in \{1, \cdots, K\}$. For an initial $(\x^0, \Z^0, \Gamma^0)$, we let $\{\x^n, \Z^n, \Gamma^n\}$ denote the sequence of iterates generated by the alternating algorithm. The objective function in \eqref{eq:P0} is denoted as $G(\x,\Z,\Gamma)$ and includes the constraint on $\x$ as an added barrier penalty (which takes the value $+\infty$ when the constraint is violated and is zero otherwise). The convergence result is as follows.} \vspace{-0.1in} \begin{theorem} \textcolor{black}{\textcolor{black}{Assume the image update step is solved exactly}. For an initial $(\x^0, \Z^0, \Gamma^0)$, the iterate sequence $\{\x^n, \Z^n, \Gamma^n\}$ generated by the SPULTRA algorithm is bounded, and the corresponding objective sequence $\{G(\x^n, \Z^n, \Gamma^n)\}$ decreases monotonically and converges to ${G^* \triangleq G^*(\x^0, \Z^0, \Gamma^0)}$. Moreover, all the accumulation points of the iterate sequence are equivalent and achieve the same value $G^*$ of the objective. Each accumulation point $(\x^*,\Z^*,\Gamma^*)$ also satisfies the following partial optimality conditions: \begin{equation}\label{eq:critical-point} \begin{aligned} &\mathbf{0} \in \partial_{\x}G(\x,\Z^*,\Gamma^*)|_{\x = \x^*},\\ &(\Z^*,\Gamma^*) \in \arg\min_{\Z,\Gamma} G(\x^*,\Z,\Gamma), \end{aligned} \tag{14} \end{equation} where $\partial_{\x}$ denotes the sub-differential operator for the function $G$ with respect to $\x$ \cite{rockafellar2009variational,mordukhovich2006variational,2015BCS-ST}.
Finally, ${\|\x^{n+1} - \x^n\|_2 \to 0}$ as $n\to \infty$.} \end{theorem} \textcolor{black}{The above theorem implies that for each initial $(\x^0, \Z^0, \Gamma^0)$, the objective sequence converges (although the limit may depend on initialization) and the iterate sequence in the optimization framework converges to an equivalence class of accumulation points (i.e., all accumulation points have the same objective value $G^*$) that are also partial optimizers satisfying~\eqref{eq:critical-point}. Moreover, the image sequence satisfies $\|\x^{n+1} - \x^n\|_2 \to 0 $.} \textcolor{black}{When $K = 1$, \eqref{eq:critical-point} readily implies that the iterate sequence in the algorithm converges to an equivalence class of critical points~\cite{rockafellar2009variational} (that are generalized stationary points) of the nonconvex cost $G(\x, \Z, \Gamma)$. } \textcolor{black}{A detailed proof is included in the supplement\footnote{Supplementary material is available in the supplementary materials / multimedia tab.}. } \vspace{-0.05in} \input{Experiments_r2} \vspace{-0.05in} \input{conclusions_r1} \vspace{-0.15in} \bibliographystyle{IEEEbib} \section{Introduction} Recent years have witnessed the growing deployment of X-ray computed tomography (CT) in medical \textcolor{black}{applications}. Simultaneously there has been great concern to reduce the potential risks caused by exposure to X-ray radiation. Strategies for reducing the X-ray radiation in CT include reducing the photon intensity at the X-ray source, i.e., low-dose CT (LDCT), or lowering the number of projection views obtained by the CT machine, i.e., sparse-view CT. 
\textcolor{black}{In the case where the X-ray radiation is extremely low, the CT image may not be suitable for medical diagnosis, but it is still quite helpful for non-diagnostic applications such as attenuation correction for PET/CT imaging~\cite{kinahan1998attenuation, ACforPET2011, ACforPET2015} and virtual CT colonoscopy screening~\cite{wang2008virtual}. Reconstructing CT images }with reduced radiation is challenging, \textcolor{black}{and} many reconstruction methods have been proposed for this setting. Model-based iterative reconstruction (MBIR) is widely used~\cite{fessler2000statistical} \textcolor{black}{among these approaches}. Based on maximum a posteriori (MAP) estimation, MBIR approaches form a cost function that incorporates the statistical model for the acquired measurements and the prior knowledge (model) of the images. This section first reviews some of the statistical models for CT measurements along with recent works on extracting prior knowledge about images for LDCT image reconstruction, and then presents our contributions. \vspace{-0.12in} \subsection{Background} Accurate statistical modeling of the measurements in CT scanners is challenging, especially in low-dose imaging, when the electronic noise in the data acquisition system (DAS) becomes significant \cite{streakArtifact,whiting2002signal,elbakri2003efficient,yu2012development, ma2012variance,ding:16:mmp,survey2013-ct}. Approximations of the measurement statistics can be categorized into \textit{post-log} and \textit{pre-log} models \cite{pre-post-log}, which are detailed next. The post-log models work on data obtained from the logarithmic transformation of the raw measurements, which is often assumed Gaussian distributed. 
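A small numerical sketch of the post-log pipeline (with assumed toy dose and noise levels) illustrates why the logarithmic transformation is problematic at low doses: a substantial fraction of raw measurements comes out non-positive and must be pre-corrected before the log can be taken.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative very-low-dose ray: few expected photons, noticeable electronic noise.
I0, sigma = 100.0, 5.0
true_l = 4.0                                  # true line integral
mean_photons = I0 * np.exp(-true_l)           # roughly 1.8 expected photons

y = rng.poisson(mean_photons, size=100_000) + rng.normal(0.0, sigma, size=100_000)
frac_nonpos = np.mean(y <= 0)                 # a large fraction is non-positive

# The post-log transform -log(y / I0) is undefined for y <= 0, so post-log
# pipelines pre-correct, e.g., by clipping to a small positive value, which
# distorts the resulting line-integral estimates.
l_hat = -np.log(np.maximum(y, 0.1) / I0)
```

The clipping threshold (0.1 here) is an arbitrary assumption of this sketch; the surrounding text surveys the more sophisticated pre-correction strategies used in practice.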
Since the logarithmic transformation approximately linearizes the raw measurements, methods based on post-log data can readily exploit various optimization approaches and regularization designs with efficiency and convergence guarantees for this reconstruction problem \cite{beister2012iterative, thibault:07:atd, Momentum}. The post-log methods, however, have a major drawback: the raw measurements may contain non-positive values on which the logarithmic transformation cannot be taken (or near-zero positive measurements whose logarithm can be very negative), particularly when the electronic noise becomes significant as compared to the photon statistical noise in low-dose cases. There are many pre-correction approaches to deal with \textcolor{black}{non-positive} raw measurements \textcolor{black}{for post-log methods}. Examples of such approaches include using a statistical weight of zero for \textcolor{black}{such} measurements \cite{Polyenergetic02}, replacing the non-positive measurements with a small positive value~\cite{PWLS06}, and filtering neighboring measurements~\cite{streakArtifact}. Thibault et al.~\cite{recurFilter} proposed a recursive filter which preserves the local mean to pre-process noisy measurements, but still used a non-linear function to map all noisy measurements to strictly positive values. \textcolor{black}{Chang et al. \cite{chang2016:sino-correct} applied the local linear minimum mean-square error (LLMMSE) filter to pre-process the raw measurements, but the LLMMSE filter does not guarantee positivity in its output sinograms and introduces correlations among neighboring channels. This correlation violates the assumption of independence of sinogram data on which MAP reconstruction formulations rely. Chang et al.
\cite{chang2016:sino-correct} also proposed a pointwise Bayesian restoration (PBR) approach, which better preserves the independence of sinogram data while reducing bias for photon-starved CT data.} When pre-processing a large percentage of non-positive values for LDCT measurements, these pre-correction methods \textcolor{black}{may still} introduce bias in the reconstructed image and can degrade image quality~\cite{pre-post-log, recurFilter}. The logarithmic transformation itself causes a positive bias in the line integrals from which the image is reconstructed~\cite{pre-post-log, fessler1995hybrid}. A typical method for reconstructing images from the post-log data is penalized weighted least squares (PWLS) \cite{PWLS06}, which optimizes an objective consisting of a weighted least squares data fidelity term and a regularization penalty. \textcolor{black}{However, the} pre-correction process and non-linear logarithmic operation create challenges in estimating the statistical weights for the PWLS methods \textcolor{black}{\cite{recurFilter,hayes2019unbiased}}. Contrary to the post-log methods, the pre-log methods work directly with the raw measurements. A robust statistical model for the pre-log raw CT measurements is the shifted-Poisson (SP) model. This model shifts the measurements by the variance of the electronic readout noise. The shifted measurement has its variance equal to its mean, so that it can be approximated as Poisson distributed. Since the shifted-Poisson model is a better approximation for CT measurement statistics compared to the Gaussian model \cite{pre-post-log}, \textcolor{black}{\cite{PL-smooth, PL-restore, ElecNoiseModeling,tilley2017penalized}}, and no pre-correction of the data is needed for most LDCT dose levels \cite{pre-post-log}, this paper uses the SP model for LDCT image reconstruction. There has been growing interest in improving CT image reconstruction by extracting prior knowledge from previous patient scans.
Many methods have been proposed in this regard, such as prior image constrained compressed sensing methods (PICCS) \cite{PICCS2008, ncpiccs2011, piccs-app2012}, or the previous normal-dose scan induced nonlocal means method \cite{ndiNLM, zhang2017applications}. More recently, inspired by the success of learning-based methods in image processing and computer vision, researchers have incorporated data-driven approaches along with statistical models for LDCT image reconstruction. One such approach, proposed by Xu et al. \cite{xu:12:ldx}, combined dictionary learning techniques with the PWLS method for LDCT image reconstruction. The dictionary they used was either pre-learned from a training image set (consisting of 2D images) and fixed during reconstruction, or adaptively learned while reconstructing the image. The 2D dictionary model for image patches was later extended to a 2.5D dictionary (where different dictionaries were trained from 2D image patches extracted from axial, sagittal, and coronal planes of 3D data) \cite{2.5D}, and then to a 3D dictionary trained from 3D image patches \cite{3D-dict-18}. These dictionary learning and reconstruction methods are typically computationally expensive, because they involve repeatedly solving \textcolor{black}{NP-hard problems~\cite{NPhard02}} for estimating the sparse coefficients of patches. The learning of sparsifying transforms (ST) was proposed in recent works \cite{STlearning13, STlearning15} as a generalized analysis dictionary learning method, where the sparse coefficients are estimated directly by simple and efficient thresholding. Pre-learned square sparsifying transforms have been recently incorporated into 2D LDCT image reconstruction with both post-log Gaussian statistics~\textcolor{black}{\cite{pwls-ultra2018}} and pre-log SP measurement models \cite{ye:17:asm}.
\textcolor{black}{In particular, Zheng et al.~}\cite{pwls-ultra2018} showed promising results for PWLS with a regularizer based on a union of pre-learned sparsifying transforms \cite{wen:14:sos}, which generalizes the square sparsifying transform approach. In addition to the dictionary learning-based approaches, some works have incorporated neural networks in CT image reconstruction. Adler and {\"O}ktem proposed a learned primal-dual reconstruction method \cite{adler2018primaldual} that uses convolutional neural networks (CNNs) to learn parameterized proximal operators. This method was applied to relatively simple 2D phantoms. Wu et al.~\cite{wu2017KSAE} proposed a k-sparse autoencoder (KSAE) based regularizer for LDCT image reconstruction, where they trained three independent KSAEs from axial, sagittal and coronal slices for 3D reconstruction via artificial neural networks. Chen et al. \cite{chen2018learn} proposed to unfold the classical iterative reconstruction procedure into a CNN-based recurrent residual network so that the original fixed regularizers and the balancing parameters within the iterative scheme can vary for each layer. Reconstruction with this network was performed only slice by slice. \textcolor{black}{He et al. proposed a parameterized plug-and-play alternating direction method (3pADMM) for PWLS model based low-dose CT image reconstruction \cite{3pADMM}. By regarding the ADMM optimization steps as network modules, this method can optimize the 3p prior and the related parameters simultaneously.} These methods are fully supervised learning methods requiring large datasets consisting of both undersampled images or measurements and the corresponding high-quality images. Some post-processing approaches involving neural networks such as a U-Net or a residual net also improve CT image quality \cite{han2016deep,WavResNet18}, but such post-processing methods usually construct an image-to-image mapping without fully incorporating the physics of the imaging process.
\textcolor{black}{Additionally, the generalization of supervised learning methods may be limited} in the sense that the trained model may only work well on the data that is similar to the training set. \vspace{-0.1in} \subsection{Contributions} Considering the robustness and accuracy offered by the SP statistics, and inspired by the data-driven image modeling methods not requiring paired training data or previous registered normal-dose images, here we propose a new LDCT image reconstruction method named SPULTRA that combines robust SP measurement modeling with a union of learned sparsifying transforms (ULTRA) based regularizer. Since the SP model leads to a nonconvex data-fidelity term, we design a series of quadratic surrogate functions for this term in our optimization. For each surrogate function combined with the ULTRA regularizer (a majorizer of the SPULTRA objective), we optimize it by alternating between an \emph{image update step} and a \emph{sparse coding and clustering step}. \textcolor{black}{The proposed SPULTRA scheme is proved to converge to the critical points of the overall nonconvex problem. In the experiments, we compare SPULTRA with the recent PWLS-ULTRA scheme~\cite{pwls-ultra2018} under different incident photon intensity levels for 3D XCAT phantom simulations. The }results demonstrate that the proposed method avoids bias in image regions caused by the PWLS-ULTRA method, especially for low X-ray doses. At the same time, SPULTRA achieves better image reconstruction quality than PWLS-ULTRA given the same number of iterations, or alternatively, SPULTRA achieves a desired image reconstruction quality much faster than the competing PWLS-ULTRA scheme, especially for low X-ray doses. \textcolor{black}{We verify the bias avoidance property of SPULTRA on a synthesized 3D clinical chest scan, and an ultra low-dose 2D shoulder phantom scan simulated from standard-dose raw measurements that also involve beam-hardening effects. 
We also compare SPULTRA with a recent deep-learning-based denoising framework~\cite{WavResNet18} on the 2D data, demonstrating the better reconstruction quality and generalization ability of SPULTRA.} This paper significantly extends our previous conference work~\cite{ye:17:asm} by incorporating the ULTRA regularizer and proposing a faster optimization procedure \textcolor{black}{with convergence guarantees. We also perform much more extensive numerical evaluations than the 2D LDCT XCAT phantom results in~\cite{ye:17:asm}.} \vspace{-0.1in} \subsection{Organization} The rest of this paper is organized as follows. Section~\ref{sec:formulation} presents the proposed problem formulation for low-dose CT image reconstruction. Section~\ref{sec:algorithm} briefly reviews the ULTRA learning method and describes the proposed SPULTRA image reconstruction algorithm. Section~\textcolor{black}{\ref{sec:convergenceAnalysis} discusses the convergence properties of the SPULTRA methodology. Section~}\ref{sec:experiments} presents detailed experimental results and comparisons. Section~\ref{sec:conclusion} presents conclusions. \section{Proof Sketch for Convergence Theorem} As stated in Section IV, the objective function is written as follows: \begin{equation}\label{eq:P0} \begin{aligned} G(\x,\Z,\Gamma) = \mathsf{L}(\x)+ \R(\x,\Z,\Gamma) + \mathfrak{X}(\x), \end{aligned} \tag{P0} \end{equation} where $\mathfrak{X}(\x)$ is a barrier function that takes the value 0 when the constraint on $\x$ is satisfied and is $+\infty$ otherwise, and $\mathsf{L}(\x)$ is the data fidelity function of the form ${\mathsf{L}(\x) = \sum_{i = 1}^{N_d}h_i([\A\x]_i)}$ in which $\A \in \mathbb{R}^{N_d \times N_p}$ is the CT system matrix. $\Z$ is the sparse code matrix formed by concatenating the column vectors $\z_{j}$, and ${\Gamma \in \mathbb{R}^{\tilde{N}}}$ is a vector whose elements represent the class indices of the patches, i.e., $\Gamma_j \in \{1, \cdots, K\}$.
With ${l_i \triangleq [\A\x]_i}$, $h_i(l_i)$ was defined as \begin{equation} h_i(l_i) \triangleq (I_0e^{-f_i(l_i)} + \sigma^2) - Y_i\log(I_0e^{-f_i(l_i)} + \sigma^2). \tag{2} \end{equation} The regularizer $\R(\x,\Z,\Gamma)$ was defined as \begin{equation}\label{eq:Rx} \begin{aligned} \R(\x,\Z,\Gamma) \triangleq \beta \sum_{j=1}^{\tilde{N}} \bigg\{ \|\omg_{\Gamma{j}} \P_j \x - \z_{j}\|^2_2 + \gamma_c^2\|\z_{j}\|_0 \bigg\} , \end{aligned} \end{equation} where $\beta >0$ is a parameter for balancing the data-fidelity and regularizer penalties, and $\tilde{N}$ is the number of patches. \begin{theorem}\label{them1} \textcolor{black}{Assume the image update step is solved exactly.} For an initial $(\x^0, \Z^0, \Gamma^0)$, the iterate sequence $\{\x^n, \Z^n, \Gamma^n\}$ generated by the SPULTRA algorithm is bounded, and the corresponding objective sequence $\{G(\x^n, \Z^n, \Gamma^n)\}$ decreases monotonically and converges to ${G^* \triangleq G^*(\x^0, \Z^0, \Gamma^0)}$. Moreover, all the accumulation points of the iterate sequence are equivalent and achieve the same value $G^*$ of the objective. Each accumulation point $(\x^*,\Z^*,\Gamma^*)$ also satisfies the following partial optimality conditions: \begin{equation}\label{eq:critical-point} \begin{aligned} &\mathbf{0} \in \partial_{\x}G(\x,\Z^*,\Gamma^*)|_{\x = \x^*},\\ &(\Z^*,\Gamma^*) \in \arg\min_{\Z,\Gamma} G(\x^*,\Z,\Gamma), \end{aligned} \tag{14} \end{equation} where $\partial_{\x}$ denotes the sub-differential operator for the function $G$ with respect to $\x$ \cite{rockafellar2009variational,mordukhovich2006variational,2015BCS-ST}. Finally, ${\|\x^{n+1} - \x^n\|_2 \to 0}$ as $n\to \infty$.
\end{theorem} \subsection{Preliminaries}\label{sec:preliminaries} \subsubsection{Surrogate function design} To optimize the non-convex function $G(\cdot)$, we design a series of quadratic majorizers for each $h_i(l_i)$: \begin{equation}\label{eq:surrogate_i} \begin{aligned} q(l_i;l_i^n) = h_i(l_i^n) +\dot{h_i}(l_i^n)(l_i - l_i^n)+\frac{1}{2}c_i(l_i^n)(l_i - l_i^n)^2. \end{aligned} \end{equation} Here, $c_i(l_i)$ is the curvature defined in (5) of \cite{TMI-SPULTRA-as-submit}. According to \cite{TMI99}, such a choice of $c_i(l_i)$ is an optimum curvature that ensures the majorizer conditions: \begin{subequations} \begin{equation}\label{eq:srr_cond1} h_i(l_i) \leq q(l_i;l_i^n), \ \forall l_i \geq 0, \end{equation} \begin{equation}\label{eq:srr_cond2} h_i(l_i^n) = q(l_i^n;l_i^n). \end{equation} \end{subequations} When updating $l_i$ by minimizing the majorizer, let \begin{equation}\label{eq:sol_q} l_i^{n+1} = \arg\min q(l_i;l_i^n). \end{equation} Then, using \eqref{eq:srr_cond1} and \eqref{eq:srr_cond2} yields \begin{equation}\label{eq:srr_inequality} h_i(l_i^{n+1})\leq q(l_i^{n+1};l_i^n)\leq q(l_i^n;l_i^n) = h_i(l_i^n). \end{equation} Thus, in general, minimizing a majorizer monotonically decreases the original cost. Clearly, \eqref{eq:surrogate_i} can be rewritten as follows: \begin{equation} \begin{aligned} q(l_i;l_i^n) &=\frac{1}{2}\big[\big(c_i(l_i^n)^{\frac{1}{2}}(l_i - l_i^n)\big)^2 + 2\dot{h_i}(l_i^n)(l_i - l_i^n) \\ &\quad+ \big(c_i(l_i^n)^{-\frac{1}{2}}\dot{h_i}(l_i^n)\big)^2 \big] \\ & \quad + h_i(l_i^n) - \frac{1}{2}c_i(l_i^n)^{-1}\dot{h_i}(l_i^n)^2\\ & = \frac{c_i(l_i^n)}{2}\big[ (l_i - l_i^n) + c_i(l_i^n)^{-1}\dot{h_i}(l_i^n) \big]^2 + q_c^n. \end{aligned} \end{equation} When optimizing $q(l_i;l_i^n)$, $q_c^n$ is a constant that can be ignored, and we can instead optimize \begin{equation}\label{eq:srr_drop} \varphi_n (l_i) \triangleq \frac{c_i(l_i^n)}{2}\big[ (l_i - l_i^n) + c_i(l_i^n)^{-1}\dot{h_i}(l_i^n) \big]^2.
\end{equation} The minimizer of \eqref{eq:srr_drop} also solves \eqref{eq:sol_q}, which makes \eqref{eq:srr_inequality} hold for every iteration.\\ Since in SPULTRA, we majorize the entire function $\mathsf{L}(\x)$, its majorizer is therefore \begin{equation}\label{eq:L_major} \mathbf{Q}(\x;\x^n) = \phi(\x;\x^n) + \underbrace{ \mathsf{L}(\x^n) - \frac{1}{2}||\bm{d}_h(l^n)||_{(\W^n)^{-1}}^2}_{Q_c^n}, \end{equation} where \begin{equation}\label{eq:surrogate} \begin{aligned} \phi(\x;\x^n) \triangleq \frac{1}{2}||\tilde{\y}^n - \A\x||_{\W^n}^2, \end{aligned} \end{equation} and $\bm{d}_h(l^n) \in \mathbb{R}^{N_d}$ is the row vector whose entries are $\dot{h_i}(l_i^n)$, ${\W^n \triangleq \diag\{c_i(l_i^n)\}}$, $\tilde{\y}^n \triangleq \A\x^n -\big(\W^n\big)^{-1}[\bm{d}_h(l^n)]^T$. Adding the regularizer $\R(\x,\Z,\Gamma)$ to $\mathbf{Q}(\x;\x^n) $, we obtain the following majorizer for $G(\x,\Z,\Gamma)$: \begin{equation}\label{eq:major_all} \begin{aligned} F(\x,\Z,\Gamma;\x^n) &\triangleq \phi(\x;\x^n) + Q_c^n+ \R(\x,\Z,\Gamma) + \mathfrak{X}(\x). \end{aligned} \end{equation} Dropping the constant term $Q_c^n$, the overall surrogate function for $G(\x,\Z,\Gamma)$ in the $n$th iteration becomes \begin{equation}\label{eq:surr_all} \Phi(\x,\Z,\Gamma;\x^n) = \phi(\x;\x^n) + \R(\x,\Z,\Gamma) + \mathfrak{X}(\x). \end{equation} \subsection{Proof of Theorem~\ref{them1} - Part 1} Here, we show that for an initial $(\x^0, \Z^0, \Gamma^0)$, iterative sequence $\{\x^n, \Z^n, \Gamma^n\}$ generated by the SPULTRA algorithm is bounded, and the corresponding objective sequence $\{G(\x^n, \Z^n, \Gamma^n)\}$ decreases monotonically and converges to ${G^* \triangleq G^*(\x^0, \Z^0, \Gamma^0)}$. \subsubsection{Boundedness of the sequence $\{\x^n, \Z^n, \Gamma^n\}$} It is obvious that the sequences $\{\x^n\}$ and $\{\Gamma^n\}$ are bounded, because of the constraints in \eqref{eq:P0}. 
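A quick numerical check of the majorizer construction \eqref{eq:L_major}--\eqref{eq:surrogate}: for any smooth per-ray costs and any positive curvatures, $\mathbf{Q}(\x;\x^n)$ matches $\mathsf{L}$ in value and gradient at $\x^n$. The per-ray costs below are generic stand-ins, and the optimum-curvature formula needed for the global majorization property is not reproduced here; tangency holds regardless.

```python
import numpy as np

rng = np.random.default_rng(4)
N_d, N_p = 12, 5

A = rng.normal(size=(N_d, N_p))
b = rng.normal(size=N_d)

# Generic smooth per-ray costs standing in for h_i; any positive curvatures
# suffice for checking tangency of the surrogate (not global majorization).
h = lambda l: (l - b) ** 4 + l
dh = lambda l: 4 * (l - b) ** 3 + 1.0
L = lambda x: h(A @ x).sum()                 # L(x) = sum_i h_i([Ax]_i)

x_n = rng.normal(size=N_p)
c = rng.uniform(1.0, 3.0, size=N_d)          # curvatures c_i(l_i^n) > 0
d_h = dh(A @ x_n)                            # entries of d_h(l^n)

y_tilde = A @ x_n - d_h / c                  # \tilde{y}^n
phi = lambda x: 0.5 * (c * (y_tilde - A @ x) ** 2).sum()
Q_c = L(x_n) - 0.5 * (d_h ** 2 / c).sum()
Q = lambda x: phi(x) + Q_c                   # surrogate Q(x; x^n)

# Tangency: the surrogate matches L in value and gradient at x^n.
assert np.isclose(Q(x_n), L(x_n))
assert np.allclose(A.T @ (c * (A @ x_n - y_tilde)), A.T @ d_h)
```

The gradient identity checked in the last line is the same fact invoked later in the proof, where $\nabla\phi(\x;\x^*)|_{\x=\x^*} = \nabla\mathsf{L}(\x)|_{\x=\x^*}$.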
Since $\z_j^{n}= \mathit{H}_{\gamma_c}(\omg_{\Gamma_j^n}\P_j\x^n)$ is obtained by hard-thresholding a bounded input, the sequence $\{\Z^n\}$ is also bounded. \subsubsection{Monotone decrease of the objective function $G(\x,\Z,\Gamma)$}\label{sec:decrease} First, we discuss the objective behavior in each step of the algorithm. \paragraph{Image update step}\label{sec:img_upd}With $\Z$ and cluster assignments $\Gamma$ fixed, the cost function for the image update step is ${\Phi(\x,\Z^n,\Gamma^n;\x^n)}$. $\Phi(\cdot)$ as in \eqref{eq:surr_all} is a sum of quadratic functions and the simple barrier function $\mathfrak{X}(\x)$, and many approaches can be used to minimize it. Assuming it is solved exactly, we have \begin{equation}\label{eq:img_converge1} \x^{n+1}\in \arg \min_{\x} \Phi(\x,\Z^n,\Gamma^n;\x^n), \end{equation} or equivalently, ${\x^{n+1}\in \arg \min_{\x} F(\x,\Z^n,\Gamma^n;\x^n)}.$ Since $F(\x,\Z,\Gamma;\x^n)$ is the majorizer of $G(\x,\Z,\Gamma)$, we have \begin{equation}\label{eq:mono_img} \begin{aligned} &G(\x^{n+1},\Z^n,\Gamma^n) \leq F(\x^{n+1},\Z^n,\Gamma^n;\x^n) \\ &\leq F(\x^{n},\Z^n,\Gamma^n;\x^n) = G(\x^{n},\Z^n,\Gamma^n) \end{aligned} \end{equation} \paragraph{Sparse coding and clustering step} With $\x$ fixed, the relevant part of the cost function for the sparse coding and clustering step is ${\mathsf{R}(\x^{n+1}, \Z, \Gamma)}$. Since the solution with respect to ${(\Z, \Gamma)}$ is computed exactly as described in {Section~III.~C} in \cite{TMI-SPULTRA-as-submit}, we have \begin{equation} (\Z^{n+1},\ \Gamma^{n+1}) \in \arg \min_{\Z, ~\Gamma} \mathsf{R}(\x^{n+1}, \Z, \Gamma). \end{equation} This then implies \begin{equation}\label{eq:spa_clu_convergence_G} (\Z^{n+1},\ \Gamma^{n+1}) \in \arg \min_{\Z, ~\Gamma} G(\x^{n+1},\Z, \Gamma). \end{equation}\\ Therefore, $G(\x^{n+1},\Z^{n+1}, \Gamma^{n+1}) \leq G(\x^{n+1},\Z, \Gamma)$. Combining this with \eqref{eq:mono_img} implies that the objective decreases in each outer iteration. 
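The exact sparse coding and clustering step referenced above has a closed form: hard-thresholding in each transform domain, then picking the class with the smallest residual-plus-sparsity cost. A sketch with random orthogonal stand-ins for the learned transforms $\omg_k$ (the patch weights $\tau_j$ are omitted for simplicity):

```python
import numpy as np

rng = np.random.default_rng(5)
K, v, N_til = 3, 8, 20
gamma_c = 0.6

# Random orthogonal stand-ins for the learned transforms Omega_k.
Omegas = [np.linalg.qr(rng.normal(size=(v, v)))[0] for _ in range(K)]
patches = rng.normal(size=(N_til, v))        # rows stand in for P_j x

def hard_threshold(u, g):
    z = u.copy()
    z[np.abs(z) < g] = 0.0
    return z

Z = np.zeros((N_til, v))
Gamma = np.zeros(N_til, dtype=int)
for j, p in enumerate(patches):
    best = np.inf
    for k, Om in enumerate(Omegas):
        u = Om @ p
        z = hard_threshold(u, gamma_c)        # optimal sparse code for class k
        cost = ((u - z) ** 2).sum() + gamma_c ** 2 * (z != 0).sum()
        if cost < best:                       # optimal class = cheapest cost
            best, Z[j], Gamma[j] = cost, z, k
```

For each class the per-patch cost equals $\sum_t \min(u_t^2, \gamma_c^2)$ in the transform domain, which is why hard-thresholding is exactly optimal and the step in the algorithm can be solved in closed form.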
In other words, the objective sequence $\{G^n \triangleq G(\x^n, \Z^n, \Gamma^n)\}$ is monotonically decreasing. Moreover, the objective $G$ is readily lower bounded by ${N_d \sigma^2 - (\sum_{i = 1}^{N_d}Y_i)\log(I_0 + \sigma^2)}$. Therefore, it converges to some limit ${G^* \triangleq G^*(\x^0, \Z^0, \Gamma^0)}$. \subsection{Proof of Theorem~\ref{them1} - Part 2}\label{sec:property2} Here, we show that all the accumulation points of the iterate sequence are equivalent and achieve the same value $G^*$ of the objective function. Since the sequence $\{\x^n, \Z^n, \Gamma^n\}$ is bounded, it follows from the Bolzano-Weierstrass Theorem that there exists a convergent subsequence and a corresponding accumulation point. In order to show that all the accumulation points of $\{\x^n, \Z^n, \Gamma^n\}$ achieve the same value of $G^*$, we consider an arbitrary convergent subsequence $\{\x^{q_m}, \Z^{q_m}, \Gamma^{q_m}\}$, and show that $G(\x^*, \Z^*, \Gamma^*) = G^*$ for the accumulation point $(\x^*, \Z^*, \Gamma^*)$. First, the objective satisfies \begin{equation}\label{eq:acc_srr} G^{q_m} \triangleq G(\x^{q_m}, \Z^{q_m}, \Gamma^{q_m}) = \mathsf{L}(\x^{q_m})+ \R(\x^{q_m}, \Z^{q_m}, \Gamma^{q_m}). \end{equation} Clearly, $\{G^{q_m}\}$ converges to $G^*$. Since ${\x^{q_m} \to \x^*}$ and ${\Z^{q_m} \to \Z^*}$ as $m\to \infty$, and $\mathsf{L}(\x)$ is a continuous function, $\mathsf{L}(\x^{q_m} )\to \mathsf{L}(\x^{*})$. Since $\Z^{q_m}$ does not contain any \textcolor{black}{non-zero} entries with magnitude less than $\gamma_c$ and $\Z^{q_m} \to \Z^*$, clearly, the support (i.e., locations of non-zeros) of $\Z^{q_m}$ must coincide with the support of $\Z^*$ after finitely many iterations. Similarly, because $\{\Gamma^{q_m}\}$ is an integer-vector sequence, $\Gamma^{q_m}$ converges to $\Gamma^{*}$ in a finite number of iterations.
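The stated lower bound follows termwise: for $f_i(l_i)\geq 0$ we have $I_0e^{-f_i(l_i)}+\sigma^2 \geq \sigma^2$ and $\log(I_0e^{-f_i(l_i)}+\sigma^2) \leq \log(I_0+\sigma^2)$, so $h_i(l_i) \geq \sigma^2 - Y_i\log(I_0+\sigma^2)$. A numerical spot-check with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(6)
I0, sigma2 = 1e4, 25.0
s1, s2 = 1.0, 0.01                        # illustrative beam-hardening coefficients

l = rng.uniform(0.0, 10.0, size=1000)     # nonnegative line integrals
Y = rng.uniform(0.0, 2 * I0, size=1000)   # nonnegative (shifted) measurements

m = I0 * np.exp(-(s1 * l + s2 * l ** 2)) + sigma2
h = m - Y * np.log(m)

# Termwise bound: m >= sigma^2 and log(m) <= log(I0 + sigma^2).
lower = sigma2 - Y * np.log(I0 + sigma2)
assert np.all(h >= lower)
```

Summing the per-ray bound over $i = 1, \ldots, N_d$ gives exactly the constant quoted in the proof.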
Therefore, taking the limit $m\to \infty$ term by term in $G(\x^{q_m}, \Z^{q_m}, \Gamma^{q_m})$ yields \begin{equation}\label{eq:acc_G} \lim_{m\to \infty} G(\x^{q_m}, \Z^{q_m}, \Gamma^{q_m}) = G(\x^*, \Z^*, \Gamma^*). \end{equation} Combining \eqref{eq:acc_G} with the fact that $G^{q_m} \to G^*$, we obtain \begin{equation}\label{eq:propert2} G(\x^*, \Z^*, \Gamma^*) = G^*. \end{equation} Thus, any accumulation point of $\{\x^n, \Z^n, \Gamma^n\}$ achieves the value $G^*$ for the cost. \subsection{Proof of Theorem~\ref{them1} - Part 3}\label{sec:property3} Here, we show that each accumulation point $(\x^*,\Z^*,\Gamma^*)$ satisfies the partial optimality conditions in \eqref{eq:critical-point}. The proof uses the following Lemma~1. \begin{lemma}\label{lemma1} Consider the subsequence $\{\x^{q_m}, \Z^{q_m-1}, \Gamma^{q_m-1}\}$ that converges to the accumulation point $(\x^*, \Z^{**}, \Gamma^{**})$, then the subsequence $\{\x^{q_m-1}\}$ also converges to $\x^*$, with $\x^*$ being the unique minimizer of $F(\x, \Z^{**}, \Gamma^{**}; \x^{*})$ with respect to $\x$. \end{lemma} \textit{Proof of Lemma~\ref{lemma1}}:\\ Since $\{\x^{q_m-1}\}$ is bounded, there exists a convergent subsequence $\{\x^{q_{m_t}-1}\}$ which converges to $\x^{**}$. The following inequalities follow from \eqref{eq:mono_img} and \eqref{eq:spa_clu_convergence_G}: \begin{equation}\label{eq:mono_subseq} \begin{aligned} G^{q_{m_t}} &= G(\x^{q_{m_t}}, \Z^{q_{m_t}},\Gamma^{q_{m_t}}) \leq G(\x^{q_{m_t}}, \Z^{q_{m_t}-1},\Gamma^{q_{m_t}-1})\\ &\leq F(\x^{q_{m_t}}, \Z^{q_{m_t}-1},\Gamma^{q_{m_t}-1};\x^{q_{m_t}-1}) \\ &\leq F(\x^{q_{m_t}-1}, \Z^{q_{m_t}-1},\Gamma^{q_{m_t}-1};\x^{q_{m_t}-1}) \\ & = G(\x^{q_{m_t}-1}, \Z^{q_{m_t}-1},\Gamma^{q_{m_t}-1}) =G^{q_{m_t}-1} . 
\end{aligned} \end{equation} Since $G^{q_{m_t}}$ and $G^{q_{m_t}-1}$ are successive elements from the sequence $\{G^n\}$, and $\{G^n\}$ converges to $G^*$, taking the limit $t\to \infty$ throughout \eqref{eq:mono_subseq} yields \begin{subequations} \begin{equation} G^* \leq F(\x^{*}, \Z^{**}, \Gamma^{**};\x^{**}) \leq F(\x^{**}, \Z^{**}, \Gamma^{**};\x^{**})\leq G^*. \end{equation} Thus, \begin{equation}\label{eq:37b} F(\x^{*}, \Z^{**}, \Gamma^{**};\x^{**}) = F(\x^{**}, \Z^{**}, \Gamma^{**};\x^{**}) = G^*. \tag{35(b)} \end{equation} \end{subequations} Since \eqref{eq:major_all} is a quadratic cost with simple box constraints on $\x$, the Hessian of the quadratic terms with respect to $\x$ is \begin{equation}\label{eq:H_F} \mathbf{H}(\x) = \A^T\W^n\A + 2\beta \sum_{j=1}^{\tilde{N}} \P_j^T\omg_{\Gamma{j}}^T\omg_{\Gamma{j}}\P_j . \end{equation} Clearly, $\A^T\W^n\A$ is non-negative definite, and $\sum_{j=1}^{\tilde{N}} \P_j^T\omg_{\Gamma{j}}^T\omg_{\Gamma{j}}\P_j $ is positive definite\textcolor{black}{\cite{STlearning15,2015BCS-ST}}. Since $\beta$ is a positive scalar, the Hessian in \eqref{eq:H_F} is positive definite. This implies that the minimization of $F(\cdot)$ (quadratic with a box constraint) has a unique solution~\cite{bjorck1996numerical, mead2010least,rojas2002interior}. Moreover, since the following inequality holds for all $\x$ satisfying the problem constraints \begin{equation} \begin{aligned} & F(\x^{q_{m_t}}, \Z^{q_{m_t}-1},\Gamma^{q_{m_t}-1};\x^{q_{m_t}-1}) \\ &\leq F(\x, \Z^{q_{m_t}-1},\Gamma^{q_{m_t}-1};\x^{q_{m_t}-1}), \end{aligned} \end{equation} taking the limit $t\to \infty$ above and using similar arguments as for \eqref{eq:acc_G} yields \begin{equation} \begin{aligned} F(\x^{*}, \Z^{**},\Gamma^{**};\x^{**}) \leq F(\x, \Z^{**},\Gamma^{**};\x^{**}), \end{aligned} \end{equation} implying that $\x^*$ is a minimizer of ${F(\x, \Z^{**},\Gamma^{**};\x^{**})}$.
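The positive-definiteness argument can be checked numerically on a toy problem. In the sketch below (assumptions for illustration only) the patches are non-overlapping and cover the image, and each stand-in transform is orthogonal, so the regularizer's Hessian term reduces exactly to the identity and the smallest eigenvalue of $\mathbf{H}$ is at least $2\beta$.

```python
import numpy as np

rng = np.random.default_rng(7)
N_d, N_p, v, beta = 30, 16, 4, 0.5       # toy sizes: 4x4 image, 2x2 patches

A = rng.uniform(0.0, 0.1, size=(N_d, N_p))
W = np.diag(rng.uniform(0.5, 2.0, size=N_d))   # positive curvatures on the diagonal

# Non-overlapping 2x2 patches covering the 4x4 image, with orthogonal stand-in
# transforms, so that sum_j P_j^T Omega_j^T Omega_j P_j equals the identity.
H_reg = np.zeros((N_p, N_p))
for r in (0, 2):
    for c in (0, 2):
        idx = [4 * (r + dr) + (c + dc) for dr in (0, 1) for dc in (0, 1)]
        P = np.zeros((v, N_p))
        P[np.arange(v), idx] = 1.0
        Omega = np.linalg.qr(rng.normal(size=(v, v)))[0]
        H_reg += P.T @ Omega.T @ Omega @ P

H = A.T @ W @ A + 2 * beta * H_reg       # Hessian of the quadratic majorizer
eig_min = np.linalg.eigvalsh(H).min()    # bounded below by 2*beta here
```

With overlapping patches and nonsingular learned transforms the same conclusion holds, since every pixel is covered by at least one patch.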
Since the minimizer of ${F(\x, \Z^{**},\Gamma^{**};\x^{**})}$ with respect to $\x$ is unique, using \eqref{eq:37b} immediately implies ${\x^{**} = \x^{*}}$. Since $\{\x^{q_{m_t}-1}\}$ is an arbitrary subsequence of $\{\x^{q_{m}-1}\}$, $\x^*$ is the limit of any convergent subsequence of $\{\x^{q_{m}-1}\}$. In other words, $\x^*$ is the unique accumulation point of the bounded sequence, i.e., $\{\x^{q_{m}-1}\}$ itself converges to $\x^*$. This completes the proof of the Lemma.\\ We have shown in the proof of Lemma~\ref{lemma1} that $\x^{**}$ is the unique minimizer of the quadratic function ${F(\x, \Z^{**},\Gamma^{**};\x^{**})}$. This means that ${{\mathbf{0} \in \partial_{\x} F(\x, \Z^{**},\Gamma^{**};\x^{**})|_{\x = \x^{**}}}}$. It is easy to show that we can equivalently consider the sequence $\{\x^{q_m}, \Z^{q_m}, \Gamma^{q_m}\}$ converging to $(\x^*,\Z^*,\Gamma^*)$ for which \begin{equation}\label{eq:partial_F_0} \mathbf{0} \in \partial_{\x} F(\x,\Z^*,\Gamma^*; \x^*)|_{\x = \x^{*}}. \end{equation} Based on the definition of the majorizer of $\mathsf{L}(\x)$, we have \begin{equation}\label{eq:dphi*} \nabla\phi(\x;\x^*)|_{\x = \x^{*}}= \nabla\mathsf{L}(\x)|_{\x = \x^{*}}, \end{equation} where $\nabla$ is the gradient operator. Since the quadratic surrogate and regularizer components of $F(\cdot)$ have exact gradients, combining \eqref{eq:dphi*} with \eqref{eq:partial_F_0} yields \begin{equation} \mathbf{0} \in \partial_{\x}G(\x,\Z^*,\Gamma^*)|_{\x = \x^{*}}. \end{equation} In other words, $\x^*$ is a critical point of $G(\x,\Z^*,\Gamma^*)$. To show the partial optimality condition for $(\Z^*, \Gamma^*)$ as in \eqref{eq:critical-point}, we first use \eqref{eq:spa_clu_convergence_G} for the subsequence $\{\x^{q_m}, \Z^{q_m}, \Gamma^{q_m}\}$, yielding \begin{equation} G(\x^{q_m}, \Z^{q_m}, \Gamma^{q_m}) \leq G(\x^{q_m}, \Z, \Gamma), \ \forall (\Z,\Gamma).
\end{equation} Then, taking the limit ${m \to \infty}$ above and using \eqref{eq:acc_G} and Lemma~\ref{lemma1}, we get \begin{equation} G(\x^*, \Z^*, \Gamma^*) \leq G(\x^*, \Z, \Gamma), \ \forall (\Z,\Gamma), \end{equation} which can be equivalently written as \begin{equation}\label{eq:z_Ck_PGO*} (\Z^{*},\Gamma^{*}) \in \arg \min_{\Z,\Gamma} G(\x^{*},\Z,\Gamma). \end{equation} \subsection{Proof of Theorem~\ref{them1} - Part 4} Here, we show that $\|\x^{n+1} - \x^n\|_2 \to 0$ as $n\to \infty$. Since $\{\x^n\}$ is bounded, $\|\x^n\|_2 \leq C$ for some $C>0$ and all $n$. Therefore, the sequence $\{e^n\}$ is also bounded, with $e^n \triangleq \|\x^{n+1} - \x^n\|_2 \leq 2C$ for all $n$. Hence, there exists a convergent subsequence $\{e^{q_m}\}$ of $\{e^n\}$. For the bounded sequence $\{\x^{q_{m}+1}, \Z^{q_{m}},\Gamma^{q_{m}}\}$, there exists a convergent subsequence $\{\x^{q_{m_t}+1},\Z^{q_{m_t}},\Gamma^{q_{m_t}}\}$ converging to $(\x^*,\Z^*, \Gamma^*)$. Moreover, by Lemma~\ref{lemma1}, the sequence $\{\x^{q_{m_t}}\}$ also converges to $\x^*$. Hence, the subsequence $\{e^{q_{m_t}}\}$ with $e^{q_{m_t}}\triangleq \|\x^{q_{m_t}+1}- \x^{q_{m_t}}\|_2$ converges to $0$. Since $\{e^{q_{m_t}}\}$ is a subsequence of the convergent sequence $\{e^{q_{m}}\}$, the latter has the same limit, i.e., $0$. As the convergent subsequence $\{e^{q_{m}}\}$ was chosen arbitrarily from $\{e^n\}$, we conclude that $0$ is the only accumulation point of $\{e^n\}$. Thus, $\|\x^{n+1} - \x^n\|_2 \to 0$ as $n\to \infty$. \section{Additional Experimental Results} \subsection{Behavior of the Learned ULTRA Models} Here, we further illustrate the sparse coefficient maps generated by SPULTRA. 
\begin{figure}[!htbp] \centering \includegraphics[width=0.4\textwidth]{figures/sparsecode/irow81_classAll.pdf} \caption{Sum of the disjoint sparse coefficient maps generated by the 81st filter from all classes.} \label{fig:spacod_row81_all} \vspace{-0.1in} \end{figure} The sparse code vectors $\z_j$ in (3) can be concatenated as columns of a sparse code matrix $\Z$. Fig.~2 in~\cite{TMI-SPULTRA-as-submit} displays the axial slice of the sparse coefficient volume obtained from the 81st row of $\Z$. This represents the effective map for the 81st filter of all classes (composed as the sum of the 81st filter's map from each class). Fig.~\ref{fig:spacod_row81_classes} shows the underlying maps for the 81st filter for each class, obtained by masking out (or setting to zero) pixels in Fig.~\ref{fig:spacod_row81_all} that correspond to patches not in the class. The filters are shown at the top left corner of the sparse coefficient images. Thus, in the ULTRA model, several filters, capturing different properties and different features or edges, collaboratively form the ``effective'' sparse coefficient maps. \begin{figure}[!htp] \centering \includegraphics[width=0.5\textwidth]{figures/sparsecode/spacod_transform_row81.pdf} \caption{Sparse coefficient map (axial slice) for the $81$st filter of each class.} \label{fig:spacod_row81_classes} \vspace{-0.2in} \end{figure} \subsection{\textcolor{black}{Clustering results in low-dose situations}} \textcolor{black}{Fig.~2 in the manuscript showed 3 out of 15 voxel-level clustering results of the reconstructed image at $I_0 = 1\times 10^4$. Here, Fig.~\ref{fig:cluster_2e3_all} is a binary image showing the clustering memberships of all the classes for the reconstruction at $I_0 = 2\times 10^3$. The white regions indicate pixels assigned to the corresponding class. 
The voxel-level clustering results (that display the pixels using their reconstructed intensities) at $I_0 = 2\times 10^3$ are actually similar to the ones shown in Fig.~2 (first column) in the manuscript. Specifically, Tab.~\ref{tab:cluster_perc} shows the percentages of pixels assigned to Class~1, 13 and 14 respectively. Although $I_0 = 2\times 10^3$ is a much lower dose compared with $I_0 = 1\times 10^4$, the clustering results only have slight changes. This illustrates that the voxel clustering based on majority vote of overlapping patches is robust in low-dose situations.} \begin{figure}[!htp] \centering \includegraphics[width=0.45\textwidth]{figures/sparsecode/clustering_2e3_int.pdf} \caption{\textcolor{black}{Binary images showing the clustering memberships of pixels in the central axial slice of the XCAT phantom reconstructed at $I_0 = 2\times 10^3$.}} \label{fig:cluster_2e3_all} \vspace{-0.05in} \end{figure} \begin{table}[!htp] \centering \caption{\textcolor{black}{Percentages of pixels belonging to Class~1, Class~13, and Class~14.}} \begin{tabular}{l c c c} \toprule \textcolor{black}{$\quad I_0$} &\textcolor{black}{Class 1} & \textcolor{black}{Class 13} &\textcolor{black}{Class 14}\\ \midrule \textcolor{black} {$1\times 10^4$ } & \textcolor{black}{18.7 $\%$} & \textcolor{black}{6.6 $\%$}&\textcolor{black}{65.5 $\%$} \\ \midrule \textcolor{black} {$2\times 10^3$} &\textcolor{black}{18.0 $\%$} & \textcolor{black}{6.9 $\%$}& \textcolor{black}{ 64.4 $\%$}\\ \bottomrule \end{tabular} \vspace{-0.21in} \label{tab:cluster_perc} \end{table} \subsection{Zoom-ins of ROI 2 and ROI 3 in the XCAT phantom simulations} \begin{figure}[!htbp] \centering \begin{subfigure}[h]{0.48\textwidth} \centering \includegraphics[width=1\textwidth]{figures/ROIs_xcat/3e3_ROI2_v3} \caption{$I_0 = 3\times 10^3$} \label{fig:xcat-3e3-roi2} \end{subfigure} \vfill \begin{subfigure}[h]{0.48\textwidth} \centering \includegraphics[width=1\textwidth]{figures/ROIs_xcat/2e3_ROI2_v3} \caption{$I_0 = 
2\times 10^3$} \label{fig:xcat-2e3-roi2} \end{subfigure} \caption{\textcolor{black}{Plots of the ROI 2 (central axial, sagittal, and coronal slices of the 3D volume). The display windows for the reconstructed ROI and the corresponding error image are {[900,~1100]} HU and [0,~200]~HU, respectively.}} \label{fig:xcat-recon-roi2} \vspace{-0.1in} \end{figure} \begin{figure}[!htbp] \centering \begin{subfigure}[h]{0.45\textwidth} \centering \includegraphics[width=1\textwidth]{figures/ROIs_xcat/3e3_ROI3_v3} \caption{$I_0 = 3\times 10^3$} \label{fig:xcat-3e3-roi3} \end{subfigure} \vfil \begin{subfigure}[h]{0.45\textwidth} \centering \includegraphics[width=1\textwidth]{figures/ROIs_xcat/2e3_ROI3_v3} \caption{$I_0 = 2\times 10^3$} \label{fig:xcat-2e3-roi3} \end{subfigure} \caption{\textcolor{black}{Plots of the ROI 3 (central axial, sagittal, and coronal slices of the 3D volume). The display windows for the reconstructed ROI and the corresponding error image are {[900,~1100]} HU and [0,~200]~HU, respectively.}} \label{fig:xcat-recon-roi3} \vspace{-0.1in} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[width=0.47\textwidth]{figures/ROIs_xcat/xtrue_roi2-3} \caption{\textcolor{black}{3D plots of the ground-truth ROI 2 and ROI 3. The display windows for ROI~2 and ROI~3 are [900,~1100]~HU and [900,~1200]~HU, respectively.}} \vspace{-0.2in} \label{fig:xtrue-roi2-3} \end{figure} \textcolor{black}{Fig.~\ref{fig:xcat-recon-roi2} and Fig.~\ref{fig:xcat-recon-roi3}} plot the zoom-ins \textcolor{black}{and the corresponding error images} of ROI 2 and ROI 3 for the XCAT phantom simulations in Section~V.A, with $I_0 = 3 \times 10^3$ and $I_0 = 2 \times 10^3$, respectively. In Fig.~\ref{fig:xcat-recon-roi3}, we highlighted a region in the axial slice with small red arrows. \textcolor{black}{We show the zoom-ins of the ground-truth ROI 2 and ROI 3 of the XCAT phantom in Fig.~\ref{fig:xtrue-roi2-3}. 
}The results show that SPULTRA improves image quality over PWLS-EP and PWLS-ULTRA by reducing bias and improving image edges. \subsection{\textcolor{black}{FBP images of XCAT phantom simulations}} \vspace{-0.05in} \textcolor{black}{In XCAT phantom simulations, the PWLS-EP algorithm was initialized with an image reconstructed by the FDK \cite{feldkamp1984practical} method. Fig.~\ref{fig:fdk_xcat} shows the FDK reconstructed images for all the tested doses in Section V.A. These images have substantial streak artifacts and noise. } \begin{figure}[!htbp] \centering \begin{subfigure}[h]{0.22\textwidth} \centering \includegraphics[width=1\textwidth]{figures/FDK_imgs/1e4_xfdk} \caption{$I_0 = 1\times 10^4$} \label{fig:1e4_xfdk} \end{subfigure} \hfil \begin{subfigure}[h]{0.22\textwidth} \centering \includegraphics[width=1\textwidth]{figures/FDK_imgs/5e3_xfdk} \caption{$I_0 = 5\times 10^3$} \label{fig:5e3_xfdk} \end{subfigure} \vfil \begin{subfigure}[h]{0.22\textwidth} \centering \includegraphics[width=1\textwidth]{figures/FDK_imgs/3e3_xfdk} \caption{$I_0 = 3\times 10^3$} \label{fig:3e3_xfdk} \end{subfigure} \hfil \begin{subfigure}[h]{0.22\textwidth} \centering \includegraphics[width=1\textwidth]{figures/FDK_imgs/2e3_xfdk} \caption{$I_0 = 2\times 10^3$} \label{fig:2e3_xfdk} \end{subfigure} \caption{\textcolor{black}{FDK reconstructions for XCAT phantom simulations at different doses. The display window is [800,~1200]~HU.}} \label{fig:fdk_xcat} \vspace{-0.1in} \end{figure} \subsection{Ultra Low-dose 2D Shoulder Data Simulations} \subsubsection{\textcolor{black}{Initialize WavResNet with the FBP image}} In~\cite{TMI-SPULTRA-as-submit}, we presented the denoised image obtained using the iterative RNN version of WavResNet with the PWLS-EP reconstructed image as input. 
Since we used the optimal parameters reported in~\cite{WavResNet18} for WavResNet, wherein the inputs are images reconstructed using the filtered backprojection (FBP) method, here we also show the result obtained by using the FBP reconstructed shoulder phantom as input to WavResNet. Fig.~\ref{fig:wavresnet-fbpinit} shows the initial FBP image and the denoised image using the RNN version of WavResNet with 6 iterations (as reported in~\cite{WavResNet18}; more iterations did not provide much improvement in this case). As we see from Fig.~\ref{fig:wavresnet-fbpinit}, the denoised image is still quite noisy, and the image quality is clearly worse than the result with the PWLS-EP input shown in~\cite{TMI-SPULTRA-as-submit}. Hence, we used the PWLS-EP reconstruction as the input to WavResNet in the comparisons. \begin{figure}[!htp] \centering \begin{subfigure}[h]{0.22\textwidth} \centering \includegraphics[width=1\textwidth]{figures/shoulder2d/xfbp_withBH_1ma_crop} \caption{FBP input} \label{fig:shoulder_fbp} \end{subfigure} \hfil \begin{subfigure}[h]{0.22\textwidth} \centering \includegraphics[width=1\textwidth]{figures/shoulder2d/1ma_fbpinit_wavresrnn_iter6_crop.pdf} \caption{RNN version of WavResNet} \label{fig:wavresnet-6iter} \end{subfigure} \caption{Result of the iterative RNN version of WavResNet with an FBP image input. The display window is [800, 1400] HU.} \label{fig:wavresnet-fbpinit} \vspace{-0.1in} \end{figure} \subsubsection{\textcolor{black}{Regularizer Parameter Selection Procedure}} \textcolor{black}{In tuning the regularizer parameters for 2D shoulder data simulations where the beam-hardening model is involved, we considered the sparsity level, i.e., the percentage of non-zero entries in the sparse coefficients $\Z$ corresponding to the reconstructed image, and the trade-off among the bias, image resolution, and noise. 
Based on our heuristic parameter tuning in the XCAT and synthesized clinical data experiments, well-reconstructed images usually have sparsity levels around $3\%$ or $4\%$. Therefore, we first roughly chose $\beta=0.05$, which produced a reasonable reconstruction, and swept over several values of $\gamma_c$, which controls the sparsity level for both PWLS-ULTRA and SPULTRA, e.g., $\gamma_c = 40,\ 60,\ 80$, and $120$. Tab.~\ref{tab:shoulder-mean-roi} (the second column) reports the sparsity levels of reconstructions with different $(\beta,\gamma_c)$ values. The reconstructed images corresponding to sparsity levels larger than $5\%$ are shown in Fig.~\ref{fig:shoulder_pwls_0540} (PWLS-ULTRA) and Fig.~\ref{fig:shoulder_sp_0540} (SPULTRA). These figures clearly show some artifacts (indicated by red arrows), which supports the rationale for picking $\gamma_c$ based on the sparsity level. Among $\gamma_c = 60,\ 80$, and $120$, we compared the mean values and standard deviations of the selected ROIs (marked in Fig.~10 in \cite{TMI-SPULTRA-as-submit}), and observed that $\gamma_c=120$ made the reconstructions blurry (see Fig.~\ref{fig:shoulder_pwls_05120} and Fig.~\ref{fig:shoulder_sp_05120}), while $\gamma_c = 60$ and $\gamma_c=80$ provide a good resolution-noise trade-off for the reconstructed images. Hereafter, we fixed $\gamma_c = 80$ and swept over several $\beta$ values. Taking $\beta=0.05$ as a baseline, we selected $\beta = 0.03$ and $\beta=0.1$, which are approximately $0.5\times$ and $2\times$ the baseline value. From both numerical results (Mean and STD in Tab.~\ref{tab:shoulder-mean-roi}) and visual results (Fig.~\ref{fig:shoulder_pwls_para} and Fig.~\ref{fig:shoulder_sp_para}), we found that $\beta=0.05$ gave the best bias-resolution-noise trade-off. In the manuscript \cite{TMI-SPULTRA-as-submit}, we showed the results with $\beta=0.05$ and $\gamma_c = 80$. 
} \begin{figure}[!htbp] \centering \begin{subfigure}[h]{0.22\textwidth} \centering \includegraphics[width=1\textwidth]{figures/shoulder2d/pwls_ultra_xral_bt0.05_gm40_iter200.pdf} \caption{\textcolor{black}{$(0.05,\ 40)$}} \label{fig:shoulder_pwls_0540} \end{subfigure} \hfil \begin{subfigure}[h]{0.22\textwidth} \centering \includegraphics[width=1\textwidth]{figures/shoulder2d/pwls_ultra_xral_bt0.05_gm60_iter200.pdf} \caption{\textcolor{black}{$(0.05,\ 60)$}} \label{fig:shoulder_pwls_bt0560} \end{subfigure} \vfil \begin{subfigure}[h]{0.22\textwidth} \centering \includegraphics[width=1\textwidth]{figures/shoulder2d/pwls_ultra_xral_bt0.05_gm80_iter200.pdf} \caption{\textcolor{black}{$(0.05,\ 80)$}} \label{fig:shoulder_pwls_0580} \end{subfigure} \hfil \begin{subfigure}[h]{0.22\textwidth} \centering \includegraphics[width=1\textwidth]{figures/shoulder2d/pwls_ultra_xral_bt0.05_gm120_iter200.pdf} \caption{\textcolor{black}{$(0.05,\ 120)$}} \label{fig:shoulder_pwls_05120} \end{subfigure} \vfil \begin{subfigure}[h]{0.22\textwidth} \centering \includegraphics[width=1\textwidth]{figures/shoulder2d/pwls_ultra_xral_bt0.03_gm80_iter200.pdf} \caption{\textcolor{black}{$(0.03,\ 80)$}} \label{fig:shoulder_pwls_0380} \end{subfigure} \hfil \begin{subfigure}[h]{0.22\textwidth} \centering \includegraphics[width=1\textwidth]{figures/shoulder2d/pwls_ultra_xral_bt0.1_gm80_iter200.pdf} \caption{\textcolor{black}{$(0.1,\ 80)$}} \label{fig:shoulder_pwls_180} \end{subfigure} \caption{\textcolor{black}{PWLS-ULTRA reconstructions with different $(\beta,\ \gamma_c)$ values. 
The red arrows point to some blurry areas or artifacts.}} \label{fig:shoulder_pwls_para} \end{figure} \begin{figure*}[!ht] \centering \begin{subfigure}[h]{0.22\textwidth} \centering \includegraphics[width=1\textwidth]{figures/shoulder2d/xral_bt0.05_gm40_iter600.pdf} \caption{\textcolor{black}{$(0.05,\ 40)$}} \label{fig:shoulder_sp_0540} \end{subfigure} \hfil \begin{subfigure}[h]{0.22\textwidth} \centering \includegraphics[width=1\textwidth]{figures/shoulder2d/xral_bt0.05_gm60_iter600.pdf} \caption{\textcolor{black}{$(0.05,\ 60)$}} \label{fig:shoulder_sp_0560} \end{subfigure} \hfil \begin{subfigure}[h]{0.22\textwidth} \centering \includegraphics[width=1\textwidth]{figures/shoulder2d/xral_bt0.05_gm80_iter600.pdf} \caption{\textcolor{black}{$(0.05,\ 80)$}} \label{fig:shoulder_sp_0580} \end{subfigure} \vfil \begin{subfigure}[h]{0.22\textwidth} \centering \includegraphics[width=1\textwidth]{figures/shoulder2d/xral_bt0.05_gm120_iter600.pdf} \caption{\textcolor{black}{$(0.05,\ 120)$}} \label{fig:shoulder_sp_05120} \end{subfigure} \hfil \begin{subfigure}[h]{0.22\textwidth} \centering \includegraphics[width=1\textwidth]{figures/shoulder2d/xral_bt0.03_gm80_iter600.pdf} \caption{\textcolor{black}{$(0.03,\ 80)$}} \label{fig:shoulder_sp_0380} \end{subfigure} \hfil \begin{subfigure}[h]{0.22\textwidth} \centering \includegraphics[width=1\textwidth]{figures/shoulder2d/xral_bt0.1_gm80_iter600.pdf} \caption{\textcolor{black}{$(0.1,\ 80)$}} \label{fig:shoulder_sp_180} \end{subfigure} \caption{\textcolor{black}{SPULTRA reconstructions with different $(\beta,\ \gamma_c)$ values. The red arrows point to some blurry areas or artifacts.}} \label{fig:shoulder_sp_para} \end{figure*} \begin{table*}[!htbp] \centering \caption{\textcolor{black}{Metrics used to tune parameters $(\beta,\gamma_c)$ for PWLS-ULTRA and SPULTRA in ultra low-dose shoulder phantom simulations. The \textit{sparsity (\%)} is the percentage of non-zero entries in the sparse coefficient matrix $\Z$. 
The \textit{Mean} and the standard deviation (\textit{STD}) are computed for ROIs marked in Fig.~10 of \cite{TMI-SPULTRA-as-submit}, and the unit is HU.} } \begin{subtable}{1\textwidth} \centering \caption{\textcolor{black}{PWLS-ULTRA}} \begin{tabular}{cccc} \toprule \textit{\textcolor{black}{$(\beta,\gamma_c)$}}&\textit{\textcolor{black}{sparsity (\%)}} & \textit{\textcolor{black}{Mean (ROI 1 / ROI 2 / ROI3)}} & \textit{\textcolor{black}{STD (ROI 1 / ROI 2 / ROI3)}} \\ \midrule \textcolor{black}{\textit{Reference}}& \textcolor{black}{-}&\textcolor{black}{\textit{1052.1 / 1060.1 / 1053.4}}& \textcolor{black}{\textit{8.12 / 8.81 / 6.98}}\\ \midrule \textcolor{black}{(0.05, 40)}& \textcolor{black}{5.9}&\textcolor{black}{1031.0 / 1043.2 / 1023.6}& \textcolor{black}{14.70 / 19.65 / 19.93}\\ \midrule \textcolor{black}{\textbf{(0.05, 60)}}& \textcolor{black}{\textbf{4.1}}&\textcolor{black}{\textbf{1031.1 / 1045.4 / 1023.9}}& \textcolor{black}{\textbf{14.03 / 11.35 / 19.38}}\\ \midrule \textcolor{black}{\textbf{(0.05, 80)}}& \textcolor{black}{\textbf{3.3}}&\textcolor{black}{\textbf{1031.1 / 1043.0 / 1024.2}}& \textcolor{black}{\textbf{14.82 / 10.92 / 19.29}}\\ \midrule \textcolor{black}{(0.05, 120)}&\textcolor{black}{2.5}&\textcolor{black}{1032.2 / 1026.7 / 1025.7}& \textcolor{black}{15.15 / 13.46 / 19.74}\\ \midrule \textcolor{black}{(0.03, 80)}& \textcolor{black}{3.6} & \textcolor{black}{1031.0 / 1043.2 / 1023.5} & \textcolor{black}{19.08 / 14.96 / 23.23}\\ \midrule \textcolor{black}{(0.1, 80)}& \textcolor{black}{3.0} & \textcolor{black}{1031.8 / 1027.0 / 1025.7} & \textcolor{black}{12.17 / 13.12 / 16.51}\\ \bottomrule \end{tabular} \end{subtable} \vfil \vspace{0.1in} \begin{subtable}{1\textwidth} \centering \caption{\textcolor{black}{SPULTRA}} \begin{tabular}{ccc c} \toprule \textit{\textcolor{black}{$(\beta,\gamma_c)$}}&\textit{\textcolor{black}{sparsity (\%)}} & \textit{\textcolor{black}{Mean (ROI 1 / ROI 2 / ROI3)}} & \textit{\textcolor{black}{STD (ROI 1 / ROI 2 / 
ROI3)}} \\ \midrule \textcolor{black}{\textit{Reference}}& \textcolor{black}{-}&\textcolor{black}{\textit{1052.1 / 1060.1 / 1053.4}}& \textcolor{black}{\textit{8.12 / 8.81 / 6.98}}\\ \midrule \textcolor{black}{(0.05, 40)}& \textcolor{black}{7.4}&\textcolor{black}{1054.7 / 1043.2 / 1049.2}& \textcolor{black}{16.95 / 12.14 / 13.06}\\ \midrule \textcolor{black}{(0.05, 60)}& \textcolor{black}{5.0}&\textcolor{black}{1054.7 / 1047.6 / 1049.1}& \textcolor{black}{15.96 / 12.26 / 11.93}\\ \midrule \textcolor{black}{\textbf{(0.05, 80)}}& \textcolor{black}{\textbf{3.9}}& \textcolor{black}{\textbf{1054.7 / 1044.0 / 1049.6}}&\textcolor{black}{\textbf{16.34 / 11.42 / 11.60}}\\ \midrule \textcolor{black}{(0.05, 120)}&\textcolor{black}{2.8}&\textcolor{black}{1055.3 / 1036.9 / 1050.8}& \textcolor{black}{16.13 / 14.11 / 11.55}\\ \midrule \textcolor{black}{(0.03, 80)}& \textcolor{black}{4.2} & \textcolor{black}{1054.5 / 1042.7 / 1049.2} & \textcolor{black}{20.65 / 15.81 / 17.36}\\ \midrule \textcolor{black}{(0.1, 80)}& \textcolor{black}{3.6} & \textcolor{black}{1054.7 / 1037.3 / 1050.6} & \textcolor{black}{12.73 / 12.31 / 6.59}\\ \bottomrule \end{tabular} \end{subtable} \label{tab:shoulder-mean-roi} \vspace{-0.1in} \end{table*}
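The sparsity-level criterion used in the parameter sweep above is simple to compute from the sparse coefficient matrix $\Z$. Below is a toy sketch with hard-thresholded random coefficients standing in for the actual sparse coding step; the function name and the threshold scale are illustrative and unrelated to the $\gamma_c$ values or the SPULTRA code:

```python
import numpy as np

def sparsity_percent(Z):
    """Percentage of non-zero entries in a sparse coefficient matrix Z."""
    return 100.0 * np.count_nonzero(Z) / Z.size

# Toy sweep: hard-thresholding random coefficients with increasing
# thresholds yields monotonically decreasing sparsity levels.
rng = np.random.default_rng(1)
coeffs = rng.standard_normal((64, 1000))
levels = []
for thresh in (0.5, 1.0, 2.0):
    Z = np.where(np.abs(coeffs) > thresh, coeffs, 0.0)
    levels.append(sparsity_percent(Z))
print([round(s, 1) for s in levels])  # decreasing sparsity levels
```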
\section{Introduction} Graphs considered in this paper are simple, undirected, and finite. Let $G$ be a graph. Denote by $V(G)$ and $E(G)$ the vertex set and edge set of $G$, respectively. For $v\in V(G)$, $N_G(v)$ denotes the set of neighbors of $v$ in $G$. For $S\subseteq V(G)$, $N_G(S)=\bigcup_{x\in S}N_G(x)-S$. For $H\subseteq G$ and $x\in V(G)$, define $V_H(x)=N_G(x)\cap V(H)$ and $V_H(S)=N_G(S)\cap V(H)$. Let $S\subseteq V(G)$. Then the subgraph induced by $V(G)-S$ is denoted by $G-S$. For notational simplicity we write $G-x$ for $G-\{x\}$. If $uv\in E(G)$ is an edge, we write $u\sim v$. Let $V_1, V_2\subseteq V(G)$ be two disjoint vertex sets. Then $E_G(V_1,V_2)$ is the set of edges of $G$ with one end in $V_1$ and the other end in $V_2$. The number of components of $G$ is denoted by $c(G)$. Let $t\ge 0$ be a real number. The graph is said to be {\it $t$-tough\/} if $|S|\ge t\cdot c(G-S)$ for each $S\subseteq V(G)$ with $c(G-S)\ge 2$. The {\it toughness $\tau(G)$\/} is the largest real number $t$ for which $G$ is $t$-tough, or is $\infty$ if $G$ is complete. This concept, a measure of graph connectivity and ``resilience'' under removal of vertices, was introduced by Chv\'atal~\cite{chvatal-tough-c} in 1973. It is easy to see that if $G$ has a hamiltonian cycle then $G$ is 1-tough. Conversely, Chv\'atal~\cite{chvatal-tough-c} conjectured that there exists a constant $t_0$ such that every $t_0$-tough graph is hamiltonian. Bauer, Broersma and Veldman~\cite{Tough-counterE} have constructed $t$-tough graphs that are not hamiltonian for all $t < \frac{9}{4}$, so $t_0$ must be at least $\frac{9}{4}$. There are a number of papers on Chv\'atal's toughness conjecture, and it has been verified when restricted to a number of graph classes~\cite{Bauer2006}, including planar graphs, claw-free graphs, co-comparability graphs, and chordal graphs. A graph $G$ is called {\it $2K_2$-free\/} if it does not contain two independent edges as an induced subgraph. 
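The toughness definition above can be made concrete with a small brute-force computation that minimizes $|S|/c(G-S)$ over all cutsets $S$ (an exponential-time sketch for toy graphs only; not part of the paper):

```python
from itertools import combinations

def num_components(adj, removed):
    """Connected components of G - removed, via depth-first search."""
    remaining = set(adj) - removed
    seen, count = set(), 0
    for s in remaining:
        if s in seen:
            continue
        count += 1
        stack = [s]
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            stack.extend(u for u in adj[v] if u in remaining)
    return count

def toughness(adj):
    """Brute-force toughness: min |S|/c(G-S) over S with c(G-S) >= 2.
    Returns inf for complete graphs. Exponential time -- toy graphs only."""
    best = float('inf')
    for r in range(1, len(adj)):
        for S in combinations(adj, r):
            c = num_components(adj, set(S))
            if c >= 2:
                best = min(best, r / c)
    return best

# C4 (4-cycle): removing two opposite vertices leaves 2 components, so tau = 1.
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(toughness(C4))  # 1.0
```

Running the same function on a path $P_4$ gives $1/2$ (remove the second vertex to split off the first), matching the intuition that paths are less tough than cycles.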
Recently, Broersma, Patel and Pyatkin~\cite{2k2-tough} proved that every 25-tough $2K_2$-free graph on at least three vertices is hamiltonian. The class of $2K_2$-free graphs is well studied; for instance, see~\cite{2k2-tough, CHUNG1990129, MR845138, 2k21, 1412.0514, MR2279069, MR1172684}. It is a superclass of {\it split\/} graphs, whose vertices can be partitioned into a clique and an independent set. One can also easily check that every {\it cochordal\/} graph (i.e., a graph that is the complement of a chordal graph) is $2K_2$-free, and so the class of $2K_2$-free graphs is at least as rich as the class of chordal graphs. In~\cite{2k21}, Gao and Pasechnik proposed the following conjecture. \begin{CON}\label{2k2h} Every $2$-tough $2K_2$-free graph with at least three vertices is hamiltonian. \end{CON} In this paper, we provide support for Conjecture~\ref{2k2h} and improve the main result in~\cite{2k2-tough} by showing the following result. \begin{THM}\label{main} Let $G$ be a $3$-tough $2K_2$-free graph with at least three vertices. Then $G$ is hamiltonian. \end{THM} In~\cite{MR1392734} it was shown that every 3/2-tough split graph on at least three vertices is hamiltonian. Moreover, the authors constructed a sequence $\{G_n\}_{n=1}^{\infty}$ of split graphs with no 2-factor and $\tau(G_n)\rightarrow 3/2$. Hence $3/2$ is the best possible toughness bound for split graphs to be hamiltonian. Since split graphs are $2K_2$-free, we cannot decrease the bound in Theorem~\ref{main} below 3/2. Although we are not sure about the best possible toughness for guaranteeing that $2K_2$-free graphs are hamiltonian, we believe that Conjecture~\ref{2k2h} might be true. In fact, in the proof of Theorem~\ref{main}, except for one case where toughness 3 is needed, all other cases only require toughness 2. \section{Proof of Theorem~\ref{main}} We need the following lemma for the existence of a 2-factor in a graph. 
\begin{LEM}[Enomoto et al.~\cite{MR785651}]\label{2-factor} Every $k$-tough graph $G$ has a $k$-factor if $k|V(G)|$ is even and $|V(G)|\ge k+1$. \end{LEM} We will also need some notation. Let $C$ be an oriented cycle. For $x\in V(C)$, denote the successor of $x$ by $x^+$ and the predecessor of $x$ by $x^-$. Let $S\subseteq V(C)$ be an independent set in $C$. Then $S^+=\{x^+\,|\, x\in S\}$, and $S^-$ is defined similarly. Let $D$ be another oriented cycle disjoint from $C$ and $T\subseteq V(D)$ be an independent set in $D$. Then $(S\cup T)^+=S^+\cup T^+$ and $(S\cup T)^-=S^-\cup T^-$. For $u,v\in V(C)$, $u\overset{\rightharpoonup }{C} v$ denotes the portion of $C$ starting at $u$, following $C$ in the orientation, and ending at $v$. Likewise, $u\overset{\leftharpoonup }{C} v$ is the opposite portion of $C$ with endpoints $u$ and $v$. Given two vertex-disjoint cycles $C$ and $D$, suppose $P_c$ is a portion of $C$ with endpoints $u,v$ and $P_d$ is a portion of $D$ with endpoints $x,y$. If $v$ and $x$ are adjacent, we write $uP_cvxP_dy$ for the concatenation of $P_c$ and $P_d$ through the edge $vx$. We will assume all cycles in consideration are oriented. \proof[Proof of Theorem~\ref{main}] The graph $G$ is 3-tough, so it has a 2-factor by Lemma~\ref{2-factor}. We take a 2-factor of $G$ containing as few cycles as possible. Let $\mathcal{F}$ be the set of cycles in such a 2-factor. We may assume that $\mathcal{F}$ contains at least two cycles, for otherwise the only cycle in $\mathcal{F}$ is a hamiltonian cycle of $G$. Let $x\in V(G)$ be a vertex. As the cycles in $\mathcal{F}$ form a 2-factor of $G$, there exists a unique cycle, say $C\in \mathcal{F}$, such that $x\in V(C)$. If there exists a cycle $D\in \mathcal{F}-\{C\}$ such that $x$ is adjacent in $G$ to two consecutive vertices on $D$, we say $x$ is of {\it A-type}\,(w.r.t. $D$). If $x$ is not of A-type w.r.t. any cycle in $\mathcal{F}-\{C\}$, we say $x$ is of {\it B-type}. 
Denote $$ A=\{x\in V(G)\,|\, \mbox{$x$ is of A-type}\}\quad\quad \mbox{and}\quad\quad B=V(G)-A. $$ Let $xy\in E(C)$ be an edge. We say $xy$ is of {\it A-type} if $x,y\in A\cap V(C)$; we say $xy$ is of {\it B-type} if $x,y\in B\cap V(C)$; otherwise, $xy$ is of {\it AB-type}. We say $C$ is {\it AB-alternating} if all edges of $C$ are of AB-type. It is clear that if $C$ is AB-alternating, then $C$ is an even cycle. For a cycle $D\in \mathcal{F}-\{C\}$ and the edge $xy\in E(C)$, we denote $$ V_D(xy)=V_D(x)\cup V_D(y)\quad\quad \mbox{and}\quad\quad \overline{V}_D(xy)=V(D)-V_D(xy), $$ where recall that $V_D(x)=N_G(x)\cap V(D)$. \begin{CLA}\label{nonadjacency} Let $C,D\in \mathcal{F}$ be two distinct cycles. If $x\in V(C)$ has a neighbor $u\in V(D)$, then $x^+\not\sim u^+, u^-$ and $x^-\not\sim u^+, u^-$. \end{CLA} \noindent\textbf{Proof}.\quad Assume on the contrary that $x^+\sim u^+$. Then $xu\overset{\leftharpoonup }{D} u^+x^+\overset{\rightharpoonup }{C} x$ combines $C$ and $D$ into a single cycle. This contradicts the minimality of $|\mathcal{F}|$. A similar construction shows that $x^+\not\sim u^-$ and $x^-\not\sim u^+, u^-$. \qed \begin{CLA}\label{no-a-type-edge} No cycle in $\mathcal{F}$ contains an A-type edge. \end{CLA} \noindent\textbf{Proof}.\quad Assume on the contrary that there exist $C\in\mathcal{F}$ and $xy\in E(C)$ such that $xy$ is of A-type. Let $D,Q\in \mathcal{F}-\{C\}$ such that $x\sim u, u^+$ with $uu^+\in E(D)$ and $y\sim v, v^+$ with $vv^+\in E(Q)$. As $x\sim u, u^+$, we have $y\not\sim u, u^+$ by Claim~\ref{nonadjacency}. Let $z$ be the other neighbor of $y$ on $C$. Then $z\sim u$ or $z\sim u^+$ by considering the two independent edges $yz$ and $uu^+$. By reversing the orientations of $C$ and $D$ if necessary, we assume that $y=x^+$ and $z\sim u$. 
Then $$ \left\{ \begin{array}{ll} xu^+\overset{\rightharpoonup }{D} v yv^+\overset{\rightharpoonup }{D} uz \overset{\rightharpoonup }{C} x \quad \mbox{combines $C$ and $D$ into one cycle}, & \hbox{if $D=Q$;} \\ xu^+\overset{\rightharpoonup }{D} uz \overset{\rightharpoonup }{C} x \quad \mbox{and} \quad vyv^+\overset{\rightharpoonup }{Q} v \quad \mbox{integrate $C, D, Q$ into two cycles}, & \hbox{if $D\ne Q$.} \end{array} \right. $$ \qed \begin{CLA}\label{I-xy-alter-size-inde} Let $C\in \mathcal{F}$ and $xy\in E(C)$. Denote $$ I_{xy}=\bigcup_{D\in \mathcal{F}-\{C\}} \overline{V}_D(xy). $$ Then each of the following holds. \begin{enumerate} \item [$(1)$] $I_{xy}$ is an independent set in $G$. \item [$(2)$] If $xy$ is of B-type, then for any $D\in \mathcal{F}-\{C\}$, the vertices on $D$ alternate between $I_{xy}$ and $V(G)-V(C)-I_{xy}$. \item [$(3)$] If $xy$ is of B-type, then $|I_{xy}|=\frac{1}{2}|V(G)-V(C)|$. \end{enumerate} \end{CLA} \noindent\textbf{Proof}.\quad To show $(1)$, assume on the contrary that there exist $u,v\in I_{xy}$ such that $u\sim v$. Then $E_G(\{x,y\},\{u,v\}) \ne \emptyset$ by the $2K_2$-freeness assumption on $G$. Consequently, at least one of $u$ and $v$ is not an element of $I_{xy}$. This gives a contradiction. Assume now that $xy$ is a B-type edge. Let $D\in \mathcal{F}-\{C\}$. We show that for any edge $uv\in E(D)$, exactly one vertex of $\{u,v\}$ is in $V_D(xy)$. That at least one of $u,v$ is in $V_D(xy)$ is guaranteed by the $2K_2$-freeness of $G$. Suppose, w.l.o.g., that $u\in V_D(xy)$ with $u\sim x$. Then by Claim~\ref{nonadjacency}, $v\not\sim y$. As $x$ is of B-type, we further know that $v\not\sim x$. Thus, $v\in \overline{V}_D(xy)$. This gives $(2)$. The statement $(3)$ is an immediate consequence of $(2)$. \qed \begin{CLA}\label{a-plus-ind} Let $A^+$ be the set of successors of vertices in $A$. Then $A^+$ is an independent set in $G$. 
\end{CLA} \noindent\textbf{Proof}.\quad Suppose on the contrary that there exist $x^+, y^+\in A^+$ with $x^+y^+\in E(G)$. Assume $x^+\in V(C)$ with predecessor $x$, and $y^+\in V(D)$ with predecessor $y$, for cycles $C, D\in \mathcal{F}$. Then both $x$ and $y$ are A-type vertices. Let $Q, R\in \mathcal{F}$ with $uu^+\in E(Q)$ and $vv^+\in E(R)$ such that $x\sim u, u^+$ and $y\sim v, v^+$. As $x\sim u, u^+$ and $y\sim v, v^+$, we know that $x^+\not\sim u^-, u, u^+, u^{++}$ and $y^+\not\sim v^-, v, v^+, v^{++}$ by Claim~\ref{nonadjacency}. Since $x^+y^+\in E(G)$, by the $2K_2$-freeness of $G$, $y^+$ is adjacent to one of $u,u^+$ and $x^+$ is adjacent to one of $v, v^+$. Thus, $\{u,u^+\}\cap \{v, v^+\}=\emptyset$. We complete the proof by considering two cases. \smallskip\noindent {\bf Case \ref{a-plus-ind}.1:} $C=D$. \smallskip\noindent {\bf Case \ref{a-plus-ind}.1.1:} $C=D$ and $Q=R$. We combine $C$ and $Q$ into a single cycle as follows. $$ \left\{ \begin{array}{ll} xu\overset{\leftharpoonup }{Q} v^+y \overset{\leftharpoonup }{C} x^+ v \overset{\leftharpoonup }{Q} u^+y^+\overset{\rightharpoonup }{C} x, & \hbox{if $x^+\sim v, y^+\sim u^+$;} \\ xu^+\overset{\rightharpoonup }{Q} vx^+ \overset{\rightharpoonup }{C} y v^+ \overset{\rightharpoonup }{Q} uy^+\overset{\rightharpoonup }{C} x, & \hbox{if $x^+\sim v, y^+\sim u$;} \\ xu\overset{\leftharpoonup }{Q} v^+x^+ \overset{\rightharpoonup }{C} y v \overset{\leftharpoonup }{Q} u^+y^+\overset{\rightharpoonup }{C} x, & \hbox{if $x^+\sim v^+, y^+\sim u^+$;} \\ xu^+\overset{\rightharpoonup }{Q} vy \overset{\leftharpoonup }{C} x^+ v^+ \overset{\rightharpoonup }{Q} uy^+\overset{\rightharpoonup }{C} x, & \hbox{if $x^+\sim v^+, y^+\sim u$.} \end{array} \right. $$ \smallskip\noindent {\bf Case \ref{a-plus-ind}.1.2:} $C=D$ and $Q\ne R$. Recall that $\{u, u^+\}\cap \{v, v^+\}=\emptyset$. Thus, $E_G(\{u, u^+\}, \{v, v^+\})\ne \emptyset$. By reversing the orientations of $Q$ and $R$ if necessary, we assume that $u\sim v$. 
Then $$ xu^+\overset{\rightharpoonup }{Q} uv \overset{\leftharpoonup}{R} v^+y \overset{\leftharpoonup }{C} x^+y^+\overset{\rightharpoonup }{C} x $$ combines $C$, $Q$ and $R$ into a single cycle. \smallskip\noindent {\bf Case \ref{a-plus-ind}.2:} $C\ne D$. \smallskip\noindent {\bf Case \ref{a-plus-ind}.2.1:} $C\ne D$ and $Q=R$. As $Q\ne C$ and $R\ne D$ by the definition of A-type vertices, we have $Q\not\in \{C,D\}$. Recall that $\{u,u^+\}\cap \{v, v^+\}=\emptyset$. Thus, $E_G(\{u, u^+\},\{v,v^+\})\ne \emptyset$, by the $2K_2$-freeness of $G$. By reversing the orientation of $Q$ if necessary, we assume $y^+\sim u$. Then $uv^+\not\in E(Q)$. As otherwise, $u=v^{++}$ and so $y^+\sim v^{++}$. However, $y^+\not\sim v^{++}$ by the argument prior to Case \ref{a-plus-ind}.1. We cover vertices in $V(C)\cup V(D)\cup V(Q)$ by one or two cycles as below. $$ \left\{ \begin{array}{ll} xu^+\overset{\leftharpoonup }{Q} v y\overset{\leftharpoonup }{D} y^+ x^+ \overset{\rightharpoonup }{C} x, & \hbox{if $u^+v\in E(Q)$;} \\ xu\overset{\leftharpoonup }{Q} v^+ y\overset{\leftharpoonup }{D} y^+ x^+ \overset{\rightharpoonup }{C} x, u^+\overset{\rightharpoonup }{Q} vu^+, & \hbox{if $u^+\sim v$ but $u^+v\not\in E(Q)$;} \\ xu \overset{\leftharpoonup }{Q} v^+ u^+ \overset{\rightharpoonup }{Q} vy \overset{\leftharpoonup }{D} y^+ x^+ \overset{\rightharpoonup }{C} x, & \hbox{if $u^+\sim v^+$;} \\ xu^+ \overset{\rightharpoonup }{Q} v u \overset{\leftharpoonup }{Q} v^+y \overset{\leftharpoonup }{D} y^+ x^+ \overset{\rightharpoonup }{C} x, & \hbox{if $u\sim v$;} \\ xu^+ \overset{\rightharpoonup }{Q} v y \overset{\leftharpoonup }{D} y^+ x^+ \overset{\rightharpoonup }{C} x, u\overset{\leftharpoonup }{Q} v^+ u, & \hbox{if $u\sim v^+$.} \end{array} \right. $$ \smallskip\noindent {\bf Case \ref{a-plus-ind}.2.2:} $C\ne D$ and $Q\ne R$. As $x^+\not\sim u, u^+$, and $x^+\sim y^+$, we get $y^+\not\in \{u, u^+\}$. Consequently, $y\not\in\{u^-, u\}$. 
Similarly, $x^+\not\in \{v, v^+\}$ and $x\not\in\{v^-, v\}$. \smallskip\noindent {\bf Case \ref{a-plus-ind}.2.2.1:} $C\ne D$, and $Q=D, R=C$. Again $E_G(\{u, u^+\},\{v,v^+\})\ne \emptyset$ and $E_G(\{u^+, u^{++}\},\{v,v^+\})\ne \emptyset$ by the $2K_2$-freeness of $G$. We combine $C$ and $D$ into a single cycle as follows. $$ \left\{ \begin{array}{ll} xu\overset{\leftharpoonup }{D} y^+ x^+ \overset{\rightharpoonup }{C} vu^+\overset{\rightharpoonup }{D} y v^+\overset{\rightharpoonup }{C} x, & \hbox{if $u^+\sim v$;} \\ xu\overset{\leftharpoonup }{D} y^+ x^+ \overset{\rightharpoonup }{C} vy\overset{\leftharpoonup }{D} u^+ v^+\overset{\rightharpoonup }{C} x, & \hbox{if $u^+\sim v^+$;} \\ xu^+\overset{\rightharpoonup }{D} yv \overset{\leftharpoonup }{C} x^+y^+\overset{\leftharpoonup }{D} u v^+\overset{\rightharpoonup }{C} x, & \hbox{if $u\sim v^+$;} \\ xu^+ \overset{\leftharpoonup }{D} y^+ x^+ \overset{\rightharpoonup }{C} v u^{++}\overset{\leftharpoonup }{D} y v^+ \overset{\rightharpoonup }{C} x, & \hbox{if $u\sim v,\, u^+\not\sim v, v^+$, and $ u^{++}\sim v$;}\\ xu^+ \overset{\leftharpoonup }{D} y^+ x^+ \overset{\rightharpoonup }{C} v y\overset{\leftharpoonup }{D} u^{++} v^+ \overset{\rightharpoonup }{C} x, & \hbox{if $u\sim v,\, u^+\not\sim v, v^+$, and $ u^{++}\sim v^+$.} \end{array} \right. $$ \smallskip\noindent {\bf Case \ref{a-plus-ind}.2.2.2:} $C\ne D, Q\ne R$, and $Q=D, R\not\in \{C,D\}$. We cover vertices in $V(C)\cup V(D)\cup V(R)$ by one or two cycles as below. 
$$ \left\{ \begin{array}{ll} xu^+\overset{\rightharpoonup }{D} yv^+ \overset{\rightharpoonup }{R} vu\overset{\leftharpoonup }{D} y^+x^+\overset{\rightharpoonup }{C} x, & \hbox{if $u\sim v$;} \\ xu^+\overset{\rightharpoonup }{D} yv \overset{\leftharpoonup}{R} v^+u\overset{\leftharpoonup }{D} y^+x^+\overset{\rightharpoonup }{C} x, & \hbox{if $u\sim v^+$;} \\ xu\overset{\leftharpoonup }{D} y^+ x^+ \overset{\rightharpoonup }{C} x, u^+ \overset{\rightharpoonup }{D} y v^+ \overset{\rightharpoonup }{R} vu^+, & \hbox{if $u^+\sim v$;} \\ xu\overset{\leftharpoonup }{D} y^+ x^+ \overset{\rightharpoonup }{C} x, u^+ \overset{\rightharpoonup }{D} y v \overset{\leftharpoonup}{R} v^+u^+, & \hbox{if $u^+\sim v^+$.} \end{array} \right. $$ \smallskip\noindent {\bf Case \ref{a-plus-ind}.2.2.3:} $C\ne D, Q\ne R$, and $R=C, Q\not\in \{C,D\}$. This case is symmetric to Case~\ref{a-plus-ind}.2.2.2, so we skip its proof. \smallskip\noindent {\bf Case \ref{a-plus-ind}.2.2.4:} $C\ne D, Q\ne R$, and $Q\ne D, R\ne C$. By reversing the orientations of $Q$ and $R$ if necessary, we assume that $u\sim v$. Then $$ xu^+\overset{\rightharpoonup }{Q} uv \overset{\leftharpoonup}{R} v^+y\overset{\leftharpoonup }{D} y^+x^+\overset{\rightharpoonup }{C} x $$ combines $C, D, Q, R$ into a single cycle. \qed \begin{CLA}\label{one-cycle-have-B-edge} We may assume that $\mathcal{F}$ contains exactly one cycle $C$ such that $C$ has a B-type edge, and all other cycles in $\mathcal{F}$ are AB-alternating. \end{CLA} \noindent\textbf{Proof}.\quad Since $A^+$ is an independent set in $G$ by Claim~\ref{a-plus-ind}, we know that not all cycles in $\mathcal{F}$ are AB-alternating. As otherwise, let $S=A$. Then $c(G-S)=|A^+|=|A|=|S|$. We get that $\tau(G)\le \frac{|S|}{c(G-S)}=1<3$. This gives a contradiction. We then claim that $\mathcal{F}$ contains no two cycles, say $C$ and $D$, both containing a B-type edge. Assume on the contrary that both $C$ and $D$ contain a B-type edge.
Suppose, w.l.o.g., that $|V(C)|\le |V(D)|$. Let $xy\in E(C)$ be of B-type. By Claim~\ref{I-xy-alter-size-inde}, $I_{xy}$, the set of non-neighbors of $x$ and $y$ in $V(G)-V(C)$, is an independent set in $G$. Thus, $I_{xy}\cup \{x\}$ is also an independent set in $G$. Let $S=V(G)-(I_{xy}\cup \{x\})$. Then $G-S$ has $|I_{xy}\cup \{x\}|$ components, each being an isolated vertex. Note that $|I_{xy}|=\frac{|V(G)-V(C)|}{2}=\frac{|V(G)-V(C)-V(D)|}{2}+\frac{|V(D)|}{2}$ by Claim~\ref{I-xy-alter-size-inde}, and $|V(C)|\le |V(D)|$. Thus, \begin{eqnarray*} \tau(G) &\le & \frac{|S|}{c(G-S)} \\ &=& \frac{|V(C)|-1+\frac{|V(G)-V(C)|}{2}}{|I_{xy}|+1} \\ &=& \frac{|V(C)|-1+\frac{|V(G)-V(C)-V(D)|}{2}+\frac{|V(D)|}{2}}{\frac{|V(G)-V(C)-V(D)|}{2}+\frac{|V(D)|}{2}+1}\\ &\le & \frac{|V(D)|-1+\frac{|V(G)-V(C)-V(D)|}{2}+\frac{|V(D)|}{2}}{\frac{|V(G)-V(C)-V(D)|}{2}+\frac{|V(D)|}{2}+1}\\ &= & \frac{\frac{|V(G)-V(C)-V(D)|}{2}+\frac{3|V(D)|}{2}-1}{\frac{|V(G)-V(C)-V(D)|}{2}+\frac{|V(D)|}{2}+1}\\ & <& 3, \end{eqnarray*} showing a contradiction to the assumption that $\tau(G)\ge 3$. (In fact, this is the only place where $3$-toughness is used.) \qed \begin{ASP}\label{C-only-B-edge} We now fix $C\in \mathcal{F}$ to denote the cycle which contains a B-type edge, and assume that all other cycles in $\mathcal{F}-\{C\}$ are AB-alternating. \end{ASP} \begin{CLA}\label{full-b-neighbors} Let $D\in \mathcal{F}-\{C\}$ and $xy\in E(C)$ be of B-type. Assume that $V_D(xy)\cap B\ne\emptyset$. Then $V_D(xy)=B\cap V(D)$, and either $V_D(x)=\emptyset$ and $V_D(y)=B\cap V(D)$ or $V_D(x)=B\cap V(D)$ and $V_D(y)=\emptyset$. \end{CLA} \noindent\textbf{Proof}.\quad Recall that $\overline{V}_D(xy)$ is an independent set in $G$, and that vertices on $D$ alternate between $\overline{V}_D(xy)$ and $V_D(xy)$, by $(2)$ of Claim~\ref{I-xy-alter-size-inde}. Because $D$ is AB-alternating, we then get $V_D(xy)=B\cap V(D)$ if $V_D(xy)\cap B\ne\emptyset$.
And so if $V_D(x)=\emptyset$, then $V_D(y)=B\cap V(D)$; and if $V_D(y)=\emptyset$, then $V_D(x)=B\cap V(D)$. Thus, we need only show that either $V_D(x)$ or $V_D(y)$ is empty. Assume to the contrary that $V_D(x)\ne \emptyset$ and $V_D(y)\ne\emptyset$. As vertices on $D$ are alternating between $\overline{V}_D(xy)$ and $V_D(xy)$, we can choose $u\in V_D(x)$ so that $u^{++}\in V_D(y)$. Then $u^+\in A\cap V(D)$. Assume that $u^+$ is of A-type w.r.t. $Q\in \mathcal{F}-\{D\}$, i.e., $u^+\sim v, v^+$ with $vv^+\in E(Q)$. Assume, w.l.o.g., that $y=x^+$. As $x\sim u$ and $y\sim u^{++}$, we have that $u^+\not\sim x,y$ by Claim~\ref{nonadjacency}. Thus, $\{v, v^+\}\cap \{x,y\}=\emptyset$. \smallskip \noindent {\bf Case~\ref{full-b-neighbors}.1:} $Q=C$. We combine $C, D$ into a single cycle as $xu \overset{\leftharpoonup }{D} u^{++}y \overset{\rightharpoonup }{C} v u^+ v^+ \overset{\rightharpoonup }{C} x$. \smallskip \noindent {\bf Case~\ref{full-b-neighbors}.2:} $Q\ne C$. We cover $V(C)\cup V(D)\cup V(Q)$ by two cycles as $xu \overset{\leftharpoonup }{D} u^{++}y \overset{\rightharpoonup }{C} x$ and $v \overset{\rightharpoonup }{Q} v^+ u^+ v$. \qed \begin{CLA}\label{full-b-neighbors2} Let $D\in \mathcal{F}-\{C\}$ and $x\in V(C)$. If $V_D(x)=B\cap V(D)$, then $V_D(x^+)=V_D(x^-)=\emptyset$. \end{CLA} \noindent\textbf{Proof}.\quad Note first that $N_G(x^+)\cap (B\cap V(D))=\emptyset$ and $N_G(x^-)\cap (B\cap V(D))=\emptyset$. As otherwise, some vertex in $B\cap V(D)$ is adjacent to both vertices in $\{x, x^+\}$ or $\{x,x^-\}$. This implies that the vertex is of A-type w.r.t. $C$, contradicting the fact that it is of B-type. Then we observe that neither $x^+$ nor $x^-$ is adjacent to any vertex in $A\cap V(D)$ by Claim~\ref{nonadjacency}. \qed \begin{CLA}\label{v-0-ind-b} Let $x\in V(C)$. Assume there exists $D\in \mathcal{F}$ so that $V_D(x)=B\cap V(D)$. Then $\{x^+\}\cup A^+$ is an independent set in $G$.
\end{CLA} \noindent\textbf{Proof}.\quad As $A^+$ is already an independent set in $G$ by Claim~\ref{a-plus-ind}, we assume on the contrary that there exists $w\in A^+$ so that $x^+\sim w$. Note that $x\ne w$, since $V_D(x)=B\cap V(D)$ and $V_D(w)\cap B=\emptyset$. Assume $w\in V(Q)$ for some cycle $Q\in \mathcal{F}$. Then the predecessor $w^-$ of $w$ on $Q$ is of A-type. Note that $V_D(x^+)=\emptyset$ by Claim~\ref{full-b-neighbors2}; together with $x^+\sim w$, this implies that $Q\ne D$. Let $R\in \mathcal{F}-\{Q\}$ with $vv^+\in E(R)$ so that $w^-\sim v, v^+$. Let $z\in A\cap V(D)$. As $V_D(x^+)=\emptyset$ and $x^+\sim w$, $w$ is adjacent to one of $z$ and $z^+$ by the $2K_2$-freeness of $G$. Since $D$ is AB-alternating by Assumption~\ref{C-only-B-edge} and $z\in A\cap V(D)$, $z^+\in B\cap V(D)$. We see that $w\sim z$, because both $w, z^+\in A^+$ and $A^+$ is an independent set in $G$ by Claim~\ref{a-plus-ind}. As $w^-\sim v, v^+$, $w\not\sim v, v^+$ by Claim~\ref{nonadjacency}. Thus, $E_G(\{w^+\}, \{v, v^+\})\ne \emptyset$. We consider two cases to finish the proof. \smallskip \noindent {\bf Case~\ref{v-0-ind-b}.1:} $Q\ne C$. As $x^+\sim w$, we have $w^-\not\sim x^{++}, x$ by Claim~\ref{nonadjacency}. Since $w^-\sim v, v^+$, we then have that $v, v^+\not\in \{x, x^+, x^{++}\}$. \smallskip \noindent {\bf Case~\ref{v-0-ind-b}.1.1:} $Q\ne C$ and $R=C$. We combine $C, D, Q$ into a single cycle as below. $$ \left\{ \begin{array}{ll} x^+wz\overset{\rightharpoonup }{D} z^-x \overset{\leftharpoonup }{C} v^+ w^- \overset{\leftharpoonup }{Q} w^+ v \overset{\leftharpoonup }{C} x^+, & \hbox{if $w^+\sim v$;} \\ x^+wz\overset{\rightharpoonup }{D} z^-x \overset{\leftharpoonup }{C} v^+ w^+\overset{\rightharpoonup }{Q} w^- v \overset{\leftharpoonup }{C} x^+, & \hbox{if $w^+\sim v^+$.} \end{array} \right. $$ \smallskip \noindent {\bf Case~\ref{v-0-ind-b}.1.2:} $Q\ne C$ and $R=D$. By the assumption, $V_D(x^+)=\emptyset$; in particular, $x^+\not\sim v, v^+$.
Since $w^-\sim v, v^+$, $w\not\sim v, v^+$ by Claim~\ref{nonadjacency}. But then $x^+w$ and $vv^+$ form an induced $2K_2$, contradicting the $2K_2$-freeness of $G$. \smallskip \noindent {\bf Case~\ref{v-0-ind-b}.1.3:} $Q\ne C$ and $R\not\in \{C, D\}$. Since $w^-\sim v, v^+$, $w\not\sim v, v^+$ by Claim~\ref{nonadjacency}. Thus, $x^+\sim v$ or $x^+\sim v^+$. By reversing the orientation of $R$ if necessary, we assume $x^+\sim v$. Since $V_D(x)=B\cap V(D)$ and $z^+\in B\cap V(D)$, $x\sim z^+$. Then $$ xz^+\overset{\rightharpoonup }{D} z w\overset{\rightharpoonup }{Q} w^-v^+\overset{\rightharpoonup }{R} vx^+ \overset{\rightharpoonup }{C} x $$ is a cycle which contains all the vertices in $V(C)\cup V(D)\cup V(Q)\cup V(R)$. \smallskip \noindent {\bf Case~\ref{v-0-ind-b}.2:} $Q=C$. As $z\sim w$, $w^- \not\sim z^+, z^-$ by Claim~\ref{nonadjacency}. Since $w^-\sim v, v^+$, we then get that $v, v^+\not\in \{z^-, z, z^+\}$. \smallskip \noindent {\bf Case~\ref{v-0-ind-b}.2.1:} $Q=C$ and $R=D$. By the assumption, $V_D(x^+)=\emptyset$; in particular, $x^+\not\sim v, v^+$. Since $w^-\sim v, v^+$, $w\not\sim v, v^+$ by Claim~\ref{nonadjacency}. But then $x^+w$ and $vv^+$ form an induced $2K_2$, contradicting the $2K_2$-freeness of $G$. \smallskip \noindent {\bf Case~\ref{v-0-ind-b}.2.2:} $Q=C$ and $R\ne D$. Since $w^-\sim v, v^+$, $w\not\sim v, v^+$ by Claim~\ref{nonadjacency}. Thus, $x^+\sim v$ or $x^+\sim v^+$. By reversing the orientation of $R$ if necessary, we assume $x^+\sim v$. Then $$ xz^+\overset{\rightharpoonup }{D} z w\overset{\rightharpoonup }{C} x \quad \mbox{and} \quad x^+v \overset{\leftharpoonup}{R} v^+w^- \overset{\leftharpoonup }{C} x^+ $$ are two cycles which together contain all the vertices in $V(C)\cup V(D)\cup V(R)$. \qed Let $x\in V(C)$. If there exists a cycle $D\in \mathcal{F}-\{C\}$ such that $V_D(x)=B\cap V(D)$, then we say that $x$ is {\it bad} w.r.t. $D$.
Define $$ V_{bad}=\{x\in V(C)\,|\, x \,\, \mbox{is a bad or A-type vertex on $C$}\}. $$ \begin{CLA}\label{bad-vx-propty} The vertex set $V_{bad}$ contains no two consecutive vertices on $C$. Moreover, no vertex in $V(C)-V_{bad}$ is adjacent to any B-type vertex on any cycle other than $C$. \end{CLA} \noindent\textbf{Proof}.\quad Each vertex in $V_{bad}$ is adjacent to some B-type vertex on cycles other than $C$ by the definition. Let $v\in V_{bad}$. Then by Claim~\ref{v-0-ind-b}, $v^+$ is not adjacent to any B-type vertex on any cycle other than $C$. Hence, for any vertex $w\in V(C)$, at least one of $w$ and $w^+$ does not belong to $V_{bad}$. Thus, $V_{bad}$ contains no two consecutive vertices on $C$. To prove the second part of the statement, assume that $v\in V(C)$ is a vertex adjacent to some B-type vertex on a cycle $D\in \mathcal{F}-\{C\}$. Since vertices in $(A\cap V(C))^+$ are not adjacent to any B-type vertices on cycles other than $C$, $v^-$ is a B-type vertex. If $v$ is also of B-type, then by Claim~\ref{full-b-neighbors}, $V_D(v)=B\cap V(D)$. So $v\in V_{bad}$ by the definition of $V_{bad}$. If $v$ is of A-type, then $v\in V_{bad}$ again by the definition of $V_{bad}$. \qed \begin{CLA}\label{a-type-wrt-c} Let $xy\in E(C)$ be a B-type edge. For any cycle $D\in \mathcal{F}-\{C\}$, if $V_D(xy)=B\cap V(D)$, then for any $z\in A\cap V(D)$, $z$ is of A-type w.r.t. only the cycle $C$. \end{CLA} \noindent\textbf{Proof}.\quad As $xy\in E(C)$ is of B-type, for each cycle $Q\in \mathcal{F}-\{C\}$, vertices on $Q$ are alternating between $I_{xy}$ and $V(G)-V(C)-I_{xy}$, by $(2)$ of Claim~\ref{I-xy-alter-size-inde}. As $I_{xy}$ is an independent set in $G$ by $(1)$ of Claim~\ref{I-xy-alter-size-inde}, and $A\cap V(D)\subseteq I_{xy}$, for any $z\in A\cap V(D)$, it is not possible for $z$ to be adjacent to two consecutive vertices on any cycle $Q\in \mathcal{F}-\{C,D\}$. Thus, $z$ is of A-type w.r.t. only the cycle $C$.
\qed For each vertex $x\in V_{bad}$, we define $$ U_{x}^0=\{x^+\} \quad \mbox{and}\quad U_{x}^1=\{u\,|\,u^+\in V_C(U_{x}^0)-V_{bad}\}-U_{x}^0. $$ For each vertex $x_1\in U_{x}^1$, define the path $$ P_{[x_1,x]}=x_1\overset{\leftharpoonup }{C} x^+ x_1^+ \overset{\rightharpoonup }{C} x $$ to be the directed path starting at $x_1$ and ending at $x$. From now on, if $v$ is a vertex on a directed path and $v$ is not the end of the path, we denote by $v^{\dag}$ the successor of $v$ on this path. The notation $v^{\dag}$ will only be used in this context. It is easy to see that for any $x_2 \in V(P_{[x_1,x]})$ such that $x_2^{\dag} \in V_{P_{[x_1, x]}}(x_1)$, $P_{[x_2, x]}=x_2\overset{\leftharpoonup }{P}_{[x_1,x]}x_1x_2^{\dag}\overset{\rightharpoonup }{P}_{[x_1,x]}x$ is a directed path starting at $x_2$ and ending at $x$. Furthermore, $P_{[x_2, x]}$ contains all the vertices of $C$. In general, for $i\ge 2$ we define \begin{eqnarray*} U_{x}^i &=& \{u\,|\, u^{\dag}\sim v\,\,\mbox{for some $v\in U_{x}^{i-1}$, where $u^{\dag}\in V(P_{[v,x]})-V_{bad}$}\}-\bigcup_{j=0}^{i-1}U_{x}^j\\ U_{x}^{\infty} &=& \bigcup_{i=0}^{\infty}U_{x}^i. \end{eqnarray*} \begin{CLA}\label{zig-path-on-c} Let $x\in V_{bad}$ and let $U_{x}^i$ be defined as above. Let $D\in \mathcal{F}-\{C\}$ such that $x$ is bad or of A-type w.r.t. $D$, and let $u\in B\cap V(D)$ such that $x\sim u$. Then each of the following holds. \begin{itemize} \item [$(1)$] $U_{x}^{\infty} \subseteq \overline{V}_C(uu^+)$, i.e., for any $v\in U_{x}^{\infty}$, $v\not\sim u, u^+$. \item [$(2)$] For any $y\in V(C)-V_{bad}$ such that $y$ is adjacent to some vertex in $U_{x}^{\infty}$, $y\sim u^+$. \item [$(3)$] If $x$ is bad and $v\in U_{x}^{\infty}$, then $V_D(v)=\emptyset$. \item [$(4)$] If $x$ is bad and $y\in V(C)-V_{bad}$ such that $y$ is adjacent to some vertex in $U_{x}^{\infty}$, then $V_D(y)=A\cap V(D)$.
\end{itemize} \end{CLA} \noindent\textbf{Proof}.\quad We first prove $(1)$ and $(2)$ simultaneously by applying induction on $i$. For $i=0$, $U_{x}^{0}=\{x^+\}$. As $x\sim u$, we have that $x^+\not\sim u^+$ by Claim~\ref{nonadjacency}. Furthermore, as $u$ is a B-type vertex, $u\not\sim x^+$. Hence, $x^+\in \overline{V}_C(uu^+)$. For any $y\in V(C)-V_{bad}$ such that $y\sim x^+$, since $x^+\in \overline{V}_C(uu^+)$, $y$ has to be adjacent to at least one of $u, u^+$ by the $2K_2$-freeness. As $y\in V(C)-V_{bad}$, $y\sim u^+$ by the second part of Claim~\ref{bad-vx-propty}. Assume now that both $(1)$ and $(2)$ are true for $i-1$ with $i\ge 1$. Let $v\in U_{x}^i$. By the definition of $U_{x}^i$, there exists $w\in U_{x}^{i-1}$ such that $v\in V(P_{[w,x]})$ and $w\sim v^{\dag}$, where $v^{\dag}\in V(C)-V_{bad}$ is the successor of $v$ on the directed path $P_{[w,x]}$. By the induction hypothesis, $v^{\dag}\sim u^+$. Also, by the induction hypothesis, $U_{x}^j\subseteq \overline{V}_C(uu^+)$ for any $j\le i-1$. Thus, $v^{\dag}\not\in \bigcup_{j=0}^{i-1}U_{x}^j$. Furthermore, $v\not\in \bigcup_{j=0}^{i-1}U_{x}^j$ as $U_{x}^i$ is disjoint with $\bigcup_{j=0}^{i-1}U_{x}^j$ by its definition. Since any edge on $P_{[w,x]}$ which is not an edge of $C$ has one endvertex in $\bigcup_{j=0}^{i-1}U_{x}^j$, $vv^{\dag}$ is an edge on $C$. Thus, as $v^{\dag}\sim u^+$, $v\not\sim u$ by Claim~\ref{nonadjacency}. Furthermore, $v\not\sim u^+$. For otherwise, if $v\sim u^+$, then as $x\sim u$, and $P_{[v,x]}$ is a spanning path of $C$, we get a cycle $v\overset{\rightharpoonup }{P}_{[v,x]}xu\overset{\leftharpoonup }{D} u^+ v$, which combines $C$ and $D$ into a single cycle. Thus, $v\in \overline{V}_C(uu^+)$. For any $y\in V(C)-V_{bad}$ such that $y\sim v$, since $v\in \overline{V}_C(uu^+)$, $y$ has to be adjacent to at least one of $u, u^+$ by the $2K_2$-freeness. As $y\in V(C)-V_{bad}$, $y\sim u^+$ by the second part of Claim~\ref{bad-vx-propty}. 
Statements $(3)$ and $(4)$ follow immediately by noticing that the cycle $D$ is AB-alternating and that $x$ is adjacent to all the B-type vertices on $D$ if $x$ is bad. \qed Define $$ U^{\infty}=\bigcup_{x\in V_{bad}} U_{x}^{\infty}. $$ Let $v\in U_x^{\infty}$ and $D\in \mathcal{F}-\{C\}$ such that $x$ is bad or of A-type w.r.t. $D$. Then $v$ is called {\it co-absorbable} w.r.t. $C$ and $D$ if there exists a cycle $R$ containing all the vertices in $V(C)\cup V(D)-\{v\}$. \begin{CLA}\label{co-absorbable} Each vertex $v\in U_x^{\infty}$ is co-absorbable w.r.t. $C$ and a cycle $D\in \mathcal{F}-\{C\}$ such that $x$ is bad or of A-type w.r.t. $D$. \end{CLA} \noindent\textbf{Proof}.\quad If $v\in U_{x}^{0}$, then $v=x^+$. Let $u\in B\cap V(D)$ such that $x\sim u$, and such that $x\sim u, u^+$ if $x$ is of A-type w.r.t. $D$. Then $x^+\not\sim u^+$ by Claim~\ref{zig-path-on-c}. Furthermore, $x^+\not\sim u$ as $u\in B\cap V(D)$. Thus, $x^{++}\sim u$ or $x^{++}\sim u^+$ by the $2K_2$-freeness of $G$. Since $D$ is AB-alternating, $u^+$ is of A-type. By Claim~\ref{a-type-wrt-c}, $u^+$ is of A-type w.r.t. only $C$. Let $ww^+\in E(C)$ such that $u^+\sim w,w^+$. If $x$ is bad w.r.t. $D$, then $x\sim u, u^{++}$, and so $u^+\not\sim x, x^+$ by Claim~\ref{nonadjacency}. Thus, $\{x, x^+\}\cap \{w,w^+\}=\emptyset$ if $x$ is bad w.r.t. $D$. Then $$ \left\{ \begin{array}{ll} xu\overset{\leftharpoonup }{D} u^+ x^{++} \overset{\rightharpoonup }{C} x, & \hbox{if $x^{++}\sim u^+$;}\\ xu^+\overset{\rightharpoonup }{D} u x^{++} \overset{\rightharpoonup }{C} x, & \hbox{if $x^{++}\sim u$ and $x$ is of A-type;}\\ xu^{++}\overset{\rightharpoonup }{D} u x^{++} \overset{\rightharpoonup }{C} wu^+w^+\overset{\rightharpoonup }{C} x, & \hbox{if $x^{++}\sim u$ and $x$ is bad.} \end{array} \right. $$ is a cycle containing all the vertices in $V(C)\cup V(D)-\{x^+\}$. We additionally show that $x^-$ is co-absorbable w.r.t. $C$ and $D$. (We will need this in the argument when $i\ge 1$.)
Repeating the same argument for $x^{--}$, we have that $$ \left\{ \begin{array}{ll} xu\overset{\leftharpoonup }{D} u^+ x^{--} \overset{\leftharpoonup }{C} x, & \hbox{if $x^{--}\sim u^+$;}\\ xu^+\overset{\rightharpoonup }{D} u x^{--} \overset{\leftharpoonup }{C} x, & \hbox{if $x^{--}\sim u$ and $x$ is of A-type;}\\ xu^{++}\overset{\rightharpoonup }{D} u x^{--} \overset{\leftharpoonup }{C} w^+u^+w\overset{\leftharpoonup }{C} x, & \hbox{if $x^{--}\sim u$ and $x$ is bad.} \end{array} \right. $$ is a cycle containing all the vertices in $V(C)\cup V(D)-\{x^-\}$. Assume now that $v\in U_{x}^i$ for $i\ge 1$. By the definition of $U_{x}^i$, we know there exists a spanning path $P_{[v,x]}$ of $C$ with endvertices $v$ and $x$. By Claim~\ref{zig-path-on-c}, $v\not\sim u, u^+$. Let $y$ be the neighbor of $v$ on $P_{[v,x]}$. As $vy$ is an edge, and $v\not\sim u, u^+$, we have $y\sim u$ or $y\sim u^+$ by the $2K_2$-freeness of $G$. Since $U_{x}^j\subseteq \overline{V}_C(uu^+)$ for any $j\le i-1$, we have that $y\not\in \bigcup_{j=0}^{i-1}U_{x}^j$. Furthermore, $v\not\in \bigcup_{j=0}^{i-1}U_{x}^j$ as $U_{x}^i$ is disjoint with $\bigcup_{j=0}^{i-1}U_{x}^j$ by its definition. Thus, $vy$ is an edge on $C$, since any edge on $P_{[v,x]}$ which is not an edge of $C$ has one endvertex in $\bigcup_{j=0}^{i-1}U_{x}^j$. We may assume that $y\not\in V_{bad}$, as both the predecessor and the successor of a bad vertex on $C$ are co-absorbable by the argument for the $i=0$ case. Thus $y\sim u^+$ by (2) of Claim~\ref{zig-path-on-c}. Then $y\overset{\leftharpoonup }{P}_{[v,x]} xu \overset{\leftharpoonup }{D} u^+ y$ is the desired cycle. \qed \begin{CLA}\label{numner-of-neighbors-for-co-absorbable-vertex} We may assume that each vertex in $U^{\infty}$ has fewer than $(|V(G)|-1)/3$ neighbors in $G$. \end{CLA} \noindent\textbf{Proof}.\quad Suppose on the contrary that there exists $v\in U^{\infty}$ so that $|N_G(v)|\ge (|V(G)|-1)/3$. By Claim~\ref{co-absorbable}, we see that $v$ is co-absorbable w.r.t. $C$ and some cycle $D\in \mathcal{F}-\{C\}$.
By standard arguments for longest cycles, we know that $v$ has no two neighbors which are consecutive on any cycle $Q\in\mathcal{F}-\{C,D\}$ and on the cycle which is the combination of $C-v$ and $D$; and also that $(N_G(v))^+$, the set of the successors of neighbors of $v$ from the cycle which is the combination of $C-v$ and $D$ and cycles in $\mathcal{F}-\{C,D\}$, is an independent set in $G$. Let $S=V(G)-(N_G(v))^+-\{v\}$. Then $c(G-S)=|(N_G(v))^{+}\cup \{v\}|\ge (|V(G)|-1)/3+1>\frac{|V(G)|}{3}$. So $\tau(G)\le \frac{|S|}{c(G-S)}< 2$. This gives a contradiction. \qed \begin{CLA}\label{zig-indep} Each of the following holds. \begin{itemize} \item [$(1)$] The set $U^{\infty}$ is an independent set in $G$. \item [$(2)$] $V_C(U^{\infty})\cap U^{\infty}=\emptyset$. \item [$(3)$] $U^{\infty}\cup A^+$ is an independent set in $G$. \end{itemize} \end{CLA} \noindent\textbf{Proof}.\quad To prove (1), assume that there exist $u, v\in U^{\infty}$ such that $uv\in E(G)$. By Claim~\ref{numner-of-neighbors-for-co-absorbable-vertex}, $u$ and $v$ in total have at most $2(|V(G)|-1)/3$ neighbors in $G$. As $uv$ is an edge, and $G$ is $2K_2$-free, the set of non-neighbors of $u$ and $v$ in $G$ forms an independent set in $G$. Let $S=N_G(u)\cup N_G(v)-\{u\}$. Then $c(G-S)\ge |V(G)-S-\{u\}|>|V(G)|/3$. So $\tau(G)<2$. Again, we reach a contradiction to the assumption that $\tau(G)\ge 3$. As $U^{\infty}$ is an independent set in $G$, we have $V_C(U^{\infty})\cap U^{\infty}=\emptyset$. Since each bad vertex $x$ is adjacent to its successor $x^+$, and $x^+\in U_x^0\subseteq U^{\infty}$, we have that $V_{bad}\subseteq V_C(U^{\infty})$. Thus, no vertex in $U^{\infty}$ is adjacent to any B-type vertex on cycles other than $C$. Since $(A\cap V(C))^+\subseteq U^{\infty}$, we know that $U^{\infty}\cup A^+$ is an independent set in $G$. \qed \begin{CLA}\label{S-comp-correspondence} For any vertex $y\in V_C(U^{\infty})$, there exists $v\in U^{\infty}$ such that $vy\in E(C)$.
\end{CLA} \noindent\textbf{Proof}.\quad Assume that $y\in V_C(U_x^{\infty})$ for some $x\in V_{bad}$. The claim trivially holds if $y\in V_C(U_x^{0})$. So assume that $i\ge 1$ and let $y\in V_C(U_x^i)-V_C(\bigcup_{j=0}^{i-1} U_x^j)$. By the definition of $U_x^{i}$, we know that there exists $w\in U_x^{i}$ and a spanning path $P_{[w,x]}$ of $C$ with endvertices $w$ and $x$ such that $y$ is a neighbor of $w$ on $P_{[w,x]}$. Since $V_C(U^{\infty})\cap U^{\infty}=\emptyset$ by (2) of Claim~\ref{zig-indep}, $y\not\in U_x^{\infty}$. By the assumption that $y\in V_C(U_x^i)-V_C(\bigcup_{j=0}^{i-1} U_x^j)$, we know that the predecessor $v$ of $y$ on $P_{[w,x]}$ satisfies $v\not\in \bigcup_{j=0}^{i-1} U_x^{j}$. As any edge of $P_{[w,x]}$ which is not an edge of $C$ has one end contained in $\bigcup_{j=0}^{i-1} U_x^{j}$, we then know that $vy\in E(C)$. \qed \begin{CLA}\label{S-comp-correspondence2} $|V_C(U^{\infty})|\le 2|U^{\infty}|$. \end{CLA} \noindent\textbf{Proof}.\quad Since $U^{\infty}$ is an independent set in $G$ by Claim~\ref{zig-indep}, $|N_C(U^{\infty})|\le 2|U^{\infty}|$. Let $y\in V_C(U^{\infty})$ be any vertex. By Claim~\ref{S-comp-correspondence}, there exists $v\in U^{\infty}$ such that $vy\in E(C)$. Thus, $V_C(U^{\infty})\subseteq N_C(U^{\infty})$. So $|V_C(U^{\infty})|\le |N_C(U^{\infty})|\le 2|U^{\infty}|$. \qed Let $$ S=A\cup V_C(U^{\infty}). $$ We claim that each vertex in $A^+\cup U^{\infty}$ is an isolated vertex in $G-S$. This is because $A^+\cup U^{\infty}$ is an independent set in $G$, and all the possible neighbors of vertices in $A^+\cup U^{\infty}$ in $G$ are contained in $S$. Note also that $|V_C(U^{\infty})|\le 2|U^{\infty}|$ by Claim~\ref{S-comp-correspondence2}, and $|S\cap (V(G)-V(C))|=|V(G)-V(C)-S|=|V(G)-V(C)|/2$ as we assume that all cycles in $\mathcal{F}-\{C\}$ are AB-alternating.
Since $A\cap V(C)\subseteq V_{bad}$ by the definition of $V_{bad}$, and $V_{bad}\subseteq V_C(U^{\infty})$, we have that $A\cap V(C)\subseteq V_C(U^{\infty})$. Thus, $S=A\cup V_C(U^{\infty})=V_C(U^{\infty})\cup (A\cap (V(G)-V(C)))$ and thus $|S|=|V_C(U^{\infty})|+|V(G)-V(C)|/2$. Hence \begin{eqnarray*} \tau(G) &\le & \frac{|S|}{c(G-S)} \\ &\le & \frac{|V_C(U^{\infty})|+|V(G)-V(C)|/2}{|U^{\infty}|+|V(G)-V(C)|/2} \\ &\le & \frac{2|U^{\infty}|+|V(G)-V(C)|/2}{|U^{\infty}|+|V(G)-V(C)|/2}<2, \end{eqnarray*} showing a contradiction. The proof of Theorem~\ref{main} is now complete. \hfill $\blacksquare$\vspace{1mm} \textbf{{\noindent \large Acknowledgements}} The author is extremely grateful to Professor Mark Ellingham for his careful comments and suggestions in improving the proofs and the writing of this paper. \bibliographystyle{plain}
\section{Introduction} Human and animal learning is characterized not just by a capacity to acquire complex skills, but also by the ability to adapt rapidly when those skills must be carried out under new or changing conditions. For example, animals can quickly adapt to walking and running on different surfaces~\citep{herman2017neural} and humans can easily modulate force during reaching movements in the presence of unexpected perturbations~\citep{flanagan1993modulation}. Furthermore, these experiences are remembered, and can be recalled to adapt more quickly when similar disturbances occur in the future~\citep{doyon2005reorganization}. Since learning entirely new models on such short time-scales is impractical, we can devise algorithms that explicitly train models to adapt quickly from small amounts of data. Such online adaptation is crucial for intelligent systems operating in the real world, where changing factors and unexpected perturbations are the norm. In this paper, we propose an algorithm for fast and continuous online learning that utilizes deep neural network models to build and maintain a task distribution, allowing for the natural development of both generalization as well as task specialization. Our working example is continuous adaptation in the model-based reinforcement learning setting, though our approach generally addresses any online learning scenario with streaming data. We assume that each ``trial'' consists of multiple tasks, and that the delineation between the tasks is not provided explicitly to the learner -- instead, the method must adaptively decide what ``tasks'' even represent, when to instantiate new tasks, and when to continue updating old ones. For example, a robot running over changing terrain might need to handle uphill and downhill slopes, and might choose to maintain separate models that become specialized to each slope, adapting to each one in turn based on the currently inferred surface.
We perform adaptation simply by using online stochastic gradient descent (SGD) on the model parameters, while maintaining a mixture model over model parameters for different tasks. The mixture is updated via the Chinese restaurant process~\citep{stimberg2012bayesian}, which enables new tasks to be instantiated as needed over the course of a trial. Although online learning is perhaps one of the oldest applications of SGD~\citep{bottou1998online}, modern parametric models such as deep neural networks are exceedingly difficult to train online with this method. They typically require medium-sized minibatches and multiple epochs to arrive at sensible solutions, which is not suitable when receiving data in an online streaming setting. One of our key observations is that meta-learning can be used to learn a prior initialization for the parameters that makes such direct online adaptation feasible, with only a handful of gradient steps. The meta-training procedure we use is based on model-agnostic meta-learning (MAML)~\citep{maml}, where a prior weight initialization is learned for a model so as to optimize improvement on any task from a meta-training task distribution after a small number of gradient steps. Meta-learning with MAML has previously been extended to model-based RL~\citep{nagabandi2018learning}, but only for the $k$-shot adaptation setting: The meta-learned prior model is adapted to the $k$ most recent time steps, but the adaptation is not carried forward in time (i.e., adaptation is always performed from the prior itself). This rigid batch-mode setting is restrictive in an online learning setup and is insufficient for tasks that are further outside of the training distribution. A more natural formulation is one where the model receives a continuous stream of data and must adapt online to a potentially non-stationary task distribution. 
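To make the contrast with the rigid $k$-shot setting concrete, that scheme can be caricatured as follows: at every time step the model is reset to the meta-learned prior and adapted with a few gradient steps on only the $k$ most recent points, so no adaptation is carried forward. The toy one-parameter model and gradient function below are illustrative stand-ins, not the actual implementation from prior work.

```python
# Sketch of batch-mode k-shot adaptation: every step restarts from the
# meta-learned prior theta_prior, so updates are never accumulated over
# time. All names and the toy gradient are hypothetical stand-ins.

def kshot_adapt(theta_prior, recent_k, grad_fn, lr=0.1, steps=5):
    theta = list(theta_prior)              # reset to the prior every time step
    for _ in range(steps):                 # a handful of gradient steps
        g = grad_fn(theta, recent_k)
        theta = [t - lr * gi for t, gi in zip(theta, g)]
    return theta                           # result is discarded at the next step

# Toy example: a 1-parameter linear model w*x with mean squared loss.
grad = lambda th, batch: [
    sum(2.0 * (th[0] * x - y) * x for x, y in batch) / len(batch)
]
theta = kshot_adapt([0.0], [(1.0, 2.0), (2.0, 4.0)], grad)
```

Because the adapted parameters are thrown away each step, two consecutive calls with the same recent data return the identical result, which is exactly the statelessness that a streaming, non-stationary setting makes undesirable.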
This requires both fast adaptation and the ability to recall prior tasks, as well as an effective adaptation strategy to interpolate as needed between the two. The primary contribution of this paper is a meta-learning for online learning (MOLe) algorithm that uses expectation maximization, in conjunction with a Chinese restaurant process prior on the task distribution, to learn mixtures of neural network models that are each updated with online SGD. In contrast to prior multi-task and meta-learning methods, our method's online assignment of soft task probabilities allows for task specialization to emerge naturally, without requiring task delineations to be specified in advance. We evaluate MOLe in the context of model-based RL on a suite of challenging simulated robotic tasks including disturbances, environmental changes, and simulated motor failures. Our simulated experiments show a half-cheetah agent and a hexapedal crawler robot performing continuous model adaptation in an online setting. Our results show online instantiation of new tasks, the ability to adapt to out-of-distribution tasks, and the ability to recognize and revert back to prior tasks. Additionally, we demonstrate that MOLe outperforms a state-of-the-art prior method that does $k$-shot model-based meta-RL, as well as natural baselines such as continuous gradient updates for adaptation and online learning without meta-training. \section{Related Work} \vspace{-0.1cm} Online learning is one of the oldest subfields of machine learning~\citep{bottou1998online,jafari2001no}. Prior algorithms have used online gradient updates~\citep{duchi2011adaptive} and probabilistic filtering formulations~\citep{murphy2002dynamic,hoffman2010online,broderick2013streaming}. In principle, commonly used gradient-based learning methods, such as SGD, can easily be used as online learning algorithms~\citep{bottou1998online}. 
In practice, their performance with deep neural network function approximators is limited~\citep{sahoo2017online}: such high-dimensional models must be trained with batch-mode methods, minibatches, and multiple passes over the data. We aim to lift this restriction by using model-agnostic meta-learning (MAML) to explicitly pretrain a model that enables fast adaptation, which we then use for continuous online adaptation via an expectation maximization algorithm with a Chinese restaurant process~\citep{blei2003latent} prior for dynamic allocation of new tasks in a non-stationary task distribution. Online learning is related to continual or lifelong learning~\citep{thrun1998lifelong}, where the agent faces a non-stationary distribution of tasks over time. However, unlike works that focus on avoiding negative transfer, i.e., catastrophic forgetting~\citep{kirkpatrick2017overcoming,rebuffi2017icarl,zenke2017continual,gradient_episodic,nguyen2017variational}, online learning focuses on the ability to rapidly learn and adapt in the presence of non-stationarity. While some continual learning works consider the problem of forward transfer, e.g.~\citet{rusu2016progressive,aljundi2017expert,wang2017growing}, these works and others in continual learning generally focus on small sets of tasks where fast, online learning is not realistically possible, since there are simply not enough tasks to recover structure that enables fast, few-shot learning in new tasks or environments. Our approach builds on techniques for meta-learning or learning-to-learn~\citep{thrun,schmidhuber1987,bengiobengio1,naik}. However, most recent meta-learning work considers a setting where one task is learned at a time, often from a single batch of data~\citep{mann,hugo,metanetworks,learningrl,rl2}. In our work, we specifically address non-stationary task distributions and do not assume that task boundaries are known.
Prior work \citep{jerfel2018online} has also considered non-stationary task distributions; whereas \citet{jerfel2018online} use the meta-gradient to estimate the parameters of a mixture over the task-specific parameters, we focus on fast adaptation and accumulation of task-specific mixture components during run-time optimization. Other meta-learning works have considered non-stationarity within a task~\citep{al2017continuous} and episodes involving multiple tasks at meta-test time~\citep{ritter2018been}, but they do not consider continual online adaptation with unknown task separation. Prior work has also studied meta-learning for model-based RL~\citep{nagabandi2018learning}. This prior method updates the model every time step, but each update is a batch-mode $k$-shot update, using exactly $k$ prior transitions and resetting the model at each step. This allows for adaptive control, but does not enable continual online adaptation, since updates from previous steps are always discarded. In our comparisons, we find that our approach substantially outperforms this prior method. To our knowledge, our work is the first to apply meta-learning to learn streaming online updates. \section{Problem Statement}\label{sec:problemstatement} \vspace{-0.1cm} We formalize our online learning problem setting as follows: at each time step, the model receives an input $\*x_t$ and produces a prediction $\*{\hat{y}}_t$. It then receives a ground truth label $\*y_t$, which must be used to adapt the model to increase its prediction accuracy on the next input $\*x_{t+1}$. The true labels are assumed to come from some task distribution $P(Y_t | X_t, T_t)$, where $T_t$ is the task at time $t$. The tasks themselves change over time, resulting in a non-stationary task distribution, and the identity of the task $T_t$ is unknown to the learner. 
In real-world settings, tasks might correspond to unknown parameters of the system (e.g., motor malfunction on a robot), user preferences, or other unexpected events. This problem statement covers a range of online learning problems that all require continual adaptation to streaming data and trading off between generalization and specialization. In our experiments, we use model-based RL as our working example, where the input $\*x_t$ is a state-action pair, and the output $\*y_t$ is the next state. We discuss this application to model-based RL in Section~\ref{sec:mbrl}, but we keep the following derivation of our method general for the case of arbitrary online prediction problems. \section{Online Learning with a Mixture of Meta-Trained Networks} \vspace{-0.1cm} We discuss our meta-learning for online learning (MOLe) algorithm in two parts: online learning in this section, and meta-learning in the next. In this section, we explain our online learning method that enables effective online learning using a continuous stream of incoming data from a non-stationary task distribution. We aim to retain generalization so as to not lose past knowledge, as well as gain specialization, which is particularly important for learning new tasks that are further out-of-distribution and require more learning. We discuss the process of obtaining a meta-learned prior in Sec.~\ref{sec:metatrain}, but we first formulate in this section an online adaptation algorithm using SGD with expectation maximization to maintain and adapt a mixture model over task model parameters (i.e., a probabilistic task distribution). \subsection{Method Overview} Let $p_{\theta(T_t)}(\*y_t|\*x_t)$ represent the predictive distribution of our model on input $\*x_t$, for an unknown task $T_t$. 
Our goal is to estimate model parameters $\theta_t(T)$ for each task $T$ in the non-stationary task distribution. This requires inferring the distribution over tasks at each step $P( T_t|\*x_t, \*y_t )$, using that distribution to make predictions $ \hat{\*y}_t = p_{\theta(T_t)}(\*y_t|\*x_t)$, and also using it to update each model from $\theta_t(T)$ to $\theta_{t+1}(T)$. In practice, the parameters $\theta(T)$ of each model will correspond to the weights of a neural network $f_{\theta(T)}$. Each model begins with some prior parameter vector $\theta^*$, which we will discuss in more detail in Section~\ref{sec:metatrain}. Since the number of tasks is also unknown, we begin with one task at time step 0, where $\theta_0(T)=\{\theta_0(T_0)\}=\{\theta^*\}$. From here, we continuously update all parameters in $\theta_t(T)$ and add new tasks as needed, in an attempt to model the true underlying process $P(Y_t|X_t,T_t)$. Since task identities are unknown, we must also estimate $P(T_t)$ at each time step. Thus, the online learning problem consists of adapting each $\theta_t(T_i)$ at each time step $t$ according to the inferred task probabilities $P(T_t=T_i|\*x_t,\*y_t)$. To do this, we adapt the expectation maximization (EM) algorithm and optimize the expected log-likelihood, given by \begin{equation} \mathcal{L} = E_{T_t \sim P(T_t | \*x_t,\*y_t)}[ \log p_{\theta_t(T_t)}(\*y_t | \*x_t, T_t) ],\label{eq:ell} \end{equation} where we use $\theta_t(T_t)$ to denote the model parameters corresponding to task $T_t$. Finally, to handle the unknown number of tasks, we employ the Chinese restaurant process to instantiate new tasks as needed. \subsection{Approximate Online Inference} \label{sec:online_method} We use expectation maximization (EM) to update the model parameters.
In our case, the E step in EM involves estimating the task distribution $P(T_t=T_i| \*x_t,\*y_t)$ at the current time step, while the M step involves updating all model parameters from $\theta_t(T)$ to obtain the new model parameters $\theta_{t+1}(T)$. The parameters are always updated by one gradient step per time step, according to the inferred task responsibilities. We first estimate the expectations over all task parameters $T_i$ in the task distribution, where the posterior of each task probability can be written as follows: \begin{equation} P(T_t = T_i | \*x_t,\*y_t) \propto p_{\theta_t(T_i)}(\*y_t | \*x_t, T_t = T_i)P(T_t = T_i). \end{equation} We then formulate the task prior $P(T_t=T_i)$ using a Chinese restaurant process (CRP) to enable new tasks to be instantiated during a trial. The CRP is an instantiation of a Dirichlet process. In the CRP, at time $t$, the probability of each task $T_i$ should be given by \begin{equation} P(T_t = T_i) = \frac{n_{T_i}}{t - 1 + \alpha} \end{equation} where $n_{T_i}$ is the expected number of datapoints in task $T_i$ for all steps $1,\dots,t-1$, and $\alpha$ is a hyperparameter that controls the instantiation of new tasks. The prior therefore becomes \begin{equation} P(T_t = T_i) = \frac{\sum_{t'=1}^{t-1} P(T_{t'} = T_i)}{t - 1 + \alpha} ~~~~\text{and} ~~~~P(T_t = T_\text{new}) = \frac{\alpha}{t - 1 + \alpha} \end{equation} Combining the prior and likelihood, we derive the following posterior task probability distribution: \begin{align} P(T_t = T_i | \*x_t,\*y_t) &\propto p_{\theta_t(T_i)}(\*y_t | \*x_t, T_t = T_i)\left[ \sum\limits_{t'=1}^{t-1} P(T_{t'} = T_i) + \delta(T_{t'} = T_\text{new})\alpha \right] \label{eq:e-step} \end{align} Having estimated the latent task probabilities, we next perform the M step, which improves the expected log-likelihood in Equation~\ref{eq:ell} based on the inferred task distribution. 
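The E step above, which computes posterior task probabilities under the CRP prior, can be sketched numerically. This is a minimal illustration (the function name and the log-space stabilization are our own choices, not from the paper), assuming the per-task log-likelihoods have already been computed, with the last entry corresponding to a candidate new task adapted from the meta-learned prior:

```python
import numpy as np

def e_step(log_likelihoods, running_counts, alpha, t):
    """One E step: posterior task probabilities under a CRP prior.

    log_likelihoods: length-(n_tasks + 1) array; the last entry is the
        likelihood of the current datapoint under a fresh model adapted
        from the meta-learned prior (the candidate "new" task).
    running_counts: accumulated P(T_{t'} = T_i) over steps 1..t-1.
    alpha: CRP concentration hyperparameter controlling new-task creation.
    """
    # CRP prior: existing tasks weighted by their running counts,
    # the new task weighted by alpha.
    prior = np.append(running_counts, alpha) / (t - 1 + alpha)
    log_post = log_likelihoods + np.log(prior)
    log_post -= log_post.max()            # stabilize before exponentiating
    post = np.exp(log_post)
    return post / post.sum()
```

A larger `alpha` makes the candidate new task more competitive, so new tasks are instantiated more eagerly.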
Since each task starts from the prior $\theta^*$, the values of all parameters in $\theta_t(T)$ after one gradient update are given by \begin{equation} \theta_{t+1}(T_i) = \theta^* - \beta \sum\limits_{t'=0}^{t} P_t(T_{t'}=T_i|\*x_{t'},\*y_{t'}) \nabla_{\theta_{t'}(T_i)}\log p_{\theta_{t'}(T_i)}(\*y_{t'} | \*x_{t'}) ~~~\forall~T_i \end{equation} If we assume that all parameters of $\theta_t(T)$ have already been updated for the previous time steps $0,\dots,t$, we can approximate this update by simply updating all parameters $\theta_{t}(T)$ on the newest data: \begin{equation}\label{eq:m-step} \theta_{t+1}(T_i)= \theta_t(T_i) - \beta P_t(T_t=T_i|\*x_{t},\*y_{t}) \nabla_{\theta_{t}(T_i)}\log p_{\theta_{t}(T_i)}(\*y_{t} | \*x_{t}) ~~~\forall~T_i \end{equation} This procedure is an approximation, since updates to task parameters $\theta_t(T)$ will in reality also change the corresponding task probabilities at previous time steps. However, this approximation removes the need to store previously seen data points and yields a fully online, streaming algorithm. Finally, to fully implement the EM algorithm, we must alternate the E and M steps to convergence at each time step, rolling back the previous gradient update to $\theta_t(T)$ at each iteration. In practice, we found it sufficient to perform the E and M steps only once per time step. While this is a crude simplification, successive time steps in the online learning scenario are likely to be correlated, making this procedure reasonable. However, it is also straightforward to perform multiple steps of EM while still remaining fully online. 
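The streaming update above amounts to one responsibility-weighted SGD step on only the newest datapoint. A minimal sketch, assuming the usual convention that gradients are taken on the negative log-likelihood (so the update subtracts them) and that parameters are stored per task:

```python
import numpy as np

def m_step(params, nll_grads, posteriors, beta):
    """Approximate streaming M step: one responsibility-weighted SGD
    update per task on the newest datapoint only.

    params: dict task_id -> parameter vector theta_t(T_i)
    nll_grads: dict task_id -> gradient of the negative log-likelihood
        of (x_t, y_t) under task i's model
    posteriors: dict task_id -> P(T_t = T_i | x_t, y_t) from the E step
    beta: learning rate
    """
    return {i: params[i] - beta * posteriors[i] * nll_grads[i]
            for i in params}
```

Tasks with low responsibility for the current datapoint are barely moved, which is what protects previously learned tasks from being overwritten.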
\begin{wrapfigure}[16]{R}{0.5\textwidth} \vspace{-0.4cm} \begin{minipage}{0.53\textwidth} \begin{algorithm}[H] \caption{\footnotesize Online Learning with Mixture of Meta-Trained Networks} \label{alg:metatest} \begin{algorithmic} \footnotesize \REQUIRE $\theta^*$ from meta-training \STATE Initialize $t=0$, $\theta_0(T) = \{\theta_0(T_0)\} = \{\theta^*\}$ \FOR{each time step $t$} \STATE Calculate $p_{\theta_{t}(T_i)}(\*y_t|\*x_t, T_t=T_i) ~~\forall T_i$ \STATE Calculate $P_t(T_i) = P_t(T_t=T_i|\*x_t, \*y_t) ~~\forall T_i$ \STATE Calculate $\theta_{t+1}(T_i)$ by adapting from $\theta_t(T_i) ~~\forall T_i$ \STATE Calculate $\theta_{t+1}(T_\text{new})$ by adapting from $\theta^*$ \IF{$P_t(T_\text{new}) > P_t(T_i) ~~\forall T_i $} \STATE Add $\theta_{t+1}(T_\text{new})$ to $\theta_{t+1}(T)$ \STATE Recalculate $P_t(T_i)$ using $\theta_{t+1}(T_i) ~~\forall T_i$ \STATE Recalculate $\theta_{t+1}(T_i)$ using updated $P_t(T_i) ~~\forall T_i$ \ENDIF \STATE $T^* = \text{argmax}_{T_i} ~~p_{\theta_{t+1}(T_i)}(\*y_t|\*x_t, T_t=T_i)$ \STATE Receive next data point $\*x_{t+1}$ \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \end{wrapfigure} We now summarize this full online learning portion of MOLe, and we also outline it in Alg.~\ref{alg:metatest}. At the first time step $t=0$, the task distribution is initialized to contain one entry: $\theta_0(T)=\{\theta_0(T_0)\}=\{\theta^*\}$. At every time step after that, an E step is performed to estimate the task distribution and an M step is performed to update the model parameters. The CRP prior also assigns, at each time step, the probability of adding a new task $T_\text{new}$ at the given time step. The parameters $\theta_{t+1}(T_\text{new})$ of this new task are adapted from $\theta^*$ on the latest data. The prediction on the next datapoint is then made using the model parameters $\theta_{t+1}(T^*)$ corresponding to the most likely task $T^*$. 
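Putting the E and M steps together, the loop of Alg.~\ref{alg:metatest} can be illustrated on a toy problem. Here a scalar model $y \approx \theta x$ stands in for the neural network, and the Gaussian likelihood, learning rate, and initial count are our own illustrative choices:

```python
import numpy as np

def log_lik(theta, x, y):
    # stand-in likelihood: unit-variance Gaussian around theta * x
    return -0.5 * (y - theta * x) ** 2

def grad_nll(theta, x, y):
    # d/dtheta of the negative log-likelihood above
    return -(y - theta * x) * x

def mole_online(stream, theta_star, alpha=1.0, beta=0.5):
    """Toy sketch of the online loop: E step with a CRP prior, M step by
    responsibility-weighted SGD, and instantiation of a new task (adapted
    from the prior theta_star) whenever it is the most likely."""
    thetas, counts = [theta_star], [1.0]
    for t, (x, y) in enumerate(stream, start=1):
        # candidate new task: one gradient step from the meta-learned prior
        theta_new = theta_star - beta * grad_nll(theta_star, x, y)
        liks = np.array([log_lik(th, x, y) for th in thetas + [theta_new]])
        prior = np.array(counts + [alpha]) / (t - 1 + alpha)
        post = np.exp(liks - liks.max()) * prior
        post = post / post.sum()
        if post[-1] > post[:-1].max():        # new task wins: add it
            thetas.append(theta_new)
            counts.append(0.0)
        else:                                  # otherwise drop the candidate
            post = post[:-1] / post[:-1].sum()
        # M step: weighted SGD on the newest datapoint only
        for i in range(len(thetas)):
            thetas[i] = thetas[i] - beta * post[i] * grad_nll(thetas[i], x, y)
            counts[i] += post[i]
    return thetas
```

On a stationary stream, at least one mixture component converges to the underlying task while the running counts keep the CRP prior from spawning spurious tasks.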
\section{Meta-Learning the Prior} \label{sec:metatrain} We formulated an algorithm above for performing online adaptation using continually incoming data. For this method, we choose to meta-train the prior using the model-agnostic meta-learning (MAML) algorithm. This meta-training algorithm is an appropriate choice, because it results in a prior that is specifically intended for gradient-based fine-tuning. Before we further discuss our choice of meta-training procedure, we first give an overview of MAML and meta-learning in general. Given a distribution of tasks, a meta-learning algorithm produces a learning procedure, which can, in some cases, quickly adapt to a new task. MAML optimizes for an initialization of a deep network that achieves good few-shot task generalization when fine-tuned using a few datapoints from that task. At train time, MAML sees small amounts of data from large numbers of tasks, where data $\mathcal{D}_T$ from each task $T$ can be split into training and validation subsets ($\mathcal{D}^\text{tr}_T$ and $\mathcal{D}^\text{val}_T$), where $\mathcal{D}^\text{tr}_T$ is of size $k$. MAML optimizes for model parameters $\theta$ such that one or more gradient steps on $\mathcal{D}^\text{tr}_T$ result in a minimal loss $L$ on $\mathcal{D}^\text{val}_T$. In our case, we will set $\mathcal{D}^\text{tr}_{T_t}=(\*x_t, \*y_t)$ and $\mathcal{D}^\text{val}_{T_t}=(\*x_{t+1}, \*y_{t+1})$, and the loss $L$ will correspond to the negative log-likelihood. A good $\theta$ that allows such adaptation to be successful across various meta-training tasks is thus a good network initialization from which adaptation can solve various new tasks that are related to the previously seen tasks. The MAML objective is defined as follows: \begin{equation} \min_{\theta} \sum\limits_T L(\theta - \eta \nabla_{\theta} L(\theta, \mathcal{D}^\text{tr}_T), \mathcal{D}^\text{val}_T) = \min_{\theta} \sum\limits_T L(\phi_T, \mathcal{D}^\text{val}_T).
\end{equation} Here, $\eta$ is the inner learning rate. Once this meta-objective is optimized, the resulting $\theta^*$ acts as a prior from which fine-tuning can occur at test-time, using recent experience from $\mathcal{D}^\text{tr}_{T_{\text{test}}}$ as follows: \begin{equation} \phi_{T_{\text{test}}} = \theta^* - \eta \nabla_{\theta^*} L(\theta^*, \mathcal{D}^\text{tr}_{T_{\text{test}}}). \end{equation} Here, $\phi_{T_{\text{test}}}$ is adapted from the meta-learned prior $\theta^*$ to be more representative for the current time. Although \citet{maml} demonstrated this fast adaptation of deep neural networks and \citet{nagabandi2018learning} extended this framework to model-based meta RL, these methods address adaptation in the $k$-shot setting, always adapting directly from the meta-learned prior and not allowing further adaptation or specialization. In this work, we have extended these capabilities by enabling more evolution of knowledge through a temporally-extended online adaptation procedure. While our procedure for continual online learning is still initialized with this meta-training for $k$-shot adaptation (i.e., MAML), we found that this prior was sufficient to enable effective continual online adaptation at test time. The intuitive rationale for this is that MAML trains the model to be able to change significantly using only a small number of datapoints and gradient steps. Note that this meta-trained prior can be used at test time in (a) a $k$-shot setting, similar to how it was trained, or it can be used at test time by (b) taking substantially more gradient steps away from this prior. We show in Sec.~\ref{sec:results} that our method outperforms both of these methods, but the mere ability to use this meta-learned prior in these ways makes the use of MAML enticing. We note that it is quite possible to modify the MAML algorithm to optimize the model directly with respect to the weighted updates discussed in Section~\ref{sec:online_method}. 
This simply requires computing the task weights (the E step) on each batch during meta-training, and then constructing a computation graph where all gradient updates are multiplied by their respective weights. Standard automatic differentiation software can then compute the corresponding meta-gradient. For short trial lengths, this is not substantially more complex than standard MAML; for longer trial lengths, truncated backpropagation is an option. Although such a meta-training procedure better matches the way that the model is used during online adaptation, we found that it did not substantially improve our results. While it is possible that the difference would be more significant when meta-training for longer-term adaptation, this observation does suggest that simply meta-training with MAML is sufficient for enabling effective continuous online adaptation in non-stationary multi-task settings. To clarify: although incorporating the EM weight updates into meta-training did not noticeably improve our results, test-time performance did indeed improve when using more data for the standard MAML meta-training procedure (see Appendix). \section{Application to Model-Based RL} \label{sec:mbrl} In our experiments, we apply MOLe to model-based reinforcement learning. RL in general aims to act in a way that maximizes the sum of future rewards. At each time step $t$, the agent executes action $\mathbf{a}_t \in A$ from state $\mathbf{s}_t \in S$, transitions to the next state $\mathbf{s}_{t+1}$ according to the transition probabilities (i.e., dynamics) $p(\mathbf{s}_{t+1}|\mathbf{s}_t,\mathbf{a}_t)$, and receives rewards $r_t=r(\mathbf{s}_t, \mathbf{a}_t)$. The goal at each step is to execute the action $\mathbf{a}_t$ that maximizes the discounted sum of future rewards $\sum\limits_{t'=t}^{\infty} \gamma^{t'-t} r(\mathbf{s}_{t'}, \mathbf{a}_{t'})$, where discount factor $\gamma \in [0,1]$ prioritizes near-term rewards.
In model-based RL, in particular, the predictions from a known or learned dynamics model are used to either learn a policy, or are used directly inside a planning algorithm to select actions that maximize reward. In our work, the underlying distribution that we aim to model is the dynamics distribution $p(\mathbf{s}_{t+1}|\mathbf{s}_t,\mathbf{a}_t, T_t)$, where the unknown $T_t$ represents the underlying settings (e.g., state of the system, external details, environmental perturbations, etc.). The goal for MOLe is to estimate this distribution with a predictive model $p_\theta$. To instantiate MOLe in this context of model-based RL, we follow Algorithm~\ref{alg:metatest} with the following specifications: (1) We set the input $\*x_t$ to be the concatenation of $K$ previous states and actions, given by $\*x_t = [\mathbf{s}_{t-K},\mathbf{a}_{t-K}, \dots, \mathbf{s}_{t-2},\mathbf{a}_{t-2},\mathbf{s}_{t-1},\mathbf{a}_{t-1}]$, and the output to be the corresponding next states $\*y_t = [\mathbf{s}_{t-K+1},\dots,\mathbf{s}_{t-1},\mathbf{s}_{t}]$. This provides us with a slightly larger batch of data for each online update, as compared to using only the data from the given time step. Since individual time steps at high frequency can be very noisy, using the past $K$ transitions helps to damp out the updates. (2) The predictive model $p_\theta$ represents each of these underlying transitions as an independent Gaussian such that $p_\theta(\*y_t|\*x_t) = \prod_{t^{'}=t-K}^{t-1} p(\*s_{t^{'}+1}|\*s_{t^{'}}, \*a_{t^{'}})$, where each $p(\*s_{t+1}|\*s_t, \*a_t)$ is parameterized with a Gaussian given by mean $f_\theta(\*s_t, \*a_t)$ and constant variance $\sigma^2$. We implement this mean dynamics function $f_{\theta}(s_t, a_t)$ as a neural network model with three hidden layers each of dimension 500, and ReLU nonlinearities. 
(3) To calculate the new task parameter $\theta_{t+1}(T_\text{new})$, which may or may not be added to the task distribution $\theta_{t+1}(T)$, we use a set of $K$ nearby datapoints that is separate from the set $\*x_t$. This is done to avoid calculating the parameter using the same dataset on which it is evaluated, since $P_t(T_\text{new})$ comes from evaluating the parameter on the data $\*x_t$. (4) Unlike standard online streaming tasks where the next data point $\*x_{t+1}$ is just given, the incoming data point (i.e., the next visited state) in this case is influenced by the predictive model itself. This is because, after the most likely task $T^*$ is selected from the possible tasks, the predictions from the model $p_{\theta_{t+1}(T^*)}$ are used by the controller to plan over a sequence of future actions $\mathbf{a}_0,\dots,\mathbf{a}_H$ and select the actions that maximize future reward. Note that the planning procedure is based on stochastic optimization, following prior work~\citep{nagabandi2018learning}, and we provide more details in the appendix. Since the controller's action choice determines the next data point, and since the controller's choice is dependent on the estimated model parameters, it is even more crucial in this setting to appropriately adapt the model. (5) Finally, note that we obtain $\theta^*$ from meta-training using model-agnostic meta-learning (MAML), as mentioned in the method above. However, in this case, MAML is performed in the loop of model-based RL. In other words, the model parameters at a given iteration of meta-training are used by the controller to generate on-policy rollouts, the data from these rollouts is then added to the dataset for MAML, and this process repeats until the end of meta-training. \section{Experiments} \label{sec:results} The questions that we aimed to study from our experiments include: Can MOLe 1) autonomously discover some task structure amid a stream of non-stationary data?
2) adapt to tasks that are further outside of the task distribution than can be handled by a $k$-shot learning approach? 3) recognize and revert to tasks it has seen before? 4) avoid overfitting to a recent task to prevent deterioration of performance upon the next task switch? 5) outperform other methods? To study these questions, we conduct experiments on agents in the MuJoCo physics engine~\citep{todorov2012mujoco}. The agents we used are a half-cheetah ($S \in \mathbb{R}^{21}$, $A \in \mathbb{R}^{6}$) and a hexapedal crawler ($S \in \mathbb{R}^{50}$, $A \in \mathbb{R}^{12}$). Using these agents, we design a number of challenging online learning problems that involve multiple sudden and gradual changes in the underlying task distribution, including tasks that are extrapolated from those seen previously, where online learning is critical. Through these experiments, we aim to build problem settings that are representative of the types of disturbances and shifts that a real RL agent might encounter. We present results and analysis of our findings in the following three sections, and videos can be found at \url{https://sites.google.com/berkeley.edu/onlineviameta}. In our experiments, we compare to several alternative methods, including two approaches that leverage meta-training and two approaches that do not:\\ (a) \textbf{k-shot adaptation with meta-learning}: Always adapt from the meta-trained prior $\theta^*$, as typically done with meta-learning methods~\citep{nagabandi2018learning}. This method is often insufficient for adapting to tasks that are further out of distribution, and the adaptation is also not carried forward in time for future use.\\ (b) \textbf{continued adaptation with meta-learning}: Always take gradient steps from the previous time step's parameters. This method often overfits to recently observed tasks, so it should indicate the importance of our method effectively identifying task structure to avoid overfitting and enable recall.
\\ (c) \textbf{model-based RL}: Train a model on the same data as the methods above, using standard supervised learning, and keep this model fixed throughout the trials (i.e., no meta-learning and no adaptation).\\ (d) \textbf{model-based RL with online gradient updates}: Use the same model from model-based RL (i.e., no meta-learning), but adapt it online using gradient descent at run time. This is representative of commonly used dynamic evaluation methods~\citep{rei2015onlinerepre,krause2017dynamiceval,krause2016multiplicative,fortunato2017bayesian}. \subsection{Terrain Slopes on Half-Cheetah} \begin{wrapfigure}{r}{0.4\textwidth} \vspace{-0.3in} \begin{center} \includegraphics[width=0.4\textwidth]{figs/hfield.png} \vspace{-0.2in} \caption{Half-cheetah agent, shown traversing a landscape with `basins'} \label{fig:cheetah} \vspace{-0.2in} \end{center} \end{wrapfigure} We start with the task of a half-cheetah (Fig.~\ref{fig:cheetah}) agent traversing terrains of differing slopes. The prior model is meta-trained on data from terrains with random slopes of low magnitudes, and the test trials are executed on difficult out-of-distribution tasks such as basins, steep hills, etc. As shown in Fig.~\ref{fig:hfield_test}, neither model-based RL nor model-based RL with online gradient updates performs well on these out-of-distribution tasks, even though those models were trained on the same data that the meta-trained model received. The poor performance of the model-based RL approach indicates the need for model adaptation (as opposed to assuming a single model can encompass everything), while the poor performance of model-based RL with online gradient updates indicates the need for a meta-learned initialization to enable online learning with neural networks.
For the three meta-learning and adaptation methods, we expect continued adaptation with meta-learning to perform poorly due to continuous gradient steps causing it to overfit to recent data; that is, we expect experience on the upward slopes to degrade performance on downward slopes, and vice versa. However, based on both our qualitative and quantitative results, we see that the meta-learning procedure seems to have initialized the agent with a parameter space in which these various ``tasks'' are not seen as substantially different, and where online learning by SGD performs well. This suggests that the meta-learning process finds a task space in which skills transfer easily between slopes; thus, even when MOLe is faced with the option of switching tasks or adding new tasks to its dynamic latent task distribution, it chooses not to do so (Fig.~\ref{fig:hfield_pt_same}). Unlike findings that we will see later, it is interesting that the discovered task space here does not correspond to human-distinguishable categorical labels. Finally, note that these tasks of changing slopes are not particularly similar to each other (and that the discovered task space is indeed useful), because the two non-meta-learning baselines do indeed fail on these test tasks despite having similar training performance on the shallow training slopes. \begin{figure}[H] \centering \vspace{-0.1in} \includegraphics[width=\textwidth]{figs/hfield_alltasks.png} \vspace{-0.2in} \vspace{-0.2cm} \caption{\footnotesize Results on half-cheetah terrain traversal. The poorly performing model-based RL shows that a single model is not sufficient, and model-based RL with online gradient updates shows that a meta-learned initialization is critical.
The three meta-learning approaches perform similarly; however, the performance of k-shot adaptation deteriorates when the task calls for taking multiple gradient steps away from the prior.} \label{fig:hfield_test} \end{figure} \begin{figure}[H] \centering \vspace{-0.2in} \includegraphics[width=0.47\textwidth]{figs/hfield_dist_same1.png} \includegraphics[width=0.47\textwidth]{figs/hfield_dist_same2.png} \vspace{-0.16in} \caption{\footnotesize Latent task distribution over time for two half-cheetah landscape traversal tasks, where encountered terrain slopes vary within each run. Interestingly, we find that MOLe chooses to only use a single latent task variable to describe varying terrain.} \label{fig:hfield_pt_same} \end{figure} \subsection{Half-Cheetah Motor Malfunctions} While the findings from the half-cheetah on sloped terrains illustrate that separate task parameters aren't always necessary for what might externally seem like separate tasks, we also want to study agents that experience more drastically-changing non-stationary task distributions during their experience in the world. For this set of experiments, we train all models on data where a single actuator is selected at random to experience a malfunction during the rollout. In this case, malfunction means that the polarity or magnitude of actions applied to that actuator are altered. Fig.~\ref{fig:malfunction} shows the results of various methods on drastically out-of-distribution test tasks, such as altering all actuators at once. The left of Fig.~\ref{fig:malfunction} shows that when the task distribution during the test trials contains only a single task, such as `sign negative' where all actuators are prescribed to be the opposite polarity, then continued adaptation performs well by repeatedly performing gradient updates on incoming data. 
However, as shown in the other tasks of Fig.~\ref{fig:malfunction}, the performance of this continued adaptation substantially deteriorates when the agent experiences a non-stationary task distribution. Due to overspecialization on recent incoming data, such methods that continuously adapt tend to forget and lose previously existing skills. This overfitting and forgetting of past skills is also illustrated in the consistent performance deterioration shown in Fig.~\ref{fig:malfunction}. MOLe, on the other hand, dynamically builds a probabilistic task distribution and allows adaptation to these difficult tasks, without forgetting past skills. We show a sample task setup in Fig.~\ref{fig:malfunctiontask}, where the agent experiences alternating periods of normal and crippled-leg operation. This plot shows the successful recognition of new tasks as well as old tasks; note that both the recognition and adaptation are done online, without using a bank of past data to perform the adaptation, and without a human-specified set of task categories. \begin{figure}[H] \centering \vspace{-0.1in} \includegraphics[width=0.49\textwidth]{figs/signs_allTasks.png}\label{fig:signs_test} \includegraphics[width=0.49\textwidth]{figs/signs_cumsum_3.png}\label{fig:signs_cumsum} \vspace{-0.3cm} \caption{\footnotesize Results on the motor malfunction trials, where different trials present task distributions that change at different frequencies (or stay constant, for the first category). Here, online learning is critical for good performance, k-shot adaptation is insufficient for such different tasks, and continued gradient steps lead to overfitting to recently seen data.
\label{fig:malfunction} } \end{figure} \begin{figure}[H] \centering \vspace{-0.2in} \includegraphics[width=0.8\textwidth]{figs/signs_dist_changing3.png}\label{fig:signs_pt} \vspace{-0.15in} \caption{\footnotesize Latent task variable distribution over the course of an online learning trial where the underlying motor malfunction changes every $500$ timesteps. We find that MOLe is able to successfully recover the task structure, recognize when the underlying task has changed, and recall previously seen tasks. \label{fig:malfunctiontask} } \end{figure} \subsection{Crippling of End Effectors on Six-Legged Crawler} \begin{wrapfigure}{r}{0.2\textwidth} \begin{center} \vspace{-0.3in} \includegraphics[width=0.2\textwidth]{figs/roach.png} \caption{\footnotesize Six-legged crawler robot, shown with crippled legs.} \label{fig:crawler} \vspace{-0.2in} \end{center} \end{wrapfigure} To further examine the effects of our continual online adaptation algorithm, we study another, more complex agent: a six-legged crawler (Fig.~\ref{fig:crawler}). In these experiments, models are trained on random joints being crippled (i.e., unable to apply actuator commands). In Fig.~\ref{fig:crawler_test}, we present two illustrative test tasks: (1) the agent sees a set configuration of crippling for the duration of its test-time experience, and (2) the agent receives alternating periods of experience, between regions of normal operation and regions of having crippled legs. The first setting is similar to data seen during training, and thus, we see that even the model-based RL and model-based RL with online gradient updates baselines do not fail. The methods that include both meta-learning and adaptation, however, do achieve higher performance. Furthermore, we see again that continued gradient steps are not detrimental in this single-task setting.
The second setting's non-stationary task distribution (when the leg crippling is dynamic) illustrates the need for online adaptation (model-based RL fails), the need for a good prior to adapt from (failure of model-based RL with online gradient updates), the harm of overfitting to recent experience and thus forgetting older skills (low performance of continued gradient steps), and the need for further adaptation away from the prior (limited performance of k-shot adaptation). With MOLe, this agent is able to build its own representation of ``task'' switches, and we see that this switch does indeed correspond to recognizing regions of leg crippling (left of Fig.~\ref{fig:crawler_pt}). The plot of the cumulative sum of rewards (right of Fig.~\ref{fig:crawler_pt}) for each of the three meta-learning plus adaptation methods includes this same task switch pattern every 500 steps: here, we can clearly see that steps 500-1000 and 1500-2000 were the crippled regions. Continued gradient steps actually performs worse on the second and third times it sees normal operation, whereas MOLe improves noticeably as it sees the task more often. Note that both skills improve: developing one skill does not hinder the other. Finally, we examine experiments where the crawler experiences (during each trial) walking straight, making turns, and sometimes having a crippled leg. The performance during the first 500 time steps of ``walking forward in a normal configuration'' for continued gradient steps was comparable to MOLe (within roughly 10\%), but its performance during the last 500 time steps of ``walking forward in a normal configuration'' was \textbf{200\%} lower. This illustrates the detrimental effect of performing updates without allowing for separate task specialization. \vspace{-0.1in} \begin{figure}[H] \begin{center} \includegraphics[width=0.8\textwidth]{figs/roach_allTasks.png} \vspace{-0.5cm} \caption{\footnotesize Quantitative results on crawler.
For a fixed task, adaptation is not necessary and all methods perform well. In contrast, when tasks change dynamically within the trial, only MOLe effectively learns online.} \label{fig:crawler_test} \end{center} \vspace{-0.2in} \end{figure} \begin{figure}[H] \centering \vspace{-0.1in} \includegraphics[height=0.28\textwidth]{figs/roach_dist.png}\label{fig:crawler_pt} \includegraphics[height=0.28\textwidth]{figs/roach_cumsum.png} \vspace{-0.4cm} \caption{\footnotesize Results on crawler experiments. Left: Online recognition of latent task probabilities for alternating periods of normal/crippled experience. Right: MOLe improves from seeing the same tasks multiple times.} \vspace{-0.4cm} \end{figure} \section{Discussion} We presented an online learning method for neural network models that can handle non-stationary, multi-task settings within each trial. Our method adapts the model directly with SGD, where an EM algorithm uses a Chinese restaurant process prior to maintain a distribution over tasks and handle non-stationarity. Although SGD generally makes for a poor online learning algorithm in the streaming setting for large parametric models such as deep neural networks, we observe that, by (1) meta-training the model for fast adaptation with MAML and (2) employing our online algorithm for probabilistic updates at test time, we can enable effective online learning with neural networks. In our experiments, we applied this approach to model-based RL, and we demonstrated that it could be used to adapt the behavior of simulated robots faced with various new and unexpected tasks. Our results showed that our method can develop its own notion of task, continuously adapt away from the prior as necessary (to learn even tasks that require more adaptation), and recall tasks it has seen before. While we use model-based RL as our evaluation domain, our method is general and could be applied to other streaming and online learning settings. 
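To make the role of the Chinese restaurant process prior concrete, the following is a minimal sketch of the generic CRP assignment probabilities (an existing task is favored in proportion to how much data it has already explained, and a new task is opened with probability proportional to a concentration parameter $\alpha$). This is a textbook CRP illustration, not our exact EM update; the names `crp_prior`, `counts`, and `alpha` are illustrative.

```python
import numpy as np

def crp_prior(counts, alpha=1.0):
    """Chinese restaurant process prior over the next task assignment.

    counts[k] is the amount of experience already assigned to task k.
    Returns a vector of length len(counts) + 1: probabilities of reusing
    each existing task, plus (last entry) the probability of a new task,
    which is proportional to the concentration parameter alpha.
    """
    weights = np.append(np.asarray(counts, dtype=float), alpha)
    return weights / weights.sum()
```

For example, with two existing tasks that have explained 3 and 1 units of experience and $\alpha = 1$, the prior is $[0.6,\ 0.2,\ 0.2]$: reuse is favored, but a new task always retains some probability mass, which is what lets the method recognize genuinely novel situations.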
An exciting direction for future work would be to apply our method to domains such as time series modeling and active online learning. \section{Test-time Performance vs Training Data} We verify below that as the meta-trained models are trained with more data, their performance on test tasks does improve. \begin{figure}[H] \centering \includegraphics[width=0.7\textwidth]{figs/testPerf_vs_metatraining.png} \caption{\footnotesize Performance on test tasks (i.e., unseen during training) of models that are meta-trained with differing amounts of data. Performance numbers here are normalized per agent, between 0 and 1.} \label{fig:metatrain_data} \vspace{0.2cm} \end{figure} \section{Hyperparameters} In all experiments, we use a dynamics model consisting of three hidden layers, each of dimension 500, with ReLU nonlinearities. The control method that we use is random-shooting model predictive control (MPC), where 1000 candidate action sequences, each of horizon length H=10, are sampled at each time step, fed through the predictive model, and ranked by their expected reward. The first action of the highest-scoring candidate action sequence is then executed, and the entire planning process repeats at the next time step. Below, we list relevant training and testing parameters for the various methods used in our experiments. \# Tasks/itr corresponds to the number of tasks sampled during each iteration of collecting data to train the model, and \# TS/itr is the total number of time steps collected during that iteration (sum over all tasks). 
\begin{table}[H] \centering \caption{Hyperparameters for train-time} \label{tab:hc-hyper-train} \resizebox{\textwidth}{!}{% \begin{tabular}{@{}llllllllllllll@{}} & Iters & Epochs & \# Tasks/itr & \# TS/itr & K & outer LR & inner LR ($\eta$) \\ \textbf{Meta-learned approaches (3)} & 12 & 50 & 16 & 2000--3000 & 16 & 0.001 & 0.01 \\ \textbf{Non-meta-learned approaches (2)} & 12 & 50 & 16 & 2000--3000 & 16 & 0.001 & N/A \\ \end{tabular} } \end{table} \begin{table}[H] \centering \caption{Hyperparameters for run-time} \label{tab:hc-hyper-test} \resizebox{0.6\textwidth}{!}{% \begin{tabular}{@{}llllllllllllll@{}} \textbf{ } & $\alpha$ (CRP) & LR (model update) & K (previous data) \\ \textbf{MOLe (ours)} & 1 & 0.01 & 16 \\ \textbf{continued adaptation with meta-learning} & N/A & 0.01 & 16 \\ \textbf{k-shot adaptation with meta-learning} & N/A & 0.01 & 16 \\ \textbf{model-based RL} & N/A & N/A & N/A \\ \textbf{model-based RL with online gradient updates} & N/A & 0.01 & 16 \\ \end{tabular}% } \end{table} \newpage \section{Controller} As mentioned in Section~\ref{sec:mbrl}, we use the learned dynamics model in conjunction with a controller to select the next action to execute. The controller uses the learned model $f_\theta$ together with a reward function $r(\*s_t, \*a_t)$ that encodes the desired task. Many methods could be used to perform this action selection, including the cross-entropy method (CEM)~\citep{botev2013cross} or model predictive path integral control (MPPI)~\citep{williams15mppi}, but in our experiments, we use a random-sampling shooting method~\cite{Rao2009_shooting}. At each time step $t$, we randomly generate $N$ candidate action sequences with $H$ actions in each sequence. \begin{equation} A^i_t = \{\*a^i_t, \dots, \*a^i_{t+H}\} \end{equation} We then use the learned dynamics model to predict the resulting states of executing these candidate action sequences. 
\begin{equation} S^i = \{ \hat{\*s}_{t+1}^i, \dots, \hat{\*s}_{t+H+1}^i\} ~~\text{where} ~~\hat{\*s}_{p+1}^i = f_\theta(\hat{\*s}_p^i, \*a^i_p) \end{equation} Next, we use the reward function to select the action sequence with the highest associated predicted reward. \begin{equation} i^* = \text{argmax}_i {\sum^{t'=t+H}_{t'=t} r(\hat{\*s}_{t'}^i, \*a^i_{t'})} \end{equation} Then, rather than executing the entire selected sequence $A^{i^*}_t$, we use a model predictive control (MPC) framework to execute only the first action $\*a_t^{i^*}$ from the current state $\*s_t$ and replan at the next time step. This use of MPC compensates for model inaccuracies by preventing error accumulation, since we replan at each time step using updated state information.
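The random-shooting MPC loop above can be sketched compactly. This is a minimal illustration under the assumption of vectorized model and reward functions: `dynamics` and `reward` are placeholders standing in for the learned model $f_\theta$ and the task reward $r$, and the uniform action bounds are illustrative.

```python
import numpy as np

def random_shooting_mpc(dynamics, reward, s_t, action_dim, N=1000, H=10,
                        action_low=-1.0, action_high=1.0, rng=None):
    """Return the first action of the best of N random H-step action sequences.

    dynamics(states, actions) -> next_states and reward(states, actions) -> (N,)
    are assumed to operate on batches of N candidates in parallel.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Sample N candidate action sequences, each of horizon H.
    A = rng.uniform(action_low, action_high, size=(N, H, action_dim))
    returns = np.zeros(N)
    states = np.tile(s_t, (N, 1))        # roll all candidates out in parallel
    for h in range(H):
        returns += reward(states, A[:, h, :])
        states = dynamics(states, A[:, h, :])
    best = np.argmax(returns)            # highest predicted return
    return A[best, 0, :]                 # MPC: execute only the first action
```

At deployment this function would be called once per time step, with the environment's new state fed back in before replanning, which is exactly the error-correcting loop described above.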
\section{The ALICE Collaboration} \begingroup \small \begin{flushleft} S.~Acharya$^\textrm{\scriptsize 139}$, D.~Adamov\'{a}$^\textrm{\scriptsize 96}$, J.~Adolfsson$^\textrm{\scriptsize 34}$, M.M.~Aggarwal$^\textrm{\scriptsize 101}$, G.~Aglieri Rinella$^\textrm{\scriptsize 35}$, M.~Agnello$^\textrm{\scriptsize 31}$, N.~Agrawal$^\textrm{\scriptsize 48}$, Z.~Ahammed$^\textrm{\scriptsize 139}$, N.~Ahmad$^\textrm{\scriptsize 17}$, S.U.~Ahn$^\textrm{\scriptsize 80}$, S.~Aiola$^\textrm{\scriptsize 143}$, A.~Akindinov$^\textrm{\scriptsize 65}$, S.N.~Alam$^\textrm{\scriptsize 139}$, J.L.B.~Alba$^\textrm{\scriptsize 114}$, D.S.D.~Albuquerque$^\textrm{\scriptsize 125}$, D.~Aleksandrov$^\textrm{\scriptsize 92}$, B.~Alessandro$^\textrm{\scriptsize 59}$, R.~Alfaro Molina$^\textrm{\scriptsize 75}$, A.~Alici$^\textrm{\scriptsize 54}$\textsuperscript{,}$^\textrm{\scriptsize 12}$\textsuperscript{,}$^\textrm{\scriptsize 27}$, A.~Alkin$^\textrm{\scriptsize 3}$, J.~Alme$^\textrm{\scriptsize 22}$, T.~Alt$^\textrm{\scriptsize 71}$, L.~Altenkamper$^\textrm{\scriptsize 22}$, I.~Altsybeev$^\textrm{\scriptsize 138}$, C.~Alves Garcia Prado$^\textrm{\scriptsize 124}$, M.~An$^\textrm{\scriptsize 7}$, C.~Andrei$^\textrm{\scriptsize 89}$, D.~Andreou$^\textrm{\scriptsize 35}$, H.A.~Andrews$^\textrm{\scriptsize 113}$, A.~Andronic$^\textrm{\scriptsize 109}$, V.~Anguelov$^\textrm{\scriptsize 106}$, C.~Anson$^\textrm{\scriptsize 99}$, T.~Anti\v{c}i\'{c}$^\textrm{\scriptsize 110}$, F.~Antinori$^\textrm{\scriptsize 57}$, P.~Antonioli$^\textrm{\scriptsize 54}$, R.~Anwar$^\textrm{\scriptsize 127}$, L.~Aphecetche$^\textrm{\scriptsize 117}$, H.~Appelsh\"{a}user$^\textrm{\scriptsize 71}$, S.~Arcelli$^\textrm{\scriptsize 27}$, R.~Arnaldi$^\textrm{\scriptsize 59}$, O.W.~Arnold$^\textrm{\scriptsize 107}$\textsuperscript{,}$^\textrm{\scriptsize 36}$, I.C.~Arsene$^\textrm{\scriptsize 21}$, M.~Arslandok$^\textrm{\scriptsize 106}$, B.~Audurier$^\textrm{\scriptsize 117}$, A.~Augustinus$^\textrm{\scriptsize 35}$, 
R.~Averbeck$^\textrm{\scriptsize 109}$, M.D.~Azmi$^\textrm{\scriptsize 17}$, A.~Badal\`{a}$^\textrm{\scriptsize 56}$, Y.W.~Baek$^\textrm{\scriptsize 79}$\textsuperscript{,}$^\textrm{\scriptsize 61}$, S.~Bagnasco$^\textrm{\scriptsize 59}$, R.~Bailhache$^\textrm{\scriptsize 71}$, R.~Bala$^\textrm{\scriptsize 103}$, A.~Baldisseri$^\textrm{\scriptsize 76}$, M.~Ball$^\textrm{\scriptsize 45}$, R.C.~Baral$^\textrm{\scriptsize 68}$, A.M.~Barbano$^\textrm{\scriptsize 26}$, R.~Barbera$^\textrm{\scriptsize 28}$, F.~Barile$^\textrm{\scriptsize 53}$\textsuperscript{,}$^\textrm{\scriptsize 33}$, L.~Barioglio$^\textrm{\scriptsize 26}$, G.G.~Barnaf\"{o}ldi$^\textrm{\scriptsize 142}$, L.S.~Barnby$^\textrm{\scriptsize 113}$\textsuperscript{,}$^\textrm{\scriptsize 95}$, V.~Barret$^\textrm{\scriptsize 82}$, P.~Bartalini$^\textrm{\scriptsize 7}$, K.~Barth$^\textrm{\scriptsize 35}$, J.~Bartke$^\textrm{\scriptsize 121}$\Aref{0}, E.~Bartsch$^\textrm{\scriptsize 71}$, M.~Basile$^\textrm{\scriptsize 27}$, N.~Bastid$^\textrm{\scriptsize 82}$, S.~Basu$^\textrm{\scriptsize 139}$\textsuperscript{,}$^\textrm{\scriptsize 141}$, B.~Bathen$^\textrm{\scriptsize 72}$, G.~Batigne$^\textrm{\scriptsize 117}$, A.~Batista Camejo$^\textrm{\scriptsize 82}$, B.~Batyunya$^\textrm{\scriptsize 78}$, P.C.~Batzing$^\textrm{\scriptsize 21}$, I.G.~Bearden$^\textrm{\scriptsize 93}$, H.~Beck$^\textrm{\scriptsize 106}$, C.~Bedda$^\textrm{\scriptsize 64}$, N.K.~Behera$^\textrm{\scriptsize 61}$, I.~Belikov$^\textrm{\scriptsize 135}$, F.~Bellini$^\textrm{\scriptsize 27}$, H.~Bello Martinez$^\textrm{\scriptsize 2}$, R.~Bellwied$^\textrm{\scriptsize 127}$, L.G.E.~Beltran$^\textrm{\scriptsize 123}$, V.~Belyaev$^\textrm{\scriptsize 85}$, G.~Bencedi$^\textrm{\scriptsize 142}$, S.~Beole$^\textrm{\scriptsize 26}$, A.~Bercuci$^\textrm{\scriptsize 89}$, Y.~Berdnikov$^\textrm{\scriptsize 98}$, D.~Berenyi$^\textrm{\scriptsize 142}$, R.A.~Bertens$^\textrm{\scriptsize 130}$, D.~Berzano$^\textrm{\scriptsize 35}$, 
L.~Betev$^\textrm{\scriptsize 35}$, A.~Bhasin$^\textrm{\scriptsize 103}$, I.R.~Bhat$^\textrm{\scriptsize 103}$, A.K.~Bhati$^\textrm{\scriptsize 101}$, B.~Bhattacharjee$^\textrm{\scriptsize 44}$, J.~Bhom$^\textrm{\scriptsize 121}$, L.~Bianchi$^\textrm{\scriptsize 127}$, N.~Bianchi$^\textrm{\scriptsize 51}$, C.~Bianchin$^\textrm{\scriptsize 141}$, J.~Biel\v{c}\'{\i}k$^\textrm{\scriptsize 39}$, J.~Biel\v{c}\'{\i}kov\'{a}$^\textrm{\scriptsize 96}$, A.~Bilandzic$^\textrm{\scriptsize 36}$\textsuperscript{,}$^\textrm{\scriptsize 107}$, R.~Biswas$^\textrm{\scriptsize 4}$, S.~Biswas$^\textrm{\scriptsize 4}$, J.T.~Blair$^\textrm{\scriptsize 122}$, D.~Blau$^\textrm{\scriptsize 92}$, C.~Blume$^\textrm{\scriptsize 71}$, G.~Boca$^\textrm{\scriptsize 136}$, F.~Bock$^\textrm{\scriptsize 84}$\textsuperscript{,}$^\textrm{\scriptsize 35}$\textsuperscript{,}$^\textrm{\scriptsize 106}$, A.~Bogdanov$^\textrm{\scriptsize 85}$, L.~Boldizs\'{a}r$^\textrm{\scriptsize 142}$, M.~Bombara$^\textrm{\scriptsize 40}$, G.~Bonomi$^\textrm{\scriptsize 137}$, M.~Bonora$^\textrm{\scriptsize 35}$, J.~Book$^\textrm{\scriptsize 71}$, H.~Borel$^\textrm{\scriptsize 76}$, A.~Borissov$^\textrm{\scriptsize 19}$, M.~Borri$^\textrm{\scriptsize 129}$, E.~Botta$^\textrm{\scriptsize 26}$, C.~Bourjau$^\textrm{\scriptsize 93}$, P.~Braun-Munzinger$^\textrm{\scriptsize 109}$, M.~Bregant$^\textrm{\scriptsize 124}$, T.A.~Broker$^\textrm{\scriptsize 71}$, T.A.~Browning$^\textrm{\scriptsize 108}$, M.~Broz$^\textrm{\scriptsize 39}$, E.J.~Brucken$^\textrm{\scriptsize 46}$, E.~Bruna$^\textrm{\scriptsize 59}$, G.E.~Bruno$^\textrm{\scriptsize 33}$, D.~Budnikov$^\textrm{\scriptsize 111}$, H.~Buesching$^\textrm{\scriptsize 71}$, S.~Bufalino$^\textrm{\scriptsize 31}$, P.~Buhler$^\textrm{\scriptsize 116}$, P.~Buncic$^\textrm{\scriptsize 35}$, O.~Busch$^\textrm{\scriptsize 133}$, Z.~Buthelezi$^\textrm{\scriptsize 77}$, J.B.~Butt$^\textrm{\scriptsize 15}$, J.T.~Buxton$^\textrm{\scriptsize 18}$, J.~Cabala$^\textrm{\scriptsize 119}$, 
D.~Caffarri$^\textrm{\scriptsize 35}$\textsuperscript{,}$^\textrm{\scriptsize 94}$, H.~Caines$^\textrm{\scriptsize 143}$, A.~Caliva$^\textrm{\scriptsize 64}$, E.~Calvo Villar$^\textrm{\scriptsize 114}$, P.~Camerini$^\textrm{\scriptsize 25}$, A.A.~Capon$^\textrm{\scriptsize 116}$, F.~Carena$^\textrm{\scriptsize 35}$, W.~Carena$^\textrm{\scriptsize 35}$, F.~Carnesecchi$^\textrm{\scriptsize 27}$\textsuperscript{,}$^\textrm{\scriptsize 12}$, J.~Castillo Castellanos$^\textrm{\scriptsize 76}$, A.J.~Castro$^\textrm{\scriptsize 130}$, E.A.R.~Casula$^\textrm{\scriptsize 24}$\textsuperscript{,}$^\textrm{\scriptsize 55}$, C.~Ceballos Sanchez$^\textrm{\scriptsize 9}$, P.~Cerello$^\textrm{\scriptsize 59}$, S.~Chandra$^\textrm{\scriptsize 139}$, B.~Chang$^\textrm{\scriptsize 128}$, S.~Chapeland$^\textrm{\scriptsize 35}$, M.~Chartier$^\textrm{\scriptsize 129}$, J.L.~Charvet$^\textrm{\scriptsize 76}$, S.~Chattopadhyay$^\textrm{\scriptsize 139}$, S.~Chattopadhyay$^\textrm{\scriptsize 112}$, A.~Chauvin$^\textrm{\scriptsize 107}$\textsuperscript{,}$^\textrm{\scriptsize 36}$, M.~Cherney$^\textrm{\scriptsize 99}$, C.~Cheshkov$^\textrm{\scriptsize 134}$, B.~Cheynis$^\textrm{\scriptsize 134}$, V.~Chibante Barroso$^\textrm{\scriptsize 35}$, D.D.~Chinellato$^\textrm{\scriptsize 125}$, S.~Cho$^\textrm{\scriptsize 61}$, P.~Chochula$^\textrm{\scriptsize 35}$, K.~Choi$^\textrm{\scriptsize 19}$, M.~Chojnacki$^\textrm{\scriptsize 93}$, S.~Choudhury$^\textrm{\scriptsize 139}$, T.~Chowdhury$^\textrm{\scriptsize 82}$, P.~Christakoglou$^\textrm{\scriptsize 94}$, C.H.~Christensen$^\textrm{\scriptsize 93}$, P.~Christiansen$^\textrm{\scriptsize 34}$, T.~Chujo$^\textrm{\scriptsize 133}$, S.U.~Chung$^\textrm{\scriptsize 19}$, C.~Cicalo$^\textrm{\scriptsize 55}$, L.~Cifarelli$^\textrm{\scriptsize 12}$\textsuperscript{,}$^\textrm{\scriptsize 27}$, F.~Cindolo$^\textrm{\scriptsize 54}$, J.~Cleymans$^\textrm{\scriptsize 102}$, F.~Colamaria$^\textrm{\scriptsize 33}$, D.~Colella$^\textrm{\scriptsize 
66}$\textsuperscript{,}$^\textrm{\scriptsize 35}$, A.~Collu$^\textrm{\scriptsize 84}$, M.~Colocci$^\textrm{\scriptsize 27}$, M.~Concas$^\textrm{\scriptsize 59}$\Aref{idp1835040}, G.~Conesa Balbastre$^\textrm{\scriptsize 83}$, Z.~Conesa del Valle$^\textrm{\scriptsize 62}$, M.E.~Connors$^\textrm{\scriptsize 143}$\Aref{idp1854432}, J.G.~Contreras$^\textrm{\scriptsize 39}$, T.M.~Cormier$^\textrm{\scriptsize 97}$, Y.~Corrales Morales$^\textrm{\scriptsize 59}$, I.~Cort\'{e}s Maldonado$^\textrm{\scriptsize 2}$, P.~Cortese$^\textrm{\scriptsize 32}$, M.R.~Cosentino$^\textrm{\scriptsize 126}$, F.~Costa$^\textrm{\scriptsize 35}$, S.~Costanza$^\textrm{\scriptsize 136}$, J.~Crkovsk\'{a}$^\textrm{\scriptsize 62}$, P.~Crochet$^\textrm{\scriptsize 82}$, E.~Cuautle$^\textrm{\scriptsize 73}$, L.~Cunqueiro$^\textrm{\scriptsize 72}$, T.~Dahms$^\textrm{\scriptsize 36}$\textsuperscript{,}$^\textrm{\scriptsize 107}$, A.~Dainese$^\textrm{\scriptsize 57}$, M.C.~Danisch$^\textrm{\scriptsize 106}$, A.~Danu$^\textrm{\scriptsize 69}$, D.~Das$^\textrm{\scriptsize 112}$, I.~Das$^\textrm{\scriptsize 112}$, S.~Das$^\textrm{\scriptsize 4}$, A.~Dash$^\textrm{\scriptsize 90}$, S.~Dash$^\textrm{\scriptsize 48}$, S.~De$^\textrm{\scriptsize 124}$\textsuperscript{,}$^\textrm{\scriptsize 49}$, A.~De Caro$^\textrm{\scriptsize 30}$, G.~de Cataldo$^\textrm{\scriptsize 53}$, C.~de Conti$^\textrm{\scriptsize 124}$, J.~de Cuveland$^\textrm{\scriptsize 42}$, A.~De Falco$^\textrm{\scriptsize 24}$, D.~De Gruttola$^\textrm{\scriptsize 30}$\textsuperscript{,}$^\textrm{\scriptsize 12}$, N.~De Marco$^\textrm{\scriptsize 59}$, S.~De Pasquale$^\textrm{\scriptsize 30}$, R.D.~De Souza$^\textrm{\scriptsize 125}$, H.F.~Degenhardt$^\textrm{\scriptsize 124}$, A.~Deisting$^\textrm{\scriptsize 109}$\textsuperscript{,}$^\textrm{\scriptsize 106}$, A.~Deloff$^\textrm{\scriptsize 88}$, C.~Deplano$^\textrm{\scriptsize 94}$, P.~Dhankher$^\textrm{\scriptsize 48}$, D.~Di Bari$^\textrm{\scriptsize 33}$, A.~Di Mauro$^\textrm{\scriptsize 
35}$, P.~Di Nezza$^\textrm{\scriptsize 51}$, B.~Di Ruzza$^\textrm{\scriptsize 57}$, I.~Diakonov$^\textrm{\scriptsize 138}$, M.A.~Diaz Corchero$^\textrm{\scriptsize 10}$, T.~Dietel$^\textrm{\scriptsize 102}$, P.~Dillenseger$^\textrm{\scriptsize 71}$, R.~Divi\`{a}$^\textrm{\scriptsize 35}$, {\O}.~Djuvsland$^\textrm{\scriptsize 22}$, A.~Dobrin$^\textrm{\scriptsize 35}$, D.~Domenicis Gimenez$^\textrm{\scriptsize 124}$, B.~D\"{o}nigus$^\textrm{\scriptsize 71}$, O.~Dordic$^\textrm{\scriptsize 21}$, L.V.V.~Doremalen$^\textrm{\scriptsize 64}$, T.~Drozhzhova$^\textrm{\scriptsize 71}$, A.K.~Dubey$^\textrm{\scriptsize 139}$, A.~Dubla$^\textrm{\scriptsize 109}$, L.~Ducroux$^\textrm{\scriptsize 134}$, A.K.~Duggal$^\textrm{\scriptsize 101}$, P.~Dupieux$^\textrm{\scriptsize 82}$, R.J.~Ehlers$^\textrm{\scriptsize 143}$, D.~Elia$^\textrm{\scriptsize 53}$, E.~Endress$^\textrm{\scriptsize 114}$, H.~Engel$^\textrm{\scriptsize 70}$, E.~Epple$^\textrm{\scriptsize 143}$, B.~Erazmus$^\textrm{\scriptsize 117}$, F.~Erhardt$^\textrm{\scriptsize 100}$, B.~Espagnon$^\textrm{\scriptsize 62}$, S.~Esumi$^\textrm{\scriptsize 133}$, G.~Eulisse$^\textrm{\scriptsize 35}$, J.~Eum$^\textrm{\scriptsize 19}$, D.~Evans$^\textrm{\scriptsize 113}$, S.~Evdokimov$^\textrm{\scriptsize 115}$, L.~Fabbietti$^\textrm{\scriptsize 36}$\textsuperscript{,}$^\textrm{\scriptsize 107}$, J.~Faivre$^\textrm{\scriptsize 83}$, A.~Fantoni$^\textrm{\scriptsize 51}$, M.~Fasel$^\textrm{\scriptsize 84}$\textsuperscript{,}$^\textrm{\scriptsize 97}$, L.~Feldkamp$^\textrm{\scriptsize 72}$, A.~Feliciello$^\textrm{\scriptsize 59}$, G.~Feofilov$^\textrm{\scriptsize 138}$, J.~Ferencei$^\textrm{\scriptsize 96}$, A.~Fern\'{a}ndez T\'{e}llez$^\textrm{\scriptsize 2}$, E.G.~Ferreiro$^\textrm{\scriptsize 16}$, A.~Ferretti$^\textrm{\scriptsize 26}$, A.~Festanti$^\textrm{\scriptsize 29}$, V.J.G.~Feuillard$^\textrm{\scriptsize 82}$\textsuperscript{,}$^\textrm{\scriptsize 76}$, J.~Figiel$^\textrm{\scriptsize 121}$, 
M.A.S.~Figueredo$^\textrm{\scriptsize 124}$, S.~Filchagin$^\textrm{\scriptsize 111}$, D.~Finogeev$^\textrm{\scriptsize 63}$, F.M.~Fionda$^\textrm{\scriptsize 24}$, E.M.~Fiore$^\textrm{\scriptsize 33}$, M.~Floris$^\textrm{\scriptsize 35}$, S.~Foertsch$^\textrm{\scriptsize 77}$, P.~Foka$^\textrm{\scriptsize 109}$, S.~Fokin$^\textrm{\scriptsize 92}$, E.~Fragiacomo$^\textrm{\scriptsize 60}$, A.~Francescon$^\textrm{\scriptsize 35}$, A.~Francisco$^\textrm{\scriptsize 117}$, U.~Frankenfeld$^\textrm{\scriptsize 109}$, G.G.~Fronze$^\textrm{\scriptsize 26}$, U.~Fuchs$^\textrm{\scriptsize 35}$, C.~Furget$^\textrm{\scriptsize 83}$, A.~Furs$^\textrm{\scriptsize 63}$, M.~Fusco Girard$^\textrm{\scriptsize 30}$, J.J.~Gaardh{\o}je$^\textrm{\scriptsize 93}$, M.~Gagliardi$^\textrm{\scriptsize 26}$, A.M.~Gago$^\textrm{\scriptsize 114}$, K.~Gajdosova$^\textrm{\scriptsize 93}$, M.~Gallio$^\textrm{\scriptsize 26}$, C.D.~Galvan$^\textrm{\scriptsize 123}$, P.~Ganoti$^\textrm{\scriptsize 87}$, C.~Gao$^\textrm{\scriptsize 7}$, C.~Garabatos$^\textrm{\scriptsize 109}$, E.~Garcia-Solis$^\textrm{\scriptsize 13}$, K.~Garg$^\textrm{\scriptsize 28}$, P.~Garg$^\textrm{\scriptsize 49}$, C.~Gargiulo$^\textrm{\scriptsize 35}$, P.~Gasik$^\textrm{\scriptsize 107}$\textsuperscript{,}$^\textrm{\scriptsize 36}$, E.F.~Gauger$^\textrm{\scriptsize 122}$, M.B.~Gay Ducati$^\textrm{\scriptsize 74}$, M.~Germain$^\textrm{\scriptsize 117}$, J.~Ghosh$^\textrm{\scriptsize 112}$, P.~Ghosh$^\textrm{\scriptsize 139}$, S.K.~Ghosh$^\textrm{\scriptsize 4}$, P.~Gianotti$^\textrm{\scriptsize 51}$, P.~Giubellino$^\textrm{\scriptsize 109}$\textsuperscript{,}$^\textrm{\scriptsize 59}$\textsuperscript{,}$^\textrm{\scriptsize 35}$, P.~Giubilato$^\textrm{\scriptsize 29}$, E.~Gladysz-Dziadus$^\textrm{\scriptsize 121}$, P.~Gl\"{a}ssel$^\textrm{\scriptsize 106}$, D.M.~Gom\'{e}z Coral$^\textrm{\scriptsize 75}$, A.~Gomez Ramirez$^\textrm{\scriptsize 70}$, A.S.~Gonzalez$^\textrm{\scriptsize 35}$, V.~Gonzalez$^\textrm{\scriptsize 10}$, 
P.~Gonz\'{a}lez-Zamora$^\textrm{\scriptsize 10}$, S.~Gorbunov$^\textrm{\scriptsize 42}$, L.~G\"{o}rlich$^\textrm{\scriptsize 121}$, S.~Gotovac$^\textrm{\scriptsize 120}$, V.~Grabski$^\textrm{\scriptsize 75}$, L.K.~Graczykowski$^\textrm{\scriptsize 140}$, K.L.~Graham$^\textrm{\scriptsize 113}$, L.~Greiner$^\textrm{\scriptsize 84}$, A.~Grelli$^\textrm{\scriptsize 64}$, C.~Grigoras$^\textrm{\scriptsize 35}$, V.~Grigoriev$^\textrm{\scriptsize 85}$, A.~Grigoryan$^\textrm{\scriptsize 1}$, S.~Grigoryan$^\textrm{\scriptsize 78}$, N.~Grion$^\textrm{\scriptsize 60}$, J.M.~Gronefeld$^\textrm{\scriptsize 109}$, F.~Grosa$^\textrm{\scriptsize 31}$, J.F.~Grosse-Oetringhaus$^\textrm{\scriptsize 35}$, R.~Grosso$^\textrm{\scriptsize 109}$, L.~Gruber$^\textrm{\scriptsize 116}$, F.~Guber$^\textrm{\scriptsize 63}$, R.~Guernane$^\textrm{\scriptsize 83}$, B.~Guerzoni$^\textrm{\scriptsize 27}$, K.~Gulbrandsen$^\textrm{\scriptsize 93}$, T.~Gunji$^\textrm{\scriptsize 132}$, A.~Gupta$^\textrm{\scriptsize 103}$, R.~Gupta$^\textrm{\scriptsize 103}$, I.B.~Guzman$^\textrm{\scriptsize 2}$, R.~Haake$^\textrm{\scriptsize 35}$, C.~Hadjidakis$^\textrm{\scriptsize 62}$, H.~Hamagaki$^\textrm{\scriptsize 86}$\textsuperscript{,}$^\textrm{\scriptsize 132}$, G.~Hamar$^\textrm{\scriptsize 142}$, J.C.~Hamon$^\textrm{\scriptsize 135}$, J.W.~Harris$^\textrm{\scriptsize 143}$, A.~Harton$^\textrm{\scriptsize 13}$, H.~Hassan$^\textrm{\scriptsize 83}$, D.~Hatzifotiadou$^\textrm{\scriptsize 12}$\textsuperscript{,}$^\textrm{\scriptsize 54}$, S.~Hayashi$^\textrm{\scriptsize 132}$, S.T.~Heckel$^\textrm{\scriptsize 71}$, E.~Hellb\"{a}r$^\textrm{\scriptsize 71}$, H.~Helstrup$^\textrm{\scriptsize 37}$, A.~Herghelegiu$^\textrm{\scriptsize 89}$, G.~Herrera Corral$^\textrm{\scriptsize 11}$, F.~Herrmann$^\textrm{\scriptsize 72}$, B.A.~Hess$^\textrm{\scriptsize 105}$, K.F.~Hetland$^\textrm{\scriptsize 37}$, H.~Hillemanns$^\textrm{\scriptsize 35}$, C.~Hills$^\textrm{\scriptsize 129}$, B.~Hippolyte$^\textrm{\scriptsize 135}$, 
J.~Hladky$^\textrm{\scriptsize 67}$, B.~Hohlweger$^\textrm{\scriptsize 107}$, D.~Horak$^\textrm{\scriptsize 39}$, S.~Hornung$^\textrm{\scriptsize 109}$, R.~Hosokawa$^\textrm{\scriptsize 133}$\textsuperscript{,}$^\textrm{\scriptsize 83}$, P.~Hristov$^\textrm{\scriptsize 35}$, C.~Hughes$^\textrm{\scriptsize 130}$, T.J.~Humanic$^\textrm{\scriptsize 18}$, N.~Hussain$^\textrm{\scriptsize 44}$, T.~Hussain$^\textrm{\scriptsize 17}$, D.~Hutter$^\textrm{\scriptsize 42}$, D.S.~Hwang$^\textrm{\scriptsize 20}$, S.A.~Iga~Buitron$^\textrm{\scriptsize 73}$, R.~Ilkaev$^\textrm{\scriptsize 111}$, M.~Inaba$^\textrm{\scriptsize 133}$, M.~Ippolitov$^\textrm{\scriptsize 85}$\textsuperscript{,}$^\textrm{\scriptsize 92}$, M.~Irfan$^\textrm{\scriptsize 17}$, V.~Isakov$^\textrm{\scriptsize 63}$, M.~Ivanov$^\textrm{\scriptsize 109}$, V.~Ivanov$^\textrm{\scriptsize 98}$, V.~Izucheev$^\textrm{\scriptsize 115}$, B.~Jacak$^\textrm{\scriptsize 84}$, N.~Jacazio$^\textrm{\scriptsize 27}$, P.M.~Jacobs$^\textrm{\scriptsize 84}$, M.B.~Jadhav$^\textrm{\scriptsize 48}$, S.~Jadlovska$^\textrm{\scriptsize 119}$, J.~Jadlovsky$^\textrm{\scriptsize 119}$, S.~Jaelani$^\textrm{\scriptsize 64}$, C.~Jahnke$^\textrm{\scriptsize 36}$, M.J.~Jakubowska$^\textrm{\scriptsize 140}$, M.A.~Janik$^\textrm{\scriptsize 140}$, P.H.S.Y.~Jayarathna$^\textrm{\scriptsize 127}$, C.~Jena$^\textrm{\scriptsize 90}$, S.~Jena$^\textrm{\scriptsize 127}$, M.~Jercic$^\textrm{\scriptsize 100}$, R.T.~Jimenez Bustamante$^\textrm{\scriptsize 109}$, P.G.~Jones$^\textrm{\scriptsize 113}$, A.~Jusko$^\textrm{\scriptsize 113}$, P.~Kalinak$^\textrm{\scriptsize 66}$, A.~Kalweit$^\textrm{\scriptsize 35}$, J.H.~Kang$^\textrm{\scriptsize 144}$, V.~Kaplin$^\textrm{\scriptsize 85}$, S.~Kar$^\textrm{\scriptsize 139}$, A.~Karasu Uysal$^\textrm{\scriptsize 81}$, O.~Karavichev$^\textrm{\scriptsize 63}$, T.~Karavicheva$^\textrm{\scriptsize 63}$, L.~Karayan$^\textrm{\scriptsize 106}$\textsuperscript{,}$^\textrm{\scriptsize 109}$, 
E.~Karpechev$^\textrm{\scriptsize 63}$, U.~Kebschull$^\textrm{\scriptsize 70}$, R.~Keidel$^\textrm{\scriptsize 145}$, D.L.D.~Keijdener$^\textrm{\scriptsize 64}$, M.~Keil$^\textrm{\scriptsize 35}$, B.~Ketzer$^\textrm{\scriptsize 45}$, P.~Khan$^\textrm{\scriptsize 112}$, S.A.~Khan$^\textrm{\scriptsize 139}$, A.~Khanzadeev$^\textrm{\scriptsize 98}$, Y.~Kharlov$^\textrm{\scriptsize 115}$, A.~Khatun$^\textrm{\scriptsize 17}$, A.~Khuntia$^\textrm{\scriptsize 49}$, M.M.~Kielbowicz$^\textrm{\scriptsize 121}$, B.~Kileng$^\textrm{\scriptsize 37}$, D.~Kim$^\textrm{\scriptsize 144}$, D.W.~Kim$^\textrm{\scriptsize 43}$, D.J.~Kim$^\textrm{\scriptsize 128}$, H.~Kim$^\textrm{\scriptsize 144}$, J.S.~Kim$^\textrm{\scriptsize 43}$, J.~Kim$^\textrm{\scriptsize 106}$, M.~Kim$^\textrm{\scriptsize 61}$, M.~Kim$^\textrm{\scriptsize 144}$, S.~Kim$^\textrm{\scriptsize 20}$, T.~Kim$^\textrm{\scriptsize 144}$, S.~Kirsch$^\textrm{\scriptsize 42}$, I.~Kisel$^\textrm{\scriptsize 42}$, S.~Kiselev$^\textrm{\scriptsize 65}$, A.~Kisiel$^\textrm{\scriptsize 140}$, G.~Kiss$^\textrm{\scriptsize 142}$, J.L.~Klay$^\textrm{\scriptsize 6}$, C.~Klein$^\textrm{\scriptsize 71}$, J.~Klein$^\textrm{\scriptsize 35}$, C.~Klein-B\"{o}sing$^\textrm{\scriptsize 72}$, S.~Klewin$^\textrm{\scriptsize 106}$, A.~Kluge$^\textrm{\scriptsize 35}$, M.L.~Knichel$^\textrm{\scriptsize 106}$, A.G.~Knospe$^\textrm{\scriptsize 127}$, C.~Kobdaj$^\textrm{\scriptsize 118}$, M.~Kofarago$^\textrm{\scriptsize 142}$, T.~Kollegger$^\textrm{\scriptsize 109}$, A.~Kolojvari$^\textrm{\scriptsize 138}$, V.~Kondratiev$^\textrm{\scriptsize 138}$, N.~Kondratyeva$^\textrm{\scriptsize 85}$, E.~Kondratyuk$^\textrm{\scriptsize 115}$, A.~Konevskikh$^\textrm{\scriptsize 63}$, M.~Konyushikhin$^\textrm{\scriptsize 141}$, M.~Kopcik$^\textrm{\scriptsize 119}$, M.~Kour$^\textrm{\scriptsize 103}$, C.~Kouzinopoulos$^\textrm{\scriptsize 35}$, O.~Kovalenko$^\textrm{\scriptsize 88}$, V.~Kovalenko$^\textrm{\scriptsize 138}$, M.~Kowalski$^\textrm{\scriptsize 
121}$, G.~Koyithatta Meethaleveedu$^\textrm{\scriptsize 48}$, I.~Kr\'{a}lik$^\textrm{\scriptsize 66}$, A.~Krav\v{c}\'{a}kov\'{a}$^\textrm{\scriptsize 40}$, M.~Krivda$^\textrm{\scriptsize 66}$\textsuperscript{,}$^\textrm{\scriptsize 113}$, F.~Krizek$^\textrm{\scriptsize 96}$, E.~Kryshen$^\textrm{\scriptsize 98}$, M.~Krzewicki$^\textrm{\scriptsize 42}$, A.M.~Kubera$^\textrm{\scriptsize 18}$, V.~Ku\v{c}era$^\textrm{\scriptsize 96}$, C.~Kuhn$^\textrm{\scriptsize 135}$, P.G.~Kuijer$^\textrm{\scriptsize 94}$, A.~Kumar$^\textrm{\scriptsize 103}$, J.~Kumar$^\textrm{\scriptsize 48}$, L.~Kumar$^\textrm{\scriptsize 101}$, S.~Kumar$^\textrm{\scriptsize 48}$, S.~Kundu$^\textrm{\scriptsize 90}$, P.~Kurashvili$^\textrm{\scriptsize 88}$, A.~Kurepin$^\textrm{\scriptsize 63}$, A.B.~Kurepin$^\textrm{\scriptsize 63}$, A.~Kuryakin$^\textrm{\scriptsize 111}$, S.~Kushpil$^\textrm{\scriptsize 96}$, M.J.~Kweon$^\textrm{\scriptsize 61}$, Y.~Kwon$^\textrm{\scriptsize 144}$, S.L.~La Pointe$^\textrm{\scriptsize 42}$, P.~La Rocca$^\textrm{\scriptsize 28}$, C.~Lagana Fernandes$^\textrm{\scriptsize 124}$, Y.S.~Lai$^\textrm{\scriptsize 84}$, I.~Lakomov$^\textrm{\scriptsize 35}$, R.~Langoy$^\textrm{\scriptsize 41}$, K.~Lapidus$^\textrm{\scriptsize 143}$, C.~Lara$^\textrm{\scriptsize 70}$, A.~Lardeux$^\textrm{\scriptsize 76}$\textsuperscript{,}$^\textrm{\scriptsize 21}$, A.~Lattuca$^\textrm{\scriptsize 26}$, E.~Laudi$^\textrm{\scriptsize 35}$, R.~Lavicka$^\textrm{\scriptsize 39}$, L.~Lazaridis$^\textrm{\scriptsize 35}$, R.~Lea$^\textrm{\scriptsize 25}$, L.~Leardini$^\textrm{\scriptsize 106}$, S.~Lee$^\textrm{\scriptsize 144}$, F.~Lehas$^\textrm{\scriptsize 94}$, S.~Lehner$^\textrm{\scriptsize 116}$, J.~Lehrbach$^\textrm{\scriptsize 42}$, R.C.~Lemmon$^\textrm{\scriptsize 95}$, V.~Lenti$^\textrm{\scriptsize 53}$, E.~Leogrande$^\textrm{\scriptsize 64}$, I.~Le\'{o}n Monz\'{o}n$^\textrm{\scriptsize 123}$, P.~L\'{e}vai$^\textrm{\scriptsize 142}$, S.~Li$^\textrm{\scriptsize 7}$, X.~Li$^\textrm{\scriptsize 
14}$, J.~Lien$^\textrm{\scriptsize 41}$, R.~Lietava$^\textrm{\scriptsize 113}$, B.~Lim$^\textrm{\scriptsize 19}$, S.~Lindal$^\textrm{\scriptsize 21}$, V.~Lindenstruth$^\textrm{\scriptsize 42}$, S.W.~Lindsay$^\textrm{\scriptsize 129}$, C.~Lippmann$^\textrm{\scriptsize 109}$, M.A.~Lisa$^\textrm{\scriptsize 18}$, V.~Litichevskyi$^\textrm{\scriptsize 46}$, H.M.~Ljunggren$^\textrm{\scriptsize 34}$, W.J.~Llope$^\textrm{\scriptsize 141}$, D.F.~Lodato$^\textrm{\scriptsize 64}$, P.I.~Loenne$^\textrm{\scriptsize 22}$, V.~Loginov$^\textrm{\scriptsize 85}$, C.~Loizides$^\textrm{\scriptsize 84}$, P.~Loncar$^\textrm{\scriptsize 120}$, X.~Lopez$^\textrm{\scriptsize 82}$, E.~L\'{o}pez Torres$^\textrm{\scriptsize 9}$, A.~Lowe$^\textrm{\scriptsize 142}$, P.~Luettig$^\textrm{\scriptsize 71}$, M.~Lunardon$^\textrm{\scriptsize 29}$, G.~Luparello$^\textrm{\scriptsize 25}$, M.~Lupi$^\textrm{\scriptsize 35}$, T.H.~Lutz$^\textrm{\scriptsize 143}$, A.~Maevskaya$^\textrm{\scriptsize 63}$, M.~Mager$^\textrm{\scriptsize 35}$, S.~Mahajan$^\textrm{\scriptsize 103}$, S.M.~Mahmood$^\textrm{\scriptsize 21}$, A.~Maire$^\textrm{\scriptsize 135}$, R.D.~Majka$^\textrm{\scriptsize 143}$, M.~Malaev$^\textrm{\scriptsize 98}$, L.~Malinina$^\textrm{\scriptsize 78}$\Aref{idp4144160}, D.~Mal'Kevich$^\textrm{\scriptsize 65}$, P.~Malzacher$^\textrm{\scriptsize 109}$, A.~Mamonov$^\textrm{\scriptsize 111}$, V.~Manko$^\textrm{\scriptsize 92}$, F.~Manso$^\textrm{\scriptsize 82}$, V.~Manzari$^\textrm{\scriptsize 53}$, Y.~Mao$^\textrm{\scriptsize 7}$, M.~Marchisone$^\textrm{\scriptsize 77}$\textsuperscript{,}$^\textrm{\scriptsize 131}$, J.~Mare\v{s}$^\textrm{\scriptsize 67}$, G.V.~Margagliotti$^\textrm{\scriptsize 25}$, A.~Margotti$^\textrm{\scriptsize 54}$, J.~Margutti$^\textrm{\scriptsize 64}$, A.~Mar\'{\i}n$^\textrm{\scriptsize 109}$, C.~Markert$^\textrm{\scriptsize 122}$, M.~Marquard$^\textrm{\scriptsize 71}$, N.A.~Martin$^\textrm{\scriptsize 109}$, P.~Martinengo$^\textrm{\scriptsize 35}$, 
J.A.L.~Martinez$^\textrm{\scriptsize 70}$, M.I.~Mart\'{\i}nez$^\textrm{\scriptsize 2}$, G.~Mart\'{\i}nez Garc\'{\i}a$^\textrm{\scriptsize 117}$, M.~Martinez Pedreira$^\textrm{\scriptsize 35}$, A.~Mas$^\textrm{\scriptsize 124}$, S.~Masciocchi$^\textrm{\scriptsize 109}$, M.~Masera$^\textrm{\scriptsize 26}$, A.~Masoni$^\textrm{\scriptsize 55}$, E.~Masson$^\textrm{\scriptsize 117}$, A.~Mastroserio$^\textrm{\scriptsize 33}$, A.M.~Mathis$^\textrm{\scriptsize 107}$\textsuperscript{,}$^\textrm{\scriptsize 36}$, A.~Matyja$^\textrm{\scriptsize 121}$\textsuperscript{,}$^\textrm{\scriptsize 130}$, C.~Mayer$^\textrm{\scriptsize 121}$, J.~Mazer$^\textrm{\scriptsize 130}$, M.~Mazzilli$^\textrm{\scriptsize 33}$, M.A.~Mazzoni$^\textrm{\scriptsize 58}$, F.~Meddi$^\textrm{\scriptsize 23}$, Y.~Melikyan$^\textrm{\scriptsize 85}$, A.~Menchaca-Rocha$^\textrm{\scriptsize 75}$, E.~Meninno$^\textrm{\scriptsize 30}$, J.~Mercado P\'erez$^\textrm{\scriptsize 106}$, M.~Meres$^\textrm{\scriptsize 38}$, S.~Mhlanga$^\textrm{\scriptsize 102}$, Y.~Miake$^\textrm{\scriptsize 133}$, M.M.~Mieskolainen$^\textrm{\scriptsize 46}$, D.~Mihaylov$^\textrm{\scriptsize 107}$, D.L.~Mihaylov$^\textrm{\scriptsize 107}$, K.~Mikhaylov$^\textrm{\scriptsize 65}$\textsuperscript{,}$^\textrm{\scriptsize 78}$, L.~Milano$^\textrm{\scriptsize 84}$, J.~Milosevic$^\textrm{\scriptsize 21}$, A.~Mischke$^\textrm{\scriptsize 64}$, A.N.~Mishra$^\textrm{\scriptsize 49}$, D.~Mi\'{s}kowiec$^\textrm{\scriptsize 109}$, J.~Mitra$^\textrm{\scriptsize 139}$, C.M.~Mitu$^\textrm{\scriptsize 69}$, N.~Mohammadi$^\textrm{\scriptsize 64}$, B.~Mohanty$^\textrm{\scriptsize 90}$, M.~Mohisin Khan$^\textrm{\scriptsize 17}$\Aref{idp4502608}, E.~Montes$^\textrm{\scriptsize 10}$, D.A.~Moreira De Godoy$^\textrm{\scriptsize 72}$, L.A.P.~Moreno$^\textrm{\scriptsize 2}$, S.~Moretto$^\textrm{\scriptsize 29}$, A.~Morreale$^\textrm{\scriptsize 117}$, A.~Morsch$^\textrm{\scriptsize 35}$, V.~Muccifora$^\textrm{\scriptsize 51}$, E.~Mudnic$^\textrm{\scriptsize 
120}$, D.~M{\"u}hlheim$^\textrm{\scriptsize 72}$, S.~Muhuri$^\textrm{\scriptsize 139}$, M.~Mukherjee$^\textrm{\scriptsize 4}$\textsuperscript{,}$^\textrm{\scriptsize 139}$, J.D.~Mulligan$^\textrm{\scriptsize 143}$, M.G.~Munhoz$^\textrm{\scriptsize 124}$, K.~M\"{u}nning$^\textrm{\scriptsize 45}$, R.H.~Munzer$^\textrm{\scriptsize 71}$, H.~Murakami$^\textrm{\scriptsize 132}$, S.~Murray$^\textrm{\scriptsize 77}$, L.~Musa$^\textrm{\scriptsize 35}$, J.~Musinsky$^\textrm{\scriptsize 66}$, C.J.~Myers$^\textrm{\scriptsize 127}$, J.W.~Myrcha$^\textrm{\scriptsize 140}$, B.~Naik$^\textrm{\scriptsize 48}$, R.~Nair$^\textrm{\scriptsize 88}$, B.K.~Nandi$^\textrm{\scriptsize 48}$, R.~Nania$^\textrm{\scriptsize 54}$\textsuperscript{,}$^\textrm{\scriptsize 12}$, E.~Nappi$^\textrm{\scriptsize 53}$, A.~Narayan$^\textrm{\scriptsize 48}$, M.U.~Naru$^\textrm{\scriptsize 15}$, H.~Natal da Luz$^\textrm{\scriptsize 124}$, C.~Nattrass$^\textrm{\scriptsize 130}$, S.R.~Navarro$^\textrm{\scriptsize 2}$, K.~Nayak$^\textrm{\scriptsize 90}$, R.~Nayak$^\textrm{\scriptsize 48}$, T.K.~Nayak$^\textrm{\scriptsize 139}$, S.~Nazarenko$^\textrm{\scriptsize 111}$, A.~Nedosekin$^\textrm{\scriptsize 65}$, R.A.~Negrao De Oliveira$^\textrm{\scriptsize 35}$, L.~Nellen$^\textrm{\scriptsize 73}$, S.V.~Nesbo$^\textrm{\scriptsize 37}$, F.~Ng$^\textrm{\scriptsize 127}$, M.~Nicassio$^\textrm{\scriptsize 109}$, M.~Niculescu$^\textrm{\scriptsize 69}$, J.~Niedziela$^\textrm{\scriptsize 35}$, B.S.~Nielsen$^\textrm{\scriptsize 93}$, S.~Nikolaev$^\textrm{\scriptsize 92}$, S.~Nikulin$^\textrm{\scriptsize 92}$, V.~Nikulin$^\textrm{\scriptsize 98}$, A.~Nobuhiro$^\textrm{\scriptsize 47}$, F.~Noferini$^\textrm{\scriptsize 12}$\textsuperscript{,}$^\textrm{\scriptsize 54}$, P.~Nomokonov$^\textrm{\scriptsize 78}$, G.~Nooren$^\textrm{\scriptsize 64}$, J.C.C.~Noris$^\textrm{\scriptsize 2}$, J.~Norman$^\textrm{\scriptsize 129}$, A.~Nyanin$^\textrm{\scriptsize 92}$, J.~Nystrand$^\textrm{\scriptsize 22}$, 
H.~Oeschler$^\textrm{\scriptsize 106}$\Aref{0}, S.~Oh$^\textrm{\scriptsize 143}$, A.~Ohlson$^\textrm{\scriptsize 106}$\textsuperscript{,}$^\textrm{\scriptsize 35}$, T.~Okubo$^\textrm{\scriptsize 47}$, L.~Olah$^\textrm{\scriptsize 142}$, J.~Oleniacz$^\textrm{\scriptsize 140}$, A.C.~Oliveira Da Silva$^\textrm{\scriptsize 124}$, M.H.~Oliver$^\textrm{\scriptsize 143}$, J.~Onderwaater$^\textrm{\scriptsize 109}$, C.~Oppedisano$^\textrm{\scriptsize 59}$, R.~Orava$^\textrm{\scriptsize 46}$, M.~Oravec$^\textrm{\scriptsize 119}$, A.~Ortiz Velasquez$^\textrm{\scriptsize 73}$, A.~Oskarsson$^\textrm{\scriptsize 34}$, J.~Otwinowski$^\textrm{\scriptsize 121}$, K.~Oyama$^\textrm{\scriptsize 86}$, Y.~Pachmayer$^\textrm{\scriptsize 106}$, V.~Pacik$^\textrm{\scriptsize 93}$, D.~Pagano$^\textrm{\scriptsize 137}$, P.~Pagano$^\textrm{\scriptsize 30}$, G.~Pai\'{c}$^\textrm{\scriptsize 73}$, P.~Palni$^\textrm{\scriptsize 7}$, J.~Pan$^\textrm{\scriptsize 141}$, A.K.~Pandey$^\textrm{\scriptsize 48}$, S.~Panebianco$^\textrm{\scriptsize 76}$, V.~Papikyan$^\textrm{\scriptsize 1}$, G.S.~Pappalardo$^\textrm{\scriptsize 56}$, P.~Pareek$^\textrm{\scriptsize 49}$, J.~Park$^\textrm{\scriptsize 61}$, W.J.~Park$^\textrm{\scriptsize 109}$, S.~Parmar$^\textrm{\scriptsize 101}$, A.~Passfeld$^\textrm{\scriptsize 72}$, S.P.~Pathak$^\textrm{\scriptsize 127}$, V.~Paticchio$^\textrm{\scriptsize 53}$, R.N.~Patra$^\textrm{\scriptsize 139}$, B.~Paul$^\textrm{\scriptsize 59}$, H.~Pei$^\textrm{\scriptsize 7}$, T.~Peitzmann$^\textrm{\scriptsize 64}$, X.~Peng$^\textrm{\scriptsize 7}$, L.G.~Pereira$^\textrm{\scriptsize 74}$, H.~Pereira Da Costa$^\textrm{\scriptsize 76}$, D.~Peresunko$^\textrm{\scriptsize 85}$\textsuperscript{,}$^\textrm{\scriptsize 92}$, E.~Perez Lezama$^\textrm{\scriptsize 71}$, V.~Peskov$^\textrm{\scriptsize 71}$, Y.~Pestov$^\textrm{\scriptsize 5}$, V.~Petr\'{a}\v{c}ek$^\textrm{\scriptsize 39}$, V.~Petrov$^\textrm{\scriptsize 115}$, M.~Petrovici$^\textrm{\scriptsize 89}$, 
C.~Petta$^\textrm{\scriptsize 28}$, R.P.~Pezzi$^\textrm{\scriptsize 74}$, S.~Piano$^\textrm{\scriptsize 60}$, M.~Pikna$^\textrm{\scriptsize 38}$, P.~Pillot$^\textrm{\scriptsize 117}$, L.O.D.L.~Pimentel$^\textrm{\scriptsize 93}$, O.~Pinazza$^\textrm{\scriptsize 54}$\textsuperscript{,}$^\textrm{\scriptsize 35}$, L.~Pinsky$^\textrm{\scriptsize 127}$, D.B.~Piyarathna$^\textrm{\scriptsize 127}$, M.~P\l osko\'{n}$^\textrm{\scriptsize 84}$, M.~Planinic$^\textrm{\scriptsize 100}$, F.~Pliquett$^\textrm{\scriptsize 71}$, J.~Pluta$^\textrm{\scriptsize 140}$, S.~Pochybova$^\textrm{\scriptsize 142}$, P.L.M.~Podesta-Lerma$^\textrm{\scriptsize 123}$, M.G.~Poghosyan$^\textrm{\scriptsize 97}$, B.~Polichtchouk$^\textrm{\scriptsize 115}$, N.~Poljak$^\textrm{\scriptsize 100}$, W.~Poonsawat$^\textrm{\scriptsize 118}$, A.~Pop$^\textrm{\scriptsize 89}$, H.~Poppenborg$^\textrm{\scriptsize 72}$, S.~Porteboeuf-Houssais$^\textrm{\scriptsize 82}$, J.~Porter$^\textrm{\scriptsize 84}$, V.~Pozdniakov$^\textrm{\scriptsize 78}$, S.K.~Prasad$^\textrm{\scriptsize 4}$, R.~Preghenella$^\textrm{\scriptsize 54}$\textsuperscript{,}$^\textrm{\scriptsize 35}$, F.~Prino$^\textrm{\scriptsize 59}$, C.A.~Pruneau$^\textrm{\scriptsize 141}$, I.~Pshenichnov$^\textrm{\scriptsize 63}$, M.~Puccio$^\textrm{\scriptsize 26}$, G.~Puddu$^\textrm{\scriptsize 24}$, P.~Pujahari$^\textrm{\scriptsize 141}$, V.~Punin$^\textrm{\scriptsize 111}$, J.~Putschke$^\textrm{\scriptsize 141}$, A.~Rachevski$^\textrm{\scriptsize 60}$, S.~Raha$^\textrm{\scriptsize 4}$, S.~Rajput$^\textrm{\scriptsize 103}$, J.~Rak$^\textrm{\scriptsize 128}$, A.~Rakotozafindrabe$^\textrm{\scriptsize 76}$, L.~Ramello$^\textrm{\scriptsize 32}$, F.~Rami$^\textrm{\scriptsize 135}$, D.B.~Rana$^\textrm{\scriptsize 127}$, R.~Raniwala$^\textrm{\scriptsize 104}$, S.~Raniwala$^\textrm{\scriptsize 104}$, S.S.~R\"{a}s\"{a}nen$^\textrm{\scriptsize 46}$, B.T.~Rascanu$^\textrm{\scriptsize 71}$, D.~Rathee$^\textrm{\scriptsize 101}$, V.~Ratza$^\textrm{\scriptsize 45}$, 
I.~Ravasenga$^\textrm{\scriptsize 31}$, K.F.~Read$^\textrm{\scriptsize 97}$\textsuperscript{,}$^\textrm{\scriptsize 130}$, K.~Redlich$^\textrm{\scriptsize 88}$\Aref{idp5486048}, A.~Rehman$^\textrm{\scriptsize 22}$, P.~Reichelt$^\textrm{\scriptsize 71}$, F.~Reidt$^\textrm{\scriptsize 35}$, X.~Ren$^\textrm{\scriptsize 7}$, R.~Renfordt$^\textrm{\scriptsize 71}$, A.R.~Reolon$^\textrm{\scriptsize 51}$, A.~Reshetin$^\textrm{\scriptsize 63}$, K.~Reygers$^\textrm{\scriptsize 106}$, V.~Riabov$^\textrm{\scriptsize 98}$, R.A.~Ricci$^\textrm{\scriptsize 52}$, T.~Richert$^\textrm{\scriptsize 64}$, M.~Richter$^\textrm{\scriptsize 21}$, P.~Riedler$^\textrm{\scriptsize 35}$, W.~Riegler$^\textrm{\scriptsize 35}$, F.~Riggi$^\textrm{\scriptsize 28}$, C.~Ristea$^\textrm{\scriptsize 69}$, M.~Rodr\'{i}guez Cahuantzi$^\textrm{\scriptsize 2}$, K.~R{\o}ed$^\textrm{\scriptsize 21}$, E.~Rogochaya$^\textrm{\scriptsize 78}$, D.~Rohr$^\textrm{\scriptsize 42}$\textsuperscript{,}$^\textrm{\scriptsize 35}$, D.~R\"ohrich$^\textrm{\scriptsize 22}$, P.S.~Rokita$^\textrm{\scriptsize 140}$, F.~Ronchetti$^\textrm{\scriptsize 51}$, P.~Rosnet$^\textrm{\scriptsize 82}$, A.~Rossi$^\textrm{\scriptsize 29}$, A.~Rotondi$^\textrm{\scriptsize 136}$, F.~Roukoutakis$^\textrm{\scriptsize 87}$, A.~Roy$^\textrm{\scriptsize 49}$, C.~Roy$^\textrm{\scriptsize 135}$, P.~Roy$^\textrm{\scriptsize 112}$, A.J.~Rubio Montero$^\textrm{\scriptsize 10}$, O.V.~Rueda$^\textrm{\scriptsize 73}$, R.~Rui$^\textrm{\scriptsize 25}$, R.~Russo$^\textrm{\scriptsize 26}$, A.~Rustamov$^\textrm{\scriptsize 91}$, E.~Ryabinkin$^\textrm{\scriptsize 92}$, Y.~Ryabov$^\textrm{\scriptsize 98}$, A.~Rybicki$^\textrm{\scriptsize 121}$, S.~Saarinen$^\textrm{\scriptsize 46}$, S.~Sadhu$^\textrm{\scriptsize 139}$, S.~Sadovsky$^\textrm{\scriptsize 115}$, K.~\v{S}afa\v{r}\'{\i}k$^\textrm{\scriptsize 35}$, S.K.~Saha$^\textrm{\scriptsize 139}$, B.~Sahlmuller$^\textrm{\scriptsize 71}$, B.~Sahoo$^\textrm{\scriptsize 48}$, P.~Sahoo$^\textrm{\scriptsize 49}$, 
R.~Sahoo$^\textrm{\scriptsize 49}$, S.~Sahoo$^\textrm{\scriptsize 68}$, P.K.~Sahu$^\textrm{\scriptsize 68}$, J.~Saini$^\textrm{\scriptsize 139}$, S.~Sakai$^\textrm{\scriptsize 51}$\textsuperscript{,}$^\textrm{\scriptsize 133}$, M.A.~Saleh$^\textrm{\scriptsize 141}$, J.~Salzwedel$^\textrm{\scriptsize 18}$, S.~Sambyal$^\textrm{\scriptsize 103}$, V.~Samsonov$^\textrm{\scriptsize 85}$\textsuperscript{,}$^\textrm{\scriptsize 98}$, A.~Sandoval$^\textrm{\scriptsize 75}$, D.~Sarkar$^\textrm{\scriptsize 139}$, N.~Sarkar$^\textrm{\scriptsize 139}$, P.~Sarma$^\textrm{\scriptsize 44}$, M.H.P.~Sas$^\textrm{\scriptsize 64}$, E.~Scapparone$^\textrm{\scriptsize 54}$, F.~Scarlassara$^\textrm{\scriptsize 29}$, R.P.~Scharenberg$^\textrm{\scriptsize 108}$, H.S.~Scheid$^\textrm{\scriptsize 71}$, C.~Schiaua$^\textrm{\scriptsize 89}$, R.~Schicker$^\textrm{\scriptsize 106}$, C.~Schmidt$^\textrm{\scriptsize 109}$, H.R.~Schmidt$^\textrm{\scriptsize 105}$, M.O.~Schmidt$^\textrm{\scriptsize 106}$, M.~Schmidt$^\textrm{\scriptsize 105}$, S.~Schuchmann$^\textrm{\scriptsize 106}$, J.~Schukraft$^\textrm{\scriptsize 35}$, Y.~Schutz$^\textrm{\scriptsize 35}$\textsuperscript{,}$^\textrm{\scriptsize 135}$\textsuperscript{,}$^\textrm{\scriptsize 117}$, K.~Schwarz$^\textrm{\scriptsize 109}$, K.~Schweda$^\textrm{\scriptsize 109}$, G.~Scioli$^\textrm{\scriptsize 27}$, E.~Scomparin$^\textrm{\scriptsize 59}$, R.~Scott$^\textrm{\scriptsize 130}$, M.~\v{S}ef\v{c}\'ik$^\textrm{\scriptsize 40}$, J.E.~Seger$^\textrm{\scriptsize 99}$, Y.~Sekiguchi$^\textrm{\scriptsize 132}$, D.~Sekihata$^\textrm{\scriptsize 47}$, I.~Selyuzhenkov$^\textrm{\scriptsize 109}$\textsuperscript{,}$^\textrm{\scriptsize 85}$, K.~Senosi$^\textrm{\scriptsize 77}$, S.~Senyukov$^\textrm{\scriptsize 3}$\textsuperscript{,}$^\textrm{\scriptsize 35}$\textsuperscript{,}$^\textrm{\scriptsize 135}$, E.~Serradilla$^\textrm{\scriptsize 75}$\textsuperscript{,}$^\textrm{\scriptsize 10}$, P.~Sett$^\textrm{\scriptsize 48}$, 
A.~Sevcenco$^\textrm{\scriptsize 69}$, A.~Shabanov$^\textrm{\scriptsize 63}$, A.~Shabetai$^\textrm{\scriptsize 117}$, R.~Shahoyan$^\textrm{\scriptsize 35}$, W.~Shaikh$^\textrm{\scriptsize 112}$, A.~Shangaraev$^\textrm{\scriptsize 115}$, A.~Sharma$^\textrm{\scriptsize 101}$, A.~Sharma$^\textrm{\scriptsize 103}$, M.~Sharma$^\textrm{\scriptsize 103}$, M.~Sharma$^\textrm{\scriptsize 103}$, N.~Sharma$^\textrm{\scriptsize 130}$\textsuperscript{,}$^\textrm{\scriptsize 101}$, A.I.~Sheikh$^\textrm{\scriptsize 139}$, K.~Shigaki$^\textrm{\scriptsize 47}$, Q.~Shou$^\textrm{\scriptsize 7}$, K.~Shtejer$^\textrm{\scriptsize 26}$\textsuperscript{,}$^\textrm{\scriptsize 9}$, Y.~Sibiriak$^\textrm{\scriptsize 92}$, S.~Siddhanta$^\textrm{\scriptsize 55}$, K.M.~Sielewicz$^\textrm{\scriptsize 35}$, T.~Siemiarczuk$^\textrm{\scriptsize 88}$, D.~Silvermyr$^\textrm{\scriptsize 34}$, C.~Silvestre$^\textrm{\scriptsize 83}$, G.~Simatovic$^\textrm{\scriptsize 100}$, G.~Simonetti$^\textrm{\scriptsize 35}$, R.~Singaraju$^\textrm{\scriptsize 139}$, R.~Singh$^\textrm{\scriptsize 90}$, V.~Singhal$^\textrm{\scriptsize 139}$, T.~Sinha$^\textrm{\scriptsize 112}$, B.~Sitar$^\textrm{\scriptsize 38}$, M.~Sitta$^\textrm{\scriptsize 32}$, T.B.~Skaali$^\textrm{\scriptsize 21}$, M.~Slupecki$^\textrm{\scriptsize 128}$, N.~Smirnov$^\textrm{\scriptsize 143}$, R.J.M.~Snellings$^\textrm{\scriptsize 64}$, T.W.~Snellman$^\textrm{\scriptsize 128}$, J.~Song$^\textrm{\scriptsize 19}$, M.~Song$^\textrm{\scriptsize 144}$, F.~Soramel$^\textrm{\scriptsize 29}$, S.~Sorensen$^\textrm{\scriptsize 130}$, F.~Sozzi$^\textrm{\scriptsize 109}$, E.~Spiriti$^\textrm{\scriptsize 51}$, I.~Sputowska$^\textrm{\scriptsize 121}$, B.K.~Srivastava$^\textrm{\scriptsize 108}$, J.~Stachel$^\textrm{\scriptsize 106}$, I.~Stan$^\textrm{\scriptsize 69}$, P.~Stankus$^\textrm{\scriptsize 97}$, E.~Stenlund$^\textrm{\scriptsize 34}$, D.~Stocco$^\textrm{\scriptsize 117}$, P.~Strmen$^\textrm{\scriptsize 38}$, A.A.P.~Suaide$^\textrm{\scriptsize 124}$, 
T.~Sugitate$^\textrm{\scriptsize 47}$, C.~Suire$^\textrm{\scriptsize 62}$, M.~Suleymanov$^\textrm{\scriptsize 15}$, M.~Suljic$^\textrm{\scriptsize 25}$, R.~Sultanov$^\textrm{\scriptsize 65}$, M.~\v{S}umbera$^\textrm{\scriptsize 96}$, S.~Sumowidagdo$^\textrm{\scriptsize 50}$, K.~Suzuki$^\textrm{\scriptsize 116}$, S.~Swain$^\textrm{\scriptsize 68}$, A.~Szabo$^\textrm{\scriptsize 38}$, I.~Szarka$^\textrm{\scriptsize 38}$, A.~Szczepankiewicz$^\textrm{\scriptsize 140}$, U.~Tabassam$^\textrm{\scriptsize 15}$, J.~Takahashi$^\textrm{\scriptsize 125}$, G.J.~Tambave$^\textrm{\scriptsize 22}$, N.~Tanaka$^\textrm{\scriptsize 133}$, M.~Tarhini$^\textrm{\scriptsize 62}$, M.~Tariq$^\textrm{\scriptsize 17}$, M.G.~Tarzila$^\textrm{\scriptsize 89}$, A.~Tauro$^\textrm{\scriptsize 35}$, G.~Tejeda Mu\~{n}oz$^\textrm{\scriptsize 2}$, A.~Telesca$^\textrm{\scriptsize 35}$, K.~Terasaki$^\textrm{\scriptsize 132}$, C.~Terrevoli$^\textrm{\scriptsize 29}$, B.~Teyssier$^\textrm{\scriptsize 134}$, D.~Thakur$^\textrm{\scriptsize 49}$, S.~Thakur$^\textrm{\scriptsize 139}$, D.~Thomas$^\textrm{\scriptsize 122}$, R.~Tieulent$^\textrm{\scriptsize 134}$, A.~Tikhonov$^\textrm{\scriptsize 63}$, A.R.~Timmins$^\textrm{\scriptsize 127}$, A.~Toia$^\textrm{\scriptsize 71}$, S.~Tripathy$^\textrm{\scriptsize 49}$, S.~Trogolo$^\textrm{\scriptsize 26}$, G.~Trombetta$^\textrm{\scriptsize 33}$, L.~Tropp$^\textrm{\scriptsize 40}$, V.~Trubnikov$^\textrm{\scriptsize 3}$, W.H.~Trzaska$^\textrm{\scriptsize 128}$, B.A.~Trzeciak$^\textrm{\scriptsize 64}$, T.~Tsuji$^\textrm{\scriptsize 132}$, A.~Tumkin$^\textrm{\scriptsize 111}$, R.~Turrisi$^\textrm{\scriptsize 57}$, T.S.~Tveter$^\textrm{\scriptsize 21}$, K.~Ullaland$^\textrm{\scriptsize 22}$, E.N.~Umaka$^\textrm{\scriptsize 127}$, A.~Uras$^\textrm{\scriptsize 134}$, G.L.~Usai$^\textrm{\scriptsize 24}$, A.~Utrobicic$^\textrm{\scriptsize 100}$, M.~Vala$^\textrm{\scriptsize 66}$\textsuperscript{,}$^\textrm{\scriptsize 119}$, J.~Van Der Maarel$^\textrm{\scriptsize 64}$, 
J.W.~Van Hoorne$^\textrm{\scriptsize 35}$, M.~van Leeuwen$^\textrm{\scriptsize 64}$, T.~Vanat$^\textrm{\scriptsize 96}$, P.~Vande Vyvre$^\textrm{\scriptsize 35}$, D.~Varga$^\textrm{\scriptsize 142}$, A.~Vargas$^\textrm{\scriptsize 2}$, M.~Vargyas$^\textrm{\scriptsize 128}$, R.~Varma$^\textrm{\scriptsize 48}$, M.~Vasileiou$^\textrm{\scriptsize 87}$, A.~Vasiliev$^\textrm{\scriptsize 92}$, A.~Vauthier$^\textrm{\scriptsize 83}$, O.~V\'azquez Doce$^\textrm{\scriptsize 107}$\textsuperscript{,}$^\textrm{\scriptsize 36}$, V.~Vechernin$^\textrm{\scriptsize 138}$, A.M.~Veen$^\textrm{\scriptsize 64}$, A.~Velure$^\textrm{\scriptsize 22}$, E.~Vercellin$^\textrm{\scriptsize 26}$, S.~Vergara Lim\'on$^\textrm{\scriptsize 2}$, R.~Vernet$^\textrm{\scriptsize 8}$, R.~V\'ertesi$^\textrm{\scriptsize 142}$, L.~Vickovic$^\textrm{\scriptsize 120}$, S.~Vigolo$^\textrm{\scriptsize 64}$, J.~Viinikainen$^\textrm{\scriptsize 128}$, Z.~Vilakazi$^\textrm{\scriptsize 131}$, O.~Villalobos Baillie$^\textrm{\scriptsize 113}$, A.~Villatoro Tello$^\textrm{\scriptsize 2}$, A.~Vinogradov$^\textrm{\scriptsize 92}$, L.~Vinogradov$^\textrm{\scriptsize 138}$, T.~Virgili$^\textrm{\scriptsize 30}$, V.~Vislavicius$^\textrm{\scriptsize 34}$, A.~Vodopyanov$^\textrm{\scriptsize 78}$, M.A.~V\"{o}lkl$^\textrm{\scriptsize 106}$\textsuperscript{,}$^\textrm{\scriptsize 105}$, K.~Voloshin$^\textrm{\scriptsize 65}$, S.A.~Voloshin$^\textrm{\scriptsize 141}$, G.~Volpe$^\textrm{\scriptsize 33}$, B.~von Haller$^\textrm{\scriptsize 35}$, I.~Vorobyev$^\textrm{\scriptsize 36}$\textsuperscript{,}$^\textrm{\scriptsize 107}$, D.~Voscek$^\textrm{\scriptsize 119}$, D.~Vranic$^\textrm{\scriptsize 35}$\textsuperscript{,}$^\textrm{\scriptsize 109}$, J.~Vrl\'{a}kov\'{a}$^\textrm{\scriptsize 40}$, B.~Wagner$^\textrm{\scriptsize 22}$, J.~Wagner$^\textrm{\scriptsize 109}$, H.~Wang$^\textrm{\scriptsize 64}$, M.~Wang$^\textrm{\scriptsize 7}$, D.~Watanabe$^\textrm{\scriptsize 133}$, Y.~Watanabe$^\textrm{\scriptsize 132}$, 
M.~Weber$^\textrm{\scriptsize 116}$, S.G.~Weber$^\textrm{\scriptsize 109}$, D.F.~Weiser$^\textrm{\scriptsize 106}$, S.C.~Wenzel$^\textrm{\scriptsize 35}$, J.P.~Wessels$^\textrm{\scriptsize 72}$, U.~Westerhoff$^\textrm{\scriptsize 72}$, A.M.~Whitehead$^\textrm{\scriptsize 102}$, J.~Wiechula$^\textrm{\scriptsize 71}$, J.~Wikne$^\textrm{\scriptsize 21}$, G.~Wilk$^\textrm{\scriptsize 88}$, J.~Wilkinson$^\textrm{\scriptsize 106}$, G.A.~Willems$^\textrm{\scriptsize 72}$, M.C.S.~Williams$^\textrm{\scriptsize 54}$, E.~Willsher$^\textrm{\scriptsize 113}$, B.~Windelband$^\textrm{\scriptsize 106}$, W.E.~Witt$^\textrm{\scriptsize 130}$, S.~Yalcin$^\textrm{\scriptsize 81}$, K.~Yamakawa$^\textrm{\scriptsize 47}$, P.~Yang$^\textrm{\scriptsize 7}$, S.~Yano$^\textrm{\scriptsize 47}$, Z.~Yin$^\textrm{\scriptsize 7}$, H.~Yokoyama$^\textrm{\scriptsize 133}$\textsuperscript{,}$^\textrm{\scriptsize 83}$, I.-K.~Yoo$^\textrm{\scriptsize 35}$\textsuperscript{,}$^\textrm{\scriptsize 19}$, J.H.~Yoon$^\textrm{\scriptsize 61}$, V.~Yurchenko$^\textrm{\scriptsize 3}$, V.~Zaccolo$^\textrm{\scriptsize 59}$\textsuperscript{,}$^\textrm{\scriptsize 93}$, A.~Zaman$^\textrm{\scriptsize 15}$, C.~Zampolli$^\textrm{\scriptsize 35}$, H.J.C.~Zanoli$^\textrm{\scriptsize 124}$, N.~Zardoshti$^\textrm{\scriptsize 113}$, A.~Zarochentsev$^\textrm{\scriptsize 138}$, P.~Z\'{a}vada$^\textrm{\scriptsize 67}$, N.~Zaviyalov$^\textrm{\scriptsize 111}$, H.~Zbroszczyk$^\textrm{\scriptsize 140}$, M.~Zhalov$^\textrm{\scriptsize 98}$, H.~Zhang$^\textrm{\scriptsize 22}$\textsuperscript{,}$^\textrm{\scriptsize 7}$, X.~Zhang$^\textrm{\scriptsize 7}$, Y.~Zhang$^\textrm{\scriptsize 7}$, C.~Zhang$^\textrm{\scriptsize 64}$, Z.~Zhang$^\textrm{\scriptsize 7}$\textsuperscript{,}$^\textrm{\scriptsize 82}$, C.~Zhao$^\textrm{\scriptsize 21}$, N.~Zhigareva$^\textrm{\scriptsize 65}$, D.~Zhou$^\textrm{\scriptsize 7}$, Y.~Zhou$^\textrm{\scriptsize 93}$, Z.~Zhou$^\textrm{\scriptsize 22}$, H.~Zhu$^\textrm{\scriptsize 22}$, 
J.~Zhu$^\textrm{\scriptsize 117}$\textsuperscript{,}$^\textrm{\scriptsize 7}$, X.~Zhu$^\textrm{\scriptsize 7}$, A.~Zichichi$^\textrm{\scriptsize 12}$\textsuperscript{,}$^\textrm{\scriptsize 27}$, A.~Zimmermann$^\textrm{\scriptsize 106}$, M.B.~Zimmermann$^\textrm{\scriptsize 35}$\textsuperscript{,}$^\textrm{\scriptsize 72}$, G.~Zinovjev$^\textrm{\scriptsize 3}$, J.~Zmeskal$^\textrm{\scriptsize 116}$, S.~Zou$^\textrm{\scriptsize 7}$ \renewcommand\labelenumi{\textsuperscript{\theenumi}~} \section*{Affiliation notes} \renewcommand\theenumi{\roman{enumi}} \begin{Authlist} \item \Adef{0}Deceased \item \Adef{idp1835040}{Also at: Dipartimento DET del Politecnico di Torino, Turin, Italy} \item \Adef{idp1854432}{Also at: Georgia State University, Atlanta, Georgia, United States} \item \Adef{idp4144160}{Also at: M.V. Lomonosov Moscow State University, D.V. Skobeltsyn Institute of Nuclear Physics, Moscow, Russia} \item \Adef{idp4502608}{Also at: Department of Applied Physics, Aligarh Muslim University, Aligarh, India} \item \Adef{idp5486048}{Also at: Institute of Theoretical Physics, University of Wroclaw, Poland} \end{Authlist} \section*{Collaboration Institutes} \renewcommand\theenumi{\arabic{enumi}~} $^{1}$A.I. 
Alikhanyan National Science Laboratory (Yerevan Physics Institute) Foundation, Yerevan, Armenia \\ $^{2}$Benem\'{e}rita Universidad Aut\'{o}noma de Puebla, Puebla, Mexico \\ $^{3}$Bogolyubov Institute for Theoretical Physics, Kiev, Ukraine \\ $^{4}$Bose Institute, Department of Physics and Centre for Astroparticle Physics and Space Science (CAPSS), Kolkata, India \\ $^{5}$Budker Institute for Nuclear Physics, Novosibirsk, Russia \\ $^{6}$California Polytechnic State University, San Luis Obispo, California, United States \\ $^{7}$Central China Normal University, Wuhan, China \\ $^{8}$Centre de Calcul de l'IN2P3, Villeurbanne, Lyon, France \\ $^{9}$Centro de Aplicaciones Tecnol\'{o}gicas y Desarrollo Nuclear (CEADEN), Havana, Cuba \\ $^{10}$Centro de Investigaciones Energ\'{e}ticas Medioambientales y Tecnol\'{o}gicas (CIEMAT), Madrid, Spain \\ $^{11}$Centro de Investigaci\'{o}n y de Estudios Avanzados (CINVESTAV), Mexico City and M\'{e}rida, Mexico \\ $^{12}$Centro Fermi - Museo Storico della Fisica e Centro Studi e Ricerche ``Enrico Fermi'', Rome, Italy \\ $^{13}$Chicago State University, Chicago, Illinois, United States \\ $^{14}$China Institute of Atomic Energy, Beijing, China \\ $^{15}$COMSATS Institute of Information Technology (CIIT), Islamabad, Pakistan \\ $^{16}$Departamento de F\'{\i}sica de Part\'{\i}culas and IGFAE, Universidad de Santiago de Compostela, Santiago de Compostela, Spain \\ $^{17}$Department of Physics, Aligarh Muslim University, Aligarh, India \\ $^{18}$Department of Physics, Ohio State University, Columbus, Ohio, United States \\ $^{19}$Department of Physics, Pusan National University, Pusan, South Korea \\ $^{20}$Department of Physics, Sejong University, Seoul, South Korea \\ $^{21}$Department of Physics, University of Oslo, Oslo, Norway \\ $^{22}$Department of Physics and Technology, University of Bergen, Bergen, Norway \\ $^{23}$Dipartimento di Fisica dell'Universit\`{a} `La Sapienza' and Sezione INFN, Rome, Italy \\ $^{24}$Dipartimento di 
Fisica dell'Universit\`{a} and Sezione INFN, Cagliari, Italy \\ $^{25}$Dipartimento di Fisica dell'Universit\`{a} and Sezione INFN, Trieste, Italy \\ $^{26}$Dipartimento di Fisica dell'Universit\`{a} and Sezione INFN, Turin, Italy \\ $^{27}$Dipartimento di Fisica e Astronomia dell'Universit\`{a} and Sezione INFN, Bologna, Italy \\ $^{28}$Dipartimento di Fisica e Astronomia dell'Universit\`{a} and Sezione INFN, Catania, Italy \\ $^{29}$Dipartimento di Fisica e Astronomia dell'Universit\`{a} and Sezione INFN, Padova, Italy \\ $^{30}$Dipartimento di Fisica `E.R.~Caianiello' dell'Universit\`{a} and Gruppo Collegato INFN, Salerno, Italy \\ $^{31}$Dipartimento DISAT del Politecnico and Sezione INFN, Turin, Italy \\ $^{32}$Dipartimento di Scienze e Innovazione Tecnologica dell'Universit\`{a} del Piemonte Orientale and INFN Sezione di Torino, Alessandria, Italy \\ $^{33}$Dipartimento Interateneo di Fisica `M.~Merlin' and Sezione INFN, Bari, Italy \\ $^{34}$Division of Experimental High Energy Physics, University of Lund, Lund, Sweden \\ $^{35}$European Organization for Nuclear Research (CERN), Geneva, Switzerland \\ $^{36}$Excellence Cluster Universe, Technische Universit\"{a}t M\"{u}nchen, Munich, Germany \\ $^{37}$Faculty of Engineering, Bergen University College, Bergen, Norway \\ $^{38}$Faculty of Mathematics, Physics and Informatics, Comenius University, Bratislava, Slovakia \\ $^{39}$Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, Prague, Czech Republic \\ $^{40}$Faculty of Science, P.J.~\v{S}af\'{a}rik University, Ko\v{s}ice, Slovakia \\ $^{41}$Faculty of Technology, Buskerud and Vestfold University College, Tonsberg, Norway \\ $^{42}$Frankfurt Institute for Advanced Studies, Johann Wolfgang Goethe-Universit\"{a}t Frankfurt, Frankfurt, Germany \\ $^{43}$Gangneung-Wonju National University, Gangneung, South Korea \\ $^{44}$Gauhati University, Department of Physics, Guwahati, India \\ $^{45}$Helmholtz-Institut f\"{u}r 
Strahlen- und Kernphysik, Rheinische Friedrich-Wilhelms-Universit\"{a}t Bonn, Bonn, Germany \\ $^{46}$Helsinki Institute of Physics (HIP), Helsinki, Finland \\ $^{47}$Hiroshima University, Hiroshima, Japan \\ $^{48}$Indian Institute of Technology Bombay (IIT), Mumbai, India \\ $^{49}$Indian Institute of Technology Indore, Indore, India \\ $^{50}$Indonesian Institute of Sciences, Jakarta, Indonesia \\ $^{51}$INFN, Laboratori Nazionali di Frascati, Frascati, Italy \\ $^{52}$INFN, Laboratori Nazionali di Legnaro, Legnaro, Italy \\ $^{53}$INFN, Sezione di Bari, Bari, Italy \\ $^{54}$INFN, Sezione di Bologna, Bologna, Italy \\ $^{55}$INFN, Sezione di Cagliari, Cagliari, Italy \\ $^{56}$INFN, Sezione di Catania, Catania, Italy \\ $^{57}$INFN, Sezione di Padova, Padova, Italy \\ $^{58}$INFN, Sezione di Roma, Rome, Italy \\ $^{59}$INFN, Sezione di Torino, Turin, Italy \\ $^{60}$INFN, Sezione di Trieste, Trieste, Italy \\ $^{61}$Inha University, Incheon, South Korea \\ $^{62}$Institut de Physique Nucl\'eaire d'Orsay (IPNO), Universit\'e Paris-Sud, CNRS-IN2P3, Orsay, France \\ $^{63}$Institute for Nuclear Research, Academy of Sciences, Moscow, Russia \\ $^{64}$Institute for Subatomic Physics of Utrecht University, Utrecht, Netherlands \\ $^{65}$Institute for Theoretical and Experimental Physics, Moscow, Russia \\ $^{66}$Institute of Experimental Physics, Slovak Academy of Sciences, Ko\v{s}ice, Slovakia \\ $^{67}$Institute of Physics, Academy of Sciences of the Czech Republic, Prague, Czech Republic \\ $^{68}$Institute of Physics, Bhubaneswar, India \\ $^{69}$Institute of Space Science (ISS), Bucharest, Romania \\ $^{70}$Institut f\"{u}r Informatik, Johann Wolfgang Goethe-Universit\"{a}t Frankfurt, Frankfurt, Germany \\ $^{71}$Institut f\"{u}r Kernphysik, Johann Wolfgang Goethe-Universit\"{a}t Frankfurt, Frankfurt, Germany \\ $^{72}$Institut f\"{u}r Kernphysik, Westf\"{a}lische Wilhelms-Universit\"{a}t M\"{u}nster, M\"{u}nster, Germany \\ $^{73}$Instituto de Ciencias 
Nucleares, Universidad Nacional Aut\'{o}noma de M\'{e}xico, Mexico City, Mexico \\ $^{74}$Instituto de F\'{i}sica, Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, Brazil \\ $^{75}$Instituto de F\'{\i}sica, Universidad Nacional Aut\'{o}noma de M\'{e}xico, Mexico City, Mexico \\ $^{76}$IRFU, CEA, Universit\'{e} Paris-Saclay, Saclay, France \\ $^{77}$iThemba LABS, National Research Foundation, Somerset West, South Africa \\ $^{78}$Joint Institute for Nuclear Research (JINR), Dubna, Russia \\ $^{79}$Konkuk University, Seoul, South Korea \\ $^{80}$Korea Institute of Science and Technology Information, Daejeon, South Korea \\ $^{81}$KTO Karatay University, Konya, Turkey \\ $^{82}$Laboratoire de Physique Corpusculaire (LPC), Clermont Universit\'{e}, Universit\'{e} Blaise Pascal, CNRS--IN2P3, Clermont-Ferrand, France \\ $^{83}$Laboratoire de Physique Subatomique et de Cosmologie, Universit\'{e} Grenoble-Alpes, CNRS-IN2P3, Grenoble, France \\ $^{84}$Lawrence Berkeley National Laboratory, Berkeley, California, United States \\ $^{85}$Moscow Engineering Physics Institute, Moscow, Russia \\ $^{86}$Nagasaki Institute of Applied Science, Nagasaki, Japan \\ $^{87}$National and Kapodistrian University of Athens, Physics Department, Athens, Greece \\ $^{88}$National Centre for Nuclear Studies, Warsaw, Poland \\ $^{89}$National Institute for Physics and Nuclear Engineering, Bucharest, Romania \\ $^{90}$National Institute of Science Education and Research, Bhubaneswar, India \\ $^{91}$National Nuclear Research Center, Baku, Azerbaijan \\ $^{92}$National Research Centre Kurchatov Institute, Moscow, Russia \\ $^{93}$Niels Bohr Institute, University of Copenhagen, Copenhagen, Denmark \\ $^{94}$Nikhef, Nationaal instituut voor subatomaire fysica, Amsterdam, Netherlands \\ $^{95}$Nuclear Physics Group, STFC Daresbury Laboratory, Daresbury, United Kingdom \\ $^{96}$Nuclear Physics Institute, Academy of Sciences of the Czech Republic, \v{R}e\v{z} u Prahy, 
Czech Republic \\ $^{97}$Oak Ridge National Laboratory, Oak Ridge, Tennessee, United States \\ $^{98}$Petersburg Nuclear Physics Institute, Gatchina, Russia \\ $^{99}$Physics Department, Creighton University, Omaha, Nebraska, United States \\ $^{100}$Physics department, Faculty of science, University of Zagreb, Zagreb, Croatia \\ $^{101}$Physics Department, Panjab University, Chandigarh, India \\ $^{102}$Physics Department, University of Cape Town, Cape Town, South Africa \\ $^{103}$Physics Department, University of Jammu, Jammu, India \\ $^{104}$Physics Department, University of Rajasthan, Jaipur, India \\ $^{105}$Physikalisches Institut, Eberhard Karls Universit\"{a}t T\"{u}bingen, T\"{u}bingen, Germany \\ $^{106}$Physikalisches Institut, Ruprecht-Karls-Universit\"{a}t Heidelberg, Heidelberg, Germany \\ $^{107}$Physik Department, Technische Universit\"{a}t M\"{u}nchen, Munich, Germany \\ $^{108}$Purdue University, West Lafayette, Indiana, United States \\ $^{109}$Research Division and ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum f\"ur Schwerionenforschung GmbH, Darmstadt, Germany \\ $^{110}$Rudjer Bo\v{s}kovi\'{c} Institute, Zagreb, Croatia \\ $^{111}$Russian Federal Nuclear Center (VNIIEF), Sarov, Russia \\ $^{112}$Saha Institute of Nuclear Physics, Kolkata, India \\ $^{113}$School of Physics and Astronomy, University of Birmingham, Birmingham, United Kingdom \\ $^{114}$Secci\'{o}n F\'{\i}sica, Departamento de Ciencias, Pontificia Universidad Cat\'{o}lica del Per\'{u}, Lima, Peru \\ $^{115}$SSC IHEP of NRC Kurchatov institute, Protvino, Russia \\ $^{116}$Stefan Meyer Institut f\"{u}r Subatomare Physik (SMI), Vienna, Austria \\ $^{117}$SUBATECH, IMT Atlantique, Universit\'{e} de Nantes, CNRS-IN2P3, Nantes, France \\ $^{118}$Suranaree University of Technology, Nakhon Ratchasima, Thailand \\ $^{119}$Technical University of Ko\v{s}ice, Ko\v{s}ice, Slovakia \\ $^{120}$Technical University of Split FESB, Split, Croatia \\ $^{121}$The Henryk Niewodniczanski 
Institute of Nuclear Physics, Polish Academy of Sciences, Cracow, Poland \\ $^{122}$The University of Texas at Austin, Physics Department, Austin, Texas, United States \\ $^{123}$Universidad Aut\'{o}noma de Sinaloa, Culiac\'{a}n, Mexico \\ $^{124}$Universidade de S\~{a}o Paulo (USP), S\~{a}o Paulo, Brazil \\ $^{125}$Universidade Estadual de Campinas (UNICAMP), Campinas, Brazil \\ $^{126}$Universidade Federal do ABC, Santo Andre, Brazil \\ $^{127}$University of Houston, Houston, Texas, United States \\ $^{128}$University of Jyv\"{a}skyl\"{a}, Jyv\"{a}skyl\"{a}, Finland \\ $^{129}$University of Liverpool, Liverpool, United Kingdom \\ $^{130}$University of Tennessee, Knoxville, Tennessee, United States \\ $^{131}$University of the Witwatersrand, Johannesburg, South Africa \\ $^{132}$University of Tokyo, Tokyo, Japan \\ $^{133}$University of Tsukuba, Tsukuba, Japan \\ $^{134}$Universit\'{e} de Lyon, Universit\'{e} Lyon 1, CNRS/IN2P3, IPN-Lyon, Villeurbanne, Lyon, France \\ $^{135}$Universit\'{e} de Strasbourg, CNRS, IPHC UMR 7178, F-67000 Strasbourg, France \\ $^{136}$Universit\`{a} degli Studi di Pavia, Pavia, Italy \\ $^{137}$Universit\`{a} di Brescia, Brescia, Italy \\ $^{138}$V.~Fock Institute for Physics, St. Petersburg State University, St. 
Petersburg, Russia \\ $^{139}$Variable Energy Cyclotron Centre, Kolkata, India \\ $^{140}$Warsaw University of Technology, Warsaw, Poland \\ $^{141}$Wayne State University, Detroit, Michigan, United States \\ $^{142}$Wigner Research Centre for Physics, Hungarian Academy of Sciences, Budapest, Hungary \\ $^{143}$Yale University, New Haven, Connecticut, United States \\ $^{144}$Yonsei University, Seoul, South Korea \\ $^{145}$Zentrum f\"{u}r Technologietransfer und Telekommunikation (ZTT), Fachhochschule Worms, Worms, Germany \endgroup \section{Introduction} Identical boson femtoscopy, especially of identical charged pions, has been used extensively over the years to study experimentally the space-time geometry of the collision region in high-energy particle and heavy-ion collisions \cite{Lisa:2005dd}. Identical-kaon femtoscopy studies have also been carried out, recent examples being the studies of Au-Au collisions at $\sqrt{s_{\rm NN}}=200$ GeV by the STAR Collaboration \cite{Abelev:2006gu} (K$^0_{\rm S}$K$^0_{\rm S}$) and of pp collisions at $\sqrt{s}=7$ TeV and Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV by the ALICE Collaboration \cite{Abelev:2012ms,Abelev:2012sq,Adam:2015vja} (K$^0_{\rm S}$K$^0_{\rm S}$ and K$^{\pm}$K$^{\pm}$). The pair-wise interactions between the identical kaons that form the basis for femtoscopy are, for K$^{\rm \pm}$K$^{\rm \pm}$, quantum statistics and the Coulomb interaction and, for K$^0_{\rm S}$K$^0_{\rm S}$, quantum statistics and the final-state interaction through the $f_0(980)/a_0(980)$ threshold resonances. One can also consider the case of non-identical kaon pairs, e.g. K$^0_{\rm S}$K$^{\rm \pm}$ pairs. 
Besides the non-resonant channels which may be present, e.g.\ non-resonant elastic scattering or free-streaming of the kaons from their freeze-out positions to the detector, the only other pair-wise interaction allowed for a K$^0_{\rm S}$K$^{\rm \pm}$ pair at freeze out from the collision system is a final-state interaction (FSI) through the $a_0(980)$ resonance. The other pair-wise interactions present for identical-kaon pairs are not present for K$^0_{\rm S}$K$^{\rm \pm}$ pairs because: a) there is no quantum statistics enhancement since the kaons are not identical, b) there is no Coulomb effect since one of the kaons is uncharged, and c) there is no strong FSI through the $f_0$ resonance since the kaon pair is in an $I=1$ isospin state, as is the $a_0$, whereas the $f_0$ is an $I=0$ state. Another feature of the K$^0_{\rm S}$K$^{\rm \pm}$ FSI through the $a_0$ resonance is that, since the $a_0$ has strangeness $S=0$ and the K$^0_{\rm S}$ is a linear combination of the K$^0$ and ${\rm \overline{K}^0}$, \begin{equation} \left | {\rm K}^0_S \right \rangle=\frac{1}{\sqrt{2}}\left ( \left | {\rm K}^0 \right \rangle + \left | {\rm \overline{K}}^0 \right \rangle\right ), \end{equation} only the ${\rm \overline{K}^0}$K$^+$ pair from K$^0_{\rm S}$K$^{\rm +}$ and the K$^0$K$^-$ pair from K$^0_{\rm S}$K$^{\rm -}$ have $S=0$ and thus can form the $a_0$ resonance. This allows the possibility of studying the K$^0$ and ${\rm \overline{K}^0}$ sources separately since they are individually selected by studying K$^0_{\rm S}$K$^{\rm -}$ and K$^0_{\rm S}$K$^{\rm +}$ pairs, respectively. An additional consequence of this feature is that only $50\%$ of either the K$^0_{\rm S}$K$^{\rm -}$ or K$^0_{\rm S}$K$^{\rm +}$ detected pairs will pass through the $a_0$ resonance. This is taken into account in the expression for the model used to fit the correlation functions.
On the other hand, the natural requirement that the source sizes extracted from the K$^0_{\rm S}$K$^{\rm \pm}$ femtoscopy agree with those obtained for the K$^0_{\rm S}$K$^0_{\rm S}$ and K$^{\rm \pm}$K$^{\rm \pm}$ systems allows one to study the properties of the $a_0$ resonance itself. This is interesting in its own right since many studies discuss the possibility that the $a_0$, listed by the Particle Data Group as a diquark light unflavored meson state~\cite{Olive:2016xmw}, could be a four-quark state, i.e. a tetraquark, or a ``${\rm \overline{K}}-$K molecule'' \cite{Martin1977,Antonelli2002,Achasov:2002ir,Achasov1,Santopinto:2006my,Jaffe:1976ig}. For example, the production cross section of the $a_0$ resonance in a reaction channel such as ${\rm K}^0{\rm K}^-\rightarrow a^-_0$ should depend on whether the $a^-_0$ is composed of ${\rm d\overline{u}}$ or ${\rm d\overline{s}s\overline{u}}$ quarks, the former requiring the annihilation of the ${\rm \overline{s}s}$ pair and the latter being a direct transfer of the quarks in the kaons to the $a^-_0$. The results from K$^0_{\rm S}$K$^-$ femtoscopy might be sensitive to these two different scenarios. In this Letter, results from the first study of K$^0_{\rm S}$K$^{\rm \pm}$ femtoscopy are presented. This has been done for Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV measured by the ALICE experiment at the LHC \cite{Aamodt:2008zz}. The physics goals of the present K$^0_{\rm S}$K$^{\rm \pm}$ femtoscopy study are the following: 1) show to what extent the FSI through the $a_0$ resonance describes the correlation functions, 2) study the K$^0$ and ${\rm \overline{K^0}}$ sources to see if there are differences in the source parameters, and 3) test published $a_0$ mass and coupling parameters by comparisons with published identical kaon results \cite{Adam:2015vja}. 
\section{Description of experiment and data selection} The ALICE experiment and its performance in the LHC Run 1 (2009--2013) are described in Refs.~\cite{Aamodt:2008zz} and \cite{Abelev:2014ffa,Akindinov:2013tea}, respectively. About $22\times 10^6$ Pb-Pb collision events in the $0$--$10\%$ centrality class taken in 2011 were used in this analysis (the average centrality in this range is 4.9\% due to a slight trigger inefficiency in the $8$--$10\%$ range). Events were classified according to their centrality using the measured amplitudes in the V0 detectors, which consist of two arrays of scintillators located along the beamline and covering the full azimuth \cite{Abelev:2013vea}. Charged particles were reconstructed and identified with the central barrel detectors located within a solenoid magnet with a field strength of $B=0.5$ T. Charged particle tracking was performed using the Time Projection Chamber (TPC) \cite{Alme:2010ke} and the Inner Tracking System (ITS) \cite{Aamodt:2008zz}. The ITS allowed for high spatial resolution in determining the primary (collision) vertex. Tracks were reconstructed and their momenta were obtained with the TPC. A momentum resolution of less than 10 MeV/$c$ was typically obtained for the charged tracks of interest in this analysis. The primary vertex was obtained from the ITS, the position of the primary vertex being constrained along the beam direction (the ``$z$-position'') to be within $\pm10$ cm of the center of the ALICE detector. In addition to the standard track quality selections, selections based on the quality of the track reconstruction fit and the number of detected tracking points in the TPC were used to ensure that only well-reconstructed tracks were accepted in the analysis~\cite{Abelev:2014ffa,Akindinov:2013tea}.
Particle identification (PID) for reconstructed tracks was carried out using both the TPC and the Time-of-Flight (TOF) detector in the pseudorapidity range $|\eta| < 0.8$~\cite{Abelev:2014ffa,Akindinov:2013tea}. For each PID method, a value was assigned to each track denoting the number of standard deviations between the measured track information and calculated values ($N_{\sigma}$) \cite{Adam:2015vja,Abelev:2014ffa,Akindinov:2013tea}. For TPC PID, a parametrized Bethe-Bloch formula was used to calculate the specific energy loss $\left<{\rm d}E/{\rm d}x\right>$ in the detector expected for a particle with a given mass and momentum. For PID with TOF, the particle mass was used to calculate the expected time-of-flight as a function of track length and momentum. This procedure was repeated for four ``particle species hypotheses''---electron, pion, kaon and proton---and, for each hypothesis, a different $N_{\sigma}$ value was obtained per detector. \subsection{Kaon selection} The methods used to select and identify individual K$^0_{\rm S}$ and K$^{\rm \pm}$ particles are the same as those used for the ALICE Pb-Pb K$^0_{\rm S}$K$^0_{\rm S}$ and K$^{\rm \pm}$K$^{\rm \pm}$ analyses \cite{Adam:2015vja}. These are described below. \subsubsection{K$^0_{\rm S}$ selection} The K$^0_{\rm S}$ particles were reconstructed from the decay K$^0_{\rm S}\rightarrow\pi^+\pi^-$, with the daughter $\pi^+$ and $\pi^-$ tracks detected in the TPC and TOF detectors. Pions with $p_{\rm T}>0.15$ GeV/$c$ were accepted (since the track-finding efficiency drops rapidly at lower $p_{\rm T}$) and the distance of closest approach to the primary vertex (DCA) of the reconstructed K$^0_{\rm S}$ was required to be less than 0.3 cm in all directions. The required $N_{\sigma}$ values for the pions were $N_{\sigma TPC} < 3$ and $N_{\sigma TOF} < 3$ for $p>0.8$ GeV/$c$.
An invariant mass distribution for the $\pi^+\pi^-$ pairs was produced and a K$^0_{\rm S}$ candidate was defined as a pair falling within the invariant mass range $0.480<m_{\pi^+\pi^-}<0.515$ GeV/$c^2$. \subsubsection{K$^\pm$ selection} Charged kaon tracks were also detected using the TPC and TOF detectors, and were accepted if they were within the range $0.14<p_{\rm T}<1.5$ GeV/$c$. In order to reduce the number of secondaries (for instance charged particles produced in the detector material, particles from weak decays, etc.) the primary charged kaon tracks were selected based on the DCA, such that the DCA transverse to the beam direction was less than 2.4 cm and the DCA along the beam direction was less than 3.2 cm. If the TOF signal was not available, the required $N_{\sigma}$ values for the charged kaons were $N_{\sigma TPC} < 2$ for $p_{\rm T}<0.5$ GeV/$c$, and the track was rejected for $p_{\rm T}>0.5$ GeV/$c$. If the TOF signal was available and $p_{\rm T}>0.5$ GeV/$c$: $N_{\sigma TPC} < 3$ and $N_{\sigma TOF} < 2$ ($0.5<p_{\rm T}<0.8$ GeV/$c$), $N_{\sigma TOF} < 1.5$ ($0.8<p_{\rm T}<1.0$ GeV/$c$), $N_{\sigma TOF} < 1$ ($1.0<p_{\rm T}<1.5$ GeV/$c$). K$^0_{\rm S}$K$^{\rm \pm}$ experimental pair purity was estimated from a Monte Carlo (MC) study based on HIJING~\cite{Wang:1991hta} simulations using GEANT3~\cite{Brun:1994aa} to model particle transport through the ALICE detectors. The purity was determined from the fraction of the reconstructed MC simulated pairs that were identified as actual K$^0_{\rm S}$K$^{\rm \pm}$ pairs input from HIJING. The pair purity was estimated to be 88\% for all kinematic regions studied in this analysis.
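The piecewise $p_{\rm T}$-dependent PID requirements above can be collected into a single selection function. The sketch below is an illustration rather than the actual analysis code; the function name is hypothetical, and the treatment of tracks that do have a TOF signal at $p_{\rm T}<0.5$ GeV/$c$ (where the text quotes only the TPC requirement) is an assumption.

```python
def accept_charged_kaon(pt, n_sigma_tpc, n_sigma_tof, has_tof):
    """Sketch of the K+- PID selection described above (pt in GeV/c).

    Returns True if the track passes the TPC/TOF N_sigma requirements.
    """
    if not (0.14 < pt < 1.5):
        return False
    if not has_tof:
        # TPC-only PID below 0.5 GeV/c; tracks without a TOF signal
        # are rejected above it
        return pt < 0.5 and n_sigma_tpc < 2.0
    if pt < 0.5:
        # assumption: the TPC-only requirement also applies here
        return n_sigma_tpc < 2.0
    if pt < 0.8:
        return n_sigma_tpc < 3.0 and n_sigma_tof < 2.0
    if pt < 1.0:
        return n_sigma_tpc < 3.0 and n_sigma_tof < 1.5
    return n_sigma_tpc < 3.0 and n_sigma_tof < 1.0
```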
\section{Analysis methods} \subsection{Experimental correlation functions} This analysis studies the momentum correlations of K$^0_{\rm S}$K$^{\rm \pm}$ pairs using the two-particle correlation function, defined as \begin{equation} C(k^*)=A(k^*)/B(k^*), \end{equation} where $A(k^*)$ is the measured distribution of pairs from the same event, $B(k^*)$ is the reference distribution of pairs from mixed events, and $k^*$ is the magnitude of the momentum of each of the particles in the pair rest frame (PRF), \begin{equation} k^*=\sqrt{\frac{(s-m_{\rm K^0}^2-m_{\rm K^\pm}^2)^2-4m_{\rm K^0}^2m_{\rm K^\pm}^2}{4s}} \end{equation} where \begin{equation} s=m_{\rm K^0}^2+m_{\rm K^\pm}^2+2E_{\rm K^0}E_{\rm K^\pm}-2\vec{p}_{\rm K^0}\cdot\vec{p}_{\rm K^\pm} \end{equation} and $m_{\rm K^0}$ ($E_{\rm K^0}$) and $m_{\rm K^\pm}$ ($E_{\rm K^\pm}$) are the rest masses (total energies) of the K$^0_{\rm S}$ and K$^{\rm \pm}$, respectively. The denominator $B(k^*)$ was formed by mixing K$^0_{\rm S}$ and K$^{\rm \pm}$ particles from each event with particles from ten other events. The vertices of the mixed events were constrained to be within 2 cm of each other in the $z$-direction. A centrality constraint on the mixed events was found not to be necessary for the narrow centrality range, i.e.\ $0$--$10\%$, used in this analysis. Correlation functions were obtained separately for two different magnetic field orientations in the experiment and then either averaged or fitted separately, depending on the fitting method used (see below). Correlation functions were measured for three overlapping/non-exclusive pair transverse momentum ($k_{\rm T} = |\textbf{p}_{\rm T,1}+\textbf{p}_{\rm T,2}|/2$) bins: all $k_{\rm T}$, $k_{\rm T}<0.675$ and $k_{\rm T}>0.675$ GeV/$c$. The mean $k_{\rm T}$ values for these three bins were 0.675, 0.425 and 0.970 GeV/$c$, respectively.
Figure~\ref{fig1} shows sample raw K$^0_{\rm S}$K$^{\rm +}$ correlation functions for these three bins for one of the magnetic field orientations. One can see the main feature of the femtoscopic correlation function: the suppression due to the strong final-state interactions for small $k^*$. In the higher $k^*$ region, the effects of the $a_0$ appear not to be present, and thus this region can be used as a reference, i.e.\ ``baseline'', for the $a_0$-based model fitted to $C(k^*)$ in order to extract the source parameters. Also shown in the figure are linear fits to the baseline for large $k^*$. The effects of the $a_0$ resonance on $C(k^*)$ are mostly seen in the $k^*<0.2$ GeV/$c$ region, where the width of the $a_0$ region reflects the size of the kaon source (see equations below). Correlation functions were corrected for momentum resolution effects using HIJING calculations. HIJING was used to create two correlation functions: one in terms of the generator-level $k^*$ and one in terms of the simulated detector-level $k^*$. Because HIJING does not incorporate final-state interactions, weights were calculated using a 9th-order polynomial fit in $k^*$ to an experimental correlation function and were used when filling the same-event distributions. Then, the ratio of the ``ideal'' correlation function to the ``measured'' one (for each $k^*$ bin) was applied as a multiplicative correction to the data correlation functions before the fit procedure. This correction mostly affected the lowest $k^*$ bins, increasing the extracted source parameters by several percent. \begin{figure}[t!] \centering \includegraphics[scale=1.4]{new2fig1} \caption{Examples of raw K$^0_{\rm S}$K$^{\rm +}$ correlation functions for the three $k_{\rm T}$ bins with linear fits to the baseline at large $k^*$.
Statistical uncertainties are shown.} \label{fig1} \end{figure} \subsection{Final-state interaction model} The K$^0_{\rm S}$K$^{\rm \pm}$ correlation functions were fit with functions that include a parameterization which incorporates strong FSI. It was assumed that the FSI arises in the K$^0_{\rm S}$K$^{\rm \pm}$ channels due to the near-threshold resonance, $a_0$(980). This parameterization was introduced by R. Lednicky and is based on the model by R. Lednicky and V.L. Lyuboshitz~\cite{Lednicky:1981su,Lednicky:2005af} (see also Ref.~\cite{Abelev:2006gu} for more details on this parameterization). Using an equal emission time approximation in the PRF~\cite{Lednicky:1981su}, the elastic K$^0_{\rm S}$K$^{\rm \pm}$ transition is written as a stationary solution $\Psi_{-\pvec{k}^*}(\pvec{r}^*)$ of the scattering problem in the PRF. The quantity $\pvec{r}^*$ represents the emission separation of the pair in the PRF, and the $-\pvec{k}^*$ subscript refers to a reversal of time from the emission process. At large distances this has the asymptotic form of a superposition of a plane wave and an outgoing spherical wave, \begin{equation} \Psi_{-\pvec{k}^*}(\pvec{r}^*) = e^{-i\pvec{k}^* \cdot \pvec{r}^*} + f(k^*) \frac{e^{ik^*r^*}}{r^*} \;, \label{eq:FSIwave} \end{equation} where $f(k^*)$ is the $s$-wave K$^0$K$^-$ or $\overline{\rm K}^0$K$^+$ scattering amplitude whose contribution is the $s$-wave isovector $a_0$ resonance (see Eq.~11 in Ref.~\cite{Abelev:2006gu}), \begin{equation} f(k^*) = \frac{\gamma_{a_0\rightarrow {\rm K\overline{K}}}}{m_{a_0}^2-s-i(\gamma_{a_0\rightarrow {\rm K\overline{K}}} k^*+\gamma_{a_0\rightarrow \pi\eta}k_{\pi\eta})}\;. \label{eq:fit4} \end{equation} In Eq.~\ref{eq:fit4}, $m_{a_0}$ is the mass of the $a_0$ resonance, and $\gamma_{a_0\rightarrow {\rm K\overline{K}}}$ and $\gamma_{a_0\rightarrow \pi\eta}$ are the couplings of the $a_0$ resonance to the K$^0$K$^-$ (or $\overline{\rm K}^0$K$^+$) and $\pi\eta$ channels, respectively. 
Also, $s=4(m_{\rm K^0}^2+k^{*2})$ and $k_{\pi\eta}$ denotes the momentum in the second decay channel ($\pi\eta$) (see Table \ref{table1}). The correlation function due to the FSI is then calculated by integrating $\Psi_{-\pvec{k}^*}(\pvec{r}^*)$ in the \textit{Koonin-Pratt equation}~\cite{Koonin:1977fh,Pratt:1990zq} \begin{equation} C(\pvec{k}^*) = \int {\rm d}^3 \, \pvec{r}^* \, S(\pvec{r}^*) \left| \Psi_{-\pvec{k}^*}(\pvec{r}^*) \right| ^2 \, , \label{eq:koonin} \end{equation} where $S(\pvec{r}^*)$ is a one-dimensional Gaussian source function of the PRF relative distance $\left| \pvec{r}^* \right| $ with a Gaussian width $R$ of the form \begin{equation} S(\pvec{r}^*) \sim e^{-\left| \pvec{r}^* \right| ^2/(4R^2)}\;. \label{source} \end{equation} Equation~\ref{eq:koonin} can be integrated analytically for K$^0_{\rm S}$K$^{\rm \pm}$ correlations with FSI for the one-dimensional case, with the result \begin{equation} C(k^*)=1+\lambda\alpha\left[\frac{1}{2}\left|\frac{f(k^*)}{R}\right|^2+\frac{2\mathcal{R}f(k^*)}{\sqrt{\pi}R}F_1(2k^* R)-\frac{\mathcal{I}f(k^*)}{R}F_2(2k^* R)\right], \label{eq:fit2} \end{equation} where \begin{equation} F_1(z)\equiv\frac{\sqrt{\pi} e^{-z^2} \operatorname{erfi}(z)}{2 z};\qquad F_2(z)\equiv\frac{1-e^{-z^2}}{z}. \label{eq:fit3} \end{equation} In the above equations $\alpha$ is the fraction of K$^{0}_{\rm S}$K$^{\pm}$ pairs that come from the K$^0$K$^-$ or $\overline{\rm K}^0$K$^+$ system, set to 0.5 assuming symmetry in K$^0$ and $\overline{\rm K}^0$ production \cite{Abelev:2006gu}, $R$ is the radius parameter from the spherical Gaussian source distribution given in Eq.~\ref{source}, and $\lambda$ is the correlation strength. The correlation strength is unity in the ideal case of pure $a_0$-resonant FSI, perfect PID, a perfect Gaussian kaon source and the absence of long-lived resonances which decay into kaons. 
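Equations~\ref{eq:fit4}--\ref{eq:fit3} can be evaluated numerically as sketched below; this is an illustration under stated assumptions, not the analysis code. With the couplings in GeV as in Table~\ref{table1}, $f(k^*)$ comes out in GeV$^{-1}$ and is converted to fm with $\hbar c \simeq 0.19733$ GeV\,fm so that the ratios $f/R$ are dimensionless; $F_1$ is computed through the Dawson function, $F_1(z)=D(z)/z$, which stays numerically stable at large $z$ where $e^{-z^2}\operatorname{erfi}(z)$ would underflow/overflow. The PDG masses are assumed values.

```python
import math
from scipy.special import dawsn

HBARC = 0.19733                                    # GeV fm
M_K0, M_PI, M_ETA = 0.497611, 0.139570, 0.547862   # PDG masses (GeV), assumed

def f_a0(k, m_a0, g_kk, g_pe):
    """s-wave K0 K- (or anti-K0 K+) scattering amplitude f(k*), in fm."""
    s = 4.0 * (M_K0 ** 2 + k ** 2)
    # momentum in the pi-eta decay channel for pair invariant mass sqrt(s)
    k_pe = math.sqrt((s - (M_PI + M_ETA) ** 2)
                     * (s - (M_PI - M_ETA) ** 2)) / (2.0 * math.sqrt(s))
    return HBARC * g_kk / (m_a0 ** 2 - s - 1j * (g_kk * k + g_pe * k_pe))

def c_lednicky(k, R, lam, m_a0, g_kk, g_pe, alpha=0.5):
    """Fitted correlation function C(k*); k in GeV/c, R in fm."""
    f = f_a0(k, m_a0, g_kk, g_pe)
    z = 2.0 * k * R / HBARC
    F1 = dawsn(z) / z                   # = sqrt(pi) exp(-z^2) erfi(z) / (2z)
    F2 = (1.0 - math.exp(-z * z)) / z
    return 1.0 + lam * alpha * (0.5 * abs(f / R) ** 2
                                + 2.0 * f.real / (math.sqrt(math.pi) * R) * F1
                                - f.imag / R * F2)
```

Since $\mathcal{I}f(k^*)>0$ near threshold, the $F_2$ term dominates at low $k^*$ and produces the suppression $C(k^*)<1$ seen in the data, while $C(k^*)\rightarrow 1$ at large $k^*$.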
Note that the form of the FSI term in Eq.~\ref{eq:fit2} differs from the form of the FSI term for K$^0_{\rm S}$K$^0_{\rm S}$ correlations (Eq.~9 of Ref.~\cite{Abelev:2006gu}) by a factor of $1/2$ due to the non-identical particles in K$^0_{\rm S}$K$^{\rm \pm}$ correlations and thus the absence of the requirement to symmetrize the wavefunction given in Eq.~\ref{eq:FSIwave}. As seen in Eq.~\ref{eq:fit4}, the K$^0$K$^-$ or $\overline{\rm K}^0$K$^+$ $s$-wave scattering amplitude depends on the $a_0$ mass and decay couplings. In the present work, we have taken the values used in Ref.~\cite{Abelev:2006gu} which have been extracted from the analysis of the $a_0\rightarrow\pi\eta$ spectra of several experiments \cite{Martin1977,Antonelli2002,Achasov1,Achasov:2002ir}, shown in Table~\ref{table1}. The extracted $a_0$ mass and decay couplings have a range of values for the various references. Except for the Martin reference~\cite{Martin1977}, which extracts the $a_0$ values from the reaction ${\rm K}^-+p\rightarrow\Sigma^+(1385)\pi^-\eta$ at 4.2 GeV/$c$ incident momentum using a two-channel Breit-Wigner formula, the other references extract the $a_0$ values from the radiative $\phi$-decay data, i.e.\ $\phi\rightarrow\pi^0\eta\gamma$, from the KLOE collaboration~\cite{Aloisio:2002bsa}. These latter three references apply a model that assumes, after taking into account the $\phi\rightarrow\pi^0\rho^0\rightarrow\pi^0\eta\gamma$ background process, that the $\phi$ decays to the $\pi^0\eta\gamma$ final state through the intermediate processes $\phi\rightarrow {\rm K}^+{\rm K}^-\gamma\rightarrow a_0\gamma$ or $\phi\rightarrow {\rm K}^+{\rm K}^-\rightarrow a_0\gamma$, i.e.\ the ``charged kaon loop model''~\cite{Achasov:2002ir}.
The main difference between these analyses is that the Antonelli reference~\cite{Antonelli2002} assumes a fixed $a_0$ mass in the fit of this model to the $\pi^0\eta$ data, whereas the Achasov1 and Achasov2 analyses~\cite{Achasov:2002ir} allow the $a_0$ mass to be a free parameter in the two different fits made to the data. It is assumed in the present analysis that these decay couplings will also be valid for K$^0$K$^-$ and $\overline{\rm K}^0$K$^+$ scattering due to isospin invariance. Correlation functions were fitted with all four of these cases to see the effect on the extracted source parameters. \begin{table} \centering \begin{tabular}{| c | c | c | c |} \hline Reference & $m_{a_0}$ & $\gamma_{a_0K\bar{K}}$ & $\gamma_{a_0\pi\eta}$ \\ \hline Martin~\cite{Martin1977} & 0.974 & 0.333 & 0.222 \\ \hline Antonelli~\cite{Antonelli2002} & 0.985 & 0.4038 & 0.3711 \\ \hline Achasov1~\cite{Achasov:2002ir} & 0.992 & 0.5555 & 0.4401 \\ \hline Achasov2~\cite{Achasov:2002ir} & 1.003 & 0.8365 & 0.4580 \\ \hline \end{tabular} \caption{The $a_0$ masses and coupling parameters, all in GeV (taken from Ref.\cite{Abelev:2006gu}).} \label{table1} \end{table} \subsection{Fitting methods} In order to estimate the systematic errors in the fitting method used to extract $R$ and $\lambda$ using Eq.~\ref{eq:fit2}, two different methods, judged to be equally valid, have been used to handle the effects of the baseline: 1) a separate linear fit to the ``baseline region,'' followed by fitting Eq.~\ref{eq:fit2} to the correlation function divided by the linear fit to extract the source parameters, and 2) a combined fit of Eq.~\ref{eq:fit2} and a quadratic function describing the baseline where the source parameters and the parameters of the quadratic function are fitted simultaneously. 
The source parameters are extracted for each case from both methods and averaged, the symmetric systematic error for each case due to the fitting method being one-half of the difference between the two methods. Both fitting methods will now be described in more detail. \subsubsection{Linear baseline method} In the ``linear baseline method,'' for the all $k_{\rm T}$, $k_{\rm T}<0.675$ and $k_{\rm T}>0.675$ GeV/$c$ bins the $a_0$ regions were taken to be $k^*<0.3$, $k^*<0.2$ and $k^*<0.4$ GeV/$c$, respectively. In the higher $k^*$ region it was assumed that the effects of the $a_0$ were not present, so this region can serve as a reference, i.e.\ ``baseline'', for the $a_0$-based model fitted to $C(k^*)$, which was averaged over the two magnetic field orientations used in the experiment, to extract the source parameters. For the three $k_{\rm T}$ bins, linear fits were made in the $k^*$ ranges $0.3$--$0.45$, $0.2$--$0.45$ and $0.4$--$0.6$ GeV/$c$, respectively, and the correlation functions were divided by these fits to remove baseline effects extending into the low-$k^*$ region. These ranges were taken to define the baselines since the measured correlation functions were found to be linear in these ranges. For larger values of $k^*$ the correlation functions became non-linear. The baseline was studied using HIJING MC calculations which take into account the detector characteristics as described earlier. The $C(k^*)$ distributions obtained from HIJING do not show suppressions at low $k^*$ as seen in Fig.~\ref{fig1} but rather show linear distributions over the entire ranges in $k^*$ shown in the figure. HIJING also shows the baseline becoming non-linear for larger values of $k^*$, as seen in the measurements. The MC generator code AMPT~\cite{Lin:2004en} was also used to study the baseline. AMPT is similar to HIJING but also includes final-state rescattering effects.
AMPT calculations also showed linear baselines in the $k^*$ ranges used in the present analysis, becoming non-linear for larger $k^*$. Both HIJING and AMPT qualitatively show the same direction of changes in the slopes of the baseline vs.\ $k_{\rm T}$ as seen in the data, but AMPT more accurately describes the slope values themselves, suggesting that final-state rescattering plays a role in the $k_{\rm T}$ dependence of the baseline slope. The systematic uncertainties on the extracted source parameters due to the assumption of linearity in these $k^*$ regions were estimated from HIJING to be less than 1\%. Figure~\ref{fig2} shows examples of K$^0_{\rm S}$K$^+$ and K$^0_{\rm S}$K$^-$ correlation functions divided by linear fits to the baseline, fitted with Eq.~\ref{eq:fit2} using the Achasov2 parameters. As seen, the $a_0$ FSI parameterization gives an excellent representation of the ``signal region'' of the data, i.e.\ the suppression of the correlation functions in the $k^*$ range 0 to about 0.15 GeV/$c$. \begin{figure}[t!] \centering \includegraphics[scale=1.4]{new2fig2} \caption{Examples of K$^0_{\rm S}$K$^+$ and K$^0_{\rm S}$K$^-$ correlation functions divided by linear fits to the baseline, fitted with the Lednicky parameterization using the Achasov2 \cite{Achasov:2002ir} parameters.
Statistical (lines) and the linear sum of statistical and systematic uncertainties (boxes) are shown.} \label{fig2} \end{figure} \subsubsection{Quadratic baseline method} In the ``quadratic baseline method,'' $R$ and $\lambda$ are extracted assuming a quadratic baseline function by fitting the product of a quadratic function and the Lednicky equation, Eq.~\ref{eq:fit2}, to the raw correlation functions for each of the two magnetic field orientations used in the experiment, such as shown in Fig.~\ref{fig1}, i.e.\ , \begin{equation} C_{raw}^{fit}(k^*)=a(1-bk^*+ck^{*2})C(k^*) \label{eq:quad} \end{equation} where $C(k^*)$ is given by Eq.~\ref{eq:fit2}, and $a$, $b$ and $c$ are fit parameters. Eq.~\ref{eq:quad} is fit to the same $k^*$ ranges as shown in Fig.~\ref{fig1}, i.e. $0$--$0.45$ GeV/$c$ for all $k_{\rm T}$ and $k_{\rm T}<0.675$ GeV/$c$, and $0$--$0.6$ GeV/$c$ for $k_{\rm T}>0.675$ GeV/$c$. The fits to the experimental correlation functions are found to be of similar good quality as seen for the linear baseline method fits shown in Fig.~\ref{fig2}. \subsection{Systematic uncertainties} Systematic uncertainties on the extracted source parameters were estimated by varying the ranges of kinematic and PID cut values on the data by $\pm10\%$ and $\pm20\%$, as well as from MC simulations. The main systematic uncertainties on the extracted values of $R$ and $\lambda$ due to various sources, not including the baseline fitting method, are: a) $k^*$ fitting range: 2\%, b) single-particle and pair cuts (e.g. DCA cuts, PID cuts, pair separation cuts): $2\%$--$4\%$ for $R$ and $3\%$--$8\%$ for $\lambda$, and c) pair purity: 1\% on $\lambda$. Combining the individual systematic uncertainties in quadrature, the total systematic uncertainties on the extracted source parameters, not including the baseline fitting method contribution, are in the ranges $3\%$--$5\%$ for $R$ and $4\%$--$8\%$ for $\lambda$. 
As mentioned earlier, for the two fitting methods, the source parameters are extracted for each case from both methods and averaged, the symmetric systematic error for each case due to the fitting method being one-half of the difference between the two methods. The baseline fitting method systematic error thus obtained is added in quadrature with the systematic errors given above. The baseline fitting method systematic errors are found to be about 50\% larger for $R$ than the non-fitting-method systematic errors quoted above, and of similar magnitude for $\lambda$. \section{Results and discussion} Figure~\ref{fig3} shows sample results for the $R$ and $\lambda$ parameters extracted in the present analysis from K$^0_{\rm S}$K$^{\rm \pm}$ femtoscopy using the Achasov1 parameters. The left column compares K$^0_{\rm S}$K$^{\rm +}$ and K$^0_{\rm S}$K$^{\rm -}$ results from the quadratic baseline fit method, and the right column compares results averaged over K$^0_{\rm S}$K$^{\rm +}$ and K$^0_{\rm S}$K$^{\rm -}$ for the quadratic baseline fits and the linear baseline fits. As is usually the case in femtoscopic analyses, the fitted $R$ and $\lambda$ parameters are correlated. The fitting (statistical) uncertainties are taken to be the extreme values of the $1\sigma$ fit contours in $R$ vs.\ $\lambda$. Statistical uncertainties are plotted for all results. It is seen in the figure that the $R$ and $\lambda$ values for K$^0_{\rm S}$K$^{\rm -}$ have a slight tendency to be larger than those for K$^0_{\rm S}$K$^{\rm +}$. Such a difference could result from the K$^{\rm -}$--nucleon scattering cross section being larger than that for K$^{\rm +}$--nucleon (see Fig.~51.9 of Ref.~\cite{Olive:2016xmw}), possibly resulting in more final-state rescattering for the K$^{\rm -}$.
Since the difference is not significant once systematic uncertainties are taken into account, K$^0_{\rm S}$K$^{\rm +}$ and K$^0_{\rm S}$K$^{\rm -}$ are averaged over in the final results. The difference in the extracted parameters between the two baseline fitting methods is also seen to be small, and is accounted for as a systematic error, as described earlier. \begin{figure}[t!] \centering \includegraphics[scale=1.]{new2fig3} \caption{Sample results for the $R$ and $\lambda$ parameters extracted in the present analysis from K$^0_{\rm S}$K$^{\rm \pm}$ femtoscopy using the Achasov1 parameters. The left column compares K$^0_{\rm S}$K$^{\rm +}$ and K$^0_{\rm S}$K$^{\rm -}$ results from the quadratic baseline fit method, and the right column compares results averaged over K$^0_{\rm S}$K$^{\rm +}$ and K$^0_{\rm S}$K$^{\rm -}$ for the quadratic baseline fits and the linear baseline fits. Statistical uncertainties are plotted for all results.} \label{fig3} \end{figure} The results for the $R$ and $\lambda$ parameters extracted in the present analysis from K$^0_{\rm S}$K$^{\rm \pm}$ femtoscopy, averaged over the two baseline fit methods and averaged over K$^0_{\rm S}$K$^{\rm +}$ and K$^0_{\rm S}$K$^{\rm -}$, are presented in Table~\ref{tab:results} and in Figs. \ref{fig4} and \ref{fig5}. Fit results are shown for all four parameter sets given in Table \ref{table1}. Figs.~\ref{fig4} and \ref{fig5} also show comparisons with identical kaon results for the same collision system and energy from ALICE from Ref.~\cite{Adam:2015vja}. Statistical and total uncertainties are shown for all results. 
\begin{table} \centering \begin{tabular}{| c | c | c | c | c |} \hline Parameters & $R$ (fm) or $\lambda$ & all $k_{\rm T}$ & $k_{\rm T}<0.675$ GeV/$c$ & $k_{\rm T}>0.675$ GeV/$c$ \\ \hline Achasov2 & $R$ & $5.17\pm0.16\pm0.41$ & $6.71\pm0.40\pm0.42$ & $4.75\pm0.18\pm0.36$ \\ \hline & $\lambda$ & $0.587\pm0.034\pm0.051$ & $0.651\pm0.073\pm0.076$ & $0.600\pm0.040\pm0.034$ \\ \hline Achasov1 & $R$ & $4.92\pm0.15\pm0.39$ & $6.30\pm0.40\pm0.43$ & $4.49\pm0.18\pm0.30$ \\ \hline & $\lambda$ & $0.650\pm0.038\pm0.056$ & $0.723\pm0.087\pm0.091$ & $0.649\pm0.048\pm0.038$ \\ \hline Antonelli & $R$ & $4.66\pm0.17\pm0.46$ & $5.74\pm0.36\pm0.26$ & $4.07\pm0.18\pm0.29$ \\ \hline & $\lambda$ & $0.624\pm0.044\pm0.058$ & $0.703\pm0.085\pm0.077$ & $0.613\pm0.052\pm0.037$ \\ \hline Martin & $R$ & $3.29\pm0.12\pm0.35$ & $4.46\pm0.25\pm0.20$ & $2.90\pm0.11\pm0.41$ \\ \hline & $\lambda$ & $0.305\pm0.020\pm0.033$ & $0.376\pm0.041\pm0.037$ & $0.296\pm0.021\pm0.030$ \\ \hline \end{tabular} \caption{Fit results for $R$ and $\lambda$ extracted in the present analysis from K$^0_{\rm S}$K$^{\rm \pm}$ femtoscopy averaged over K$^0_{\rm S}$K$^{\rm +}$ and K$^0_{\rm S}$K$^{\rm -}$. Statistical and systematic errors are also shown.} \label{tab:results} \end{table} \begin{figure}[t!] \centering \includegraphics[scale=1.]{new2fig4} \caption{Source radius parameter, $R$, extracted in the present analysis from K$^0_{\rm S}$K$^{\rm \pm}$ femtoscopy averaged over K$^0_{\rm S}$K$^{\rm +}$ and K$^0_{\rm S}$K$^{\rm -}$ and the two baseline fit methods (red symbols), along with comparisons with identical kaon results from ALICE~\cite{Adam:2015vja} (blue symbols). Statistical (lines) and the linear sum of statistical and systematic uncertainties (boxes) are shown.} \label{fig4} \end{figure} \begin{figure}[t!] 
\centering \includegraphics[scale=1.]{new2fig5} \caption{Correlation strength parameter, $\lambda$, extracted in the present analysis from K$^0_{\rm S}$K$^{\rm \pm}$ femtoscopy averaged over K$^0_{\rm S}$K$^{\rm +}$ and K$^0_{\rm S}$K$^{\rm -}$ and the two baseline fit methods (red symbols), along with comparisons with identical kaon results from ALICE~\cite{Adam:2015vja} (blue symbols). Statistical (lines) and the linear sum of statistical and systematic uncertainties (boxes) are shown.} \label{fig5} \end{figure} As shown in Fig.~\ref{fig4}, both Achasov parameter sets, with the larger $a_0$ masses and decay couplings, appear to give $R$ values that agree best with those obtained from identical-kaon femtoscopy. The Antonelli parameter set appears to give slightly lower values. Comparing the measured $R$ values for K$^0_{\rm S}$K$^0_{\rm S}$ and K$^{\pm}$K$^{\pm}$ in Fig.~\ref{fig4}, they are seen to agree with each other within the uncertainties. In fact, the only reason for the femtoscopic K$^0_{\rm S}$K$^{\rm \pm}$ radii to be different from the K$^0_{\rm S}$K$^0_{\rm S}$ and K$^{\pm}$K$^{\pm}$ ones would be if the K$^0_{\rm S}$ and K$^{\pm}$ sources were displaced with respect to each other. This is not expected because the collision dynamics is governed by strong interactions for which isospin symmetry applies. The results for the correlation strength parameters $\lambda$ are shown in Fig.~\ref{fig5}. The $\lambda$ parameters from K$^0_{\rm S}$K$^{\rm \pm}$ and K$^{\pm}$K$^{\pm}$ are corrected for experimental purity~\cite{Adam:2015vja}. The K$^0_{\rm S}$K$^0_{\rm S}$ pairs have a high purity of $> 90\%$, so the corresponding correction was neglected~\cite{Adam:2015vja} (see the earlier discussion on purity). Statistical and total uncertainties are shown for all results. The K$^0_{\rm S}$K$^{\rm \pm}$ $\lambda$ values, with the exception of the Martin parameters, appear to be in agreement with the $\lambda$ values for the identical kaons.
All of the $\lambda$ values are seen to be measured to be about 0.6, i.e. less than the ideal value of unity, which can be due to the contribution of kaons from K$^*$ decay ($\Gamma\sim50$ MeV, where $\Gamma$ is the decay width) and from other long-lived resonances (such as the $D$-meson) distorting the spatial kaon source distribution away from the ideal Gaussian which is assumed in the fit function~\cite{Humanic:2013xga}. One would expect that the K$^0_{\rm S}$K$^{\rm \pm}$ $\lambda$ values agree with those from the identical kaons if the FSI for the K$^0_{\rm S}$K$^{\rm \pm}$ went solely through the $a_0$ resonant channel since this analysis should see the same source distribution. In order to obtain a more quantitative comparison of the present results for $R$ and $\lambda$ with the identical kaon results, the $\chi^2/{\rm ndf}$ is calculated for $R$ and $\lambda$ for each parameter set, \begin{equation} \chi_\omega^2/{\rm ndf} = \frac{1}{\nu}\sum_{i=1}^{3}\frac{[\omega_i(K^0_{\rm S}K^{\rm \pm})-\omega_i(KK)]^2}{\sigma_i^2} \label{chi2} \end{equation} where $\omega$ is either $R$ or $\lambda$, $i$ runs over the three $k_T$ values, the number of degrees of freedom taken is ${\rm ndf}=3$ and $\sigma_i$ is the sum of the statistical and systematic uncertainties on the $i^{th}$ K$^0_{\rm S}$K$^{\rm \pm}$ extracted parameter (Note that the all $k_{\rm T}$ bin indeed contains the kaon pairs that make up the $k_{\rm T}<0.675$ GeV/$c$ and $k_{\rm T}>0.675$ GeV/$c$ bins, but in addition it contains an equal number of new pair combinations between the kaons in the $k_{\rm T}<0.675$ GeV/$c$ and $k_{\rm T}>0.675$ GeV/$c$ bins. So for the purposes of this simple comparison, we approximate the all $k_{\rm T}$ bin as being independent). The linear sum of the statistical and systematic uncertainties is used for $\sigma_i$ to be consistent with the linear sum of the statistical and systematic uncertainties plotted on the points in Figs.~\ref{fig4} and ~\ref{fig5}. 
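As a concrete illustration, the comparison in Eq.~(\ref{chi2}) reduces to an average of squared pulls over the $k_{\rm T}$ bins. The following pure-Python sketch uses made-up numbers, not the measured values:

```python
# Hypothetical R values (fm) for the three kT bins: K0sK+- measurements,
# identical-kaon reference values, and sigma_i = stat + syst on the K0sK+- points.
r_k0k = [5.17, 6.71, 4.75]
r_kk  = [5.00, 6.50, 4.60]
sigma = [0.57, 0.82, 0.54]

def chi2_per_ndf(measured, reference, sigmas):
    """chi^2/ndf: mean of squared pulls over the kT bins (ndf = number of bins)."""
    chi2 = sum((m - r) ** 2 / s ** 2
               for m, r, s in zip(measured, reference, sigmas))
    return chi2 / len(measured)

print(round(chi2_per_ndf(r_k0k, r_kk, sigma), 3))  # -> 0.077, i.e. consistent
```

A value of order unity or below indicates consistency between the two sets of radii; the vanishing p-values quoted for the Martin set correspond to $\chi^2/{\rm ndf}$ far above unity.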
The quantity $\omega_i(KK)$ is determined by fitting a quadratic to the identical kaon results and evaluating the fit at the average $k_T$ values of the K$^0_{\rm S}$K$^{\rm \pm}$ measurements. Table~\ref{tab:chisq} summarizes the results for each parameter set and the extracted p-values. As seen, the Achasov2, Achasov1 and Antonelli parameter sets are consistent with the identical kaon results for both $R$ and $\lambda$. The Martin parameter set is seen to have vanishingly small p-values for both $R$ and $\lambda$ and is thus in clear disagreement with the identical kaon results, as can easily be seen by examining Figs.~\ref{fig4} and \ref{fig5}. In order to quantitatively estimate the size of the non-resonant channel present, the ratio $\left \langle \frac{\lambda(K^0_{\rm S}K^{\rm \pm})}{\lambda(KK)} \right \rangle$ has been calculated for each parameter set, where the average is over the three $k_T$ values and the uncertainty is calculated from the average of the statistical+systematic uncertainties on the K$^0_{\rm S}$K$^{\rm \pm}$ parameters. These values are shown in the last column of Table~\ref{tab:chisq}. Disregarding the Martin value, the smallest value this ratio can take within the uncertainties is 0.87 (from the Achasov2 parameters), which would thus allow at most a 13\% non-resonant contribution.
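The interpolation step can be sketched with a Lagrange quadratic through three points; the $(k_{\rm T}, R)$ values below are illustrative placeholders, not the ALICE measurements:

```python
def quad_through(points):
    """Return the quadratic f(x) passing through three (x, y) points (Lagrange form)."""
    (x0, y0), (x1, y1), (x2, y2) = points
    def f(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
              + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
              + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return f

# Illustrative identical-kaon (kT, R) points; evaluate at the average kT of
# a K0sK+- bin to obtain the reference value omega_i(KK).
f = quad_through([(0.35, 6.4), (0.65, 5.0), (0.95, 4.3)])
print(round(f(0.55), 3))  # -> 5.389
```

By construction the quadratic passes exactly through the three reference points, so evaluating it at an intermediate average $k_{\rm T}$ gives the interpolated reference value.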
\begin{table} \centering \begin{tabular}{| c | c | c | c | c | c |} \hline Parameters & $\chi_R^2/{\rm ndf}$ & $R$ p-value & $\chi_\lambda^2/{\rm ndf}$ & $\lambda$ p-value & $\left \langle \frac{\lambda(K^0_{\rm S}K^{\rm \pm})}{\lambda(KK)} \right \rangle$ \\ \hline Achasov2 & 0.456 & 0.713 & 0.248 & 0.863 & 1.04$\pm$0.17 \\ \hline Achasov1 & 0.583 & 0.626 & 0.712 & 0.545 & 1.14$\pm$0.20 \\ \hline Antonelli & 1.297 & 0.273 & 0.302 & 0.824 & 1.09$\pm$0.20 \\ \hline Martin & 14.0 & 0.000 & 22.2 & 0.000 & 0.55$\pm$0.10 \\ \hline \end{tabular} \caption{Comparisons of $R$ and $\lambda$ from K$^0_{\rm S}$K$^{\rm \pm}$ with identical kaon results.} \label{tab:chisq} \end{table} The results of this study presented above clearly show that the measured K$^0_{\rm S}$K$^{\rm \pm}$ pairs have dominantly undergone a FSI through the $a_0$ resonance. This is remarkable considering that we measure in Pb-Pb collisions the average separation between the two kaons at freeze out to be $\sim 5$ fm; given the short-ranged nature of the strong interaction, $\sim 1$ fm, this would seem not to encourage a FSI but rather to encourage free-streaming of the kaons to the detector, resulting in a ``flat'' correlation function. A dominant FSI is what might be expected if the $a_0$ were a four-quark, i.e.\ tetraquark, state or a ``${\rm \overline{K}}-$K molecule.'' There appear to be no calculations in the literature for the tetraquark vs. diquark production cross sections for the interaction ${\rm K\overline{K}} \rightarrow a_0$, but qualitative arguments compatible with the $a_0$ being a four-quark state can be made based on the present measurements.
The main argument in favor of this is that the reaction channel ${\rm K}^0{\rm K}^-\rightarrow a^-_0$ ($\overline{\rm K}^0$K$^+\rightarrow a^+_0$) is strongly favored if the $a^-_0$ ($a^+_0$) is composed of ${\rm d\overline{s}s\overline{u}}$ (${\rm \overline{d}s\overline{s}u}$) quarks such that a direct transfer of the quarks in the kaons to the $a^-_0$ ($a^+_0$) has taken place, since this is an ``OZI superallowed'' reaction~\cite{Jaffe:1976ig}. The ``OZI rule'' can be stated as ``an inhibition associated with the creation or annihilation of quark lines''~\cite{Jaffe:1976ig}. Thus, a diquark $a_0$ final state is less favored according to the OZI rule since it would require the annihilation of the strange quarks in the kaon interaction. This would allow for the possibility of a significant non-resonant or free-streaming channel for the kaon interaction that would result in a $\lambda$ value below the identical-kaon value by diluting the $a_0$ signal. As mentioned above, the collision geometry itself also suppresses the annihilation of the strange quarks due to the large separation between the kaons at freeze out. Note that this assumes that the $C(k^*)$ distribution of a non-resonant channel would be mostly ``flat'' or ``monotonic'' in shape and not showing a strong resonant-like signal as seen for the $a_0$ in Fig.~\ref{fig1} and Fig.~\ref{fig2}. This assumption is clearly true in the free-streaming case, which is assumed in Eq.~\ref{eq:fit2} in setting $\alpha=0.5$ due to the non-resonant kaon combinations. A similar argument, namely that the success of the ``charged kaon loop model'' in describing the radiative $\phi$-decay data favors the $a_0$ as a tetraquark state, is given in Ref.~\cite{Achasov:2002ir}. \section{Summary} In summary, femtoscopic correlations with K$^0_{\rm S}$K$^{\rm \pm}$ pairs have been studied for the first time. 
This new femtoscopic method was applied to data from central Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV by the LHC ALICE experiment. Correlations in the K$^0_{\rm S}$K$^{\rm \pm}$ pairs are produced by final-state interactions which proceed through the a$_{\rm 0}$(980) resonance. The $a_0$ resonant FSI is seen to give an excellent representation of the shape of the signal region in the present study. The differences between ${\rm \overline{K}^0}$K$^+$ and K$^0$K$^-$ for the extracted $R$ and $\lambda$ values are found to be insignificant within the uncertainties of the present study. The three larger $a_0$ mass and decay parameter sets are favored by the comparison with the identical kaon results. The present results are also compatible with the interpretation of the $a_0$ resonance as a tetraquark state. This work should provide a constraint on models that are used to predict kaon-kaon interactions \cite{Oller:1998hw,HongXiem:2014tda}. It will be interesting to apply K$^0_{\rm S}$K$^{\rm \pm}$ femtoscopy to other collision energies, e.g.\ the higher LHC energies now available, and bombarding species, e.g.\ proton-proton collisions, since the different source sizes encountered in these cases will probe the interaction of the K$^0_{\rm S}$ with the K$^{\rm \pm}$ in different sensitivity ranges (i.e.\ see the $R$ dependence in Eq.~\ref{eq:fit2}). \newenvironment{acknowledgement}{\relax}{\relax} \begin{acknowledgement} \section*{Acknowledgements} \input{fa_2017-04-05.tex} \end{acknowledgement} \bibliographystyle{utphys}
\section{Algorithm Details} \begin{algorithm}[H] \caption{GenDICE (with function approximators)}\label{alg:gendice} \begin{algorithmic} \STATE {\bf Inputs}: Convex function $\phi$ and its Fenchel conjugate $\phi^*$, off-policy data $\Dcal = \{(s^{(i)}, a^{(i)}, r^{(i)}, s^{\prime(i)})\}_{i=1}^N$, initial state $s_0\sim\mu_0$, target policy $\pi$, {distribution corrector}~$\texttt{nn}_{\theta_\tau}(\cdot,\cdot), \texttt{nn}_{\theta_f}(\cdot,\cdot)$, constraint scalar $u$, penalty coefficient $\lambda$, learning rates $\eta_\tau, \eta_f, \eta_u$, number of iterations $K$, batch size $B$. \FOR{$t = 1,\dots,K$} \STATE Sample batch $\{(s^{(i)}, a^{(i)}, r^{(i)}, s^{\prime(i)})\}_{i=1}^B$ from $\Dcal$. \STATE Sample batch $\{s_0^{(i)}\}_{i=1}^B$ from $\mu_0$. \STATE Sample actions $a^{\prime(i)}\sim\pi(s^{\prime(i)})$, for $i=1,\dots,B$. \STATE Sample actions $a_0^{(i)}\sim\pi(s_0^{(i)})$, for $i=1,\dots,B$. \STATE Compute empirical loss $\hat J_{\chi^2}\rbr{\tau, u, f} = \rbr{1 - \gamma}\EE_{\mu_0\pi}\sbr{f\rbr{s, a}} + \gamma\EE_{\Tcal_p}\sbr{\tau\rbr{s, a}f\rbr{s', a'}}$ \\ $- \EE_{p}\sbr{\tau\rbr{s, a}\rbr{f\rbr{s, a} + \frac{1}{4}f^2\rbr{s, a}}} + \lambda\rbr{\EE_p\sbr{u\tau\rbr{s, a}-u}-\frac{u^2}{2}}$. \STATE Update $\theta_\tau\leftarrow \theta_\tau - \eta_\tau \nabla_{\theta_\tau}\hat J_{\chi^2}$. \STATE Update $\theta_f\leftarrow \theta_f + \eta_f \nabla_{\theta_f}\hat J_{\chi^2}$. \STATE Update $u\leftarrow u + \eta_u \nabla_{u} \hat J_{\chi^2}$. \ENDFOR \STATE {\bf Return} $\texttt{nn}_{\theta_\tau}$. \end{algorithmic} \end{algorithm} \section{Experimental Settings}~\label{appendix:exp_settings} \subsection{Tabular Case} For the Taxi domain, we follow the same protocol as used in~\citet{liu2018breaking}. The behavior and target policies are also taken from~\citet{liu2018breaking} (referred to in their work as the behavior policy for $\alpha=0$).
We use a similar taxi domain, where a grid size of $5 \times 5$ yields $2000$ states in total ($25\times16\times5$, corresponding to $25$ taxi locations, $16$ passenger appearance statuses and $5$ taxi statuses). We set our target policy as the final policy $\pi_*$ after running Q-learning~\citep{SutBar98} for $1000$ iterations, and set another policy $\pi_+$ after $950$ iterations as our base policy. The behavior policy is a mixture policy controlled by $\alpha$ as $\pi = (1-\alpha)\pi_* + \alpha \pi_+$, \ie, the larger $\alpha$ is, the closer the behavior policy is to the target policy. In this setting, we solve for the optimal stationary ratio $\tau$ exactly using matrix operations. Since~\citet{liu2018breaking} perform a similar exact solve for $|S|$ variables $\mu(s)$, for better comparison we also perform our exact solve with respect to $|S|$ variables $\tau(s)$. Specifically, the final objective of importance sampling will require knowledge of the importance weights $\mu(a|s)/p(a|s)$. \begin{wraptable}{r}{0.55\textwidth} \small \caption{Statistics of different graphs.} \begin{tabular}{lcc} \toprule[1.2pt] \textbf{Dataset} & \textbf{Number of Nodes} & \textbf{Number of Edges}\\ \midrule BA (Small) & $100$ & $400$\\ BA (Large) & $500$ & $2000$\\ Cora & $2708$ & $5429$\\ Citeseer & $3327$ & $4731$\\ \bottomrule[1.2pt] \vspace{-6mm} \end{tabular} \label{tab:oprdata} \end{wraptable} \vspace{-2mm} For offline PageRank, the graph statistics are illustrated in Table \ref{tab:oprdata}, and the degree statistics and graph visualization are shown in Figure \ref{fig:offpolicy-graphs}. The Barabasi–Albert (BA) graph begins with an initial connected network of $m_0$ nodes. Each new node is connected to $m\leq m_{0}$ existing nodes with a probability that is proportional to the number of links that the existing nodes already have.
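This preferential-attachment growth rule can be sketched in a few lines of pure Python (the graph size, attachment parameter, and seed below are illustrative, not the experimental settings, and the exact edge count depends on how the initial core is built):

```python
import random

def barabasi_albert(n, m, seed=0):
    """Grow a BA graph: each new node attaches to m distinct existing nodes,
    chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    edges = [(i, i + 1) for i in range(m - 1)]   # small connected core of m nodes
    pool = [v for e in edges for v in e] or [0]  # degree-weighted sampling pool
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(pool))         # preferential attachment
        for t in chosen:
            edges.append((new, t))
            pool.extend([new, t])                # both endpoints gain one degree
    return edges

edges = barabasi_albert(100, 4)
```

Sampling uniformly from the degree-weighted pool is what makes high-degree nodes proportionally more likely to receive new links.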
Intuitively, heavily linked nodes (`hubs') tend to quickly accumulate even more links, while nodes with only a few links are unlikely to be chosen as the destination for a new link; the new nodes have a `preference' to attach themselves to the already heavily linked nodes. The two real-world graphs are built upon real-world citation networks. In our experiments, the weights of the BA graph are randomly drawn from a standard Gaussian distribution, with normalization to ensure a proper transition matrix. The offline data are collected by a random walker on the graph and consist of initial-state and next-state pairs from a single trajectory. In the experiments, we vary the number of off-policy samples to validate the effectiveness of \estname with limited offline samples. \begin{figure}[ht] \centering \begin{tabular}{cccc} \hspace{-3mm} \includegraphics[width=0.25\linewidth]{figures/BA100} & \hspace{-5mm} \includegraphics[width=0.25\linewidth]{figures/BA500.pdf} & \hspace{-5mm} \includegraphics[width=0.25\linewidth]{figures/Cora.pdf} & \hspace{-5mm} \includegraphics[width=0.25\linewidth]{figures/Citeseer.pdf} \\ \end{tabular} \vspace{-4mm} \caption{Degree statistics and visualization of different graphs.} \label{fig:offpolicy-graphs} \vspace{-4mm} \end{figure} \subsection{Continuous Case} We use the Cartpole, Reacher and HalfCheetah tasks as given by OpenAI Gym. For importance sampling, we learn a neural network policy via behavior cloning, and use its probabilities for computing importance weights $\pi_*(a|s)/\pi(a|s)$. All neural networks are feed-forward with two hidden layers of dimension $64$ and $\tanh$ activations. \paragraph{Discrete Control Tasks} We modify the Cartpole task to be infinite horizon: we use the same dynamics as in the original task but change the reward to be $-1$ if the original task returns a termination (when the pole falls below some threshold) and $1$ otherwise.
We train a policy on this task with standard Deep Q-Learning~\citep{MniKavSilGraetal13} until convergence. We then define the target policy $\pi_*$ as a weighted combination of this pre-trained policy (weight $0.7$) and a uniformly random policy (weight $0.3$). The behavior policy $\pi$ for a specific $0\le \alpha\le 1$ is taken to be a weighted combination of the pre-trained policy (weight $0.55 + 0.15\alpha$) and a uniformly random policy (weight $0.45 - 0.15\alpha$). We train each stationary distribution correction estimation method using the Adam optimizer with batches of size $2048$ and learning rates chosen using a hyperparameter search from $\{0.0001, 0.0003, 0.001, 0.003\}$, with the best found to be $0.0003$. \paragraph{Continuous Control Tasks} For the Reacher task, we train a deterministic policy until convergence via DDPG~\citep{LilHunPriHeeetal15}. We define the target policy $\pi$ as a Gaussian with mean given by the pre-trained policy and standard deviation given by $0.1$. The behavior policy $\pi_b$ for a specific $0\le \alpha\le 1$ is taken to be a Gaussian with mean given by the pre-trained policy and standard deviation given by $0.4 - 0.3\alpha$. We train each stationary distribution correction estimation method using the Adam optimizer with batches of size $2048$ and learning rates chosen using a hyperparameter search from $\{0.0001, 0.0003, 0.001, 0.003\}$, with the optimal learning rate found to be $0.003$. For the HalfCheetah task, we also train a deterministic policy until convergence via DDPG~\citep{LilHunPriHeeetal15}. We define the target policy $\pi$ as a Gaussian with mean given by the pre-trained policy and standard deviation given by $0.1$. The behavior policy $\pi_b$ for a specific $0\le \alpha\le 1$ is taken to be a Gaussian with mean given by the pre-trained policy and standard deviation given by $0.2 - 0.1\alpha$.
We train each stationary distribution correction estimation method using the Adam optimizer with batches of size $2048$ and learning rates chosen using a hyperparameter search from $\{0.0001, 0.0003, 0.001, 0.003\}$, with the optimal learning rate found to be $0.003$. \section{Additional Experiments}\label{appendix:exp} \subsection{OPE for Discrete Control} On the discrete control task, we modify the Cartpole task to be infinite horizon: the original dynamics is used with a modified reward function, where the agent receives $-1$ if the environment returns a termination (\ie, the pole falls below some threshold) and $1$ otherwise. As shown in Figure \ref{fig:offpolicy-cartpole_l}, our method is competitive with IS and Model-Based in the average-reward case, but eventually outperforms both in terms of log MSE loss. Specifically, it is relatively difficult to fit a policy to data collected by multiple policies, which explains the poor performance of IS. \vspace{-2mm} \begin{figure}[ht] \centering \includegraphics[width=1\linewidth]{figures/Cartpole.pdf} \vspace{-6mm} \caption{Results on Cartpole. Each plot in the first row shows the estimated average step reward over training and different behavior policies (higher $\alpha$ corresponds to a behavior policy closer to the target policy; the same in other figures). M1: $\alpha=[0.0, 0.33]$; M2: $\alpha=[0.0, 0.33, 0.66]$.} \label{fig:offpolicy-cartpole_l} \end{figure} \subsection{Additional Results on Continuous Control} In this section, we show more results on the continuous control tasks, \ie, HalfCheetah and Reacher. Figure \ref{fig:offpolicy-reacher_l} shows the $\log$ MSE over training steps, where \estname outperforms the other baselines with different behavior policies. Figure \ref{fig:offpolicy-reacher_r} better illustrates how our method beats the other baselines and can accurately estimate the reward of the target policy.
In addition, Figure \ref{fig:offpolicy-half_r} shows that \estname gives a better reward estimate of the target policy. In these figures, the left three panels show the performance with an off-policy dataset collected by a single behavior policy, ordered from more difficult to easier settings; the right two panels show the results where the off-policy dataset is collected by multiple behavior policies. Figure \ref{fig:ablationlr_l} shows the ablation study results in terms of estimated rewards. The left two panels show the effects of different learning rates. When $\alpha=0.33$, \ie, when the OPE task is relatively easy, \estname obtains good results in all learning-rate settings. However, when $\alpha=0.0$, \ie, when the estimation becomes more difficult, only \estname with larger learning rates obtains reasonable estimates. Interestingly, performance improves with larger learning rates; when the learning rate is $0.001$ with $\alpha=0.0$, the variance is very high, though in some cases the estimation becomes more accurate. The right three panels show different activation functions with different behavior policies. The square and softplus functions work well, while the exponential function performs poorly under some settings. In practice, we use the square function because of its low variance and better performance in most cases. \begin{figure}[ht] \centering \includegraphics[width=1\linewidth]{figures/Reacher_l.pdf} \vspace{-6mm} \caption{Results on Reacher. Each plot in the first row shows the estimated average step reward over training and different behavior policies (higher $\alpha$ corresponds to a behavior policy closer to the target policy; the same in other figures).} \label{fig:offpolicy-reacher_l} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=1\linewidth]{figures/Reacher_r.pdf} \vspace{-6mm} \caption{Results on Reacher.
Each plot in the first row shows the estimated average step reward over training and different behavior policies (higher $\alpha$ corresponds to a behavior policy closer to the target policy; the same in other figures).} \label{fig:offpolicy-reacher_r} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=1\linewidth]{figures/Half_r.pdf} \vspace{-6mm} \caption{Results on HalfCheetah. Each plot in the first row shows the estimated average step reward over training and different behavior policies (higher $\alpha$ corresponds to a behavior policy closer to the target policy).} \label{fig:offpolicy-half_r} \end{figure} \begin{figure}[h!] \centering \begin{tabular}{cc} \hspace{-3mm} \includegraphics[width=0.4\linewidth]{figures/Reacher_lr.pdf} & \hspace{-5mm} \includegraphics[width=0.6\linewidth]{figures/Reacher_par.pdf} \\ \end{tabular} \vspace{-3mm} \caption{{Results of ablation study with different learning rates and activation functions. The plots show the estimated average step reward over training and different behavior policies.}} \label{fig:ablationlr_l} \end{figure} \subsection{Comparison with self-normalization trick}\label{appendix:selfnormal} The self-normalization trick used in \cite{liu2018breaking} encodes the normalization constraint in $\tau$, while the principled optimization technique is considered in GenDICE. Further, the self-normalization trick leads to several theoretical disadvantages, in both statistical and computational aspects: \begin{itemize}[leftmargin=*, nosep, topsep=0pt] \item[{\bf i)}] It will generally not produce an unbiased solution. Although $\frac{1}{| \mathcal{D}|}\sum_{(s', a')\in \mathcal{D}}\tau(s', a')$ is an unbiased estimator for $\mathbb{E}[\tau]$, the plugin estimator $\frac{\tau(s, a)}{\frac{1}{| \mathcal{D}|}\sum_{(s', a')\in \mathcal{D}}\tau(s', a')}$ will be \emph{biased} for $\frac{\tau(s, a)}{\mathbb{E}[\tau]}$. \item[{\bf ii)}] It will induce more computational cost.
Specifically, the self-normalized ratio will be in the form of $\frac{\tau(s, a)}{\frac{1}{| \mathcal{D}|}\sum_{(s', a')\in \mathcal{D}}\tau(s', a')}$, which requires going through all the samples in the training set $\mathcal{D}$ even to estimate a single stochastic gradient, and is thus prohibitive for large datasets. \end{itemize} Empirically, self-normalization was the most natural and the first idea we tried during this project. We report some empirical results for this method in the OPR setting. \begin{wraptable}{r}{0.46\textwidth} \vspace{-7mm} \caption{Comparison between regularization and self-normalization.} \vspace{2mm} \centering \begin{tabular}{lc} \toprule[1.2pt] & \textbf{$\log$ KL-divergence} \\ \midrule self-normalization & $-4.26 \pm 0.157 $ \\ regularization & $-4.74 \pm 0.163 $\\ \bottomrule[1.2pt] \vspace{-6mm} \end{tabular} \label{tab:oprselfaba} \end{wraptable} Despite the additional computational cost, it performs worse than the proposed regularization technique used in the current version of GenDICE. Table \ref{tab:oprselfaba} shows a comparison between self-normalization and regularization on OPR with $\chi^2$-divergence for the BA graph with 100 nodes, 10,000 offline samples, and 20 trials. We stop the algorithms under the same running-time budget. \section{Background} \vspace{-2mm} \label{eq:prelim} \label{eq:background} We first introduce off-line PageRank (OPR) and off-policy policy evaluation (OPE) as two motivating domains, where the goal is to estimate stationary quantities given only off-line access to a set of sampled transitions from an environment. \vspace{-3mm} \paragraph{Off-line PageRank (OPR)} The celebrated PageRank algorithm \citep{PagBriMotWin99} defines the ranking of a web page in terms of its asymptotic visitation probability under a random walk on the (augmented) directed graph specified by the hyperlinks.
If we denote the World Wide Web by a directed graph $G = \rbr{V, E}$ with vertices (web pages) $v\in V$ and edges (hyperlinks) $\rbr{v, u}\in E$, PageRank considers the random walk defined by the Markov transition operator $v\rightarrow u$: \begin{equation} \textstyle \Pb\rbr{u|v} = \frac{(1-\eta)}{\abr{v}}\one_{\rbr{v, u}\in E} + \frac{\eta}{\abr{V}} \,, \label{eq:pagerank} \end{equation} where $\abr{v}$ denotes the out-degree of vertex $v$ and $\eta\in[0,1)$ is a probability of ``teleporting" to any page uniformly. Define $d_t\rbr{v} \defeq \PP\rbr{s_t=v| s_0\sim \mu_0, \forall i< t, s_{i+1}\sim\Pb(\cdot|s_i)}$, where $\mu_0$ is the initial distribution over vertices, then the original PageRank algorithm explicitly iterates for the limit \begin{equation} d\rbr{v}\defeq \begin{cases} \lim_{t\rightarrow\infty} d_t\rbr{v} & \quad \text{if } \gamma =1 \\[0.5ex] (1-\gamma)\sum_{t=0}^{\infty}\gamma^t d_t\rbr{v} & \quad \text{if } \gamma \in (0, 1)\,. \end{cases} \end{equation} The classical version of this problem is solved by tabular methods that simulate \eqref{eq:pagerank}. However, we are interested in a more scalable off-line version of the problem where the transition model is not explicitly given. Instead, consider estimating the rank of a particular web page $v'$ from a large web graph, given only a sample $\Dcal = \cbr{\rbr{v, u}_i}_{i=1}^N$ from a random walk on $G$ as specified above. We would still like to estimate $d(v')$ based on this data. First, note that if one knew the distribution $p$ by which any vertex $v$ appeared in $\Dcal$, the target quantity could be re-expressed by a simple importance ratio $d\rbr{v'} =\EE_{v\sim p}\sbr{\frac{d\rbr{v}}{p\rbr{v}}\one_{v=v'}}$. Therefore, if one had the correction ratio function $\tau\rbr{v}=\frac{d\rbr{v}}{p\rbr{v}}$, an estimate of $d\rbr{v'}$ can easily be recovered via $d\rbr{v'}\approx\hat p\rbr{v'}\tau\rbr{v'}$, where $\hat p\rbr{v'}$ is the empirical probability of $v'$ estimated from $\Dcal$. 
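For a tabular graph, the limit can be obtained by simply iterating the transition operator of Eq.~(\ref{eq:pagerank}). The following pure-Python sketch uses a hypothetical four-page web in which every page has an out-link, so dangling nodes are ignored:

```python
# out[v] lists the pages that v links to; eta is the teleport probability.
out = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n, eta = 4, 0.15

d = [1.0 / n] * n                        # start from the uniform distribution
for _ in range(200):                     # power iteration: d <- d P
    new = [eta / n] * n                  # teleport mass (sum_v d(v) = 1)
    for v, links in out.items():
        for u in links:
            new[u] += (1 - eta) * d[v] / len(links)
    d = new
# d now approximates the asymptotic visitation probabilities d(v)
```

In the off-line version of the problem, this explicit iteration is unavailable because the transition model is not given; only the sampled walk $\Dcal$ is.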
The main approach we take to this problem is to recover a good estimate of the ratio function $\tau$. \vspace{-3mm} \paragraph{Policy Evaluation} An important generalization of this stationary value estimation problem arises in RL in the form of policy evaluation. Consider a Markov Decision Process~(MDP) $\mathcal{M} = \langle S, A, \Pb, R, \gamma, \mu_0 \rangle$~\citep{Puterman14}, where $S$ is a state space, $A$ is an action space, $\Pb\rbr{s'|s, a}$ denotes the transition dynamics, $R$ is a reward function, $\gamma\in(0, 1]$ is a discount factor, and $\mu_0$ is the initial state distribution. Given a policy, which chooses actions in any state $s$ according to the probability distribution $\pi(\cdot|s)$, a trajectory $\beta=(s_0,a_0,r_0,s_1,a_1,r_1,\ldots)$ is generated by first sampling the initial state $s_0 \sim \mu_0$, and then for $t \ge 0$, $a_t \sim \pi(\cdot|s_t)$, $r_t \sim R(s_t,a_t)$, and $s_{t+1} \sim \Pb(\cdot|s_t,a_t)$. The value of a policy $\pi$ is the expected per-step reward defined as: \begin{equation}\label{eq:rewardfunc} \textstyle \text{\small{Average:}}~~\pval(\pi)\defeq\lim_{T\rightarrow\infty} \frac{1}{T+1}\EE\sbr{\sum_{t=0}^{T} r_t}\,, ~~\text{\small{Discounted:}}~~\pval_\gamma(\pi)\defeq (1-\gamma)\EE\sbr{\sum_{t=0}^{\infty}\gamma^t r_t}\,. \end{equation} In the above, the expectation is taken with respect to the randomness in the state-action pair $\Pb\rbr{s'|s, a}\pi\rbr{a'|s'}$ and the reward $R\rbr{s_t, a_t}$. Without loss of generality, we assume the limit exists for the average case, and hence $\pval(\pi)$ is finite. \vspace{-3mm} \paragraph{Behavior-agnostic Off-Policy Evaluation~(OPE)} An important setting of policy evaluation that often arises in practice is to estimate $\pval_\gamma\rbr{\pi}$ or $\pval\rbr{\pi}$ given a fixed dataset $\Dcal = \cbr{\rbr{s, a, r, s'}_i}_{i=1}^N\sim \Pb\rbr{s'|s, a}p\rbr{s, a}$, where $p\rbr{s, a}$ is an unknown distribution induced by multiple unknown behavior policies.
This problem is different from the classical form of OPE, where it is assumed that a known behavior policy $\pi_b$ is used to collect transitions. In the behavior-agnostic scenario, however, typical importance sampling~(IS)~estimators~\citep[e.g.,][]{PreSutSin00} do not apply. Even if one can assume $\Dcal$ consists of trajectories where the behavior policy can be estimated from data, it is known that straightforward IS estimators suffer a variance exponential in the trajectory length, known as the ``curse of horizon''~\citep{jiang2015doubly,liu2018breaking}. Let $d^\pi_t\rbr{s, a} = \PP\rbr{s_t=s, a_t=a| s_0\sim \mu_0, \forall i< t, a_i\sim\pi\rbr{\cdot|s_i}, s_{i+1}\sim\Pb(\cdot|s_i, a_i)}$. The stationary distribution can then be defined as \begin{equation} \label{eq:mdp_stationary} \mu_\gamma^\pi\rbr{s, a}\defeq \begin{cases} \lim_{T\rightarrow\infty} \frac{1}{T+1}\sum_{t=0}^T d^\pi_t\rbr{s, a} = \lim_{t\rightarrow\infty} d^\pi_t\rbr{s, a} & \quad \text{if } \gamma =1 \\[0.5ex] (1-\gamma)\sum_{t=0}^{\infty}\gamma^t d^\pi_t\rbr{s, a} & \quad \text{if } \gamma \in (0, 1)\,. \end{cases} \end{equation} With this definition, $\pval(\pi)$ and $\pval_\gamma\rbr{\pi}$ can be equivalently re-expressed as \begin{equation} \label{eq:dual_reward} \textstyle \pval_\gamma(\pi)\defeq \EE_{\mu_\gamma^\pi}\sbr{R\rbr{s, a}} = \EE_{p}\sbr{\frac{\mu_\gamma^\pi\rbr{s, a}}{p\rbr{s, a}}R\rbr{s, a}}\,. \end{equation} Here we see once again that if we had the correction ratio function $\tau\rbr{s,a}=\frac{\mu_\gamma^\pi\rbr{s, a}}{p\rbr{s, a}}$ a straightforward estimate of $\pval_\gamma(\pi)$ could be recovered via $\pval_\gamma(\pi)\approx\EE_{\hat p}\sbr{\tau\rbr{s, a}R\rbr{s,a}}$, where $\hat p\rbr{s,a}$ is an empirical estimate of $p\rbr{s,a}$. In this way, the behavior-agnostic OPE problem can be reduced to estimating the correction ratio function $\tau$, as above.
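This reduction can be checked numerically on a toy example: compute $\mu^\pi_\gamma$ by discounted power iteration on a tiny two-state chain, form $\tau = \mu^\pi_\gamma / p$, and verify that reweighting recovers the on-policy value (all numbers below are hypothetical):

```python
P = [[0.9, 0.1], [0.2, 0.8]]   # target-policy state transition matrix
mu0 = [1.0, 0.0]               # initial distribution
R = [0.0, 1.0]                 # per-state reward
p = [0.6, 0.4]                 # off-policy sampling distribution
gamma = 0.9

# mu_gamma = (1 - gamma) * sum_t gamma^t d_t  (discounted occupancy)
d, mu_g = mu0[:], [0.0, 0.0]
for t in range(600):
    for s in range(2):
        mu_g[s] += (1 - gamma) * gamma ** t * d[s]
    d = [sum(d[s] * P[s][sp] for s in range(2)) for sp in range(2)]

tau = [m / q for m, q in zip(mu_g, p)]             # correction ratio mu_gamma / p
rho = sum(p[s] * tau[s] * R[s] for s in range(2))  # E_p[tau R] = E_{mu_gamma}[R]
```

Here the exact ratio is available because the chain is known; the behavior-agnostic problem is precisely to estimate `tau` when only samples from `p` and the transitions are observed.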
We note that \citet{liu2018breaking} and~\citet{nachum2019dualdice} also exploit~\eqref{eq:dual_reward} to reduce OPE to stationary distribution correction, but these prior works are distinct from the current proposal in several ways. First, the inverse propensity score~(IPS)~method of \citet{liu2018breaking} assumes the transitions are sampled from a \emph{single} behavior policy, which must be \emph{known} beforehand; hence that approach is not applicable in the behavior-agnostic OPE setting. Second, the recent DualDICE~algorithm \citep{nachum2019dualdice} is also a behavior-agnostic OPE estimator, but its derivation relies on a \emph{change-of-variable} trick that is only valid for $\gamma<1$. This previous formulation becomes unstable when $\gamma\rightarrow 1$, as shown in~\secref{sec:experiments} and~\appref{appendix:exp}. The behavior-agnostic OPE estimator we derive below in~\secref{sec:dual_est} is applicable both when $\gamma = 1$ and $\gamma\in (0, 1)$. This connection is why we name the new estimator \estabb, for \emph{GENeralized stationary DIstribution Correction Estimation}. \section{Conclusion}\label{sec:conclusion} In this paper, we proposed a novel algorithm \estname for general stationary distribution correction estimation, which can handle both the discounted and average stationary distribution given multiple behavior-agnostic samples. Empirical results on off-policy evaluation and offline PageRank show the superiority of the proposed method over the existing state-of-the-art methods. \subsubsection*{Acknowledgments} The authors would like to thank Ofir Nachum, the rest of the Google Brain team and the anonymous reviewers for helpful discussions and feedback. \section{\estname}\label{sec:dual_est} \vspace{-2mm} As noted, there are important estimation problems in the Markov chain and MDP settings that can be recast as estimating a stationary distribution correction ratio.
We first outline the conditions that characterize the correction ratio function $\tau$, upon which we construct the objective for the~\estabb estimator, and design an efficient algorithm for optimization. We will develop our approach for the more general MDP setting, with the understanding that all the methods and results can be easily specialized to the Markov chain setting. \vspace{-2mm} \subsection{Estimating Stationary Distribution Correction}\label{sec:unbias_estimator} \vspace{-1mm} The stationary distribution $\mu^\pi_\gamma$ defined in~\eqref{eq:mdp_stationary} can also be characterized via \begin{equation} \label{eq:stationary} \resizebox{0.85\hsize}{!}{$ \mu\rbr{s', a'} = \underbrace{\rbr{1 - \gamma}\mu_0\rbr{s'}\pi\rbr{a'|s'} + \gamma\int \pi\rbr{a'|s'}\Pb\rbr{s'|s, a}\mu\rbr{s, a}ds\,da}_{\rbr{\Tcal\circ \mu}\rbr{s', a'}},\,\,\forall \rbr{s', a'}\in S\times A. $} \end{equation} At first glance, this equation shares a superficial similarity to the Bellman equation, but there is a fundamental difference. The Bellman operator recursively integrates out future $\rbr{s', a'}$ pairs to characterize a current pair $\rbr{s, a}$ value, whereas the distribution operator $\Tcal$ defined in \eqref{eq:stationary} operates in the reverse temporal direction. When $\gamma<1$, \eqref{eq:stationary} always has a fixed-point solution. For $\gamma =1$, in the discrete case, the fixed point exists as long as $\Tcal$ is ergodic; in the continuous case, the conditions for fixed-point existence become more complicated~\citep{MeyTwe12} and are beyond the scope of this paper. The development below is based on a divergence $D$ and the following default assumption. \vspace{-2mm} \begin{assumption}[Markov chain regularity]\label{asmp:stat_exist} For the given target policy $\pi$, the resulting state-action transition operator $\Tcal$ has a unique stationary distribution $\mu$ that satisfies $D(\Tcal\circ\mu\|\mu)=0$.
\end{assumption} \vspace{-1mm} In the behavior-agnostic setting we consider, one does not have direct access to $\Pb$ for element-wise evaluation or sampling, but instead is given a fixed set of samples drawn from $\Pb\rbr{s'|s, a}p\rbr{s, a}$, where $p\rbr{s,a}$ is some distribution over $S\times A$. Define $\Tpgam$ to be a mixture of $\mu_0\pi$ and $\Tcal_p$; \ie, let \begin{equation}\label{eq:ref_tp} \Tpgam\rbr{\rbr{s', a'}, \rbr{s, a}} \defeq \rbr{1 - \gamma}\mu_0\rbr{s'}\pi\rbr{a'|s'} +\gamma \underbrace{\pi\rbr{a'|s'}\Pb\rbr{s'|s, a}p\rbr{s, a}}_{\Tcal_p\rbr{\rbr{s', a'}, \rbr{s, a}}} . \end{equation} Obviously, conditioning on $\rbr{s, a, s'}$ one could easily sample $a'\sim \pi\rbr{a'|s'}$ to form $\rbr{s, a, s', a'}\sim\Tcal_p\rbr{\rbr{s', a'}, \rbr{s, a}}$; similarly, a sample $\rbr{s', a'}\sim \mu_0\rbr{s'}\pi\rbr{a'|s'}$ could be formed from $s'$. Mixing these two kinds of samples with probabilities $\gamma$ and $1-\gamma$, respectively, yields a sample $\rbr{s, a, s', a'}\sim \Tpgam\rbr{\rbr{s', a'}, \rbr{s, a}}$. Based on these observations, the stationary condition for the ratio from~\eqref{eq:stationary} can be re-expressed in terms of $\Tpgam$ as \vspace{-1mm} \begin{equation}\label{eq:stationary_ratio} \resizebox{0.9\hsize}{!}{$ p\rbr{s', a'}\tau^*\rbr{s', a'} =\underbrace{\rbr{1 - \gamma}\mu_0\rbr{s'}\pi\rbr{a'|s'} + \gamma \int\pi\rbr{a'|s'}\Pb\rbr{s'|s, a}p\rbr{s, a}\tau^*\rbr{s, a}ds\,da}_{\rbr{\Tpgam\circ\tau^*}\rbr{s', a'}}, $} \end{equation} where $\tau^*\rbr{s, a}\defeq \frac{\mu\rbr{s, a}}{p\rbr{s, a}}$ is the correction ratio function we seek to estimate. One natural approach to estimating $\tau^*$ is to match the LHS and RHS of~\eqref{eq:stationary_ratio} with respect to some divergence $D\rbr{\cdot\|\cdot}$ over the empirical samples. That is, we consider estimating $\tau^*$ by solving the optimization problem \begin{equation} \label{eq:naive_est} \min_{\tau\ge 0}\,\,D\rbr{\Tpgam\circ\tau\|\ptau}.
\end{equation} Although this forms the basis of our approach, there are two severe issues with this naive formulation that first need to be rectified: \begin{itemize}[leftmargin=*,topsep=0pt, nosep] \item[{\bf i)}] {\bf Degenerate solutions: } When $\gamma =1$, the operator $\Tcal_{\gamma=1,\mu_0}^p$ is invariant to constant rescaling: if $\tau^* = \Tcal_{\gamma=1,\mu_0}^p\circ\tau^*$ then $c\tau^* = \Tcal_{\gamma=1,\mu_0}^p\circ \rbr{c\tau^*}$ for any $c\ge 0$. Therefore, simply minimizing the divergence $D\rbr{\Tcal_{\gamma=1,\mu_0}^p\circ\tau\|\ptau}$ cannot provide a desirable estimate of $\tau^*$. In fact, in this case the trivial solution $\tau\rbr{s,a} = 0$ cannot be eliminated. \item[{\bf ii)}] {\bf Intractable objective: } The divergence $D\rbr{\Tpgam\circ\tau\|\ptau}$ involves the computation of $\Tpgam\circ\tau$, which in general involves an intractable integral. Thus, evaluating the exact objective is intractable; moreover, doing so would violate the assumption that we only have access to samples from $\Tpgam$ and are not able to evaluate it at arbitrary points. \end{itemize} We address each of these two issues in a principled manner. \vspace{-2mm} \subsection{Eliminating degenerate solutions} \vspace{-1mm} To avoid degenerate solutions when $\gamma=1$, we ensure that the solution is a proper density ratio; that is, the property $\tau\in \Xi\defeq\cbr{\tau\rbr{\cdot}\ge0, \EE_{p}\sbr{\tau} = 1}$ must be true of any $\tau$ that is a ratio of some density to $p$. This provides an additional constraint that we add to the optimization formulation \begin{equation} \label{eq:constrained_est} \min_{\tau\ge 0}\,\,D\rbr{\Tpgam\circ\tau\|\ptau}, \quad\st,\quad \EE_{p}\sbr{\tau} = 1. \end{equation} With this additional constraint, it is obvious that the trivial solution $\tau\rbr{s, a} = 0$ is eliminated as an infeasible point of \eqnref{eq:constrained_est}, along with other degenerate solutions $\tau\rbr{s, a} = c\tau^*\rbr{s, a}$ with $c\neq 1$.
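The degeneracy at $\gamma=1$ and the effect of the normalization constraint can be checked numerically on a toy chain. In the minimal numpy sketch below, the 3-state transition matrix and the data distribution `p` are assumptions made purely for illustration:

```python
import numpy as np

# Toy 3-state Markov chain (assumed for illustration); rows sum to 1.
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2],
              [0.5, 0.2, 0.3]])

# Stationary distribution mu: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
mu = np.real(v[:, np.argmax(np.real(w))])
mu = mu / mu.sum()

p = np.array([0.5, 0.3, 0.2])       # data distribution over states
tau_star = mu / p                    # the true correction ratio

def residual(tau):
    """|| (T_p o tau) - p * tau || at gamma = 1 (no mu_0 term)."""
    lhs = p * tau
    rhs = P.T @ (p * tau)            # integrate tau(s) P(s'|s) p(s) over s
    return float(np.linalg.norm(lhs - rhs))

# Every rescaling c * tau_star (including the trivial c = 0) satisfies
# the gamma = 1 stationary condition...
for c in (0.0, 0.5, 1.0, 2.0):
    assert residual(c * tau_star) < 1e-8
# ...but only c = 1 satisfies the normalization E_p[tau] = 1.
assert abs(float(np.dot(p, tau_star)) - 1.0) < 1e-8
assert abs(float(np.dot(p, 2.0 * tau_star)) - 1.0) > 0.5
```

Note that $\EE_p\sbr{\tau^*} = \sum_s \mu(s) = 1$ holds automatically for the true ratio, so the constraint excludes exactly the degenerate scalings.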
Unfortunately, exactly solving an optimization with expectation constraints is very complicated in general~\citep{LanZho16}, particularly given a nonlinear parameterization for $\tau$. The penalty method~\citep{LueYe15} provides a much simpler alternative, in which a sequence of regularized problems is solved \begin{equation} \label{eq:regularized_est} \min_{\tau\ge 0}\,\,J\rbr{\tau}\defeq D\rbr{\Tpgam\circ\tau\|\ptau} + \smallfrac{\lambda}{2} \rbr{\EE_{p}\sbr{\tau} - 1}^2, \end{equation} with $\lambda$ increasing. The drawback of the penalty method is that it generally requires $\lambda \rightarrow\infty$ to ensure strict feasibility, which is still impractical, \newadd{especially in stochastic gradient descent. An infinite $\lambda$ may induce \emph{unbounded variance} in the gradient estimator, and thus, \emph{divergence} in optimization.} However, by exploiting the special structure of the solution sets to~\eqref{eq:regularized_est}, we can show that, remarkably, it is unnecessary to increase $\lambda$. \vspace{-2mm} \begin{theorem} \label{thm:soundness} For $\gamma\in (0, 1]$ and any $\lambda >0$, the solution to \eqref{eq:regularized_est} is given by $\tau^*\rbr{s,a}=\frac{\mu\rbr{s, a}}{p\rbr{s, a}}$. \end{theorem} \vspace{-2mm} The detailed proof for \thmref{thm:soundness} is given in~\appref{appendix:soundness}. By~\thmref{thm:soundness}, we can estimate the desired correction ratio function $\tau^*$ by solving only one optimization with an arbitrary $\lambda>0$. \vspace{-2mm} \subsection{Exploiting dual embedding} \vspace{-1mm} The optimization in~\eqref{eq:regularized_est} involves the integrals $\Ttau$ and $\EE_p\sbr{\tau}$ inside nonlinear loss functions, hence appears difficult to solve. Moreover, obtaining unbiased gradients with a naive approach requires double sampling~\citep{Baird95}. Instead, we bypass both difficulties by applying a dual embedding technique~\citep{DaiHePanBooetal16,DaiShaLiXiaHeetal17}.
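The claim of \thmref{thm:soundness} can be sanity-checked numerically. In the sketch below (a toy tabular chain; the transition matrix, the data distribution `p`, and the $\chi^2$-style divergence are illustrative assumptions), the penalized objective is minimized at the true ratio for every tested $\lambda>0$, so $\lambda$ indeed need not be driven to infinity:

```python
import numpy as np

# Toy chain and data distribution (assumed for illustration).
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2],
              [0.5, 0.2, 0.3]])
w, v = np.linalg.eig(P.T)
mu = np.real(v[:, np.argmax(np.real(w))])
mu = mu / mu.sum()
p = np.array([0.5, 0.3, 0.2])
tau_star = mu / p

def J(tau, lam, gamma=1.0):
    """Penalized objective with a chi^2-style divergence on the tabular chain."""
    mix = gamma * (P.T @ (p * tau))          # (T_p o tau); no mu_0 term at gamma = 1
    ratio = mix / (p * tau + 1e-12)          # guard against division by zero
    div = float(np.sum(p * tau * (ratio - 1.0) ** 2))      # chi^2 divergence term
    penalty = 0.5 * lam * (float(np.dot(p, tau)) - 1.0) ** 2
    return div + penalty

# For several penalty weights, scan rescalings c * tau_star: the minimum
# sits at c = 1 in every case.
cs = np.linspace(0.2, 2.0, 181)
for lam in (0.1, 1.0, 10.0):
    vals = [J(c * tau_star, lam) for c in cs]
    assert abs(cs[int(np.argmin(vals))] - 1.0) < 1e-6
```

At $\gamma=1$ the divergence term vanishes for every rescaling $c\tau^*$, so the penalty alone selects $c=1$, regardless of the magnitude of $\lambda$.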
In particular, we assume the divergence $D$ is in the form of an $f$-divergence~\citep{nowozin2016f} \[ \textstyle D_\phi\rbr{\Ttau\|\ptau}\defeq \int\ptau\rbr{s, a}\phi\rbr{\frac{\Ttau\rbr{s, a}}{\ptau\rbr{s, a}}}ds\,da \] where $\phi\rbr{\cdot}:\RR_+\rightarrow\RR$ is a convex, lower-semicontinuous function with $\phi\rbr{1} = 0$. Plugging this into $J\rbr{\tau}$ in~\eqref{eq:regularized_est}, we can easily verify the convexity of the objective. \vspace{-2mm} \begin{theorem}\label{thm:convexity} For an $f$-divergence with valid $\phi$ defining $D_\phi$, the objective $J\rbr{\tau}$ is convex w.r.t. $\tau$. \end{theorem} \vspace{-1mm} The detailed proof is provided in~\appref{appendix:convexity}. Recall that a suitable convex function can be represented as $\phi\rbr{x} = \max_{f} x\cdot f - \phi^*\rbr{f}$, where $\phi^*$ is the Fenchel conjugate of $\phi\rbr{\cdot}$. In particular, we have the representation $\frac{1}{2}x^2 = \max_{u}ux - \frac{1}{2}u^2$, which allows us to re-express the objective as \begin{equation} \resizebox{0.92\hsize}{!}{$ J\rbr{\tau} = \int \ptau\rbr{s', a'}\cbr{\max_{f}\sbr{\frac{\Ttau\rbr{s', a'}}{\ptau\rbr{s', a'}}f - \phi^*\rbr{f}}}ds'da' + \lambda\cbr{\max_{u}\sbr{u\rbr{\EE_p\sbr{\tau}-1}-\frac{u^2}{2}}}. $} \end{equation} Applying the interchangeability principle~\cite{ShaDen14,DaiHePanBooetal16}, one can replace the inner $\max$ over the scalar $f$ in the first term with a $\max$ over a function $f\rbr{\cdot, \cdot}:S\times A\rightarrow\RR$ \begin{multline} \label{eq:saddle_est} \min_{\tau\ge0}\max_{f:S\times A\rightarrow \RR, u\in\RR}\,\,J\rbr{\tau, u, f} = \rbr{1 - \gamma}\EE_{\mu_0\pi}\sbr{f\rbr{s, a}} + \gamma\EE_{\Tcal_p}\sbr{\tau\rbr{s, a}f\rbr{s', a'}} \\[-0.5ex] - \EE_{p}\sbr{\tau\rbr{s, a}\phi^*\rbr{f\rbr{s, a}}} + \lambda\rbr{\EE_p\sbr{u\tau\rbr{s, a}-u}-\smallfrac{u^2}{2}}.
\end{multline} This yields the main optimization formulation, which avoids the aforementioned difficulties and is well-suited for practical optimization as discussed in~\secref{sec:prac_alg}. \vspace{-3mm} \paragraph{Remark (Other divergences):} \newadd{In addition to $f$-divergence, the proposed estimator~\eqref{eq:regularized_est} is compatible with other divergences, such as the integral probability metrics~(IPM)~\citep{Mueller97,SriFukGreSchetal09}, while retaining consistency. Based on the definition of the IPM, these divergences directly lead to $\min$-$\max$ optimizations similar to~\eqref{eq:saddle_est} with the identity function as $\phi^*\rbr{\cdot}$ and different feasible sets for the dual functions. Specifically, maximum mean discrepancy~(MMD)~\citep{SmoGreBor06} requires $\nbr{f}_{\Hcal_k}\le 1$ where $\Hcal_k$ denotes the RKHS with kernel $k$; the Dudley metric~\citep{Dudley02} requires $\nbr{f}_{BL}\le 1$ where $\nbr{f}_{BL}\defeq \nbr{f}_\infty + \nbr{\nabla f}_2$; and Wasserstein distance~\citep{ArjChiBot17} requires $\nbr{\nabla f}_2\le 1$. These additional requirements on the dual function might incur some extra difficulty in practice. For example, with Wasserstein distance and the Dudley metric, we might need to include an extra gradient penalty~\citep{GulAhmArjDumetal17}, which requires additional computation to take the gradient through a gradient. Meanwhile, the consistency of the surrogate loss under regularization is not clear. For MMD, we can obtain a closed-form solution for the dual function, which saves the cost of the inner optimization~\cite{GreBorRasSchetal12}, but with the tradeoff of requiring \emph{two independent} samples in each outer optimization update. Moreover, MMD relies on the condition that the dual function lies in some RKHS, which introduces additional kernel parameters to be tuned and in practice may not be sufficiently flexible compared to neural networks. 
} \vspace{-4mm} \subsection{A Practical Algorithm}\label{sec:prac_alg} \vspace{-1mm} We have derived a consistent stationary distribution correction estimator in the form of a $\min$-$\max$ saddle point optimization~\eqref{eq:saddle_est}. Here, we present a practical instantiation of~\estabb with a concrete objective and parametrization. We choose the $\chi^2$-divergence, which is an $f$-divergence with $\phi\rbr{x} = \rbr{x - 1}^2$ and $\phi^*\rbr{y} = y + \frac{y^2}{4}$. The objective becomes \begin{multline}\label{eq:chi_saddle_est} \textstyle J_{\chi^2}\rbr{\tau, u, f} = \rbr{1 - \gamma}\EE_{\mu_0\pi}\sbr{f\rbr{s, a}} + \gamma\EE_{\Tcal_p}\sbr{\tau\rbr{s, a}f\rbr{s', a'}} \\ - \EE_{p}\sbr{\tau\rbr{s, a}\rbr{f\rbr{s, a} + \smallfrac{1}{4}f^2\rbr{s, a}}} + \lambda\rbr{\EE_p\sbr{u\tau\rbr{s, a}-u}-\smallfrac{u^2}{2}}. \end{multline} There are two major reasons for adopting $\chi^2$-divergence: \begin{itemize}[leftmargin=*, nosep, topsep=0pt] \item[{\bf i)}] In the behavior-agnostic OPE problem, we mainly use the ratio correction function for estimating $\widehat\EE_{p}\sbr{\hat\tau\rbr{s, a}R\rbr{s, a}}$, which is an expectation. Recall that the error between the estimate and ground-truth can then be bounded by total variation, which is a lower bound of $\chi^2$-divergence. \item[{\bf ii)}] For the alternative divergences, the conjugate of the $KL$-divergence involves $\exp\rbr{\cdot}$, which may lead to instability in optimization; while the IPM variants introduce extra constraints on the dual function, which may be difficult to optimize. The conjugate function of $\chi^2$-divergence enjoys suitable numerical properties and provides squared regularization.
\newadd{We have provided an empirical ablation study that investigates the alternative divergences in~Section \ref{subsec:ablation}.} \end{itemize} To parameterize the correction ratio $\tau$ and dual function $f$ we use neural networks, $\tau\rbr{s, a} = \mathtt{nn}_{w_\tau}\rbr{s, a}$ and $f\rbr{s, a} = \mathtt{nn}_{w_f}\rbr{s, a}$, where $w_\tau$ and $w_f$ denote the parameters of $\tau$ and $f$ respectively. Since the optimization requires $\tau$ to be non-negative, we add an extra positive neuron, such as $\exp\rbr{\cdot}$, $\log\rbr{1 + \exp\rbr{\cdot}}$ or $\rbr{\cdot}^2$, at the final layer of $\mathtt{nn}_{w_\tau}\rbr{s, a}$. We empirically compare the different positive neurons in~\secref{subsec:ablation}. For these representations, an unbiased gradient estimator $\nabla_{\rbr{w_\tau, u, w_f}} J\rbr{\tau, u, f}$ can be obtained straightforwardly, as shown in~\appref{appendix:alg_details}. This allows us to apply stochastic gradient descent to solve the saddle-point problem~\eqref{eq:chi_saddle_est} in a scalable manner, as illustrated in~\Algref{alg:gendice}. \bibliographystyle{iclr2020_conference} \section{Experiments}\label{sec:experiments} \vspace{-3mm} In this section, we evaluate \estname on OPE and OPR problems. For OPE, we use one or multiple behavior policies to collect a fixed number of trajectories at some fixed trajectory length. This data is used to recover a correction ratio function for a target policy $\pi$ that is then used to estimate the reward in two different settings: \RN{1}) average reward; and \RN{2}) discounted reward. In both settings, we compare with a model-based approach and step-wise weighted IS~\citep{PreSutSin00}.
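As a concrete reference for the Monte Carlo form of the $J_{\chi^2}$ objective in~\eqref{eq:chi_saddle_est}, the numpy sketch below estimates it from a batch; the input arrays stand in for the outputs of $\mathtt{nn}_{w_\tau}$ and $\mathtt{nn}_{w_f}$ on sampled pairs (all names and shapes are illustrative assumptions, not the released implementation):

```python
import numpy as np

def J_chi2(tau_sa, f_sa, f_s1a1, f_0, u, gamma, lam):
    """Batch Monte Carlo estimate of the chi^2 saddle-point objective.

    tau_sa: tau(s, a) on transition samples; f_sa: f(s, a) on the same
    samples; f_s1a1: f(s', a') with a' ~ pi(.|s'); f_0: f(s_0, a_0) on
    initial-state samples; u: scalar dual variable for the penalty.
    """
    return ((1.0 - gamma) * f_0.mean()
            + gamma * (tau_sa * f_s1a1).mean()                 # gamma E[tau(s,a) f(s',a')]
            - (tau_sa * (f_sa + 0.25 * f_sa ** 2)).mean()      # - E_p[tau (f + f^2/4)]
            + lam * ((u * tau_sa - u).mean() - 0.5 * u ** 2))  # penalty term

# Sanity check: with tau = 1 and f = 0, only the penalty survives,
# and the objective reduces to -lam * u^2 / 2.
B = 64
ones, zeros = np.ones(B), np.zeros(B)
u, lam = 0.3, 1.0
assert abs(J_chi2(ones, zeros, zeros, zeros, u, 0.99, lam) + lam * 0.5 * u ** 2) < 1e-12
```

In the actual algorithm this quantity is minimized over $w_\tau$ and maximized over $(u, w_f)$ by alternating stochastic gradient steps.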
\newadd{We also compare to \citet{liu2018breaking} (referred to as ``IPS'' here) in the Taxi domain with a learned behavior policy\footnote{We used the released implementation of IPS~\citep{liu2018breaking} from {\url{https://github.com/zt95/infinite-horizon-off-policy-estimation}}.}.} We specifically compare to DualDICE~\citep{nachum2019dualdice} in the discounted reward setting, which is a direct and current state-of-the-art baseline. For OPR, the main comparison is with the model-based method, where the transition operator is empirically estimated and the stationary distribution recovered via an exact solver. We validate \estname in both tabular and continuous cases, and perform an ablation study to further demonstrate its effectiveness. All results are based on 20 random seeds, with mean and standard deviation plotted. Our code is publicly available at \url{https://github.com/zhangry868/GenDICE}. \vspace{-2mm} \subsection{Tabular Case}\label{exp:tabular} \vspace{-2mm} \begin{wrapfigure}{R}{0.39\linewidth} \vspace{-5mm} \centering \includegraphics[width=\linewidth]{figures/Graph.pdf}\\ \vspace{-4mm} \caption{Stationary Distribution Estimation on BA and real-world graphs. Each plot shows the $\log$ $KL$-divergence of \estname and the model-based method versus the number of samples.} \label{fig:offpolicy-graph} \vspace{-3mm} \end{wrapfigure} \paragraph{Offline PageRank on Graphs} One direct application of \estname is off-line PageRank (OPR). We test \estname on a Barabasi-Albert (BA) graph (synthetic), and two real-world graphs, Cora and Citeseer. Details of the graphs are given in Appendix \ref{appendix:exp_settings}. We use the $\log$ $KL$-divergence between the estimated stationary distribution and the ground truth as the evaluation metric, with the ground truth computed by an exact solver based on the exact transition operator of the graphs. We compare \estname with the model-based method in terms of sample efficiency.
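The model-based OPR baseline referred to above can be sketched as follows. This is a hedged toy version, with an assumed random 5-node chain in place of a real web graph and power iteration standing in for the exact solver:

```python
import numpy as np

rng = np.random.default_rng(2)

# Ground-truth chain on a tiny random "graph" (assumed for illustration).
n = 5
P = rng.dirichlet(np.ones(n), size=n)          # row-stochastic transition matrix

# Off-line data: (node, next_node) transition samples, as in the OPR setting.
num = 20_000
s = rng.integers(0, n, size=num)
u = rng.random(num)
s_next = (u[:, None] > P.cumsum(axis=1)[s]).sum(axis=1)   # inverse-CDF sampling

# Model-based baseline: estimate the transition matrix from counts,
# then recover its stationary distribution (power iteration here).
counts = np.zeros((n, n))
np.add.at(counts, (s, s_next), 1.0)
P_hat = counts / counts.sum(axis=1, keepdims=True)

d = np.full(n, 1.0 / n)
for _ in range(500):
    d = d @ P_hat
d /= d.sum()

# The estimate should be close to the true stationary distribution.
w, v = np.linalg.eig(P.T)
mu = np.real(v[:, np.argmax(np.real(w))])
mu /= mu.sum()
assert np.abs(d - mu).max() < 0.05
```

Note that this baseline must estimate all $n^2$ entries of the transition matrix, which is the source of its sample inefficiency on larger graphs.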
From the results in \figref{fig:offpolicy-graph}, \estname outperforms the model-based method when limited data is given. Even with $20k$ samples for a BA graph with $100$ nodes, where a transition matrix has $10k$ entries, \estname still shows better performance in the offline setting. This is reasonable since \estname directly estimates the stationary distribution vector or ratio, while the model-based method needs to learn an entire transition matrix that has many more parameters. \vspace{-3mm} \paragraph{Off-Policy Evaluation with Taxi} We use a similar taxi domain as in \citet{liu2018breaking}, where a grid size of $5 \times 5$ yields $2000$ states in total ($25\times16\times5$, corresponding to $25$ taxi locations, $16$ passenger appearance statuses, and $5$ taxi statuses). We set the target policy to the final policy $\pi$ obtained after running tabular Q-learning for $1000$ iterations, and use the policy $\pi_+$ obtained after $950$ iterations as the base policy. The behavior policy is a mixture controlled by $\alpha$ as $\pi_b = (1-\alpha)\pi + \alpha \pi_+$. For the model-based method, we use a tabular representation for the reward and transition functions, whose entries are estimated from behavior data. For IS and IPS, we fit a policy via behavior cloning to estimate the policy ratio. In this specific setting, our method achieves better results than IS, IPS, and the model-based method. Interestingly, with longer horizons, IS cannot improve as much as other methods even with more data, while \estname consistently improves and achieves much better results than the baselines. DualDICE only works with $\gamma<1$. \estname is more stable than DualDICE when $\gamma$ becomes larger (close to $1$), while still showing competitive performance for smaller discount factors $\gamma$. \begin{figure}[ht] \centering \vspace{-3mm} \includegraphics[width=\linewidth]{figures/Taxi.pdf} \vspace{-6mm} \caption{Results on Taxi Domain.
The plots show log MSE of the tabular estimator across different trajectory lengths, different discount factors and different behavior policies ($x$-axis).} \label{fig:offpolicy-taxi} \end{figure} \subsection{Continuous Case} \vspace{-1mm} We further test our method for OPE on three control tasks: a discrete-control task, Cartpole, and two continuous-control tasks, Reacher and HalfCheetah. In these tasks, observations (or states) are continuous, thus we use neural network function approximators and stochastic optimization. Since DualDICE~\citep{nachum2019dualdice} has shown the state-of-the-art performance on discounted OPE, we mainly compare with it in the discounted reward case. We also compare to IS with a learned policy via behavior cloning and a neural model-based method, similar to the tabular case, but with neural networks as the function approximators. All neural networks are feed-forward with two hidden layers of dimension $64$ and $\tanh$ activations. More details can be found in Appendix \ref{appendix:exp_settings}. Due to limited space, we put the discrete control results in Appendix \ref{appendix:exp} and focus on the more challenging continuous control tasks. Here, the good performance of IS and model-based methods in Section \ref{exp:tabular} quickly deteriorates as the environment becomes complex, \ie, with a continuous action space. Note that \estname is able to maintain good performance in this scenario, even when using function approximation and stochastic optimization. This is reasonable because of the difficulty of fitting to the coupled policy-environment dynamics with a continuous action space. Here we also \textit{empirically} validate \estname with off-policy data collected by multiple policies. As illustrated in Figure \ref{fig:offpolicy-cartpole}, all methods perform better with longer trajectory length or more trajectories.
When $\alpha$ becomes larger, \ie, the behavior policies are closer to the target policy, all methods perform better, as expected. Here, \estname demonstrates good performance in both the average-reward and discounted-reward cases across different settings. The right two figures in each row show the $\log$ MSE curve versus optimization steps, where \estname achieves the smallest loss. In the discounted reward case, \estname shows significantly better and more stable performance than the strong baseline, DualDICE. Figure \ref{fig:offpolicy-half} also shows better performance of \estname than all baselines in the more challenging HalfCheetah domain. \begin{figure}[ht] \centering \vspace{-3mm} \includegraphics[width=1\linewidth]{figures/Reacher} \vspace{-6mm} \caption{Results on Reacher. The left three plots in the first row show the $\log$ MSE of estimated average per-step reward over different numbers of trajectories, truncated lengths, and behavior policies (M1 and M2 denote off-policy data collected by multiple behavior policies with $\alpha=[0.0, 0.33]$ and $\alpha=[0.0, 0.33, 0.66]$). The right two figures show the loss curves versus the optimization steps. Each plot in the second row shows the average reward case.} \label{fig:offpolicy-cartpole} \end{figure} \vspace{-6mm} \begin{figure}[ht] \centering \includegraphics[width=1\linewidth]{figures/Half} \vspace{-7mm} \caption{Results on HalfCheetah. Plots from left to the right show the $\log$ MSE of estimated average per-step reward over different truncated lengths, numbers of trajectories, and behavior policies in discounted and average reward cases. } \label{fig:offpolicy-half} \vspace{-3mm} \end{figure} \vspace{-3mm} \subsection{Ablation Study}\label{subsec:ablation} \vspace{-2mm} Finally, we conduct an ablation study on \estname to study its robustness and implementation sensitivities. We investigate the effects of learning rate, activation function, discount factor, and the specifically designed ratio constraint.
We further demonstrate the effect of the choice of divergences and the penalty weight. \vspace{-3mm} \paragraph{Effects of the Learning Rate} Since we are using neural networks as function approximators together with stochastic optimization, it is necessary to study sensitivity to the learning rate, which we vary over $\{0.0001, 0.0003, 0.001, 0.003\}$, with results in Figure \ref{fig:ablationlr}. When $\alpha=0.33$, \ie, when the OPE task is relatively easy, \estname obtains good results under all learning rate settings. However, when $\alpha=0.0$, the estimation becomes more difficult, and \estname only obtains reasonable results with the larger learning rates. Generally, this ablation study shows that the proposed method is not sensitive to the learning rate, and is easy to train. \vspace{-3mm} \paragraph{Activation Function of Ratio Estimator} We further investigate the effects of the activation function on the last layer, which ensures the non-negative output required for the ratio. To better understand which activation function will lead to stable training for the neural correction estimator, we empirically compare using \RN{1}) $(\cdot)^2$; \RN{2}) $\log(1+\exp(\cdot))$; and \RN{3}) $\exp(\cdot)$. In practice, we use $(\cdot)^2$ since it achieves low variance and better performance in most cases, as shown in Figure \ref{fig:ablationlr}. \vspace{-3mm} \paragraph{Effects of Discount Factors} We vary $\gamma\in\{0.95, 0.99, 0.995, 0.999, 1.0\}$ to probe the sensitivity of \estname. Specifically, we compare to DualDICE, and find that \estname is stable, while DualDICE becomes unstable when $\gamma$ becomes large, as shown in Figure \ref{fig:ablationdis}. \estname is also more general than DualDICE, as it can be applied to both the average and discounted reward cases. \vspace{-3mm} \paragraph{Effects of Ratio Constraint} In Section \ref{sec:dual_est}, we highlighted the importance of the ratio constraint.
Here we investigate the trivial solution issue without the constraint. The results in Figure~\ref{fig:ablationdis} demonstrate the necessity of adding the constraint penalty, since a trivial solution prevents an accurate corrector from being recovered (green line in left two figures). \begin{figure}[ht] \centering \vspace{-1mm} \begin{tabular}{cc} \hspace{-3mm} \includegraphics[width=0.41\linewidth]{figures/Reacher_lr_l.pdf} & \hspace{-6mm} \includegraphics[width=0.6\linewidth]{figures/Reacher_par_l.pdf} \\ \end{tabular} \vspace{-4mm} \caption{{Results of ablation study with different learning rates and activation functions. The plots show the log MSE of estimated average per-step reward over training and different behavior policies.}} \label{fig:ablationlr} \end{figure} \begin{figure}[ht] \centering \begin{tabular}{cc} \hspace{-3mm} \includegraphics[width=0.4\linewidth]{figures/Reacher_penalty.pdf} & \hspace{-5mm} \includegraphics[width=0.6\linewidth]{figures/Reacher_discount.pdf} \\ \end{tabular} \vspace{-4mm} \caption{{Results of ablation study with constraint penalty and discount factors. The left two figures show the effect of ratio constraint on estimating average per-step reward. The right three figures show the $\log$ MSE for average per-step reward over training and different discount factor $\gamma$.}} \label{fig:ablationdis} \end{figure} \begin{wrapfigure}{R}{0.39\linewidth} \centering \begin{tabular}{cc} \hspace{-3mm} \includegraphics[width=0.48\linewidth]{figures/BA_div.pdf} & \hspace{-4mm} \includegraphics[width=0.5\linewidth]{figures/BA_penalty.pdf}\\ \vspace{-4mm} \\ (a) & (b) \\ \end{tabular} \vspace{-3mm} \caption{{Results of ablation study with (a) different divergence and (b) weight of penalty $\lambda$. 
The plots show the log $KL$-Divergence of OPR on the Barabasi-Albert graph.}} \label{fig:opr_ablation_main} \vspace{-3mm} \end{wrapfigure} \vspace{-3mm} \paragraph{Effects of the Choice of Divergences} We empirically test GenDICE with several alternative divergences, \eg, Wasserstein-$1$ distance, Jensen-Shannon divergence, $KL$-divergence, Hellinger divergence, and MMD. To avoid the effects of other factors in the estimator, \eg, function parametrization, we focus on the offline PageRank task on the BA graph with $100$ nodes and $10k$ offline samples. All the experiments are evaluated with $20$ random trials. To ensure the dual function is $1$-Lipschitz, we add a gradient penalty. In addition, we use a learned Gaussian kernel in MMD, similar to~\citet{LiChaYuYanetal17}. As we can see in~\figref{fig:opr_ablation_main}(a), the GenDICE estimator is compatible with many different divergences. Most of the divergences, with appropriate extra techniques to handle the optimization difficulties and careful tuning of the extra parameters, can achieve similar performance, consistent with the phenomena observed in variants of GANs~\citep{LucKurMicGeletal18}. However, $KL$-divergence is an outlier, performing noticeably worse, which might be caused by the ill-behaved $\exp\rbr{\cdot}$ in its conjugate function. The $\chi^2$-divergence and JS-divergence perform better, achieving good results with fewer parameters to tune. \vspace{-3mm} \paragraph{Effects of the Penalty Weight} The results of different penalty weights $\lambda$ are illustrated in~\figref{fig:opr_ablation_main}(b). We vary $\lambda \in [0.1, 5]$ with $\chi^2$-divergence. Within a large range of $\lambda$, the performance of the proposed GenDICE is quite consistent, which justifies~\thmref{thm:soundness}. The penalty term is multiplied by $\lambda$.
Therefore, as $\lambda$ increases, the variance of the stochastic gradient estimator also increases, which explains the larger variance for large $\lambda$ in~\figref{fig:opr_ablation_main}(b). In practice, $\lambda = 1$ is a reasonable choice for general cases. \vspace{-3mm} \section{Introduction}\label{sec:intro} Estimation of quantities defined by the stationary distribution of a Markov chain lies at the heart of many scientific and engineering problems. Famously, the steady-state distribution of a random walk on the World Wide Web provides the foundation of the PageRank algorithm~\citep{langville04deeper}. In many areas of machine learning, Markov chain Monte Carlo (MCMC) methods are used to conduct approximate Bayesian inference by considering Markov chains whose equilibrium distribution is a desired posterior~\citep{andrieu02introduction}. An example from engineering is queueing theory, where the queue lengths and waiting time under the limiting distribution have been extensively studied~\citep{gross18fundamentals}. As we will also see below, stationary distribution quantities are of fundamental importance in reinforcement learning (RL)~\citep[e.g.,][]{tsitsiklis97analysis}. Classical algorithms for estimating stationary distribution quantities rely on the ability to sample next states from the current state \emph{by directly interacting with the environment} (as in on-line RL or MCMC), or even require the transition probability distribution to be given explicitly (as in PageRank). Unfortunately, these classical approaches are inapplicable when direct access to the environment is not available, which is often the case in practice. There are many practical scenarios where a collection of sampled trajectories is available, having been collected off-line by an external mechanism that chose states and recorded the subsequent next states. Given such data, we still wish to estimate a stationary quantity.
One important example is off-policy policy evaluation in RL, where we wish to estimate the value of a policy different from that used to collect experience. Another example is off-line PageRank (OPR), where we seek to estimate the relative importance of webpages given a sample of the web graph. Motivated by the importance of these off-line scenarios, and by the inapplicability of classical methods, we study the problem of \emph{off-line estimation of stationary values} via a \emph{stationary distribution corrector}. Instead of having access to the transition probabilities or a next-state sampler, we assume only access to a \emph{fixed} sample of state transitions, where states have been sampled from an unknown distribution and next-states are sampled according to the Markov chain's transition operator. The off-line setting is indeed more challenging than its more traditional on-line counterpart, given that one must infer an asymptotic quantity from finite data. Nevertheless, we develop techniques that still allow consistent estimation under general conditions, and provide effective estimates in practice. The main contributions of this work are: \vspace{-2mm} \begin{itemize}[leftmargin=*] \item We formalize the problem of off-line estimation of stationary quantities, which captures a wide range of practical applications. \item We propose a novel stationary distribution estimator, \estname, for this task. The resulting algorithm is based on a new dual embedding formulation for divergence minimization, with a carefully designed mechanism that explicitly eliminates degenerate solutions. \item We theoretically establish consistency and other statistical properties of~\estname, and empirically demonstrate that it achieves significant improvements on several behavior-agnostic off-policy evaluation benchmarks and an off-line version of PageRank. 
\end{itemize} \vspace{-2mm} The methods we develop in this paper fundamentally extend recent work in off-policy policy evaluation \citep{liu2018breaking,nachum2019dualdice} by introducing a new formulation that leads to a more general, and as we will show, more effective estimation method. \section{Offline PageRank as Stationary Distribution Correction Estimation}\label{appendix:OPR} \section{Properties of~\estabb}\label{appendix:properties} For notational simplicity, we denote $x = \rbr{s, a} \in \Omega \defeq S\times A$ and $\Pb^\pi\rbr{x'|x} \defeq \pi\rbr{a'|s'}\Pb\rbr{s'|s, a}$. Also define $\nbr{f}_{p, 2}^2 \defeq \inner{f}{f}_p = \int f\rbr{x}^2p\rbr{x}dx$. We make the following assumption to ensure the existence of the stationary distribution. All of our discussion is based on this assumption. \paragraph{\asmpref{asmp:stat_exist}} \textit{Under the target policy, the resulting state-action transition operator $\Tcal$ has a unique stationary distribution in terms of the divergence $D\rbr{\cdot||\cdot}$.} If the total variation divergence is selected, \asmpref{asmp:stat_exist} requires that the transition operator be ergodic, as discussed in~\citet{MeyTwe12}. \subsection{Consistency of the Estimator}\label{appendix:soundness} \paragraph{\thmref{thm:soundness}} \textit{For arbitrary $\lambda >0$, the solution to the optimization~\eqnref{eq:regularized_est} is $\frac{\mu\rbr{s, a}}{p\rbr{s, a}}$ for $\gamma\in (0, 1]$. } \begin{proof} For $\gamma \in(0, 1)$, there are no degenerate solutions to $D\rbr{\Ttau||\ptau}$. The optimal solution is a density ratio. Therefore, the extra penalty $\rbr{\EE_{p\rbr{x}}\sbr{\tau\rbr{x}} - 1}^2$ does not affect the optimality for $\forall\lambda>0$. When $\gamma =1$, for $\forall \lambda >0$, recall both $D\rbr{\Ttau||\ptau}$ and $\rbr{\EE_{p\rbr{x}}\sbr{\tau\rbr{x}} - 1}^2$ are non-negative, and the density ratio $\frac{\mu\rbr{x}}{p\rbr{x}}$ leads to zero for both terms.
Hence, the density ratio is a solution to $J\rbr{\tau}$. For any other non-negative function $\tau\rbr{x}\ge 0$ that is optimal for $J\rbr{\tau}$, we must have \begin{eqnarray}\label{eq:opt_condition} D\rbr{\Ttau||\ptau}=0 &\Rightarrow& p\rbr{x'}\tau\rbr{x'} = \Ttau\rbr{x'} = \int\Pb^\pi\rbr{x'|x}\tau\rbr{x}dx,\\ \rbr{\EE_{p\rbr{x}}\sbr{\tau\rbr{x}} - 1}^2 = 0 &\Rightarrow& \EE_{p\rbr{x}}\sbr{\tau\rbr{x}} = 1. \end{eqnarray} We denote $\mu\rbr{x} = p\rbr{x}\tau\rbr{x}$, which is clearly a density function. Then, the optimality conditions in \eqref{eq:opt_condition} imply $$ \mu\rbr{x'} = {\int\Pb^\pi\rbr{x'|x}\mu\rbr{x}dx}, $$ or equivalently, $\mu$ is the stationary distribution of $\Tcal$. We have thus shown that the optimal $\tau\rbr{x} = \frac{\mu\rbr{x}}{p\rbr{x}}$ is the target density ratio. \end{proof} \subsection{Convexity of the Objective}\label{appendix:convexity} \begin{proof} Since $\phi$ is convex, we consider the Fenchel dual representation of the $f$-divergence $D_\phi\rbr{\Ttau||\ptau}$, \ie, \begin{multline} D_\phi\rbr{\Ttau||\ptau} = \max_{f\in \Omega\rightarrow \RR} \ell\rbr{\tau, f}\\ \defeq \rbr{1- \gamma}\EE_{\mu_0\pi}\sbr{f\rbr{x}} + \gamma\EE_{\Tcal_p\rbr{x, x'}}\sbr{\tau\rbr{x} f\rbr{x'}}- \EE_{p\rbr{x}}\sbr{\tau\rbr{x}\phi^*\rbr{f\rbr{x}}}. \end{multline} Clearly, $\ell\rbr{\tau, f}$ is linear, and hence convex, in $\tau$ for each $f$; since the pointwise maximum of convex functions is convex, $D_\phi\rbr{\Ttau||\ptau}$ is convex in $\tau$. The term $\lambda \rbr{\EE_p\rbr{\tau} - 1}^2$ is also convex, which concludes the proof.
\end{proof} \section{Algorithm Details}\label{appendix:alg_details} We provide the unbiased gradient estimators for $\nabla_{w_\tau, u, w_f} J\rbr{\tau, u, f}$ in~\eqnref{eq:chi_saddle_est} below: \begin{eqnarray}\label{eq:grad_estimator} \nabla_{w_\tau}J_{\chi^2}\rbr{\tau, u, f}\hspace{-2mm} &=&\hspace{-2mm} \gamma\EE_{\Tcal_p}\sbr{\nabla_{w_\tau}\tau\rbr{s, a}f\rbr{s', a'}} - \EE_{p}\sbr{\nabla_{w_\tau}\tau\rbr{s, a}\rbr{f\rbr{s, a} + \frac{1}{4}f^2\rbr{s, a}}} \nonumber\\ &&+\lambda u{\EE_p\sbr{\nabla_{w_\tau}\tau\rbr{s, a}}}, \\ \nabla_{u}J_{\chi^2}\rbr{\tau, u, f} \hspace{-2mm}&=&\hspace{-2mm} \lambda\rbr{\EE_p\sbr{\tau\rbr{s, a}-1}-u},\\ \nabla_{w_f}J_{\chi^2}\rbr{\tau, u, f} \hspace{-2mm}&=&\hspace{-2mm} \rbr{1 - \gamma}\EE_{\mu_0\pi}\sbr{\nabla_{w_f}f\rbr{s, a}} + \gamma\EE_{\Tcal_p}\sbr{\tau\rbr{s, a}\nabla_{w_f}f\rbr{s', a'}} \\ &&- \EE_{p}\sbr{\tau\rbr{s, a}\rbr{1 + \frac{1}{2}f\rbr{s, a}}\nabla_{w_f}f\rbr{s, a}}. \nonumber \end{eqnarray} The pseudocode below applies SGD to solve~\eqnref{eq:chi_saddle_est}. \begin{algorithm}[tb] \caption{GenDICE (with function approximators)}\label{alg:gendice} \begin{algorithmic} \STATE {\bf Inputs}: Convex function $\phi$ and its Fenchel conjugate $\phi^*$, off-policy data $\Dcal = \{(s^{(i)}, a^{(i)}, r^{(i)}, s^{\prime(i)})\}_{i=1}^N$, initial state distribution $\mu_0$, target policy $\pi$, {distribution corrector}~$\texttt{nn}_{w_\tau}(\cdot,\cdot), \texttt{nn}_{w_f}(\cdot,\cdot)$, constraint scalar $u$, learning rates $\eta_\tau, \eta_f, \eta_u$, number of iterations $K$, batch size $B$. \FOR{$t = 1,\dots,K$} \STATE Sample batch $\{(s^{(i)}, a^{(i)}, r^{(i)}, s^{\prime(i)})\}_{i=1}^B$ from $\Dcal$. \STATE Sample batch $\{s_0^{(i)}\}_{i=1}^B$ from $\mu_0$. \STATE Sample actions $a^{\prime(i)}\sim\pi(s^{\prime(i)})$, for $i=1,\dots,B$. \STATE Sample actions $a_0^{(i)}\sim\pi(s_0^{(i)})$, for $i=1,\dots,B$.
\STATE Compute empirical loss $\hat J_{\chi^2}\rbr{\tau, u, f} = \rbr{1 - \gamma}\EE_{\mu_0\pi}\sbr{f\rbr{s, a}} + \gamma\EE_{\Tcal_p}\sbr{\tau\rbr{s, a}f\rbr{s', a'}}$ \\ $- \EE_{p}\sbr{\tau\rbr{s, a}\rbr{f\rbr{s, a} + \frac{1}{4}f^2\rbr{s, a}}} + \lambda\rbr{\EE_p\sbr{u\tau\rbr{s, a}-u}-\frac{u^2}{2}}$. \STATE Update $w_\tau\leftarrow w_\tau - \eta_\tau \nabla_{w_\tau}\hat J_{\chi^2}$. \STATE Update $w_f\leftarrow w_f + \eta_f \nabla_{w_f}\hat J_{\chi^2}$. \STATE Update $u\leftarrow u + \eta_u \nabla_{u} \hat J_{\chi^2}$. \ENDFOR \STATE {\bf Return} $\texttt{nn}_{w_\tau}$. \end{algorithmic} \end{algorithm} \section{Proof of~\thmref{thm:total_error}}\label{appendix:proofs} For convenience, we repeat here the notation defined in the main text. The saddle-point reformulation of the objective function of~\estabb is: \begin{multline*} J\rbr{\tau, u, f} \defeq \rbr{1 - \gamma}\EE_{\mu_0\pi}\sbr{f\rbr{x'}} + \gamma\EE_{\Tcal_p\rbr{x, x'}}\sbr{\tau\rbr{x} f\rbr{x'}}\\ - \EE_{p\rbr{x}}\sbr{\tau\rbr{x}\phi^*\rbr{f\rbr{x}}} + \lambda\rbr{\EE_{p\rbr{x}}\sbr{u\tau\rbr{x} - u} - \frac{1}{2}u^2}. \end{multline*} To avoid numerical infinity in $D_\phi\rbr{\cdot||\cdot}$, we introduce the bounded version \begin{equation*} J\rbr{\tau} \defeq \max_{\nbr{f}_\infty\le C, u} \,\, J\rbr{\tau, u, f} = D^C_\phi\rbr{\Ttau || \ptau} + \frac{\lambda}{2} \rbr{\EE_{p\rbr{x}}\sbr{\tau\rbr{x}} - 1}^2, \end{equation*} in which $D_\phi^C\rbr{\cdot||\cdot}$ is still a valid divergence, and therefore the optimal solution $\tau^*$ is still the stationary density ratio $\frac{\mu\rbr{x}}{p\rbr{x}}$. We denote by $\Jhat\rbr{\tau, u, f}$ the empirical surrogate of $J\rbr{\tau, u, f}$ on samples $\Dcal = \cbr{\rbr{x_i, x_i'}}_{i=1}^N\sim \Tpgam\rbr{x, x'}$, with the optimal solution in $\rbr{\Hcal, \Fcal, \RR}$ denoted $\rbr{\tauhs, \uhs, \fhs}$.
Furthermore, denote \begin{align*} \tau^*_\Hcal &= \argmin_{\tau\in\Hcal} J\rbr{\tau}\,, \\ \tau^* &= \argmin_{\tau\in S\times A\rightarrow \RR} J\rbr{\tau} \end{align*} with optimal $\rbr{f^*, u^*}$, and \begin{align*} L\rbr{\tau} &= \max_{f\in \Fcal, u\in \RR}J\rbr{\tau, u, f}\,, \\ \Lhat\rbr{\tau} &= \max_{f\in\Fcal, u\in\RR} \Jhat\rbr{\tau,u, f}\,. \end{align*} We apply an optimization algorithm to $\Jhat\rbr{\tau, u, f}$ over the space $\rbr{\Hcal, \Fcal, \RR}$, leading to the output $\rbr{\hat\tau, \uhat, \fhat}$. Under~\asmpref{asmp:ref_dist}, we need only consider $\nbr{\tau}_\infty\le C$; the corresponding dual variable then satisfies $u = \EE_p\sbr{\tau} - 1$, so $u\in U\defeq\cbr{u : \abr{u}\le C+1}$. We choose $\phi^*\rbr{\cdot}$ to be $\kappa$-Lipschitz continuous; then $J\rbr{\tau, u, f}$ is a $C_{\Pb^\pi, \kappa, \lambda} = \max\cbr{\rbr{{\gamma}\nbr{\Pb^\pi}_{p, \infty} +\rbr{1 - \gamma} \nbr{\frac{\mu_0\pi}{p}}_{p, \infty} + \kappa}C, \rbr{C+1}\rbr{\lambda + \frac{1}{2}}}$-Lipschitz continuous function w.r.t. $\rbr{f, u}$ with the norm $\nbr{\rbr{f, u}}_{p, 1}\defeq \int \abr{f\rbr{x}}p\rbr{x}dx + \abr{u}$, and a $C_{\phi, C, \lambda}\defeq \rbr{C + \lambda\rbr{C+1} + \max_{t\in \cbr{-C, C}}\rbr{-\phi\rbr{t}}}$-Lipschitz continuous function w.r.t. $\tau$ with the norm $\nbr{\tau}_{p, 1} \defeq \int \abr{\tau\rbr{x}}p\rbr{x}dx$. We consider the error between $\hat\tau$ and $\tau^*$ using standard arguments~\citep{ShaBen14,Bach14}, \ie, \begin{equation*} d\rbr{\hat\tau, \tau^*}\defeq J\rbr{\hat\tau} - J\rbr{\tau^*}. \end{equation*} The discrepancy satisfies $d\rbr{\tau, \tau^*}\ge 0$, with $d\rbr{\tau, \tau^*} = 0$ if and only if $\ptau$ is the stationary distribution of $\Tcal$ in the weak sense of $D_\phi\rbr{\cdot||\cdot}$. \paragraph{Remark:} In some special cases, the suboptimality also bounds the distance between $\hat\tau$ and $\tau^*$.
Specifically, for $\gamma=1$, suppose the transition operator $\Pb^\pi$ can be represented as $\Pb^\pi = \Qcal\Lambda \Qcal^{-1}$, where $\Qcal$ denotes the (countable) eigenfunctions and $\Lambda$ the diagonal matrix of eigenvalues, the largest of which is $1$. If we take $\phi\rbr{\cdot}$ to be the identity and $f\in \Fcal\defeq\cbr{\spn\rbr{\Qcal}, \nbr{f}_{p, 2}\le 1}$, then $d\rbr{\tau, \tau^*}$ is bounded from below by a metric between $\tau$ and $\tau^*$. In particular, we have \begin{eqnarray*} D_\phi\rbr{\Ttau||\ptau}= \max_{f\in \Fcal}\,\, \EE_{\Tcal_p\rbr{x, x'}}\sbr{\tau\rbr{x} f\rbr{x'}}- \EE_{p\rbr{x}}\sbr{\tau\rbr{x}{f\rbr{x}}} = \nbr{\tau - \Pb^\pi\circ\tau}_{p, 2}. \end{eqnarray*} Rewriting $\tau = \alpha\tau^* + \zeta$, where $\zeta\in \spn\rbr{\Qcal_{\setminus\tau^*}}$, we obtain \begin{eqnarray*} D_\phi\rbr{\Ttau||\ptau} = \nbr{\alpha\tau^* - \alpha\Pb^\pi\circ\tau^* + \zeta - \Pb^\pi\circ\zeta}_{p, 2} = \nbr{\zeta - \Pb^\pi\circ\zeta}_{p, 2}. \end{eqnarray*} Recalling the optimality of $\tau^*$, \ie, $D_\phi\rbr{\Ttau^*||\ptau^*} = 0$, we have \begin{equation*} d\rbr{\tau, \tau^*} =J\rbr{\tau} \ge \nbr{\zeta - \Pb^\pi\circ\zeta}_{p, 2} \defeq \nbr{\rbr{\tau-\tau^*}}_{p, 2, \rbr{\Pb^\pi - I}}. \end{equation*} \subsection{Error Decomposition} We start with the following error decomposition: \begin{equation*} d\rbr{\hat\tau, \tau^*}\defeq J\rbr{\hat\tau} - J\rbr{\tau^*} = \underbrace{J\rbr{\hat\tau} - J\rbr{\tauhs}}_{{\epsilon}_1} + \underbrace{J\rbr{\tauhs} - J\rbr{\tau^*}}_{{\epsilon}_2}. \end{equation*} \begin{itemize}[leftmargin=*] \item For ${\epsilon}_1$, we have \begin{eqnarray*} {\epsilon}_1 &=& J\rbr{\hat\tau} - L\rbr{\hat\tau} + L\rbr{\hat\tau} - L\rbr{\tauhs} + L\rbr{\tauhs} - J\rbr{\tauhs}. \end{eqnarray*} We consider the terms one by one.
By definition, we have \begin{eqnarray} J\rbr{\hat\tau} - L\rbr{\hat\tau} &=& \max_{f, u\in U} J\rbr{\hat\tau, u, f} - \max_{f\in \Fcal, u\in U}J\rbr{\hat\tau, u, f} \nonumber \\ &\le&C_{\Pb^\pi, \kappa, \lambda}\underbrace{\sup_{f_1, u_1\in U}\inf_{f_2\in\Fcal, u_2\in U}\nbr{\rbr{f_1, u_1} - \rbr{f_2, u_2}}_{p, 1}}_{{\epsilon}_{approx}\rbr{\Fcal}}, \label{eq:eps_fapprox} \end{eqnarray} which is induced by introducing $\Fcal$ for the dual approximation. For the third term $L\rbr{\tauhs} - J\rbr{\tauhs}$, we have \begin{eqnarray*} L\rbr{\tauhs} - J\rbr{\tauhs} = \max_{f\in \Fcal, u\in U} J\rbr{\tauhs, u, f} - \max_{f, u\in U} J\rbr{\tauhs, u, f} \le 0. \end{eqnarray*} For the term $L\rbr{\hat\tau} - L\rbr{\tauhs}$, \begin{eqnarray} L\rbr{\hat\tau} - L\rbr{\tauhs} &=& L\rbr{\hat\tau} - \Lhat\rbr{\hat\tau} + \underbrace{\Lhat\rbr{\hat\tau} - \Lhat\rbr{\tauhs}}_{\hat{\epsilon}_{opt}} + \Lhat\rbr{\tauhs} - L\rbr{\tauhs} \nonumber \\ &\le& 2\sup_{\tau\in \Hcal}\abr{L\rbr{\tau} - \Lhat\rbr{\tau}} + \hat{\epsilon}_{opt} \nonumber \\ &\le& 2\sup_{\tau\in \Hcal}\abr{\max_{f\in\Fcal, u\in U} J\rbr{\tau, u, f} - \max_{f\in \Fcal, u\in U}\Jhat\rbr{\tau, u, f}} + \hat{\epsilon}_{opt} \nonumber \\ &\le& 2\sup_{\tau\in\Hcal}\sup_{f\in \Fcal, u\in U}\abr{J\rbr{\tau, u, f} - \Jhat\rbr{\tau, u, f}} + \hat{\epsilon}_{opt} \nonumber \\ &=& 2\cdot {\epsilon}_{est} + \hat{\epsilon}_{opt}, \label{eq:eps_est} \end{eqnarray} where we define ${\epsilon}_{est}\defeq \sup_{\tau\in\Hcal, f\in \Fcal, u\in U}\abr{J\rbr{\tau, u, f} - \Jhat\rbr{\tau, u, f}}$. Therefore, we can now bound ${\epsilon}_1$ as \begin{equation*} {\epsilon}_1\le C_{\Pb^\pi, \kappa, \lambda}{\epsilon}_{approx}\rbr{\Fcal} + 2{\epsilon}_{est} + \hat{\epsilon}_{opt}.
\end{equation*} \item For ${\epsilon}_2$, we have \begin{eqnarray*} {\epsilon}_2 &=& J\rbr{\tauhs} - J\rbr{\tau_\Hcal^*} + J\rbr{\tau_\Hcal^*} - J\rbr{\tau^*}\\ &=& J\rbr{\tauhs} - L\rbr{\tauhs} + L\rbr{\tauhs} - L\rbr{\tau_\Hcal^*} + L\rbr{\tau_\Hcal^*} - J\rbr{\tau_\Hcal^*} + J\rbr{\tau_\Hcal^*} - J\rbr{\tau^*}. \end{eqnarray*} We consider the terms from right to left. For the term $J\rbr{\tau_\Hcal^*} - J\rbr{\tau^*}$, letting $\rbr{u_\Hcal^*, f_\Hcal^*}$ denote the maximizers of $J\rbr{\tau_\Hcal^*, \cdot, \cdot}$, we have \begin{eqnarray*} J\rbr{\tau_\Hcal^*} - J\rbr{\tau^*} =& J\rbr{\tau_\Hcal^*, u_\Hcal^*, f_\Hcal^*} - J\rbr{\tau^*, u_\Hcal^*, f_\Hcal^*} + \underbrace{J\rbr{\tau^*, u_\Hcal^*, f_\Hcal^*} - J\rbr{\tau^*, u^*, f^*}}_{\le 0}\\ \le& J\rbr{\tau_\Hcal^*, u_\Hcal^*, f_\Hcal^*} - J\rbr{\tau^*, u_\Hcal^*, f_\Hcal^*} \le C_{\phi, C, \lambda}\underbrace{\sup_{\tau_1}\inf_{\tau_2\in \Hcal}\nbr{\tau_1 - \tau_2}_{p, 1}}_{{\epsilon}_{approx}\rbr{\Hcal}}, \end{eqnarray*} which is induced by restricting the function space to $\Hcal$. The second term is nonpositive due to the optimality of $\rbr{u^*, f^*}$, and the final inequality follows since $J\rbr{\tau, u, f}$ is $C_{\phi, C, \lambda}$-Lipschitz w.r.t. $\tau$. For the term $L\rbr{\tau_\Hcal^*} - J\rbr{\tau_\Hcal^*}$, by definition \begin{equation*} L\rbr{\tau_\Hcal^*} - J\rbr{\tau_\Hcal^*} = \max_{f\in \Fcal, u\in U} J\rbr{\tau_{\Hcal}^*, u, f} - \max_{f, u\in U} J\rbr{\tau_{\Hcal}^*, u, f}\le 0. \end{equation*} For the term $ L\rbr{\tauhs} - L\rbr{\tau_\Hcal^*}$, we have \begin{eqnarray*} L\rbr{\tauhs} - L\rbr{\tau_\Hcal^*} &=& L\rbr{\tauhs} - \Lhat\rbr{\tauhs} + \underbrace{\Lhat\rbr{\tauhs} - \Lhat\rbr{\tau_\Hcal^*}}_{\le 0} + \Lhat\rbr{\tau_\Hcal^*} - L\rbr{\tau_\Hcal^*}\\ &\le& 2\sup_{\tau\in\Hcal}\abr{L\rbr{\tau} - \Lhat\rbr{\tau}}\\ &\le& 2\sup_{\tau\in\Hcal, f\in \Fcal, u\in U}\abr{J\rbr{\tau, u, f} - \Jhat\rbr{\tau, u, f}}\\ &=& 2\cdot{\epsilon}_{est}, \end{eqnarray*} where the underbraced term is nonpositive thanks to the optimality of $\tauhs$ for $\Lhat$.
Finally, for the term $J\rbr{\tauhs} - L\rbr{\tauhs}$, using the same argument as in~\eqref{eq:eps_fapprox}, we have \begin{equation*} J\rbr{\tauhs} - L\rbr{\tauhs}\le C_{\Pb^\pi,\kappa, \lambda}{\epsilon}_{approx}\rbr{\Fcal}. \end{equation*} Therefore, we can bound ${\epsilon}_2$ by \begin{equation*} {\epsilon}_2 \le C_{\phi, C, \lambda}{\epsilon}_{approx}\rbr{\Hcal} + C_{\Pb^\pi, \kappa, \lambda}{\epsilon}_{approx}\rbr{\Fcal} + 2{\epsilon}_{est}. \end{equation*} \end{itemize} In sum, we have \begin{equation*} d\rbr{\hat\tau, \tau^*} \le 4{\epsilon}_{est} + \hat{\epsilon}_{opt} + 2C_{\Pb^\pi, \kappa,\lambda}{\epsilon}_{approx}\rbr{\Fcal} + C_{\phi, C, \lambda}{\epsilon}_{approx}\rbr{\Hcal}. \end{equation*} In the following sections, we bound ${\epsilon}_{est}$ and $\hat{\epsilon}_{opt}$. \subsection{Statistical Error} In this section, we analyze the statistical error $$ {\epsilon}_{est}\defeq \sup_{\tau\in\Hcal, f\in \Fcal, u\in U}\abr{J\rbr{\tau, u, f} - \Jhat\rbr{\tau, u, f}}. $$ We mainly focus on the batch RL setting with \iid~samples $\Dcal = \cbr{\rbr{x_i, x_i'}}_{i=1}^N\sim\Tcal_p\rbr{x, x'}$, which has been studied by previous authors~\citep[e.g.,][]{SutSzeGerBow12,nachum2019dualdice}. However, as discussed in the literature~\citep{AntSzeMun08b,LazGhaMun12,DaiShaLiXiaHeetal17,nachum2019dualdice}, using the blocking technique of~\citet{Yu94}, the statistical error provided here can be generalized to $\beta$-mixing samples in a single sample path. We omit this generalization for the sake of expositional simplicity. To bound ${\epsilon}_{est}$, we follow arguments similar to those of~\citet{DaiShaLiXiaHeetal17,nachum2019dualdice} via covering numbers. For completeness, the required lemmas are given below.
Pollard's tail inequality bounds the maximum deviation via the covering number of a function class: \begin{lemma}[\citet{Pollard12}]\label{lemma:beta_tail} Let ${\mathcal{G}}$ be a permissible class of ${\mathcal{Z}}\rightarrow[-M, M]$ functions and let $\cbr{Z_i}_{i=1}^N$ be \iid~samples from some distribution. Then, for any given $\epsilon>0$, \begin{eqnarray*} {\mathbb{P}}\rbr{\sup_{g\in{\mathcal{G}}}\abr{\frac{1}{N}\sum_{i=1}^N g(Z_i) - \mathbb{E}\sbr{g(Z)}} > \epsilon}\le 8\mathbb{E}\sbr{{\mathcal{N}}_1\rbr{\frac{\epsilon}{8}, {\mathcal{G}}, \cbr{Z_i}_{i=1}^N}}\exp\rbr{\frac{-N\epsilon^2}{512M^2}}. \end{eqnarray*} \end{lemma} The covering number can then be bounded in terms of the function class's pseudo-dimension: \begin{lemma}[\citet{Haussler95}, Corollary~3]\label{lemma:cover_pseudo} For any set $\Xcal$, any points $x^{1:N}\in\Xcal^N$, any class $\Fcal$ of functions on $\Xcal$ taking values in $[0, M]$ with pseudo-dimension $D_{\Fcal}<\infty$, and any $\epsilon>0$, $$ \Ncal_1\rbr{\epsilon, \Fcal, x^{1:N}}\le e\rbr{D_\Fcal + 1}\rbr{\frac{2eM}{\epsilon}}^{D_\Fcal}. $$ \end{lemma} The statistical error ${\epsilon}_{est}$ can be bounded using these lemmas. \begin{lemma}[Statistical error]\label{lemma:stat_error} Under~\asmpref{asmp:ref_dist}, if $\phi^*$ is $\kappa$-Lipschitz continuous and the pseudo-dimensions of $\Hcal$ and $\Fcal$ are finite, then with probability at least $1-\delta$, we have $$ {\epsilon}_{est} = \Ocal\rbr{\sqrt{\frac{\log N + \log\frac{1}{\delta}}{N}}}. $$ \end{lemma} \begin{proof} The proof works by verifying the conditions in~\lemref{lemma:beta_tail} and computing the covering number. Denote $h_{\tau, u, f}\rbr{x,x'} =\rbr{1 - \gamma} f\rbr{x'} + \gamma \tau\rbr{x}f\rbr{x'} - \tau\rbr{x}\phi^*\rbr{f\rbr{x}} + \lambda u\tau\rbr{x} - \lambda u - \frac{\lambda}{2}u^2$; we will apply~\lemref{lemma:beta_tail} with $\Zcal = \Omega\times\Omega$, $Z_i = \rbr{x_i, x_i'}$, and $\Gcal = h_{\Hcal\times\Fcal\times U}$.
We check the boundedness of $h_{\tau, u, f}\rbr{x, x'}$. Based on~\asmpref{asmp:ref_dist}, we only consider $\tau\in \Hcal$ bounded by $C$ and $u\in U$ bounded by $C+1$. We also restrict $\nbr{f}_\infty \le C$. Then, we can bound $\nbr{h}_\infty$: \begin{eqnarray*} \nbr{h_{\tau, u, f}}_\infty &\le& \rbr{1 + \nbr{\tau}_\infty}\nbr{f}_\infty + \nbr{\tau}_\infty\rbr{\max_{t\in [-C, C]}\,\, -\phi^*\rbr{t}} + \lambda C\rbr{\nbr{\tau}_\infty + 1} + \lambda C^2\\ &\le& \rbr{C+1}^2 + C\cdot C_\phi + \lambda C\rbr{2C+1} =: M, \end{eqnarray*} where $C_\phi = \max_{t\in [-C, C]}\,\, -\phi^*\rbr{t}$. Thus, by~\lemref{lemma:beta_tail}, we have \begin{eqnarray} \lefteqn{{\mathbb{P}}\rbr{\sup_{\tau\in\Hcal, f\in \Fcal, u\in U}\abr{\Jhat\rbr{\tau, u, f} - J\rbr{\tau, u, f}} > \epsilon}} \nonumber \\ &=& {\mathbb{P}}\rbr{\sup_{\tau\in\Hcal, f\in \Fcal, u\in U}\abr{\frac{1}{N}\sum_{i=1}^N h_{\tau, u, f}\rbr{Z_i} - \EE\sbr{h_{\tau, u, f}}} > \epsilon} \nonumber \\ &\le& 8\mathbb{E}\sbr{{\mathcal{N}}_1\rbr{\frac{\epsilon}{8}, {\mathcal{G}}, \cbr{Z_i}_{i=1}^N}}\exp\rbr{\frac{-N\epsilon^2}{512M^2}}. \label{eq:intermediate} \end{eqnarray} Next, we check the covering number of $\Gcal$. First, we bound the distance in $\Gcal$: \begin{eqnarray*} &&\frac{1}{N}\sum_{i=1}^N\abr{h_{\tau_1, u_1, f_1}\rbr{Z_i} - h_{\tau_2, u_2, f_2}\rbr{Z_i}}\\ &\le&\frac{C + C_\phi + \lambda(C+1)}{N}\sum_{i=1}^N \abr{\tau_1\rbr{x_i} -\tau_2\rbr{x_i}} + \frac{1 + \gamma C}{N}\sum_{i=1}^N \abr{f_1\rbr{x'_i} - f_2\rbr{x'_i}} \\ &&+ \frac{\kappa C}{N}\sum_{i=1}^N \abr{f_1\rbr{x_i} - f_2\rbr{x_i}} + \lambda\rbr{2C +1}\abr{u_1 - u_2}, \end{eqnarray*} which leads to \begin{eqnarray*} &&\Ncal_1\rbr{\rbr{C_\phi + \rbr{3\lambda + 2 + \gamma + \kappa}\rbr{C+1}}{\epsilon}', \Gcal, \cbr{Z_i}_{i=1}^N}\\ &\le& \Ncal_1\rbr{{\epsilon}', \Hcal, \rbr{x_i}_{i=1}^N}\Ncal_1\rbr{{\epsilon}', \Fcal, \rbr{x'_i}_{i=1}^N}\Ncal_1\rbr{{\epsilon}', \Fcal, \rbr{x_i}_{i=1}^N}\Ncal_1\rbr{{\epsilon}', U}.
\end{eqnarray*} For the set $U = [-C-1, C+1]$, we have \begin{equation*} \Ncal_1\rbr{{\epsilon}', U}\le \frac{2C+2}{{\epsilon}'}. \end{equation*} Denoting the pseudo-dimensions of $\Hcal$ and $\Fcal$ by $D_\Hcal$ and $D_\Fcal$, respectively, we have \begin{align*} & \Ncal_1\rbr{\rbr{C_\phi + \rbr{3\lambda + 2 + \gamma + \kappa}\rbr{C+1}}{\epsilon}', \Gcal, \cbr{Z_i}_{i=1}^N} \\ \le& e^3\rbr{D_\Hcal+1}\rbr{D_\Fcal+1}^2\rbr{\frac{2C+2}{{\epsilon}'}}\rbr{\frac{4eC}{{\epsilon}'}}^{D_\Hcal + 2D_\Fcal}, \end{align*} which implies \begin{eqnarray*} &&\Ncal_1\rbr{\frac{{\epsilon}}{8}, \Gcal, \cbr{Z_i}_{i=1}^N}\\ &\le& \frac{C+1}{2C}e^2\rbr{D_\Hcal+1}\rbr{D_\Fcal+1}^2\rbr{\frac{32\rbr{C_\phi + \rbr{3\lambda + 2 + \gamma + \kappa}\rbr{C+1}}eC}{{\epsilon}}}^{D_\Hcal+2D_\Fcal + 1}\\ &=& C_1\rbr{\frac{1}{{\epsilon}}}^{D_1}, \end{eqnarray*} where $D_1 = D_\Hcal + 2D_\Fcal + 1$ and $$ C_1 = \frac{C+1}{2C}e^2\rbr{D_\Hcal+1}\rbr{D_\Fcal+1}^2\rbr{32\rbr{C_\phi + \rbr{3\lambda + 2 + \gamma + \kappa}\rbr{C+1}}eC}\,. $$ Combining this result with~\eqref{eq:intermediate}, we obtain the bound for the statistical error: \begin{equation} {\mathbb{P}}\rbr{\sup_{\tau\in\Hcal, f\in \Fcal, u\in U}\abr{\Jhat\rbr{\tau, u, f} - J\rbr{\tau, u, f}} > \epsilon} \le 8C_1\rbr{\frac{1}{{\epsilon}}}^{D_1}\exp\rbr{\frac{-N\epsilon^2}{512M^2}}. \end{equation} Setting ${\epsilon} = \sqrt{\frac{C_2\rbr{\log N + \log \frac{1}{\delta}}}{N}}$ with $C_2 = \max\cbr{\rbr{8C_1}^{\frac{2}{D_1}}, 512M^2D_1, 512M^2, 1}$, we have \begin{equation*} 8C_1\rbr{\frac{1}{{\epsilon}}}^{D_1}\exp\rbr{\frac{-N\epsilon^2}{512M^2}}\le \delta. \end{equation*} \end{proof} \subsection{Optimization Error}\label{appendix:opt_error} In this section, we investigate the optimization error $$ \hat{\epsilon}_{opt}\defeq\Lhat\rbr{\hat\tau} - \Lhat\rbr{\tauhs}.
$$ Since our estimator $\min_{\tau\in\Hcal}\max_{f\in \Fcal, u\in U}\Jhat\rbr{\tau, u, f}$ is compatible with different parametrizations of $\rbr{\Hcal, \Fcal}$ and different optimization algorithms, the optimization error depends on these choices. For general neural network parametrizations of $\rbr{\tau, f}$, despite recent progress~\citep{LinLiuRafYan18,JinNetJor19,LinJinJor19} on convergence to stationary points or local minima, quantifying the optimization error remains a largely open problem, which is beyond the scope of this paper. Here, we mainly discuss the convergence rate for tabular, linear, and kernel parametrizations of $\rbr{\tau, f}$. In particular, we consider the linear parametrization, \ie, $\tau\rbr{x} = {w_\tau^\top \psi_\tau\rbr{x}}$ with $\cbr{w_\tau, \psi_\tau\rbr{x}\ge 0} $ and $f\rbr{x} = w_f^\top \psi_f\rbr{x}$. With such a parametrization, $\Jhat\rbr{\tau, u, f}$ is still convex-concave w.r.t.\ $\rbr{w_\tau, w_f, u}$. We can bound $\hat{\epsilon}_{opt}$ by the primal-dual gap ${\epsilon}_{gap}$: \begin{eqnarray*} \hat{\epsilon}_{opt} &=& \Lhat\rbr{\hat\tau} - \Lhat\rbr{\tauhs}\\ &\le& \max_{f\in\Fcal, u\in U}\Jhat\rbr{\hat\tau, u, f} - \Jhat\rbr{\tauhs, \uhs, \fhs} + \Jhat\rbr{\tauhs, \uhs, \fhs} - \min_{\tau\in \Hcal}\Jhat\rbr{\tau, \uhat, \fhat}\\ &=& \underbrace{\max_{f\in\Fcal, u\in U}\Jhat\rbr{\hat\tau, u, f} - \min_{\tau\in \Hcal}\Jhat\rbr{\tau, \uhat, \fhat}}_{{\epsilon}_{gap}}. \end{eqnarray*} With vanilla SGD, we have ${\epsilon}_{gap} = \Ocal\rbr{\frac{1}{\sqrt{T}}}$, where $T$ is the number of optimization steps~\citep{NemJudLanSha09}. Therefore, ${\epsilon}_{opt} = \EE\sbr{\hat{\epsilon}_{opt}} = \Ocal\rbr{\frac{1}{\sqrt{T}}}$, where the expectation is taken w.r.t.\ the randomness in SGD.
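To make the linear-parametrization updates concrete, the following NumPy sketch evaluates the empirical $\chi^2$ loss $\hat J_{\chi^2}$ from Algorithm~\ref{alg:gendice} and checks the gradient estimators of~\eqnref{eq:grad_estimator} against central finite differences. The synthetic batch, feature dimension, and hyperparameter values below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative batch of features for x ~ p, x' ~ P^pi(.|x), x0 ~ mu_0 pi,
# under the linear parametrization tau(x) = w_tau . psi(x), f(x) = w_f . psi(x).
rng = np.random.default_rng(0)
B, d, gamma, lam = 64, 4, 0.9, 1.0
psi_x = rng.normal(size=(B, d))    # psi(x)
psi_xp = rng.normal(size=(B, d))   # psi(x')
psi_x0 = rng.normal(size=(B, d))   # psi(x0)

def J_hat(w_tau, u, w_f):
    """Empirical chi^2 loss from Algorithm 1, where phi*(f) = f + f^2/4."""
    tau = psi_x @ w_tau
    f_x, f_xp, f_x0 = psi_x @ w_f, psi_xp @ w_f, psi_x0 @ w_f
    return ((1 - gamma) * f_x0.mean()
            + gamma * (tau * f_xp).mean()
            - (tau * (f_x + 0.25 * f_x ** 2)).mean()
            + lam * ((u * tau - u).mean() - 0.5 * u ** 2))

w_tau, u, w_f = rng.normal(size=d), 0.3, rng.normal(size=d)
tau, f_x, f_xp = psi_x @ w_tau, psi_x @ w_f, psi_xp @ w_f

# Gradient estimators of Eq. (grad_estimator); here nabla_{w_tau} tau(x) = psi(x)
# and nabla_{w_f} f(x) = psi(x), so the expectations become batch means.
g_tau = (gamma * psi_x * f_xp[:, None]
         - psi_x * (f_x + 0.25 * f_x ** 2)[:, None]
         + lam * u * psi_x).mean(axis=0)
g_u = lam * ((tau - 1).mean() - u)
g_f = ((1 - gamma) * psi_x0
       + gamma * tau[:, None] * psi_xp
       - (tau * (1 + 0.5 * f_x))[:, None] * psi_x).mean(axis=0)

# Sanity check: analytic gradients match central finite differences of J_hat.
eps = 1e-6
assert abs((J_hat(w_tau, u + eps, w_f) - J_hat(w_tau, u - eps, w_f)) / (2 * eps) - g_u) < 1e-5
for i in range(d):
    e = np.zeros(d); e[i] = eps
    assert abs((J_hat(w_tau + e, u, w_f) - J_hat(w_tau - e, u, w_f)) / (2 * eps) - g_tau[i]) < 1e-5
    assert abs((J_hat(w_tau, u, w_f + e) - J_hat(w_tau, u, w_f - e)) / (2 * eps) - g_f[i]) < 1e-5
print("gradient check passed")
```

In Algorithm~\ref{alg:gendice}, these gradients drive descent on $w_\tau$ and ascent on $\rbr{w_f, u}$, matching the convex-concave structure of $\Jhat$ under the linear parametrization.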
\subsection{Complete Error Analysis}\label{appendix:full_error} We are now ready to state the main theorem precisely: \paragraph{\thmref{thm:total_error}} \textit{ Under Assumptions~\ref{asmp:ref_dist} and~\ref{asmp:stat_exist}, the stationary distribution $\mu$ exists, \ie, $\max_{f\in\Fcal^*}\EE_{\Tcal\circ\mu}\sbr{f} - \EE_{\mu}\sbr{\phi^*\rbr{f}} = 0$. If $\phi^*\rbr{\cdot}$ is $\kappa$-Lipschitz continuous, $\nbr{f}_\infty\le C<\infty,\,\,\forall f\in \Fcal^*$, and the pseudo-dimensions of $\Hcal$ and $\Fcal$ are finite, the error of the~\estname estimate relative to $\tau^*\rbr{x} = \frac{\mu\rbr{x}}{p\rbr{x}}$ is bounded by $$ \EE\sbr{J\rbr{\hat\tau} - J\rbr{\tau^*}} =\widetilde\Ocal\rbr{{\epsilon}_{approx}\rbr{\Fcal, \Hcal} + \sqrt{\frac{1}{N}} + {\epsilon}_{opt}}, $$ where $\EE\sbr{\cdot}$ is w.r.t. the randomness in the sample $\Dcal$ and in the optimization algorithms, ${\epsilon}_{opt}$ is the optimization error, and ${\epsilon}_{approx}\rbr{\Fcal, \Hcal}$ is the approximation error induced by $\rbr{\Fcal, \Hcal}$ for the parametrization of $\rbr{\tau, f}$. } \begin{proof} We have the total error \begin{equation}\label{eq:bound} \EE\sbr{J\rbr{\hat\tau} - J\rbr{\tau^*}} \le 4\EE\sbr{{\epsilon}_{est}} + \EE\sbr{{\epsilon}_{opt}} + {\epsilon}_{approx}\rbr{\Fcal, \Hcal}, \end{equation} where ${\epsilon}_{approx}\rbr{\Fcal, \Hcal}\defeq 2C_{\Pb^\pi, \kappa,\lambda}{\epsilon}_{approx}\rbr{\Fcal} + C_{\phi, C, \lambda}{\epsilon}_{approx}\rbr{\Hcal}$. For ${{\epsilon}_{opt}}$, we can apply the results for SGD in~\appref{appendix:opt_error}. We can bound $\EE\sbr{{\epsilon}_{est}}$ by \lemref{lemma:stat_error}. Specifically, we have \begin{equation*} \EE\sbr{{\epsilon}_{est}} \le \rbr{1 - \delta}\sqrt{\frac{C_2\rbr{\log N + \log \frac{1}{\delta}}}{N}} + \delta M =\Ocal\rbr{\sqrt{\frac{\log N}{N}}}, \end{equation*} by setting $\delta = \frac{1}{\sqrt{N}}$. Plugging these bounds into~\eqref{eq:bound} yields the conclusion.
\end{proof} \section{More Related Work} \paragraph{Density Ratio Estimation} Density ratio estimation is a fundamental tool in machine learning and much related work exists. Recent progress shows the power of adversarial training in realistic image generation~\citep{goodfellow2014generative}, text generation~\citep{yu2017seqgan} and even imitation learning~\citep{ho2016generative}. These methods focus on estimating the ratio between the data distribution and the model distribution, and can be generalized as minimizing an $f$-divergence~\citep{nowozin2016f}. Furthermore, classical MCMC~\citep{brooks2011handbook, gelman2013bayesian} aims at sampling from $\mu^\pi$ given its unnormalized form, with applications in physics, statistics, and machine learning. Similar methods include variational Bayes~\citep{hoffman2013stochastic, kingma2013auto} and expectation propagation~\citep{minka2001expectation}. Particle-based methods, such as SVGD, avoid parametric assumptions on $\mu^\pi$, but usually incur heavy computational cost to update many particles. Amortized SVGD~\citep{wang2016learning} and Adversarial MCMC~\citep{song2017nice} alleviate this issue by combining with neural networks, but they still aim at approximating a distribution (learning a sampler) via density ratio estimation. Our proposed \estname is derived by exploiting the Fenchel duality of $f$-divergences, where the $f$-divergence is built upon the steady-state property of a Markov chain. Furthermore, we only require the data distribution, \ie, an off-policy dataset, and do not need to sample from our model. Thus, \estname uses a general variational minimization technique to estimate the stationary density ratio of a Markov chain, instead of approximating the stationary distribution itself, which distinguishes it from classical methods.
\vspace{-2mm} \paragraph{Off-policy Learning} Off-policy learning~\citep{precup01off,munos2016safe,gelada2019off,liu2019off} aims at value or policy learning from off-policy data, \ie, policy improvement given data collected by a different policy, which is different from off-policy evaluation~\citep{thomas2016data,irpan2019off}. Furthermore, off-policy learning mainly focuses on how to train a policy more stably and with better convergence. Off-policy evaluation can be integrated into off-policy learning to estimate the reward of the target policy for its optimization, but off-policy evaluation is not restricted to this setting~\citep{swaminathan2017off}. Thus, applying \estname to enhance off-policy learning is an interesting future direction. \section{Related Work}\label{sec:related_work} \vspace{-2mm} \paragraph{Off-policy Policy Evaluation} Off-policy policy evaluation with importance sampling (IS) has been explored in the contextual bandits~\citep{strehl2010learning, dudik2011doubly,wang2017optimal} and episodic RL settings~\citep{murphy2001marginal,precup01off}, achieving many empirical successes~\citep[e.g.,][]{strehl2010learning,dudik2011doubly,bottou13counterfactual}. Unfortunately, IS-based methods suffer from exponential variance in long-horizon problems, known as the ``curse of horizon''~\citep{liu2018breaking}. A few variance-reduction techniques have been introduced, but they still cannot eliminate this fundamental issue~\citep{jiang2015doubly,thomas2016data,guo2017using}. By rewriting the accumulated reward as an expectation w.r.t.\ a stationary distribution,~\citet{liu2018breaking,gelada2019off} recast OPE as estimating a correction ratio function, which significantly alleviates variance.
However, these methods still require the off-policy data to be collected by a \emph{single and known} behavior policy, which restricts their practical applicability. The only published algorithm in the literature, to the best of our knowledge, that solves behavior-agnostic off-policy evaluation is DualDICE~\citep{nachum2019dualdice}. However, DualDICE was developed for discounted problems and its results become unstable when the discount factor approaches $1$ (see below). By contrast, \estname can cope with the more challenging problem of undiscounted reward estimation in the general behavior-agnostic setting. Note that standard model-based methods~\citep{SutBar98}, which estimate the transition and reward models directly and then calculate the expected reward based on the learned model, are also applicable to the behavior-agnostic setting considered here. Unfortunately, model-based methods typically rely heavily on modeling assumptions about rewards and transition dynamics. In practice, these assumptions do not always hold, and the evaluation results can become unreliable. \vspace{-3mm} \paragraph{Markov Chain Monte Carlo} Classical MCMC~\citep{brooks2011handbook, gelman2013bayesian} aims at sampling from $\mu^\pi$ by iteratively simulating from the transition operator. It requires continuous interaction with the transition operator and heavy computational cost to update many particles. Amortized SVGD~\citep{wang2016learning} and Adversarial MCMC~\citep{song2017nice, li2019adversarial} alleviate this issue by combining with neural networks, but they still interact with the transition operator directly, \ie, in an on-policy setting. The major difference of our~\estname lies in the learning setting: we only have access to an off-policy dataset, and cannot sample from the transition operator. The proposed \estname leverages stationary density ratio estimation to approximate the stationary quantities, which distinguishes it from classical methods.
\vspace{-3mm} \paragraph{Density Ratio Estimation} Density ratio estimation is a fundamental tool in machine learning and much related work exists. Classical density ratio estimation includes moment matching \citep{gretton2009covariate}, probabilistic classification~\citep{bickel2007discriminative}, and ratio matching~\citep{NguWaiJor08,SugNakKasBueetal08,kanamori2009least}. These classical methods focus on estimating the ratio between two distributions given samples from both of them, while \estname estimates the density ratio w.r.t.\ the stationary distribution of a transition operator, from which even a single sample is difficult to obtain. \vspace{-3mm} \paragraph{PageRank} \citet{yao13reinforcement} developed a reverse-time RL framework for PageRank via solving a reverse Bellman equation, which is less sensitive to graph topology and adapts faster to graph changes. However, \citet{yao13reinforcement} still operates in an online manner, which differs from our OPR setting. \section{Theoretical Analysis}\label{sec:theoretical_analysis} \vspace{-2mm} We provide a theoretical analysis for the proposed~\estabb algorithm, following a learning setting and assumptions similar to those of~\citet{nachum2019dualdice}. \vspace{-2mm} \begin{assumption} \label{asmp:ref_dist} The target stationary correction is bounded, $\nbr{\tau^*}_\infty\le C<\infty$. \end{assumption} \vspace{-2mm} The main result is summarized in the following theorem. A formal statement, together with the proof, is given in~\appref{appendix:proofs}.
\vspace{-3mm} \begin{theorem}[Informal] \label{thm:total_error} Under mild conditions, with learnable $\Fcal$ and $\Hcal$, the error in the objective between the~\estname estimate, ${\hat\tau}$, and the solution $\tau^*\rbr{s, a} = \frac{u\rbr{s, a}}{p\rbr{s, a}}$ is bounded by \vspace{-1mm} $$ \textstyle \EE\sbr{J\rbr{\hat\tau} - J\rbr{\tau^*}} =\widetilde\Ocal\rbr{{\epsilon}_{approx}\rbr{\Fcal, \Hcal} + {\frac{1}{\sqrt{N}}} + {\epsilon}_{opt}}, $$ where $\EE\sbr{\cdot}$ is w.r.t. the randomness in $\Dcal$ and in the optimization algorithms, ${\epsilon}_{opt}$ is the optimization error, and ${\epsilon}_{approx}\rbr{\Fcal, \Hcal}$ is the approximation error induced by using $\rbr{\Fcal, \Hcal}$ to parametrize $\rbr{\tau, f}$. \end{theorem} \vspace{-2mm} The theorem shows that the suboptimality of \estname's solution, measured in terms of the objective function value, can be decomposed into three terms: (1) the approximation error ${\epsilon}_{approx}$, which is controlled by the representation flexibility of the function classes; (2) the estimation error due to sample randomness, which decays at the rate $1/\sqrt{N}$; and (3) the optimization error, which arises from the suboptimality of the solution found by the optimization algorithm. As discussed in~\appref{appendix:proofs}, in special cases this suboptimality can be bounded from below by a divergence between $\hat\tau$ and $\tau^*$, and therefore directly bounds the error in the estimated policy value. There is also a tradeoff between these three error terms. With more flexible function classes (\eg, neural networks) for $\Fcal$ and $\Hcal$, the approximation error ${\epsilon}_{approx}$ becomes smaller. However, this may increase the estimation error (through the constant in front of $1/\sqrt{N}$) and the optimization error (by making the optimization problem harder).
On the other hand, if $\Fcal$ and $\Hcal$ are linearly parametrized, the estimation and optimization errors tend to be smaller and can often be upper-bounded explicitly, as shown in~\appref{appendix:opt_error}. However, the corresponding approximation error will be larger.
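For intuition, the classical probabilistic-classification route to density ratios contrasted with \estname in the related work can be sketched in a few lines: a classifier trained to separate samples of $p$ from samples of $q$ recovers $p(x)/q(x)$ through its log-odds. The sketch below is illustrative only (known 1-d Gaussians, plain gradient ascent), not the \estname estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Samples from p = N(0, 1) (label 1) and q = N(1, 1) (label 0).
xp = rng.normal(0.0, 1.0, 20000)
xq = rng.normal(1.0, 1.0, 20000)

X = np.concatenate([xp, xq])
y = np.concatenate([np.ones_like(xp), np.zeros_like(xq)])
Phi = np.stack([np.ones_like(X), X], axis=1)  # bias + linear feature

# Logistic regression by full-batch gradient ascent; for equal sample
# sizes the Bayes-optimal log-odds equal log p(x) - log q(x).
w = np.zeros(2)
for _ in range(2000):
    pr = 1.0 / (1.0 + np.exp(-Phi @ w))
    w += 0.5 * Phi.T @ (y - pr) / len(y)

def ratio(x):
    return np.exp(w[0] + w[1] * x)  # estimated p(x)/q(x)

print(ratio(0.0))  # true value is exp(0.5), roughly 1.65
```

For these two Gaussians the exact log-ratio is $(1-2x)/2$, so the linear feature map suffices; \estname instead must recover such a ratio with respect to a stationary distribution it can never sample from directly.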
\section{Introduction} \textit{Handwritten Text Recognition} (HTR) has become a valuable tool to extract text from scanned documents \citep{terras2021inviting}. The current digitisation wave in libraries and archives does not stop at historical manuscripts. As such, HTR plays an essential role in making the contents of manuscripts available to researchers and the public. HTR has undergone significant improvements in recent years, thanks in large part to the introduction of neural network-based techniques \citep{graves2008offline,graves2009novel}. Platforms like \textit{Transkribus}\footnote{\url{https://readcoop.eu/de/transkribus/}} successfully integrated these approaches in a way that its HTR+ model \citep{michael2018htr} can achieve character error rates (CERs) of below 5\% with little annotated ground truth material \citep{Mueh19}. However, a look at the digital platform for manuscript material for Swiss libraries and archives, \textit{e-manuscripta},\footnote{\url{https://www.e-manuscripta.ch/}} shows that in the category ``correspondence'' containing 45k titles, only 313, or roughly $0.7\%$, contain transcriptions. Such large manuscript collections pose significant challenges to libraries and archives, especially because of the variety of handwriting styles. That an author's handwriting changes according to what they were writing only adds complexity. Fig.~\ref{fig:hwstyles} exemplifies this by showing Rudolf Gwalther's handwriting in (a) a 16\textsuperscript{th} century poetry volume and (b) a letter, among other handwritings from different authors (c and d). \begin{figure} \centering \includegraphics[width=\columnwidth]{handwritingstyles.png} \caption{Different handwriting styles. Poetry by Rudolf Gwalther (a), letters by Rudolf Gwalther (b), Matthieu Coignet (c), and Kaspar Wolf (d).} \label{fig:hwstyles} \end{figure} The variability of such collections calls for models that adapt well to different hands with little to no training data.
Transformer-based architectures \citep{vaswani2017attention} have proven suitable for building large language representation models such as BERT \citep{devlin2018bert}. BERT-style models are used to fine-tune specific models for natural language understanding and are known as strong transfer learners \citep{ruder2019transfer}. Most recently, transformers have found their way into image processing \citep{dosovitskiy2020image,touvron2021training}, which drove the development of image transformers \citep{bao2021beit}. \section{Approach} The basis for our research is TrOCR \cite{li2021trocr}, which combines the BERT-style vision transformer BEiT \citep{bao2021beit} with a RoBERTa \citep{liu2019roberta} language representation model. BEiT works as an encoder and is pre-trained on the ImageNet-1K \citep{russakovsky2015imagenet} dataset containing 1.2M images, while RoBERTa serves as a decoder producing the text. \citet{li2021trocr} used 687M printed and about 18M synthetically generated handwritten text lines in English to pre-train the TrOCR model. During this phase, the model learns to extract relevant features from the images and decode them into English text, thereby training the language model from scratch. The authors initialised the RoBERTa decoder with 6 and 12 layers, referring to the models as BASE when paired with the pre-trained 12-layer BEiT instance and LARGE when paired with the 24-layer BEiT model, respectively. Finally, \citet{li2021trocr} fine-tuned their pre-trained TrOCR instances on ``real-world'' data such as the IAM dataset \citep{marti2002iam}. The IAM dataset consists of handwritten English lines from different authors. TrOCR\textsubscript{BASE} reaches a CER of 3.42\% and TrOCR\textsubscript{LARGE} a CER of 2.89\% on this dataset. The score of TrOCR\textsubscript{LARGE} is only 0.14 percentage points behind the best score of \citet{diaz2021rethinking}, who used a different approach.
Our research aims to exploit the pre-trained vision and language transformers, hoping that a model fine-tuned on historical manuscripts generalises well enough to be applied to extensive and variable manuscript collections. We want to test whether we can transfer the ``knowledge'' about handwriting in the English language that TrOCR has acquired to early modern manuscripts. \section{Data} Our data stem from the 16\textsuperscript{th} century volume \textit{Lateinische Gedichte} by Rudolf Gwalther.\footnote{\url{https://doi.org/10.7891/e-manuscripta-26750}} \citet{stotz21gwalther} downloaded the available images and partial transcriptions from \textit{e-manuscripta} and loaded them into the Transkribus interface. They applied layout recognition to identify lines and baselines and aligned them with the transcriptions. The publicly available dataset has 4,037 line images and corresponding text lines in Latin, which we split into 3,603 lines for training and 433 lines for validation.\footnote{\url{https://doi.org/10.5281/zenodo.4780947}} A second dataset consists of 16,584 lines in Latin from Heinrich Bullinger's (1504--1575) correspondence. It contains hands from about 60 different authors with a heavily skewed author distribution. We split the data into 13,843 lines for training, 1,685 lines for validation, and 1,056 lines for testing. \section{Experiments and Discussion} We trained Transkribus HTR+ models on the Gwalther and Bullinger data for 50 epochs as reference models.\footnote{We used the \textit{Acta\_17 HTR+} as a base model.} Table \ref{tab:res} shows the results under ``HTR+''. For the TrOCR architecture, using the same data, we fine-tuned both TrOCR\textsubscript{BASE} and TrOCR\textsubscript{LARGE} for three to 20 epochs.\footnote{The untrained TrOCR\textsubscript{LARGE} model achieves a CER of 57.48\% on the validation data.} Table \ref{tab:res} presents the results of our initial experiments: the longer we fine-tune the models, the better their performance gets.
This effect is less pronounced for TrOCR\textsubscript{BASE}, however, where the performance even drops if we fine-tune for more than ten epochs. Moreover, we note a clear performance gap between TrOCR\textsubscript{BASE} and TrOCR\textsubscript{LARGE}, where TrOCR\textsubscript{LARGE} always performs better. Our results are surprising because the pre-trained TrOCR model never saw any Latin data prior to our experiments. For example, our model only sees 23k Latin words during fine-tuning on the Gwalther data. The vocabulary overlap of the training and validation set is 68.9\%. Moreover, TrOCR has never been confronted with early modern manuscripts. Nevertheless, we achieved a CER that beats our reference models by 0.19 percentage points on the Gwalther validation set and by 4.6 percentage points on the Bullinger test set. We therefore assume that TrOCR is a robust and highly transferable handwriting representation model that is suitable for being fine-tuned on hands of all styles and origins.
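The CERs compared throughout follow the usual definition, Levenshtein edit distance divided by reference length; a minimal sketch (our own illustration, not the Transkribus or TrOCR evaluation code):

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: Levenshtein distance / reference length."""
    m, n = len(reference), len(hypothesis)
    prev = list(range(n + 1))  # edit distances for the previous row
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n] / m if m else 0.0

print(cer("epistola", "epistula"))  # one substitution over 8 chars -> 0.125
```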
\begin{table}[] \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{@{}ll|rrrrr|r@{}} \toprule & & \multicolumn{5}{c|}{\textbf{fine-tuning epochs}} & \textbf{epochs} \\ \textbf{System} & \textbf{data} & \textbf{3} & \textbf{5} & \textbf{10} & \textbf{15} & \textbf{20} & \textbf{50} \\ \midrule HTR+ & \multirow{3}{*}{Gwalther} & - & - & - & - & - & 2.74 \\ TrOCR\textsubscript{BASE} & & 3.84 & 3.72 & \textbf{3.18} & 3.31 & 3.62 & - \\ TrOCR\textsubscript{LARGE} & & 2.94 & 2.72 & 2.58 & \textbf{2.55} & 2.62 & - \\ \midrule HTR+ & \multirow{2}{*}{Bullinger} & - & - & - & - & - & 21.13 \\ TrOCR\textsubscript{LARGE} & & - & - & - & 16.53 & - & - \\ \bottomrule \end{tabular} } \caption{CERs for different models and different (fine-tuning) epochs on the validation set for Gwalther data and the test set for Bullinger data.} \label{tab:res} \end{table} \section{Conclusion} Our initial experiments with TrOCR indicate that it outperforms state-of-the-art models for single-author and multi-author datasets. Its strong performance on a language and on handwriting styles it has never ``learnt to read'' is astonishing. Moreover, TrOCR does not require baseline information, in contrast to Transkribus models. In future experiments, we want to investigate whether plugging in a pre-trained Latin RoBERTa decoder and adapting the encoder to early modern handwriting can improve performance. We also want to further examine TrOCR on more variable datasets. For example, projects focusing on correspondences would benefit from HTR models that adapt to many different authors. Thus, we will investigate whether TrOCR generalises better to this data than conventional methods.
\section{Hardware and data flow} The hardware consists of two computing farms with around 200 and 250 servers. The FLP (First Level Processors) farm has FPGA-based readout cards called CRUs (Common Readout Units), which receive the data via optical links from the detectors. The FLP is the only place in the processing chain where all data of an individual link is accessible. Certain calibration operations, such as the integration of the digital currents of the Time Projection Chamber (TPC), need access to all this data and must thus run on the FLPs. Some detectors also run custom algorithms on the FPGA in the CRU user logic, e.\,g.~for zero-suppression of the raw data. The data is transferred from the FLPs to the EPNs (Event Processing Nodes) via the InfiniBand network. The transfer is arranged in such a way that an EPN server always receives the full data for individual collisions, while the different collisions are distributed over the EPNs. The majority of the data processing is done on the EPNs. Since TPC calibration and data compression rely on the information from TPC tracking, the EPNs perform full online track reconstruction of the TPC data. This is the most computing-intensive part of the synchronous processing. Each EPN server is equipped with eight GPUs, which provide the bulk of the processing power. The full TPC processing happens on the GPUs. \section{Software} The unified O$^2$ software framework is used for reconstruction and calibration both online and offline. There are no separate algorithms; the same framework and the same implementations are used for online and offline processing. The online processing employs stricter cuts and skips some slow processing steps not needed for the compression or the calibration. Since in Run 3 all data is processed online, not only collisions selected by a hardware trigger, the computing requirements grow tremendously, and ALICE uses GPUs on the EPNs to speed up the processing~\cite{bib:gpu}.
The GPU and the CPU implementations share a common source code. This significantly improves the maintainability, since only a single source code must be developed and maintained. On top of processing, calibration, and data compression, the quality control (QC) performs a real-time validation of the detector data on the EPNs, the FLPs, and dedicated QC servers. The main purpose of the EPN farm is the synchronous processing of the peak data rate during the 50 kHz Pb--Pb data taking. The TPC is the dominant contributor to this computing load, thus the EPNs are optimized for the fastest possible TPC processing on the GPUs. The capability of the EPNs to handle the expected load was verified in a full-system test, in which simulated events were injected at the level of the data distribution entering the EPN, and the EPN performed the full synchronous processing with its GPU and CPU parts. The LHC collides heavy ions only during a few weeks per year, thus the EPN farm will actually run asynchronous processing for most of the time. It is therefore desirable to use the computing capacity of the GPUs also in the asynchronous processing, to avoid wasting the majority of the computing capacity. While during the ongoing commissioning the most important task is still the readiness for the synchronous processing, there is an ongoing campaign to offload as many parts of the asynchronous processing as possible to the GPU. A promising candidate is the full tracking chain in the central barrel region, including the Inner Tracking System (ITS), the Transition Radiation Detector (TRD), and the Time of Flight detector (TOF). This could be further extended to the vertexing. Not all of these components can fully run on the GPU yet, but already now the fraction that can be offloaded to the GPU is more than 80\% of the CPU computing load, measured in the case that all processing steps run on the CPU.
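The payoff of offloading such a fraction of the load can be estimated with Amdahl's law; the GPU speedup factor below is an assumed illustrative number, not an ALICE measurement:

```python
def amdahl_speedup(f, s):
    """Overall speedup when a fraction f of the work runs faster by factor s."""
    return 1.0 / ((1.0 - f) + f / s)

# If 80% of the asynchronous load is offloaded to GPUs that process it,
# say, 9x faster per node (illustrative), the node-level speedup is ~3.5.
print(amdahl_speedup(0.8, 9.0))
```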
It is nevertheless important to offload also small intermediate steps to avoid repeated data transfers back and forth. The relative contribution of the GPUs to the total EPN computing capacity is around 90\%, and it is likely that eventually more than 90\% of the asynchronous computing load will be offloaded to the GPU, such that the GPUs will be fully loaded in both computing phases. The ALICE online and offline systems have proven their readiness for data taking during the LHC pilot beam test at the end of 2021, running the full online processing with GPUs. The next steps are the pp data taking at nominal interaction rate, the Pb--Pb run at the peak design interaction rate of 50\,kHz, and the asynchronous processing of the data. \section*{Acknowledgements} We thank the German BMBF and the Greek ESPA 2014-2020 National Fund for Research Infrastructures DeTAnet for their support of the ALICE Offline/Online project.
\section{Introduction} Stars form in shielded molecular cores of giant molecular clouds. In Kennicutt-Schmidt relations, the star-formation rates correlate with molecular surface densities \citep[e.g.,~][]{Leroy2008, Genzel2013}. The conversion of atomic to molecular gas is also crucial for the formation of other important molecular tracers and coolants such as CO, OH, and H$_2$O \citep[e.g.,~][]{Bialy2015a}. Understanding the \ifmmode {\rm HI-to-H_2}\else H{\small I}-to-H$_2$ \fi transition is important for star-formation and galaxy evolution theories, and for interpreting observations of the ISM. H$_2$ molecules are photodissociated by far-ultraviolet (FUV) radiation within the Lyman-Werner (LW) band (11.2--13.6 eV). This occurs via a two-step process, in which an LW photon excites an electronic state, which in $\sim 12 \%$ of the cases decays to the rovibrational continuum, leading to dissociation of the H$_2$ molecule. With increasing column density, the FUV radiation is absorbed, and the H$_2$ dissociation rate decreases. Once the column density is large enough so that the local H$_2$ dissociation rate becomes equal to the H$_2$ formation rate, an \ifmmode {\rm HI-to-H_2}\else H{\small I}-to-H$_2$ \fi transition occurs. What are the properties of \ifmmode {\rm HI-to-H_2}\else H{\small I}-to-H$_2$ \fi transitions in molecular clouds, and what are the properties of the (predominantly) \ifmmode {\rm HI} \else H{\small I} \fi shielding columns? \section{Observations} The Perseus cloud is located at a distance of $\sim 300$~pc, with an angular extent of $6^\circ \times 3^\circ$, and a total mass of $\sim 10^4$~\ifmmode {\rm M_{\odot}}\else $\rm M_{\odot}$\fi \citep{Bally2008}. Perseus consists of several dark and star-forming regions, which form low- and intermediate-mass stars (later than B1). Thus the FUV radiation in Perseus is probably dominated by external sources \citep[][hereafter L12]{Lee2012}.
L12 used 21 cm observations of the GALFA survey \citep{Peek2011}, together with IRIS infrared data \citep{MivilleDeschenes2005} and the $A_V$ image from the COMPLETE Survey \citep{Ridge2006}, to derive \ifmmode {\rm HI} \else H{\small I} \fi and \ifmmode {\rm H_2} \else H$_2$ \fi surface densities (\ifmmode {\Sigma_{\rm HI}} \else $\Sigma_{\rm HI}$ \fi and $\ifmmode {\Sigma_{\rm H_2}} \else $\Sigma_{\rm H_2}$ \fi$) towards B1, B1E, B5, IC348, and NGC1333, with a resolution of 0.4 pc. We use the data presented by \citet{Lee2015}, for which the \ifmmode {\rm HI} \else H{\small I} \fi columns were corrected for 21 cm optical depth effects (of up to 20\%). \section{Theoretical Framework} \setcounter{equation}{0} \renewcommand\theequation{\arabic{equation}} We apply the S14 theoretical model, which assumes semi-infinite gas slabs irradiated by external FUV. S14 derived an analytic formula for the total accumulated \ifmmode {\rm HI} \else H{\small I} \fi surface column density, \begin{equation} \label{eq: Sigma_HI} \Sigma_{\rm HI} \ = \ 6.71 \ \Big(\frac{1.9}{\ifmmode {\sigma_{g-21}}\else $\sigma_{g-21}$ \fi}\Big) \ \ln \Big[ \frac{\ifmmode {\alpha G}\else $\alpha G$ \fi}{3.2} \ + 1 \Big] \ \ifmmode {\rm M_{\odot} \ pc^{-2}} \else $\rm M_{\odot} \ pc^{-2}$ \fi . \end{equation} Importantly, \ifmmode {\Sigma_{\rm HI}} \else $\Sigma_{\rm HI}$ \fi is independent of the total gas column \ifmmode {\Sigma_{\rm tot}} \else $\Sigma_{\rm tot}$ \fi (or the cloud size), and is determined solely by the cloud physical parameters $\ifmmode {\alpha G}\else $\alpha G$ \fi$ and $\ifmmode {\sigma_{g-21}}\else $\sigma_{g-21}$ \fi$. Here, $\ifmmode {\sigma_{g-21}}\else $\sigma_{g-21}$ \fi$ is the dust cross section per hydrogen nucleus in units of $10^{-21}$~cm$^{2}$, and is typically $\approx 1.9$. \ifmmode {\alpha G}\else $\alpha G$ \fi is the (dimensionless) ratio of the H$_2$ {\it shielded} dissociation rate to the H$_2$ formation rate.
Assuming H$_2$ formation on dust grains \begin{equation} \label{eq: aG} \ifmmode {\alpha G}\else $\alpha G$ \fi \ = \ 6 \ \Big( \frac{I_{UV}}{n/10 {\rm cm^{-3}}} \Big) \Big(\frac{w}{0.4} \Big) \ , \end{equation} where $I_{\rm UV}$ is the FUV intensity in units of the \citet{Draine1978} field, $n$ is the volume density and $w$ is the fraction of LW photons that are absorbed in H$_2$-dust (see S14). For multiphased gas, the CNM density and $I_{\rm UV}$ are proportional \citep{Wolfire2003}. In this case $(\ifmmode {\alpha G}\else $\alpha G$ \fi)_{\rm CNM} \approx 3$. In our analysis however, we do not assume {\it a priori} CNM conditions \citep[as e.g.~][]{Krumholz2009}, but rather constrain $\ifmmode {\alpha G}\else $\alpha G$ \fi$ directly using the observational data. \section{Results} \begin{figure} \includegraphics[width=1\textwidth]{fig_all} \caption{(a) $R\equiv \ifmmode {\Sigma_{\rm HI}} \else $\Sigma_{\rm HI}$ \fi/\ifmmode {\Sigma_{\rm H_2}} \else $\Sigma_{\rm H_2}$ \fi$ as a function of $\ifmmode {\Sigma_{\rm tot}} \else $\Sigma_{\rm tot}$ \fi$, the black points are the observations and the red lines are fits to the model. The best fitted \ifmmode {\rm HI} \else H{\small I} \fi columns are indicated. (b) The \ifmmode {\Sigma_{\rm HI}} \else $\Sigma_{\rm HI}$ \fi observed columns as contours in the $\ifmmode {\alpha G}\else $\alpha G$ \fi$ -- $\ifmmode {\sigma_{g-21}}\else $\sigma_{g-21}$ \fi$ parameter space. The grey strip is the $(\ifmmode {\alpha G}\else $\alpha G$ \fi)_{\rm CNM}$ typical range, and the two dashed horizontal lines are the typical $\ifmmode {\sigma_{g-21}}\else $\sigma_{g-21}$ \fi =1.9$ and twice typical (3.8) values. 
The $\ifmmode {\alpha G}\else $\alpha G$ \fi$ values for this range and the corresponding densities (assuming $I_{\rm UV}=1$) are indicated.} \end{figure} In Fig.~1 (a) we test our theoretical prediction that $\ifmmode {\Sigma_{\rm HI}} \else $\Sigma_{\rm HI}$ \fi$ is independent of $\ifmmode {\Sigma_{\rm tot}} \else $\Sigma_{\rm tot}$ \fi$ by fitting $\mathcal{R}\equiv\ifmmode {\Sigma_{\rm HI}} \else $\Sigma_{\rm HI}$ \fi/\ifmmode {\Sigma_{\rm H_2}} \else $\Sigma_{\rm H_2}$ \fi$, for each of the five regions. The theory and observations are in excellent agreement. In Fig.~1 (b) we plot the \ifmmode {\rm HI} \else H{\small I} \fi columns as contours in the $\ifmmode {\alpha G}\else $\alpha G$ \fi$ -- $\ifmmode {\sigma_{g-21}}\else $\sigma_{g-21}$ \fi$ parameter space, using Equation (\ref{eq: Sigma_HI}). L12 obtained an elevated $A_V/N_{\rm H}$ ratio in Perseus, so $\ifmmode {\sigma_{g-21}}\else $\sigma_{g-21}$ \fi$ probably lies between 1.9 and 3.8 (dashed lines). For this realistic range in $\ifmmode {\sigma_{g-21}}\else $\sigma_{g-21}$ \fi$, $\ifmmode {\alpha G}\else $\alpha G$ \fi$ spans from $\sim 5$ to $\sim 20$, a factor of 2--7 larger than $(\ifmmode {\alpha G}\else $\alpha G$ \fi)_{\rm CNM}$ (grey strip). Therefore pure CNM shielding cannot explain the observed \ifmmode {\rm HI} \else H{\small I} \fi columns in Perseus. We use Equation (\ref{eq: aG}) to convert $\ifmmode {\alpha G}\else $\alpha G$ \fi$ into volume densities $n$. Assuming $I_{\rm UV} \approx 1$ \citep[][L12]{Tibbs2011}, we get $n \approx 2$ -- 10 cm$^{-3}$ for the \ifmmode {\rm HI} \else H{\small I} \fi shielding layers in Perseus. These values are in between the CNM and WNM densities, $n_{\rm CNM} \approx 100 n_{\rm WNM} \approx 22$~cm$^{-3}$ \citep{Wolfire2003}. \section{Summary and Discussion} We constrained the controlling parameter $\ifmmode {\alpha G}\else $\alpha G$ \fi$ for the \ifmmode {\rm HI} \else H{\small I} \fi envelopes in Perseus.
The $\ifmmode {\alpha G}\else $\alpha G$ \fi$ and \ifmmode {\rm HI} \else H{\small I} \fi volume densities are in between the CNM and WNM values, suggesting that the \ifmmode {\rm HI} \else H{\small I} \fi shielding layers are probably multiphased, where UNM (and perhaps some WNM) significantly contributes to the shielding of the H$_2$ cores. An alternative explanation is that the observations of \ifmmode {\Sigma_{\rm HI}} \else $\Sigma_{\rm HI}$ \fi are contaminated by large amounts of \ifmmode {\rm HI} \else H{\small I} \fi gas that does not participate in shielding. In this case $\ifmmode {\Sigma_{\rm HI}} \else $\Sigma_{\rm HI}$ \fi$ is effectively smaller, reducing the inferred $\ifmmode {\alpha G}\else $\alpha G$ \fi$ and increasing $n$. However, unrealistically large amounts (50--90\%) of the \ifmmode {\Sigma_{\rm HI}} \else $\Sigma_{\rm HI}$ \fi would have to be removed for all of the shielding gas to be CNM. Therefore pure CNM shielding cannot explain the observed \ifmmode {\rm HI} \else H{\small I} \fi columns in Perseus. The situation in Perseus suggests that in addition to CNM, less dense UNM is important in controlling the \ifmmode {\rm HI-to-H_2}\else H{\small I}-to-H$_2$ \fi transitions and Kennicutt-Schmidt thresholds in external galaxies. Full details of this work are in \citet{Bialy2015}.
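As a numerical cross-check, Equations (1) and (2) are straightforward to evaluate; the sketch below uses the fiducial $w=0.4$, $I_{\rm UV}=1$, and $\sigma_{g-21}=1.9$ (illustrative values only):

```python
import math

def alpha_G(I_UV, n, w=0.4):
    """Eq. (2): ratio of shielded H2 dissociation rate to formation rate."""
    return 6.0 * (I_UV / (n / 10.0)) * (w / 0.4)

def sigma_HI(aG, sigma_g21=1.9):
    """Eq. (1): accumulated HI surface density in Msun pc^-2."""
    return 6.71 * (1.9 / sigma_g21) * math.log(aG / 3.2 + 1.0)

def density_from_aG(aG, I_UV=1.0, w=0.4):
    """Invert Eq. (2) for the volume density n in cm^-3."""
    return 60.0 * I_UV * (w / 0.4) / aG

# CNM-like conditions (I_UV = 1, n ~ 22 cm^-3) give aG ~ 2.7 ...
print(alpha_G(1.0, 22.0), sigma_HI(alpha_G(1.0, 22.0)))
# ... while the fitted aG ~ 5-20 implies densities comparable to the
# few-to-ten cm^-3 range quoted in the text.
print(density_from_aG(5.0), density_from_aG(20.0))
```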
\section{Introduction} In supersymmetric extensions of the Standard Model with unbroken R-parity~\cite{Nilles:1983ge+X}, the lightest supersymmetric particle (LSP) is stable and plays an important role in both collider phenomenology and cosmology. The most popular LSP candidate is the lightest neutralino, which appears already in the Minimal Supersymmetric Standard Model (MSSM). Here we consider two well-motivated alternative LSP candidates, which are not part of the spectrum of the MSSM: the axino and the gravitino. In particular, either of them could provide the right amount of cold dark matter in the Universe if heavier than about 1~MeV (see~\cite{Covi:1999ty+X,Brandenburg:2004du+X} and~\cite{Moroi:1993mb,Bolz:1998ek+X,Fujii:2002fv+X,Feng:2003xh+X,Roszkowski:2004jd}, respectively, and references therein). The axino~\cite{Nilles:1981py+X,Tamvakis:1982mw,Kim:1983ia} appears (as the spin-1/2 superpartner of the axion) when extending the MSSM with the Peccei--Quinn mechanism~\cite{Peccei:1977hh+X} in order to solve the strong CP problem. Depending on the model and the supersymmetry (SUSY) breaking scheme, the mass of the axino can range between the eV and the GeV scale~\cite{Tamvakis:1982mw,Nieves:1985fq+X,Rajagopal:1990yx,Goto:1991gq+X}. The gravitino appears (as the spin-3/2 superpartner of the graviton) once SUSY is promoted from a global to a local symmetry leading to supergravity (SUGRA)~\cite{Wess:1992cp}. The mass of the gravitino depends strongly on the SUSY-breaking scheme and can range from the eV scale to scales beyond the TeV region~\cite{Nilles:1983ge+X,Dine:1994vc+X,Randall:1998uk+X}. In particular, in gauge-mediated SUSY breaking schemes~\cite{Dine:1994vc+X}, the gravitino mass is typically less than 100~MeV, while in gravity-mediated schemes~\cite{Nilles:1983ge+X} it is expected to be in the GeV to TeV range. Both the axino and the gravitino are singlets with respect to the gauge groups of the Standard Model. 
Both interact extremely weakly as their interactions are suppressed by the Peccei--Quinn scale~\cite{Sikivie:1999sy,Eidelman:2004wy} $f_a\buildrel>\over{_\sim} 5\times 10^9\,\mbox{GeV}$ and the (reduced) Planck scale~\cite{Eidelman:2004wy} $\mathrm{M}_{\mathrm{Pl}}=2.4\times 10^{18}\,\mbox{GeV}$, respectively. Therefore, in both the axino LSP and the gravitino LSP cases, the next-to-lightest supersymmetric particle (NLSP) typically has a long lifetime. For example, for axino cold dark matter, an NLSP with a mass of 100~GeV has a lifetime of $\Order$(1~sec). For gravitino cold dark matter, this lifetime is of $\Order$(1~sec) for a gravitino mass of 10~MeV and of $\Order(10^6~\mbox{sec})$ for a gravitino mass of 10~GeV. Late NLSP decays can spoil successful predictions of primordial nucleosynthesis and can distort the CMB blackbody spectrum. Constraints are obtained in order to avoid the corresponding (rather mild) axino problem or the more severe and better-known gravitino problem. In the axino LSP case, either a neutralino or a slepton could be the NLSP~\cite{Covi:2004rb}. In the gravitino LSP case, these constraints strongly disfavour a bino-dominated neutralino NLSP, while a slepton NLSP remains allowed~\cite{Fujii:2003nr+X,Roszkowski:2004jd}. Because of their extremely weak interactions, the direct detection of axinos and gravitinos seems hopeless. Likewise, their direct production at colliders is very strongly suppressed. Instead, one expects a large sample of NLSPs from pair production or cascade decays of heavier superparticles, provided the NLSP belongs to the MSSM spectrum. These NLSPs will appear as quasi-stable particles, which will eventually decay into the axino/gravitino LSP. A significant fraction of these NLSP decays will take place outside the detector and will thus escape detection. 
For the charged slepton NLSP scenario, however, there have recently been proposals that discuss how such NLSPs could be stopped and collected for an analysis of their decays into the LSP. It was found that up to $\Order(10^3$--$10^4)$ and $\Order(10^3$--$10^5)$ charged NLSPs can be trapped per year at the Large Hadron Collider (LHC) and the International Linear Collider (ILC), respectively, by placing 1--10~kt of massive additional material around planned collider detectors~\cite{Hamaguchi:2004df,Feng:2004yi}. In this Letter we assume that the NLSP is a charged slepton. In Sec.~\ref{Sec:AxinoLSP} we investigate the NLSP decays in the axino LSP scenario. These decays were previously considered in~\cite{Covi:2004rb}. We show that the NLSP decays can be used to estimate the axino mass and to probe the Peccei--Quinn sector. In particular, we obtain a new method to measure the Peccei--Quinn scale $f_a$ at future colliders. In Sec.~\ref{Sec:GravitinoLSP} we consider the corresponding NLSP decays in the gravitino LSP scenario. These decays were already studied in~\cite{Buchmuller:2004rq}. It was shown that the measurement of the NLSP lifetime can probe the gravitino mass and can lead to a new (microscopic) determination of the Planck scale with an independent kinematical reconstruction of the gravitino mass. Moreover, it was demonstrated that slepton NLSP decays into the corresponding lepton, the gravitino, and the photon can be used to reveal the peculiar couplings and possibly even the spin of the gravitino. In Ref.~\cite{Buchmuller:2004rq} the limit of an infinite neutralino mass was used. Here we generalize the result obtained therein for the three-body decay by taking into account finite values of the neutralino mass. A question arises as to whether one can distinguish between the axino LSP and the gravitino LSP scenarios at colliders.
From the NLSP lifetime alone, such a distinction will be difficult, in particular if the mass of the LSP cannot be determined. Thus, an analysis of the three-body decay of the charged NLSP slepton into the corresponding lepton, the LSP, and a photon will be essential. With a measurement of the polarizations of the final-state lepton and photon, the determination of the spin of the LSP should be possible~\cite{Buchmuller:2004rq} and would allow us to decide clearly between the spin-1/2 axino and the spin-3/2 gravitino. The spin measurement, however, will be very difficult. In Sec.~\ref{Sec:AxinovsGravitino} we present more feasible methods to distinguish between the axino LSP and the gravitino LSP scenarios, which are also based on the analysis of the three-body NLSP decay with a lepton and a photon in the final state. Let us comment on the mass hierarchy of the relevant particles. There are six possible orderings in the hierarchy of the axino mass $m_{{\widetilde a}}$, the gravitino mass $m_{\widetilde{G}}$, and the mass of the lightest ordinary supersymmetric particle (LOSP) $m_\mathrm{LOSP}$. Here the LOSP is the lightest charged slepton. The cases relevant in this Letter are (i)~$m_{{\widetilde a}} < m_\mathrm{LOSP} < m_{\widetilde{G}}$, (ii)~$m_{\widetilde{G}} < m_\mathrm{LOSP} < m_{{\widetilde a}}$, (iii)~$m_{{\widetilde a}} < m_{\widetilde{G}} < m_\mathrm{LOSP}$, and (iv)~$m_{\widetilde{G}} < m_{{\widetilde a}} < m_\mathrm{LOSP}$. In cases (iii) and (iv), the LOSP has two distinct decay channels, one into the axino and the other into the gravitino. However, unless the decay rates into the axino and the gravitino are (accidentally) comparable, the phenomenology of the LOSP decay in the cases (iii) and (iv) can essentially be reduced to the cases~(i) or~(ii), although not necessarily respectively, as will be discussed in Sec.~\ref{Sec:AxinovsGravitino}. We will thus concentrate on the cases (i) and (ii) and call the LOSP the NLSP. 
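The lifetimes quoted above for the gravitino LSP case follow from the standard tree-level rate for ${\widetilde \tau} \to \tau + \widetilde{G}$ (cf.~\cite{Buchmuller:2004rq}), $\Gamma = m_{{\widetilde \tau}}^5 \, (1 - m_{\widetilde{G}}^2/m_{{\widetilde \tau}}^2)^4 / (48\pi\, m_{\widetilde{G}}^2 \mathrm{M}_{\mathrm{Pl}}^2)$, with the tau mass neglected; a numerical sketch:

```python
import math

HBAR_GEV_S = 6.582e-25  # hbar in GeV * s
M_PLANCK = 2.4e18       # reduced Planck mass in GeV

def stau_lifetime(m_stau, m_gravitino):
    """Lifetime (s) of stau -> tau + gravitino at tree level, m_tau = 0."""
    rate = (m_stau ** 5
            * (1.0 - m_gravitino ** 2 / m_stau ** 2) ** 4
            / (48 * math.pi * m_gravitino ** 2 * M_PLANCK ** 2))  # in GeV
    return HBAR_GEV_S / rate

# For a 100 GeV stau: O(1 s) for a 10 MeV gravitino, O(10^6 s) for 10 GeV.
print(stau_lifetime(100.0, 0.01), stau_lifetime(100.0, 10.0))
```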
\section{Axino LSP Scenario} \label{Sec:AxinoLSP} In this section we consider the axino LSP scenario. The relevant interactions of the axino are discussed. The rates of the two-body and three-body decays of the charged slepton NLSP are given. We demonstrate that these decays can be used to estimate the Peccei--Quinn scale and the axino mass. To be specific, we focus on the case where the lighter stau ${\widetilde \tau}$ is the NLSP. In general, the stau is a linear combination of ${\widetilde \tau}_{\mathrm R}$ and ${\widetilde \tau}_{\mathrm L}$, which are the superpartners of the right-handed and left-handed tau lepton, respectively: ${\widetilde \tau}=\cos\theta_\tau{\widetilde \tau}_{\mathrm R}+\sin\theta_\tau{\widetilde \tau}_{\mathrm L}$. For simplicity, we concentrate on a pure `right-handed' stau ${\widetilde \tau}_{\mathrm R}$, which is a good approximation at least for small $\tan\beta$. Then, the neutralino--stau coupling is dominated by the bino coupling. In addition, we assume for simplicity that the lightest neutralino is a pure bino. \subsection{Axino Interactions} \label{Sec:AxinoInteractions} Let us first discuss how the axino couples to the stau. Concentrating on hadronic, or KSVZ, axion models~\cite{Kim:1979if+X} in a SUSY setting, the coupling of the axino to the bino and the photon/Z-boson at scales below the Peccei--Quinn scale $f_a$ is given effectively by the Lagrangian~\cite{Covi:1999ty+X} \begin{eqnarray} {\cal L}_{{\widetilde a}}&=& i\,\frac{\alpha_{\mathrm{Y}} C_{\rm aYY}}{16\pi f_a}\, {\overline {\tilde a}}\,\gamma_5\,[\gamma_\mu,\gamma_\nu]\, {\widetilde B}\, \left( \cos\theta_W F_{\mu\nu}-\sin\theta_W Z_{\mu\nu} \right) , \label{Eq:AxinoInteractions} \end{eqnarray} where $\theta_W$ is the weak mixing angle, $\alpha_{\mathrm{Y}}=\alpha/\cos^2\theta_W$ with the fine structure constant $\alpha$, and $F_{\mu\nu}$ and $Z_{\mu\nu}$ are the field strength tensors of the photon and Z-boson, respectively. 
The interaction Lagrangian~(\ref{Eq:AxinoInteractions}) is obtained by integrating out the heavy (s)quarks introduced in supersymmetric KSVZ axion models. Indeed, the KSVZ axino couples directly only to these additional heavy (s)quarks. Thus, the above coupling depends, for example, on the hypercharge of these heavy (s)quarks, which we assume to be non-zero. The model dependence related to the Peccei--Quinn sector is expressed in terms of the factor $C_{\rm aYY}\simeq \mathcal{O}(1)$. As the MSSM fields do not carry Peccei--Quinn charges, the axino couples to the stau only indirectly, via the exchange of intermediate gauge bosons and gauginos. In the alternative DFSZ axion models~\cite{Dine:1981rt+X}, once supersymmetrized, the mixing of the axino with the MSSM neutralinos can be non-negligible and other couplings between the axino and the MSSM fields will arise. Here, however, we focus on the KSVZ-type models. \vspace*{1cm} \subsection{The Two-Body Decay \boldmath{${\widetilde \tau} \to \tau + {\widetilde a}$} } \label{Sec:Axino2Body} We now consider the two-body decay ${\widetilde \tau}\to\tau+{\widetilde a}$ in the framework described above. We neglect the tau mass for simplicity. With the effective vertex~(\ref{Eq:AxinoInteractions}), i.e.\ with the heavy KSVZ (s)quarks integrated out, this two-body decay occurs at the one-loop level. The corresponding Feynman diagrams are shown in Fig.~\ref{Fig:Axino2Body}, where the effective vertex is indicated by a thick dot. 
\begin{figure} \centerline{\epsfig{file=Axino2Body_w_labels.eps,width=14cm}} \caption{ The dominant contributions to the two-body NLSP decay ${\widetilde \tau}_{\mathrm R} \to \tau +{\widetilde a}$.} \label{Fig:Axino2Body} \end{figure} Using the method described in~\cite{Covi:2002vw}, we obtain the following estimate for the decay rate:\footnote{We correct the factor of $(1/16)(1+\tan^2\theta_W)^2/(1-\tan^2\theta_W)^2$, which is missing in Eq.~(3.12) of Ref.~\cite{Covi:2004rb}.} \begin{eqnarray} \Gamma({\widetilde \tau}_{\mathrm R} \to\tau\,{\widetilde a}) &=& \frac{9\,\alpha^4\,C_{\rm aYY}^2}{512\pi^5\cos^8\theta_W}\,\, \frac{m_{{\widetilde B}}^2}{f_a^2}\,\, \frac{(m_{{\widetilde \tau}}^2 - m_{{\widetilde a}}^2)^2}{m_{{\widetilde \tau}}^3}\,\, \xi^2\, \log^2\left(\frac{f_a}{m}\right) \label{Eq:Axino2BodyI} \\ &\simeq& \xi^2\,(25~\mathrm{sec})^{-1} C_{\rm aYY}^2 \left(1-\frac{m_{\widetilde a}^2}{m_{\widetilde \tau}^2}\right) \left(\frac{m_{{\widetilde \tau}}}{100\,\mbox{GeV}}\right) \left(\frac{10^{11}\,\mbox{GeV}}{f_a}\right)^2 \left(\frac{m_{\tilde{B}}}{100\,\mbox{GeV}}\right)^2 , \label{Eq:Axino2BodyII} \end{eqnarray} where $m_{{\widetilde B}}$ is the mass of the bino and $m_{\widetilde \tau}$ is the mass of the stau NLSP, i.e.\ $m_{\widetilde a} < m_{\widetilde \tau} < m_{{\widetilde B}}$. As explained below, there is an uncertainty associated with the method used to derive the decay rate~(\ref{Eq:Axino2BodyI}). We absorb this uncertainty into the mass scale $m\simeq m_{{\widetilde \tau},{\widetilde B}} \simeq \Order(100\,\mbox{GeV})$ and into the factor $\xi\simeq\Order(1)$ in the first line. We used $\log\left(f_a/m\right)\simeq 20.7$ to get from the first to the second line. Here a technical comment on the loop integral is in order. If one naively integrates over the internal momentum in the diagrams with the effective vertex~---~see~Fig.~\ref{Fig:Axino2Body}~---~one encounters logarithmic divergencies. 
This is because the effective vertex~(\ref{Eq:AxinoInteractions}) is applicable only if the momentum is smaller than the heavy (s)quark masses, whereas the momentum in the loop goes beyond that scale. In a rigorous treatment, one has to specify the origin of the effective vertex, i.e.\ the Peccei--Quinn sector, and to calculate the two-loop integrals with heavy (s)quarks in the additional loop. Such a two-loop computation leads to a finite result~\cite{HSSW:2005}. Here, instead, we have regulated the logarithmic divergencies with the cut-off $f_a$ and kept only the dominant contribution. The mass scale $m$ and the factor $\xi$ have been introduced above to account for the uncertainty coming from this cut-off procedure. \subsection{The Three-Body Decay \boldmath{${\widetilde \tau} \to \tau + \gamma + {\widetilde a}$} } \label{Sec:Axino3Body} We now turn to the three-body decay ${\widetilde \tau}_{\mathrm R}\to\tau+\gamma+{\widetilde a}$. We again neglect the tau mass for simplicity. In contrast to the two-body decay considered above, the three-body decay occurs already at tree level, once the effective vertex given in~(\ref{Eq:AxinoInteractions}) is used. In addition, we take into account photon radiation from the loop diagrams of Fig.~\ref{Fig:Axino2Body}, since the additional factor of $\alpha$ is partially compensated by the additional factor of $\log(f_a/m)$. As above, we keep only the dominant contribution of the loop diagrams. The corresponding Feynman diagrams are shown in Fig.~\ref{Fig:Axino3Body}, \begin{figure} \centerline{\epsfig{file=Axino3Body_w_labels_new.eps,width=13.5cm}} \bigskip \caption{ The dominant contributions to the three-body NLSP decay ${\widetilde \tau}_{\mathrm R} \to \tau + \gamma +{\widetilde a}$.} \label{Fig:Axino3Body} \end{figure} where a thick dot represents the effective vertex~(\ref{Eq:AxinoInteractions}) and a shaded triangle the set of triangle diagrams given in Fig.~\ref{Fig:Axino2Body}. 
As the photon radiation from an electrically charged particle within the loops leads to a subdominant contribution, these processes are not shown in Fig.~\ref{Fig:Axino3Body}. At each order in $\log(f_a/m)$, only the leading order in $\alpha$ is computed while higher-order corrections are not considered. In terms of the observables that seem to be most accessible, i.e.\ the photon energy $E_{\gamma}$ and $\cos\theta$, the cosine of the opening angle between the photon and the tau direction, the corresponding differential decay rate reads \begin{equation} \frac{d^2\Gamma({\widetilde \tau}_{\mathrm R}\to\tau\,\gamma\,{\widetilde a})}{dx_\gamma\,d\cos\theta} = \frac{m_{{\widetilde \tau}}}{512\pi^3}\, \frac{x_\gamma(1-A_{\widetilde a}-x_\gamma)}{[1-(x_\gamma/2)(1-\cos\theta)]^2}\, \sum_{\rm spins} |\mathcal{M}({\widetilde \tau}_{\mathrm R}\to\tau\,\gamma\,{\widetilde a})|^2 \ , \end{equation} where \begin{eqnarray} \sum_{\rm spins} |\mathcal{M}({\widetilde \tau}_{\mathrm R}\to\tau\,\gamma\,{\widetilde a})|^2 & = & {\alpha^3\,\*C_{\rm aYY}^2\over \pi\*\cos^4\theta_W}\,\, \*{m_{{\widetilde \tau}}^2\over f_a^2}\*\,\, F_{\rm diff}^{({\widetilde a})}(x_\gamma,\cos\theta,A_{\widetilde a},A_{\widetilde B}) \ , \end{eqnarray} with \begin{equation} x_{\gamma} \equiv \frac{2 E_{\gamma}}{m_{{\widetilde \tau}}} \ , \quad\quad A_{\widetilde a} \equiv \frac{m_{\widetilde a}^2}{m_{{\widetilde \tau}}^2} \ , \quad\quad A_{\widetilde B} \equiv \frac{m_{\widetilde B}^2}{m_{{\widetilde \tau}}^2} \ , \label{Eq:xgammaAi} \end{equation} and \begin{eqnarray} && F_{\rm diff}^{({\widetilde a})} (x_\gamma,\cos\theta,A_{\widetilde a},A_{\widetilde B}) = {x_\gamma^2 \*(1 \!-\! A_{\widetilde a} \!-\! x_\gamma)\* [1 \!+\! \cos\theta \!+\! A_{\widetilde a}\* (1 \!-\! \cos\theta)]\* [1 \!+\! \cos\theta \!+\! A_{\widetilde B} \*(1 \!-\! 
\cos\theta)]\over \{x_\gamma\*(1+\cos\theta)+2\*A_{\widetilde a}-A_{\widetilde B}\*[2- x_\gamma (1-\cos\theta)] \}^2} \nonumber\\ && +\,\, {3\*\alpha\over \pi\*\cos^2\theta_W}\*\, \xi\*\log\left({f_a\over m}\right) \*\Bigg\{ {\sqrt{A_{\widetilde a}\*A_{\widetilde B}} \*(1+\cos\theta)\*(1-A_{\widetilde a}-x_\gamma)\over x_\gamma\*(1+\cos\theta)+2\*A_{\widetilde a}-A_{\widetilde B} \*[2-x_\gamma\*(1-\cos\theta)]} \nonumber \\ && \hskip 4.5cm +\, {A_{\widetilde B}\*\left[(1+\cos\theta)\*(1-A_{\widetilde a})+ A_{\widetilde a}\*x_\gamma\*(1-\cos\theta)\right]\over x_\gamma\*(1+\cos\theta)+2\*A_{\widetilde a}-A_{\widetilde B} \*[2-x_\gamma\*(1-\cos\theta)]}\Bigg\} \nonumber \\ && +\,\, {9\*\alpha^2\over4\*\pi^2\*\cos^4\theta_W}\*\, \xi^2\*\log^2\left({f_a\over m}\right)\* A_{\widetilde B} \*\Bigg\{ {1+\cos\theta+A_{\widetilde a}\*(1-\cos\theta)\over (1-\cos\theta) \*(1-A_{\widetilde a}-x_\gamma)}+{2\*(1+\cos\theta)\*(1-A_{\widetilde a}) \over x_\gamma^2\*(1-\cos\theta)} \Bigg\} \ . \end{eqnarray} Hereafter, we use $\log\left(f_a/m\right)\simeq 20.7$, as in the previous section. The three-body decay ${\widetilde \tau}\to\tau+\gamma+{\widetilde a}$ involves bremsstrahlung processes (see~Fig.~\ref{Fig:Axino3Body}) and, as already mentioned, we have neglected the tau mass. Thus, when the photon energy and/or the angle between the photon and the tau direction tend to zero, there are soft and/or collinear divergences. Consequently, the total rate of the decay ${\widetilde \tau}\to\tau+\gamma+{\widetilde a}$ is not defined. 
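The divergences just described can be made explicit numerically. The sketch below (all physical prefactors dropped, $m_{\widetilde a}\simeq 0$; purely an illustration, not a complete phase-space calculation) integrates the most singular piece of $F_{\rm diff}^{({\widetilde a})}$, which behaves as $(1+\cos\theta)/[x_\gamma^2(1-\cos\theta)]$, over the region allowed by a common lower cut on $x_\gamma$ and on $1-\cos\theta$. The result grows without bound as the cut shrinks, so only cut-regulated rates are well defined.

```python
# Midpoint-rule integration of the most singular piece of the squared
# matrix element, f(x, c) ~ (1 + c) / (x^2 (1 - c)), with c = cos(theta).
# All physical prefactors are dropped; this only illustrates that the
# integral diverges as the cuts x > cut and c < 1 - cut are removed.

def cut_integral(cut, n=200):
    total = 0.0
    dx = (1.0 - cut) / n          # x runs over (cut, 1)
    dc = (2.0 - cut) / n          # c runs over (-1, 1 - cut)
    for i in range(n):
        x = cut + (i + 0.5) * dx
        for j in range(n):
            c = -1.0 + (j + 0.5) * dc
            total += (1.0 + c) / (x * x * (1.0 - c)) * dx * dc
    return total

for cut in (0.2, 0.1, 0.05):
    print(cut, cut_integral(cut))  # grows steadily as the cut shrinks
```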
We define the integrated rate of the three-body decay ${\widetilde \tau} \to \tau + \gamma +{\widetilde a}$ with a cut on the scaled photon energy, $x_\gamma > x_\gamma^{\mathrm{cut}}$, and a cut on the cosine of the opening angle, $\cos\theta < 1-x_\theta^{\mathrm{cut}}$: \begin{equation} \Gamma({\widetilde \tau}_{\mathrm R}\to\tau\,\gamma\,{\widetilde a}\,; x_\gamma^{\mathrm{cut}},x_\theta^{\mathrm{cut}}) \equiv \int^{1-A_{\tilde{a}}}_{x_\gamma^{\mathrm{cut}}} d x_\gamma \int^{1-x_\theta^{\mathrm{cut}}}_{-1} d \cos\theta \frac{d^2\Gamma({\widetilde \tau}_{\mathrm R}\to\tau\,\gamma\,{\widetilde a})}{dx_\gamma d\cos\theta} \ . \label{Eq:Axino3BodywithCuts} \end{equation} As explained in Sec.~\ref{Sec:AxinovsGravitino}, the quantity $\Gamma({\widetilde \tau}_{\mathrm R}\to\tau\,\gamma\,{\widetilde a}\,; x_\gamma^{\mathrm{cut}},x_\theta^{\mathrm{cut}})$ will be important in distinguishing between the axino LSP and the gravitino LSP scenarios. \subsection{Probing the Peccei--Quinn Scale and the Axino Mass} In the axino LSP scenario, the stau NLSP decays provide us with a new method to probe the Peccei--Quinn scale $f_a$ at colliders. As we will see in Sec.~\ref{Sec:BranchingRatio}, the branching ratio of the three-body decay is small if reasonable cuts are used. Thus, we can use the two-body decay rate~(\ref{Eq:Axino2BodyII}) to estimate the stau lifetime, $\tau_{\widetilde \tau} \approx 1/\Gamma({\widetilde \tau}\to\tau\,{\widetilde a})$. 
Accordingly, the Peccei--Quinn scale $f_a$ can be estimated as \begin{equation} f_a^2 \simeq \left(\frac{\tau_{\widetilde \tau}}{25~\mathrm{sec}}\right)\, \xi^2\,C_{\rm aYY}^2 \left(1-\frac{m_{\widetilde a}^2}{m_{\widetilde \tau}^2}\right) \left(\frac{m_{{\widetilde \tau}}}{100\,\mbox{GeV}}\right) \left(\frac{m_{\tilde{B}}}{100\,\mbox{GeV}}\right)^2 \left(10^{11}\,\mbox{GeV}\right)^2 \ , \label{Eq:PQScale} \end{equation} once $m_{\widetilde \tau}$, $m_{\widetilde B}$, and the lifetime of the stau $\tau_{\widetilde \tau}$ have been measured. The dependence on the axino mass is negligible for $m_{\widetilde a}/m_{\widetilde \tau}\buildrel<\over{_\sim} 0.1$, so that $f_a$ can be determined without knowing $m_{\widetilde a}$. For larger values of $m_{\widetilde a}$, the stau NLSP decays can be used to determine the mass of the axino kinematically. In the two-body decay ${\widetilde \tau}\to\tau+{\widetilde a}$, the axino mass can be inferred from $E_\tau$, the energy of the emitted tau lepton: \begin{equation} m_{\widetilde a} = \sqrt{m_{\widetilde \tau}^2+m_\tau^2-2m_{\widetilde \tau} E_\tau} \ , \label{Eq:AxinoMass2Body} \end{equation} with an error depending on the experimental uncertainty on $m_{\widetilde \tau}$ and $E_\tau$. \section{Gravitino LSP Scenario} \label{Sec:GravitinoLSP} In this section we assume that the gravitino is the LSP and again that the pure right-handed stau is the NLSP. The corresponding rates of the two-body and three-body decay of the stau NLSP are given. These decays have already been studied in Refs.~\cite{Buchmuller:2004rq}. Here we generalize the result obtained for the three-body decay by taking into account finite values of the neutralino mass. For simplicity, we assume again that the lightest neutralino is a pure bino. The couplings of the gravitino ${\widetilde{G}}$ to the ${\widetilde \tau}_{\mathrm R}$, $\tau$, ${\widetilde B}$, and $\gamma$ are given by the SUGRA Lagrangian~\cite{Wess:1992cp}. 
The interactions of the gravitino are determined uniquely by local SUSY and the Planck scale and, in contrast to the axino case, are not model-dependent. \subsection{The Two-Body Decay \boldmath{${\widetilde \tau} \to \tau + {\widetilde{G}}$} } \label{Sec:Gravitino2Body} In the gravitino LSP scenario, the main decay mode of the stau NLSP is the two-body decay ${\widetilde \tau}\to\tau+{\widetilde{G}}$. As there is a direct stau--tau--gravitino coupling, this process occurs at tree level. Neglecting the $\tau$-lepton mass $m_\tau$, one obtains the decay rate: \begin{eqnarray} \Gamma({\widetilde \tau}_{\mathrm R}\to\tau\,{\widetilde{G}}) &=& \frac{m_{{\widetilde \tau}}^5}{48\pi\,m_{\widetilde{G}}^2\,\mathrm{M}_{\mathrm{Pl}}^2} \left( 1 - \frac{m_{\widetilde{G}}^2}{m_{{\widetilde \tau}}^2} \right)^4 \label{Eq:Gravitino2BodyI}\\ &=& (5.89~\mathrm{sec})^{-1} \left(\frac{m_{{\widetilde \tau}}}{100~\mathrm{GeV}}\right)^5 \left(\frac{10~\mathrm{MeV}}{m_{\widetilde{G}}}\right)^2 \left( 1 - \frac{m_{\widetilde{G}}^2}{m_{{\widetilde \tau}}^2} \right)^4 \ . \label{Eq:Gravitino2BodyII} \end{eqnarray} In order to get from the first to the second line, we have used the value of the reduced Planck mass $\mathrm{M}_{\mathrm{Pl}} = (8\pi\,G_{\rm N})^{-1/2} = 2.435\times 10^{18}\,\mbox{GeV}$ as obtained from macroscopic measurements of Newton's constant~\cite{Eidelman:2004wy} $G_{\rm N} = 6.709\times 10^{-39}\,\mbox{GeV}^{-2}$. Thus, the gravitino mass can be determined once the stau NLSP lifetime governed by~(\ref{Eq:Gravitino2BodyII}) and $m_{\widetilde \tau}$ are measured. As pointed out in Refs.~\cite{Buchmuller:2004rq}, expression~(\ref{Eq:Gravitino2BodyI}) can also be used the other way around, i.e.\ for a microscopic determination of the Planck scale once the masses of the gravitino and the stau are measured kinematically. Note the strong dependence on $m_{\widetilde{G}}$ and $m_{\widetilde \tau}$. 
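To make this mass determination concrete, the following sketch (example numbers only, not measured values) evaluates the lifetime $\tau_{\widetilde \tau}=1/\Gamma$ from the numerical form~(\ref{Eq:Gravitino2BodyII}) and inverts it for $m_{\widetilde{G}}$ by bisection, exploiting the fact that the lifetime grows monotonically with the gravitino mass:

```python
# Lifetime of the stau NLSP decaying to tau + gravitino, from the
# numerical form of the two-body rate quoted in the text; masses in GeV,
# lifetime in seconds.  Inputs are illustrative, not measured values.

def stau_lifetime(m_stau, m_gravitino):
    ratio2 = (m_gravitino / m_stau) ** 2
    rate = (1.0 / 5.89) * (m_stau / 100.0) ** 5 \
           * (0.010 / m_gravitino) ** 2 * (1.0 - ratio2) ** 4   # in 1/sec
    return 1.0 / rate

def gravitino_mass(tau_sec, m_stau):
    """Invert the lifetime for m_G by bisection (lifetime is monotonic in m_G)."""
    lo, hi = 1e-9, 0.999 * m_stau
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if stau_lifetime(m_stau, mid) < tau_sec:
            lo = mid   # lifetime too short: need a heavier gravitino
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(stau_lifetime(100.0, 0.010))    # ~5.89 sec, as quoted above
print(gravitino_mass(589.0, 100.0))   # ~0.1 GeV
```

For instance, a measured lifetime of about $589$~sec with $m_{\widetilde \tau}=100\,\mbox{GeV}$ would correspond to $m_{\widetilde{G}}\simeq 100\,\mbox{MeV}$.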
In the axino LSP scenario, the corresponding rate~(\ref{Eq:Axino2BodyI}) becomes independent of the axino mass for $m_{{\widetilde a}}/m_{{\widetilde \tau}} \buildrel<\over{_\sim} 0.1$, so that the Peccei--Quinn scale can be determined even if $m_{{\widetilde a}}$ is too small to be inferred kinematically. \vspace*{-0.2cm} \subsection{The Three-Body Decay \boldmath{${\widetilde \tau} \to \tau + \gamma + {\widetilde{G}}$} } \label{Sec:Gravitino3Body} Let us now turn to the three-body decay ${\widetilde \tau}_{\mathrm R}\to\tau+\gamma+{\widetilde{G}}$. The corresponding Feynman diagrams are shown in Fig.~\ref{Fig:Gravitino3Body}. \begin{figure} \centerline{ \epsfig{file=Gravitino3Body_w_labels_new.eps,width=13.5cm} } \caption{ The three-body NLSP decay ${\widetilde \tau}_{\mathrm R} \to \tau + \gamma + {\widetilde{G}}$.} \label{Fig:Gravitino3Body} \end{figure} We neglect again the tau mass for simplicity. For finite bino mass, we obtain the following differential decay rate \begin{equation} \frac{d^2\Gamma({\widetilde \tau}_{\mathrm R}\to\tau\,\gamma\,{\widetilde{G}})}{dx_\gamma d\cos\theta} = \frac{m_{\tilde{\tau}}}{512\pi^3} \frac{x_\gamma(1-A_{{\widetilde{G}}}-x_\gamma)}{[1-(x_\gamma/2)(1-\cos\theta)]^2} \sum_{\rm spins} |\mathcal{M}({\widetilde \tau}_{\mathrm R}\to\tau\,\gamma\,{\widetilde{G}})|^2, \end{equation} where \begin{equation} \sum_{\rm spins} |\mathcal{M}({\widetilde \tau}_{\mathrm R}\to\tau\,\gamma\,{\widetilde{G}})|^2 = \frac{8\pi\alpha}{3}\,\, \frac{m_{\widetilde \tau}^2}{\mathrm{M}_{\mathrm{Pl}}^2\,A_{{\widetilde{G}}}}\,\, F_{\rm diff}^{({\widetilde{G}})}(x_\gamma,\cos\theta,A_{{\widetilde{G}}},A_{\widetilde B}) \end{equation} with the definitions of $x_{\gamma}$ and $A_{\widetilde B}$ given in~(\ref{Eq:xgammaAi}), $A_{\widetilde{G}} \equiv m_{\widetilde{G}}^2/m_{\widetilde \tau}^2$, and \begin{eqnarray} && F_{\rm diff}^{({\widetilde{G}})} (x_\gamma,\cos\theta,A_{{\widetilde{G}}},A_{\widetilde B}) = -3\*A_{{\widetilde{G}}}^2 - 
7\*x_{\gamma}\*A_{{\widetilde{G}}} +{2\*(2-5\*\cos\theta)\*A_{{\widetilde{G}}} \over 1-\cos\theta} -{x_{\gamma}\*(1+\cos\theta) \over (1-\cos\theta)} \nonumber \\ &&\quad -{(1+\cos\theta)\*(3+\cos\theta) \over (1-\cos\theta)^2} + {2\*(1-A_{{\widetilde{G}}})^3\*(1+\cos\theta) \over x_{\gamma}^2\*(1-\cos\theta)} + {A_{{\widetilde{G}}}\*(1-A_{{\widetilde{G}}})^2 \over 1-A_{{\widetilde{G}}}-x_{\gamma}} \nonumber \\ &&\quad + {(1-A_{{\widetilde{G}}})^2\*(1+\cos\theta) \over (1-A_{{\widetilde{G}}}-x_{\gamma})\*(1-\cos\theta)} - {4\*\left[1+\cos\theta+A_{{\widetilde{G}}}\*(1-\cos\theta)\right]^2 \over \left[2-x_{\gamma}\*(1-\cos\theta)\right]^2\*(1-\cos\theta)^2} \nonumber \\ &&\quad + {2\*\left\{3+\cos\theta\*\left[4-\cos\theta+2\*A_{{\widetilde{G}}}\*(1-\cos\theta)\right]\right\} \*\left[1+\cos\theta+A_{{\widetilde{G}}}\*(1-\cos\theta)\right] \over \left[2-x_{\gamma}\*(1-\cos\theta)\right]\*(1-\cos\theta)^2} \nonumber \\ &&\quad + 2\*(1-A_{{\widetilde{G}}}-x_{\gamma}) \*\Bigg\{ {1+x_{\gamma}-x_{\gamma}^2 - 2\*A_{{\widetilde{G}}}\*(1+3\*x_{\gamma}-2\*x_{\gamma}^2) + A_{{\widetilde{G}}}^2\*(1+5\*x_{\gamma}) \over x_{\gamma}\*(1-A_{\tilde{B}})\*(1-A_{{\widetilde{G}}}-x_{\gamma})} \nonumber \\ &&\quad - {2\*\left[1+x_{\gamma}\*(2+A_{\tilde{B}})-x_{\gamma}^2 + 2\*A_{{\widetilde{G}}}\*(1-x_{\gamma})\right] \over x_{\gamma}\*\left[2-x_{\gamma}\*(1-\cos\theta)\right]} + {4\*(1-A_{{\widetilde{G}}}-x_{\gamma}) \over \left[2-x_{\gamma}\*(1-\cos\theta)\right]^2} \nonumber \\ &&\quad -{\sqrt{A_{{\widetilde B}}\*A_{{\widetilde{G}}}}\* \left[2\*(1+\cos\theta)\*(1-A_{{\widetilde{G}}})+3\*x_{\gamma}\*A_{{\widetilde{G}}}\*(1-\cos\theta)\right] \over x_{\gamma}\*(1+\cos\theta)+2\*(A_{{\widetilde{G}}}-A_{\tilde{B}}) +A_{\tilde{B}}\*x_{\gamma}\*(1-\cos\theta)} \nonumber \\ &&\quad - {2\*\left\{A_{{\widetilde{G}}}^2\*\left[-3-6\*x_{\gamma}+A_{\tilde{B}}\*(2+x_{\gamma})\right] + 4\*A_{{\widetilde B}}\*A_{{\widetilde{G}}}\*(1+x_{\gamma}-x_{\gamma}^2)\right\} \over x_{\gamma}\*(1-A_{\tilde{B}}) 
\*\left[x_{\gamma}\*(1+\cos\theta)+2\*(A_{{\widetilde{G}}}-A_{\tilde{B}}) +A_{\tilde{B}}\*x_{\gamma}\*(1-\cos\theta) \right]} \nonumber \\ &&\quad + {2\*A_{\tilde{B}}^2\*\left[(1-x_{\gamma})\*(1+2\*A_{{\widetilde{G}}}+x_{\gamma}) +x_{\gamma}\*A_{\tilde{B}}\right] \over x_{\gamma}\*(1-A_{\tilde{B}}) \*\left[x_{\gamma}\*(1+\cos\theta)+2\*(A_{{\widetilde{G}}}-A_{\tilde{B}}) +A_{\tilde{B}}\*x_{\gamma}\*(1-\cos\theta) \right]} \Bigg\} \nonumber \\ &&\quad + (1-A_{{\widetilde{G}}}-x_{\gamma}) \*\Bigg\{ {(-1+3\*A_{{\widetilde{G}}})\*(1-A_{{\widetilde{G}}}) \over (1-A_{\tilde{B}})} +{2\*\left[2-x_{\gamma}-2\*(A_{{\widetilde{G}}}-A_{\tilde{B}})\right] \over 2-x_{\gamma}\*(1-\cos\theta)} \nonumber \\ &&\quad - {4\*(1-A_{{\widetilde{G}}}-x_{\gamma}) \over \left[2-x_{\gamma}\*(1-\cos\theta)\right]^2} - {2\*(A_{{\widetilde{G}}}-A_{\tilde{B}}) \*\left[3\*A_{{\widetilde{G}}}\*(2-2\*A_{{\widetilde{G}}}-x_{\gamma}) +A_{\tilde{B}}\*(2-2\*A_{\tilde{B}}+x_{\gamma})\right] \over (1-A_{\tilde{B}})\*\left[x_{\gamma}\*(1+\cos\theta) +2\*(A_{{\widetilde{G}}}-A_{\tilde{B}}) + A_{\tilde{B}}\*x_{\gamma}\*(1-\cos\theta)\right]} \nonumber \\ &&\quad + {4\*(1-A_{{\widetilde{G}}}-x_{\gamma})\*(3\*A_{{\widetilde{G}}}+A_{\tilde{B}})\*(A_{{\widetilde{G}}}-A_{\tilde{B}})^2 \over (1-A_{\tilde{B}})\*\left[x_{\gamma}\*(1+\cos\theta) +2\*(A_{{\widetilde{G}}}-A_{\tilde{B}}) + A_{\tilde{B}}\*x_{\gamma}\*(1-\cos\theta)\right]^2} \Bigg\} \ . \label{Eq:FdiffGravitino} \end{eqnarray} In the limit $m_{{\widetilde B}}\to\infty$, only the terms in the first four lines of~(\ref{Eq:FdiffGravitino}) remain and the result given in the appendix of the first reference in~\cite{Buchmuller:2004rq} is obtained. For finite values of the bino mass, the diagram with the bino propagator in Fig.~\ref{Fig:Gravitino3Body} has to be taken into account, which then leads to our more general result. As in the axino case, the total rate of the three-body decay ${\widetilde \tau} \to \tau + \gamma +{\widetilde{G}}$ is not defined. 
We thus introduce again the integrated rate with a cut on the scaled photon energy, $x_\gamma > x_\gamma^{\mathrm{cut}}$, and a cut on the cosine of the opening angle, $\cos\theta < 1-x_\theta^{\mathrm{cut}}$, \begin{eqnarray} \Gamma({\widetilde \tau}_{\mathrm R}\to\tau\,\gamma\,{\widetilde{G}}\,; x_\gamma^{\mathrm{cut}},x_\theta^{\mathrm{cut}}) &=& \int^{1-A_{\tilde{G}}}_{x_\gamma^{\mathrm{cut}}} d x_\gamma \int^{1-x_\theta^{\mathrm{cut}}}_{-1} d \cos\theta \frac{d^2\Gamma({\widetilde \tau}_{\mathrm R}\to\tau\,\gamma\,{\widetilde{G}})} {dx_\gamma d\cos\theta} \ . \end{eqnarray} This quantity will be used in our comparison of collider signatures of the axino LSP and the gravitino LSP scenarios. \section{Axino vs.\ Gravitino} \label{Sec:AxinovsGravitino} In this section we show how the two-body and three-body decays of the stau NLSP can be used to distinguish between the axino LSP scenario and the gravitino LSP scenario at colliders. We compare the total decay rates of the stau NLSP, the branching ratios of the three-body decays ${\widetilde \tau}\to\tau+\gamma+{\widetilde a}/{\widetilde{G}}$ with cuts on the observables, and the differential distributions of the decay products in the three-body decays. \subsection{Total Decay Rates} \label{Sec:TotalDecayRates} Let us discuss the lifetime of the stau NLSP in the axino LSP and in the gravitino LSP scenarios, and examine whether the lifetime can be used to distinguish between the two. In both cases, the total decay rate of the stau NLSP is dominated by the two-body decay, \begin{equation} \Gamma^{\rm{total}}_{\tilde{\tau}_R\,\to\, i\,X} \simeq \Gamma({\widetilde \tau}_{\mathrm R}\to\tau\, i) \ , \quad i = {\widetilde a}, {\widetilde{G}} \ , \label{Eq:TotalRates} \end{equation} with the rates given respectively in~(\ref{Eq:Axino2BodyII}) and~(\ref{Eq:Gravitino2BodyII}). 
Thus, the order of magnitude of the stau NLSP lifetime is (essentially) determined by $m_{\widetilde \tau}$, $m_{\widetilde B}$, and $f_a$ in the axino LSP scenario and by $m_{\widetilde \tau}$ and $m_{\widetilde{G}}$ in the gravitino LSP scenario. Among those parameters, one should be able to measure the stau mass $m_{\widetilde \tau}$ and the bino mass $m_{\widetilde B}$ by analysing the other processes occurring in the planned collider detectors. Indeed, we expect that these masses will already be known when the stau NLSP decays are analysed. To be specific, we set these masses to $m_{\widetilde \tau}=100\,\mbox{GeV}$ and $m_{\widetilde B}=110\,\mbox{GeV}$, keeping in mind the NLSP lifetime dependencies $\tau_{\widetilde \tau} \propto 1/(m_{\widetilde \tau} \,m_{\widetilde B}^2)$ for the axino LSP and $\tau_{\widetilde \tau} \propto 1/m_{\widetilde \tau}^5$ for the gravitino LSP. Then, the order of magnitude of the stau NLSP lifetime is governed by the Peccei--Quinn scale $f_a$ in the axino LSP scenario and by the gravitino mass $m_{\widetilde{G}}$ in the gravitino LSP scenario. In the axino LSP scenario, the stau lifetime varies from $\Order(0.01~{\mbox{sec}})$ to $\Order(10~{\mbox{h}})$ if we change the Peccei--Quinn scale $f_a$ from $5\times 10^9\,\mbox{GeV}$ to $5\times 10^{12}\,\mbox{GeV}$, as can be seen from~(\ref{Eq:Axino2BodyII}). For the given values of $m_{\widetilde \tau}$ and $m_{\widetilde B}$, these values can probably be considered as the lower and upper bounds on the stau NLSP lifetime in the axino LSP case. In the gravitino LSP case, the stau lifetime can vary over a much wider range, e.g.\ from $6\times 10^{-8}\,{\rm sec}$ to 15~years by changing the gravitino mass $m_{\widetilde{G}}$ from 1~keV to 50~GeV, as can be seen from~(\ref{Eq:Gravitino2BodyII}). 
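The quoted ranges follow directly from the two numerical rate formulas. A short cross-check, with the benchmark masses above, $\xi^2 C_{\rm aYY}^2=1$, and $m_{\widetilde a}\ll m_{\widetilde \tau}$ (illustrative only):

```python
# Stau NLSP lifetime [sec] in the two scenarios, from the numerical
# forms of the two-body rates quoted earlier; all masses in GeV.
# Purely a cross-check of the quoted lifetime ranges.

def tau_axino(f_a, m_stau=100.0, m_bino=110.0):
    # axino LSP: lifetime scales as f_a^2 / (m_stau * m_bino^2)
    return 25.0 * (f_a / 1e11) ** 2 \
           / ((m_stau / 100.0) * (m_bino / 100.0) ** 2)

def tau_gravitino(m_g, m_stau=100.0):
    # gravitino LSP: lifetime scales as m_G^2 / m_stau^5, with phase space
    return 5.89 * (m_g / 0.010) ** 2 \
           / ((m_stau / 100.0) ** 5 * (1.0 - (m_g / m_stau) ** 2) ** 4)

print(tau_axino(5e9), tau_axino(5e12))           # ~0.05 sec ... ~5e4 sec (~14 h)
print(tau_gravitino(1e-6), tau_gravitino(50.0))  # ~6e-8 sec ... ~5e8 sec (~15 yr)
```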
Therefore, both a very short stau NLSP lifetime, $\tau_{\widetilde \tau} \buildrel<\over{_\sim}$~msec, and a very long one, $\tau_{\widetilde \tau} \buildrel>\over{_\sim}$~days, will point to the gravitino LSP scenario. For example, in gravity-mediated SUSY breaking models, the gravitino mass is typically $(10$--$100)\,\mbox{GeV}$. Then, the lifetime of the NLSP becomes of $\Order(\mbox{years})$ and points clearly to the gravitino LSP scenario. On the other hand, if the observed lifetime of the stau NLSP is within the range $\Order(0.01~{\mbox{sec}})$--$\Order(10~{\mbox{h}})$, it will be very difficult to distinguish between the axino LSP and the gravitino LSP scenarios from the stau NLSP lifetime alone. In this case, the analysis of the three-body NLSP decays will be crucial to distinguish between the two scenarios. \subsection{Branching Ratio of the Three-Body Decay Modes} \label{Sec:BranchingRatio} We now consider the branching ratio of the integrated rate of the three-body decay ${\widetilde \tau}\to\tau+\gamma+{\widetilde a}/{\widetilde{G}}$ with cuts \begin{equation} BR({\widetilde \tau}_{\mathrm R}\to\tau\,\gamma\, i\,; x_{\gamma}^{\mathrm{cut}},x_{\theta}^{\mathrm{cut}}) \equiv {\Gamma({\widetilde \tau}_{\mathrm R}\to\tau\,\gamma\, i\,; x_\gamma^{\mathrm{cut}},x_\theta^{\mathrm{cut}}) \over \Gamma^{\rm{total}}_{\tilde{\tau}_R\,\to\, i\,X} } \ , \quad i = {\widetilde a}, {\widetilde{G}} \ . \label{Eq:BranchingRatio} \end{equation} In Fig.~\ref{Fig:BranchingRatio} \begin{figure} \hskip -0.5cm \epsfig{file=BR_with_cuts_new_ite.eps,width=14cm} \caption{The branching ratio of the integrated rate of the three-body decay ${\widetilde \tau}\to\tau+\gamma+{\widetilde a}/{\widetilde{G}}$ with cuts as a function of $x_{\theta}^{\mathrm{cut}}$ for $x_{\gamma}^{\rm cut}=0.1$ (left) and as a function of $x_{\gamma}^{\mathrm{cut}}$ for $x_{\theta}^{\mathrm{cut}}=0.1$ (right). 
The solid and dashed lines show the results for the gravitino LSP and the axino LSP, respectively, as obtained with $m_{\widetilde \tau} = 100\,\mbox{GeV}$, $m_{\widetilde B} = 110\,\mbox{GeV}$, $f_a = 10^{11}\,\mbox{GeV}$, $\xi^2 C_{\rm aYY}^2=1$, $m_{{\widetilde a}}^2/m_{{\widetilde \tau}}^2 \ll 1$, and $m_{\widetilde{G}} = 10\,\mbox{MeV}$.} \label{Fig:BranchingRatio} \end{figure} this quantity is shown for the gravitino LSP (solid line) and the axino LSP (dashed line) for $m_{\widetilde \tau} = 100\,\mbox{GeV}$, $m_{\widetilde B} = 110\,\mbox{GeV}$, $f_a = 10^{11}\,\mbox{GeV}$, $\xi^2 C_{\rm aYY}^2=1$, $m_{{\widetilde a}}^2/m_{{\widetilde \tau}}^2 \ll 1$, and $m_{\widetilde{G}} = 10\,\mbox{MeV}$.\footnote{The results shown in Fig.~\ref{Fig:BranchingRatio} are basically independent of the Peccei--Quinn scale $f_a$ and the gravitino mass $m_{\widetilde{G}}$ provided $m_{\widetilde{G}}/m_{\widetilde \tau}\buildrel<\over{_\sim} 0.1$. For larger values of the gravitino mass, the stau NLSP lifetime being of $\Order(\mbox{years})$ points already to the gravitino LSP scenario as discussed above.} In the left (right) part of the figure we fix $x_{\gamma}^{\mathrm{cut}}=0.1$ ($x_{\theta}^{\mathrm{cut}}=0.1$) and vary $x_{\theta}^{\mathrm{cut}}$ ($x_{\gamma}^{\mathrm{cut}}$). The dependence of the branching ratio~(\ref{Eq:BranchingRatio}) on the cut parameters in the axino LSP case differs qualitatively from the one in the gravitino LSP case. Moreover, there is a significant excess of $BR({\widetilde \tau}_{\mathrm R}\to\tau\,\gamma\, {\widetilde a}\,; x_{\gamma}^{\mathrm{cut}},x_{\theta}^{\mathrm{cut}})$ over $BR({\widetilde \tau}_{\mathrm R}\to\tau\,\gamma\, {\widetilde{G}}\,; x_{\gamma}^{\mathrm{cut}},x_{\theta}^{\mathrm{cut}})$ over large ranges in the cut parameters. For example, if $10^4$~stau NLSP decays can be analysed and the cuts are set to $x_\gamma^{\mathrm{cut}}=x_{\theta}^{\mathrm{cut}}=0.1$, we expect about 165$\pm$13 (stat.) 
${\widetilde \tau}_{\mathrm R}\to\tau\,\gamma\,{\widetilde a}$ events for the axino LSP and about 100$\pm$10 (stat.) ${\widetilde \tau}_{\mathrm R}\to\tau\,\gamma\,{\widetilde{G}}$ events for the gravitino LSP. Thus, the measurement of the branching ratio~(\ref{Eq:BranchingRatio}) would allow a distinction to be made between the axino LSP and the gravitino LSP scenarios. For a smaller number of analysed stau NLSP decays, this distinction becomes more difficult. In addition to the statistical errors, details of the detectors and of the additional massive material needed to stop the staus and to analyse their decays will be important for judging the feasibility of the distinction based on the branching ratios. We postpone this study to future work. \subsection{Differential Distributions in the Three-Body Decays} \label{Sec:DifferentialDistributions} Finally, we consider the differential distributions of the visible decay products in the three-body decays ${\widetilde \tau}\to\tau+\gamma+{\widetilde a}/{\widetilde{G}}$ in terms of the quantity \begin{equation} {1 \over \Gamma({\widetilde \tau}_{\mathrm R}\to\tau\,\gamma\, i\,; x_{\gamma}^{\mathrm{cut}},x_{\theta}^{\mathrm{cut}})} \,\, {d^2\Gamma({\widetilde \tau}_{\mathrm R}\to\tau\,\gamma\, i) \over d x_{\gamma}d \cos\theta} \ , \quad i = {\widetilde a}, {\widetilde{G}} \ , \label{Eq:Fingerprint} \end{equation} which is independent of the two-body decay, the total NLSP decay rate, and the Peccei--Quinn/Planck scale. 
In Fig.~\ref{Fig:Fingerprint}, \begin{figure} \hskip -0.cm \epsfig{file=DiffDistAxinoNew_ite.eps,width=8cm} \hfill \epsfig{file=Diff_Dist_Gravitino_new_ite.eps,width=8cm} \caption{ The normalized differential distributions of the visible decay products in the decays ${\widetilde \tau}\to\tau+\gamma+{\widetilde a}/{\widetilde{G}}$ for the axino LSP scenario (left) and the gravitino LSP scenario (right) for $m_{\widetilde \tau} = 100\,\mbox{GeV}$, $m_{\widetilde B} = 110\,\mbox{GeV}$, $m_{{\widetilde a}}^2/m_{{\widetilde \tau}}^2 \ll 1$, and $m_{\widetilde{G}} = 10\,\mbox{MeV}$. The cut parameters are set to $x_\gamma^{\mathrm{cut}} = x_\theta^{\mathrm{cut}}=0.1$. The contour lines represent the values 0.2, 0.4, 0.6, 0.8, and 1.0, where the darker shading implies a higher number of events.} \label{Fig:Fingerprint} \end{figure} the normalized differential distributions~(\ref{Eq:Fingerprint}) with $x_\gamma^{\mathrm{cut}} = x_\theta^{\mathrm{cut}}=0.1$ are shown for the axino LSP scenario (left) and the gravitino LSP scenario (right) for $m_{\widetilde \tau} = 100\,\mbox{GeV}$, $m_{\widetilde B} = 110\,\mbox{GeV}$, $m_{{\widetilde a}}^2/m_{{\widetilde \tau}}^2 \ll 1$, and $m_{\widetilde{G}} = 10\,\mbox{MeV}$.\footnote{A similar comparison between the gravitino and a hypothetical spin-1/2 fermion with extremely weak Yukawa couplings was performed in Refs.~\cite{Buchmuller:2004rq}. Note that our result for the axino shown in Fig.~\ref{Fig:Fingerprint} differs also from the one for the hypothetical spin-1/2 fermion due to different couplings.} In the case of the gravitino LSP, the events are peaked only in the region where the photons are soft and the photon and the tau are emitted with a small opening angle ($\theta\simeq 0$). In contrast, in the axino LSP scenario, the events are also peaked in the region where the photon energy is large and the photon and the tau are emitted back-to-back ($\theta \simeq \pi$). 
Thus, if the observed number of events peaks in both regions, there is strong evidence for the axino LSP and against the gravitino LSP. To be specific, with $10^4$ analysed stau NLSP decays, we expect about 165$\pm$13 (stat.) events for the axino LSP and about 100$\pm$10 (stat.) events for the gravitino LSP, which will be distributed over the corresponding ($x_\gamma$,\,$\cos\theta$)-planes shown in Fig.~\ref{Fig:Fingerprint}. In particular, in the region of $x_{\gamma}\gtrsim 0.8$ and $\cos\theta \lesssim -0.3$, we expect about 28\% of the 165$\pm$13 (stat.) events in the axino LSP case and about 1\% of the 100$\pm$10 (stat.) events in the gravitino LSP case. These numbers illustrate that $\Order(10^4)$ analysed stau NLSP decays could be sufficient for the distinction based on the differential distributions. To establish the feasibility of this distinction, a dedicated study taking into account the details of the detectors and the additional massive material will be crucial, which we leave for future studies. Some comments are in order. The differences between the two scenarios shown in Figs.~\ref{Fig:BranchingRatio} and~\ref{Fig:Fingerprint} become smaller for larger values of $m_{\widetilde B} / m_{\widetilde \tau}$. This ratio, however, remains close to unity for the stau NLSP in unified models. Furthermore, if $m_{\widetilde{G}} < m_{{\widetilde a}} < m_\mathrm{LOSP}$~---~mentioned as case~(iv) in the Introduction~---~and $\Gamma({\widetilde \tau} \to {\widetilde a}\,X) \gg \Gamma({\widetilde \tau} \to {\widetilde{G}}\,X)$, one would still find the distribution shown in the left panel of Fig.~\ref{Fig:Fingerprint}. The axino would then eventually decay into the gravitino LSP and the axion.
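The quoted uncertainties are consistent with purely Poissonian ($\sqrt{N}$) statistical errors on the expected event counts; a quick numerical cross-check (not part of the original analysis, event numbers taken from the text above):

```python
import math

# Poisson (sqrt N) statistical errors on the expected event counts
# for 10^4 analysed stau NLSP decays, as quoted in the text.
n_axino, n_gravitino = 165, 100
assert round(math.sqrt(n_axino)) == 13       # 165 +- 13 (stat.)
assert round(math.sqrt(n_gravitino)) == 10   # 100 +- 10 (stat.)

# Expected events in the hard-photon, back-to-back region
# (x_gamma >~ 0.8, cos(theta) <~ -0.3):
assert round(0.28 * n_axino) == 46      # axino LSP: ~28% of the events
assert round(0.01 * n_gravitino) == 1   # gravitino LSP: ~1% of the events
```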
Conversely, the distribution shown in the right panel of Fig.~\ref{Fig:Fingerprint} would be obtained if $m_{{\widetilde a}} < m_{\widetilde{G}} < m_\mathrm{LOSP}$~---~mentioned as case~(iii) in the Introduction~---~and $\Gamma({\widetilde \tau} \to {\widetilde a}\,X) \ll \Gamma({\widetilde \tau} \to {\widetilde{G}}\,X)$. Then it would be the gravitino that would eventually decay into the axino LSP and the axion. Barring these caveats, the signatures shown in Figs.~\ref{Fig:BranchingRatio} and~\ref{Fig:Fingerprint} will provide a clear distinction between the axino LSP and the gravitino LSP scenarios. \section{Conclusion} \label{Sec:Conclusion} Assuming that a charged slepton is the NLSP, we have discussed signatures of both the gravitino LSP scenario and the axino LSP scenario in the framework of hadronic, or KSVZ, axion models~\cite{Kim:1979if+X}. These signatures can be observed at future colliders if the planned detectors are equipped with 1--10~kt of additional material to stop and collect charged NLSPs~\cite{Hamaguchi:2004df,Feng:2004yi}. With calorimetric and tracking performance, this additional material will serve simultaneously as a real-time detector, allowing an analysis of the decays of the trapped NLSPs with high efficiency~\cite{Hamaguchi:2004df}. In the scenario in which the axino is the LSP, we have shown that the NLSP lifetime can be used to estimate the Peccei--Quinn scale $f_a$. Indeed, if the axino is the LSP, the NLSP decays provide us with a new way to probe the Peccei--Quinn sector. This method is complementary to the existing and planned axion search experiments. The decays of the NLSP into the axino LSP will also allow us to determine the axino mass kinematically if it is not much smaller than the mass of the NLSP. The determination of both the Peccei--Quinn scale $f_a$ and the axino mass $m_{\widetilde a}$ will be crucial for insights into the cosmological relevance of the axino LSP. 
Once $f_a$ and $m_{\widetilde a}$ are known, we will be able to decide if axinos are present as cold dark matter in our Universe. In the gravitino LSP scenario, the measurement of the stau NLSP lifetime can be used to determine the gravitino mass $m_{\widetilde{G}}$ once the mass of the NLSP is known. This will be crucial for insights into the SUSY breaking mechanism. Moreover, if the gravitino mass can be determined independently from the kinematics and if the NLSP mass is known, the NLSP lifetime provides a microscopic measurement of the Planck scale~\cite{Buchmuller:2004rq}. Indeed, if the gravitino is the LSP, the lifetime of the NLSP depends strongly on the Planck scale and the masses of the NLSP and the gravitino. We have addressed the question of how to distinguish between the axino LSP and the gravitino LSP scenarios at colliders. If the mass of the LSP cannot be measured and if the NLSP lifetime is within the range $\Order(0.01~{\mbox{sec}})$--$\Order(10~{\mbox{h}})$, we have found that the NLSP lifetime alone will not allow us to distinguish clearly between the axino LSP and the gravitino LSP scenarios. The situation is considerably improved when one considers the three-body decay of a charged slepton NLSP into the associated charged lepton, a photon, and the LSP. We have found qualitative and quantitative differences between the branching ratios of the integrated three-body decay rate with cuts on the photon energy and the angle between the lepton and photon directions. In addition, the differential distributions of the decay products in the three-body decays provide characteristic fingerprints. For a clear distinction between the axino LSP and the gravitino LSP scenarios based on the three-body decay events, at least $\Order(10^4)$ analysed stau NLSP decays are needed.
If the mass of the stau NLSP is not significantly larger than 100~GeV, this number could be obtained at both the LHC and the ILC with 1--10~kt of massive additional material around the main detectors. \section*{Acknowledgements} We thank W.~Buchm\"uller, G.~Colangelo, M.~Maniatis, M.~Nojiri, D.~Rainwater, M.~Ratz, R.~Ruiz de Austri, S.~Schilling, Y.Y.Y.~Wong, and D.~Wyler for valuable discussions. We gratefully acknowledge financial support from the European Network for Theoretical Astroparticle Physics (ENTApP), member of ILIAS, EC contract number RII-CT-2004-506222. This work was completed during an ENTApP-sponsored Visitor's Program on Dark Matter at CERN, 17~January -- 4~February 2005. The research of A.B.\ was supported by a Heisenberg grant of the Deutsche Forschungsgemeinschaft.
\section{Introduction} \label{secn:intro} Gauge theories with extended supersymmetries show remarkable behaviours. For example, the maximally supersymmetric Yang-Mills theories in $d=4$, ${\mathcal{N}}=4$ SYM theories for short, are conjectured to enjoy S-duality invariance. S-duality is a strong/weak-coupling relation that exchanges electrically charged states with non-perturbative magnetically charged states \cite{Montonen:1977sn}; over the years many tests of this conjecture have been successfully carried out (see for instance \cite{Vafa:1994tf,Girardello:1995gf}). For theories with simply-laced gauge groups, S-duality maps the gauge group to itself, but when the gauge group is not simply laced, the gauge group of the S-dual theory is the GNO dual group \cite{Goddard:1976qe}. Also ${\mathcal{N}}=2$ SYM theories are very interesting: even if they are less constrained than the ${\mathcal{N}}=4$ theories, it is still possible to study several of their properties in an exact way. Indeed their perturbative contribution is exhausted at the one-loop level and their non-perturbative behaviour is by now well understood, on the one hand, via the Seiberg-Witten \cite{Seiberg:1994rs,Seiberg:1994aj} description of their low energy effective theory and, on the other hand, via the direct computation of instanton effects by means of localization techniques \cite{Nekrasov:2002qd}\nocite{Flume:2002az,Nekrasov:2003rj,Bruzzo:2002xf}\,-\,\cite{Fucito:2004ry}. A noticeable exception in this scenario is given by theories with exceptional gauge groups for which an ADHM construction of the instanton moduli space is still missing% \footnote{See for example \cite{Gaiotto:2012uq}\nocite{Keller:2012da,Benvenuti:2010pq,Keller:2011ek,Hanany:2012dm}\,-\,\cite{Cremonesi:2014xha} for recent progress on the description of instanton moduli spaces in theories with exceptional gauge groups}.
Among the ${\mathcal{N}}=2$ models much attention has been devoted, in recent years, to superconformal theories and to their mass deformations, which sit at the crossroad of many approaches to the non-perturbative description of quantum field theories and of their duality structures (see for example the collective review \cite{Teschner:2014oja} and the references therein). This paper deals with the ${\mathcal{N}}=2^\star$ SYM theories with simply laced gauge group $G$ whose corresponding Lie algebra will be denoted by $\mathfrak{g}$. Besides the ${\mathcal{N}}=2$ gauge vector multiplet, these theories contain an adjoint hypermultiplet of mass $m$ and represent a mass deformation of the ${\mathcal{N}}=4$ SYM theory. In an appropriate large-$m$ limit, the hypermultiplet decouples and the pure ${\mathcal{N}}=2$ SYM theory is retrieved. The ${\mathcal{N}}=2^\star$ theory inherits from the ${\mathcal{N}}=4$ theory an interesting action of S-duality. In particular, S-duality acts non-trivially on the prepotential function $F$ that encodes the low-energy effective dynamics on the Coulomb branch of moduli space. Upon expanding the prepotential in powers of the mass $m$, this action can be exploited to efficiently determine the non-perturbative expression of the prepotential. This is achieved by showing that the coefficients $f_n$ of the mass expansion of $F$ are (quasi)-modular functions of the gauge coupling $\tau$ connected to each other by a recursion relation. Such a recursion relation, which encodes the ``modular anomaly'' of the prepotential, was first pointed out for U$(N)$ theories in \cite{Minahan:1997if} where it was derived from the Seiberg-Witten curve. The modular anomaly is related to the holomorphic anomaly of topological string amplitudes through local Calabi-Yau embeddings of the SW curves \cite{Bershadsky:1993ta}\nocite{Witten:1993ed,Aganagic:2006wq}\,-\,\cite{Gunaydin:2006bz}.
It has also been studied in the presence of an $\Omega$-background \cite{Huang:2006si}-\nocite{Grimm:2007tm,Huang:2009md,Huang:2010kf,Huang:2011qx,Galakhov:2012gw,Billo:2013fi,Billo:2013jba,Nemkov:2013qma,Billo:2014bja}\cite{Lambert:2014fma}, in the framework of the AGT correspondence \cite{Marshakov:2009kj}-\nocite{KashaniPoor:2012wb,Kashani-Poor:2013oza}\cite{Kashani-Poor:2014mua} and in ${\mathcal{N}}=2$ conformal SQCD models with fundamental matter \cite{Billo:2013fi,Billo:2013jba,Ashok:2015cba}. Here we review and streamline the derivation of the modular anomaly equation and the associated recursion relation directly from the S-duality requirement and for a generic simply-laced gauge group $G$ (the non-simply-laced groups will be discussed in a companion paper \cite{Billo}). The modular anomaly equation allows the prepotential to be expressed in terms of modular forms of $\tau$ and of functions of the periods $a$ which are written in terms of the root system of $\mathfrak{g}$, allowing for a unified treatment of all Lie algebras. In this way we can compute the prepotential also for the exceptional Lie algebras $E_6$, $E_7$ and $E_8$ for which an ADHM construction of the instanton moduli space is not available. This is the plan of the paper: in Section~\ref{secn:sduality} we discuss the behaviour of the $\mathcal{N}=2^\star$ theories under S-duality and derive from it the modular anomaly equation satisfied by the prepotential. In Section~\ref{secn:recursion} we exploit the recursion relation equivalent to the modular anomaly equation to compute exactly, {\it{i.e.}} to all orders in the instanton expansion, the first few coefficients in the mass expansion of the prepotential. The results are then generalised in Section~\ref{sec:recepsi} to account for a non-trivial $\Omega$-background. In Section~\ref{snek} we describe the direct microscopic computation of the instanton corrections for the algebras of type $A_r$ and $D_r$ using equivariant localization methods.
The purpose of this section is to clarify some subtle points of the multi-instanton calculus and to successfully check these microscopic results against the instanton expansion of the solutions of the modular anomaly equation derived in the previous section. Our conclusions are presented in Section~\ref{sec:concl}, while the more technical material is collected in various appendices. \section{S-duality and modular anomaly} \label{secn:sduality} In this section we briefly review the structure of ${\mathcal{N}}=2^\star$ theories with a gauge group $G$ of ADE type and discuss the constraint that S-duality imposes on their prepotential. \subsection{The $\mathrm{SL}(2,\mathbb{Z})$-duality symmetry} The field content of these theories includes an ${\mathcal{N}}=2$ vector multiplet and a massive hypermultiplet, both transforming in the adjoint representation of $G$. The ${\mathcal{N}}=2$ gauge multiplet contains an adjoint complex scalar $\varphi$, whose vacuum expectation value can always be aligned along the Cartan directions and written in the diagonal form \begin{equation} \label{phivev} \vev{\varphi} =a= \mathrm{diag}\, (a_1,a_2,\ldots,a_r)~, \end{equation} with $r$ denoting the rank of the gauge Lie algebra $\mathfrak{g}$. The parameters $\{ a_u \}$ span the Coulomb branch of the classical moduli space of the gauge theory. The low energy effective action on this branch is specified by a holomorphic function: the prepotential $F(a)$. Alternatively the gauge theory can be described in terms of the dual variables \begin{equation} \label{secada0} a^{\mathrm{D}}_u = \frac{1}{2\pi\mathrm{i}}\frac{\partial F}{\partial a_u}~. \end{equation} In the following we will often write $\partial_u$ for $\ft{\partial}{\partial a_u}$. We will also use a simplified vector notation, writing, for instance, $a$ for the vector, $\ft{\partial}{\partial a}$ for the gradient vector, and so on.
The effective coupling matrix, which also encodes the metric on the moduli space, is \begin{equation} \label{tauuvdef} \tau_{uv} = \partial_u a^{\mathrm{D}}_{v} = \frac{1}{2\pi\mathrm{i}}\partial_u\partial_v F~. \end{equation} The classical part of the prepotential reads simply \begin{equation} \label{iprepclass} F^{\mathrm{cl}}= \pi\mathrm{i}\tau a^2~, \end{equation} where $\tau$ is the complexified gauge coupling \begin{equation} \label{tau} \tau=\frac{\theta}{2\pi}+\mathrm{i}\frac{4\pi}{g^2}~. \end{equation} At this level, the dual periods and the effective coupling matrix are \begin{equation} \label{adclass} a^{\mathrm{D}} = \tau a~,~~~ \tau_{uv} = \tau\,\delta_{uv}~. \end{equation} In a Seiberg-Witten description of the theory, $a$ and $a^{\mathrm{D}}$ describe the periods of the Seiberg-Witten differential along the $2r$ cycles of the Riemann surface defined by the Seiberg-Witten curve. The periods and dual periods can be assembled in a $2r$-dimensional vector $\big(a^{\mathrm{D}},a\big)$ that transforms as a vector of the modular group $\mathrm{Sp}(4r,\mathbb{Z})$ of the Riemann surface. The two sets of variables are suitable for describing the regimes of weak and strong coupling ($g$ small and $g$ large respectively) of the gauge theory. These two regimes are mapped to each other by S-duality which, as an element of $\mathrm{Sp}(4r,\mathbb{Z})$, exchanges periods and dual periods and acts projectively on $\tau$ by inverting it, namely \begin{equation} \label{Saad} S(a) = a^{\mathrm{D}}~,~~~S(a^{\mathrm{D}}) = - a~,~~~S(\tau) = -\frac{1}{\tau}~. \end{equation} On the other hand, the T-duality acts as \begin{equation} \label{Taad} T(a) = a~,~~~T(a^{\mathrm{D}}) = a^{\mathrm{D}}+a~,~~~T(\tau) = \tau+1 ~. \end{equation} S and T generate the SL$(2,{\mathbb Z})$ modular group.
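As a small sketch (using the standard $2\times 2$ integer-matrix representation of $S$ and $T$, which is an assumption not spelled out above), one can check that the transformations in (\ref{Saad}) and (\ref{Taad}) satisfy the defining relations $S^2=(ST)^3=-\mathbb{1}$ of $\mathrm{SL}(2,\mathbb{Z})$:

```python
# S and T of (Saad) and (Taad) as 2x2 integer matrices acting on tau
# by Mobius transformations; they obey S^2 = (S T)^3 = -1.
def mm(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

S = [[0, -1], [1, 0]]   # tau -> -1/tau
T = [[1, 1], [0, 1]]    # tau -> tau + 1
minus_one = [[-1, 0], [0, -1]]

ST = mm(S, T)
assert mm(S, S) == minus_one
assert mm(mm(ST, ST), ST) == minus_one
```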
On the prepotential, the T-duality action is \begin{equation} \label{Tduality} T[F(a)]=F(a)+\pi \mathrm{i} a^2~, \end{equation} as one can see from the fact that only the classical part $F^{\mathrm{cl}}$ given in (\ref{iprepclass}) transforms non-trivially under $\tau\to \tau+1$. Indeed, ${\mathcal{N}}=2$ supersymmetry allows only for one-loop ($\tau$-independent) and instanton corrections (weighted by $\mathrm{e}^{2\pi \mathrm{i} k \tau}$ with $k$ integer) which are $T$-invariant. The S-duality action is instead much less trivial since it maps the description of the theory in the variables $a$ to its dual description in terms of $a^{\mathrm{D}}$. Therefore $S$ should map the prepotential $F(a)$ to its Legendre transform: \begin{equation} \label{LTFcl} S[ F(a) ] ={\mathcal{L}}[F](a^{\mathrm{D}})~, \end{equation} where \begin{equation} \label{LTF} {\mathcal{L}}[F](a^{\mathrm{D}}) = F(a) - 2\pi\mathrm{i} \,a\cdot a^{\mathrm{D}} = F(a) - a\cdot\frac{\partial F}{\partial a}~. \end{equation} The classical part of the prepotential immediately satisfies (\ref{LTFcl}); in fact \begin{equation} \label{SF} S[F^{\mathrm{cl}}] = -\frac{\pi\mathrm{i}}{\tau}\,\big(a^{\mathrm{D}}\big)^2 ={\mathcal{L}}[F^{\mathrm{cl}}](a^{\mathrm{D}})~. \end{equation} The $S$-duality symmetry requirement (\ref{LTFcl}) represents instead a highly non-trivial constraint on the \emph{quantum} prepotential. As we will see, this constraint allows us to determine the exact form of the prepotential, order by order in the hypermultiplet mass, starting from very few microscopic data. As mentioned in the introduction, this is known to happen for U$(N)$ theories, where the prepotential satisfies a ``modular anomaly'' equation that has been discussed extensively in the literature \cite{Minahan:1997if}-\cite{Billo:2014bja}. In the following we derive the modular anomaly equation directly from the S-duality relation (\ref{LTFcl}) and show that it holds for gauge theories with all gauge groups of ADE type.
\subsection{The small mass expansion of the prepotential} \label{subsec:n2} When the mass $m$ of the adjoint hypermultiplet vanishes, there are no quantum corrections to the prepotential since the supersymmetry is enhanced to ${\mathcal{N}}=4$. When the mass is turned on, the supersymmetry is only ${\mathcal{N}}=2$ and the prepotential $F$ is corrected. We write \begin{equation} \label{iprep1} F = F^{\mathrm{cl}} + f = \pi\mathrm{i}\tau a^2 + f~, \end{equation} where $f$ is the quantum part of the prepotential. The dual periods and effective coupling $\tau_{uv}$ also get quantum corrected and become non-trivial functions of $\tau$. As already mentioned, ${\mathcal{N}}=2$ supersymmetry allows perturbative corrections only at one-loop and non-perturbative corrections at all instanton numbers. Working in a mass expansion, we write the quantum prepotential as \begin{equation} f = f^{\mathrm{1-loop}}+f^{\mathrm{inst}}=\sum_{n=1}^\infty f_n \label{iprepfexp} \end{equation} where \begin{equation} \label{fn1linst} f_n = f^{\mathrm{1-loop}}_n+f^{\mathrm{inst}}_n \end{equation} is proportional to $m^{2n}$. The one-loop contribution to the prepotential has the form (see for instance \cite{D'Hoker:1999ft}) \begin{equation} \label{F1loop} \begin{aligned} f^{\mathrm{1-loop}}& = \frac{1}{4}\sum_{\alpha\in\Psi} \left[ -(\alpha\cdot a)^2 \log\left(\frac{\alpha\cdot a}{\Lambda}\right)^2 + (\alpha\cdot a+ m)^2 \log\left(\frac{\alpha\cdot a+m}{\Lambda}\right)^2 \right] \end{aligned} \end{equation} where $\Lambda$ is an arbitrary scale and $\alpha$ is an element of the root system $\Psi$ of the algebra $\mathfrak{g}$; $\alpha$ is an $r$-dimensional vector of components $\alpha_u$. The scalar product $\alpha\cdot a$ represents the mass acquired by the complex $W$-boson associated to the root $\alpha$ via the (super)-Higgs mechanism. Also the mass of the adjoint scalar along the root $\alpha$ is shifted with respect to its original value $m$ by the same amount. 
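The coefficients of the small-mass expansion of (\ref{F1loop}) can be cross-checked symbolically; a sketch with sympy for the contribution of a single root, writing $x=\alpha\cdot a$ (the symbol names are ours):

```python
import sympy as sp

# Per-root contribution to (F1loop):
#   (1/4)[(x+m)^2 log((x+m)^2/L^2) - x^2 log(x^2/L^2)];
# summed over the root system, its m^4 and m^6 coefficients should
# reproduce the expansion coefficients -1/24 C_2 and -1/120 C_4.
m, x, L = sp.symbols('m x Lambda', positive=True)
phi = lambda y: y**2 * sp.log(y**2 / L**2)
single = sp.Rational(1, 4) * (phi(x + m) - phi(x))
ser = sp.series(single, m, 0, 7).removeO().expand()
assert sp.simplify(ser.coeff(m, 4) + sp.Rational(1, 24) / x**2) == 0
assert sp.simplify(ser.coeff(m, 6) + sp.Rational(1, 120) / x**4) == 0
```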
Expanding (\ref{F1loop}) in powers of $m$, all odd powers cancel upon summing over positive and negative roots, and we find \begin{equation} \label{fn1loop} \begin{aligned} f^{\mathrm{1-loop}} & = \frac{m^2}{4} \sum_{\alpha\in\Psi} \log\left(\frac{\alpha\cdot a}{\Lambda}\right)^2 - \sum_{n=2}^\infty \frac{m^{2n}}{4n(n-1)(2n-1)}\, C_{2n-2}\\ & = \frac{m^2}{4} \sum_{\alpha\in\Psi} \log\left(\frac{\alpha\cdot a}{\Lambda}\right)^2 -\frac{m^4}{24} \,C_2 -\frac{m^6}{120}\, C_4 -\frac{m^8}{336}\, C_6 - \ldots \end{aligned} \end{equation} where we defined \begin{equation} \label{defCn} C_{n} = \sum_{\alpha\in\Psi} \frac{1}{(\alpha\cdot a)^n }~. \end{equation} The non-perturbative part of the prepotential receives contributions from the various instanton sectors, so $f^{\mathrm{inst}}$ is a series in the instanton weight \begin{equation} \label{defq} q = \mathrm{e}^{-\frac{8\pi^2}{g^2}+\mathrm{i}\,\theta} = \mathrm{e}^{2\pi\mathrm{i}\tau}~. \end{equation} The term of order $q^k$ can be evaluated by integrating over the moduli space of $k$ instantons by means of localization techniques when the gauge group $G$ is one of the classical matrix groups. This excludes the exceptional groups $E_{6,7,8}$. We will review this computation in Section~\ref{snek}. This procedure can in principle be carried out up to arbitrary order $k$; in practice, however, it is computationally rather demanding. It is important to observe that \begin{equation} \label{Finst1} f^{\rm inst}_1 =0 \end{equation} since instanton contributions start at order $m^4$. This can be seen by noticing that every mass insertion soaks up two of the eight instanton fermionic zero modes of the ${\mathcal{N}}=4$ theory, so we need at least four powers of $m$ to get a non-trivial result. \subsection{The modular anomaly equation} \label{subsec:Sprep} We now investigate the consequences of the S-duality relation (\ref{LTFcl}) on the quantum prepotential $f$.
First we observe that the prepotential has mass dimension 2, so on dimensional grounds all $f_n$ with $n\geq 2$ must be homogeneous functions of degree $2-2n$ in $a$: \begin{equation} f_n(\lambda a)= \lambda^{2-2n}\,f_n(a)~. \label{homogeneity} \end{equation} Moreover, they are non-trivial functions of $\tau$ expressed as Fourier series in $q$. We therefore use the notation $f_n(\tau,a)$ to express this fact. Let us first compute the two sides of the duality relation (\ref{LTFcl}). The Legendre transform of $F$ is \begin{equation} \label{legF} {\mathcal{L}}[F] = F - a \cdot \frac{\partial F}{\partial a} = - \pi\mathrm{i}\tau a^2 + f - a \cdot\frac{\partial f}{\partial a}~. \end{equation} On the other hand, using (\ref{Saad}), the S-transform of $F$ is \begin{equation} \label{Sonfexplicit} S[F] = - \frac{\pi\mathrm{i}}{\tau} \,\big(a^{\mathrm{D}}\big)^2+ f\big(-\ft{1}{\tau},a^{\mathrm{D}}\big)~, \end{equation} where, according to (\ref{secada0}), \begin{equation} \label{secada} a^{\mathrm{D}} = \frac{1}{2\pi\mathrm{i}}\,\frac{\partial F}{\partial a}= \tau \left( a +\frac{1}{2\pi\mathrm{i}\tau}\frac{\partial f}{\partial a}\right)~. \end{equation} Plugging (\ref{secada}) into (\ref{legF}), the S-duality relation $S[F] ={\mathcal{L}}[F]$ can be written in the form \begin{equation} \label{Sfeq} f \big( -\ft{1}{\tau}, a^{\mathrm{D}} \big) = f ( \tau, a) +\frac{1}{4\pi\mathrm{i}\tau} \left( \frac{\partial f (\tau,a)}{\partial a} \right)^2~. \end{equation} From (\ref{fn1loop}) and (\ref{Finst1}) we notice that $f_1$ is independent of $\tau$ but dependent on $\Lambda$, so equation (\ref{Sfeq}) at order $m^2$ implies \begin{equation} \label{f1is} f_1( \tau a, S(\Lambda) ) = f_1 ( a,\Lambda)~, \end{equation} where we have allowed an action of $S$-duality on the scale $\Lambda$.
Using the explicit form of $f_1$, that is \begin{equation} \label{f1is0} f_1(a,\Lambda) = f_1^{\rm 1-loop}(a,\Lambda) = \frac{m^2}{4} \sum_{\alpha\in\Psi} \log\left(\frac{\alpha\cdot a}{\Lambda}\right)^2~, \end{equation} we conclude that $S(\Lambda)=\tau \Lambda$. At higher orders in the mass expansion, the differential equation (\ref{Sfeq}) can be solved by taking $f_n$, for $n\geq 2$, to be an $\mathrm{SL}(2,{\mathbb Z})$ quasi-modular form of weight $2n-2$. A basis of quasi-modular forms is given by the set of Eisenstein series $\{ E_2, E_4, E_6 \}$. More precisely, $E_4$ and $E_6$ are true modular forms of weight $4$ and $6$ respectively, so under S-duality they transform as \begin{equation} \label{SE46} E_4\big(-\ft{1}{\tau}\big) = \tau^{4}\,E_4 (\tau) ~, ~~~ E_6\big(-\ft{1}{\tau}\big) = \tau^{6}\,E_6 (\tau) ~. \end{equation} The $E_2$ series is instead a quasi-modular form of weight 2 because under S it gets shifted: \begin{equation} \label{SE2} E_2\big(-\ft{1}{\tau}\big) = \tau^{2}\, \Big(E_2 (\tau) + \frac{6}{\pi\mathrm{i}\tau}\Big)\,\equiv\, \tau^{2}\, \big(E_2 (\tau) + \delta\big)~. \end{equation} Here we introduced the notation $\delta = \ft{6}{\pi\mathrm{i}\tau}$ to avoid clutter in the subsequent formul\ae\,. We notice that all $\delta$-dependence should cancel in the duality relation since $f$ is only a function of $q$, and that the quasi-modularity of $f_n$ is due entirely to its dependence on $E_2$. Indicating this dependence explicitly, we have (for $n\geq 2$) \begin{equation} \label{Sfn} f_n\big(-\ft{1}{\tau}, a^{\mathrm{D}},E_2(-\ft{1}{\tau})\big) =\tau^{2n-2}\, f_n\big(\tau, a^{\mathrm{D}},E_2+\delta\big)= f_n\big(\tau,\ft{a^{\mathrm{D}}}{\tau},E_2+\delta\big)~, \end{equation} where in the last step we used the homogeneity property (\ref{homogeneity}) of $f_n$. On the other hand we have \begin{equation} \label{Sfn1} f_1(a^{\mathrm{D}}, \tau \Lambda) = f_1\big(\ft{a^{\mathrm{D}}}{\tau}, \Lambda \big)~.
\end{equation} Plugging (\ref{Sfn}) and (\ref{Sfn1}) into the left hand side of (\ref{Sfeq}), we find \begin{eqnarray} f\big(-\ft{1}{\tau}, a^{\mathrm{D}},E_2(-\ft{1}{\tau}),\tau\Lambda\big) &=&f\big(\tau, \ft{a^{\mathrm{D}}}{\tau},E_2+\delta,\Lambda\big)\phantom{\Big|}\nonumber\\ &=&f\Big(\tau, a + \frac{\delta}{12}\frac{\partial f}{\partial a},E_2 + \delta, \Lambda\Big)\nonumber\\ &=&f (\tau,a,E_2, \Lambda)+ \delta\,\left[ \frac{\partial f}{\partial {E_2}} +\frac{1}{12} \left(\frac{\partial f}{\partial a}\right)^2 \right](\tau,a,E_2,\Lambda)\nonumber\\ && + \frac{\delta^2}{2}\,\left[\frac{\partial^2 f}{\partial E_2^2} + \frac{1}{144} \left(\frac{\partial f}{\partial a}\right)^2 \frac{\partial^2 f}{\partial a^2} + \frac{1}{6} \frac{\partial f}{\partial a}\cdot \frac{\partial^2 f}{\partial a \partial E_2}\right](\tau,a,E_2,\Lambda)\nonumber\\ &&+ \cdots{\phantom{\Big|}} \label{Sfn2} \end{eqnarray} where the dots stand for higher order terms in $\delta$. Comparing (\ref{Sfn2}) with the right hand side of (\ref{Sfeq}), one finds that at order $\delta$ the following modular anomaly equation has to be satisfied: \begin{equation} \label{recdiff} \frac{\partial f}{\partial E_2} +\frac{1}{24} \left(\frac{\partial f}{\partial a}\right)^2=0~. \end{equation} It is straightforward to check that the conditions obtained at higher orders in $\delta$ follow from this equation. For instance, the term in $\delta^2$ in (\ref{Sfn2}) is easily shown to correspond to a further $E_2$-derivative of the modular anomaly equation (\ref{recdiff}); thus it vanishes, as requested by the comparison with the right hand side of (\ref{Sfeq}). Summarizing, the S-duality symmetry requirement (\ref{LTFcl}) is satisfied if the coefficients $f_n$ in the mass expansion of the quantum prepotential $f$ are quasi-modular forms of weight $2n-2$ and if $f$ satisfies the modular anomaly equation (\ref{recdiff}).
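As a numerical sanity check of the anomalous shift in (\ref{SE2}): at the self-dual point $\tau=\mathrm{i}$ it implies $E_2(\mathrm{i})=3/\pi$, which can be verified from the standard Fourier expansion $E_2=1-24\sum_{n\geq 1}\sigma_1(n)q^n$ (a textbook fact, not stated above):

```python
import math

# E2(i) from its q-series, with q = exp(2*pi*i*tau) = exp(-2*pi) at tau = i.
# The shift E2(-1/tau) = tau^2 (E2(tau) + 6/(pi*i*tau)) at tau = i gives
# E2(i) = -E2(i) + 6/pi, i.e. E2(i) = 3/pi.
q = math.exp(-2 * math.pi)
def sigma1(n):
    return sum(d for d in range(1, n + 1) if n % d == 0)
E2_i = 1 - 24 * sum(sigma1(n) * q**n for n in range(1, 50))
assert abs(E2_i - 3 / math.pi) < 1e-6
```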
\section{The recursion relation} \label{secn:recursion} \subsection{The prepotential} Expanding the quantum prepotential $f$ in mass powers according to (\ref{iprepfexp}), the requirement (\ref{recdiff}) turns into the relation \begin{equation} \label{rec} \frac{\partial f_n}{\partial E_2} = -\frac{1}{24} \sum_{m=1}^{n-1} \frac{\partial f_m}{\partial a}\cdot \frac{\partial f_{n-m}}{\partial a} \end{equation} which allows us to recursively determine the $f_n$'s in terms of the lower coefficients up to $E_2$-independent functions. The $E_2$-independent part can be fixed by using one-loop or lower-$k$ instanton data. Actually, to the order we will consider here, the one-loop data will be enough. Let us start by determining $f_2$ which, being a quasi-modular form of weight 2, can only be proportional to $E_2$. For $n=2$ the recursion relation (\ref{rec}) simply reads \begin{equation} \label{recn2} \frac{\partial f_2}{\partial E_2} = -\frac{1}{24} \frac{\partial f_1}{\partial a}\cdot \frac{\partial f_1}{\partial a}= -\frac{m^4}{96} \sum_{\alpha,\beta\in\Psi} \frac{\alpha\cdot\beta}{(\alpha\cdot a)(\beta\cdot a)} ~, \end{equation} where in the second step we have used the expression (\ref{f1is0}) for $f_1$. The sum over the roots $\alpha,\beta\in\Psi$ can be rewritten as \begin{equation} \label{df1df1} \sum_{\alpha,\beta\in\Psi} \frac{\alpha\cdot\beta}{(\alpha\cdot a)(\beta\cdot a)} = 4\,\sum_{\alpha\in\Psi} \frac{1}{(\alpha\cdot a)^2} + \sum_{\alpha\not= \pm \beta\in\Psi} \frac{\alpha\cdot\beta}{(\alpha\cdot a)(\beta\cdot a)}~. \end{equation} The first term corresponds to the cases $\alpha = \pm \beta$ and comes with an overall factor of 4 since for any ADE Lie algebra all roots have squared length 2: $\alpha\cdot\alpha=2$ (see Appendix~\ref{secn:roots} for details on the root system of the ADE algebras).
In the second term of (\ref{df1df1}), for any $\beta\not= \pm\alpha$ we have either $\alpha\cdot\beta=\pm 1$ or $\alpha\cdot\beta=0$, and so both $\beta$ and $-\beta$ give the same contribution. Therefore, we can limit ourselves to summing over the roots $\beta\in\Psi(\alpha)$ where \begin{equation} \label{defDeltaalpha} \Psi(\alpha) = \left\{\beta\in\Psi:~ \alpha\cdot\beta=1 \right\}~, \end{equation} and get \begin{equation} \label{defc11} \sum_{\alpha,\beta\in\Psi} \frac{\alpha\cdot\beta}{(\alpha\cdot a)(\beta\cdot a)} = 4 \sum_{\alpha\in\Psi} \frac{1}{(\alpha\cdot a)^2} + 2 \sum_{\alpha\in\Psi}\sum_{\beta\in\Psi(\alpha)} \frac{1}{(\alpha\cdot a)(\beta\cdot a)}~. \end{equation} The first term is proportional to $C_2$ as one can see from (\ref{defCn}), while the second term suggests introducing a more general sum over the root lattice, namely \begin{equation} \label{defcnms} C_{n;m_1m_2\ldots m_\ell} = \sum_{\alpha\in\Psi} \sum_{\beta_1\not=\beta_2\not=\ldots\not=\beta_\ell\in\Psi(\alpha)} \frac{1}{(\alpha\cdot a)^n(\beta_1\cdot a)^{m_1}(\beta_2\cdot a)^{m_2}\cdots (\beta_\ell\cdot a)^{m_\ell}}~. \end{equation} As we will show in the following, these sums will be useful to express all higher prepotential coefficients in a very compact way. The properties of these sums are discussed in Appendix~\ref{secn:sums} where in particular we show that $C_{1;1}$ is identically vanishing. We therefore have \begin{equation} \sum_{\alpha,\beta\in\Psi} \frac{\alpha\cdot\beta}{(\alpha\cdot a)(\beta\cdot a)} = 4\,C_2+2\,C_{1;1}= 4\,C_2~. \end{equation} Using this in (\ref{recn2}) and integrating with respect to $E_2$, we finally obtain \begin{equation} \label{f2bis} f_2 = -\frac{m^4}{24} \,E_2 \, C_2 = -\frac{m^4}{24}\, \big(1 - 24 q - 72 q^2 -96 q^3 - \cdots\big)\, C_2 ~, \end{equation} where in the second step we inserted the Fourier expansion of $E_2$. We observe that the $q^0$-term matches the $m^4$ contribution in the one-loop result in (\ref{fn1loop}).
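The identity $C_{1;1}=0$ used above can also be verified numerically; a minimal sketch for the $A_2$ root system (the choice of algebra and of the test point $a$ are ours):

```python
# Roots of A_2 (su(3)) embedded in R^3 as e_i - e_j; all satisfy alpha.alpha = 2.
roots = [(1, -1, 0), (1, 0, -1), (0, 1, -1),
         (-1, 1, 0), (-1, 0, 1), (0, -1, 1)]
dot = lambda u, v: sum(x * y for x, y in zip(u, v))
a = (0.7, -0.31, 1.9)  # generic Coulomb moduli (arbitrary test values)

# C_{1;1} = sum over alpha and over beta in Psi(alpha) = {beta : alpha.beta = 1}
C11 = sum(1.0 / (dot(al, a) * dot(be, a))
          for al in roots for be in roots if dot(al, be) == 1)
assert abs(C11) < 1e-12
```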
The higher order terms in the $q$-expansion are a prediction for the instanton corrections to $f_2$. As we will see in Section~\ref{snek}, these predictions can be tested and verified for the first few instanton numbers in various gauge groups of the A and D series using localization methods. For the exceptional groups $E_{6,7,8}$, instead, these are truly predictions since the multi-instanton calculus is not available in these cases. We now consider the next mass order. For $f_3$, from (\ref{rec}), we have \begin{equation} \label{recf31} \frac{\partial f_3}{\partial E_2} = -\frac{1}{12} \frac{\partial f_1}{\partial a}\cdot \frac{\partial f_2}{\partial a} = -\frac{m^6}{288}\, E_2 \sum_{\alpha,\beta\in\Psi} \frac{\alpha\cdot\beta}{(\alpha\cdot a)^3(\beta\cdot a)} ~, \end{equation} where we have used the explicit expressions of $f_1$ and $f_2$ given in (\ref{f1is0}) and (\ref{f2bis}) in the second step. Manipulating the root sums as before and using the identity (\ref{C31C211}), we can rewrite (\ref{recf31}) as \begin{equation} \label{recf3} \frac{\partial f_3}{\partial E_2} = -\frac{m^6}{72} E_2 \Big(C_4 + \frac{1}{4}C_{2;11}\Big) ~. \end{equation} Integrating with respect to $E_2$, we find \begin{equation} \label{f3is} f_3 = - \frac{m^6}{144} E_2^2 \Big(C_4 + \frac{1}{4}C_{2;11}\Big) + x\, E_4~, \end{equation} where we have taken into account that the ``integration constant'' must have modular weight $4$ and thus must be proportional to $E_4$. The coefficient $x$ must be chosen in such a way that in the perturbative limit, where $E_2$ and $E_4$ become 1, one recovers the $m^6$ term in the one-loop result (\ref{fn1loop}). This requires that \begin{equation} \label{cis} x = -m^6\,\Big(\frac{C_4}{720} - \frac{C_{2;11}}{576}\Big)~. \end{equation} Plugging this back into (\ref{f3is}), we finally obtain \begin{equation} \label{f3tris} f_3 = -\frac{m^6}{720}\,\big(5 E_2^2 + E_4\big)\, C_4 - \frac{m^6}{288}\,\big(E_2^2 - E_4\big)\times\frac{1}{2}\, C_{2;11}~.
\end{equation} Expanding the Eisenstein series in powers of $q$ we find \begin{equation} f_3= -\frac{m^6}{120}\,C_4+q\,\frac{m^6}{2}\,C_{2;11}+q^2m^6\,\big(-6C_4+3C_{2;11}\big) +q^3m^6\,\big(-32C_4+6C_{2;11}\big)+\cdots \label{f3exp} \end{equation} from which we can explicitly read the multi-instanton corrections. Using the recursion relation and the comparison with the perturbative expression, we have also determined the terms of order $m^8$ and $m^{10}$ in the prepotential. We now collect all our results up to $f_5$: \begin{subequations} \label{f5is} \begin{align} f_1 &= \frac{m^2}{4} \sum_{\alpha\in\Psi} \log\left(\frac{\alpha\cdot a}{\Lambda}\right)^2~, \\ f_2 & =-\frac{m^4}{24} \,E_2 \, C_2 ~, \label{f2fin}\\ f_3 &= -\frac{m^6}{720}\,\big(5 E_2^2 + E_4\big)\, C_4 - \frac{m^6}{288}\,\big(E_2^2 - E_4\big)\times\frac{1}{2}\, C_{2;11}~,\label{f3fin}\\ f_4 & = -\frac{m^8}{90720}\,\big(175 E_2^3 + 84 E_2 E_4 + 11 E_6\big)\, C_6 \notag\\ &~~~~+ \frac{m^8}{8640}\,\big(5 E_2^3 - 3 E_2 E_4 - 2 E_6\big) \Big(C_{4;2} +\frac{1}{12} C_{3;3}\Big)\notag\\ &~~~~-\frac{m^8}{1728}\,\big(E_2^3 - 3 E_2 E_4 + 2 E_6\big)\times\frac{1}{24} C_{2;1111}~,\label{f4fin}\\ f_5 & = -\frac{m^{10}}{362880}\,\big(245 E_2^4 + 196 E_2^2 E_4 + 44 E_2 E_6 + 19 E_4^2\big)\, C_8\notag\\ &~~~~ +\frac{m^{10}}{145152}\,\big(35 E_2^4 - 7 E_2^2 E_4 - 18 E_2 E_6 - 10 E_4^2\big) \Big(C_{6;2} -\frac{13}{45} C_{3;3}\Big)\notag\\ & ~~~~+ \frac{m^{10}}{82944}\,\big(E_2^2-E_4\big)^2 \Big(\frac{5}{12} C_{4;4} - 3 C_{4;22} - C_{3;32} - C_{4;31}\Big)\notag\\ & ~~~~- \frac{m^{10}}{6912}\,\big(E_2^4 - 6 E_2^2 E_4 + 8 E_6 E_2 - 3 E_4^2\big)\times \frac{1}{720} C_{2;111111}~. \end{align} \end{subequations} If we were to proceed to the next order, {\it{i.e.}} to $f_6$, after having determined all terms containing $E_2$, we would still have to fix a purely modular term of weight 12.
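As an aside, the coefficients in (\ref{f3exp}) can be reproduced exactly from the modular expression (\ref{f3fin}) by multiplying out the $q$-series $E_2=1-24\sum_n\sigma_1(n)q^n$ and $E_4=1+240\sum_n\sigma_3(n)q^n$. A minimal sketch in exact arithmetic (the series are truncated at order $q^3$):

```python
from fractions import Fraction

def sigma(k, n):
    # divisor power sum sigma_k(n)
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

N = 4  # number of q-coefficients kept
E2 = [Fraction(1)] + [Fraction(-24) * sigma(1, n) for n in range(1, N)]
E4 = [Fraction(1)] + [Fraction(240) * sigma(3, n) for n in range(1, N)]

def mul(p, q):
    # truncated product of two q-series given as coefficient lists
    r = [Fraction(0)] * N
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if i + j < N:
                r[i + j] += pi * qj
    return r

E2sq = mul(E2, E2)

# coefficient of m^6 C_4      :  -(1/720) (5 E_2^2 + E_4)
c4 = [Fraction(-1, 720) * (x + y) for x, y in zip([5 * c for c in E2sq], E4)]
# coefficient of m^6 C_{2;11} :  -(1/576) (E_2^2 - E_4)
c211 = [Fraction(-1, 576) * (x - y) for x, y in zip(E2sq, E4)]

assert c4 == [Fraction(-1, 120), 0, -6, -32]
assert c211 == [0, Fraction(1, 2), 3, 6]
```

The assertions reproduce precisely the four orders displayed in (\ref{f3exp}).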
Since there are two independent modular forms of weight 12, namely $E_6^2$ and $E_4^3$, we could no longer fix the coefficient of these two forms by comparison to the one-loop result only; we would also need to know the one-instanton result, if available. Having done this, however, all the subsequent instanton corrections would be predicted. The covariance of the prepotential under S-duality, implemented through the recursion relation, is a symmetry requirement: it is not sufficient by itself to determine the dynamics, and, in particular, it does not eliminate the need to evaluate explicitly the non-perturbative corrections. Still, it minimizes the number of such computations. \subsection{1-instanton terms} \label{subsec:oneinst} Let us consider the 1-instanton terms in the prepotential. Substituting the $q$-expansion of the Eisenstein series into (\ref{f5is}) one can see that the only terms which contribute at order $q$ are those proportional to $C_{2;11\cdots}$, whose coefficients follow an obvious pattern: \begin{equation} \label{f1instup3} \begin{aligned} F_{k=1} & = m^4 C_2 + \frac{m^6}{2!} C_{2;11} + \frac{m^8}{4!} C_{2;1111} + \frac{m^{10}}{6!}C_{2;111111} + \cdots \\ & = \sum_{\alpha\in\Psi} \frac{m^4}{(\alpha\cdot a)^2}\, \sum_{\ell\geq 0} \frac{m^{2\ell}}{\ell!} \sum_{\beta_1\not=\beta_2\not=\cdots\not=\beta_\ell\in\Psi(\alpha)} \frac{1}{(\beta_1\cdot a)\cdots (\beta_\ell\cdot a)} \end{aligned} \end{equation} where in the second line we have used the explicit definition of the sums $C_{2;11\cdots}$. This pattern extends to all orders in $m$, and we can rewrite the above expression as% \footnote{Note that the terms with odd powers of $m$ that we obtain expanding the product in the right hand side vanish identically, as discussed in Appendix~\ref{secn:sums}.} \begin{equation} \label{f1qnop} F_{k=1} = \sum_{\alpha\in\Psi} \frac{m^4}{(\alpha\cdot a)^2} \prod_{\beta\in\Psi(\alpha)}\left(1 + \frac{m}{\beta\cdot a}\right)~.
\end{equation} We now show that, in the decoupling limit in which the ${\mathcal{N}}=2^\star$ theory reduces to the pure ${\mathcal{N}}=2$ SYM, the above result agrees with the explicit computations that are present in the literature \cite{Benvenuti:2010pq}-\nocite{Keller:2011ek,Hanany:2012dm}\cite{Cremonesi:2014xha}. In the decoupling limit the mass $m$ is sent to infinity and the instanton weight $q$ to zero, keeping constant the dynamically generated scale $\widehat\Lambda$ defined as \begin{equation} \label{declim} \widehat\Lambda^{\,2h^\vee} = m^{2h^\vee} \, q~. \end{equation} Here $2h^{\!\vee}$ is the one-loop $\beta$-function coefficient of the pure ${\mathcal{N}}=2$ theory, expressed in terms of the dual Coxeter number of the Lie algebra $\mathfrak{g}$. For the simply-laced algebras these numbers are given by % \[ \begin{array}{c|ccccc} & ~~A_r~~ & ~~ D_r~~ & ~~E_6~~ &~~E_7 ~~& ~~E_8~~ \\ \hline h^\vee \phantom{\Big|} & r+1 & 2r-2 & 12 & 18 & 30 \\ \end{array} \] Since the number of roots $\beta$ in the set $\Psi(\alpha)$ is $2h^\vee-4$, the highest mass power in (\ref{f1qnop}) is exactly $m^{2h^\vee}$, and so it is consistent to take the decoupling limit, in which all terms proportional to non-maximal powers of $m$ vanish. Doing this, we are left with \begin{equation} \label{purek1} q\,F_{k=1} ~\longrightarrow~ \widehat\Lambda^{\,2h^\vee} \sum_{\alpha\in\Psi} \frac{1}{(\alpha\cdot a)^2} \prod_{\beta\in\Psi(\alpha)} \frac{1}{\beta\cdot a}~. \end{equation} This expression has been derived in \cite{Keller:2011ek} following a completely different approach% \footnote{In \cite{Keller:2011ek} also the non-simply laced groups are considered; in the companion paper \cite{Billo} we will show that also in these cases our treatment reproduces, in the decoupling limit, their expression.}. Our result in (\ref{f1qnop}) generalizes this to the ${\mathcal{N}}=2^\star$ case.
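The counting $|\Psi(\alpha)|=2h^\vee-4$ can be verified directly on the explicitly realized root systems of the A and D series, using $h^\vee=r+1$ for $A_r=\mathfrak{su}(r+1)$ and $h^\vee=2r-2$ for $D_r=\mathfrak{so}(2r)$. A small Python sketch (the ranks tested are an arbitrary sample):

```python
from itertools import permutations, combinations

def roots_A(r):
    # su(r+1): roots e_i - e_j, dual Coxeter number h_vee = r + 1
    n = r + 1
    out = []
    for i, j in permutations(range(n), 2):
        v = [0] * n
        v[i], v[j] = 1, -1
        out.append(tuple(v))
    return out

def roots_D(r):
    # so(2r): roots +-e_i +- e_j with i < j, dual Coxeter number h_vee = 2r - 2
    out = []
    for i, j in combinations(range(r), 2):
        for si in (1, -1):
            for sj in (1, -1):
                v = [0] * r
                v[i], v[j] = si, sj
                out.append(tuple(v))
    return out

def dot(x, y):
    return sum(p * q for p, q in zip(x, y))

# for every root alpha, the set Psi(alpha) = {beta : alpha.beta = 1}
# has exactly 2 h_vee - 4 elements
for r in range(2, 6):
    for roots, h in ((roots_A(r), r + 1), (roots_D(r), 2 * r - 2)):
        for alpha in roots:
            assert sum(1 for b in roots if dot(alpha, b) == 1) == 2 * h - 4
```

In particular this confirms that the highest mass power in (\ref{f1qnop}) is $m^{4+(2h^\vee-4)}=m^{2h^\vee}$.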
\section{The recursion relation in the $\Omega$-background} \label{sec:recepsi} The general features described in the previous section hold also when the ${\mathcal{N}}=2^\star$ theories are formulated in an $\Omega$-background \cite{Nekrasov:2003rj}. In fact we are going to show that for a generic gauge group of the ADE series the $\Omega$-deformed prepotential satisfies a generalized recursion relation, thus extending the analysis of \cite{Huang:2006si}\,-\,\nocite{Grimm:2007tm,Huang:2009md,Huang:2010kf,Huang:2011qx,Galakhov:2012gw,Billo:2013fi,Billo:2013jba}\cite{Nemkov:2013qma} for the SU$(2)$ theory and of \cite{Billo:2014bja} where the SU$(N)$ theories were considered. We parametrize the $\Omega$-background by $\epsilon_1$ and $\epsilon_2$ and, for later convenience, introduce the following combinations \begin{equation} \epsilon=\epsilon_1+\epsilon_2~,\qquad h=\sqrt{\epsilon_1\epsilon_2}~. \label{epsilonh} \end{equation} The deformed prepotential can still be written as in (\ref{iprep1})-(\ref{fn1linst}), but both the one-loop and the instanton parts receive corrections in $\epsilon$ and $h$. In particular, the one-loop term becomes \cite{Nekrasov:2003rj,Huang:2011qx,Billo:2013fi} \begin{equation} \begin{aligned} f^{\mathrm{1-loop}} &= h^2 \sum_{\alpha\in\Psi} \Big[\log\Gamma_2(\alpha\cdot a)-\log\Gamma_2(\alpha\cdot a+m+\epsilon)\Big] \end{aligned} \label{F1loopep} \end{equation} where $\Gamma_2$ is the Barnes double $\Gamma$-function (see Appendix~\ref{secn:appgamma2}).
By expanding for small values of $m$, $\epsilon$ and $h$, we obtain \begin{subequations} \label{f1loopep} \begin{align} f_1^{\mathrm{1-loop}} &= \frac{M^2}{4} \sum_{\alpha\in\Psi} \log\Big(\frac{\alpha\cdot a}{\Lambda}\Big)^2~, \\ f_2^{\mathrm{1-loop}} &= - \frac{M^2(M^2 +h^2)}{24}\, C_2 ~, \label{f2he}\\ f_3^{\mathrm{1-loop}} &= - \frac{M^2(M^2 +h^2)(2M^2+ 3h^2 -\epsilon^2)}{240}\, C_4 ~, \label{f3he}\\ f_4^{\mathrm{1-loop}} &= - \frac{M^2(M^2 +h^2) (3M^4 + 10 h^4 +2 \epsilon^4 + 11h^2M^2 -4\epsilon^2M^2 -10 h^2\epsilon^2)}{1008}\, C_6 ~, \label{f4he} \end{align} \end{subequations} where we have defined \begin{equation} M^2 \,\equiv\, m^2- \frac{\epsilon^2}{4}~. \label{M2} \end{equation} As in the undeformed case, here too $f_1$ does not receive instanton corrections, so we have \begin{equation} f_1=f_1^{\mathrm{1-loop}}~, \end{equation} while all other $f_n$'s with $n\geq 2$ have contributions at any order in the instanton expansion. The exact $q$-dependence of the deformed $f_n$'s can be determined by requiring that the prepotential transforms properly under S-duality. In an $\Omega$-background this means that S-duality acts on the prepotential as a Fourier transform \cite{Galakhov:2012gw}\nocite{Billo:2013fi,Billo:2013jba}\,-\,\cite{Nemkov:2013qma}, namely \begin{equation} \exp\Big(\!-\frac{S[F](a^{\mathrm{D}})}{h^2}\Big)= \Big(\frac{\mathrm{i} \tau}{h^2}\Big)^{r/2} \int d^{\,r} x ~\exp\Big(\frac{2\pi \mathrm{i} \,a^{\mathrm{D}} \cdot x - F(x)}{h^2}\Big) \label{FT2} \end{equation} where $r$ is the rank of the gauge group.
This interpretation of S-duality is fully consistent with the interpretation of $a$ and $a^{\mathrm{D}}$ as canonical conjugate variables, on which S acts as a canonical transformation, and of \begin{equation} Z= \exp\Big(\!-\frac{F}{h^2}\Big) \end{equation} as a wave function in a quantization of this phase space, with $h^2 = \epsilon_1\epsilon_2$ playing the r\^ole of the Planck constant \cite{Witten:1993ed}\nocite{Aganagic:2006wq}\,-\,\cite{Gunaydin:2006bz}. If we compute the Fourier transform (\ref{FT2}) in the saddle point approximation for $h \to 0$ and denote by $a$ the solution of the saddle point equation, that is \begin{equation} 2\pi \mathrm{i} \,a^{\mathrm{D}}- \partial_x F(x)\Big|_{x=a}=0~, \end{equation} then the leading contribution to the integral in (\ref{FT2}) is \begin{equation} \exp\Big(\!-\frac{S[F](a^{\mathrm{D}})}{h^2}\Big)= \exp\left(\! -\frac{F(a) - a\cdot\partial_a F(a)}{h^2} -\frac12 \,\mathrm{tr} \log\Big(\delta_{u v}+\frac{1}{2\pi \mathrm{i} \tau} \, \partial_u \partial_v f \Big) \right) +\cdots \label{FT3} \end{equation} where the tr log part comes from the Gaussian integration around the saddle point and the ellipses stand for subleading terms in $h$. The dominant contribution for $h\to0$ reproduces the Legendre transform of the prepotential as expected, but there are corrections for finite $h$. Indeed we have \begin{equation} \begin{aligned} S[F]&={\mathcal{L}}[F]+\frac{h^2}{2}\, \mathrm{tr} \log\Big(\delta_{u v}+\frac{1}{2\pi \mathrm{i} \tau} \, \partial_u \partial_v f \Big) +\cdots\\ &={\mathcal{L}}[F]+\delta\,\Big(\frac{h^2}{24}\Delta f\Big)+{\mathcal{O}}(\delta^2)+\cdots \end{aligned} \label{FT4} \end{equation} where we have used $\delta=\ft{6}{\pi\mathrm{i}\tau}$, as before, and defined $\Delta=\sum_u\partial_u^2$. 
Repeating the same steps described in Section~\ref{secn:sduality} (see also Sections 3 and 4 of \cite{Billo:2013jba} for more details), one can show that (\ref{FT4}) leads to the following recursion relation for the prepotential coefficients $f_n$: \begin{equation} \frac{\partial f_n}{\partial E_2}=-\frac{1}{24}\sum_{m=1}^{n-1} \frac{\partial f_m}{\partial a}\cdot \frac{\partial f_{n-m}}{\partial a}\,+\,\frac{h^2}{24}\,\Delta f_{n-1}~. \label{recursion} \end{equation} The recursive computation of the $f_n$'s can then proceed along the lines we have discussed in the undeformed theory. At level two we find \begin{equation} \frac{\partial f_2}{\partial E_2}=-\frac{1}{24}\frac{\partial f_1}{\partial a}\cdot \frac{\partial f_1}{\partial a}\,+\,\frac{h^2}{24}\,\Delta f_1= -\frac{1}{24} M^2 (M^2+h^2) \,C_2 \label{rec-f2} \end{equation} where we have used \begin{equation} \Delta f_1 = -\frac{M^2}{2} \sum_{\alpha\in\Psi} \frac{(\alpha \cdot\alpha)}{(\alpha \cdot a)^2} = -M^2\, C_2~. \end{equation} Integrating with respect to $E_2$ we get \begin{equation} f_2=-\frac{1}{24} M^2 (M^2+h^2)\, E_2\, C_2~. \label{f2ep} \end{equation} It is immediate to see that in the perturbative limit, when $E_2$ reduces to 1, this correctly reproduces (\ref{f2he}). Using this result we can write the differential equation constraining $f_3$, namely \begin{equation} \begin{aligned} \frac{\partial f_3}{\partial E_2} &= -\frac{1}{12}\frac{\partial f_1}{\partial a}\cdot \frac{\partial f_2}{\partial a}\,+\,\frac{h^2}{24}\,\Delta f_2 \\ &= -\frac{1}{144} M^2 (M^2+h^2) \big((2M^2+3h^2) C_4 + M^2 C_{3;1} \big)\,E_2~.
\end{aligned} \label{rec-f3} \end{equation} Integrating with respect to $E_2$ and fixing the dependence on $E_4$ in such a way to reproduce the perturbative result (\ref{f3he}), we get \begin{equation} \begin{aligned} f_3&=-\frac{1}{288} M^2 (M^2+h^2)\Big[\frac{1}{5}\big((2M^2+3h^2)(5 E_2^2+E_4)-6\epsilon^2E_4\big)\,C_4 \\&~~~\qquad +\frac{1}{2} M^2\,(E_2^2-E_4)\,C_{2;11}\Big] \end{aligned} \label{f3ep} \end{equation} where we have used the identity $C_{3;1}=\ft{1}{2}\,C_{2;11}$ proven in Appendix \ref{secn:sums}. In a similar way we can determine $f_4$. The result we get is \begin{eqnarray} f_4 &=&-\frac{1}{1728} M^2 (M^2+h^2) \left\{ \Big[ \frac{2}{105}(2M^2+3h^2)(2M^2+5h^2)(35 E_2^3+21E_2E_4+4E_6) \right.\nonumber\\ && +\frac{2}{21} M^2(M^2+h^2)(7 E_2^3-E_6) -\frac{12}{35} \epsilon^2 (2M^2+5h^2)(7E_2E_4+3E_6) +\frac{24}{7} \epsilon^4E_6 \Big]C_6 \nonumber\\ && -\Big[ \frac{1}{10}M^2(2M^2+3h^2)(5 E_2^3-3E_2E_4-2E_6)- \frac{6}{5} \epsilon^2 M^2(E_2E_4-E_6) \Big] C_{4;2} \nonumber \\ && -\Big[ \frac{1}{60}M^2(M^2+4h^2)(5 E_2^3-3E_2E_4-2E_6) - \frac{3}{5} \epsilon^2 M^2 (E_2E_4-E_6) \Big] C_{3;3} \nonumber \\ &&\left. +\frac{1}{24} M^4 (E_2^3-3E_2E_4+2E_6)\, C_{2;1111}\right\}~. \label{f4ep} \end{eqnarray} This procedure can be carried out for the next orders in the mass expansion, but the results become lengthy and we do not report them explicitly. \subsection{1-instanton terms} While the exact expressions of the $f_n$'s are rather involved, their 1-instanton part is quite simple and it is possible to write a compact expression which generalizes the one we have derived in (\ref{f1qnop}) for the undeformed theory.
Indeed, inserting the $q$-expansions of the Eisenstein series in (\ref{f2ep}), (\ref{f3ep}) and (\ref{f4ep}) one can see that only a few terms contribute at order $q$: \begin{equation} \label{fk1} \begin{aligned} f_2\big|_{k=1}&=M^2(M^2+h^2)\, C_2 ~,\\ f_3\big|_{k=1}&=M^2(M^2+h^2)\,\Big[\epsilon^2 C_4 + \frac{1}{2} M^2C_{2;11} \Big] ~,\\ f_4\big|_{k=1}&=M^2(M^2+h^2)\, \Big[\epsilon^4 C_6 + \frac{1}{2} \epsilon^2M^2C_{4;11}+\frac{1}{24} M^2(M^2- \epsilon^2 ) C_{2;1111}\Big] \end{aligned} \end{equation} where in the last equation we have used the first identity given in (\ref{identfin}). This pattern actually extends to the higher $f_n$'s, as we have verified by computing the 1-instanton prepotential using the localization techniques described in the next section. Altogether we find \begin{equation} \label{FFk1} \begin{aligned} F_{k=1}&=\sum_{n=2} f_n\big|_{k=1} \\ &= M^2(M^2+h^2)\, \Big[ \big(C_2 +\epsilon^2\, C_4 + \epsilon^4\, C_6 + \epsilon^6 \,C_8\cdots\big) \\ &\qquad\quad + \frac{1}{2} M^2 \big(C_{2;11}+ \epsilon^2\, C_{4;11}+ \epsilon^4 \,C_{6;11} + \cdots\big) \\ &\qquad\quad +\frac{1}{24} M^2(M^2-\epsilon^2 ) \big(C_{2;1111}+ \epsilon^2\, C_{4;1111} + \cdots \big)\\ &\qquad\quad+\frac{1}{720} M^2(M^4-3M^2\epsilon^2+3\epsilon^4) \big(C_{2;111111}+ \cdots \big) +\cdots\Big]~.
\end{aligned} \end{equation} This pattern suggests introducing the following notation \begin{equation} \begin{aligned} g_{2n}&=\frac{1}{(2n)!}\Big( C_{2;{\underbrace{\mbox{\scriptsize{11\ldots1}}}_{\mbox{\scriptsize{$2n$}}}}}+ \epsilon^2\, C_{4;{\underbrace{\mbox{\scriptsize{11\ldots1}}}_{\mbox{\scriptsize{$2n$}}}}}+ \epsilon^4\,C_{6;{\underbrace{\mbox{\scriptsize{11\ldots1}}}_{\mbox{\scriptsize{$2n$}}}}} +\cdots\Big)\\ &= \frac{1}{(2n)!}\,\sum_{\alpha\in \Psi} \sum_{\beta_1 \neq \cdots \neq \beta_{2n} \in \Psi(\alpha)} \frac{1}{(\alpha \cdot a)(\alpha \cdot a+\epsilon)(\beta_1 \cdot a) \cdots(\beta_{2n} \cdot a) }~, \end{aligned} \label{g2n} \end{equation} so that (\ref{FFk1}) becomes \begin{equation} F_{k=1}= M^2(M^2+h^2)\Big[g_0+M^2\,g_2+M^2(M^2-\epsilon^2)\,g_4+ M^2(M^4-3M^2\epsilon^2+3\epsilon^4) \,g_6+\cdots\Big]~. \label{FFk1a} \end{equation} Notice that in the sums $g_{2n}$ all odd powers in the $\epsilon$-expansion of the second line of (\ref{g2n}) vanish upon summing over the roots and that, for a given algebra $\mathfrak{g}$, the highest non-vanishing expression of this kind is $g_{2h^{\!\vee}-4}$, since the cardinality of $\Psi(\alpha)$ is $2h^{\!\vee}-4$. It is interesting to observe that the $g$'s can be expressed in terms of a generating function \begin{equation} \label{GG} G(x)= \sum_{n=0}^{2h^{\!\vee}-4}g_n\,x^n= \sum_{\alpha\in \Psi}\frac{1}{(\alpha \cdot a)(\alpha \cdot a+\epsilon)} \prod_{\beta \in \Psi(\alpha)}\left( 1+\frac{x}{\beta \cdot a}\right) \end{equation} where \begin{equation} \label{gniader} g_{n} = \frac{1}{n!} \left.\frac{\partial ^n G(x)}{\partial x^n}\right|_{x=0}~. \end{equation} It is also possible to recognize a pattern in the mass- and $\epsilon$-dependent expressions that multiply the $g_n$'s in \eq{FFk1a}.
Writing the latter in the form \begin{equation} \label{F2Fk1} F_{k=1}= M^2(M^2+h^2)\,\sum_{n=0}^{2h^\vee-4} g_n \,\epsilon^{n} H_n\Big(\ft{M^2}{\epsilon^2}\Big)~, \end{equation} one can see that the polynomials $H_n$ are connected to the Euler polynomials ${\mathcal{E}}_n$ according to \begin{equation} \label{H2} H_n\Big(\ft{M^2}{\epsilon^2}\Big)= \frac{1}{2} \left[ {\mathcal{E}}_n\Big( \ft{1}{2}+\ft{m}{\epsilon} \Big) + {\mathcal{E}}_n\Big( \ft{1}{2}-\ft{m}{\epsilon}\Big) \right] \end{equation} (recall that $M^2 = m^2-\ft{\epsilon^2}{4}$). In turn, the Euler polynomials are defined by \begin{equation} \label{defeulerpol} \frac{2\,\mathrm{e}^{z\,t}}{\mathrm{e}^t + 1} = \sum_{n=0}^\infty\frac{1}{n!}\, {\mathcal{E}}_n(z)\,t^n~. \end{equation} With this definition one can easily check that all $H_{2n+1}$ are vanishing, while the $H_{2n}$ reproduce the expressions appearing in (\ref{FFk1a}). Inserting (\ref{H2}) and (\ref{gniader}) into (\ref{F2Fk1}), we then obtain \begin{equation} \label{F3Fk1} \begin{aligned} F_{k=1} & = \frac 12\,\Big(m^2-\frac{\epsilon^2}{4}\Big)\Big(m^2-\frac{\epsilon^2}{4}+h^2\Big)\\ &~~~\times \sum_{n=0}^\infty \frac{1}{n!}\, \left[ {\mathcal{E}}_n\Big( \ft{1}{2}+\ft{m}{\epsilon} \Big) + {\mathcal{E}}_n\Big( \ft{1}{2}-\ft{m}{\epsilon}\Big) \right] \left(\epsilon \,\frac{\partial}{\partial x}\right)^n \!G(x)\Big|_{x=0}~. \end{aligned} \end{equation} Since $G(x)$ is a polynomial of order $2h^{\!\vee} -4$, all terms with $n>2h^{\!\vee} -4$ in the sum vanish and (\ref{F3Fk1}) is simply another way to write (\ref{F2Fk1}). 
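The stated properties of the polynomials $H_n$ can be checked exactly by building the Euler polynomials from their Appell recurrence. The sketch below verifies that the odd $H$'s vanish and that the first even ones reproduce the coefficients appearing in (\ref{FFk1a}); the value of $m/\epsilon$ is an arbitrary rational sample:

```python
from fractions import Fraction
from math import comb

def euler_poly(n, memo={}):
    # Appell recurrence E_n(x) = x^n - (1/2) sum_{k<n} C(n,k) E_k(x),
    # equivalent to E_n(x+1) + E_n(x) = 2 x^n; coefficients ascending in x.
    if n in memo:
        return memo[n]
    p = [Fraction(0)] * n + [Fraction(1)]
    for k in range(n):
        for i, c in enumerate(euler_poly(k)):
            p[i] -= Fraction(comb(n, k), 2) * c
    memo[n] = p
    return p

def H(n, y):
    # H_n evaluated at y = m/eps:  (1/2)[E_n(1/2+y) + E_n(1/2-y)]
    ev = lambda p, x: sum(c * x ** i for i, c in enumerate(p))
    p = euler_poly(n)
    return (ev(p, Fraction(1, 2) + y) + ev(p, Fraction(1, 2) - y)) / 2

y = Fraction(3, 7)                  # arbitrary rational sample of m/eps
u = y * y - Fraction(1, 4)          # u = M^2/eps^2, since M^2 = m^2 - eps^2/4

assert all(H(n, y) == 0 for n in (1, 3, 5, 7))  # odd H's vanish
assert H(0, y) == 1                             # -> g_0
assert H(2, y) == u                             # -> M^2 g_2
assert H(4, y) == u * (u - 1)                   # -> M^2(M^2-eps^2) g_4
assert H(6, y) == u * (u * u - 3 * u + 3)       # -> M^2(M^4-3M^2 eps^2+3 eps^4) g_6
```

The last four assertions match, term by term, the polynomials multiplying $g_0$, $g_2$, $g_4$ and $g_6$ in (\ref{FFk1a}).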
However, this allows us to use the property (\ref{defeulerpol}) of the Euler polynomials in order to write \begin{eqnarray} F_{k=1} & =& \Big(m^2-\frac{\epsilon^2}{4}\Big)\Big(m^2-\frac{\epsilon^2}{4}+h^2\Big)\, \Bigg(\,\frac{\mathrm{e}^{(\frac{\epsilon}{2}+m)\,\partial_x}}{\mathrm{e}^{\epsilon\,\partial_x}+1}+ \frac{\mathrm{e}^{(\frac{\epsilon}{2}-m)\,\partial_x}}{\mathrm{e}^{\epsilon\,\partial_x}+1} \,\Bigg) G(x)\Big|_{x=0}\label{F3Fk1a}\\ & =& \Big(m^2-\frac{\epsilon^2}{4}\Big)\Big(m^2-\frac{\epsilon^2}{4}+h^2\Big)\! \sum_{n=0}^{2h^\vee-4}\frac{(-1)^n}{2^{n+1}}\,\big(\mathrm{e}^{\epsilon\,\partial_x}-1\big)^{n} \Big[G\big(x + \ft{\epsilon}{2} + m\big) + G\big(x + \ft{\epsilon}{2} - m\big)\Big]\Big|_{x=0} \nonumber \end{eqnarray} where in the second line we have expanded $\big(\mathrm{e}^{\epsilon\,\partial_x}+1\big)^{-1}=\big(2+\mathrm{e}^{\epsilon\,\partial_x}-1\big)^{-1}$ as a geometric series in $\big(\mathrm{e}^{\epsilon\,\partial_x}-1\big)/2$ and truncated it since, as we stressed above, $G(x)$ is a polynomial of order $2h^{\!\vee} -4$. It is not difficult to check that in the limit $\epsilon \to 0$ we recover the 1-instanton expression given in (\ref{f1qnop}) and that keeping $\epsilon$ finite but decoupling the matter hypermultiplet we recover the same formula obtained in \cite{Keller:2011ek} for the pure ${\mathcal{N}}=2$ theories from the coherent states of the W-algebras. \section{Multi-instanton calculations} \label{snek} In this section, we test the results for the ${\mathcal N}=2^\star$ prepotential obtained from the modular recursion equation against a direct microscopic computation of the first instanton corrections based on equivariant localization techniques \cite{Nekrasov:2002qd}\nocite{Flume:2002az,Nekrasov:2003rj,Bruzzo:2002xf}\,-\,\cite{Fucito:2004ry} (see also \cite{Billo:2012st} for further details). To do so we first recall a few basic facts about the instanton moduli space and the multi-instanton calculus, starting from the gauge theories with unitary groups.
\subsection{Multi-instantons for the U$(N)$ gauge theory} The moduli space of $k$-instantons in the $\mathcal{N}=2^\star$ theory with gauge group U$(N)$ can be built from the open strings connecting a stack of $k$ D(-1) and $N$ D3-branes in Type IIB string theory. The gauge theory prepotential can be viewed as the free energy of the statistical system describing the lowest modes of the open strings with at least one end-point on the D(-1) branes that account for the instanton moduli \cite{Witten:1995gx}\nocite{Douglas:1995bn,Green:2000ke,Billo:2002hm}\,-\,\cite{Billo:2006jm}. The partition function $Z_k$ of the system can be computed using localization methods. To achieve full localization all symmetries of the system have to be broken. The gauge symmetries on the world volumes of the D3 and D(-1) branes can be broken by distributing them along a transverse complex plane $\mathbb{C}$. We label their positions in this plane by $a_u$ (with $u=1,\cdots, N$) and $\chi_i$ (with $i=1,\cdots,k$), respectively. The $\mathrm{SO}(4)\times \mathrm{SO}(4)$ Lorentz symmetry of the spacetime transverse to this plane can be broken by turning on an $\Omega$-background with parameters $\epsilon_1$, $\epsilon_2$, $\epsilon_3$ and $\epsilon_4$. The first two parameters, $\epsilon_1$ and $\epsilon_2$, describe a gravitational background, while $\epsilon_3$ and $\epsilon_4$ are related to the mass of the adjoint hypermultiplet. In Tab.~\ref{tabun} we list all moduli for given $k$ and $N$, together with their transformation properties with respect to the various symmetry groups. In the first column we have grouped the moduli into $Q$-pairs of the supersymmetry charge $Q$ used for localization, labeled by their $\mathrm{SO}(4)\times \mathrm{SO}(4)$ quantum numbers with spinor indices $\alpha,\dot\alpha,a,\dot a$, all taking two values.
\begin{table}[ht] \begin{center} \begin{tabular}{|c|c|c|c|} \hline \begin{small} $ (\phi,\psi) $ \end{small} &\begin{small} $(-1)^{F_\phi}$ \end{small} &\begin{small} $\mathrm{U}(N) \times \mathrm{U}(k) $ \end{small} &\begin{small} $ \lambda_\phi\phantom{\Big|} $ \end{small} \\ \hline\hline $(B_{\alpha\dot\alpha},M_{\alpha \dot a} )$ & $+\phantom{\Big|}$ & $\bigl(\mathbf{1},\tableau{1}\,\overline{\tableau{1}}\bigr)$ & $ \chi_{ij}+\epsilon_1,\,\chi_{ij}+\epsilon_2$\\ $(B_{ a \dot a},M_{\dot\alpha a} )$ & $+\phantom{\Big|}$ & $\bigl(\mathbf{1},\tableau{1}\,\overline{\tableau{1}}\bigr)$ & $ \chi_{ij}+\epsilon_3,\chi_{ij}+\epsilon_4$\\ $(N_{(\dot\alpha\dot b)},D_{(\dot\alpha\dot \beta)})$ & $-\phantom{\Big|}$ & $\bigl(\mathbf{1},\tableau{1}\,\overline{\tableau{1}}\bigr) $ & $ \sqrt{\chi_{ij}},\, \chi_{ij}+\epsilon_1+\epsilon_2 $\\ $(\bar \chi , N)$ & $+\phantom{\Big|}$ & $\bigl(\mathbf{1},\tableau{1}\,\overline{\tableau{1}}\bigr) $ & $ \sqrt{\chi_{ij}} $\\ $(N_{\alpha a},D_{\alpha a})$ & $-\phantom{\Big|}$ & $\bigl(\mathbf{1},\tableau{1}\,\overline{\tableau{1}}\bigr) $ & $ \chi_{ij}+\epsilon_1+\epsilon_3, \,\chi_{ij}+\epsilon_1+\epsilon_4 $\\ $(w_{\dot \alpha},\mu_{\dot a})$ & $+\phantom{\Big|}$ & $\bigl( \overline{\tableau{1}},\tableau{1} \bigr) $ & $ \chi_i-a_u+\ft{\epsilon_1+\epsilon_2}{2} $ \\ $(\bar w_{\dot \alpha},\bar \mu_{\dot a})$ & $+\phantom{\Big|}$ & $\bigl({\tableau{1}},\overline{\tableau{1}}\bigr) $ & $ -\chi_i+a_u+\ft{\epsilon_1+\epsilon_2}{2} $ \\ $(h_{a},\mu_{a})$ & $-\phantom{\Big|}$ & $\bigl(\overline{\tableau{1}}, \tableau{1} \bigr) $ & $ \chi_i-a_u+\ft{\epsilon_3-\epsilon_4 }{2} $ \\ $(\bar h_{a},\bar \mu_{a})$ & $-\phantom{\Big|}$ & $\bigl({\tableau{1}} , \overline{\tableau{1}}\bigr) $ & $ -\chi_i+a_u+\ft{\epsilon_3-\epsilon_4 }{2} $ \\ \hline \end{tabular} \caption{Instanton moduli for the U$(N)$ gauge theory. 
The columns display, respectively, the moduli in an ADHM-like notation, their statistics, their transformation properties with respect to the gauge and instanton symmetry groups and their $Q^2$-eigenvalues $\lambda_\phi$. The notation $\chi_{ij}$ stands for $\chi_i-\chi_j$.} \label{tabun} \end{center} \end{table} The neutral bosonic moduli include the eight instanton positions transverse to ${\mathbb C}$, denoted by $B_{\alpha\dot\alpha}$ and $B_{a\dot a}$, and the positions along ${\mathbb C}$, denoted by $\chi$ and $\bar\chi$. The charged bosonic moduli $w_{\dot\alpha}$ and $\bar w_{\dot\alpha}$ describe the size and the orientation of the instantons, while the auxiliary fields $D_{\dot\alpha\dot\beta}$, $D_{\alpha a}$ and $h_a$ take care of the generalised ADHM constraints. The field $\chi$ can be viewed as the U$(k)$ gauge parameter and thus it should be integrated out in order to achieve U$(k)$-invariance. The $k$-instanton partition function is given by the complex superdeterminant of $Q^2$, which can be computed from the data reported in the last column of the above table.
The result is \begin{equation} Z_k= \oint\prod_{i=1}^k\frac{d\chi_i}{2\pi\mathrm{i}}~ \Delta(0)\, \prod_\phi \lambda_\phi^{(-1)^{F_\phi+1} } \,=\, \oint\prod_{i=1}^k\frac{d\chi_i}{2\pi\mathrm{i}}~z_k^{\mathrm{gauge}}\,z_k^{\mathrm{matter}} \label{Zkun} \end{equation} where $\Delta(0)=\prod_{i\neq j} \chi_{ij}$ is the Vandermonde determinant and \begin{subequations} \begin{align} z_k^{\mathrm{gauge}}&=\frac{(-1)^k}{k!}\, \left( \frac{\epsilon_1+\epsilon_2 }{\epsilon_1\epsilon_2}\right)^k \,\frac{\Delta(0)\,\Delta(\epsilon_1+\epsilon_2)}{\Delta(\epsilon_1)\,\Delta(\epsilon_2)}\,\prod_{i=1}^k \frac{ 1}{P\big(\chi_i+\frac{\epsilon_1+\epsilon_2}{2}\big)\,P\big(\chi_i-\frac{\epsilon_1+\epsilon_2 }{2}\big)} ~,\\ \notag\\ z_k^{\mathrm{matter}}&=\left( \frac{ (\epsilon_1+\epsilon_3) (\epsilon_1+\epsilon_4) }{ \epsilon_3 \epsilon_4 }\right)^{\!k} \frac{\Delta(\epsilon_1+\epsilon_3)\,\Delta(\epsilon_1+\epsilon_4)}{\Delta(\epsilon_3)\,\Delta(\epsilon_4)}\, \prod_{i=1}^k P\big(\chi_i+\ft{\epsilon_3-\epsilon_4}{2}\big)\,P\big(\chi_i-\ft{\epsilon_3-\epsilon_4}{2} \big) \end{align} \end{subequations} with \begin{equation} \begin{aligned} P(x) =\prod_{u=1}^N\big(x- a_u\big)~,\qquad \Delta(x)=\prod_{i<j}^k\big( x^2- \chi_{ij}^2 \big) ~. \end{aligned} \end{equation} The integrals in (\ref{Zkun}) are computed by closing the contours in the upper-half complex $\chi_i$-planes after giving $\epsilon_1$, $\epsilon_2$, $\epsilon_3$ and $\epsilon_4$ an imaginary part with the following prescription \begin{equation} \mathrm{Im}(\epsilon_4)\gg \mathrm{Im}(\epsilon_3)\gg\mathrm{Im}(\epsilon_2)\gg\mathrm{Im}(\epsilon_1)> 0. \label{prescription} \end{equation} This choice allows us to unambiguously compute all integrals in (\ref{Zkun}) and obtain the instanton partition function of the U$(N)$ theory \begin{equation} Z_{\mathrm{inst}}=1+\sum_{k=1}q^k\,Z_k \end{equation} where $q=\mathrm{e}^{2\pi\mathrm{i}\tau}$.
At the end of the calculations we have to set \begin{equation} \epsilon_3=m-\frac{\epsilon_1+\epsilon_2}{2}~,\quad \epsilon_4=-m-\frac{\epsilon_1+\epsilon_2}{2} \label{mass34} \end{equation} in order to express the result in terms of the hypermultiplet mass $m$ in the normalization of the previous sections. The prepotential is then given by \begin{equation} F_{\mathrm{inst}}=-\epsilon_1\epsilon_2\,\log Z_{\mathrm{inst}}=\sum_{k=1}q^k\,F_k~; \label{FZ} \end{equation} by taking the limit $\epsilon_1,\epsilon_2\to0$ one finally recovers the prepotential of the undeformed gauge theory. \subsubsection*{1-instanton terms} At $k=1$ there is one integral to compute; it is very easy to see that the poles of (\ref{Zkun}) are located at \begin{equation} \chi_1=a_u+\frac{\epsilon_1+\epsilon_2}{2}~. \end{equation} Evaluating the residues, using (\ref{mass34}) and summing over $u$ we find \begin{equation} F_{k=1}=-\epsilon_1 \epsilon_2\, Z_1=-\left(m^2-\frac{(\epsilon_1-\epsilon_2)^2}{4} \right) \sum_{u=1}^N \prod_{v\neq u} \frac{(a_{uv }+ \ft{\epsilon_1+\epsilon_2}{2})^2-m^2}{a_{uv }(a_{uv}+\epsilon_1+\epsilon_2)} \label{Fk1nek} \end{equation} where $a_{uv}=a_u-a_v$. For example for U(2) we have \begin{equation} F_{k=1}\Big|_{\mathrm{U}(2)}=(M^2+h^2)\Big[-2+\frac{M^2}{a_{12}(a_{12}+\epsilon)}+ \frac{M^2}{a_{21}(a_{21}+\epsilon)}\Big] \end{equation} where $M^2$ and $\epsilon$ are defined in (\ref{M2}) and (\ref{epsilonh}). Notice that the terms proportional to $M^2$ in the square brackets precisely reconstruct the sum $g_0$ defined in (\ref{g2n}). To get the prepotential for the SU(2) theory we simply have to set $a_1=-a_2=a$ in the above expression; in this way we get \begin{equation} F_{k=1}\Big|_{\mathrm{SU}(2)}=\frac{2(M^2+h^2)\,(M^2+\epsilon^2-4 a^2)}{4 a^2-\epsilon^2}~. 
\label{Fk1SU2} \end{equation} For unitary groups of higher rank, the expanded expression of the 1-instanton prepotential is more cumbersome; however, it is possible to check that (\ref{Fk1nek}) can be written as% \footnote{We discard all $a$-independent terms.} \begin{equation} F_{k=1}= M^2(M^2+h^2)\Big[g_0+M^2\,g_2+M^2(M^2-\epsilon^2)\,g_4+ M^2(M^4-3M^2\epsilon^2+3\epsilon^4) \,g_6+\cdots\Big] \label{Fk1gn} \end{equation} in agreement with (\ref{FFk1a}). The equality between (\ref{Fk1nek}) and (\ref{Fk1gn}) (up to $a$-independent terms) is not immediately apparent, but it can be verified explicitly. In the undeformed theory, {\it{i.e.}} for $\epsilon,h\to0$ and $M^2\to m^2$, the above results further simplify and reduce to those obtained long ago in \cite{Ennes:1999fb} using a completely different approach. \subsubsection*{2-instanton terms} At $k=2$ there are two integrals in (\ref{Zkun}) to compute. The procedure we have described above is straightforward to implement and with the prescription (\ref{prescription}) no ambiguities arise. To avoid long formulas we write some explicit 2-instanton terms only in the $\epsilon,h\to 0$ limit. For example, in the undeformed U(2) theory, we get \begin{equation} \begin{aligned} F_{k=2}\Big|_{\mathrm{U}(2)}&= -3 m^2+6 m^4\,\frac{1}{a_{12}^2}-12m^6\,\frac{1}{a_{12}^4} +5m^8\,\frac{1}{a_{12}^6} +\cdots\\ &=-3m^2+3m^4\, C_{2}-6m^6\,C_{4}+\frac{5m^8}{2}\,C_{6}+\cdots \end{aligned} \end{equation} where in the second line we have used the sums $C_n$'s defined in (\ref{defCn}) and the dots stand for subleading terms in the mass expansion.
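The agreement between the residue formula (\ref{Fk1nek}) and the closed SU(2) expression (\ref{Fk1SU2}) is straightforward to confirm with exact rational arithmetic; a minimal Python sketch (all numerical values are arbitrary rational samples):

```python
from fractions import Fraction

# arbitrary rational sample data
a  = Fraction(5, 3)
m  = Fraction(7, 11)
e1 = Fraction(1, 2)
e2 = Fraction(1, 7)

eps = e1 + e2
h2  = e1 * e2
M2  = m * m - eps * eps / 4

avev = [a, -a]                          # SU(2): a_1 = -a_2 = a
pref = -(m * m - (e1 - e2) ** 2 / 4)    # note m^2 - (e1-e2)^2/4 = M^2 + h^2

# residue formula for F_{k=1} specialized to N = 2
F_res = pref * sum(
    ((avev[u] - avev[v] + eps / 2) ** 2 - m * m)
    / ((avev[u] - avev[v]) * (avev[u] - avev[v] + eps))
    for u in range(2) for v in range(2) if v != u)

# closed SU(2) form
F_closed = 2 * (M2 + h2) * (M2 + eps * eps - 4 * a * a) / (4 * a * a - eps * eps)

assert pref == -(M2 + h2)
assert F_res == F_closed
```

The same comparison can be repeated for higher-rank unitary groups against (\ref{Fk1gn}), at the cost of enumerating the roots explicitly.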
Likewise for U(3) we find \begin{equation} \begin{aligned} F_{k=2}\Big|_{\mathrm{U}(3)}&= -\frac{9m^2}{2}+6m^4\Big(\frac{1}{a_{12}^2}+\frac{1}{a_{13}^2} +\frac{1}{a_{23}^2}\Big)\\ &~-12m^6\Big(\frac{1}{a_{12}^4}+\frac{1}{a_{13}^4} +\frac{1}{a_{23}^4}+\frac{a_1^2+a_2^2+a_3^2-a_1a_2-a_1a_3-a_2a_3}{a_{12}^2a_{13}^2a_{23}^2}\Big) +\cdots\\ &=-\frac{9m^2}{2}+3m^4\, C_{2}-6m^6\,C_{4}+3m^6\,C_{2;11}+\cdots~. \end{aligned} \end{equation} We have explicitly checked up to U(5) that the same pattern occurs, namely that the 2-instanton prepotential is (up to $a$-independent terms) \begin{equation} F_{k=2}=3m^4\, C_{2}-6m^6\,C_{4}+3m^6\,C_{2;11}+\frac{5m^8}{2}\,C_6+6m^8\,C_{4;2}+\frac{m^8}{2}\,C_{3;3}+\frac{m^8}{2}\,C_{2;1111}+\cdots~. \label{Fk2fin} \end{equation} This result is in total agreement with the 2-instanton prepotential that can be obtained from (\ref{f5is}) by expanding the Eisenstein series; moreover it clearly shows the advantage of using the root lattice sums $C_{n;m_1\cdots}$ that allow us to write a single expression valid for all U$(N)$ groups. Finally we mention that for the unitary groups it is possible to push the calculations to higher instanton numbers, as we have shown in \cite{Billo:2014bja}. \subsection{Multi-instantons for the SO$(2N)$ gauge theory} The moduli space of the SO$(2N)$ gauge theory is obtained from that of the U$(2N)$ theory by using the projector $(1+\Omega\, I)/2$ where $\Omega$ is the orientifold operator that changes the orientation of the open strings and $I$ reflects the moduli carrying an index $\dot\alpha$, {\it{i.e.}} transforming in the fundamental representation of the $\mathrm{SU}(2)_{\mathrm{L}}$ factor of the spacetime Lorentz group \cite{Fucito:2004gi}. As a result, the symmetry of the brane system reduces to $\mathrm{SO}(2N)\times \mathrm{Sp}(2k)$. The instanton moduli and their transformation properties are listed in Tab.~\ref{tabson}, which uses the same notation as Tab.~\ref{tabun}.
\begin{table}[ht] \begin{center} \begin{tabular}{|c|c|c|c|} \hline \begin{small} $ (\phi,\psi) $ \end{small} &\begin{small} $(-1)^{F_\phi}$ \end{small} &\begin{small} $\mathrm{SO}(2N) \times \mathrm{Sp}(2k)$ \end{small} &\begin{small} $ \lambda_\phi\phantom{\Big|} $ \end{small} \\ \hline\hline $(B_{\alpha\dot\alpha},M_{\alpha \dot a} )$ & $+\phantom{\Big|}$ & $\bigl(\mathbf{1},\tableau{1 1}\bigr)$ & $ \chi_{ij}+\epsilon_1,\,\chi_{ij}+\epsilon_2$\\ $(B_{ a \dot a},M_{\dot\alpha a} )$ & $+\phantom{\Big|}$ & $\bigl(\mathbf{1},\tableau{2}\bigr)$ & $ \chi_{ij}+\epsilon_3,\,\chi_{ij}+\epsilon_4$\\ $(N_{(\dot\alpha\dot b)},D_{(\dot\alpha\dot \beta)})$ & $-\phantom{\Big|}$ & $\bigl(\mathbf{1},\tableau{2}\bigr) $ & $ \sqrt{\chi_{ij}},\, \chi_{ij}+\epsilon_1+\epsilon_2 $\\ $( \bar \chi, N)$ & $+\phantom{\Big|}$ & $\bigl(\mathbf{1},\tableau{2}\bigr) $ & $ \sqrt{\chi_{ij}} $\\ $(N_{\alpha a},D_{\alpha a})$ & $-\phantom{\Big|}$ & $\bigl(\mathbf{1},\tableau{1 1}\bigr) $ & $ \chi_{ij}+\epsilon_1+\epsilon_3,\, \chi_{ij}+\epsilon_1+\epsilon_4 $\\ $(w_{\dot \alpha},\mu_{\dot a})$ & $+\phantom{\Big|}$ & $\bigl(\tableau{1}, {\tableau{1}}\bigr) $ & $ \chi_i-a_u+\ft{\epsilon_1+\epsilon_2}{2} $ \\ $(h_{a},\mu_{a})$ & $-\phantom{\Big|}$ & $\bigl( \tableau{1}, {\tableau{1}} \bigr) $ & $ \chi_i-a_u+\ft{\epsilon_3-\epsilon_4 }{2} $ \\ \hline \end{tabular} \caption{Instanton moduli for the SO$(2N)$ gauge theory. The columns display the moduli, their statistics, their transformation properties with respect to the gauge and instanton symmetry groups and their $Q^2$-eigenvalues $\lambda_\phi$. 
} \label{tabson} \end{center} \end{table} Collecting the eigenvalues $\lambda_\phi$ for all moduli, we find that the $k$-instanton partition function is \begin{equation} Z_k= \oint\prod_{i=1}^k\frac{d\chi_i}{2\pi\mathrm{i}}~z_k^{\mathrm{gauge}}\,z_k^{\mathrm{matter}} \label{Zkson} \end{equation} where \begin{subequations} \begin{align} z_k^{\mathrm{gauge}}&= \frac{(-1)^k}{2^k\,k!}\, \left( \frac{\epsilon_1+\epsilon_2 }{\epsilon_1\epsilon_2 }\right)^k \,\frac{\Delta(0)\,\Delta(\epsilon_1+\epsilon_2 )}{\Delta(\epsilon_1)\,\Delta(\epsilon_2)}\,\prod_{i=1}^k \frac{ 4\chi_i^2 \big( 4\chi_i^2-(\epsilon_1+\epsilon_2)^2\big)}{P\big(\chi_i+\frac{\epsilon_1+\epsilon_2 }{2}\big)P\big(\chi_i-\frac{\epsilon_1+\epsilon_2 }{2}\big)} ~,\\ &\notag\\ z_k^{\mathrm{matter}}&=\,\left( \frac{ (\epsilon_1+\epsilon_3) (\epsilon_1+\epsilon_4) }{ \epsilon_3 \epsilon_4 }\right)^{\!k} \frac{\Delta\big(\epsilon_1+\epsilon_3 \big)\Delta\big(\epsilon_1+\epsilon_4 \big) } {\Delta\big(\epsilon_3 \big)\Delta\big(\epsilon_4 \big) } \prod_{i=1}^k \frac{P\big(\chi_i+\frac{\epsilon_3-\epsilon_4}{2} \big)P\big(\chi_i- \frac{\epsilon_3-\epsilon_4}{2} \big)}{\big( 4 \chi_i^2-\epsilon_3^2 \big)\big( 4 \chi_i^2-\epsilon_4^2 \big)} \end{align} \end{subequations} with \begin{equation} \begin{aligned} P(x) = \prod_{u=1}^N\big(x^2-a_u^2\big)~,\qquad \Delta(x)=\prod_{i<j}^k\big((\chi_i-\chi_j)^2-x^2\big)\big((\chi_i+\chi_j)^2-x^2\big)~. \end{aligned} \end{equation} Once again the integrals in (\ref{Zkson}) are computed by closing the contours in the upper-half complex $\chi_i$-planes with the prescription (\ref{prescription}). It is important to stress that, unlike in the U$(N)$ theory, the integral (\ref{Zkson}) receives non-trivial contributions also from poles located at $\chi_i=\epsilon_3$, $\chi_i=\epsilon_4$, $\chi_{ij}=\epsilon_3$ and $\chi_{ij}=\epsilon_4$.
The contributions of the corresponding residues are crucial to find an expression which is polynomial in the hypermultiplet mass as one expects on general grounds. Only at the very end of the computation one should use the identification (\ref{mass34}) in order to write the final results in terms of the vacuum expectation values $a_u$ and the mass $m$ in the normalization used in the previous sections. \subsubsection*{1-instanton terms} For $k=1$ the poles of (\ref{Zkson}) are located at \begin{equation} \chi_1= \left\{ \pm a_u+\frac{\epsilon_1+\epsilon_2}{2} ~,~\frac{\epsilon_3}{2} ~,~\frac{\epsilon_4}{2} \right\} \quad\mbox{for}~ u=1,\cdots, N \end{equation} The $k=1$ prepotential can then be written as \begin{equation} F_{k=1}=-\epsilon_1 \epsilon_2 \, Z_1 =\sum_{u=1}^N f_{+a_u+\ft{\epsilon_1+\epsilon_2}{2}}+ \sum_{u=1}^N f_{-a_u+\ft{\epsilon_1+\epsilon_2}{2}}+ f_{\ft{\epsilon_3}{2}}+f_{\ft{\epsilon_4}{2}} \label{Fk1SO2N} \end{equation} with \begin{equation} \begin{aligned} f_{ \pm a_u+\ft{\epsilon_1+\epsilon_2}{2} } &=- (\epsilon_1+\epsilon_3) (\epsilon_1+\epsilon_4) \, \frac{(\pm 2a_u+\epsilon_1+\epsilon_2)(\pm a_u+\epsilon_1+\epsilon_2)}{ (\pm 2a_u+\epsilon_1+\epsilon_2-\epsilon_3)( \pm 2a_u+\epsilon_1+\epsilon_2-\epsilon_4)} \nonumber\\ &~~~~~~~~~\times \prod_{v\neq u} \frac{ \big( (\pm a_u-\epsilon_3)^2-a_v^2\big) \big( (\pm a_u-\epsilon_4)^2-a_v^2\big)}{(a_u^2-a_v^2) \big((\pm a_u+\epsilon_1+\epsilon_2)^2-a_v^2\big) }~,\\ f_{ \ft{\epsilon_3}{2} } &= -\frac{(\epsilon_1+\epsilon_3) (\epsilon_1+\epsilon_4) (\epsilon_3-\epsilon_1-\epsilon_2)} { 8 (\epsilon_3-\epsilon_4) } \prod_{u=1}^N \frac{ (2\epsilon_3-\epsilon_4)^2-a_u^2}{(\epsilon_3-\epsilon)^2-a_u^2} ~,\\ f_{ \ft{\epsilon_4}{2} } &= -\frac{(\epsilon_1+\epsilon_3) (\epsilon_1+\epsilon_4) (\epsilon_4-\epsilon_1-\epsilon_2)}{8 (\epsilon_4-\epsilon_3) } \prod_{u=1}^N \frac{(2\epsilon_4-\epsilon_3)^2-a_u^2}{(\epsilon_4-\epsilon)^2-a_u^2} ~. 
\end{aligned} \end{equation} For example for SO(4) these formulas lead to \begin{eqnarray} F_{k=1}\Big|_{\mathrm{SO}(4)}&=&(M^2+h^2)\Big[-\frac{17}{8}+ \frac{M^2}{(a_1+a_2)(a_1+a_2+\epsilon)}+ \frac{M^2}{(-a_1-a_2)(-a_1-a_2+\epsilon)}\nonumber\\ &&\qquad+ \frac{M^2}{(a_1-a_2)(a_1-a_2+\epsilon)}+ \frac{M^2}{(-a_1+a_2)(-a_1+a_2+\epsilon)} \Big] \label{F1so4} \end{eqnarray} where we have used (\ref{mass34}), (\ref{M2}) and (\ref{epsilonh}). Inside the square brackets the terms proportional to $M^2$ precisely reconstruct the sum $g_0$ defined in (\ref{g2n}) so that this result is in perfect agreement with (\ref{FFk1a}). We also notice that (\ref{F1so4}) is related to the SU(2) prepotential (\ref{Fk1SU2}). Indeed, upon comparison, we have \begin{equation} F_{k=1}\Big|_{\mathrm{SO}(4)}(a_1,a_2)= F_{k=1}\Big|_{\mathrm{SU}(2)}(a_L)+ F_{k=1}\Big|_{\mathrm{SU}(2)}(a_R)+ \frac{15}{8}(M^2+ h^2) \end{equation} where \begin{equation} a_1 =a_L+a_R~, \qquad ~~~~~ a_2 =a_L-a_R ~, \end{equation} so the two prepotentials match up to an $a$-independent function as they should, since $\mathrm{SO}(4)\sim \mathrm{SU}(2)\times\mathrm{SU}(2)$. The explicit expressions of the prepotential for groups of higher rank quickly become rather involved; nevertheless we have checked up to SO(12) that the 1-instanton result (\ref{Fk1SO2N}) can be written as \begin{equation} \begin{aligned} F_{k=1}&=M^2(M^2+h^2)\Big[g_0+\!\frac{M^2}{2}g_2+\!\frac{M^2(M^2-\epsilon^2)}{24}g_4 +\!\frac{M^2(M^4-3M^2\epsilon^2+3\epsilon^4)}{720}g_6+\!\cdots\!\Big] \end{aligned} \label{Fk1gn1} \end{equation} This is the same expression we found for the U$(N)$ theories (see (\ref{Fk1gn})) and is in perfect agreement with what follows from solving the recursion relation as discussed in Section~\ref{sec:recepsi}. Furthermore, in the limit $\epsilon,h\to0$ we exactly recover the results obtained in \cite{Ennes:1999fb} using a very different approach. \subsubsection*{2-instanton terms} At $k=2$ one has to compute two integrals. 
Again, to avoid long formulas we only write an example in the $\epsilon,h\to0$ limit for the purpose of illustration. For SO(4), up to $a$-independent terms we find \begin{equation} \begin{aligned} F_{k=2}\Big|_{\mathrm{SO}(4)}&=12m^4\,\frac{a_1^2+a_2^2}{\big(a_1^2-a_2^2\big)^2}-24m^6\,\frac{a_1^4+6a_1^2a_2^2+a_2^4}{\big(a_1^2-a_2^2\big)^4}\\ &~~~+ 10m^8\frac{a_1^6+15a_1^4a_2^2+15a_1^2a_2^4+a_2^6}{\big(a_1^2-a_2^2\big)^6}\\ &=3m^4\, C_{2}-6m^6\,C_{4}+3m^6\,C_{2;11}+\frac{5m^8}{2}\,C_6+6m^8\,C_{4;2}+\frac{m^8}{2}\,C_{3;3}+\frac{m^8}{2}\,C_{2;1111} \end{aligned} \end{equation} where the last line follows upon using the sums (\ref{defcnms}) over the root lattice of SO$(2N)$. Formally, this is the same expression found for the unitary theories and agrees with the results obtained in Section~\ref{secn:recursion} from the recursion relations. We have verified that this agreement persists for higher-rank groups up to SO(12). This fact puts our findings on very solid ground. \section{Conclusions} \label{sec:concl} In this paper we have shown that S-duality in $\mathcal{N}=2^\star$ gauge theories with simply-laced gauge groups requires that the quantum prepotential satisfy a modular anomaly equation, which in turn allows one to determine the prepotential itself recursively. It is very satisfactory that these conditions can be expressed in a unified form involving sums over the roots of the gauge algebra, without resorting to the specific details of the algebra itself. This is the key to extending our results to the case of the exceptional algebras, where the lack of an ADHM construction of the instanton moduli space does not allow the application of the traditional methods of investigation. The differential equation coming from the anomaly, irrespective of the gauge algebra, needs an external input to fix all the terms in the prepotential.
Given that $f_n$ is a modular form of weight $2n-2$, in solving the recursion relation (\ref{rec}) we can add to $f_n$ any monomial in the Eisenstein series which has weight $2n-2$ but does not contain $E_2$. The coefficients in front of these terms are determined by comparing with the perturbative expansion and, when this is not enough, by resorting to microscopic instanton computations. Given that no microscopic instanton computations exist for the exceptional gauge groups, this might seem problematic. Luckily, the results for $f_n$ given in terms of sums over the roots of the algebra are universal and thus should hold for the exceptional algebras as well. We believe our results constitute a very solid conjecture, which we have successfully tested at the lowest instanton number against the results for the pure $\mathcal{N}=2$ theory existing in the literature \cite{Keller:2011ek,Keller:2012da}; they provide an elegant generalisation to the $\mathcal{N}=2^\star$ case, as well as precise predictions for higher instanton corrections. \vskip 1.5cm \noindent {\large {\bf Acknowledgments}} \vskip 0.2cm We thank Carlo Angelantonj, Sujay Ashok, Eleonora Dell'Aquila and Igor Pesando for discussions. The work of M.B., M.F. and A.L. is partially supported by the Compagnia di San Paolo contract ``MAST: Modern Applications of String Theory'' TO-Call3-2012-0088. \vskip 1cm
\section{\@startsection {section}{1}{\z@}% {-3.5ex \@plus -1ex \@minus -.2ex {2.3ex \@plus.2ex}% {\normalfont\large\bfseries}} \renewcommand\subsection{\@startsection{subsection}{2}{\z@}% {-3.25ex\@plus -1ex \@minus -.2ex}% {1.5ex \@plus .2ex}% {\normalfont\bfseries}} \makeatother \newcommand{\begin{equation}}{\begin{equation}} \newcommand{\end{equation}}{\end{equation}} \newcommand{\begin{array}}{\begin{array}} \newcommand{\end{array}}{\end{array}} \newcommand{nlk}{nlk} \newcommand{n_1l_1k_1}{n_1l_1k_1} \newcommand{n_2l_2k_2}{n_2l_2k_2} \newcommand{n_3l_3k_3}{n_3l_3k_3} \newcommand{n_4l_4k_4}{n_4l_4k_4} \newcommand{n_1l_1k_1n_2l_2k_2n_3l_3k_3n_4l_4k_4}{n_1l_1k_1n_2l_2k_2n_3l_3k_3n_4l_4k_4} \newcommand{n_2l_2k_2n_3l_3k_3n_4l_4k_4}{n_2l_2k_2n_3l_3k_3n_4l_4k_4} \newcommand{c_{nlk}}{c_{nlk}} \newcommand{c_{\indt}}{c_{n_2l_2k_2}} \newcommand{\bar{c}_{\indt}}{\bar{c}_{n_2l_2k_2}} \newcommand{c_{\indth}}{c_{n_3l_3k_3}} \newcommand{c_{\indf}}{c_{n_4l_4k_4}} \newcommand{\dot{c}_{\indices}}{\dot{c}_{nlk}} \newcommand{\ddot{c}_{\indices}}{\ddot{c}_{nlk}} \newcommand{\ddot{c}_{\indo}}{\ddot{c}_{n_1l_1k_1}} \newcommand{(n+1)}{(n+1)} \newcommand{(n_1+1)}{(n_1+1)} \newcommand{\partial}{\partial} \newcommand{\scriptscriptstyle}{\scriptscriptstyle} \newcommand{\scriptstyle}{\scriptstyle} \newcommand{\displaystyle}{\displaystyle} \newcommand{\s}{y} \newcommand{e_{\indices}}{e_{nlk}} \newcommand{e_{\indo}}{e_{n_1l_1k_1}} \newcommand{\bar{e}_{\indo}}{\bar{e}_{n_1l_1k_1}} \newcommand{e_{\indt}}{e_{n_2l_2k_2}} \newcommand{\bar{e}_{\indt}}{\bar{e}_{n_2l_2k_2}} \newcommand{e_{\indth}}{e_{n_3l_3k_3}} \newcommand{e_{\indf}}{e_{n_4l_4k_4}} \newcommand{\omega_{\indices}}{\omega_{nlk}} \newcommand{Y_{nlk}}{Y_{nlk}} \newcommand{\bar{a}}{\bar{a}} \newcommand{\bar{b}}{\bar{b}} \newcommand{\dot{y}}{\dot{y}} \newcommand{\alpha}{\alpha} \newcommand{\bar{\alpha}}{\bar{\alpha}} \newcommand{\dot{\alpha}}{\dot{\alpha}} \newcommand{\alpha_{\indices}}{\alpha_{nlk}} 
\newcommand{\alpha_{\indo}}{\alpha_{n_1l_1k_1}} \newcommand{\alpha_{\indt}}{\alpha_{n_2l_2k_2}} \newcommand{\bar{\alpha}_{\indt}}{\bar{\alpha}_{n_2l_2k_2}} \newcommand{\alpha_{\indth}}{\alpha_{n_3l_3k_3}} \newcommand{\alpha_{\indf}}{\alpha_{n_4l_4k_4}} \newcommand{\bar{\alpha}_{\indices}}{\bar{\alpha}_{nlk}} \newcommand{\bar{\alpha}_{\indo}}{\bar{\alpha}_{n_1l_1k_1}} \newcommand{\dot{\alpha}_{\indices}}{\dot{\alpha}_{nlk}} \newcommand{\dot{\alpha}_{\indo}}{\dot{\alpha}_{n_1l_1k_1}} \newcommand{\beta}{\beta} \newcommand{\bar{\beta}}{\bar{\beta}} \newcommand{\dot{\beta}}{\dot{\beta}} \newcommand{\beta_{\indices}}{\beta_{nlk}} \newcommand{\beta_{\indo}}{\beta_{n_1l_1k_1}} \newcommand{\beta_{\indt}}{\beta_{n_2l_2k_2}} \newcommand{\bar{\beta}_{\indt}}{\bar{\beta}_{n_2l_2k_2}} \newcommand{\beta_{\indth}}{\beta_{n_3l_3k_3}} \newcommand{\beta_{\indf}}{\beta_{n_4l_4k_4}} \newcommand{\bar{\beta}_{\indices}}{\bar{\beta}_{nlk}} \newcommand{\bar{\beta}_{\indo}}{\bar{\beta}_{n_1l_1k_1}} \newcommand{\dot{\beta}_{\indices}}{\dot{\beta}_{nlk}} \newcommand{\dot{\beta}_{\indo}}{\dot{\beta}_{n_1l_1k_1}} \newcommand{\B}[2]{B\left(#1,#2\right)} \newcommand{\G}[1]{\Gamma{\left(#1\right)}} \newcommand{\theta}{\theta} \newcommand{\varphi}{\varphi} \newcommand{\delta}{\delta} \newcommand{\Delta}{\Delta} \newcommand{\mbox{const}}{\mbox{const}} \newcommand{\varepsilon}{\varepsilon} \newcommand{\upsilon}{\upsilon} \newcommand{\zeta}{\zeta} \newcommand{\bar\zeta}{\bar\zeta} \newcommand{\bar\chi}{\bar\chi} \newcommand{\omega}{\omega} \newcommand{\triangle}{\triangle} \newcommand{\hyge}[4]{{}_2 F_1\left(#1,#2,#3;#4\right)} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\begin{eqnarray}}{\begin{eqnarray}} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\begin{eqnarray}}{\begin{eqnarray}} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\sect}[1]{Section~\ref{#1}} \newcommand{\frac{1}{2}}{\frac{1}{2}} \newcommand{\eq}[1]{(\ref{#1})} \newcommand{\fig}[1]{Figure~\ref{#1}} 
\newtheorem{theorem}{Theorem} \newtheorem{conj}{Conjecture} \theoremstyle{remark} \newtheorem{rem}{Remark} \renewcommand{\Re}{\operatorname{Re}} \renewcommand{\Im}{\operatorname{Im}} \newcommand{\minPlusOne}[1]{ \min\!\left(#1\right) + 1 } \newcommand{\nPlusOne}[1]{ #1+1 } \addtolength{\topmargin}{-0.5pc} \addtolength{\textheight}{1.pc} \begin{document} \begin{titlepage} \begin{flushright} \phantom{arXiv:yymm.nnnn} \end{flushright} \vspace{0cm} \begin{center} {\LARGE\bf Maximally rotating waves in AdS\vspace{2mm}\\ and on spheres} \\ \vskip 15mm {\large Ben Craps,$^{a}$ Oleg Evnin,$^{b,a}$ Vincent Luyten$^{a}$} \vskip 7mm {\em $^a$ Theoretische Natuurkunde, Vrije Universiteit Brussel (VUB) and\\ The International Solvay Institutes, Brussels, Belgium} \vskip 3mm {\em $^b$ Department of Physics, Faculty of Science, Chulalongkorn University, Bangkok, Thailand} \vskip 7mm {\small\noindent {\tt [email protected], [email protected], [email protected]}} \vskip 10mm \end{center} \vspace{2cm} \begin{center} {\bf ABSTRACT}\vspace{3mm} \end{center} We study the cubic wave equation in AdS$_{d+1}$ (and a closely related cubic wave equation on $S^3$) in a weakly nonlinear regime. Via time-averaging, these systems are accurately described by simplified infinite-dimensional quartic Hamiltonian systems, whose structure is mandated by the fully resonant spectrum of linearized perturbations. The maximally rotating sector, comprising only the modes of maximal angular momentum at each frequency level, consistently decouples in the weakly nonlinear regime. The Hamiltonian systems obtained by this decoupling display remarkable periodic return behaviors closely analogous to what has been demonstrated in recent literature for a few other related equations (the cubic Szeg\H o equation, the conformal flow, the LLL equation). This suggests a powerful underlying analytic structure, such as integrability. 
We comment on the connection of our considerations to the Gross-Pitaevskii equation for harmonically trapped Bose-Einstein condensates. \vfill \end{titlepage} \section{Introduction} Dynamics of nonlinear waves in confined domains is a fascinating subject, since (unlike in scattering situations) the interactions are never effectively cut off by the wave dispersal to infinity, resulting in sophisticated behaviors. Weakly turbulent phenomena, whereby nonlinearities transfer energy to progressively shorter wavelengths, are of particular interest in this setting. When nonlinearities are introduced to systems whose linearized spectra of frequencies are perfectly resonant (all frequencies are commensurate), the sophisticated behaviors due to repeated wave scattering in the confined domain survive down to arbitrarily small magnitudes of the nonlinear interactions, provided that one waits long enough. (The transfer of energy between linearized normal modes becomes slow when the nonlinearities are weak.) A number of equations of mathematical physics display this feature. For instance, the nonlinear dynamics of small perturbations of the Anti-de Sitter spacetime has attracted a lot of attention in recent years (starting with \cite{BR}; for a review see \cite{review}). The same features are shared by nonlinear wave equations in AdS and on spheres, which will be the main subject of our present investigation. As we will show, these equations can also be viewed as a relativistic version of the Gross-Pitaevskii equation describing Bose-Einstein condensates in a harmonic trap (reviews can be found in \cite{BEC1,BEC2,BEC3}). This Gross-Pitaevskii equation also represents waves in a confined domain with a perfectly resonant spectrum of linear frequencies. As a first step in analyzing nonlinear dynamics in confined domains for systems with resonant frequency spectra, it is natural to focus on weakly nonlinear regimes.
Naive perturbative expansions break down due to so-called secular terms, and have to be replaced by alternative perturbative techniques. A number of such techniques are known, based on the application of multiscale analysis, time-averaging or renormalization group resummation (for a textbook treatment, see \cite{murdock}; for discussions in the context of the AdS stability problem, see \cite{Balasubramanian:2014cja,CEV1,CEV2}). As an output of these methods, one obtains a simplified infinite-dimensional {\it flow system} (also known as the resonant or effective system) describing slow energy transfer between the linearized modes due to resonant interactions. Flow systems arising from the sort of weakly nonlinear analysis described above are often more structured than the original equations from which they descend; for example, they may have extra conservation laws \cite{BKS1,CEV2}. For some equations, the flow system may be simple enough to admit explicit analytic solution. For instance, both the conformal flow equation arising from the cubic wave equation on a 3-sphere, and the Lowest Landau Level (LLL) equation \cite{GHT,GT} arising from the Gross-Pitaevskii equation for harmonically trapped Bose-Einstein condensates, admit analytic solutions in which the linearized normal mode amplitudes exhibit exact periodic returns to the original configuration \cite{BCEHLM,ABCE}. Such remarkable solutions are likely to imply a deeper structure and allude to integrability. In fact, both of these flow equations look like more complicated generalizations of the cubic Szeg\H o equation, an integrable system designed in the mathematical literature \cite{GG} as a solvable model of weak turbulence. In this article, our aim is to develop a series of flow systems arising from weakly nonlinear wave equations in AdS spacetime and on spheres.
These flows generalize the conformal flow considered in \cite{BCEHLM} by treating wave equations without spherical symmetry, whereas the considerations of \cite{BCEHLM} were specific to perturbations on a 3-sphere rotationally invariant about one picked point. (We note that attempts to analyze similar problems for the considerably more involved related case of gravitational perturbations of AdS spacetime have recently intensified \cite{asymm1,asymm2,asymm3,asymm4}.) The flows we derive are also closely related to the LLL equation \cite{ABCE}, since they arise from focusing on maximally rotating modes (modes of maximal angular momentum from each frequency level), and there is furthermore a relation between the Gross-Pitaevskii equation and the wave equations we consider, as we explain in the discussion section. Remarkably, we find that the flow equations we derive display periodic perfect returns of the normal mode amplitude spectrum to the initial configuration, analogous to what has been known for the cubic Szeg\H o equation \cite{GG}, the conformal flow \cite{BCEHLM} and the LLL equation \cite{ABCE}. While the systems we are considering are interesting in their own right from the nonlinear dynamics perspective, nonlinear wave equations in AdS spacetime have also surfaced in the context of gravitational holography research. In string-theory-derived versions of the AdS/CFT correspondence, matter fields in the AdS bulk backreact on the metric, but this is not always the case in so-called bottom-up approaches to holography. For simplicity, one often starts by studying a regime in which the bulk matter fields of interest do not backreact on the bulk geometry. 
Such probe approximations have been used, for instance, in studies of holographic QCD \cite{Karch:2002sh, Sakai:2004cn} (in the limit in which the number of flavors is much smaller than the number of colors), holographic superconductors \cite{Hartnoll:2008vx, Nishioka:2009zj} (in a limit in which the scalar operator that condenses has large charge), and holographic quantum quenches \cite{Basu:2011ft, Basu:2012gg} (in a limit in which a bulk scalar field has large self-coupling). The paper is organized as follows. In section~\ref{sec:2}, we mainly study the conformally invariant cubic wave equation on the Einstein cylinder spacetime $R\times S^3$. We review the procedure of time-averaging, show that the resulting flow equations allow a consistent truncation to the maximally rotating sector, and construct exact solutions describing remarkable periodic returns. We also show that a similar attempt to construct periodic solutions fails for other dimensions. In section~\ref{sec:3}, we discuss the case of AdS, where the equations are a bit more involved, but where we are able to construct periodic return solutions in any dimension and for any value of the scalar field mass. In section~\ref{sec:4}, we conclude with a discussion of the significance of our results and possible further implications. In particular, we point out that the Gross-Pitaevskii equation with a harmonic potential can be viewed as a nonrelativistic limit of the cubic wave equation in AdS, which makes contact with a recent study of the LLL equation in \cite{ABCE}. \section{Weakly nonlinear dynamics of maximally rotating\\ perturbations on spheres}\label{sec:2} We start by considering the wave equations on spatial spheres $S^d$, which correspond to the Einstein cylinder spacetime $R\times S^d$. We shall first specialize to $d=3$, the case for which the conformal flow of \cite{BCEHLM} was derived (in this dimension, the cubic wave equation enjoys symmetry enhancement to the full conformal group). 
We shall briefly comment on other dimensions at the end. The relevant metric is (we set the 3-sphere radius to 1): \begin{equation} \label{eq_metricS3} ds^2=-dt^2+d\chi^2+\sin^2{\chi}(d\theta^2+\sin^2{\theta}\,d\varphi^2). \end{equation} We choose to work with a complex scalar field $\phi$ for reasons that will become apparent below. The conformally invariant cubic wave equation is \begin{equation} \label{eq_waveS3} -\partial_t^2\phi+\Delta_{S^3}\phi-\phi=|\phi|^2\phi, \end{equation} where $\Delta_{S^3}$ is the 3-sphere Laplacian given by \begin{equation} \Delta_{S^3}=\frac{1}{\sin^2{\chi}}\partial_{\chi}\left(\sin^2{\chi}\partial_{\chi}\right)+\frac{1}{\sin^2{\chi}}\Delta_{S^2},\qquad \Delta_{S^2}=\frac{1}{\sin{\theta}}\partial_{\theta}\left(\sin{\theta}\partial_{\theta}\right)+\frac{1}{\sin^2{\theta}}\partial_{\varphi}^2. \end{equation} Note that if $\phi$ is replaced by a real field, this becomes identical to the wave equation treated in \cite{BCEHLM}. The above wave equation can be converted into an infinite set of coupled oscillators by expanding $\phi$ in the basis of (hyper)spherical harmonics on $S^3$ \begin{equation} \phi(t,\chi,\theta,\varphi)=\sum_{n=0}^{\infty}\sum_{l=0}^n\sum_{k=-l}^{l}c_{nlk}(t)Y_{nlk}(\chi,\theta,\varphi), \end{equation} where the $Y_{nlk}$ satisfy \begin{equation} \Delta_{S^3}Y_{nlk} =-n(n+2)Y_{nlk} , \end{equation} and are normalised to one, such that \eqref{eq_waveS3} becomes \begin{equation} \label{eq_oscS3} \ddot{c}_{n_1l_1k_1}+(n_1+1)^2c_{n_1l_1k_1} =-\hspace{-0.2cm}\sum_{n_2l_2k_2}\sum_{n_3l_3k_3}\sum_{n_4l_4k_4}C_{n_1l_1k_1\ldots n_4l_4k_4}\bar{c}_{n_2l_2k_2}c_{n_3l_3k_3}c_{n_4l_4k_4}. \end{equation} The interaction coefficients $C_{n_1l_1k_1\ldots n_4l_4k_4}$ are given by \begin{equation} \label{eq_C_S3} C_{n_1l_1k_1\ldots n_4l_4k_4}=\int d\Omega_3\bar{Y}_{n_1l_1k_1}\bar{Y}_{n_2l_2k_2}Y_{n_3l_3k_3}Y_{n_4l_4k_4}, \end{equation} where $d\Omega_3=\sin^2\chi \sin\theta\, d\chi\, d\theta\, d\varphi$ is the invariant measure on the 3-sphere.
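As a quick consistency check of these conventions (our own illustration, not part of the paper), the Laplacian eigenvalue equation can be verified symbolically for a sample harmonic with $n=l=k=1$, whose angular profile is proportional to $\sin\chi\sin\theta\, e^{-i\varphi}$:

```python
# Symbolic check of the S^3 Laplacian eigenvalue  Delta_{S^3} Y = -n(n+2) Y
# for the sample mode n = l = k = 1, profile ~ sin(chi) sin(theta) e^{-i varphi}
# (overall normalization is irrelevant for the eigenvalue).
import sympy as sp

chi, theta, varphi = sp.symbols('chi theta varphi')

def lap_S2(f):
    # 2-sphere Laplacian, as given in the text
    return (sp.diff(sp.sin(theta)*sp.diff(f, theta), theta)/sp.sin(theta)
            + sp.diff(f, varphi, 2)/sp.sin(theta)**2)

def lap_S3(f):
    # 3-sphere Laplacian, as given in the text
    return (sp.diff(sp.sin(chi)**2*sp.diff(f, chi), chi)/sp.sin(chi)**2
            + lap_S2(f)/sp.sin(chi)**2)

n = 1
Y = sp.sin(chi)*sp.sin(theta)*sp.exp(-sp.I*varphi)
assert sp.simplify(lap_S3(Y) + n*(n + 2)*Y) == 0
```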
The hyperspherical harmonics on $S^3$ can be expressed in terms of the familiar spherical harmonics on $S^2$ by \begin{equation} Y_{nlk}\left(\chi,\theta,\varphi\right)=\sqrt{\frac{2(2l)!!(n+1)(n-l)!(2l+1)!}{\pi(2l+1)!!(n+l+1)!}}\,\sin^l{\chi}\,C^{(l+1)}_{n-l}\left(\cos{\chi}\right)\,Y_{lk}\left(\theta,\varphi\right), \end{equation} where the $C_n^{(\lambda)}(x)$ are Gegenbauer polynomials of degree $n$ with the measure parameter $\lambda$. They form a system of orthogonal polynomials on the interval $(-1,1)$ with respect to the measure $(1-x^2)^{\lambda-\frac{1}{2}}$. Note that the perfectly resonant spectrum of frequencies in (\ref{eq_oscS3}) is due to the conformal value of the mass in (\ref{eq_waveS3}) and would not be present for generic masses. (By contrast, in the AdS spacetimes we shall focus on in the next section, the spectrum is always fully resonant.) A fully resonant spectrum (more specifically, the property that the difference between any two frequencies is an integer) is crucial to maintain the weakly nonlinear approximation in the form we are about to derive. The solutions to the linearized system corresponding to \eqref{eq_oscS3}, where the right-hand side has been replaced by zero, are simply \begin{equation} c_{nlk}^{\mbox{\tiny linear}}=A_{nlk}e^{i(n+1) t}+B_{nlk}e^{-i(n+1) t}, \end{equation} where $A_{nlk}$ and $B_{nlk}$ are arbitrary complex constants. One could then try to treat the non-linearity perturbatively by performing a weak field expansion of the form \begin{equation} \label{eq_expphi} \phi=\varepsilon\phi_{\mbox{\tiny linear}}+\varepsilon^3\phi_{\mbox{\tiny correction}}+\ldots, \end{equation} but $\phi_{\mbox{\tiny correction}}$ will grow in time, due to so-called secular terms, and invalidate the above expansion at times of order $1/\varepsilon^2$.
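The $1/\varepsilon^2$ time scale can be made concrete in a single-mode toy model (our illustration; the mode-index structure is suppressed): for $\ddot c + c = -|c|^2 c$, a rotating-wave solution of amplitude $\rho$ has frequency $\sqrt{1+\rho^2}\approx 1+\rho^2/2$, so the phase of the linearized solution $e^{it}$ is off by $O(1)$ once $t\sim 1/\rho^2$, while the amplitude stays nearly constant — precisely the slow resonant drift captured by the averaged description below.

```python
# Toy single-mode model of the secular breakdown scale (our illustration):
#   c'' + c = -|c|^2 c,  c complex,  c(0) = rho,  c'(0) = i rho.
# Averaging predicts a slow phase drift ~ (rho^2/2) t relative to the linear
# solution e^{it}, with |c| staying close to rho.
import numpy as np
from scipy.integrate import solve_ivp

rho, T = 0.1, 100.0

def rhs(t, y):
    c = y[0] + 1j*y[1]
    cdot = y[2] + 1j*y[3]
    cddot = -c - abs(c)**2*c
    return [cdot.real, cdot.imag, cddot.real, cddot.imag]

sol = solve_ivp(rhs, (0.0, T), [rho, 0.0, 0.0, rho], rtol=1e-10, atol=1e-12)
c_T = sol.y[0, -1] + 1j*sol.y[1, -1]

# amplitude nearly conserved over t ~ 1/rho^2 ...
assert abs(abs(c_T) - rho) < 5e-3

# ... while the accumulated phase drift matches the averaged estimate
drift = np.angle(c_T*np.exp(-1j*T))   # phase relative to the linear e^{it}
assert abs(drift - rho**2*T/2) < 0.1
```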
This perturbative expansion can be resummed (improved) in different ways, leading to {\it flow equations} that accurately describe slow energy transfer between the modes due to nonlinearities, while the fast oscillations of the original linearized modes are `integrated out.' Rather than presenting this entire procedure, it is quicker (and equivalent) to directly factor out fast oscillations using a method known as time-averaging. (Some pedagogical comments on different approaches to improving perturbation theory can be found in \cite{CEV1,CEV2}.) To employ time-averaging we first change variables from $c$ and $\dot c$ to the complex-valued functions of time $\alpha_{nlk}(t)$ and $\beta_{nlk}(t)$ \begin{align} c_{nlk}&=\varepsilon\left(\alpha_{nlk} e^{i(n+1) t}+\beta_{nlk} e^{-i(n+1) t}\right),\label{eq_cS3}\\ \dot{c}_{nlk}&=i\varepsilon(n+1)(\alpha_{nlk} e^{i(n+1) t}-\beta_{nlk} e^{-i(n+1) t}).\label{eq_cdotS3} \end{align} Combining these, we find for $\alpha_{nlk}$ \begin{equation} \varepsilon\alpha_{nlk} e^{i(n+1) t}=\frac{1}{2}\left(c_{nlk}+\frac{\dot{c}_{nlk}}{i(n+1)}\right). \end{equation} Differentiating and taking into account \eqref{eq_oscS3}, we find: \begin{equation} \label{eq_preaveragingS3alpha} 2i\varepsilon(n_1+1)\dot{\alpha}_{n_1l_1k_1}=-\hspace{-2mm}\sum_{n_2l_2k_2}\sum_{n_3l_3k_3}\sum_{n_4l_4k_4}C_{n_1l_1k_1\ldots n_4l_4k_4}\bar{c}_{n_2l_2k_2}c_{n_3l_3k_3}c_{n_4l_4k_4} e^{-i(n_1+1) t}. \end{equation} Analogously, \begin{equation} \label{eq_preaveragingS3beta} 2i\varepsilon(n_1+1)\dot{\beta}_{n_1l_1k_1}=\hspace{-2mm}\sum_{n_2l_2k_2}\sum_{n_3l_3k_3}\sum_{n_4l_4k_4}C_{n_1l_1k_1\ldots n_4l_4k_4}\bar{c}_{n_2l_2k_2}c_{n_3l_3k_3}c_{n_4l_4k_4} e^{i(n_1+1) t}. \end{equation} In the above equations, the $c_{nlk}$ should be expressed through $\alpha_{nlk}$ and $\beta_{nlk}$.
This leads to a collection of terms on the right-hand side oscillating as $e^{-i\Omega t}$, where $\Omega$ is $(n_1+1)\pm (n_2+1)\pm (n_3+1)\pm (n_4+1)$ for (\ref{eq_preaveragingS3alpha}) and $-(n_1+1)\pm (n_2+1)\pm (n_3+1)\pm (n_4+1)$ for (\ref{eq_preaveragingS3beta}). All the plus-minus signs are independent. We call the terms with $\Omega=0$ resonant interactions and those with $\Omega\neq 0$ non-resonant. Rewriting (\ref{eq_preaveragingS3alpha}-\ref{eq_preaveragingS3beta}) in terms of the slow time $\tau=\varepsilon^2t$, the dependence on $\varepsilon$ drops out, except that the non-resonant terms are now proportional to $e^{-i\Omega\tau/\varepsilon^2}$. This means that in the weak field regime, $\varepsilon \rightarrow 0$, they become highly oscillatory and time-averaging is equivalent to simply discarding all non-resonant terms. It can be proved that the resulting time-averaged system with discarded non-resonant terms accurately approximates the original system on time scales of order $1/\varepsilon^2$ for small $\varepsilon$ \cite{murdock}. After the non-resonant terms in (\ref{eq_preaveragingS3alpha}-\ref{eq_preaveragingS3beta}) have been discarded, one further simplification occurs. There are solutions to the resonance condition $\Omega=0$ of the form $n_1=2+n_2+n_3+n_4$ (or other variants obtained by permuting the $n$'s). However, all the interaction coefficients $C$ for such terms will vanish. This is completely analogous to the usual angular momentum selection rules in quantum mechanics. The integral in (\ref{eq_C_S3}) is over the invariant measure on the sphere, and it can only be nonzero if the direct product of the representations of the sphere isometries furnished by the four factors in the integrand contains the identity representation. The spherical harmonics are in the rank $n$ traceless fully symmetric tensor representations. 
It is impossible to contract four traceless fully symmetric tensors of ranks $n_1$, $n_2$, $n_3$, $n_4$ satisfying $n_1=2+n_2+n_3+n_4$ to obtain a scalar. Therefore, the corresponding $C$-coefficients vanish. The only contributing solutions to the resonance condition $\Omega=0$ are thus of the form $n_1+n_2=n_3+n_4$ (and permutations of the $n$'s). Such selection rules are well known from analogous considerations in AdS \cite{CEV1,CEV2,Yang,EK,EN}. Putting everything together, and taking into account the index permutation symmetries of the coefficients $C$ given by (\ref{eq_C_S3}), we obtain the following time-averaged equations for $\alpha$: \begin{equation} \label{eq_preflowS3alpha} \begin{split} 2i(n_1+1)\frac{d\alpha_{n_1l_1k_1}}{d\tau}=&-\underset{n_1+n_2=n_3+n_4}{\sum_{n_2l_2k_2}\sum_{n_3l_3k_3}\sum_{n_4l_4k_4}}C_{n_1l_1k_1\ldots n_4l_4k_4}\bar{\alpha}_{n_2l_2k_2}\alpha_{n_3l_3k_3}\alpha_{n_4l_4k_4} \\ &-2\underset{n_1+n_3=n_2+n_4}{\sum_{n_2l_2k_2}\sum_{n_3l_3k_3}\sum_{n_4l_4k_4}}C_{n_1l_1k_1\ldots n_4l_4k_4}\bar{\beta}_{n_2l_2k_2}\beta_{n_3l_3k_3}\alpha_{n_4l_4k_4}. \end{split} \end{equation} As we remarked, this equation accurately approximates the original system on time scales $\mathcal{O}(\varepsilon^{-2})$, in the sense that the difference between exact solutions and solutions obtained from the time-averaged system uniformly becomes arbitrarily small on such intervals for small $\varepsilon$ \cite{murdock}. The time-averaged system for $\beta$ becomes \begin{equation} \label{eq_preflowS3beta} \begin{split} 2i(n_1+1)\frac{d\beta_{n_1l_1k_1}}{d\tau}=&-\underset{n_1+n_2=n_3+n_4}{\sum_{n_2l_2k_2}\sum_{n_3l_3k_3}\sum_{n_4l_4k_4}}C_{n_1l_1k_1\ldots n_4l_4k_4}\bar{\beta}_{n_2l_2k_2}\beta_{n_3l_3k_3}\beta_{n_4l_4k_4} \\ &-2\underset{n_1+n_3=n_2+n_4}{\sum_{n_2l_2k_2}\sum_{n_3l_3k_3}\sum_{n_4l_4k_4}}C_{n_1l_1k_1\ldots n_4l_4k_4}\bar{\alpha}_{n_2l_2k_2}\alpha_{n_3l_3k_3}\beta_{n_4l_4k_4}. \end{split} \end{equation} We observe that time-averaging has enhanced the symmetry.
While the original system possessed a $U(1)$ symmetry rotating $\alpha$ and $\beta$ by the same common phase (the usual $U(1)$ symmetry of a charged complex scalar), the time-averaged system allows rotation of all $\alpha$'s and all $\beta$'s by two independent common phases, thus giving two $U(1)$ groups. This is closely related to the appearance of a new $U(1)$ symmetry in the time-averaged system for real scalar fields described in the literature \cite{BKS1,CEV2} and resulting in extra conservation laws. In close relation to this, $\beta$ can be consistently set to zero in the time-averaged equations, resulting in the following system containing only $\alpha$: \begin{equation} \label{eq_flowS3} i(n+1)\dot{\alpha}_{\indo}=\underset{n_1+n_2=n_3+n_4}{\sum_{n_2l_2k_2}\sum_{n_3l_3k_3}\sum_{n_4l_4k_4}}C_{n_1l_1k_1\ldots n_4l_4k_4}\bar{\alpha}_{\indt}\alpha_{\indth}\alpha_{\indf}. \end{equation} (We have rescaled $\tau$ to eliminate numerical factors.) We shall henceforth focus on the sector described by this equation (and disregard the $\beta$-variables). Equation (\ref{eq_flowS3}) is still rather complicated, and a reasonable strategy is to look for smaller decoupling subsectors, which can be analyzed independently. One example is the spherically symmetric scalar field, which amounts to retaining only the modes with $l=k=0$. The resulting equation is the conformal flow, studied previously in \cite{BCEHLM} and arising there from a real scalar field wave equation. In \cite{BCEHLM}, a range of remarkable properties was demonstrated for the spherically symmetric truncation of (\ref{eq_flowS3}), including explicit analytic solutions for which $|\alpha(t)|$ is a periodic function of time with a common period for all modes. The main purpose of this article is to demonstrate that a few distinct systems sharing the same property emerge from other consistent truncations of (\ref{eq_flowS3}) and other related equations.
The specific sector of (\ref{eq_flowS3}) we want to focus on can be called the maximally rotating sector. These are the modes with $n=l=k$, i.e., the modes of maximal angular momentum for each frequency. The decoupling can be seen as follows. First note that the mode functions are all proportional to $e^{-ik\varphi}$, where $k$ is the number of units of angular momentum along the polar axis. All three integrals in the interaction coefficients (\ref{eq_C_S3}) factorize, so that the integral over $\varphi$ ensures that only coefficients with $k_1+k_2=k_3+k_4$ are non-zero (this is just angular momentum conservation in mode interactions). Consider the situation where the only excited modes are maximally rotating. Then the time-derivatives of the non-maximally rotating modes are zero, as can be seen from \eqref{eq_flowS3}: only terms with $n_2=k_2$, $n_3=k_3$, $n_4=k_4$ are non-zero, but the summation restriction and properties of $C$ then impose $n_1+n_2=n_3+n_4$ and $k_1+n_2=n_3+n_4$, so that $k_1$ (and hence $l_1$) equals $n_1$. Therefore, non-maximally rotating modes are never excited. In the rest of this section we focus on the dynamics in the maximally rotating subsector. (Note that restriction to the maximally rotating sector is incompatible with reality of the field $\phi$. That is the reason why we had to start with a complex field.) The maximally rotating spherical harmonics are given by \begin{equation} \label{eq_maxspinmodeS3} e_n(\chi,\theta,\varphi)=Y_{nnn}(\chi,\theta,\varphi)=\frac{\sqrt{n+1}}{\sqrt{2}\pi}\sin^n{\chi}\sin^n{\theta}e^{-in\varphi}. \end{equation} Correspondingly, the interaction coefficients evaluate to (we discard mode number independent numerical factors as they can be absorbed in a rescaling of time): \begin{equation} C_{nmkl}=\int d\Omega_3 e_ne_me_ke_l =\frac{\sqrt{(n+1)(m+1)(k+1)(l+1)}}{n+m+1}.
\end{equation} The `maximally rotating flow' equation is then \begin{equation} \label{eq_maxspinflow} i(n+1)\dot{\alpha}_n=\sum_{j=0}^{\infty}\sum_{k=0}^{n+j}\frac{\sqrt{(n+1)(j+1)(k+1)(n+j-k+1)}}{n+j+1}\bar{\alpha}_j\alpha_k\alpha_{n+j-k}. \end{equation} In addition to the scaling symmetry $\alpha \rightarrow \lambda \alpha(\tau/\lambda^2)$, this equation possesses two further symmetries (where $\xi_1$ and $\xi_2$ are real parameters), \begin{equation} \alpha_{n} \rightarrow e^{i\xi_1}\alpha_{n},\qquad \alpha_{n} \rightarrow e^{in\xi_2}\alpha_{n}, \end{equation} giving rise to two conserved quantities \begin{equation} Q=\sum_{n=0}^{\infty}(n+1)|\alpha_n|^2,\qquad E=\sum_{n=0}^{\infty}(n+1)^2|\alpha_n|^2. \end{equation} To simplify (\ref{eq_maxspinflow}), introduce \begin{equation} \beta_n\equiv \sqrt{n+1}\,\alpha_n \end{equation} (unrelated to the modes $\beta_{nlk}$ appearing at early stages of our derivations that we have consistently set to zero) and rewrite it as \begin{equation} \label{eq_maxspinflowbeta} i\dot{\beta}_n=\sum_{j=0}^{\infty}\sum_{k=0}^{n+j}\,\frac{\bar\beta_j\beta_k\beta_{n+j-k}}{n+j+1}. \end{equation} Analogously to the (spherically symmetric) conformal flow of \cite{BCEHLM}, we try to find low-dimensional invariant submanifolds of (\ref{eq_maxspinflowbeta}) by making an ansatz depending on only a few parameters and seeing whether it closes. Motivated by the spherically symmetric case, we choose \begin{equation}\label{s3ansatz} \beta_n=(b+an)p^n, \end{equation} where $a$, $b$ and $p$ are complex-valued functions of time. Substitution in \eqref{eq_maxspinflowbeta} gives \begin{equation}\label{substitution} i\left(\dot{b}+n\left(\dot{a}+b\frac{\dot{p}}{p}\right)+n^2a\frac{\dot{p}}{p}\right)=\sum_{j=0}^{\infty}\frac{(\bar{b}+j\bar{a})|p|^{2j}}{n+j+1}\sum_{k=0}^{n+j}(b+ka)(b+(n+j-k)a). \end{equation} Note that the factor of $p^n$ has consistently cancelled between the two sides.
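Before restricting to the ansatz, a quick numerical sanity check is possible: a finite truncation of (\ref{eq_maxspinflowbeta}) retains both $U(1)$ symmetries, and should therefore preserve $Q=\sum_n|\beta_n|^2$ and $E=\sum_n(n+1)|\beta_n|^2$. The sketch below is our own verification; the mode cutoff, time step and initial data are arbitrary illustrative choices.

```python
NMAX, DT, STEPS = 12, 0.01, 100   # arbitrary truncation and integration choices

def rhs(beta):
    # i d(beta_n)/dtau = sum_j sum_{k=0}^{n+j} conj(beta_j) beta_k beta_{n+j-k} / (n+j+1)
    out = []
    for n in range(NMAX):
        s = 0j
        for j in range(NMAX):
            for k in range(n + j + 1):
                l = n + j - k
                if k < NMAX and l < NMAX:
                    s += beta[j].conjugate() * beta[k] * beta[l] / (n + j + 1)
        out.append(-1j * s)
    return out

def rk4_step(beta):
    # one classical Runge-Kutta step for the truncated flow
    k1 = rhs(beta)
    k2 = rhs([b + 0.5 * DT * k for b, k in zip(beta, k1)])
    k3 = rhs([b + 0.5 * DT * k for b, k in zip(beta, k2)])
    k4 = rhs([b + DT * k for b, k in zip(beta, k3)])
    return [b + DT * (p + 2 * q + 2 * r + s) / 6
            for b, p, q, r, s in zip(beta, k1, k2, k3, k4)]

beta = [(0.5 ** n) * (1 + 0.3j * n) for n in range(NMAX)]   # arbitrary smooth data
Q0 = sum(abs(b) ** 2 for b in beta)
E0 = sum((n + 1) * abs(b) ** 2 for n, b in enumerate(beta))
for _ in range(STEPS):
    beta = rk4_step(beta)
Q1 = sum(abs(b) ** 2 for b in beta)
E1 = sum((n + 1) * abs(b) ** 2 for n, b in enumerate(beta))
```

The drift in $Q$ and $E$ stays at the level of the integrator error while the amplitudes themselves evolve by order one.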
We will have a consistent ansatz if the right-hand side of (\ref{substitution}) is a quadratic polynomial in $n$ after the summations have been performed. We first use \begin{equation} \sum_{k=0}^{N}k = \frac{N(N+1)}{2}, \qquad \sum_{k=0}^{N}k^2 = \frac{N(N+1)(2N+1)}{6}. \end{equation} Note that these Faulhaber's sums are divisible by $N+1$, so that the factor of $1/(n+j+1)$ cancels, leaving at most quadratic terms in $n$. This guarantees the closure of our ansatz. It remains to carry out the sums over $j$ using \begin{align} \sum_{j=0}^{\infty}j^Ax^j&=(x\partial_x)^A\frac{1}{1-x}.\label{eq_geomsum} \end{align} Setting the coefficients of equal powers of $n$ equal we arrive at a system of three equations for three variables, which can be conveniently rewritten as \begin{align} \frac{i\dot{a}}{1+y} &= -\frac{a^2\bar{b}}{6}+\frac{5a|b|^2}{6}+y\left(\frac{5|a|^2b}{6}+\frac{a|a|^2}{6}+\frac{a^2\bar{b}}{3}\right)+2y^2\frac{a|a|^2}{3}, \label{eq_adot}\\ \frac{i\dot{b}}{1+y} &= b|b|^2+y\left(\bar{a} b^2+a|b|^2+\frac{a^2\bar{b}}{3}+|a|^2b-\frac{a|a|^2}{6}+\frac{b|b|^2}{6}\right)\nonumber\\ &\quad +2y^2\left(|a|^2b-\frac{a|a|^2}{6}+\frac{a^2\bar{b}}{6}+3\frac{b|b|^2}{6}\right)+6y^3\frac{b|b|^2}{6},\label{eq_bdot}\\ \frac{i\dot{p}}{1+y} &=\frac{p}{6}\left(a\bar{b} + y|a|^2\right) \label{eq_pdot}, \end{align} where we have introduced \begin{equation} \label{ydef} y=\frac{|p|^2}{1-|p|^2}. \end{equation} We will now explicitly solve the dynamics on this invariant subspace. Equation \eqref{eq_pdot} can be converted into an equation for $y$: \begin{equation} \label{eq_ydot} \dot{y}=\frac{1}{3}y(1+y)^2\Im{(a\bar{b})}. \end{equation} Using conservation laws, we will be able to reduce the system to a single equation for $y$. 
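The cancellation of the $1/(n+j+1)$ factor can be made fully explicit: the Faulhaber sums give $\sum_{k=0}^{N}(b+ka)\left(b+(N-k)a\right)=(N+1)\left[b^{2}+abN+\tfrac{1}{6}a^{2}N(N-1)\right]$ with $N=n+j$, which is manifestly divisible by $N+1$ and at most quadratic in $n$. A short exact-arithmetic sketch of this identity (our own check; the rational test values are arbitrary):

```python
from fractions import Fraction

def lhs(N, a, b):
    # direct k-summation from the substituted ansatz
    return sum((b + k * a) * (b + (N - k) * a) for k in range(N + 1))

def rhs(N, a, b):
    # closed form from the Faulhaber sums; note the overall factor (N+1)
    return (N + 1) * (b * b + a * b * N + a * a * Fraction(N * (N - 1), 6))

a, b = Fraction(3, 2), Fraction(-7, 5)   # arbitrary rational test values
ok = all(lhs(N, a, b) == rhs(N, a, b) for N in range(40))
```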
With the sums \eqref{eq_geomsum}, the conserved quantities $Q$ and $E$ take the following form in terms of the parameters of our ansatz: \begin{align} Q&=(1+y)\left[|b|^2+2\Re{(a\bar{b})}y+|a|^2y(1+2y)\right], \label{eq_Qansatz}\\ E&=(1+y)^2\left[|b|^2+4\Re{(a\bar{b})}y+2|a|^2y(1+3y)\right].\label{eq_Eansatz} \end{align} The Hamiltonian gives the third independent conservation law, but it is more convenient to derive another related conserved quantity quadratic in $a$. We first write \begin{equation} \frac{d|a|^2}{d\tau}=-\frac{1}{3}|a|^2(1+y)(1+3y)\Im{(a\bar{b})}. \end{equation} Combined with \eqref{eq_ydot}, this ensures the conservation of \begin{equation} \label{eq_S} S=|a|^2y(1+y)^2. \end{equation} Expressing $|b|^2,|a|^2$ and $\Re{(a\bar{b})}$ through $Q,E,S$ and $y$ we get \begin{align} |a|^2&=\frac{S}{y(1+y)^2},\label{aabs}\\ \Re{(a\bar{b})}&=\frac{E-Q(1+y)-S(1+4y)}{2y(1+y)^2},\\ |b|^2&=\frac{2Q(1+y)-E+2Sy}{(1+y)^2}.\label{babs} \end{align} Inserting these expressions in \eqref{eq_ydot} we find the following equation for $y$ \begin{equation} \dot{y}^2=\frac{1}{36}\left(-(E-Q-S)^2+2((2E-Q-4S)S+(E-Q)Q)y-(Q^2+8S^2)y^2\right), \end{equation} which expresses the energy conservation for an ordinary one-dimensional harmonic oscillator. This immediately guarantees that all solutions for $y$, and hence for $|p|$, are exactly periodic with period \begin{equation} T=\frac{12\pi}{\sqrt{Q^2+8S^2}}. \end{equation} Equations (\ref{aabs}-\ref{babs}) then guarantee that the same property is shared by $|a|^2$, $|b|^2$ and $\Re(a\bar b)$. From this it follows that the absolute values of the amplitudes $|\alpha_n|$ are exactly periodic with the same common period. We have thus recovered in our maximally rotating sector the periodic behaviors observed in the literature for the conformal flow, the cubic Szeg\H o equation and the LLL equation. We briefly comment on what happens when one tries to generalize the above story to general dimensions $d$.
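As an exact consistency check of this harmonic-oscillator reduction (our own verification sketch, with arbitrary rational sample values of $Q$, $E$, $S$ and $y$), the quadratic right-hand side can be confirmed from $\dot{y}=\tfrac{1}{3}y(1+y)^{2}\Im(a\bar{b})$, the relation $\Im^{2}(a\bar{b})=|a|^{2}|b|^{2}-\Re^{2}(a\bar{b})$, and the expressions for $|a|^{2}$, $\Re(a\bar b)$, $|b|^{2}$ above:

```python
from fractions import Fraction as F

def check(Q, E, S, y):
    A = S / (y * (1 + y) ** 2)                                        # |a|^2
    R = (E - Q * (1 + y) - S * (1 + 4 * y)) / (2 * y * (1 + y) ** 2)  # Re(a*conj(b))
    B = (2 * Q * (1 + y) - E + 2 * S * y) / (1 + y) ** 2              # |b|^2
    # ydot^2 = (1/9) y^2 (1+y)^4 (|a|^2 |b|^2 - Re^2(a*conj(b)))
    lhs = F(1, 9) * y ** 2 * (1 + y) ** 4 * (A * B - R * R)
    rhs = F(1, 36) * (-(E - Q - S) ** 2
                      + 2 * ((2 * E - Q - 4 * S) * S + (E - Q) * Q) * y
                      - (Q ** 2 + 8 * S ** 2) * y ** 2)
    return lhs == rhs

samples = [(F(5), F(11), F(2), F(1, 3)),
           (F(7, 2), F(13), F(3, 4), F(9, 5)),
           (F(1), F(2), F(1, 7), F(4))]
ok = all(check(*v) for v in samples)
```

Since everything is rational, the comparison is exact rather than within floating-point tolerance.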
We will denote the angles on $S^d$ as $\theta_1,\ldots,\theta_{d-1},\varphi$ and collectively as $\Omega$. The metric is most compactly expressed recursively in terms of metrics on lower-dimensional spheres, \begin{equation} d\Omega_d^2=(d\theta_{d-1})^2+\sin^2{\theta_{d-1}}\,d\Omega_{d-1}^2, \end{equation} resulting in a recursion relation for the Laplacian \begin{equation} \Delta_{S^{d}}=\frac{1}{\sin^{d-1}\theta_{d-1}}\partial_{\theta_{d-1}}\left(\sin^{d-1}\theta_{d-1}\partial_{\theta_{d-1}}\right)+\frac{1}{\sin^2\theta_{d-1}}\Delta_{S^{d-1}}. \end{equation} For general $d$, the mass term in the wave equation required to ensure a fully resonant linearized spectrum is $-(d-1)^2\phi/4$, corresponding to the conformal mass. (Note that this reduces to the previous $-\phi$ for $d=3$.) The wave equation is then given by \begin{equation} \label{eq_waveSd} -\partial_t^2\phi+\Delta_{S^d}\phi-\frac{(d-1)^2}{4}\phi=|\phi|^2\phi, \end{equation} and hence the mode functions are now spherical harmonics on $S^d$, which also have a corresponding recursive expression \begin{equation} Y_{nl\mu}\left(\theta_{d-1},\ldots,\theta_1,\varphi\right)=N_{nl}\sin^l\theta_{d-1}C^{(l+\frac{d}{2}-1)}_{n-l}\left(\cos\theta_{d-1}\right)Y_{l\mu}\left(\theta_{d-2},\ldots,\theta_1,\varphi\right), \end{equation} with an appropriate normalization factor $N_{nl}$. We use $\mu$ as a shorthand for all the other indices present in the successively lower-dimensional harmonics. The spherical harmonics are eigenfunctions of the Laplacian on $S^d$ with eigenvalues $-n(n+d-1)$. Hence, the corresponding oscillation frequencies are $\omega_n=\sqrt{n(n+d-1)+(d-1)^2/4}=n+(d-1)/2$, giving a perfectly resonant spectrum. The maximally rotating harmonics are given by \begin{equation} e_n(\theta_1,\ldots,\theta_{d-1},\varphi)=\sqrt{\frac{\G{n+\frac{d+1}{2}}}{\G{n+1}}}\sin^n{\theta_1}\ldots\sin^n{\theta_{d-1}}e^{-in\varphi}.
\end{equation} Repeating the time-averaging procedure with the generalized expressions for the mass and eigenvalues, we find the maximally rotating flow equation for general $d$ \begin{equation} \label{eq_maxspinflowSd} i\left(n+\frac{d-1}{2}\right)\dot{\alpha}_n=\sum_{j=0}^{\infty}\sum_{k=0}^{n+j}C_{njkn+j-k}\bar{\alpha}_j\alpha_k\alpha_{n+j-k}, \end{equation} where the interaction coefficients are given by (again up to irrelevant numerical factors) \begin{equation}\label{sdc} \textstyle C_{nmkl}=\int d\Omega_d e_ne_me_ke_l =\sqrt{\frac{\G{n+\frac{d+1}{2}}}{\G{n+1}}\frac{\G{m+\frac{d+1}{2}}}{\G{m+1}}\frac{\G{k+\frac{d+1}{2}}}{\G{k+1}}\frac{\G{l+\frac{d+1}{2}}}{\G{l+1}}}\frac{\G{n+m+1}}{\G{n+m+\frac{d+1}{2}}}. \end{equation} Here $d\Omega_d=d\varphi\Pi_{l=1}^{d-1}\sin^{d-l}\theta_l d\theta_l$ is the integration measure on $S^d$. The flow equation on $S^d$ generalizes the case of $S^3$ analyzed above, and is also extremely similar to the flow equations in AdS treated in the next section. These parallels suggest that one should try the ansatz \begin{equation} \beta_n\equiv\left(n+\frac{d-1}{2}\right)\sqrt{\frac{\G{n+1}}{\G{n+\frac{d+1}{2}}}}\,\alpha_n=(b+an)p^n, \end{equation} which reduces to (\ref{s3ansatz}) at $d=3$ and simplifies the equations (in particular, Faulhaber's sums again make an appearance). One discovers, however, that evaluation of the sums does not produce the necessary polynomial dependence on $n$, and the ansatz thus fails. We conclude that it is unlikely that the weak field dynamics on $S^d$ displays an invariant manifold analogous to what we have seen on $S^3$ and what we are about to see for AdS of any dimension in the next section. \section{Weakly nonlinear dynamics of maximally rotating\\ perturbations in AdS}\label{sec:3} We now turn to the case of global Anti-de Sitter spacetime AdS$_{d+1}$ with $d$ spatial dimensions, and consider a complex scalar field with a general mass $m$. 
AdS spacetime is remarkable in that the spectrum of frequencies of linear fields is fully resonant for any mass in any dimension (which has a simple explanation in terms of the algebra of AdS isometries). The AdS metric with radius set to $1$ is \begin{equation} ds^2=\frac{1}{\cos^2{x}}\left(-dt^2+dx^2+\sin^2{x}\,d\Omega_{d-1}^2\right), \end{equation} where $d\Omega_{d-1}^2$ is the metric on the $(d-1)$-sphere parametrized in hyperspherical coordinates collectively denoted as $\Omega$. On this spacetime, the complex scalar wave equation for a field of mass $m$ with a cubic non-linearity is \begin{equation}\label{AdSwave} \cos^2{x}\left(-\partial_t^2\phi+\frac{1}{\tan^{d-1}{x}}\partial_x\left(\tan^{d-1}{x}\partial_x\phi\right)+\frac{1}{\sin^2{x}}\Delta_{S^{d-1}}\phi\right)-m^2\phi=|\phi|^2\phi, \end{equation} where $\Delta_{S^{d-1}}$ is the $(d-1)$-sphere Laplacian. The linearized system can be solved by separation of variables. First one computes the mode functions as solutions of the eigenvalue problem \begin{equation} \label{eq_eigenproblemAdS} \left(\frac{1}{\tan^{d-1}{x}}\partial_x\left(\tan^{d-1}{x}\partial_x\right)+\frac{1}{\sin^2{x}}\Delta_{S^{d-1}}-\frac{m^2}{\cos^2{x}}\right)e_{\indices}(x,\Omega)=-\omega_{\indices}^2e_{\indices}(x,\Omega). \end{equation} The expansion of $\phi$ in these mode functions then yields the general linearized solution \begin{equation}\label{AdSlin} \phi_{\mbox{\tiny linear}}(t,x,\Omega)=\sum_{n=0}^{\infty}\sum_{l,k}(A_{nlk}e^{i\omega_{\indices} t}+B_{nlk}e^{-i\omega_{\indices} t})e_{\indices}(x,\Omega), \end{equation} where $A_{nlk}$ and $B_{nlk}$ are arbitrary complex constants.
The explicit form of the mode functions is \begin{equation} \label{eq_modefunctionAdS} e_{\indices}(x,\Omega)=\mathcal{N}_{nlk}\cos^{\delta}{x}\sin^l{x}P_n^{\left(\delta-\frac{d}{2}, l+\frac{d}{2}-1\right)}(-\cos{2x})Y_{lk}(\Omega) \end{equation} and \begin{equation} \label{eq_eigenvalueAdS} \omega_{\indices}=\delta+2n+l, \end{equation} where $\delta=\frac{d}{2}+\sqrt{\frac{d^2}{4}+m^2}$ and $\mathcal{N}_{nlk}$ is a normalisation factor. (Note that the difference of any two frequencies is an integer irrespective of $\delta$.) The $P_n^{(a,b)}(x)$ are the Jacobi polynomials, an orthogonal basis on the interval $(-1,1)$ with respect to the measure $(1-x)^a(1+x)^b$. The $Y_{lk}$ are spherical harmonics in $(d-1)$ dimensions, i.e.\ eigenfunctions of the corresponding sphere Laplacian with eigenvalue $-l(l+d-2)$, and $k$ labels all harmonics contained in a given $l$-multiplet. One can perform a weakly nonlinear analysis of (\ref{AdSwave}) in a manner identical to that of the previous section. After implementing time-averaging, one obtains a system of flow equations describing slow evolution of the complex amplitudes $\alpha_{nlk}(t)$ and $\beta_{nlk}(t)$ descending from the constant amplitudes $A_{nlk}$ and $B_{nlk}$ in the linearized solution (\ref{AdSlin}). Due to selection rules in the interaction coefficients \cite{CEV1,CEV2,Yang,EK,EN}, the flow equations enjoy enhanced symmetries that permit consistently setting all $\beta$'s to zero. Furthermore, the resulting equation for $\alpha$ can be consistently truncated to the maximally rotating sector, comprising modes of maximal angular momentum at each frequency level (this is a consequence of the resonance condition on frequencies of the interacting modes and angular momentum conservation). In the notation of \eqref{eq_eigenvalueAdS}, maximally rotating modes exactly correspond to $n=0$.
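The $n=0$ statement can be probed numerically: at $n=0$ the Jacobi polynomial is constant, the radial profile of (\ref{eq_modefunctionAdS}) reduces to $\cos^{\delta}x\,\sin^{l}x$, and a finite-difference evaluation of the radial operator confirms $\omega=\delta+l$. The sketch below is our own verification; the choices of $d$, $m^{2}$, sample points and step size are purely illustrative.

```python
import math

d = 3                                  # AdS_4: three spatial dimensions (test choice)
m2 = 2.0                               # scalar mass squared (test choice)
delta = d / 2 + math.sqrt(d * d / 4 + m2)

def mode(x, l):
    # n = 0 radial profile of the mode functions: cos^delta(x) sin^l(x)
    return math.cos(x) ** delta * math.sin(x) ** l

def residual(x, l, h=1e-4):
    # radial operator: f'' + (d-1) f'/(sin x cos x) - l(l+d-2) f/sin^2 - m^2 f/cos^2
    f = mode(x, l)
    d1 = (mode(x + h, l) - mode(x - h, l)) / (2 * h)
    d2 = (mode(x + h, l) - 2 * f + mode(x - h, l)) / h ** 2
    rad = d2 + (d - 1) * d1 / (math.sin(x) * math.cos(x))
    ang = -l * (l + d - 2) * f / math.sin(x) ** 2 - m2 * f / math.cos(x) ** 2
    omega = delta + l                  # predicted n = 0 frequency, omega = delta + 2n + l
    return rad + ang + omega ** 2 * f  # should vanish if omega is an eigenfrequency

worst = max(abs(residual(x, l)) for x in (0.3, 0.7, 1.1) for l in (0, 1, 3))
```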
The modes we retain are then labelled by a single number, the polar axis projection of their angular momentum $m$, and are denoted simply by $\alpha_m$. We thus arrive at the maximally rotating conformal flow equation on AdS$_{d+1}$ \begin{equation} i(\delta+n)\dot{\alpha}_n=\sum_{m=0}^{\infty}\sum_{k=0}^{n+m}C_{nmk,n+m-k}\bar{\alpha}_m\alpha_k\alpha_{n+m-k}, \end{equation} where the interaction coefficients are given by \begin{equation} C_{nmjk}=\int_0^{\frac{\pi}{2}}dx\frac{\tan^{d-1}{x}}{\cos^2{x}} \int d\Omega_{d-1} e_ne_me_je_k. \end{equation} Here, $d\Omega_{d-1}$ is the integration measure on $S^{d-1}$ given below (\ref{sdc}). This equation possesses the same symmetries as \eqref{eq_flowS3} and hence the corresponding conserved quantities \begin{equation}\label{AdSconserved} Q=\frac{1}{\G{\delta}}\sum_{n=0}^{\infty}(n+\delta)|\alpha_n|^2,\qquad E=\frac{1}{\G{\delta}}\sum_{n=0}^{\infty}(n+\delta)^2|\alpha_n|^2, \end{equation} where we have divided by $\G{\delta}$ for future convenience. The maximally rotating modes are given by (again, we omit plain numerical factors independent of the mode number, as they can always be absorbed in a redefinition of time): \begin{equation} e_n(x,\theta_1,\dots, \theta_{d-2},\varphi)\equiv e_{0n\cdots n}(x,\Omega)=\sqrt{\frac{\G{n+1+\delta}}{\G{n+1}}}\cos^{\delta}{x}\,\sin^n{x}\,\sin^n{\theta_1}\dots\sin^n{\theta_{d-2}}e^{-in\varphi}, \end{equation} where we have written out explicitly the angles $\theta_1,\ldots,\theta_{d-2},\varphi$ collectively denoted by $\Omega$. The interaction coefficients can be evaluated as \begin{equation} C_{nmjk}= \sqrt{\frac{\G{n+1+\delta}\G{m+1+\delta}\G{j+1+\delta}\G{k+1+\delta}}{\G{n+1}\G{m+1}\G{j+1}\G{k+1}}}\frac{\G{n+m+1}}{\G{n+m+2\delta}}. \end{equation} (This expression is nearly identical to the formula on $S^d$ from the previous section, but the minor difference will play a crucial role in our subsequent derivation.)
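This closed form can be spot-checked by direct quadrature. As an illustration (our own sketch, with $d=3$ and $\delta=3$ as arbitrary test values), note that for a resonant tuple $n+m=j+k$ the $\varphi$-phases cancel, so the integral factorizes into elementary one-dimensional integrals; the ratio of the quadrature result to the closed form should then be the same mode-independent constant for every resonant tuple:

```python
import math

DELTA, STEPS = 3.0, 4000            # illustrative conformal weight (d = 3) and grid size

def midpoint(f, a, b):
    # simple midpoint-rule quadrature
    h = (b - a) / STEPS
    return sum(f(a + (i + 0.5) * h) for i in range(STEPS)) * h

def prefactor(n, m, j, k):
    g = math.gamma
    return math.sqrt(g(n + 1 + DELTA) * g(m + 1 + DELTA) * g(j + 1 + DELTA) * g(k + 1 + DELTA)
                     / (g(n + 1) * g(m + 1) * g(j + 1) * g(k + 1)))

def C_quad(n, m, j, k):
    # radial measure tan^2(x)/cos^2(x); the four n=0 modes contribute cos^{4 delta} sin^S
    S = n + m + j + k
    rad = midpoint(lambda x: math.tan(x) ** 2 / math.cos(x) ** 2
                   * math.cos(x) ** (4 * DELTA) * math.sin(x) ** S, 0.0, math.pi / 2)
    ang = midpoint(lambda t: math.sin(t) ** (S + 1), 0.0, math.pi)   # theta on S^2
    return prefactor(n, m, j, k) * rad * ang * 2 * math.pi           # phi gives 2*pi

def C_formula(n, m, j, k):
    return prefactor(n, m, j, k) * math.gamma(n + m + 1) / math.gamma(n + m + 2 * DELTA)

tuples = [(0, 0, 0, 0), (1, 1, 2, 0), (2, 1, 2, 1), (3, 0, 2, 1)]   # all with n+m = j+k
ratios = [C_quad(*t) / C_formula(*t) for t in tuples]
```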
Note that at this point, the number of AdS dimensions and the scalar field mass only enter the equations through $\delta$ (which is also known as the `conformal dimension'). Once again, we try to find a finite-dimensional dynamically invariant subspace. To this end, define \begin{equation} \label{eq_defbeta} \beta_n\equiv (n+\delta)\sqrt{\frac{\G{n+1}}{\G{n+1+\delta}}}\alpha_n, \end{equation} in terms of which the flow equation in AdS becomes \begin{equation} \label{eq_adsflow} i\dot{\beta}_n=\sum_{m=0}^{\infty}\frac{\G{m+\delta}}{\G{m+1}}\sum_{k=0}^{n+m}\binom{n+m}{k}\B{k+\delta}{n+m-k+\delta}\bar{\beta}_m\beta_k\beta_{n+m-k}. \end{equation} Analogously to the considerations of the previous section, we examine the ansatz \begin{equation} \label{eq_adsansatz} \beta_n=(b+na)p^n \end{equation} to see if it is respected by the evolution. The sum over $k$ in (\ref{eq_adsflow}) can be computed as follows. First, for the sum without powers of $k$, \begin{equation} \begin{aligned} \sum_{k=0}^{N}\binom{N}{k}\B{k+\delta}{N-k+\delta}&=\int_0^1dx\sum_{k=0}^N\binom{N}{k}x^{k+\delta-1}(1-x)^{N-k+\delta-1}\\ &=\int_0^1dx\ x^{\delta-1}(1-x)^{\delta-1}=\B{\delta}{\delta}. \end{aligned} \end{equation} The sums involving powers of $k$ are analogously computed as \begin{align} \sum_{k=0}^{N}k\binom{N}{k}\B{k+\delta}{N-k+\delta}&=N\B{\delta+1}{\delta}=\frac{N}{2}\B{\delta}{\delta},\\ \sum_{k=0}^{N}k^2\binom{N}{k}\B{k+\delta}{N-k+\delta} &=\left(\frac{N}{2}+\frac{N(N-1)}{2}\frac{1+\delta}{1+2\delta}\right)\B{\delta}{\delta}. \end{align} Finally, one carries out the $m$-summation using \begin{equation} \sum_{m=0}^{\infty}\frac{\G{m+\delta}}{\G{m+1}}m^Ax^m=(x\partial_x)^A\frac{\G{\delta}}{(1-x)^{\delta}}.
\end{equation} At the end of the day, one obtains quadratic polynomials in $n$ on both sides of (\ref{eq_adsflow}), which establishes the validity of the ansatz and results in the following equations: \begin{align} \frac{i\dot{b}}{(y+1)^{\delta}}&=b|b|^2+y\delta\left(\bar{a} b^2+a|b|^2+|a|^2b\right)+\label{eq_bdot_AdS}\\ & \quad y^2\delta\left(|a|^2b(\delta+1)+\frac{a^2\bar{b}}{2}\frac{\delta(\delta+1)}{1+2\delta}+a|a|^2\frac{\delta(\delta+1)}{1+2\delta}\right)+y^3\frac{a|a|^2}{2}\frac{\delta^2(\delta+1)(\delta+2)}{1+2\delta}, \nonumber \\ \frac{i\dot{a}}{(y+1)^{\delta}}&=\frac{a|b|^2}{2}\frac{2+3\delta}{1+2\delta}-\frac{a^2\bar{b}}{2}\frac{\delta}{1+2\delta}+y\delta\left(\frac{|a|^2b}{2}\frac{2+3\delta}{1+2\delta}+a^2\bar{b}\frac{\delta}{1+2\delta}+\frac{a|a|^2}{2}\frac{\delta}{1+2\delta}\right)+\nonumber\\ & \quad y^2a|a|^2\frac{\delta^2(\delta+1)}{1+2\delta},\label{eq_adot_AdS}\\ \frac{i\dot{p}}{(y+1)^{\delta}}&=\frac{p}{2}\delta\left(a\bar{b}\frac{1}{1+2\delta}+y|a|^2\frac{\delta}{1+2\delta}\right).\label{eq_pdot_AdS} \end{align} We have absorbed an overall factor of $B(\delta,\delta)\G{\delta}$ in another rescaling of time. We will solve these equations in a manner completely analogous to the maximally rotating flow of the previous section. Expressing the conserved quantities (\ref{AdSconserved}) within our ansatz yields \begin{align} Q&=(y+1)^{\delta}\left[|b|^2+2\Re{(a\bar{b})}\delta y+|a|^2\delta y(1+(\delta+1)y)\right],\\ E&=\delta (y+1)^{\delta+1}\left[|b|^2+2\Re{(a\bar{b})}(\delta+1) y+|a|^2(\delta+1)y(1+(\delta+2)y)\right]. \end{align} We then convert the equation for $p$ into one for $y$, as defined by (\ref{ydef}): \begin{equation} \label{eq_ydot_AdS} \frac{\dot{y}}{(y+1)^{\delta+1}}=\frac{y\delta}{1+2\delta}\Im{(a\bar{b})}, \end{equation} and obtain an extra conserved quantity $S$ from this equation and the expression for the time derivative of $|a|^2$ derived from \eqref{eq_adot_AdS}: \begin{equation} S=|a|^2y(y+1)^{\delta+1}.
\end{equation} Expressing $|b|^2$, $|a|^2$ and $\Re{(a\bar{b})}$ through $Q$, $E$, $S$ and $y$ we find \begin{align} |a|^2&=\frac{S}{y(y+1)^{\delta+1}}, \label{adsa2}\\ \Re{(a\bar{b})}&=\frac{E}{2\delta y(y+1)^{\delta+1}}-\frac{Q}{2y(y+1)^{\delta}}-\frac{(\delta+1)S}{(y+1)^{\delta+1}}-\frac{S}{2y(y+1)^{\delta+1}},\\ |b|^2&=\delta(\delta+1)\frac{y}{(y+1)^{\delta+1}}S-\frac{E}{(y+1)^{\delta+1}}+(\delta+1)\frac{Q}{(y+1)^{\delta}}.\label{adsb2} \end{align} Inserting this in \eqref{eq_ydot_AdS} we arrive at \begin{equation} \begin{split} \dot{y}^2=&-y^2\frac{\delta^2}{4(1+2\delta)^2}\left(Q^2+4(\delta+1)S^2\right)\\ & -y\frac{\delta}{2(1+2\delta)^2}\left(-E(Q+2(1+\delta)S)+\delta(Q^2+(Q+2E)S+2(1+\delta)S^2)\right)\\ &-\frac{(E-(Q+S)\delta)^2}{4(1+2\delta)^2}, \end{split} \end{equation} which again expresses energy conservation of a harmonic oscillator. This immediately guarantees that all solutions for $y$, and hence for $|p|$, are exactly periodic with period \begin{equation} T=\frac{4\pi(1+2\delta)}{\delta\sqrt{Q^2+4(\delta+1)S^2}}. \end{equation} Equations (\ref{adsa2}-\ref{adsb2}) then guarantee that the same property is shared by $|a|^2$, $|b|^2$ and $\Re(a\bar b)$. From this it follows that the absolute values of the amplitudes $|\alpha_n|$ are exactly periodic with the same common period. Maximally rotating flows in AdS thus share the same periodic return property previously described for the conformal flow, the cubic Szeg\H o equation, the LLL equation and the maximally rotating flow on $S^3$. \section{Discussion}\label{sec:4} We have considered cubic wave equations for a complex scalar field on Einstein static universes and in AdS spacetimes of various dimensions. In all cases considered, the spectrum of frequencies of linear normal modes is perfectly resonant (the difference of any two frequencies is an integer).
Nontrivial effects of nonlinearities survive to arbitrarily small field amplitudes and can be effectively described by simplified flow systems capturing the slow energy transfer between the normal modes in weakly nonlinear regimes. These flow systems can be consistently truncated to maximally rotating modes (only one mode carrying the maximal angular momentum is retained from each frequency level). This is analogous to the Lowest Landau Level (LLL) truncation of the Gross-Pitaevskii equation describing harmonically trapped Bose-Einstein condensates. The resulting maximally rotating flow systems appear to be highly structured analytically and admit simple explicit analytic solutions with exactly periodic energy flows for $S^3$ and in AdS of any dimension. (Such explicit solutions are extraordinary for infinite-dimensional nonlinear systems and hint at deeper and more far-reaching structures, such as integrability.) We shall now briefly comment on the implications of our findings. The weakly nonlinear dynamics of a cubic conformal wave equation on $R\times S^3$ has been previously treated in \cite{BCEHLM}, where the analysis was restricted to the spherically symmetric sector and the weak field dynamics displayed periodic behaviors of the same type we found here. (This, in fact, was among the main motivations for our present work.) That the same equation for a complex scalar field, now truncated to the completely different maximally rotating sector, displays similar analytic structures makes us strongly suspect that the full weakly nonlinear dynamics of the cubic conformal wave equation on $R\times S^3$, without any mode truncation, is analytically tractable. We leave this subject for future work.
(Note that because of the conformal relation between $R\times S^3$ and AdS$_4$, our considerations of the maximally rotating sector in AdS$_4$ provide yet another sector of $S^3$ dynamics decoupling in the weak field regime and displaying exact returns of the energy spectrum. The cubic wave equation on $R\times S^d$ for $d\ne 3$ is not conformally invariant and thus cannot be mapped to a cubic wave equation in AdS.) We have not succeeded in obtaining similar returning solutions on $R\times S^d$ with $d\ne 3$, and strongly suspect that the dynamical features at $d=3$ are not shared by the spheres of other dimensions. By contrast, we see returning behaviors in AdS for any dimension and any mass of the complex scalar field. This suggests that the AdS picture captures the underlying dynamics more thoroughly. (Note that the only sphere case where we find the return structure is the one related to AdS$_4$ by a conformal transformation, though the AdS version only includes a subset of $S^3$ modes, see \cite{BCEHLM}.) Nonlinear wave equations in AdS have been considered in the context of gravitational holography research \cite{Karch:2002sh,Sakai:2004cn,Hartnoll:2008vx,Nishioka:2009zj,Basu:2011ft,Basu:2012gg}. It would be interesting to contemplate whether the weakly nonlinear dynamical return phenomena we have described here have implications from the standpoint of holographic interpretation of AdS dynamics. We would like to conclude with an even more straightforward connection of our AdS analysis to real-life physics. One can take a nonrelativistic limit of the wave equation (\ref{AdSwave}) in AdS by introducing the would-be nonrelativistic wavefunction $\Psi(t,r,\Omega)$ which is related to the relativistic field $\phi(t,x,\Omega)$ satisfying (\ref{AdSwave}) by \begin{equation} \phi(t,x,\Omega)=\sqrt{2m}\,e^{-imt}\, \Psi(t, x\sqrt{m},\Omega).
\end{equation} One can then check that taking the limit $m\to\infty$ and enforcing the wave equation inside any finite ball in terms of the $r$-coordinate results in the Gross-Pitaevskii equation with a harmonic potential for $\Psi$: \begin{equation}\label{GPHO} i\frac{\partial\Psi}{\partial t}=\frac12\left(-\nabla^2+r^2\right)\Psi +|\Psi|^2\Psi. \end{equation} The Gross-Pitaevskii equation describes trapped Bose-Einstein condensates that can be created in a lab using ultracold atomic gases \cite{BEC1,BEC2,BEC3}. From the above AdS-based construction, it is not surprising that the Gross-Pitaevskii equation (\ref{GPHO}) enjoys a nonrelativistic version of conformal symmetry known as the Schr\"odinger symmetry, as described in \cite{OFN} (a classic treatment of the same symmetry group without the nonlinear term in (\ref{GPHO}) can be found in \cite{Niederer}). Analogies between weakly nonlinear dynamics of the Gross-Pitaevskii equation and AdS systems have been previously pointed out in \cite{BMP}, and the above non-relativistic limit makes the analogy precise for wave equations in AdS. Viewed from this perspective, our present results in AdS generalize the periodic behaviors of the LLL equation described in \cite{ABCE}, since the LLL equation simply represents the maximally rotating sector of the weakly nonlinear dynamics of the Gross-Pitaevskii equation with a harmonic potential. \section*{Acknowledgments} This work has been strongly influenced by our collaboration on related subjects and discussions with Piotr Bizo\'n. We furthermore thank Joaquim Gomis for a useful discussion on nonrelativistic limits and kinematic symmetries. Research presented here has been supported in part by the Belgian Federal Science Policy Office through the Interuniversity Attraction Pole P7/37, by FWO-Vlaanderen through projects G020714N and G044016N, and by Vrije Universiteit Brussel (VUB) through the Strategic Research Program ``High-Energy Physics''. 
The work of O.E.\ is funded under CUniverse research promotion project by Chulalongkorn University (grant reference CUAASC).
\section{Introduction} The emerging spatial-temporal motions of swarms of interacting agents are a subject of great interest in application areas ranging from biology to physics and robotics. Typically, swarming entails robust, self-organized motion that emerges from the interaction of large numbers of simple mobile agents. Examples have been observed in nature over many spatiotemporal scales from colonies of bacteria, to swarms of insects\cite{Theraulaz2002,Topaz2012,Polezhaev,Li_Sayed_2012}, flocks of birds \cite{Leonard2013,Ballerini08,Cavagna2015}, schools of fish\cite{Couzin2013,Calovi2014}, crowds of people\cite{Rio_Warren_2014}, and active-matter systems\cite{Cichos2020}. Understanding the underlying physics behind swarming patterns and describing how they emerge from simple models has been the subject of significant work in the mathematical and engineering sciences \cite{Vicsek,Marchetti,Aldana,Desai01,Jadbabaie03,Tanner03b,Tanner03a,Gazi05,Tanner07}. In pushing the theory to robotic platforms, engineers have focused on designing and building swarms of mobile robots with a large and ever-expanding number of platforms, as well as virtual and physical interaction mechanisms\cite{Cichos2020,AutonomousMobileRobots,8990018,7989200,MultiRobotSystems}. Robotic applications range from exploration\cite{8990018}, mapping\cite{Ramachandran2018}, and resource allocation \cite{Li17,Berman07,Hsieh2008}, to swarms for defense \cite{Wong2020,Chung2011,Witkowski}. Since robotic swarms must operate in real environments, theoretical and experimental swarming systems have been analyzed in many contexts, including swarms of mobile robots with homogeneous and heterogeneous agents and delayed communication\cite{szwaykowska2016collective,edwards2020delay}.
Moreover, the dynamics of robotic swarms have been tested in complex environments, from drones flying in the air, to boats tracking coherent structures in complex flows, and collaborating robots locating sources in turbulent media\cite{hajieghrary2016multi,heckman2015toward}. When deploying swarms in uncertain environments of varying complexity and geometry, it is important to understand stability. Recently, we have analyzed stability of swarms in various configurations. For example, we have studied swarms with complex network topology, and quantified instabilities arising from heterogeneous topology in the number of local interactions each agent has\cite{hindes2016hybrid}. We have examined the effects of communication delay and how environmental noise destabilizes self-organized patterns\cite{szwaykowska2018state,kyrychko2018enhancing}. In addition, we have analyzed other environmental effects, such as range-dependent communication and surface geometry, as a function of swarm control parameters \cite{hindes2020stability,hindes2020unstable}. In all of the above-mentioned research we have considered only a single swarm and its stability in complex environments. Here we extend our analysis to multiple, interacting swarms, and their resulting patterns. The general model that we use to describe the dynamics of both single and interacting swarms contains self-propulsion, friction, and gradient-forces between agents: \begin{equation} \ddot{\bf{r}}_{i}= \big[\alpha_{i} -\beta|\dot{\bf{r}}_{i}|^{2}\big]\dot{\bf{r}}_{i}-\lambda_{i}\sum_{j\neq i} \partial_{\bf{r}_{i}}U(|\bf{r}_{j}-\bf{r}_{i}|) \label{eq:swarmmodel} \end{equation} where $\bf{r}_{i}$ is the position-vector for the $i$th agent in two spatial dimensions, $\alpha_{i}$ is a self-propulsion constant, $\beta$ is a damping constant, and $\lambda_{i}$ is a coupling constant\cite{Levine,Erdmann,DOrsagna,Romero2012}. The total number of swarming agents is $N$, and each agent has unit mass.
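To make the model concrete, Eq.~(\ref{eq:swarmmodel}) is straightforward to integrate numerically. The sketch below is our own illustration (all parameter values are arbitrary choices, and the Morse interaction $U(r)=Ce^{-r/l}-e^{-r}$ specified in the text is used); it places two essentially non-interacting agents far apart and recovers the basic self-propulsion balance, in which each speed relaxes to $\sqrt{\alpha/\beta}$:

```python
import math

ALPHA, BETA, LAM = 1.0, 1.0, 0.1      # illustrative self-propulsion, damping, coupling
C, L = 10.0 / 9.0, 0.75               # illustrative Morse repulsion strength / length

def dUdr(r):
    # U(r) = C exp(-r/l) - exp(-r), so U'(r) = -(C/l) exp(-r/l) + exp(-r)
    return -(C / L) * math.exp(-r / L) + math.exp(-r)

def accelerations(pos, vel):
    acc = []
    for i, (x, y) in enumerate(pos):
        vx, vy = vel[i]
        drag = ALPHA - BETA * (vx * vx + vy * vy)
        ax, ay = drag * vx, drag * vy          # self-propulsion / friction term
        for j, (xj, yj) in enumerate(pos):
            if j == i:
                continue
            dx, dy = x - xj, y - yj
            r = math.hypot(dx, dy)
            # gradient force: -lambda * U'(r) * (r_i - r_j)/r
            ax -= LAM * dUdr(r) * dx / r
            ay -= LAM * dUdr(r) * dy / r
        acc.append((ax, ay))
    return acc

pos = [(0.0, 0.0), (100.0, 0.0)]       # far apart: interaction is exponentially small
vel = [(0.2, 0.0), (-0.1, 0.1)]
dt = 0.01
for _ in range(5000):                  # forward Euler to t = 50
    acc = accelerations(pos, vel)
    vel = [(vx + dt * ax, vy + dt * ay) for (vx, vy), (ax, ay) in zip(vel, acc)]
    pos = [(x + dt * vx, y + dt * vy) for (x, y), (vx, vy) in zip(pos, vel)]
speeds = [math.hypot(vx, vy) for vx, vy in vel]
```

With agents brought close together, the same loop exhibits the interaction-driven collective states discussed below; here the separation is kept large so that only the speed relaxation is tested.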
Beyond providing a basis for theoretical insights, Eq.~(\ref{eq:swarmmodel}) has been implemented in experiments with several robotics platforms including autonomous ground, surface, and aerial vehicles\cite{szwaykowska2016collective,edwards2020delay,hindes2020unstable}. We remark that Eq.~(\ref{eq:swarmmodel}) contains most of the relevant physics needed to model an enormous class of behaviors. Moreover, additional physics, stochastic effects, and network communication topologies may all be added to match many experiments. In this paper, we restrict ourselves to the well-known Morse interaction potential, $U$, which controls local attraction and repulsion length scales between interacting agents: \begin{equation} U(r)=Ce^{-r/l}-e^{-r},\label{eq:MorsePotential} \end{equation} where $C$ and $l$ define the repulsion and repulsion-length constants, respectively, and the attraction length constant is scaled to unity. \section{The geometry and dynamics of colliding swarms} We use Eq.~(\ref{eq:swarmmodel}) to model two interacting swarms with the same underlying dynamics but different parameters and initial conditions. The most straightforward collision scenario consists of two flocks colliding, where each swarm has achieved velocity consensus well before collision. The initial distance, $D$, which separates the swarms is large enough so that the interaction forces between the swarms are exponentially small, and $\theta$ defines the interaction angle. (See Fig.~\ref{fig:Geometry}.) \begin{figure}[h] \begin{centering} \includegraphics[scale=0.4]{GeometryGraphic_a} \par\end{centering} \caption{The geometry of two colliding swarms. 
The initial configurations are flocking states, which intersect at an angle $\theta.$\label{fig:Geometry}} \end{figure} Given the initial flocking state configurations, there are three possible final states of the combined interactions, shown in Fig.~\ref{fig:final-combined}; i.e., flocking, where the swarms combine to form a translating state; milling, where the combined center of mass is stationary; or scattering, where the swarms pass through each other and flock in different directions. \begin{figure}[h] \begin{centering} \includegraphics[scale=1.5]{FinalCollisionStates} \par\end{centering} \caption{Possible final combined configurations of colliding swarms: flocking states (left), milling states (middle), and scattering states (right).\label{fig:final-combined}} \end{figure} A useful quantity for distinguishing between the three possibilities is the polarization, $\cal P$, given by \begin{equation} {\cal P}=\frac{\lvert\sum_{i} \dot{\mathbf{r_i}}\rvert}{\sum_{i}\lvert\dot{\mathbf{r_i}}\rvert}.\ \label{eq:polarization} \end{equation} When the agents are in alignment, ${\cal P} \approx 1$, and it is approximately zero when they are anti-parallel. Therefore, when the swarms are in the flocking state, ${\cal P}\approx 1$, while in the milling state, ${\cal P}\approx 0$. In the scattering state, ${\cal P}$ lies between 0 and 1. The polarization has been used to quantify the parameter space comparing the angle $\theta$ against the coupling strength $\lambda_i = \lambda$ in Fig.~\ref{fig:polariz}. We notice that there exist distinct regions in parameter space where the milling state exists, while other regions show the existence of scattering and flocking. \begin{figure}[t] \begin{centering} \includegraphics[scale=0.75]{Polarization} \par\end{centering} \caption{\label{fig:polariz}Polarization as a function of collision angle $\theta$ and coupling strength $\lambda$. 
See \cite{kolon2018dynamics} for details and parameter values.} \end{figure} \section{The milling state - stopping colliding swarms} We now wish to concentrate on how one swarm may capture another into a combined milling state, where the combined center of mass is stationary and the polarization is close to zero. To satisfy the latter, $\theta$ must be relatively small so that the total momentum is near zero. We make a new diagram showing exactly where the scattering-to-milling transition occurs for small $\theta$ as a function of coupling $\lambda$; an example is shown in Fig.~\ref{fig:MillingCollision}. The stable swarm states after collision are specified with blue and red for scattering and milling, respectively; the green portions indicate the formation of a combined flocking state, which is comparatively infrequent for small $\theta$ (and decreases in frequency as $N \rightarrow \infty$). In addition, in the right panel of Fig.~\ref{fig:MillingCollision}, we show an example of the approach to the milling state as a series of time snapshots. Initially, the swarms are far apart in flocking states with constant velocities. As the two swarms approach, however, each agent begins to sense forces from the agents of the other swarm, causing the two swarms to rotate around each other while maintaining an approximately constant inter-swarm density. Over time the two swarms slowly relax to a well-mixed milling state composed of uniformly distributed agents from both. Motivated by Fig.~\ref{fig:MillingCollision}, one useful observation that can be made regarding the swarms is that, when flocking towards a collision, each swarm behaves as a rigid body. Assuming such motion in the swarms leads one to hypothesize that there exists a constant density approximation when all agents have the same characteristics. Such an approximation can be used to create a theory for the center-of-mass dynamics describing the approach to a milling state, as shown in \cite{hindes2021critical}. 
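The polarization of Eq.~(\ref{eq:polarization}) used in the diagrams above is a one-line computation on the stacked agent velocities; a minimal sketch:

```python
import numpy as np

def polarization(vel):
    """P = |sum_i v_i| / sum_i |v_i| for an N x 2 array of agent velocities (Eq. (3))."""
    return np.linalg.norm(vel.sum(axis=0)) / np.linalg.norm(vel, axis=1).sum()
```

Aligned velocities give ${\cal P}=1$ (flocking), anti-parallel pairs give ${\cal P}=0$ (milling), and scattering states fall in between.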
In the left panel of Fig.~\ref{limitcycle}, the center of mass dynamics for each swarm is shown at the critical coupling, $\lambda_{min}$: the smallest coupling, over all collision angles, at which a milling state is stably formed. \begin{figure}[h] \begin{centering} \includegraphics[scale=0.25]{SwarmCollisionAbstractFig} \par\end{centering} \caption{Two swarms colliding. A scattering diagram is shown on the left that specifies the outcome of two-swarm collisions as a function of the incidence angle and the coupling strength. On the right are four time snapshots of the swarms at the critical point--the minimum coupling, $\lambda_{min}$, at which a collision results in a mill. Swarm parameters are $\alpha=1$, $\beta=5$, $C=10/9$, $l=0.75$, and $N=100$.} \label{fig:MillingCollision} \end{figure} The constant density theory predicts that in order for the milling state to occur, the dynamics must approach a stable limit cycle of the interacting centers of mass. Within this approximation, the critical coupling corresponds to a generic saddle-node bifurcation. In general, the limit cycle acts as a capture radius, whereby the two interacting flocks slowly converge to a common, stationary center. The same theory can be used to predict the maximum size of the transient center-of-mass oscillations as a function of the repulsive coupling, $C$. In the right panel of Fig.~\ref{limitcycle} the theory is plotted against numerical simulations to show how well the predictions work for a range of different repulsive-force strengths. \section{Analysis and final results of swarm symmetry and asymmetry} \begin{figure} \begin{centering} \includegraphics[scale=2]{LimitCycle} \par\end{centering} \caption{Collision dynamics resulting in milling. (a) Center-of-mass trajectories for two colliding swarms when $\lambda=\lambda_{min}$, shown with solid-blue and dashed-red lines. Arrows give the direction of motion. 
The dashed-black line indicates the bifurcating limit cycle in the uniform constant density approximation. Other swarm parameters are $\alpha=1$, $\beta=5$, $l=0.75$, $N=100$, and $C\!=\!1.0$. The inset panel shows the corresponding trajectory for $\lambda=2 \lambda_{min}$. (b) Maximum x-coordinate reached by the center of mass of the rightward moving (blue) flock when $\lambda=\lambda_{min}$. Simulation results are shown with blue circles for $l=0.75$, green diamonds for $l=0.6$, and red squares for $l=0.5$. Limit-cycle predictions from theory are drawn with lines near each series. Other swarm parameters are $\alpha=1$, $\beta=5$, and $N=200$.} \label{limitcycle} \end{figure} One interesting aspect of the theory is that it can provide a range of parameter predictions for the critical coupling, $\lambda_{min}$, when the swarms are both symmetric and asymmetric. In particular, from the theory one can define the critical parameter for the saddle-node bifurcation via an equation analogous to an escape-velocity relation, \begin{align} \label{eq:escape} v^{2}/2 -N\lambda_{min} V_{\text{eff}}(C,l)=0, \end{align} where $v$ is the speed of each flock, and $V_{\text{eff}}(C,l)$ quantifies the strength of the potential between agents (see \cite{hindes2021critical} for full mathematical details). In terms of scaling, Eq.~(\ref{eq:escape}) implies that, if the potential-forces and number of agents are held constant, flocks moving twice as fast require four times the coupling in order to capture. Similarly, flocks with twice as many particles must fly $\sqrt{2}$-times faster in order to escape capture. We can use the theory to examine how the velocity and potential function define the critical coupling $\lambda_{min}$ as we sweep physical swarm parameters. Examples are shown in Fig.~\ref{fig:sym_asym}. In the left subplot we show results for collisions with symmetric parameters. Our predicted scaling collapse holds. 
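Both scaling statements follow from solving Eq.~(\ref{eq:escape}) for $\lambda_{min}$; a minimal sketch, treating $V_{\text{eff}}(C,l)$ as a precomputed number rather than evaluating it from the potential:

```python
def lambda_min(v, N, V_eff):
    """Critical coupling from the escape-velocity relation
    v^2 / 2 - N * lambda_min * V_eff = 0  (Eq. (4))."""
    return v * v / (2.0 * N * V_eff)
```

Doubling the flock speed quadruples the required coupling, while doubling $N$ and multiplying the speed by $\sqrt{2}$ leaves $\lambda_{min}$ unchanged, consistent with the two scaling statements above.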
Qualitatively, the critical coupling increases monotonically with $C$, implying that the stronger the strength of repulsion, the larger the coupling needs to be in order for colliding swarms to form a mill. Also, note that our predictions are fairly robust to heterogeneities in the numbers in each flock, particularly for smaller values of $C/l-1$; predictions remain accurate for number asymmetries in the flocks as large as $20\%$. In the right panel, we consider how the theory compares in the asymmetric case of two swarms with different velocities. In particular, agents in one flock have self-propulsion $\alpha_{i}=\alpha^{(1)}=1$, while $\alpha_{i}=\alpha^{(2)}$ is varied for the other flock. Again we see that, when two swarms come together at the critical coupling, the bifurcation-theory predictions and the simulation results agree well. \begin{figure} \begin{centering} \includegraphics[scale = 2]{Sym_Asym_Results} \par\end{centering} \caption{Critical coupling for forming milling states upon collision. (Left panel) Symmetric parameter collisions for $\alpha=1$ (blue) and $\alpha=2$ (red): $N=10$ (squares), $N=20$ (diamonds), $N=40$ (circles), and $N=100$ (triangles). Green stars denote $\alpha=1$ and magenta x's denote $\alpha=2$, when 40 agents collide with 60. (Right panel) Asymmetric collisions for $C=10/9$ in which $\alpha^{(1)}=1$. Blue points indicate equal numbers in each flock: $N=20$ (diamonds), $N=40$ (circles), and $N=100$ (triangles). Green stars denote collisions between 40 agents with $\alpha^{(1)}=1$ and $60$ agents with $\alpha^{(2)}$. Solid and dashed lines indicate theoretical predictions. Other swarm parameters are $\beta=5$ and $l=0.75$.} \label{fig:sym_asym} \end{figure} \section{Preliminary colliding swarm experiments} We have begun to test our theoretical predictions in colliding swarm experiments, where we implemented a mixed-reality setup\cite{szwaykowska2016collective,edwards2020delay}. 
To verify the presented theoretical model we used up to eight Crazyflie micro-UAVs, shown in Fig.~\ref{fig:crazyflie}; however, eight robots are insufficient to see meaningful interaction between two large intersecting swarms. To increase the number of agents used during experimentation, we used mixed reality to couple real and virtual robots\cite{Szwaykowska2016}. In running the experiments, we used a dimensional version of the Morse potential given by \begin{align} U(r_i, r_j) = c_r e^{\frac{-|r_i - r_j|}{l_r}} - c_a e^{\frac{-|r_i - r_j|}{l_a}}, \label{eq:morse_potential} \end{align} \noindent where $c_r$ is the repulsion strength and $l_r$ is the length scale of the repulsion; likewise, $c_a$ is the attraction strength and $l_a$ is the attraction length scale. The mixed reality system shown in Fig.~\ref{fig:mixed_reality} uses a Vicon motion capture system in a $15\times15$\,m room with between five and eight Crazyflie micro-UAVs. The robots' positions are shared through a ground station, which also runs the simulation. All agents' positions are combined on the ground station, and new positions for the real robots are determined using a double-integrator model of the agents. Figure \ref{fig:8v8} demonstrates how the physical robots interact with simulated agents. The simulated agents (red dots) are projected into the real world using a camera calibration, and the real agents are highlighted by blue circles. These results allow for further improvement of theoretical predictions and increase preparedness for field experimentation. 
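The dimensional potential of Eq.~(\ref{eq:morse_potential}) is straightforward to evaluate; a minimal sketch, in which the parameter values used in the test are illustrative only (the experiments used larger repulsion parameters for robot safety):

```python
import math

def morse_dimensional(r_i, r_j, c_r, l_r, c_a, l_a):
    """Dimensional Morse potential of Eq. (5):
    U(r_i, r_j) = c_r e^{-|r_i - r_j|/l_r} - c_a e^{-|r_i - r_j|/l_a}."""
    r = math.dist(r_i, r_j)  # Euclidean separation between the two agents
    return c_r * math.exp(-r / l_r) - c_a * math.exp(-r / l_a)
```

With $c_r > c_a$ and $l_r < l_a$ the potential is repulsive at short range and attractive at long range, which is what keeps physical robots separated while still allowing capture.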
\begin{figure}[h] \centering \subfigure[Mixed Reality Setup]{ \label{fig:mixed_reality} \includegraphics[width=0.65\linewidth]{mixed_reality_setup_v5.pdf} } \subfigure[Micro-UAV]{ \label{fig:crazyflie} \includegraphics[width=0.23\linewidth]{crazyflie.pdf} } \caption{ In Figure \ref{fig:mixed_reality} a mixed reality experimental platform is shown, which relies on each agent, real and simulated, having a global position and receiving control commands. In Figure \ref{fig:crazyflie} an example of the Crazyflie 2.0 micro-UAV is shown, which is used with the Crazyswarm software~\cite{crazyswarm}. } \label{fig:experimental_setup} \end{figure} Further examples of mixed reality experiments of two colliding swarms forming a milling state with a stationary center of mass are shown in Figure \ref{fig:time_series}. In addition to the experiment with eight real robots versus eight simulated robots, in which a mill forms, we consider even more agents: 5 real robots join 45 simulated robots to collide with 50 simulated robots. Due to the inclusion of physical agents, which require space between them, it is necessary to consider larger repulsion parameters, $c_r$ and $l_r$, to ensure robot safety. Even when these values are changed, a stationary mill is observed experimentally. \begin{figure*} \centering \subfigure[Eight real and eight virtual robots]{ \label{fig:8v8} \includegraphics[width=0.85\linewidth]{./Exp_figure_2_for_NATO.pdf} } \subfigure[High Repulsion]{ \label{fig:5_real} \includegraphics[width=0.95\linewidth]{./Exp_figure_for_NATO.pdf} } \caption{ Two time series of mixed reality experiments. Figure \ref{fig:8v8} shows 8 virtual robots colliding with 8 real robots. Figure \ref{fig:5_real} shows an experiment in which 5 real robots join with 45 simulated robots to collide with 50 simulated robots. 
} \label{fig:time_series} \end{figure*} Although preliminary, the results show that when the theory is translated to experiments, we can have one swarm capture another based on the physical parameters chosen. Conversely, our theory and experiment should also predict when colliding swarms will not form a milling state; i.e., based on known parameters and sizes of the swarms, we can show one swarm cannot capture another. Measures beyond the polarization of how the colliding swarms mix can also be ascertained; one such metric, measuring the scaling of the density of one swarm with respect to another, is presented in Appendix A for the mixed reality experiments. \section{Conclusion and Discussion} Here we studied the collision of two swarms with nonlinear interactions, and focused in particular on predicting when such swarms would combine to form a mill. Unlike the full final-scattering diagram, which depends on whether or not a particular set of initial conditions falls within the high-dimensional basin of attraction for milling--a hard problem in general--we concentrated on predicting the minimum coupling needed to sustain a mill after the collision of two flocks. By noticing that colliding swarms which eventually form a mill initially rotate around a common center with an approximately constant density, we were able to transform the question of a critical coupling into determining the stability of limit-cycle states within a rigid-body approximation. This approach produced predictions that were independent of initial conditions (only depending on physical swarm parameters) and provided a lower bound on the critical coupling for small collision angles. For example, in the case of symmetric flocks with equal numbers and physical parameters, the scatter-mill transition point was similar to an escape-velocity condition in which the critical coupling scaled with the squared speed of each flock, and inversely with the number of agents in each flock. 
Our bifurcation analysis agreed well with many-agent simulations. Recent work in swarm robotics and autonomy has begun to address how one swarm can detect, redirect, capture, or defend itself against another\cite{8444217,9029573,9303837}. However, most approaches are algorithmic and lack basic physical and analytical insights. Our work fits nicely into the robotic swarm capture-and-redirect problem, since the critical coupling sets a general divide in parameter space between scattering and milling swarms operating with general physical interactions and dynamics. In this paper, however, we have not included the effects of communication delays or internal and external noise, which play a significant role in swarms of mobile robots\cite{edwards2020delay,szwaykowska2016collective}. For example, it is known that when the center of mass of a single swarm is stationary, time delays in communication can result in stable oscillations of the center of mass itself. The oscillations are the result of a general delay-induced Hopf bifurcation. On the other hand, it is also known that even small amounts of noise can act as a force, inducing large changes in swarm behavior\cite{6580546}. Such large fluctuations may happen in the case where there are multiple attractors for the center of mass of two interacting swarms. In such cases, noise ``kicks'' the center of mass from one attractor to another. For these and other scenarios, new theory and potential controls will have to be developed using some of the techniques we have presented here to model how one flocking swarm can capture another.
\section{Introduction}\label{sec:intro} Ozsv\'ath-Szab\'o's Heegaard Floer homology associates $\mathbb{Z}[U]$-modules $\mathit{HF}^-(Y,\mathfrak{s})$ and $\mathit{HF}^+(Y,\mathfrak{s})$ to a closed, connected, oriented $3$-manifold $Y$ and $\text{Spin}^{\text{C}}$-structure $\mathfrak{s}$, and $\mathbb{Z}[U]$-module homomorphisms $F^\pm(W,\mathfrak{t})\colon \mathit{HF}^\pm(Y_1,\mathfrak{s}_1)\to\mathit{HF}^\pm(Y_2,\mathfrak{s}_2)$ to a $\text{Spin}^{\text{C}}$-cobordism $(W,\mathfrak{t})\colon (Y_1,\mathfrak{s}_1)\to (Y_2,\mathfrak{s}_2)$ \cite{OSz-hf-3manifolds,OSz-hf-4manifolds}. For a closed $4$-manifold $W$ with $b_2^+>0$, viewed as a cobordism from $S^3$ to itself by deleting two balls, the maps $F^\pm(W,\mathfrak{t})$ vanish. Using the proof of vanishing, Ozsv\'ath-Szab\'o define a Heegaard Floer-theoretic analogue of the Seiberg-Witten invariant, the \emph{Heegaard Floer mixed invariant}, which is a map $\mathit{HF}^-(Y_1,\mathfrak{s}_1)\to\mathit{HF}^+(Y_2,\mathfrak{s}_2)$ associated to a $\text{Spin}^{\text{C}}$-cobordism with $b_2^+$ sufficiently large. The goal of this paper is to give an analogous construction in Khovanov homology, for smoothly embedded surfaces in $[0,1]\times S^3$ with crosscap number at least $3$. Following Finashin-Kreck-Viro~\cite{FKV-top-bulletin}, call a pair of smoothly embedded surfaces $F,F'\subset [0,1]\times S^3$ \emph{exotic} if there is a self-homeomorphism of $[0,1]\times S^3$ which is the identity on $\{0,1\}\times S^3$ and takes $F$ to $F'$, but there is no such self-diffeomorphism of $[0,1]\times S^3$. (See also Lemma~\ref{lem:MWW} and Section~\ref{sec:exotic}.) At the time of writing, we believe no pairs of exotic closed, orientable surfaces in $[0,1]\times S^3$ are known. There are, however, exotic nonorientable surfaces in $[0,1]\times S^3$, as first shown by Finashin-Kreck-Viro using results from Donaldson theory~\cite{FKV-top-knot-surf}. 
Recently, examples of exotic orientable cobordisms in $[0,1]\times S^3$ have also appeared, in work of Juh\'asz-Miller-Zemke~\cite{MJZ-hf-exotic}, Hayden~\cite{Hay-kh-disks}, and Hayden-Kjuchukova-Krishna-Miller-Powell-Sunukjian~\cite{HKKMPS-kh-Brun}, which use Heegaard Floer homology to distinguish them, and Hayden-Sundberg~\cite{HS-kh-exotic}, which uses Khovanov homology. Many questions about exotic pairs of surfaces remain open. For example, Baykur-Sunukjian introduced stabilization operations for surfaces, and showed that all known examples of exotic pairs of closed surfaces become diffeomorphic after a single stabilization~\cite{BS-top-stabilizations}; it is not known if this holds in general. Building on these ideas, one can study the \emph{total stabilization distance} between two surfaces $F$, $F'$, the minimum number of stabilizations or destabilizations needed to turn one into the other~\cite{Miy-86-stab,MP-top-stab}, or the \emph{max stabilization distance}, the minimum over sequences $F=F_0,F_1,\dots,F_n=F'$, where $F_i$ and $F_{i+1}$ are related by a stabilization, destabilization, or taking the connected sum with a knotted $2$-sphere, of $\max\{|g(F_1)-g(F)|,\dots,|g(F_n)-g(F)|\}$ (where $g$ denotes the genus)~\cite{JZ-hf-stab}. (See also~\cite[p. 6]{Melvin-top-thesis}.) Another notion is the \emph{generalized total stabilization distance}, which is defined the same way as the total stabilization distance except that if two surfaces differ by taking the connected sum with a $2$-sphere then they are declared to be at distance $0$, so the generalized total stabilization distance, like the max stabilization distance, focuses on global, rather than local, knotting~\cite{MP-top-stab}. 
By using the Alexander module, Miyazaki shows there are pairs of embedded spheres in $S^4$ with arbitrarily high total stabilization distance~\cite{Miy-86-stab}, and Miller-Powell show there are pairs of embedded disks with arbitrarily high generalized total stabilization distance~\cite{MP-top-stab}. Juh\'asz-Zemke use Heegaard Floer homology to give pairs of disks with max stabilization distance at least $3$~\cite{JZ-hf-stab}. The analogous questions for exotic pairs are open. A key strategy for distinguishing knotted closed surfaces in $S^4$ has been to apply gauge theory, like the Heegaard Floer mixed invariant, to their branched double covers. Ozsv\'ath-Szab\'o showed that the Heegaard Floer homology of the branched double cover of a knot $K$ is closely related to the Khovanov homology of $K$~\cite{OSz-hf-branched}. So, it seems natural to look for an analogue of the Heegaard Floer mixed invariant in Khovanov homology. In this paper, we give one such analogue. Khovanov homology admits a family of deformations~\cite{Kho-kh-Frobenius}; we will focus on two particular ones, the Lee deformation~\cite{Lee-kh-endomorphism} and the Bar-Natan deformation~\cite{Bar-kh-tangle-cob}. Rasmussen showed that the map of Lee homologies associated to a nonorientable cobordism vanishes~\cite{Ras-kh-slice}. Viewing these deformations as modules over polynomial algebras, analogous to the Heegaard Floer invariant $\mathit{HF}^-$, Rasmussen's result says that the map on the analogue of $\mathit{HF}^\infty$ associated to a nonorientable cobordism vanishes. Using this, and a notion of admissible cuts analogous to Heegaard Floer theory, we formulate a Khovanov mixed invariant $\MixedInvt{F}$ of a surface $F$ with crosscap number $\geq 3$ in the Lee and Bar-Natan deformations of Khovanov homology. 
Note that $F$ having crosscap number $\geq 3$ has no implication for $b_2^+$, so the Khovanov mixed invariant is defined for some surfaces $F$ where the Heegaard Floer mixed invariant of $\Sigma(F)$ is not. Verifying that the mixed invariant is well-defined (up to sign) has two steps: observing that the map on (deformed) Khovanov homology associated to a nonorientable cobordism is well-defined (up to sign), and verifying independence of the choice of admissible cut. The proof of the first statement is a straightforward extension of the literature~\cite{Jac-kh-cobordisms,Kho-kh-cobordism,Bar-kh-tangle-cob,MWW-kh-blobs}; see Section~\ref{sec:behave}. (Unlike the orientable case, the sign ambiguity here is essential; see Remark~\ref{rem:Klein-TQFT}.) To prove independence of the admissible cut, we use arguments involving the one-sided curve complex of a nonorientable surface; see Section~\ref{sec:cuts}. It turns out that, unlike the Heegaard Floer mixed invariant, this Khovanov mixed invariant does not distinguish closed, connected surfaces (Section~\ref{sec:closed}), though the proof of this fact is somewhat intricate. (We do not know if the mixed invariant distinguishes some closed, disconnected surfaces, and in particular have not generalized Gujral-Levine's results~\cite{LG-kh-split} to this setting.) On the other hand, both the mixed invariant and the map on Khovanov homology associated to a nonorientable cobordism do distinguish pairs of nonorientable surfaces with common boundary. Indeed, this was essentially already shown in computations of Sundberg-Swann~\cite{SS-kh-surf}: combined with the functoriality result mentioned above, their computations show the following. \begin{theorem}\label{thm:intro} There is a pair of connected surfaces $F,F'$ with boundary on $3_1\# m(3_1)$ with crosscap number $3$ and normal Euler number $-6$ which are not isotopic, and do not become isotopic after taking the connected sum with any knotted $2$-sphere. 
Further, $F$ is not obtained from a connected surface $F''$ by attaching a $1$-handle, or by taking the connected sum with a standard $\mathbb{R}\mathrm{P}^2$ or $\overline{\mathbb{R}\mathrm{P}}^2$. \end{theorem} Theorem~\ref{thm:intro} is proved in Section~\ref{sec:SunSwann}. The second half of the theorem uses the behavior of $\MixedInvt{F}$ under various local modifications to the surface, which are summarized in Theorem~\ref{thm:vanishing}. Hayden-Sundberg's examples of exotic pairs of slice disks distinguished by Khovanov homology can be enhanced to give exotic pairs of nonorientable surfaces distinguished by Khovanov homology. In particular, we have: \begin{theorem}\label{thm:intro-2} There is an exotic pair of surfaces with boundary $12^n_{309}$, crosscap number 3, and normal Euler number $-6$. \end{theorem} Theorem~\ref{thm:intro-2} is proved in Section~\ref{sec:exotic}. As far as we know, this is the first gauge theory-free proof that there are pairs of exotic nonorientable surfaces. This paper is organized as follows. We review the Lee and Bar-Natan deformations of Khovanov homology in Section~\ref{sec:BN-background}, in an algebraic framework parallel to Heegaard Floer homology. Section~\ref{sec:behave} shows that these deformed Khovanov complexes are functorial with respect to nonorientable cobordisms. For convenience later, we also allow our cobordisms to be decorated with stars (following the notation of~\cite{KhRo-kh-Frob2}). Section~\ref{sec:cuts} formulates the notion of admissible cuts, and shows that for surfaces with crosscap number $\geq 3$ all admissible cuts are equivalent in a suitable sense. Section~\ref{sec:mixed} defines the mixed invariant and proves it is well-defined. 
Section~\ref{sec:properties} gives basic properties of the maps associated to nonorientable cobordisms and the mixed invariant, and Section~\ref{sec:comps} gives some computations and applications of these invariants, including proving Theorems~\ref{thm:intro} and~\ref{thm:intro-2}, and concludes with some questions. \subsection*{Acknowledgments} We thank Ian Zemke for helpful comments on the first draft of this paper, and the referee for further comments and corrections. \section{Background on the Lee and Bar-Natan deformations}\label{sec:BN-background} Khovanov homology has two well-known deformations, the Lee deformation~\cite{Lee-kh-endomorphism} and the Bar-Natan deformation~\cite{Bar-kh-tangle-cob}. The two theories are similar, although they have some essential differences as well. Most of the constructions and results of this paper work for either of the two theories, so we will use the same notations for both the theories. When the two theories diverge, we will explicitly mention that in the text. Fix a commutative ring $\mathrm{R}$ with unit. All chain complexes and modules will be defined over $\mathrm{R}$, though we will often suppress $\mathrm{R}$ from the notation. If we are using the Lee theory, we assume $2$ is a unit in $\mathrm{R}$. Fix an oriented link diagram $L$ with $N$ crossings, $N_+$ of which are positive and $N_-$ of which are negative. Consider the Kauffman cube of resolutions of $L$. A \emph{Khovanov generator} $y$ is a choice of vertex $v$ and a labeling $y(Z)\in\{1,X\}$ of each circle $Z$ in the $v$-resolution. Denoting the homological, quantum bigrading by $(\mathrm{gr}_h,\mathrm{gr}_q)$, a Khovanov generator $y$ lying over a vertex $v\in\{0,1\}^N$ has bigrading \begin{align*} \mathrm{gr}_h(y)&=-N_-+|v|\\ \mathrm{gr}_q(y)&=N_+-2N_-+|v|+\#\{Z\mid y(Z)=1\}-\#\{Z\mid y(Z)=X\}. 
\end{align*} The deformed Khovanov complex $\mathcal{C}^-(L)$ is freely generated by these generators over a polynomial algebra over $\mathrm{R}$, and is obtained by feeding the cube of resolutions into a Frobenius algebra over that polynomial algebra. \begin{enumerate} \item In the Lee theory, the polynomial algebra is $\mathrm{R}[T]$ with $T$ in bigrading $(0,-4)$, and the Frobenius algebra is $\mathrm{R}[T,X]/(X^2=T)$, with co-multiplication given by \begin{align*} \Delta(1)&=1\otimes X + X\otimes 1 & \Delta(X)&=X\otimes X + T 1\otimes 1 \end{align*} and counit $\epsilon\colon \mathrm{R}[T,X]/(X^2=T)\to \mathrm{R}[T]$ given by $\epsilon(1)=0$, $\epsilon(X)=1$. \item In the Bar-Natan theory, the polynomial algebra is $\mathrm{R}[H]$ with $H$ in bigrading $(0,-2)$, and the Frobenius algebra is $\mathrm{R}[H,X]/(X^2=XH)$, with co-multiplication given by \begin{align*} \Delta(1)&=1\otimes X + X\otimes 1 - H 1\otimes 1 & \Delta(X)&=X\otimes X \end{align*} and counit $\epsilon\colon \mathrm{R}[H,X]/(X^2=XH)\to \mathrm{R}[H]$ given by $\epsilon(1)=0$, $\epsilon(X)=1$. \end{enumerate} (In Khovanov's paper~\cite{Kho-kh-Frobenius}, these are denoted $\mathcal{F}_7$ and $\mathcal{F}_3$, respectively.) In either theory, the differential increases the bigrading by $(1,0)$. To keep the notation the same, let $\mathrm{R}[U]$ denote the polynomial algebra for either theory. That is, when discussing the Lee theory we take $U=T$ (in bigrading $(0,-4)$), and when discussing the Bar-Natan theory we take $U=H$ (in bigrading $(0,-2)$). In either case, the original non-deformed Khovanov complex is obtained by setting $U=0$; in analogy with Heegaard Floer homology, we will denote the non-deformed complex $\widehat\mathcal{C}(L)=\mathcal{C}^-(L)/\{U=0\}$. The homology $\widehat\mathcal{H}(L)$ of $\widehat\mathcal{C}(L)$ is ordinary Khovanov homology, often denoted $\mathit{Kh}(L)$. 
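The bigrading formulas above can be computed mechanically from a vertex of the cube and a labeling of the circles of its resolution; a minimal sketch (the vertex tuples and labelings in the example below are hypothetical illustrations, not tied to a specific link diagram):

```python
def bigrading(n_plus, n_minus, v, labels):
    """Bigrading (gr_h, gr_q) of a Khovanov generator lying over vertex v.

    n_plus, n_minus -- numbers of positive and negative crossings
    v               -- tuple of 0/1 entries, one per crossing
    labels          -- one label in {'1', 'X'} per circle of the v-resolution
    """
    weight = sum(v)  # |v|
    gr_h = -n_minus + weight
    gr_q = n_plus - 2 * n_minus + weight + labels.count('1') - labels.count('X')
    return gr_h, gr_q
```

The differential of the deformed complex then increases this bigrading by $(1,0)$, as stated above.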
Continuing the analogy with Heegaard Floer homology, let \begin{align*} \mathcal{C}^\infty(L)&=U^{-1}\mathcal{C}^-(L)\\ \mathcal{C}^+(L)&=\mathcal{C}^\infty(L)/\mathcal{C}^-(L), \end{align*} where the notation $U^{-1}$ denotes localization or, equivalently, tensoring over $\mathrm{R}[U]$ with $\mathrm{R}[U^{-1},U]$. (These conventions are not exactly parallel to Heegaard Floer homology~\cite[Section 4.1]{OSz-hf-3manifolds}.) Let $\mathcal{H}^-(L)$, $\mathcal{H}^\infty(L)$, and $\mathcal{H}^+(L)$ be the homologies of $\mathcal{C}^-(L)$, $\mathcal{C}^\infty(L)$, and $\mathcal{C}^+(L)$. See Figure~\ref{fig:compute} for an example of $\mathcal{C}^\infty(L)$ and its subcomplex $\mathcal{C}^-(L)$ and quotient complex $\mathcal{C}^+(L)$ (up to quasi-isomorphism), and a comparison with the usual formulation of the Lee deformation. There are short exact sequences \begin{equation}\label{eq:minfp} \begin{gathered} \begin{tikzpicture}[xscale=.9, >=To] \node at (.65,0) (tl0) {$0$}; \node at (2,0) (Cm) {$\mathcal{C}^-(L)$}; \node at (4,0) (Ci) {$\mathcal{C}^\infty(L)$}; \node at (6,0) (Cpt) {$\mathcal{C}^+(L)$}; \node at (7.35,0) (tr0) {$0\phantom{.}$}; \draw[->] (tl0) to (Cm); \draw[->] (Cm) to node[above=-.05]{\lab{\iota}} (Ci); \draw[->] (Ci) to node[above=-.05]{\lab{\pi}} (Cpt); \draw[->] (Cpt) to (tr0); \end{tikzpicture}\\[-.75em] \begin{tikzpicture}[xscale=.9, >=To] \node at (.65,0) (tl0) {$0$}; \node at (2,0) (Cm) {$\mathcal{C}^-(L)$}; \node at (4,0) (Ci) {$\mathcal{C}^-(L)$}; \node at (6,0) (Cpt) {$\widehat{\mathcal{C}}(L)$}; \node at (7.35,0) (tr0) {$0\phantom{.}$}; \draw[->] (tl0) to (Cm); \draw[->] (Cm) to node[above=-.05]{\lab{U}} (Ci); \draw[->] (Ci) to node[above=-.05]{\lab{\pi}} (Cpt); \draw[->] (Cpt) to (tr0); \end{tikzpicture}\\[-.75em] \begin{tikzpicture}[xscale=.9, >=To] \node at (.65,0) (tl0) {$0$}; \node at (2,0) (Cm) {$\widehat{\mathcal{C}}(L)$}; \node at (4,0) (Ci) {$\mathcal{C}^+(L)$}; \node at (6,0) (Cpt) {$\mathcal{C}^+(L)$}; \node at (7.35,0) (tr0) 
{$0.$}; \draw[->] (tl0) to (Cm); \draw[->] (Cm) to node[above=-.05]{\lab{\iota}} (Ci); \draw[->] (Ci) to node[above=-.05]{\lab{U}} (Cpt); \draw[->] (Cpt) to (tr0); \end{tikzpicture} \end{gathered} \end{equation} and corresponding long exact sequences \begin{equation}\label{eq:les} \begin{gathered} \begin{tikzpicture}[xscale=1, >=To] \node at (0.5,0) (n1) {$\cdots$}; \node at (2,0) (n2) {$\mathcal{H}^-(L)$}; \node at (4,0) (n3) {$\mathcal{H}^\infty(L)$}; \node at (6,0) (n4) {$\mathcal{H}^+(L)$}; \node at (8,0) (n5) {$\mathcal{H}^-(L)$}; \node at (9.55,0) (n6) {$\cdots,$}; \draw[->] (n1) to (n2); \draw[->] (n2) to node[above=-.05]{\lab{\iota_*}} (n3); \draw[->] (n3) to node[above=-.05]{\lab{\pi_*}} (n4); \draw[->] (n4) to node[above=-.05]{\lab{\partial}} (n5); \draw[->] (n5) to (n6); \end{tikzpicture}\\[-.75em] \begin{tikzpicture}[xscale=1, >=To] \node at (0.5,0) (n1) {$\cdots$}; \node at (2,0) (n2) {$\mathcal{H}^-(L)$}; \node at (4,0) (n3) {$\mathcal{H}^-(L)$}; \node at (6,0) (n4) {$\widehat{\mathcal{H}}(L)$}; \node at (8,0) (n5) {$\mathcal{H}^-(L)$}; \node at (9.55,0) (n6) {$\cdots,$}; \draw[->] (n1) to (n2); \draw[->] (n2) to node[above=-.05]{\lab{U}} (n3); \draw[->] (n3) to node[above=-.05]{\lab{\pi_*}} (n4); \draw[->] (n4) to node[above=-.05]{\lab{\partial}} (n5); \draw[->] (n5) to (n6); \end{tikzpicture}\\[-.75em] \begin{tikzpicture}[xscale=1, >=To] \node at (0.5,0) (n1) {$\cdots$}; \node at (2,0) (n2) {$\widehat{\mathcal{H}}(L)$}; \node at (4,0) (n3) {$\mathcal{H}^+(L)$}; \node at (6,0) (n4) {$\mathcal{H}^+(L)$}; \node at (8,0) (n5) {$\widehat{\mathcal{H}}(L)$}; \node at (9.55,0) (n6) {$\cdots.$}; \draw[->] (n1) to (n2); \draw[->] (n2) to node[above=-.05]{\lab{\iota_*}} (n3); \draw[->] (n3) to node[above=-.05]{\lab{U}} (n4); \draw[->] (n4) to node[above=-.05]{\lab{\partial}} (n5); \draw[->] (n5) to (n6); \end{tikzpicture}\\[-.75em] \end{gathered} \end{equation} The homomorphisms $U$ decrease the bigrading by $(0,2)$ for the Bar-Natan deformation and $(0,4)$ 
for the Lee deformation, the homomorphism $\widehat\mathcal{H}(L)\to\mathcal{H}^+(L)$ increases bigrading by $(0,2)$ for the Bar-Natan deformation and $(0,4)$ for the Lee deformation, the connecting homomorphisms $\mathcal{H}^+(L)\to \mathcal{H}^-(L)$ and $\mathcal{H}^+(L)\to \widehat\mathcal{H}(L)$ increase the bigrading by $(1,0)$, the connecting homomorphism $\widehat\mathcal{H}(L)\to\mathcal{H}^-(L)$ increases bigrading by $(1,2)$ for the Bar-Natan deformation and $(1,4)$ for the Lee deformation, and the other maps preserve the bigrading. Commutativity of the diagrams \[ \mathcenter{\begin{tikzpicture} \node at (.75,0) (tl0) {$0$}; \node at (2,0) (Cm) {$\mathcal{C}^-(L)$}; \node at (4,0) (Ci) {$\mathcal{C}^\infty(L)$}; \node at (6,0) (Cpt) {$\mathcal{C}^+(L)$}; \node at (7.25,0) (tr0) {$0$}; \node at (0.75,-1) (bl0) {$0$}; \node at (2,-1) (Ch) {$\widehat\mathcal{C}(L)$}; \node at (4,-1) (Cpb1) {$\mathcal{C}^+(L)$}; \node at (6,-1) (Cpb2) {$\mathcal{C}^+(L)$}; \node at (7.25,-1) (br0) {$0$}; \draw[->] (tl0) to (Cm); \draw[->] (Cm) to node[above]{\lab{\iota}} (Ci); \draw[->] (Ci) to node[above]{\lab{\pi}} (Cpt); \draw[->] (Cpt) to (tr0); \draw[->] (bl0) to (Ch); \draw[->] (Ch) to node[above]{\lab{\iota}} (Cpb1); \draw[->] (Cpb1) to node[above]{\lab{U}} (Cpb2); \draw[->] (Cpb2) to (br0); \draw[->] (Cm) to node[right]{\lab{\pi}} (Ch); \draw[->] (Ci) to node[right]{\lab{\pi\circ U^{-1}}} (Cpb1); \draw[->] (Cpt) to node[right]{\lab{\Id}} (Cpb2); \end{tikzpicture}} \text{\quad and\quad } \mathcenter{\begin{tikzpicture} \node at (.75,0) (tl0) {$0$}; \node at (2,0) (Cm) {$\mathcal{C}^-(L)$}; \node at (4,0) (Ci) {$\mathcal{C}^-(L)$}; \node at (6,0) (Cpt) {$\widehat{\mathcal{C}}(L)$}; \node at (7.25,0) (tr0) {$0$}; \node at (0.75,-1) (bl0) {$0$}; \node at (2,-1) (Ch) {$\mathcal{C}^-(L)$}; \node at (4,-1) (Cpb1) {$\mathcal{C}^\infty(L)$}; \node at (6,-1) (Cpb2) {$\mathcal{C}^+(L)$}; \node at (7.25,-1) (br0) {$0$}; \draw[->] (tl0) to (Cm); \draw[->] (Cm) to 
node[above]{\lab{U}} (Ci); \draw[->] (Ci) to node[above]{\lab{\pi}} (Cpt); \draw[->] (Cpt) to (tr0); \draw[->] (bl0) to (Ch); \draw[->] (Ch) to node[above]{\lab{\iota}} (Cpb1); \draw[->] (Cpb1) to node[above]{\lab{\pi}}(Cpb2); \draw[->] (Cpb2) to (br0); \draw[->] (Cm) to node[right]{\lab{\Id}} (Ch); \draw[->] (Ci) to node[right]{\lab{U^{-1}\circ\iota}} (Cpb1); \draw[->] (Cpt) to node[right]{\lab{\iota}} (Cpb2); \end{tikzpicture}} \] of short exact sequences and naturality of the snake lemma imply that the following diagrams commute: \begin{equation}\label{eq:bdybdy-tri} \vcenter{\hbox{\begin{tikzpicture}[xscale=1.5] \node (p) at (0,0) {$\mathcal{H}^+(L)$}; \node (h) at (2,0) {$\widehat\mathcal{H}(L)$}; \node (m) at (1,1) {$\mathcal{H}^-(L)$}; \draw[->] (p) -- (h) node[midway,anchor=north] {\tiny $\partial$}; \draw[->] (p) -- (m) node[midway,anchor=south east] {\tiny $\partial$}; \draw[->] (m) -- (h) node[midway,anchor=south west] {\tiny $\pi_*$}; \end{tikzpicture}}} \text{\quad and\quad} \vcenter{\hbox{\begin{tikzpicture}[xscale=1.5] \node (p) at (0,0) {$\widehat{\mathcal{H}}(L)$}; \node (h) at (2,0) {$\mathcal{H}^-(L)$.}; \node (m) at (1,1) {$\mathcal{H}^+(L)$}; \draw[->] (p) -- (h) node[midway,anchor=north] {\tiny $\partial$}; \draw[->] (p) -- (m) node[midway,anchor=south east] {\tiny $\iota_*$}; \draw[->] (m) -- (h) node[midway,anchor=south west] {\tiny $\partial$}; \end{tikzpicture}}} \end{equation} For the empty link, $\mathcal{H}^-(\varnothing)\cong \mathrm{R}[U]$, $\mathcal{H}^\infty(\varnothing)\cong \mathrm{R}[U^{-1},U]$, $\mathcal{H}^+(\varnothing)\cong \mathrm{R}[U^{-1},U]/\mathrm{R}[U]$, and $\widehat\mathcal{H}(\varnothing)\cong\mathrm{R}$. More generally, for $\mathcal{H}^\infty$ we have the following well-known result. (Recall that for the Lee deformation we assume $2$ is invertible in $\mathrm{R}$.)
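The proposition that follows relies on a change of basis that diagonalizes each Frobenius algebra: $A=X$, $B=H-X$ for the Bar-Natan theory, and $A=\sqrt{T}+X$, $B=\sqrt{T}-X$ for the Lee theory after adjoining $\sqrt{T}$. The multiplication identities it needs can be checked mechanically; the coefficient-pair encoding and function names below are assumptions of this sketch, not notation from the paper:

```python
import sympy as sp

H, T = sp.symbols('H T')
s = sp.sqrt(T)

def mul_bn(x, y):
    # multiply in R[H][X]/(X^2 = X*H); an element c0*1 + c1*X is (c0, c1)
    (a0, a1), (b0, b1) = x, y
    return (sp.expand(a0*b0), sp.expand(a0*b1 + a1*b0 + a1*b1*H))

def mul_lee(x, y):
    # multiply in R[sqrt(T)][X]/(X^2 = T)
    (a0, a1), (b0, b1) = x, y
    return (sp.expand(a0*b0 + a1*b1*T), sp.expand(a0*b1 + a1*b0))

def scale(u, x):
    return (sp.expand(u*x[0]), sp.expand(u*x[1]))

# Bar-Natan: A = X, B = H - X
A, B = (0, 1), (H, -1)
assert mul_bn(A, A) == scale(H, A)      # A^2 = H*A
assert mul_bn(B, B) == scale(H, B)      # B^2 = H*B
assert mul_bn(A, B) == (0, 0)           # A*B = 0

# Lee: A = sqrt(T) + X, B = sqrt(T) - X
A, B = (s, 1), (s, -1)
assert mul_lee(A, A) == scale(2*s, A)   # A^2 = 2*sqrt(T)*A
assert mul_lee(B, B) == scale(2*s, B)   # B^2 = 2*sqrt(T)*B
assert mul_lee(A, B) == (0, 0)          # A*B = 0
```

The comultiplication identities $\Delta(A)=A\otimes A$ and $\Delta(B)=-B\otimes B$ can be checked the same way by expanding both sides in the $\{1,X\}$ basis.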
\begin{proposition}\label{prop:or-gens-part1} In the Bar-Natan theory, there is a canonical isomorphism \begin{equation} \mathcal{H}^\infty(L)\cong \bigoplus_{o\in o(L)}\mathrm{R}[H^{-1},H] \end{equation} where $o(L)$ is the set of orientations of $L$. In the Lee theory, after adding a formal square root of $T$, there is a canonical isomorphism \begin{equation} \mathcal{H}^\infty(L)\otimes_{\mathrm{R}[T]}\mathrm{R}[\sqrt{T}]\cong \bigoplus_{o\in o(L)}\mathrm{R}[T^{-\frac{1}{2}},T^{\frac{1}{2}}]. \end{equation} In each case, the summand corresponding to an orientation $o$ is supported in homological grading $2\mathrm{lk}(L_o,L\setminus L_o)$, where $L_o$ is the sublink of $L$ consisting of components whose original orientations agree with $o$ and $\mathrm{lk}$ is the linking number. \end{proposition} \begin{proof} The proof is well-known (see~\cite{Lee-kh-endomorphism,BNM-kh-degeneration,Tur-kh-diag}), so we merely sketch it. The Bar-Natan Frobenius algebra $\mathrm{R}[H^{-1},H][X]/(X^2=XH)$ has a basis $\{ A\defeq X,\ B\defeq H-X\}$ over $\mathrm{R}[H^{-1},H]$, which diagonalizes it: \begin{equation*} \begin{split} A^2=HA,\qquad B^2=HB,\qquad AB=0\\ \Delta(A)=A\otimes A,\qquad \Delta(B)=-B\otimes B. \end{split} \end{equation*} For the Lee case, after adding a formal square root of $T$, the Frobenius algebra $\mathrm{R}[T^{-\frac{1}{2}},T^{\frac{1}{2}}][X]/(X^2=T)$ has a basis $\{ A\defeq \sqrt{T}+X,\ B\defeq \sqrt{T}-X\}$, which diagonalizes it: \begin{equation*} \begin{split} A^2=2\sqrt{T} A,\qquad B^2=2\sqrt{T} B,\qquad AB=0\\ \Delta(A)=A\otimes A,\qquad \Delta(B)=-B\otimes B. \end{split} \end{equation*} In the Lee case, note that the homology of $\mathcal{C}^\infty(L)\otimes_{\mathrm{R}[T]}\mathrm{R}[\sqrt{T}]$ is isomorphic to $\mathcal{H}^\infty(L)\otimes_{\mathrm{R}[T]}\mathrm{R}[\sqrt{T}]$, since $\mathrm{R}[\sqrt{T}]$ is free over $\mathrm{R}[T]$. 
For any vertex $v\in\{0,1\}^N$ in the cube of resolutions, let $L_v$ be the corresponding complete resolution of the link diagram $L$. With respect to the above basis, the chain group $\mathcal{C}^\infty(L)$ is freely generated (over $\mathrm{R}[H^{-1},H]$ in the Bar-Natan case or $\mathrm{R}[T^{-\frac{1}{2}},T^{\frac{1}{2}}]$ in the Lee case) by all possible labelings of the circles of $L_v$ by $\{A,B\}$, for all $v$. Call two such generators \emph{equivalent} if one can be obtained from the other by changing the resolutions at some crossings ($0$ to $1$ or $1$ to $0$) so that the circles have consistent labelings before and after the change. That is, given resolutions $L_v$ and $L_w$, there is a cobordism $\Sigma_{v,w}$ from $L_v$ to $L_w$ consisting of saddles at the crossings where $v$ and $w$ differ; a generator over $v$ and a generator over $w$ are equivalent if for each component $\Sigma$ of $\Sigma_{v,w}$, all circles in the boundary of $\Sigma$ have the same label. Since the basis $\{ A,B\}$ diagonalizes the Frobenius algebra, the complex $\mathcal{C}^\infty(L)$ decomposes as a direct sum along equivalence classes. Moreover, in each equivalence class, the complex is isomorphic to the tensor product of some number of copies of the two-step complex $\mathrm{R}[H^{-1},H]\stackrel{\Id}{\longrightarrow}\mathrm{R}[H^{-1},H]$ in the Bar-Natan case or $\mathrm{R}[T^{-\frac{1}{2}},T^{\frac{1}{2}}]\stackrel{\Id}{\longrightarrow} \mathrm{R}[T^{-\frac{1}{2}},T^{\frac{1}{2}}]$ in the Lee case. These complexes are acyclic, unless the tensor product is over zero copies, that is, the equivalence class contains just a single element. Therefore, the homology $\mathcal{H}^\infty(L)$ is generated by equivalence classes containing a single element, which are generators where every crossing connects two circles in the resolution with different labels. Such generators, in turn, are in canonical correspondence with orientations of $L$, as follows. 
For any resolution $L_v$, consider the checkerboard coloring of $\R^2\setminus L_v$ where the unbounded region is colored white. For any generator over $v$, orient each circle in $L_v$ as the boundary of the black (respectively, white) region if it is labeled $A$ (respectively, $B$). This orientation of $L_v$ induces an orientation of $L$ precisely for the above type of generators. The statement about the homological gradings is straightforward from the description of the generators above. \end{proof} \section{Behavior under (possibly nonorientable) cobordisms}\label{sec:behave} We will study the maps induced on these deformed Khovanov complexes and their homologies by a (possibly nonorientable) cobordism $F\subset [0,1]\times S^3$ from an oriented link $L_0\subset\{0\}\times S^3$ to an oriented link $L_1\subset \{1\}\times S^3$. Throughout the paper, all link cobordisms will be assumed to be products near the boundary. Recall the definition of the \emph{normal Euler number}. Pick Seifert surfaces for the $L_i$ and take a transverse pushoff $F'$ of $F$ so that the pushoff of $L_i$ is in the direction of its Seifert surface. (It follows from the Mayer-Vietoris theorem applied to the decomposition $S^3=\nbd(L_i)\cup (S^3\setminus L_i)$ that these pushoffs are independent of the choice of Seifert surfaces.) Then, the normal Euler number $e$ of $F$ is the signed count of intersection points between $F$ and $F'$, where the signs come from picking a local orientation of $F$ near each intersection point and using the induced local orientation of $F'$. This number is independent of the choice of pushoff. The normal Euler number is zero for oriented cobordisms from $L_0$ to $L_1$ and is some even number in general. We will consider compact link cobordisms decorated with finitely many marked points which, to be consistent with Khovanov-Robert~\cite{KhRo-kh-Frob2}, we will call \emph{stars}. 
So, an \emph{elementary cobordism} between link diagrams is one of the following moves: \begin{enumerate}[label=(EC-\arabic*)] \item\label{item:EC1} A planar isotopy of the diagram. \item A Reidemeister move. \item\label{item:EC3} A birth or death of an unknot disjoint from the rest of the diagram. \item\label{item:EC4} No change to the link diagram but a choice of a distinguished point (star) in the interior of an arc of the diagram; we interpret this as the identity cobordism with a single star in its interior, lying over this distinguished point. \item\label{item:EC5} A planar saddle. \item\label{item:EC6} The identity cobordism from a link $L$ to the same link with a different orientation on some components. \end{enumerate} Associated to each elementary cobordism is a map of the Khovanov complexes. For Reidemeister moves, these are the maps from the proof of invariance of these theories. Specifically, Bar-Natan associates particular picture-world maps to each Reidemeister move~\cite{Bar-kh-tangle-cob}, and feeding these pictures into the Lee or Bar-Natan Frobenius algebra gives the map of deformed Khovanov complexes. The map associated to a birth is the unit $1$ and associated to a death is the counit $\epsilon$. The map associated to a saddle is obtained by applying the corresponding multiplication $m$ or comultiplication $\Delta$ to each resolution. The map associated to the identity cobordism with a star on some arc $A$ multiplies the label of $A$, in each resolution, by $2X$ for the Lee deformation and $2X-H$ for the Bar-Natan deformation (compare~\cite[Formula (16)]{KhRo-kh-Frob2}). This map depends only on the arc containing the star, not the location of the star on that arc. The map associated to the identity cobordism with inconsistent orientations is the identity map. Suppose $F$ is a (possibly nonorientable) cobordism from $L_0$ to $L_1$, with a finite number of marked stars in the interior of $F$. 
If $F$ is represented by a movie of elementary cobordisms, then there is an induced map $\mathcal{C}^-(F)\colon \mathcal{C}^-(L_0)\to\mathcal{C}^-(L_1)$, obtained by composing the maps associated to elementary cobordisms. This induces maps on all four versions $\mathcal{C}^\bullet$ of the Khovanov complexes, $\bullet\in\{+,-,\infty,\widehat{\ }\}$, as well as their homologies $\mathcal{H}^\bullet$. \begin{lemma}\label{lem:les-natural} The maps $\mathcal{C}^\bullet(F)\colon \mathcal{C}^\bullet(L_0)\to \mathcal{C}^\bullet(L_1)$, $\bullet\in\{+,-,\infty,\widehat{\ }\}$, induce maps $\mathcal{H}^\bullet(F)\colon \mathcal{H}^\bullet(L_0)\to\mathcal{H}^\bullet(L_1)$, and the long exact sequences from Formula~\eqref{eq:les} are natural with respect to these maps. \end{lemma} \begin{proof} This is immediate from the definitions. \end{proof} Assuming the link diagrams are oriented coherently before and after the move, for planar isotopies and Reidemeister moves, the maps preserve the bigrading, and for births and deaths, the maps preserve $\mathrm{gr}_h$ and increase $\mathrm{gr}_q$ by $1$. The map associated to a star preserves $\mathrm{gr}_h$ and decreases $\mathrm{gr}_q$ by $2$. The behavior of saddles is more complicated. \begin{lemma}\label{lem:bigrading-shift-saddle} Let $F$ be a planar saddle from an oriented link diagram $L_0$ to an oriented link diagram $L_1$, which need not be orientable coherently with the orientations of $L_0$ and $L_1$. Let $e$ be its normal Euler number. Then, the map $\mathcal{C}^-(F)\colon \mathcal{C}^-(L_0)\to\mathcal{C}^-(L_1)$ decreases $\mathrm{gr}_h$ by $e/2$ and decreases $\mathrm{gr}_q$ by $1+3e/2$. \end{lemma} \begin{proof} Ozsv\'ath-Stipsicz-Szab\'o show that the normal Euler number $e$ of the planar saddle $F$ is $w(L_0)-w(L_1)$, where $w(L_i)=N_+(L_i)-N_-(L_i)$ is the writhe of the link diagram $L_i$~\cite[Proof of Lemma 4.3]{OSS-hf-unoriented}.
They write their proof only for knots but, as we sketch in the next paragraph, it works equally well for links. Fix any normal direction to the plane of projection of the link diagrams and consider a small pushoff of $L_i$ in this normal direction; call this the \emph{blackboard pushoff}. Since the total linking number of $L_i$ with its blackboard pushoff is the writhe $w(L_i)$ while the total linking number of $L_i$ with its Seifert pushoff is zero, the identity cobordism from $L_i$ to $L_i$ has a pushoff which intersects itself $w(L_i)$ times, so that the pushoff restricts to the Seifert pushoff at one end and the blackboard pushoff at the other. The planar saddle also has a (similarly defined) blackboard pushoff without any self-intersection, and it restricts to the blackboard pushoffs of $L_0$ and $L_1$ at the two ends. Putting these pieces together, we get a pushoff with $w(L_0)-w(L_1)$ self-intersections connecting the Seifert pushoffs of $L_0$ and $L_1$. Let $N$ be the total number of crossings in either $L_0$ or $L_1$. Recall that the complex $\mathcal{C}^-(L_i)$ is obtained from the total complex of a cube-shaped diagram---call it $\mathcal{C}'(L_i)$---by increasing $\mathrm{gr}_h$ by $-N_-(L_i)=(w(L_i)-N)/2$ and $\mathrm{gr}_q$ by $N_+(L_i)-2N_-(L_i)=(3w(L_i)-N)/2$. The saddle $F$ induces a map $\mathcal{C}'(L_0) \to \mathcal{C}'(L_1)$ that preserves the homological grading and decreases the quantum grading by $1$. Therefore, after the grading shifts, the map $\mathcal{C}^-(F)\colon \mathcal{C}^-(L_0)\to\mathcal{C}^-(L_1)$ decreases $\mathrm{gr}_h$ by $(w(L_0)-w(L_1))/2=e/2$ and decreases $\mathrm{gr}_q$ by $1+3(w(L_0)-w(L_1))/2=1+3e/2$. \end{proof} \begin{corollary}\label{cor:hq-gr-change} (Compare~\cite[Proposition 4.7]{Bal-kh-E1}) The map associated to a cobordism with Euler characteristic $\chi$, normal Euler number $e$, and $s$ stars decreases $\mathrm{gr}_h$ by $e/2$ and increases $\mathrm{gr}_q$ by $\chi-3e/2-2s$. 
\end{corollary} \begin{proof} Consider a movie decomposition into elementary cobordisms \ref{item:EC1}--\ref{item:EC6}. Choose orientations of all the link diagrams that appear in the movie, so that link diagrams before and after each of the moves~\ref{item:EC1}--\ref{item:EC4} are oriented coherently. (We may choose the orientations inductively, starting with the given orientation of $L_0$, and by using a move~\ref{item:EC6} if necessary, we may ensure that the chosen orientation of $L_1$ agrees with the given orientation of $L_1$.) We now check that the statement holds for each of the elementary cobordisms. The elementary cobordisms~\ref{item:EC1}--\ref{item:EC3} have $e=s=0$, and the associated maps increase the bigrading by $(0,\chi)=(-e/2,\chi-3e/2-2s)$. The elementary cobordism~\ref{item:EC4} has $e=\chi=0$ and $s=1$, and the associated map increases the bigrading by $(0,-2)=(-e/2,\chi-3e/2-2s)$. The elementary cobordism~\ref{item:EC5} has $\chi=-1$ and $s=0$, and by Lemma~\ref{lem:bigrading-shift-saddle}, the associated map increases the bigrading by $(-e/2,-1-3e/2)=(-e/2,\chi-3e/2-2s)$. Finally, for cobordisms of type~\ref{item:EC6}, we have $\chi=s=0$, and the associated identity map increases the bigrading by $(-e/2,-3e/2)=(-e/2,\chi-3e/2-2s)$ as well. (The proof is similar to, but easier than, the proof of Lemma~\ref{lem:bigrading-shift-saddle}.) Since $e$, $\chi$, and $s$ are additive, the composition of these maps also increases the bigrading by $(-e/2,\chi-3e/2-2s)$. \end{proof} \begin{remark} The corollary suggests that another natural grading is $\mathrm{gr}_\gamma=\mathrm{gr}_q-3\mathrm{gr}_h$: the map associated to any cobordism (possibly nonorientable) increases $\mathrm{gr}_\gamma$ by the Euler characteristic of the cobordism minus twice the number of stars; indeed, $(\chi-3e/2-2s)-3\cdot(-e/2)=\chi-2s$. \end{remark} We also recall a well-known result of Rasmussen's~\cite{Ras-kh-slice}. Given a link $L$, let $o(L)$ be the set of orientations of $L$.
Similarly, given a cobordism $F$ from $L_0$ to $L_1$, let $o(F)$ be the set of orientations of $F$. There are restriction maps $o(L_0)\leftarrow o(F)\rightarrow o(L_1)$; that is, $o(F)$ is a \emph{correspondence} from $o(L_0)$ to $o(L_1)$. (Here, we choose orientation conventions so that if $F=[0,1]\times L$ then $o(F)$ is the identity correspondence of $o(L)$.) \begin{proposition}\label{prop:or-gens-part2} Given a cobordism $F$ from $L_0$ to $L_1$, for the Bar-Natan and Lee theories, respectively, we have commutative diagrams \begin{align*} &\vcenter{\hbox{\begin{tikzpicture}[xscale=6,yscale=1.5] \node (a0) at (0,1) {$\mathcal{H}^\infty(L_0)$}; \node (a1) at (1,1) {$\mathcal{H}^\infty(L_1)$}; \node (b0) at (0,0) {$\displaystyle\bigoplus_{o\in o(L_0)}\mathrm{R}[H^{-1},H]$}; \node (b1) at (1,0) {$\displaystyle\bigoplus_{o\in o(L_1)}\mathrm{R}[H^{-1},H]$}; \draw[->] (a0) -- (a1) node[midway,anchor=south] {\tiny $\mathcal{H}^\infty(F)$}; \draw[->] (b0) -- (b1) node[midway,anchor=south] {\tiny $F_*$}; \draw[->] (a0) -- (b0) node[midway,anchor=west] {\tiny $\cong$}; \draw[->] (a1) -- (b1) node[midway,anchor=east] {\tiny $\cong$}; \end{tikzpicture}}}\\ &\vcenter{\hbox{\begin{tikzpicture}[xscale=6,yscale=1.5] \node (a0) at (0,1) {$\mathcal{H}^\infty(L_0)\otimes_{\mathrm{R}[T]}\mathrm{R}[\sqrt{T}]$}; \node (a1) at (1,1) {$\mathcal{H}^\infty(L_1)\otimes_{\mathrm{R}[T]}\mathrm{R}[\sqrt{T}]$}; \node (b0) at (0,0) {$\displaystyle\bigoplus_{o\in o(L_0)}\mathrm{R}[T^{-\frac{1}{2}},T^{\frac{1}{2}}]$}; \node (b1) at (1,0) {$\displaystyle\bigoplus_{o\in o(L_1)}\mathrm{R}[T^{-\frac{1}{2}},T^{\frac{1}{2}}]$}; \draw[->] (a0) -- (a1) node[midway,anchor=south] {\tiny $\mathcal{H}^\infty(F)\otimes\Id$}; \draw[->] (b0) -- (b1) node[midway,anchor=south] {\tiny $F_*$}; \draw[->] (a0) -- (b0) node[midway,anchor=west] {\tiny $\cong$}; \draw[->] (a1) -- (b1) node[midway,anchor=east] {\tiny $\cong$}; \end{tikzpicture}}} \end{align*} where the vertical arrows are from 
Proposition~\ref{prop:or-gens-part1}, and the bottom arrow is some map $F_*$ that refines the correspondence $o(L_0)\leftarrow o(F)\rightarrow o(L_1)$. That is, for $i\in\{0,1\}$ and for any orientation $o_i\in o(L_i)$ and any generator $g_i$ of the $\mathrm{R}[H^{-1},H]$ (respectively, $\mathrm{R}[T^{-\frac{1}{2}},T^{\frac{1}{2}}]$) summand corresponding to $o_i$, the coefficient of $g_1$ in $F_*(g_0)$ is a sum \( \sum_{\substack{o\in o(F)\\ o|_{L_0}=o_0,\ o|_{L_1}=o_1}} e_o, \) where each $e_o$ is a unit in $\mathrm{R}[H^{-1},H]$ (respectively, $\mathrm{R}[T^{-\frac{1}{2}},T^{\frac{1}{2}}]$). In particular, if $F$ is nonorientable, then in either theory, the map $\mathcal{H}^\infty(F)\colon \mathcal{H}^\infty(L_0)\to\mathcal{H}^\infty(L_1)$ is zero. \end{proposition} \begin{proof} For the first part, Rasmussen proved~\cite{Ras-kh-slice} the result for Lee's deformation with $\mathrm{R}=\mathbb{Q}$ using a change of basis that diagonalized Lee's Frobenius algebra (after adding a square root of $T$). The change of basis from the proof of Proposition~\ref{prop:or-gens-part1} diagonalizes the Frobenius algebra, both for the Bar-Natan theory (over any ring $\mathrm{R}$) and the Lee theory (over any ring $\mathrm{R}$ with $2$ invertible, and after adding a square root of $T$). Using this diagonalized basis, Rasmussen's proof goes through without any essential changes. (For an elementary star cobordism, the map is multiplication by $A-B$, which sends each orientation to itself with coefficient $\pm 1$, and hence fits into Rasmussen's framework.) The last assertion is automatic for the Bar-Natan theory. For the Lee theory, note that if the map $\mathcal{H}^\infty(F)\otimes\Id\colon \mathcal{H}^\infty(L_0)\otimes_{\mathrm{R}[T]}\mathrm{R}[\sqrt{T}] \to \mathcal{H}^\infty(L_1)\otimes_{\mathrm{R}[T]}\mathrm{R}[\sqrt{T}]$ is zero, then the map $\mathcal{H}^\infty(F)\colon \mathcal{H}^\infty(L_0) \to \mathcal{H}^\infty(L_1)$ must be zero as well.
This follows from commutativity of the diagram \[ \begin{tikzpicture}[xscale=6,yscale=1.5] \node (a0) at (0,0) {$\mathcal{H}^\infty(L_0)\otimes_{\mathrm{R}[T]}\mathrm{R}[\sqrt{T}]$}; \node (a1) at (1,0) {$\mathcal{H}^\infty(L_1)\otimes_{\mathrm{R}[T]}\mathrm{R}[\sqrt{T}]$}; \node (b0) at (0,1) {$\mathcal{H}^\infty(L_0)$}; \node (b1) at (1,1) {$\mathcal{H}^\infty(L_1)$}; \draw[->] (a0)--(a1) node[midway,anchor=south] {\tiny $\mathcal{H}^\infty(F)\otimes\Id$}; \draw[->] (b0)--(b1) node[midway,anchor=south] {\tiny $\mathcal{H}^\infty(F)$}; \draw[->] (b0)--(a0); \draw[->] (b1)--(a1); \end{tikzpicture} \] and noting that the rightmost vertical map \[ \mathcal{H}^\infty(L_1)\to \mathcal{H}^\infty(L_1)\otimes_{\mathrm{R}[T]}\mathrm{R}[\sqrt{T}]\cong \mathcal{H}^\infty(L_1)\otimes_{\mathrm{R}[T]}\big(\mathrm{R}[T]\oplus\sqrt{T}\mathrm{R}[T]\big)\cong \mathcal{H}^\infty(L_1)\oplus \sqrt{T}\mathcal{H}^\infty(L_1) \] is the inclusion as the first factor, and therefore is injective. \end{proof} Finally, we confirm well-definedness of the cobordism maps. Before stating the main result, we note some relations involving elementary star cobordisms (cobordisms of type~\ref{item:EC4}). \begin{lemma}\label{lem:star-commute} Up to sign, the map on $\mathcal{C}^-$ associated to an elementary star cobordism commutes with the map associated to any elementary cobordism disjoint from the star, and commutes with planar isotopies in general in the obvious sense. If $p$ and $q$ are points on opposite sides of a crossing then the map associated to the elementary star cobordism at $p$ is chain homotopic to $-1$ times the map associated to the elementary star cobordism at $q$; in particular, these two maps also agree up to homotopy and sign. \end{lemma} \begin{proof} The first statement is straightforward from the definitions. 
The second is immediate from a lemma of Hedden-Ni's \cite[Lemma 2.3]{HN-kh-detects} in the Lee case and a lemma of Alishahi's~\cite[Lemma 2.2]{Ali-kh-unknotting} in the Bar-Natan case. \end{proof} Well-definedness of the cobordism maps is the following: \begin{proposition}\label{prop:BNh-functorial} Let $F\subset [0,1]\times S^3$ be a (possibly nonorientable) cobordism from $L_0$ to $L_1$. For $\bullet\in\{+,-,\infty,\widehat{\ }\}$, the induced map $\mathcal{C}^\bullet(F)\colon\mathcal{C}^\bullet(L_0)\to\mathcal{C}^\bullet(L_1)$ in the homotopy category of complexes over $\mathrm{R}[U]$ is well-defined up to sign, and invariant under isotopy of $F$ in $[0,1]\times S^3$ rel.~boundary. In fact, if $\Phi\colon [0,1]\times S^3\to [0,1]\times S^3$ is a diffeomorphism which is the identity near the boundary, then $\mathcal{C}^\bullet(F)$ and $\mathcal{C}^\bullet(\Phi(F))$ are chain homotopic. \end{proposition} \begin{proof} Since the maps on $\mathcal{C}^+$, $\mathcal{C}^\infty$, and $\widehat{\mathcal{C}}$ are induced by the map on $\mathcal{C}^-$, it suffices to prove the result for $\mathcal{C}^-$ (compare Lemma~\ref{lem:les-natural}). For isotopies of oriented cobordisms in $[0,1]\times \R^3$, this follows from Bar-Natan's result~\cite[Theorem 4]{Bar-kh-tangle-cob} since both the Lee perturbation and the Bar-Natan perturbation can be obtained functorially from Bar-Natan's diagrammatic invariants. His proof works equally well for nonorientable cobordisms: any two movies representing isotopic nonorientable cobordisms also differ by a sequence of Carter-Saito's movie moves~\cite{CS-knot-movie}, every local movie move (between sequences of tangles) can be given a consistent orientation, and the map induced by a movie is independent of the choice of orientations. (In particular, we can suppress cobordisms of type~\ref{item:EC6} in these movies.) Finally, functoriality for starred cobordisms follows easily from Lemma~\ref{lem:star-commute}. 
(See, for instance,~\cite[Lemmas 2.1 and 4.1]{Sar-ribbon} for more details.) To verify invariance under isotopies in $[0,1]\times S^3$ we must also check invariance under Morrison-Walker-Wedrich's \emph{sweep-around move}~\cite[Formula (1.1)]{MWW-kh-blobs}. Their proof works mutatis mutandis for the Lee and Bar-Natan deformations~\cite[Remark 2.2]{MWW-kh-blobs}. Nevertheless, for the sake of completeness, we present their proof adapted to our setting below. We will mostly use their notation, but a slightly different language. Consider Morrison-Walker-Wedrich's picture~\cite[Formula (3.1)]{MWW-kh-blobs}. The picture shows two ways of moving a strand from top to bottom to get from a link diagram $L$ to a link diagram $L'$: in the first method, this strand moves in front of the rest of the link, while in the second, it moves behind. Let $L^i_+$ (respectively, $L^i_-$) denote the link diagram at the $i\th$ stage in the first (respectively, second) method. The two sequences of link diagrams $L^i_\pm$ produce two chain maps $\mathcal{C}^-(L)\to\mathcal{C}^-(L')$ by composing maps associated to Reidemeister moves. By choosing the Reidemeister maps carefully, we will show that the two maps agree on the nose. Recall that for any link diagram, the homological grading of any resolution is given by the number of crossings that have been resolved as the $1$-resolution minus the total number of negative crossings in the diagram. For the link diagrams that appear above, call a crossing \emph{external} (respectively, \emph{internal}) if it involves (respectively, does not involve) the moving horizontal strand, and define the \emph{external grading} (respectively, \emph{internal grading}) of any resolution to be the number of external (respectively, internal) crossings that have been resolved as the $1$-resolution minus the total number of negative external (respectively, internal) crossings. The differential preserves or increases the external grading.
For any of the above link diagrams, let $\mathcal{C}^-_0$ denote the subgroup of the chain group that lives in external grading $0$. Note that $\mathcal{C}^-_0(L)=\mathcal{C}^-(L)$ and $\mathcal{C}^-_0(L')=\mathcal{C}^-(L')$ since these diagrams have no external crossings. Also note that there is a natural isomorphism of groups $\mathcal{C}^-_0(L^i_+)\cong\mathcal{C}^-_0(L^i_-)$ since any resolution of $L^i_+$ in external grading $0$, when viewed as a resolution of $L^i_-$, is also in external grading $0$. (This uses the fact that Morrison-Walker-Wedrich start with a braid closure.) The Reidemeister maps will be chosen in such a way that they will preserve or decrease the external grading. Since the composition is a map $\mathcal{C}^-_0(L)\to\mathcal{C}^-_0(L')$, it is therefore enough to consider the portion of the maps that preserve the external grading, namely, the maps \[ \mathcal{C}^-(L)\to\mathcal{C}^-_0(L^0_\pm)\to\dots\to\mathcal{C}^-_0(L^i_\pm)\to\dots\to\mathcal{C}^-_0(L^\ell_\pm)\to\mathcal{C}^-(L'). \] (These individual maps are typically not chain maps; however, perhaps surprisingly, that is irrelevant to the proof.) 
Furthermore, these components of the maps will commute with the isomorphisms $\mathcal{C}^-_0(L^i_+)\cong\mathcal{C}^-_0(L^i_-)$, that is, the following diagram will commute: \[ \begin{tikzpicture}[xscale=1.5,yscale=0.7] \node (L) at (-0.5,0) {$\mathcal{C}^-(L)$}; \node (L0p) at (1,1) {$\mathcal{C}^-_0(L^0_+)$}; \node (L0m) at (1,-1) {$\mathcal{C}^-_0(L^0_-)$}; \node (L1p) at (2,1) {$\cdots$}; \node (L1m) at (2,-1) {$\cdots$}; \node (L2p) at (3,1) {$\mathcal{C}^-_0(L^i_+)$}; \node (L2m) at (3,-1) {$\mathcal{C}^-_0(L^i_-)$}; \node (L3p) at (4,1) {$\cdots$}; \node (L3m) at (4,-1) {$\cdots$}; \node (L4p) at (5,1) {$\mathcal{C}^-_0(L^\ell_+)$}; \node (L4m) at (5,-1) {$\mathcal{C}^-_0(L^\ell_-)$}; \node (LL) at (6.5,0) {$\mathcal{C}^-(L')$}; \draw[->] (L)--(L0p); \draw[->] (L)--(L0m); \draw[<-] (LL)--(L4p); \draw[<-] (LL)--(L4m); \foreach\i in{0,2,4}{ \draw[->] (L\i p)--(L\i m) node[pos=0.5,anchor=west] {\tiny $\cong$}; } \foreach\i [count=\j from 1] in {0,1,2,3}{ \draw[->] (L\i p)--(L\j p); \draw[->] (L\i m)--(L\j m); } \end{tikzpicture} \] That will establish that the two compositions agree on the nose. For the Reidemeister I and II moves at the beginning and end of the sequence, the above check is almost automatic. The maps preserve the homological grading. Since all the crossings involved are external, the maps also preserve the internal grading, and therefore preserve the external grading as well. 
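The external-grading bookkeeping used in this argument can be illustrated with a toy computation (the crossing data and all names below are hypothetical, not from the paper): a resolution assigns $0$ or $1$ to each crossing, each crossing is marked external or internal and carries a sign, and each cube-differential edge flips a single crossing from its $0$-resolution to its $1$-resolution, so it raises the external grading by one at an external crossing and preserves it at an internal one.

```python
import itertools

def external_grading(resolution, crossings):
    # external grading = (# external crossings resolved as 1)
    #                  - (# negative external crossings)
    ones = sum(1 for c, r in resolution.items()
               if r == 1 and crossings[c][0] == 'ext')
    negs = sum(1 for c, (kind, sign) in crossings.items()
               if kind == 'ext' and sign == -1)
    return ones - negs

def differential_edges(resolution):
    # the cube differential flips a single 0-resolution to a 1-resolution
    for c, r in resolution.items():
        if r == 0:
            succ = dict(resolution)
            succ[c] = 1
            yield succ

# toy diagram: one negative external crossing, two internal crossings
crossings = {'a': ('ext', -1), 'b': ('int', +1), 'c': ('int', -1)}

# check over all 8 resolutions: every differential edge preserves
# or increases the external grading
for bits in itertools.product((0, 1), repeat=3):
    res = dict(zip(crossings, bits))
    g = external_grading(res, crossings)
    for succ in differential_edges(res):
        assert external_grading(succ, crossings) >= g
```

This is exactly the reason the argument can restrict attention to the external-grading-$0$ subgroups $\mathcal{C}^-_0$: the differential never lowers the external grading.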
\begin{figure} \centering \begin{tikzpicture}[scale=1] \foreach\mode/\modesym [count=\y from 0] in {over/+,under/-}{ \foreach\time/\timesym [count=\x from 0] in {before/,after/+1}{ \begin{scope}[yshift=-300*\y,xshift=200*\x,yscale=1.5,xscale=2] \node at (-1.5+3*\x,3) {$=$}; \node (\mode\time main) at (-2+4*\x,3) {% \begin{tikzpicture}[scale=0.4] \coordinate (leftleft) at (-1.5,0); \coordinate (left) at (-1,0.5-\x); \coordinate (centerleft) at (-0.5,0.5-\x); \coordinate (center) at (0,0.5-\x); \coordinate (centerright) at (0.5,0.5-\x); \coordinate (right) at (1,0.5-\x); \coordinate (rightright) at (1.5,0); \coordinate (topleft) at (-0.5,1); \coordinate (topright) at (0.5,1); \coordinate (bottomleft) at (-0.5,-1); \coordinate (bottomright) at (0.5,-1); \coordinate (intnw) at (-0.5,0+\x); \coordinate (intne) at (0.5,0+\x); \coordinate (intsw) at (-0.5,-1+\x); \coordinate (intse) at (0.5,-1+\x); \ifnum\x=0 \node at (0,-1) {\tiny 3}; \else \node at (0,1) {\tiny 3}; \fi \ifnum\y=0 \node at (-0.9,0) {\tiny 1}; \node at (0.9,0) {\tiny 2}; \else \node at (-0.9,0) {\tiny 2}; \node at (0.9,0) {\tiny 1}; \fi \ifnum\y=1 \draw[knot] (leftleft) to[out=0,in=180] (left)--(right) to[out=0,in=180] (rightright); \fi \draw[knot] (topright)--(intne) to[out=-90,in=90] (intsw)--(bottomleft); \draw[knot] (topleft)--(intnw) to[out=-90,in=90] (intse)--(bottomright); \ifnum\y=0 \draw[knot] (leftleft) to[out=0,in=180] (left)--(right) to[out=0,in=180] (rightright); \fi \end{tikzpicture} }; \node[above=0pt of \mode\time main] {$L^{i\timesym}_{\modesym}$}; \foreach\i in {0,1}{ \foreach\j in {0,1}{ \pgfmathtruncatemacro\extgr{\i+\j} \ifnum\extgr=0 \def\extcolor{red} \def\extstyle{dashed} \else \ifnum\extgr=1 \def\extcolor{blue} \def\extstyle{solid} \else \def\extcolor{green!50!black} \def\extstyle{densely dotted} \fi \fi \ifnum\y=0 \pgfmathtruncatemacro\ii{\i} \pgfmathtruncatemacro\jj{\j} \else \pgfmathtruncatemacro\ii{\j} \pgfmathtruncatemacro\jj{\i} \fi \foreach\k
in {0,1}{ \node[inner sep=0,outer sep=0,anchor=center] (\mode\time\i\j\k) at ($\i*(1,1)+\j*(0,2)+\k*(-1,3)$) {% \begin{tikzpicture}[scale=0.3] \draw[\extstyle,\extcolor] (-1.5,-1) rectangle (1.5,1); \coordinate (intnw) at (-0.5,0+\x); \coordinate (intne) at (0.5,0+\x); \coordinate (intsw) at (-0.5,-1+\x); \coordinate (intse) at (0.5,-1+\x); \coordinate (intw) at (-0.2,-0.5+\x); \coordinate (inte) at (0.2,-0.5+\x); \coordinate (leftleft) at (-1.5,0); \coordinate (left) at (-1,0.5-\x); \coordinate (centerleft) at (-0.5,0.5-\x); \coordinate (center) at (0,0.5-\x); \coordinate (centerright) at (0.5,0.5-\x); \coordinate (right) at (1,0.5-\x); \coordinate (rightright) at (1.5,0); \coordinate (left0) at (-0.5,1-\x-\y); \coordinate (left1) at (-0.5,-\x+\y); \coordinate (right1) at (0.5,1-\x-\y); \coordinate (right0) at (0.5,-\x+\y); \ifnum\ii=0 \draw[resol] (leftleft) to[out=0,in=180] (left) ..controls(centerleft).. (left0); \draw[resol] (center) ..controls(centerleft).. (left1); \else \draw[resol] (leftleft) to[out=0,in=180] (left) ..controls(centerleft).. (left1); \draw[resol] (center) ..controls(centerleft).. (left0); \fi \ifnum\jj=0 \draw[resol] (rightright) to[out=180,in=0] (right) ..controls(centerright).. (right0); \draw[resol] (center) ..controls(centerright).. (right1); \else \draw[resol] (rightright) to[out=180,in=0] (right) ..controls(centerright).. (right1); \draw[resol] (center) ..controls(centerright)..
(right0); \fi \ifnum\k=0 \draw[resol] (intnw) to[out=-90,in=-90,looseness=1.3] (intne); \draw[resol] (intsw) to[out=90,in=90,looseness=1.3] (intse); \else \draw[resol] (intnw) to[out=-90,in=90] (intw) to[out=-90,in=90] (intsw); \draw[resol] (intne) to[out=-90,in=90] (inte) to[out=-90,in=90] (intse); \fi \end{tikzpicture} }; \begin{scope}[zlevel=foreground] \node[above =0pt of \mode\time\i\j\k,inner sep=1pt,outer sep=1pt,fill=white] {\tiny \i\j\k}; \end{scope} }}} \draw[khdiff] (\mode\time000)--(\mode\time001) node[pos=0.5,linelabel] {\tiny $s$}; \draw[khdiff] (\mode\time000)--(\mode\time010) node[pos=0.3,linelabel] {\tiny $s$}; \draw[khdiff] (\mode\time000)--(\mode\time100) node[pos=0.5,linelabel] {\tiny $s$}; \draw[khdiff] (\mode\time001)--(\mode\time011) node[pos=0.3,linelabel] {\tiny $s$}; \draw[khdiff] (\mode\time001)--(\mode\time101) node[pos=0.3,linelabel] {\tiny $s$}; \draw[khdiff] (\mode\time010)--(\mode\time011) node[pos=0.3,linelabel] {\tiny $-s$}; \draw[khdiff] (\mode\time010)--(\mode\time110) node[pos=0.3,linelabel] {\tiny $s$}; \draw[khdiff] (\mode\time100)--(\mode\time101) node[pos=0.7,linelabel] {\tiny $-s$}; \draw[khdiff] (\mode\time100)--(\mode\time110) node[pos=0.6,linelabel] {\tiny $-s$}; \draw[khdiff] (\mode\time011)--(\mode\time111) node[pos=0.5,linelabel] {\tiny $s$}; \draw[khdiff] (\mode\time101)--(\mode\time111) node[pos=0.7,linelabel] {\tiny $-s$}; \draw[khdiff] (\mode\time110)--(\mode\time111) node[pos=0.5,linelabel] {\tiny $s$}; \end{scope} } \draw[preservemap] (\mode before001) to[out=30,in=150] node[pos=0.45,linelabel] {\tiny $\Id$} (\mode after001); \draw[preservemap] (\mode before010)--(\mode after100) node[pos=0.4,linelabel] {\tiny $-dssb$}; \draw[dropmap] (\mode before010)--(\mode after001) node[pos=0.5,linelabel] {\tiny $-ds$}; \draw[preservemap] (\mode before010)--(\mode after010) node[pos=0.5,linelabel] {\tiny $-ds$}; \draw[preservemap] (\mode before100)--(\mode after010) node[pos=0.6,linelabel] {\tiny $\Id$}; \draw[preservemap] 
(\mode before100)--(\mode after100) node[pos=0.5,linelabel] {\tiny $sb$}; \draw[dropmap] (\mode before110)--(\mode after011) node[pos=0.7,linelabel] {\tiny $\Id$}; \draw[preservemap] (\mode before011)--(\mode after011) node[pos=0.5,linelabel] {\tiny $\Id$}; \draw[preservemap] (\mode before101)--(\mode after101) node[pos=0.4,linelabel] {\tiny $\Id$}; \draw[preservemap] (\mode before111)--(\mode after111) node[pos=0.5,linelabel] {\tiny $\Id$}; } \end{tikzpicture} \caption{The Reidemeister III move during the proof of invariance under the sweep-around move.}\label{fig:sweeparound-RIII} \end{figure} The Reidemeister III move requires more care. The proof is illustrated in Figure~\ref{fig:sweeparound-RIII}. Assume $L^{i+1}_\pm$ is obtained from $L^i_\pm$ by moving the horizontal strand past an internal crossing, as shown in the figure. The other Reidemeister III move is obtained by mirroring all the link diagrams, and the proof in that case follows formally from the following proof by reversing all arrows. The 3-dimensional cubes of resolution for the four link diagrams $L^*_\pm$, $*\in\{i,i+1\}$, are shown. The two external crossings are numbered 1 and 2---left to right for $L^*_+$ and right to left for $L^*_-$---and the internal crossing is numbered 3. The eight vertices in each cube of resolutions decompose according to the local external grading, which is the sum of the first two coordinates of the vertices (up to a shift); this is shown by boxing them with a \textcolor{red}{dashed}, \textcolor{blue}{solid}, or \textcolor{green!50!black}{dotted} line. The differentials are shown in light gray. The Reidemeister maps go from the cube of resolution of $L^i_\pm$ to the cube of resolution of $L^{i+1}_\pm$. The maps either preserve or decrease the external grading. We are only interested in the maps which preserve the external grading, so we have drawn them in black, and the other maps in light gray. 
The maps (and the differentials) are decorated with the cobordisms that induce them, with $s$, $b$, $d$ being shorthand for saddle, birth, and death, respectively. The top row (corresponding to $L^*_+$) is essentially a copy of Bar-Natan's picture~\cite[Figure 9]{Bar-kh-tangle-cob}; we have merely rotated Bar-Natan's tangle so that his vertical over-strand has become our horizontal moving strand, and we have reordered the crossings as well, so our signs differ from Bar-Natan's. For example, the surface highlighted in Bar-Natan's picture corresponds to our map from the $010$ vertex of $L^i_+$ to the $100$ vertex of $L^{i+1}_-$; it is decorated $-dssb$, so it is the negative of a death, followed by two saddles (which are easy to figure out from the diagrams), followed by a birth. The bottom row (corresponding to $L^*_-$) is also obtained from Bar-Natan's picture~\cite[Figure 9]{Bar-kh-tangle-cob}; this time we have rotated Bar-Natan's tangle so that his northwest-to-southeast under-strand has become our horizontal moving strand, and once again, we have reordered the crossings. The diagram thus obtained is not quite the bottom row of our figure: it does not have the map from the $100$ vertex of $L^i_-$ to the $100$ vertex of $L^{i+1}_-$, nor the map from the $101$ vertex of $L^i_-$ to the $101$ vertex of $L^{i+1}_-$, but instead has maps from the $001$ vertex of $L^i_-$ to the $100$ vertex of $L^{i+1}_-$ and from the $101$ vertex of $L^i_-$ to the $110$ vertex of $L^{i+1}_-$. The latter maps increase the external gradings, so we modify the chain map by a null-homotopy $\partial f+f\partial$ to get to our diagram, where $f$ is the map from the $101$ vertex of $L^i_-$ to the $100$ vertex of $L^{i+1}_-$ corresponding to a birth. 
The natural isomorphism $\mathcal{C}^-_0(L^*_+)\stackrel{\cong}{\longrightarrow}\mathcal{C}^-_0(L^*_-)$ sends the \textcolor{green!50!black}{dotted} vertices to the \textcolor{red}{dashed} vertices and vice-versa, and sends the \textcolor{blue}{solid} vertices to the corresponding \textcolor{blue}{solid} vertices. These isomorphisms commute with the Reidemeister maps that preserve external gradings (which are the black arrows in the figure). This gives invariance under the sweep-around move. Finally, the proof that the map is invariant under diffeomorphisms follows from another argument of Morrison-Walker-Wedrich~\cite[Section 4.2]{MWW-kh-blobs}; we refer the reader there, though a key point in the proof is quoted as Lemma~\ref{lem:MWW}, below. \end{proof} \begin{remark}\label{rem:Klein-TQFT} The Khovanov Frobenius algebra---the $U=0$ specialization of the Lee and Bar-Natan algebras---corresponds to a (1+1)-dimensional TQFT. It is natural to ask if this TQFT extends to non-orientable cobordisms. TQFTs for oriented 1-manifolds but allowing non-orientable cobordisms are called \emph{Klein TQFTs}~\cite{AN-top-Klein-tqft} or \emph{unoriented TQFTs}~\cite{TT-top-un-tqft}. Unoriented (1+1)-dimensional TQFTs correspond to Frobenius algebras $V$ with extra structure: an element $\theta\in V$ corresponding to a M\"obius band and an involution $\phi$ corresponding to the mapping cylinder of an orientation-reversing involution of $S^1$, satisfying the conditions that $\phi(m(\theta,v))=m(\theta,v)$ for all $v\in V$ and $\bigl(m\circ (\phi\otimes\Id)\circ \Delta\bigr)(1)=m(\theta,\theta)$~\cite[Proposition 2.9]{TT-top-un-tqft}. For the Khovanov TQFT, the fact that $\phi$ respects the unit and counit implies that $\phi=\Id$, so the second identity implies that $m(\theta,\theta)=2X$, which is impossible (cf.~\cite[Section 4.2]{TT-top-un-tqft}).
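To spell out the impossibility: in the Khovanov algebra $V=\mathbb{Z}[X]/(X^2)$ we have $\Delta(1)=1\otimes X+X\otimes 1$, so with $\phi=\Id$,
\[
\bigl(m\circ (\phi\otimes\Id)\circ \Delta\bigr)(1)=m(1\otimes X+X\otimes 1)=2X,
\]
while writing $\theta=a\cdot 1+b\cdot X$ gives $m(\theta,\theta)=a^2\cdot 1+2ab\cdot X$ (since $X^2=0$); the identity would therefore force $a^2=0$ and $2ab=2$, which has no solution over $\mathbb{Z}$.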
It is possible to extend $V$ to a projective unoriented TQFT, by defining $\theta=0$, $\phi(1)=1$, and $\phi(X)=-X$; the map $\phi$ only respects the counit up to sign. Imitating part of this argument, we can see that it is impossible to remedy the sign ambiguity for non-orientable surfaces (without equipping the surfaces with some extra data). There is a movie which starts with a 0-crossing unknot, performs a Reidemeister I move on half of it, introducing one crossing, then performs a Reidemeister I move on the other half, eliminating the crossing. The induced map $V\to V$ is either $(1\mapsto 1,\ X\mapsto -X)$ or $(1\mapsto -1,\ X\mapsto X)$. One can compute this directly, but it is also forced by the fact that the invariant of a once-punctured Klein bottle is zero (see the proof of Corollary~\ref{cor:star-0}): the map associated to a once-punctured Klein bottle factors as a birth, then a split, then applying the map just described to one of the two circles, and then a merge. (This is an embedded version of the proof of the relation $\bigl(m\circ (\phi\otimes\Id)\circ \Delta\bigr)(1)=m(\theta,\theta)$.) However, following the first option by a death (counit) gives a cobordism isotopic to a death, but sending $X\mapsto -1$ instead of $X\mapsto 1$; and preceding the second option by a birth gives a cobordism isotopic to a birth, but sending $1\mapsto -1$. (Note that, in the construction of the Khovanov cube, all the surfaces that arise are orientable; in fact, a checkerboard coloring of the knot projection induces an orientation of them. So, to construct the Khovanov cube one does not need the extension of the TQFT to non-orientable surfaces. See also~\cite{TT-top-un-tqft} for further discussion.) Mikhail Khovanov informs us that Greg Kuperberg mentioned to him around 2003 that Khovanov homology is functorial with respect to nonorientable cobordisms in $[0,1]\times\R^3$, up to a sign, which is part of Proposition~\ref{prop:BNh-functorial}, above.
\end{remark} \section{Admissible cuts}\label{sec:cuts} Any compact, connected, nonorientable surface $F$ is diffeomorphic to $(\#^g\mathbb{R}\mathrm{P}^2)\#(\#^k \mathbb{D}^2)$, where $k=|\pi_0(\partial F)|$ is the number of boundary components. The number $g=2-\chi(F)-k$ is called the \emph{crosscap number} of $F$. For any surface $F$ (not necessarily connected), define its crosscap number to be the sum of the crosscap numbers of its nonorientable components. \begin{definition}\label{def:admissible-cut} Fix a small $\epsilon>0$. Let $F\subset [0,1]\times S^3$ be a nonorientable cobordism from $L_0$ to $L_1$, which is a product near the boundary. An \emph{admissible cut} for $F$ consists of the data $(S,V,\phi)$, where: \begin{itemize} \item $S\subset (0,1)\times S^3$ is a smoothly embedded 3-manifold; \item $V\subset (0,1)\times S^3$ is a tubular neighborhood of $S$; and \item $\phi\colon V\to (\tfrac{1}{2}-\epsilon,\tfrac{1}{2}+\epsilon)\times S^3$ is a diffeomorphism, \end{itemize} satisfying: \begin{enumerate}[label=(AC-\arabic*)] \item $\phi$ takes $S$ to $\{\tfrac{1}{2}\}\times S^3$ and $F\cap V$ to a product cobordism; \item the intersection of $F$ with each of the 2 components of $([0,1]\times S^3)\setminus S$ is nonorientable; and \item\label{item:AC-diffeo} there exists a diffeomorphism $\Phi\colon ([0,1]\times S^3,V)\stackrel{\cong}{\longrightarrow} ([0,1]\times S^3, (\tfrac{1}{2}-\epsilon,\tfrac{1}{2}+\epsilon)\times S^3)$, which is the identity near the boundary and agrees with $\phi$ on $V$. 
\end{enumerate} Call a pair of admissible cuts $(S,V,\phi)$ and $(S',V',\phi')$ for $F$ \emph{elementary equivalent} if $V\cap V'=\emptyset$ and there is a diffeomorphism \begin{equation}\label{eq:admis-cut-equiv} \begin{split} \bigl([0,1]\times S^3,V,V'\bigr)&\cong \bigl([0,1]\times S^3, (\tfrac{1}{3}-\epsilon,\tfrac{1}{3}+\epsilon)\times S^3,(\tfrac{2}{3}-\epsilon,\tfrac{2}{3}+\epsilon)\times S^3\bigr) \text{\qquad or }\\ \bigl([0,1]\times S^3,V',V\bigr)&\cong \bigl([0,1]\times S^3, (\tfrac{1}{3}-\epsilon,\tfrac{1}{3}+\epsilon)\times S^3,(\tfrac{2}{3}-\epsilon,\tfrac{2}{3}+\epsilon)\times S^3\bigr), \end{split} \end{equation} which is the identity near the boundary and which agrees with $\phi$ and $\phi'$ on $V$ and $V'$, respectively, after post-composition by a translation in the first factor. Call admissible cuts $(S,V,\phi)$ and $(S',V',\phi')$ for a pair of surfaces $F$ and $F'$ \emph{diffeomorphic} if there is a diffeomorphism $\Psi\colon ([0,1]\times S^3,F,V)\stackrel{\cong}{\longrightarrow} ([0,1]\times S^3,F',V')$ which is the identity near the boundary and satisfies $\phi'\circ\Psi=\phi$ on $V$. Call admissible cuts $(S,V,\phi)$ and $(S',V',\phi')$ for a pair of surfaces $F$ and $F'$ \emph{equivalent} if they differ by a sequence of elementary equivalences and diffeomorphisms. \end{definition} \begin{proposition}\label{prop:admis-cut} Suppose $F$ is a cobordism (not necessarily connected) with crosscap number $\geq 2$. Then, $F$ has an admissible cut. Further, if $F$ has crosscap number $\geq 3$, and if $F'$ is obtained from $F$ by a self-diffeomorphism of $[0,1]\times S^3$ which is the identity near the boundary, then any admissible cut for $F$ is equivalent to any admissible cut for $F'$. \end{proposition} We recall some results about the curve complex of nonorientable surfaces before proving Proposition~\ref{prop:admis-cut}. Let $F$ be a compact, nonorientable surface. 
Consider the long exact sequence for the pair $(F,\partial F)$, \[ \widetilde{H}^0(\partial F;\mathbb{F}_2)\to H^1(F,\partial F;\mathbb{F}_2)\to H^1(F;\mathbb{F}_2)\to H^1(\partial F;\mathbb{F}_2). \] The first Stiefel-Whitney class $w_1(TF)\in H^1(F;\mathbb{F}_2)$ maps to zero in $H^1(\partial F;\mathbb{F}_2)$ (since $TF|_{\partial F}$ is orientable), hence is in the image of $H^1(F,\partial F;\mathbb{F}_2)$. Call a closed curve $\alpha$ in the interior of $F$ \emph{complement-orientable} if $\mathrm{PD}([\alpha])\in H^1(F,\partial F;\mathbb{F}_2)$ maps to $w_1(TF)$; assuming $\alpha$ is embedded, this is equivalent to the condition that $F\setminus\alpha$ is orientable, since for any other curve $\beta$, $\langle w_1(TF),\beta\rangle=\alpha\cdot\beta\pmod{2}$. Call the other curves \emph{complement-nonorientable}. Call a closed curve $\alpha\subset F$ \emph{one-sided} if $\langle w_1(TF),[\alpha]\rangle=1$; assuming $\alpha$ is embedded, this is equivalent to the condition that a tubular neighborhood of $\alpha$ in $F$ is a M\"obius band. \begin{lemma}\label{lem:comp-or-one-side} Let $\alpha\subset F$ be a complement-orientable, embedded circle. Then, $F$ has a single nonorientable component $F_0$. Moreover, $\alpha$ is one-sided if and only if the crosscap number of $F_0$ (equivalently, $F$) is odd. \end{lemma} \begin{proof} By hypothesis, $[\alpha]=\mathrm{PD}(w_1(TF))$. If $F$ has multiple nonorientable components, then $[\alpha]$ cannot be represented by a single curve. For the second part, we have to calculate $\langle w_1(TF),[\alpha]\rangle=\langle w_1(TF_0)\cup w_1(TF_0),[F_0]\rangle=\alpha\cdot\alpha\pmod{2}$. By the classification of surfaces, $[\alpha]$ is represented by the sum $c_1+\dots+c_g$ of the crosscap cores of $F_0$, which satisfy $c_i\cdot c_j=\delta_{ij}\pmod{2}$; so this number equals $g\pmod{2}$, the parity of the crosscap number of $F_0$. \end{proof} The \emph{one-sided curve complex} of $F$ is the graph with vertices isotopy classes of embedded, one-sided curves $\alpha$ in the interior of $F$ and an edge from $\alpha$ to $\beta$ if and only if there are disjoint representatives of $\alpha$ and $\beta$.
The \emph{restricted one-sided curve complex} is the full sub-graph spanned by the complement-nonorientable one-sided curves $\alpha$. (By Lemma~\ref{lem:comp-or-one-side}, if $F$ has multiple nonorientable components or if the crosscap number of $F$ is even, then the restricted one-sided curve complex is the same as the one-sided curve complex.) \begin{proposition}\label{prop:cc-connect} Let $F$ be a compact, nonorientable surface (with boundary) of crosscap number $\geq 2$. Then, the restricted one-sided curve complex of $F$ is connected. \end{proposition} \begin{proof} This is essentially due to Pieloch~\cite[Proposition 2.7]{Pie-top-nonor}, and we follow his argument. If $F$ has more than one nonorientable connected component then the statement is obvious. So, it suffices to prove the result when $F$ is a connected surface with crosscap number $\geq 2$. \begin{figure} \centering \includegraphics{mcgGens} \caption{\textbf{Generators for the mapping class group of a punctured surface.} The left column is the crosscap number $2$ case, center is the crosscap number $2k+1$ ($k\geq 1$) case, and right is the crosscap number $2k$ ($k>1$) case. Crosscaps are shaded, and hidden lines are gray. Top: the mapping class group is generated by elementary braids in the $z_i$, Dehn twists around the thin curves and the dashed curve, boundary slides (push maps) along the dotted curves, and a crosscap slide. In the left picture, the crosscap slide pulls one crosscap along the dashed curve. In the other pictures, the crosscap slide occurs in the shaded region (a punctured Klein bottle). Second row: the image of $\theta$ under the boundary slide that moves it. Third row: the images of $\theta$ under the Dehn twist(s) that move it. Bottom row: the image of $\theta$ under the crosscap slide. 
When $f(\theta)$ is neither disjoint from nor equal to $\theta$, a third curve $\eta$ disjoint from both is shown (dash-dotted).} \label{fig:MCG-gens} \end{figure} Let $\theta$ be the one-sided curve shown in Figure~\ref{fig:MCG-gens} and let $\lambda$ be any other complement-nonorientable one-sided curve. From the classification of surfaces, there is a homeomorphism from $F$ to itself sending $\theta$ to $\lambda$. So, it suffices to show that the mapping class group of $F$ takes the path component of $\theta$ in the restricted one-sided curve complex to itself. For that, it suffices to show that a set of generators for the mapping class group take this path component to itself, i.e., take $\theta$ to curves which can be connected to $\theta$ in the restricted one-sided curve complex. Here, we will not require homeomorphisms to be the identity on $\partial F$ or to take boundary components to themselves. In fact, since deleting $\partial F$ has no effect on the restricted one-sided curve complex, we can view $F$ as a punctured surface and the homeomorphism as an element of the mapping class group of the punctured surface $F$. This mapping class group was studied by Korkmaz~\cite{Kork-top-mcg}, who denoted it $\mathcal{M}_{g,n}$, where $g$ is the crosscap number and $n$ is the number of punctures $z_1,\dots,z_n$. In particular, Korkmaz gave a set of generators for this mapping class group~\cite[Section 4]{Kork-top-mcg}. There are three cases, depending on the crosscap number: crosscap number $2$, $2k+1$ for $k\geq 1$, or $2k$ for $k>1$. In each case, the mapping class group is generated by a finite set of Dehn twists, braid generators in the $z_i$, one crosscap slide (see~\cite[Section 2]{Kork-top-mcg} and the references he gives for the definition), and one or two boundary slides (again, see~\cite[Section 2]{Kork-top-mcg}); the generators are shown in Figure~\ref{fig:MCG-gens}. In each case, most of the generators fix $\theta$. 
The remaining ones either take $\theta$ to a curve disjoint from $\theta$ (up to isotopy) or to a curve $f(\theta)$ so that there is a third curve $\eta$ disjoint from both $\theta$ and $f(\theta)$. See Figure~\ref{fig:MCG-gens}. So, in all cases, $f(\theta)$ lies in the same path component as $\theta$, as desired. \end{proof} We note an easier lemma: \begin{lemma}\label{lem:easy-surfaces} If $F$ is a nonorientable surface of crosscap number $>1$ (i.e., is not a punctured $\mathbb{R}\mathrm{P}^2$ union an orientable surface) then the restricted curve complex has at least two points. If $F$ has crosscap number $>2$ (i.e., is also neither a punctured Klein bottle nor a punctured $\mathbb{R}\mathrm{P}^2\amalg\mathbb{R}\mathrm{P}^2$, union an orientable surface) then the restricted curve complex contains a $3$-cycle. \end{lemma} \begin{proof} This is straightforward from the classification of surfaces, and is left to the reader. \end{proof} Proposition~\ref{prop:cc-connect} and Lemma~\ref{lem:easy-surfaces} together imply the following. \begin{lemma}\label{lem:even-length-walk} Let $F$ be a compact, nonorientable surface with crosscap number $>2$. For any complement-nonorientable one-sided embedded curves $\alpha,\beta$ in $F$, there exists an even-length sequence $\alpha=\gamma_0,\gamma_1,\dots,\gamma_{2n}=\beta$ of complement-nonorientable one-sided embedded curves connecting $\alpha$ to $\beta$ so that every pair of consecutive curves $\gamma_i,\gamma_{i+1}$ are disjoint. \end{lemma} \begin{proof} Using Proposition~\ref{prop:cc-connect}, we may choose a walk in the restricted curve complex connecting $\alpha$ to $\beta$. Using the $3$-cycle from the second part of Lemma~\ref{lem:easy-surfaces} if needed, we may ensure that the walk has even length. Choose embedded curves representing the vertices of the walk to get a sequence $\gamma_0,\gamma_1,\dots,\gamma_{2m}$. 
We may choose $\gamma_0=\alpha$ and we may choose $\gamma_i$ inductively to ensure that it is disjoint from $\gamma_{i-1}$. The final curve $\gamma_{2m}$ will be isotopic to $\beta$, but need not equal $\beta$. Let $\phi_t$ for $t\in[0,1]$ be an ambient isotopy taking $\gamma_{2m}$ to $\beta$. Using the first part of Lemma~\ref{lem:easy-surfaces}, choose a complement-nonorientable one-sided embedded curve $\delta$ which is disjoint from $\gamma_{2m}$. To finish the proof, we will construct a sequence $\gamma_{2m},\gamma_{2m+1},\dots,\allowbreak\gamma_{2n}=\beta$ of complement-nonorientable one-sided embedded curves with every consecutive pair disjoint, as follows: \[ \gamma_{2i}=\phi_{\frac{i-m}{n-m}}(\gamma_{2m}),\ m\leq i\leq n\qquad\qquad \gamma_{2i+1}=\phi_{\frac{i-m}{n-m}}(\delta),\ m\leq i <n. \] Clearly, $\gamma_{2i+1}$ is disjoint from $\gamma_{2i}$, and by compactness, for $n$ large enough, it will be disjoint from $\gamma_{2i+2}$ as well. For instance, fix a metric on $F$ so that the length of $\frac{\partial}{\partial t}\phi_t$ is bounded above by $1$; let $D=\min_{t\in[0,1]}\mathrm{dist}(\phi_t(\gamma_{2m}),\phi_t(\delta))$; then $n>m+(1/D)$ suffices, since then, by the triangle inequality, $\mathrm{dist}(\gamma_{2i+1},\gamma_{2i+2})\geq D-\frac{1}{n-m}>0$. \end{proof} Finally, we need the following analogue of a result of Morrison-Walker-Wedrich~\cite[Lemma~4.7]{MWW-kh-blobs}: \begin{lemma}\label{lem:MWW} Let $\Sigma\subset [0,1]\times S^3$ be a link cobordism and $f\colon [0,1]\times S^3\to[0,1]\times S^3$ a diffeomorphism which is the identity near the boundary. Then, $\Sigma$ is isotopic to $f(\Sigma)$, and the isotopy may also be assumed to be the identity near the boundary. \end{lemma} \begin{proof} This is proved by replacing $\R^3$ by $S^3$ throughout Morrison-Walker-Wedrich's proof~\cite[Lemma~4.7]{MWW-kh-blobs}, but for completeness, we sketch the proof below. Let $U$ be a collar neighborhood of $\{0,1\}\times S^3$ on which $f$ is the identity. Fix a point $p\in S^3$ so that $[0,1]\times\{p\}$ is disjoint from $\Sigma$.
By postcomposing $f$ by an isotopy that takes $f([0,1]\times\{p\})$ to $[0,1]\times\{p\}$ (and is the identity on $U$), we may assume $f$ is the identity on $[0,1]\times\{p\}$. Let $B\subset S^3$ be a ball around $p$ with $[0,1]\times B$ disjoint from $\Sigma$. Let $\phi$ be a diffeomorphism of $[0,1]\times S^3$ which is the identity on $U\cup\big([0,1]\times\big(\{p\}\cup(S^3\setminus B)\big)\big)$, so that on the normal bundle of $[0,1]\times\{p\}$ inside $[0,1]\times S^3$ (which is a trivial $\R^3$ bundle in an obvious way) $d\phi$ induces the non-trivial element of $\pi_1(\mathrm{SO}(3))$. By post-composing $f$ by $\phi$ if necessary, and a further small isotopy, we may assume $f$ is the identity on $[0,1]\times B$. Now let $g_t$ be an isotopy of $[0,1]\times S^3$ which is the identity near the boundary, so that $g_0=\Id$ and $g_1(\Sigma)\subset U\cup([0,1]\times B)$. Then, the isotopy $g_t$ takes $\Sigma$ to $g_1(\Sigma)$ and the isotopy $f\circ g_{1-t}$ takes $g_1(\Sigma)=f(g_1(\Sigma))$ to $f(\Sigma)$. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:admis-cut}] Let $\pi\colon [0,1]\times S^3\to S^3$ be projection. Given a curve $\gamma\subset F$, let \begin{align*} C_{\leq \gamma}&=\{(t,p)\in [0,1]\times S^3\mid \exists t'\in[0,1]\text{ so that } (t',p)\in \gamma\text{ and }t\leq t'\}\\ C_{\geq \gamma}&=\{(t,p)\in [0,1]\times S^3\mid \exists t'\in[0,1]\text{ so that } (t',p)\in \gamma\text{ and }t\geq t'\}, \end{align*} so $\pi^{-1}(\pi(\gamma))=C_{\leq \gamma}\cup C_{\geq \gamma}$ and if $\pi|_{\gamma}$ is injective then $C_{\leq \gamma}\cap C_{\geq \gamma}=\gamma$. For the first statement, that $F$ has an admissible cut, choose disjoint one-sided embedded curves $\gamma,\eta\subset F$; this is possible by Lemma~\ref{lem:easy-surfaces}. 
Perturbing $F$ slightly, we may assume: \begin{enumerate}[label=(G-\arabic*),ref=(G-\arabic*)] \item\label{item:generic-gamma-first} $\pi|_{\gamma}$ is injective, \item\label{item:generic-gamma-second} $d\pi$ restricted to $TF|_\gamma$ has rank 2 everywhere, \item\label{item:generic-gamma-last} $F$ intersects $C_{\leq\gamma}$ transversely away from $\gamma$, \item\label{item:generic-eta-first} $\pi|_{\eta}$ is injective, \item $d\pi$ restricted to $TF|_\eta$ has rank 2 everywhere, \item\label{item:generic-eta-last} $F$ intersects $C_{\geq\eta}$ transversely away from $\eta$, and \item\label{item:generic-last} $\pi(\gamma)\cap\pi(\eta)=\emptyset$. \end{enumerate} Let $U_{\leq\gamma},U_{\geq\eta}$ be tubular neighborhoods of $C_{\leq \gamma}$ and $C_{\geq \eta}$ with disjoint closures. Choose $U_{\leq\gamma}$ small enough that $U_{\leq\gamma}\cap F$ consists of a M\"obius band around $\gamma$ and disks around the other points in $C_{\leq\gamma}\cap F$. Choose $U_{\geq\eta}$ similarly. Choose $\epsilon>0$ small enough that \[ \bigl(([0,\epsilon]\times S^3)\cup \overline{U}_{\leq\gamma}\bigr)\cap\bigl(([1-\epsilon,1]\times S^3)\cup \overline{U}_{\geq\eta}\bigr)=\emptyset. \] Let \[ S_{\leq\gamma}=\partial \bigl(([0,\epsilon]\times S^3)\cup \overline{U}_{\leq\gamma}\bigr)\setminus (\{0\}\times S^3),\qquad S_{\geq\eta}=\partial \bigl(([1-\epsilon,1]\times S^3)\cup \overline{U}_{\geq\eta}\bigr)\setminus (\{1\}\times S^3). \] Then, after smoothing corners, both $S_{\leq\gamma}$ and $S_{\geq\eta}$ are admissible cuts for $F$, since either side of either cut is nonorientable (containing one of the one-sided curves $\gamma$ or $\eta$). The diffeomorphisms $\phi$ from Definition~\ref{def:admissible-cut} for $S_{\leq\gamma}$ and $S_{\geq\eta}$ are induced from the natural isotopies that take $\gamma$ to $\{\epsilon\}\times \pi(\gamma)$ along $C_{\leq\gamma}$ and $\eta$ to $\{1-\epsilon\}\times \pi(\eta)$ along $C_{\geq\eta}$. 
For the second statement, fix admissible cuts $S$ and $S'$ for $F$ and $F'$ and choose diffeomorphisms $\Phi\colon ([0,1]\times S^3,S)\stackrel{\cong}{\longrightarrow} ([0,1]\times S^3, \{\tfrac{1}{2}\}\times S^3)$ and $\Phi'\colon ([0,1]\times S^3,S')\stackrel{\cong}{\longrightarrow} ([0,1]\times S^3, \{\tfrac{1}{2}\}\times S^3)$ as in Definition~\ref{def:admissible-cut}. It is enough to show that the admissible cuts $\{\tfrac{1}{2}\}\times S^3$ for $\Phi(F)$ and $\Phi'(F')$ are equivalent. Choose one-sided curves $\gamma,\eta\subset \Phi(F)$ on the left and right side of $\{\tfrac{1}{2}\}\times S^3$ and choose a one-sided curve $\alpha\subset \Phi'(F')$ on the left side of $\{\tfrac{1}{2}\}\times S^3$. Consider the admissible cuts $S_{\leq\gamma}$ and $S_{\geq\eta}$ for $\Phi(F)$ as defined above. They are both elementary equivalent to the cut $\{\tfrac{1}{2}\}\times S^3$ (as well as to each other). Similarly the admissible cut $S_{\leq\alpha}$ for $\Phi'(F')$ is elementary equivalent to the cut $\{\tfrac{1}{2}\}\times S^3$. In particular, it is enough to show that the cut $S_{\leq\gamma}$ for $\Phi(F)$ is equivalent to the cut $S_{\leq\alpha}$ for $\Phi'(F')$. Using Lemma~\ref{lem:MWW}, choose an ambient isotopy $\psi_t\colon[0,1]\times S^3\to[0,1]\times S^3$ which is the identity near the boundary, with $\psi_0=\Id$, and $\psi_1(\Phi(F))=\Phi'(F')$. We may perturb the isotopy to ensure that the genericity assumptions~\ref{item:generic-gamma-first}--\ref{item:generic-last} hold for the curves $\psi_t(\gamma),\psi_t(\eta)\subset \psi_t(\Phi(F))$, except for finitely many $t\in(0,1)$ when exactly one of them fails. 
Choose non-exceptional points $0=t_0<t_1<\dots<t_{m-1}<t_m=1$ such that each interval $[t_i,t_{i+1}]$ contains at most one such exceptional point; call such an interval \emph{$\gamma$-good} (respectively, \emph{$\eta$-good}) if the genericity conditions~\ref{item:generic-gamma-first}--\ref{item:generic-gamma-last} (respectively,~\ref{item:generic-eta-first}--\ref{item:generic-eta-last}) hold on the interval. Since at most one genericity condition fails at each exceptional point, each of these intervals $[t_i,t_{i+1}]$ is either $\gamma$-good or $\eta$-good (or both, if~\ref{item:generic-last} fails). At non-exceptional $t$ (such as $t_0,\dots,t_m$), both the cuts $S_{\leq\psi_t(\gamma)}$ and $S_{\geq\psi_t(\eta)}$ for $\psi_t(\Phi(F))$ are admissible, and they are elementary equivalent to each other. We need the following lemma. \begin{lemma}\label{lem:inside-prop} For any $\gamma$-good (respectively, $\eta$-good) interval $[t_i,t_{i+1}]$, the cuts $S_{\leq\psi_{t_i}(\gamma)}$ (respectively, $S_{\geq\psi_{t_i}(\eta)}$) for $\psi_{t_i}(\Phi(F))$ and $S_{\leq\psi_{t_{i+1}}(\gamma)}$ (respectively, $S_{\geq\psi_{t_{i+1}}(\eta)}$) for $\psi_{t_{i+1}}(\Phi(F))$ are diffeomorphic. \end{lemma} \begin{proof} We will explain the $\gamma$-good case; the $\eta$-good case is similar. For notational convenience, let $a=t_i$, $b=t_{i+1}$, $F_a=\psi_{a}(\Phi(F))$, $\gamma_a=\psi_a(\gamma)$, $F_b=\psi_b(\Phi(F))$, and $\gamma_b=\psi_b(\gamma)$. Note also that, in the definition of $S_{\leq \gamma_a}$ and $S_{\leq \gamma_b}$, up to diffeomorphism, we can choose the collar neighborhoods of the boundary and the tubular neighborhoods of $C_{\leq \gamma_a}$ and $C_{\leq \gamma_b}$ to be as small as we like. 
To construct the diffeomorphism \[ ([0,1]\times S^3,F_a,S_{\leq\gamma_a})\stackrel{\cong}{\longrightarrow}([0,1]\times S^3,F_b,S_{\leq\gamma_b}) \] we first construct an ambient isotopy $\theta_t$, $t\in[a,b]$, of $[0,1]\times S^3$ which carries $F_a\cup C_{\leq\gamma_a}$ to $F_b\cup C_{\leq\gamma_b}$. The map $\psi_t\circ \psi_a^{-1}$ restricts to an isotopy $\theta^F_t$ from $F_a$ to $F_b$, and the isotopy $(\psi_t\circ\psi_a^{-1})|_{\gamma_a}$ induces an isotopy $\theta^\gamma_t$ from $C_{\leq \gamma_a}$ to $C_{\leq \gamma_b}$. (This uses conditions~\ref{item:generic-gamma-first} and~\ref{item:generic-gamma-second}.) The isotopy $\theta^\gamma_t$ will not be the identity on the boundary, but we can choose it so that, for all small $s$, it sends $C_{\leq \gamma_a}\cap (\{s\}\times S^3)$ to $C_{\leq \gamma_t}\cap (\{s\}\times S^3)$ for each $t$. The maps $\theta^F_t$ and $\theta^\gamma_t$ typically will not agree on $F_a\cap C_{\leq \gamma_a}$, but by Condition~\ref{item:generic-gamma-last}, for any $t\in[a,b]$, $\theta^F_t(F_a)$ and the interior of $\theta^\gamma_t(C_{\leq\gamma_a})$ intersect transversally in finitely many points, say $P_t$. So, we get one-parameter families of points $(\theta^F_t)^{-1}(P_t)$ on $F_a$ and $(\theta^\gamma_t)^{-1}(P_t)$ on $C_{\leq\gamma_a}$. Let $\tilde{\theta}^F_t$ be an isotopy of $F_a$ which preserves it setwise, is the identity near the boundary and near $\gamma_a$, and satisfies $\tilde{\theta}^F_t(P_a)=(\theta^F_t)^{-1}(P_t)$. Similarly, let $\tilde{\theta}^\gamma_t$ be an isotopy of $C_{\leq\gamma_a}$ which preserves it setwise, is the identity near the boundary, and satisfies $\tilde{\theta}^\gamma_t(P_a)=(\theta^\gamma_t)^{-1}(P_t)$. On $F_a$, set $\theta_t=\theta^F_t\circ\tilde{\theta}^F_t$, and on $C_{\leq\gamma_a}$, set $\theta_t=\theta^\gamma_t\circ\tilde{\theta}^\gamma_t$. 
These two isotopies do agree on $F_a\cap C_{\leq \gamma_a}$, so let $\theta_t\colon F_a\cup C_{\leq\gamma_a}\to [0,1]\times S^3$, $t\in[a,b]$, be their union. By the isotopy extension lemma, we can extend $\theta_t$ to a smooth ambient isotopy of $[0,1]\times S^3$, still denoted $\theta_t$. (This again uses Conditions~\ref{item:generic-gamma-second} and~\ref{item:generic-gamma-last}.) We can ensure that this extension preserves the slices $\{s\}\times S^3$ for $s\in [0,2\epsilon]\cup [1-2\epsilon,1]$, for some sufficiently small $\epsilon$; shrinking $\epsilon$ if necessary, we may assume that $\theta_t$ is the identity on $F_a\cap ([0,2\epsilon]\cup[1-2\epsilon,1])\times S^3$. The ambient isotopy $\theta_t$ is not the identity on the whole boundary, but $\theta_t|_{\{0,1\}\times S^3}$ is, of course, isotopic to the identity. So, we can modify $\theta_t$ to a new isotopy $\theta'_t$ so that: \begin{itemize} \item $\theta'_t$ is the identity on a small collar neighborhood $([0,\epsilon]\cup[1-\epsilon,1])\times S^3$ of the boundary. \item $\theta'_t$ preserves $\{s\}\times S^3$ for $s\in [0,2\epsilon]\cup[1-2\epsilon,1]$. \item $\theta'_t$ agrees with $\theta_t$ on $[2\epsilon,1-2\epsilon]\times S^3$. \item $\theta'_t$ is the identity on $F_a\cap ([0,2\epsilon]\cup[1-2\epsilon,1])\times S^3$. So, in particular, $\theta'_b(F_a)=F_b$. \end{itemize} Choose the collar neighborhoods of the boundary, in the definition of $S_{\leq \gamma_a}$ and $S_{\leq \gamma_b}$, to be $([0,2\epsilon]\cup[1-2\epsilon,1])\times S^3$. Then, for appropriate choices of tubular neighborhoods of $C_{\leq\gamma_a}$ and $C_{\leq \gamma_b}$, the map $\theta'_b$ sends $F_a$ to $F_b$ and $S_{\leq\gamma_a}$ to $S_{\leq \gamma_b}$, as desired. \end{proof} We can now conclude that for any interval $[t_i,t_{i+1}]$, the cuts $S_{\leq\psi_{t_i}(\gamma)}$ for $\psi_{t_i}(\Phi(F))$ and $S_{\leq\psi_{t_{i+1}}(\gamma)}$ for $\psi_{t_{i+1}}(\Phi(F))$ are equivalent. 
If the interval is $\gamma$-good, then this follows from Lemma~\ref{lem:inside-prop}. On the other hand, if the interval is $\eta$-good, then the cuts $S_{\geq\psi_{t_i}(\eta)}$ for $\psi_{t_i}(\Phi(F))$ and $S_{\geq\psi_{t_{i+1}}(\eta)}$ for $\psi_{t_{i+1}}(\Phi(F))$ are diffeomorphic, again by Lemma~\ref{lem:inside-prop}; however, the former is elementary equivalent to $S_{\leq\psi_{t_i}(\gamma)}$, while the latter is elementary equivalent to $S_{\leq\psi_{t_{i+1}}(\gamma)}$. Therefore, we get that the cut $S_{\leq\psi_0(\gamma)}=S_{\leq\gamma}$ for $\psi_0(\Phi(F))=\Phi(F)$ and the cut $S_{\leq\psi_1(\gamma)}$ for $\psi_1(\Phi(F))=\Phi'(F')$ are equivalent. So all that remains is to show that the two cuts $S_{\leq\psi_1(\gamma)}$ and $S_{\leq\alpha}$ for $\Phi'(F')$ are equivalent. Using Lemma~\ref{lem:even-length-walk}, choose an even-length sequence $\alpha=\delta_0,\delta_1,\dots,\delta_{2n}=\psi_1(\gamma)$ of complement-nonorientable one-sided embedded curves on $\Phi'(F')$ so that every pair of consecutive curves are disjoint. Perturbing slightly, we may assume that the genericity conditions~\ref{item:generic-gamma-first}--\ref{item:generic-gamma-last} hold for each of these curves, and the genericity condition~\ref{item:generic-last} holds for each pair of consecutive curves. Then, \( S_{\leq\delta_0},S_{\geq\delta_1},S_{\leq\delta_2},\dots,S_{\leq\delta_{2n}} \) is a sequence of admissible cuts connecting $S_{\leq\alpha}$ and $S_{\leq\psi_1(\gamma)}$ where every consecutive pair is elementary equivalent. \end{proof} \begin{remark}\label{remark:OSz} The proof that all admissible cuts are equivalent is inspired by the $b_2^+\geq 3$ case of Ozsv\'ath-Szab\'o's argument~\cite[Theorem~8.5]{OSz-hf-4manifolds}. Their argument is terse, so for comparison with the proof of Proposition~\ref{prop:admis-cut} we expand the $b_2^+\geq 3$ case of their argument here. 
Given a compact, oriented, connected 4-dimensional cobordism $W\colon Y_0\to Y_1$, $Y_i$ connected, an \emph{admissible cut} for $W$ is a decomposition $W=W_0\cup_NW_1$ along a closed, connected 3-manifold $N$ so that both $b_2^+(W_0)>0$ and $b_2^+(W_1)>0$ (and $Y_i\subset W_i$). Ozsv\'ath-Szab\'o show that any 4-manifold $W$ with $b_2^+(W)\geq 2$ has an admissible cut, as follows. Choose closed, connected, oriented surfaces $\Sigma_0,\Sigma_1\subset W$ with $[\Sigma_0]^2,[\Sigma_1]^2>0$ and $[\Sigma_0]\cdot[\Sigma_1]=0$. One can make $\Sigma_0$ and $\Sigma_1$ disjoint by repeatedly performing embedded surgery on $\Sigma_0$ to cancel pairs of points $p,q\in \Sigma_0\cap \Sigma_1$ of opposite sign, along an arc in $\Sigma_1$ from $p$ to $q$. Let $W_0$ be a neighborhood of the union of $\Sigma_0$, an arc from $\Sigma_0$ to $Y_0$, and $Y_0$. Choose the arc generically and its neighborhood small enough to be disjoint from $\Sigma_1$, and let $W_1$ be the complement of the interior of $W_0$. Then, $W_0\cup W_1$ is an admissible cut for $W$. \begin{figure} \centering \includegraphics{OSz} \caption{\textbf{Equivalence of admissible cuts in Ozsv\'ath-Szab\'o's setting.} This is a schematic illustration of the surfaces and paths in Remark~\ref{remark:OSz}. The cuts $N$ and $N'$ are dashed, the paths $\gamma$ are dotted, the surface $\Sigma_1'$ is thick, and the surface $\widetilde{\Sigma}_0$ is \textcolor{red}{thin}. A key point is that $\widetilde{\Sigma}_0$ is disjoint from $\Sigma_1$ and $\Sigma'_1$.} \label{fig:OSz} \end{figure} Next, call admissible cuts $W=W_0\cup_NW_1=W'_0\cup_{N'}W'_1$ \emph{elementary equivalent} if $N\cap N'=\emptyset$, and \emph{equivalent} if they differ by a sequence of elementary equivalences. If $b_2^+(W)\geq 3$ then Ozsv\'ath-Szab\'o argue that any two admissible cuts for $W$ are equivalent. Fix admissible cuts $W=W_0\cup_NW_1=W_0'\cup_{N'}W_1'$. For convenience, assume that $b_2^+(W_1')\geq 2$; the other case is symmetric. 
Choose connected surfaces $\Sigma_0\subset \mathring{W}_0$ with $[\Sigma_0]^2>0$ and $\Sigma'_1\subset \mathring{W}'_1$ with $[\Sigma'_1]^2>0$, and so that $[\Sigma_0]\cdot[\Sigma'_1]=0$ (this uses the assumption on $b_2^+(W'_1)$). (This part of the argument is illustrated schematically in Figure~\ref{fig:OSz}.) Performing surgery on $\Sigma_0$ along arcs $\Gamma$ in $\Sigma'_1$ with interiors disjoint from $\Sigma_0$ gives a new surface $\widetilde{\Sigma}_0$ homologous to $\Sigma_0$ and disjoint from $\Sigma'_1$. Let $\Sigma_1\subset \mathring{W}_1$ be a surface with $[\Sigma_1]^2>0$. Since $\Sigma_0\subset \mathring{W}_0$ and $\Sigma_1\subset \mathring{W}_1$, $\Sigma_0\cap\Sigma_1=\emptyset$. Perturbing $\Sigma_1$ slightly, we can assume that $\Sigma_1$ is also disjoint from the arcs $\Gamma$, and hence from $\widetilde{\Sigma}_0$. Choose an arc $\gamma_0\subset W_0$ connecting $\widetilde{\Sigma}_0$ to $Y_0$, disjoint from $\Sigma'_1$; $\gamma'_1\subset W'_1$ connecting $\Sigma'_1$ to $Y_1$, disjoint from $\widetilde{\Sigma}_0\cup\gamma_0$; and $\gamma_1\subset W_1$ connecting $\Sigma_1$ to $Y_1$, disjoint from $\Gamma$. Let $\widetilde{N}_0$ be the boundary of a neighborhood of $Y_0\cup\gamma_0\cup\widetilde{\Sigma}_0$, $N'_1$ the boundary of a neighborhood of $Y_1\cup \gamma'_1\cup \Sigma'_1$, and $N_1$ the boundary of a neighborhood of $Y_1\cup\gamma_1\cup\Sigma_1$. Observe that $\widetilde{N}_0$, $N_1$, and $N'_1$ are all admissible cuts. Further, $N$ is equivalent to $N_1$, which is equivalent to $\widetilde{N}_0$, which is equivalent to $N'_1$, which is equivalent to $N'$, proving the result. \end{remark} \section{The mixed invariant}\label{sec:mixed} Consider the long exact sequence $\cdots\to \mathcal{H}^-(L)\stackrel{\iota_*}{\longrightarrow}\mathcal{H}^\infty(L)\stackrel{\pi_*}{\longrightarrow} \mathcal{H}^+(L)\stackrel{\partial}{\longrightarrow}\mathcal{H}^-(L)\to\cdots$ from Formula~\eqref{eq:les}. 
Define $\mathcal{H}^{\mathrm{red}}(L)=\ker(\iota_*)\cong \cokernel(\pi_*)$, and give it the grading it inherits as a submodule of $\mathcal{H}^-(L)$; this is $(1,0)$ higher than its grading as a quotient module of $\mathcal{H}^+(L)$. (This is analogous to the reduced Heegaard Floer invariant $\mathit{HF}_\mathrm{red}$, and is not immediately related to reduced Khovanov homology.) \begin{definition}\label{def:compat-with-cut} Fix an admissible cut $(S,V,\phi)$ for a cobordism $F\subset [0,1]\times S^3$, decomposing $F$ into $F_0$ and $F_1$. Let $\Phi$ be a diffeomorphism as in Condition~\ref{item:AC-diffeo} of Definition~\ref{def:admissible-cut}. Given a movie $M_0$ representing $\Phi(F_0)\subset [0,1/2]\times S^3$ (identified with $[0,1]\times S^3$ in the obvious way) and a movie $M_1$ representing $\Phi(F_1)\subset [1/2,1]\times S^3$, we say that the concatenated movie $(M_0,M_1)$ is a \emph{movie compatible with the admissible cut $(S,V,\phi)$}. \end{definition} \begin{lemma}\label{lem:non-or-red} Let $F$ be a nonorientable cobordism in $[0,1]\times S^3$ from $L_0$ to $L_1$. Then, the image of the induced map $\mathcal{H}^-(F)\colon \mathcal{H}^-(L_0)\to \mathcal{H}^-(L_1)$ lies in $\mathcal{H}^{\mathrm{red}}(L_1)\subset \mathcal{H}^-(L_1)$, and the map $\mathcal{H}^+(F)\colon \mathcal{H}^+(L_0)\to \mathcal{H}^+(L_1)$ descends to a map $\mathcal{H}^{\mathrm{red}}(L_0)\to\mathcal{H}^{+}(L_1)$. \end{lemma} \begin{proof} By Proposition~\ref{prop:or-gens-part2}, $\mathcal{H}^\infty(F)\colon \mathcal{H}^\infty(L_0)\to \mathcal{H}^\infty(L_1)$ vanishes. So, both claims follow from the first long exact sequence in Formula~\eqref{eq:les} and Lemma~\ref{lem:les-natural}. \end{proof} \begin{definition}\label{def:mixed-invt} Let $F$ be a nonorientable cobordism from $L_0$ to $L_1$ with crosscap number $\geq 3$. Let $S$ be an admissible cut for $F$, decomposing $F$ as $F_1\circ F_0$ and $[0,1]\times S^3$ as $W_1\circ W_0$. 
Choose a movie compatible with the admissible cut, and let $\mathcal{H}^\pm(F_i)$ be the induced maps. By Lemma~\ref{lem:non-or-red}, the map $\mathcal{H}^-(F_0)$ induces a map $\mathcal{H}(F_0)\colon\mathcal{H}^-(L_0)\to \mathcal{H}^{\mathrm{red}}(L)$ and $\mathcal{H}^+(F_1)$ descends to a map $\mathcal{H}(F_1)\colon\mathcal{H}^{\mathrm{red}}(L)\to\mathcal{H}^+(L_1)$. Define the mixed invariant $\MixedInvt{F}\colon \mathcal{H}^-(L_0)\to\mathcal{H}^+(L_1)$ to be the composition $\mathcal{H}(F_1)\circ\mathcal{H}(F_0)$. That is, $\MixedInvt{F}$ is the composition of the two dashed arrows in the following diagram: \begin{equation}\label{eq:mixed-invt-diagram} \mathcenter{\begin{tikzpicture}[xscale=5,yscale=1.8] \node (minus-0) at (0,1) {$\mathcal{H}^-(L_0)$}; \node (inf-0) at (0,.45) {$\mathcal{H}^\infty(L_0)$}; \node (inff-1) at (1,2.55) {$\mathcal{H}^\infty(L)$}; \node (plus-1) at (1,2) {$\mathcal{H}^+(L)$}; \node (minus-1) at (1,1) {$\mathcal{H}^-(L)$}; \node (inf-1) at (1,.45) {$\mathcal{H}^\infty(L)$}; \node (red) at (1,1.5) {$\mathcal{H}^\mathrm{red}(L)$}; \node (inff-2) at (2,2.55) {$\mathcal{H}^\infty(L_1)$}; \node (plus-2) at (2,2) {$\mathcal{H}^+(L_1)$}; \draw[->] (minus-0) -- (inf-0); \draw[->] (inff-1) -- (plus-1); \draw[->] (plus-1) -- (red); \draw[->] (red)--(minus-1); \draw[->] (minus-1) -- (inf-1); \draw[->] (inff-2) -- (plus-2); \draw[->] (inf-0) -- (inf-1) node[midway,below] {\tiny $\mathcal{H}^\infty(F_0)=0$}; \draw[->] (minus-0) -- (minus-1) node[midway,below] {\tiny $\mathcal{H}^-(F_0)$}; \draw[->,dashed] (minus-0) -- (red) node[midway,above] {\tiny $\mathcal{H}(F_0)$}; \draw[->,dashed] (red) -- (plus-2) node[midway,below] {\tiny $\mathcal{H}(F_1)$}; \draw[->] (inff-1) -- (inff-2) node[midway,above] {\tiny $\mathcal{H}^\infty(F_1)=0$}; \draw[->] (plus-1) -- (plus-2) node[midway,above] {\tiny $\mathcal{H}^+(F_1)$}; \end{tikzpicture}} \end{equation} \end{definition} \begin{remark}\label{rem:2-crosscaps} Definition~\ref{def:mixed-invt} also makes 
sense if $F$ has crosscap number $2$, but the proof that $\MixedInvt{F}$ is independent of the choice of admissible cut (Theorem~\ref{thm:invt}) requires crosscap number at least $3$. That is, in the case that $F\colon L_0\to L_1$ has crosscap number $2$, there is a map $\MixedInvt{F,S}\colon \mathcal{H}^-(L_0)\to\mathcal{H}^+(L_1)$ which, as far as we know, depends on both the surface $F$ and the admissible cut $S$. \end{remark} \begin{theorem}\label{thm:invt} Let $F$ be a cobordism from $L_0$ to $L_1$, with crosscap number $\geq 3$. Then, the mixed invariant $\MixedInvt{F}\colon\mathcal{H}^-(L_0)\to\mathcal{H}^+(L_1)$ is independent of the choices in its construction, up to an overall sign. Further, if $F$ is isotopic to $F'$ or, more generally, if there is a self-diffeomorphism of $[0,1]\times S^3$ which is the identity on the boundary and sends $F$ to $F'$, then $\MixedInvt{F}=\pm\MixedInvt{F'}\colon \mathcal{H}^-(L_0)\to\mathcal{H}^+(L_1)$. \end{theorem} \begin{proof} We will show (in order): \begin{enumerate}[label=(\arabic*)] \item\label{item:inv-movie} Independence of the choice of movies compatible with a fixed admissible cut and, in particular, of isotopies of $F_0$ and $F_1$ rel boundary, and of the choice of diffeomorphism $\Phi$ in Definition~\ref{def:compat-with-cut}. \item\label{item:inv-cut} Invariance under elementary equivalences of admissible cuts. \item\label{item:inv-cob} Invariance under diffeomorphisms of surfaces and admissible cuts. \end{enumerate} By Proposition~\ref{prop:admis-cut}, this implies the result. Throughout the proof, ``equal'' or ``homotopic'' will mean equal or homotopic up to an overall sign. For point~\ref{item:inv-movie}, by Proposition~\ref{prop:BNh-functorial}, different choices of movies compatible with the same admissible cut give chain homotopic maps $\mathcal{C}^-(F_0)$ and $\mathcal{C}^-(F_1)$. (For independence of $\Phi$, this uses the last statement in Proposition~\ref{prop:BNh-functorial}.) 
If $\mathcal{C}^-(F_0)\sim \mathcal{C}^-(F_0')$, then $\mathcal{H}^-(F_0)=\mathcal{H}^-(F'_0)$; similarly, if $\mathcal{C}^-(F_1)\sim \mathcal{C}^-(F_1')$, then $\mathcal{C}^+(F_1)\sim \mathcal{C}^+(F_1')$, and hence $\mathcal{H}^+(F_1)=\mathcal{H}^+(F'_1)$. Notice that the lift $\mathcal{H}(F_0)\colon \mathcal{H}^-(L_0)\to\mathcal{H}^{\mathrm{red}}(L)$ of $\mathcal{H}^-(F_0)$ is canonical: $\mathcal{H}^{\mathrm{red}}(L)$ is a canonical submodule of $\mathcal{H}^-(L)$. Similarly, the induced map $\mathcal{H}(F_1)\colon \mathcal{H}^{\mathrm{red}}(L)\to \mathcal{H}^+(L_1)$ is canonical, since $\mathcal{H}^{\mathrm{red}}(L)$ is a canonical quotient module of $\mathcal{H}^+(L)$. So, different choices of movies for $F_0$ and $F_1$ give the same mixed invariant. For point~\ref{item:inv-cut}, if the admissible cuts $(S,V,\phi)$ and $(S',V',\phi')$ are elementary equivalent, let $L=\phi(S\cap F)$ and $L'=\phi'(S'\cap F)$. Assume, without loss of generality, that we are in the first case of Formula~\eqref{eq:admis-cut-equiv}. Choose a movie for $F$ compatible with this decomposition (in a sense analogous to Definition~\ref{def:compat-with-cut}), so $L$ and $L'$ are frames in the movie. Then, it follows from commutativity of the diagram \[ \begin{tikzpicture}[xscale=3,yscale=2.4] \node (plus-0) at (0,2) {$\mathcal{H}^+(L;R)$}; \node (minus-0) at (0,1) {$\mathcal{H}^-(L;R)$}; \node (red-0) at (0,1.5) {$\mathcal{H}^\mathrm{red}(L;R)$}; \node (plus-1) at (1,2) {$\mathcal{H}^+(L';R)$}; \node (minus-1) at (1,1) {$\mathcal{H}^-(L';R)$}; \node (red-1) at (1,1.5) {$\mathcal{H}^\mathrm{red}(L';R)$}; \draw[->] (plus-0) -- (red-0); \draw[->] (plus-1) -- (red-1); \draw[->] (red-0) -- (minus-0); \draw[->] (red-1) -- (minus-1); \draw[->] (plus-0) -- (plus-1); \draw[->] (minus-0) -- (minus-1); \draw[->] (red-0) -- (red-1); \end{tikzpicture} \] (and point~\ref{item:inv-movie}) that the mixed invariants with respect to $(S,V,\phi)$ and $(S',V',\phi')$ agree. 
Finally, for point~\ref{item:inv-cob}, suppose an admissible cut $(S,V,\phi)$ for $F$ is diffeomorphic to an admissible cut $(S',V',\phi')$ for $F'$, via a diffeomorphism $\Psi$. Fix a movie for $F$ compatible with $(S,V,\phi)$, with respect to a diffeomorphism $\Phi$ extending $\phi$. Then, the same movie is compatible with $(S',V',\phi')$, via the diffeomorphism $\Phi\circ\Psi$. By points~\ref{item:inv-movie} and~\ref{item:inv-cut}, we can compute the mixed invariants of $F$ and $F'$ using these movies, so the mixed invariants agree. \end{proof} \section{Properties}\label{sec:properties} \subsection{First observations}\label{sec:first-obs} We start by noting the mixed invariant's grading: \begin{lemma}\label{lem:mixed-grading} The mixed invariant $\MixedInvt{F}\colon \mathcal{H}^-(L_0)\to\mathcal{H}^+(L_1)$ increases the bigrading by $(-1-e/2,\chi-3e/2-2s)$, where $\chi$ is the Euler characteristic of $F$, $e$ is the normal Euler number of $F$, and $s$ is the number of stars on $F$. \end{lemma} \begin{proof} This follows immediately from Corollary~\ref{cor:hq-gr-change}. The additional downward grading shift by $(1,0)$ comes from the identification of $\mathcal{H}^{\mathrm{red}}(L)$ as a quotient module of $\mathcal{H}^+(L)$. \end{proof} There is a simple condition guaranteeing the mixed invariant vanishes: \begin{lemma}\label{lem:simple-vanishing} If $F$ has an admissible cut $S$ so that the link $L\subset S^3$ corresponding to $S\cap F$ has $\mathcal{H}^\mathrm{red}(L)=0$, then the mixed invariant $\MixedInvt{F}$ vanishes. \end{lemma} \begin{proof} This is immediate from Definition~\ref{def:mixed-invt}. \end{proof} \begin{remark} The analogue of Lemma~\ref{lem:simple-vanishing} for Heegaard Floer homology is factoring through an $L$-space. Note, however, that $L$-spaces seem to be much more common than links with vanishing $\mathcal{H}^\mathrm{red}$. \end{remark} The mixed invariant behaves simply with respect to (a particular kind of) mirroring. 
Let $F\colon L_0\to L_1$ be a cobordism, so $F$ is smoothly embedded in $[0,1]\times S^3$. Applying the orientation-preserving diffeomorphism $[0,1]\times S^3\to [0,1]\times S^3$, $(t,x,y,z,w)\mapsto (1-t,-x,y,z,w)$ gives a new cobordism $m(F)\colon m(L_1)\to m(L_0)$, where $m(L_i)$ denotes the mirror of $L_i$. The statement is a little simpler for the Lee deformation than the Bar-Natan deformation, so we separate the two cases. Given an $\mathrm{R}[T]$-module $M$, $\Hom_{\mathrm{R}}(M,\mathrm{R})$ inherits the structure of an $\mathrm{R}[T]$-module, as well. \begin{lemma}\label{lem:mirror-Lee} Let $\mathcal{C}^\pm$ denote the Lee deformation. Given a link $L$, there is an isomorphism $\mathcal{C}^+(m(L))\cong \Hom_{\mathrm{R}}(\mathcal{C}^-(L),\mathrm{R})$ of complexes over $\mathrm{R}[T]$ so that for any cobordism $F\colon L_0\to L_1$, \[ \mathcal{C}^+(m(F))\colon \mathcal{C}^+(m(L_1))\cong \Hom_{\mathrm{R}}(\mathcal{C}^-(L_1),\mathrm{R})\to \Hom_{\mathrm{R}}(\mathcal{C}^-(L_0),\mathrm{R})\cong\mathcal{C}^+(m(L_0)) \] is the dual to the map $\mathcal{C}^-(F)\colon \mathcal{C}^-(L_0)\to\mathcal{C}^-(L_1)$. Further, if $F$ has crosscap number $\geq 3$ and $\mathrm{R}$ is a field then the mixed invariant $\MixedInvt{m(F)}$ is given by the composition \[ \mathcal{H}^-(m(L_1))\cong \Hom(\mathcal{H}^+(L_1),\mathrm{R})\stackrel{\MixedInvt{F}^*}{\longrightarrow}\Hom(\mathcal{H}^-(L_0),\mathrm{R})\cong\mathcal{H}^+(m(L_0)). \] In particular, $\MixedInvt{m(F)}$ vanishes if and only if $\MixedInvt{F}$ does. \end{lemma} (Compare~\cite[Proposition 32]{Kho-kh-categorification},~\cite[pp. 184--185]{Kho-kh-Frobenius}, and~\cite[Proposition 3.1]{CMW-kh-functoriality}.) \begin{proof} The isomorphism $\mu\colon \Hom(\mathcal{C}^-(L),\mathrm{R})\to \mathcal{C}^+(m(L))$ is defined as follows. Given a generator $T^n(v,y)$ of $\mathcal{C}^-(L)$ (over $\mathrm{R}$), let $[T^n(v,y)]^*$ denote the dual generator of $\Hom(\mathcal{C}^-(L),\mathrm{R})$. 
The isomorphism is given by $\mu\bigl([T^n(v,y)]^*\bigr)=T^{-n-1}(\vec{1}-v,y^*)$ where $\vec{1}-v=(1-v_1,\dots,1-v_c)$ and $y^*$ is the result of reversing the label of every circle. (That is, if $y$ labels a circle $Z$ by $X$ then $y^*$ labels the corresponding circle by $1$, and vice-versa.) It is straightforward to check that this defines a chain isomorphism. A movie for $F$ induces a movie for $m(F)$ by mirroring each frame and reversing the order of the frames. So, it suffices to check the second statement for a single elementary cobordism (pair of adjacent frames in a movie). This is a straightforward case check. For the statement about the mixed invariant, a choice of admissible cut for $F$, along some link $L$, and movie compatible with it induce an admissible cut for $m(F)$, along $m(L)$, and a movie compatible with it. The map $\mu$ induces an isomorphism of short exact sequences \[ \mathcenter{\begin{tikzpicture}[xscale=1.65] \node at (.75,0) (tl0) {$0$}; \node at (2,0) (Cm) {$\Hom(\mathcal{C}^+(L),\mathrm{R})$}; \node at (4,0) (Ci) {$\Hom(\mathcal{C}^\infty(L),\mathrm{R})$}; \node at (6,0) (Cpt) {$\Hom(\mathcal{C}^-(L),\mathrm{R})$}; \node at (7.25,0) (tr0) {$0$}; \node at (0.75,-1) (bl0) {$0$}; \node at (2,-1) (Ch) {$\mathcal{C}^-(m(L))$}; \node at (4,-1) (Cpb1) {$\mathcal{C}^\infty(m(L))$}; \node at (6,-1) (Cpb2) {$\mathcal{C}^+(m(L))$}; \node at (7.25,-1) (br0) {$0$}; \draw[->] (tl0) to (Cm); \draw[->] (Cm) to (Ci); \draw[->] (Ci) to (Cpt); \draw[->] (Cpt) to (tr0); \draw[->] (bl0) to (Ch); \draw[->] (Ch) to (Cpb1); \draw[->] (Cpb1) to (Cpb2); \draw[->] (Cpb2) to (br0); \draw[->] (Cm) to (Ch); \draw[->] (Ci) to (Cpb1); \draw[->] (Cpt) to (Cpb2); \end{tikzpicture}} \] so naturality of the snake lemma implies that the diagram \[ \mathcenter{\begin{tikzpicture}[xscale=1.75, yscale=1.1] \node at (0,0) (Cmdual) {$\Hom(\mathcal{H}^-(L),\mathrm{R})$}; \node at (4,0) (Cpdual) {$\Hom(\mathcal{H}^+(L),\mathrm{R})$}; \node at (0,-3) (Cp) {$\mathcal{H}^+(m(L))$}; 
\node at (4,-3) (Cm) {$\mathcal{H}^-(m(L))$}; \node at (2,-1) (reddual) {$\Hom(\mathcal{H}^{\mathrm{red}}(L),\mathrm{R})$}; \node at (2,-2) (redu) {$\mathcal{H}^{\mathrm{red}}(m(L))$}; \draw[->] (Cmdual) to node[above]{\lab{\partial^*}} (Cpdual); \draw[->] (Cp) to node[above]{\lab{\partial}} (Cm); \draw[->] (Cmdual) to node[left]{\lab{\cong}} (Cp); \draw[->] (Cpdual) to node[right]{\lab{\cong}} (Cm); \draw[->] (Cp) to (redu); \draw[->] (redu) to (Cm); \draw[->, dashed] (reddual) to (redu); \draw[->] (Cmdual) to (reddual); \draw[->] (reddual) to (Cpdual); \end{tikzpicture}} \] commutes. Combining this with the definition of the mixed invariant in Diagram~\eqref{eq:mixed-invt-diagram} gives the result. \end{proof} For the analogous results for the Bar-Natan complex, there is an extra sign. There is a ring automorphism $\sigma\colon \mathrm{R}[H]\to\mathrm{R}[H]$ induced by $\sigma(H)=-H$. Given a module $M$ over $\mathrm{R}[H]$, let $M_\sigma$ be the result of restricting (or extending) scalars by $\sigma$. That is, $M_\sigma$ is the same as $M$ except $H$ acts on $M_\sigma$ the way $-H$ acts on $M$. Given another $\mathrm{R}[H]$-module $N$ and a homomorphism $f\colon M\to N$, there is an induced homomorphism $f\colon M_\sigma\to N_\sigma$ (which, as a map of sets, is the same as $f\colon M\to N$). Then, the following is the analogue of Lemma~\ref{lem:mirror-Lee}: \begin{lemma} Let $\mathcal{C}^\pm$ denote the Bar-Natan deformation. Given a link $L$, there is an isomorphism $\mathcal{C}^+(m(L))_\sigma\cong \Hom_{\mathrm{R}}(\mathcal{C}^-(L),\mathrm{R})$ of complexes over $\mathrm{R}[H]$ so that for any cobordism $F\colon L_0\to L_1$, \[ \mathcal{C}^+(m(F))\colon \mathcal{C}^+(m(L_1))_\sigma\cong \Hom_{\mathrm{R}}(\mathcal{C}^-(L_1),\mathrm{R})\to \Hom_{\mathrm{R}}(\mathcal{C}^-(L_0),\mathrm{R})\cong\mathcal{C}^+(m(L_0))_\sigma \] is the dual to the map $\mathcal{C}^-(F)\colon \mathcal{C}^-(L_0)\to\mathcal{C}^-(L_1)$. 
Further, if $F$ has crosscap number $\geq 3$ and $\mathrm{R}$ is a field then the mixed invariant $\MixedInvt{m(F)}$, viewed as a map of modules twisted by $\sigma$, is given by the composition \[ \mathcal{H}^-(m(L_1))_\sigma\cong \Hom(\mathcal{H}^+(L_1),\mathrm{R})\stackrel{\MixedInvt{F}^*}{\longrightarrow}\Hom(\mathcal{H}^-(L_0),\mathrm{R})\cong\mathcal{H}^+(m(L_0))_\sigma. \] In particular, $\MixedInvt{m(F)}$ vanishes if and only if $\MixedInvt{F}$ does. \end{lemma} \begin{proof} The isomorphism sends $[H^n(v,y)]^*$ to $(-1)^nH^{-n-1}(\vec{1}-v,y^*)$. The rest of the proof is a straightforward adaptation of the Lee case. \end{proof} \begin{remark} If $\mathrm{R}$ is a field, it follows from the classification of modules over a PID that there is a (perhaps unnatural) isomorphism over $\mathrm{R}[H]$ between $\mathcal{H}^+(m(L))$ and $\Hom(\mathcal{H}^+(L),\mathrm{R})$. \end{remark} \begin{remark} There is another mirror one might consider: the map $(t,x,y,z,w)\mapsto (t,-x,y,z,w)$ which mirrors each frame in the movie but does not reverse the order of the frames. Neither $\mathcal{H}^-(F)$ nor the mixed invariant seems to behave simply with respect to this operation, as the example in Section~\ref{sec:direct-comp} shows. (Gauge-theoretic invariants of the branched double cover also do not behave well with respect to this operation.) \end{remark} The mixed invariant also respects composition, as follows: \begin{lemma}\label{lem:composition} Let $F_0\colon L_0\to L_1$, $F_1\colon L_1\to L_2$, and $F_2\colon L_2\to L_3$ be cobordisms, so that $F_1$ has crosscap number $\geq 3$. Then, \begin{align} \MixedInvt{F_2\circ F_1}&=\mathcal{H}^+(F_2)\circ \MixedInvt{F_1}\\ \MixedInvt{F_1\circ F_0}&=\MixedInvt{F_1}\circ\mathcal{H}^-(F_0). 
\end{align} The same result holds if $F_1$ has crosscap number $2$, as long as $F_1\circ F_0$ and $F_2\circ F_1$ have crosscap number $\geq 3$, and we define $\MixedInvt{F_1}$ with respect to any choice of admissible cut for $F_1$ (compare Remark~\ref{rem:2-crosscaps}). \end{lemma} \begin{proof} This is immediate from the definitions. \end{proof} There is an easy criterion for the mixed invariant to be non-vanishing, in terms of the induced map on ordinary Khovanov homology $\widehat{\mathcal{H}}$: \begin{lemma}\label{lem:hat} Let $F\colon L_0\to L_1$ be a cobordism with crosscap number $\geq 3$. Then, the following diagrams commute: \[ \mathcenter{\begin{tikzpicture}[xscale=3.5,yscale=1.2] \node at (0,0) (Hm) {$\mathcal{H}^-(L_0)$}; \node at (1,0) (Hp) {$\mathcal{H}^+(L_1)$}; \node at (0,-1) (Hh0) {$\widehat{\mathcal{H}}(L_0)$}; \node at (1,-1) (Hh1) {$\widehat{\mathcal{H}}(L_1)$}; \draw[->] (Hm) to node[above]{\lab{\MixedInvt{F}}} (Hp); \draw[->] (Hh0) to node[above]{\lab{\widehat{\mathcal{H}}(F)}} (Hh1); \draw[->] (Hm) to node[left]{\lab{\pi_*}} (Hh0); \draw[->] (Hp) to node[right]{\lab{\partial}} (Hh1); \end{tikzpicture}} \text{\quad and \quad} \mathcenter{\begin{tikzpicture}[xscale=3.5,yscale=1.2] \node at (0,0) (Hm) {$\mathcal{H}^-(L_0)$}; \node at (1,0) (Hp) {$\mathcal{H}^+(L_1).$}; \node at (0,1) (Hh0) {$\widehat{\mathcal{H}}(L_0)$}; \node at (1,1) (Hh1) {$\widehat{\mathcal{H}}(L_1)$}; \draw[->] (Hm) to node[above]{\lab{\MixedInvt{F}}} (Hp); \draw[->] (Hh0) to node[above]{\lab{\widehat{\mathcal{H}}(F)}} (Hh1); \draw[->] (Hh0) to node[left]{\lab{\partial}} (Hm); \draw[->] (Hh1) to node[right]{\lab{\iota_*}} (Hp); \end{tikzpicture}} \] In particular, if $\widehat{\mathcal{H}}(F)\circ\pi_*$ or $\iota_*\circ \widehat{\mathcal{H}}(F)$ is non-zero then the mixed invariant $\MixedInvt{F}$ is also non-zero. \end{lemma} \begin{proof} Let $L$ be an admissible cut for $F$, decomposing $F$ as $F_1\circ F_0$. 
Define the map $\partial\colon \mathcal{H}^{\mathrm{red}}(L)\to \widehat{\mathcal{H}}(L)$ to be the composition $\mathcal{H}^\mathrm{red}(L)\to\mathcal{H}^-(L)\stackrel{\pi_*}{\longrightarrow} \widehat{\mathcal{H}}(L)$. Using the first commutative triangle in Formula~(\ref{eq:bdybdy-tri}), the map $\partial\colon \mathcal{H}^+(L)\to\widehat{\mathcal{H}}(L)$ is the composition $\mathcal{H}^+(L)\to \mathcal{H}^\mathrm{red}(L)\to\mathcal{H}^-(L)\stackrel{\pi_*}{\longrightarrow} \widehat{\mathcal{H}}(L)$, so is also the composition $\mathcal{H}^+(L)\to \mathcal{H}^\mathrm{red}(L)\stackrel{\partial}{\longrightarrow} \widehat{\mathcal{H}}(L)$. To see that $\partial\circ \Phi_F=\widehat{\mathcal{H}}(F)\circ\pi_*$, consider the larger diagram \[ \begin{tikzpicture} \node at (1,0) (hatL0) {$\widehat{\mathcal{H}}(L_0)$}; \node at (0,2) (mL0) {$\mathcal{H}^-(L_0)$}; \node at (3,1.5) (mL) {$\mathcal{H}^-(L)$}; \node at (4,0) (hatL) {$\widehat{\mathcal{H}}(L)$}; \node at (4,3) (Hred) {$\mathcal{H}^{\mathrm{red}}(L)$}; \node at (5,1.5) (pL) {$\mathcal{H}^+(L)$}; \node at (7,0) (hatL1) {$\widehat{\mathcal{H}}(L_1)$}; \node at (8,2) (pL1) {$\mathcal{H}^+(L_1)$}; \draw[->] (mL0) to node[left]{\lab{\pi_*}} (hatL0); \draw[->] (hatL0) to node[above]{\lab{\widehat{\mathcal{H}}(F_0)}} (hatL); \draw[->] (hatL) to node[above]{\lab{\widehat{\mathcal{H}}(F_1)}} (hatL1); \draw[->,sloped] (mL0) to node[below]{\lab{\mathcal{H}^-(F_0)}} (mL); \draw[->, dashed] (mL0) to node[above,sloped]{\lab{\mathcal{H}(F_0)}} (Hred); \draw[->, dashed] (Hred) to node[above,sloped]{\lab{\mathcal{H}(F_1)}} (pL1); \draw[->] (Hred) to (mL); \draw[->] (pL) to (Hred); \draw[->] (mL) to node[left]{\lab{\pi_*}} (hatL); \draw[->] (pL) to node[right]{\lab{\partial}} (hatL); \draw[->,sloped] (pL) to node[below]{\lab{\mathcal{H}^+(F_1)}} (pL1); \draw[->] (pL1) to node[right]{\lab{\partial}} (hatL1); \draw[->] (Hred) to node[left]{\lab{\partial}} (hatL); \end{tikzpicture} \] The middle triangles commute by the discussion 
above. The outer squares commute by naturality of the long exact sequences~\eqref{eq:les}, Lemma~\ref{lem:les-natural}. The triangles at the top commute by the definition of the dashed lifts. Since the map $\mathcal{H}^+(L)\to \mathcal{H}^{\mathrm{red}}(L)$ is surjective, commutativity of the right square and triangles implies that $\partial\circ \mathcal{H}(F_1)=\widehat{\mathcal{H}}(F_1)\circ \partial\colon \mathcal{H}^{\mathrm{red}}(L)\to\widehat{\mathcal{H}}(L_1)$. Commutativity of the square and two triangles on the left then implies the result. The proof that $\iota_*\circ \widehat{\mathcal{H}}(F)=\Phi_F\circ\partial$ is similar, but instead uses the commutative diagram \[ \begin{tikzpicture} \node at (1,0) (hatL0) {$\widehat{\mathcal{H}}(L_0)$}; \node at (0,-2) (mL0) {$\mathcal{H}^-(L_0)$}; \node at (3,-1.5) (mL) {$\mathcal{H}^-(L)$}; \node at (4,0) (hatL) {$\widehat{\mathcal{H}}(L)$}; \node at (4,-3) (Hred) {$\mathcal{H}^{\mathrm{red}}(L)$}; \node at (5,-1.5) (pL) {$\mathcal{H}^+(L)$}; \node at (7,0) (hatL1) {$\widehat{\mathcal{H}}(L_1)$}; \node at (8,-2) (pL1) {$\mathcal{H}^+(L_1)$}; \draw[->] (hatL0) to node[left]{\lab{\partial}} (mL0); \draw[->] (hatL0) to node[above]{\lab{\widehat{\mathcal{H}}(F_0)}} (hatL); \draw[->] (hatL) to node[above]{\lab{\widehat{\mathcal{H}}(F_1)}} (hatL1); \draw[->,sloped] (mL0) to node[above]{\lab{\mathcal{H}^-(F_0)}} (mL); \draw[->, dashed] (mL0) to node[below,sloped]{\lab{\mathcal{H}(F_0)}} (Hred); \draw[->, dashed] (Hred) to node[below,sloped]{\lab{\mathcal{H}(F_1)}} (pL1); \draw[->] (Hred) to (mL); \draw[->] (pL) to (Hred); \draw[->] (hatL) to node[left]{\lab{\partial}} (mL); \draw[->] (hatL) to node[right]{\lab{\iota_*}} (pL); \draw[->,sloped] (pL) to node[above]{\lab{\mathcal{H}^+(F_1)}} (pL1); \draw[->] (hatL1) to node[right]{\lab{\iota_*}} (pL1); \draw[->] (hatL) to (Hred); \end{tikzpicture} \] and the fact that the map $\mathcal{H}^{\mathrm{red}}(L)\to \mathcal{H}^-(L)$ is injective. 
\end{proof} \begin{remark}\label{rem:mixed-strong} Since $\pi_*\colon \mathcal{H}^-(\emptyset)\to\widehat{\mathcal{H}}(\emptyset)$ is surjective, it follows from Lemma~\ref{lem:hat} that if $F$ is a cobordism from $\emptyset$ to $L$ and $\widehat{\mathcal{H}}(F)\neq 0$ then $\MixedInvt{F}\neq 0$, as well. Similarly, if $F$ is a cobordism from $L$ to $\emptyset$ and $\widehat{\mathcal{H}}(F)\neq 0$ then $\MixedInvt{F}\neq 0$. \end{remark} \subsection{Stabilizations} Next, we turn to the behavior of the mixed invariant under various local changes to the knot. For example, we will study the behavior under Baykur-Sunukjian's \emph{stabilizations} (attaching arbitrary $1$-handles to a surface) and \emph{standard stabilizations} (local connect sums with a standard $T^2$)~\cite{BS-top-stabilizations}, \emph{crosscap stabilizations} (taking local connected sums with a standard $\mathbb{R}\mathrm{P}^2$ or $\overline{\mathbb{R}\mathrm{P}}^2$), \emph{local knotting} (taking a local connected sum with a knotted $S^2$), and more general local connected sums. The main results are summarized in Theorem~\ref{thm:vanishing}, though some technical results along the way (e.g., Corollaries~\ref{cor:star-0} and~\ref{cor:H-vanish}) may also be of interest. Most of the results in this section work for all four versions of Khovanov homology, $\mathcal{H}^-$, $\mathcal{H}^+$, $\mathcal{H}^\infty$, or $\widehat{\mathcal{H}}$, so we will use the symbol $\mathcal{H}^\bullet$ to denote any of these four versions. The key technical property we will use, as usual for these kinds of arguments, is a neck-cutting relation. \begin{proposition}\label{prop:neck-cut} Let $F\colon L_0\to L_1$ be a cobordism, and let $A$ be an arc in $[0,1]\times S^3$ with endpoints on a single component $F_0$ of $F$ and interior disjoint from $F$. Let $F^\cap$ be the result of attaching a $1$-handle to $F$ along $A$ and $F^\star$ the result of adding a star to $F$ at one endpoint of $A$. 
If $F_0$ is orientable, assume that the $1$-handle is attached in such a way that the resulting component is still orientable. Then, $\mathcal{H}^\bullet(F^\cap)=\mathcal{H}^\bullet(F^\star)$. \end{proposition} \begin{proof} For the Lee deformation, this was essentially shown by Levine-Zemke~\cite[Proposition 7]{LV-kh-ribbon}, so we focus on the case of the Bar-Natan deformation and comment on the Lee deformation at the end. Let $B$ be an arc in $F$ connecting the endpoints of $A$, chosen so that the loop $A\cup B$ is two-sided, i.e., so that $TF^\cap|_{A\cup B}$ is trivial. (If $F_0$ is orientable, any arc $B$ has $TF^\cap|_{A\cup B}$ trivial by the assumption that orientability was preserved; if $F_0$ is nonorientable, given any arc $B$ connecting the endpoints of $A$ we can modify $B$ by taking the connected sum with a one-sided loop to achieve this property.) Let $H=F^\cap\setminus F$ denote the new $1$-handle. Perform an isotopy of $F^\cap$ so that: \begin{itemize} \item The projection $[0,1]\times S^3\to [0,1]$ restricts to a Morse function $f$ on $F^\cap$, and induces a movie decomposition of $F^\cap$. \item The restriction $f|_H$ has two critical points, both of index $1$, corresponding to a pair of saddles in the movie. \item The two saddles corresponding to $f|_H$ in the movie are adjacent, happening at times $t+\epsilon,t+2\epsilon\in (0,1)$, and there are no other elementary cobordisms between $t-\epsilon$ and $t+3\epsilon$.
Further, these frames are obtained by gluing the following local model to the identity cobordism of the rest of the link: \begin{equation}\label{eq:neck-tangle} \mathcenter{\begin{tikzpicture}[scale=0.8] \draw[dashed] (0,0) circle (1); \draw[bend right=45,knot] (45:1) to (.7071,-.7071); \draw[bend left=45,knot] (-.7071,.7071) to (-.7071,-.7071); \end{tikzpicture}} \longrightarrow \mathcenter{ \begin{tikzpicture}[scale=0.8] \draw[dashed] (0,0) circle (1); \draw[bend left=45,knot] (.7071,.7071) to (-.7071,.7071); \draw[bend right=45,knot] (.7071,-.7071) to (-.7071,-.7071); \end{tikzpicture} } \longrightarrow \mathcenter{ \begin{tikzpicture}[scale=0.8] \draw[dashed] (0,0) circle (1); \draw[bend right=45,knot] (.7071,.7071) to (.7071,-.7071); \draw[bend left=45,knot] (-.7071,.7071) to (-.7071,-.7071); \end{tikzpicture} } \end{equation} \item The arc $B$ satisfies $f(B)=t$. \end{itemize} (To arrange this, first isotope $F^\cap$ so that $H$ is small, then make $H$ standard with respect to the projection to $[0,1]$, and then isotope $B$ to lie in the desired level set and use the isotopy extension lemma to push the rest of $F^\cap$ out of the way.) Given a dotted (not starred) cobordism, decomposed as a movie, there is a corresponding map of Bar-Natan complexes, where the map associated to a dot is multiplication by $X$. Let $F^{{\vcenter{\hbox{\tikz{\fill[black] (0,0) circle (0.05);}}}},0}$ and $F^{{\vcenter{\hbox{\tikz{\fill[black] (0,0) circle (0.05);}}}},1}$ be the result of placing a dot on $F$ at each endpoint of $A$. Then, an easy local computation shows that \begin{equation}\label{eq:neck-cut-helper} \pm\mathcal{H}^\bullet(F^\cap)=\mathcal{H}^\bullet(F^{{\vcenter{\hbox{\tikz{\fill[black] (0,0) circle (0.05);}}}},0})+\mathcal{H}^\bullet(F^{{\vcenter{\hbox{\tikz{\fill[black] (0,0) circle (0.05);}}}},1})-H\mathcal{H}^\bullet(F). 
\end{equation} The surface $F^{{\vcenter{\hbox{\tikz{\fill[black] (0,0) circle (0.05);}}}},0}$ can be transformed into $F^{{\vcenter{\hbox{\tikz{\fill[black] (0,0) circle (0.05);}}}},1}$ by moving the dot along the arc $B$. Since $B$ is contained in a single level, this corresponds to moving the dot along an arc in a single link diagram in the movie. Let $F^{{\vcenter{\hbox{\tikz{\fill[black] (0,0) circle (0.05);}}}},(i)}$ be the surface after we have moved the dot through $i$ crossings. By Alishahi's lemma~\cite[Lemma 2.2]{Ali-kh-unknotting}, \[ \mathcal{H}^\bullet(F^{{\vcenter{\hbox{\tikz{\fill[black] (0,0) circle (0.05);}}}},(i)})=H\mathcal{H}^\bullet(F)-\mathcal{H}^\bullet(F^{{\vcenter{\hbox{\tikz{\fill[black] (0,0) circle (0.05);}}}},(i+1)}). \] So, it suffices to show that the arc $B$ has an even number of crossings on it: then $\mathcal{H}^\bullet(F^{{\vcenter{\hbox{\tikz{\fill[black] (0,0) circle (0.05);}}}},0})=\mathcal{H}^\bullet(F^{{\vcenter{\hbox{\tikz{\fill[black] (0,0) circle (0.05);}}}},1})$, so the right side of Formula~\eqref{eq:neck-cut-helper} is equal to $\mathcal{H}^\bullet(F^\star)$. This is where the assumption that $A\cup B$ is two-sided is used. Orient the arc $B$. This induces an orientation of the arcs in the first frame of~\eqref{eq:neck-tangle}. The fact that $TF^\cap|_{A\cup B}$ is trivial implies that this orientation is compatible with the saddle cobordisms in~\eqref{eq:neck-tangle}; that is, $B$ connects the bottom-left endpoint to the bottom-right one, or the top-left endpoint to the top-right one. Without loss of generality, assume $B$ runs from the top-left endpoint to the top-right one. Choose a checkerboard coloring of the left link diagram so that the region between the arcs shown is black. Then, $B$ starts with a black region to its right, and ends with a black region to its right. Each time $B$ passes through a crossing, the black region switches between the left and right side of $B$.
So, $B$ passes through an even number of crossings, as claimed. The proof for the Lee case is the same, except that the analogue of Equation~\eqref{eq:neck-cut-helper} does not have the $H\mathcal{H}(F)$ term, and Alishahi's lemma is replaced by Hedden-Ni's \cite[Lemma 2.3]{HN-kh-detects}. \end{proof} The standard $\mathbb{R}\mathrm{P}^2$ inside the $4$-ball is the surface represented by the following movie: \begin{equation}\label{eq:std-rp2} \vcenter{\hbox{\begin{tikzpicture}[xscale=2] \foreach \i in {1,6}{ \node (m\i) at (\i,0) {$\varnothing$}; } \foreach \i in {2,5}{ \node (m\i) at (\i,0) {\begin{tikzpicture}[scale=0.5] \draw[knot] (0,0) circle (1); \end{tikzpicture}}; } \node (m3) at (3,0) {\begin{tikzpicture}[scale=0.5] \draw[knot] (45:1) arc (45:315:1); \draw[knot] (45:1) to [out=-45,in=-90] (-0.7,0); \draw[knot] (-45:1) to [out=45,in=90] (-0.7,0); \end{tikzpicture}}; \node (m4) at (4,0) {\begin{tikzpicture}[scale=0.5] \draw[knot] (45:1) arc (45:135:1); \draw[knot] (-45:1) arc (-45:-135:1); \draw[knot] (45:1) to [out=-45,in=135] (225:1); \draw[knot] (-45:1) to [out=45,in=-135] (-225:1); \end{tikzpicture}}; \foreach \i/\l in {1/b,2/RI,3/s,4/RI,5/d}{ \pgfmathtruncatemacro{\j}{\i+1}% \node (a\i) at ($(m\i)!0.5!(m\j)$) {$\longrightarrow$}; \node [anchor=south] at (a\i) {\tiny $\l$}; } \end{tikzpicture}}} \end{equation} The standard $\mathbb{R}\mathrm{P}^2$ has $e=-2$. The standard $\overline{\mathbb{R}\mathrm{P}}^2$ is the mirror of the above, and has $e=2$. (Our conventions are chosen to agree with~\cite{FKV-top-knot-surf}.) Define a \emph{crosscap stabilization} to be the result of taking the connected sum with a standard $\mathbb{R}\mathrm{P}^2$ or $\overline{\mathbb{R}\mathrm{P}}^2$. \begin{lemma}\label{lem:cc-stab-vanish} If $F^\otimes$ is obtained from $F$ by a crosscap stabilization then $\mathcal{H}^\bullet(F^\otimes)$ vanishes. 
\end{lemma} \begin{proof} A movie for $F^\otimes$ is obtained from a movie for $F$ by taking the disjoint union with the movie in Formula~\eqref{eq:std-rp2} or its mirror, but replacing $d$ with a saddle map between the unknot shown and $F$. It is straightforward to see that the map on $\mathcal{H}^\bullet$ induced by $s\circ RI\circ b$ vanishes (as does the map associated to the mirror of this movie), so the map associated to the whole movie vanishes, as well. \end{proof} \begin{corollary}\label{cor:star-0} Let $F\colon L_0\to L_1$ be a cobordism, and suppose that some nonorientable component $F_0$ of $F$ has a star on it. Then, $\mathcal{H}^\bullet(F)=0$. Similarly, if $F$ is obtained from another surface by attaching a $1$-handle to a nonorientable component then $\mathcal{H}^\bullet(F)=0$. \end{corollary} \begin{proof} For the first statement, let $F^\curlywedge$ be the result of taking the connected sum of $F$ with a local Klein bottle (with normal Euler number $0$) at the star on $F_0$, and forgetting the star. That is, $F^\curlywedge$ is obtained from $F$ by attaching a local $1$-handle with both feet near the star, in a locally-nonorientable way (and forgetting the star). By Proposition~\ref{prop:neck-cut}, $\mathcal{H}^\bullet(F)=\mathcal{H}^\bullet(F^\curlywedge)$. On the other hand, $F^\curlywedge$ is also obtained from $F$ by taking the connect sum with $\mathbb{R}\mathrm{P}^2\#\overline{\mathbb{R}\mathrm{P}}^2$ (and forgetting the star). So, by Lemma~\ref{lem:cc-stab-vanish}, $\mathcal{H}^\bullet(F^\curlywedge)$ vanishes. The second statement follows from the first and Proposition~\ref{prop:neck-cut}. \end{proof} Adding an even number of stars has a predictable effect on the cobordism maps and the mixed invariant: \begin{lemma}\label{lem:starstar} Let $F\colon L_0\to L_1$ be a cobordism, and let $F^{\star\star}$ be the result of adding two stars to the same component of $F$. 
Then, \[ \mathcal{H}^\bullet(F^{\star\star})= \begin{cases} 4T\mathcal{H}^\bullet(F) & \text{for the Lee deformation}\\ H^2\mathcal{H}^\bullet(F) & \text{for the Bar-Natan deformation.} \end{cases} \] Further, if the crosscap number of $F$ is at least $3$ then \[ \MixedInvt{F^{\star\star}}= \begin{cases} 4T\MixedInvt{F} & \text{for the Lee deformation}\\ H^2\MixedInvt{F} & \text{for the Bar-Natan deformation.} \end{cases} \] \end{lemma} \begin{proof} Since the maps are invariant under isotopy of the cobordisms, we can arrange that the two elementary star cobordisms are adjacent. Then, the result is immediate from the definition of the map associated to an elementary star cobordism. \end{proof} \begin{corollary}\label{cor:H-vanish} If $F$ is nonorientable then for the Lee deformation, $4T\mathcal{H}^\bullet(F)=0$, and for the Bar-Natan deformation, $H^2\mathcal{H}^\bullet(F)=0$. If $F$ has crosscap number at least $3$ then additionally, $4T\MixedInvt{F}=0$ and $H^2\MixedInvt{F}=0$ for the Lee and Bar-Natan deformations, respectively. \end{corollary} \begin{proof} This is immediate from Corollary~\ref{cor:star-0} and Lemma~\ref{lem:starstar}. \end{proof} We note a very mild extension of a result of Rasmussen~\cite{Rasmussen-kh-closed} and Tanaka~\cite{Tanaka-kh-closed}: \begin{lemma}\label{lem:closed-surf} If $F\subset [0,1]\times S^3$ is a closed, connected, orientable surface of genus $g$ with $s$ stars then $\mathcal{H}^\bullet(F)$ is \begin{itemize} \item $0$ if $g+s$ is even, \item $2H^{g+s-1}$ if $g+s$ is odd and we are considering the Bar-Natan deformation, and \item $2^{g+s}T^{\frac{g+s-1}{2}}$ if $g+s$ is odd and we are considering the Lee deformation. \end{itemize} If $F\subset [0,1]\times S^3$ is a closed, connected, nonorientable surface (possibly with stars) then $\mathcal{H}^\bullet(F)=0$. \end{lemma} (In both cases, the surface may be knotted.) \begin{proof} We start with the orientable case. 
Any such surface becomes isotopic to a standardly-embedded one after attaching some number of $1$-handles (see, e.g.,~\cite[Theorem 1]{BS-top-stabilizations}). By adding an extra one if necessary, we may assume the number of $1$-handles added is even, say $2k$. By Proposition~\ref{prop:neck-cut}, adding these $1$-handles has the same effect as adding the same number of stars, which, by Lemma~\ref{lem:starstar}, multiplies $\mathcal{H}^\bullet(F)$ by $(4T)^k$ or $H^{2k}$. Considering first $\mathcal{H}^-(F)$, multiplication by $(4T)^k$ or $H^{2k}$ is an injective map $\mathrm{R}[T]\to\mathrm{R}[T]$ or $\mathrm{R}[H]\to \mathrm{R}[H]$, respectively, so the element $\mathcal{H}^-(F)$ depends only on $g$ and $s$, not on the embedding of $F$. Thus, the result for $\mathcal{H}^-(F)$ follows from an easy model computation for a standardly-embedded surface (which can be made even easier by applying Proposition~\ref{prop:neck-cut} to trade the genus for stars and then applying Lemma~\ref{lem:starstar}). The results for the other versions---$\mathcal{H}^\infty(F)$, $\mathcal{H}^+(F)$, and $\widehat{\mathcal{H}}(F)$---follow formally from the case of $\mathcal{H}^-(F)$ and the natural long exact sequences~(\ref{eq:les}). The proof for the nonorientable case is essentially the same. By a result of Baykur-Sunukjian \cite[Theorem 6]{BS-top-stabilizations}, after a finite number of stabilizations, $F$ becomes isotopic to a connected sum of copies of the standard $\mathbb{R}\mathrm{P}^2$ and $\overline{\mathbb{R}\mathrm{P}}^2$. A straightforward model computation, or Lemma~\ref{lem:cc-stab-vanish}, shows the map associated to a connected sum of copies of the standard $\mathbb{R}\mathrm{P}^2$ or $\overline{\mathbb{R}\mathrm{P}}^2$ vanishes, so by Proposition~\ref{prop:neck-cut} the map associated to $F$ vanishes, as well.
\end{proof} Given a link cobordism $F\colon L_0\to L_1$, a closed surface $E\subset S^4$, and points $p\in E$ and $q\in F$, there is a \emph{standard connected sum} of $F$ and $E$ at $q$ and $p$; this is the connected sum of pairs $([0,1]\times S^3,F)\#(S^4,E)$. \begin{proposition}\label{prop:cyl-sum} Let $F\colon L_0\to L_1$ be a cobordism and $E\subset S^4$ a closed, connected surface with no stars on it. Let $F^\#\defeq F\# E$ be a standard connected sum of $F$ and $E$, and let $F^\star$ be the result of adding a star to $F$, on the component where the connect sum is occurring. Then, \begin{enumerate} \item\label{item:BNh-or} If $E$ is an orientable surface of genus $g>0$ then for the Lee deformation, \begin{equation} \mathcal{H}^\bullet(F^\#)= \begin{cases} (4T)^{\frac{g}{2}}\mathcal{H}^\bullet(F) & \text{$g$ even}\\ (4T)^{\frac{g-1}{2}}\mathcal{H}^\bullet(F^\star) & \text{$g$ odd} \end{cases} \end{equation} while for the Bar-Natan deformation \begin{equation} \mathcal{H}^\bullet(F^\#)= \begin{cases} H^{g}\mathcal{H}^\bullet(F) & \text{$g$ even}\\ H^{g-1}\mathcal{H}^\bullet(F^\star) & \text{$g$ odd}. \end{cases} \end{equation} \item\label{item:BNh-non-or} If $E$ is a nonorientable surface then $\mathcal{H}^\bullet(F^\#)=0$. \end{enumerate} \end{proposition} \begin{proof} Let $D$ be a small disk on $E$, so $E\setminus D$ is a cobordism from the empty set to the unknot $U$. We will show that $\mathcal{H}^\bullet(E\setminus D)$ is the same as the invariant of a disk with $g$ stars in the orientable case, and vanishes in the nonorientable case. The result then follows from Lemma~\ref{lem:starstar} and functoriality, since $F^\#$ is obtained from $F$ by replacing a small disk by $E\setminus D$. Consider first the version $\mathcal{H}^-$ for the Lee deformation, for the case that $E$ is orientable. Let $E^\star$ be the result of adding a star to $E$. Write \[ \mathcal{H}^-(E\setminus D)=p(T)1+q(T)X\in\mathcal{H}^-(U)=\mathrm{R}[T]\langle 1,X\rangle. 
\] By Lemma~\ref{lem:closed-surf} applied to $E$, $q(T)=0$ if $g$ is even and $q(T)=2^{g}T^{\frac{g-1}{2}}$ if $g$ is odd. Also, \[ \mathcal{H}^-(E^\star\setminus D)=2p(T)X+2q(T), \] so by Lemma~\ref{lem:closed-surf} applied to $E^\star$, $p(T)=0$ if $g$ is odd and $p(T)=2^{g}T^{\frac{g}{2}}$ if $g$ is even. So, $\mathcal{H}^-(E\setminus D)$ is $2^{g}T^{\frac{g}{2}}$ times the invariant of a disk if $g$ is even, and $2^{g-1}T^{\frac{g-1}{2}}$ times the invariant of a disk with a star if $g$ is odd. The results for the other versions---$\mathcal{H}^\infty$, $\mathcal{H}^+$, and $\widehat{\mathcal{H}}$---follow formally from the case of $\mathcal{H}^-$ since $\mathcal{H}^-(U)$ is free over $\mathcal{H}^-(\varnothing)$. The proof for the Bar-Natan deformation in the orientable case is similar. Now, suppose $E$ is nonorientable and again, for definiteness, consider the Lee deformation. We can again write $\mathcal{H}^-(E\setminus D)=p(T)1+q(T)X$, but now $\mathcal{H}^-(E)=\mathcal{H}^-(E^\star)=0$. Consequently, $p(T)=q(T)=0$. \end{proof} To summarize, both the map $\mathcal{H}^\bullet(F)$ and the mixed invariant $\MixedInvt{F}$ obstruct surfaces being stabilizations and crosscap stabilizations, and are independent of local knotting. \begin{theorem}\label{thm:vanishing} Let $F\colon L_0\to L_1$ be a cobordism. \begin{enumerate}[label=(\arabic*)] \item\label{item:van-star} If $F$ has at least one star on some nonorientable component, then $\mathcal{H}^\bullet(F)=0$; and if in addition $F$ has crosscap number $\geq3$ then $\MixedInvt{F}=0$, as well. \item\label{item:van-stab} If $F$ is a (possibly nonstandard) stabilization, obtained from another cobordism $F'$ by attaching a handle to some nonorientable component of $F'$, then $\mathcal{H}^\bullet(F)=0$. 
\item\label{item:van-sum-nonor} If $F$ is obtained from another cobordism $F'$ by taking a standard connected sum with a closed, nonorientable surface then $\mathcal{H}^\bullet(F)=0$; if $F'$ has crosscap number $\geq 2$ then $\MixedInvt{F}=0$, as well. In particular, this applies if $F$ is a crosscap stabilization of a surface (with crosscap number $\geq 2$ in the case of $\MixedInvt{F}=0$). \item\label{item:van-sum-or} If $F$ is obtained from another cobordism $F'$ by taking a standard connected sum of some nonorientable component of $F'$ with a closed, orientable surface of genus $g>0$, then $\mathcal{H}^\bullet(F)=0$; if $F'$ has crosscap number $\geq 2$ then $\MixedInvt{F}=0$, as well. In particular, this applies if $F$ is a standard stabilization of a surface (with crosscap number $\geq 2$ in the case of $\MixedInvt{F}$) at some nonorientable component. \item\label{item:van-knot} If $F$ is obtained from another cobordism $F'$ by taking a standard connected sum with a knotted $2$-sphere then $\mathcal{H}^\bullet(F)=\mathcal{H}^\bullet(F')$; if in addition $F$ has crosscap number $\geq 3$ then $\MixedInvt{F}=\MixedInvt{F'}$. \end{enumerate} \end{theorem} \begin{proof} For $\mathcal{H}^\bullet(F)$, Points~\ref{item:van-star} and~\ref{item:van-stab} are Corollary~\ref{cor:star-0}, Points~\ref{item:van-sum-nonor} and~\ref{item:van-knot} are Proposition~\ref{prop:cyl-sum}, while Point~\ref{item:van-sum-or} is Proposition~\ref{prop:cyl-sum} together with Corollaries~\ref{cor:star-0} and~\ref{cor:H-vanish}. For $\MixedInvt{F}$, Points~\ref{item:van-sum-nonor}, \ref{item:van-sum-or}, and~\ref{item:van-knot} follow by the same methods, after isotoping the surface so the connect sum happens entirely on one side of the admissible cut. 
For Point~\ref{item:van-star}, choose disjoint one-sided embedded curves $\gamma,\eta\subset F$ so that $\eta$ contains a star, and then consider the admissible cut $S_{\leq\gamma}$, as in the proof of Proposition~\ref{prop:admis-cut}; then one side of the admissible cut contains a nonorientable component with a star, and so $\MixedInvt{F}$ vanishes by Corollary~\ref{cor:star-0}. \end{proof} We conclude the section by singling out one consequence of Point~\ref{item:van-knot}. Miller-Powell introduced the notion of the \emph{generalized stabilization distance}~\cite{MP-top-stab} (see also~\cite{Miy-86-stab,JZ-hf-stab}). In particular, surfaces $F$ and $F'$ have generalized stabilization distance $0$ if and only if they are related by taking connected sums with embedded $2$-spheres. (See also~\cite{SS-kh-surf}.) While they work in the topological category, we will continue to assume all surfaces are smoothly embedded. \begin{corollary} If $F$ and $F'$ are cobordisms with $\mathcal{H}^\bullet(F)\neq \mathcal{H}^\bullet(F')$ or $\MixedInvt{F}\neq \MixedInvt{F'}$ (and the cobordisms have crosscap number $\geq 3$) then $F$ and $F'$ have generalized stabilization distance $>0$. \end{corollary} \begin{remark} There is also an obstruction to destabilizing orientable surfaces from Heegaard Floer homology~\cite[Proposition 5.5]{JZ-hf-clasp}. \end{remark} \subsection{Closed surfaces}\label{sec:closed} The following shows that the mixed invariant is often zero for closed surfaces (and is always zero for connected closed surfaces). By contrast, in Section~\ref{sec:comps} we will see that for surfaces with boundary the mixed invariant does contain interesting information. \begin{theorem}\label{thm:cc-3-vanish} Let $F$ be a closed surface with crosscap number $\geq 3$, normal Euler number $e(F)$, Euler characteristic $\chi(F)$, $s_o(F)$ stars on orientable components, and $s_n(F)$ stars on nonorientable components.
If its mixed invariant $\MixedInvt{F}$ is non-zero then $e(F)=-2$, $s_n(F)=0$, and $\chi(F)=1+2s_o(F)$. \end{theorem} \begin{corollary}\label{cor:closed-vanish} If $F$ is a closed, connected surface with crosscap number $\geq 3$ then its mixed invariant $\MixedInvt{F}$ vanishes. \end{corollary} \begin{proof}[Proof of Theorem~\ref{thm:cc-3-vanish}] By Theorem~\ref{thm:vanishing}, $s_n(F)=0$. Since the mixed invariant $\MixedInvt{F}$ is an $\mathrm{R}[U]$-module homomorphism $\mathrm{R}[U]=\mathcal{H}^-(\emptyset)\to \mathcal{H}^+(\emptyset)=\mathrm{R}[U^{-1},U]/\mathrm{R}[U]$, $\MixedInvt{F}$ may be viewed as an element of $\mathrm{R}[U^{-1},U]/\mathrm{R}[U]$ (the image of $1$). By Lemma~\ref{lem:mixed-grading}, $\MixedInvt{F}$ is in bigrading $(-1-e/2,\chi-3e/2-2s_o)$. Since $\mathcal{H}^+(\emptyset)$ is supported in homological grading $0$, this forces $e(F)=-2$. In the Bar-Natan theory, by Corollary~\ref{cor:H-vanish}, $\MixedInvt{F}(1)\in\ker(H^2)\subset\mathrm{R}[H^{-1},H]/\mathrm{R}[H]$, which is $\mathrm{R}\langle H^{-1},H^{-2}\rangle$, supported in bigradings $(0,2)$ and $(0,4)$; this forces $(\chi-2s_o)\in\{-1,1\}$. In the Lee theory, again by Corollary~\ref{cor:H-vanish}, $\MixedInvt{F}(1)\in\ker(4T)\subset \mathrm{R}[T^{-1},T]/\mathrm{R}[T]$, which is $\mathrm{R}\langle T^{-1}\rangle$ (recall that $2$ is invertible in $\mathrm{R}$), supported in bigrading $(0,4)$, forcing $\chi-2s_o=1$. Thus, the only case remaining to exclude is $\chi-2s_o=-1$. So, for the rest of the proof assume $e(F)=-2$, $\chi(F)-2s_o(F)=-1$, and $s_n(F)=0$. We need to show $\MixedInvt{F}=0$. To settle this case, we will need to study the mixed invariants over various Frobenius algebras, so we will use superscripts to denote the various Frobenius algebras that we are working over. We have already observed that the mixed invariant in the Lee theory, $\MixedInvt{F}^{\mathrm{R}[T,X]/(X^2=T)}$, vanishes for grading reasons over any ring $\mathrm{R}$ (with $2$ invertible).
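Explicitly, in the remaining case $e=-2$ and $\chi-2s_o=-1$, the bigrading of $\MixedInvt{F}(1)$ given by Lemma~\ref{lem:mixed-grading} is
\[
\Bigl(-1-\frac{e}{2},\ \chi-\frac{3e}{2}-2s_o\Bigr)=\bigl(0,\ \chi+3-2s_o\bigr)=(0,2),
\]
while $\ker(4T)=\mathrm{R}\langle T^{-1}\rangle\subset \mathrm{R}[T^{-1},T]/\mathrm{R}[T]$ is supported in bigrading $(0,4)$; this is the grading reason for the vanishing just observed.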
Consider the Frobenius algebra $\mathrm{R}[\sqrt{T},X]/(X^2=T)$ obtained from the Lee Frobenius algebra by adjoining a formal square root of $T$, as in Proposition~\ref{prop:or-gens-part1}. Since $\mathrm{R}[\sqrt{T}]$ is free over $\mathrm{R}[T]$, all versions of the Khovanov chain complexes and homologies over the Frobenius algebra $\mathrm{R}[\sqrt{T},X]/(X^2=T)$ can be obtained from the corresponding versions over the Frobenius algebra $\mathrm{R}[T,X]/(X^2=T)$ by tensoring with $\mathrm{R}[\sqrt{T}]$ over $\mathrm{R}[T]$; similarly, the maps over the Frobenius algebra $\mathrm{R}[\sqrt{T},X]/(X^2=T)$ can be obtained from the maps over the Frobenius algebra $\mathrm{R}[T,X]/(X^2=T)$ by tensoring with $\mathrm{R}[\sqrt{T}]$ over $\mathrm{R}[T]$. Therefore, the mixed invariant over this new Frobenius algebra, $\MixedInvt{F}^{\mathrm{R}[\sqrt{T},X]/(X^2=T)}$, vanishes over any ring $\mathrm{R}$ (with $2$ invertible). In particular, with $\mathrm{R}=\mathbb{Q}$, we get $\MixedInvt{F}^{\mathbb{Q}[\sqrt{T},X]/(X^2=T)}=0$. Now consider the Bar-Natan Frobenius algebra over the rationals, $\mathbb{Q}[H,X]/(X^2=HX)$. This is \emph{twist-equivalent} to the above Frobenius algebra $\mathbb{Q}[\sqrt{T},X]/(X^2=T)$, in the sense of Khovanov~\cite{Kho-kh-Frobenius}. Specifically, we have an isomorphism \[ \begin{gathered} \phi\colon \mathbb{Q}[\sqrt{T},X]/(X^2=T)\to\mathbb{Q}[H,X]/(X^2=HX)\\ \phi(1)=1,\qquad\phi(X)=2X-H,\qquad\phi(\sqrt{T})=H \end{gathered} \] which preserves the algebra structure, and twists the counit $\eta$ and comultiplication $\Delta$ by the invertible element $2\in\mathbb{Q}$: \[ \eta(\phi(a))=2\phi(\eta(a))\qquad\Delta(\phi(a))=\tfrac{1}{2}\phi(\Delta(a))\qquad\forall a\in\mathbb{Q}[\sqrt{T},X]/(X^2=T). 
\] Khovanov's proof of invariance under twist equivalence~\cite[Proposition~3]{Kho-kh-Frobenius} works also for $\mathcal{H}^-$ (respectively, $\mathcal{H}^+$), and shows that the $\mathcal{H}^-$ (respectively, $\mathcal{H}^+$) Khovanov homologies over $\mathbb{Q}[\sqrt{T},X]/(X^2=T)$ and $\mathbb{Q}[H,X]/(X^2=HX)$ are isomorphic. Moreover, the proof can be modified to see that for both versions, the maps induced by cobordisms agree over $\mathbb{Q}[\sqrt{T},X]/(X^2=T)$ and $\mathbb{Q}[H,X]/(X^2=HX)$ up to multiplication by (possibly negative) powers of $2$. Therefore, $\MixedInvt{F}^{\mathbb{Q}[H,X]/(X^2=HX)}=2^k\MixedInvt{F}^{\mathbb{Q}[\sqrt{T},X]/(X^2=T)}$ for some integer $k$, and hence is zero. Finally, consider the Bar-Natan mixed invariant over the integers, $\MixedInvt{F}^{\mathbb{Z}[H,X]/(X^2=HX)}$. Recall its definition at the chain level. We choose an admissible cut and decompose the surface $F$ into two cobordisms $F_0\colon\varnothing\to L$ and $F_1\colon L\to\varnothing$, and choose movies $M_0$ and $M_1$ describing $F_0$ and $F_1$. Consider the generator $1\in\mathcal{C}^-(\varnothing)=\mathbb{Z}[H]$, and its image $\mathcal{C}^-(F_0)(1)\in \mathcal{C}^-(L)$ induced by the movie $M_0$. This is a boundary when viewed as a cycle in $\mathcal{C}^\infty(L)$; choose a chain $c\in \mathcal{C}^\infty(L)$ with $\partial c=\mathcal{C}^-(F_0)(1)$. Let $\bar{c}$ be the image of $c$ in $\mathcal{C}^+(L)$ obtained by removing all the terms with non-negative powers of $H$, and consider its image $\mathcal{C}^+(F_1)(\bar{c})\in \mathcal{C}^+(\varnothing)=\mathbb{Z}[H^{-1},H]/\mathbb{Z}[H]$ induced by the movie $M_1$. Since we assumed $e=-2$, $\chi-2s_o=-1$, and $s_n=0$, $\mathcal{C}^+(F_1)(\bar{c})$ lies in bigrading $(0,2)$, and so gives an element of $\mathbb{Z}$ (which is the $\mathrm{gr}_q=2$ part of $\mathbb{Z}[H^{-1},H]/\mathbb{Z}[H]$); this is the mixed invariant $\MixedInvt{F}^{\mathbb{Z}[H,X]/(X^2=HX)}(1)$. 
For any ring $\mathrm{R}$, if we tensor each step in the above chain-level description of the definition of $\MixedInvt{F}^{\mathbb{Z}[H,X]/(X^2=HX)}(1)$ with $\mathrm{R}$, we get a chain-level description of the definition of $\MixedInvt{F}^{\mathrm{R}[H,X]/(X^2=HX)}(1)$. So, the Bar-Natan mixed invariant $\MixedInvt{F}^{\mathrm{R}[H,X]/(X^2=HX)}(1)$, viewed as an element of $\mathrm{R}$ (which is the $\mathrm{gr}_q=2$ part of $\mathcal{C}^+(\varnothing)=\mathrm{R}[H^{-1},H]/\mathrm{R}[H]$), can be obtained from the above element by tensoring with $\mathrm{R}$ over $\mathbb{Z}$. Since $\MixedInvt{F}^{\mathbb{Q}[H,X]/(X^2=HX)}(1)=0\in\mathbb{Q}$, we get $\MixedInvt{F}^{\mathbb{Z}[H,X]/(X^2=HX)}(1)=0\in\mathbb{Z}$, and therefore, $\MixedInvt{F}^{\mathrm{R}[H,X]/(X^2=HX)}(1)=0\in\mathrm{R}$ for all rings $\mathrm{R}$. \end{proof} \section{Computations, applications, and questions}\label{sec:comps} Theorems~\ref{thm:vanishing} and~\ref{thm:cc-3-vanish} give many examples where the mixed invariant vanishes. In this section, we use Lemma~\ref{lem:hat} to give some examples where it does not vanish, and note some corollaries. \subsection{A first direct computation}\label{sec:direct-comp} Let $M$ denote the obvious M\"obius band in $S^3$ with boundary the (right-handed) trefoil $3_1$. View the boundary sum $M\natural M\natural M$ as a cobordism from the empty link to $3_1\#3_1\#3_1$; explicitly, $M\natural M\natural M$ is given by the following movie: \begin{center} \includegraphics[scale=.3333]{TrefMovie}. \end{center} This movie corresponds to a cobordism with crosscap number $3$ and normal Euler number $-18$ (see the proof of Lemma~\ref{lem:bigrading-shift-saddle}, and perform one saddle at a time). We compute directly that the mixed invariant, with respect to the Bar-Natan deformation, is non-vanishing, and then observe that this also follows from Lemma~\ref{lem:hat}. The frame $3_1$ in the movie above is an admissible cut, decomposing the cobordism as $F_1\circ F_0$. 
The normal Euler number of $F_0$ is $-6$, so the map $\mathcal{H}^-(F_0)\colon \mathbb{Z}[H]=\mathcal{H}^-(\emptyset)\to \mathcal{H}^-(3_1)$ shifts the $(\mathrm{gr}_h,\mathrm{gr}_q)$-bigrading by $(3,9)$. The image of the generator of $\mathcal{H}^-(\emptyset)$ at each stage of the movie $F_0$ lies in the all-$1$ resolution, with every circle labeled $1$: \begin{equation}\label{eq:tref-eg-1} \mathcenter{\includegraphics[scale=.3]{TrefMap1}.} \end{equation} The element $\mathcal{H}^-(F_0)(1)\in\mathcal{H}^-(3_1)$ is non-zero, but its image in $\mathcal{H}^\infty$ is zero: the element $\mathcal{C}^-(F_0)(1)$ is the boundary of the following element: \begin{center} \includegraphics[scale=.3333]{TrefPlusElt}. \end{center} (For computing the signs, we have ordered the crossings from left to right.) In particular, $\mathcal{H}^-(F_0)(1)$ is an element of $\mathcal{H}^{\mathrm{red}}(3_1)$, as expected. The element of $\mathcal{C}^\infty(L)$ shown with boundary $\mathcal{C}^-(F_0)(1)$ lies in $\mathcal{C}^+(L)$, so to compute the mixed invariant, we apply $\mathcal{H}^+(F_1)$ to this element. The result is: \begin{center} \includegraphics[scale=.3333]{TrefPlusEltTarg} \end{center} We could compute directly that this is a nontrivial element of $\mathcal{H}^+(L_1)$, but it is slightly easier to apply the connecting homomorphism to $\widehat{\mathcal{H}}(L_1)$. The image under the connecting homomorphism has a term \begin{equation}\label{eq:tref-eg-last} \mathcenter{\includegraphics[scale=.3333]{TrefHatEltTarg}} \end{equation} which does not appear in the boundary of any element of $\widehat{\mathcal{C}}(L_1)$. So, the mixed invariant, in $\mathcal{H}^+(L_1)$, is nontrivial. We can obtain the same result slightly more easily using Lemma~\ref{lem:hat}. By that lemma, it suffices to show that the image of the class $1\in\widehat{\mathcal{H}}(L_0)$, under $\widehat{\mathcal{H}}(F)$, is non-zero.
A computation similar to the one in Formula~(\ref{eq:tref-eg-1}) shows that $\widehat{\mathcal{H}}(F)(1)$ is the element in the all-$1$ resolution where every circle is labeled $1$, i.e., the element shown in Formula~(\ref{eq:tref-eg-last}). Since all maps into this resolution are split maps, this is a nontrivial element of $\widehat{\mathcal{H}}(L_1)$. In particular, by Theorem~\ref{thm:vanishing}, this cobordism is not obtained by taking the connected sum of a crosscap-number $2$ cobordism with $\mathbb{R}\mathrm{P}^2$ or $\overline{\mathbb{R}\mathrm{P}}^2$. The fact that the cobordism does not split off a copy of $\overline{\mathbb{R}\mathrm{P}}^2$ also follows from the Gordon-Litherland formula~\cite{GL-top-signature}: if $F=F'\#\overline{\mathbb{R}\mathrm{P}}^2$ then $F'$ is a surface with $b_1(F')=2$, $e(F')=-20$, and boundary $3_1\#3_1\#3_1$, so $\sigma(K)-e(F')/2=4$ but such a surface violates the inequality $|\sigma(K)-e(F')/2|\leq b_1(F')$. This inequality seems not to obstruct splitting off a copy of $\mathbb{R}\mathrm{P}^2$. This computation also provides a little more evidence that the 4-dimensional crosscap number of $3_1\#3_1\#3_1$ is $3$, a conjecture which appears to be open. By contrast, a direct computation similar to the one above shows that the mixed invariant associated to the mirror of this cobordism, from $\emptyset$ to $m(3_1)\#m(3_1)\#m(3_1)$, vanishes. (Here, we mean a different mirror from Section~\ref{sec:first-obs}: the map $(t,x,y,z,w)\mapsto (t,-x,y,z,w)$.)
\begin{figure} \centering \includegraphics[scale=.5]{946} \caption{\textbf{The knot $9_{46}$.} The slice disks $\Sigma_L$ and $\Sigma_R$ are obtained by attaching a saddle at the left and right thick lines, respectively, to obtain a 2-component unlink. The cobordism to $3_1\#m(3_1)$ comes from attaching saddles at the three dotted lines.} \label{fig:946} \end{figure} Recall that the knot $9_{46}$ has two slice disks, corresponding to attaching saddles at two of the handles shown in Figure~\ref{fig:946}; we will refer to these as the left and right slice disks $\Sigma_L$ and $\Sigma_R$, respectively. We will view $\Sigma_L$ and $\Sigma_R$ as cobordisms from $\emptyset$ to $9_{46}$. There is also a cobordism $C$ from $9_{46}$ to $3_1\#m(3_1)$ with crosscap number $3$ and normal Euler number $-6$, obtained by attaching three saddles to $9_{46}$; again, see Figure~\ref{fig:946}. (Attaching just one of these saddles gives $8_{20}$, and attaching two gives $6_1$.) Sundberg-Swann call each of these three saddles a \emph{trim cobordism}. They show, by direct computation, that the map $\widehat{\mathcal{H}}(C\circ \Sigma_L)=0$ while $\widehat{\mathcal{H}}(C\circ \Sigma_R)$ sends the generator $1\in\widehat{\mathcal{H}}(\emptyset)$ to the class in $\widehat{\mathcal{H}}(3_1\#m(3_1))$ shown in Figure~\ref{fig:ss-image} \cite[Proof of Theorem 6.3]{SS-kh-surf}. In particular, by Proposition~\ref{prop:BNh-functorial}, the surfaces $C\circ \Sigma_L$ and $C\circ \Sigma_R$ are not smoothly isotopic. \begin{figure} \centering \includegraphics{31m31} \caption{\textbf{The image of $\widehat{\mathcal{H}}(C\circ \Sigma_R)$.} Left: the diagram for $3_1\#m(3_1)$ obtained by performing three saddle moves to $9_{46}$. 
Right: the element $\widehat{\mathcal{H}}(C\circ \Sigma_R)$ lies in the all-$1$ resolution, and labels every circle by $1$.} \label{fig:ss-image} \end{figure} By Lemma~\ref{lem:hat}, the mixed invariant $\MixedInvt{C\circ \Sigma_R}$ is non-zero, as is $\partial\circ\MixedInvt{C\circ \Sigma_R}$. By contrast, $\partial\circ\MixedInvt{C\circ \Sigma_L}=0$. The element $\MixedInvt{C\circ \Sigma_L}$ lies in bigrading $(2,7)$, and the generator in this bigrading is not in the image of multiplication by $U$; see Figure~\ref{fig:compute}. So, from the long exact sequence~(\ref{eq:les}), $\partial$ is injective in this bigrading, so $\MixedInvt{C\circ\Sigma_L}=0$, as well. \begin{figure} \centering \begin{tikzpicture}[xscale=.6, yscale=.6,every node/.style={inner sep=0,outer sep=0}] \draw[step=1, very thin] (0,0) grid (3,10); \draw[xstep=2, ystep=1, xshift=1cm, very thin] (2,0) grid (4,10); \draw[step=1, very thin] (5,0) grid (8,10); \draw (0.5,-.2) node[below] {$-3$}; \draw (1.5,-.2) node[below] {$-2$}; \draw (2.5,-.2) node[below] {$-1$}; \draw (4,-.2) node[below] {$0$}; \draw (5.5,-.2) node[below] {$1$}; \draw (6.5,-.2) node[below] {$2$}; \draw (7.5,-.2) node[below] {$3$}; \draw (-.2,0.5) node[left] {$-7$}; \draw (-.2,1.5) node[left] {$-5$}; \draw (-.2,2.5) node[left] {$-3$}; \draw (-.2,3.5) node[left] {$-1$}; \draw (-.2,4.5) node[left] {$1$}; \draw (-.2,5.5) node[left] {$3$}; \draw (-.2,6.5) node[left] {$5$}; \draw (-.2,7.5) node[left] {$7$}; \draw (-.2,8.5) node[left] {$9$}; \draw (-.2,9.5) node[left] {$11$}; \node at (0.5, 0.5) (a) {$a$}; \node at (1.5, 2.5) (b) {$b$}; \node at (2.5, 2.5) (c) {$c$}; \node at (3.5, 3.5) (d) {$d$}; \node at (4.5,3.5) (e) {$\vphantom{d}e$}; \node at (3.5, 4.5) (f) {$f\vphantom{g}$}; \node at (4.5, 4.5) (g) {$\vphantom{f}g$}; \node at (5.5, 5.5) (h) {$h$}; \node at (6.5, 5.5) (i) {$i$}; \node at (7.5, 7.5) (j) {$j$}; \draw[->, thick] (a) to (b); \draw[->, thick] (c) to (f); \draw[->, thick] (e) to (h); \draw[->, thick] (i) to (j); 
\end{tikzpicture}\qquad \begin{tikzpicture}[xscale=1.1, yscale=.6, every node/.style={inner sep=0,outer sep=0}] \draw[step=1, very thin] (0,0) grid (3,10); \draw[xstep=2, ystep=1, xshift=1cm, very thin] (2,0) grid (4,10); \draw[step=1, very thin] (5,0) grid (8,10); \draw (0.5,-.2) node[below] {$-3$}; \draw (1.5,-.2) node[below] {$-2$}; \draw (2.5,-.2) node[below] {$-1$}; \draw (4,-.2) node[below] {$0$}; \draw (5.5,-.2) node[below] {$1$}; \draw (6.5,-.2) node[below] {$2$}; \draw (7.5,-.2) node[below] {$3$}; \draw (-.2,0.5) node[left] {$-7$}; \draw (-.2,1.5) node[left] {$-5$}; \draw (-.2,2.5) node[left] {$-3$}; \draw (-.2,3.5) node[left] {$-1$}; \draw (-.2,4.5) node[left] {$1$}; \draw (-.2,5.5) node[left] {$3$}; \draw (-.2,6.5) node[left] {$5$}; \draw (-.2,7.5) node[left] {$7$}; \draw (-.2,8.5) node[left] {$9$}; \draw (-.2,9.5) node[left] {$11$}; \node at (0.5, 0.5) (a) {$a$}; \node at (0.5, 2.5) (am1) {$T^{-1}\!a$}; \node at (0.5, 4.5) (am2) {$T^{-2}\!a$}; \node at (0.5, 6.5) (am3) {$T^{-3}\!a$}; \node at (0.5, 8.5) (am4) {$T^{-4}\!a$}; \node at (1.5, 2.5) (b) {$b$}; \node at (1.5, 0.5) (b1) {$Tb$}; \node at (1.5, 4.5) (bm1) {$T^{-1}\!b$}; \node at (1.5, 6.5) (bm2) {$T^{-2}\!b$}; \node at (1.5, 8.5) (bm3) {$T^{-3}\!b$}; \node at (2.5, 2.5) (c) {$c$}; \node at (2.5, 0.5) (c1) {$Tc$}; \node at (2.5, 4.5) (cm1) {$T^{-1}\!c$}; \node at (2.5, 6.5) (cm2) {$T^{-2}\!c$}; \node at (2.5, 8.5) (cm3) {$T^{-3}\!c$}; \node at (3.5, 3.5) (d) {$d$}; \node at (3.5, 1.5) (d1) {$Td$}; \node at (3.5, 5.5) (dm1) {$T^{-1}\!d$}; \node at (3.5, 7.5) (dm2) {$T^{-2}\!d$}; \node at (3.5, 9.5) (dm3) {$T^{-3}\!d$}; \node at (4.5,3.5) (e) {$\vphantom{d}e$}; \node at (4.5,1.5) (e1) {$Te$}; \node at (4.5,5.5) (em1) {$T^{-1}\!e$}; \node at (4.5,7.5) (em2) {$T^{-2}\!e$}; \node at (4.5,9.5) (em3) {$T^{-3}\!e$}; \node at (3.5, 4.5) (f) {$f\vphantom{g}$}; \node at (3.5, 2.5) (f1) {$Tf$}; \node at (3.5, 0.5) (f2) {$T^2\!f$}; \node at (3.5, 6.5) (fm1) {$T^{-1}\!f$}; \node at (3.5, 8.5) (fm2) 
{$T^{-2}\!f$}; \node at (4.5, 4.5) (g) {$\vphantom{f}g$}; \node at (4.5, 2.5) (g1) {$Tg$}; \node at (4.5, 0.5) (g2) {$T^2\!g$}; \node at (4.5, 6.5) (gm1) {$T^{-1}g$}; \node at (4.5, 8.5) (gm2) {$T^{-2}g$}; \node at (5.5, 5.5) (h) {$h$}; \node at (5.5, 3.5) (h1) {$Th$}; \node at (5.5, 1.5) (h2) {$T^2\!h$}; \node at (5.5, 7.5) (hm1) {$T^{-1}\!h$}; \node at (5.5, 9.5) (hm2) {$T^{-2}\!h$}; \node at (6.5, 5.5) (i) {$i$}; \node at (6.5, 3.5) (i1) {$Ti$}; \node at (6.5, 1.5) (i2) {$T^2\!i$}; \node at (6.5, 7.5) (im1) {$\mathbf{T^{-1}\!i}$}; \node at (6.5, 9.5) (im2) {$T^{-2}\!i$}; \node at (7.5, 7.5) (j) {$j$}; \node at (7.5, 5.5) (j1) {$Tj$}; \node at (7.5, 3.5) (j2) {$T^{2}\!j$}; \node at (7.5, 1.5) (j3) {$T^3\!j$}; \node at (7.5, 9.5) (jm1) {$T^{-1}\!j$}; \draw[->] (a) to (b1); \draw[->] (am1) to (b); \draw[->] (am2) to (bm1); \draw[->] (am3) to (bm2); \draw[->] (am4) to (bm3); \draw[->] (c) to (f1); \draw[->] (c1) to (f2); \draw[->] (cm1) to (f); \draw[->] (cm2) to (fm1); \draw[->] (cm3) to (fm2); \draw[->] (e) to (h1); \draw[->] (e1) to (h2); \draw[->] (em1) to (h); \draw[->] (em2) to (hm1); \draw[->] (em3) to (hm2); \draw[->] (i) to (j1); \draw[->] (i1) to (j2); \draw[->] (i2) to (j3); \draw[->] (im1) to (j); \draw[->] (im2) to (jm1); \draw[very thick] (0,1)--(1,1)--(1,3)--(3,3)--(3,5)--(5,5)--(5,6)--(7,6)--(7,8)--(8,8); \end{tikzpicture} \caption{\textbf{Computing $\mathcal{H}^+(3_1\# m(3_1))$.} Left: $\widehat{\mathcal{H}}$ as computed by knotkit~\cite{KKI-kh-knotkit}, with $\mathbb{Q}$ coefficients. Each letter is a basis element over $\mathbb{Q}$. (This computation can also be deduced from $\mathcal{H}(3_1)$, with a little work.) The arrows are the differentials in the Lee spectral sequence, which one can deduce from knowing that the Lee homology is $\mathbb{Q}\oplus\mathbb{Q}$ in bidegrees $(0,\pm1)$, since $s(3_1\#m(3_1))=0$. Right: the differentials on $\mathcal{H}^\infty(3_1\#m(3_1))$, which one can read off from the top-left computation. 
The subcomplex $\mathcal{C}^-$ lies below the thick steps, and the quotient complex $\mathcal{C}^+$ lies above the thick steps. The generator of $\mathcal{H}^+$ in bigrading $(2,7)$ is in bold; $U^{-1}$ times this generator is not a cycle in $\mathcal{C}^+$. The computation for the Bar-Natan deformation is analogous, but the differential on the left has bi-degree $(1,2)$ instead of $(1,4)$, and the variable $H$ has bi-degree $(0,-2)$.} \label{fig:compute} \end{figure} In conclusion, both the map $\widehat{\mathcal{H}}$ and the mixed invariant distinguish this pair of surfaces. By Theorem~\ref{thm:vanishing}, this implies that $C\circ\Sigma_L$ and $C\circ\Sigma_R$ do not differ by taking a connected sum with a smoothly embedded $2$-sphere. Further, non-vanishing of $\widehat{\mathcal{H}}(C\circ \Sigma_R)$ implies that $C\circ \Sigma_R$ is not obtained from another connected surface by attaching a $1$-handle, and is not a crosscap stabilization. Hence, we have proved Theorem~\ref{thm:intro}. \begin{remark} Using one of the three dotted saddles in Figure~\ref{fig:946} gives a pair of M\"obius bands with boundary $8_{20}$ distinguished by Khovanov homology, and using two of them gives a pair of punctured Klein bottles with boundary $6_1$ distinguished by Khovanov homology. \end{remark} \subsection{An exotic pair of surfaces}\label{sec:exotic} Recall that a pair of surfaces $F,F'\subset B^4$ with boundary $K\subset S^3$ is \emph{exotic} if there is a homeomorphism $\phi\colon B^4\to B^4$ so that $\phi|_{S^3}=\Id$ and $\phi(F)=F'$, but no diffeomorphism with these properties. (See also Remark~\ref{rem:exotic-exotic}.) Hayden-Sundberg give a family of exotic pairs of surfaces~\cite{HS-kh-exotic}. The simplest of their pairs is the pair of slice disks shown in Figure~\ref{fig:exotic} for the knot $J$.
The slice disk $D$ (respectively $D'$) is obtained by attaching a saddle along the arc $b$ (respectively $b'$) shown there, and then capping the resulting 2-component unlink with disks. The fact that these surfaces are distinct is witnessed by the map on Khovanov homology: for the element $\phi$ of $\widehat{\mathcal{H}}(J)$ shown in Figure~\ref{fig:exotic}, $\widehat{\mathcal{H}}(D')(\phi)=0$ (obvious) but $\widehat{\mathcal{H}}(D)(\phi)=1\in\mathbb{Z}=\widehat{\mathcal{H}}(\emptyset)$~\cite[Figure 4]{HS-kh-exotic}. \begin{figure} \centering \includegraphics{HS} \caption{\textbf{An exotic pair.} Top right: Hayden-Sundberg's knot $J$ and the two saddles $b$ and $b'$ giving an exotic pair of distinct slice disks for $J$, drawn as thick line segments. Top-left: Hayden-Sundberg's cycle $\phi$ in $\widehat{\mathcal{C}}(J)$ witnessing the fact that these slice disks are distinct. Bottom-right: a diagram for $12^n_{309}$ and three saddles giving a nonorientable cobordism to $J$. Bottom-left: a cycle $\psi$ in $\widehat{\mathcal{C}}(12^n_{309})$ which maps to Hayden-Sundberg's cycle. As in Hayden-Sundberg's figure, dotted lines indicate which crossings had $0$-resolutions.} \label{fig:exotic} \end{figure} There is a cobordism $C$ with crosscap number $3$ and normal Euler number $-6$ from the knot $12^n_{309}$ to $J$, so that $\phi=\widehat{\mathcal{H}}(C)(\psi)$ for an appropriate class $\psi\in\widehat{\mathcal{H}}(12^n_{309})$: see Figure~\ref{fig:exotic}. So, $\widehat{\mathcal{H}}(D\circ C)(\psi)=1$ while $\widehat{\mathcal{H}}(D'\circ C)(\psi)=0$. Thus, $D\circ C$ is not diffeomorphic to $D'\circ C$ rel boundary. On the other hand, since $D$ is homeomorphic to $D'$ rel boundary, $D\circ C$ is homeomorphic to $D'\circ C$ rel boundary. Thus, we have proved Theorem~\ref{thm:intro-2}. By the second case of Lemma~\ref{lem:hat}, the mixed invariant also distinguishes $D\circ C$ and $D'\circ C$ (compare Remark~\ref{rem:mixed-strong}).
The fact that the pair $D\circ C$ and $D'\circ C$ are not diffeomorphic is, of course, slightly stronger than the statement that $D$ and $D'$ are not diffeomorphic. \begin{remark} Hayden-Sundberg's example also immediately gives an exotic pair of crosscap number $3$ surfaces with boundary $12^n_{404}$, as well as exotic pairs of crosscap number $3$ surfaces with boundary on several links, by an easy adaptation of Figure~\ref{fig:exotic}. \end{remark} \begin{remark}\label{rem:exotic-exotic} Hayden-Sundberg take a slightly different definition of \emph{exotic} than we have: they define a pair of surfaces $F,F'\subset B^4$ to be exotic if there is an ambient isotopy through homeomorphisms taking $F$ to $F'$ but no ambient isotopy through diffeomorphisms (which are, in both cases, the identity on $S^3$). Since their surfaces are distinguished by the map on Khovanov homology, however, by Proposition~\ref{prop:BNh-functorial}, their computation shows that there is no diffeomorphism from $B^4$ to itself which is the identity on the boundary and takes $F$ to $F'$ (even one which is not isotopic to the identity). That is, their pairs of surfaces really are exotic in the sense described above. \end{remark} \subsection{Some questions} To put the results above in context, and in particular to acknowledge the cases they do not cover, we conclude with some open questions. In Corollary~\ref{cor:closed-vanish}, we showed that the mixed invariant does not distinguish closed, connected surfaces. By Proposition~\ref{prop:or-gens-part2} for the nonorientable case and work of Gujral-Levine~\cite{LG-kh-split} for the orientable case, the map on $\mathcal{H}^\bullet$ also does not distinguish disconnected surfaces. \begin{question} Is there a pair $F,F'\subset S^4$ of closed, disconnected surfaces with the same topology and componentwise normal Euler numbers so that $\MixedInvt{F}\neq \MixedInvt{F'}$? 
\end{question} If we are only considering surfaces without stars, by Theorem~\ref{thm:cc-3-vanish}, these surfaces would have to have (total) normal Euler number $-2$ and Euler characteristic $1$. For example, perhaps $F$ could be a knotted copy of $\mathbb{R}\mathrm{P}^2\amalg (\mathbb{R}\mathrm{P}^2\# \overline{\mathbb{R}\mathrm{P}}^2)$ with total normal Euler number $-2$ and non-vanishing mixed invariant, and $F'$ the standard $\mathbb{R}\mathrm{P}^2\amalg (\mathbb{R}\mathrm{P}^2\# \overline{\mathbb{R}\mathrm{P}}^2)$, which has vanishing mixed invariant. The Seiberg-Witten invariant is not just defined when $b_2^+\geq 3$, but also when $b_2^+=2$. As noted in Remark~\ref{rem:2-crosscaps}, we can define a Khovanov mixed invariant when the crosscap number is $2$, but we do not know if it is well-defined. \begin{question} If $F,F'$ are isotopic surfaces (rel boundary) with crosscap number $2$ and admissible cuts $(S,V,\phi)$ and $(S',V',\phi')$, respectively, is the mixed invariant of $F$ with respect to $(S,V,\phi)$ equal to the mixed invariant of $F'$ with respect to $(S',V',\phi')$? \end{question} In the examples in Sections~\ref{sec:SunSwann} and~\ref{sec:exotic} of surfaces distinguished by their mixed invariants, the surfaces were also distinguished by the induced maps on ordinary Khovanov homology $\widehat{\mathcal{H}}$. By Remark~\ref{rem:mixed-strong}, for surfaces with connected boundary, the mixed invariant is at least as strong as the map on $\widehat{\mathcal{H}}$. \begin{question} Is there a pair of surfaces with crosscap number $\geq 3$ and the same topology and normal Euler number which are distinguished by the mixed invariant but not by the map on $\widehat{\mathcal{H}}$? Is there such a pair not distinguished by the homotopy class of maps on $\mathcal{C}^-$? Is there an exotic pair of surfaces with this property? 
\end{question} Lemma~\ref{lem:hat} and its proof give restrictions on what the Khovanov homology of such a pair must look like, as does Corollary~\ref{cor:H-vanish}. On a related point, by Theorem~\ref{thm:vanishing}, the map on $\mathcal{H}^\bullet$ vanishes for stabilizations of surfaces, so never distinguishes them. There is one case in which the mixed invariant could potentially distinguish stabilized surfaces: \begin{question} Is there an exotic pair of M\"obius bands $F,F'\subset B^4$ so that the mixed invariant distinguishes their stabilizations? That is, if $F\# T^2$ denotes a standard stabilization of $F$, is there an exotic pair of M\"obius bands $F,F'$ with boundary some knot $K$ so that $\MixedInvt{F\#T^2}\neq \MixedInvt{F'\#T^2}\in\mathcal{H}^+(K)$? \end{question} The examples of nonorientable surfaces in Sections~\ref{sec:SunSwann} and~\ref{sec:exotic} came from pairs of slice disks. Indeed, the nonorientable surfaces were apparent from the slice disks and class in Khovanov homology. Perhaps this phenomenon is general: \begin{question} Is it true that for every exotic pair of slice disks $D,D'$, for any knot $K$, there is a nonorientable cobordism $F$ from $K$ to another knot $K'$ so that $F\circ D$ and $F\circ D'$ are also an exotic pair? Can $F$ be chosen to have crosscap number $\geq 3$? \end{question} One can ask the same question, but for exotic pairs detected by Khovanov homology: \begin{question} Is it true that for every pair of slice disks $D,D'$, for a knot $K$ such that $\widehat{\mathcal{H}}(D)\neq \widehat{\mathcal{H}}(D')$, there is a nonorientable cobordism $F$ from $K$ to another knot $K'$ so that $\widehat{\mathcal{H}}(F\circ D)\neq \widehat{\mathcal{H}}(F\circ D')$? \end{question} One could also replace $\widehat{\mathcal{H}}$ by $\mathcal{H}^\bullet$ in the question, or require that $F$ have crosscap number $\geq 3$ and ask if $\MixedInvt{F\circ D}\neq \MixedInvt{F\circ D'}$. 
As noted in the introduction, our mixed invariant is inspired by Ozsv\'ath-Szab\'o's mixed invariant in Heegaard Floer homology. \begin{question} Is there a precise relationship between the Khovanov mixed invariant of a surface $F$ and the Heegaard Floer mixed invariant of the branched double cover of $F$? \end{question} Note that having crosscap number $\geq 3$ does not give an inequality for $b_2^+$ (consider the standard $\overline{\mathbb{R}\mathrm{P}}^2\#\overline{\mathbb{R}\mathrm{P}}^2\#\overline{\mathbb{R}\mathrm{P}}^2$), nor does having $b_2^+(\Sigma(F))\geq 2$ imply crosscap number $\geq 3$ (consider $\mathbb{R}\mathrm{P}^2\#\mathbb{R}\mathrm{P}^2$ or, for that matter, an orientable surface of genus $g\geq 2$), so the two invariants are not defined in exactly the same cases; perhaps this argues against a direct relationship. \input{conclusion}
\section{Introduction} \label{sec:introduction} Fair allocation of goods or resources among various agents is a central task in multiagent systems and other fields. The specific setting where just one divisible resource is to be divided fairly is commonly referred to as cake-cutting, and agents are called players in this setting. Research in the area of cake-cutting started off in the 1940s with the pioneering work of Steinhaus~\cite{ste:j:steinhaus} who, to the best of our knowledge, was the first to introduce the problem of fair division. Dividing a good (or a resource) fairly among several players such that each of them is satisfied with the portion received is of central importance in many fields. In the last 60 years this research area has developed vigorously, branching out in various directions, with applications in areas as diverse as economics, mathematics, computer science, and psychology. While some lines of this research seek to find reasonable interpretations of what ``fairness'' really stands for and how to measure it~\cite{fol:j:resource-allocation,fel-kir:j:fairness-and-envy,cha:j:measure-envy}, others are concerned more with proofs of existence or impossibility theorems regarding fair division \cite{var:j:equity-envy-efficiency, wel:j:fair-division,aki:j:vilfredo-pareto}, or with the development of new cake-cutting procedures~\cite{bra-tay:j:protocol,str:j:cut-cake,rob-web:b:cake-cutting,bar-bra:j:minimal-cuts} and, relatedly, with the analysis of their complexity relative to both upper and lower bounds \cite{mag-bus-kri:c:cake-cutting,woe-sga:j:complexity-cake-cutting,pro:c:cake-cutting}. Since cake-cutting procedures involve several parties (namely, the players), they are also referred to as ``protocols.'' Cake-cutting protocols aim at achieving a \emph{fair} division of an infinitely divisible resource among $n$ players, who each may have different preferences for (and thus different valuations of) different parts of the resource.
In this paper, we focus on (a notion of) fairness in finite bounded cake-cutting protocols. Many cake-cutting protocols are known today, both finite and continuous ones. While a \emph{finite} protocol always provides a solution after only a finite number of decisions have been made, a \emph{continuous} protocol could potentially run forever. Among finite protocols, one can further distinguish between bounded and unbounded ones. A cake-cutting protocol is finite \emph{bounded} if we know in advance that a certain number of steps (that may depend on the number of players) will suffice to divide the resource fairly---independently of how the players may value distinct parts of the resource in a particular case and independently of the strategies chosen by the players. In contrast, in finite \emph{unbounded} cake-cutting protocols, we cannot predict an upper bound on how many steps will be required to achieve the same goal. Aiming to apply cake-cutting procedures to real-world scenarios, it is important to develop fair \emph{finite bounded} cake-cutting protocols. In this context, ``fairness'' is often interpreted as meaning ``envy-freeness.'' A division is \emph{envy-free} if no player has an incentive to switch his or her portion with the portion any other player received. Steinhaus~\cite{ste:j:division-pragmatique} proved that for any number of players an envy-free division of a single divisible good always exists. However, the current state of the art---after six decades of intense research---is that for arbitrary $n$, and even for $n=4$, the development of \emph{finite bounded} envy-free cake-cutting protocols still appears to be out of reach and remains a big challenge for future research. For $n > 3$ players, hardly any envy-free cake-cutting protocol is known, and the ones that are known are either finite unbounded or continuous (see, e.g., \cite{bra-tay:j:protocol,rob-web:j:protocol,bra-tay-zwi:c:moving-knife}).
Yet, from an implementation perspective, finite bounded protocols are the most desirable ones. Recently, Stromquist~\cite{str:j:finite-protocols} has shown that for more than two players there is no finite cake-cutting protocol that provides an envy-free division when all portions are required to consist of contiguous pieces. Our goal in this paper is to look for compromises that can be made with respect to envy-freeness while keeping the protocol finite bounded: We propose an approach to evaluate finite bounded (yet possibly non-envy-free) cake-cutting protocols with respect to their ``degree of guaranteed envy-freeness'' (DGEF). Informally put, this notion measures how well such a protocol approximates the ideal of envy-freeness (which may be unreachable for the particular protocol) in terms of the number of envy-free-relations that are guaranteed to exist even in the worst case. \OMIT{ We stress that our approach of approximating envy-freeness differs from other lines of research that also deal with approximating fairness. For example, Lipton et al.~\cite{lip:c:approximately-fair} propose to seek for minimum-envy allocations of \emph{indivisible} goods in terms of the value difference of the utility functions of envied players, and Edmonds and Pruhs~\cite{edm-pru:c:not-a-piece-of-cake,edm-pru:c:balanced-allocations-of-cake} approximate fairness in cake-cutting protocols by allowing merely approximately fair pieces (in terms of their value to the players) and by using only approximate cut queries (in terms of exactness). Further related approaches will be mentioned in Section~\ref{sec:def-guaranteed-envy-freeness}, and we will point out in which ways they differ from our approach. } This paper is organized as follows.
After defining some basic notions in Section~\ref{sec:preliminaries}, we introduce the notion of degree of guaranteed envy-freeness, and specify the DGEF for some well-known finite bounded proportional cake-cutting protocols in Section~\ref{sec:guaranteed-envy-freeness}. In Section~\ref{sec:improved-algorithm-guaranteed-envy-freeness} we present a new finite bounded proportional cake-cutting protocol with an enhanced degree of guaranteed envy-freeness, compared with the proportional protocols mentioned in Section~\ref{sec:guaranteed-envy-freeness}. This new cake-cutting protocol makes use of parallelization in order to include as many matching valuations (in terms of not raising envy) as possible. Section~\ref{sec:survey} briefly describes those protocols mentioned in Section~\ref{sec:guaranteed-envy-freeness}, and determines their degree of guaranteed envy-freeness via a detailed analysis. In Section~\ref{sec:discussion}, we compare the DGEF approach with related work, and show that even small steps toward the development of cake-cutting protocols with an enhanced DGEF are of significance. Finally, we conclude in Section~\ref{sec:conclusion} that our approach extends the scope for the development of new finite bounded cake-cutting protocols by ``approximating'' envy-freeness instead of insisting on it. \section{Preliminaries and Notation} \label{sec:preliminaries} Cake-cutting is about dividing a cake into portions that are assigned to the players such that each of them feels, according to his or her individual valuation of the portions, that he or she has received a fair amount of the cake.\footnote{As is common, ``cake'' will be used as a metaphor for the resource or the good to be divided.} The cake is assumed to be infinitely divisible, and can be divided into arbitrary pieces without losing any of its value. Moreover, we assume the cake to be heterogeneous.\footnote{Consider, for example, a cake with cherry, chocolate, and strawberry toppings.
A player may value, say, the pieces with strawberry toppings higher than those with cherry toppings.} This assumption can be made without loss of generality, as any cake-cutting protocol providing a ``fair'' division of a heterogeneous resource can be applied in the same way to a homogeneous one \cite{bra-tay:b:fair-division}. Given $n$ players, cake $C$ is to be divided into $n$ portions that are to be distributed among the players so as to satisfy each of them. A portion is not necessarily a single piece of cake but rather can be a collection of disjoint, possibly noncontiguous pieces of~$C$. Furthermore, all players may have different individual valuations of the single pieces of the cake. For example, one player may prefer the pieces with the chocolate topping, whereas another player may prefer the pieces with the cherry topping. More formally, cake $C$ is represented by the unit interval $[0,1]$ of real numbers. By performing cuts, $C$ is divided into $m$ pieces $c_k$, $1 \leq k \leq m$: Each player~$p_i$, $1 \leq i \leq n$, assigns value $v_i(c_k) = v_{i}(x_k,y_k)$ to piece $c_k \subseteq C$, where $c_k$ is represented by the subinterval $[x_k,y_k] \subseteq [0,1]$ and $p_i$'s valuation function $v_i$ maps subintervals of $[0,1]$ to real numbers in $[0,1]$. We require each valuation function $v_i$ to satisfy the following properties: \begin{enumerate} \item {\emph{Normalization:}} $v_{i}(0,1)=1$. \item {\emph{Positivity:}}\footnote{% \label{foo:positivity}% The literature is a bit ambiguous regarding this assumption. Some papers require the players' values for nonempty pieces of cake to be \emph{nonnegative} (i.e., $v_i(c_k) \geq~0$) instead of positive. 
For example, Robertson and Webb~\cite{rob-web:b:cake-cutting} and Woeginger and Sgall~\cite{woe-sga:j:complexity-cake-cutting} require nonnegative values for nonempty pieces of cake, whereas positive values for such pieces are required by Brams and Taylor~\cite{bra-tay:b:fair-division}, Brams, Jones, and Klamler~\cite{bra-jon-kla:j:minimal-envy}, and Weller~\cite{wel:j:fair-division}.} For all $c_k \subseteq C$, $c_k \neq \emptyset$, we have $v_i(c_k) >~0$. \item {\emph{Additivity:}} For all $c_k,c_{\ell} \subseteq C$, $c_k \cap c_{\ell} = \emptyset$, we have $v_i(c_k)+v_i(c_{\ell}) = v_i(c_k \cup c_{\ell})$. \item {\emph{Divisibility:}}\footnote{Divisibility implies that for each $x \in [0,1]$, $v_i(x,x)=0$. That is, isolated points are valued~$0$, and open intervals have the same value as the corresponding closed intervals. \label{foo:divisibility}} For all $c_k \subseteq C$ and for each~$\alpha$, $0 \leq \alpha \leq 1$, there exists some $c_{\ell} \subseteq c_k$ such that $v_i(c_{\ell}) = \alpha \cdot v_i(c_k)$. \end{enumerate} Note that, to simplify notation, we write $v_i(x_k,y_k)$ instead of $v_i([x_k,y_k])$ for intervals $[x_k,y_k] \subseteq [0,1]$. Due to Footnote~\ref{foo:divisibility}, no ambiguity can arise. For each $[x,y] \subseteq [0,1]$, define $\|[x,y]\| = y-x$. For any real number~$x$, $\lfloor x \rfloor$ denotes the greatest integer not exceeding~$x$, and $\lceil x \rceil$ denotes the least integer not smaller than~$x$. The assumption that $C$ is heterogeneous formally means that subintervals of $[0,1]$ having equal size can be valued differently by the same player. Moreover, distinct players may value one and the same piece of the cake differently, i.e., their individual valuation functions will in general be distinct. Every player knows only the value of (arbitrary) pieces of $C$ corresponding to his or her own valuation function. Players do not have any knowledge about the valuation functions of other players. 
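The four axioms above can be illustrated with a concrete family of valuation functions. The following is a minimal sketch (not taken from the paper; the class name \texttt{Valuation} and its interface are our own illustrative choices): a piecewise-uniform valuation in which the cake $[0,1]$ is split into segments, each carrying a positive weight, and value is spread uniformly within each segment, so that normalization, positivity, additivity, and divisibility hold by construction.

```python
# Hypothetical piecewise-uniform valuation function on the cake [0,1].
# Segments are given by breakpoints 0 = b_0 < b_1 < ... < b_m = 1, and each
# segment carries a positive weight; the weights sum to 1 (normalization).
# Within a segment, value is spread uniformly, so positivity, additivity,
# and divisibility hold by construction.

class Valuation:
    def __init__(self, breakpoints, weights):
        assert abs(sum(weights) - 1.0) < 1e-9   # normalization
        assert all(w > 0 for w in weights)      # positivity
        self.b = breakpoints
        self.w = weights

    def value(self, x, y):
        """Value v_i(x, y) of the subinterval [x, y] under this valuation."""
        total = 0.0
        for (lo, hi), w in zip(zip(self.b, self.b[1:]), self.w):
            overlap = max(0.0, min(y, hi) - max(x, lo))
            total += w * overlap / (hi - lo)    # uniform density per segment
        return total

# A player who values the "strawberry half" [0, 0.5] three times as much
# as the rest of the cake:
v = Valuation([0.0, 0.5, 1.0], [0.75, 0.25])
print(round(v.value(0.0, 1.0), 6))   # normalization: 1.0
print(round(v.value(0.0, 0.25), 6))  # 0.375
```

Note that heterogeneity is visible here: the two subintervals $[0,0.25]$ and $[0.5,0.75]$ have equal length but values $0.375$ and $0.125$ under this player's measure.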
A \emph{division of $C$} is an assignment of disjoint and nonempty portions~$C_i \subseteq C$, where $C=\bigcup_{i=1}^{n} C_i=\bigcup_{k=1}^{m} c_k$, to the players such that each player $p_i$ receives a portion $C_i \subseteq C$ consisting of at least one nonempty piece $c_k \subseteq C$. The goal of a cake-cutting division is to assign the portions to the players in as fair a way as possible. There are different interpretations, though, of what ``fair'' might mean. To distinguish between different degrees of fairness, the following notions have been introduced in the literature (see, e.g., Robertson and Webb~\cite{rob-web:b:cake-cutting}): \begin{definition} \label{def:proportional-strongfair-envyfree-division} Let $v_1, v_2, \dots, v_n$ be the valuation functions of the $n$ players. A division of cake $C = \bigcup_{i=1}^{n} C_i$, where $C_i$ is the $i$th player's portion, is said to be: \begin{enumerate} \item \emph{simple fair} (a.k.a.\ \emph{proportional}) if and only if for each~$i$, $1 \leq i \leq n$, we have $v_i(C_i) \geq \nicefrac{1}{n}$; \item \emph{strong fair} if and only if for each~$i$, $1 \leq i \leq n$, we have $v_i(C_i) > \nicefrac{1}{n}$; \item \emph{envy-free} if and only if for each $i$ and~$j$, $1 \leq i, j \leq n$, we have $v_i(C_i) \geq v_i(C_j)$. \end{enumerate} \end{definition} A cake-cutting protocol describes an interactive procedure for obtaining a division of a given cake, without having any information on the valuation functions of the players involved. Each protocol is characterized by a set of rules and a set of strategies (see, e.g., Brams and Taylor~\cite{bra-tay:b:fair-division}). The rules just determine the course of action, such as a request to cut the cake, whereas the strategies define how to achieve a certain degree of fairness, e.g., by advising the players where to cut the cake. If all players obey the protocol, it is guaranteed that every player receives a ``fair'' portion of the cake. 
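The three fairness notions of Definition~\ref{def:proportional-strongfair-envyfree-division} can be checked mechanically once all pairwise valuations are known. The following is a minimal sketch (our own illustration, not part of any protocol): we assume a matrix $V$ with $V[i][j] = v_i(C_j)$, i.e., player $i$'s value for player $j$'s portion, and the function names are hypothetical.

```python
# Fairness checks for a division of the cake among n players, given the
# matrix V with V[i][j] = v_i(C_j): player i's value for player j's portion.
# A small tolerance guards against floating-point noise.

def is_proportional(V):
    """Simple fair: each player values his or her own portion at >= 1/n."""
    n = len(V)
    return all(V[i][i] >= 1.0 / n - 1e-12 for i in range(n))

def is_strong_fair(V):
    """Strong fair: each player values his or her own portion at > 1/n."""
    n = len(V)
    return all(V[i][i] > 1.0 / n for i in range(n))

def is_envy_free(V):
    """Envy-free: no player values another portion above his or her own."""
    n = len(V)
    return all(V[i][i] >= V[i][j] - 1e-12 for i in range(n) for j in range(n))

# Three players; rows are players, columns are portions (rows sum to 1 by
# additivity, since the portions partition the cake).
V = [[0.40, 0.35, 0.25],
     [0.30, 0.40, 0.30],
     [0.20, 0.30, 0.50]]
print(is_proportional(V), is_strong_fair(V), is_envy_free(V))  # True True True
```

As the text notes, every envy-free or strong fair division is also proportional; for envy-freeness this follows because the $n$ values $v_i(C_1),\dots,v_i(C_n)$ sum to $1$, so the maximal one, $v_i(C_i)$, is at least $\nicefrac{1}{n}$.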
Cake-cutting protocols are characterized according to the degree of fairness of the divisions obtained:\footnote{In addition to the fairness criteria given here, other fairness criteria may be reasonable as well and have been proposed in the literature (see, e.g., Robertson and Webb~\cite{rob-web:b:cake-cutting}).} \begin{definition} \label{def:proportional-strongfair-envyfree-protocol} A cake-cutting protocol is said to be \emph{simple fair} (or \emph{proportional}), \emph{strong fair}, and \emph{envy-free}, respectively, if every division obtained (i.e., regardless of which valuation functions the players have) is simple fair (or proportional), strong fair, and envy-free, respectively, provided that all players follow the rules and strategies of the protocol. \end{definition} Clearly, every division obtained by either a strong fair cake-cutting protocol or by an envy-free cake-cutting protocol is simple fair as well (i.e., every strong fair or envy-free cake-cutting protocol can be classified as being simple fair, too). Moreover, every simple fair cake-cutting protocol can easily be applied to the case when there are unequal shares to be assigned, though with respect to rational ratios only. Informally speaking, this can be done by cloning players and their valuation functions so as to have in total as many players as the least common denominator specifies (see, e.g., \cite{rob-web:b:cake-cutting}). \section{Degrees of Guaranteed Envy-Freeness for Proportional Protocols} \label{sec:guaranteed-envy-freeness} As mentioned in the introduction, the design of envy-free cake-cutting protocols for any number $n$ of players seems to be quite a challenge. For $n \leq 3$~players, a number of protocols that always provide envy-free divisions have been published, both finite (bounded and unbounded) and continuous ones~\cite{str:j:cut-cake,bra-tay:b:fair-division,rob-web:b:cake-cutting,bar-bra:j:minimal-cuts}.
However, to the best of our knowledge, to date no finite bounded cake-cutting protocol for $n > 3$~players is known to always provide an envy-free division. For practical purposes, it would be most desirable to have \emph{finite bounded} cake-cutting protocols that always provide divisions as fair as possible. In this regard, it is questionable whether the advantage of always having an envy-free rather than just a proportional division would be big enough to justify the lack of finite boundedness. It may be worthwhile to be content with a certain lower degree of envy-freeness, rather than insisting on complete envy-freeness, for the benefit of having a finite bounded protocol in exchange. In this paper, we propose an approach that weakens the concept of envy-freeness for the purpose of keeping protocols finite bounded. On the one hand, in Section~\ref{sec:survey-guaranteed-envy-freeness} we are concerned with known simple fair (i.e., proportional) cake-cutting protocols that are finite bounded, and determine their degree of guaranteed envy-freeness, a notion to be introduced in Section~\ref{sec:def-guaranteed-envy-freeness} (see Definition~\ref{def:degree-guaranteed-envy-freeness}). On the other hand, in Section~\ref{sec:improved-algorithm-guaranteed-envy-freeness} we propose a new finite bounded proportional cake-cutting protocol that---compared with the known protocols---has an enhanced degree of guaranteed envy-freeness.
\subsection{Degrees of Guaranteed Envy-Freeness} \label{sec:def-guaranteed-envy-freeness} When investigating the degree of envy-freeness of a cake-cutting protocol for $n$~players, for each player $p_i$, $1 \leq i \leq n$, the value of his or her portion needs to be compared to the values of the $n-1$ other portions (according to the measure of player~$p_i$).\footnote{We will use ``valuation'' and ``measure'' interchangeably.} Thus, $n(n-1)$ pairwise relations need to be investigated in order to determine the degree of envy-freeness of a cake-cutting protocol for $n$ players. A player $p_i$ envies another player $p_j$, $1 \leq i,j \leq n$, $i \neq j$, when $p_i$ prefers player $p_j$'s portion to his or her own. If $p_i$ envies~$p_j$, we call the relation from $p_i$ to $p_j$ an \emph{envy-relation}; otherwise, we call it an \emph{envy-free-relation}. \begin{definition}\upshape \label{def:envy-relation-envy-free-relation} Consider a division of cake $C=\bigcup_{i=1}^{n} C_i$ for a set $P = \{p_1,p_2,\dots,p_n\}$ of players, where $v_i$ is $p_i$'s valuation function and $C_i$ is $p_i$'s portion. \begin{enumerate} \item An \emph{envy-relation for this division} (denoted by $\Vdash$) is a binary relation on~$P$. Player $p_i$ envies player $p_j$, $1 \leq i,j \leq n$, $i \neq j$, if and only if $v_i(C_i) < v_i(C_j)$. We write $p_i \Vdash p_j$. \item An \emph{envy-free-relation for this division} (denoted by $\nVdash$) is a binary relation on~$P$. Player $p_i$ does not envy player $p_j$, $1 \leq i,j \leq n$, $i \neq j$, if and only if $v_i(C_i) \geq v_i(C_j)$. We write $p_i \nVdash p_j$. \end{enumerate} \end{definition} The following properties of envy-relations and envy-free-relations are worth mentioning.\footnote{Various analogs of envy-relations and envy-free-relations have also been studied, from an economic perspective, in the different context of multiagent allocation of indivisible resources. 
In particular, Feldman and Weiman~\cite{fel-wei:j:envy-and-wealth} consider ``non-envy relations'' (which are similar to our notion of envy-free-relations) and mention that these are not necessarily transitive. Chauduri~\cite{cha:j:interpersonal-envy} introduces ``envy-relations'' and mentions that these are irreflexive and not necessarily transitive. Despite some similarities, their notions differ from ours, both in their properties and in the way properties holding for their and our notions are proven. For example, Chauduri~\cite{cha:j:interpersonal-envy} notes that mutual envy cannot occur in a market equilibrium, i.e., in this case his ``envy-relations'' are asymmetric, which is in sharp contrast to two-way envy being allowed for our notion.} No player can envy him- or herself, i.e., envy-relations are irreflexive: The inequality $v_i(C_i) < v_i(C_i)$ never holds. Thus, we trivially have that $v_i(C_i) \geq v_i(C_i)$ always holds. However, when counting envy-free-relations for a given division, we will disregard these trivial envy-free-relations $p_i \nVdash p_i$, $1 \leq i \leq n$, throughout the paper. Furthermore, neither envy-relations nor envy-free-relations need to be transitive. This is due to the fact that each player values every piece of the cake according to his or her own valuation function. The valuation functions of different players will be distinct in general. For example, given three distinct players $p_i$, $p_j$, and $p_k$ with valuations $v_i(C_i) < v_i(C_j)$ and $v_j(C_j) < v_j(C_k)$, we have that $p_i \Vdash p_j$ and $p_j \Vdash p_k$. However, these valuations do not provide any information about player $p_i$'s valuation of portion $C_k$, so we cannot conclude that $p_i \Vdash p_k$. An analogous argument applies to envy-free-relations. 
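For intuition, the pairwise relations of Definition~\ref{def:envy-relation-envy-free-relation}, and the non-transitivity just discussed, can be checked mechanically. The following minimal sketch (our own illustration, not part of any protocol; all valuations are hypothetical) encodes a division as a matrix $v$ with entry $v[i][j] = v_i(C_j)$ and mirrors the three-player example above, with players indexed from~$0$.

```python
# Sketch: a division is given by a valuation matrix v, where v[i][j]
# is player p_i's value of portion C_j (each row sums to 1).

def envies(v, i, j):
    """True iff p_i envies p_j, i.e., v_i(C_i) < v_i(C_j)."""
    return v[i][i] < v[i][j]

def envy_free_relations(v):
    """Number of nontrivial envy-free-relations p_i does not envy p_j, i != j."""
    n = len(v)
    return sum(1 for i in range(n) for j in range(n)
               if i != j and not envies(v, i, j))

# Hypothetical valuations illustrating non-transitivity of envy:
# p_0 envies p_1 and p_1 envies p_2, yet p_0 does not envy p_2.
v = [
    [0.30, 0.40, 0.30],   # p_0: envies p_1, but not p_2
    [0.20, 0.30, 0.50],   # p_1: envies p_2
    [0.30, 0.30, 0.40],   # p_2: envies nobody
]
assert envies(v, 0, 1) and envies(v, 1, 2) and not envies(v, 0, 2)
assert envy_free_relations(v) == 4   # four of the six pairwise relations
```

Note that since each player applies his or her own valuation function, the two envy-relations above give no information about $v_0(C_2)$, exactly as argued in the text.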
The above observations imply that envy-relations and envy-free-relations are either one-way or two-way, i.e., it is possible that: \begin{enumerate} \item two players envy each other ($p_i \Vdash p_j$ and $p_j \Vdash p_i$), which we refer to as ``two-way envy,'' \item neither of two players envies the other ($p_i \nVdash p_j$ and $p_j \nVdash p_i$), which we refer to as ``two-way envy-freeness,'' and \item one player envies another player but is not envied by this other player ($p_i \Vdash p_j$ and $p_j \nVdash p_i$), which we refer to as both ``one-way envy'' (from $p_i$ to~$p_j$) and ``one-way envy-freeness'' (from $p_j$ to~$p_i$). \end{enumerate} Assuming that all players are following the rules and strategies, some cake-cutting protocols always guarantee an envy-free division (i.e., they always find an envy-free division of the cake), whereas others do not. Only protocols that \emph{guarantee} an envy-free division in \emph{every} case, even in the worst case (in terms of the players' valuation functions), are considered to be envy-free. Note that an envy-free division may be obtained by coincidence, just because the players have matching valuation functions that avoid envy, and not because envy-freeness is enforced by the rules and strategies of the cake-cutting protocol used. In the worst case, however, when the players have totally nonconforming valuation functions, an envy-free division would not just happen by coincidence, but needs to be enforced by the rules and strategies of the protocol. An envy-free-relation is said to be \emph{guaranteed} if it exists even in the worst case. \OMIT{ The analysis of envy-relations dates back at least to Feldman and Kirman~\cite{fel-kir:j:fairness-and-envy}. In contrast to our approach, they consider the number of envy-pairs in already existing divisions with the intention of maximizing fairness afterwards via trading. 
In particular, they do not consider the \emph{design} of cake-cutting protocols that maximize fairness. In the majority of cases, research in the area of cake-cutting from an economic perspective is concerned more with the existence of certain divisions and their properties than with how to achieve these divisions. A different approach measures the intensity of envy in terms of the distance between envied portions~\cite{cha:j:measure-envy}. More recently, Brams, Jones, and Klamler~\cite{bra-jon-kla:j:minimal-envy} proposed to minimize envy in terms of the maximum number of players that a player may envy.\footnote{Their notion of measuring envy differs from our notion of DGEF in various ways, the most fundamental of which is to take an egalitarian approach to reducing the number of envy-relations (namely, via minimizing the most-envious player's envy, in terms of decreasing the number of this single player's envy-relations), as opposed to the utilitarian approach the DGEF is aiming at (namely, via minimizing overall envy, in terms of increasing the total number of guaranteed envy-free-relations among all players). Note also that Brams, Jones, and Klamler~\cite{bra-jon-kla:j:minimal-envy} focus on presenting a new protocol rather than on introducing a new notion for measuring envy.} Another approach is due to Chevaleyre et al.~\cite{che-end-est-mau:c:envy-free-states}, who define various metrics for the evaluation of envy in order to classify ``the degree of envy in a society,'' and they use the term ``degree of envy'' in the quite different setting of multiagent allocation of \emph{indivisible} resources. } We now define the degree of guaranteed envy-freeness in relation to the problem of cake-cutting. 
\begin{definition}\upshape \label{def:degree-guaranteed-envy-freeness} For $n \geq 1$ players, the \emph{degree of guaranteed envy-freeness} (DGEF, for short) of a given proportional\footnote{The DGEF should be restricted to proportional protocols only, since otherwise the DGEF may overstate the actual level of fairness, e.g., if all the cake is given to a single player.} cake-cutting protocol is defined to be the maximum number of envy-free-relations that exist in every division obtained by this protocol (provided that all players follow the rules and strategies of the protocol), i.e., the DGEF (which is expressed as a function of~$n$) is the number of envy-free-relations that can be guaranteed even in the worst case. \end{definition} By a slight abuse of notation, we will sometimes speak of the number of guaranteed envy-free-relations (rather than of the guaranteed number of envy-free-relations). When we do so, let us remind the reader that what matters is the \emph{total number} of envy-free-relations that exist in the worst case, and not the \emph{identification} of specific envy-free-relations. Moreover, for technical reasons (see the proof of Lemma~\ref{lem:divide-conquer}), we also consider the case that there is only one player (i.e., $n=1$). Note, however, that in this case the DGEF of any cake-cutting protocol is trivially zero, since we disregard the trivial envy-free-relation $p_1 \nVdash p_1$. Definition~\ref{def:degree-guaranteed-envy-freeness} is based on the idea of weakening the notion of fairness in terms of envy-freeness in order to obtain cake-cutting protocols that are fair (though perhaps non-envy-free) \emph{and} finite bounded, where the fairness level of a protocol is given by its degree of guaranteed envy-freeness. The higher the degree of guaranteed envy-freeness the fairer the protocol. 
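To illustrate that the DGEF is a worst-case notion, the following sketch (our own illustration; the valuation pattern is the one used later in the proof of Lemma~\ref{lem:DGEF-no-evaluations}) constructs, for each $n$, a proportional division in which every player has exactly one envy-free-relation, so that only $n$ of the $n(n-1)$ possible envy-free-relations are present.

```python
from fractions import Fraction

def envy_free_count(v):
    """Number of nontrivial envy-free-relations in the division v."""
    n = len(v)
    return sum(1 for i in range(n) for j in range(n)
               if i != j and v[i][i] >= v[i][j])

def worst_case_proportional(n):
    """Proportional division with only n envy-free-relations: p_i values
    C_i at n/n^2 = 1/n, one other portion at 2/n^2 (below 1/n), and the
    remaining n-2 portions at (n+1)/n^2 (above 1/n)."""
    v = []
    for i in range(n):
        row = [Fraction(n + 1, n * n)] * n
        row[i] = Fraction(n, n * n)            # own portion: exactly 1/n
        row[(i + 1) % n] = Fraction(2, n * n)  # the single non-envied other portion
        v.append(row)
    return v

for n in range(3, 8):
    v = worst_case_proportional(n)
    assert all(sum(row) == 1 for row in v)                   # valuations of C sum to 1
    assert all(v[i][i] == Fraction(1, n) for i in range(n))  # proportionality
    assert envy_free_count(v) == n                           # only n envy-free-relations
```

Exact rational arithmetic via `Fraction` avoids floating-point comparisons when checking the strict inequalities $\nicefrac{2}{n^2} < \nicefrac{1}{n} < \nicefrac{(n+1)}{n^2}$.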
\OMIT{ Note that if the DGEF is lower than~$\nicefrac{n(n-1)}{2}$, the number of guaranteed envy-free-relations can be improved to this lower bound by resolving circular envy-relations (of which two-way envy-relations are a special case) by means of circular trades after the execution of the protocol.\footnote{To be specific here, all occurrences of ``guaranteed envy-free-relations'' in this paragraph refer to those envy-free-relations that are guaranteed to exist after executing some cake-cutting protocol \emph{and in addition, subsequently, performing trades that are guaranteed to be feasible}. This is in contrast with what we mean by this term anywhere else in the paper; ``guaranteed envy-free-relations'' usually refers to those envy-free-relations that are guaranteed to exist after executing the protocol only. As is common, we consider trading not to be part of a cake-cutting protocol, though it might be useful in certain cases (for example, Brams and Taylor mention that trading might be used ``to obtain better allocations; however, this is not a procedure but an informal adjustment mechanism''~\cite{bra-tay:b:fair-division}). In particular, the notion of DGEF refers to (proportional) cake-cutting protocols without additional trading.} Thus, in this case, involving subsequent trading actions adds to the number of guaranteed envy-free-relations.
Furthermore, having $\nicefrac{n(n-1)}{2}$ guaranteed envy-free-relations after all circular envy-relations have been resolved, three more guaranteed envy-free-relations can be gained by applying an envy-free protocol\footnote{For example, one might use the Selfridge--Conway protocol, which is known to be a finite bounded envy-free cake-cutting protocol for $n=3$ players (see Stromquist~\cite{str:j:cut-cake} and also, e.g., \cite{bra-tay:j:protocol,bra-tay:b:fair-division,rob-web:b:cake-cutting}) and will also be applied within our protocol to be presented in Figure~\ref{algo:n} (see also Figure~\ref{algo:n4} for the special case of four players).} to the three most envied players, which yields an overall lower bound of~$3+\nicefrac{n(n-1)}{2}$ guaranteed envy-free-relations. However, the DGEF is defined to make a statement on the performance of a particular protocol and not about all sorts of actions to be undertaken afterwards. Moreover, if the DGEF of a proportional cake-cutting protocol is at least~$\nicefrac{n(n-1)}{2}$ (such as the DGEF of the protocol to be presented in Figure~\ref{algo:n}) then circular envy-relations are not \emph{guaranteed} to exist, and hence, in this case, trading has no impact on the number of guaranteed envy-free-relations. } \subsection{Degrees of Guaranteed Envy-Freeness in Proportional Cake-Cutting Protocols} \label{sec:survey-guaranteed-envy-freeness} We now give an upper and a lower bound on the degree of guaranteed envy-freeness for proportional cake-cutting protocols. For comparison, note that Feldman and Kirman~\cite{fel-kir:j:fairness-and-envy} observed that, for \emph{any} division, the number of envy-relations is always between zero and $n(n-1)$; zero if everyone is happy with his or her share of the cake, and $n(n-1)$ if everyone is envious of everyone else.
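Both extremes of Feldman and Kirman's observation are easy to realize with concrete valuation matrices. The following sketch (our own illustration with hypothetical valuations for $n=4$ players) exhibits a division with zero envy-relations and one with the maximum of $n(n-1)=12$; note that the all-envious division is not proportional, as the observation concerns arbitrary divisions.

```python
def envy_count(v):
    """Number of envy-relations in the division v, where v[i][j] = v_i(C_j)."""
    n = len(v)
    return sum(1 for i in range(n) for j in range(n)
               if i != j and v[i][i] < v[i][j])

n = 4
# Everyone is happy: each player values his or her own portion highest.
happy = [[0.4 if i == j else 0.2 for j in range(n)] for i in range(n)]
assert envy_count(happy) == 0

# Everyone envies everyone else: each own portion is valued lowest.
envious = [[0.1 if i == j else 0.3 for j in range(n)] for i in range(n)]
assert envy_count(envious) == n * (n - 1)
```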
\begin{proposition} \label{prop:DGEF-Minimum-Maximum} Let $d(n)$ be the degree of guaranteed envy-freeness of a proportional cake-cutting protocol for $n \geq 2$ players. It holds that $n \leq d(n) \leq n(n-1)$. \end{proposition} \begin{proofs} If $n=2$ then we obviously have $d(2) = 2$, since proportionality guarantees that both players value their share of the cake as being at least~$\nicefrac{1}{2}$. Note that this case, $n = 2 = d(n)$, reflects the fact that every proportional protocol for two players is envy-free. So, we now assume that $n \geq 3$. As stated above, we disregard the trivial envy-free-relation $p_i \nVdash p_i$ for each~$i$, $1 \leq i \leq n$. Consequently, each player can have at most $n-1$ envy-free-relations, one to each of the other players, which gives a total of at most $n(n-1)$ guaranteed envy-free-relations. This proves the upper bound: $d(n) \leq n(n-1)$. To prove the lower bound, note that, by definition, in a proportional division every player~$p_i$, $1 \leq i \leq n$, regards his or her portion as being of value at least $\nicefrac{1}{n}$, i.e., $v_i(C_i) \geq \nicefrac{1}{n}$. Thus, $v_i(C-C_i) \leq \nicefrac{(n-1)}{n}$ for each~$i$. We will now prove that this implies that none of the players can envy each of the $n-1$ other players at the same time. We show this for player~$p_1$; the argument is analogous for the other players. So, assume that $p_1$ envies some other player, say~$p_2$. Thus $v_1(C_2) > v_1(C_1) \geq \nicefrac{1}{n}$. It follows that $p_1$ values the remaining cake $(C-C_1)-C_2$ as being less than $\nicefrac{(n-2)}{n}$, i.e., $v_1((C-C_1)-C_2) < \nicefrac{(n-2)}{n}$. Consequently, there is no way to divide the remaining cake $(C-C_1)-C_2$ into $n-2$ portions $C_3, C_4, \dots, C_n$ such that for each $i$, $3 \leq i \leq n$, we have $v_1(C_i) \geq \nicefrac{1}{n}$. Hence, there must be at least one player~$p_j$, $3 \leq j \leq n$, such that $v_1(C_j) < \nicefrac{1}{n} \leq v_1(C_1)$, so $p_1 \nVdash p_j$.
Considering all $n$ players, this gives at least $n$ guaranteed envy-free-relations for a proportional protocol, so $d(n) \geq n$.~\end{proofs} The degree of fairness of a division obtained by applying a proportional cake-cutting protocol depends strongly on the rules of this protocol. Specifying and committing to appropriate rules often increases the degree of guaranteed envy-freeness, whereas the lack of such rules jeopardizes it in the sense that the number of guaranteed envy-free-relations may be limited to the worst-case minimum of $n$ as stated in Proposition~\ref{prop:DGEF-Minimum-Maximum}. In this context, ``appropriate rules'' are those that involve the players' evaluations of other players' portions and of pieces that are still to be assigned. Concerning a particular piece of cake, involving the evaluations of as many players as possible in the allocation process helps to keep the number of envy-relations to be created low, since this makes it possible to determine early on whether a planned allocation may later turn out to be disadvantageous---and thus to take adequate countermeasures. In contrast, omitting mutual evaluations means forgoing additional knowledge that could turn out to be most valuable later on. For example, say player~$p_i$ is going to be assigned piece~$c_j$. If the protocol asks all other players to evaluate piece~$c_j$ according to their measures, all envy-relations to be created by the assignment of piece~$c_j$ to player~$p_i$ can be identified before the actual assignment and thus countermeasures (such as trimming piece~$c_j$) can be undertaken. However, if the protocol requires no evaluations on behalf of the other players, such envy-relations cannot be identified early enough to prevent them from happening. \begin{lemma} \label{lem:DGEF-no-evaluations} Let a proportional cake-cutting protocol for $n \geq 2$ players be given.
If the rules of the protocol require none of the players to value any of the other players' portions, then the degree of guaranteed envy-freeness is~$n$ (i.e., each player is guaranteed only one envy-free-relation). \end{lemma} \begin{proofs} Having a certain number of guaranteed envy-free-relations means to have at least this number in any case, even in the worst case. For $n=2$, proportionality implies envy-freeness, so the worst case is the best case. For $n \geq 3$ players, consider the following scenario. Given a division of cake $C=\bigcup_{i=1}^{n}C_i$, without any restrictions other than aiming at a proportional division (i.e., the rules of the protocol require none of the players to value any of the other players' portions), we set the valuation functions of the players as follows. For each $i$, $1 \leq i \leq n$, player $p_i$ values portion $C_i$ to be worth exactly $\nicefrac{n}{n^2} = \nicefrac{1}{n}$, $p_i$ values exactly one portion~$C_j$, $i \neq j$, to be worth exactly $\nicefrac{2}{n^2} < \nicefrac{1}{n}$, and $p_i$ values each of the $n-2$ remaining portions~$C_k$, $\|\{i,j,k\}\| = 3$, to be worth exactly $\nicefrac{(n+1)}{n^2} > \nicefrac{1}{n}$. These valuations make this division proportional, as every player values his or her portion to be worth exactly $\nicefrac{1}{n}$. Moreover, each player has $n-2$ envy-relations and just one guaranteed envy-free-relation, the latter of which is due to Proposition~\ref{prop:DGEF-Minimum-Maximum}. 
Hence, if the rules of the protocol require none of the players to value any of the other players' portions, then no more than $n$ envy-free-relations can be guaranteed by the given proportional cake-cutting protocol in the worst case.~\end{proofs} The argument in the proof of Lemma~\ref{lem:DGEF-no-evaluations} will be used to prove the upper bounds on the DGEF of the proportional cake-cutting protocols considered in Theorem~\ref{thm:survey-DGEF} (see also Lemmas~\ref{lem:last-diminisher} through~\ref{lem:divide-and-choose}) and Theorems~\ref{thm:n4-DGEF} and~\ref{thm:n-DGEF}. An envy-free cake-cutting protocol for $n$ players guarantees that no player $p_i$ envies any other player~$p_j$, i.e., $v_i(C_i) \geq v_i(C_j)$ for all $i,j$ with $1 \leq i,j \leq n$, in each division $C=\bigcup_{i=1}^{n} C_i$ obtained by this protocol. This is the case exactly if the protocol meets the upper bound on the degree of guaranteed envy-freeness given in Proposition~\ref{prop:DGEF-Minimum-Maximum}. That is, a cake-cutting protocol for $n \geq 2$ players is envy-free (or completely fair) exactly if the degree of guaranteed envy-freeness equals $n(n-1)$. Our next result shows the DGEF for a number of well-known finite bounded \emph{proportional} cake-cutting protocols.\footnote{These protocols may also be known under different names.} For the sake of self-containment, these protocols will be described in Section~\ref{sec:survey}. \begin{theorem} \label{thm:survey-DGEF} For $n \geq 3$ players,\footnote{\label{fn:trivial-cases}The trivial cases $n=1$ (where one player receives all the cake) and $n=2$ (where each proportional division is always envy-free) are ignored. 
Specifically, an envy-free (and thus proportional) division for $n=2$ players can always be obtained by applying the cut-and-choose protocol: One player cuts the cake into two pieces both of which he or she considers to be worth exactly one half of the cake, and the other player chooses the piece that he or she considers to be worth at least half of the cake.} the proportional cake-cutting protocols listed in Table~\ref{tab:DGEF-survey} have a degree of guaranteed envy-freeness as shown in the same table. \end{theorem} The proof of Theorem~\ref{thm:survey-DGEF} can be found in Section~\ref{sec:survey}. In particular, the proofs of Lemmas~\ref{lem:last-diminisher} through~\ref{lem:divide-and-choose} provide the details of the analysis yielding the values in the DGEF column of Table~\ref{tab:DGEF-survey}. \begin{table}[h!tp] \centering \begin{tabular}{|l||c|c|} \hline \multicolumn{1}{|c||}{Protocol} & DGEF & Established via \\ \hline \hline Last Diminisher \cite{ste:j:steinhaus} & $2 + \nicefrac{n(n-1)}{2}$ & Lemma~\ref{lem:last-diminisher} \\ \hline Lone Chooser \cite{fin:j:fair-division} & $n$ & Lemma~\ref{lem:lone-chooser} \\ \hline Lone Divider \cite{kuh:b:games-fair-division} & $2n-2$ & Lemma~\ref{lem:lone-divider} \\ \hline Cut Your Own Piece (no strategy) \cite{ste:b:math-snapshots} & $n$ & Lemma~\ref{lem:cut-your-own-piece} \\ Cut Your Own Piece (left-right strategy) & $2n-2$ & Lemma~\ref{lem:cut-your-own-piece} \\ \hline Divide and Conquer \cite{eve-paz:j:cake-cutting} & $n \cdot \left\lfloor \log n \right\rfloor + 2n - 2^{\left\lfloor \log n \right\rfloor + 1}$ & Lemma~\ref{lem:divide-conquer} \\ Minimal-Envy Divide and Conquer \cite{bra-jon-kla:j:minimal-envy} & $n \cdot \left\lfloor \log n \right\rfloor + 2n - 2^{\left\lfloor \log n \right\rfloor + 1}$ & Lemma~\ref{lem:divide-conquer} \\ \hline Recursive Divide and Choose \cite{tas:j:proportional-protocol} & $n$ & Lemma~\ref{lem:divide-and-choose} \\ \hline \end{tabular} \caption{DGEF of selected finite 
bounded cake-cutting protocols.} \label{tab:DGEF-survey} \end{table} Clearly, the degrees of guaranteed envy-freeness of the protocols listed in Table~\ref{tab:DGEF-survey} vary considerably. Although each of the protocols may provide an envy-free division in the best case, in the worst case some of them show just the minimum number of guaranteed envy-free-relations according to Proposition~\ref{prop:DGEF-Minimum-Maximum}, while others possess a significantly higher degree of guaranteed envy-freeness. These differences can be explained by the fact that these protocols have been developed with a focus on achieving proportionality, and not on maximizing the degree of guaranteed envy-freeness. However, this indicates a new direction for future research, namely to increase the number of guaranteed envy-free-relations while ensuring finite boundedness. In the next section, we take a first step in this direction. \section{Enhancing the Degree of Guaranteed Envy-Freeness} \label{sec:improved-algorithm-guaranteed-envy-freeness} In this section, we introduce a finite bounded cake-cutting protocol that, compared with the protocols in Table~\ref{tab:DGEF-survey}, improves upon the degree of guaranteed envy-freeness. We will prove that this protocol is proportional and strategy-proof, and that it can be adapted so as to even provide a strong fair division. To present the protocol and its properties in an accessible way, we first handle the case of $n=4$ players separately in Section~\ref{sec:improved-algorithm-n-4}, before presenting and analyzing it for arbitrary $n \geq 3$ in Section~\ref{sec:improved-algorithm-arbitrary-n}. \subsection{A Proportional Protocol with an Enhanced DGEF for Four Players} \label{sec:improved-algorithm-n-4} Figure~\ref{algo:n4} gives a finite bounded proportional cake-cutting protocol for four players. We give both the rules and the strategies at once.
Note that the players have to follow the rules and strategies in order to obtain a proportional share of the cake in any case. The protocol in Figure~\ref{algo:n4} always provides a proportional division (see Theorem~\ref{thm:n4-proportional}) and has ten guaranteed envy-free-relations (see Theorem~\ref{thm:n4-DGEF}). For comparison, the best DGEF for four players among the proportional protocols listed in Table~\ref{tab:DGEF-survey} (see also Section~\ref{sec:survey})---namely, that of both the Divide and Conquer protocols and the Last Diminisher protocol---is eight. Note that a maximum number of twelve guaranteed envy-free-relations is possible for four players (and this would give envy-freeness). Moreover, the protocol can be proven to be strategy-proof (see Theorem~\ref{thm:n4-strategy-proof}), and to even yield a strong fair division, provided that exactly one player makes a mark in Step~1 of the protocol that is closest to~$1$ (see Theorem~\ref{thm:n4-strong-fair}). \begin{figure}[h!tp] \centering \begin{tabular}{|lp{118mm}|} \hline {\bf Input:} & Cake~$C$, and four players $p_1, p_2, p_3, p_4$, where $p_i$ has the valuation function~$v_i$. Note that the value of cake~$C$ is normalized such that $v_i(C)=1$, $1 \leq i \leq 4$. \\ {\bf Output:} & Mapping of portions $C_i$ to players $p_i$, where $C=\bigcup_{i=1}^{4} C_i$. \vspace*{3mm} \\ {\bf Step~1.} & Let each player~$p_i$, $1 \leq i \leq 4$, make a mark at $m_i \in C$, such that $v_i(m_i,1)=\nicefrac{1}{4}$. \\ {\bf Step~2.} & Find any player $p_j$ such that there is no player $p_k$, $1 \leq j,k \leq 4$, $j \neq k$, with $\|[m_k,1]\| < \|[m_j,1]\|$. (Ties can be broken arbitrarily.) \\ {\bf Step~3.} & Assign portion $C_j=[m_j,1]$ to player $p_j$ and let $p_j$ drop out.
\\ \multicolumn{2}{|l|}{Denote the remaining players by $p_1$, $p_2$, and $p_3$, without loss of generality, and let $v_i$ be} \\ \multicolumn{2}{|l|}{$p_i$'s valuation function.} \\ {\bf Step~4.} & Let player $p_1$ cut $[0,m_j]$ into three pieces, say $c_x$, $c_y$, and $c_z$, of equal value according to~$v_1$. \\ {\bf Step~5.} & If $v_2(c_x) > v_2(c_y)$ and $v_2(c_x) > v_2(c_z)$, $\{x, y, z\} = \{1, 2, 3\}$, let player $p_2$ trim piece $c_x$ into piece $c'_x$ and trimmings $R$ such that $v_2(c'_x) = v_2(c_y) \geq v_2(c_z)$ or $v_2(c'_x) = v_2(c_z) \geq v_2(c_y)$. If there already exists a two-way tie for the most valuable piece according to~$v_2$, do nothing. \\ {\bf Step~6.} & Let player $p_3$ choose one of the three pieces $c_x$ (respectively, $c'_x$ if $c_x$ has been trimmed), $c_y$, or $c_z$ that is most valuable according to~$v_3$. \\ {\bf Step~7.} & Let player $p_2$ choose one piece from the two remaining pieces that is most valuable according to~$v_2$. If $c'_x$ is among the remaining pieces, player $p_2$ has to choose this one. \\ {\bf Step~8.} & Assign the remaining piece to player $p_1$. \\ \multicolumn{2}{|l|}{If trimmings have been made in Step~5, continue with Step~9, otherwise finished.} \\ {\bf Step~9.} & Either player $p_2$ or player $p_3$ received the trimmed piece $c'_x$. From these two, let the player not having received $c'_x$ cut $R$ into three equal pieces according to his or her measure. \\ {\bf Step~10.} & Let the player having received $c'_x$ choose one of these three pieces that is most valuable according to his or her measure. \\ {\bf Step~11.} & Let player $p_1$ choose one out of the two remaining pieces that is most valuable according to~$v_1$. \\ {\bf Step~12.} & Assign the remaining piece to the player that cut $R$. 
\\ \hline \end{tabular} \caption{A finite bounded proportional cake-cutting protocol with DGEF of $10$ for four players.} \label{algo:n4} \end{figure} Steps~4 through~12 of the protocol in Figure~\ref{algo:n4} (the part of the protocol executed when just three players are left) simply constitute the Selfridge--Conway protocol.\footnote{The Selfridge--Conway protocol is known to be a finite bounded envy-free cake-cutting protocol for $n=3$ players (see Stromquist~\cite{str:j:cut-cake} and also, e.g., \cite{bra-tay:j:protocol,bra-tay:b:fair-division,rob-web:b:cake-cutting}).} We explicitly describe the Selfridge--Conway protocol here for the sake of self-containment. Note that the Selfridge--Conway protocol is also part of the more involved protocol for an arbitrary number of players, which will be presented as Figure~\ref{algo:n} in Section~\ref{sec:improved-algorithm-arbitrary-n}. \begin{theorem} \label{thm:n4-proportional} The cake-cutting protocol in Figure~\ref{algo:n4} is proportional. \end{theorem} \begin{proofs} Since all three players entering Step~4 consider the portion of the fourth player (who dropped out in Step~3) as being worth no more than~$\nicefrac{1}{4}$, the Selfridge--Conway protocol is applied to a part of the cake that all three involved players consider as being worth at least $\nicefrac{3}{4}$. Thus, since the Selfridge--Conway protocol is an envy-free (hence, in particular, a proportional) protocol, it by definition guarantees each of the three players entering Step~4 a portion of value at least $\nicefrac{1}{4}$ according to their measures. Moreover, the player who dropped out in Step~3 considers his or her portion to be worth $\nicefrac{1}{4}$ due to Step~1. Therefore, the cake-cutting protocol in Figure~\ref{algo:n4} always provides a proportional division for $n=4$ players.~\end{proofs} \begin{theorem} \label{thm:n4-DGEF} The cake-cutting protocol in Figure~\ref{algo:n4} has ten guaranteed envy-free-relations.
\end{theorem} \begin{proofs} The DGEF of the protocol in Figure~\ref{algo:n4} can be justified analogously to the arguments in the proof of Theorem~\ref{thm:n4-proportional}. Because the Selfridge--Conway protocol always provides an envy-free division, there is no envy between the three players entering Step~4, which results in six guaranteed envy-free-relations. In addition, the same three players will not envy the player (call him or her~$p_j$) who dropped out in Step~3 with portion~$C_j$, since none of them valued portion $C_j$ as being worth more than~$\nicefrac{1}{4}$ and each of them is guaranteed a share of at least $(\nicefrac{1}{3})(\nicefrac{3}{4}) = \nicefrac{1}{4}$, which gives three more guaranteed envy-free-relations. Simply put, none of the three players entering Step~4 will envy any of the three other players, summing up to nine guaranteed envy-free-relations. By the argument in the proof of Proposition~\ref{prop:DGEF-Minimum-Maximum}, no more envy-free-relations can be guaranteed on behalf of these three players. The last guaranteed envy-free-relation is due to player~$p_j$: Since the protocol always provides a proportional division, $p_j$ cannot envy each of the three remaining players (again by the proof of Proposition~\ref{prop:DGEF-Minimum-Maximum}). However, it cannot be guaranteed that $p_j$ does not envy any of the other two remaining players either, since $p_j$ does not evaluate their portions. This follows from the proof of Lemma~\ref{lem:DGEF-no-evaluations}.\footnote{More specifically, consider the following scenario. Suppose the fourth player dropped out in Step~3 (so $j=4$), and $p_4$ values his or her portion $C_4$ to be worth exactly~$\nicefrac{1}{4}$ and the remaining cake $[0, m_4]$ to be worth exactly~$\nicefrac{3}{4}$. 
If we set the valuation functions such that $v_1(C_1)=v_2(C_2)=v_3(C_3)=\nicefrac{1}{4}$, $v_4(C_1)=\nicefrac{1}{8}$, and $v_4(C_2)=v_4(C_3)=\nicefrac{5}{16}$ (thus, $v_4(C_1)+v_4(C_2)+v_4(C_3)=\nicefrac{3}{4}$), dividing cake $C$ into $C_1$, $C_2$, $C_3$, and $C_4$ results in a proportional division such that player $p_4$ envies players $p_2$ and~$p_3$.} Thus, the protocol shown in Figure~\ref{algo:n4} has a DGEF of exactly ten in total.~\end{proofs} Turning to manipulation, there may be players who try to gain most of the cake for themselves, or who intentionally try to make other players envious. To prevent this from happening, cake-cutting protocols should be strategy-proof. \begin{definition} \label{def:proportional-strategy-proof} A proportional cake-cutting protocol is said to be \emph{strategy-proof} if a cheating player is no longer guaranteed a proportional share, whereas all other players are still guaranteed to receive their proportional share. \end{definition} In a strategy-proof proportional protocol, a cheater (i.e., a player who does not play truthfully) cannot harm any of the other players with respect to proportionality, and may even jeopardize receiving a proportional share of the cake for him- or herself. It is worth noting that the definition of strategy-proofness is slightly stronger when restricted to \emph{envy-free} cake-cutting protocols. An envy-free cake-cutting protocol is said to be strategy-proof if a cheating player is no longer guaranteed to not envy any other player, whereas all other players are. That is, a strategy-proof envy-free cake-cutting protocol is resistant to manipulation in the sense that for a player to be \emph{guaranteed} to not envy any other player, he or she is required to play truthfully. \begin{theorem} \label{thm:n4-strategy-proof} The proportional cake-cutting protocol in Figure~\ref{algo:n4} is strategy-proof in the sense of Definition~\ref{def:proportional-strategy-proof}.
\end{theorem} \begin{proofs} When analyzing the strategy-proofness of the protocol in Figure~\ref{algo:n4}, only decisions made in Steps~1 through~3 need to be considered, since the Selfridge--Conway protocol has already been proven to be strategy-proof (see, e.g., \cite{bra-tay:b:fair-division}). So, in each of the following three cases, we thus will consider only Steps~1 through~3 of the protocol. Moreover, it is assumed that there is exactly one cheater (i.e., one player not playing truthfully) that is trying to get more than a proportional share, call him or her~$p_c$, $1 \leq c \leq 4$. \begin{fall} \item \label{n4-strategy-proof-case1} If $p_c$ is the player to drop out in Step~3 with portion $C_c=[m_c,1]$, $v_c(m_c,1) > \nicefrac{1}{4}$, then the cheater would receive more than a proportional share according to his or her measure. However, all other players are still guaranteed a proportional share, since all of them consider portion $C_c$ as being worth at most~$\nicefrac{1}{4}$ according to their measures, so they all consider the remaining part of the cake to be worth at least~$\nicefrac{3}{4}$. \item \label{n4-strategy-proof-case2} If the cheater, $p_c$, would have dropped out in Step~3 with portion $C_c=[m_c,1]$ when telling the truth (i.e., $v_c(m_c,1)=\nicefrac{1}{4}$ and $v_i(m_c,1) \leq \nicefrac{1}{4}$ with $1 \leq i \leq 4$ and $i \neq c$) but now is not (since $p_c$ makes a mark at $m_c^\prime$ with $m_c^\prime < m_c$), then the cheater may end up with even less than~$\nicefrac{1}{4}$. Let us see why this is true. In this case, another player, $p_j$, drops out in Step~3 with portion $C_j=[m_j,1]$, where $m_c^\prime \leq m_j \leq m_c$, which determines the subpart of the cake for Step~4 and the following steps to be $[0,m_j]$. According to the measure of the cheater, subpart $[0,m_j]$ is worth at most~$\nicefrac{3}{4}$, since he or she values $[m_j,1]$ as being worth at least $\nicefrac{1}{4}$, as assumed beforehand. 
Applying the Selfridge--Conway protocol to subpart $[0,m_j]$ guarantees each of the involved players at least a proportional share, which, for the cheater, results in a portion that may be worth even less than $(\nicefrac{1}{3})(\nicefrac{3}{4}) = \nicefrac{1}{4}$ (namely, if $m_j < m_c$). (Note that this is the point where the cheater loses his or her guarantee for a proportional share via cheating.) Again, all other players are still guaranteed a proportional share. The player receiving portion $C_j=[m_j,1]$ values this portion as being~$\nicefrac{1}{4}$. The two remaining players both value subpart $[0,m_j]$ to be worth at least~$\nicefrac{3}{4}$, and thus receive a portion that is worth at least $(\nicefrac{1}{3})(\nicefrac{3}{4})$ according to their measures. \item \label{n4-strategy-proof-case3} If the cheater neither drops out in Step~3 by cheating nor would have dropped out in Step~3 had he or she been truthful, then the cheating does not influence the division at all. The player dropping out in Step~3 with portion $C_j=[m_j,1]$ values this portion as being~$\nicefrac{1}{4}$, and the remaining players receive a proportional share of subpart $[0,m_j]$, which all of them value at least $\nicefrac{3}{4}$, even the cheater. \end{fall} This concludes the proof of Theorem~\ref{thm:n4-strategy-proof}.~\end{proofs} Moreover, just a small change in the procedure makes the protocol presented above provide not just a simple fair but a strong fair division for four players, which means that every player considers the portion received to be worth strictly more than one quarter of $C$. \begin{theorem} \label{thm:n4-strong-fair} The cake-cutting protocol in Figure~\ref{algo:n4} can be modified so as to yield a strong fair division, provided that exactly one player makes a mark in Step~1 that is closest to~$1$ (with respect to the interval $[0,1]$).
\end{theorem} \begin{proofs} Figure~\ref{algo:n4:strongfair} shows only the modified steps of the protocol in Figure~\ref{algo:n4} that are required to achieve a strong fair division. Let $p_j$ be the unique player whose mark in Step~1 is closest to~$1$. According to Step~2 in Figure~\ref{algo:n4:strongfair}, let $p_k$ be any player such that $\|[m_j,1]\| < \|[m_k,1]\|$ and there is no player $p_{\ell}$, $1 \leq j,k,\ell \leq 4$, $\|\{j,k,\ell\}\|=3$, with $\|[m_{\ell},1]\| < \|[m_k,1]\|$, where ties can be broken arbitrarily. Step~3 in Figure~\ref{algo:n4:strongfair} assures that player $p_j$ is dropping out with a portion that is worth strictly more than $\nicefrac{1}{4}$ according to his or her measure,\footnote{Recall that we assumed the axiom of positivity, which requires nonempty pieces of cake to have a nonzero value for each player, see also Footnote~\ref{foo:positivity}.} since $p_j$ receives a portion that is bigger and thus is worth more than the one he or she has marked as being worth exactly~$\nicefrac{1}{4}$. The three remaining players continue by applying a proportional protocol to a part of the cake that all of them consider to be worth strictly more than~$\nicefrac{3}{4}$. Thus, each of the players receives a portion that is worth strictly more than $\nicefrac{1}{4}$ according to his or her measure, which results in a strong fair division.~\end{proofs} \begin{figure}[h!tp] \centering \begin{tabular}{|lp{118mm}|} \hline {\bf Step~2.} & Find any players $p_j$ and $p_k$ such that $\|[m_j,1]\| < \|[m_k,1]\|$ and there is no player $p_{\ell}$, $1 \leq j,k,\ell \leq 4$, $\|\{j,k,\ell\}\|=3$, with $\|[m_{\ell},1]\| < \|[m_k,1]\|$. (Ties can be broken arbitrarily.) \\ {\bf Step~3.} & Set $m=m_k+\nicefrac{(m_j-m_k)}{2}$ and assign portion $C_j=[m,1]$ to player $p_j$ and let $p_j$ drop out. 
\\ \multicolumn{2}{|l|}{Denote the remaining players by $p_1$, $p_2$, and~$p_3$, without loss of generality, and let $v_i$ be} \\ \multicolumn{2}{|l|}{$p_i$'s valuation function.} \\ {\bf Step~4.} & Let player $p_1$ cut $[0,m]$ into three pieces, say $c_x$, $c_y$ and $c_z$, of equal value according to~$v_1$. \\ \hline \end{tabular} \caption{Modified steps in the protocol of Figure~\ref{algo:n4} to achieve a strong fair division for four players.} \label{algo:n4:strongfair} \end{figure} \subsection{A Proportional Protocol with an Enhanced DGEF for any Number of Players} \label{sec:improved-algorithm-arbitrary-n} Figure~\ref{algo:n} shows a finite bounded proportional cake-cutting protocol with an enhanced DGEF for $n$ players, where $n \geq 3$ is arbitrary. Again, we give both the rules and the strategies at once, and players have to follow the rules and strategies in order to obtain a proportional share of the cake. Unless specified otherwise, ties in this protocol can be broken arbitrarily. With respect to the DGEF results of previously known finite bounded proportional cake-cutting protocols given in Table~\ref{tab:DGEF-survey} (see also Section~\ref{sec:survey}), the Last Diminisher protocol\footnote{This protocol has been developed by Banach and Knaster and was first presented in Steinhaus~\cite{ste:j:steinhaus}.} shows the best results for $n \geq 6$, whereas the best results for $n < 6$ are achieved by the Last Diminisher protocol as well as both the Divide and Conquer protocols \cite{eve-paz:j:cake-cutting,bra-jon-kla:j:minimal-envy}. The protocol presented in Figure~\ref{algo:n} improves upon these protocols in terms of the degree of guaranteed envy-freeness for all $n \geq 3$ and, in particular, improves upon the DGEF of the Last Diminisher protocol by $\left\lceil \nicefrac{n}{2} \right\rceil - 1$ additional guaranteed envy-free-relations.\footnote{Recall that we ignore the trivial cases $n=1$ and $n=2$, see Footnote~\ref{fn:trivial-cases}. 
Moreover, in the special case of~$n=4$ the DGEF of the protocol in Figure~\ref{algo:n} is~$\left( \nicefrac{n^2}{2} \right) + 2 = 10$ and thus improves the DGEF of the Last Diminisher protocol by even~$\nicefrac{n}{2} = 2$ guaranteed envy-free-relations, see Theorem~\ref{thm:n4-DGEF}.} Both the protocol in Figure~\ref{algo:n} and the Last Diminisher protocol are, more or less, based on the same idea of determining a piece of minimal size that is valued exactly $\nicefrac{1}{n}$ by one of the players (who is still in the game), which guarantees that all other players (who are still in the game) will not envy this player for receiving this particular piece. However, the protocol in Figure~\ref{algo:n} works in a more parallel way, which makes its enhanced DGEF of $\left\lceil \nicefrac{n^2}{2} \right\rceil + 1$ possible (see Theorem~\ref{thm:n-DGEF}), and it forbears from using trimmings. To ensure that working in a parallel manner indeed pays off in terms of increasing the degree of guaranteed envy-freeness, the ``inner loop'' (Steps~4.1 through~4.3) of the protocol is decisive. In addition, the protocol in Figure~\ref{algo:n} always provides a proportional division (see Theorem~\ref{thm:n-proportional}) in a finite bounded number of steps (see Theorem~\ref{thm:n-finite-bounded}), it can be proven to be strategy-proof (in the sense of Definition~\ref{def:proportional-strategy-proof}, see Theorem~\ref{thm:n-strategy-proof}), and analogously to the modification described in Section~\ref{sec:improved-algorithm-n-4}, this protocol can be adjusted to provide a strong fair division for suitable valuation functions of the players (see Theorem~\ref{thm:n-strong-fair}). All of the above properties are shared also by the Last Diminisher protocol (see, e.g., \cite{rob-web:b:cake-cutting}). 
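The round-by-round counting behind the DGEF of $\left\lceil \nicefrac{n^2}{2} \right\rceil + 1$ (proven in Theorem~\ref{thm:n-DGEF}) is easy to check numerically. The following Python sketch (the function name \texttt{dgef} is ours, not part of the protocol) tallies the guaranteed envy-free-relations round by round and compares the total with the closed form:

```python
import math

def dgef(n):
    """Tally the guaranteed envy-free-relations of the protocol in
    Figure algo:n round by round, for n >= 5 players (the special
    case n = 4 gains one extra relation, see Theorem thm:n4-DGEF)."""
    total = 0
    s = n
    # Outer loop: while s > 4, two players drop out per round, and each
    # of them is guaranteed not to be envied by the s - 1 other players.
    while s > 4:
        total += 2 * (s - 1)
        s -= 2
    # Steps 9.1 through 9.4 (reached exactly if n is even): one more
    # player drops out and is not envied by the three remaining players.
    if s == 4:
        total += 3
        s -= 1
    # Step 10: Selfridge--Conway yields an envy-free division among the
    # last three players, i.e., 3 * 2 = 6 pairwise envy-free-relations.
    total += 6
    return total

# Agreement with the closed form ceil(n^2 / 2) + 1 of Theorem thm:n-DGEF.
for n in range(5, 101):
    assert dgef(n) == math.ceil(n * n / 2) + 1
```

Note that the `while` loop runs exactly $\left\lceil \nicefrac{(n-4)}{2} \right\rceil$ times, matching the outer-loop count used in the proofs of Theorems~\ref{thm:n-finite-bounded} and~\ref{thm:n-DGEF}.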
\begin{remark} \label{rem:remarksblock} Some remarks on the protocol in Figure~\ref{algo:n} are in order: \begin{enumerate} \item \label{rem:remarksblock-1} From a very high-level perspective the procedure is as follows: The protocol runs over several rounds, in each of which the task is to find a player~$p_j$ who takes a portion from the left side of the cake and a player~$p_k$ who takes a disjoint portion from the right side of the cake, such that none of the players still in the game envies~$p_j$ or~$p_k$ (here, appropriate ``inner-loop handling'' might be necessary; see Figure~\ref{algo:n} for details). Thereafter, $p_j$ and $p_k$ drop out with their portions, and a new round is started with the remaining cake (which is renormalized, see Remarks~\ref{rem:remarksblock-3} and~\ref{rem:remarksblock-5} below) and the remaining players. Finally, the Selfridge--Conway protocol is applied to the last three players in the game. \item \label{rem:remarksblock-2} Note that this protocol is applied only if there are more than two players in total. If there is just one player then he or she receives the entire cake, and if there are only two players then the simple cut-and-choose protocol is applied (see Footnote~\ref{fn:trivial-cases}). \item \label{rem:remarksblock-3} Regarding $n \geq 5$ players, if at any stage of our protocol the same player marks both the leftmost smallest piece and the rightmost smallest piece, the cake may be split up into two pieces and later on merged again. To simplify matters, in such a case the interval boundaries are adapted as well, which is expressed in Step~8 of Figure~\ref{algo:n}. Simply put, the two parts of the cake are set next to each other again to ensure a seamless transition. This can be done without any loss in value due to the additivity of the players' valuation functions.
\item \label{rem:remarksblock-4} Note that if the inner loop (Steps~4.1 through~4.3) has not been executed in an outer-loop iteration (Steps~1 through~8), we have $\varrho = \varrho^{\prime}$. This is the special case of zero iterations of the inner loop. Consequently, if $\varrho = \varrho^{\prime}$ then the portion $C_k=[\varrho_k,\varrho]$ assigned to player $p_k$ in Step~6 is the same as $C_k=[\varrho_k,\varrho^{\prime}]$ in the general case, and the values for $\varrho:=\varrho_k$ and $C^{\prime}:=[\lambda_j, \varrho_k]$ that are set in Step~8 in this case are special cases of $\varrho:=\varrho-\varrho^{\prime}+\varrho_k$ and $C^{\prime}:=[\lambda_j, \varrho_k] \cup [\varrho^{\prime}, \varrho]$ (since if $\varrho = \varrho^{\prime}$ then $[\varrho^{\prime}, \varrho]$ degenerates to a single point, which is valued zero by the axiom of divisibility, see Footnote~\ref{foo:divisibility}). However, to make the protocol in Figure~\ref{algo:n} easier to comprehend, we have stated these special cases explicitly in addition to the general case. \item \label{rem:remarksblock-5} In Steps~1 and~9.1, the value of subcake~$C^{\prime} \subseteq C$ is renormalized such that $v_i(C^{\prime})=1$ for each player $p_i$, $1 \leq i \leq s$, for the sake of convenience. In more detail, each player~$p_i$ values~$C^{\prime}$ at least $\nicefrac{s}{n}$ of $C$, i.e., $v_i(C^{\prime}) \geq (\nicefrac{s}{n}) \cdot v_i(C)$. The latter holds true since each of the $s$ players still in the game values the union of the $n-s$ portions already assigned to be worth at most $\nicefrac{(n-s)}{n}$ of $C$. Thus, by receiving a proportional share (valued $\nicefrac{1}{s}$) of $C^{\prime}$ each player~$p_i$ is guaranteed at least a proportional share (valued $\nicefrac{1}{n}$) of $C$. 
\item \label{rem:remarksblock-6} Note that Steps~9.1 through~9.3 of the protocol in Figure~\ref{algo:n} correspond to Steps~1 through~3 of the protocol in Figure~\ref{algo:n4}, and that Steps~9.1 through~9.4 are performed exactly if the initial number $n$ of players is even. \end{enumerate} \end{remark} \begin{figure}[h!tp] \centering \begin{tabular}{|lp{118mm}|} \hline {\bf Input:} & Cake~$C$, and $n$ players $p_1, p_2, \dots, p_n$, where $p_i$ has valuation function~$v_i$. \\ {\bf Output:} & Mapping of portions $C_i$ to players~$p_i$, where $C=\bigcup_{i=1}^{n} C_i$. \\ {\bf Initialization:} & Set $\lambda:=0$, $\varrho:=1$, $\varrho^{\prime}:=\varrho$, $s:=n$, and $C^{\prime}:=[0,1]=C$. \\ \multicolumn{2}{|p{145mm}|}{While there are more than four players (i.e., $s>4$), perform the outer loop (Steps~1 through~8).} \\ {\bf Step~1.} & Let players $p_i$, $1 \leq i \leq s$, each make two marks at $\lambda_i$ and $\varrho_i$ with $\lambda_i,\varrho_i \in C^{\prime}$ such that $v_i(\lambda,\lambda_i)=\nicefrac{1}{s}$ and $v_i(\varrho_i,\varrho)=\nicefrac{1}{s}$; note that $v_i(C^{\prime})=1$ (see Remark~\ref{rem:remarksblock}.\ref{rem:remarksblock-5}). \\ {\bf Step~2.} & Find any player $p_j$ such that there is no player $p_z$, $1 \leq j,z \leq s$, $j \neq z$, with $\|[\lambda,\lambda_z]\| < \|[\lambda,\lambda_j]\|$. \\ {\bf Step~3.} & Find any player $p_k$ such that there is no player $p_z$, $1 \leq k,z \leq s$, $k \neq z$, with $\|[\varrho_z,\varrho]\| < \|[\varrho_k,\varrho]\|$. If more than one player fulfills this condition for~$p_k$, and $p_j$ is one of them, choose $p_k$ other than $p_j$.\\ \multicolumn{2}{|p{145mm}|}{If $j \neq k$, go directly to Step~5, else repeat the inner loop (Steps~4.1 through 4.3) until $p_j$ and $p_k$ are found such that $j \neq k$, where $p_j$ marks the leftmost smallest piece and $p_k$ marks the rightmost smallest piece.} \\ {\bf Step~4.1.} & Set $\varrho^{\prime} := \varrho_k$. 
\\ {\bf Step~4.2.} & Let players $p_i$, $1 \leq i \leq s$, each make a mark at $\varrho_i$ with $\varrho_i \in C^{\prime}$ such that $v_i(\varrho_i,\varrho^{\prime})=\nicefrac{1}{s}$. \\ {\bf Step~4.3.} & Find player $p_k$ such that there is no player $p_z$, $1 \leq k,z \leq s$, $k \neq z$, with $\|[\varrho_z,\varrho^{\prime}]\| < \|[\varrho_k,\varrho^{\prime}]\|$. If more than one player fulfills this condition for~$p_k$, and $p_j$ is one of them, choose $p_k$ other than $p_j$. \\ {\bf Step~5.} & Assign portion $C_j=[\lambda,\lambda_j]$ to player $p_j$. \\ {\bf Step~6.} & If $\varrho = \varrho^{\prime}$, assign portion $C_k=[\varrho_k,\varrho]$ to player $p_k$, else assign portion $C_k=[\varrho_k,\varrho^{\prime}]$ to player $p_k$ (see Remark~\ref{rem:remarksblock}.\ref{rem:remarksblock-4}). \\ {\bf Step~7.} & Let players $p_j$ and $p_k$ drop out. \\ {\bf Step~8.} & Adapt the interval boundaries of the remaining cake $C^{\prime}$ for the round / step to follow: If $\varrho = \varrho^{\prime}$, set $C^{\prime}:=[\lambda_j, \varrho_k]$ and $\varrho:=\varrho_k$, else set $C^{\prime}:=[\lambda_j, \varrho_k] \cup [\varrho^{\prime}, \varrho]$ and $\varrho:=\varrho-\varrho^{\prime}+\varrho_k$ (see Remarks~\ref{rem:remarksblock}.\ref{rem:remarksblock-3} and~\ref{rem:remarksblock}.\ref{rem:remarksblock-4}). Set $\lambda:=\lambda_j$, $\varrho^{\prime} := \varrho$, and $s:=s-2$. \\ \multicolumn{2}{|p{145mm}|}{Perform Steps~9.1 through 9.4 if and only if there are four players (i.e., $s=4$). If there are three players (i.e., $s=3$), go directly to Step~10.} \\ {\bf Step~9.1.} & Let each $p_i$, $1 \leq i \leq s=4$, make a mark at $\varrho_i \in C^{\prime}$ such that $v_i(\varrho_i,\varrho)=\nicefrac{1}{s}=\nicefrac{1}{4}$; note that $v_i(C^{\prime})=1$ (see Remark~\ref{rem:remarksblock}.\ref{rem:remarksblock-5}). 
\\ {\bf Step~9.2.} & Find any player $p_j$ such that there is no player $p_k$, $1 \leq j,k \leq s$, $j \neq k$ with $\|[\varrho_k,\varrho]\| < \|[\varrho_j,\varrho]\|$. \\ {\bf Step~9.3.} & Assign portion $C_j=[\varrho_j,\varrho]$ to player $p_j$. Let player $p_j$ drop out. \\ {\bf Step~9.4.} & Set $\varrho:=\varrho_j$, $C^{\prime}:=[\lambda, \varrho]$, and $s:=s-1$. \\ {\bf Step~10.} & Divide the remaining cake $C^{\prime}$ among the $s=3$ remaining players via the Selfridge--Conway protocol (as described in Steps~4 through 12 in Figure~\ref{algo:n4}). \\ \hline \end{tabular} \caption{A proportional protocol with an enhanced DGEF of~$\left\lceil \nicefrac{n^2}{2} \right\rceil + 1$ for $n \geq 3$ players.} \label{algo:n} \end{figure} \begin{theorem} \label{thm:n-proportional} The cake-cutting protocol in Figure~\ref{algo:n} is proportional. \end{theorem} \begin{proofs} In the case of $n$ being even, all four players entering Step~9.1 consider $C^{\prime}$ at this stage to be worth at least~$\nicefrac{s}{n} = \nicefrac{4}{n}$ of $C$, since each of them values the union of the $n-4$ portions already assigned to be worth at most $\nicefrac{(n-4)}{n}$ of $C$. The same argument can be applied to the three players that enter Step~10 in the case of $n$ being odd. Thus, from Step~9.1 on, which is the part when there are no more than four players left, the protocol provides a proportional division according to Theorem~\ref{thm:n4-proportional}. When there are more than four players in total, either the first $n-4$ players (if $n$ is even), or the first $n-3$ players (if $n$ is odd), receive a portion they each value to be worth at least $\nicefrac{1}{n}$ according to Steps~1 and~4.2, since each of these players receives a portion he or she once specified to be worth exactly~$\nicefrac{1}{s}$ of $C^{\prime}$, while valuing $C^{\prime}$ at least~$\nicefrac{s}{n}$ of $C$ (see Remark~\ref{rem:remarksblock}.\ref{rem:remarksblock-5}). 
Thus, the cake-cutting protocol in Figure~\ref{algo:n} always provides a proportional division for any number $n \geq 3$ of players.~\end{proofs} \begin{theorem} \label{thm:n-finite-bounded} The cake-cutting protocol in Figure~\ref{algo:n} is finite bounded. \end{theorem} \begin{proofs} The protocol in Figure~\ref{algo:n} has only a finite number of steps---with two loops though. The outer loop (which repeats Steps~1 through~8 as long as there are more than either four players if $n$ is even, or three players if $n$ is odd) is iterated $\nicefrac{(n-4)}{2}$ times if $n$ is even, and is iterated $\nicefrac{(n-3)}{2}$ times if $n$ is odd. The inner loop (which repeats Steps~4.1 through~4.3 until two distinct players are found to whom the two outermost pieces are assigned, one receiving the piece at the present left boundary and the other the piece at the present right boundary) is iterated at most $s-2$ times per outer-loop iteration with $s$ players, summing up to at most $\sum_{i=1}^{\left\lceil \nicefrac{(n-4)}{2} \right\rceil}{(n-2i)}$ iterations of the inner loop in total. Let us see why this is true. For each outer-loop iteration with $s$ players, the inner loop is iterated as long as the player who marked the leftmost smallest piece also marks the rightmost smallest piece (with respect to the current right boundary $\varrho^{\prime}$) and there is no tie with another player for the rightmost smallest piece. With respect to every single player~$p_i$, $1 \leq i \leq s$, when setting $v_i(C^{\prime})=1$, the division of $C^{\prime}$ into pieces valued $\nicefrac{1}{s}$ each results in $s$ disjoint pieces. Two of these $s$ pieces have been identified in Steps~2 and~3 already, and thus only $s-2$ more pieces can be identified. Now, let $p_j$ be the player who is to be assigned the leftmost smallest piece $[\lambda,\lambda_j]$ in Step~5 and who is the only player to have marked the rightmost smallest piece $[\varrho_j, \varrho]$.
Then there must be a player~$p_k$, $k \neq j$, with $v_k(\lambda, \lambda_j) \leq \nicefrac{1}{s}$ and $v_k(\varrho_j, \varrho) < \nicefrac{1}{s}$, and there is at least one piece $[\varrho_k, \varrho^{\prime}]$ among the $s-2$ remaining pieces for which it is true that $v_k(\varrho_k, \varrho^{\prime})=\nicefrac{1}{s}$ and $v_j(\varrho_k, \varrho^{\prime}) \leq \nicefrac{1}{s}$, i.e., for some $\varrho^{\prime}$ it holds that $\|[\varrho_k,\varrho^{\prime}]\| \leq \|[\varrho_j,\varrho^{\prime}]\|$. Hence, within any outer-loop iteration, the inner loop stops after at most $s-2$ iterations. Counting in single steps, Steps~1, 2, 3, 5, 6, 7, and~8 each are repeated $\left\lceil \nicefrac{(n-4)}{2} \right\rceil$ times, Steps~4.1, 4.2 and~4.3 each are repeated at most $\sum_{i=1}^{\left\lceil \nicefrac{(n-4)}{2} \right\rceil}{(n-2i)}$ times, Steps~9.1 through~9.4 each are repeated at most once, and Step~10 involves at most nine more steps (according to Figure~\ref{algo:n4}). Thus, in terms of single steps as presented in Figure~\ref{algo:n}, the protocol is bounded by $(7 \cdot \left\lceil \nicefrac{(n-4)}{2} \right\rceil) + (3 \cdot \sum_{i=1}^{\left\lceil \nicefrac{(n-4)}{2} \right\rceil}{(n-2i)}) + 4 + 9$ steps. Thus, the protocol in Figure~\ref{algo:n} carries out only finitely many operations and is finite bounded.~\end{proofs} \begin{theorem} \label{thm:n-DGEF} For $n \geq 5$ players, the cake-cutting protocol in Figure~\ref{algo:n} has $\left\lceil \nicefrac{n^2}{2} \right\rceil + 1$ guaranteed envy-free-relations.\footnote{Note that the same formula holds if~$n=3$, but for the special case of~$n=4$ even one more envy-free-relation can be guaranteed (see~Theorem~\ref{thm:n4-DGEF}).} \end{theorem} \begin{proofs} The DGEF of the protocol in Figure~\ref{algo:n} increases every time a portion is assigned to a player. 
Considering Steps~1 through~8 of any outer-loop iteration with $s$ players, player~$p_j$ (the player receiving the leftmost smallest piece in Step~5 and dropping out with this portion in Step~7) will not be envied by any of the $s-1$ other players. Regarding player $p_k$ (the player receiving the rightmost smallest piece in Step~6 and dropping out with this portion in Step~7), two cases need to be considered---the best case and the worst case. In the best case, the player $p_k$ found in Step~3 differs from player~$p_j$ right away. In the worst case, players $p_j$ and $p_k$ are one and the same player (i.e., $j=k$) in Step~3, and Steps~4.1 through 4.3 need to be executed. In this case, as already mentioned in the proof of Theorem~\ref{thm:n-finite-bounded}, there must be at least one piece $[\varrho_k, \varrho^{\prime}]$ with $v_k(\varrho_k, \varrho^{\prime}) = \nicefrac{1}{s}$ and $v_j(\varrho_k, \varrho^{\prime}) \leq \nicefrac{1}{s}$, i.e., for some $\varrho^{\prime}$ it holds that $\|[\varrho_k,\varrho^{\prime}]\| \leq \|[\varrho_j,\varrho^{\prime}]\|$. Consequently, in both the best and the worst case the player receiving the rightmost smallest piece (with respect to the current right boundary, either $\varrho$ or some $\varrho^{\prime}$) will not be envied by any of the $s-1$ other players, not even by player~$p_j$. However, neither of the players $p_j$ and $p_k$ can be guaranteed to be not envied by more than $s-1$ players according to the proof of Lemma~\ref{lem:DGEF-no-evaluations}. Since the outer loop is repeated $\left\lceil \nicefrac{(n-4)}{2} \right\rceil$ times, the number of guaranteed envy-free-relations among the players sums up to $\left(\nicefrac{n^2}{2}\right)-8$ when $n$ is even, and to $\lfloor\nicefrac{n^2}{2}\rfloor-4=\left\lceil\nicefrac{n^2}{2}\right\rceil-5$ when $n$ is odd.
Note that $s$ is decreasing in every outer-loop iteration, and so $\nicefrac{1}{s}$ is increasing, which implies that no player receiving a portion valued $\nicefrac{1}{s}$ at a later outer-loop iteration will envy any of those players that received a portion in one of the previous outer-loop iterations. To verify the latter, let us denote by $C^{\prime}_{t}$ the subcake to be divided and by $s_{t}$ the number of players participating in outer-loop iteration~$t$ with $2 \leq t \leq \left\lceil \nicefrac{(n-4)}{2} \right\rceil$. Considering any outer-loop iteration $t-1$ with cake $C^{\prime}_{t-1}$ being renormalized such that $v_i(C^{\prime}_{t-1})=1$, $1 \leq i \leq s_{t-1}$, each of the two players dropping out in iteration~$t-1$ receives a portion he or she values exactly $\nicefrac{1}{s_{t-1}}$ of $C^{\prime}_{t-1}$ and which is valued at most $\nicefrac{1}{s_{t-1}}$ of $C^{\prime}_{t-1}$ by the remaining players, where $v_i(C^{\prime}_{t-1}) \geq \nicefrac{s_{t-1}}{n}$ (see Remark~\ref{rem:remarksblock}.\ref{rem:remarksblock-5}). Consequently, all players $p_j$, $1 \leq j \leq s_t$, participating in outer-loop iteration~$t$ value subcake $C^{\prime}_{t}$ to be worth at least $\nicefrac{s_t}{s_{t-1}}$ of $C^{\prime}_{t-1}$, i.e., $v_j(C^{\prime}_{t}) \geq (\nicefrac{s_t}{s_{t-1}}) \cdot v_j(C^{\prime}_{t-1})$, and the two players dropping out in iteration~$t$ both receive a portion valued $\nicefrac{1}{s_{t}}$ of subcake $C^{\prime}_{t}$. Thus, players dropping out in outer-loop iteration~$t$ receive each a portion that is valued at least~$\nicefrac{1}{s_{t-1}}$ of $C^{\prime}_{t-1}$ according to their measures. Once the outer loop has been completed, either four (if $n$ is even) or three (if $n$ is odd) players are left. 
If there are four players left, three more envy-free-relations can be guaranteed by executing Steps~9.1 through~9.4, as the three remaining players will not envy the player receiving the fourth-to-last portion, but no more than three envy-free-relations can be guaranteed, again by the proof of Lemma~\ref{lem:DGEF-no-evaluations}. In addition, the three last players entering Step~10 will not envy each other, because the Selfridge--Conway protocol always provides an envy-free division. Accordingly, six more envy-free-relations are guaranteed. Summing up, the protocol in Figure~\ref{algo:n} has a DGEF of $\left\lceil \nicefrac{n^2}{2} \right\rceil + 1$.~\end{proofs} \begin{theorem} \label{thm:n-strategy-proof} The proportional cake-cutting protocol in Figure~\ref{algo:n} is strategy-proof in the sense of Definition~\ref{def:proportional-strategy-proof}. \end{theorem} \begin{proofs} From Step~9.1 on (when at most four players are left), the protocol has been proven to be strategy-proof in Theorem~\ref{thm:n4-strategy-proof}. With respect to Steps~1 through~8, three different cases need to be considered, each of which applies analogously to both assigning the leftmost smallest piece and assigning the rightmost smallest piece. Note that a cheating attempt on one of the pieces does not influence the assignment of the other. The same three cases apply when there is a cheating attempt on the rightmost smallest piece and Steps~4.1 through~4.3 need to be executed. In the following case distinction, given any iteration~$t$ of the outer loop and cake~$C^{\prime}_{t}=[\lambda, \varrho]$ with $v_i(C^{\prime}_{t}) = 1$ for all players~$p_i$, $1 \leq i \leq s_t$, we will consider only the situation when there is exactly one player not telling the truth with respect to the leftmost smallest piece, trying to get more than a proportional share. Let $p_c$ be this cheating player, where $1 \leq c \leq s_t$. 
\begin{fall} \item \label{n-strategy-proof-case1} If $p_c$ is the player receiving portion $C_c=[\lambda,\lambda_c]$ with $v_c(\lambda,\lambda_c) > \nicefrac{1}{s_t}$ in Step~5 (and dropping out in Step~7), then $p_c$ would receive more than a proportional share. However, all $s_t-1$ other players are still guaranteed a proportional share, since all of them consider portion $C_c$ as being worth at most~$\nicefrac{1}{s_t}$ of $C^{\prime}_{t}$ according to their measures, so they all consider the remaining part of the cake to be worth at least~$\nicefrac{(s_t-1)}{s_t}$ of $C^{\prime}_{t}$. \item \label{n-strategy-proof-case2} If the cheater, $p_c$, would have received portion $C_c=[\lambda,\lambda_c]$ in Step~5 when telling the truth (so $v_c(\lambda,\lambda_c)=\nicefrac{1}{s_t}$ and $v_i(\lambda,\lambda_c) \leq \nicefrac{1}{s_t}$ with $1 \leq i \leq s_t$ and $i \neq c$) but now is not (since $p_c$ is making a mark at $\lambda_c^\prime$ with $\lambda_c < \lambda_c^\prime$), then the cheater could end up with even less than a proportional share. Let us see why this holds true. In this case, another player, $p_j$ with $j \neq c$, receives portion $C_j=[\lambda,\lambda_j]$, $\lambda_c \leq \lambda_j \leq \lambda_c^\prime$, which determines the left boundary of $C^\prime_{t+1}$ (the remaining cake to be continued with) to be $\lambda_j$. According to the measure of player~$p_c$, $[\lambda_j, \varrho]$~is worth at most~$\nicefrac{(s_t-1)}{s_t}$ of $C^{\prime}_{t}$, because he or she values $[\lambda,\lambda_j]$ to be worth at least $\nicefrac{1}{s_t}$ of $C^{\prime}_{t}$, as assumed beforehand. 
However, since the protocol works in a parallel way, this loss in value may be compensated for by some gain in value with respect to the rightmost smallest piece $C_k=[\varrho_k, \varrho^{\prime \prime}]$ that is assigned in Step~6 (where $\varrho_{k}$ is the left boundary of the rightmost smallest piece marked by some player $p_{k}$, $\|\{c,j,k\}\|=3$, and $\varrho^{\prime \prime}$ is the current right boundary, i.e., either $\varrho^{\prime \prime} = \varrho$ or $\varrho^{\prime \prime} = \varrho^{\prime}$). That is, if $v_c(\lambda_c, \lambda_j) \leq v_c(\varrho_c, \varrho_k)$ then the loss player $p_c$ experiences by cheating with respect to the leftmost smallest piece is made up for by an accidentally sufficient gain relative to the assignment of the rightmost smallest piece.\footnote{Note that the compensation for the cheater's loss may also be accumulated over the following rounds as long as the cheater has not been assigned a portion yet.} However, if $v_c(\lambda_c, \lambda_j) > v_c(\varrho_c, \varrho_{k})$ then no sufficient compensation takes place and player $p_c$ considers $C^{\prime}_{t+1}=[\lambda_j, \varrho_{k}]$, which is the part of the cake to be continued with and to be divided among $s_{t}-2$~players, as being worth at most~$\nicefrac{(s_t-2)}{s_t}$ of $C^{\prime}_{t}$. Thus, in this case player $p_c$ is not guaranteed a portion that is worth at least $\nicefrac{1}{n}$ of $C$ in total (according to his or her measure), even though the protocol would be proportional if all players were playing by the rules and strategies required by the protocol. Again, all other players are still guaranteed a proportional share. Players~$p_j$ and $p_{k}$ both value their portion to be $\nicefrac{1}{s_t}$ of $C^{\prime}_{t}$, and the $s_t-3$ remaining players consider $C^{\prime}_{t+1}=[\lambda_j, \varrho_{k}]$ (which is to be divided among $s_t-2$~players) as being worth at least $\nicefrac{(s_{t}-2)}{s_t}$ of $C^{\prime}_{t}$. 
Note that if the cheater marks the rightmost smallest piece in the very same outer-loop iteration in which he or she is cheating on the leftmost smallest piece and happens to receive the rightmost smallest piece, then there is no influence on the portions to be assigned in this outer-loop iteration, as even the cheater will receive a proportional share being worth $\nicefrac{1}{s_t}$ of $C^{\prime}_{t}$ according to his or her measure. \item \label{n-strategy-proof-case3} If the cheater neither receives the leftmost smallest portion $C_j=[\lambda,\lambda_j]$ by cheating nor would have received this portion when telling the truth, then the cheating does not affect the leftmost portion to be assigned in this particular outer-loop iteration at all. The player receiving portion $C_j=[\lambda,\lambda_j]$ in Step~5 values this portion as being~$\nicefrac{1}{s_t}$ of $C^{\prime}_{t}$, and the remaining players (except for some player $p_k$, $j \neq k$, who is assigned the rightmost smallest piece) continue the procedure with $C^{\prime}_{t+1}=[\lambda_j,\varrho_k]$ (where $\varrho_k$ is the left boundary of the rightmost smallest piece marked by~$p_k$), which each of them values at least $\nicefrac{(s_t-2)}{s_t}$ of $C^{\prime}_{t}$, even the cheater. \end{fall} This concludes the proof of Theorem~\ref{thm:n-strategy-proof}.~\end{proofs} Figure~\ref{algo:n:strongfair} shows how to adapt the protocol in Figure~\ref{algo:n} so as to achieve a strong fair division. Again, note that if the inner loop (Steps~4.1 through~4.3) has not been executed in an outer-loop iteration (Steps~1 through~8), we have the special case of zero inner-loop iterations and $\varrho = \varrho^{\prime}$.
In this case, portion $C_k=[\varrho_k-m_r,\varrho]$ assigned to player $p_k$ in Step~6 is the same as $C_k=[\varrho_k-m_r,\varrho^{\prime}]$ in the general case, and the values for $\varrho:=\varrho_k-m_r$ and $C^{\prime}:=[\lambda_j+m_{\ell}, \varrho_k-m_r]$ that are set in Step~8 in this case are special cases of $\varrho:=\varrho-\varrho^{\prime}+\varrho_k-m_r$ and $C^{\prime}:=[\lambda_j+m_{\ell}, \varrho_k-m_r] \cup [\varrho^{\prime}, \varrho]$ (again, since $[\varrho^{\prime}, \varrho]$ degenerates to a single point if $\varrho = \varrho^{\prime}$, which is valued zero by the axiom of divisibility, see Footnote~\ref{foo:divisibility}). \begin{theorem} \label{thm:n-strong-fair} For $n > 3$ players, modifying the first iteration of the cake-cutting protocol in Figure~\ref{algo:n} according to Figure~\ref{algo:n:strongfair} yields a strong fair division, provided that in this iteration exactly one player makes a mark that is closest to the left boundary and exactly one distinct player makes a mark that is closest to the right boundary. \end{theorem} \begin{proofs} For $n=4$ players, the protocol in Figure~\ref{algo:n} is the same as the protocol in Figure~\ref{algo:n4} and thus can be modified so as to yield a strong fair division according to Theorem~\ref{thm:n4-strong-fair}, see Figure~\ref{algo:n4:strongfair}. For $n>4$ players, consider the very first outer-loop iteration (Steps~1 through~8) of the protocol. Let $p_j$ be the unique player whose mark in Step~1 is closest to the left boundary~$\lambda$, and let $p_k$, $j \neq k$, be the unique player whose mark in Step~1 is closest to the right boundary~$\varrho$ (if the inner loop needs to be executed, let $p_k$ be the unique player whose mark in Step~4.2 is closest to the current right boundary~$\varrho^\prime$). 
According to Step~2 in Figure~\ref{algo:n:strongfair}, let $p_\ell$ be any player such that $\|[\lambda,\lambda_j]\| < \|[\lambda,\lambda_{\ell}]\|$ and there is no player $p_z$, $1 \leq j,\ell,z \leq s$, $\|\{j,\ell,z\}\|=3$, with $\|[\lambda,\lambda_z]\| < \|[\lambda,\lambda_{\ell}]\|$, where ties can be broken arbitrarily. Analogously, according to Step~3 (respectively, Step~4.3) in Figure~\ref{algo:n:strongfair}, let $p_r$ be any player such that $\|[\varrho_k,\varrho]\| < \|[\varrho_r,\varrho]\|$ (respectively, $\|[\varrho_k,\varrho^\prime]\| < \|[\varrho_r,\varrho^\prime]\|$) and there is no player $p_z$, $1 \leq k,r,z \leq s$, $\|\{k,r,z\}\|=3$, with $\|[\varrho_z,\varrho]\| < \|[\varrho_r,\varrho]\|$ ($\|[\varrho_z,\varrho^\prime]\| < \|[\varrho_r,\varrho^\prime]\|$). Steps~5 and~6 in Figure~\ref{algo:n:strongfair} assure that players~$p_j$ and~$p_k$ each are assigned a portion that, according to their measures, is worth strictly more than $\nicefrac{1}{s}$ of $C^{\prime}$ (which is worth at least as much as $\nicefrac{1}{n}$ of $C$ by the argument of Remark~\ref{rem:remarksblock}.\ref{rem:remarksblock-5}),\footnote{Recall that we assumed the axiom of positivity, which requires nonempty pieces of cake to have a nonzero value for each player, see Footnote~\ref{foo:positivity} for more discussion of this point.} since $p_j$ and $p_k$ each receive a portion that is bigger and thus is worth more than the one they have marked as being worth exactly~$\nicefrac{1}{s}$ of $C^{\prime}$ in Step~1. By the same argument, the $n-2$~remaining players continue by applying a proportional protocol to a part of the cake that all of them consider to be worth strictly more than~$\nicefrac{(n-2)}{n}$ of $C$. 
Thus, each of the players receives a portion that is worth strictly more than $\nicefrac{1}{n}$ of $C$ according to his or her measure, which results in a strong fair division.~\end{proofs} \begin{figure}[h!tp] \centering \begin{tabular}{|lp{118mm}|} \hline {\bf Step~2.} & Find any players $p_j$ and $p_{\ell}$ such that $\|[\lambda,\lambda_j]\| < \|[\lambda,\lambda_{\ell}]\|$ and there is no player $p_{z}$,$1 \leq j,{\ell},z \leq s$, $\|\{j,{\ell},z\}\|=3$, with $\|[\lambda,\lambda_{z}]\| < \|[\lambda,\lambda_{\ell}]\|$. Set $m_{\ell}:=\nicefrac{(\lambda_{\ell} - \lambda_j)}{2}$. \\ {\bf Step~3.} & Find any players $p_k$ and $p_{r}$ such that $\|[\varrho_k,\varrho]\| < \|[\varrho_{r},\varrho]\|$ and there is no player $p_z$, $1 \leq k,r,z \leq s$, $\|\{k,r,z\}\|=3$, with $\|[\varrho_z,\varrho]\| < \|[\varrho_{r},\varrho]\|$. If there is more than one player fulfilling this condition for player~$p_k$, and player $p_j$ is one of them, choose player $p_k$ other than $p_j$. Set $m_r:=\nicefrac{(\varrho_k - \varrho_{r})}{2}$. \\ {\bf Step~4.3.} & Find any players $p_k$ and $p_{r}$ such that $\|[\varrho_k,\varrho^{\prime}]\| < \|[\varrho_{r},\varrho^{\prime}]\|$ and there is no player $p_z$, $1 \leq k,r,z \leq s$, $\|\{k,r,z\}\|=3$, with $\|[\varrho_z,\varrho^{\prime}]\| < \|[\varrho_{r},\varrho^{\prime}]\|$. If there is more than one player fulfilling this condition for player~$p_k$, and player $p_j$ is one of them, choose player $p_k$ other than $p_j$. Set $m_r:=\nicefrac{(\varrho_k - \varrho_{r})}{2}$.\\ {\bf Step~5.} & Assign portion $C_j=[\lambda,\lambda_j+m_{\ell}]$ to player~$p_j$. \\ {\bf Step~6.} & If $\varrho = \varrho^{\prime}$, assign portion $C_k=[\varrho_k-m_r,\varrho]$ to player $p_k$, else assign portion $C_k=[\varrho_k-m_r,\varrho^{\prime}]$ to player $p_k$. 
\\ {\bf Step~8.} & If $\varrho = \varrho^{\prime}$, set $C^{\prime}:=[\lambda_j+m_{\ell}, \varrho_k-m_r]$ and $\varrho:=\varrho_k-m_r$, else set $C^{\prime}:=[\lambda_j+m_{\ell}, \varrho_k-m_r] \cup [\varrho^{\prime}, \varrho]$ and $\varrho:=\varrho-\varrho^{\prime}+\varrho_k-m_r$. Set $\lambda:=\lambda_j+m_{\ell}$, $\varrho^{\prime} := \varrho$, and $s:=s-2$. \\ {\bf Step~9.2.} & Find any players $p_j$ and $p_k$ such that $\|[\varrho_j,\varrho]\| < \|[\varrho_k,\varrho]\|$ and there is no player $p_{\ell}$, $1 \leq j,k,\ell \leq s$, $\|\{j,k,\ell\}\|=3$, with $\|[\varrho_{\ell},\varrho]\| < \|[\varrho_k,\varrho]\|$. Set $m_r:=\nicefrac{(\varrho_j - \varrho_k)}{2}$. \\ {\bf Step~9.3.} & Assign portion $C_j=[\varrho_j-m_r,\varrho]$ to player $p_j$. Let player $p_j$ drop out. \\ {\bf Step~9.4.} & Set $\varrho:=\varrho_j-m_r$ and $C^{\prime}:=[\lambda, \varrho]$. Set $s:=s-1$. \\ \hline \end{tabular} \caption{Modified steps in the first iteration of the protocol in Figure~\ref{algo:n} to achieve a strong fair division for $n > 3$ players.} \label{algo:n:strongfair} \end{figure} \section{Proof of Theorem~\ref{thm:survey-DGEF}} \label{sec:survey} In this section, we determine the degrees of guaranteed envy-freeness of the proportional cake-cutting protocols listed in Table~\ref{tab:DGEF-survey}. Thus, we prove Theorem~\ref{thm:survey-DGEF} via Lemmas~\ref{lem:last-diminisher} through~\ref{lem:divide-and-choose}. We investigate proportional protocols only, because proportional cake-cutting protocols have a DGEF of at least $n$ according to Proposition~\ref{prop:DGEF-Minimum-Maximum} and, thus, show some degree of fairness already. Over the years, several cake-cutting protocols have been proven to be proportional and finite bounded for any number $n$ of players. A detailed description of various finite bounded proportional protocols can be found in the books by Brams and Taylor~\cite{bra-tay:b:fair-division} and Robertson and Webb~\cite{rob-web:b:cake-cutting}. 
Our analysis of the protocols in Table~\ref{tab:DGEF-survey} provides a basis for further algorithmic improvements in terms of the degree of guaranteed envy-freeness, and the protocol in Figure~\ref{algo:n} is a first step in this direction. In the following subsections, we give a brief description of the protocols listed in Table~\ref{tab:DGEF-survey} and provide a detailed analysis of their {DGEF}. Note that the value of cake~$C$ to be divided is normalized such that $v_i(C)=1$ for all players~$p_i$ with $1 \leq i \leq n$. \subsection{Last Diminisher} The protocol works as follows: The first player cuts a piece he or she considers to be worth exactly $\nicefrac{1}{n}$ in his or her measure. This piece is given to the $n-1$ other players, one after the other. Now, each player has the choice to either pass the piece on as it is, or to trim it before passing it on. If a player considers the piece to be worth more than $\nicefrac{1}{n}$, he or she trims it to exactly $\nicefrac{1}{n}$ according to his or her measure. When the last player has evaluated this piece, it is given to the player who trimmed it last, or to the player who cut it in the first place if no trimmings have been made. The trimmings are reassembled with the remainder of the cake, and the procedure is applied in the same way for the $n-1$ remaining players and the reassembled remainder of the cake. This process is repeated until only two players remain. In the final round (of $n-1$ rounds in total), these last two players apply the simple cut-and-choose protocol to the remainder of the cake. To guarantee each player a proportional share of the cake, the order of the players is of no significance. \begin{lemma} \label{lem:last-diminisher} The Last Diminisher protocol has a degree of guaranteed envy-freeness of $2 + \nicefrac{n(n-1)}{2}$.
\end{lemma} \begin{proofs} Concerning the analysis of the degree of guaranteed envy-freeness, it is quite evident that in the first round $n-1$ envy-free-relations are guaranteed, since each of the $n-1$ players not receiving the first piece considers this piece to be of value at most~$\nicefrac{1}{n}$. Analogously, in the $k$th round, $1 < k < n$, $n-k$ additional envy-free-relations are guaranteed. The number of guaranteed envy-free-relations decreases by one from round to round, as the players who already received a piece are not involved in the evaluation process of subsequent rounds (see the proof of Lemma~\ref{lem:DGEF-no-evaluations}). This sums up to $\sum_{i=1}^{n-1}{i}=\nicefrac{n(n-1)}{2}$ guaranteed envy-free-relations. In addition, in the final round one more guaranteed envy-free-relation is created, as the simple cut-and-choose protocol guarantees that neither of the two players envies the other. Finally, note that the player receiving the first portion is not involved in the evaluations of any other portions, but since the Last Diminisher protocol is proportional, this player too cannot envy each of the other players according to the argument in the proof of Proposition~\ref{prop:DGEF-Minimum-Maximum}. Thus, one more guaranteed envy-free-relation must be added. Consequently, the Last Diminisher protocol guarantees $2+\nicefrac{n(n-1)}{2}$ envy-free-relations.~\end{proofs} \subsection{Lone Chooser} The Lone Chooser protocol was first proposed by Fink~\cite{fin:j:fair-division} (as cited in~\cite{saa:b:cake-cutting-protocol,bra-tay:b:fair-division}). It can be described as follows. For two players, the protocol is just the simple cut-and-choose protocol. For $n > 2$ players, the protocol has $n-1$ rounds. The first round is simply the cut-and-choose protocol executed by players $p_1$ and~$p_2$, which results in two pieces, $c_1$ and~$c_2$, with $C = c_1 \cup c_2$.
Assuming player $p_1$ received piece $c_1$ and player $p_2$ received piece $c_2$, in the second round player $p_1$ has to divide piece $c_1$ with player $p_3$, and player $p_2$ has to divide piece $c_2$ with player $p_3$. To this end, $p_1$ cuts $c_1$ into three pieces each of which he or she considers to be worth at least $\nicefrac{1}{6}$, and so does $p_2$ with~$c_2$. Player $p_3$ then chooses one of the pieces of player $p_1$ and one of the pieces of player~$p_2$, both being most valuable according to $p_3$'s measure. This guarantees each of the players $p_1$, $p_2$, and $p_3$ a portion of at least $\nicefrac{1}{3}$ in their measures. Carrying on in this way, when the final round has been entered, each of the players $p_1, p_2, \dots, p_{n-1}$ is in possession of a portion that he or she considers to be worth at least $\nicefrac{1}{(n-1)}$ in his or her measure. Let us refer to those $n-1$ players as the ``cutters'' of round~$n-1$. Finally, each of the cutters $p_1, \dots, p_{n-1}$ cuts his or her portion into $n$ pieces each of which he or she considers to be worth at least $\nicefrac{1}{(n^2-n)}$, and player~$p_n$, the ``chooser'' of round $n-1$, chooses one piece of highest value (according to his or her measure) from each plate of the $n-1$ cutters of this round. \begin{lemma} \label{lem:lone-chooser} The Lone Chooser protocol has a degree of guaranteed envy-freeness of~$n$. \end{lemma} \begin{proofs} In the course of the Lone Chooser protocol, none of the players evaluate the portion of any of the other players, which determines the DGEF of the Lone Chooser protocol to be~$n$ as a result of Lemma~\ref{lem:DGEF-no-evaluations}.~\end{proofs} \subsection{Lone Divider} A complete algorithmic description of this protocol would be rather lengthy, which is why we give just a rough sketch of the procedure. For a more detailed description, the reader is referred to Kuhn~\cite{kuh:b:games-fair-division}; see also, e.g., \cite{bra-tay:b:fair-division,rob-web:b:cake-cutting,daw:j:lone-divider}.
The Lone Divider protocol works as follows: Some player $p_d$, $1 \leq d \leq n$, is chosen to be the first-round ``divider.'' The divider cuts cake~$C$ into $n$ portions that he or she considers each to be worth $\nicefrac{1}{n}$, i.e., $v_d(C_j)=\nicefrac{1}{n}$ with $1 \leq j \leq n$. Subsequently, all other players (the first-round ``choosers'') are asked to identify any of the $n$ portions they find acceptable, that is, each player identifies all portions that are worth at least $\nicefrac{1}{n}$ in this player's measure. Obviously, every chooser~$p_c$, $c \neq d$, needs to accept at least one portion~$C_j$. Depending on the players' choices, there are different ways for how the protocol continues. In the simplest case, the players' choices allow for a \emph{fully decidable division}, i.e., for a division such that each chooser $p_c$ receives one of the portions he or she previously identified as being acceptable. Divider $p_d$ then receives the portion that has not been assigned to any of the choosers. All players drop out and the division is complete. However, if the players' choices do not allow a fully decidable division, there is either a \emph{partially decidable division} (which means that only some---not all---of the choosers are assigned a portion and drop out), or a \emph{fully undecidable division} (which means that there are at least two portions that have not been identified as being acceptable by any of the choosers and that none of the choosers receive a portion). In the case of a partially decidable division, the choosers accomplish only a partial allocation of the cake, i.e., those choosers that identified acceptable portions in a nonconflicting way are assigned a portion they have marked as being acceptable and drop out. 
In addition, this round's divider is assigned any one of the other portions and drops out, the remaining portions (that could not be assigned to players) are reassembled, and a new round with the remaining players, among which a new divider is to be chosen, is started. In the case of a fully undecidable division, none of the choosers are assigned a portion; only this round's divider receives one of the two portions that have not been identified as being acceptable by any of the choosers and drops out, and a new round with the remaining players, among which a new divider is to be chosen, is started in which the remaining cake is divided. This procedure is repeated until the whole cake has been allocated. Note that in each round at least this round's divider is assigned a portion and drops out. It is easy to see that the Lone Divider protocol is finite bounded and proportional. \begin{lemma} \label{lem:lone-divider} The Lone Divider protocol has a degree of guaranteed envy-freeness of $2n-2$. \end{lemma} \begin{proofs} We analyze the degree of guaranteed envy-freeness by means of a worst-case scenario in terms of the number of existing envy-free-relations. We claim that the maximum number of guaranteed envy-free-relations exists in the case that every chooser $p_c$ marks $n-1$ portions as being acceptable in the first round. In the following, let us refer to this situation as the ``worst-case scenario.'' In this scenario, the rules of the protocol imply that a proportional division will be achieved in the very first round (i.e., in this worst-case scenario the first round results in a fully decidable division), and the following envy-free-relations are guaranteed to exist in this case. The divider will not envy any of the choosers, as he or she considers each of the portions to be $\nicefrac{1}{n}$, resulting in $n-1$ guaranteed envy-free-relations.
Furthermore, none of the $n-1$ choosers will envy the player (be it the divider or any of the other choosers) that received the portion he or she considers unacceptable, leading to $n-1$ additional guaranteed envy-free-relations. Since the DGEF is the maximum number of envy-free-relations that are guaranteed to exist in \emph{every} case, the DGEF of the Lone Divider protocol is at most $2n-2$. To argue that the scenario given above indeed represents the worst case for $n \geq 3$ players (and so the DGEF is equal to $2n-2$), we will consider all possible cases different from the worst-case scenario. We will show that in each of these cases the number of existing envy-free-relations is, in fact, higher than $2n-2$. This implies that none of these other cases considered represent a worst-case scenario. For notational convenience, we will use the term ``case-enforced envy-free-relation'' to refer to those envy-free-relations that must necessarily exist in any of these cases, regardless of which particular valuation functions the players have (other than what was causing the respective case to occur). Recall that the term ``guaranteed envy-free-relation'' is reserved for the envy-free-relations that necessarily exist in the worst case (and---as we will see---none of the cases below will describe the worst case). Thus, the term ``case-enforced envy-free-relation'' is more general than and includes the term ``guaranteed envy-free-relation'': The number of case-enforced envy-free-relations in the worst case (i.e., the minimum number of case-enforced envy-free-relations, where the minimum is taken over all possible cases) is exactly the number of guaranteed envy-free-relations. \begin{fall} \item \label{case1} The first round results in a fully decidable division. Consider the following two subcases. \begin{unterfall} \item \label{case1-1} The total number of \emph{unacceptable} portions is greater than in the worst-case scenario.
Specifically, let us look at the situation in which exactly one of the choosers considers one more portion unacceptable than in the worst-case scenario. This change decrements the total number of acceptable portions by one and thus creates one additional case-enforced envy-free-relation, since this chooser will not envy the player receiving this particular portion. However, since the number of guaranteed envy-free-relations of the worst-case scenario persists also in this case, increasing the total number of unacceptable portions also increases the total number of case-enforced envy-free-relations. Thus, the present case does not describe a worst-case scenario. \item \label{case1-2} The total number of \emph{acceptable} portions is greater than in the worst-case scenario. Specifically, let us look at the situation in which exactly one of the choosers considers one more portion acceptable than in the worst-case scenario. This chooser then accepts all portions, and thus necessarily considers each of the portions to be worth exactly $\nicefrac{1}{n}$. Accordingly, this chooser does not envy any of the other players, resulting in $n-2$ additional case-enforced envy-free-relations. In particular, since this chooser still does not envy the player he or she did not envy in the worst-case scenario, incrementing the total number of acceptable portions by just one increases the total number of case-enforced envy-free-relations by one (if $n=3$) or even more (if $n>3$). Thus, the present case does not describe a worst-case scenario. \end{unterfall} \item \label{case2} The first round does not result in a fully decidable division. Thus, the protocol runs over more than one round. For $n = 3$ players, an additional round would be caused only by a fully undecidable division. For $n > 3$ players, additional rounds are caused by either a fully undecidable division or a partially decidable division.
In every round, at least the divider of this round is assigned a portion and drops out. On the part of the choosers, entering a new round creates additional case-enforced envy-free-relations the number of which depends on the present circumstances. Simply put, every additional round will increase the total number of case-enforced envy-free-relations compared with the worst-case minimum of $2n - 2$. This justifies why a division obtained by the execution of more than one round does not present a worst-case scenario. To see why running more than one round will lead to more than $2n - 2$ case-enforced envy-free-relations, let us first, in Cases~\ref{case2}.\ref{case2-1} and~\ref{case2}.\ref{case2-2}, have a closer look at the case-enforced envy-free-relations created in each nonfinal round if more than one round is executed in total. The final round will be handled separately in Case~\ref{case2}.\ref{case2-3}. In particular, if there are exactly two rounds, the total number of case-enforced envy-free-relations created is the sum of those explained in the first paragraph of either Case~\ref{case2}.\ref{case2-1} or Case~\ref{case2}.\ref{case2-2} (which describe the number of case-enforced envy-free-relations created in the first round) and those explained in Case~\ref{case2}.\ref{case2-3} (which describes the number of case-enforced envy-free-relations created in the final round). \begin{unterfall} \item \label{case2-1} In the case of a \emph{fully undecidable division} of the cake as the result of the first round, the first-round divider receives one of the portions that have not been marked as being acceptable by any of the choosers and drops out. All $n-1$ choosers enter the second round and will not envy the divider of the first round, resulting in $n-1$ case-enforced envy-free-relations. 
Moreover, by the proof of Proposition~\ref{prop:DGEF-Minimum-Maximum}, there must be at least one of the $n-1$ other players whom the first-round divider does not envy. Thus, $n$ case-enforced envy-free-relations result from the first round in this case. Note that Proposition~\ref{prop:DGEF-Minimum-Maximum} can be applied only to the first-round divider. Thus, analogously to the above argument, for every additional round with $s<n$ players that is caused by a fully undecidable division and that is not the final round, $s-1$ case-enforced envy-free-relations need to be added. An analysis of the final round follows in Case~\ref{case2}.\ref{case2-3}. \item \label{case2-2} In the case of a \emph{partially decidable division} of the cake as the result of the first round, let $K$ denote the set of those $k$ players, $1 < k < n-1$, that are in conflict with each other concerning the portions they identified as being acceptable in this first round. Let $L$ be the set of the $\ell$ remaining players, where $1 < \ell < n-1$ and $n = k+\ell$. The players in $L$ can divide a part of the cake without any conflict, i.e., each of the players in $L$ is assigned one of the portions he or she identified as being acceptable. Note that $\ell=1$ would represent Case~\ref{case2}.\ref{case2-1}, and that the divider is always one of the players in $L$. Following the protocol, each of the players in $L$ receives a portion and drops out while each of the players in $K$ enters the second round. Since none of the players in $K$ would accept any of the portions the players in $L$ have received, none of the players in $K$ envies any of the players in~$L$, resulting in $k\ell \geq k+\ell = n$ case-enforced envy-free-relations (the inequality holds because $k, \ell \geq 2$ implies $(k-1)(\ell-1) \geq 1$).
The players in $L$ (except for the first-round divider) that are assigned a portion and drop out in the first round are each guaranteed one envy-free-relation due to the argument in the proof of Proposition~\ref{prop:DGEF-Minimum-Maximum}, summing up to $\ell-1$ additional case-enforced envy-free-relations. Moreover, the first-round divider will not envy any of the $\ell-1$ other players in~$L$, resulting in $\ell-1$ more case-enforced envy-free-relations. Consequently, $2(\ell-1)+k\ell$ case-enforced envy-free-relations result from the first round in this case. Now consider any additional round with $s=k^\prime+\ell^\prime$ players that is caused by a partially decidable division of the cake and that is not the final round, where $k^\prime$ players are in conflict with each other and $\ell^\prime$ players accomplish a partial allocation of the cake. Analogously to the above argument (except that Proposition~\ref{prop:DGEF-Minimum-Maximum} is no longer applicable), $k^\prime \ell^\prime$ case-enforced envy-free-relations are to be added for the $k^\prime$ players being in conflict, and $\ell^\prime-1$ case-enforced envy-free-relations are to be added for this round's divider. An analysis of the final round follows in Case~\ref{case2}.\ref{case2-3}. \item \label{case2-3} Aside from the case-enforced envy-free-relations that are created by executing a nonfinal round, additional case-enforced envy-free-relations are created in the \emph{final round}, in which even the last player receives a portion. In every case, the final round provides a fully decidable division. Consequently, in a final round with $s$~players, at least $2(s-1)$ case-enforced envy-free-relations are created according to Cases~\ref{case1}.\ref{case1-1} and~\ref{case1}.\ref{case1-2}.
\end{unterfall} \end{fall} As a result, in the example of the protocol running a second round, either $n+2(s-1) = n + 2(n-2)$ with $s=n-1$ (see Case~\ref{case2}.\ref{case2-1}), or $2(\ell-1)+k\ell+2(s-1) = k\ell + 2(n-2) \geq n + 2(n-2)$ with $s=k$ (see Case~\ref{case2}.\ref{case2-2}) envy-free-relations are case-enforced. Thus, since $n \geq 3$, a second round yields at least $n + 2(n-2) > 2n-2$ case-enforced envy-free-relations in total. As indicated above, every additional round increases the number of case-enforced envy-free-relations even more. Cases~\ref{case1} and~\ref{case2} (and their subcases) completely characterize all situations different from the worst-case scenario, and in each of them the number of case-enforced envy-free-relations is greater than $2n-2$, the number of envy-free-relations guaranteed to exist in the scenario given in the first paragraph of this proof. This justifies that this scenario indeed represents the worst case. Note that the protocol does not require any of the choosers to value any of the portions they marked as being acceptable (i.e., the only information provided on these portions is that they are considered to be worth at least $\nicefrac{1}{n}$), and thus, according to the proof of Lemma~\ref{lem:DGEF-no-evaluations}, the DGEF of the Lone Divider protocol is $2n-2$.~\end{proofs} \subsection{Cut Your Own Piece} The protocol works as follows: Every player $p_i$ marks $n$ adjacent pieces (using $n-1$ marks each), each piece being valued $\nicefrac{1}{n}$ in his or her measure, resulting in $n(n-1)$ marks in total. Afterwards, cuts are made at between $n-1$ and $2(n-1)$ of the existing marks, resulting in at least $n$~adjacent pieces. Each player is then assigned a portion that contains at least one of the pieces he or she marked beforehand plus some optional supplement. There are different strategies for how to make cuts such that the resulting division is fair. The number of cuts to be made depends on the strategy chosen.
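The marking step just described admits a simple computational sketch. In the sketch below, a player's valuation is modeled, purely for illustration, as a piecewise-constant density over equal-width cells of the cake $[0,1]$; the function name and this valuation model are assumptions of the example, not part of the protocol.

```python
# Sketch of the marking step in Cut Your Own Piece (illustrative only):
# each player partitions the cake [0, 1] into n adjacent pieces, each worth
# exactly 1/n in his or her own measure, by placing n - 1 interior marks.
# A player's measure is modeled as a piecewise-constant density over
# len(density) equal-width cells; densities are assumed strictly positive.

def marks(density, n):
    """Return the n - 1 interior marks at the k/n quantiles of the measure."""
    m, total = len(density), float(sum(density))
    result, k, acc = [], 1, 0.0
    for i, d in enumerate(density):
        # Place every quantile k * total / n that falls inside cell i.
        while k < n and acc + d >= k * total / n:
            inside = (k * total / n - acc) / d  # fraction of cell i to the left
            result.append((i + inside) / m)
            k += 1
        acc += d
    return result

# A player who values the left half of the cake twice as much as the right
# half places marks skewed to the left:
print(marks([2, 2, 1, 1], 4))  # → [0.1875, 0.375, 0.625]
```

Which of the resulting marks are then turned into actual cuts is exactly what the strategies for making cuts determine.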
Note that there always is at least one strategy that guarantees each player $p_i$ a portion valued at least $\nicefrac{1}{n}$ according to his or her measure, i.e., $v_i(C_i) \geq \nicefrac{1}{n}$ for all $1 \leq i \leq n$~\cite{ste:b:math-snapshots}. If $n-1$~cuts are made (at $n-1$ of the existing marks), each player's portion consists of exactly one piece which is worth at least $\nicefrac{1}{n}$ according to his or her measure. \begin{lemma} \label{lem:cut-your-own-piece} The Cut Your Own Piece protocol has a degree of guaranteed envy-freeness of $n$ if no strategy is specified, and a degree of guaranteed envy-freeness of $2n-2$ for the ``left-right strategy'' (which, for convenience, will be explained in the proof). Moreover, for this protocol no strategy can give a better DGEF than $2n-2$. \end{lemma} \begin{proofs} As mentioned above, there are different strategies for how to make cuts such that the resulting division is fair. If no particular strategy is given, this protocol has a DGEF of only~$n$ (according to Lemma~\ref{lem:DGEF-no-evaluations}), since all players made their marks independently, and none of the players were asked to give an evaluation of the marks of any of the other players. Hence, only the minimum of $n$ envy-free-relations can be guaranteed. Steinhaus~\cite{ste:b:math-snapshots} did not mention a strategy for how to achieve a simple fair division; he just mentioned that there always exists at least one. However, when we consider a strategy that always assigns the leftmost (with respect to the interval $[0,1]$) smallest piece to the player that marked this piece as being of value $\nicefrac{1}{n}$, and the rightmost smallest piece to the player that marked this piece as being of value $\nicefrac{1}{n}$, this protocol guarantees at least $2n-2$ envy-free-relations. We call this strategy the \emph{left-right strategy}. 
In more detail, applying the left-right strategy we assign the leftmost piece to the player that marked the smallest piece starting at~$0$, and we assign the rightmost piece to the player that marked the smallest piece finishing at~$1$. That way it is guaranteed that the $n-2$ remaining players each consider the part of the cake between the assigned leftmost piece and the assigned rightmost piece as being worth at least $\nicefrac{(n-2)}{n}$. Thus, this subpart of the cake can be allocated to these $n-2$ players according to the marks made in the first instance. Note that if the player that marked the leftmost smallest piece happens to be the same as the one that marked the rightmost smallest piece, the rightmost piece is given to the player that marked the second smallest piece finishing at~$1$. If several marks for the leftmost (respectively, for the rightmost) smallest piece coincide, any mark can be chosen, without loss of generality. In this example, the terms ``piece'' and ``portion'' can be used interchangeably, as this protocol assigns contiguous portions. When applying the left-right strategy as described above, it is guaranteed that the player receiving the leftmost smallest piece is not envied by any of the $n-1$ remaining players, since all of them value this piece at most $\nicefrac{1}{n}$ according to their measures. Analogously, it is guaranteed that the $n-2$ players in the ``middle'' do not envy the player receiving the rightmost smallest piece as they value this piece at most $\nicefrac{1}{n}$. Note that if the player that marked the leftmost smallest piece is the very same as the one that marked the rightmost smallest piece, this player may envy the player receiving the rightmost piece, since in this case the rightmost piece to be assigned is just the second smallest piece finishing at~$1$. Thus, only $n-2$ envy-free-relations can be guaranteed with respect to the player receiving the rightmost piece. 
However, one more guaranteed envy-free-relation needs to be added, as the player that marked and is assigned the leftmost smallest piece cannot envy each of the other players (according to the argument in the proof of Proposition~\ref{prop:DGEF-Minimum-Maximum}). Consequently, $2n-2$ envy-free-relations can be guaranteed in total. The DGEF achieved for the Cut Your Own Piece protocol by the application of the left-right strategy cannot be enhanced by any other strategy. This is because all pieces have been marked without any mutual evaluations (as already mentioned above), and because, apart from the left border of the leftmost piece and the right border of the rightmost piece (with respect to the interval $[0,1]$), no common boundaries are known that could be used for subsequent comparisons of the guaranteed sizes, and thus values, of the marked pieces.~\end{proofs} \subsection{Divide and Conquer} \label{sec:divide-and-conquer} The Divide and Conquer protocol, which was first presented by Even and Paz~\cite{eve-paz:j:cake-cutting} (see also, e.g., \cite{rob-web:b:cake-cutting}), is based on the idea of dividing cake $C$ by simultaneously partitioning disjoint parts of~$C$. The procedure slightly differs depending on whether the number of players is even or odd. If there is an even number of players, say $n=2k$ for some integer~$k$, all players but one divide cake $C$ in the ratio $\nicefrac{k}{k}$ by a single cut, yielding two pieces of equal value for each of these $n-1$ players. The noncutter identifies either the piece to the left of the middle cut (with respect to the interval $[0,1]$), or the piece to the right of the middle cut, as being worth at least half of the cake according to his or her measure, and then continues dividing this piece with those $k-1$ cutters whose cuts fall within this piece. The other piece will be divided among the $k$ remaining cutters.
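The even-numbered round just described can be sketched as follows. This is a hedged illustration, not the protocol's formal description: halve(p, a, b) is an assumed query returning the point at which player p splits the piece [a, b] into two halves of equal value in his or her measure, and values_left_half is an assumed query telling whether the noncutter considers the piece left of the middle cut to be worth at least half.

```python
# Illustrative sketch of one Divide and Conquer round for n = 2k players on
# the piece [a, b].  All players but the first (the "noncutter") cut [a, b]
# in the ratio k : k of their own measure; the noncutter then joins the side
# of the middle cut he or she values at least 1/2, and the two groups of k
# players each recurse independently on their subpieces.

def even_round(players, a, b, halve, values_left_half):
    k = len(players) // 2                       # n = 2k players
    noncutter, cutters = players[0], players[1:]
    cuts = sorted((halve(p, a, b), p) for p in cutters)
    x = cuts[k - 1][0]                          # the middle (k-th smallest) cut
    if values_left_half(noncutter, a, x, b):
        left = [noncutter] + [p for _, p in cuts[:k - 1]]
        right = [p for _, p in cuts[k - 1:]]    # the k remaining cutters
    else:
        left = [p for _, p in cuts[:k]]         # the k remaining cutters
        right = [noncutter] + [p for _, p in cuts[k:]]
    return (left, (a, x)), (right, (x, b))

# Toy example with four players and hypothetical halving points:
halve = lambda p, a, b: a + (b - a) * {1: 0.45, 2: 0.5, 3: 0.55}[p]
prefers_left = lambda p, a, x, b: True
print(even_round([0, 1, 2, 3], 0.0, 1.0, halve, prefers_left))
```

The odd case $n = 2k+1$ differs only in that the cutters split in the ratio $k:(k+1)$ and the two resulting groups have sizes $k$ and $k+1$.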
That is, a new round is started in which those two pieces of $C$ are divided among $k$ players each, simultaneously but independently of each other. If there is an odd number of players, say $n=2k+1$, all players but one divide cake $C$ in the ratio $\nicefrac{k}{(k+1)}$ by a single cut. The noncutter identifies either the piece to the left of the $k$th cut as being worth at least $\nicefrac{k}{(2k+1)}$, or the piece to the right of the $k$th cut as being worth at least $\nicefrac{(k+1)}{(2k+1)}$. Accordingly, the noncutter continues dividing either the piece to the left of the $k$th cut with those $k-1$ cutters whose cuts fall within this piece, or the piece to the right of the $k$th cut with those $k$ cutters whose cuts fall within this piece. In both cases, the other piece will be divided among all the remaining cutters. In this way, the procedure is applied recursively until just one player remains in each subprocedure, i.e., until all the cake has been allocated to the players. Note that in the case of $n=2$, this is just the simple cut-and-choose protocol. Brams, Jones, and Klamler~\cite{bra-jon-kla:j:minimal-envy} present a finite bounded proportional cake-cutting protocol that is based on a divide-and-conquer strategy and focuses on minimizing the number of players the most-envious player may envy. The major difference from the protocol described above lies in the way the piece of a particular subprocedure is split into two subpieces.
While the original Divide and Conquer protocol uses one of the cuts made by the cutters, the Minimal-Envy Divide and Conquer protocol makes one additional cut strictly between the cut chosen by the Divide and Conquer protocol and the very next right neighboring cut (according to the interval $[0,1]$), and then uses this additional cut to split the particular piece of the cake into two subpieces for the following round if there is any (or to be assigned if this has been the final round). \begin{lemma} \label{lem:divide-conquer} The Divide and Conquer protocol and the Minimal-Envy Divide and Conquer protocol both have a degree of guaranteed envy-freeness of \mbox{$n \cdot \left\lfloor \log n \right\rfloor + 2n - 2^{\left\lfloor \log n \right\rfloor + 1}$}. \end{lemma} \begin{proofs} The Divide and Conquer protocol is recursively defined. Put simply, in each subprocedure the given subpart of the cake is divided into two pieces, and so are the players into two groups; the procedure is then applied recursively again and again to the resulting pieces and related players until just one player remains in each subprocedure. In each round, every player participating in any of the subprocedures of this round will not envy at least one of the players continuing with the corresponding other piece, as this other piece is of no more value (according to his or her measure) than the one he or she is continuing with. Thus, in each round, for each player involved in this round, one envy-free-relation is guaranteed to exist. Note that, for each subprocedure in any round except the final one, the numbers of players to be continued with in the resulting two subprocedures of the following round depend on whether the total number of players involved in the given subprocedure is even or odd.
From these remarks it follows that the Divide and Conquer protocol's degree of guaranteed envy-freeness, call it $d(n)$ for $n$ players, can be described by the following recurrence: \[ \begin{array}{rcll} d(1) & = & 0, & \\ d(n) & = & d(k) + d(k) + 2k & \text{for $n = 2k$,} \\ d(n) & = & d(k) + d(k+1) + 2k+1 & \text{for $n = 2k+1$.} \end{array} \] This recurrence relation can be simplified to: \begin{eqnarray} d(1) & = & 0, \nonumber \\ d(n) & = & d(\left\lfloor \nicefrac{n}{2} \right\rfloor) + d(\left\lceil \nicefrac{n}{2} \right\rceil) + n \mbox{\quad for $n \geq 2$.} \label{eq:rec-solution-divide-conquer} \end{eqnarray} The recurrence in~(\ref{eq:rec-solution-divide-conquer}) and similar versions are well known to occur also in other contexts.\footnote{For example, they also occur in the context of evaluating the number of comparisons made by various sorting algorithms that are based on a divide-and-conquer strategy. In particular, this recurrence expresses the number of comparisons done by the standard merge-sort algorithm.} It is a matter of routine (see, e.g., \cite{gra-knu-pat:b:concrete-maths-comp-science}) to solve it (i.e., to bring it into closed form): \begin{eqnarray} \label{eq:solution-divide-conquer} d(n) & = & n \cdot \left\lfloor \log n \right\rfloor + 2n - 2^{\left\lfloor \log n \right\rfloor + 1} \mbox{\quad for $n \geq 1$.} \end{eqnarray} How does Equation~(\ref{eq:solution-divide-conquer}) reflect the guaranteed number of envy-free-relations of the Divide and Conquer protocol? As mentioned before, this protocol can be considered as a collection of several subprocedures that altogether yield a proportional division of the given cake.
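As a quick numerical sanity check (not part of the original analysis), the recurrence in~(\ref{eq:rec-solution-divide-conquer}) and the closed form in~(\ref{eq:solution-divide-conquer}) can be compared directly; a minimal Python sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def d(n):
    """DGEF of the Divide and Conquer protocol via the recurrence
    d(1) = 0, d(n) = d(floor(n/2)) + d(ceil(n/2)) + n."""
    if n == 1:
        return 0
    return d(n // 2) + d((n + 1) // 2) + n

def d_closed(n):
    """Closed form: n * floor(log2 n) + 2n - 2^(floor(log2 n) + 1)."""
    k = n.bit_length() - 1  # floor(log2 n) for n >= 1
    return n * k + 2 * n - 2 ** (k + 1)

# The recurrence and the closed form agree, e.g., d(4) = 8, d(5) = 12.
assert all(d(n) == d_closed(n) for n in range(1, 500))
```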
This collection of subprocedures can be represented as a balanced binary tree (called the ``recursion tree''), because in every subprocedure the given subpart of the cake is cut into two pieces for which the procedure is applied recursively again and again until just one player remains in each resulting subprocedure. Since the depth of a balanced binary tree is logarithmic in the number of leaves, $\left\lceil \log n \right\rceil$ rounds are performed in total. If the number $n$~of players is not a power of two, every round except for the last one (i.e., $\left\lfloor \log n \right\rfloor$ rounds) is represented by a completely filled level of the binary tree in terms of the number of players, since all $n$~players are participating in these rounds---in different subprocedures though. Note that $n$ is a power of two if and only if it holds that $\left\lceil \log n \right\rceil = \left\lfloor \log n \right\rfloor$ (i.e., the final round is numbered $\lfloor \log n \rfloor$), and only in this case, all $n$~players are involved in each of the rounds, even in the final round. Recall that, in each round, every participating player will not envy at least one of the players continuing with the particular other piece, since he or she considers this piece to be of no more value than the one he or she is continuing with, i.e., in each round one guaranteed envy-free-relation is created on behalf of each of the participating players. For this reason, $n$~envy-free-relations are guaranteed to be created in each of the first $\left\lfloor \log n \right\rfloor$ rounds. These relations persist because in subsequent rounds the particular subparts to be divided will never get bigger again, and once two players have ended up in different subprocedures, they will never meet again in the same subprocedure of any of the following rounds.
Moreover, once a player has ended up in some group, he or she will not make future evaluations of pieces of the cake to be divided among the players in the other group. Thus, those envy-free-relations that result from any of the first $\left\lfloor \log n \right\rfloor$ rounds are guaranteed to persist until all the cake has been allocated. However, it cannot be determined which of the players continuing with the other piece of the particular subpart is not envied, since no evaluations of the pieces created in other subprocedures are made. In accordance with the proof of Lemma~\ref{lem:DGEF-no-evaluations}, the latter also justifies why no more envy-free-relations can be guaranteed. Hence, it can just be guaranteed that each player does not envy at least one of the players continuing with the other piece. Summing up, as exactly $n$ guaranteed envy-free-relations are created in each of the first $\left\lfloor \log n \right\rfloor$ rounds, $n \cdot \left\lfloor \log n \right\rfloor$ guaranteed envy-free-relations are created over all rounds, except for the final round if $n$ is not a power of two. Note that if $n$ is a power of two, the $(\log n)$th round is the final round and Equation~(\ref{eq:solution-divide-conquer}) simplifies to $d(n) = n \cdot \log n$, so in this case we are done. In contrast, if $n$ is not a power of two then fewer than $n$~players will be involved in the final round (i.e., in the round numbered $\lceil \log n \rceil = \lfloor \log n \rfloor + 1$), since in that case there is at least one subprocedure that involves an odd number of players. More specifically, in this case the number of players involved in the final round can be expressed by the term $2n - 2^{\left\lfloor \log n \right\rfloor + 1}$, where $2^{\left\lfloor \log n \right\rfloor + 1}$ specifies the number of players that would be involved in the final round if the binary recursion tree were a full binary tree, i.e., if all $n$~players were involved in the final round.
In order to analyze the final round for $n$ not being a power of two in detail, let $i$-subprocedure denote a subprocedure involving exactly $i$ players. A $3$-subprocedure can occur only in the second-to-last round, and if it occurs then one of its three players cannot be participating in the final round, i.e., only two out of three players are proceeding to the final round. Since in a balanced binary tree the depth of all leaves differs by at most one, the second-to-last round can have only either $2$-subprocedures and/or $3$-subprocedures, or $4$-subprocedures and/or $3$-subprocedures. Regarding a second-to-last round with at least one $3$-subprocedure and any number of $2$-subprocedures (which can happen only if $n$ is not a power of two), the number of players involved in the final round will be twice the number of $3$-subprocedures occurring in the second-to-last round. Regarding a second-to-last round with at least one $3$-subprocedure and any number of $4$-subprocedures (which again can happen only if $n$ is not a power of two), the number of players involved in the final round will be twice the number of $3$-subprocedures plus four times the number of $4$-subprocedures. Consequently, the number of $3$-subprocedures and the number of $4$-subprocedures in the second-to-last round determine how many players are participating in the final round, and thus also determine the number of guaranteed envy-free-relations to be created in the final round. If $n$ is not a power of two, analogously to the argument for the first $\left\lfloor \log n \right\rfloor$ rounds, also in the final round one guaranteed envy-free-relation is created with respect to each participating player. 
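This counting argument can also be checked by simulating the recursion tree directly; the following sketch (illustrative only, with group sizes standing in for actual players) confirms that $\lceil \log n \rceil$ rounds are performed, that all $n$ players participate in the final round when $n$ is a power of two, and that $2n - 2^{\lfloor \log n \rfloor + 1}$ players participate in the final round otherwise.

```python
def rounds(n):
    """Group sizes per round of the Divide and Conquer recursion tree.

    A group of size s >= 2 is split into groups of sizes floor(s/2)
    and ceil(s/2); a group of size 1 means that player has already
    received a piece and no longer participates.
    """
    levels, level = [], [n]
    while any(s > 1 for s in level):
        levels.append(level)
        level = [t for s in level if s > 1 for t in (s // 2, (s + 1) // 2)]
    return levels

for n in range(2, 200):
    levels = rounds(n)
    assert len(levels) == (n - 1).bit_length()  # = ceil(log2 n) rounds
    participants = sum(s for s in levels[-1] if s >= 2)
    if n & (n - 1) == 0:
        # n a power of two: all n players are involved in the final round
        assert participants == n
    else:
        # otherwise: 2n - 2^(floor(log2 n) + 1) players in the final round
        assert participants == 2 * n - 2 ** n.bit_length()
```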
Thus, for any number $n \geq 1$ of players, at least $2n - 2^{\left\lfloor \log n \right\rfloor + 1}$ guaranteed envy-free-relations are created in the $(\lfloor \log n \rfloor + 1)$th round.\footnote{Note that $n$ is a power of two if and only if $2n - 2^{\left\lfloor \log n \right\rfloor + 1} = 0$, and that in this case there is no $(\lfloor \log n \rfloor + 1)$th round.} According to the proof of Lemma~\ref{lem:DGEF-no-evaluations}, no more than that many envy-free-relations can be guaranteed. Altogether, this sums up to $n \cdot \left\lfloor \log n \right\rfloor + 2n - 2^{\left\lfloor \log n \right\rfloor + 1}$ guaranteed envy-free-relations in total, which is the number stated in Equation~(\ref{eq:solution-divide-conquer}). The degree of guaranteed envy-freeness of the Minimal-Envy Divide and Conquer protocol can be shown just as for the original Divide and Conquer protocol. The difference in the way of splitting the particular piece of the cake into two subpieces does not affect the number of guaranteed envy-free-relations. Consequently, although the Minimal-Envy Divide and Conquer protocol does decrease envy according to the definition of Brams, Jones, and Klamler~\cite{bra-jon-kla:j:minimal-envy} (see Section~\ref{sec:discussion} for more discussion of this point), its DGEF is $n \cdot \left\lfloor \log n \right\rfloor + 2n - 2^{\left\lfloor \log n \right\rfloor + 1}$, just as for the original Divide and Conquer protocol.~\end{proofs} \subsection{Recursive Divide and Choose} This protocol has been presented by Tasn\'{a}di~\cite{tas:j:proportional-protocol} and describes a recursive procedure for how to always achieve a proportional division. It works as follows: In the case of $n=2$, this is just the simple cut-and-choose protocol. 
In the case of $n=3$, one of the players, the ``divider,'' divides the cake into three equal pieces according to his or her measure, and each of the two other players, the ``choosers,'' marks two pieces he or she considers to be worth the most, where ties may be broken arbitrarily. If both choosers marked the same two pieces, they divide these by applying the simple cut-and-choose protocol, and the divider receives the remaining piece. If the choosers marked different pieces, they divide the piece they both have marked via the simple cut-and-choose protocol, and each of the choosers divides the piece marked by just him- or herself with the divider, again via applying the simple cut-and-choose protocol. In the case of $n > 3$ players, this procedure is repeated recursively until everything comes down to the simple cut-and-choose protocol involving two players only. In more detail, in the first round the divider cuts the cake into $n$ equal pieces according to his or her measure, and each of the $n-1$ choosers marks $n-1$~pieces he or she considers to be worth the most, where ties may be broken arbitrarily. Afterwards, a new round is started and each of the $n$~pieces is divided among those $n-1$ choosers that identified this piece as being acceptable. Concerning pieces that have been marked by fewer than $n-1$~choosers, the divider fills out these empty slots by an appropriate number of clones. In other words, each of the $n$~pieces enters the next round of the protocol and induces a new subprocedure in the scope of which the particular piece is being divided among $n-1$ players. All $n$~subprocedures are executed simultaneously but independently of each other. Note that if in the very first round all $n-1$~choosers marked the same $n-1$~pieces as being acceptable, then there is exactly one piece that has not been marked by any of the choosers.
In this case, this piece is directly assigned to the divider and the divider drops out, whereas all choosers enter the next round for dividing the $n-1$~remaining pieces among them. Analogously, in the $k$th round, $1 < k < n$, there are $\prod_{i=2}^{k}{(n-i+2)}$~subprocedures, i.e., $\prod_{i=2}^{k}{(n-i+2)}$~pieces are to be divided simultaneously but independently among $n-k+1$~players each. In every subprocedure any one player is determined to be the divider and cuts the particular piece of this subprocedure into $n-k+1$~equal subpieces according to his or her measure. Afterwards, each of the $n-k$~choosers of this particular subprocedure marks $n-k$~pieces he or she considers to be worth the most, where ties may be broken arbitrarily. Each of the $n-k+1$~pieces will induce a new subprocedure in the next round and will be divided among those $n-k$~players that marked this piece as being acceptable---where the divider fills out all empty slots regarding pieces that have been marked by fewer than $n-k$~choosers. Again, if in some round $k$, $1 < k < n$, in any of the subprocedures, all $n-k$~choosers agree on the same $n-k$~pieces, then there will be exactly one piece that has not been marked by any of the choosers. In this case, the unmarked piece is directly assigned to the divider of this particular subprocedure, and in the following rounds, the divider will not be involved in any of the subprocedures that result from this one, i.e., only $n-k$~pieces enter the next round, in which these are to be divided among the $n-k$~choosers of the previous round. Nevertheless, this divider will enter the next round and will participate in all those subprocedures that result from procedures from which he or she has not yet dropped out.
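To illustrate how quickly the parallelism grows, the subprocedure count $\prod_{i=2}^{k}(n-i+2)$ for round $k$ can be computed directly; a small sketch (assuming, as in the description above, that no divider drops out early):

```python
from math import prod  # Python 3.8+

def num_subprocedures(n, k):
    """Subprocedures running in round k (1 < k < n) of the Recursive
    Divide and Choose protocol: each round-(k-1) subprocedure spawns
    one new subprocedure per piece, i.e., prod_{i=2}^{k} (n - i + 2)."""
    assert 1 < k < n
    return prod(n - i + 2 for i in range(2, k + 1))

# For n = 5 players: 5 subprocedures in round 2, then 20, then 60.
assert [num_subprocedures(5, k) for k in (2, 3, 4)] == [5, 20, 60]
```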
Applying this procedure recursively until only two players remain in each subprocedure (i.e., running~$n-1$ rounds), and dividing the corresponding piece of the cake between these two via the simple cut-and-choose protocol, the Recursive Divide and Choose protocol provides a proportional division of the cake. \begin{lemma} \label{lem:divide-and-choose} The Recursive Divide and Choose protocol has a degree of guaranteed envy-freeness of~$n$. \end{lemma} \begin{proofs} For $n \geq 3$~players, no evaluations of entire portions are made---except for the one special case when the first-round divider drops out in the very first round (which, for the sake of self-containment, will be considered separately below)---and thus the scenario from the proof of Lemma~\ref{lem:DGEF-no-evaluations} is applicable. Simply put, only $n$~envy-free-relations can be guaranteed in every case due to the argument in the proof of Proposition~\ref{prop:DGEF-Minimum-Maximum}, as the missing evaluations of entire portions allow for any valuation functions, for example, those described in the proof of Lemma~\ref{lem:DGEF-no-evaluations}. If in the very first round all $n-1$~choosers agree on the very same $n-1$~pieces then the first-round divider will drop out with a portion that he or she values exactly $\nicefrac{1}{n}$ and that all other players value at most $\nicefrac{1}{n}$. This results in $n$ guaranteed envy-free-relations, since none of the $n-1$~choosers will envy the first-round divider and the first-round divider will not envy at least one of the other players by the argument in the proof of Proposition~\ref{prop:DGEF-Minimum-Maximum}. However, in this case no more envy-free-relations can be guaranteed in the following rounds due to the argument given above. 
Consequently, the DGEF of the Recursive Divide and Choose protocol is~$n$.~\end{proofs} \section{Related Work and Discussion} \label{sec:discussion} The analysis of envy-relations dates back at least to Feldman and Kirman~\cite{fel-kir:j:fairness-and-envy}. In contrast to our approach, they consider the number of envy-pairs in already existing divisions with the intention of maximizing fairness afterwards via trading. In particular, they do not consider the \emph{design} of cake-cutting protocols that maximize fairness. In the majority of cases, research in the area of cake-cutting from an economic perspective is concerned more with the existence of certain divisions and their properties than with how to achieve these divisions. A different approach measures the intensity of envy in terms of the distance between envied portions~\cite{cha:j:measure-envy}. More recently, Brams, Jones, and Klamler~\cite{bra-jon-kla:j:minimal-envy} proposed to minimize envy in terms of the maximum number of players that a player may envy. Their notion of measuring envy differs from our notion of DGEF in various ways, the most fundamental of which is that their notion takes an ``egalitarian'' approach to reducing the number of envy-relations (namely, via minimizing the most-envious player's envy, in terms of decreasing the number of this single player's envy-relations). In contrast, the DGEF aims at a ``utilitarian'' approach (namely, via minimizing overall envy, in terms of increasing the total number of guaranteed envy-free-relations among all players). That is to say that, although these notions may seem to be very similar at first glance, the approach presented in~\cite{bra-jon-kla:j:minimal-envy} is not sensitive to a reduction in the number of envy-relations on the part of any other than the most-envious player, whereas the DGEF does take each single improvement into account and adapts accordingly. The DGEF, thus, is a more specific, more fine-tuned measure. 
Note also that Brams, Jones, and Klamler~\cite{bra-jon-kla:j:minimal-envy} focus primarily on presenting a new protocol and less so on introducing a new notion for measuring envy. Another approach is due to Chevaleyre et al.~\cite{che-end-est-mau:c:envy-free-states}, who define various metrics for the evaluation of envy in order to classify ``the degree of envy in a society,'' and they use the term ``degree of envy'' in the quite different setting of multiagent allocation of \emph{indivisible} resources. Besides, we stress that our approach of approximating envy-freeness differs from other lines of research that also deal with approximating fairness. For example, Lipton et al.~\cite{lip:c:approximately-fair} propose seeking minimum-envy allocations of \emph{indivisible} goods in terms of the value difference of the utility functions of envied players, and Edmonds and Pruhs~\cite{edm-pru:c:not-a-piece-of-cake,edm-pru:c:balanced-allocations-of-cake} approximate fairness in cake-cutting protocols by allowing merely approximately fair pieces (in terms of their value to the players) and by using only approximate cut queries (in terms of exactness). It may be tempting to seek to decrease envy (and thus to increase the DGEF) via trading, aiming to get rid of potential circular envy-relations. Although we do not consider trading to be an integral part of a cake-cutting protocol, let us for a moment digress to briefly discuss how trading may potentially affect the number of guaranteed envy-free-relations.\footnote{To be specific here, all occurrences of ``guaranteed envy-free-relations'' in this and the next paragraph refer to those envy-free-relations that are guaranteed to exist after executing some cake-cutting protocol \emph{and in addition, subsequently, performing trades that are guaranteed to be feasible}.
This is in contrast with what we mean by this term anywhere else in the paper; ``guaranteed envy-free-relations'' usually refers to those envy-free-relations that are guaranteed to exist after executing the protocol only.} Indeed, if the DGEF is \emph{lower than~$\nicefrac{n(n-1)}{2}$}, the number of guaranteed envy-free-relations can be improved to this lower bound, or to an even higher number, by resolving circular envy-relations (of which two-way envy-relations are a special case) by means of circular trades after the execution of the protocol. Thus, in this case, involving subsequent trading actions adds to the number of guaranteed envy-free-relations. Furthermore, if exactly $\nicefrac{n(n-1)}{2}$ guaranteed envy-free-relations remain after all circular envy-relations have been resolved, three more guaranteed envy-free-relations can be gained by applying an envy-free protocol (e.g., the Selfridge--Conway protocol) to the three most envied players, which yields an overall lower bound of~$3+\nicefrac{n(n-1)}{2}$ guaranteed envy-free-relations. To give an example of an even higher impact of trading, when circular trades indeed are involved after executing either the Divide and Conquer protocol or the Minimal-Envy Divide and Conquer protocol, their numbers of guaranteed envy-free-relations can be improved to~$\left(\nicefrac{n(n+1)}{2}\right)-1$, which follows from~\cite{bra-jon-kla:j:minimal-envy}. On the other hand, if the DGEF of a proportional cake-cutting protocol is \emph{$\nicefrac{n(n-1)}{2}$ or higher} (such as the DGEF of the protocol presented in Figure~\ref{algo:n}) then---depending on the protocol---circular envy-relations may not be \emph{guaranteed} to exist, and if such cycles are not guaranteed to exist, trading has no impact on the number of guaranteed envy-free-relations.
However, as mentioned above, we consider trading not to be part of a cake-cutting protocol, though it might be useful in certain cases (for example, Brams and Taylor~\cite[page~44]{bra-tay:b:fair-division} mention that trading might be used ``to obtain better allocations; however, this is not a procedure but an informal adjustment mechanism''). In particular, the notion of DGEF refers to (proportional) cake-cutting protocols without additional trading, i.e., the DGEF is defined to make a statement on the performance of a particular protocol and not about all sorts of actions to be undertaken afterwards. Although the well-known protocols listed in Table~\ref{tab:DGEF-survey} have not been developed with a focus on maximizing the DGEF,\footnote{Quite remarkably, without any trading actions and without involving, e.g., the Selfridge--Conway protocol, the Last Diminisher protocol achieves with its DGEF almost (being off only by one) the trading- and Selfridge--Conway-related bound of $3+\nicefrac{n(n-1)}{2}$ mentioned above.} linking their degrees of guaranteed envy-freeness to the lower bound provided by involving, e.g., the Selfridge--Conway protocol and guaranteed trading opportunities indicates that the development of cake-cutting protocols with a considerably higher DGEF or even with a DGEF close to the maximum of~$n(n-1)$ poses a true challenge. That is why we feel that the enhanced DGEF of the protocol presented in Figure~\ref{algo:n} constitutes a significant improvement. \section{Conclusions} \label{sec:conclusion} Although different disciplines have been engaged in the development of fair cake-cutting protocols for decades now, finite bounded protocols that guarantee an envy-free division for $n>3$ players remain elusive. However, finite bounded protocols are the ones we are looking for in terms of practical implementations.
That is why, in this paper, we have proposed to weaken the requirement of envy-freeness (as much as needed and as little as possible), while insisting on finite boundedness. To this end, we introduced the notion of degree of guaranteed envy-freeness for proportional cake-cutting protocols. Based on this definition, we gave a survey of the DGEF in existing finite bounded proportional cake-cutting protocols, which shows that when one is trying to approximate the ideal of envy-freeness via the DGEF, there is quite a bit of room for improvement. In particular, we expect that the concept of DGEF is suitable for extending the scope of the development of new finite bounded cake-cutting protocols by allowing envy-freeness to be approximated step by step. In this context, we proposed a new finite bounded proportional cake-cutting protocol, explicitly demonstrated for $n=4$ and for arbitrary~$n \geq 3$, which provides a significantly enhanced degree of guaranteed envy-freeness, compared with the status quo given by the survey in Table~\ref{tab:DGEF-survey} (see also Section~\ref{sec:survey}). In particular, our protocol has $\left\lceil \nicefrac{n}{2} \right\rceil - 1$ more guaranteed envy-free-relations than the Last Diminisher protocol, which previously was the best finite bounded proportional cake-cutting protocol with respect to the {DGEF}. To achieve this significantly enhanced DGEF, our protocol makes use of parallelization with respect to the leftmost and the rightmost pieces. In this regard, adjusting the values of the pieces to be marked from $\nicefrac{1}{n}$ to $\nicefrac{1}{s}$ (with $s$ players still in the game) and applying an appropriate inner-loop procedure is crucial to make the parallelization work. In addition to an enhanced DGEF, our protocol still has the other useful properties the Last Diminisher protocol is known to possess, such as strategy-proofness.
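For concreteness, the improvement can be quantified by comparing the DGEF formulas that appear in this paper; a small sketch, where the Last Diminisher value $2 + \nicefrac{n(n-1)}{2}$ is inferred from the ``off only by one'' remark in Section~\ref{sec:discussion}:

```python
def dgef_divide_conquer(n):
    """n * floor(log2 n) + 2n - 2^(floor(log2 n) + 1)."""
    k = n.bit_length() - 1  # floor(log2 n)
    return n * k + 2 * n - 2 ** (k + 1)

def dgef_last_diminisher(n):
    """One less than the trading-related bound 3 + n(n-1)/2."""
    return 2 + n * (n - 1) // 2

def dgef_new_protocol(n):
    """ceil(n/2) - 1 more guaranteed envy-free-relations than
    the Last Diminisher protocol."""
    return dgef_last_diminisher(n) + (n + 1) // 2 - 1

# For n = 4: Divide and Conquer 8, Last Diminisher 8, new protocol 9,
# out of a maximum of n(n-1) = 12 possible envy-free-relations.
assert (dgef_divide_conquer(4), dgef_last_diminisher(4),
        dgef_new_protocol(4)) == (8, 8, 9)
```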
In general, we suggest targeting improvements on the ``level of envy-freeness'' already in the design of cake-cutting protocols rather than trying to improve on the number of envy-free-relations afterwards for the division obtained (e.g., via trading~\cite{fel-kir:j:fairness-and-envy,cha:j:measure-envy}). In terms of future research, this approach encourages the development of new protocols with even higher degrees of guaranteed envy-freeness---one may even think of modifications that focus on the development of protocols with ``balanced envy-freeness'' while keeping the DGEF high. \subsubsection*{Acknowledgments} We are grateful to Ariel Procaccia for many interesting discussions and pointers to the literature. In particular, the second author thanks him for drawing his attention to the fascinating area of cake-cutting during a visit to Hebrew University of Jerusalem, and he thanks Jeff Rosenschein for hosting this visit. We also thank Magnus Roos for helpful comments. {\small \bibliographystyle{alpha}
\section{Introduction} In this paper we propose a CNN architecture for semantic image segmentation. Given an image $\mathcal{I}=(x_1,\ldots,x_N)$ with $N$ pixels $x_i$, the task of semantic segmentation is to infer a labeling $Y=(y_1,\ldots,y_N)$ with a label $y_i\in\mathcal{Y}$ for every pixel. This problem can be naturally formulated as a structured prediction problem $g:\mathcal{I}\rightarrow Y$. Empirical performance is measured by comparing $Y$ to a human-labeled $Y^*$ via a loss function $\Delta(Y,Y^*)$, e.g., with the Intersection over Union (IoU) or pixel-wise Hamming Loss. A direct way to approach this problem would be to ignore the structure of the output variable $Y$ and train a classifier that predicts the class membership of the center pixel of a given image patch. This procedure reduces the problem to a standard multi-class classification problem and allows the use of standard learning algorithms. The resulting classifier is then evaluated at every possible patch in a sliding window fashion (or using coarse-to-fine strategies) to yield a full segmentation of the image. With high capacity models and large amounts of training data this approach would be sufficient, given that the loss decomposes over the pixels. Such a per-pixel approach ignores the relationship between the variables $(y_1,\ldots,y_N)$, which are not i.i.d.~since there is an underlying common image. Therefore, besides learning discriminative per-pixel classifiers, most segmentation approaches further encode the output relationship of $Y$. A dominating approach is to use Conditional Random Fields (CRF)~\cite{lafferty2001crf}, which allow an elegant and principled way to combine single pixel predictions and shared structure through unary, pairwise and higher order factors.
\begin{figure}[t] \begin{center} \centerline{\includegraphics[width=0.8\textwidth]{figures/net_illustration.pdf}} \mycaption{Illustration of CNN layout} {We insert the \emph{Bilateral Inception (BI)} modules between the \emph{FC} ($1\times1$ convolution) layers found in most networks thus removing the necessity of further up-scaling algorithms. Bilateral Inception modules also propagate information between distant pixels based on their spatial and color similarity and work better than other label propagation approaches.}\label{fig:illustration} \end{center} \end{figure} What relates the outputs $(y_1,\ldots,y_N)$? The common hypothesis that we use in this paper could be summarized as: \emph{Pixels that are spatially and photometrically similar are more likely to have the same label.} In particular, if two pixels $x_i,x_j$ are close in the image and have similar $RGB$ values, then their corresponding labels $y_i,y_j$ will most likely be the same. The most prominent example of spatial similarity encoded in a CRF is the Potts model (Ising model for the binary case). The work of~\cite{krahenbuhl2012efficient} described a densely connected pairwise CRF (DenseCRF) that includes pairwise factors encoding both spatial \emph{and} photometric similarity. The DenseCRF has been used in many recent works on image segmentation, which also find empirically improved results over pure pixel-wise CNN classifiers~\cite{chen2014semantic,bell2015minc,zheng2015conditional,chen2015semantic}. In this paper, we implement the above-mentioned hypothesis that photometrically similar and nearby pixels share common labels by designing a new ``Bilateral Inception'' (BI) module that can be inserted before/after the last $1\times1$ convolution layers (which we refer to as `FC' layers - `Fully-Connected' in the original image classification network) of the standard segmentation CNN architectures.
The bilateral inception module performs edge-aware information propagation across different spatial CNN units of the previous FC layer. Instead of using the spatial grid-layout that is common in CNNs, we incorporate the superpixel-layout for information propagation. The information propagation is performed using standard bilateral filters with Gaussian kernels, at different feature scales. This construction is inspired by~\cite{szegedy2014googlenet,lin2014network}. Feature spaces and other parameters of the modules can be learned end-to-end using standard backpropagation techniques. The application of superpixels reduces the number of necessary computations and implements a long-range edge-aware inference between different superpixels. Moreover, since superpixels provide an output at the full image resolution, they remove the need for any additional post-processing step. We introduce BI modules in the CNN segmentation models of~\cite{chen2014semantic,zheng2015conditional,bell2015minc}. See Fig.~\ref{fig:illustration} for an illustration. This achieves better segmentation results than the interpolation/inference techniques of DenseCRF~\cite{bell2015minc,chen2014semantic} on all three datasets we experimented with, while being faster. Moreover, the results compare favorably against some recently proposed dense pixel prediction techniques. As illustrated in Fig.~\ref{fig:illustration}, the BI modules provide an alternative approach to commonly used up-sampling and CRF techniques. \section{Related Work}\label{sec:related} The literature on semantic segmentation is large and therefore we will limit our discussion to those works that perform segmentation with CNNs and discuss the different ways to encode the output structure. A natural combination of CNNs and CRFs is to use the CNN as unary potential and combine it with a CRF that also includes pairwise or higher order factors.
For instance,~\cite{chen2014semantic,bell2015minc} observed large improvements in pixel accuracy when combining a DenseCRF~\cite{krahenbuhl2012efficient} with a CNN. The mean-field steps of the DenseCRF can be learned and back-propagated as noted by~\cite{domke2013learning} and implemented by~\cite{zheng2015conditional,arxivpaper,li2014mean,schwing2015fully} for semantic segmentation and by~\cite{kiefel2014human} for human pose estimation. The works of~\cite{chen2014learning,lin2015efficient,liu2015semantic} also use CNNs in pairwise and higher order factors for more expressiveness. The recent work of~\cite{chen2015semantic} replaced the costly DenseCRF with a faster domain transform that performs smoothing filtering while predicting the image edge maps at the same time. Our work was inspired by DenseCRF approaches, but with the aim of replacing the expensive mean-field inference. Instead of propagating information across unaries obtained by a CNN, we aim to do edge-aware information propagation across \textit{intermediate} representations of the CNN. Experiments on different datasets indicate that the proposed approach generally gives better results than DenseCRF while being faster. A second group of works aims to inject structural knowledge into intermediate CNN representations by using structural layers among CNN internal layers. Deconvolution layers from~\cite{zeiler2010deconvolutional} are widely used for local propagation of information. They are computationally efficient and are used in segmentation networks, \emph{e.g.,}~\cite{long2014fully}. They are however limited to small receptive fields. Another architecture proposed in~\cite{he2014spatial} uses spatial pyramid pooling layers to max-pool over different spatial scales. The work of~\cite{ionescu2015matrix} proposed specialized structural layers such as normalized-cut layers with matrix back-propagation techniques.
All these works either have fixed local receptive fields and/or have complexity that increases exponentially with longer range pixel connections. Our technique allows for modeling long range (super-)pixel dependencies without compromising computational efficiency. A very recent work~\cite{yu2015multi} proposed the use of dilated convolutions for propagating multi-scale contextual information among CNN units. A contribution of this work is to define convolutions over superpixels by defining connectivity among them. In~\cite{he2015supercnn}, a method to use superpixels inside CNNs has been proposed by re-arranging superpixels based on their features. The technique proposed here is more generic and alleviates the need for rearranging superpixels. A method to filter irregularly sampled data has been developed in~\cite{bruna2013spectral}, which may be applicable to superpixel convolutions. The difference is that their method requires a pre-defined graph structure for every example/image separately, while our approach directly works on superpixels. We experimented with Isomap embeddings~\cite{tenenbaum2000global} of superpixels but for speed reasons opted for the more efficient kernels presented in this paper. The work of~\cite{mostajabi2014feedforward} extracts multi-scale features at each superpixel and performs semantic segmentation by classifying each superpixel independently. In contrast, we propagate information across superpixels by using bilateral filters with learned feature spaces. Another core contribution of this work is the end-to-end trained bilateral filtering module. Several recent works on bilateral filtering~\cite{barron2015fast,barron2015defocus,kiefel15bnn,arxivpaper} back-propagate through the permutohedral lattice approximation~\cite{adams2010fast}, either to learn the filter parameters~\cite{kiefel15bnn,arxivpaper} or to do optimization in the bilateral space~\cite{barron2015fast,barron2015defocus}.
Most of the existing works on bilateral filtering use pre-defined feature spaces. In~\cite{campbell2013fully}, the feature spaces for bilateral filtering are obtained via a non-parametric embedding into a Euclidean space. In contrast, by explicitly computing the bilateral filter kernel, we are able to back-propagate through features, thereby learning task-specific feature spaces for bilateral filters through integration into end-to-end trainable CNNs. \section{Superpixel Convolutional Networks} We first formally introduce superpixels in Sec.~\ref{sec:superpixels} before we describe the bilateral inception modules in Sec.~\ref{sec:inception}. \subsection{Superpixels}\label{sec:superpixels} The term \emph{superpixel} refers to a set of $n_i$ pixels $S_i=\{t_1,\ldots,t_{n_i}\}$ with pixel indices $t_k\in\{1,\ldots,N\}$. We use a set of $M$ superpixels $S=\{S_1,\ldots,S_M\}$ that are disjoint, $S_i\cap S_j=\emptyset, \forall i\neq j$, and decompose the image, $\cup_i S_i = \mathcal{I}$. Superpixels have long been used for image segmentation in many previous works, \emph{e.g.,}~\cite{Gould:ECCV2014,gonfaus2010harmony,nowozin2010parameter,mostajabi2014feedforward}, as they provide a reduction of the problem size. Instead of predicting a label $y_i$ for every pixel $x_i$, the classifier predicts a label $y_i$ per superpixel $S_i$ and extends this label to all pixels within. A superpixel algorithm can pre-group pixels based on spatial and photometric similarity, reducing the number of elements and thereby also regularizing the problem in a meaningful way. The downside is that superpixels introduce a quantization error whenever pixels within one segment have different ground truth label assignments.
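To make the superpixel decomposition concrete, the following is a minimal NumPy sketch (illustrative only, not part of the Caffe implementation; function names are ours) of extending per-superpixel predictions to all pixels and of measuring the quantization error as the best achievable pixel accuracy:

```python
import numpy as np

def extend_superpixel_labels(sp_map, sp_labels):
    """Extend one predicted label per superpixel to every pixel it covers.

    sp_map    : (H, W) int array; sp_map[p] is the superpixel index of pixel p
    sp_labels : (M,) int array; one predicted class label per superpixel
    """
    return sp_labels[sp_map]  # fancy indexing broadcasts labels to pixels

def quantization_error(sp_map, gt):
    """Best achievable pixel accuracy when every superpixel is assigned the
    majority ground-truth label of the pixels it contains."""
    M = sp_map.max() + 1
    best = np.empty(M, dtype=gt.dtype)
    for s in range(M):
        labels, counts = np.unique(gt[sp_map == s], return_counts=True)
        best[s] = labels[np.argmax(counts)]
    return (best[sp_map] == gt).mean()
```

Any pixel whose ground-truth label differs from its superpixel's majority label is unavoidably misclassified, which is exactly the quantization error discussed above.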
\begin{wrapfigure}[18]{r}{5.3cm} \includegraphics[width=0.4\textwidth]{figures/superpixel_plot_both.pdf} \mycaption{Superpixel Quantization Error} {Best achievable segmentation performance with a varying number of superpixels on Pascal VOC12 segmentation~\cite{voc2012segmentation} and MINC material segmentation~\cite{bell2015minc} datasets.}\label{fig:quantization} \end{wrapfigure} Figure~\ref{fig:quantization} shows the superpixel quantization effect, that is, the best achievable performance as a function of the number of superpixels, on two different segmentation datasets: PascalVOC~\cite{voc2012segmentation} and Materials in Context~\cite{bell2015minc}. We find that the quantization effect is small compared to the current best segmentation performance. In practice, we use SLIC superpixels~\cite{achanta2012slic} for their runtime and~\cite{DollarICCV13edges} for their lower quantization error to decompose the image into superpixels. For details of the algorithms, please refer to the respective papers. We use the publicly available real-time GPU implementation of SLIC, called gSLICr~\cite{gSLICr_2015}, which runs at over 250 frames per second. The publicly available Dollar superpixel code~\cite{DollarICCV13edges} computes a superpixelization for a $400\times 500$ image in about 300ms using an Intel Xeon 3.33GHz CPU. \subsection{Bilateral Inceptions}\label{sec:inception} Next, we describe the \emph{Bilateral Inception Module} (BI) that performs Gaussian bilateral filtering on multiple scales of the representations within a CNN. The BI module can be inserted in between layers of existing CNN architectures. {\bfseries Bilateral Filtering:} We first describe Gaussian bilateral filtering, the building block of the BI module. A visualisation of the necessary computations is shown in Fig.~\ref{fig:bi_module}. Consider the CNN activations of the previous layer, $\mathbf{z}\in\mathbb{R}^{P\times C}$, that is, $P$ points with $C$ filter responses each.
With $\mathbf{z}_c\in\mathbb{R}^P$ we denote the vector of activations of filter $c$. Additionally, we have for every point $j$ a feature vector $\mathbf{f}_j\in\mathbb{R}^D$. This can denote its spatial position ($D=2$, not necessarily a grid), position and RGB color ($D=5$), or others. Separate from the input points with features $F_{in}=\{\mathbf{f}_1,\ldots,\mathbf{f}_P\}$ we have $Q$ output points with features $F_{out}$. These can be fewer ($Q<P$), the same ($Q=P$), or more ($Q>P$) points. For example, we can filter a $10\times 10$ grid ($P=100$) and produce the result on a $50\times 50$ grid ($Q=2500$) or vice versa. The bilateral filtered result will be denoted as $\hat{\mathbf{z}}\in\mathbb{R}^{Q\times C}$. We apply the same Gaussian bilateral filter to every channel $c$ separately. A filter has two free parameters: the filter specific scale $\theta\in\mathbb{R}_+$ and the global feature transformation parameters $\Lambda\in\mathbb{R}^{D\times D}$. For $\Lambda$, a more general scaling could be applied using more features or a separate CNN. Technically, the bilateral filtering amounts to a matrix-vector multiplication $\forall c$: \begin{equation} \hat{\mathbf{z}}_c = K(\theta, \Lambda, F_{in}, F_{out}) \mathbf{z}_c, \label{eq:filter} \end{equation} where $K\in\mathbb{R}^{Q\times P}$ has entries, for $\mathbf{f}_i\in F_{out}$ and $\mathbf{f}_j\in F_{in}$: \begin{equation} K_{i,j} = \frac{\exp(-\theta\|\Lambda \mathbf{f}_i- \Lambda \mathbf{f}_j\|^2)}{\sum_{j'}\exp(-\theta\|\Lambda \mathbf{f}_i- \Lambda \mathbf{f}_{j'}\|^2)}. \label{eq:kernel} \end{equation} In kernel learning terminology, $K$ is nothing but a Gaussian Gram matrix, and it is symmetric if $F_{in}=F_{out}$. We implemented this filtering in Caffe~\cite{jia2014caffe} using different layers as depicted in Fig.~\ref{fig:bi_module}.
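The two equations above can be sketched in a few lines of NumPy; this is an illustrative re-implementation under our own naming, not the Caffe layers used in the paper:

```python
import numpy as np

def bilateral_kernel(f_out, f_in, theta, Lam):
    """Dense, row-normalized Gaussian bilateral kernel K.

    f_out : (Q, D) output-point features; f_in : (P, D) input-point features
    theta : positive scalar scale; Lam : (D, D) feature transformation Lambda
    """
    g_out = f_out @ Lam.T                     # transformed features Lambda f_i
    g_in = f_in @ Lam.T                       # transformed features Lambda f_j
    d2 = ((g_out[:, None, :] - g_in[None, :, :]) ** 2).sum(-1)  # pairwise ||.||^2
    K = np.exp(-theta * d2)
    return K / K.sum(axis=1, keepdims=True)   # normalize over input points j

def bilateral_filter(z, f_out, f_in, theta, Lam):
    """Filter all C channels of z (P, C) at once: returns (Q, C)."""
    return bilateral_kernel(f_out, f_in, theta, Lam) @ z
```

Because each row of $K$ sums to one, filtering a constant signal returns the same constant, which is a quick sanity check for an implementation.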
While approximate computations of $K\mathbf{z}_c$ exist and have improved runtime~\cite{adams2010fast,paris2006fast,gastal2011domain,adams2009gaussian}, we chose an explicit computation of $K$ due to its small size. Our implementation makes use of the GPU, and the intermediate pairwise similarity computations are re-used across different modules. The entire runtime is only a fraction of the CNN runtime, but of course applications with larger values of $P$ and $Q$ would require the aforementioned algorithmic speed-ups. \begin{figure}[t] \begin{center} \centerline{\includegraphics[width=0.9\textwidth]{figures/bi_module/new_bi_module.pdf}} \mycaption{Computation flow of the Gaussian Bilateral Filtering} { We implemented the bilateral convolution with five separate computation blocks. $\Lambda$ and $\theta$ are the free parameters.}\label{fig:bi_module} \end{center} \end{figure} \begin{figure}[t] \begin{center} \centerline{\includegraphics[width=0.9\textwidth]{figures/inception_module.pdf}} \mycaption{Visualization of a Bilateral Inception (BI) Module} {The unit activations $\mathbf{z}$ are passed through several bilateral filters defined over different feature spaces. The result is linearly combined to $\bar{\mathbf{z}}$ and passed on to the next network layer. Also shown are sample filtered superpixel images using bilateral filters defined over different example feature spaces. $(u,v)$ correspond to position and $(r,g,b)$ correspond to color features.}\label{fig:inception} \end{center} \end{figure} {\bfseries Bilateral Inception Module:} The \textit{bilateral inception module} (BI) is a weighted combination of different bilateral filters. We combine the output of $H$ different filter kernels $K$, with different scales $\theta^1,\ldots,\theta^H$. All kernels use the same feature transformation $\Lambda$, which allows for easier pre-computation of pairwise differences and avoids an over-parametrization of the filters.
The outputs of different filters $\hat{\mathbf{z}}^h$ are combined linearly to produce $\bar{\mathbf{z}}$: \begin{equation} \bar{\mathbf{z}}_c = \sum_{h=1}^H \mathbf{w}_c^h \hat{\mathbf{z}}_c^h, \label{eq:module} \end{equation} using individual weights $\mathbf{w}_c^h$ per scale $\theta^h$ and channel $c$. The weights $\mathbf{w} \in \mathbb{R}^{H\times C}$ are learned using error-backpropagation. The result of the inception module has $C$ channels for each of its $Q$ points, thus $\bar{\mathbf{z}} \in \mathbb{R}^{Q \times C}$. The inception module is schematically illustrated in Fig.~\ref{fig:inception}. In short, information from the CNN layers below is filtered using bilateral filters defined in a transformed feature space ($\Lambda \mathbf{f}$). Most operations in the inception module are parallelizable, resulting in fast runtimes on a GPU. In this work, inspired by the DenseCRF architecture of~\cite{krahenbuhl2012efficient}, we use pairs of BI modules: one with position features $(u,v)$ and another with both position and colour features $(u,v,r,g,b)$, each with multiple scales $\{\theta^h\}$. {\bfseries Motivation and Comparison to DenseCRF:} A BI module filters the activations of a CNN layer. Contrast this with the use of a DenseCRF on the CNN output. At that point, the fine-grained information that intermediate CNN layers represent has already been condensed to a low-dimensional vector representing beliefs over labels. A mean-field update then propagates information between these beliefs. Similar behaviour is obtained with the BI modules, but on different scales (using multiple different filters $K(\theta^h)$) and on the intermediate CNN activations $\mathbf{z}$. Since, in the end, the to-be-predicted pixels are not i.i.d., this blurring leads to better performance both when using a bilateral filter as an approximate message passing step of a DenseCRF and in the system outlined here.
Both attempts encode prior knowledge about the problem, namely that pixels close in position and color are likely to have the same label. Therefore such pixels can also have the same intermediate representation. Consider, hypothetically, averaging the CNN representations of all pixels that have the same ground truth label. This would result in an intermediate CNN representation that would be very easy to classify for the later layers. \subsection{Superpixel Convolutions} The bilateral inception module allows changing how information is stored in the higher levels of a CNN. This is where the superpixels are used. Instead of storing information on a fixed grid, we compute, for every image, superpixels $S$ and use the mean color and position of their included pixels as features. We can insert bilateral inception modules to change from grid representations to superpixel representations and vice versa. Inception modules in between superpixel layers convolve the unit activations between all superpixels depending on their distance in the feature space. This retains all properties of the bilateral filter: superpixels that are spatially close and have a similar mean color will have a stronger influence on each other. Superpixels are not the only choice; in principle, one can also sample random points from the image and use them as intermediate representations. We use superpixels for computational reasons, since they can be used to propagate label information to the full image resolution. Other interpolation techniques are possible, including the well known bilinear interpolation, up-convolution networks~\cite{zeiler2010deconvolutional}, and DenseCRFs~\cite{krahenbuhl2012efficient}. The quantization error mentioned in Sec.~\ref{sec:superpixels} only enters because the superpixels are used for interpolation. Also note that a fixed grid that is independent of the image is a hard choice of where information should be stored.
One could in principle evaluate the CNN densely, at all possible spatial locations, but we found that this resulted in poor performance compared to interpolation methods. \subsubsection{Backpropagation and Training.} All free parameters of the inception module, $\mathbf{w}$, $\{\theta^h\}$ and $\Lambda$, are learned via backpropagation. We also backpropagate the error with respect to the module inputs, thereby enabling the integration of our inception modules inside CNN frameworks without breaking the end-to-end learning paradigm. As shown in Fig.~\ref{fig:bi_module}, the bilateral filtering can be decomposed into 5 different sub-layers. Derivatives with respect to the free parameters are obtained by the corresponding layer and standard backpropagation through the directed acyclic graph. For example, $\Lambda$ is optimized by back-propagating gradients through the $1\times1$ convolution. Derivatives for non-standard layers (pairwise similarity, matrix multiplication) are straightforward to obtain using matrix calculus. To let different filters learn information propagation at different scales, we initialized $\{\theta^h\}$ with well separated scalar values (\emph{e.g.,} $\{1, 0.7, 0.3,\ldots\}$). The learning is performed using the Adam stochastic optimization method~\cite{kingma2014adam}. The implementation is done in the Caffe neural network framework~\cite{jia2014caffe}, and the code is available online at http://segmentation.is.tuebingen.mpg.de.
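For reference, the forward pass of one BI module, multi-scale Gaussian bilateral filtering followed by the learned per-scale, per-channel combination, can be sketched as follows. This is a simplified NumPy version assuming $F_{in}=F_{out}$ (all names are ours, not the Caffe layer names):

```python
import numpy as np

def bi_module(z, f, thetas, Lam, w):
    """Forward pass of a Bilateral Inception module (a sketch).

    z      : (P, C) activations from the previous layer
    f      : (P, D) per-point features (e.g. mean position/color of superpixels)
    thetas : H scalar scales theta^h
    Lam    : (D, D) shared feature transformation Lambda
    w      : (H, C) combination weights, one per scale and channel
    """
    g = f @ Lam.T                                            # transformed features
    d2 = ((g[:, None, :] - g[None, :, :]) ** 2).sum(-1)      # pairwise distances,
                                                             # computed once, re-used
    z_bar = np.zeros_like(z, dtype=float)
    for h, theta in enumerate(thetas):
        K = np.exp(-theta * d2)
        K /= K.sum(axis=1, keepdims=True)                    # row-normalize
        z_bar += (K @ z) * w[h]                              # weight each channel
    return z_bar
```

Note that the pairwise distance matrix is computed once and shared across all $H$ scales, mirroring the shared $\Lambda$ and pre-computed pairwise differences described above.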
\definecolor{voc_1}{RGB}{0, 0, 0} \definecolor{voc_2}{RGB}{128, 0, 0} \definecolor{voc_3}{RGB}{0, 128, 0} \definecolor{voc_4}{RGB}{128, 128, 0} \definecolor{voc_5}{RGB}{0, 0, 128} \definecolor{voc_6}{RGB}{128, 0, 128} \definecolor{voc_7}{RGB}{0, 128, 128} \definecolor{voc_8}{RGB}{128, 128, 128} \definecolor{voc_9}{RGB}{64, 0, 0} \definecolor{voc_10}{RGB}{192, 0, 0} \definecolor{voc_11}{RGB}{64, 128, 0} \definecolor{voc_12}{RGB}{192, 128, 0} \definecolor{voc_13}{RGB}{64, 0, 128} \definecolor{voc_14}{RGB}{192, 0, 128} \definecolor{voc_15}{RGB}{64, 128, 128} \definecolor{voc_16}{RGB}{192, 128, 128} \definecolor{voc_17}{RGB}{0, 64, 0} \definecolor{voc_18}{RGB}{128, 64, 0} \definecolor{voc_19}{RGB}{0, 192, 0} \definecolor{voc_20}{RGB}{128, 192, 0} \definecolor{voc_21}{RGB}{0, 64, 128} \definecolor{voc_22}{RGB}{128, 64, 128} \begin{figure*}[t] \tiny \centering \subfigure{% \includegraphics[width=.15\columnwidth]{figures/2007_000033_given.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/2007_000033_sp.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/2007_000033_gt.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/2007_000033_cnn.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/2007_000033_crf.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/2007_000033_ours.png} }\\[-2ex] \setcounter{subfigure}{0} \subfigure[\tiny Input]{% \includegraphics[width=.15\columnwidth]{figures/2009_003564_given.png} } \subfigure[\tiny Superpixels]{% \includegraphics[width=.15\columnwidth]{figures/2009_003564_sp.png} } \subfigure[\tiny GT]{% \includegraphics[width=.15\columnwidth]{figures/2009_003564_gt.png} } \subfigure[\tiny Deeplab]{% \includegraphics[width=.15\columnwidth]{figures/2009_003564_cnn.png} } \subfigure[\tiny +DenseCRF]{% \includegraphics[width=.15\columnwidth]{figures/2009_003564_crf.png} } \subfigure[\tiny Using BI]{% \includegraphics[width=.15\columnwidth]{figures/2009_003564_ours.png} 
} \mycaption{Semantic Segmentation}{Example results of semantic segmentation on Pascal VOC12 dataset. (d) depicts the DeepLab CNN result, (e) CNN + 10 steps of mean-field inference, (f) result obtained with bilateral inception (BI) modules (\bi{6}{2}+\bi{7}{6}) between \fc~layers.}\label{fig:semantic_visuals} \end{figure*} \section{Experiments} We study the effect of inserting and learning bilateral inception modules in various existing CNN architectures. As a testbed, we perform experiments on semantic segmentation using the Pascal VOC2012 segmentation benchmark dataset~\cite{voc2012segmentation} and the Cityscapes street scene dataset~\cite{Cordts2015Cvprw}, and on material segmentation using the Materials in Context (MINC) dataset from~\cite{bell2015minc}. We take different CNN architectures from the works of~\cite{chen2014semantic,zheng2015conditional,bell2015minc} and insert the inception modules before and/or after the spatial FC layers. In the supplementary, we present some quantitative results with approximate bilateral filtering using the permutohedral lattice~\cite{adams2010fast}. \subsection{Semantic Segmentation} We first use the Pascal VOC12 segmentation dataset~\cite{voc2012segmentation} with 21 object classes. For all experiments on VOC2012, we train using the extended training set of 10581 images collected by~\cite{hariharan2011moredata}. Following~\cite{zheng2015conditional}, we use a reduced validation set of 346 images. We experiment on two different network architectures: (a) the DeepLab model from~\cite{chen2014semantic}, which uses a CNN followed by DenseCRF, and (b) the CRFasRNN model from~\cite{zheng2015conditional}, which uses a CNN with deconvolution layers followed by a DenseCRF trained end-to-end. \subsubsection{DeepLab}\label{sec:deeplabmodel} We use the publicly available state-of-the-art pre-trained CNN models from~\cite{chen2014semantic}. We use the DeepLab-LargeFOV variant as a base architecture and refer to it as `DeepLab'.
The DeepLab~CNN model produces a lower resolution prediction ($\frac{1}{8}\times$) which is then bilinearly interpolated to the input image resolution. The original models have been fine-tuned using both the MSCOCO~\cite{lin2014microsoft} and the extended VOC~\cite{hariharan2011moredata} datasets. Next, we describe modifications to these models and show performance improvements in terms of both IoU and runtimes. \begin{wraptable}[24]{r}{0pt} \scriptsize \begin{tabular}{>{\raggedright\arraybackslash}p{3.3cm}>{\raggedright\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{0.8cm}>{\centering\arraybackslash}p{1.0cm}} \toprule \textbf{Model} & Training & \emph{IoU} & \emph{Runtime}\\ \midrule \scriptsize DeepLab~\cite{chen2014semantic} & & 68.9 & 145ms\\ \midrule With BI modules & & & \\ \bi{6}{2} & only BI & \href{http://host.robots.ox.ac.uk:8080/anonymous/31URIG.html}{70.8} & +20 \\ \bi{6}{2} & BI+FC & \href{http://host.robots.ox.ac.uk:8080/anonymous/JOB8CE.html}{71.5} & +20\\ \bi{6}{6} & BI+FC & \href{http://host.robots.ox.ac.uk:8080/anonymous/IB1UAZ.html}{72.9} & +45\\ \bi{7}{6} & BI+FC & \href{http://host.robots.ox.ac.uk:8080/anonymous/EQB3CR.html}{73.1} & +50\\ \bi{8}{10} & BI+FC & \href{http://host.robots.ox.ac.uk:8080/anonymous/JR27XL.html}{72.0} & +30\\ \bi{6}{2}-\bi{7}{6} & BI+FC & \href{http://host.robots.ox.ac.uk:8080/anonymous/VOTV5E.html}{73.6} & +35\\ \bi{7}{6}-\bi{8}{10} & BI+FC & \href{http://host.robots.ox.ac.uk:8080/anonymous/X7A3GP.html}{73.4} & +55\\ \bi{6}{2}-\bi{7}{6} & FULL & \href{http://host.robots.ox.ac.uk:8080/anonymous/CLLB3J.html}{\textbf{74.1}} & +35\\ \bi{6}{2}-\bi{7}{6}-CRF & FULL & \href{http://host.robots.ox.ac.uk:8080/anonymous/7NGWWU.html}{\textbf{75.1}} & +865\\ \midrule DeepLab-CRF~\cite{chen2014semantic} & & 72.7 & +830\\ DeepLab-MSc-CRF~\cite{chen2014semantic} & & \textbf{73.6} & +880\\ DeepLab-EdgeNet~\cite{chen2015semantic} & & 71.7 & +30\\ DeepLab-EdgeNet-CRF~\cite{chen2015semantic} & & \textbf{73.6} & +860\\ \bottomrule 
\end{tabular} \mycaption{Semantic Segmentation using DeepLab~model} {IoU scores on Pascal VOC12 segmentation test dataset and average runtimes (ms) corresponding to different models. Also shown are the results corresponding to competitive dense pixel prediction techniques that used the same base DeepLab CNN. Runtimes also include superpixel computation (6ms). In the second column, `BI', `FC' and `FULL' correspond to training `BI', `FC' and full model layers respectively.} \label{tab:deeplabresults} \end{wraptable} We add inception modules after different FC layers in the original model and remove the DenseCRF post-processing. For this dataset, we use 1000 SLIC superpixels~\cite{achanta2012slic,gSLICr_2015}. The inception modules after \fc{6}, \fc{7} and \fc{8} layers are referred to as \bi{6}{H}, \bi{7}{H} and \bi{8}{H} respectively, where $H$ is the number of kernels. All results using the~DeepLab~model on the Pascal VOC12 dataset are summarized in Tab.~\ref{tab:deeplabresults}. We report the `test' numbers without validation numbers, because the released DeepLab model that we adapted was trained using both the train and validation sets. The~DeepLab~network achieves an IoU of 68.9 after bilinear interpolation. Experiments with the \bi{6}{2} module indicate that learning only the inception module while keeping the remaining network fixed already results in a reliable IoU improvement ($+1.9$). Additional joint training with the \fc{} layers significantly improves the performance. The results also show that more kernels improve performance. Next, we add multiple modules to the base DeepLab network at various stages and train them jointly. This further improves the performance. The \bi{6}{2}-\bi{7}{6} model with two inception modules shows a significant IoU improvement of $4.7$ and $0.9$ in comparison to the baseline model and the DenseCRF application, respectively.
Finally, finetuning the entire network (FULL in Tab.~\ref{tab:deeplabresults}) boosts the performance by $5.2$ and $1.4$ compared to the baseline and the DenseCRF application. Some visual results are shown in Fig.~\ref{fig:semantic_visuals} and more are included in the supplementary. Several other variants of using BI are conceivable. During our experiments, we have observed that more kernels and more modules improve the performance, so we expect that even better results can be achieved. In Tab.~\ref{tab:deeplabresults}, the runtime (ms) is included for several models. These numbers have been obtained using a Nvidia Tesla K80 GPU and standard Caffe time benchmarking~\cite{jia2014caffe}. DenseCRF timings are taken from~\cite{chen2015semantic}. The runtimes indicate that the overhead with BI modules is quite minimal in comparison to using DenseCRF. In addition, we include the results of some other dense pixel prediction methods that are built on top of the same DeepLab base model. DeepLab-MSc-CRF~is a multi-scale version~\cite{chen2014semantic} of DeepLab~with DenseCRF on top. DeepLab-EdgeNet~\cite{chen2015semantic} is a recently proposed fast and discriminatively trained domain transform technique for propagating information across pixels. Comparison with these techniques in terms of performance and runtime indicates that our approach performs on par with the latest dense pixel prediction techniques with significantly less time overhead. Several state-of-the-art CNN based systems~\cite{lin2015efficient,liu2015semantic} have achieved higher results than DeepLab~on Pascal VOC12. These models are not yet publicly available, so we could not test the use of BI modules in them. A close variant~\cite{barron2015fast} of our work, which proposes to do optimization in the bilateral space, also has fast runtimes, but reported lower performance in comparison to the application of DenseCRF.
\begin{wraptable}[14]{r}{0pt} \scriptsize \begin{tabular}{>{\raggedright\arraybackslash}p{4.0cm}>{\centering\arraybackslash}p{0.7cm}>{\centering\arraybackslash}p{1.0cm}} \toprule \textbf{Model} & \emph{IoU} & \emph{Runtime}\\ \midrule \scriptsize DeconvNet(CNN+Deconv.) & 72.0 & 190ms \\ \midrule With BI modules & & \\ \bi{3}{2}-\bi{4}{2}-\bi{6}{2}-\bi{7}{2} & \textbf{74.9} & 245 \\ \midrule CRFasRNN (DeconvNet-CRF)& 74.7 & 2700\\ \bottomrule \end{tabular} \mycaption{Semantic Segmentation using CRFasRNN model}{IoU scores and runtimes corresponding to different models on Pascal VOC12 test dataset. Note that runtime also includes superpixel computation.} \label{tab:deconvresults} \end{wraptable} \subsubsection{CRFasRNN} As a second architecture, we modified the CNN architecture trained by~\cite{zheng2015conditional}, which produces a result at an even lower resolution ($\frac{1}{16} \times$). Multiple deconvolution steps are employed to obtain the segmentation at input image resolution. This result is then passed on to the DenseCRF recurrent neural network to obtain the final segmentation result. We insert BI modules after the score-pool3, score-pool4, \fc{6} and \fc{7} layers; please see~\cite{long2014fully,zheng2015conditional} for the network architecture details. Instead of combining outputs from the above layers with deconvolution steps, we introduce BI modules after them and linearly combine the outputs to obtain the final segmentation result. Note that we entirely removed both the deconvolution and the DenseCRF parts of the original model~\cite{zheng2015conditional}. See Tab.~\ref{tab:deconvresults} for results on the DeconvNet model. Without the DenseCRF part and only evaluating the deconvolutional part of this model, one obtains an IoU score of $72.0$. Ten steps of mean-field inference increase the IoU to $74.7$~\cite{zheng2015conditional}.
Our model, with few additional parameters compared to the base CNN, achieves an IoU of $74.9$, an improvement of 0.2 over the CRFasRNN model. The BI layers lead to better performance than deconvolution and DenseCRF combined while being much faster. \subsubsection{Hierarchical Clustering Analysis} We learned the network parameters using 1000 gSLIC superpixels per image; however, the inception module allows changing the resolution (a non-square $K$). To illustrate this, we perform agglomerative clustering of the superpixels, sequentially merging the nearest two superpixels into a single one. We then evaluated the DeepLab-\bi{6}{2}-\bi{7}{6} network using different levels of the resulting hierarchy, re-using all the trained network parameters. Results in Fig.~\ref{fig:clustering} show that the IoU score on the validation set decreases slowly with decreasing number of points and then drops for fewer than 200 superpixels. This validates that the network generalizes to different superpixel layouts and that it is sufficient to represent larger regions of similar color by fewer points. In the future, we plan to explore different strategies to allocate the representation to those regions that require more resolution and to remove the superpixelization altogether. Fig.~\ref{fig:clustering} shows an example image with 200, 600, and 1000 superpixels and the segmentations obtained with BI modules.
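One step of the agglomerative clustering used above, merging the two superpixels whose features are closest, can be sketched as follows. This is illustrative NumPy code under our own assumptions (Euclidean distance on the mean color/position features and a size-weighted mean for the merged superpixel):

```python
import numpy as np

def merge_nearest(features, sizes):
    """Merge the two superpixels with the closest features.

    features : (M, D) mean color/position feature per superpixel
    sizes    : (M,) pixel count per superpixel
    Returns the reduced (M-1, D) features and (M-1,) sizes, with the
    merged superpixel appended last.
    """
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)                  # exclude self-distances
    i, j = np.unravel_index(np.argmin(d2), d2.shape)
    i, j = min(i, j), max(i, j)
    # size-weighted mean keeps the merged feature equal to the pixel mean
    merged = (sizes[i] * features[i] + sizes[j] * features[j]) / (sizes[i] + sizes[j])
    keep = [k for k in range(len(sizes)) if k not in (i, j)]
    return (np.vstack([features[keep], merged]),
            np.append(sizes[keep], sizes[i] + sizes[j]))
```

Applying this step repeatedly yields the superpixel hierarchy (1000 down to 200 regions) evaluated in Fig.~\ref{fig:clustering}.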
\begin{figure}[t] \begin{tabular}{c} \subfigure{% \includegraphics[width=0.25\textwidth]{figures/superpixel_plot_agg_clustering.pdf} } \end{tabular} \hfill \begin{tabular}{c} \subfigure{% \includegraphics[width=0.14\textwidth]{figures/2007_001311_given.png} }\\ \subfigure{% \includegraphics[width=0.14\textwidth]{figures/2007_001311_gt.png} } \end{tabular} \hfill \begin{tabular}{c} \subfigure{% \includegraphics[width=0.14\textwidth]{figures/2007_001311_agg_1000_200_sp.png} }\\ \subfigure{% \includegraphics[width=0.14\textwidth]{figures/2007_001311_agg_200_ours.png} } \end{tabular}\hfill \begin{tabular}{c} \subfigure{% \includegraphics[width=0.14\textwidth]{figures/2007_001311_agg_1000_600_sp.png} }\\ \subfigure{% \includegraphics[width=0.14\textwidth]{figures/2007_001311_agg_600_ours.png} } \end{tabular}\hfill \begin{tabular}{c} \subfigure{% \includegraphics[width=0.14\textwidth]{figures/2007_001311_agg_1000_1000_sp.png} }\\ \subfigure{% \includegraphics[width=0.14\textwidth]{figures/2007_001311_agg_1000_ours.png} } \end{tabular} \mycaption{Hierarchical Clustering Analysis}{From left to right: Validation performance when using different super-pixel layouts, visualization of an image with ground truth segmentation, and the \bi{6}{2}-\bi{7}{6} result with 200, 600, and 1000 superpixels.} \label{fig:clustering} \end{figure} \begin{wraptable}[15]{r}{0pt} \scriptsize \centering \begin{tabular}{p{2.2cm}>{\centering\arraybackslash}p{2cm}>{\centering\arraybackslash}p{1cm}} \toprule \textbf{Model} & Class / Total accuracy & Runtime\\ \midrule \scriptsize Alexnet CNN & 55.3 / 58.9 & 300ms \\ \midrule \bi{7}{2}-\bi{8}{6} & 67.7 / 71.3 & 410 \\ \bi{7}{6}-\bi{8}{6} & \textbf{69.4 / 72.8} & 470 \\ \midrule AlexNet-CRF & 65.5 / 71.0 & 3400 \\ \bottomrule \end{tabular} \mycaption{Material Segmentation using AlexNet}{Pixel accuracies and runtimes (in ms) of different models on MINC material segmentation dataset~\cite{bell2015minc}. 
Runtimes also include the time for superpixel extraction (15ms).} \label{tab:mincresults} \end{wraptable} \subsection{Material Segmentation} We also experiment on a different pixel prediction task of material segmentation by adapting a CNN architecture finetuned for the Materials in Context (MINC) dataset~\cite{bell2015minc}. MINC consists of 23 material classes and is available in three different resolutions with the same aspect ratio: low ($550^2$), mid ($1100^2$) and an original higher resolution. The authors of~\cite{bell2015minc} train CNNs on the mid resolution images and then combine them with a DenseCRF to predict and evaluate on low resolution images. We build on the Alexnet model~\cite{krizhevsky2012imagenet} released by the authors of~\cite{bell2015minc}. To obtain a per pixel labeling of a given image, there are several processing steps that~\cite{bell2015minc} use for good performance. First, a CNN is applied at several scales with different strides, the predictions are interpolated to the input image resolution, and a DenseCRF is applied. For simplicity, we choose to run the CNN at a single scale and without sliding. The authors used just one kernel with $(u, v, L, a, b)$ features in the DenseCRF part. We used the same features in our inception modules. We modified the base AlexNet model by inserting BI modules after the \fc{7} and \fc{8} layers. Again, 1000 SLIC superpixels are used for all experiments. Results on the test set are shown in Table~\ref{tab:mincresults}. When inserting BI modules, the performance improves both in total pixel accuracy as well as in class-averaged accuracy. We observe an improvement of $12\%$ compared to CNN predictions and $2-4\%$ compared to CNN+DenseCRF results. Qualitative examples are shown in Fig.~\ref{fig:material_visuals} and more are included in the supplementary. The weights for combining outputs in the BI layers are determined on the validation set.
For this model, we do not provide any learned setup due to the very limited segment training data. \definecolor{minc_1}{HTML}{771111} \definecolor{minc_2}{HTML}{CAC690} \definecolor{minc_3}{HTML}{EEEEEE} \definecolor{minc_4}{HTML}{7C8FA6} \definecolor{minc_5}{HTML}{597D31} \definecolor{minc_6}{HTML}{104410} \definecolor{minc_7}{HTML}{BB819C} \definecolor{minc_8}{HTML}{D0CE48} \definecolor{minc_9}{HTML}{622745} \definecolor{minc_10}{HTML}{666666} \definecolor{minc_11}{HTML}{D54A31} \definecolor{minc_12}{HTML}{101044} \definecolor{minc_13}{HTML}{444126} \definecolor{minc_14}{HTML}{75D646} \definecolor{minc_15}{HTML}{DD4348} \definecolor{minc_16}{HTML}{5C8577} \definecolor{minc_17}{HTML}{C78472} \definecolor{minc_18}{HTML}{75D6D0} \definecolor{minc_19}{HTML}{5B4586} \definecolor{minc_20}{HTML}{C04393} \definecolor{minc_21}{HTML}{D69948} \definecolor{minc_22}{HTML}{7370D8} \definecolor{minc_23}{HTML}{7A3622} \definecolor{minc_24}{HTML}{000000} \begin{figure*}[t] \tiny \centering \subfigure{% \includegraphics[width=.15\columnwidth]{figures/000000531_given.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/000000531_sp.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/000000531_gt.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/000000531_cnn.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/000000531_cnncrf.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/000000531_ours.png} }\\[-2ex] \setcounter{subfigure}{0} \subfigure[\tiny Input]{% \includegraphics[width=.15\columnwidth]{figures/000034078_given.png} } \subfigure[\tiny Superpixels]{% \includegraphics[width=.15\columnwidth]{figures/000034078_sp.png} } \subfigure[\tiny GT]{% \includegraphics[width=.15\columnwidth]{figures/000034078_gt.png} } \subfigure[\tiny AlexNet]{% \includegraphics[width=.15\columnwidth]{figures/000034078_cnn.png} } \subfigure[\tiny +DenseCRF]{% \includegraphics[width=.15\columnwidth]{figures/000034078_cnncrf.png} }
\subfigure[\tiny Using BI]{% \includegraphics[width=.15\columnwidth]{figures/000034078_ours.png} } \mycaption{Material Segmentation}{Example results of material segmentation. (d) depicts the AlexNet CNN result, (e) CNN + 10 steps of mean-field inference, (f) results obtained with bilateral inception (BI) modules (\bi{7}{2}+\bi{8}{6}) between \fc~layers.} \label{fig:material_visuals} \end{figure*} \begin{wraptable}[16]{r}{0pt} \scriptsize \centering \begin{tabular}{p{1.8cm}>{\centering\arraybackslash}p{1.3cm}>{\centering\arraybackslash}p{1.3cm}>{\centering\arraybackslash}p{1.0cm}} \toprule \textbf{Model} & IoU (Half-res.) & IoU (Full-res.) & Runtime\\ \midrule \scriptsize DeepLab~CNN & 62.2 & 65.7 & 0.3s \\ \midrule \bi{6}{2} & 62.7 & 66.5 & 5.7 \\ \bi{6}{2}-\bi{7}{6} & \textbf{63.1} & \textbf{66.9} & 6.1 \\ \midrule DeepLab-CRF & 63.0 & 66.6 & 6.9 \\ \bottomrule \end{tabular} \mycaption{Street Scene Segmentation using DeepLab~model} {IoU scores and runtimes (in sec) of different models on Cityscapes segmentation dataset~\cite{Cordts2015Cvprw}, for both half-resolution and full-resolution images. Runtime computations also include superpixel computation time (5.2s).} \label{tab:cityscaperesults} \end{wraptable} \subsection{Street Scene Segmentation} We further evaluate the use of BI modules on the Cityscapes dataset~\cite{Cordts2015Cvprw}. Cityscapes contains 20K high-resolution ($1024\times2048$) images of street scenes with coarse pixel annotations and another 5K images with fine annotations; all annotations cover 19 semantic classes. The 5K finely annotated images are divided into 2975 training images, 500 validation images, and the remaining test images. Since there are no publicly available pre-trained models for this dataset yet, we trained a base DeepLab~model ourselves, using half-resolution images ($512\times1024$) so that the model fits into GPU memory. The result is then interpolated to full resolution using bilinear interpolation.
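The bilinear upsampling step mentioned above is a standard operation; for completeness, here is a minimal NumPy sketch (our own illustration, assuming align-corners-style sampling on a single-channel score map, not the authors' code):

```python
import numpy as np

def bilinear_upsample(img, out_h, out_w):
    """Bilinearly interpolate a (H, W) array to (out_h, out_w),
    mapping output pixel centres onto the input grid (align-corners style)."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, out_h)      # fractional source rows
    xs = np.linspace(0, w - 1, out_w)      # fractional source columns
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]                # vertical interpolation weights
    wx = (xs - x0)[None, :]                # horizontal interpolation weights
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```

For multi-class outputs the same routine would be applied per channel, typically to the class scores before the argmax.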
\definecolor{city_1}{RGB}{128, 64, 128} \definecolor{city_2}{RGB}{244, 35, 232} \definecolor{city_3}{RGB}{70, 70, 70} \definecolor{city_4}{RGB}{102, 102, 156} \definecolor{city_5}{RGB}{190, 153, 153} \definecolor{city_6}{RGB}{153, 153, 153} \definecolor{city_7}{RGB}{250, 170, 30} \definecolor{city_8}{RGB}{220, 220, 0} \definecolor{city_9}{RGB}{107, 142, 35} \definecolor{city_10}{RGB}{152, 251, 152} \definecolor{city_11}{RGB}{70, 130, 180} \definecolor{city_12}{RGB}{220, 20, 60} \definecolor{city_13}{RGB}{255, 0, 0} \definecolor{city_14}{RGB}{0, 0, 142} \definecolor{city_15}{RGB}{0, 0, 70} \definecolor{city_16}{RGB}{0, 60, 100} \definecolor{city_17}{RGB}{0, 80, 100} \definecolor{city_18}{RGB}{0, 0, 230} \definecolor{city_19}{RGB}{119, 11, 32} \begin{figure*}[t] \tiny \centering \subfigure{% \includegraphics[width=.18\columnwidth]{figures/frankfurt00000_008206_given.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/frankfurt00000_008206_sp.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/frankfurt00000_008206_gt.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/frankfurt00000_008206_cnn.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/frankfurt00000_008206_ours.png} }\\[-2ex] \setcounter{subfigure}{0} \subfigure[\scriptsize Input]{% \includegraphics[width=.18\columnwidth]{figures/frankfurt00000_016005_given.png} } \subfigure[\scriptsize Superpixels]{% \includegraphics[width=.18\columnwidth]{figures/frankfurt00000_016005_sp.png} } \subfigure[\scriptsize GT]{% \includegraphics[width=.18\columnwidth]{figures/frankfurt00000_016005_gt.png} } \subfigure[\scriptsize Deeplab]{% \includegraphics[width=.18\columnwidth]{figures/frankfurt00000_016005_cnn.png} } \subfigure[\scriptsize Using BI]{% \includegraphics[width=.18\columnwidth]{figures/frankfurt00000_016005_ours.png} } \mycaption{Street Scene Segmentation}{Example results of street scene segmentation. 
(d) depicts the DeepLab results, (e) the result obtained by adding bilateral inception (BI) modules (\bi{6}{2}+\bi{7}{6}) between \fc~layers. More in the supplementary.} \label{fig:street_visuals} \end{figure*} We experimented with two layouts: one with only a single \bi{6}{2} module and one with two inception modules, \bi{6}{2}-\bi{7}{6}. We notice that the SLIC superpixels~\cite{achanta2012slic} give a higher quantization error than on VOC, and thus used 6000 superpixels computed with~\cite{DollarICCV13edges} for our experiments. Quantitative results on the validation set are shown in Tab.~\ref{tab:cityscaperesults}. In contrast to the findings on the previous datasets, we only observe modest improvements with both DenseCRF and our inception modules in comparison to the base model. Similar to the previous experiments, the inception modules achieve better performance than DenseCRF while being faster. The majority of the computation time in our approach is due to the extraction of superpixels ($5.2s$) using a CPU implementation. Some visual results with the \bi{6}{2}-\bi{7}{6} model are shown in Fig.~\ref{fig:street_visuals}, with more in the supplementary. \section{Conclusion} The DenseCRF~\cite{krahenbuhl2012efficient} with mean-field inference has been used in many CNN segmentation approaches. Its main ingredient, and the reason for the improved performance, is the use of a bilateral filter applied to the beliefs over labels. We have introduced a CNN approach that uses this key component in a novel way: filtering intermediate representations of higher levels in CNNs while jointly learning the task-specific feature spaces. This propagates information between earlier and more detailed intermediate representations of the classes instead of beliefs over labels. Further, we show that image-adaptive layouts in the higher levels of CNNs can be used to advantage in the same spirit as CRF graphs have been constructed using superpixels in previous works on semantic segmentation.
The computations in the $1\times1$ convolution layers scale with the number of superpixels, which may be an advantage. Further, we have shown that the same representation can be used to interpolate the coarser representations back to the full image. The use of image-adaptive convolutions in between the FC layers retains the appealing effect of producing segmentation masks with sharp edges. This is not a property of the superpixels alone: representing information in the FC layers with superpixels and using them to interpolate to the full resolution are orthogonal choices. Different interpolation steps can be used to propagate the label information to the entire image, including bilinear interpolation, up-convolutions and DenseCRFs. We plan to investigate the effect of different sampling strategies to represent information in the higher layers of CNNs and to apply similar image-adaptive ideas to videos. We believe that the Bilateral Inception models are an interesting step that aims to directly include the model structure of CRF factors into the forward architecture of CNNs. The BI modules are easy to implement and are applicable to CNNs that perform structured output prediction. \small\subsubsection{Acknowledgements} We thank the reviewers for their valuable feedback. Raghudeep Gadde is supported by CSTB and ANR-13-CORD-0003. \small \bibliographystyle{splncs}
\section{Introduction} \label{Sec1} This paper addresses, from an axiomatic point of view, the problem of tournament ranking when players may have played an arbitrary number of matches against each other. For instance, the matches among top tennis players lead to such a set of data: \emph{Andre Agassi} has played 14 matches with \emph{Boris Becker}, but he has never played against \emph{Bj\"orn Borg} \citep{BozokiCsatoTemesi2016}. To be more specific, we show the incompatibility of some natural properties. Impossibility theorems are well-known in the classical theory of social choice \citep{Arrow1950, Gibbard1973, Satterthwaite1975}, but our setting has a crucial difference: the set of agents and the set of alternatives coincide, therefore the transitive effects of 'voting' should be considered \citep{AltmanTennenholtz2008}. We also allow for cardinal and incomplete preferences as well as ties in the ranking derived. Several characterizations of ranking methods have been suggested in the literature by providing a set of properties that uniquely determine a given method \citep{Rubinstein1980, Bouyssou1992, BouyssouPerny1992, vandenBrinkGilles2003, vandenBrinkGilles2009, SlutzkiVolij2005, SlutzkiVolij2006, Kitti2016}. There are some excellent axiomatic analyses, too \citep{ChebotarevShamis1998a, Gonzalez-DiazHendrickxLohmann2013}. However, apart from \citet{Csato2018f}, we know of only one work discussing impossibility results for ranking the nodes of a directed graph \citep{AltmanTennenholtz2008}, a domain covered by our concept of generalized tournament. We think these theorems are indispensable for a clear understanding of the axiomatic framework. For example, \citet{Gonzalez-DiazHendrickxLohmann2013} have found that most ranking methods violate an axiom called order preservation, but it is not known whether this negative result is caused by a theoretical impossibility or is only due to some hidden features of the procedures that have been considered.
This is an especially relevant issue because of the increasing popularity of sports rankings \citep{LangvilleMeyer2012}, which is, in a sense, not an entirely new phenomenon, since sports tournaments have motivated some classical works of social choice and voting theory \citep{Landau1895, Zermelo1929, Wei1952}. For instance, the ranking of tennis players has been addressed from at least three perspectives, with the use of methods from multicriteria decision-making \citep{BozokiCsatoTemesi2016}, network analysis \citep{Radicchi2011}, or statistics \citep{BakerMcHale2014, BakerMcHale2017}. Consequently, the axiomatic approach can be fruitful in the choice of an appropriate sports ranking method. This issue has been discussed in some recent works \citep{Berker2014, Pauly2014, Csato2017d, Csato2018m, Csato2018h, Csato2018j, Csato2018i, Csato2018b, Csato2018l, DagaevSonin2017, Vazirietal2018, Vong2017}, but there is great scope for future research. For this purpose, we will place two properties, imported from the social choice literature, in the centre of the discussion. \emph{Self-consistency} \citep{ChebotarevShamis1997a} requires assigning the same rank to players with equivalent results; furthermore, a player showing an obviously better performance than another should be ranked strictly higher. \emph{Order preservation}\footnote{~The term order preservation may be a bit misleading since it can suggest that the sequence of matches does not influence the rankings (see \citet[Property~III]{Vazirietal2018}). This requirement obviously holds in our setting.} \citep{Gonzalez-DiazHendrickxLohmann2013} excludes the possibility of rank reversal by demanding the preservation of players' pairwise ranking when two tournaments, where the same players have played the same number of matches, are aggregated.
In other words, it is not allowed that player $A$ is judged better than player $B$ in both the first and the second half of the season, but ranked lower on the basis of the whole season. Our main result proves the incompatibility of self-consistency and order preservation. This finding gives a theoretical foundation for the observation of \citet{Gonzalez-DiazHendrickxLohmann2013} that most ranking methods do not satisfy order preservation. Another important message of the paper is that prospective users cannot avoid taking similar impossibilities into account and justifying the choice between the properties involved. The study is structured as follows. Section~\ref{Sec2} presents the notion of ranking problem and scoring methods. Section~\ref{Sec3} introduces the property called self-consistency and proves that one type of scoring methods cannot satisfy it. Section~\ref{Sec4} defines (strong) order preservation besides some other properties, addresses the compatibility of the axioms and derives a negative result by opposing self-consistency and order preservation. Section~\ref{Sec5} summarizes our main findings. \section{The ranking problem and scoring methods} \label{Sec2} Consider a \emph{set of players} $N = \{ X_1,X_2, \dots, X_n \}$, $n \in \mathbb{N}_+$ and a series of \emph{tournament matrices} $T^{(1)}$, $T^{(2)}$, \dots, $T^{(m)}$ containing information on the paired comparisons of the players. Their entries are given such that $t_{ij}^{(p)} + t_{ji}^{(p)} = 1$ if players $X_i$ and $X_j$ have played in round $p$ ($1 \leq p \leq m$) and $t_{ij}^{(p)} + t_{ji}^{(p)} = 0$ if they have not played against each other in round $p$. The simplest definition can be $t_{ij}^{(p)} = 1$ (implying $t_{ji}^{(p)} = 0$) if player $X_i$ has defeated player $X_j$, and $t_{ij}^{(p)} = 0$ (implying $t_{ji}^{(p)} = 1$) if player $X_i$ has lost against player $X_j$ in round $p$. A draw can be represented by $t_{ij}^{(p)} = t_{ji}^{(p)} = 0.5$.
The entries may reflect the scores of the players, or other features of the match (e.g. an overtime win has less value than a normal time win), too. The tuple $\left( N,T^{(1)}, T^{(2)}, \dots, T^{(m)} \right)$, denoted shortly by $(N,\mathbf{T})$, is called a \emph{general ranking problem}. The set of general ranking problems with $n$ players ($|N| = n$) is denoted by $\mathcal{T}^n$. The \emph{aggregated tournament matrix} $A = \sum_{p=1}^m T^{(p)} = \left[ a_{ij} \right] \in \mathbb{R}^{n \times n}$ combines the results of all rounds of the competition. The pair $(N,A)$ is called a \emph{ranking problem}. The set of ranking problems with $n$ players ($|N| = n$) is denoted by $\mathcal{R}^n$. Note that every ranking problem can be associated with several general ranking problems; in this sense, the ranking problem is a narrower notion. Let $(N,A),(N,A') \in \mathcal{R}^n$ be two ranking problems with the same player set $N$. The \emph{sum} of these ranking problems is $(N,A+A') \in \mathcal{R}^n$. For example, the ranking problems can contain the results of matches in the first and second half of the season, respectively. Any ranking problem $(N,A)$ has a skew-symmetric \emph{results matrix} $R = A - A^\top = \left[ r_{ij} \right] \in \mathbb{R}^{n \times n}$ and a symmetric \emph{matches matrix} $M = A + A^\top = \left[ m_{ij} \right] \in \mathbb{N}^{n \times n}$. $m_{ij}$ is the number of matches between players $X_i$ and $X_j$, whose outcome is given by $r_{ij}$. Matrices $R$ and $M$ also determine the aggregated tournament matrix through $A = (R + M)/2$, so any ranking problem $(N,A) \in \mathcal{R}^n$ can be denoted analogously by $(N,R,M)$ with the restriction $|r_{ij}| \leq m_{ij}$ for all $X_i,X_j \in N$. Although the description with results and matches matrices is not parsimonious, this notation will turn out to be useful. A \emph{general scoring method} is a function $g:\mathcal{T}^n \to \mathbb{R}^n$.
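The matrix bookkeeping above is easy to check numerically. A minimal NumPy sketch with toy data of our own (two rounds among three players) verifies $R = A - A^\top$, $M = A + A^\top$, and $A = (R+M)/2$:

```python
import numpy as np

# Round 1: X2 beats X1 (t_21 = 1, t_12 = 0); X3 does not play.
T1 = np.array([[0.0, 0.0, 0.0],
               [1.0, 0.0, 0.0],
               [0.0, 0.0, 0.0]])
# Round 2: X1 and X2 draw (0.5 each); X3 does not play.
T2 = np.array([[0.0, 0.5, 0.0],
               [0.5, 0.0, 0.0],
               [0.0, 0.0, 0.0]])

A = T1 + T2   # aggregated tournament matrix
R = A - A.T   # skew-symmetric results matrix
M = A + A.T   # symmetric matches matrix

assert np.allclose(A, (R + M) / 2)   # A = (R + M) / 2
assert M[0, 1] == 2                  # X1 and X2 played twice
assert abs(R[0, 1]) <= M[0, 1]       # |r_ij| <= m_ij
```

Note that $M$ has integer entries even though the individual $t_{ij}^{(p)}$ need not, since each played match contributes exactly one to $t_{ij}^{(p)} + t_{ji}^{(p)}$.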
Several procedures have been suggested in the literature, see \citet{ChebotarevShamis1998a} for an overview of them. A special type of general scoring methods is the following. \begin{definition} \label{Def21} \emph{Individual scoring method} \citep{ChebotarevShamis1999}: A general scoring method $g:\mathcal{T}^n \to \mathbb{R}^n$ is called \emph{individual scoring method} if it is based on individual scores, that is, there exist functions $\phi$ and $\delta$ such that for any general ranking problem $(N,\mathbf{T}) \in \mathcal{T}^n$, the corresponding score vector $\mathbf{s} = g(N,\mathbf{T})$ can be expressed as $\mathbf{s} = \delta(\mathbf{s}^{(1)},\mathbf{s}^{(2)}, \dots, \mathbf{s}^{(m)})$, where the partial score vectors $\mathbf{s}^{(p)} = \phi(N,T^{(p)})$ depend solely on the tournament matrix $T^{(p)}$ of round $p$ for all $p = 1,2, \dots, m$. \end{definition} A \emph{scoring method} is a function $f:\mathcal{R}^n \to \mathbb{R}^n$. Any scoring method can also be regarded as a general scoring method -- by using the aggregated tournament matrix instead of the whole series of tournament matrices --, therefore some articles only consider scoring methods \citep{Kitti2016, SlutzkiVolij2005}. \citet{Gonzalez-DiazHendrickxLohmann2013} give a thorough axiomatic analysis of certain scoring methods. In other words, scoring methods initially aggregate the tournament matrices and then rank the players by their scores, while individual scoring methods first give scores to the players in each round and then aggregate them. \section{An argument against the use of individual scoring methods} \label{Sec3} In this section, some properties of general scoring methods are presented, which will highlight an important failure of individual scoring methods. 
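As a toy instance of Definition~\ref{Def21} (our own illustration, not a method from the cited literature), let $\phi$ assign each player the net result of the round and let $\delta$ be plain summation of the partial score vectors:

```python
import numpy as np

def phi(T):
    """Partial score for one round: net result of each player,
    i.e. row sums minus column sums of the round's tournament matrix."""
    return T.sum(axis=1) - T.sum(axis=0)

def individual_score(rounds):
    """delta: aggregate the partial score vectors by summation."""
    return sum(phi(T) for T in rounds)

# Round 1: X2 beats X1; round 2: X1 and X2 draw; X3 is idle throughout.
T1 = np.array([[0.0, 0.0, 0.0],
               [1.0, 0.0, 0.0],
               [0.0, 0.0, 0.0]])
T2 = np.array([[0.0, 0.5, 0.0],
               [0.5, 0.0, 0.0],
               [0.0, 0.0, 0.0]])
scores = individual_score([T1, T2])   # array([-1., 1., 0.])
```

Because each partial vector $\mathbf{s}^{(p)}$ depends only on $T^{(p)}$, such a method can never let the value of a round-$p$ result depend on what happened in other rounds; this structural restriction is exactly what Proposition~\ref{Prop31} below exploits.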
\subsection{Universal invariance axioms} \label{Sec31} \begin{axiom} \label{Axiom31} \emph{Anonymity} ($ANO$): Let $(N,\mathbf{T}) \in \mathcal{T}^n$ be a general ranking problem, $\sigma: \{ 1,2, \dots, m \} \rightarrow \{ 1,2, \dots, m \}$ be a permutation on the set of rounds, and $\sigma(N,\mathbf{T}) \in \mathcal{T}^n$ be the general ranking problem obtained from $(N,\mathbf{T})$ by permutation $\sigma$. General scoring method $g: \mathcal{T}^n \to \mathbb{R}^n$ is \emph{anonymous} if $g_i (N,\mathbf{T}) = g_i \left( \sigma(N,\mathbf{T}) \right)$ for all $X_i \in N$. \end{axiom} Anonymity implies that any reindexing of the rounds (tournament matrices) preserves the scores of the players. \begin{axiom} \label{Axiom32} \emph{Neutrality} ($NEU$): Let $(N,\mathbf{T}) \in \mathcal{T}^n$ be a general ranking problem, $\sigma: N \rightarrow N$ be a permutation on the set of players, and $(\sigma(N),\mathbf{T}) \in \mathcal{T}^n$ be the general ranking problem obtained from $(N,\mathbf{T})$ by permutation $\sigma$. General scoring method $g: \mathcal{T}^n \to \mathbb{R}^n$ is \emph{neutral} if $g_i(N,\mathbf{T}) = g_{\sigma(i)} (\sigma(N),\mathbf{T})$ for all $X_i \in N$. \end{axiom} Neutrality means that the scores are independent of the labelling of the players. \subsection{Self-consistency} \label{Sec32} Now we want to formulate a further requirement on the ranking of the players by answering the following question: \emph{When is player $X_i$ undeniably better than player $X_j$?} There are two such plausible cases: (1) if player $X_i$ has achieved better results against the same opponents; (2) if player $X_i$ has achieved the same results against stronger opponents. Consequently, player $X_i$ should also be judged better if he/she has achieved better results against stronger opponents than player $X_j$.
Furthermore, since (general) scoring methods allow for ties in the ranking, player $X_i$ should have the same rank as player $X_j$ if he/she has achieved the same results against opponents with the same strength. In order to apply these principles, both the results and strengths of the players should be measured. Results can be extracted from the tournament matrices $T^{(p)}$. Strengths of the players can be obtained from their scores according to the (general) scoring method used, hence the name of the implied axiom is \emph{self-consistency}. It has been introduced in \citet{ChebotarevShamis1997a}, and extensively discussed by \citet{Csato2018f}. \begin{definition} \label{Def31} \emph{Opponent multiset}: Let $(N,\mathbf{T}) \in \mathcal{T}^n$ be a general ranking problem. The \emph{opponent multiset}\footnote{~\emph{Multiset} is a generalization of the concept of set allowing for multiple instances of its elements.} of player $X_i$ is $O_i$, which contains $m_{ij}$ instances of $X_j$. \end{definition} Players of the opponent multiset $O_i$ are called the \emph{opponents} of player $X_i$. \begin{notation} \label{Not31} Consider the ranking problem $(N,T^{(p)}) \in \mathcal{T}^n$ given by restricting a general ranking problem to its $p$th round. Let $X_i, X_j \in N$ be two different players and $h^{(p)}: O_i^{(p)} \leftrightarrow O_j^{(p)}$ be a one-to-one correspondence between the opponents of $X_i$ and $X_j$ in round $p$, consequently, $|O_i^{(p)}| = |O_j^{(p)}|$. Then $\mathfrak{h}^{(p)}: \{k: X_k \in O_i^{(p)} \} \leftrightarrow \{\ell: X_\ell \in O_j^{(p)} \}$ is given by $X_{\mathfrak{h}^{(p)}(k)} = h^{(p)}(X_k)$. 
\end{notation} \begin{axiom} \label{Axiom33} \emph{Self-consistency} ($SC$) \citep{ChebotarevShamis1997a}: A general scoring method $g: \mathcal{T}^n \to \mathbb{R}^n$ is called \emph{self-consistent} if the following implication holds for any general ranking problem $(N,\mathbf{T}) \in \mathcal{T}^n$ and for any players $X_i,X_j \in N$: if there exists a one-to-one mapping $h^{(p)}$ from $O^{(p)}_i$ onto $O^{(p)}_j$ such that $t_{ik}^{(p)} \geq t_{j \mathfrak{h}^{(p)}(k)}^{(p)}$ and $g_k(N,\mathbf{T}) \geq g_{\mathfrak{h}^{(p)}(k)}(N,\mathbf{T})$ for all $p = 1,2, \dots ,m$ and $X_k \in O_i^{(p)}$, then $g_i(N,\mathbf{T}) \geq g_{j}(N,\mathbf{T})$, furthermore, $g_i(N,\mathbf{T}) > g_{j}(N,\mathbf{T})$ if $t_{ik}^{(p)} > t_{j \mathfrak{h}^{(p)}(k)}^{(p)}$ or $g_k(N,\mathbf{T}) > g_{\mathfrak{h}^{(p)}(k)}(N,\mathbf{T})$ for at least one $1 \leq p \leq m$ and $X_k \in O_i^{(p)}$. \end{axiom} \subsection{Individual scoring methods and self-consistency} \label{Sec33} In this part, it will be proved that an anonymous and neutral individual scoring method cannot satisfy self-consistency, a natural fairness requirement. It is therefore enough to focus on ranking problems and scoring methods. For this purpose, the example below will be used.
\begin{figure}[htbp] \centering \caption{The general ranking problem of Example~\ref{Examp31}} \label{Fig31} \begin{subfigure}{.33\textwidth} \centering \subcaption{$(N,T^{(1)})$} \label{Fig31a} \begin{tikzpicture}[scale=1, auto=center, transform shape, >=triangle 45] \tikzstyle{every node}=[draw,shape=rectangle]; \node (n1) at (135:2) {$X_1$}; \node (n2) at (45:2) {$X_2$}; \node (n3) at (315:2) {$X_3$}; \node (n4) at (225:2) {$X_4$}; \draw [->] (n1) -- (n4); \end{tikzpicture} \end{subfigure} \begin{subfigure}{.33\textwidth} \centering \subcaption{$(N,T^{(2)})$} \label{Fig31b} \begin{tikzpicture}[scale=1, auto=center, transform shape, >=triangle 45] \tikzstyle{every node}=[draw,shape=rectangle]; \node (n1) at (135:2) {$X_1$}; \node (n2) at (45:2) {$X_2$}; \node (n3) at (315:2) {$X_3$}; \node (n4) at (225:2) {$X_4$}; \draw (n1) -- (n2); \draw (n4) -- (n3); \end{tikzpicture} \end{subfigure} \begin{subfigure}{.33\textwidth} \centering \subcaption{$(N,\mathbf{T})$} \label{Fig31c} \begin{tikzpicture}[scale=1, auto=center, transform shape, >=triangle 45] \tikzstyle{every node}=[draw,shape=rectangle]; \node (n1) at (135:2) {$X_1$}; \node (n2) at (45:2) {$X_2$}; \node (n3) at (315:2) {$X_3$}; \node (n4) at (225:2) {$X_4$}; \foreach \from/\to in {n1/n2,n3/n4} \draw (\from) -- (\to); \draw [->] (n1) -- (n4); \end{tikzpicture} \end{subfigure} \end{figure} \begin{example} \label{Examp31} Let $\left( N,T^{(1)},T^{(2)} \right) \in \mathcal{T}^4$ be a general ranking problem describing a tournament with two rounds. It is shown in Figure~\ref{Fig31}: a directed edge from node $X_i$ to $X_j$ indicates a win of player $X_i$ over $X_j$ (and a loss of $X_j$ against $X_i$), while an undirected edge from node $X_i$ to $X_j$ represents a drawn match between the two players. This representation will be used in further examples, too. So, player $X_1$ has defeated $X_4$ in the first round (Figure~\ref{Fig31a}), while players $X_2$ and $X_3$ have not played.
In the second round, players $X_1$ and $X_2$, as well as players $X_3$ and $X_4$ have drawn (Figure~\ref{Fig31b}). The whole tournament is shown in Figure~\ref{Fig31c}. \end{example} According to the following result, at least one property from the set of $ANO$, $NEU$ and $SC$ will be violated by any individual scoring method. \begin{proposition} \label{Prop31} There exists no anonymous and neutral individual scoring method satisfying self-consistency. \end{proposition} \begin{proof} Let $g: \mathcal{T}^n \to \mathbb{R}^n$ be an anonymous and neutral individual scoring method. Consider Example~\ref{Examp31}. $ANO$ and $NEU$ imply that $g_2(N,T^{(1)}) = g_3(N,T^{(1)})$ and $g_2(N,T^{(2)}) = g_3(N,T^{(2)})$, therefore \begin{equation} \label{eq1} g_2(N,\mathbf{T}) = \delta \left( g_2(N,T^{(1)}), g_2(N,T^{(2)}) \right) = \delta \left( g_3(N,T^{(1)}), g_3(N,T^{(2)}) \right) = g_3(N,\mathbf{T}). \end{equation} Note that $O_1^{(1)} = \{ X_4 \}$, $O_1^{(2)} = \{ X_2 \}$ and $O_4^{(1)} = \{ X_1 \}$, $O_4^{(2)} = \{ X_3 \}$. Take the one-to-one correspondences $h_{14}^{(1)}: O_1^{(1)} \leftrightarrow O_4^{(1)}$ such that $h_{14}^{(1)}(X_4)=X_1$ and $h_{14}^{(2)}: O_1^{(2)} \leftrightarrow O_4^{(2)}$ such that $h_{14}^{(2)}(X_2)=X_3$. Now $t_{12}^{(2)} = t_{43}^{(2)}$ since the corresponding matches resulted in draws. Furthermore, $t_{14}^{(1)} \neq t_{41}^{(1)}$ since the value of a win and a loss should be different. It can be assumed without loss of generality that $t_{14}^{(1)} > t_{41}^{(1)}$. Suppose that $g_1(N,\mathbf{T}) \leq g_4(N,\mathbf{T})$. Then players $X_1$ and $X_4$ have a draw against a player with the same strength ($X_2$ and $X_3$, respectively), but $X_1$ has defeated $X_4$, so he/she has a better result against a not weaker opponent. Therefore, self-consistency (Axiom~\ref{Axiom33}) implies $g_1(N,\mathbf{T}) > g_4(N,\mathbf{T})$, which is a contradiction, thus $g_1(N,\mathbf{T}) > g_4(N,\mathbf{T})$ holds.
However, $O_2^{(1)} = \emptyset$, $O_2^{(2)} = \{ X_1 \}$ and $O_3^{(1)} = \emptyset$, $O_3^{(2)} = \{ X_4 \}$. Consider the unique one-to-one correspondence $h_{23}^{(2)}: O_2^{(2)} \leftrightarrow O_3^{(2)}$, which -- together with $t_{21}^{(2)} = t_{34}^{(2)}$ (the two draws should be represented by the same number) and $g_1(N,\mathbf{T}) > g_4(N,\mathbf{T})$ -- leads to $g_2(N,\mathbf{T}) > g_3(N,\mathbf{T})$ because player $X_2$ has achieved the same result against a stronger opponent than player $X_3$. In other words, $SC$ requires the draw of $X_2$ to be more valuable than the draw of $X_3$, but this cannot be reflected by any individual scoring method $g$ according to \eqref{eq1}. \end{proof} \section{The case of ranking problems and scoring methods} \label{Sec4} According to Proposition~\ref{Prop31}, only the procedure underlying scoring methods can be compatible with self-consistency. Therefore, this section will focus on scoring methods. \subsection{Axioms of invariance with respect to the results matrix} \label{Sec41} Let $O \in \mathbb{R}^{n \times n}$ be the matrix with all of its entries being zero. \begin{axiom} \label{Axiom41} \emph{Symmetry} ($SYM$) \citep{Gonzalez-DiazHendrickxLohmann2013}: Let $(N,R,M) \in \mathcal{R}^n$ be a ranking problem such that $R=O$. Scoring method $f: \mathcal{R}^n \to \mathbb{R}^n$ is \emph{symmetric} if $f_i(N,R,M) = f_j(N,R,M)$ for all $X_i, X_j \in N$. \end{axiom} According to symmetry, if all paired comparisons (but not necessarily all matches in each round) between the players result in a draw, then all players will have the same score. \begin{axiom} \label{Axiom42} \emph{Inversion} ($INV$) \citep{ChebotarevShamis1998a}: Let $(N,R,M) \in \mathcal{R}^n$ be a ranking problem. Scoring method $f: \mathcal{R}^n \to \mathbb{R}^n$ is \emph{invertible} if $f_i(N,R,M) \geq f_j(N,R,M) \iff f_i(N,-R,M) \leq f_j(N,-R,M)$ for all $X_i, X_j \in N$.
\end{axiom} Inversion means that taking the opposite of all results reverses the ranking accordingly. It establishes a uniform treatment of victories and losses. \begin{corollary} \label{Col41} Let $f: \mathcal{R}^n \to \mathbb{R}^n$ be a scoring method satisfying $INV$. Then for all $X_i, X_j \in N$: $f_i(N,R,M) > f_j(N,R,M) \iff f_i(N,-R,M) < f_j(N,-R,M)$. \end{corollary} The following result has already been mentioned by \citet[p.~150]{Gonzalez-DiazHendrickxLohmann2013}. \begin{corollary} \label{Col42} $INV$ implies $SYM$. \end{corollary} It seems to be difficult to argue against symmetry. However, scoring methods based on right eigenvectors \citep{Wei1952, SlutzkiVolij2005, SlutzkiVolij2006, Kitti2016} violate inversion. \subsection{Properties of independence} \label{Sec42} The next axiom deals with the effects of certain changes in the aggregated tournament matrix $A$. \begin{axiom} \label{Axiom43} \emph{Independence of irrelevant matches} ($IIM$) \citep{Gonzalez-DiazHendrickxLohmann2013}: Let $(N,A),(N,A') \in \mathcal{R}^n$ be two ranking problems and $X_i,X_j,X_k, X_\ell \in N$ be four different players such that $(N,A)$ and $(N,A')$ are identical but $a_{k \ell} \neq a'_{k \ell}$. Scoring method $f: \mathcal{R}^n \to \mathbb{R}^n$ is called \emph{independent of irrelevant matches} if $f_i(N,A) \geq f_j(N,A) \Rightarrow f_i(N,A') \geq f_j(N,A')$. \end{axiom} $IIM$ means that 'remote' matches -- not involving players $X_i$ and $X_j$ -- do not affect the pairwise ranking of players $X_i$ and $X_j$. Independence of irrelevant matches seems to be a powerful property. \citet{Gonzalez-DiazHendrickxLohmann2013} state that '\emph{when players have different opponents (or face opponents with different intensities), $IIM$ is a property one would rather not have}'. \citet{Csato2018f} argues on an axiomatic basis against $IIM$. The rounds of a given tournament can be grouped arbitrarily. Therefore, the following property makes much sense.
\begin{axiom} \label{Axiom44} \emph{Order preservation} ($OP$) \citep{Gonzalez-DiazHendrickxLohmann2013}: Let $(N,A),(N,A') \in \mathcal{R}^n$ be two ranking problems where all players have played $m$ matches and $X_i, X_j \in N$ be two different players. Let $f: \mathcal{R}^n \to \mathbb{R}^n$ be a scoring method such that $f_i(N,A) \geq f_j(N,A)$ and $f_i(N,A') \geq f_j(N,A')$.\footnote{~\citet{Gonzalez-DiazHendrickxLohmann2013} formally introduce a stronger version of this axiom since only $X_i$ and $X_j$ should have the same number of matches in the two ranking problems. However, in the counterexample of \citet{Gonzalez-DiazHendrickxLohmann2013}, which shows the violation of $OP$ by several ranking methods, all players have played the same number of matches.} $f$ satisfies \emph{order preservation} if $f_i(N,A+A') \geq f_j(N,A+A')$, furthermore, $f_i(N,A+A') > f_j(N,A+A')$ if $f_i(N,A) > f_j(N,A)$ or $f_i(N,A') > f_j(N,A')$. \end{axiom} $OP$ is a relatively restricted version of combining ranking problems, which implies that if player $X_i$ is not worse than player $X_j$ on the basis of some rounds as well as on the basis of another set of rounds such that all players have played in each round (so they have played the same number of matches altogether), then this pairwise ranking should hold after the two distinct sets of rounds are considered jointly. One can consider a stronger version of order preservation, too. \begin{axiom} \label{Axiom45} \emph{Strong order preservation} ($SOP$) \citep{vandenBrinkGilles2009}: Let $(N,A),(N,A') \in \mathcal{R}^n$ be two ranking problems and $X_i, X_j \in N$ be two players. Let $f: \mathcal{R}^n \to \mathbb{R}^n$ be a scoring method such that $f_i(N,A) \geq f_j(N,A)$ and $f_i(N,A') \geq f_j(N,A')$. $f$ satisfies \emph{strong order preservation} if $f_i(N,A+A') \geq f_j(N,A+A')$, furthermore, $f_i(N,A+A') > f_j(N,A+A')$ if $f_i(N,A) > f_j(N,A)$ or $f_i(N,A') > f_j(N,A')$.
\end{axiom} In contrast to order preservation, $SOP$ does not contain any restriction on the number of matches of the players in the ranking problems to be aggregated. \begin{corollary} \label{Col43} $SOP$ implies $OP$. \end{corollary} It will turn out that the weaker property, order preservation, still has unfavourable implications. \subsection{Relations among the axioms} \label{Sec44} In this part, some links among symmetry, inversion, independence of irrelevant matches, and (strong) order preservation will be revealed. \begin{remark} \label{Rem41} $SYM$ and $OP$ ($SOP$) imply $INV$. \end{remark} \begin{proof} Consider a ranking problem $(N,R,M) \in \mathcal{R}^n$ where $f_i(N,R,M) \geq f_j(N,R,M)$ for players $X_i,X_j \in N$. If $f_i(N,-R,M) > f_j(N,-R,M)$, then $f_i(N,O,2M) > f_j(N,O,2M)$ due to $OP$, which contradicts $SYM$. So $f_i(N,-R,M) \leq f_j(N,-R,M)$ holds. \end{proof} It turns out that $IIM$ is also closely connected to $SOP$. \begin{proposition} \label{Prop41} A scoring method satisfying $NEU$, $SYM$ and $SOP$ meets $IIM$. \end{proposition} \begin{proof} Assume to the contrary, and let $(N,R,M) \in \mathcal{R}^n$ be a ranking problem, $f: \mathcal{R}^n \to \mathbb{R}^n$ be a scoring method satisfying $NEU$, $SYM$, and $SOP$, and $X_i, X_j, X_k, X_\ell \in N$ be four different players such that $f_i(N,R,M) \geq f_j(N,R,M)$, and $(N,R',M') \in \mathcal{R}^n$ is identical to $(N,R,M)$ except for the result $r'_{k \ell}$ and number of matches $m'_{k \ell}$ between players $X_k$ and $X_\ell$, where $f_i(N,R',M') < f_j(N,R',M')$. According to Remark~\ref{Rem41}, $f$ satisfies $INV$, hence $f_i(N,-R,M) \leq f_j(N,-R,M)$. Denote by $\sigma: N \rightarrow N$ the permutation $\sigma(X_i) = X_j$, $\sigma(X_j) = X_i$, and $\sigma(X_k) = X_k$ for all $X_k \in N \setminus \{ X_i,X_j \}$.
Neutrality leads to $f_i \left[ \sigma(N,R,M) \right] \leq f_j \left[ \sigma(N,R,M) \right]$, and $f_i \left[ \sigma(N,-R',M') \right] < f_j \left[ \sigma(N,-R',M') \right]$ due to inversion and Corollary~\ref{Col41}. With the notations $R'' = \sigma(R) - \sigma(R') - R + R' = O$ and $M'' = \sigma(M) + \sigma(M') + M + M'$, we get \[ (N,R'',M'') = \sigma(N,R,M) + \sigma(N,-R',M') + (N,-R,M) + (N,R',M'). \] Symmetry implies $f_i(N,R'',M'') = f_j(N,R'',M'')$ since $R'' = O$, but $f_i(N,R'',M'') < f_j(N,R'',M'')$ from strong order preservation, which is a contradiction. \end{proof} It remains to be seen whether $NEU$, $SYM$, and $SOP$ are all necessary for Proposition~\ref{Prop41}. \begin{lemma} \label{Lemma41} $NEU$, $SYM$, and $SOP$ are logically independent axioms with respect to the implication of $IIM$. \end{lemma} \begin{proof} It is shown that there exist scoring methods which satisfy exactly two properties from the set $NEU$, $SYM$, and $SOP$, but violate the third and do not meet $IIM$ either: \begin{enumerate}[label=\fbox{\arabic*}] \item $SYM$ and $SOP$: the sum of the results of the 'previous' player, $f_i(N,R,M) = \sum_{j=1}^n r_{i-1,j}$ for all $X_i \in N \setminus \{ X_1 \}$ and $f_1(N,R,M) = \sum_{j=1}^n r_{n,j}$; \item $NEU$ and $SOP$: maximal number of matches of other players, $f_i(N,R,M) = \max \{ \sum_{k=1}^n m_{jk}: X_j \neq X_i \}$;\footnote{~It is worth noting that the maximal number of own matches satisfies $NEU$, $SOP$, and $IIM$.} \item $NEU$ and $SYM$: aggregated sum of the results of opponents, $f_i(N,R,M) = \sum_{X_j \in O_i} \sum_{k=1}^n r_{jk}$. \end{enumerate} \end{proof} Proposition~\ref{Prop41} helps in deriving another impossibility statement. \begin{proposition} \label{Prop42} There exists no scoring method that satisfies neutrality, symmetry, strong order preservation and self-consistency. \end{proposition} \begin{proof} According to Proposition~\ref{Prop41}, $NEU$, $SYM$ and $SOP$ imply $IIM$.
\citet[Theorem~3.1]{Csato2018f} has shown that $IIM$ and $SC$ cannot be met at the same time. \end{proof} \subsection{A basic impossibility result} \label{Sec45} The four axioms of Proposition~\ref{Prop42} are not independent despite Lemma~\ref{Lemma41}. However, a much stronger statement can be obtained by eliminating neutrality and symmetry, which also allows for a weakening of strong order preservation by using order preservation. Note that substituting an axiom with a weaker one in an impossibility statement leads to a stronger result. We will use a generalized tournament with four players for this purpose. \begin{figure}[htbp] \centering \caption{The ranking problems of Example~\ref{Examp41}} \label{Fig41} \begin{subfigure}{.33\textwidth} \centering \subcaption{$(N,R,M)$} \label{Fig41a} \begin{tikzpicture}[scale=1, auto=center, transform shape, >=triangle 45] \tikzstyle{every node}=[draw,shape=rectangle]; \node (n1) at (135:2) {$X_1$}; \node (n2) at (45:2) {$X_2$}; \node (n3) at (315:2) {$X_3$}; \node (n4) at (225:2) {$X_4$}; \foreach \from/\to in {n1/n2,n1/n4,n2/n3,n3/n4} \draw (\from) -- (\to); \end{tikzpicture} \end{subfigure} \begin{subfigure}{.33\textwidth} \centering \subcaption{$(N,R',M')$} \label{Fig41b} \begin{tikzpicture}[scale=1, auto=center, transform shape, >=triangle 45] \tikzstyle{every node}=[draw,shape=rectangle]; \node (n1) at (135:2) {$X_1$}; \node (n2) at (45:2) {$X_2$}; \node (n3) at (315:2) {$X_3$}; \node (n4) at (225:2) {$X_4$}; \foreach \from/\to in {n1/n4,n2/n4} \draw (\from) -- (\to); \draw [->] (n3) -- (n1); \draw [->] (n3) -- (n2); \end{tikzpicture} \end{subfigure} \begin{subfigure}{.33\textwidth} \centering \subcaption{$(N,R+R',M+M')$} \label{Fig41c} \begin{tikzpicture}[scale=1, auto=center, transform shape, >=triangle 45] \tikzstyle{every node}=[draw,shape=rectangle]; \node (n1) at (135:2) {$X_1$}; \node (n2) at (45:2) {$X_2$}; \node (n3) at (315:2) {$X_3$}; \node (n4) at (225:2) {$X_4$}; \foreach \from/\to in {n1/n2,n2/n4,n3/n4}
\draw (\from) -- (\to); \draw [->] (n3) -- (n1); \draw[transform canvas={xshift=-0.5ex}](n2) -- (n3); \draw[->,transform canvas={xshift=0.5ex}](n3) -- (n2); \draw[transform canvas={xshift=-0.5ex}](n1) -- (n4); \draw[transform canvas={xshift=0.5ex}](n1) -- (n4); \end{tikzpicture} \end{subfigure} \end{figure} \begin{example} \label{Examp41} Let $(N,R,M), (N,R',M') \in \mathcal{R}^4$ be two ranking problems. They are shown in Figure~\ref{Fig41}: in the first tournament, described by $(N,R,M)$, the matches between players $X_1$ and $X_2$, $X_1$ and $X_4$, $X_2$ and $X_3$, $X_3$ and $X_4$ all resulted in draws (see Figure~\ref{Fig41a}). On the other hand, in the second tournament, described by $(N,R',M')$, players $X_1$ and $X_2$ have lost against $X_3$ and drawn against $X_4$ (see Figure~\ref{Fig41b}). The two ranking problems can be summed into $(N,R'',M'') \in \mathcal{R}^4$ such that $R'' = R + R'$ and $M'' = M + M'$ (see Figure~\ref{Fig41c}). \end{example} \begin{theorem} \label{Theo41} There exists no scoring method that satisfies order preservation and self-consistency. \end{theorem} \begin{proof} Assume to the contrary that there exists a self-consistent scoring method $f: \mathcal{R}^n \to \mathbb{R}^n$ satisfying order preservation. Consider Example~\ref{Examp41}. \begin{enumerate}[label=\emph{\Roman*}.] \item Take the ranking problem $(N,R,M)$. Note that $O_1 = O_3 = \{ X_2, X_4 \}$ and $O_2 = O_4 = \{ X_1, X_3 \}$. \begin{enumerate}[label=\emph{\alph*})] \item Consider the identity one-to-one correspondences $h_{13}: O_1 \leftrightarrow O_3$ and $h_{31}: O_3 \leftrightarrow O_1$ such that $h_{13}(X_2) = h_{31}(X_2) = X_2$ and $h_{13}(X_4) = h_{31}(X_4) = X_4$. Since $r_{12} = r_{32} = 0$ and $r_{14} = r_{34} = 0$, players $X_1$ and $X_3$ have the same results against the same opponents, hence $f_1(N,R,M) = f_3(N,R,M)$ from $SC$. \item Consider the identity one-to-one correspondences $h_{24}: O_2 \leftrightarrow O_4$ and $h_{42}: O_4 \leftrightarrow O_2$.
Since $r_{21} = r_{41} = 0$ and $r_{23} = r_{43} = 0$, players $X_2$ and $X_4$ have the same results against the same opponents, hence $f_2(N,R,M) = f_4(N,R,M)$ from $SC$. \item Suppose that $f_2(N,R,M) > f_1(N,R,M)$, which implies $f_4(N,R,M) > f_3(N,R,M)$. Consider the one-to-one mapping $h_{12}: O_1 \leftrightarrow O_2$, where $h_{12}(X_2) = X_1$ and $h_{12}(X_4) = X_3$. Since $r_{12} = r_{21} = 0$ and $r_{14} = r_{23} = 0$, player $X_1$ has the same results against stronger opponents compared to $X_2$, hence $f_1(N,R,M) > f_2(N,R,M)$ from $SC$, which is a contradiction. \item An analogous argument shows that $f_1(N,R,M) > f_2(N,R,M)$ cannot hold. \end{enumerate} Therefore, self-consistency leads to $f_1(N,R,M) = f_2(N,R,M) = f_3(N,R,M) = f_4(N,R,M)$ in the first ranking problem. \item Take the ranking problem $(N,R',M')$. Note that $O_1' = O_2' = \{ X_3, X_4 \}$ and $O_3' = O_4' = \{ X_1, X_2 \}$. \begin{enumerate}[label=\emph{\alph*})] \item Consider the identity one-to-one correspondences $h_{12}': O_1' \leftrightarrow O_2'$ and $h_{21}': O_2' \leftrightarrow O_1'$. Since $r_{13}' = r_{23}' = -1$ and $r_{14}' = r_{24}' = 0$, players $X_1$ and $X_2$ have the same results against the same opponents, hence $f_1(N,R',M') = f_2(N,R',M')$ from $SC$. \item Consider the identity one-to-one correspondence $h_{34}': O_3' \leftrightarrow O_4'$. Since $1 = r_{31}' > r_{41}' = 0$ and $1 = r_{32}' > r_{42}' = 0$, player $X_3$ has better results against the same opponents compared to $X_4$, hence $f_3(N,R',M') > f_4(N,R',M')$ from $SC$. \end{enumerate} Thus self-consistency leads to $f_1(N,R',M') = f_2(N,R',M')$ and $f_3(N,R',M') > f_4(N,R',M')$ in the second ranking problem. \item Take the sum of these two ranking problems, the ranking problem $(N,R'',M'')$. Suppose that $f_1(N,R'',M'') \geq f_2(N,R'',M'')$.
Consider the one-to-one mappings $g_{21}: O_2 \leftrightarrow O_1$ and $g_{21}': O_2' \leftrightarrow O_1'$ such that $g_{21}(X_1) = X_2$, $g_{21}(X_3) = X_4$ and $g_{21}'(X_3) = X_3$, $g_{21}'(X_4) = X_4$. Since $r_{21} = r_{12} = 0$, $r_{23} = r_{14} = 0$ and $r_{23}' = r_{13}' = -1$, $r_{24}' = r_{14}' = 0$, player $X_2$ has the same results against stronger opponents compared to $X_1$, hence $f_2(N,R'',M'') > f_1(N,R'',M'')$ from $SC$, which leads to a contradiction. To summarize, self-consistency results in $f_1(N,R'',M'') < f_2(N,R'',M'')$, however, order preservation implies $f_1(N,R'',M'') = f_2(N,R'',M'')$ as all players have played two matches in $(N,R,M)$ and $(N,R',M')$, respectively, which is impossible. \end{enumerate} Therefore, it has been derived that no scoring method can meet $OP$ and $SC$ simultaneously on the universal domain of $\mathcal{R}^n$. \end{proof} Theorem~\ref{Theo41} is a serious negative result: by accepting self-consistency, the ranking method cannot be required to preserve two players' pairwise ranking when some ranking problems, where all players have played the same number of matches, are aggregated. \begin{figure}[htbp] \centering \caption{The ranking problem of Example~\ref{Examp42}} \label{Fig42} \begin{tikzpicture}[scale=1, auto=center, transform shape, >=triangle 45] \tikzstyle{every node}=[draw,shape=rectangle]; \node (n1) at (135:2) {$X_1$}; \node (n2) at (45:2) {$X_2$}; \node (n3) at (315:2) {$X_3$}; \node (n4) at (225:2) {$X_4$}; \foreach \from/\to in {n1/n2,n2/n3,n3/n4} \draw (\from) -- (\to); \end{tikzpicture} \end{figure} \begin{example} \label{Examp42} Let $(N,R,M) \in \mathcal{R}^4$ be the ranking problem in Figure~\ref{Fig42}: $X_1$ has drawn against $X_2$, $X_2$ against $X_3$ and $X_3$ against $X_4$. \end{example} Theorem~\ref{Theo41} would be more straightforward as a strengthening of Proposition~\ref{Prop42} if self-consistency implied neutrality and/or symmetry.
However, this is not the case, as the following result shows. \begin{remark} \label{Rem42} There exists a scoring method that is self-consistent but neither neutral nor symmetric. \end{remark} \begin{proof} The statement can be verified by an example where an $SC$-compatible scoring method violates $NEU$ and $SYM$. Consider Example~\ref{Examp42} with a scoring method $f$ such that $f_1(N,R,M) > f_2(N,R,M) > f_3(N,R,M) > f_4(N,R,M)$, for example, player $X_i$ gets the score $4-i$. $f$ meets self-consistency since $X_1$ has the same result against a stronger opponent compared to $X_4$, while there exists no correspondence between the opponent sets $O_2$ and $O_3$ satisfying the conditions of $SC$. Let $\sigma: N \to N$ be a permutation such that $\sigma(X_1) = X_4$, $\sigma(X_2) = X_3$, $\sigma(X_3) = X_2$, and $\sigma(X_4) = X_1$. Since $\sigma(N,R,M) = (N,R,M)$, $NEU$ implies $f_4(N,R,M) > f_1(N,R,M)$ and $f_3(N,R,M) > f_2(N,R,M)$, a contradiction. Furthermore, $SYM$ leads to $f_1(N,R,M) = f_2(N,R,M) = f_3(N,R,M) = f_4(N,R,M)$, another impossibility. Therefore, there exists a self-consistent scoring method that is neither neutral nor symmetric. \end{proof} \section{Conclusions} \label{Sec5} We have found some unexpected implications of different properties in the case of generalized tournaments where the players should be ranked on the basis of their match results against each other. First, self-consistency prohibits the use of individual scoring methods, that is, scores cannot be derived before the aggregation of tournament rounds (Proposition~\ref{Prop31}). Second, independence of irrelevant matches (posing a kind of independence concerning the pairwise ranking of two players) follows from three axioms, neutrality (independence of relabelling the players), symmetry (implying a flat ranking if all aggregated comparisons are draws), and strong order preservation (perhaps the most natural property concerning the aggregation of ranking problems).
According to \citet{Csato2018f}, there exists no scoring method satisfying self-consistency and independence of irrelevant matches, hence Proposition~\ref{Prop41} implies that neutrality, symmetry, strong order preservation and self-consistency cannot be met simultaneously (Proposition~\ref{Prop42}). It even turns out that self-consistency and a weaker version of strong order preservation are still enough to derive this negative result (Theorem~\ref{Theo41}), consequently, one should choose between these two natural fairness requirements. What do our results say to practitioners who want to rank players or teams? First, self-consistency does not allow one to rank them after individual rounds; one has to wait until all tournament results are known and can be aggregated. Second, self-consistency is not compatible with order preservation on this universal domain. This is not an unexpected or counter-intuitive result since, according to \citet{Gonzalez-DiazHendrickxLohmann2013}, a number of ranking methods violate order preservation. We have proved that there is no hope of finding a reasonable scoring method with this property. From a more abstract point of view, the breaking of order preservation in tournament ranking is a version of \href{https://en.wikipedia.org/wiki/Simpson\%27s_paradox}{Simpson's paradox}, a phenomenon in probability and statistics, in which a trend appears in different groups of data but disappears or reverses when these groups are combined.\footnote{~We are grateful to an anonymous referee for this remark.} This negative result holds even though self-consistency is somewhat weaker than our intuition suggests: it does not imply neutrality and symmetry, so even a self-consistent ranking of players may depend on their names and may contain no ties even if all matches are drawn (Remark~\ref{Rem42}). Third, losing the simplicity provided by order preservation certainly does not facilitate the axiomatic construction of scoring methods.
Consequently, while sacrificing self-consistency or order preservation seems to be unavoidable in our general setting, an obvious continuation of the current research is to obtain positive possibility results through domain restrictions or a further weakening of the axioms. It is also worth noting that the incompatibility of the two axioms does not imply that every scoring method always works badly, only that all of them can lead to problematic results at times. \section*{Acknowledgements} \addcontentsline{toc}{section}{Acknowledgements} \noindent We are grateful to \emph{S\'andor Boz\'oki} for useful advice. \\ Anonymous reviewers provided valuable comments and suggestions on earlier drafts. \\ The research was supported by OTKA grant K 111797 and by the MTA Premium Post Doctorate Research Program.
\section{Introduction} \label{sec:intro} The Boltzmann equation is one of the fundamental equations in kinetic theory and serves as a basic building block to connect microscopic Newtonian mechanics and macroscopic continuum mechanics \cite{Cercignani, Villani02}. Despite its wide applicability, numerical approximation of the Boltzmann equation is a challenging scientific problem due to the complicated structure of the equation (high-dimensional, nonlinear, and nonlocal). As such, the particle-based direct simulation Monte Carlo (DSMC) method \cite{Bird} has been widely used in various applications for its simplicity and low computational cost. Nevertheless, the stochastic method suffers from slow convergence and becomes extremely expensive when simulating non-steady and low-speed flows. Since the pioneering work \cite{PP96, PR00}, it has been realized that the Fourier-Galerkin spectral method offers a suitable framework to approximate the Boltzmann collision operator. First of all, it is a deterministic method and provides very accurate results compared with the stochastic method. Secondly, the Boltzmann collision operator is translation-invariant and the Fourier basis exactly leverages this structure. Thirdly, after the Galerkin projection, the collision operator presents a convolution-like structure, which opens the possibility to further accelerate the method by the fast Fourier transform (FFT) \cite{MP06, GHHH17}. Because of the above reasons, over the past decade, the Fourier spectral method has become a very popular deterministic method for solving the Boltzmann equation and related collisional kinetic models, see for instance, \cite{PRT00, FR03, FMP06, HY12, JAH19_1}, or the recent review article \cite{pareschi}.
As opposed to its practical success, the theoretical study of the Fourier spectral method is quite limited, largely because the spectral approximation destroys the positivity of the solution, yet positivity is one of the key properties used to study the well-posedness of the equation. In \cite{PR00stability}, a positivity-preserving filter is applied to the equation to enforce the positivity of the solution. As a result, the stability of the method can be easily proved. However, the filter often comes at the price of significantly smearing the solution (hence destroying the spectral accuracy) and should be used only when the solution contains discontinuities (to suppress the oscillations caused by the Gibbs phenomenon). Recently, a stability proof for the original Fourier spectral method was established in \cite{FM11}, where the authors provide a quite complete study of the method including both finite and long time behavior. The key strategy in \cite{FM11} is to use the ``spreading'' or ``mixing'' property of the collision operator to show that the solution will become everywhere positive after a small time. Motivated by this work, we present in this paper a different well-posedness and stability proof. The main difference from \cite{FM11} lies in the fact that, instead of requiring the solution to be positive everywhere, which is a stronger condition to achieve, we show that the $L^2$ norm of the negative part of the solution can be controlled as long as it is small initially. In other words, the solution is allowed to be negative for the method to remain stable. Therefore, our strategy does not rely on any sophisticated property of the collision operator and provides a simpler proof. In addition, we quantify clearly the requirement on the initial condition for the method to be stable, which includes both continuous and discontinuous functions. We mention another line of research which develops the conservative spectral approximation for the Boltzmann equation \cite{GT09}.
Apart from apparent differences (the Fourier-Galerkin method considered in this paper is based on domain truncation and periodization, while the method of \cite{GT09} is based on the Fourier transform and no periodization is performed), a conservation subroutine is added to restore the mass, momentum, and energy conservation. As a consequence, the method is able to preserve the Maxwellian distribution as time goes to infinity. The stability and convergence of the method was recently established in \cite{AGT18}, where the Fourier projection is only applied to the gain part of the collision operator. In contrast, both gain and loss terms are projected in our method, hence the loss term does not possess a definite sign. The paper is essentially self-contained. In Section~\ref{sec:review}, we briefly review the Fourier-Galerkin spectral method for the spatially homogeneous Boltzmann equation. After that, we discuss the basic assumptions (e.g., the collision kernel and truncation parameters) used throughout the paper. The assumptions on the initial condition are addressed in Section~\ref{subsec:initial}, which will play an important role in proving the main result. In Section~\ref{sec:QR} (and the Appendix), we provide some preliminary estimates on the truncated collision operator. These are known results in the whole space but some subtle differences appear in the torus. Section~\ref{sec:main} presents our main result. We first conduct an $L^2$ estimate of the negative part of the solution and then prove a local existence/uniqueness result. Finally, the well-posedness and stability of the method on an arbitrary bounded time interval is established in Section~\ref{subsec:main} (Theorem~\ref{existencetheorem}). Equipped with the stability result, the paper is concluded in Section~\ref{sec:conv} with a straightforward convergence and spectral accuracy proof of the method.
\section{Fourier-Galerkin spectral method for the spatially homogeneous Boltzmann equation} \label{sec:review} In this section, we review the Fourier-Galerkin spectral method for the spatially homogeneous Boltzmann equation. The presentation follows the formulation originally proposed in \cite{PR00}, which is the basis for many fast algorithms developed recently \cite{GHHH17, HM19, HQ20}. Here we limit the description to the extent that is sufficient for the following proof. At the end of the section, we discuss the basic assumptions used throughout the rest of the paper, in particular, the assumptions on the initial condition. The spatially homogeneous Boltzmann equation reads \begin{equation} \label{BE} \partial_{t} f= Q(f,f), \quad t>0, \ v\in \mathbb{R}^d, \ d\geq 2, \end{equation} where $f=f(t,v)$ is the probability density function of time $t$ and velocity $v$, and $Q$ is the collision operator describing the binary collisions among particles, whose bilinear form is given by \begin{equation} \label{Qstrong} Q(g,f)(v)=\int_{\mathbb{R}^d}\int_{\mathbb{S}^{d-1}}B(|v-v_*|,\cos \theta)[g(v_*')f(v')-g(v_*)f(v)]\,\mathrm{d}{\sigma}\, \mathrm{d}{v_*}. \end{equation} In (\ref{Qstrong}), $\sigma$ is a vector varying over the unit sphere $\mathbb{S}^{d-1}$, $v'$ and $v_*'$ are defined as \begin{equation} v'=\frac{v+v_*}{2}+\frac{|v-v_*|}{2}\sigma, \quad v_*'=\frac{v+v_*}{2}-\frac{|v-v_*|}{2}\sigma, \end{equation} and $B\geq 0$ is the collision kernel. In this paper we will consider kernels of the form \begin{equation} \label{kernel} B(|v-v_*|,\cos \theta)=\Phi(|v-v_*|) b(\cos \theta), \quad \cos \theta =\frac{\sigma\cdot (v-v_*)}{|v-v_*|}, \end{equation} whose kinetic part $\Phi$ is a non-negative function and whose angular part $b$ satisfies Grad's cut-off assumption \begin{equation} \label{cutoff} \int_{\mathbb{S}^{d-1}} b(\cos \theta)\,\mathrm{d}{\sigma}<\infty.
\end{equation} To apply the Fourier-Galerkin spectral method, we consider an approximated problem of (\ref{BE}) on a torus $\mathcal{D}_L=[-L,L]^d$: \begin{equation} \label{ABE} \left\{ \begin{split} &\partial_{t} f = Q^{R}(f,f), \quad t>0, \ v\in \mathcal{D}_L,\\ & f(0,v)= f^{0}(v), \end{split} \right. \end{equation} where the initial condition $f^0$ is a non-negative periodic function, $Q^{R}$ is the truncated collision operator defined by \begin{equation}\label{QR} \begin{split} Q^R(g,f)(v)&=\int_{\mathcal{B}_R}\int_{\mathbb{S}^{d-1}}\Phi(|q|)b(\sigma\cdot \hat{q})\left[g(v_*')f(v')-g(v-q)f(v)\right]\, \mathrm{d}{\sigma}\, \mathrm{d}{q}\\ &=\int_{\mathbb{R}^d} \int_{\mathbb{S}^{d-1}}\mathbf{1}_{|q|\leq R}\Phi(|q|)b(\sigma\cdot \hat{q})\left[g(v_*')f(v')-g(v-q)f(v)\right]\,\mathrm{d}{\sigma}\, \mathrm{d}{q}, \end{split} \end{equation} where a change of variable $v_*\rightarrow q=v-v_*$ is applied and the new variable $q$ is truncated to a ball $\mathcal{B}_R$ with radius $R$ centered at the origin. We write $q=|q|\hat{q}$ with $|q|$ being the magnitude and $\hat{q}$ being the direction. Accordingly, \begin{equation} v'=v-\frac{q-|q|\sigma}{2}, \quad v_*'=v-\frac{q+|q|\sigma}{2}. \end{equation} In practice, the values of $L$ and $R$ are often chosen by an anti-aliasing argument \cite{PR00}: assume that $\text{Supp} (f^0(v))\subset \mathcal{B}_S$, then one can take \begin{equation} \label{RL1} R=2S, \quad L\geq \frac{3+\sqrt{2}}{2}S. 
\end{equation} Given an integer $N\geq0$, we then seek a truncated Fourier series expansion of $f$ as \begin{equation} f(t,v)\approx f_N(t,v)=\sum\limits_{k = -N/2}^{N/2} f_k(t) \mathrm{e}^{\mathrm{i} \frac{\pi}{L}k\cdot v} \in \mathbb{P}_N, \end{equation} where \begin{equation} \mathbb{P}_N=\text{span} \left\{ \mathrm{e}^{\mathrm{i} \frac{\pi}{L} k\cdot v}\Big| -N/2\leq k \leq N/2 \right\}\footnote{Note here $k=(k_1,\dots,k_d)$ is a vector, $-N/2\leq k \leq N/2$ means $-N/2\leq k_j \leq N/2$, $j=1,\dots,d$, and $\sum_{k=-N/2}^{N/2}:=\sum_{k_1=-N/2}^{N/2}\cdots \sum_{k_d=-N/2}^{N/2}$.}, \end{equation} equipped with inner product \begin{equation} \langle f,g \rangle = \frac{1}{(2L)^{d}}\int_{\mathcal{D}_L} f \bar{g}\, \mathrm{d} v. \end{equation} Substituting $f_N$ into (\ref{ABE}) and conducting the Galerkin projection onto the space $\mathbb{P}_N$ yields \begin{equation} \label{PFS} \left\{ \begin{split} &\partial_{t} f_N = \mathcal{P}_N Q^{R}(f_N,f_N), \quad t>0, \ v\in \mathcal{D}_L,\\ & f_N(0,v)=f_{N}^{0}(v), \end{split} \right. \end{equation} where $\mathcal{P}_N$ is the projection operator: for any function $g$, \begin{equation}\label{proj} \mathcal{P}_N g=\sum_{k=-N/2}^{N/2}\hat{g}_k \mathrm{e}^{\mathrm{i} \frac{\pi}{L}k\cdot v}, \quad \hat{g}_k=\langle g, \mathrm{e}^{\mathrm{i} \frac{\pi}{L}k\cdot v}\rangle, \end{equation} $f_N^0\in \mathbb{P}_N$ is the initial condition to the numerical system and should be a reasonable approximation to $f^0$. More discussion on the initial condition will be given in Section~\ref{subsec:initial}, which in fact plays an important role in the following proof. Writing out each Fourier mode of (\ref{PFS}), we obtain \begin{equation} \label{FS} \left\{ \begin{split} &\partial_{t} f_k = Q^{R}_k, \quad -N/2\leq k\leq N/2,\\ & f_k(0)= f^{0}_k, \end{split} \right. 
\end{equation} with \begin{equation} Q_{k}^R:=\langle Q^R(f_N,f_N), \mathrm{e}^{\mathrm{i} \frac{\pi}{L}k\cdot v}\rangle, \quad f^0_k:=\langle f_N^0, \mathrm{e}^{\mathrm{i} \frac{\pi}{L}k\cdot v}\rangle. \end{equation} Using the definition in (\ref{QR}) and orthogonality of the Fourier basis, we can derive that \begin{equation} \label{sum} Q_{k}^R =\sum\limits_{\substack{l,m=-N/2\\l+m=k}}^{N/2} G(l,m)f_lf_m, \end{equation} where the weight $G$ is given by \begin{equation} \begin{split} G(l,m) &= \int_{\mathcal{B}_{R}}\int_{\mathbb{S}^{d-1}}\Phi(|q|)b(\sigma\cdot \hat{q})\left[ \mathrm{e}^{-\mathrm{i} \frac{\pi}{2L}(l+m)\cdot q +\mathrm{i} \frac{\pi}{2L}|q|(l-m)\cdot \sigma} - \mathrm{e}^{-\mathrm{i} \frac{\pi}{L}m\cdot q} \right]\,\mathrm{d}\sigma\,\mathrm{d} q\\ &= \int_{\mathcal{B}_{R}}\mathrm{e}^{-\mathrm{i} \frac{\pi}{L}m\cdot q}\left[\int_{\mathbb{S}^{d-1}}\Phi(|q|)b(\sigma\cdot \hat{q})(\mathrm{e}^{\mathrm{i} \frac{\pi}{2L}(l+m)\cdot (q-|q|\sigma)}-1)\, \mathrm{d}\sigma\right] \,\mathrm{d} q. \label{GG} \end{split} \end{equation} The second equality above is obtained by switching two variables $\sigma \leftrightarrow \hat{q}$ in the gain part of $G(l,m)$. In the direct Fourier spectral method, $G(l,m)$ is precomputed since it is independent of the solution. Then in the online computation, the sum (\ref{sum}) is evaluated directly. Note that the solution $f$ to the original problem (\ref{ABE}) is always non-negative which is the key to many stability estimates. However, the solution $f_N$ to the numerical system (\ref{PFS}) is not necessarily non-negative due to the spectral projection which constitutes the main difficulty in the numerical analysis. Luckily, by virtue of the Fourier spectral method, mass is always conserved which provides some control of the solution. 
Precisely, we have \begin{lemma} \label{lemma:conv} The numerical system (\ref{PFS}) preserves mass, that is, \begin{equation} \int_{\mathcal{D}_L} f_N(t,v) \,\mathrm{d} v=\int_{\mathcal{D}_L} f^{0}_N(v) \,\mathrm{d} v. \end{equation} \end{lemma} \begin{proof} Note that \begin{equation} \int_{\mathcal{D}_L} f_N(t,v)\,\mathrm{d}{v}=\sum_{k=-N/2}^{N/2} f_k(t) \int_{\mathcal{D}_L} \mathrm{e}^{\mathrm{i} \frac{\pi}{L}k\cdot v}\, \mathrm{d}{v}=(2L)^d f_0(t), \end{equation} where $f_0$ is the zero-th mode of the numerical solution and is governed by \begin{equation} \partial_{t} f_0 = Q_0^R. \end{equation} From (\ref{sum}), it is clear that $Q^R_0\equiv0$ since $G(l,m)\equiv0$ when $l+m=0$. This implies $f_0$ remains constant in time, whose value is the zero-th Fourier mode of the initial condition $f_N^0(v)$. \end{proof} We now introduce some assumptions and notations that will be used throughout the rest of this paper. \vspace{0.05in} \noindent{\bf Basic assumptions on the truncation parameters and the collision kernel.} \begin{itemize} \vspace{-0.05in} \item[(1)] The truncation parameters $L$ and $R$ in (\ref{ABE}) satisfy \begin{equation} \label{RL} L\geq R>0. \end{equation} Note that the choice (\ref{RL1}) implies $L\geq (3+\sqrt{2})R/4$ hence the above condition is satisfied. \vspace{-0.1in} \item[(2)] The kinetic part of the collision kernel (\ref{kernel}) satisfies \begin{equation} \label{kinetic} \left \| \mathbf{1}_{|v|\leq R}\Phi(|v|)\right\|_{L^{\infty}(\mathcal{D}_L)} < \infty. \end{equation} Note that all power law hard potentials $\Phi(|v|)=|v|^{\gamma}$ ($0\leq \gamma\leq 1$) as well as the ``modified" soft potentials $\Phi(|v|)=(1+|v|)^{\gamma}$ ($-d<\gamma <0$) satisfy this condition. 
\vspace{-0.1in} \item[(3)] The angular part of the collision kernel (\ref{kernel}) has been replaced by its symmetrized version\footnote{This symmetrization can readily reduce the computational cost by half (integration over the whole sphere is reduced to the half sphere), so it also has important implications for numerical purposes, see \cite{GHHH17}.}: \begin{equation} \label{angular} \left[b(\cos \theta)+b(\cos \left(\pi-\theta\right))\right]\mathbf{1}_{0 \leq \theta \leq \pi/2}, \end{equation} and satisfies the cut-off assumption (\ref{cutoff}). \end{itemize} \noindent{\bf Some notations.} For a periodic function $f(v)$ in $\mathcal{D}_L$, we define its Lebesgue norm and Sobolev norm as follows: \begin{equation} \|f\|_{L^p_{\text{per}}(\mathcal{D}_L)}=\left(\int_{\mathcal{D}_L} |f(v)|^p\,\mathrm{d}{v}\right)^{1/p}, \quad \|f\|_{{H^k_{\text{per}}}(\mathcal{D}_L)}=\left( \sum_{|\nu|\leq k} \|\partial_v^{\nu}f\|_{L^2_{\text{per}}(\mathcal{D}_L)}^2\right)^{1/2}, \end{equation} where $k\geq 0$ is an integer and $\nu$ is a multi-index. The subscript ``per'' indicates that the function is periodic and will be omitted in the following for simplicity. Except in Section~\ref{sec:QR}, we do not track explicitly the dependence of constants on the truncation parameters $R$, $L$, the dimension $d$, and the collision kernel $B$. For a function $f(v)$ in $\mathcal{D}_L$, we define its positive and negative parts pointwise as \begin{equation} f^+(v)=\max\{ f(v),0 \}, \quad f^-(v)=\max\{ -f(v),0 \}, \end{equation} so that $f=f^+-f^-$ and $|f|=f^++f^-$.
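To make the discretization concrete, the projection $\mathcal{P}_N$ of (\ref{proj}) can be illustrated numerically. The sketch below is a one-dimensional simplification of our own (the paper works in $d \geq 2$): the Fourier coefficients of a sampled profile are obtained from the FFT, the discrete Parseval identity holds exactly, and truncating to the modes $|k| \leq N/2$ incurs a spectrally small error for a smooth profile.

```python
import numpy as np

L = 4.0
M_grid = 256
v = -L + 2 * L * np.arange(M_grid) / M_grid     # uniform grid on D_L = [-L, L)
f = np.exp(-v**2)                               # smooth, rapidly decaying profile

# the coefficients <f, e^{i pi k v / L}> computed by the trapezoidal rule
# coincide with the normalized FFT of the samples
fhat = np.fft.fft(f) / M_grid

# discrete Parseval identity: (1/2L) * ||f||^2 equals the sum of |fhat_k|^2
assert abs(np.sum(f**2) / M_grid - np.sum(np.abs(fhat)**2)) < 1e-12

# Galerkin projection P_N: keep only the modes with |k| <= N/2
N = 64
k = np.fft.fftfreq(M_grid, d=1.0 / M_grid)      # integer wavenumbers
f_N = np.real(np.fft.ifft(np.where(np.abs(k) <= N // 2, fhat, 0) * M_grid))
assert np.max(np.abs(f - f_N)) < 1e-6           # spectral accuracy for smooth f
```

The same construction applies in higher dimensions with tensorized FFTs; only the smoothness of $f$ determines how fast the projection error decays.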
\subsection{Assumptions on the initial condition} \label{subsec:initial} To prove our main well-posedness and stability result, Theorem~\ref{existencetheorem}, we assume that the initial condition $f^0(v)$ to the original problem (\ref{ABE}) is periodic, non-negative, and belongs to $L^1\cap H^1(\mathcal{D}_L)$ (in fact $L^1$ can be removed since $L^2(\mathcal{D}_L)\subset L^1(\mathcal{D}_L)$ due to boundedness of the domain). For the initial condition $f_N^0(v)$ to the numerical system (\ref{PFS}), we require that it lie in the space $\mathbb{P}_N$ and satisfy the following: \begin{itemize} \item[(a)] Mass conservation: \begin{equation} \label{con(a)} \int_{\mathcal{D}_L} f^{0}_N(v)\, \mathrm{d} v=\int_{\mathcal{D}_L} f^0(v)\, \mathrm{d} v. \end{equation} \item[(b)] Control of $L^2$ and $H^1$ norms: for any integer $N\geq 0$, \begin{equation} \label{con(b)} \|f^0_N\|_{L^2(\mathcal{D}_L)}\leq \|f^0\|_{L^2(\mathcal{D}_L)}, \quad \|f^0_N\|_{H^1(\mathcal{D}_L)}\leq \|f^0\|_{H^1(\mathcal{D}_L)}. \end{equation} \item[(c)] Control of $L^1$ norm: there exists an integer $N_0$ such that for all $N> N_0$, \begin{equation} \label{con(c)} \|f_N^0\|_{L^1(\mathcal{D}_L)}\leq C \|f^0\|_{L^1(\mathcal{D}_L)}, \end{equation} where $C>1$ is some constant whose value is of no essential importance. In the following proof, we will take $C=2$ for simplicity. \item[(d)] $L^2$ norm of $f_N^{0,-}$ can be made arbitrarily small: for any $\varepsilon>0$, there exists an integer $N_0$ such that for all $N> N_0$, \begin{equation} \label{con(d)} \|f_N^{0,-}\|_{L^2(\mathcal{D}_L)} <\varepsilon. \end{equation} \end{itemize} \begin{remark} \label{rmk} An obvious choice is to take $f_N^0=\mathcal{P}_N f^0$. Condition (a) is satisfied since it is equivalent to preserving the zero-th Fourier mode of the function. Condition (b) is a direct consequence of Parseval's identity.
Condition (c) follows from the $L^2$ convergence of the Fourier series and the fact that the $L^1$ norm is controlled by the $L^2$ norm on a bounded domain. Condition (d) can be proved at least when the uniform convergence of the Fourier series is guaranteed, for which one may require additional continuity of $f^0$: for instance, that $f^0$ is H\"{o}lder continuous, or continuous with bounded variation (in fact $BV$ can be removed since $H^1(\mathcal{D}_L)\subset W^{1,1}(\mathcal{D}_L)\subset BV(\mathcal{D}_L)$). \end{remark} \begin{remark} Sometimes the initial condition $f^0$ may contain discontinuities; in that case, simply taking the Fourier projection of $f^0$ will generate undesirable oscillations (Gibbs phenomenon). Hence a reasonable choice is to take a filtered version $f_N^0=\mathcal{S}_Nf^0$, where $\mathcal{S}_N$ is defined as: for any function $g$, \begin{equation} \mathcal{S}_Ng=\sum_{k=-N/2}^{N/2}\sigma_N(k)\hat{g}_k\mathrm{e}^{\mathrm{i} \frac{\pi}{L}k\cdot v},\quad \hat{g}_k=\langle g, \mathrm{e}^{\mathrm{i} \frac{\pi}{L}k\cdot v}\rangle, \end{equation} with $\sigma_N$ being the filter function, see for instance \cite[Chapter 9]{HGG07}. Typically, the filter does not change the zero-th Fourier mode of the function and does not amplify the remaining Fourier modes, hence conditions (a) and (b) are satisfied automatically. For conditions (c) and (d) to hold, one needs some kind of convergence, which depends on the property of the actual filter. Without going into details, let us just mention that there is a class of positive filters (e.g., the Fej\'{e}r or Jackson filter \cite{WWAF06}) which preserve the positivity of the function, so that condition (d) is trivially satisfied. Condition (c) can be satisfied as well by Young's inequality, since the $L^1$ norm of such a filter kernel is exactly 1.
However, the positivity-preserving filters may come at the price of slower convergence (away from the discontinuity) compared with other high order filters (e.g., the exponential filter \cite{HGG07}). Therefore, one could take non-positive high order filters, as long as they satisfy conditions (c) and (d). It is worth emphasizing that the purpose of applying the filter here is merely to fix the initial condition when $f^0$ is discontinuous so that our well-posedness and stability proof still holds. This is in stark contrast to the filtering method used in \cite{PR00stability} and \cite{CFY18}, where the filter is applied to the equation to preserve the positivity of the solution. \end{remark} \section{Some preliminary estimates on the truncated collision operator $ Q^{R}$} \label{sec:QR} In this section, we prove some important estimates for the truncated collision operator (\ref{QR}). Since its gain term and loss term possess quite different properties, we consider \begin{equation} \begin{split} Q^{R,+}(g,f)(v)&:=\int_{\mathbb{R}^d} \int_{\mathbb{S}^{d-1}}\mathbf{1}_{|q|\leq R}\Phi(|q|)b(\sigma\cdot \hat{q})g(v_*')f(v')\, \mathrm{d}{\sigma} \,\mathrm{d}{q},\\ Q^{R,-}(g,f)(v)&:=\int_{\mathbb{R}^d} \int_{\mathbb{S}^{d-1}}\mathbf{1}_{|q|\leq R}\Phi(|q|)b(\sigma\cdot \hat{q})g(v-q)f(v) \,\mathrm{d}{\sigma}\, \mathrm{d}{q}, \end{split} \end{equation} separately whenever appropriate.
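Returning briefly to the filtered initialization discussed in the remarks above: the positivity preservation of the Fej\'{e}r filter, in contrast with the Gibbs undershoot of the sharp Fourier projection, can be observed in a minimal 1D experiment (the grid size, mode cutoff, and step-function data below are illustrative choices, not taken from the text):

```python
import numpy as np

M, n = 256, 16                        # grid size and highest retained mode N/2
x = -np.pi + 2*np.pi*np.arange(M)/M
g = (np.abs(x) < 1.0).astype(float)   # nonnegative, discontinuous initial data

ghat = np.fft.fft(g)/M
k = np.fft.fftfreq(M, d=1.0/M)        # integer mode numbers
fejer = np.where(np.abs(k) <= n, 1.0 - np.abs(k)/(n + 1), 0.0)  # Fejer filter
sharp = np.where(np.abs(k) <= n, 1.0, 0.0)                      # plain cutoff

g_fejer = np.real(np.fft.ifft(fejer*ghat)*M)
g_sharp = np.real(np.fft.ifft(sharp*ghat)*M)

assert g_fejer.min() > -1e-10                   # Fejer mean stays nonnegative
assert g_sharp.min() < -1e-3                    # sharp cutoff undershoots
assert abs(g_fejer.mean() - g.mean()) < 1e-10   # zero-th mode (mass) is kept
```

Here the nonnegativity of the Fej\'{e}r mean reflects the nonnegativity of the Fej\'{e}r kernel, and the mass check corresponds to condition (a): the filter leaves the zero-th mode untouched since $\sigma_N(0)=1$.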
\begin{proposition} Let the collision kernel $ B $ and truncation parameters $R$ and $L$ satisfy the assumptions (\ref{RL}), (\ref{kinetic}), (\ref{angular}), and (\ref{cutoff}), then the truncated collision operators $ Q^{R,\pm}(g,f)$ satisfy the following estimates: for $1\leq p \leq \infty$, \begin{equation}\label{QGLp1} \left\|Q^{R,+}(g,f)\right\|_{L^{p}(\mathcal{D}_L)} \leq C^+_{R,L,d,p}(B) \left\|g\right\|_{L^{1}(\mathcal{D}_L)} \left\|f \right\|_{L^{p}(\mathcal{D}_L)}, \end{equation} where the constant $C^+_{R,L,d,p}(B)=C^{1/p}\|b\|_{L^1(\mathbb{S}^{d-1})}\|\mathbf{1}_{|v|\leq R} \Phi(|v|)\|_{L^{\infty}{(\mathcal{D}_L)}}$; and \begin{equation}\label{QLLp} \left\|Q^{R,-}(g,f)\right\|_{L^{p}(\mathcal{D}_L)} \leq C_{R,L,d}^-(B) \left\|g\right\|_{L^{1}(\mathcal{D}_L)} \left\|f \right\|_{L^{p}(\mathcal{D}_L)}, \end{equation} where the constant $C^-_{R,L,d}(B)=C\|b\|_{L^1(\mathbb{S}^{d-1})} \left \| \mathbf{1}_{|v|\leq R}\Phi(|v|)\right\|_{L^{\infty}(\mathcal{D}_L)}$. In particular, for the whole collision operator $ Q^{R}(g,f)$, we have \begin{equation}\label{QLp} \left\|Q^{R}(g,f)\right\|_{L^{p}(\mathcal{D}_L)} \leq C_{R,L,d,p}(B) \left\|g\right\|_{L^{1}(\mathcal{D}_L)} \left\|f \right\|_{L^{p}(\mathcal{D}_L)}. \end{equation} \end{proposition} \begin{proof} The proof for the truncated gain term $ Q^{R,+}(g,f) $ is similar to that for the gain term $ Q^+(g,f)$ of the usual Boltzmann operator on $\mathbb{R}^d$. However, the estimate is not entirely obvious, as we need to restrict back to a bounded domain. Therefore, we follow \cite[Theorem 2.1]{mouhot2004regularity} to give a complete proof of (\ref{QGLp1}) (see Appendix). In fact, by carrying this out carefully, one can see that the condition (\ref{RL}) is needed.
For the loss term, we write it as \begin{equation} \label{QLS} Q^{R,-}(g,f)(v)=L^R(g)(v)f(v), \end{equation} where $L^R$ is a convolution given by \begin{equation} L^R(g)(v)=\|b\|_{L^1(\mathbb{S}^{d-1})} \int_{\mathbb{R}^d} \mathbf{1}_{|q|\leq R}\Phi(|q|) g(v-q)\,\mathrm{d}{q}=\|b\|_{L^1(\mathbb{S}^{d-1})}\left(\mathbf{1}_{|v|\leq R}\Phi(|v|)\right)*g(v). \end{equation} Then \begin{equation} \begin{split} \|Q^{R,-}(g,f)\|_{L^p(\mathcal{D}_L)}& \leq \left \| L^R(g)\right\|_{L^{\infty}(\mathcal{D}_L)}\|f\|_{L^p(\mathcal{D}_L)}\\ & = \|b\|_{L^1(\mathbb{S}^{d-1})} \left \| \left(\mathbf{1}_{|v|\leq R}\Phi(|v|)\right)*g(v)\right\|_{L^{\infty}(\mathcal{D}_L)}\|f\|_{L^p(\mathcal{D}_L)}\\ & \leq \|b\|_{L^1(\mathbb{S}^{d-1})} \left \| \mathbf{1}_{|v|\leq R}\Phi(|v|)\right\|_{L^{\infty}(\mathcal{D}_L)} \left\|g\right\|_{L^{1}(\mathcal{B}_{\sqrt{2}L+R})} \|f\|_{L^p(\mathcal{D}_L)}\\ & \leq C \|b\|_{L^1(\mathbb{S}^{d-1})} \left \| \mathbf{1}_{|v|\leq R}\Phi(|v|)\right\|_{L^{\infty}(\mathcal{D}_L)}\|g\|_{L^1(\mathcal{D}_L)}\|f\|_{L^p(\mathcal{D}_L)}\\ & = C_{R,L,d}^-(B) \left\|g\right\|_{L^{1}(\mathcal{D}_L)} \left\|f \right\|_{L^{p}(\mathcal{D}_L)}, \end{split} \end{equation} where we used $R\leq L$ in the third line and the fact that $g$ is a periodic function on $\mathcal{D}_L$ in the fourth line. \end{proof} \begin{proposition} Let the collision kernel $ B $ and truncation parameters $R$ and $L$ satisfy the assumptions (\ref{RL}), (\ref{kinetic}), (\ref{angular}), and (\ref{cutoff}), then the truncated collision operator $ Q^{R}(g,f)$ satisfies the following estimate: for integer $k\geq 0$, \begin{equation} \label{QHk} \left\|Q^{R}(g,f)\right\|_{H^k(\mathcal{D}_L)} \leq C'_{R,L,d,k}(B) \left\|g\right\|_{H^k(\mathcal{D}_L)} \left\|f \right\|_{H^k(\mathcal{D}_L)}.
\end{equation} \end{proposition} \begin{proof} First of all, (\ref{QHk}) when $k=0$ is a direct consequence of (\ref{QLp}) by taking $p=2$ and noting that $\left\|g\right\|_{L^{1}(\mathcal{D}_L)}\leq (2L)^{d/2}\left\|g\right\|_{L^{2}(\mathcal{D}_L)}$. To prove (\ref{QHk}) for $k>0$, note that the collision operator satisfies the Leibniz rule: \begin{equation} \partial_v^{\nu}Q^{R}(g,f)= \sum_{\mu\leq \nu}\binom{\nu}{\mu} Q^{R}(\partial_v^{\mu}g,\partial_v^{\nu-\mu}f), \end{equation} which is a consequence of the bilinearity and the Galilean invariance of the truncated collision operator: $Q^{R}(g,f)(v-h)=Q^{R}(g(\cdot-h),f(\cdot-h))(v)$. Then we have \begin{equation} \begin{split} \|Q^{R}(g,f)\|_{H^k(\mathcal{D}_L)}^2&=\sum_{|\nu|\leq k}\left\|\partial^{\nu}_v Q^{R}(g,f)\right\|_{L^2(\mathcal{D}_L)}^2=\sum_{|\nu|\leq k} \left\|\sum_{\mu\leq \nu}\binom{\nu}{\mu} Q^{R}(\partial_v^{\mu}g,\partial_v^{\nu-\mu}f)\right\|^2_{L^2(\mathcal{D}_L)}\\ & \leq \sum_{|\nu|\leq k}\sum_{\mu\leq \nu}\binom{\nu}{\mu}^2 \sum_{\mu\leq \nu} \left\| Q^{R}(\partial_v^{\mu}g,\partial_v^{\nu-\mu}f)\right\|^2_{L^2(\mathcal{D}_L)} \\ & \leq C'^2_{R,L,d,0}(B) \sum_{|\nu|\leq k}\sum_{\mu\leq \nu}\binom{\nu}{\mu}^2 \sum_{\mu\leq \nu} \left\|\partial_v^{\mu}g\right\|^2_{L^{2}(\mathcal{D}_L)} \left\| \partial_v^{\nu-\mu}f \right\|^2_{L^{2}(\mathcal{D}_L)}\\ & \leq C'^2_{R,L,d,k}(B) \left\|g\right\|^2_{H^{k}(\mathcal{D}_L)} \left\|f \right\|^2_{H^k(\mathcal{D}_L)}, \end{split} \end{equation} where we used the Cauchy-Schwarz inequality in the second line. \end{proof} \section{Main result: well-posedness and stability of the method} \label{sec:main} In this section, we establish the well-posedness and stability of the Fourier-Galerkin spectral method (\ref{PFS}) on an arbitrary bounded time interval $[0, T]$.
The main strategy of the proof is as follows: In Section~\ref{sec:regularity} we prove some $L^2$ and $H^k$ estimates of the solution under an {\it a priori} $L^1$ bound of $f_N$, among which the key result is the $L^2$ estimate of the negative part of the solution (Proposition~\ref{regularity1}). Proposition~\ref{localexistence} is a local existence and uniqueness result over a small time interval $ [t_0, t_0+\tau]$. Finally, the main result is presented in Theorem~\ref{existencetheorem}, where we show that when $N$ is large enough the negative part of the solution can be controlled over time $[0,\tau]$. Due to mass conservation, this implies that the initial $L^1$ bound of the solution is restored at time $\tau$. Therefore, we can repeat the procedure iteratively to build the solution up to the final time $T$ (the estimates on $N$ and $\tau$ are done carefully at the beginning so that the same values can be used in the following iterations). \subsection{Propagation of the $L^2$ estimate of $f_N^-$ under an {\it a priori} $L^1$ bound of $f_N$} \label{sec:regularity} We first establish the $L^2$ and $H^k$ estimates of $f_N$ under an {\it a priori} $L^1$ bound of $f_N$. This result is not new and the proof is similar to \cite[Lemma 4.2]{FM11}. The main difference is that we closely track the dependence of the constants in the $H^1$ case, which will be useful in the following estimate. \begin{proposition}\label{regularity} Let the collision kernel $ B $ and truncation parameters $R$ and $L$ satisfy the assumptions (\ref{RL}), (\ref{kinetic}), (\ref{angular}), and (\ref{cutoff}).
For the numerical system \eqref{PFS}, assume that the initial condition $ f_{N}^0\in H^{k}(\mathcal{D}_L) $ for some integer $k\geq 0$ and that the solution $ f_{N} $ has an $ L^{1}$ bound up to some time $t_0$: \begin{equation}\label{fNL1} \forall t\in [0,t_0], \quad \left\| f_{N}(t) \right\|_{L^{1}(\mathcal{D}_L)} \leq M, \end{equation} then there exists a constant $K_k$ depending on $t_0$, $M$, and $\|f_N^0\|_{H^k(\mathcal{D}_L)}$ such that \begin{equation} \label{Hk} \forall t\in[0,t_0], \quad \left\| f_{N}(t) \right\|_{H^{k}(\mathcal{D}_L)} \leq K_{k}\left(t_0,M, \|f_N^0\|_{H^k(\mathcal{D}_L)}\right). \end{equation} In particular, for $k=0$ and $k=1$, we have \begin{equation} K_0=\mathrm{e}^{t_0 D_0M } \left\|f_{N}^0\right\|_{L^{2}(\mathcal{D}_L)}, \quad K_1=\mathrm{e}^{t_0D_1 \left(M+K_0\right)} \left( \left\|f^0_N\right\|_{H^{1}(\mathcal{D}_L)}+D_2\right), \label{KK1} \end{equation} where $D_0$, $D_1$, $D_2$ are constants depending only on the truncation parameters $R$, $L$, dimension $d$, and the collision kernel $B$. \end{proposition} \begin{proof} The proof is based on mathematical induction. Step (i): We first prove that (\ref{Hk}) holds for $k=0$. Multiplying both sides of (\ref{PFS}) by $f_N$ and integrating over $\mathcal{D}_L$ yields \begin{equation}\label{H0} \begin{split} \frac{1}{2} \frac{\mathrm{d}}{\mathrm{d} t} \left\| f_{N} \right\|^{2}_{L^{2}(\mathcal{D}_L)} =& \int_{\mathcal{D}_L} \mathcal{P}_NQ^{R}(f_{N},f_{N}) f_{N} \, \mathrm{d} v \leq \left\| \mathcal{P}_NQ^{R}(f_{N},f_{N}) \right\|_{L^{2}(\mathcal{D}_L)} \left\| f_{N} \right\|_{L^{2}(\mathcal{D}_L)} \\ \leq &\left\| Q^{R}(f_{N},f_{N}) \right\|_{L^{2}(\mathcal{D}_L)} \left\| f_{N} \right\|_{L^{2}(\mathcal{D}_L)} \leq D_0\left\| f_{N} \right\|_{L^{1}(\mathcal{D}_L)} \left\| f_{N} \right\|^{2}_{L^{2}(\mathcal{D}_L)} \leq D_0M \left\| f_{N} \right\|^{2}_{L^{2}(\mathcal{D}_L)}, \end{split} \end{equation} where we used (\ref{QLp}) and the assumption (\ref{fNL1}).
Thus we have \begin{equation} \frac{\mathrm{d}}{\mathrm{d} t} \left\| f_{N} \right\|_{L^{2}(\mathcal{D}_L)} \leq D_0M \left\| f_{N} \right\|_{L^{2}(\mathcal{D}_L)}. \end{equation} By Gr{\"o}nwall's inequality, we further conclude that \begin{equation}\label{priori} \left\| f_{N}(t) \right\|_{L^{2}(\mathcal{D}_L)} \leq \mathrm{e}^{ D_0M t_0} \left\|f_{N}^0\right\|_{L^{2}(\mathcal{D}_L)}, \quad \forall t\in [0, t_0]. \end{equation} Step (ii): We then assume that (\ref{Hk}) holds for some $k\geq 0$, and proceed to prove that it holds also for $k+1$. First of all, taking the $ \nu $-th derivative w.r.t.~$v$ on both sides of \eqref{PFS} gives \begin{equation} \label{fmu} \partial_t(\partial^{\nu}_vf_{N}) = \partial^{\nu}_v \mathcal{P}_NQ^{R}(f_{N},f_{N})= \mathcal{P}_N \partial^{\nu}_vQ^{R}(f_{N},f_{N}). \end{equation} Multiplying (\ref{fmu}) by $ \partial^{\nu}_vf_{N} $ and integrating over $\mathcal{D}_L$ then yields \begin{equation}\label{H1} \frac{1}{2} \frac{\mathrm{d}}{\mathrm{d} t}\left\|\partial^{\nu}_vf_{N}\right\|^{2}_{L^{2}(\mathcal{D}_L)} = \int_{\mathcal{D}_L} \mathcal{P}_N\partial^{\nu}_v Q^{R}(f_{N},f_{N}) \partial^{\nu}_vf_{N} \, \mathrm{d} v \leq \left\|\partial^{\nu}_v Q^{R}(f_{N},f_{N})\right\|_{L^{2}(\mathcal{D}_L)} \left\|\partial^{\nu}_vf_{N}\right\|_{L^{2}(\mathcal{D}_L)}. \end{equation} Adding \eqref{H1} over all $|\nu|\leq k+1$ and using the Cauchy-Schwarz inequality, we find that \begin{equation} \frac{1}{2} \frac{\mathrm{d}}{\mathrm{d} t}\left\|f_{N}\right\|^{2}_{H^{k+1}(\mathcal{D}_L)} \leq \left\|Q^{R}(f_{N},f_{N})\right\|_{H^{k+1}(\mathcal{D}_L)} \left\|f_{N}\right\|_{H^{k+1}(\mathcal{D}_L)}, \end{equation} i.e., \begin{equation}\label{H2} \frac{\mathrm{d}}{\mathrm{d} t}\left\|f_{N}\right\|_{H^{k+1}(\mathcal{D}_L)} \leq \left\|Q^{R}(f_{N},f_{N})\right\|_{H^{k+1}(\mathcal{D}_L)}.
\end{equation} On the other hand, \begin{equation} \begin{split} &\left\|Q^{R}(f_{N},f_{N})\right\|^{2}_{H^{k+1}(\mathcal{D}_L)} = \left\|Q^{R}(f_{N},f_{N})\right\|^{2}_{H^{k}(\mathcal{D}_L)} + \sum_{|\nu|= k+1} \left\|\partial_v^\nu Q^{R}(f_{N},f_{N})\right\|^{2}_{L^{2}(\mathcal{D}_L)}\\ =&\left\|Q^{R}(f_{N},f_{N})\right\|^{2}_{H^{k}(\mathcal{D}_L)} + \sum_{|\nu|= k+1} \left\| \sum_{\mu\leq \nu} \binom{\nu}{\mu} Q^{R}(\partial_v^{\mu}f_{N},\partial_v^{\nu-\mu}f_{N})\right\|^{2}_{L^{2}(\mathcal{D}_L)}\\ \leq & \left\|Q^{R}(f_{N},f_{N})\right\|^{2}_{H^{k}(\mathcal{D}_L)} + \sum_{|\nu|= k+1} C_0^2\sum_{\mu\leq \nu} \left\|Q^{R}(\partial^{\mu}_vf_{N},\partial^{\nu-\mu}_vf_{N})\right\|^{2}_{L^{2}(\mathcal{D}_L)}\\ = & \left\|Q^{R}(f_{N},f_{N})\right\|^{2}_{H^{k}(\mathcal{D}_L)} + \sum_{|\nu|= k+1} C_0^2 \left(\sum_{0<\mu< \nu} \left\|Q^{R}(\partial^{\mu}_vf_{N},\partial^{\nu-\mu}_vf_{N})\right\|^{2}_{L^{2}(\mathcal{D}_L)}\right.\\ &\left.+\left\|Q^{R}(f_{N},\partial^{\nu}_vf_{N})\right\|^{2}_{L^{2}(\mathcal{D}_L)}+\left\|Q^{R}(\partial^{\nu}_vf_{N},f_{N})\right\|^{2}_{L^{2}(\mathcal{D}_L)}\right)\\ \leq &C_1^2\left\|f_{N}\right\|_{H^{k}(\mathcal{D}_L)}^2 + \sum_{|\nu|= k+1} C_0^2\left(\sum_{0<\mu< \nu}C_2^2 \|\partial^{\mu}_vf_{N}\|_{L^2(\mathcal{D}_L)}^2\|\partial^{\nu-\mu}_vf_{N}\|_{L^2(\mathcal{D}_L)}^2\right.\\ &\left.+C_3^2\|f_N\|_{L^1(\mathcal{D}_L)}^2\left\|\partial_v^{\nu}f_{N}\right\|_{L^2(\mathcal{D}_L)}^2+C_4^2\|\partial_v^\nu f_{N}\|_{L^1(\mathcal{D}_L)}^2\left\|f_{N}\right\|_{L^2(\mathcal{D}_L)}^2\right)\\ \leq & C_5^2\left\|f_{N}\right\|_{H^{k}(\mathcal{D}_L)}^2+C_6^2(\|f_N\|_{L^1(\mathcal{D}_L)}^2+\|f_N\|_{L^2(\mathcal{D}_L)}^2)\left\|f_{N}\right\|_{H^{k+1}(\mathcal{D}_L)}^2\\ \leq & C_5^2K_k^2+C_6^2(M^2+K_0^2)\left\|f_{N}\right\|_{H^{k+1}(\mathcal{D}_L)}^2, \end{split} \end{equation} where, in the third-to-last inequality, we used (\ref{QHk}) in the first line and (\ref{QLp}) in the second line. In the last inequality, we used the induction hypothesis.
Then (\ref{H2}) becomes \begin{equation} \frac{\mathrm{d}}{\mathrm{d} t}\left\|f_{N}\right\|_{H^{k+1}(\mathcal{D}_L)} \leq C_6(M+K_0)\left\|f_{N}\right\|_{H^{k+1}(\mathcal{D}_L)} + C_5K_k. \end{equation} By Gr{\"o}nwall's inequality, we have \begin{equation} \label{QQ} \begin{split} \left\|f_{N}(t)\right\|_{H^{k+1}(\mathcal{D}_L)} \leq & \mathrm{e}^{C_6 (M+K_0) t_0} \left( \left\|f_{N}^0\right\|_{H^{k+1}(\mathcal{D}_L)} + \frac{C_5K_k}{C_6(M+K_0)} \right):= K_{k+1}, \quad \forall t\in [0,t_0]. \end{split} \end{equation} This completes the induction argument for $k+1$. In particular, the explicit formula of $K_0$ is given in (\ref{priori}) and the formula of $K_1$ is implied by (\ref{QQ}) when $k=0$. \end{proof} We now proceed to estimate the negative part of the solution, which relies on a careful estimate of both the gain and loss terms of the collision operator. This estimate will play a key role in the main theorem. \begin{proposition}\label{regularity1} Let the collision kernel $ B $ and truncation parameters $R$ and $L$ satisfy the assumptions (\ref{RL}), (\ref{kinetic}), (\ref{angular}), and (\ref{cutoff}).
For the numerical system \eqref{PFS}, assume that the initial condition $ f_{N}^0\in H^1(\mathcal{D}_L) $ and that the solution $ f_{N} $ has an $ L^{1}$ bound up to some time $t_0$: \begin{equation}\label{fNL2} \forall t\in [0, t_0], \quad \left\| f_{N}(t) \right\|_{L^{1}(\mathcal{D}_L)} \leq M, \end{equation} then \begin{equation} \label{K1} \forall t\in [0, t_0], \quad \left\|f_{N}(t)\right\|_{L^2(\mathcal{D}_L)} \leq K_0, \quad \left\|f_{N}(t)\right\|_{H^{1}(\mathcal{D}_L)} \leq K_1, \end{equation} and $f^-_N$, the negative part of $f_N$, satisfies \begin{equation} \label{fneg} \forall t\in [0, t_0], \quad \left\|f_{N}^{-}(t)\right\|_{L^{2}(\mathcal{D}_L)} \leq \mathrm{e}^{t_0D_3(M+K_0) } \left(\left\|f_{N}^{0,-}\right\|_{L^{2}(\mathcal{D}_L)} +\frac{D_4 K_1^2}{MN}\right), \end{equation} where $K_0$, $K_1$ are given in (\ref{KK1}), and $D_3$ and $D_4$ are constants depending only on the truncation parameters $R$, $L$, dimension $d$, and the collision kernel $B$. \end{proposition} \begin{proof} First of all, since $ f_{N}^0\in H^1(\mathcal{D}_L) $, Proposition~\ref{regularity} (when $k=1$) directly yields (\ref{K1}). Equipped with this regularity, we now estimate the negative part of $f_N$. Recall that $f_N=f_N^+ - f_N^-$ and $|f_N|=f_N^+ + f_N^-$. We first rewrite (\ref{PFS}) as \begin{equation} \label{PFS1} \partial_{t} f_N = Q^{R,+}(f_N,f_N) - Q^{R,-}(f_N,f_N) + E_{N}(f_N), \end{equation} with \begin{equation} E_N(f_N):=\mathcal{P}_NQ^R(f_N,f_N)-Q^R(f_N,f_N).
\end{equation} For the gain term, we have \begin{equation} \begin{split} Q^{R,+}(f_{N},f_{N}) f_{N} \mathbf{1}_{\left\lbrace f_{N}\leq 0\right\rbrace } = & Q^{R,+}(f_{N}^{+} - f_{N}^{-}, f_{N}^{+} - f_{N}^{-}) f_{N} \mathbf{1}_{\left\lbrace f_{N}\leq 0\right\rbrace }\\ = & \left[ Q^{R,+}(f_{N}^{+}, f_{N}^{+}) - Q^{R,+}(f_{N}^{+}, f_{N}^{-}) - Q^{R,+}(f_{N}^{-}, f_{N}^{+}) + Q^{R,+}(f_{N}^{-}, f_{N}^{-}) \right] f_{N} \mathbf{1}_{\left\lbrace f_{N}\leq 0\right\rbrace }\\ = & \left[ -Q^{R,+}(f_{N}^{+}, f_{N}^{+}) + Q^{R,+}(f_{N}^{+}, f_{N}^{-}) +Q^{R,+}(f_{N}^{-}, f_{N}^{+}) - Q^{R,+}(f_{N}^{-}, f_{N}^{-}) \right] f_{N}^-\\ \leq & \left[ Q^{R,+}(f_{N}^{+}, f_{N}^{-}) + Q^{R,+}(f_{N}^{-}, f_{N}^{+}) \right] f_{N}^{-}. \end{split} \end{equation} Hence \begin{equation} \label{Q+} \begin{split} \int_{\mathcal{D}_L} Q^{R,+}(f_{N},f_{N}) f_{N} \mathbf{1}_{\left\lbrace f_{N}\leq 0\right\rbrace } \, \mathrm{d} v &\leq \int_{\mathcal{D}_L} \left[ Q^{R,+}(f_{N}^{+}, f_{N}^{-}) + Q^{R,+}(f_{N}^{-}, f_{N}^{+}) \right] f_{N}^{-} \, \mathrm{d} v \\ &\leq \left\|Q^{R,+}(f_{N}^{+}, f_{N}^{-})+Q^{R,+}(f_{N}^{-}, f_{N}^{+})\right\|_{L^{2}(\mathcal{D}_L)} \left\|f_{N}^{-}\right\|_{L^{2}(\mathcal{D}_L)}\\ & \leq C_0\left\| f_{N}^+\right\|_{L^{1}(\mathcal{D}_L)} \left\| f_{N}^{-} \right\|^{2}_{L^{2}(\mathcal{D}_L)}+C_0\left\| f_{N}^-\right\|_{L^{1}(\mathcal{D}_L)}\left\| f_{N}^+\right\|_{L^{2}(\mathcal{D}_L)} \left\| f_{N}^{-} \right\|_{L^{2}(\mathcal{D}_L)}\\ & \leq C_0\left\| f_{N}\right\|_{L^{1}(\mathcal{D}_L)} \left\| f_{N}^{-} \right\|^{2}_{L^{2}(\mathcal{D}_L)}+C_0'\left\| f_{N}\right\|_{L^{2}(\mathcal{D}_L)} \left\| f_{N}^{-} \right\|^2_{L^{2}(\mathcal{D}_L)}, \end{split} \end{equation} where we used the estimate (\ref{QGLp1}) for the gain term. 
For the loss term, we have \begin{equation} -Q^{R,-}(f_{N},f_{N}) f_{N} \mathbf{1}_{\left\lbrace f_{N}\leq 0\right\rbrace } = - L^R(f_{N})f_{N} f_{N}\mathbf{1}_{\left\lbrace f_{N}\leq 0\right\rbrace }=-L^R(f_{N})f_{N}^-f_{N}^-=-Q^{R,-}(f_{N},f_{N}^-)f_{N}^-, \end{equation} where we used the structure of the loss term, see (\ref{QLS}). Hence \begin{equation} \label{Q-} \begin{split} -\int_{\mathcal{D}_L} Q^{R,-}(f_{N},f_{N}) f_{N} \mathbf{1}_{\left\lbrace f_{N}\leq 0\right\rbrace } \, \mathrm{d} v &= -\int_{\mathcal{D}_L} Q^{R,-}(f_{N},f_{N}^-)f_{N}^- \, \mathrm{d} v\\ & \leq \|Q^{R,-}(f_{N},f_{N}^-) \|_{L^2(\mathcal{D}_L)}\|f_{N}^-\|_{L^2(\mathcal{D}_L)}\\ &\leq C_1 \|f_{N}\|_{L^1(\mathcal{D}_L)} \left\| f_{N}^{-} \right\|^{2}_{L^{2}(\mathcal{D}_L)}, \end{split} \end{equation} where we used the estimate (\ref{QLLp}) for the loss term. For the remainder $E_N$, we have \begin{equation} \begin{split} \left\|E_{N}(f_{N})\right\|_{L^{2}(\mathcal{D}_L)}&=\|\mathcal{P}_NQ^R(f_N,f_N)-Q^R(f_N,f_N)\|_{L^{2}(\mathcal{D}_L)}\\ &\leq \frac{C_2}{N}\|Q^R(f_N,f_N)\|_{H^1(\mathcal{D}_L)}\\ &\leq \frac{C_2}{N}\|f_N\|_{H^1(\mathcal{D}_L)}^2, \end{split} \end{equation} where we used the standard approximation property of the projection operator $\mathcal{P}_N$ and the estimate (\ref{QHk}). Hence \begin{equation} \label{EN} \begin{split} \int_{\mathcal{D}_L} E_{N}(f_{N}) f_{N} \mathbf{1}_{\left\lbrace f_{N}\leq 0\right\rbrace } \, \mathrm{d} v & = -\int_{\mathcal{D}_L} E_{N}(f_{N}) f_{N}^-\, \mathrm{d} v \\ & \leq \left\|E_{N}(f_{N})\right\|_{L^{2}(\mathcal{D}_L)} \left\|f_{N}^{-}\right\|_{L^{2}(\mathcal{D}_L)} \\ & \leq \frac{C_2}{N}\|f_{N}\|_{H^1(\mathcal{D}_L)}^2 \left\|f_{N}^{-}\right\|_{L^{2}(\mathcal{D}_L)}. \end{split} \end{equation} For the left hand side, we have \begin{equation} f_{N} \mathbf{1}_{\left\lbrace f_{N}\leq 0\right\rbrace }\partial_{t} f_N =-f_{N}^- \partial_t (f_N^+-f_N^-)=-f_{N}^- (\mathbf{1}_{\left\lbrace f_{N} \geq 0\right\rbrace }\partial_tf_N-\partial_t f_N^-)=f_{N}^-\partial_t f_N^-.
\end{equation} Therefore, multiplying both sides of \eqref{PFS1} by $ f_{N} \mathbf{1}_{\left\lbrace f_{N}\leq 0\right\rbrace } $ and integrating over $ \mathcal{D}_L $, together with \eqref{Q+}, \eqref{Q-} and \eqref{EN}, yields \begin{equation} \frac{1}{2} \frac{\mathrm{d}}{\mathrm{d} t}\|f_{N}^{-}\|^{2}_{L^{2}(\mathcal{D}_L)} \leq \left[(C_0+C_1)\left\|f_{N} \right\|_{L^{1}(\mathcal{D}_L)}+C_0'\left\| f_{N}\right\|_{L^{2}(\mathcal{D}_L)} \right]\left\|f_{N}^{-}\right\|^{2}_{L^{2}(\mathcal{D}_L)} +\frac{C_2}{N}\|f_{N} \|_{H^1(\mathcal{D}_L)}^2 \left\|f_{N}^{-}\right\|_{L^{2}(\mathcal{D}_L)}, \end{equation} i.e., \begin{equation} \begin{split} \frac{\mathrm{d}}{\mathrm{d} t}\|f_{N}^{-}\|_{L^{2}(\mathcal{D}_L)} &\leq \left[(C_0+C_1)\left\|f_{N} \right\|_{L^{1}(\mathcal{D}_L)}+C_0'\left\| f_{N}\right\|_{L^{2}(\mathcal{D}_L)} \right] \left\|f_{N}^{-}\right\|_{L^{2}(\mathcal{D}_L)} +\frac{C_2}{N}\|f_{N}\|_{H^1(\mathcal{D}_L)}^2\\ &\leq \left[(C_0+C_1) M+C_0'K_0 \right] \left\|f_{N}^{-} \right\|_{L^{2}(\mathcal{D}_L)} +\frac{C_2 K_1^2}{N}, \end{split} \end{equation} where we have taken into account the $L^1$ bound and the $L^2$, $H^1$ bounds of $f_N$ obtained earlier. By Gr{\"o}nwall's inequality, we finally obtain the desired estimate (\ref{fneg}). \end{proof} \subsection{Local well-posedness of the solution $f_N$ on a small time interval $ [t_0,t_0+\tau]$} To prepare for the main theorem, we establish a local existence and uniqueness result and some stability bounds of the solution. \begin{proposition}\label{localexistence} Let the collision kernel $ B $ and truncation parameters $R$ and $L$ satisfy the assumptions (\ref{RL}), (\ref{kinetic}), (\ref{angular}), and (\ref{cutoff}). Assume that the initial condition $f^0(v)$ to the original problem (\ref{ABE}) belongs to $L^{1}\cap L^2(\mathcal{D}_L)$ and define \begin{equation} M_{f^0,1}=\|f^0\|_{L^1(\mathcal{D}_L)}, \quad M_{f^0,2}=\left\|f^{0}\right\|_{L^2(\mathcal{D}_L)}.
\end{equation} For the numerical system (\ref{PFS}), assume that it is evolved starting from a certain time $t_0$, with the initial condition satisfying \begin{equation} \|f_N(t_0)\|_{L^1(\mathcal{D}_L)}\leq 2 M_{f^0,1}, \quad \|f_N(t_0)\|_{L^2(\mathcal{D}_L)}\leq \mathrm{e}^{2D_0M_{f^0,1}T}M_{f^0,2}, \end{equation} then there exists a local time $ \tau$ such that \eqref{PFS} admits a unique solution $ f_{N} = f_{N}(t,\cdot) \in L^{1}\cap L^{2}(\mathcal{D}_L) $ on $ [t_0,t_0+\tau]$. In particular, one can choose \begin{equation} \label{tau} \tau=\frac{1}{2(D_5M_2+D_6M_1)}, \quad \text{with} \quad M_{1} = 4M_{f^0,1}, \quad M_{2} = 2\mathrm{e}^{2D_0M_{f^0,1}T}M_{f^0,2}, \end{equation} such that \begin{equation} \forall t\in [t_0,t_0+\tau], \quad \|f_N(t)\|_{L^1(\mathcal{D}_L)} \leq M_{1}, \quad \|f_N(t)\|_{L^2(\mathcal{D}_L)} \leq M_{2}, \end{equation} where $T$ is the final prescribed time, $D_0$ is the constant appearing in (\ref{KK1}), and $D_5$, $D_6$ are constants depending only on the truncation parameters $R$, $L$, dimension $d$, and the collision kernel $B$. \end{proposition} \begin{proof} We construct the solution by a fixed point argument. Given $ M_{1}, M_{2} > 0$ and a sufficiently small time $ \tau >0 $ to be specified later, we define the space $\chi$ by \begin{equation} \chi= \left\lbrace f\in L^{\infty}([t_0,t_0+\tau]; L^1\cap L^2(\mathcal{D}_L)): \sup\limits_{t\in [t_0,t_0+\tau]}\left\|f(t, \cdot)\right\|_{L^1(\mathcal{D}_L)} \leq M_{1}, \sup\limits_{t\in [t_0,t_0+\tau]}\left\|f(t, \cdot)\right\|_{L^2(\mathcal{D}_L)} \leq M_{2} \right\rbrace, \end{equation} which is a complete metric space with respect to the induced distance \begin{equation}\label{distance} d(f, \tilde{f}) : = \left\| f - \tilde{f} \right\|_{\chi} =\sup\limits_{t\in [t_0,t_0+\tau]} \left\| f(t, \cdot) - \tilde{f}(t, \cdot) \right\|_{L^2(\mathcal{D}_L)}.
\end{equation} For any $f_N\in \chi$, we define the operator $ \Phi $ (not to be confused with the kinetic part $\Phi(|v|)$ of the collision kernel) as \begin{equation} \Phi(f_{N})(t, v) = f_N(t_0,v) + \int_{t_0}^{t}\mathcal{P}_N Q^R(f_N,f_N)(s,v)\, \mathrm{d} s, \quad \forall t\in[t_0,t_0+\tau]. \end{equation} We proceed to show that the mapping $\Phi$ has a unique fixed point in $\chi$. \vspace{0.1in} Step (i): We first show that $ \Phi $ maps $ \chi$ into itself: $\Phi(\chi)\subset \chi$. For any $ f_{N}\in \chi $ and $t\in [t_0,t_0+\tau]$, \begin{equation} \begin{split} \left\| \Phi(f_{N})(t, \cdot)\right\|_{L^1(\mathcal{D}_L)} \leq & \left\| f_{N}(t_0) \right\|_{L^1(\mathcal{D}_L)} + \int_{t_0}^{t} \left\| \mathcal{P}_NQ^{R}(f_{N},f_{N})(s, \cdot)\right\|_{L^1(\mathcal{D}_L)}\mathrm{d} s \\ \leq & \left\| f_{N}(t_0) \right\|_{L^1(\mathcal{D}_L)} + \tau (2L)^{d/2} \sup\limits_{t\in [t_0,t_0+\tau]} \left\|\mathcal{P}_NQ^R(f_{N},f_{N})(t, \cdot)\right\|_{L^2(\mathcal{D}_L)} \\ \leq & \left\| f_{N}(t_0) \right\|_{L^1(\mathcal{D}_L)} + \tau C_{R,L,d,2}(B) (2L)^{d/2} \sup\limits_{t\in [t_0,t_0+\tau]}\left( \left\| f_{N}(t, \cdot) \right\|_{L^1(\mathcal{D}_L)} \left\| f_{N}(t, \cdot) \right\|_{L^2(\mathcal{D}_L)}\right) \\ \leq & \left\|f_{N}(t_0) \right\|_{L^1(\mathcal{D}_L)} + \tau C_{R,L,d,2}(B) (2L)^{d/2} M_{1} M_{2}, \end{split} \end{equation} where we used (\ref{QLp}).
Similarly, \begin{equation} \begin{split} \left\| \Phi(f_{N})(t, \cdot) \right\|_{L^2(\mathcal{D}_L)} \leq & \left\| f_{N}(t_0)\right\|_{L^2(\mathcal{D}_L)} + \int_{t_0}^{t} \left\|\mathcal{P}_N Q^{R}(f_{N},f_{N})(s, \cdot)\right\|_{L^2(\mathcal{D}_L)}\mathrm{d} s\\ \leq & \left\| f_{N}(t_0) \right\|_{L^2(\mathcal{D}_L)} + \tau \sup\limits_{t\in [t_0,t_0+\tau]}\left\|\mathcal{P}_NQ^R(f_{N},f_{N})(t, \cdot)\right\|_{L^2(\mathcal{D}_L)}\\ \leq & \left\| f_{N}(t_0) \right\|_{L^2(\mathcal{D}_L)} + \tau C_{R,L,d,2}(B) \sup\limits_{t\in [t_0,t_0+\tau]}\left( \left\| f_{N}(t, \cdot) \right\|_{L^1(\mathcal{D}_L)} \left\| f_{N}(t, \cdot) \right\|_{L^2(\mathcal{D}_L)}\right)\\ \leq & \left\| f_{N}(t_0) \right\|_{L^2(\mathcal{D}_L)} + \tau C_{R,L,d,2}(B) M_1 M_{2}. \end{split} \end{equation} Step (ii): We next show that $ \Phi $ is a contraction mapping on $ \chi $. For any $f_{N}, \tilde{f}_{N}\in\chi$ with the same initial datum $ f_{N}(t_0)$, we have \begin{equation} \begin{split} \left\|\Phi(f_{N})-\Phi(\tilde{f}_{N})\right\|_{\chi}=& \sup\limits_{t\in [t_0,t_0+\tau]} \left\|\Phi(f_{N})(t, \cdot)-\Phi(\tilde{f}_{N})(t, \cdot)\right\|_{L^2(\mathcal{D}_L)}\\ \leq & \sup\limits_{t\in [t_0,t_0+\tau]}\int_{t_0}^{t} \left\|\mathcal{P}_NQ^R(f_{N},f_{N})(s, \cdot)-\mathcal{P}_NQ^R(\tilde{f}_{N},\tilde{f}_{N})(s, \cdot)\right\|_{L^2(\mathcal{D}_L)}\, \mathrm{d} s \\ \leq & \tau \sup\limits_{t\in [t_0,t_0+\tau]} \left\|Q^R(f_{N},f_{N})(t, \cdot)-Q^R(\tilde{f}_{N},\tilde{f}_{N})(t, \cdot)\right\|_{L^2(\mathcal{D}_L)} \\ \leq & \tau \sup\limits_{t\in [t_0,t_0+\tau]} \left( \left\|Q^R(f_{N}-\tilde{f}_{N},f_{N})(t, \cdot)\right\|_{L^2(\mathcal{D}_L)}+\left\|Q^R(\tilde{f}_{N},f_{N}-\tilde{f}_{N})(t, \cdot)\right\|_{L^2(\mathcal{D}_L)}\right )\\ \leq & \tau C_{R,L,d,2}(B) \sup\limits_{t\in [t_0,t_0+\tau]} \left( \left\|f_{N}-\tilde{f}_{N}\right\|_{L^1(\mathcal{D}_L)}\|f_N\|_{L^2(\mathcal{D}_L)}+\left\|f_{N}-\tilde{f}_{N}\right\|_{L^2(\mathcal{D}_L)}\|\tilde{f}_N\|_{L^1(\mathcal{D}_L)} 
\right )\\ \leq & \tau C_{R,L,d,2}(B)((2L)^{d/2}M_2+M_1) \left(\sup\limits_{t\in [t_0,t_0+\tau]} \left\|f_{N}(t, \cdot)-\tilde{f}_{N}(t, \cdot)\right\|_{L^2(\mathcal{D}_L)}\right)\\ \leq & \tau(C_{R,L,d,2}(B)(2L)^{d/2}M_2+C_{R,L,d,2}(B) M_1)\left\|f_{N}-\tilde{f}_{N}\right\|_{\chi}. \end{split} \end{equation} Therefore, if we define $D_5=C_{R,L,d,2}(B)(2L)^{d/2}$, $D_6=C_{R,L,d,2}(B)$, and choose $M_1$, $M_2$ and $\tau$ as given in (\ref{tau}), we have \begin{equation} \left\| f_{N}(t_0) \right\|_{L^1(\mathcal{D}_L)} + \tau D_5 M_{1} M_{2} \leq M_{1}, \quad \left\|f_{N}(t_0) \right\|_{L^2(\mathcal{D}_L)} + \tau D_6 M_1M_2 \leq M_{2}, \quad \tau(D_5 M_2+D_6M_1)< 1. \end{equation} So $\Phi: \chi \rightarrow \chi$ is a contraction mapping. According to the Banach fixed point theorem, \eqref{PFS} admits a unique solution on $[t_0,t_0+\tau]$. \end{proof} \subsection{Well-posedness and stability of the solution $f_N$ on an arbitrary bounded time interval $ [0,T] $} \label{subsec:main} We are ready to present our main result. \begin{theorem}\label{existencetheorem} Let the collision kernel $ B $ and truncation parameters $R$ and $L$ satisfy the assumptions (\ref{RL}), (\ref{kinetic}), (\ref{angular}), and (\ref{cutoff}). Let the initial condition $f^{0}(v)$ to the original problem (\ref{ABE}) and the initial condition $f_N^0(v)$ to the numerical system (\ref{PFS}) satisfy the assumptions specified in Section~\ref{subsec:initial}, i.e., $f^0(v)$ is periodic, non-negative, and belongs to $L^1\cap H^1(\mathcal{D}_L)$, and $f_N^0$ satisfies (\ref{con(a)})--(\ref{con(d)}). Define \begin{equation} M_{f^0,1}=\|f^0\|_{L^1(\mathcal{D}_L)}, \quad M_{f^0,2}=\left\|f^{0}\right\|_{L^2(\mathcal{D}_L)}. \end{equation} Then there exists an integer $N_0$ depending on the final time $T$ and initial condition $f^0$, such that for all $ N>N_{0} $, the numerical system (\ref{PFS}) admits a unique solution $ f_{N} = f_{N}(t,\cdot) \in L^{1} \cap H^1(\mathcal{D}_L) $ on the time interval $ [0,T] $.
Furthermore, for all $ N>N_{0} $, $f_N$ satisfies the following stability estimates: \begin{equation}\label{fNL1L2} \forall t\in [0,T], \quad \left\| f_{N}(t) \right\|_{L^{1}(\mathcal{D}_L)} \leq 2M_{f^0,1}, \quad \left\| f_{N}(t) \right\|_{L^{2}(\mathcal{D}_L)} \leq \mathrm{e}^{2D_0M_{f^0,1}T}M_{f^0,2}, \end{equation} where $D_0$ is the constant appearing in (\ref{KK1}). \end{theorem} \begin{proof} The proof is based on iteration. Given $T$, $M_{f^0,1}$, and $M_{f^0,2}$, we first choose $\tau$ according to (\ref{tau}). Then we define $t=0, \tau, 2\tau, \dots, n\tau, \dots$ until we cover the final time $T$. Without loss of generality, we assume $T$ is an integer multiple of $\tau$. Step (i): At initial time $t=0$, we first choose $N$ such that \begin{equation} \label{initial} \|f^0_N\|_{L^1(\mathcal{D}_L)}\leq 2M_{f^0,1}, \end{equation} which is possible due to the condition (\ref{con(c)}). We also have $\|f^0_N\|_{L^2(\mathcal{D}_L)}\leq \|f^0\|_{L^2(\mathcal{D}_L)} \leq e^{2D_0M_{f^0,1}T}M_{f^0,2}$ due to the condition (\ref{con(b)}). Then by Proposition~\ref{localexistence}, there exists a unique solution $f_N(t,\cdot)\in L^1\cap L^2(\mathcal{D}_L)$ over the time interval $[0,\tau]$ and \begin{equation} \forall t\in [0,\tau], \quad \|f_N(t)\|_{L^1(\mathcal{D}_L)}\leq 4M_{f^0,1}.
\end{equation} Using this $L^1$ bound and that $f_N^0\in {H^1(\mathcal{D}_L)}$ (due to (\ref{con(b)})), we can invoke Proposition~\ref{regularity1} to derive that \begin{equation} \forall t\in [0, \tau], \quad \|f_N(t)\|_{L^2(\mathcal{D}_L)}\leq K_0(\tau), \quad \|f_N(t)\|_{H^1(\mathcal{D}_L)}\leq K_1(\tau), \end{equation} and \begin{equation} \forall t\in [0, \tau], \quad \left\|f_{N}^{-}(t)\right\|_{L^{2}(\mathcal{D}_L)} \leq \mathrm{e}^{\tau D_3(4M_{f^0,1}+K_0(\tau)) } \left(\left\|f_{N}^{0,-}\right\|_{L^{2}(\mathcal{D}_L)} +\frac{D_4 K_1^2(\tau)}{4M_{f^0,1}N}\right), \label{fN-} \end{equation} with \begin{equation} K_0(\tau):=\mathrm{e}^{\tau D_0 4M_{f^0,1}}M_{f^0,2}, \quad K_1(\tau):=\mathrm{e}^{\tau D_1 \left(4M_{f^0,1}+K_0(\tau) \right)} \left( \left\|f^0\right\|_{H^{1}(\mathcal{D}_L)}+D_2\right). \end{equation} Note that we have slightly relaxed the bounds $K_0$, $K_1$ (so that they depend only on $f^0$ and not on $f_N^0$) using the condition (\ref{con(b)}) again. On the other hand, noticing that $|f_N| = 2f_N^- + f_N$, we have \begin{equation} \begin{split} \|f_N(t)\|_{L^1(\mathcal{D}_L)}&=\int_{\mathcal{D}_L}|f_N(t,v)|\,\mathrm{d}{v}=2\int_{\mathcal{D}_L}f_N^-(t,v)\,\mathrm{d}{v}+\int_{\mathcal{D}_L}f_N(t,v)\,\mathrm{d}{v}\\ &=2\|f_N^-(t)\|_{L^1(\mathcal{D}_L)}+\int_{\mathcal{D}_L}f^0(v)\,\mathrm{d}{v}\\ & \leq 2(2L)^{d/2}\|f_N^-(t)\|_{L^2(\mathcal{D}_L)}+M_{f^0,1}, \end{split} \end{equation} where we used the mass conservation property in Lemma~\ref{lemma:conv} and (\ref{con(a)}) in the second line. Therefore, if we can control $\|f_N^-(t)\|_{L^2(\mathcal{D}_L)}$, then $\|f_N(t)\|_{L^1(\mathcal{D}_L)}$ will be controlled.
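The step above rests on the elementary pointwise identity $|f_N| = 2f_N^- + f_N$, where $f_N^- = \max(-f_N, 0)$ denotes the negative part (so that $f_N = f_N^+ - f_N^-$ and $|f_N| = f_N^+ + f_N^-$). As a purely illustrative check, the identity can be verified on arbitrary grid data:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(1000)      # stand-in for f_N(t, .) sampled on a velocity grid
f_minus = np.maximum(-f, 0.0)      # negative part: f^- = max(-f, 0)

# pointwise identity |f| = 2 f^- + f
assert np.allclose(np.abs(f), 2.0 * f_minus + f)
```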
Thanks to the estimate (\ref{fN-}), we can simply choose $N$ large enough such that the following is satisfied: \begin{equation} \label{N0} \mathcal{K}:=\mathrm{e}^{T D_3(4M_{f^0,1}+K_0(T)) } \left(\left\|f_{N}^{0,-}\right\|_{L^{2}(\mathcal{D}_L)} +\frac{D_4 K_1^2(T)}{4M_{f^0,1}N}\right) \leq \frac{M_{f^0,1}}{2(2L)^{d/2}}, \end{equation} then we have \begin{equation} \label{fNL1L21} \forall t\in [0,\tau], \quad \|f_N(t)\|_{L^1(\mathcal{D}_L)}\leq 2M_{f^0,1}. \end{equation} Note that (\ref{N0}) is possible due to the condition (\ref{con(d)}). Also, it is easy to see that the quantity $\mathcal{K}$ is an increasing function in time. Hence if $T$ in (\ref{N0}) is replaced by some $t_0\leq T$, (\ref{N0}) still holds. Combining the above choice of $N$ with the one at the beginning to satisfy (\ref{initial}), we have found an integer $N_0$, depending only on the final time $T$ and initial condition $f^0$, such that for all $N> N_0$, (\ref{PFS}) admits a unique solution $f_N(t,\cdot)\in L^1\cap H^1(\mathcal{D}_L)$ on $[0,\tau]$ which satisfies (\ref{fNL1L21}). Step (ii): Generally at time $t=n\tau$ ($n\geq 1$), we have \begin{equation} \label{initial1} \forall t\in [0,n\tau], \quad f_N(t,\cdot)\in L^1\cap H^1(\mathcal{D}_L), \quad \|f_N(t)\|_{L^1(\mathcal{D}_L)}\leq 2M_{f^0,1}. \end{equation} Then by Proposition~\ref{regularity} (with $k=0$), we have \begin{equation} \forall t\in [0,n\tau], \quad \|f_N(t)\|_{L^2(\mathcal{D}_L)}\leq e^{2D_0M_{f^0,1}n \tau}\|f_N^0\|_{L^2(\mathcal{D}_L)}\leq e^{2D_0M_{f^0,1}T}M_{f^0,2}. \end{equation} Then by Proposition~\ref{localexistence}, there exists a unique solution $f_N(t,\cdot)\in L^1\cap L^2(\mathcal{D}_L)$ on $[n\tau, (n+1)\tau]$ and \begin{equation} \forall t\in [n\tau, (n+1)\tau], \quad \|f_N(t)\|_{L^1(\mathcal{D}_L)}\leq 4M_{f^0,1}. 
\end{equation} Using this $L^1$ bound and that $f_N^0\in {H^1(\mathcal{D}_L)}$, we can invoke Proposition~\ref{regularity1} over the interval $[0,(n+1)\tau]$ to derive that \begin{equation} \forall t\in [0, (n+1)\tau], \quad \|f_N(t)\|_{L^2(\mathcal{D}_L)}\leq K_0((n+1)\tau), \quad \|f_N(t)\|_{H^1(\mathcal{D}_L)}\leq K_1((n+1)\tau), \end{equation} and \begin{equation} \forall t\in [0,(n+1)\tau], \quad \left\|f_{N}^{-}(t)\right\|_{L^{2}(\mathcal{D}_L)} \leq \mathrm{e}^{(n+1)\tau D_3(4M_{f^0,1}+K_0((n+1)\tau))} \left(\left\|f_{N}^{0,-}\right\|_{L^{2}(\mathcal{D}_L)} +\frac{D_4 K_1^2((n+1)\tau)}{4M_{f^0,1}N}\right) \leq \mathcal{K}, \end{equation} i.e., the same choice of $N$ as above still yields \begin{equation} \forall t\in [0,(n+1)\tau], \quad \|f_N(t)\|_{L^1(\mathcal{D}_L)}\leq 2M_{f^0,1}. \end{equation} That is, at time $t=(n+1)\tau$, we are back to the situation (\ref{initial1}) at $t=n\tau$. Repeating Step (ii) until $t=T$, we can show that there exists a unique solution $f_N(t,\cdot)\in L^1\cap H^1(\mathcal{D}_L)$ on $[0,T]$, and \begin{equation} \forall t\in [0,T], \quad \|f_N(t)\|_{L^1(\mathcal{D}_L)}\leq 2M_{f^0,1}. \end{equation} Finally, by Proposition~\ref{regularity} (with $k=0$) again, we obtain \begin{equation} \forall t\in [0,T], \quad \left\| f_{N}(t) \right\|_{L^{2}(\mathcal{D}_L)} \leq e^{2D_0M_{f^0,1}T}M_{f^0,2}. \end{equation} \end{proof} \section{Convergence and spectral accuracy of the method} \label{sec:conv} With the well-posedness and stability of the numerical solution established in the previous section, the convergence of the method is straightforward. In this section, we assume that the initial condition $f^{0}(v)$ to the original problem (\ref{ABE}) is periodic, non-negative, and belongs to $L^1\cap H^k(\mathcal{D}_L)$ for some integer $k\geq 1$. In fact, it has been proved in \cite[Proposition 5.1]{FM11} that there exists a unique global non-negative solution $f(t,\cdot)\in H^k(\mathcal{D}_L)$.
Furthermore, $\|f(t)\|_{H^k(\mathcal{D}_L)}\leq C_k(f^0), \ \forall t\geq 0$, where $C_k$ is a constant depending only on the initial condition. For the numerical system (\ref{PFS}), we consider the initial condition $f^0_N=\mathcal{P}_Nf^0$ for simplicity. According to the discussion in Remark~\ref{rmk}, we further assume that $f^0$ is, say, H\"{o}lder continuous, so that the four conditions (\ref{con(a)})--(\ref{con(d)}) are satisfied. Then by Theorem~\ref{existencetheorem}, there exists a unique solution $f_N(t,\cdot)\in L^1\cap H^1(\mathcal{D}_L)$ over the time interval $[0,T]$. Furthermore, $\|f_N(t)\|_{L^2(\mathcal{D}_L)}\leq C_0(T,f^0), \ \forall t \in[0,T]$, where $C_0$ is a constant depending only on the final time $T$ and initial condition $f^0$. Define the error function \begin{equation} e_{N}(t,v) = \mathcal{P}_{N} f(t,v) - f_{N}(t,v). \end{equation} We can show the following: \begin{theorem} \label{spectralaccuracy} Let the collision kernel $ B $ and truncation parameters $R$ and $L$ satisfy the assumptions (\ref{RL}), (\ref{kinetic}), (\ref{angular}), and (\ref{cutoff}). Let $N_0$ satisfy the condition in Theorem~\ref{existencetheorem}; then the Fourier spectral method is convergent for all $N>N_0$ and exhibits spectral accuracy. In particular, we have \begin{equation} \label{final} \forall t\in [0,T], \quad \left\| e_{N}(t) \right\|_{L^2(\mathcal{D}_L)} \leq \frac{C(T,f^0)}{N^k}, \quad \text{for all } N>N_0, \end{equation} where $C$ is a constant depending only on the final time $T$ and initial condition $f^0$. \end{theorem} \begin{proof} We first project the original problem \eqref{ABE} to obtain \begin{equation}\label{PUP} \left\{ \begin{array}{lr} \partial_{t} \mathcal{P}_{N}f = \mathcal{P}_{N}Q^{R}(f,f),\\ \mathcal{P}_{N}f(0,v) = \mathcal{P}_Nf^0. \end{array} \right.
\end{equation} Subtracting \eqref{PFS} from (\ref{PUP}) and noting $f^0_N=\mathcal{P}_Nf^0$, we have \begin{equation}\label{error1} \left\{ \begin{split} &\partial_{t} e_{N}= \mathcal{P}_{N}\left( Q^{R}(f,f) - Q^{R}(f_{N},f_{N})\right),\\ &e_{N}(0,v) = 0. \end{split} \right. \end{equation} Multiplying (\ref{error1}) by $ e_{N}$ and integrating over $ \mathcal{D}_L $, we have \begin{equation} \begin{split} \frac{1}{2} \frac{\mathrm{d}}{\mathrm{d} t} \left\| e_N \right\|^{2}_{L^{2}(\mathcal{D}_L)} =& \int_{\mathcal{D}_L} \mathcal{P}_{N}\left( Q^{R}(f,f) - Q^{R}(f_{N},f_{N})\right) e_{N}\, \mathrm{d} v\\ \leq & \left\| \mathcal{P}_{N}\left( Q^{R}(f,f) - Q^{R}(f_{N},f_{N})\right)\right\|_{L^2(\mathcal{D}_L)} \left\| e_{N} \right\|_{L^2(\mathcal{D}_L)},\\ \Rightarrow \frac{\mathrm{d}}{\mathrm{d} t} \left\| e_N \right\|_{L^{2}(\mathcal{D}_L)} \leq & \left\| Q^{R}(f,f) - Q^{R}(f_{N},f_{N})\right\|_{L^2(\mathcal{D}_L)}. \end{split} \end{equation} Note that \begin{equation} \begin{split} &\left\| Q^{R}(f,f) - Q^{R}(f_{N},f_{N}) \right\|_{L^2(\mathcal{D}_L)}\\ \leq& \left\| Q^{R}(f-f_{N},f) \right\|_{L^2(\mathcal{D}_L)} + \left\| Q^{R}(f_{N},f-f_{N}) \right\|_{L^2(\mathcal{D}_L)}\\ \leq &C_1 \left\| f - f_{N} \right\|_{L^2(\mathcal{D}_L)} \left(\left\| f \right\|_{L^2(\mathcal{D}_L)} + \left\| f_{N} \right\|_{L^2(\mathcal{D}_L)} \right)\\ \leq &C_1(T,f^0) \left\| f - f_{N} \right\|_{L^2(\mathcal{D}_L)}. \end{split} \end{equation} Also \begin{equation} \begin{split} \left\| f - f_{N} \right\|_{L^2(\mathcal{D}_L)} \leq & \left\| f - \mathcal{P}_{N}f \right\|_{L^2(\mathcal{D}_L)}+\left\| \mathcal{P}_{N}f- f_{N} \right\|_{L^2(\mathcal{D}_L)}\\ \leq & \frac{C_2 \|f\|_{H^k(\mathcal{D}_L)}}{N^{k}} + \left\|e_{N} \right\|_{L^2(\mathcal{D}_L)}\\ \leq & \frac{C_2(f^0)}{N^{k}} + \left\|e_{N} \right\|_{L^2(\mathcal{D}_L)}. 
\end{split} \end{equation} Therefore, we have \begin{equation} \frac{\mathrm{d}}{\mathrm{d} t} \left\| e_N \right\|_{L^{2}(\mathcal{D}_L)} \leq C_1(T,f^0)\left\|e_{N} \right\|_{L^2(\mathcal{D}_L)}+\frac{C_3(T,f^0)}{N^{k}}, \end{equation} which implies \begin{equation} \forall t\in [0,T], \quad \left\| e_N(t) \right\|_{L^{2}(\mathcal{D}_L)} \leq e^{C_1(T,f^0)T}\left(\left\|e_{N}(0) \right\|_{L^2(\mathcal{D}_L)}+\frac{C_3(T,f^0)}{C_1(T,f^0)N^k}\right). \end{equation} Since $e_N(0,v)\equiv 0$, we finally obtain the desired result in (\ref{final}). \end{proof} \section*{Acknowledgement} JH is grateful to F. Filbet and R. Alonso for the helpful discussion. JH's research was supported in part by NSF grant DMS-1620250 and NSF CAREER grant DMS-1654152. TY's research was partially supported by General Research Fund of Hong Kong, \#11304419.
\section{Introduction} The next generation of ground-based telescopes equipped with adaptive optics (AO) will provide unprecedented resolution to astronomical observations at visible and near-infrared wavelengths. This is the case of the extremely large telescopes, the new class of 25-40m telescopes observing in the near infrared \citep{elt,gmt,tmt}, as well as the 8m Very Large Telescope (VLT) observing in the visible \citep{vltaof}. Most of the mentioned telescopes foresee the use of multi-conjugate adaptive optics (MCAO) \citep{Beckers88,Rigaut18} modules to compensate for the wavefront distortions induced by atmospheric turbulence: MAORY \citep{maory} for the Extremely Large Telescope (ELT), NFIRAOS \citep{nfiraos} for the Thirty Meter Telescope and MAVIS \citep{Rigaut20} for the VLT. This flavour of adaptive optics aims to overcome the anisoplanatism problem, which represents a major limitation for single-conjugated adaptive optics (SCAO) \citep{Chassat89,Fried82}, through the use of both multiple guide stars (GSs) and deformable mirrors (DMs). The tomographic reconstruction of the turbulent volume from the GSs and the compensation for different layers of the atmosphere by the DMs help increase the isoplanatic patch, allowing the MCAO correction to provide uniform diffraction-limited images over wide fields of view. The high angular resolution, the uniformity of the correction over wide areas, the large number of reference sources with high image quality and the control of the field distortions through the DMs conjugated in altitude are characteristics that make MCAO a good candidate for astrometric observations. High precision relative astrometry is, indeed, one of the main science drivers of the instruments equipped with the mentioned MCAO modules.
The limiting astrometric precision is given by the centroiding error \citep{Lindegren78} and leads to the challenging requirements that have been set for these systems: 50µas of astrometric precision for MAORY (goal of 10µas, \citealt{Rodeghiero19}), 150µas for MAVIS (goal of 50µas, \citealt{Monty21}) and 50µas for NFIRAOS (goal of 10µas, \citealt{nfiraos_astrometry}). It is then crucial to investigate all possible sources of error in order to keep the astrometric error budget within this fundamental limitation. An exhaustive list of the main contributions to the astrometric error in the case of MCAO-assisted observations was provided in \citet{Trippe10}. Among the sources of error mentioned, we are interested in investigating tip-tilt atmospheric residuals. In general, tip-tilt residuals affect the astrometric precision by introducing fluctuations of the position of a source with respect to the nominal position on the detector. On the one hand, the amount of fluctuations integrated during the individual exposure can cause an increase in the size and a change in the shape of the point spread function (PSF), with the typical PSF elongation effect; on the other hand, if the fluctuations are not fully integrated within the exposure time of the image, a jitter of the source position can also be observed between successive frames. Relative astrometry, intended as the measurement of the distance between two distinct sources, can be affected by both effects: the former contributes to the centroiding error in measuring the position of each object, while the latter leads to the \textit{differential tilt jitter} error, that is, the uncertainty in the distance measurement due to the relative residual jitter \citep{Fritz10,Cameron09}. The knowledge of the spatial and temporal dependence of tip-tilt residuals is needed to characterize the behavior of the related astrometric error.
For SCAO systems, tip-tilt anisoplanatism is well known and has been thoroughly modeled: measuring tip-tilt through an off-axis reference determines a residual tip-tilt on the target that linearly increases with the separation between the two sources, the linear dependence on the distance being valid for each pair of objects in the field \citep{Sandler94,Sasiela94,Hardy98}. However, the characterization is more elaborate for the MCAO case, since the geometry with multiple guide stars and multiple DMs needs to be taken into account and can lead to complex behaviors. As pointed out in \citet{Trippe10}, tip-tilt anisoplanatism is not well understood for this flavour of adaptive optics and, to our knowledge, a dedicated analysis does not yet exist. In this context, we propose an analytical formulation that allows the derivation of the temporal power spectral density (PSD) of the MCAO residual phase in any direction of the scientific field of view, by means of the spatio-temporal statistics of the turbulence-induced distortions and of the temporal transfer functions of an MCAO loop. The phase is intended as decomposed on a modal basis (e.g. Zernike modes, \citealt{Noll76}). Unlike existing approaches that estimate MCAO residuals in the spatial frequency domain (e.g. \citealt{Neichel09}), the presented method evaluates, for each mode, the MCAO residual phase in the temporal frequency domain and makes it possible to include temporal effects such as the scientific integration time. The formulas are general and allow the analysis of specific frameworks depending on the telescope aperture, the turbulence profile, the natural guide star (NGS) or laser guide star (LGS) asterism, the number and conjugation heights of the DMs, the sensed and corrected modes of distortion. The control loop and the tomographic reconstruction algorithm can also be chosen: in particular, we provide expressions in the case of either a closed-loop or a pseudo-open loop control.
We then specialize our results to NGS-based systems and we analyse the behavior of MCAO tip-tilt anisoplanatism. We model the effect on tip-tilt residuals of the scientific integration time as well. Moreover, we provide an analytical expression to derive the temporal PSD of differential tilt jitter. Finally, we show an application where we make use of the presented formulas to quantify the contribution of differential tilt jitter to the future MCAO-assisted astrometric observations, choosing MAORY and MAVIS as case studies. \\ \noindent In Sec.~\ref{sec:performance}, we present the analytical approach and we derive the expression for the temporal PSD of the residual wavefront in the case of an MCAO correction; in Sec.~\ref{sec:aniso}, we use the formulas to analyse the spatial and temporal behavior of tip-tilt residuals, as well as to provide the expression for the differential tilt jitter error; in Sec.~\ref{sec:application}, we apply our results on differential tilt jitter to the MAORY and MAVIS cases. \section{Temporal power spectral density of MCAO wavefront residuals} \label{sec:performance} The aim of this section is to derive an analytical expression of the residual phase produced by an MCAO correction in a generic direction of the field of view as a function of the temporal frequencies. From this quantity, the temporal power spectral density (PSD) of the residual phase can be derived as: \begin{equation} \label{eq:psd_res_1} S_{res}^{\alpha}(\nu) = \big \langle \phi_{res}^{\alpha}(\nu) \phi_{res}^{\alpha \: \dagger}(\nu) \big \rangle \, , \end{equation} where $\alpha$ identifies the position in the field of view, $\nu$ is the temporal frequency, $\langle \cdot \rangle$ is the ensemble average, $^{\dagger}$ denotes the conjugate-transpose and $\phi$ represents the $\mathcal{L}$- or $Z$-transform of the phase, depending on whether a continuous or discrete-time domain is considered. 
From the integration of $S_{res}^{\alpha}$, the variance of the residual phase can be computed as well: \begin{equation} \label{eq:var_res_1} (\sigma_{res}^{\alpha})^2 = \int d\nu \: S_{res}^{\alpha}(\nu) \, . \end{equation} Among the sources of error contributing to the error budget of an MCAO correction, the presented method takes into account tomographic, noise and temporal errors. We consider the configuration in Fig.~\ref{fig:geometry}: the target and the guide stars (GSs) are, respectively, at positions $\alpha$ and $\bmath{\theta_{GS}} = [\theta_1, \theta_2, ..., \theta_{N_{GS}}]$ with respect to the telescope's axis. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{geometry.pdf} \caption{Scheme of the system geometry. In the example, there are two DMs conjugated at $h_1$ (DM1) and $h_2$ (DM2) and one at the ground layer (DM0), two guide stars at coordinates $\theta_{GS1}$ and $\theta_{GS2}$ and the scientific target at $\alpha$. The wavefront distortion is measured by WFS1 and WFS2 looking at, respectively, GS1 and GS2.} \label{fig:geometry} \end{figure} The light from the sources passes through $N_l$ layers of atmospheric turbulence before arriving at the pupil of the telescope. The turbulent layers are assumed to follow Taylor's frozen flow hypothesis. The turbulence-induced distortions are considered as decomposed onto wavefront modes and are measured by $N_{GS}$ wavefront sensors (WFSs), each sensing $n$ modes, and corrected by $N_{DM}$ deformable mirrors optically conjugated at altitudes $h_{j=1}^{N_{DM}}$ and compensating a total of $m_{DM} = \sum^{N_{DM}}_{k=1} m_k$ modes.
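Equation~\eqref{eq:var_res_1} states that the residual variance is the integral of the temporal PSD over frequency. In a discrete-time setting this is Parseval's relation between a periodogram-type PSD estimate and the sample variance; the following sketch (with an arbitrary white-noise sequence as a stand-in for a residual mode time series) illustrates it:

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt = 4096, 1e-3                      # number of samples and sampling period (arbitrary)
x = rng.standard_normal(n)              # stand-in residual-mode time series

X = np.fft.rfft(x)
psd = (np.abs(X) ** 2) * dt / n         # one-sided periodogram PSD estimate
psd[1:-1] *= 2.0                        # fold in the negative frequencies
freqs = np.fft.rfftfreq(n, dt)
dnu = freqs[1] - freqs[0]

var_from_psd = np.sum(psd) * dnu        # discrete version of sigma^2 = int S(nu) dnu
var_direct = np.mean(x ** 2)
assert np.isclose(var_from_psd, var_direct)
```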
In the following, we will denote the turbulent and residual phase in the direction of the target as $\phi_{turb}^{\alpha}$ and $\phi_{res}^{\alpha}$ respectively, the turbulent and residual phase in the direction of the guide stars as $\phi_{turb}^{\bmath{\theta_{GS}}}$ and $\phi_{res}^{\bmath{\theta_{GS}}}$ respectively and the phase applied on the deformable mirrors as $\phi_{DM}$. It follows that $\phi_{turb}^{\alpha}$ and $\phi_{res}^{\alpha}$ are vectors of $n$ elements, $\phi_{turb}^{\bmath{\theta_{GS}}}$ and $\phi_{res}^{\bmath{\theta_{GS}}}$ are vectors of $(n\cdot N_{GS})$ elements and $\phi_{DM}$ is a vector of $m_{DM}$ elements. We start writing the residual phase along $\alpha$ as the difference between the turbulent phase and the correction phase, both evaluated in the direction of interest: \begin{equation} \begin{split} \label{eq:phi_res_alpha_1} \phi_{res}^{\alpha}(\nu) &= \phi_{turb}^{\alpha}(\nu) - \phi_{corr}^{\alpha}(\nu) \\ &= \phi_{turb}^{\alpha}(\nu) - P_{DM}^{\alpha} \phi_{DM}(\nu)\, , \end{split} \end{equation} where $\phi_{corr}^{\alpha}$ is the correction phase in the direction $\alpha$, obtained through the matrix $P_{DM}^{\alpha}$ of size $n\times m_{DM}$ that projects the modes on the DMs as seen in the direction $\alpha$ onto the pupil. In the SCAO case, $P_{DM}^{\alpha}$ is the identity for any direction $\alpha$ as the correction is common to all directions of the field of view ($\phi_{corr}^{\alpha} = \phi_{corr}$).\newline We define $\phi_{DM}(\nu)$ as: \begin{equation} \label{eq:phi_dm_1} \phi_{DM}(\nu) = H_{ol}(\nu) W \big(\phi_{res}^{\bmath{\theta_{GS}}}(\nu) + \phi_{n}(\nu) \big) \, , \end{equation} where $H_{ol}$ is the open-loop transfer function of the AO feedback loop, $W$ is the reconstruction matrix, with dimension $m_{DM}\times (n\cdot N_{GS})$, relating the modes measured by the WFSs and the ones to be applied by the DMs and $\phi_{n}(\nu)$ is the WFSs measurement noise on the modes. 
We assumed ideal WFSs, meaning that they perform a direct measurement of the phase. In the case of a pure integrator, the expression of $H_{ol}$ is \citep{Madec99,Correia17}: \begin{equation} \begin{split} H_{ol}(s) &= H_{wfs}(s) H_c(s)\\ &= \dfrac{(1 - e^{-sT})}{sT} \dfrac{g}{sT} e^{-s T_d} \, , \end{split} \end{equation} where we limited the contributors to the wavefront sensor and the control and where $s = i2\pi\nu$ is the Laplace variable, $g$ is the gain, $T = 1 / v_{loop}$ with $v_{loop}$ the loop frequency, $T_d$ is the delay time of the control and where we defined $H_{wfs}(s) = (1 - e^{-sT})/sT$ and $H_c(s) = g/sT e^{-s T_d}$. \newline By replacing Eq.~\eqref{eq:phi_dm_1} in Eq.~\eqref{eq:phi_res_alpha_1} as referred to the guide stars directions ($\alpha = \bmath{\theta_{GS}}$), we get an expression of the residual phase on the guide stars: \begin{equation} \label{eq:phi_res_gs} \begin{split} \phi_{res}^{\bmath{\theta_{GS}}}(\nu) &= \big(Id + P_{DM}^{\bmath{\theta_{GS}}} H_{ol}(\nu) W \big)^{-1} \phi_{turb}^{\bmath{\theta_{GS}}}(\nu) \\ & \:\:\:- \big(Id + P_{DM}^{\bmath{\theta_{GS}}} H_{ol}(\nu) W \big)^{-1} P_{DM}^{\bmath{\theta_{GS}}} H_{ol} (\nu) W \phi_{n}(\nu) \\ &= H_r(\nu)\phi_{turb}^{\bmath{\theta_{GS}}}(\nu) - H_n(\nu) \phi_{n}(\nu) \, , \end{split} \end{equation} where $P_{DM}^{\bmath{\theta_{GS}}}$ is the DMs-WFSs projection matrix, with dimension $(n\cdot N_{GS})\times m_{DM}$ and $Id$ is an $(n\cdot N_{GS})\times (n\cdot N_{GS})$ identity matrix. We defined \begin{equation} \label{eq:rtf} H_r(\nu) = \big(Id + P_{DM}^{\bmath{\theta_{GS}}} H_{ol}(\nu) W \big)^{-1} \, \end{equation} as the Rejection Transfer Function (RTF), and \begin{equation} \label{eq:ntf} H_n(\nu) = \big(Id + P_{DM}^{\bmath{\theta_{GS}}} H_{ol}(\nu) W \big)^{-1} P_{DM}^{\bmath{\theta_{GS}}} H_{ol}(\nu) W \, \end{equation} as the Noise Transfer Function (NTF) of the MCAO loop. It is worth noting that these expressions also include a dependence on the spatial reconstruction. 
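As a concrete scalar illustration (the SCAO limit, in which $P_{DM}^{\bmath{\theta_{GS}}}$ and $W$ reduce to one), the rejection and noise transfer functions of the pure-integrator loop can be evaluated numerically; the gain, loop frequency and delay values below are illustrative assumptions, not system parameters from this work:

```python
import numpy as np

# Scalar (SCAO-limit) transfer functions of a pure-integrator AO loop.
# Gain, loop period and delay are illustrative assumptions.
g, T, T_d = 0.5, 1.0 / 500.0, 1.0 / 500.0    # gain, loop period [s], delay [s]
nu = np.logspace(-1, np.log10(250.0), 500)   # temporal frequencies up to Nyquist [Hz]
s = 2j * np.pi * nu                          # Laplace variable on the imaginary axis

H_wfs = (1.0 - np.exp(-s * T)) / (s * T)     # WFS integration
H_c = g / (s * T) * np.exp(-s * T_d)         # integrator control with delay
H_ol = H_wfs * H_c                           # open-loop transfer function

H_r = 1.0 / (1.0 + H_ol)                     # rejection transfer function (RTF)
H_n = H_ol / (1.0 + H_ol)                    # noise transfer function (NTF)

assert np.allclose(H_r + H_n, 1.0)           # scalar case of H_r + H_n = Id
assert np.abs(H_r[0]) < 1e-2                 # low frequencies strongly rejected
assert np.abs(H_r[-1]) > 0.5                 # little rejection near Nyquist
```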
If taking the SCAO limit, $P_{DM}^{\bmath{\theta_{GS}}}$ and $W$ become equal to one and the classical definitions of RTF and NTF are retrieved \citep{Agapito17}. \newline We then replace Eq.~\eqref{eq:phi_res_gs} in Eq.~\eqref{eq:phi_dm_1}: \begin{equation} \begin{split} \label{eq:phi_dm_2} \phi_{DM}(\nu) &= H_{ol}(\nu) W \big(H_r(\nu) \phi_{turb}^{\bmath{\theta_{GS}}}(\nu) - H_n(\nu) \phi_{n}(\nu) + \phi_{n}(\nu) \big) \\ &= H_{ol}(\nu) W H_r(\nu) \big( \phi_{turb}^{\bmath{\theta_{GS}}}(\nu) + \phi_{n}(\nu) \big) \\ &= H_{n,tomo}(\nu) \big(\phi_{turb}^{\bmath{\theta_{GS}}}(\nu) + \phi_{n}(\nu) \big) \, , \end{split} \end{equation} where we used the relation $H_r(\nu)+H_n(\nu) = Id$, as derived from the sum of Eq.~\eqref{eq:rtf} and Eq.~\eqref{eq:ntf}, and where we defined the matrix $H_{n,tomo}(\nu) = H_{ol}(\nu) W H_r(\nu)$ as a tomographic NTF.\newline By substituting Eq.~\eqref{eq:phi_dm_2} in Eq.~\eqref{eq:phi_res_alpha_1}, we derive a final expression for the residual phase along $\alpha$: \begin{equation} \label{eq:phi_res_alpha_2} \begin{split} \phi_{res}^{\alpha}(\nu) &= \phi_{turb}^{\alpha}(\nu) - P_{DM}^{\alpha}\Big[H_{n,tomo}(\nu)\big(\phi_{turb}^{\bmath{\theta_{GS}}}(\nu) + \phi_{n}(\nu) \big)\Big] \\ &= \phi_{turb}^{\alpha}(\nu) - H_{n,tomo}^{\alpha}(\nu) \big(\phi_{turb}^{\bmath{\theta_{GS}}}(\nu) + \phi_{n}(\nu) \big) \, , \end{split} \end{equation} where $H_{n,tomo}^{\alpha}(\nu)=P_{DM}^{\alpha} H_{n,tomo}(\nu)$ is the tomographic NTF projected along $\alpha$. The diagram of the control loop described is shown in Fig.~\ref{fig:loop_scheme}. \begin{figure} \centering \includegraphics[width=\linewidth]{loop_scheme.pdf} \caption{Diagram of the control loop. The phase on the DMs is controlled in closed loop from the measurements on the guide stars and its projection along $\alpha$ determines the residual phase on the target in $\alpha$. 
The $P_{\bmath{\theta_{GS}}}$ and $P_{\alpha}$ blocks have been introduced as projections of the turbulent phase onto $\bmath{\theta_{GS}}$ ($\phi_{turb}^{\bmath{\theta_{GS}}} = P_{\bmath{\theta_{GS}}} \phi_{turb}$) and $\alpha$ ($\phi_{turb}^{\alpha} = P_{\alpha} \phi_{turb}$) respectively. The $H_T$ block represents the temporal filtering by the scientific instrument, as it will be shown in Sec.~\ref{sec:sci_time}.} \label{fig:loop_scheme} \end{figure} From Eq.~\eqref{eq:psd_res_1} and Eq.~\eqref{eq:phi_res_alpha_2} we can also compute the temporal power spectrum of the residual phase along $\alpha$: \begin{equation} \begin{split} \label{eq:psd_res_2} S^{\alpha}_{res} (\nu) &= S_{turb}^{\alpha}(\nu) + H_{n,tomo}^{\alpha}(\nu) \big(S_{turb}^{\bmath{\theta_{GS}}}(\nu) + S_{n}(\nu) \big) H_{n,tomo}^{\alpha \: \dagger}(\nu) \\ & \:\:\: - 2 Re \big( H_{n,tomo}^{\alpha}(\nu) S_{turb}^{\bmath{\theta_{GS}},\alpha}(\nu) \big) \, , \end{split} \end{equation} where $S_{turb}^{\alpha}$ is the temporal PSD of the turbulence, $S_{turb}^{\bmath{\theta_{GS}}}$ is the temporal PSD of the turbulence on the guide stars directions, $S_{n}$ is the temporal PSD of the noise and $S_{turb}^{\bmath{\theta_{GS}},\alpha}$ is the Cross PSD (CPSD) \citep{Plantet22} of the turbulence between the guide stars and the target. We assumed turbulence and noise to be uncorrelated.\newline The derived expression can provide a fast evaluation of the MCAO residuals in the field of view, given a statistics of turbulence and noise and the temporal filtering operated by the adaptive optics loop. It is worth noting that the SCAO limit of Eq.~\eqref{eq:psd_res_2} gives the same expression as provided in Eq.(54) of \citet{Plantet22}. Another version of Eq.~\eqref{eq:phi_res_alpha_2} and Eq.~\eqref{eq:psd_res_2} can be obtained if not only one target, but a set of targets equaling the number of guide stars is considered ($\bmath{\alpha} = [\alpha_1, \alpha_2, ..., \alpha_{N_{GS}}]$). 
In this case, we can modify Eq.~\eqref{eq:phi_res_alpha_2} as: \begin{equation} \label{eq:phi_res_alpha_3} \begin{split} \phi_{res}^{\bmath{\alpha}}(\nu) &= \phi_{turb}^{\bmath{\alpha}}(\nu) - H_{n,tomo}^{\bmath{\alpha}}(\nu) \big(\phi_{turb}^{\bmath{\theta_{GS}}}(\nu) + \phi_{n}(\nu) \big) \\ &= Id \: \phi_{turb}^{\bmath{\alpha}}(\nu) - H_{n,tomo}^{\bmath{\alpha}}(\nu) \big(\phi_{turb}^{\bmath{\theta_{GS}}}(\nu) + \phi_{n}(\nu) \big)\\ &= H_{r,tomo}^{\bmath{\alpha}}(\nu) \phi_{turb}^{\bmath{\alpha}}(\nu) \\ & \:\:\: - H_{n,tomo}^{\bmath{\alpha}}(\nu) \big(\phi_{turb}^{\bmath{\theta_{GS}}}(\nu) - \phi_{turb}^{\bmath{\alpha}}(\nu) + \phi_{n}(\nu) \big) \, , \end{split} \end{equation} where $H_{r,tomo}^{\bmath{\alpha}}$ is the tomographic RTF projected along $\bmath{\alpha}$, defined so that the relation $H_{r,tomo}^{\bmath{\alpha}}(\nu) + H_{n,tomo}^{\bmath{\alpha}}(\nu) = Id$ holds. This expression makes it possible to distinguish the various contributions due to the rejection of turbulence (first term), to generalized anisoplanatism that is filtered as noise by the AO loop (second plus third term) and to noise (last term).
This is also shown by deriving the related temporal power spectrum: \begin{equation} \label{eq:psd_res_3} \begin{split} S^{\bmath{\alpha}}_{res} (\nu) &= \: H_{r,tomo}^{\bmath{\alpha}}(\nu) S_{turb}^{\bmath{\alpha}}(\nu) H_{r,tomo}^{\bmath{\alpha} \: \dagger}(\nu) \\ & \:\:\: + H_{n,tomo}^{\bmath{\alpha}}(\nu) S_n(\nu) H_{n,tomo}^{\bmath{\alpha} \: \dagger}(\nu) \\ & \:\:\: + H_{n,tomo}^{\bmath{\alpha}}(\nu) \big(S_{turb}^{\bmath{\theta_{GS}}}(\nu) - S_{turb}^{\bmath{\alpha}}(\nu)\big) H_{n,tomo}^{\bmath{\alpha} \: \dagger}(\nu) \\ & \:\:\: + 2 Re \Big[ H_{n,tomo}^{\bmath{\alpha}}(\nu) \big( S_{turb}^{\bmath{\alpha}} - S_{turb}^{\bmath{\theta_{GS}},\bmath{\alpha}} \big) \Big]\, , \end{split} \end{equation} where the first and second terms lead, respectively, to the temporal and noise errors, while the remaining terms quantify the tomographic error as well as its temporal filtering by the MCAO loop. \subsection{Pseudo-Open Loop control + MMSE reconstruction} In the previous calculations, we considered a closed-loop control, that is, the reconstruction is performed on the residual measurements as shown in Eq.~\eqref{eq:phi_dm_1}. The reconstruction matrix $W$ is then intended as the pseudo-inverse of the projection matrix $P_{DM}^{\bmath{\theta_{GS}}}$, as derived in the Least Square Estimator (LSE) approach \citep{Madec99}. However, this has been demonstrated not to be the optimal approach to deal with the badly seen and unseen modes \citep{Fusco01_1,Fusco01_2,Neichel09,Roux04} characterizing multi-conjugate adaptive optics correction, and the Minimum Mean Square Error (MMSE) approach can lead to better performance, even when compared to the Truncated LSE (TLSE) \citep{Pacheco04}. As the MMSE reconstructor operates on the pseudo-open loop measurements of the turbulent phase, it has to be included in a Pseudo-Open Loop control (POLC) \citep{Ellerbroek03}.
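A common explicit form of the MMSE tomographic reconstructor, not spelled out in the text, is $W_{MMSE} = C_\phi P^T (P C_\phi P^T + C_n)^{-1}$, with $C_\phi$ the prior covariance of the corrected modes, $C_n$ the noise covariance and $P = P_{DM}^{\bmath{\theta_{GS}}}$; this standard form, and the random stand-in matrices below, are assumptions for illustration only. In the identity-prior, vanishing-noise limit it reduces to the LSE pseudo-inverse:

```python
import numpy as np

# Illustrative MMSE-style reconstructor, assuming the standard form
# W = C_phi P^T (P C_phi P^T + C_n)^{-1}; the matrices are random
# stand-ins, not a real MCAO geometry.
rng = np.random.default_rng(1)
n_meas, m_dm = 6, 9                      # fewer measurements than DM modes (unseen modes)
P = rng.standard_normal((n_meas, m_dm))  # DMs-to-WFSs projection (stand-in)
C_phi = np.eye(m_dm)                     # prior covariance of the corrected modes
C_n = 0.1 * np.eye(n_meas)               # measurement-noise covariance

W_mmse = C_phi @ P.T @ np.linalg.inv(P @ C_phi @ P.T + C_n)

# With C_phi = Id and vanishing noise, W tends to the LSE pseudo-inverse of P
W_limit = P.T @ np.linalg.inv(P @ P.T + 1e-10 * np.eye(n_meas))
assert np.allclose(W_limit, np.linalg.pinv(P), atol=1e-6)
```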
In this context, we provide the expressions to derive the performance of MCAO systems also in the case of POLC and MMSE. We modify Eq.~\eqref{eq:phi_dm_1} in order to consider a reconstruction acting on the pseudo-open loop measurements \citep{Basden19}: \begin{equation} \label{eq:phi_dm_polc_1} \phi_{DM}(\nu) = H_{ol}(\nu) \big(W_{MMSE} \: \phi_{OL}^{\bmath{\theta_{GS}}}(\nu) - \phi_{DM}(\nu) \big) \, , \end{equation} where $W_{MMSE}$ is the MMSE reconstructor and $\phi_{OL}^{\bmath{\theta_{GS}}}$ are the open-loop measurements that we write as: \begin{equation} \label{eq:phi_open_loop} \phi_{OL}^{\bmath{\theta_{GS}}}(\nu) = \phi_{res}^{\bmath{\theta_{GS}}}(\nu) + \phi_{n}(\nu) + P_{DM}^{\bmath{\theta_{GS}}} \phi_{DM}(\nu) \, . \end{equation} We replace this expression in Eq.~\eqref{eq:phi_dm_polc_1}: \begin{equation} \begin{split} \label{eq:phi_dm_polc_2} \phi_{DM}(\nu) &= H_{ol}(\nu) \Big[W_{MMSE} \big( \phi_{res}^{\bmath{\theta_{GS}}}(\nu) + \phi_{n}(\nu) \\ &\:\:\: + P_{DM}^{\bmath{\theta_{GS}}} \phi_{DM}(\nu) \big) - \phi_{DM}(\nu) \Big] \\ &= H_{ol}(\nu) W_{MMSE} \big( \phi_{res}^{\bmath{\theta_{GS}}}(\nu) + \phi_{n}(\nu) \big) \\ & \:\:\: + H_{ol}(\nu) (W_{MMSE} P_{DM}^{\bmath{\theta_{GS}}} - Id) \phi_{DM}(\nu) \, . 
\end{split} \end{equation} We group the terms related to $\phi_{DM}$: \begin{equation} \begin{split} \label{eq:phi_dm_polc_3} &\Big[ Id - H_{ol}(\nu) \big( W_{MMSE} \: P_{DM}^{\bmath{\theta_{GS}}} - Id \big) \Big] \phi_{DM}(\nu) \\ &= H_{ol}(\nu) W_{MMSE} \big( \phi_{res}^{\bmath{\theta_{GS}}}(\nu) + \phi_{n}(\nu) \big) \, , \end{split} \end{equation} and we obtain a final expression of the DMs phase: \begin{equation} \begin{split} \label{eq:phi_dm_polc_4} \phi_{DM}(\nu) &= \Big[ Id - H_{ol}(\nu) \big( W_{MMSE} \: P_{DM}^{\bmath{\theta_{GS}}} - Id \big) \Big]^{-1} \\ &\:\:\: \times H_{ol}(\nu) \: W_{MMSE} \big( \phi_{res}^{\bmath{\theta_{GS}}}(\nu) + \phi_{n}(\nu) \big) \\ &= \big[ Id + H_{ol}(\nu) K \big]^{-1} H_{ol}(\nu) \: W_{MMSE} \big( \phi_{res}^{\bmath{\theta_{GS}}}(\nu) + \phi_{n}(\nu) \big) \\ &= H_{polc}(\nu) \: W_{MMSE} \big( \phi_{res}^{\bmath{\theta_{GS}}}(\nu) + \phi_{n}(\nu) \big) \, , \end{split} \end{equation} where we defined the matrices $K = Id - W_{MMSE} \: P_{DM}^{\bmath{\theta_{GS}}}$ and $H_{polc}=\big[ Id + H_{ol}(\nu) K \big]^{-1} H_{ol}(\nu)$.\newline It follows that the results in Eqs.~\eqref{eq:phi_res_alpha_2} and \eqref{eq:psd_res_2} can still be used to compute the residual phase and PSD on target, but considering $H_{ol} = H_{polc}$ and $W = W_{MMSE}$ when taking into account POLC+MMSE. \section{Tip-tilt anisoplanatism in MCAO-assisted astrometric observations} \label{sec:aniso} In this section, we use the formulation introduced in Sec.~\ref{sec:performance} as a tool to investigate the behavior of atmospheric tip-tilt residuals in MCAO-assisted observations and their impact on astrometric precision. Since, in the presented approach, the phase is intended as decomposed onto wavefront modes, we can derive the temporal PSD and the variance of tip-tilt residuals from Eq.~\eqref{eq:psd_res_2} and Eq.~\eqref{eq:var_res_1} respectively, by applying both equations to tip and tilt modes. 
\noindent Throughout the following analysis, we consider the contribution of all the modes to the turbulence-induced wavefront distortions and a reconstruction of tip-tilt at the ground and focus-astigmatisms at the high layer, based on the tip-tilt measurements from three NGSs in an equilateral asterism. Such an NGS loop can be used for the control of the \textit{null modes} \citep{Flicker03} in MCAO systems using a split tomography approach \citep{Gilles08}. The compensation for focus-astigmatisms at the pupil plane is not included in our configuration; this would result in an out-of-focus and astigmatic PSF, but it is not a limitation for our analysis, as we are interested in investigating the variations of tip-tilt in the field of view. As we do not consider the LGS-based correction of the higher orders, the results have to be interpreted as an upper limit to the atmospheric tip-tilt residuals. An extended study including the LGS loop will be the subject of future work. We use an LSE reconstructor, as the control of modes up to the astigmatisms with a symmetric asterism and without noise does not lead to divergences in the system's behavior; thus, it requires neither a threshold nor an MMSE reconstructor, as would be expected in real cases. First, we analyse the dependence of on-axis tip-tilt residuals on the NGS asterism. Then, we introduce the contribution of the scientific integration time and, finally, we estimate relative tip-tilt residuals, that is, the amount of differential tilt jitter error. \subsection{On-axis tip-tilt residuals} We consider the DM0 at 0 m and the DM1 at 17 km. We assume an equilateral asterism of NGSs centred at the origin of the field of view. We consider a 40-m telescope and the ELT median turbulence profile reported in \citet{elt_profile}, with a seeing of 0.644" and an average wind speed of 9.2 m/s. As we are mainly interested in the analysis of spatial anisoplanatism, we neglect the noise by assuming NGSs with infinite flux.
We also minimize the temporal error by considering a loop with a frame rate of 1 kHz, where the control is a pure integrator with a delay given by the WFSs exposure time only. In Fig.~\ref{fig:tt_res_onaxis}, we show the dependence of tip-tilt residuals on the asterism radius for a target on axis. The errors are computed from the integration of Eq.~\eqref{eq:psd_res_2}, applied to tip-tilt, over the temporal frequencies. The MCAO residuals are shown in comparison to the SCAO case, where the asterism radius becomes the angular separation of the NGS from the target; as expected from the larger isoplanatic patch provided by the MCAO correction, MCAO errors are reduced with respect to the SCAO ones. Moreover, we note that, unlike the SCAO case, whose errors depend linearly on the off-axis separation, MCAO residuals show a quadratic dependence on the NGS separation. We can explain the different behaviors as follows: the turbulence-induced distortions that are observed on the pupil plane can be described by a combination of polynomials with increasing degree: \begin{equation} \begin{split} \Delta x &= a_1 + a_2 x + a_3 y + a_4 x^2 + a_5 xy + a_6 y^2 + ... \\ \Delta y &= b_1 + b_2 y + b_3 x + b_4 y^2 + b_5 yx + b_6 x^2 + ... \, , \end{split} \end{equation} where the zeroth order coefficients ($a_1$, $b_1$) represent a global tip-tilt, that is, a shift in $x$ and $y$ common to all directions of the field of view, the first order coefficients ($a_2$, $a_3$, $b_2$, $b_3$) represent the plate-scale distortions produced by the projection of focus and astigmatisms in altitude onto the tip-tilt in pupil, and so on for the higher orders. The covariance matrix of the distortions is $\langle \Delta \bmath{r} \Delta \bmath{r}^T \rangle$, with $\Delta \bmath{r} = (\Delta x , \ \Delta y)$. The SCAO correction, using only a DM at the ground and a single WFS, is able to compensate for the zeroth order of the distortions (i.e.
overall pointing), leaving residual distortions that are then dominated by the first order (i.e. plate-scale variations). The MCAO, in our NGS-based configuration, removes a global tip-tilt with the DM0 and, in addition, is able to control the first order distortions by compensating for focus and astigmatisms with the DM1 conjugated in altitude. The residual distortions are, in this case, dominated by the second order. The sum of the diagonal terms of the residual distortions covariance matrix leads, for the SCAO case, to the following expression: \begin{equation} \begin{split} \sum_{i=1,2} \langle \Delta \bmath{r} \Delta \bmath{r}^T \rangle_{ii} &= (a_2 x + a_3 y + ...)^2 + (b_2 y + b_3 x + ...)^2 \\ &= u (x^2 + y^2) + ... \, , \end{split} \end{equation} and, for the MCAO case, to: \begin{equation} \begin{split} \sum_{i=1,2} \langle \Delta \bmath{r} \Delta \bmath{r}^T \rangle_{ii} &= (a_4 x^2 + a_5 xy + a_6 y^2 + ...)^2 \\ & \:\:\: + (b_4 y^2 + b_5 yx + b_6 x^2 + ...)^2 \\ &= v (x^2 + y^2)^2 + ... \, , \end{split} \end{equation} where the simplification in the coefficient $u$ for the former and $v$ for the latter is obtained by replacing the coefficients of the polynomial series with the proper coefficients that relate tip-tilt on the pupil plane with the higher orders on a meta-pupil in altitude (see Appendix \ref{sec:appendix}). If we consider ($x$, $y$) as the position of the target with respect to the NGS, we find a dependence of the variance on the second power of the separation for the SCAO case and on the fourth power for the MCAO case. In Fig.~\ref{fig:tt_res_fov}, we show the spatial distribution of tip-tilt residuals in the field of view. The errors are computed for targets at different radial separations from the origin (that also represents the barycenter of the asterism), the final values being obtained from the average over several polar angles in order not to be affected by the geometry of the asterism. 
\begin{figure} \centering \includegraphics[width=\linewidth]{tt_res_onaxis_D40m_esoprofile.pdf} \caption{Tip-tilt residuals for a target at the origin of the field of view, as functions of the radius of the NGS asterism. The SCAO limit is also shown for comparison (dotted line); in this case, the values on the x-axis represent the angular separation between the target and the NGS.} \label{fig:tt_res_onaxis} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{tt_res_FoV_D40m_esoprofile.pdf} \caption{Tip-tilt residuals as functions of the target's radial distance with respect to the origin. The curves are shown for different values of the NGS asterism radius (r$_{ast}$) and the SCAO limit is also shown (dotted line).} \label{fig:tt_res_fov} \end{figure} The errors show similar values for targets within the NGS asterism and increase outside of the asterism, where tip-tilt is indeed not controlled. The minimum of the curves is not exactly at a distance equal to the asterism radius: targets at an angular separation equal to the asterism radius fall outside of the NGS triangle (except those at exactly the same polar angles as the NGSs), where tip-tilt is less well controlled. \subsection{Scaling of tip-tilt residuals with the scientific integration time} \label{sec:sci_time} The previous results, obtained from a pure integration of Eq.~\eqref{eq:psd_res_2}, represent the case where the fluctuations in position due to tip-tilt residuals are fully integrated within the exposure and thus impact entirely on the shape and size of the PSF, leading to the PSF elongation. This effect contributes to the astrometric error due to photon noise \citep{Lindegren78}: \begin{equation} \sigma \sim \frac{FWHM}{SNR} \, , \end{equation} where $FWHM$ is the full width at half maximum of the PSF and $SNR$ is the signal-to-noise ratio.
Regardless of the residual value contributing to the FWHM, this source of error can ideally be reduced to zero if we assume a source with infinite SNR. In this case, tip-tilt residuals would not affect the astrometric precision. On the other hand, if tip-tilt residuals are not fully integrated within the exposure, fluctuations in position due to the residual jitter are observed between successive frames, and these affect astrometric precision regardless of the source flux. Thanks to the knowledge of the temporal PSD of the residuals, we can analytically describe the residual jitter between successive frames, again following an approach based on temporal transfer functions, as in Sec.~\ref{sec:performance}. We write the expression of the phase residuals that are left after a scientific integration of length $T$ as: \begin{equation} \label{eq:phi_sci_freq_1} \phi_{res, T}^{\alpha}(\nu) = H_{T}(\nu) \phi_{res}^{\alpha}(\nu) \, , \end{equation} where $\phi_{res}^{\alpha}$ is given by Eq.~\eqref{eq:phi_res_alpha_2} and $H_{T}$ is the temporal transfer function of the scientific camera, that is, the Laplace or $Z$-transform of the time-average operation. In the Laplace case, the expression is given by: \begin{equation} \label{eq:camera_tf} \begin{split} H_{T}(\nu) &= \dfrac{1}{T} \widetilde{\Pi}_{T}(\nu) \\ &= sinc(\pi \nu T) e^{-i \pi \nu T} \, , \end{split} \end{equation} where $\widetilde{\Pi}_{T}$ denotes the transform of the rectangular function $\Pi_{T}$. From Eq.~\eqref{eq:psd_res_1} and Eq.~\eqref{eq:phi_sci_freq_1}, we can get the expression of the residual PSD for scientific frames of length $T$: \begin{equation} \label{eq:psd_sci_1} S_{res, T}^{\alpha}(\nu) = |H_{T}(\nu)|^2 S_{res}^{\alpha}(\nu) \, , \end{equation} where $S_{res}^{\alpha}$ is given by Eq.~\eqref{eq:psd_res_2}. The results of this expression, as applied to tip and tilt, are shown in Fig.~\ref{fig:sci_psds}, where on-axis tip-tilt residual PSDs are plotted for different integration times.
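For completeness, the explicit form of $H_{T}$ follows from writing the time average over an exposure window $[0,T]$ and transforming, with the convention $sinc(x)=\sin(x)/x$: \begin{equation*} H_{T}(\nu) = \dfrac{1}{T} \int_0^T e^{-2 i \pi \nu t} \, dt = \dfrac{1 - e^{-2 i \pi \nu T}}{2 i \pi \nu T} = \dfrac{\sin(\pi \nu T)}{\pi \nu T} \, e^{-i \pi \nu T} \, , \end{equation*} which reproduces the second line of Eq.~\eqref{eq:camera_tf}.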
\begin{figure} \centering \includegraphics[width=\linewidth]{sci_psds_D40m_esoprofile_onaxis_ast40arcsec.pdf} \caption{Temporal power spectrum of the residual jitter between successive frames, for scientific exposures of 0.1s (orange), 1s (green), 10s (red), 100s (purple). The configuration is the same as Fig.~\ref{fig:tt_res_onaxis}, with a target on axis and an asterism radius of 40". The unaveraged tip-tilt residual PSD is also shown for comparison (blue).} \label{fig:sci_psds} \end{figure} The impact of the scientific exposure depends on the relation between the cut-off frequency of the camera transfer function and that of the residual PSD. The camera transfer function acts as a low-pass filter with a cut-off frequency $\nu_{H_{sci}} = 1/T$. If $\nu_{H_{sci}}$ is larger than or about the same as the tip-tilt residual PSD cut-off frequency ($\nu_{S_{res}} \simeq 0.6 \: v/D$, with $v$ the wind velocity and $D$ the telescope diameter \citealt{Conan95}), the scientific integration is not long enough to average the residuals and the position jitter observed between different exposures is emphasized. Indeed, in this case, the camera is either unable to filter any frequency of the PSD, or it filters only the frequencies that are larger than $\nu_{S_{res}}$, where the energy falls rapidly to zero. As the integration time increases, $\nu_{H_{sci}}$ becomes smaller than $\nu_{S_{res}}$ and the camera transfer function passes the frequencies where the PSD is flat, then leaving a residual variance that is proportional to $1/T$. Thus, the root-mean-square (RMS) is proportional to $T^{-1/2}$. This behavior is shown in Fig.~\ref{fig:sci_std_vs_T}: for integration times smaller than the inverse of $\nu_{S_{res}}$, tip-tilt residuals do not depend on $T$ and the curve is flat, while it follows a $T^{-1/2}$ law for larger times.
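The $1/T$ behavior can be made explicit with a simple estimate (a sketch, assuming an approximately white residual spectrum, $S_{res}^{\alpha}(\nu) \simeq S_0$, below its cut-off): for $1/T \ll \nu_{S_{res}}$, \begin{equation*} \sigma^{2}_{res, T} = \int |H_{T}(\nu)|^2 \, S_{res}^{\alpha}(\nu) \, d\nu \simeq S_0 \int_{-\infty}^{+\infty} sinc^2(\pi \nu T) \, d\nu = \dfrac{S_0}{T} \, , \end{equation*} so that the RMS scales as $T^{-1/2}$.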
The $T^{-1/2}$ power law is in agreement with the assumptions and the results that are present in the literature \citep{Ammons11,Cameron09,Ellerbroek07}. \begin{figure} \centering \includegraphics[width=\linewidth]{sci_std_D40m_esoprofile_onaxis_4asterisms.pdf} \caption{Tip-tilt residual error on axis as a function of the scientific integration time. The configuration is the same as Fig.~\ref{fig:sci_psds}, with the asterism radius varying from 10" to 80".} \label{fig:sci_std_vs_T} \end{figure} \subsection{Differential tilt jitter} \label{sec:dtj} The results in Sec.~\ref{sec:sci_time} give information about the repeatability of the position measurement of a single source. However, the science cases of future instruments show a major interest in relative astrometry, that is, in measuring the distance between sources. In order to be able to estimate the precision in the distance measurements, we extend the analysis to differential tilt jitter. This effect is well known for SCAO systems but, to our knowledge, is less well understood and no expression is present in the literature to compute this error for MCAO systems. In this context, we present an analytical expression for this flavour of adaptive optics as well, by using the results in Sec.~\ref{sec:performance}. We consider two sources in directions $\alpha$ and $\beta$ and we describe the differential jitter phase through the difference between the residual phases in the two directions: \begin{equation} \label{eq:phi_dtj_1} \phi^{\alpha,\beta}_{DTJ}(\nu) = \phi_{res}^{\alpha}(\nu) - \phi_{res}^{\beta}(\nu) \, . \end{equation} The temporal PSD is then: \begin{equation} \label{eq:psd_dtj_1} \begin{split} S_{DTJ}^{\alpha,\beta}(\nu) &= \Big\langle \phi^{\alpha,\beta}_{DTJ}(\nu) \: \phi^{\alpha,\beta \: \dagger}_{DTJ}(\nu) \Big\rangle\\ &= \Big\langle \big(\phi_{res}^{\alpha}(\nu) - \phi_{res}^{\beta}(\nu)\big) \big(\phi_{res}^{\alpha}(\nu) - \phi_{res}^{\beta}(\nu)\big)^{\dagger} \Big\rangle \, . 
\end{split} \end{equation} For SCAO systems, the difference between residual phases simplifies into the difference between turbulent phases because, as already pointed out in Sec.~\ref{sec:performance}, the correction phase is common to all directions. This reasoning leads to the following expression of the differential tilt jitter PSD for the SCAO case: \begin{equation} \label{eq:psd_dtj_scao} S_{DTJ}^{\alpha,\beta}(\nu) = 2\big(S_{turb}(\nu) - S_{turb}^{\alpha,\beta}(\nu) \big) \, , \end{equation} where $S_{turb}^{\alpha,\beta}$ is the CPSD of turbulence between the two directions and where we considered $S_{turb} = S_{turb}^{\alpha} = S_{turb}^{\beta}$, having assumed a homogeneous and isotropic turbulence. The expression (integrated over the temporal frequencies) is in agreement with the results that are present in the literature \citep{Sandler94,Clenet15}. For MCAO systems, we can replace $\phi_{res}^{\alpha}$ and $\phi_{res}^{\beta}$ with the expression in Eq.~\eqref{eq:phi_res_alpha_2} applied to $\alpha$ and $\beta$ respectively. We obtain: \begin{equation} \label{eq:psd_dtj_2} \begin{split} S_{DTJ}^{\alpha,\beta}(\nu) &= \: 2\big(S_{turb}(\nu) - S_{turb}^{\alpha,\beta}(\nu) \big) \\ & \:\:\: + \Delta H_{n,tomo}^{\alpha,\beta}(\nu) \big(S_{turb}^{\bmath{\theta_{GS}}}(\nu) + S_{noise}(\nu) \big)\Delta H_{n,tomo}^{\alpha,\beta}(\nu)^{\dagger} \\ & \:\:\: - 2 Re \Big[ \Delta H_{n,tomo}^{\alpha,\beta}(\nu) \big(S_{turb}^{{\bmath{\theta_{GS}}},\alpha}(\nu) - S_{turb}^{{\bmath{\theta_{GS}}},\beta}(\nu)\big) \Big] \, , \end{split} \end{equation} where we defined $\Delta H_{n,tomo}^{\alpha,\beta}(\nu) = H_{n,tomo}^{\alpha}(\nu) - H_{n,tomo}^{\beta}(\nu)$. It is worth noting that, in the SCAO limit of this expression, $H_{n,tomo}^{\alpha} = H_{n,tomo}^{\beta}$, so that $\Delta H_{n,tomo}^{\alpha,\beta}$ vanishes and we retrieve the result in Eq.~\eqref{eq:psd_dtj_scao}.
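We note that, when integrated over the temporal frequencies, Eq.~\eqref{eq:psd_dtj_scao} gives the familiar form of the SCAO differential tilt jitter variance: \begin{equation*} \sigma^{2}_{DTJ} = 2 \big( \sigma^{2}_{turb} - C_{turb}^{\alpha,\beta} \big) \, , \end{equation*} where $C_{turb}^{\alpha,\beta}$ is the angular covariance of atmospheric tip-tilt between the two directions: the variance vanishes for $\alpha = \beta$ and grows with the separation as the covariance decreases.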
Equation \eqref{eq:psd_dtj_2} shows that the differential tilt jitter error in MCAO systems is given by the SCAO case error (first two terms) and additional terms depending on the correction (asterism/targets geometry, temporal filtering of the AO loop, noise) and on spatiotemporal cross-correlations of the turbulence. These additional terms might reduce the error with respect to the SCAO case, as shown in Fig.~\ref{fig:dtj_vs_outerscale} and Fig.~\ref{fig:dtj_vs_asterism_radius}. \begin{figure} \centering \includegraphics[width=\linewidth]{dtj_vs_outerscale_D40m_esoprofile.pdf} \caption{Difference between SCAO and MCAO differential tilt jitter error ($\Delta\sigma_{DTJ}$ = $\sigma_{DTJ,SCAO}$ - $\sigma_{DTJ,MCAO}$) as a function of the outer scale. The telescope, DMs, NGSs, turbulence configurations are the same as in Fig.~\ref{fig:tt_res_onaxis}. The targets' angular separation is 5" and the asterism radius is 40".} \label{fig:dtj_vs_outerscale} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{dtj_vs_asterism_radius_D40m_esoprofile_no_threshold_LSE_new_distances.pdf} \caption{MCAO differential tilt jitter error as a function of the NGS asterism radius. The colors show different values of the distance between the astrometric targets. For each curve, the SCAO case is shown for comparison (dotted lines).} \label{fig:dtj_vs_asterism_radius} \end{figure} In the former, the difference between the RMS errors obtained from Eq.~\eqref{eq:psd_dtj_scao} and Eq.~\eqref{eq:psd_dtj_2} is plotted as a function of the outer scale. As expected, the discrepancy between the SCAO and MCAO values increases with the outer scale, as a larger outer scale leads to larger cross-correlations that help reduce the differential tilt jitter error in the MCAO correction. In the latter, the MCAO differential tilt jitter error as a function of the NGS asterism radius is shown.
The smaller cross-correlations associated with larger asterisms lead to an increase of the differential tilt jitter error with the asterism radius. This is evident when the distance is small and both targets are included within the asterism (d = 1", 5"); for larger distances, the errors are about constant up to an asterism radius comparable to the targets' separation and then show the same increasing trend. As in Sec.~\ref{sec:sci_time}, we can also take into account the contribution of the scientific exposure to the differential tilt jitter error, through the temporal filtering of the camera integrating over $T$: \begin{equation} \label{eq:phi_dtj_sci} \phi^{\alpha,\beta}_{DTJ, T}(\nu) = H_{T}(\nu) \big(\phi_{res}^{\alpha}(\nu) - \phi_{res}^{\beta}(\nu)\big) \, . \end{equation} The PSD of time-averaged differential tilt jitter is then: \begin{equation} \label{eq:psd_dtj_sci} S^{\alpha,\beta}_{DTJ, T}(\nu) = |H_{T}(\nu)|^2 S^{\alpha,\beta}_{DTJ}(\nu) \, , \end{equation} where $S^{\alpha,\beta}_{DTJ}$ is given by Eq.~\eqref{eq:psd_dtj_scao} for SCAO and by Eq.~\eqref{eq:psd_dtj_2} for MCAO. \section{Application: Differential tilt jitter error for MAVIS and MAORY} \label{sec:application} In this section, we use Eq.~\eqref{eq:psd_dtj_2} to investigate the contribution of the differential tilt jitter error to future astrometric observations; as case studies, we consider MAVIS at the VLT and MAORY at the ELT. \noindent In Table~\ref{table:mav_mao_parameters}, we summarize the main parameters that we used to describe the two systems. The maximum value of the asterism radius represents the technical field of view (120" for MAVIS and 160" for MAORY).
\begin{table} \centering \begin{tabular}{ |c|c|c| } \hline \textit{} & MAVIS & MAORY \\ \hline $D$ [m] & 8 & 39 \\ $h_{DM_0}$ [m] & 0 & 600 \\ $h_{DM_1}$ [m] & 13500 & 17000 \\ $r_{asterism}$ ["] & 10, 30, 50, 60 & 30, 55, 70, 80 \\ $r_{FoV}$ ["] & 15 & 30 \\ \hline \end{tabular} \vspace{.2cm} \caption{Telescope diameter, DM conjugation heights, set of asterism radii and scientific field of view radius used to derive the differential tilt jitter error for MAVIS- and MAORY-assisted observations. The outer scale used for both cases is 25m.} \label{table:mav_mao_parameters} \end{table} \noindent As in Sec.~\ref{sec:aniso}, we assume equilateral asterisms of NGSs with infinite flux in order to neglect the contribution of noise. The measurements from the three NGSs allow the reconstruction of tip and tilt, which are corrected on the DM0, and of focus-astigmatisms, which are applied on the DM1. We consider a closed loop, where the control is a pure integrator working at 1kHz and where we minimize the latency by considering a delay due to the WFSs integration time only. For the computation of the PSDs and CPSDs of turbulence, we used the same turbulence profile as in Sec.~\ref{sec:aniso}, with a zenith angle of 30$^{\circ}$. \noindent In Fig.~\ref{fig:dtj_mao_mav}, we show the differential tilt jitter error for MAVIS and MAORY, obtained for typical scientific exposures of $T$=30s. The error is computed considering the first source at the origin of the field of view and varying the distance of the second source up to the edge of the scientific field of view.
\begin{figure} \centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=\linewidth]{mavis_dtj_std_T30s_ESOprofile_z30deg.pdf} \end{minipage}\hfill \begin{minipage}{.45\textwidth} \centering \includegraphics[width=\linewidth]{maory_dtj_std_T30s_ESOprofile_z30deg.pdf} \end{minipage} \caption{Differential tilt jitter error as a function of the angular separation for MAVIS- (top) and MAORY- (bottom) assisted observations of length $T$=30s. The SCAO case (dotted black line) is shown for comparison, as well as the astrometric precision requirement (dashed red line) of the two systems.} \label{fig:dtj_mao_mav} \end{figure} In order not to be affected by the geometry of the asterism of NGSs, for each separation we made an azimuthal average of the errors obtained at different polar coordinates. The plots show that differential tilt jitter can introduce errors on relative astrometry up to $\sim$0.4-1mas for MAVIS and $\sim$60-90µas for MAORY at the edge of the field of view. As shown in Sec.~\ref{sec:sci_time}, this source of error can be reduced with the integration time; if the measurements, for instance, can be averaged over $\sim$30 minutes of exposures, the relative astrometric error due to differential tilt jitter is reduced by a factor of $\sim$ 8 and becomes smaller than the requirement value over the whole field of view for both cases. \noindent Current specifications suggest a major interest in high precision relative astrometry for separations up to 1" \citep{Rigaut20}. For a better visualization of this scale, in Fig.~\ref{fig:dtj_mao_mav_1arcsec} we show the differential tilt jitter error as a function of the asterism radius for a fixed distance of 1". The plots show that differential tilt jitter error should not represent a relevant contribution to the MAORY astrometric error budget for these separations, even considering the goal of 10µas. 
For MAVIS, the error is within the requirement of 150µas, but it is not compliant with the goal of 50µas for asterisms with a radius larger than 40" and for the typical exposure time of 30s. In this case, averaging over longer integration times is required. \begin{figure} \centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=\linewidth]{mavis_dtj_std_ESOprofile_z30deg_at1arcsec_3exptimes.pdf} \end{minipage}\hfill \begin{minipage}{.45\textwidth} \centering \includegraphics[width=\linewidth]{maory_dtj_std_ESOprofile_z30deg_at1arcsec_3exptimes.pdf} \end{minipage} \caption{Differential tilt jitter error for a target separation of 1" as a function of the NGS asterism radius for MAVIS- (top) and MAORY- (bottom) assisted observations. The results are plotted for $T=30, 120$ and $600$s and show the scaling with $T^{-1/2}$ that has been demonstrated in Sec.~\ref{sec:sci_time}.} \label{fig:dtj_mao_mav_1arcsec} \end{figure} \noindent It is worth pointing out that these results show the contribution of atmospheric tip-tilt residuals in terms of differential tilt jitter only. The contribution of tip-tilt residuals to the astrometric error in terms of the centroiding error is not considered (which is equivalent to assuming targets with infinite SNR). Moreover, the contribution of temporal errors of the AO loop is minimized and noise terms are neglected. On the other hand, it should be considered that the differential tilt jitter error could be calibrated out through dedicated coordinate transforms, if reference sources are available in the field \citep{Fritz10,Cameron09}. We also expect the error to be reduced if an LGS loop controlling orders higher than the astigmatisms is included. In this context, these results have to be considered as an upper limit. An extended study of the impact of the LGS loop residuals on the tip-tilt modes will be the subject of future work.
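As a quantitative illustration, the reduction factor quoted above follows directly from the $T^{-1/2}$ scaling of Sec.~\ref{sec:sci_time}: \begin{equation*} \dfrac{\sigma_{DTJ}(T = 1800 \, \mathrm{s})}{\sigma_{DTJ}(T = 30 \, \mathrm{s})} = \left( \dfrac{30}{1800} \right)^{1/2} = \dfrac{1}{\sqrt{60}} \simeq \dfrac{1}{7.7} \, , \end{equation*} that is, a reduction by a factor of $\sim$8 when the measurements are averaged over 30 minutes of 30-s exposures.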
\section{Conclusion} We have presented an analytical formalism to derive the temporal PSD of the wavefront residuals of an MCAO correction. The formulation includes tomographic, noise and temporal errors. The general framework allows one to select the telescope diameter, the asterism of either NGSs or LGSs, the DM configuration, the turbulence profile and the modes of distortion that are sensed through the GSs and compensated by the DMs. We derived an expression both for a closed-loop control with an LSE reconstruction and for a pseudo-open loop control with an MMSE reconstruction. We applied the results to an NGS-based MCAO configuration in order to analyse the spatial and temporal behavior of tip-tilt residuals: we found a quadratic dependence of the on-axis residuals on the angular separation of the asterism, which we demonstrated to be consistent with the control of plate-scale distortions operated by the MCAO correction; we also verified the scaling of the residuals with the inverse square root of the scientific exposure time by means of the temporal transfer function of the scientific camera. We analysed differential residuals as well and provided an analytical expression for the differential tilt jitter error. We showed that the cross-correlations between the GSs of the asterism and between the GSs and the targets play a role in reducing this source of error with respect to the SCAO case, and that parameters like the outer scale and the radius of the asterism can be crucial to properly decrease the differential tilt jitter in MCAO systems. Though these parameters are not fully under control, it is worth considering them during the preparation of astrometric observations. We finally used our results to quantify the contribution of the differential tilt jitter error to future astrometric observations, choosing MAORY and MAVIS as case studies.
In the case of an equilateral asterism of NGSs and considering the possibility of averaging over several exposures, differential tilt jitter should not be the dominant limiting factor to the astrometric precision of these systems. \section*{Acknowledgements} The authors thank Carmelo Arcidiacono for fruitful discussion. This work has been partially funded by ADONI - the ADaptive Optics National laboratory of Italy. \section*{Data Availability} No new data were generated or analysed in support of this research. \bibliographystyle{mnras}
\section{Introduction and Overview} \subsection{Introduction} Stationary solutions of supergravity theories, such as black holes, black $p$-branes, gravitational waves and Kaluza-Klein monopoles can often be presented in terms of harmonic functions, which only depend on the coordinates transverse to the worldvolume.\footnote{There are many excellent reviews on the subject, including \cite{Stelle}, which discusses the approach we are going to use in its Section 9.} This is related to the existence of multi-centered solutions: if the field equations can be reduced to a set of decoupled harmonic equations without assuming spherical symmetry, then not only single-centered harmonic functions, \[ H(r) = h + \frac{q}{r^{D-3}}\;, \] but also multi-centered harmonic functions \[ H(\vec{x}) = h + \sum_{i=1}^N \frac{q_i}{|\vec{x} - \vec{x}_i|^{D-3}} \] provide solutions of the field equations. The existence of stationary multi-centered solutions requires the exact cancellation of the forces between the constituents at arbitrary distance, the classical examples being multi-centered extremal black hole solutions like the Majumdar-Papapetrou solutions of Einstein-Maxwell theory \cite{MajPap}. This cancellation is often explained by supersymmetry: if the theory allows an embedding into a supersymmetric theory, then one can look for solutions admitting Killing spinors, which in turn leads to stationary multi-centered solutions. The saturation of an extremality bound, which is needed for the cancellation of forces, is then equivalent to the saturation of the supersymmetric mass bound (also called the BPS mass bound). The extremal Reissner-Nordstr\"om black hole, and its multi-centered generalizations, are the prototypical examples of such supersymmetric solitons \cite{Gib,GibHul}.
In supergravity theories with $N\geq 2$ supersymmetry the asymptotic behaviour of BPS solutions at event horizons is determined by the charges through the black hole attractor mechanism \cite{FerKalStr,Str,FerKal}, which forces the scalar fields to take fixed point values. The attractor mechanism and the construction of solutions in terms of harmonic functions are closely related: from the attractor equations (also called stabilisation equations or fixed point equations), which determine the asymptotic near-horizon solution, one can obtain the so-called generalized stabilisation equations, which allow one to express the complete solution algebraically in terms of harmonic functions \cite{FerKalStr,FerKal,BCdWKLM,Sabra4d,BLS,Sabra5d,ChaSab,CdWKM}. In the single-centered case the generalized stabilisation equations can be formulated equivalently as gradient flow equations for the scalars as functions of the radial coordinate \cite{FGK,Moore,Denef}. The potential driving the flow is the central charge. While imposing supersymmetry is sufficient to derive the attractor mechanism, and to obtain multi-centered solutions, it is not necessary. As already observed in \cite{FGK} the attractor mechanism is a general feature of extremal black holes in Einstein-Maxwell type theories. More recently, single-centered non-supersymmetric extremal solutions have been studied extensively and from various perspectives starting from \cite{GIJT,TriTri,KalSivSor}. Reviews of this subject can be found in \cite{Erice,SpringerLN}. By imposing that the solution is spherically symmetric in addition to stationary, one can reduce the problem of solving the equations of motion to a one-dimensional problem which only involves the radial coordinate \cite{FGK}. The reduced problem is formally equivalent to the motion of a particle on a curved target space in the presence of a potential, usually called the black hole potential.
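To make the structure of this reduced problem concrete, we quote the schematic form of the one-dimensional effective action of \cite{FGK} (conventions and normalizations vary in the literature, so this should be read as an illustration rather than as the form used in any particular derivation): \[ S_{1d} = \int d\tau \left[ \left( \frac{dU}{d\tau} \right)^2 + g_{ij}(\phi) \, \frac{d\phi^i}{d\tau} \frac{d\phi^j}{d\tau} + e^{2U} V_{BH}(\phi; p, q) \right] \, , \] supplemented by the Hamiltonian constraint \[ \left( \frac{dU}{d\tau} \right)^2 + g_{ij} \, \frac{d\phi^i}{d\tau} \frac{d\phi^j}{d\tau} - e^{2U} V_{BH} = c^2 \, , \] where $U$ is the warp factor of the four-dimensional metric, $\tau$ is an inverse radial coordinate, $V_{BH}$ is the black hole potential built from the electric and magnetic charges $(q,p)$, and $c$ is the non-extremality parameter, with $c=0$ for extremal solutions.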
One contribution to the potential depends on the charges and is obtained by eliminating the gauge fields through their equations of motion. Alternatively, one can often convert the gauge fields (or at least those components of the gauge fields relevant for the solution) into scalars. Then the equations of motion take the form of a geodesic equation (without potential) on an extended scalar manifold which encodes all relevant degrees of freedom. The black hole potential receives further contributions if the full higher-dimensional solution involves rotation, if the gauge fields cannot be expressed in terms of scalars, and if a cosmological constant, higher curvature terms, Taub-NUT charge, or other such complications are present. We will restrict ourselves to situations which can be formulated as geodesic motion (without potential) on an enlarged scalar manifold. In this set-up one is left with solving the scalar equations of motion, while the Einstein equations themselves result in a constraint, which can be interpreted as the conservation of the particle's energy. The standard approach to single-centered solutions is to try rewriting the second order scalar equations of motion as first order gradient flow equations. While for BPS solutions the potential driving the gradient flow is the central charge \cite{FGK}, first order rewritings have since been found for various non-BPS solutions, and the function driving the flow is referred to as the (`fake'-, `generalized' or `pseudo'-)superpotential or as the prepotential \cite{CerDal,CCDOP,PSVV:08,ADOT}. The problem of finding a first order gradient flow prescription can be reformulated using the Hamilton-Jacobi formalism as the problem of finding a canonical transformation \cite{ADOT}. In many cases the first order equations can be interpreted as generalized Killing spinor equations, by defining a suitable covariant derivative for spinors.
Similar observations have been made before in the context of cosmological solutions and domain walls, and this has motivated the concept of `fake'- or `pseudo'-supersymmetry \cite{Fake1,Fake2}. While much is known about the attractor mechanism for non-BPS black holes, we are not aware of a systematic analysis of the conditions which allow multi-centered solutions. From the supersymmetric case one is used to the observation that the existence of a first order rewriting and the reduction of the equations of motion to algebraic relations between the scalars and a set of harmonic functions (i.e. generalized stabilization equations) are closely related. In the single-centered case one might regard the generalized stabilization equations as `solutions' of the gradient flow equation.\footnote{This `solution' is in general not completely explicit, since generically one cannot find explicit expressions for the scalars in terms of the harmonic functions.} But if one looks beyond single-centered solutions, it becomes clear that the generalized stabilization equations are much more than mere solutions to the gradient flow equations. In the absence of spherical symmetry, the gradient flow equations are replaced by first order partial differential equations, which have not been studied much in the literature. In contrast, the form of the generalized stabilization equations remains the same, and non-spherical solutions simply correspond to a more general choice of harmonic functions, namely multi-centered instead of single-centered harmonic functions. Also note that for BPS solutions the generalized stabilization equations can be derived directly, without passing through an intermediate stage of first finding a first order rewriting and then solving the flow equations \cite{BLS,CdWKM}. This suggests developing an approach to non-BPS black holes which is not based on first order rewritings and flow equations, but on generalized stabilization equations.
In other words, one should try to reduce the second order equations of motion to decoupled harmonic equations, without imposing spherical symmetry. We do not expect that this strategy can work for general Einstein-Maxwell theories. As remarked in \cite{FGK}, it is virtually impossible to obtain detailed information about the behaviour of extremal black hole solutions away from the horizon and infinity if the scalar manifold is generic. The special geometries of vector multiplet manifolds provide examples where explicit solutions can be found (in general up to algebraic equations). Our strategy will be to start with general Einstein-Maxwell type Lagrangians, which include those of $N=2$ vector multiplets as a subclass, and then to work out the additional constraints that we need to impose on the scalar manifold in order to obtain multi-centered solutions. Since we are only interested in stationary solutions, we can perform a dimensional reduction over time and work with the resulting Euclidean theory. Dimensional reduction over time is a powerful solution generating technique, which was (to our knowledge) first used in \cite{NeuKra:69} and \cite{GibBreMai:88}, and which has recently been used to explore non-BPS extremal black holes (albeit only for single-centered solutions) \cite{CerDal, CCDOP,GNPW:05,GaiLiPadi:08,PSVV:08,ChiGut,ADOT}. We refer to \cite{Stelle,Bergshoeff:Geodesic,EucIII} for reviews of this method. The essential part of the reduced Lagrangian is a sigma model, whose equation of motion is the equation for a harmonic map from the reduced space-`time' to the scalar target space. The problem which we will investigate in this paper is to reduce the non-linear second order partial differential equation of a harmonic map to decoupled linear harmonic equations.
As we will see, this reduction imposes non-trivial conditions on the scalar manifold, which define a generalized version of the special geometry of $N=2$ vector multiplets, and which is characterised by the existence of a potential for the metric. While previous studies have either investigated supergravity Lagrangians, or Lagrangians with generic scalar manifolds, we have identified an interesting intermediate class of scalar manifolds: they are much more general than the target spaces of supergravity theories, while still allowing one to express the solution in terms of harmonic functions and thus to obtain multi-centered solutions. This class of scalar manifolds is much more generic than symmetric spaces. For symmetric spaces, powerful methods from the theory of Lie groups are available, and the construction of BPS and non-BPS extremal solutions can be related to integrable systems \cite{GNPW:05,GaiLiPadi:08,PSVV:08,ChiGut,CRTV,ADOT}. While this class of models is very interesting, symmetric spaces are not even general enough to cover the scalar geometries of supergravity with $N\leq 2$ supersymmetry. Thus one is limited to models with $N>2$ supersymmetry, or to special $N\leq 2$ models, like toroidal and orbifold compactifications, or consistent truncations of models with $N>2$ supersymmetry. While our analysis could (and ultimately should) be carried out in an arbitrary number of dimensions, we will be more specific and fix the number of dimensions to be five. Since our approach is guided by results on $N=2$ vector multiplets, this is a natural choice, because the so-called very special geometry of five-dimensional vector multiplets \cite{GST} is the simplest of the special geometries of $N=2$ supermultiplets. From our results it will be clear that there is a similar story for four-dimensional $N=2$ vector multiplets, but the five-dimensional case is a more convenient starting point for technical simplicity.
Thus, we will start with generic five-dimensional Einstein-Maxwell theories and construct asymptotically flat, electrically charged, extremal, multi-centered solutions by using the associated four-dimensional Euclidean sigma models. By imposing that the equations of motion reduce to decoupled harmonic equations, we obtain a constraint on the scalar metric which generalizes the very special real geometry of five-dimensional vector multiplets \cite{GST}. There are in fact two relevant conditions. An integrability condition for the solution implies the existence of a Hesse potential for the scalar metric, while the consistent lifting of the four-dimensional Euclidean solution to a solution of the five-dimensional Einstein-Maxwell theory requires in addition that the Hesse potential is the logarithm of a homogeneous function, which we call the prepotential. Five-dimensional supergravity corresponds to the special case where this prepotential is homogeneous of degree three. When expressed in terms of five-dimensional variables, the algebraic relations which express the solution in terms of harmonic functions take the form of the generalized stabilization equations for five-dimensional vector multiplets \cite{Sabra5d,ChaSab}. Therefore the solutions contain the static (non-rotating) electric multi-centered BPS solutions of five-dimensional supergravity \cite{Sabra5d,ChaSab} as a subclass. We also consider the case where we lift solutions of a four-dimensional Euclidean sigma model without coupling to gravity. In this case the Hesse potential is not constrained, and we obtain solitonic solutions of a five-dimensional gauge theory coupled to scalars. The construction of solutions is presented from the reduced, four-dim\-en\-sio\-nal perspective, i.e. we first construct solutions of four-dimensional Euclidean sigma models and discuss the lifting to five dimensions in a second step. 
The target geometries of the four-dimensional sigma models include those of four-di\-men\-sio\-nal Euclidean vector multiplets, both rigid and local, as special cases. Therefore there is some overlap between this paper and work on Euclidean special geometry \cite{EucI,EucII,EucIII}. In particular, the target space geometry of the four-dimensional Euclidean sigma model is para-complex,\footnote{Para-complex geometry is explained in some detail in \cite{EucI,EucIII}. The features relevant for our work will be explained in due course.} and the integrability condition guaranteeing the existence of multi-centered solutions implies that it is para-K\"ahler. The relation between the real, Hessian target spaces of five-dimensional sigma models and the para-K\"ahler geometry of four-dimensional sigma models provides a `para-version' or `temporal version' of the generalized $r$-map described recently in \cite{AleCor}. To be precise, we find two different generalized para-$r$-maps, depending on whether we couple the Euclidean sigma model to gravity before lifting, or not. In \cite{AleCor} only the case without gravity was considered. The solutions of the reduced Euclidean theory can be interpreted as instantons and are interesting in their own right. They are of the same type as the D-instanton solution of type-IIB supergravity \cite{GGP}, and the instanton solutions of $N=2$ hypermultiplets \cite{BGLMM,GutSpa,TheVan,DdVTV,deVVAn,ChiGut}, and they contain the instanton solutions of $N=2$ vector multiplets \cite{EucIII,MohWai1} as a subclass. Since the instantons satisfy a Bogomol'nyi bound and lift to extremal black holes, we refer to them as extremal instanton solutions. Extremality is equivalent to satisfying what we call the `extremal instanton ansatz,' which restricts the scalars to vary along totally isotropic submanifolds of the scalar target.
This in turn is equivalent to the vanishing of the energy momentum tensor, which makes it consistent to solve the reduced Einstein equations by taking the four-dimensional reduced metric to be flat. The dimensional lifting to five dimensions then gives rise to extremal black hole solutions. For single-centered extremal solutions of supergravity theories the distinction between BPS solutions and non-BPS solutions manifests itself in the form of the potential which drives the gradient flow equations. For BPS solutions this potential is the central charge, while for non-BPS solutions it is another function, which one needs to construct. In our framework this distinction finds a geometric interpretation in terms of the para-K\"ahler geometry of the target space of the Euclidean sigma model, because the extremal instanton ansatz comes in two versions. The first version, which can be imposed without further constraints on the scalar metric, requires that the scalar fields vary along the eigendistributions of the para-complex structure. However, if the metric has discrete isometries, a generalized version of the extremal instanton ansatz is possible, which allows the scalars to vary along other totally isotropic submanifolds of the target. This distinction generalizes the one between BPS and non-BPS extremal solutions in supergravity, and also provides a geometric interpretation of the difference between the two types of extremal instanton solutions. Besides serving as generating solutions for higher-dimensional solitons, instanton solutions are relevant for computing instanton corrections to quantum amplitudes and effective actions. While this second application is not our main focus in this paper, we encounter one notorious problem arising in this context: if one computes the instanton action by substituting the instanton solution into the Euclidean action one obtains zero instead of the expected non-vanishing finite result.
We review one of the proposed solutions, namely the dualization of axions into tensor fields \cite{GGP}. In this dual picture the Euclidean action is positive definite and extremal instantons satisfy a Bogomol'nyi bound. This motivates adding a specific boundary term to the original `purely scalar' action, which ensures that its evaluation on instanton solutions gives the same result as the dual `scalar-tensor' action. We show that the instanton action obtained this way agrees with the ADM mass of the black hole obtained by lifting the solution to five dimensions. If instead we lift four-dimensional solutions to five dimensions without coupling to gravity, we again find that the mass of the resulting soliton is equal to the instanton action. \subsection{Overview} This paper is structured as follows. In Section 2.1 we introduce the class of Euclidean sigma models which we will use to generate solutions. The scalar target space is required to be para-Hermitean\footnote{The relevant concepts from para-complex geometry will be explained in Section 2.} and to have $n$ commuting shift isometries. In Section 2.2 we show that the Euclidean scalar equations of motion can be reduced to a set of linear harmonic equations by imposing the extremal instanton ansatz, which corresponds to restricting the scalar fields to vary along totally isotropic subspaces of the target space. The consistency of the solution leads to an integrability condition, which has two natural solutions. Either one restricts the solution to depend on one variable only. Since we require that solutions approach a vacuum at infinity, this implies spherical symmetry. While this does not impose conditions on the scalar metric, it excludes multi-centered solutions. The second, more interesting solution of the integrability condition requires that the scalar metric has a Hesse potential. In this case the target space is para-K\"ahler rather than only para-Hermitean.
Since no constraint needs to be imposed on the solutions themselves, we obtain multi-centered solutions. In Section 2.3 we define instanton charges, which are the conserved charges corresponding to the $n$ commuting shift symmetries; these symmetries are required in order to be able to lift the Euclidean sigma model to a five-dimensional gauge theory. Then we rederive the extremal instanton solutions from a different angle. By imposing that solutions carry finite instanton charge, we can `peel off' one derivative from the field equations and reduce them to first order equations. As long as one does not impose spherical symmetry these are still (quasi-linear) partial differential equations. But once spherical symmetry is imposed, which we do in Section 2.4, the field equations reduce to first order gradient flow equations. We include some observations and remarks about the relation of our approach to the one based on first order rewritings, and to the Hamilton-Jacobi approach. In Section 3 we discuss a dual version of Euclidean sigma models, where the $n$ axionic scalars have been dualized into tensor fields. In the dual formulation the action of extremal instanton solutions is finite, positive and satisfies a Bogomol'nyi bound. To be precise, the finiteness of the action requires a suitable behaviour of the scalar fields at the centers of the harmonic functions. These conditions are further analyzed in Section 4. Instead of working with the dual `scalar-tensor' action, one can add a boundary term to the original `purely scalar' action, which has the effect that one obtains the same finite non-vanishing instanton action for both actions. In Section 4 we analyze two classes of Hesse potentials in more detail: homogeneous functions and logarithms of homogeneous functions. In these cases the asymptotic behaviour of the scalars at the centers and at infinity can be determined even if the field equations cannot be solved in closed form.
The conditions which guarantee the finiteness of the instanton action are found explicitly for this class: the Hesse potential must be homogeneous of negative degree, or it must be the logarithm of a homogeneous function (of any degree). The instanton action can be expressed as a function of the instanton charges and of the asymptotic scalar fields, which has the standard form of a BPS mass formula. We can also find analogues of the stabilization equations and generalized stabilization equations known from BPS black holes. Solutions do not quite show fixed point behaviour, but the scalars run off to points at infinite affine parameter, with fixed finite ratios that are determined by the charges. We give various explicit examples of solutions, which include both rigidly and locally supersymmetric models as well as models which cannot have a supersymmetric extension (the generic case). In Section 5 we briefly discuss the lifting of four-dimensional Euclidean sigma models to five-dimensional field theories without gravity. The most interesting result is that the mass of the resulting soliton equals the instanton action. Since instanton charges are electric charges from the five-dimensional point of view, the expression for the mass takes the same form as for the BPS mass in a supersymmetric theory. The special case of a cubic Hesse potential gives us the rigid para-$r$-map between the scalar geometries of five-dimensional vector multiplets and four-dimensional Euclidean vector multiplets. For general Hesse potentials we obtain a `para-version' of the generalized rigid $r$-map which relates Hessian manifolds to para-K\"ahler manifolds with $n$ commuting shift isometries. In Section 6.1 we discuss the relation between four-dimensional Euclidean sigma models coupled to gravity, and five-dimensional Einstein-Maxwell type theories.
We start in five dimensions, and present a generalized version of the very special real geometry of vector multiplets where the prepotential is allowed to be homogeneous of arbitrary degree. This is used to write down a class of Einstein-Maxwell type Lagrangians, which reduce over time to para-K\"ahler sigma models with $n$ commuting shift isometries, coupled to gravity. This provides a generalized version of the local para-$r$-map, which includes the para-$r$-map between supersymmetric theories as a special case. We also indicate how the reduction over space results in a generalized version of the local $r$-map. We then set up an instanton -- black hole dictionary. Lifting extremal instanton solutions gives extremal black holes, and the ADM mass is shown to be equal to the instanton action in Section 6.2. In Section 6.3 we turn to the entropy of the black holes, which is non-vanishing or zero, depending on how many charges are switched on. The black hole entropy can be interpreted in the instanton picture by using a specific conformal frame for the four-dimensional metric, which is different from the Einstein frame. We call this frame the Kaluza-Klein frame, because it corresponds to a fixed time slice of the five-dimensional metric. In this frame the four-dimensional metric of extremal instantons is not flat, but only conformally flat, and the geometry can be interpreted as a semi-infinite wormhole. The Bekenstein-Hawking entropy of the black hole corresponds to the asymptotic size of the throat of the wormhole, and the degenerate case of black holes with vanishing entropy corresponds to wormholes with vanishing asymptotic size of the throat. In Section 6.4 we illustrate the relation between extremal instantons and extremal black holes with several explicit examples. 
Then we show in full generality that the instanton attractor equations lift to black hole attractor equations, which have the same form as the stabilization equations and generalized stabilization equations of five-dimensional vector multiplets. In particular, we show that the `fixed-ratio run-away' behaviour of the four-dimensional scalars is equivalent to the proper fixed point behaviour of the five-dimensional scalars. In Appendix A we expand on the observation that target space geometries, which are obtained from a higher-dimensional theory by dimensional reduction over space or time, respectively, can be viewed as different real sections of one underlying complex target space. We explain the notion of `complexifying (para-)complex numbers' and indicate that complex-Riemannian geometry is the appropriate framework for relating target spaces occurring in dimensional reduction over space and time by analytical continuation. \section{Sigma models with para-Hermitean target spaces} \subsection{Motivation and discussion of the Euclidean action \label{SectEucAct}} The starting point for all subsequent constructions is the class of sigma models of the form \begin{equation} \label{paraH-real} S[\sigma,b]_{(0,4)} = \int d^4 x \; \frac{1}{2} N_{IJ}(\sigma) \left( \partial_m \sigma^I \partial^m \sigma^J - \partial_m b^I \partial^m b^J \right) \;. \end{equation} Space-time is taken to be flat Euclidean space $E$ with indices $m=1,2,3,4$. The target space $M$ is $2n$-dimensional with coordinates $\sigma^I, b^I$, where $I=1,\ldots, n$. The matrix $N_{IJ}(\sigma)$ is assumed to be real, positive definite and to depend only on half of the scalar fields. Thus the metric of $M$ has $n$ commuting isometries which act as shifts on the axionic scalars $b^I$: \begin{equation} b^I \rightarrow b^I + C^I \;, \end{equation} where $C^I$ are constants.
The relative minus sign between the kinetic terms of the scalars $\sigma^I$ and the axionic scalars $b^I$ implies that the metric $N_{IJ} \oplus (-N_{IJ})$ of $M$ has split signature $(n,n)$. There are two related reasons for considering Euclidean sigma models with split signature target spaces: \begin{enumerate} \item One approach to the definition of Euclidean actions combines the standard Wick rotation with an analytic continuation $b^I \rightarrow i b^I$ for axionic scalars \cite{vNWal}. D-instantons and other instanton solutions of string theory and supergravity are obtained as classical solutions of Euclidean actions of this type \cite{GGP,BGLMM,GutSpa,TheVan,DdVTV,ChiGut}. \item Solitons, i.e. stationary, regular, finite energy solutions of $(n+1)$-dimensional theories can be dimensionally reduced over time, resulting in instantons, i.e. regular finite action solutions of the reduced $n$-dimensional Euclidean theory. Conversely, one approach to the construction of solitons is to reduce the theory under consideration over time to obtain a simpler Euclidean theory, preferably a scalar sigma model. Instanton solutions of the Euclidean theory can then be lifted to solitons of the original theory. String theory has a large variety of solitonic solutions, which play a central role in establishing string dualities and thus obtaining information about the non-perturbative completion of the theory. Dimensional reduction has been used as a solution generating technique for some time in Einstein-Maxwell theory \cite{NeuKra:69}, supergravity \cite{GibBreMai:88} and string theory \cite{CleGal}. More recent applications include \cite{GNPW:05,GaiLiPadi:08,PSVV:08,EucIII}. We refer to \cite{Stelle,Bergshoeff:Geodesic,EucIII} for a review of this method. \end{enumerate} If we do not couple the sigma model (\ref{paraH-real}) to gravity, then its lift to $1+4$ dimensions is\footnote{Dimensional lifting in the presence of gravity will be discussed later.
As we will see, the results obtained without coupling to gravity remain valid, provided that suitable restrictions on the scalar metric are imposed.} \begin{equation} \label{5dAction} S[\sigma, A, \ldots]_{(1,4)} = \int d^5 x \left( - \frac{1}{2} N_{IJ}(\sigma) \partial_\mu \sigma^I \partial^\mu \sigma^J - \frac{1}{4} N_{IJ}(\sigma) F_{\mu \nu}^I F^{J|\mu \nu} + \cdots \right) \;. \end{equation} Here space-time is five-dimensional Minkowski space with indices $\mu, \nu, \ldots = 0,1,2,3,4$ and $F^I_{\mu \nu}= \partial_\mu A^I_\nu - \partial_\nu A^I_\mu$ are abelian field strengths. It is easy to see that (\ref{5dAction}) reduces to $(-1)$ times the action (\ref{paraH-real}) upon setting \[ \partial_0 \sigma^I = 0 \;,\;\; \partial_0 A_m^I =0 \;,\;\;\; F^I_{mn}=0 \;, \] identifying $b^I = A^I_0$ and dropping the integration over time. This type of reduction corresponds to the restriction to static and purely electric five-dimensional backgrounds. As indicated by the `dots' in (\ref{5dAction}), the five-dimensional theory could have further terms, as long as they do not contribute to static, purely electric field configurations involving the scalars and gauge fields. For example, the action for five-dimensional vector multiplets \cite{EucI} contains a Chern-Simons term and fermionic terms, but these do not contribute to backgrounds which are static and where only scalars and electric field strengths are excited. Note that there is a conventional minus sign between (\ref{5dAction}) and (\ref{paraH-real}). Our conventions for Lorentzian actions are that the space-time metric is of the `mostly plus' type, and that kinetic terms are positive definite. The convention for Euclidean actions is that the terms for the scalars $\sigma^I$ are positive definite, while scalar fields obtained by temporal reduction of gauge fields have a negative definite action.
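The sign flip for the axionic scalars can be traced explicitly: in the `mostly plus' convention ($\eta^{00} = -1$), the reduction conditions together with the identification $b^I = A^I_0$ leave $F^I_{0m} = - \partial_m b^I$ as the only non-vanishing field strength components, so that \[ -\frac{1}{4} N_{IJ} F^I_{\mu\nu} F^{J|\mu\nu} = -\frac{1}{2} N_{IJ} \, \eta^{00} \delta^{mn} F^I_{0m} F^J_{0n} = +\frac{1}{2} N_{IJ} \, \partial_m b^I \partial^m b^J \;, \] while the scalar term reduces to $-\frac{1}{2} N_{IJ} \partial_m \sigma^I \partial^m \sigma^J$. The sum is $(-1)$ times the integrand of (\ref{paraH-real}), with the axions entering with the opposite sign relative to the $\sigma^I$.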
We can also reduce the five-dimensional action (\ref{5dAction}) over a space-like direction, leading to the following sigma model on four-dimensional Minkowski space: \begin{equation} \label{4dactionMink} S[\sigma, b]_{(1,3)} = - \int d^4 x \;\frac{1}{2} N_{IJ}(\sigma) \left( \partial_{\bar{m}} \sigma^I \partial^{\bar{m}} \sigma^J + \partial_{\bar{m}} b^I \partial^{\bar{m}} b^J \right) \;, \end{equation} where $\bar{m}=0,1,2, 3$. Note that we have discarded all terms in (\ref{5dAction}) which do not contribute to the scalar sigma model. By a Wick rotation we obtain the Euclidean action \begin{equation} \label{Herm-real} S[\sigma,b]'_{(0,4)} = \int d^4 x \frac{1}{2} N_{IJ}(\sigma) \left( \partial_m \sigma^I \partial^m \sigma^J + \partial_m b^I \partial^m b^J \right) \;, \end{equation} which is positive definite. Comparison to (\ref{paraH-real}) shows explicitly that dimensional reduction over space followed by a Wick rotation is different from dimensional reduction over time. However, the two actions (\ref{paraH-real}) and (\ref{Herm-real}) are related by the analytic continuation $b^I \rightarrow i b^I$. In other words, the two actions obtained by space-like and by time-like reduction, respectively, are related by a modified Wick rotation which acts non-trivially on axionic scalars. For later reference, let us introduce the following notation for the target spaces of the actions we have encountered so far. The five-dimensional action (\ref{5dAction}) has an $n$-dimensional target space $M_r$ with positive definite metric $N_{IJ}$. The four-dimensional actions (\ref{4dactionMink}) and (\ref{Herm-real}) have a $2n$-dimensional target space $M'$ with positive definite metric $N_{IJ} \oplus N_{IJ}$, while (\ref{paraH-real}) has a $2n$-dimensional target space $M$ with split signature metric $N_{IJ} \oplus (-N_{IJ})$. The manifolds $M$ and $M'$ carry additional structures.
For $M'$ we can define complex coordinates \[ Y^I = \sigma^I + i b^I \;, \] and we see that the target space $M'$ is Hermitean: \begin{equation} \label{SY} S[Y]'_{(0,4)} = \int d^4 x \; \frac{1}{2} N_{IJ}(Y+\overline{Y}) \partial_m Y^I \partial^m \overline{Y}^J \;. \end{equation} This raises the question of whether there is a similar additional structure for the indefinite target space $M$. And indeed, here one can define para-complex coordinates by \[ X^I = \sigma^I + e b^I \;, \] where the para-complex unit has the properties \[ e^2 = 1 \;,\;\;\;\overline{e} = -e \;. \] The theory of para-complex manifolds runs to a large extent parallel to the theory of complex manifolds. In particular, the concepts of para-Hermitean, para-K\"ahler, and special para-K\"ahler manifolds are analogous to their complex counterparts. We refer to \cite{EucI,EucII,EucIII} for a detailed account. Using para-complex coordinates, one sees that the action (\ref{paraH-real}) has a para-Hermitean target space: \begin{equation} \label{SX} S[X]_{(0,4)} = \int d^4 x \; \frac{1}{2} N_{IJ}(X+\overline{X}) \partial_m X^I \partial^m \overline{X}^J \;. \end{equation} Thus actions of the type (\ref{paraH-real}) have a target space which is para-Hermitean and has $n$ commuting isometries acting as shifts. The latter implies that $M$ can be obtained from an $n$-dimensional manifold $M_r$ with positive definite metric, by applying temporal dimensional reduction to the corresponding action. The two real actions (\ref{SY}) and (\ref{SX}) can be viewed as two different real forms of one underlying complex action. This is further explained in Appendix A. Complex actions are useful for obtaining a more unified picture of actions and solutions which are related by analytic continuation. In \cite{Bergshoeff:Complex} complex actions for the ten-dimensional and eleven-dimensional maximal supergravity theories have been used to give a unified description of domain wall and cosmological solutions.
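To make the equivalence of (\ref{SX}) with the real form (\ref{paraH-real}) explicit, one can expand the para-complex kinetic term using $e^2 = 1$: \[ N_{IJ} \, \partial_m X^I \partial^m \overline{X}^J = N_{IJ} \left( \partial_m \sigma^I + e \, \partial_m b^I \right) \left( \partial^m \sigma^J - e \, \partial^m b^J \right) = N_{IJ} \left( \partial_m \sigma^I \partial^m \sigma^J - \partial_m b^I \partial^m b^J \right) \;, \] where the term linear in $e$ drops out because $N_{IJ}$ is symmetric. The same computation with $i$ in place of $e$, using $i^2 = -1$, reproduces the positive definite metric $N_{IJ} \oplus N_{IJ}$ underlying (\ref{SY}).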
There seems to be a close relation to the concept of fake supersymmetry. Complex actions also seem to be useful in understanding the Euclidean action of four-dimensional supergravity theories \cite{EucIII}. \subsection{From harmonic maps to harmonic functions} Solving the equations of motion for a sigma model is equivalent to constructing a harmonic map from the space-time $X$ to the scalar target space $M$. In general, both $X$ and $M$ can be (pseudo-)Riemannian manifolds. We restrict ourselves to the case where $X$ is Euclidean space $E$ equipped with its standard flat metric. Then the action of a general sigma model takes the form \[ S[\Phi]_{(0,4)} = \int d^4 x N_{ij}(\Phi) \partial_m \Phi^i \partial^m \Phi^j \;, \] and the equations of motion can be brought to the form \begin{equation} \label{HarmonicMap} \Delta \Phi^i + \Gamma_{jk}^i \partial_m \Phi^j \partial^m \Phi^k =0 \;, \end{equation} where $\Gamma^i_{jk}$ are the Christoffel symbols of the metric $N_{ij}$ of $M$. This is the coordinate form of the equation of a harmonic map $\Phi : E \rightarrow M$ from Euclidean space $E$ to the (pseudo-)Riemannian target $M$. One strategy for constructing such maps\footnote{See \cite{Stelle,Bergshoeff:Geodesic,EucIII} for a more detailed review.} is to identify totally geodesic submanifolds $N \subset M$. A submanifold $N\subset M$ is called totally geodesic if every geodesic of $N$ is also a geodesic of $M$. Then the embedding of $N$ into $M$ is a totally geodesic map, and since the composition of a harmonic map $E \rightarrow N$ with a totally geodesic map $N \rightarrow M$ is harmonic, it suffices to find harmonic maps $\phi: E \rightarrow N \subset M$ in order to solve the scalar equations of motion. We are interested in a criterion which guarantees that the solution of the harmonic map equation (\ref{HarmonicMap}) can be expressed in terms of harmonic functions.
This will happen in particular if the submanifold $N$ is flat, so that the Christoffel symbols vanish identically if we use affine coordinates. Then we can parametrize the scalar fields such that the independent scalars $\phi^a$, $a=1, \ldots, \dim N$ correspond to affine coordinates on $N$, and the harmonic map equation reduces to \begin{equation} \Delta \phi^a =0 \;. \end{equation} If $\dim N < \dim M$, then the solution for the remaining $\dim M - \dim N$ scalar fields can be expressed in terms of the solution for the $\phi^a$. The dimension of $N$ controls the number of independent harmonic functions which occur in the solution. We will now investigate under which conditions the reduction of the equations of motion to decoupled harmonic equations can be achieved, assuming that the target manifold $M$ is para-Hermitean and has $n$ commuting shift symmetries. In this case it is convenient to write the equations of motion in terms of the real fields $\sigma^I$, $b^I$. By variation of the action (\ref{paraH-real}) we obtain: \begin{eqnarray} \partial^m \left( N_{IJ} \partial_m \sigma^J \right) - \frac{1}{2} \partial_I N_{JK} \left( \partial_m \sigma^J \partial^m \sigma^K - \partial_m b^J \partial^m b^K \right) &=& 0 \;,\nonumber \\ \partial^m \left( N_{IJ} \partial_m b^J \right) &=& 0\;. \label{FullEOM} \end{eqnarray} This could be cast into the form (\ref{HarmonicMap}), but in the present form it is manifest that a drastic simplification occurs if we impose that \begin{equation} \label{InstantonAnsatz} \partial_m \sigma^I = \pm \partial_m b^I \;. \end{equation} In this case the two equations (\ref{FullEOM}) collapse into \begin{equation} \label{ReducedEOM} \partial^m \left( N_{IJ} \partial_m \sigma^J \right) = 0 \;, \end{equation} which is very close to the harmonic equation. We will refer to the condition (\ref{InstantonAnsatz}) as the extremal instanton ansatz. 
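For the reader's convenience, the collapse can be verified directly. Substituting (\ref{InstantonAnsatz}) into (\ref{FullEOM}), the bracket in the first equation cancels term by term,
\[
\partial_m \sigma^J \partial^m \sigma^K - \partial_m b^J \partial^m b^K = (\pm 1)^2 \, \partial_m b^J \partial^m b^K - \partial_m b^J \partial^m b^K = 0 \;,
\]
while the second equation becomes
\[
\partial^m \left( N_{IJ} \partial_m b^J \right) = \pm\, \partial^m \left( N_{IJ} \partial_m \sigma^J \right) \;,
\]
so that both equations of (\ref{FullEOM}) indeed reduce to the single equation (\ref{ReducedEOM}).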
Geometrically, the extremal instanton ansatz implies that the scalar fields are restricted to vary along the null directions of the metric of $M$. In other words, the scalars take values in a submanifold $N\subset M$ which is totally isotropic. The extremal instanton ansatz has the consequence that the energy momentum tensor vanishes identically. The `improved', symmetric energy momentum tensor for the action (\ref{paraH-real}) is obtained by variation with respect to a Riemannian background metric on $E$: \begin{equation} \label{EMtensor} T_{mn} = N_{IJ} \left( \partial_m \sigma^I \partial_n \sigma^J - \partial_m b^I \partial_n b^J \right) - \frac{1}{2} \delta_{mn} N_{IJ} \left( \partial_l \sigma^I \partial^l \sigma^J - \partial_l b^I \partial^l b^J \right) \;. \end{equation} Since we are in four dimensions (more generally, in more than two dimensions), the vanishing of $T_{mn}$ is equivalent to \begin{equation} \label{NullDirections} N_{IJ} \left( \partial_m \sigma^I \partial_n \sigma^J - \partial_m b^I \partial_n b^J \right) =0 \;, \end{equation} which means that the scalar fields vary along the null directions of $N_{IJ} \oplus (-N_{IJ})$. More precisely, depending on the choice of sign in (\ref{InstantonAnsatz}), the scalar fields vary along the eigendirections of the para-complex structure with eigenvalue $(+1)$ or $(-1)$, respectively. Thus the submanifold $N$ is the integral manifold of an eigendistribution of the para-complex structure. The vanishing of the energy momentum tensor has the important consequence that solutions of (\ref{paraH-real}) which satisfy (\ref{InstantonAnsatz}) remain solutions, without any modification, if we couple the sigma model to gravity, \begin{equation} \label{SigmaR} S[g,\sigma,b]_{(0,4)} = \int d^4 x \sqrt{g} \; \frac{1}{2} \left( -R + N_{IJ} \partial_m \sigma^I \partial^m \sigma^J - N_{IJ} \partial_m b^I \partial^m b^J \right) \;. 
\end{equation} Since the energy momentum tensor vanishes, it is consistent to solve the Einstein equation by taking the metric to be flat, $g_{mn} = \delta_{mn}$. Thus the instanton solutions we find are solutions of sigma models coupled to gravity (\ref{SigmaR}), subject to the `Hamiltonian constraint' $T_{mn}=0$. As we will discuss in more detail later, instanton solutions of (\ref{paraH-real}) can therefore be lifted consistently to solutions of five-dimensional gravity coupled to matter. In this case the lifting works somewhat differently than in the rigid case, because one has to take into account the Kaluza-Klein scalar. The resulting five-dimensional solutions are not flat, but have a conformally flat four-dimensional part, as is typical for BPS solutions. One particular class of lifted solutions are extremal static five-dimensional black holes. The theories and solutions obtainable from (\ref{paraH-real}) include all five-dimensional supergravity theories with abelian vector multiplets and their BPS black hole solutions. We will refer to instanton solutions obtained by the extremal instanton ansatz as extremal. One reason for this choice of terminology is that they can be lifted to extremal black hole solutions. Another reason is the saturation of a Bogomol'nyi bound for the action, which will be discussed in Section 3. The indefiniteness of the metric of $M$ is essential for obtaining non-trivial solutions of the scalar equations of motion with vanishing energy momentum tensor. For a positive (or negative) definite scalar target space metric, $T_{mn}=0$ would imply that all scalar fields have to be constant. The extremal instanton ansatz (\ref{InstantonAnsatz}) is sufficient but not necessary for the vanishing of the energy momentum tensor and the reduction of the equations of motion to (\ref{ReducedEOM}). 
If the metric $N_{IJ}$ is invariant under transformations of the form \begin{equation} \label{RotIsometry} N_{IJ} \rightarrow N_{KL} R^K_{\;\;I} R^L_{\;\;J} \;, \end{equation} where $R^I_{\;\;J}$ is a constant matrix, already the generalized instanton ansatz,\footnote{`Generalized extremal instanton ansatz' would be more accurate, but we will use `generalized instanton ansatz' for convenience.} which generalizes (\ref{InstantonAnsatz}), \begin{equation} \label{GenInstantonAnsatz} \sigma^I = R^I_{\;\;J} b^J \; \end{equation} implies $T_{mn}=0$ and (\ref{ReducedEOM}). Geometrically, the transformation (\ref{RotIsometry}) corresponds to an isometry of $N_{IJ} \oplus (-N_{IJ})$ which acts by \[ \sigma^I \rightarrow \sigma^I \;,\;\;\; b^I \rightarrow R^I_{\;\;J} b^J \;. \] The relation (\ref{GenInstantonAnsatz}) has appeared previously in the context of extremal black hole solutions of supergravity, where $R^I_{\;\;J} \not= \delta^I_J$ corresponds to non-BPS solutions \cite{CerDal,CCDOP}. The simplest examples of non-BPS black holes correspond to flipping some of the charges of the black hole, which corresponds to diagonal $R$-matrices with entries $\pm 1$. Geometrically, this means that some of the fields vary along the $(+1)$-eigendirections of the para-complex structure while the rest varies along the $(-1)$-eigendirections. In this way the distinction between BPS and non-BPS extremal solutions in supergravity can be understood geometrically and extended to a larger class of non-supersymmetric theories. Let us now investigate the reduced equations of motion (\ref{ReducedEOM}), which remain to be solved after imposing the extremal instanton ansatz (\ref{InstantonAnsatz}) or its generalization (\ref{GenInstantonAnsatz}): \[ \partial^m \left( N_{IJ} \partial_m \sigma^J \right) = 0 \;. \] This reduces to a set of $n$ harmonic equations, provided there exist `dual fields' $\sigma_I$ with the property \[ \partial_m \sigma_I = N_{IJ} \partial_m \sigma^J \;. 
\] The existence of such dual fields implies the integrability condition \[ \partial_{[n} ( N_{IJ} \partial_{m]} \sigma^J) = 0 \;. \] The same condition has been observed in the context of five-dimensional black hole solutions in \cite{PSVV:08}. Since $\partial_{[n} \partial_{m]} \sigma^J = 0$, the integrability condition is equivalent to \begin{equation} \label{ConditionOnN} \partial_{[n} N_{IJ} \partial_{m]} \sigma^J = \partial_K N_{IJ} \partial_{[n} \sigma^K \partial_{m]} \sigma^J = 0 \;. \end{equation} There are two strategies for solving this constraint. The first is to restrict the solution $\sigma^I(x)$ while not making assumptions about the metric $N_{IJ}$. If we assume that the solution only depends on one of the coordinates of $E$, then (\ref{ConditionOnN}) is solved automatically. The most natural assumption is spherical symmetry, $\sigma^I = \sigma^I(r)$, where $r$ is a radial coordinate, as this admits solutions which asymptotically approach ground states $\sigma^I_{\rm vac} = \mbox{const}$ at infinity. In this case the explicit solutions of $\Delta \sigma_I=0$ are single-centered harmonic functions, \[ \sigma_I = H_I(r) = h_I + \frac{q_I}{r^2} \;, \] where $h_I$ and $q_I$ are constants. The constants $h_I$ specify the values of $\sigma_I$ at infinity. As we will see in the next section the parameters $q_I$ are charges. Such solutions can be interpreted as describing an instanton with charges $q_I$ located at $r=0$.\footnote{At $r=0$ the fields $\sigma_I$ take singular values, and the equations of motion (\ref{ReducedEOM}) are not satisfied, unless explicit source terms are added. This is analogous to electric point charges in electrostatics.} Geometrically, this type of solution corresponds to a situation where the scalars flow along a null geodesic curve in $M$. The second strategy is to make no assumption about the solution. 
This is necessary if we want to allow for multi-centered solutions, \begin{equation} \label{SolDualsigma} \sigma_I(x) = H_I(x) = h_I + \sum_{a=1}^N \frac{q_{aI}}{|x - x_a|^2}\;, \end{equation} where $h_I, q_{aI}$ are constants and where $x,x_a\in E$. Such solutions correspond to $N$ instantons with charges $q_{aI}$, which are located at the positions $x_a$. For multi-centered solutions we cannot impose spherical symmetry but the integrability condition (\ref{ConditionOnN}) can still be solved by imposing the condition \begin{equation} \label{HesseMetric1} \partial_{[K} N_{I]J} = 0 \end{equation} on the scalar metric. This is equivalent to requiring that the first derivatives $\partial_K N_{IJ}$ of the metric are completely symmetric, or, again equivalently, that the Christoffel symbols of the first kind $\Gamma_{IJ|K}$ are completely symmetric. Finally, by applying the Poincar\'e lemma twice, we see that (\ref{HesseMetric1}) is locally equivalent to the existence of a Hesse potential ${\cal V}(\sigma)$: \begin{equation} \label{HesseMetric} N_{IJ} = \frac{\partial^2 {\cal V}}{\partial \sigma^I \partial \sigma^J} \;. \end{equation} A coordinate-free formulation is obtained by observing that the local existence of a Hesse potential is equivalent to the existence of a flat, torsion-free connection $\nabla$ which has the property that $\nabla g$, where $g$ is the metric, is a completely symmetric rank 3 tensor field. This is the definition of a Hessian metric given in \cite{AleCor}. There it was also observed that the affine special real manifolds which are the target spaces of rigid five-dimensional vector multiplets are special Hessian manifolds where the cubic form $\nabla g$ is parallel with respect to $\nabla$. It is easy to see why supersymmetry requires this additional condition. Supersymmetry implies the presence of a Chern-Simons term, whose coefficient is given by $\nabla g$. Gauge invariance requires that this coefficient is covariantly constant. 
In affine coordinates, this becomes the well-known condition that the third derivatives of the Hesse potential (which for rigid supersymmetry is identical with the prepotential) must be constant. Hence the Hesse potential must be a cubic polynomial. In this paper we consider more general Hesse potentials, but since the models are not supersymmetric, there is no fixed relation between the Chern-Simons term (if any is present) and other terms in the Lagrangian. In the purely electric background that we consider, a Chern-Simons term does not contribute, and therefore we do not need to investigate whether a Chern-Simons term could or should be added. The dimensional reduction of models with general Hessian target spaces leads to a generalization of the rigid version of the $r$-map \cite{AleCor}. Recall that the $r$-map relates the target spaces of five-dimensional and four-dimensional vector multiplets \cite{dWvP:1992}. The $r$-map can be derived by dimensionally reducing the vector multiplet action from five to four dimensions, and depending on whether one considers supersymmetric field theories or supergravity theories one obtains a rigid (also called affine) or a local (also called projective) version of the $r$-map. Affine (projective) very special real manifolds are mapped to affine (projective) special K\"ahler manifolds, respectively. The generalized rigid $r$-map of \cite{AleCor} is obtained by relaxing the constraint that the scalar target geometry of the five-dimensional theory is very special real and only requiring it to be Hessian. In the notation of our paper the resulting generalized $r$-map is obtained by reducing (\ref{5dAction}) to (\ref{Herm-real}) while imposing that $N_{IJ}(\sigma)$ satisfies (\ref{HesseMetric}). As shown in \cite{AleCor} the resulting target space $M'$ of the four-dimensional theory is a K\"ahler manifold with $n$ commuting shift isometries. We already noted that $M'$ is Hermitean. 
To check that it is K\"ahler we go to complex coordinates $Y^I = \sigma^I + i b^I$ and verify by explicit calculation that \[ K(Y,\bar{Y}) = K(Y+\bar{Y}) = 4 {\cal V}(\sigma(Y,\bar{Y})) \; \] is a K\"ahler potential for the metric $N_{IJ} \oplus N_{IJ}$ of $M'$. Note that \cite{AleCor} prove that the relation between Hessian manifolds $(M_r, N_{IJ})$ and K\"ahler manifolds $(M',N_{IJ} \oplus N_{IJ})$ is one-to-one: any K\"ahler manifold with $n$ commuting shift isometries can be obtained from a Hessian manifold by the generalized $r$-map. By reducing (\ref{5dAction}), with Hessian $N_{IJ}$, over time rather than space, we obtain (\ref{paraH-real}) and a para-version (or temporal version) of the generalized rigid $r$-map. As shown in \cite{EucI}, the rigid para-$r$-map relates affine very special real manifolds to affine special para-K\"ahler manifolds with a cubic prepotential. If we only impose that $M_r$ is Hessian, then $M$ is a para-K\"ahler manifold with $n$ commuting shift isometries. We have already seen that the metric $N_{IJ} \oplus (- N_{IJ})$ is para-Hermitean. To see that it is para-K\"ahler we go to para-complex coordinates $X^I=\sigma^I + e b^I$ and verify that \[ K(X,\bar{X}) = K(X+\bar{X}) = 4 {\cal V}(\sigma(X,\bar{X})) \; \] is a para-K\"ahler potential. We expect that every para-K\"ahler metric with $n$ commuting shift isometries can be obtained in this way. To conclude this section, let us discuss how our class of solutions fits into the general setup of constructing harmonic maps $E\rightarrow M$ by finding totally geodesic embeddings $N \subset M$ and harmonic maps $E \rightarrow N$. The extremal instanton ansatz implies that in our construction $N$ is totally isotropic. 
Although the induced metric of $N$ is totally degenerate one can still construct harmonic maps $E \rightarrow M$, by decomposing `harmonic maps'\footnote{Strictly speaking, we should not call this a harmonic map if $N$ is totally isotropic, because the definition of a harmonic map requires that source and target manifolds are equipped with non-degenerate metrics. However, the relevant point is that the composed map $E \rightarrow N \subset M$ is harmonic.} $E\rightarrow N$ with totally geodesic embeddings $N \subset M$ \cite{EucIII}. Our explicit calculation using the coordinates $\sigma^I, b^I$ shows that the totally isotropic submanifolds defined by the extremal instanton ansatz must be totally geodesic. This can also be understood as follows. The submanifolds defined by the ansatz (\ref{InstantonAnsatz}) are eigendistributions of the para-complex structure of $M$. The integrability condition (\ref{HesseMetric1}) implies that $M$ is para-K\"ahler, and therefore the eigendistributions are integrable and parallel with respect to the Levi-Civita connection. Such submanifolds are in particular totally geodesic. By explicit calculation we have also seen that $N$ must be flat. This can again be understood geometrically. In complete analogy to K\"ahler manifolds, the Riemann tensor of a para-K\"ahler manifold has a particular index structure, when written in para-complex coordinates. The only non-vanishing components are those where both pairs of indices are of mixed type, i.e. one para-holomorphic and one anti-para-holomorphic index. However, the pullback of the Riemann tensor to an eigendistribution of the para-complex structure is of pure type, i.e. the non-vanishing components have either only para-holomorphic or only anti-para-holomorphic indices. Therefore the pullback of the Riemann tensor onto these eigendistributions vanishes. 
Thus the pulled-back connection is flat, and the harmonic map equations must reduce to harmonic equations when expressed in affine coordinates. We can also understand why the existence of single-centered solutions does not impose constraints on the scalar target metric. In this case $N$ is a null geodesic curve, and therefore it is flat for any choice of the metric of $M$. The additional feature which is required for the existence of multi-centered solutions is the existence of a potential. For many purposes it is convenient to use real coordinates $\sigma^I, b^I$ and to work with the Hesse potential ${\cal V}(\sigma)$. The affine coordinates on $N$, in terms of which the equations of motion reduce to decoupled harmonic equations $\Delta \sigma_I=0$, are given by the first derivatives of the Hesse potential \begin{equation} \label{DualCoordinate} \sigma_I \simeq \frac{\partial {\cal V}}{\partial \sigma^I} \;. \end{equation} This is clear because applying the partial derivative $\partial_m$ to (\ref{DualCoordinate}) reproduces the integrability condition (\ref{ConditionOnN}). In (\ref{DualCoordinate}) we have left the constant of proportionality undetermined, so that we can later fix it to convenient numerical values case by case. Given the Hesse potential we have an explicit formula for the dual fields $\sigma_I$ in terms of the scalars $\sigma^I$. In general it is not possible to give explicit expressions for the scalars $\sigma^I$ as functions of the dual scalars $\sigma_I$. Hence, while instanton solutions are completely determined by harmonic functions $H_I$ through a set of algebraic relations, it is not possible in general to express the solution in terms of the $H_I$ in closed form. However, we will see that this does not prevent us from understanding many features of the solutions. Moreover, if the Hesse potential is sufficiently simple, explicit expressions can be obtained. Examples will be given later. 
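As a minimal illustration (a single-scalar toy model chosen here for simplicity, not one of the examples treated later), take the Hesse potential ${\cal V}(\sigma) = \frac{1}{6} \sigma^3$ on the half-line $\sigma > 0$. Then
\[
N_{11} = {\cal V}''(\sigma) = \sigma \;, \qquad \sigma_1 = {\cal V}'(\sigma) = \frac{1}{2} \sigma^2 \;,
\]
where we have fixed the constant of proportionality in (\ref{DualCoordinate}) to unity. The reduced equation of motion (\ref{ReducedEOM}) becomes
\[
\partial^m \left( \sigma \partial_m \sigma \right) = \Delta \left( \frac{1}{2} \sigma^2 \right) = \Delta \sigma_1 = 0 \;,
\]
so setting $\sigma_1 = H(x)$ with $H$ harmonic solves the equation, and in this simple case the algebraic relation can be inverted explicitly: $\sigma = \sqrt{2 H(x)}$ wherever $H>0$, with $b = \pm \sigma$ up to a constant by (\ref{InstantonAnsatz}).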
\subsection{From instanton charges to harmonic functions} We have already anticipated that the parameters $q_I$ or $q_{aI}$ occurring in the harmonic functions $H_I$ can be interpreted as charges. In this section we provide the definition of instanton charges and derive the instanton solutions from a slightly different perspective. It has been observed in the literature on extremal non-BPS black holes that if solutions can be expressed in terms of harmonic functions then the equations of motion can often be reduced from second to first order equations \cite{CerDal,CCDOP,PSVV:08}. Our derivation will show that these two properties are related through the existence of conserved charges: imposing that solutions carry {\em finite} instanton charges implies that the equations of motion can be replaced by first order equations. The symmetry of the target manifold $M$ under constant shifts $b^I \rightarrow b^I + C^I$ implies the existence of $n$ charges, which we call instanton charges. As we will see later these lift to five-dimensional electric charges. The current associated to the shift symmetry is \[ j_I = \partial_m \left( N_{IJ}(\sigma) \partial^m b^J \right) \;. \] It is `conserved' in the sense that the Hodge-dual four-form is closed. The charge obtained by integrating this current over four-dimensional Euclidean space is \begin{equation} \label{DefInstCharge} Q_I = \int d^4x j_I \;. \end{equation} Since $j_I$ is a total derivative, the charge $Q_I$ can be re-written as a surface charge, as is typical for gauge theories: \[ Q_I = \oint d^3 \Sigma^m \left( N_{IJ}(\sigma) \partial_m b^J \right) \;. \] The integral is performed over a topological three-sphere which encloses all sources. Note that explicit sources are needed to have non-vanishing instanton charges, because the equation of motion (\ref{FullEOM}) for $b^I$ implies $j_I =0$. To obtain non-trivial solutions we allow the presence of pointlike ($\delta$-function type) sources. 
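The normalization of such pointlike sources is fixed by the standard Green's function identity on four-dimensional Euclidean space, which we record here for later convenience:
\[
\Delta \left( \frac{1}{r^2} \right) = - 4 \pi^2 \, \delta^{(4)}(x) \;, \qquad r = |x| \;.
\]
Away from the origin this follows from the radial form of the Laplacian, $\left( \partial_r^2 + \frac{3}{r} \partial_r \right) r^{-2} = 6 r^{-4} - 6 r^{-4} = 0$, while integrating over a small ball around the origin and applying Gauss' theorem with $\mbox{vol}(S^3_1) = 2\pi^2$ fixes the coefficient $-4\pi^2$. This factor is the origin of the proportionality between the charges and the coefficients of the harmonic functions found below.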
Pointlike sources in Euclidean theories are referred to as $(-1)$-branes. The existence of solutions carrying $(-1)$-brane charge is taken as evidence that the theory should be extended by adding $(-1)$-branes. This is the philosophy underlying field theory and string dualities, for which our class of actions provides models. Solutions with non-vanishing instanton charge must have a particular asymptotic behaviour. We assume that the sources are contained in a finite region and take the limit $r\rightarrow \infty$, where $r$ is a radial coordinate with origin within this region. The integrand can be expanded in powers of $\frac{1}{r}$, and we assume that the contribution of the leading term to the charges $Q_I$ is non-vanishing and finite.\footnote{The expansion in $\frac{1}{r}$ is of course a version of the multipole expansion. In fact, from the five-dimensional point of view it is literally the multipole expansion of a discrete charge density contained in a finite region.} This implies that subleading terms in $\frac{1}{r}$ do not contribute to the charge. Since the leading term is independent of the angles, we take the integration surface to be a three-sphere $S^3_r$ of radius $r$ and integrate over the angles. The resulting charge is \begin{equation} \label{InstChargeMulti} Q_I = \mbox{vol}(S^3_1) \lim_{r \rightarrow \infty} \left( r^3 N_{IJ}(\sigma) \partial_r b^J \right) \;, \end{equation} where $\mbox{vol}(S^3_1) = 2 \pi^2$ is the volume of the unit three-sphere. Since we assume that $Q_I$ is neither infinite nor zero, it follows that the integrand $N_{IJ} \partial_r b^J$ falls off like $\frac{1}{r^3}$: \[ N_{IJ} \partial_r b^J = \frac{1}{2\pi^2} \frac{Q_I}{r^3} + \cdots \;, \] where the omitted terms are of order $\frac{1}{r^4}$. 
Now we observe that the leading term in $N_{IJ} \partial_r b^J$ is the derivative of a spherically symmetric harmonic function $\tilde{H}_I(r)$: \begin{equation} \label{AsymptoticSol} N_{IJ} \partial_r b^J = \frac{1}{2\pi^2} \frac{Q_I}{r^3} + \cdots = \partial_r \tilde{H}_I(r) + \cdots \;, \end{equation} with \[ \tilde{H}_I(r) = \frac{\tilde{q}_I}{r^2} + \tilde{h}_I \;, \] where $\tilde{h}_I$ are constants and where $\tilde{q}_I = -\frac{1}{4\pi^2} Q_I$ are proportional to the charges. For simplicity we will refer to the parameters $\tilde{q}_I$ as charges. While the leading term of the expanded solution is automatically spherically symmetric, the full solution may not be. This leads to the distinction of two cases, precisely as in the previous section. If we impose that the full solution is spherically symmetric, then we obtain a solution to the equations of motion by setting all subleading terms to zero, and imposing the extremal instanton ansatz (\ref{InstantonAnsatz}) or its generalized version (\ref{GenInstantonAnsatz}): \[ N_{IJ} \partial_r b^J = \partial_r \tilde{H}_I(r) \;. \] This solution is spherically symmetric and at $r=0$ the equations of motion need to be modified by a $\delta$-function type source term with coefficient $\tilde{q}_I$. The source is interpreted as a $(-1)$-brane of total charge $\tilde{q}_I$, which is located at the center $r=0$ of the harmonic function. If we do not assume that the full solution is spherically symmetric, then we need to find solutions of (\ref{ReducedEOM}) with asymptotics (\ref{AsymptoticSol}) subject to the (generalized) extremal instanton ansatz. Such solutions are obtained if \begin{equation} \label{SolDualb} N_{IJ} \partial_m b^J = \partial_m \tilde{H}_I(x)\;, \end{equation} where $\tilde{H}_I(x)$ are harmonic functions. 
Since the right hand side is a total derivative, we need to impose an integrability condition equivalent to (\ref{HesseMetric1}), and thus we recover the condition that the scalar metric $N_{IJ} \oplus (-N_{IJ})$ of $M$ must be a para-K\"ahler metric. Assuming this, we have managed to reduce the second order equations of motion (\ref{ReducedEOM}) to the first order quasilinear partial differential equations \begin{equation} \label{1stOrderForm} \partial_m b^I = N^{IJ} \partial_m \tilde{H}_J(x) \;, \end{equation} where $N^{IJ}$ is the inverse of $N_{IJ}$. From our derivation it is clear that the crucial ingredient for the reduction of the order of the equation of motion is the existence of the charges $Q_I$, which can be used to prescribe the asymptotic behaviour of the solution, and `to peel off' one derivative from the equation of motion, provided that the integrability condition (\ref{ConditionOnN}) holds. Solutions with the correct asymptotics are given by multi-centered harmonic functions \begin{equation} \tilde{H}_I(x) = \tilde{h}_I + \sum_{a=1}^N \frac{\tilde{q}_{aI}}{|x - x_a|^2} \end{equation} where $x,x_a \in \mathbbm{R}^4$. They correspond to $N$ $(-1)$-branes with charges $\tilde{q}_{aI}$, which are located at the centers $x_a$. For $|x| \rightarrow \infty$ we have \begin{equation} \tilde{H}_I (x) \approx \tilde{h}_I + \frac{1}{|x|^2} \sum_{a=1}^N \tilde{q}_{aI} + {\cal O}(|x|^{-3}) \;. \end{equation} Thus the total instanton charges of such a configuration are $\tilde{q}_I = \sum_{a=1}^N \tilde{q}_{aI}$. The relation between this version of the solution and the one given in the previous section is provided by the (generalized) extremal instanton ansatz. Observe that \[ \partial_r \sigma_I = N_{IJ} \partial_r \sigma^J = N_{IJ} R^J_{\;\;K} \partial_r b^K = R_I^{\;\;J} N_{JK} \partial_r b^K \;, \] where $R_I^{\;\;J}$ is the inverse of the transpose of $R^I_{\;\;J}$. 
Comparing the solutions (\ref{SolDualsigma}) and (\ref{SolDualb}) we conclude \[ \partial_m H_I = R_I^{\;\;J} \partial_m \tilde{H}_J \;, \] which implies $H_I = R_I^{\;\;J} \tilde{H}_J$ up to an additive constant. This constant reflects the shift symmetry $b^I \rightarrow b^I + C^I$. However, the coefficients of the non-constant terms in the harmonic functions are unambiguous and are related by the rotation matrix $R_I^{\;\;J}$. In particular the instanton charges $q_I$ and $\tilde{q}_I$ are related by \[ q_I = R_I^{\;\;J} \tilde{q}_J \;. \] Thus we have seen that the reduction of the equations of motion to the decoupled harmonic equations and the reduction of the equations of motion from second to first order differential equations result from the same integrability condition (\ref{ConditionOnN}). The integrability condition can be solved by either imposing that the solution is spherically symmetric, or by restricting the target geometry to be para-K\"ahler. \subsection{Spherically symmetric solutions and flow equations} In this section we take a closer look at spherically symmetric solutions. For spherically symmetric black hole solutions (and other related solitonic solutions), the reduction of the equation of motion to first order equations was first noticed for BPS solutions. In this context the first order equations are known as (generalized) attractor equations, or (gradient) flow equations. Later it was realized that a first order rewriting is often also possible for non-BPS solutions, and leads to first order flow equations which are driven by a potential that generalizes the $N=2$ central charge \cite{CerDal,CCDOP,PSVV:08}. Let us therefore explain how gradient flow equations fit into our framework. 
We have seen previously that if we impose that solutions carry non-vanishing instanton charge (and satisfy the integrability condition (\ref{ConditionOnN}), which is trivial for spherically symmetric solutions), then the second order equations of motion reduce to the first order quasilinear partial differential equations (\ref{1stOrderForm}). If we impose spherical symmetry the equations reduce further to the first order quasilinear ordinary differential equations \[ \sigma^I{}' = N^{IJ}(\sigma) H'_J(r) = N^{IJ}(\sigma) \frac{d}{dr} \left( \frac{q_J}{r^2} + h_J \right) \;, \] where $f'=\frac{df}{dr}$. The standard form of the flow equations is obtained by introducing the new coordinate $\tau=\frac{1}{r^2}$: \[ \dot{\sigma}^I = N^{IJ}(\sigma) \frac{d}{d\tau} \left( q_J \tau + h_J \right) = N^{IJ} q_J \;, \] where $\dot{f}=\frac{df}{d\tau}$. By introducing the function \[ W = q_J \sigma^J \] one obtains the gradient flow equations \begin{equation} \label{1storder_gradient_flow} \dot{\sigma}^I = N^{IJ} \partial_J W \;. \end{equation} In terms of the instanton charges $Q_I \propto \tilde{q}_I$, the `superpotential' is \[ W = R_I^{\;\;J} \tilde{q}_J \sigma^I\;. \] This form is familiar from black holes \cite{CerDal,CCDOP,PSVV:08}. If the underlying theory is supersymmetric, and if the $R$-matrix is proportional to the identity, then \[ W = \pm Z = \pm \tilde{q}_I \sigma^I \;, \] where $Z$ is the real central charge of the supersymmetry algebra of the underlying five-dimensional theory. $Z$ is also one of the two real central charges of the four-dimensional Euclidean supersymmetric theory obtained by reduction over time. The new coordinate $\tau$ has a simple geometrical interpretation. To see this consider the version of the equations of motion which involve the dual scalars $\sigma_I$: \[ \Delta \sigma_I =0 \;. \] This is the harmonic equation for a map from space-time $E$ into a flat submanifold $N\subset M$, written in terms of affine coordinates on $N$. 
For spherically symmetric solutions, this takes the form \[ \left( \frac{\partial^2 }{\partial r^2} + \frac{3}{r} \frac{\partial}{\partial r} \right) \sigma_I = 0 \;. \] This is the geodesic equation for a curve on a flat submanifold $N\subset M$. The presence of the second term shows that $r$ is not an affine curve parameter. However, one can always introduce an affine parameter $\tau$, which is unique up to affine transformations, such that the equation reduces to \[ \frac{\partial^2 }{\partial \tau^2} \sigma_I =0 \;. \] It is easy to see that for the case at hand the affine parameters are \[ \tau = \frac{a}{r^2} + b \;, \] where $a\not=0$ and $b$ are constants. Thus $r\rightarrow \tau=\frac{1}{r^2}$ is a reparametrization of the geodesic which brings it to affine form. The solution takes the particularly simple form of harmonic functions in one variable, \[ \sigma_I (\tau) = q_I \tau + \sigma_I(0) \;. \] For completeness, let us review an alternative derivation of the flow equations, which uses a variant of the Bogomol'nyi trick and which is used frequently in the literature on non-BPS extremal black holes (see for example \cite{CerDal,CCDOP,PSVV:08}). In spherically symmetric backgrounds $ds^2 = dr^2 + r^2 d \Omega_{(3)}^2$ the action (\ref{paraH-real}) can be reduced to the one-dimensional action \[ S[\sigma,b]_{(0,1)} = \int r^3 dr \; \frac{1}{2} N_{IJ} \left( \sigma^I{}' \sigma^J{}' - b^I{}' b^J{}' \right) \;. \] Then one tries to rewrite this action as an (alternating) sum of perfect squares plus boundary terms. The factor $r^3$ can be eliminated by going to the affine curve parameter $\tau=\frac{1}{r^2}$: \begin{equation} \label{1daction} S[\sigma,b]_{(0,1)} = \frac{1}{2} \int d \tau N_{IJ} \left( \dot{\sigma}^I \dot{\sigma}^J - \dot{b}^I \dot{b}^J \right) \;. 
\end{equation} The Euler-Lagrange equations of this action need to be supplemented by the constraint \begin{equation} \label{HamConstr} {\cal H} = \frac{1}{2} N_{IJ} \left( \dot{\sigma}^I \dot{\sigma}^J - \dot{b}^I \dot{b}^J\right) =0 \;, \end{equation} which implies that the solution is extremal.\footnote{In general the constraint is $H=c^2$, where $c$ is a constant, but the case $c\not=0$ corresponds to non-extremal solutions \cite{FGK}, which we do not consider in this paper.} From the five-dimensional point of view, this constraint imposes the Einstein equations, and therefore it is analogous to the four-dimensional constraint $T_{mn}=0$. Note that ${\cal H}$ in (\ref{HamConstr}) is the Hamiltonian of the one-dimensional action. The canonical momenta \begin{equation} \label{LargeMomenta} p_I= \frac{\partial {\cal L}}{\partial \dot{\sigma}^I} = N_{IJ} \dot{\sigma}^J \;\;\;\mbox{and}\;\;\; \tilde{p}_I= \frac{\partial {\cal L}}{\partial \dot{b}^I} = N_{IJ} \dot{b}^J \end{equation} are conserved and agree with the charges: $p_I = q_I$ and $\tilde{p}_I = \tilde{q}_I$. Since the Lagrangian is quadratic in the velocities and does not contain a potential, the Hamiltonian coincides with the Lagrangian. The first order form of the equations of motion can be obtained by rewriting the Lagrangian as an (alternating) sum of squares, up to boundary terms \cite{CerDal,CCDOP,PSVV:08}: \begin{eqnarray} S[\sigma,b]_{(0,1)} &=& \frac{1}{2} \int d \tau \big[ N_{IJ} \left( \dot{\sigma}^I - N^{IK} q_K \right) \left( \dot{\sigma}^J - N^{JL} q_L \right) \label{BogoVar} \\ && - N_{IJ} \left( \dot{b}^I - N^{IK} \tilde{q}_K \right) \left( \dot{b}^J - N^{JL} \tilde{q}_L \right) \big] + \mbox{boundary terms} \;, \nonumber \end{eqnarray} where the constants $q_I$ and $\tilde{q}_I$ are related by $q_I = R_I^{\;\;J} \tilde{q}_J$. Since the boundary terms do not contribute to the equations of motion, a subclass of solutions is obtained by setting both squares to zero. 
This is equivalent to the combined flow equations for $\sigma^I$ and $b^I$, or to the generalized instanton ansatz $\dot{\sigma}^I = R^I_{\;\;J} \dot{b}^J$ together with the flow equations for the independent scalars. \subsection*{Reduced scalar manifold, geodesic potential, and remarks on the Hamilton-Jacobi formalism} So far we have worked on the scalar manifold $M$, which is parametrized by the $2n$ scalars $\sigma^I$ and $b^I$. One approach used frequently in the literature is to eliminate the $b^I$ by their equations of motion, which results in an effective potential for the $\sigma^I$ which contains the charges as parameters \cite{FGK}. The resulting equation for $\sigma$ describes geodesic motion with a non-trivial potential on the $n$-dimensional manifold $M_r$. We briefly review this approach in order to explain how our work relates to \cite{ADOT}, who applied the Hamilton-Jacobi formalism to spherically symmetric, static black holes.\footnote{While \cite{ADOT} also consider non-extremal black holes, we restrict ourselves to the extremal case.} To facilitate the comparison, it is convenient to write the Euler-Lagrange equations of the action (\ref{1daction}) in the following form: \begin{eqnarray} \ddot{\sigma}^I + \Gamma^I_{JK} \dot{\sigma}^J \dot{\sigma}^K + \frac{1}{2} N^{IL} \partial_L N_{JK} \dot{b}^J \dot{b}^K &=& 0 \;, \label{1dgeodesic} \\ \frac{d}{d\tau} \left(N_{IJ} \dot{b}^J \right)&=& 0 \;. \nonumber \end{eqnarray} Here $\Gamma^{I}_{JK}$ are the Christoffel symbols of the metric $N_{IJ}(\sigma)$ on the manifold $M_r$. While the combined set of equations is the geodesic equation for the metric $N_{IJ} \oplus (- N_{IJ})$ on the manifold $M$, one can use the fact that $N_{IJ}$ is independent of the $b^I$ to eliminate the $b^I$ and thus obtain a geodesic equation with potential on $M_r$. The equations of motion of the $b^I$ state that the quantities \[ \tilde{q}_I = N_{IJ} \dot{b}^J \] are conserved.
In fact, the $\tilde{q}_I$ are the conserved axionic charges introduced previously. Using this the equations (\ref{1dgeodesic}) reduce to \begin{equation} \label{1dgeodesic_with_potential} \ddot{\sigma}^I + \Gamma^I_{JK} \dot{\sigma}^J \dot{\sigma}^K + \frac{1}{2} N^{IL} \partial_L N_{JK} N^{JM} N^{KN} \tilde{q}_M \tilde{q}_N =0 \;. \end{equation} The constraint (\ref{HamConstr}) now takes the form \begin{equation} {\cal H} = \frac{1}{2} \left( N_{IJ} \dot{\sigma}^I \dot{\sigma}^J - N^{IJ} \tilde{q}_I \tilde{q}_J \right) =0 \;. \end{equation} Expressing this in terms of the canonical momenta $p_I = N_{IJ} \dot{\sigma}^J$ and defining the `geodesic potential' \begin{equation} V(\sigma)_{\tilde{q}} = N^{IJ} \tilde{q}_I \tilde{q}_J \;, \end{equation} the Hamiltonian constraint becomes \[ \tilde{\cal H}(\sigma, p) = \frac{1}{2} \left( p_I N^{IJ} p_J - V(\sigma)_{\tilde{q}} \right) = 0 \;. \] The geodesic potential is positive definite for positive definite $N_{IJ}$ \cite{FGK,ADOT}. The relative minus sign between the `kinetic' term and the potential is due to the fact that our `time' is actually a space-like, radial coordinate. The associated action and Lagrangian are given by \begin{equation} \label{1deffective_action} \tilde{S}[\sigma]_{(0,1)} = \frac{1}{2} \int d \tau (N_{IJ} \dot{\sigma}^I \dot{\sigma}^J + V(\sigma)_{\tilde{q}}) \;. \end{equation} Note that this action is {\em not} obtained by substituting the definition of the geodesic potential into (\ref{1daction}), which would lead to a different sign in front of the potential. Rather, the two Hamiltonians are related through eliminating the $b^I$ by their equations of motion, and the associated Lagrangians are in turn given as Legendre transforms. This distinction is crucial, since the elimination of the $b^I$ leads to a non-trivial potential. 
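To illustrate the reduced dynamics, consider the simplest case of a single scalar with metric $N(\sigma) = \sigma^{-2}$, which is the Hessian metric of the logarithmic potential ${\cal V} = - \log \sigma$ studied in detail later in this paper. The geodesic potential is $V(\sigma)_{\tilde{q}} = \sigma^2 \tilde{q}^2$, and (\ref{1dgeodesic_with_potential}) becomes \[ \ddot{\sigma} - \frac{\dot{\sigma}^2}{\sigma} - \tilde{q}^2 \sigma^3 = 0 \;. \] This is solved by $\sigma(\tau) = (q \tau + h)^{-1}$, with $h$ a constant, provided that $q^2 = \tilde{q}^2$, which is precisely the condition imposed by the Hamiltonian constraint ${\cal H} = 0$. The conserved momentum is $p = N \dot{\sigma} = - q$, and the dual coordinate $\sigma^{-1} = q \tau + h$ is linear in $\tau$, in agreement with the general solution in terms of harmonic functions.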
To check that the procedure is correct, observe that the Euler-Lagrange equations of (\ref{1deffective_action}) \[ \ddot{\sigma}^I + \Gamma^I_{JK} \dot{\sigma}^J \dot{\sigma}^K - \frac{1}{2} N^{IL} \partial_L V (\sigma)_{\tilde{q}} =0 \;, \] agree with (\ref{1dgeodesic_with_potential}), which were obtained from the Euler-Lagrange equations (\ref{1dgeodesic}) of the action (\ref{1daction}) by eliminating the $b^I$ through their equation of motion. The problem investigated in \cite{ADOT} is the following: given an action of the form (\ref{1deffective_action}), how can one find new coordinates $\Sigma^I$ and momenta $P_I$, such that the new momenta are conserved? By Hamilton-Jacobi theory this can be achieved by finding a suitable generating function $\tilde{W}(\sigma, P, \tau)$ of the old coordinates and new momenta. This function must in particular satisfy $p_I = \frac{\partial \tilde{W}}{\partial \sigma^I}$ and $\Sigma^I = \frac{\partial \tilde{W}}{\partial P_I}$. Since $p_I = N_{IJ} \dot{\sigma}^J$ this leads to a first order gradient flow driven by the generating function $\tilde{W}$: $\dot{\sigma}^I = N^{IJ} \partial_J \tilde{W}$ \cite{ADOT}. For extremal black holes the generating function is independent of `time' $\tau$ \cite{ADOT}. As we have seen above, the coordinates $\sigma^I$ which we use throughout this paper have associated canonical momenta $p_I$ which are proportional to the charges and hence conserved. This is due to the extremal instanton ansatz, which solves the constraint ${\cal H}=0$ by imposing that $\dot{\sigma}^I$ and $\dot{b}^I$ are proportional up to the constant matrix $R^{I}_{\;\;J}$. Since the momenta associated with the $b^I$ are conserved as a consequence of the shift symmetries, the extremal instanton ansatz implies that the $p_I$ are conserved as well. Above we derived gradient flow equations (\ref{1storder_gradient_flow}) which are driven by the `superpotential' $W = q_I \sigma^I$.
As is easily verified, we can interpret this function as the generating function $\tilde{W} = P_I \sigma^I$ of the trivial canonical transformation $\Sigma^I = \sigma^I$, $P_I = p_I$. The triviality of the Hamilton-Jacobi problem reflects that we are already working, for extremal black holes, in the coordinate system adapted to the symmetries. Note that this does not require any assumption on the geometry of $M_r$, because for spherically symmetric black holes the integrability condition does not impose constraints on the scalar geometry. In the case where the manifold $M_r$ is Hessian, we can go to dual coordinates $\tilde{\sigma}_I \simeq \frac{\partial {\cal V}}{\partial \sigma^I}$ and the momenta are given by $p_I = \dot{\tilde{\sigma}}_I$. This observation should be useful when investigating non-extremal black hole solutions, where the constraint is deformed into ${\cal H} = c^2$. We leave a detailed investigation of non-extremal solutions to future work. \section{The dual picture} Given that we interpret the solutions we have constructed as instantons, we should expect that by substituting the solution into the action we obtain a finite and positive result which is proportional to the instanton charges. But since the scalar fields vary along null directions of the target space, it is clear that the instanton action, when computed using (\ref{paraH-real}), is identically zero. Thus the same feature which allows for non-trivial instanton solutions renders their interpretation as instantons problematic. This is one aspect of the more fundamental problem of working with a Euclidean action which is not positive definite. The same observations and questions apply to the type-IIB D-instanton solution \cite{GGP} and other stringy instantons, such as instanton solutions for four-dimensional hypermultiplets \cite{BGLMM,TheVan,DdVTV,deVVAn,ChiGut}.
For the purpose of generating higher-dimensional stationary solutions none of the above points is critical, except perhaps that one might expect the ADM mass of a black hole or other soliton to be related to the action of the instanton obtained by dimensional reduction with respect to time. For the D-instanton and various other similar instanton solutions it is known that one can obtain an instanton action which is finite, positive and proportional to the instanton charges by working with a dual version of the action (\ref{paraH-real}), which is obtained by dualizing the axionic scalars $b^I$ into tensor fields. Alternatively, a specific boundary term can be added to (\ref{paraH-real}). In this section we derive the relevant formulae for the dualization of sigma models of the type (\ref{paraH-real}). Later we will show that the resulting instanton actions agree with the masses of the solitons obtained by dimensional lifting. In the sigma model (\ref{paraH-real}), the axionic scalars $b^I$ enter into the field equations only through their `field strength' $F_m^I=\partial_m b^I$, which can be re-expressed in terms of the Hodge-dual three-forms $H_{mnp|I}$. By construction, the three-forms will satisfy the Bianchi identities \begin{equation} \label{BianchiH} \partial_{[m}H_{npq]|I} =0 \;, \end{equation} and therefore they can be written, at least locally, as the exterior derivatives of two-form gauge fields $B_{mn|I}$. The standard Lagrangian for a theory of scalars $\sigma^I$ and two-form gauge fields $B_{mn|I}$ takes the form \begin{equation} \label{ScalarTensorLagrangian} {\cal L} = - \frac{1}{2} N_{IJ}(\sigma) \partial_m \sigma^I \partial^m \sigma^J - \frac{1}{2 \cdot 3!} N^{IJ}(\sigma) H_{mnp|I} H_J^{mnp} \;, \end{equation} where \[ H_{mnp|I} = 3! \partial_{[m} B_{np]|I} \;, \] and where $N^{IJ}$ is the inverse of $N_{IJ}$. Our parametrization anticipates that the dualization of antisymmetric tensor fields into axions inverts the coupling matrix.
The Euclidean form of the Lagrangian is obtained by a Wick rotation, and the resulting Euclidean action \begin{equation} \label{ScalarTensorAction} S_E[\sigma, B] = - \int d^4x {\cal L} \end{equation} is positive definite.\footnote{We include an explicit sign in this definition, so that the Euclidean action is positive definite instead of negative definite.} We will now show that this action is equivalent to (\ref{paraH-real}), in the sense that it gives rise to the same equations of motion. The first step is to promote the Bianchi identity $\partial_{[m } H_{npq]|I}=0$ to a field equation by introducing a Lagrange multiplier term: \begin{eqnarray} S &=& \int d^4 x \Big( \frac{1}{2} N_{IJ}(\sigma) \partial_m \sigma^I \partial^m \sigma^J + \frac{1}{2\cdot 3!} N^{IJ}(\sigma) H_{mnp|I} H^{mnp}_J \nonumber \\ &&+ \lambda b^I \epsilon^{mnpq} \partial_m H_{npq|I} \Big) \;. \end{eqnarray} Here $b^I$ is the Lagrange multiplier for the $I$-th Bianchi identity, and $\lambda$ is a normalization constant which we will fix to a convenient value later. Variation of this action with respect to $H^{mnp}_I$ gives their equations of motion, which state that $H^{mnp}_I$ and $\partial_m b^I$ are Hodge dual: \begin{equation} \label{Hdualb} H_I^{mnp} = 3! \lambda N_{IJ}(\sigma) \epsilon^{mnpq}\partial_q b^J \;. \end{equation} When we substitute this back into the action, we obtain \begin{eqnarray} S[\sigma,b] &=& \int d^4 x \left( \frac{1}{2} N_{IJ} (\sigma)\partial_m \sigma^I \partial^m \sigma^J - \frac{1}{2} (3!\lambda)^2 N_{IJ} \partial_m b^I \partial^m b^J \right) \nonumber \\ &+& (3!\lambda)^2 \oint d^3 \Sigma^m b^I N_{IJ} \partial_m b^J \;. \end{eqnarray} The boundary term in the second line results from an integration by parts. We observe that the bulk term matches with (\ref{paraH-real}) if we choose $(3!\lambda)^2 =1$.
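It is instructive to trace the relative sign between the two terms explicitly. Using the Euclidean identity $\epsilon^{mnpq} \epsilon_{rnpq} = 3! \, \delta^m_r$, substitution of (\ref{Hdualb}) into the kinetic term of the three-forms gives \[ \frac{1}{2\cdot 3!} \, N^{IJ} H_{mnp|I} H^{mnp}_J = + \frac{1}{2} (3!\lambda)^2 N_{IJ} \, \partial_m b^I \partial^m b^J \;, \] while the Lagrange multiplier term, after the integration by parts, contributes $-(3!\lambda)^2 N_{IJ} \partial_m b^I \partial^m b^J$ in the bulk, plus the boundary term. The sum is $-\frac{1}{2} (3!\lambda)^2 N_{IJ} \partial_m b^I \partial^m b^J$, which is the sign flip characteristic of dualization: the axionic kinetic term enters the dual action with the opposite sign relative to the tensor fields.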
The equations of motion for $\sigma^I$ and $B_{mn|I}$ are obtained by variation of (\ref{ScalarTensorLagrangian}): \begin{eqnarray} \partial^m \left( N_{IJ} \partial_m \sigma^J\right) &=& \frac{1}{2} \partial_I N_{JK} \partial_m \sigma^J \partial^m \sigma^K + \frac{1}{2 \cdot 3!} \partial_I N^{JK} H_{mnp|J} H^{mnp}_K \nonumber \\ \partial^m \left( N^{IJ} H_{mnp|J} \right) &=& 0 \;. \label{ScalarTensorEOM} \end{eqnarray} By construction, they are converted into (\ref{FullEOM}) by substituting in equation (\ref{Hdualb}). The action (\ref{ScalarTensorAction}) is positive definite, and we can obtain instanton solutions by applying the Bogomol'nyi trick, i.e. by rewriting the action as a sum of perfect squares, plus a remainder: \begin{eqnarray} S[\sigma, B] &=& \int d^4 x \Big[ \frac{1}{2} \left( \partial_m \sigma^I \mp \frac{1}{3!} N^{IJ} \epsilon_{mnpq} H^{npq}_J \right)^2 \nonumber \\ && \pm \frac{1}{3!} \partial_m \sigma^I \epsilon^{mnpq} H_{npq|I} \Big] \;. \end{eqnarray} Note that the last term is a total derivative as a consequence of the Bianchi identity for $H_{mnp|I}$. In contrast to the similar rewriting (\ref{BogoVar}) used in the previous section, this bulk term is not just an alternating sum of squares, but a single perfect square. Therefore equating the square to zero does not just give a saddle point, but a minimum of the action. The resulting equation \begin{equation} \label{DualInstantonAnsatz} \partial_m \sigma^I = \pm \frac{1}{3!} N^{IJ} \epsilon_{mnpq} H_J^{npq} \;, \end{equation} is the Hodge dual version of the extremal instanton ansatz (\ref{InstantonAnsatz}), as we see immediately using (\ref{Hdualb}). If the scalar metric $N_{IJ}$ admits a non-trivial $R$-matrix (\ref{RotIsometry}), we can impose a Hodge dual version of the generalized instanton ansatz (\ref{GenInstantonAnsatz}). As soon as we impose the (generalized) extremal instanton ansatz, the equations of motion (\ref{ScalarTensorEOM}) reduce to (\ref{ReducedEOM}).
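For completeness, we spell out the expansion of the square, with the contraction by $N_{IJ}$ understood in the notation above. Using the identity $\epsilon_{mnpq} \epsilon^{m}{}_{rst} H^{npq} H^{rst} = 3! \, H_{npq} H^{npq}$, which holds for totally antisymmetric $H$, one finds \[ \frac{1}{2} N_{IJ} \left( \partial_m \sigma^I \mp \frac{1}{3!} N^{IK} \epsilon_{mnpq} H^{npq}_K \right) \left( \partial^m \sigma^J \mp \frac{1}{3!} N^{JL} \epsilon^{m}{}_{rst} H^{rst}_L \right) = \frac{1}{2} N_{IJ} \partial_m \sigma^I \partial^m \sigma^J + \frac{1}{2 \cdot 3!} N^{KL} H_{npq|K} H^{npq}_L \mp \frac{1}{3!} \partial_m \sigma^I \epsilon^{mnpq} H_{npq|I} \;, \] so that adding back the total derivative term $\pm \frac{1}{3!} \partial_m \sigma^I \epsilon^{mnpq} H_{npq|I}$ cancels the cross term and reproduces the action (\ref{ScalarTensorAction}).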
Note that the dual instanton ansatz in combination with the Bianchi identity (\ref{BianchiH}) already implies the equations of motion (\ref{ReducedEOM}). The extremal instanton ansatz is similar to the (anti-)selfduality condition characteristic for Yang-Mills instantons. This interesting feature is less obvious when working with the purely scalar version (\ref{paraH-real}) of the theory. To compute the instanton action, we substitute the relation (\ref{DualInstantonAnsatz}) back into the action and obtain: \begin{equation} \label{InstAction} S_{\rm inst} = \int d^4x N_{IJ} \partial_m \sigma^I \partial^m \sigma^J \;. \end{equation} This is a boundary term, up to terms proportional to the equations of motion: \begin{equation} S_{\rm inst} = \oint d^3 \Sigma^m N_{IJ} \sigma^I \partial_m \sigma^J \;. \end{equation} Guided by the analogy to Yang-Mills instantons, we expect that this can be expressed in terms of charges. The $B$-field has an abelian gauge symmetry, $B_{mn} \rightarrow B_{mn} + 2 \partial_{[m} \Lambda_{n]}$, and one can define the associated electric and magnetic charges. For us the magnetic charges \[ Q_I = \frac{1}{3!} \oint d^3 \Sigma^m \epsilon_{mnpq} H^{npq}_I \] will be relevant. The normalization has been chosen such that they agree with the axionic charges (\ref{DefInstCharge}) when we substitute (\ref{Hdualb}). When evaluated on instanton solutions these charges take the form \[ Q_I = \oint d^3 \Sigma^m N_{IJ} \partial_m \sigma^J \;. \] Comparing this to the instanton action, we see that the instanton action takes the form \begin{equation} \label{InstActChg} S_{\rm Inst} = \sigma^I(\infty) Q_I \;, \end{equation} provided that the boundary terms corresponding to the localized $(-1)$-branes (i.e. the centers of the harmonic functions) do not contribute. We will investigate this assumption below.
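As a concrete check, consider a single-centered solution. Since $N_{IJ} \partial_m \sigma^J = \partial_m \sigma_I$ is the gradient of the dual coordinate, which is a harmonic function $\sigma_I = h_I + \frac{q_I}{r^2}$, the charge integral over a sphere $S^3_r$ of radius $r$ (with volume $2\pi^2 r^3$) gives \[ Q_I = \oint_{S^3_r} d^3 \Sigma^m \, \partial_m \sigma_I = 2 \pi^2 r^3 \, \partial_r \left( h_I + \frac{q_I}{r^2} \right) = - 4 \pi^2 q_I \;, \] independently of $r$, so that the magnetic charges are proportional to the coefficients of the harmonic functions; the overall sign and normalization depend on the orientation conventions entering (\ref{DefInstCharge}). The instanton action then receives the contribution $\sigma^I(\infty) Q_I$ from the boundary sphere at infinity, while the inner boundary around the center contributes a term proportional to $\sigma^I(0)$, which vanishes precisely when the scalars vanish at the center.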
The boundary term obtained by dualizing the $B$-fields into axions $b^I$ is \begin{equation} \label{BoundaryAction} S_{\rm bd} = \oint d^3 \Sigma^m b^I N_{IJ} \partial_m b^J = b^I(\infty) \tilde{Q}_I \;, \end{equation} where $\tilde{Q}_I = R_I^{\;\;J}Q_J$ and $\partial_m b^I = R^I_{\;\;J} \partial_m \sigma^J$. Thus the boundary action equals the instanton action when evaluated on instanton solutions, provided that $b^I(\infty) = R^I_{\;\;J} \sigma^J(\infty)$. Since the $b^I$ are only defined up to constant shifts, we can regard this as a choice of gauge. This observation suggests adding the boundary action to the scalar bulk action (\ref{paraH-real}), so that by evaluation on instanton solutions we obtain the same numerical values as for the scalar-tensor action. Above we have made the assumption that the instanton solution is regular at the centers, and that the centers do not contribute to the instanton action. However, the contribution of a single center to the instanton action is \[ \lim_{r\rightarrow 0} \oint_{S_r^3} d^3 \Sigma^m N_{IJ} \sigma^I \partial_m \sigma^J = \lim_{r\rightarrow 0} 2 \pi^2 r^3 N_{IJ} \sigma^I \partial_r \sigma^J \;. \] Since $N_{IJ} \partial_m \sigma^J$ is the derivative of a harmonic function, we know that close to a center \[ N_{IJ} \partial_r \sigma^J \sim \frac{1}{r^3} \;. \] To have a finite contribution to the instanton action we must require that the scalars $\sigma^I$ have finite limits at the centers. To obtain (\ref{InstActChg}) we need to impose the stronger condition that the scalars $\sigma^I$ vanish at the centers. The standard examples of scalar instantons which we have in mind, including the D-instanton, have this property. Moreover, for supersymmetric models we expect a relation of the form (\ref{InstActChg}) between the instanton action and a central charge of the supersymmetry algebra. Therefore we require that instanton solutions satisfy (\ref{InstActChg}).
Solutions which do not satisfy this condition should not be interpreted as proper instantons. \section{Finiteness of the instanton action and attractor behaviour. Examples} In this section we investigate the behaviour of instanton solutions. Our main interest is to find criteria which allow us to decide whether a given Hesse potential allows solutions with finite instanton action or not. This requires investigating the behaviour of solutions at the centers, which in turn tells us whether solutions exhibit attractor behaviour, meaning that the asymptotics at the centers is determined exclusively by the charges, and in particular is independent of the boundary condition imposed at infinity. The fixed point behaviour of extremal black holes is a prototypical example, but we will encounter a slightly different behaviour which, loosely speaking, corresponds to fixed points `at infinity'. Later we will see that for those solutions that lift to five-dimensional black holes this behaviour is nevertheless equivalent to the (five-dimensional) black hole attractor mechanism. To obtain these results, we will need to make some assumptions about the Hesse potential in order to control the asymptotic behaviour of solutions at the centers. Two types of Hesse potentials allow a complete analysis: homogeneous functions and logarithms of homogeneous functions. The second class corresponds to models which can be lifted to five dimensions in the presence of gravity. We will also use this section to present a variety of explicit solutions. \subsection{Hesse potential ${\cal V}= \sigma^{p}$} We start with models where the Hesse potential depends on a single scalar $\sigma$ and is homogeneous of degree $p=N+2$, i.e. ${\cal V} \sim \sigma^{N+2}$. Then the metric is proportional to $\sigma^N$, and the sigma model takes the form \[ S= \frac{1}{2} \int d^4 x \sigma^N (\partial_m \sigma \partial^m \sigma - \partial_m b \partial^m b) \;.
\] The case $N=0, p=2$ corresponds to a free theory. The case $N=3, p=1$ corresponds to Euclidean vector multiplets, obtained by temporal reduction of five-dimensional vector multiplets. We would like to include the case $N=-2$, which will turn out to be related to supergravity and, more generally, to models including gravity. Here the Hesse potential is not a homogeneous polynomial, but logarithmic, ${\cal V}=-\log \sigma$.\footnote{Logarithmic Hesse potentials will be investigated in detail in Sections 4.6 -- 4.8. In Section 6 we will present a modified formulation of the Hessian geometry of the target space, which is more suitable for this case.} For $N=-1$ the Hesse potential is the integral of the logarithm. By imposing the extremal instanton ansatz $\partial_m \sigma = \pm \partial_m b$, the equation of motion reduces to \[ \partial_m ( \sigma^N \partial^m \sigma) = 0 \] which is equivalent to \[ \Delta \sigma^{N+1} = 0 \;. \] In other words, $\sigma^{N+1}$ is the dual coordinate of $\sigma$, which is of course a special case of the relation (\ref{DualCoordinate}). Close to a center, the solution has the asymptotic form \[ \sigma^{N+1} \sim \frac{1}{r^2} \;, \] which implies that \[ \sigma \sim r^{\frac{-2}{N+1}} \;. \] Consequently \[ \begin{CD} \sigma @>>{r \rightarrow 0}> \left\{ \begin{array}{ll} 0 & \mbox{if } N<-1 \;,\\ \infty &\mbox{if } N>-1 \;. \\ \end{array} \right. \end{CD} \] Therefore a finite action of the form $S_{\rm Inst}=\sigma^I(\infty) Q_I$ is obtained for $N=-2,-3, \ldots$, i.e. for logarithmic prepotentials and for prepotentials which are homogeneous of negative degrees $p=-1,-2,\ldots$. For models with $N=0,1,2,3, \ldots$ (i.e. with prepotentials homogeneous of degree $p=2,3,\ldots$), the instanton action is infinite, due to contributions from the centers. Therefore these models do not possess proper (finite action) instanton solutions. 
This includes the case $N=1,p=3$, which corresponds to the temporal reduction of five-dimensional vector multiplets. The case $N=-1$, which is not covered by the above analysis, has to be treated separately. One finds that $\log \sigma$ is harmonic, and therefore the limit at a center is either zero or infinite, depending on the sign of the charge. \subsection{General homogeneous Hesse potentials} We now turn to Hesse potentials which depend on an arbitrary number of scalar fields, and are homogeneous of degree $p$. In this case the dual scalars \[ \sigma_I \simeq {\cal V}_I =\frac{\partial {\cal V}}{\partial \sigma^I} \] are homogeneous functions of degree $p-1$ of the scalars $\sigma^I$. Since $\Delta \sigma_I =0$, the dual scalars have the asymptotics $\sigma_I \sim r^{-2}$ at the centers, implying that \[ \sigma^I \sim r^{-2/(p-1)} \;. \] This is the natural generalization of the result obtained in the case of a single scalar: instanton solutions have a finite action of the form (\ref{InstActChg}) if the Hesse potential is homogeneous of degree $p\leq -1$. We will come back to the case of logarithmic prepotentials later. As the scalar fields always run off to either 0 or $\infty$ at the centers, we need to investigate whether these points are at finite or infinite `distance'. Since the scalar fields vary along isotropic submanifolds, the concept of distance has to be replaced by the concept of an affine curve parameter. It is sufficient to consider single-centered solutions, and therefore we have to investigate whether the point $r=0$ is at finite or infinite value of an affine parameter along the null geodesic corresponding to the solution.
In terms of the dual scalars the equation of motion is always $\Delta \sigma_I=0$, which, for single centered solutions, is the geodesic equation for a curve, with the radial variable $r$ as curve parameter: \[ \Delta \sigma_I = \frac{\partial^2 \sigma_I}{\partial r^2} + \frac{3}{r} \frac{\partial \sigma_I}{\partial r} = 0 \;. \] Passing to an affine curve parameter \[ \tau = \frac{A}{r^2} + B \] where $A \not= 0$ and $B$ are constants, we obtain the affine version of the geodesic equation. Irrespective of the choice of affine parameter, we find that \[ \begin{CD} \lim \tau(r) @>>{r\rightarrow 0}> \infty \;, \end{CD} \] which shows that the point $r=0$ is at infinite affine parameter. Therefore the scalars always run away to limit points at `infinite distance' on the scalar manifold. This is different from the fixed point behaviour observed for extremal black holes, where the scalars approach interior points of the scalar manifold, which are determined by the charges through the black hole attractor equations. However, for homogeneous prepotentials the run-away behaviour is not generic and shows features resembling fixed point behaviour. If we consider ratios of scalar fields, then the limits at the centers are finite and depend only on the charges: \[ \frac{\sigma_I}{\sigma_J} \rightarrow \frac{q_I}{q_J} \;. \] Thus at least the ratios show fixed point behaviour. The asymptotic behaviour of the scalars at the centers can be represented alternatively by performing a (singular) rescaling, which brings the limit points to finite parameter values. One possible rescaling is to simply rescale the scalars according to \[ \hat{\sigma}_I := r^2 \sigma_I \;. \] Then the new scalars $\hat{\sigma}_I$ show proper fixed point behaviour $\hat{\sigma}_I \rightarrow q_I$. A more intrinsic way of performing a rescaling is to divide the dual scalars $\sigma_I$ by a homogeneous function of the scalars, which is chosen such that the new scalar fields are homogeneous of degree zero.
The natural way of achieving this is to take the appropriate power of the Hesse potential: \[ \begin{CD} \tilde{\sigma}_I = \frac{\sigma_I}{{\cal V}(\sigma)^{(p-1)/p}} @>>{r \rightarrow 0}> \mbox{finite}\;, \end{CD} \] because \[ \sigma_I \sim \frac{1}{r^2} \;,\;\;\; \sigma^I \sim \left(\frac{1}{r^2}\right)^{1/(p-1)} \;,\;\;\; {\cal V}(\sigma) \sim \left(\frac{1}{r^2}\right)^{p/(p-1)} \;,\;\;\; {\cal V}(\sigma)^{(p-1)/p} \sim \frac{1}{r^2}\;. \] These rescalings have no immediate physical meaning, but are convenient for visualizing solutions. However, for models with logarithmic Hesse potentials the rescaling acquires a physical meaning once we couple the model to gravity, as we will see in Section 6. Although we have not yet discussed examples with logarithmic Hesse potential, it is clear by inspection that the above analysis remains valid for the corresponding values $N=-2$ and $p=0$. \subsection{Hesse potential ${\cal V} = \frac{1}{6} C_{IJK} \sigma^I \sigma^J \sigma^K$} If we construct models by temporal reduction of rigidly supersymmetric five-dimensional vector multiplets, then the most general Hesse potential is a cubic polynomial \cite{EucI}. Since constant and linear terms do not enter into the metric, while quadratic terms only give a constant contribution to the scalar metric, we can restrict ourselves to homogeneous cubic polynomials \[ {\cal V}(\sigma) = \frac{1}{6} C_{IJK} \sigma^I \sigma^J \sigma^K \;. \] The corresponding metric is\footnote{We use a notation where ${\cal V}_I = \frac{\partial {\cal V}}{\partial \sigma^I}$, ${\cal V}_{IJ} = \frac{\partial^2 {\cal V}}{\partial \sigma^I \partial \sigma^J}$, etc.} \[ N_{IJ} = {\cal V}_{IJ} = C_{IJK} \sigma^K \;. \] The dual coordinates $\sigma_I$, for which the equations of motion reduce to $\Delta \sigma_I =0$, are normalized according to \[ \sigma_I = \frac{1}{3} {\cal V}_I= \frac{1}{6} C_{IJK} \sigma^J \sigma^K = \frac{1}{6} N_{IJ} \sigma^J \;. \] With this normalization \[ \sigma_I \sigma^I = {\cal V}(\sigma) \;.
\] In terms of the dual coordinates, single and multi-centered solutions take the form \[ \sigma_I = h_I + \frac{q_I}{r^2} \] and \[ \sigma_I = h_I + \sum_{a=1}^n \frac{q_{Ia}}{|x-x_a|^2} \] respectively. In general, we cannot find an explicit expression for $\sigma^I$ in terms of $\sigma_I$ and, hence, in terms of the harmonic functions. \subsubsection*{Hesse potential ${\cal V}=\sigma^1 \sigma^2 \sigma^3$} We now consider a special case where one can obtain explicit expressions for the $\sigma^I$. This model is closely related to the so-called STU-model. The Hesse potential is \[ {\cal V} = \sigma^1 \sigma^2 \sigma^3 \;, \] and the dual coordinates are chosen\footnote{For convenience, we have changed the normalization of the $\sigma_I$ compared to the case of a general cubic Hesse potential.} \[ \sigma_1 = \sigma^2 \sigma^3 \;,\;\;\; \sigma_2 = \sigma^3 \sigma^1 \;,\;\;\; \sigma_3 = \sigma^1 \sigma^2 \;. \] In terms of dual coordinates, the solution is \[ \sigma_I = H_I \] where $H_I$, $I=1,2,3$ are harmonic functions. In this case we can solve explicitly for the $\sigma^I$: \[ \sigma^1 = \sqrt{\frac{\sigma_2 \sigma_3}{\sigma_1}} = \sqrt{\frac{H_2 H_3}{H_1}} \;, \] with similar expressions for $\sigma^2, \sigma^3$ obtained by cyclic permutations. Here we see explicitly that the fields $\sigma^I$ diverge like $\frac{1}{r}$ for $r \rightarrow 0$, while their ratios are finite and only depend on the charges: \[ \frac{\sigma^1}{\sigma^2} = \frac{H_2}{H_1} \rightarrow \frac{q_2}{q_1} \;. \] \subsection{Hesse potential ${\cal V} = \frac{1}{4!} C_{IJKL} \sigma^I \sigma^J \sigma^K \sigma^L$} The next example is similar, but not extendable to a supersymmetric model. We take a general quartic Hesse potential \[ {\cal V} = \frac{1}{4!} C_{IJKL} \sigma^I \sigma^J \sigma^K \sigma^L\;. \] The corresponding sigma model is still para-K\"ahler, but not special para-K\"ahler because the para-K\"ahler potential does not have a para-holomorphic prepotential. 
As a shortcut, we observe that the corresponding Euclidean sigma model lifts to a five-dimensional field theory whose couplings are encoded by a quartic Hesse potential. However, five-dimensional supersymmetry requires a Hesse potential which is at most cubic. The corresponding metric is \[ N_{IJ} = \frac{1}{2} C_{IJKL} \sigma^K \sigma^L \;, \] and dual coordinates are given by \[ \sigma_I = \frac{1}{4!} C_{IJKL} \sigma^J \sigma^K \sigma^L = \frac{1}{4} {\cal V}_I \;. \] The solution is given in terms of harmonic functions by $\sigma_I = H_I$. While we cannot solve for the $\sigma^I$ explicitly, homogeneity implies that $\sigma^I \sim r^{-2/3}$ for $r\rightarrow 0$, and that the ratios $\frac{\sigma_I}{\sigma_J}$ and $\frac{\sigma^I}{\sigma^J}$ have finite limits. Explicit solutions can be obtained for sufficiently simple choices of a quartic Hesse potential, for example \[ {\cal V} = \sigma^1 \sigma^2 \sigma^3 \sigma^4 \;. \] Normalizing the dual coordinates such that \[ \sigma_1 = \sigma^2 \sigma^3 \sigma^4 \;,\ldots \] the solution is \[ \sigma^1 = \left( \frac{\sigma_2 \sigma_3 \sigma_4}{(\sigma_1)^2} \right)^{1/3} = \left( \frac{H_2 H_3 H_4}{H_1^2} \right)^{1/3} \;, \ldots \] with similar expressions for the other $\sigma^I$ obtained by cyclic permutations. \subsection{Hesse potential ${\cal V}=-\log(\sigma)$} In the following sections we discuss models with logarithmic Hesse potentials. As we will see in Section 6 these are the models which can be lifted to five-dimensional Einstein-Maxwell type theories. We will study some aspects already here, because these models can also be lifted to five dimensions without coupling to gravity. We start with a logarithmic Hesse potential depending on a single scalar, \[ {\cal V} = - \log \sigma \] where $\sigma >0$. The resulting Hessian metric is \[ {\cal V}'' = \frac{1}{\sigma^2} \;. \] We have already seen that this model is in the class where the instanton action has the form (\ref{InstActChg}).
The dual coordinate is proportional to ${\cal V}'$, and we normalize it to be $\frac{1}{\sigma}$. The reduced equation of motion is \[ \Delta \frac{1}{\sigma} = 0 \;, \] which is solved by \[ \sigma = \frac{1}{H} \;, \] where $H$ is a harmonic function. Considering a single centered solution, \[ \sigma = \frac{1}{h+\frac{q}{r^2}} \;, \] we can see explicitly how $\sigma$ behaves for $r\rightarrow \infty$ and $r\rightarrow 0$: \[ \begin{CD} \sigma @>>{r \rightarrow \infty}> \frac{1}{h} \;,\;\;\; \sigma @>>{r \rightarrow 0}> 0 \;. \end{CD} \] This illustrates our general result, and we can see explicitly that the action is finite. The target space corresponding to this model is the symmetric space $SL(2,\mathbbm{R})/SO(1,1)$, which is also known as $AdS_2$. The action, expressed in terms of the scalars $\sigma$ and $b$, is \[ S = \int d^4x \frac{1}{\sigma^2} ( \partial_m \sigma \partial^m \sigma - \partial_m b \partial^m b) \;. \] In terms of the para-complex coordinates $X=\sigma + e b$ this becomes \[ S = \int d^4x \frac{\partial_m X \partial^m \bar{X}}{ (\mbox{Re}(X))^2 } \;, \] which makes explicit that the target space is a para-K\"ahler manifold with para-K\"ahler potential \[ K = - \log (X+\bar{X}) \;. \] By the analytic continuation $b \rightarrow ib$ we obtain the upper half plane, equipped with the Poincar\'e metric, ${\cal H} \cong \frac{SL(2,\mathbbm{R})}{SO(2)}$.\footnote{Various coordinate systems for the two symmetric spaces in question can be found, for example, in \cite{Gilmore}.} \subsection{Hesse potential ${\cal V}=-\log(\sigma^1 \sigma^2 \sigma^3 )$} Another model, which turns out to be the Euclidean version of the well-known STU-model, is obtained by taking three copies of the previous model. The Hesse potential is \[ {\cal V} = - \log (\sigma^1 \sigma^2 \sigma^3) = -\log \sigma^1 -\log \sigma^2 -\log \sigma^3 \;.
\] The corresponding target space is the product of three copies of $SL(2,\mathbbm{R})/SO(1,1)$, which is para-K\"ahler with para-K\"ahler potential \[ K = -\log \left( (X^1+\bar{X}^1)(X^2+\bar{X}^2)(X^3+\bar{X}^3) \right) \;, \] where $X^I = \sigma^I + e b^I$. This target space is in fact even projective special para-K\"ahler, with para-holomorphic prepotential $F= - \frac{X^1 X^2 X^3}{X^0}$, as it must be for Euclidean vector multiplets coupled to supergravity \cite{EucIII}. The dual coordinates can be normalized to be \[ \sigma_I = \frac{1}{\sigma^I} \;, \] so that explicit solutions for the $\sigma^I$ can be found: \[ \sigma^I = \frac{1}{H_I} \;. \] We will see later that this solution can be lifted to a five-dimensional extremal black hole solution of five-dimensional supergravity. \subsection{Hesse potential ${\cal V} = - \log \hat{\cal V}(\sigma)$, with homogeneous $\hat{\cal V}(\sigma)$} Finally, we consider the general case of a Hesse potential which is the logarithm of a homogeneous function $\hat{\cal V}(\sigma)$ (of arbitrary degree): \[ {\cal V}(\sigma^I) = - \log \hat{\cal V}(\sigma^I) \] where \[ \hat{\cal V}(\lambda \sigma^I) = \lambda^p \hat{\cal V}(\sigma^I) \; \] with integer $p$. Then the Hesse potential is not strictly a homogeneous function, but it is homogeneous of degree zero up to a constant shift. However, the first derivatives \[ \sigma_I \simeq \frac{\partial \cal V}{\partial \sigma^I} \] are homogeneous of degree $-1$, and the metric, which is given by the second derivatives, \[ N_{IJ} = \frac{\partial^2 \cal V}{\partial \sigma^I \partial \sigma^J} \] is homogeneous of degree $-2$. This corresponds to the case $N=-2$ discussed in Sections 4.1 and 4.2, and the results derived there apply (setting $N=-2$ and $p=0$ in the relevant formulae).
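These homogeneity properties are easily confirmed symbolically. A minimal SymPy sketch (the sample prepotential $\hat{\cal V}=\sigma_1^2\sigma_2+\sigma_1\sigma_2^2$, of degree $p=3$, is our arbitrary choice, not taken from the text):

```python
import sympy as sp

s1, s2, lam = sp.symbols('s1 s2 lam', positive=True)
Vhat = s1**2*s2 + s1*s2**2      # sample homogeneous prepotential, degree p = 3
V = -sp.log(Vhat)               # Hesse potential

grad = [sp.diff(V, v) for v in (s1, s2)]
N = sp.hessian(V, (s1, s2))
scale = [(s1, lam*s1), (s2, lam*s2)]

# dual coordinates ~ dV/dsigma^I are homogeneous of degree -1 ...
deg1 = all(sp.simplify(g.subs(scale, simultaneous=True) - g/lam) == 0 for g in grad)
# ... and the metric N_IJ is homogeneous of degree -2
deg2 = sp.simplify(N.subs(scale, simultaneous=True) - N/lam**2) == sp.zeros(2, 2)
assert deg1 and deg2
```

Note that the check is independent of the constant shift $-p\log\lambda$ in ${\cal V}$ itself, which drops out of all derivatives.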
In particular the instanton action is of the form (\ref{InstActChg}), and the solutions show a version of fixed point behaviour where the scalars run off to a point at infinite affine parameter while the ratios approach finite values determined by the charges. \section{Lifting to five dimensions, without gravity} In this section we discuss the lifting of instanton solutions to five-dimensional solitons in the absence of gravity. Here no constraints need to be imposed on the Hesse potential. We show that the mass of the soliton obtained by lifting is equal to the instanton action. The para-Hermitean Euclidean sigma model (\ref{paraH-real}) can be lifted to a five-dimensional theory of scalars and gauge fields: \begin{equation} \label{5dRigidAction} S[\sigma, A_\mu] = \int d^5 x \left( - \frac{1}{2} N_{IJ}(\sigma) \partial_\mu \sigma^I \partial^\mu \sigma^J - \frac{1}{4} N_{IJ}(\sigma) F^I_{\mu \nu} F^{\mu \nu|J} + \cdots \right) \;. \end{equation} Here $\mu, \nu=0,1,\cdots, 4$ are five-dimensional Lorentz indices, and the four-dimensional axions have been identified with the time components of the five-dimensional gauge fields \[ b^I = - A^I_0 \;. \] To obtain a covariant theory, we have added the magnetic components $F_{mn}^I$, $m,n=1,\ldots,4$ of the five-dimensional field strength. We also allow further terms, as long as they do not contribute to the four-dimensional sigma model obtained by reduction over time. It is straightforward to verify that the five-dimensional action (\ref{5dRigidAction}) reduces to the para-Hermitean sigma model (\ref{paraH-real}) upon restricting to static and purely electric field configurations, and reducing with respect to time. Thus instanton solutions of (\ref{paraH-real}) lift to electrically charged solitons of (\ref{5dRigidAction}). The full field equations of the five-dimensional theory have the following form.
The equation of motion for the scalars $\sigma^I$ is \[ N_{KJ} \Box \sigma^J + \frac{1}{2} \partial_K N_{IJ} \partial_\mu \sigma^I \partial^\mu \sigma^J = \frac{1}{4} \partial_K N_{IJ} F^{I}_{\mu \nu} F^{\mu \nu|J} \;, \] and the equation of motion of the five-dimensional gauge fields is \[ \partial_\mu ( N_{IJ} F^{\mu \nu|J}) =0 \;. \] If we impose that the solution is static and does not carry magnetic charge, then all time-derivatives vanish and the only non-vanishing field strength components can be expressed in terms of the electrostatic potentials $A_0^I$: \[ F^I_{0m} = - F^I_{m0} = - \partial_m A_0^I = \partial_m b^I\;. \] In such backgrounds the equations of motion take the following form: \begin{eqnarray} N_{KJ} \Delta \sigma^J + \frac{1}{2} \partial_K N_{IJ} \partial_m \sigma^I \partial^m \sigma^J &=& \frac{1}{2} \partial_K N_{IJ} F^I_{0m} F^{0m|J} \nonumber \\ \partial_m ( N_{IJ} \partial^m A^J_0 ) &=& 0 \;. \end{eqnarray} Expressing $F^I_{0m}$ and $A^I_0$ in terms of $b^I$, we see that these equations of motion are identical to (\ref{FullEOM}). The extremal instanton ansatz corresponds to imposing \[ \partial_m \sigma^I = \pm F_{0m}^I \] which means that the scalars $\sigma^I$ are proportional to the electrostatic potentials. For five-dimensional vector multiplets, this is the condition for a BPS solution supported by scalars and electric fields. Imposing the extremal instanton ansatz we therefore obtain the reduced equations of motion \[ \partial_m (N_{IJ} \partial^m \sigma^J ) =0 \;, \] which is identical to (\ref{ReducedEOM}). The four-dimensional instanton charges equal the five-dimensional electric charges, which are defined by \[ Q_I = \oint d^3 \Sigma^m N_{IJ} F^J_{0m} = \oint d^3 \Sigma^m N_{IJ} \partial_m b^J \;. \] From the five-dimensional point of view the method used in Section 2.3 to solve the equations of motion is a standard method for solving Maxwell-type equations in an electrostatic background.
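For the single-scalar model ${\cal V}=-\log\sigma$ discussed earlier, the reduced equation of motion can be checked directly on the single-centered candidate $\sigma=1/H$. A SymPy sketch (not part of the original text):

```python
import sympy as sp

x = sp.symbols('x1:5', real=True)             # coordinates on R^4
h, q = sp.symbols('h q', positive=True)
r2 = sum(xi**2 for xi in x)

H = h + q/r2                                   # harmonic in four dimensions
lap_H = sp.simplify(sum(sp.diff(H, xi, 2) for xi in x))

sigma = 1/H                                    # candidate solution sigma = 1/H
N = 1/sigma**2                                 # Hessian metric of V = -log(sigma)
# reduced equation of motion: d_m ( N d^m sigma ) = 0
eom = sp.simplify(sum(sp.diff(N*sp.diff(sigma, xi), xi) for xi in x))

assert lap_H == 0 and eom == 0
```

The computation makes explicit why the dual coordinate is the natural variable: $N\,\partial_m\sigma = -\partial_m H$, so the reduced equation is just the Laplace equation for $H$.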
We expect that the four-dimensional instanton action is related to the five-dimensional mass. The mass of the soliton is obtained by integrating the energy density, which is the component $T_{00}$ of the energy momentum tensor $T_{\mu \nu}$, over space. We use the symmetric energy momentum tensor which is obtained by coupling the action (\ref{5dRigidAction}) to a background metric and varying it. The result is \begin{eqnarray} T_{\mu \nu} &=& N_{IJ} \partial_\mu \sigma^I \partial_\nu \sigma^J - \frac{1}{2} N_{IJ} \eta_{\mu \nu} \partial_\rho \sigma^I \partial^\rho \sigma^J \nonumber \\ &&+ N_{IJ} F^I_{\mu \rho} F_{\nu}^{\;\;\rho|J} - \frac{1}{4} N_{IJ} \eta_{\mu \nu} F^I_{\rho \sigma} F^{\rho \sigma|J} \;. \end{eqnarray} In a static, purely electric background, the resulting energy density is \[ T_{00} = \frac{1}{2} N_{IJ} \partial_m \sigma^I \partial^m \sigma^J + \frac{1}{2} N_{IJ} \delta^{mn} F_{0m}^I F_{0n}^J \;. \] For solutions where $F^I_{0m}=\pm \partial_m \sigma^I$, this becomes \[ T_{00} = N_{IJ} \partial_m \sigma^I \partial^m \sigma^J \;. \] The integral expression for the soliton mass $M$ agrees with the instanton action (\ref{InstAction}) and the boundary action (\ref{BoundaryAction}): \[ M=\int d^4 x T_{00} = S_{\rm inst.} = S_{\rm bound.} \;. \] Our previous discussion of fixed point behaviour of the scalars $\sigma^I$ remains valid, because there is no difference between the four- and five-dimensional scalars. In particular the soliton mass is finite if the $\sigma^I$ approach finite values, and it is given in terms of the five-dimensional electric charges as \[ M=\sigma^I(\infty) Q_I \;, \] if the scalars go to zero at the centers. Models where the Hesse potential is homogeneous of positive degree do not have proper, i.e. finite mass, solitons of the type considered. This includes rigid five-dimensional vector multiplets.
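The relation $M=\sigma(\infty)\,Q$ can be verified by explicit radial integration for the single-scalar model ${\cal V}=-\log\sigma$. A SymPy sketch (our normalizations; $2\pi^2$ is the volume of the unit $S^3$):

```python
import sympy as sp

r = sp.symbols('r', positive=True)
h, q = sp.symbols('h q', positive=True)

H = h + q/r**2
sigma = 1/H                     # solution of the single-scalar model V = -log(sigma)
N = 1/sigma**2
vol_S3 = 2*sp.pi**2

# T_00 = N (d sigma/dr)^2 for the extremal solution F_{0m} = d_m sigma
T00 = N*sp.diff(sigma, r)**2
M = vol_S3*sp.integrate(sp.simplify(T00*r**3), (r, 0, sp.oo))

# electric charge as the flux of N d_m sigma through the sphere at infinity
Q = sp.limit(vol_S3*r**3*N*sp.diff(sigma, r), r, sp.oo)
sigma_inf = sp.limit(sigma, r, sp.oo)

assert sp.simplify(M - sigma_inf*Q) == 0
```

In this example the integral converges because $\sigma\rightarrow 0$ at the center; for a Hesse potential homogeneous of positive degree the analogous integral diverges, illustrating the statement above.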
If the degree of homogeneity is negative, or if the Hesse potential is the logarithm of a homogeneous function, then solitons with finite mass do exist. \section{Lifting to five dimensions, with gravity} \subsection{Dimensional lifting and dimensional reduction} We now turn to the lifting of four-dimensional instantons to five-dimensional black holes. In the presence of gravity the relation between the five-dimensional and four-dimensional actions becomes more complicated. As a first step we would like to identify the class of five-dimensional Einstein-Maxwell type actions which reduce to actions of the form (\ref{SigmaR}) by dimensional reduction with respect to time. To be precise we allow additional terms in both actions, as long as (\ref{SigmaR}) is a consistent reduction, i.e., as long as solutions of (\ref{SigmaR}) are solutions of the five-dimensional theory. The main new feature in the presence of gravity is that the decomposition of the five-dimensional metric gives rise to a Kaluza-Klein scalar and Kaluza-Klein gauge field. The Kaluza-Klein gauge field can be set to zero consistently. At the level of five-dimensional solutions this means that we restrict ourselves to solutions which are not only stationary, but static, i.e. we exclude rotating solutions. However the Kaluza-Klein scalar provides a complication because it needs to be incorporated into the four-dimensional scalar sigma model.\footnote{As will be explicit from the solutions discussed later, freezing the Kaluza-Klein scalar is not an option, since this would only leave us with trivial solutions.} One class of examples where one obtains Euclidean actions of the type (\ref{SigmaR}) is the temporal reduction of five-dimensional supergravity coupled to vector multiplets \cite{EucIII}. We will adopt the strategy of generalizing this class while keeping the relevant feature that temporal reduction gives rise to a para-K\"ahler sigma model. 
As far as the relation between five-dimensional and four-dimensional actions is concerned, the analysis can be carried out in parallel for spatial and temporal reduction. For concreteness we will take the case of temporal reduction, but the other case is simply obtained by flipping signs in the Lagrangian, as discussed in more detail in \cite{EucIII}. The geometry underlying five-dimensional supergravity with vector multiplets is the local (or projective) version of very special real geometry \cite{GST}. This is a type of Hessian geometry, where the Hesse potential ${\cal V} = - \log \hat{\cal V}$ is the logarithm of a homogeneous cubic polynomial $\hat{\cal V}$, which is called the prepotential. To be precise, the metric obtained from this Hesse potential gives the coupling matrix of the gauge fields, while the scalar metric is its pull-back to the hypersurface $\hat{\cal V}=1$. This reflects that the supergravity theory has one scalar field less than it has gauge fields. A five-dimensional vector multiplet contains a gauge field and a real scalar, but the gravity multiplet contains an additional gauge field, the graviphoton. Upon dimensional reduction each gauge field gives rise to an axionic scalar, which can be combined with the five-dimensional scalars and the Kaluza-Klein scalar into a sigma model of the type (\ref{SigmaR}). Thus it is important to have one additional gauge field in five dimensions. The other critical feature is that the metric of the five-dimensional scalar sigma model is homogeneous of degree $-2$ in the scalar fields. As we will see later this is crucial for combining the Kaluza-Klein scalar with the five-dimensional scalars in such a way that we obtain a sigma model of the form (\ref{SigmaR}). As we have seen in Section 4, a Hessian metric is homogeneous of degree $-2$ if its Hesse potential ${\cal V}=-\log \hat{\cal V}$ is the logarithm of a homogeneous function $\hat{\cal V}$, irrespective of the degree of homogeneity.
Therefore we will generalize the local very special real geometry of supergravity by dropping the requirement that the prepotential $\hat{\cal V}$ is a homogeneous cubic polynomial, while still requiring that it is a homogeneous function of degree $p$, where $p$ is now arbitrary. Dimensional reduction of five-dimensional supergravity with vector multiplets with respect to space results in target space geometries which are projective special K\"ahler \cite{GST}. The map between the target geometries of five-dimensional and four-dimensional vector multiplets is the $r$-map \cite{dWvP:1992}, which we will call the local (or projective) $r$-map, to distinguish it from its rigid (or global) counterpart. If one reduces over time, one obtains projective special para-K\"ahler manifolds, and the corresponding map is called the local (or projective) para-$r$-map \cite{EucIII}. The following construction provides a generalization of both the local $r$-map and local para-$r$-map. For concreteness we will give explicit expressions for the para-$r$-map, and explain in the end how the $r$-map is obtained by analytic continuation. The construction starts with $n+1$ scalar fields $h=(h^I)=(h^0,h^1, \ldots, h^n)$, which we interpret as affine coordinates on an $(n+1)$-dimensional Hessian manifold $\tilde{M}_r$. We work locally and take $\tilde{M}_r$ to be an open domain in $\mathbbm{R}^{n+1}$. The Hesse potential for this manifold (which will be the prepotential for the actual scalar manifold $M_r$) is ${\cal V}(h) = - \log {\hat {\cal V}}(h)$, where the prepotential $\hat{\cal V}(h)$ is homogeneous of degree $p$: \begin{equation} \hat{\mathcal{V}}(\lambda h^0,\ldots,\lambda h^n)= \lambda^p \hat{\mathcal{V}}(h^0,h^1,\ldots, h^n)\;. \end{equation} Taking the derivative with respect to $\lambda$ we obtain \begin{eqnarray} \hat{\mathcal{V}}_I(\lambda h)h^I=p\lambda^{p-1}\hat{\mathcal{V}}(h) \;, \end{eqnarray} where the subscript $I$ denotes differentiation with respect to $h^I$.
By setting $\lambda=1$ we obtain \begin{equation} \hat{\mathcal{V}}_Ih^I=p\hat{\mathcal{V}}(h) \;.\label{Euler} \end{equation} Further differentiation implies that \begin{equation} \hat{\mathcal{V}}_{IJ}h^I=(p-1)\hat{\mathcal{V}}_J \label{Euler2}\;. \end{equation} The logarithm of $\hat{\cal V}(h)$ is used to define a Hessian metric by \[ a_{IJ}(h) = - \frac{1}{p} \frac{\partial^2 \log \hat{\cal V}(h)}{\partial h^I \partial h^J} = - \frac{1}{p} \left( \frac{ \hat{\cal V}_{IJ}}{\hat{\cal V}} - \frac{ \hat{\cal V}_I \hat{\cal V}_J}{ \hat{\cal V}^2} \right)\;. \] A conventional factor $\frac{1}{p}$ has been introduced in order to be consistent with supergravity conventions for $p=3$. The metric is homogeneous of degree $-2$ in the $h^I$. In order to ensure that the metric $a_{IJ}(h)$ is positive definite, we might need to restrict the fields $h=(h^I)$ to a suitable domain $D \subset \mathbbm{R}^{n+1}$. The scalar target manifold $M_r$ of the model is the hypersurface $\{h^I | \hat{\cal V}(h) =1\}$ of $D$, equipped with the pull-back metric. \[ a_{xy}(\phi) = \frac{\partial h^I}{\partial \phi^x} \frac{\partial h^J}{\partial \phi^y} a_{IJ}(h(\phi)) \;. \] The physical scalars $\phi^x$, $x=1,\ldots, n$ provide local coordinates on the hypersurface $\{h^I| \hat{\cal V}=1 \} \subset D$. In the following it will be convenient to work with the fields $h^I$, which are subject to the constraint $\hat{\cal V}(h)=1$, and with the associated Hessian metric $a_{IJ}(h)$. We will need a few relations involving $a_{IJ}(h)$. First note that (\ref{Euler}) and (\ref{Euler2}) can be used to show that \begin{eqnarray} a_{IJ}(h)h^Ih^J&=& - \frac{1}{p}\partial_I\partial_J\log \hat{\mathcal{V}}(h)h^Ih^J = - \frac{1}{p}\left(\frac{\hat{\mathcal{V}}_{IJ}}{\hat{\mathcal{V}}} - \frac{\hat{\mathcal{V}}_I \hat{\mathcal{V}}_J} {\hat{\mathcal{V}}^2}\right)h^Ih^J \nonumber \\ &=& \frac{ \hat{\mathcal{V}}_Jh^J}{p\hat{\mathcal{V}}} =1 \;. 
\label{eq:Vrelation} \end{eqnarray} Differentiation of the constraint $\hat{\cal V}(h)=1$ with respect to space-time implies \begin{equation} \hat{\mathcal{V}}_I\partial_\mu h^I = 0 \;, \end{equation} where $\mu=0,\ldots, 4$ are five-dimensional space-time indices. Combining this with (\ref{eq:Vrelation}) we obtain \begin{equation} a_{IJ}h^I\partial_\mu h^J= \frac{1}{p}\frac{\hat{\mathcal{V}}_J}{\hat{\mathcal{V}}} \partial_\mu h^J=0 \;. \label{eq:Vrelation2} \end{equation} We now use the prepotential $\hat{\cal V}(h)$ to define the following five-dimensional bosonic Lagrangian: \begin{equation} \label{5dLagrangian} \hat{e}^{-1}\hat{\mathcal{L}}= \frac{\hat{R}}{2}-\frac{3}{4}a_{IJ}(h) \partial_\mu h^I\partial^\mu h^J - \frac{1}{4}a_{IJ}(h) F_{\mu\nu}^I F^{\mu\nu J} + \cdots \;. \end{equation} Here $\hat{R}$ is the five-dimensional Ricci scalar, $\hat{e}$ is the determinant of the local frame (`f\"unfbein'), $a_{IJ}(h)$ is the Hessian metric defined above, and for the scalar term the constraint $\hat{\cal V}(h)=1$ is understood. As indicated, the Lagrangian might contain further terms, provided that these do not contribute to the four-dimensional Euclidean sigma model obtained by reduction with respect to time. For $p=3$, (\ref{5dLagrangian}) is part of the Lagrangian of five-dimensional supergravity coupled to $n$ vector multiplets. The full supergravity Lagrangian also contains a Chern-Simons term and fermionic terms, which, however, do not contribute to the four-dimensional sigma model upon reduction. We now reduce the Lagrangian (\ref{5dLagrangian}) with respect to time.\footnote{We refer to \cite{EucIII} for a more detailed discussion of dimensional reduction.} The reduction of the metric is carried out in such a way that the resulting four-dimensional Einstein-Hilbert term has the canonical form, i.e., we reduce from the five-dimensional Einstein frame to the four-dimensional Einstein frame.
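The relation (\ref{eq:Vrelation}) and the degree $-2$ homogeneity of $a_{IJ}$ can be confirmed symbolically before proceeding. A SymPy sketch (the cubic sample $\hat{\cal V}=h^1h^2h^3$, $p=3$, is chosen purely for illustration):

```python
import sympy as sp

h1, h2, h3, lam = sp.symbols('h1 h2 h3 lam', positive=True)
hv = sp.Matrix([h1, h2, h3])
p = 3
Vhat = h1*h2*h3                                   # sample prepotential, degree p = 3

# a_IJ = -(1/p) d^2 log(Vhat) / dh^I dh^J
a = -sp.Rational(1, p)*sp.hessian(sp.log(Vhat), [h1, h2, h3])

# a_IJ h^I h^J = 1, independently of the constraint Vhat = 1
contraction = sp.simplify((hv.T*a*hv)[0, 0])

# a_IJ(lam h) = lam^-2 a_IJ(h)
scaled = a.subs([(h1, lam*h1), (h2, lam*h2), (h3, lam*h3)], simultaneous=True)
homog = sp.simplify(scaled - a/lam**2)

assert contraction == 1 and homog == sp.zeros(3, 3)
```

The contraction identity holds for any homogeneous $\hat{\cal V}$ by the Euler relations, which is why it can be used off the constraint surface.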
The corresponding parametrization of the line element is \[ ds^2_{(5)} = - e^{2\tilde{\sigma}} (dt + {\cal A}_m dx^m)^2 + e^{-\tilde{\sigma}} ds^2_{(4)} \;, \] where $\tilde{\sigma}$ is the Kaluza-Klein scalar and ${\cal A}_m$ is the Kaluza-Klein vector. Upon dimensional reduction over time, the zero components $\mathcal{A}^I_0$ of the five-dimensional gauge fields become four-dimensional scalar fields $m^I= \mathcal{A}^I_0$. In four dimensions, we only keep the Einstein-Hilbert term and the scalar terms. This is a consistent truncation and corresponds to the restriction to five-dimensional field configurations which are static and purely electric. The relevant part of the reduced Lagrangian is \begin{equation} e^{-1} \mathcal{L}=\frac{R}{2}-\frac{3}{4}\partial_m\tilde{\sigma} \partial^m\tilde{\sigma} - \frac{3}{4}a_{IJ}(h)\partial_m h^I \partial^m h^J + \frac{1}{2} e^{-2\tilde{\sigma}} a_{IJ}(h)\partial_m m^I \partial^m m^J \;, \end{equation} where $m=1,\ldots, 4$ are indices in four-dimensional space, $R$ is the four-dimensional Ricci scalar, and $e$ is the determinant of the four-dimensional local frame (`vierbein'). By making the redefinitions \begin{eqnarray} h^I&=&Ae^{-\tilde{\sigma}}\sigma^I\;, \\ m^I&=&B b^I \;, \end{eqnarray} where $A,B$ are constants to be fixed later, the Lagrangian takes on the form \begin{eqnarray} e^{-1}\mathcal{L} &=& \frac{R}{2} -\frac{3}{4}\partial_m\tilde{\sigma}\partial^m\tilde{\sigma} - \frac{3}{4}a_{IJ}(e^{-\tilde{\sigma}}\sigma)\sigma^I \sigma^J \partial_m e^{-\tilde{\sigma}}\partial^m e^{-\tilde{\sigma}} \nonumber \\ && - \frac{3}{4}a_{IJ}(e^{-\tilde{\sigma}}\sigma)e^{-2\tilde{\sigma}} \partial_m \sigma^I\partial^m \sigma^J -\frac{3}{2}a_{IJ}(e^{-\tilde{\sigma}}\sigma)e^{-\tilde{\sigma}} \sigma^I \partial_m e^{-\tilde{\sigma}}\partial^m \sigma^J \nonumber \\ && + \frac{B^2}{2A^2}e^{-2\tilde{\sigma}}a_{IJ}(e^{-\tilde{\sigma}} \sigma) \partial_m b^I\partial^m b^J \;.
\end{eqnarray} From now on we regard $\sigma^I$ and $b^I$ as the independent fields. Note that the constraint $\hat{\cal V}(h) =1$ implies the relation \begin{equation} \label{RelKK} \hat{\cal V}(\sigma) = \hat{\cal V}(A^{-1} e^{\tilde{\sigma}} h) = A^{-p} e^{p \tilde{\sigma}} \hat{\cal V}(h) = A^{-p} e^{p \tilde{\sigma}} \;, \end{equation} which expresses the Kaluza-Klein scalar $\tilde{\sigma}$ as a function of the four-dimensional scalars $\sigma^I$. Using the relations (\ref{eq:Vrelation}) and (\ref{eq:Vrelation2}), one finds that the two terms quadratic in $\partial_m \tilde{\sigma}$ cancel against the mixed term $\sim \partial_m e^{-\tilde{\sigma}} \partial^m \sigma^J$, so that no kinetic term for the Kaluza-Klein scalar remains. If we choose the constants $A,B$ to satisfy \begin{equation} B^2=\frac{3A^2}{2} \;, \end{equation} and use that $a_{IJ}(h)$ is homogeneous of degree $-2$, the remaining terms in the Lagrangian take the form \begin{eqnarray} e^{-1}\mathcal{L} &=& \frac{R}{2} - \frac{3}{4} a_{IJ}(\sigma) \partial_m \sigma^I \partial^m \sigma^J + \frac{3}{4} a_{IJ}(\sigma) \partial_m b^I \partial^m b^J \;. \end{eqnarray} Defining \begin{equation} N_{IJ}(\sigma)=\frac{3}{2} a_{IJ}(\sigma) \;, \end{equation} we recognize the standard form (\ref{SigmaR}) of a para-Hermitean sigma model with $n$ commuting shift isometries, coupled to gravity, \begin{equation} \label{ReducedSigmaTime} e^{-1}\mathcal{L}= \frac{R}{2} - \frac{1}{2} N_{IJ}(\sigma)(\partial_m \sigma^I \partial^m \sigma^J - \partial_m b^I \partial^m b^J) \;. \end{equation} The metric $N_{IJ}(\sigma)$ has the Hesse potential ${\cal V}(\sigma)=-\log \hat{\cal V}(\sigma)$: \[ N_{IJ}(\sigma) = -\frac{3}{2p} \frac{\partial^2}{\partial \sigma^I \partial \sigma^J} \log \hat{\cal V}(\sigma) \;. \] As a result, the metric $N_{IJ} \oplus (-N_{IJ})$ of the scalar manifold spanned by $\sigma^I, b^I$ is para-K\"ahler.
This is seen explicitly by introducing para-holomorphic coordinates \[ X^I = \sigma^I + e b^I \;, \] and computing \[ \frac{\partial^2 \log \hat{\cal V}}{\partial X^I \partial \bar{X}^J} = \frac{\partial^2 \log \hat{\cal V}}{\partial \sigma^K \partial \sigma^L} \frac{\partial \sigma^K}{\partial X^I}\frac{\partial \sigma^L}{\partial \bar{X}^J} = \frac{1}{4} \frac{\partial^2 \log \hat{\cal V}}{\partial \sigma^I \partial \sigma^J} = -\frac{p}{6} N_{IJ} \;. \] Thus $K(X,\bar{X})= -\frac{6}{p} \log \hat{\cal V}$ is a para-K\"ahler potential for the metric $N_{IJ}\oplus (-N_{IJ})$. The relation between the five- and four-dimensional Lagrangian holds irrespective of the value of $p$ that we choose, and hence it makes sense for models with $p\not=3$, which cannot be embedded into a five-dimensional supersymmetric model. However, it was crucial that we could combine the Kaluza-Klein scalar with the five-dimensional scalars $h^I$ in such a way that the scalar target manifold of the reduced theory became para-Hermitean. This worked only because the metric $a_{IJ}(h)$ is homogeneous of degree $-2$. Therefore, there is no obvious further generalization which would allow one to drop the condition that the prepotential is homogeneous. The effect of reducing over space rather than time is to replace $b^I$ by $ib^I$ in (\ref{ReducedSigmaTime}). Equivalently, in terms of (para-)complex coordinates, it corresponds to replacing $X^I=\sigma^I + e b^I$ by $Y^I = \sigma^I + i b^I$, i.e. the para-complex structure is replaced by a complex structure, and one obtains a K\"ahler manifold where the K\"ahler potential is proportional to the logarithm of the prepotential $\hat{\cal V}(\sigma)$. Thus, as in \cite{EucIII} the para-$r$-map and $r$-map are related by analytic continuation (see also Appendix A). Having fixed the relation between the five-dimensional and the four-dimensional theory, we can now see how four-dimensional instantons lift to five-dimensional solutions.
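As an aside, the cancellation of the Kaluza-Klein kinetic terms used in the reduction above can be verified symbolically. The following SymPy sketch (our sample prepotential $\hat{\cal V}=\sigma^1\sigma^2$ with $p=2$ and $A=1$; derivatives are modelled by formal symbols $d\sigma^I$) checks that $-\frac{3}{4}(\partial\tilde{\sigma})^2-\frac{3}{4}a_{IJ}(h)\partial h^I\partial h^J$ equals $-\frac{3}{4}a_{IJ}(\sigma)\partial\sigma^I\partial\sigma^J$:

```python
import sympy as sp

s1, s2, ds1, ds2 = sp.symbols('s1 s2 ds1 ds2', positive=True)
p = 2

def Vhat(u, v):
    return u*v                               # sample prepotential, degree p = 2

def a_diag(u, v):
    # a_IJ = -(1/p) Hess log(Vhat); diagonal for this sample
    return [sp.Rational(1, p)/u**2, sp.Rational(1, p)/v**2]

sig_t = sp.log(Vhat(s1, s2))/p               # Kaluza-Klein scalar: e^{p sig_t} = Vhat(sigma)

def d(f):
    # formal differential, ds1/ds2 stand for partial derivatives of s1, s2
    return sp.diff(f, s1)*ds1 + sp.diff(f, s2)*ds2

h = [sp.exp(-sig_t)*s1, sp.exp(-sig_t)*s2]   # five-dimensional scalars, A = 1
dh = [d(hi) for hi in h]

lhs = (-sp.Rational(3, 4)*d(sig_t)**2
       - sp.Rational(3, 4)*sum(aii*dhi**2 for aii, dhi in zip(a_diag(*h), dh)))
rhs = -sp.Rational(3, 4)*sum(aii*dsi**2 for aii, dsi in zip(a_diag(s1, s2), [ds1, ds2]))

assert sp.simplify(lhs - rhs) == 0
```

The check makes explicit that the Kaluza-Klein scalar is absorbed by the rescaling $h^I=e^{-\tilde{\sigma}}\sigma^I$, which is possible precisely because $a_{IJ}$ is homogeneous of degree $-2$.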
We have restricted ourselves to solutions of (\ref{SigmaR}) where the four-dimensional metric is flat, $ds^2_{(4)} = \delta_{mn} dx^m dx^n$. Such line elements lift to five-dimensional line elements of the form \[ ds^2_{(5)} = - e^{2\tilde{\sigma}} dt^2 + e^{-\tilde{\sigma}} \delta_{mn} dx^m dx^n \;, \] where $\tilde{\sigma}$ is the Kaluza-Klein scalar. This is precisely the structure of a line element for an extremal five-dimensional black hole. Extremal black holes have the particular feature that their line elements reduce under temporal reduction to flat line elements, provided that one uses the Einstein frame in both dimensions. The non-trivial five-dimensional geometry is fully captured by the Kaluza-Klein scalar, while the four-dimensional metric is flat. This explains why extremal black holes correspond to null geodesics, and why we could effectively drop the four-dimensional Einstein-Hilbert term when constructing solutions. This observation provides additional justification for calling the corresponding instanton solutions extremal. From the four-dimensional point of view all information is encoded in the scalar fields $\sigma^I$. With the choice $A=1$, which implies $B=\sqrt{\frac{3}{2}}$, the Kaluza-Klein scalar is determined by the four-dimensional scalars through the relation \begin{equation} \label{RelKK2} e^{p\tilde{\sigma}} = \hat{\cal V}(\sigma) \;, \end{equation} while the five-dimensional scalars are given by \[ h^I = e^{-\tilde{\sigma}} \sigma^I \;. \] We have a Hesse potential of the form ${\cal V}(\sigma) = - \log \hat{\cal V}(\sigma)$, and therefore the dual scalars have the form \[ \sigma_I \simeq \frac{\partial }{\partial \sigma^I} \log \hat{\cal V}(\sigma)\;. \] As in previous examples we will fix the factor of proportionality at our convenience. The solution is given by $\sigma_I(x)=H_I(x)$, where $H_I(x)$ are harmonic functions on $\mathbbm{R}^4$.
Explicit expressions for the $\sigma^I$ can only be obtained case by case if the prepotential is sufficiently simple. However, the asymptotics of the solution at the center is known from Section 4, and we will see below that this allows us to obtain information about the ADM mass and about the black hole entropy. The axions $b^I$ are determined by the extremal instanton ansatz and in turn determine the five-dimensional gauge fields. \subsection{ADM mass and instanton action} Before looking into explicit examples, we show that the ADM mass of the five-dimensional black hole is equal to the action of the corresponding four-dimensional instanton. The ADM mass can be written as a surface integral involving the Kaluza-Klein scalar $\tilde{\sigma}$ \cite{EucIII}. To compare this to the instanton action, we express the ADM mass in terms of the prepotential: \[ M_{ADM} = - \frac{3}{2} \oint d^3 \Sigma^m \partial_m e^{-\tilde{\sigma}} = - \frac{3}{2} \oint d^3 \Sigma^m \partial_m \hat{\cal V}(\sigma)^{-1/p} \;. \] Now we compare this to the instanton action \[ S_{\rm inst} = \oint d^3 \Sigma^m N_{IJ} \sigma^I \partial_m \sigma^J \;. \] The metric $N_{IJ}$ is given by \[ N_{IJ} = - \frac{3}{2p} \left( \frac{ \hat{\cal V}_{IJ}} { \hat{\cal V}} - \frac{ \hat{\cal V}_I \hat{\cal V}_J }{ \hat{\cal V}^2 } \right) \;. \] Using that $\hat{\cal V}(\sigma)$ is homogeneous of degree $p$, we find \[ N_{IJ} \sigma^I \partial_m \sigma^J = - \frac{3}{2p} \left( \frac{ \hat{\cal V}_{IJ} \sigma^I }{ \hat{\cal V} } - \frac{ \hat{\cal V}_I \sigma^I \hat{\cal V}_J }{ \hat{\cal V}^2 } \right) \partial_m \sigma^J = \frac{3}{2p} \frac{ \hat{\cal V}_J }{ \hat{\cal V} } \partial_m \sigma^J \] But this is a total derivative: \[ N_{IJ} \sigma^I \partial_m \sigma^J = \frac{3}{2p} \partial_m \log \hat{\cal V}(\sigma) \;.
\] As a result we have \begin{eqnarray} M_{ADM} & =& - \frac{3}{2} \oint d^3 \Sigma^m \partial_m \hat{\cal V}(\sigma)^{-1/p} = - \frac{3}{2} \oint d^3 \Sigma^m \partial_m e^{-\tilde{\sigma}} \;, \nonumber \\ S_{\rm inst} &=& \frac{3}{2} \oint d^3 \Sigma^m \partial_m \log \hat{\cal V}(\sigma)^{1/p} = \frac{3}{2} \oint d^3 \Sigma^m \partial_m \tilde{\sigma} \;. \end{eqnarray} Both the ADM mass and the instanton action are surface integrals, but the integrands are different. To compare the integrals we rewrite the ADM mass as \begin{equation} \label{ADMvsInst} M_{ADM} = \frac{3}{2} \oint d^3 \Sigma^m e^{-\tilde{\sigma}} \partial_m \tilde{\sigma} \;. \end{equation} The integration is performed over a three-sphere of radius $r$, taking $r\rightarrow \infty$. Therefore the only terms in the integrand which give a finite contribution are those which fall off like $\frac{1}{r^3}$. The behaviour of the integrands in this limit is obtained by observing that $\hat{\cal V}(\sigma)$, and, hence, $e^{\tilde{\sigma}}$ are algebraic functions of the harmonic functions $H_I$. Since we normalize the five-dimensional metric to approach the standard Minkowski metric at infinity, both expressions approach the constant value 1 at infinity.
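The asymptotic argument can be illustrated for a single-centered STU-type example. A SymPy sketch (our implementation; the normalization $h_1h_2h_3=1$ enforces the Minkowski boundary condition, and $2\pi^2$ is the volume of the unit three-sphere) compares the two surface integrals in the limit $r\rightarrow\infty$:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
h1, h2, q1, q2, q3 = sp.symbols('h1 h2 q1 q2 q3', positive=True)
h3 = 1/(h1*h2)                             # normalization h1 h2 h3 = 1

H = [h1 + q1/r**2, h2 + q2/r**2, h3 + q3/r**2]
em = (H[0]*H[1]*H[2])**sp.Rational(1, 3)   # e^{-sigma_t} for Vhat = s1 s2 s3
sig_t = -sp.log(em)
vol_S3 = 2*sp.pi**2

M_ADM  = sp.limit(-sp.Rational(3, 2)*vol_S3*r**3*sp.diff(em, r), r, sp.oo)
S_inst = sp.limit( sp.Rational(3, 2)*vol_S3*r**3*sp.diff(sig_t, r), r, sp.oo)

assert sp.simplify(M_ADM - S_inst) == 0
```

Without the normalization $h_1h_2h_3=1$ the two limits differ by the constant value of $e^{-\tilde{\sigma}}$ at infinity, in line with the role of the factor $e^{-\tilde{\sigma}}$ in (\ref{ADMvsInst}).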
This implies the following Taylor expansion of $\hat{\cal V}(\sigma)$ around $\tau=\frac{1}{r^2} = 0$: \begin{eqnarray} \hat{\cal V}(\sigma) & = & 1 + {\cal O}(\frac{1}{r^2}) \;,\nonumber\\ \partial_m \hat{\cal V}(\sigma) &=& {\cal O}(\frac{1}{r^3}) \;.\nonumber \end{eqnarray} This in turn implies that \begin{eqnarray} e^{-\tilde{\sigma}} &=& 1 + {\cal O}(\frac{1}{r^2}) \;,\nonumber\\ \partial_m \tilde{\sigma} &=& {\cal O}(\frac{1}{r^3}) \;.\nonumber \end{eqnarray} As a consequence, the factor $e^{-\tilde{\sigma}}$ in (\ref{ADMvsInst}) does not contribute to the integral, and the ADM mass and the instanton action agree, \[ M_{ADM} = S_{\rm Inst} \;, \] even though the integrands of the surface integrals are different.\footnote{ For the special case of the dilaton-axion system, this was also observed in \cite{EucIII}.} This is the same result as we found when lifting without coupling to gravity. In the absence of gravity, the mass is defined as the integral of the energy density, but no such definition is available in the presence of gravity. Instead one needs to apply the ADM definition of mass. The fact that we find agreement between mass and instanton action in both cases provides additional support for the definition of the instanton action obtained by dualization of axions into tensors. \subsection{Black hole entropy and the size of the throat} Besides the ADM mass, the black hole entropy is the most important property of a black hole. To extend our instanton -- black hole dictionary we investigate the behaviour of the five-dimensional metric at the centers and interpret it in terms of four-dimensional quantities. Line elements of the form \[ ds^2_{(5)} = - e^{2\tilde{\sigma}} dt^2 + e^{-\tilde{\sigma}} \delta_{mn} dx^m dx^n \] describe extremal black holes with the horizon located at $r=0$ if the function $e^{-\tilde{\sigma}}$ has the asymptotics \[ e^{-\tilde{\sigma}} \approx \frac{Z}{r^2} \;, \] where $Z$ is constant.
Here we use a spherical coordinate system which is centered at the black hole horizon. The asymptotic line element \[ ds^2_{(5)} = - \frac{r^4}{Z^2} dt^2 + \frac{Z}{r^2} dr^2 + Z d \Omega^2_{(3)} \] is locally isometric to $AdS^2 \times S^3$, and the area of the event horizon is given by the area $A=2\pi^2 Z^{3/2}$ of the asymptotic three-sphere located at $r = 0$. To obtain a four-dimensional interpretation, we consider a hypersurface of constant time. The resulting four-dimensional line element \[ ds^2_{(4)} = e^{-\tilde{\sigma}} \delta_{mn} dx^m dx^n \] describes the instanton in a conformal frame which is different from the (four-dimensional) Einstein frame employed so far. We will call this frame the Kaluza-Klein frame, and refer to \cite{EucIII} for a more detailed discussion of its role and properties. By definition, the four-dimensional Kaluza-Klein metric is the pull-back of the five-dimensional metric onto a hypersurface $t=\mbox{const.}$, i.e. a constant time hypersurface of the black hole space-time. In this frame the instanton line element is not flat, but only conformally flat. The geometry can be interpreted as a semi-infinite wormhole, which is asymptotically flat for $r\rightarrow \infty$ and ends with a neck of size proportional to the area $A$ of the black hole for $r\rightarrow 0$. For multi-centered solutions there are several such throats with asymptotic sizes given by the areas of the corresponding horizons. If the constant $Z$ vanishes, the area of the black hole and the neck of the corresponding wormhole have zero size. As is well known from supergravity solutions\footnote{See for example \cite{Mal}.}, a non-vanishing $Z$ requires one to `switch on sufficiently many charges'. A more precise statement will be made later when we consider explicit examples. Solutions with $Z=0$ can be interpreted as degenerate black hole solutions with vanishing area of the event horizon.
In this case the horizon coincides with the curvature singularity, and the space-time has a null singularity. The spatial geometry corresponds to a semi-infinite wormhole with zero-sized neck. One expects that a finite horizon is obtained when taking into account higher curvature corrections to the Einstein-Hilbert term \cite{Small}. Such black holes are called small black holes, in contrast to large black holes which already have a finite horizon at the two-derivative level. \subsection{Attractor behaviour and examples} We will now consider some explicit examples for illustration. Then we return to the general case and show that the asymptotic behaviour at the event horizons is governed by an attractor mechanism which generalizes the one of five-dimensional supergravity. \subsubsection{Prepotential $\hat{\cal V}(\sigma) = \sigma^1 \sigma^2 \sigma^3$} We start with the STU-type prepotential $\hat{\cal V}(\sigma) = \sigma^1 \sigma^2 \sigma^3$. Like all models with a homogeneous cubic prepotential this model is supersymmetric, or more precisely, a subsector of a supersymmetric model \cite{EucIII}. The dual coordinates are $\sigma_I \simeq \partial_I \log(\sigma^1 \sigma^2 \sigma^3) \simeq \frac{1}{\sigma^I}$, and for convenience we fix the normalization to \[ \sigma_I = \frac{1}{\sigma^I} \;. \] Then the four-dimensional instanton solution is given by \[ \sigma^I(x) = \frac{1}{H_I(x)} \;, \] where $x\in \mathbbm{R}^4$. The Kaluza-Klein scalar $\tilde{\sigma}$ is \[ e^{3 \tilde{\sigma}} = \hat{\cal V}(\sigma) = \sigma^1 \sigma^2 \sigma^3 = \frac{1}{H_1 H_2 H_3} \;. \] The resulting five-dimensional line element is \begin{eqnarray} ds^2_{(5)} &=& - e^{2\tilde{\sigma}} dt^2 + e^{-\tilde{\sigma}} \delta_{mn} dx^m dx^n \nonumber \\ &=& - (H_1 H_2 H_3)^{-2/3} dt^2 + (H_1 H_2 H_3)^{1/3} \delta_{mn} dx^m dx^n \;, \nonumber \end{eqnarray} which is the standard form of a (single or multi-centered) five-dimensional BPS black hole for an STU-model. 
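The near-horizon behaviour $e^{-\tilde{\sigma}}\approx Z/r^2$ of this solution can be checked symbolically; for a single-centered solution the computation below gives $Z=(q_1q_2q_3)^{1/3}$ (our evaluation). A SymPy sketch:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
hc = sp.symbols('h1:4', positive=True)      # constants in the harmonic functions
qc = sp.symbols('q1:4', positive=True)      # charges
H = [h + q/r**2 for h, q in zip(hc, qc)]

em = (H[0]*H[1]*H[2])**sp.Rational(1, 3)    # e^{-sigma_t} = (H1 H2 H3)^(1/3)
# near-horizon behaviour: r^2 e^{-sigma_t} -> Z with Z = (q1 q2 q3)^(1/3)
Z = sp.limit(r**2*em, r, 0)
assert sp.simplify(Z - (qc[0]*qc[1]*qc[2])**sp.Rational(1, 3)) == 0
```

If any one charge is set to zero, the limit vanishes, which is the `small black hole' degeneration discussed above.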
Observe that the asymptotic metric at the centers is $AdS^2 \times S^3$ if all three harmonic functions are non-constant. This requires three non-vanishing charges $q_1, q_2, q_3$. If one or two charges are switched off, one obtains `small' black holes with vanishing horizon area. The result for the five-dimensional scalars $h^I$ is: \[ h^I = e^{-\tilde{\sigma}} \sigma^I = \left( \frac{H_J H_K}{H_I^2} \right)^{1/3} \;, \] where $I,J,K$ are pairwise distinct. Observe that the $h^I$ take finite fixed point values at the centers, which only depend on the charges. For concreteness, single-centered harmonic functions $H_I = h_I + \frac{q_I}{r^2}$ give \[ \begin{CD} h^I @>>{r\rightarrow 0}> \left( \frac{q_J q_K}{q_I^2} \right)^{1/3} \;. \end{CD} \] A particular subclass is provided by double-extremal black holes, where the scalars $h^I$ are constant. The fixed point behaviour implies that these constant values are not arbitrary, but determined by the charges. For double-extremal black holes the harmonic functions $H_I$ must be proportional to one another, and the line element takes the form \begin{equation} \label{Tanghelini} ds_{(5)}^2 = - H^{-2}(x) dt^2 + H(x) \delta_{mn} dx^m dx^n \;, \end{equation} where $H(x)$ is a harmonic function. This is the Tangherlini solution, which is the five-dimensional version of the extremal Reissner-Nordstr\"om solution \cite{Tanghelini}. Models with a general homogeneous cubic prepotential can be treated in an analogous way. However, in general it is not possible to find explicit expressions for the scalars $h^I$ or $\sigma^I$ in terms of the harmonic functions. \subsubsection{Prepotential $\hat{\cal V}(\sigma) = \sigma^1 \sigma^2 \sigma^3 \sigma^4$ } Let us also consider one example which does not correspond to a supersymmetric model. We take the simplest example $\hat{\cal V}(\sigma) = \sigma^1 \sigma^2 \sigma^3 \sigma^4$ of a homogeneous quartic prepotential.
We normalize the dual scalars such that \[ \sigma_I = \frac{1}{\sigma^I} \;, \] so that the solution is given by \[ \sigma^I(x) = \frac{1}{H_I(x)} \;. \] The corresponding Kaluza-Klein scalar $\tilde{\sigma}$ is \[ e^{4\tilde{\sigma}} = \hat{\cal V}(\sigma) = \sigma^1 \sigma^2 \sigma^3 \sigma^4 = \frac{1}{H_1 H_2 H_3 H_4} \;, \] which leads to a five-dimensional line element of the form \[ ds_{(5)}^2 = - (H_1 H_2 H_3 H_4)^{-1/2} dt^2 + (H_1 H_2 H_3 H_4)^{1/4} \delta_{mn} dx^m dx^n \;. \] Multi-centered black hole solutions with finite horizons are thus obtained if all four harmonic functions are non-constant, i.e. one needs four non-vanishing charges $q_1, \ldots, q_4$. The solution for the five-dimensional scalars is \[ h^I = e^{-\tilde{\sigma}} \sigma^I = \left( \frac{H_J H_K H_L}{H_I^3} \right)^{1/4} \;, \] where $I, J, K, L$ are pairwise distinct. Again we observe attractor behaviour, as the five-dimensional scalars approach fixed point values at the centers which only depend on the charges. For a single-centered solution we find \[ \begin{CD} h^I @>>{r\rightarrow 0}> \left( \frac{q_J q_K q_L}{q_I^3} \right)^{1/4} \;. \end{CD} \] If the scalars are frozen to their fixed point values we obtain a double-extremal solution with a Tangherlini line element (\ref{Tanghelini}). \subsubsection{General homogeneous prepotential $\hat{\cal V}(\sigma)$} We now return to the general case and consider an arbitrary homogeneous prepotential. The dual coordinates \[ \sigma_I \simeq \frac{\hat{\cal V}_I}{\hat{\cal V}} \] are homogeneous functions of degree $-1$. The solution is given by $\sigma_I(x) = H_I(x)$, where $H_I(x)$ are harmonic functions. While we cannot solve this for the scalars $\sigma^I$ in closed form, we know that the dual scalars behave like $\sigma_I \sim \frac{1}{r^2}$ at the centers, which implies $\sigma^I \sim r^2$.
The asymptotics of the metric at the centers is determined by \[ e^{-\tilde{\sigma}} = \hat{\cal V}^{-1/p} \approx \frac{Z}{r^2} \;, \] and a finite event horizon requires finite $Z$. This imposes constraints on the charges, which we discuss below. If we express the relation $\sigma_I = H_I$ in terms of five-dimensional quantities we obtain \[ e^{-\tilde{\sigma}} \frac{\partial \hat{\cal V}(h)}{\partial h^I} = H_I \;. \] This has the same form as the generalized stabilisation equations of five-dimensional supergravity \cite{ChaSab} and should be interpreted as a generalisation thereof. The generalized stabilisation equations are the algebraic version of the first order flow equations which determine the black hole solution globally. The stabilisation or attractor equations which determine the behaviour at the centers can be obtained by taking the limit $r\rightarrow 0$. In this limit we have \[ H_I \approx \frac{q_I}{r^2} \;,\;\;\; e^{-\tilde{\sigma}} \approx \frac{Z}{r^2} \;. \] The limit $r\rightarrow 0$ of the generalized stabilisation equation gives \[ Z \left. \frac{\partial \hat{\cal V}(h)}{\partial h^I} \right|_* = q_I \;, \] where $*$ denotes the evaluation at the horizon. This has the same form as the stabilisation equations (attractor equations) of five-dimensional supergravity \cite{ChaSab} and should be interpreted as a generalisation thereof. Since \[ \frac{\partial \hat{\cal V}(h)}{\partial h^I} h^I = p \hat{\cal V}(h) = p \;, \] the constant $Z$ can be expressed as \[ Z = \frac{1}{p} q_I h^I_* \;. \] Thus the area of the event horizon of the black hole and the size of the neck of the corresponding wormhole/instanton are determined by $Z$ through the charges $q_I$ and the attractor values of the scalars $h^I$. For supersymmetric models ($p=3$), $Z$ is proportional to the five-dimensional central charge. We can be more specific about the conditions leading to a non-vanishing $Z$ if we restrict the functional form of $\hat{\cal V}(\sigma)$.
Consider the case where $\hat{\cal V}(\sigma)$ is a homogeneous polynomial of degree $p>0$, \[ \hat{\cal V}(\sigma) = C_{I_1 \cdots I_p} \sigma^{I_1} \cdots \sigma^{I_p} \;. \] Then the dual fields have the form \[ \sigma_I \simeq \frac{\partial_I C_{I_1 \cdots I_p} \sigma^{I_1} \cdots \sigma^{I_p}}{C_{I_1 \cdots I_p} \sigma^{I_1} \cdots \sigma^{I_p}} \;. \] Two extremal situations can arise. If the prepotential has the form \[ \hat{\cal V}(\sigma) = \sigma^1 \cdots \sigma^p \;, \] then the solution is given by \[ \hat{\cal V} = (H_1 \cdots H_p)^{-1} \] and \[ e^{-\tilde{\sigma}} = \hat{\cal V}(\sigma)^{-1/p} = (H_1 \cdots H_p)^{1/p} \;. \] In this case a finite horizon requires that all charges are switched on, i.e. $q_1 \not=0$, \ldots, $q_p \not=0$. The other extreme case is a prepotential of the form $\hat{\cal V} = \sigma^p$. Then the solution is given by \[ \hat{\cal V} = H^{-p} \] and \[ e^{-\tilde{\sigma}} = \hat{\cal V}(\sigma)^{-1/p} = H \;. \] In this case it is sufficient to switch on one single charge, because the corresponding scalar enters into the prepotential with the $p$-th power. General homogeneous prepotentials provide examples for all intermediate cases between these two extremes. The results of this section generalize the results on five-dimensional BPS black holes to a much larger class of non-supersymmetric models defined by homogeneous prepotentials. We observe that for the attractor mechanism the five-dimensional scalars $h^I$ are more suitable, since they have finite fixed point values while the four-dimensional scalars $\sigma^I$ go to zero. However, since $h^I$ and $\sigma^I$ are related by a rescaling, both descriptions are equivalent, and the degenerate asymptotic behaviour of the $\sigma^I$ corresponds to the proper fixed point behaviour of the $h^I$. \section{Conclusions and Outlook} In this paper we have constructed multi-centered extremal black hole solutions using temporal reduction without imposing spherical symmetry.
By imposing that the solution can be expressed algebraically in terms of harmonic functions, we have identified a class of scalar geometries which is characterized by the existence of a (Hesse or para-K\"ahler) potential for the metric. This class of theories contains supergravity theories as a subset while preserving the salient features of BPS solutions, namely multi-centered generalizations and the generalized stabilisation equations. The distinction between BPS and non-BPS extremal solutions in supergravity has been subsumed under the geometrical distinction between solutions which flow along eigendistributions of the para-complex structure and those which flow along other completely isotropic submanifolds of the (extended) scalar manifold. Starting from the interpretation of the equations of motion as defining a harmonic map between the (reduced) space-time and the (extended) scalar manifold, the solution can be expressed algebraically in terms of harmonic functions without the need to bring the equations of motion to first order form. A first order rewriting can still be obtained by imposing that the solution carries finite charges. We plan to use this link to explore the relation between our formalism and the approaches using first order rewritings, `fake'-supersymmetry and Hamilton-Jacobi theory. It should also be interesting to investigate how Hessian scalar manifolds could be used within the entropy function formalism of Sen \cite{SenEntropyFunction}. This approach allows one to study the near horizon geometry of generic Einstein-Maxwell type theories, but it is in general not possible to learn much about the extension of solutions away from the horizon. For BPS solutions one can make the transition from near-horizon to global solutions because the generalized stabilisation equations and the `proper' stabilisation equations have the same structure, and we have seen that this feature generalizes to a large class of non-supersymmetric theories.
The electric BPS-solutions of five-dimensional supergravity are a subclass of our solutions, and one expects that the corresponding instantons are BPS solutions of the reduced four-dimensional Euclidean theory. This can indeed be verified directly, and in \cite{MohWai1} we will give a more detailed account of instanton solutions for Euclidean vector multiplets. For concreteness, we have restricted ourselves in this paper to the relation between five-dimensional Einstein-Maxwell theories and four-dimensional Euclidean sigma models, and to extremal and electro-static backgrounds. This leaves various directions for future work. Evidently, many features of our constructions will generalize to any number of dimensions, the most interesting pair being four-dimensional Einstein-Maxwell type theories and three-dimensional sigma models. Moreover, there are various other types of solutions, like black holes in anti-de-Sitter and de-Sitter space, rotating black holes, black strings and black rings, Taub-NUT spaces, solutions including higher curvature terms, and non-extremal solutions. While some of these might just correspond to more complicated harmonic maps, others will require generalizations of the set up, since the temporal reduction will in general lead to Euclidean theories which also contain gauge fields and a scalar potential. It will be interesting to see whether solutions can be constructed efficiently in such a generalized set up. In this respect it is encouraging that black ring solutions for five-dimensional Einstein-Maxwell-Dilaton gravity have been constructed by lifting solutions of four-dimensional Euclidean sigma models with (symmetric) para-complex target spaces \cite{Yazadjiev}. Besides the construction and study of solutions, the geometrical structures underlying the Lagrangians are very interesting. Both \cite{AleCor} and our work suggest that there are natural generalizations of the special geometries realized in supersymmetric theories.
Besides the existence of a potential, homogeneity conditions play an important role, which indicates that the underlying manifolds have homothetic Killing vector fields. This is a well known feature of the scalar geometries of vector, tensor and hypermultiplets when these are considered in the superconformal formalism. As mentioned at various places in this paper, the scalar geometries of theories obtained from the same higher-dimensional theory by dimensional reduction over space and time, respectively, are related by analytic continuation. We have also noticed that this is related to an ambiguity in singling out `the' Euclidean action of a given theory. This has been discussed in some detail in \cite{EucIII}; the role of these ambiguities for instanton effects is currently under investigation \cite{MohWai}. Here we would like to point out that these ambiguities suggest working in the framework of complex-Riemannian geometry and regarding scalar manifolds which are related by analytic continuation as real forms of a single underlying manifold. Interestingly, similar analytic continuations, the complexification of field space, and the subsequent classification of reality conditions seem to play an important role in recent studies of black holes, instantons, domain walls and cosmological solutions within the framework of `fake'-supersymmetry, see for example \cite{Fake2,Bergshoeff:Complex}. While so far such investigations have been restricted to symmetric target spaces, complex-Riemannian geometry should provide the appropriate framework for extending these studies to general targets. Some elements needed for this are provided in the appendix. \begin{appendix} \section{Complexification of the target space} At the end of Section \ref{SectEucAct} we observed that the target spaces $M$ and $M'$ of the two Euclidean actions (\ref{paraH-real}) and (\ref{Herm-real}) (equivalently (\ref{SX}) and (\ref{SY})) can be viewed as real sections of one underlying complex manifold.
Complexification of the action is used in some approaches to defining the Euclidean actions of supersymmetric theories \cite{vNWal}. Complex actions for the ten-dimensional and eleven-dimensional supergravity theories were discussed in \cite{Bergshoeff:Complex}, while \cite{EucIII} found that a similar formalism should be useful for Euclidean vector multiplets in four dimensions. Since the scalar target spaces of (\ref{Herm-real}) and (\ref{paraH-real}) already carry a complex or para-complex structure, respectively, before we complexify them, some care is needed in order to distinguish between the different complex structures.\footnote{If one includes fermions then yet another complex structure becomes relevant, namely the one carried by the spinor representation \cite{EucI}. Here we will restrict ourselves to bosonic actions. } In the following we work out some details and arrive at the conclusion that the proper geometrical framework for complexified Euclidean actions is complex-Riemannian geometry. When we use the real coordinates $(\sigma^I, b^I)$, the metrics of the target spaces $M$ and $M'$, which underlie the actions (\ref{paraH-real}) and (\ref{Herm-real}), have the form \[ ds^2 = N_{IJ}(\sigma) ( d\sigma^I d\sigma^J \mp db^I db^J) \;, \] respectively. The two line elements are related by the analytic continuation $b^I \rightarrow i b^I$. If we complexify the $b^I$, then $M$ and $M'$ can be viewed as subspaces of a $3n$-dimensional space $\tilde{M}$. This description is not satisfactory for various reasons. Complexifying only the $b^I$ introduces an asymmetry between the $\sigma^I$ and the $b^I$. It is more natural to complexify all fields and to view $M$ and $M'$ as real forms of a complex space $M_c$. Moreover, $M$ and $M'$ carry additional structures. $M'$ is a complex space, and when we use complex coordinates $Y^I = \sigma^I + i b^I$ the line element of $M'$ is manifestly Hermitian \[ ds^2_{M'} = N_{IJ}(Y+\bar{Y}) dY^I d\bar{Y}^J \;.
\] If we want to view $M'$ as a real form of a complex space $M_c$, then we need to be careful in distinguishing the complex structure of $M'$ and the complex structure of $M_c$, which is introduced in the process of complexification, and which is used in the analytic continuation from $M'$ to $M$. Similarly $M$ is a para-complex space and when using the para-complex coordinates $X^I = \sigma^I + e b^I$, the line element of $M$ is manifestly para-Hermitian: \[ ds^2_{M} = N_{IJ}(X+\bar{X}) dX^I d\bar{X}^J \;. \] In the following\footnote{The mathematical background material relevant for the following paragraphs can be found in \cite{Lou,Spin} and \cite{EucI,EucIII}.} we will reserve the symbol $i$ for the imaginary unit associated with the complex structure of $M'$, while the imaginary unit associated with the complex structure of $M_c$ will be denoted $j$. We can define $j$ in terms of $i$ and $e$ by observing that the analytic continuation from $M'$ to $M$, when written in para-complex coordinates, takes the form \[ Y^I = \sigma^I + i b^I \rightarrow X^I = \sigma^I + e b^I \;. \] The replacement $ib^I \rightarrow eb^I$ is induced by $b^I \rightarrow (-ie) b^I$, and implies that $j$ should be defined as $j=-ie$. To have $j^2=-1$ we need to impose the relation $ie=ei$. These relations are consistent and define a four-dimensional commutative and associative real algebra, with basis $1,i,e,j$. This algebra is generated by $i$ and $e$, subject to the relations \begin{equation} \label{Rel1} i^2 =-1 \;,\;\;\;e^2=1\;,\;\;\;ie=ei \;, \end{equation} and defining $j=-ie$. Equivalently, this algebra is generated by $i$ and $j$ subject to the relations \[ i^2 = -1 \;,\;\;\;j^2=-1 \;,\;\;\;ij=ji \;\;\;\; \] and defining $e=ij$. The second presentation shows that the algebra is isomorphic to $\mathbbm{C} \oplus \mathbbm{C}$. Note that this is not only an algebra over $\mathbbm{R}$ but also an algebra over $\mathbbm{C}$.
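To make the last two statements explicit, note that, using $ie=ei$, \[ j^2 = (-ie)(-ie) = i^2 e^2 = (-1)(+1) = -1 \;,\;\;\; ij = i(-ie) = -i^2 e = e \;. \] The isomorphism with $\mathbbm{C} \oplus \mathbbm{C}$ can be exhibited by the orthogonal idempotents \[ \pi_\pm = \frac{1}{2}(1 \pm e) \;,\;\;\; \pi_\pm^2 = \pi_\pm \;,\;\;\; \pi_+ \pi_- = \frac{1}{4}(1 - e^2) = 0 \;,\;\;\; \pi_+ + \pi_- = 1 \;, \] which decompose the algebra into the two ideals spanned by $\pi_\pm$ and $i \pi_\pm$; since $(i\pi_\pm)^2 = - \pi_\pm$, each ideal is isomorphic to $\mathbbm{C}$, with unit element $\pi_\pm$.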
To see that the complex algebra $\mathbbm{C} \oplus \mathbbm{C}$ is the `complexification of the complex numbers' $\mathbbm{C}$ as well as the `complexification of the para-complex numbers' $C$, recall that the complexification of a real algebra $A$ (associative, with unit) is obtained by taking the real tensor product (of algebras) with $\mathbbm{C}$ (considered as a real algebra): \[ A_c = \mathbbm{C} \otimes_{\mathbbm{R}} A \;. \] If one takes $A=\mathbbm{C}$ then the result is \begin{equation} \label{ComplexifyComplex} \mathbbm{C} \otimes_{\mathbbm {R}} \mathbbm{C} \simeq \mathbbm{C} \oplus \mathbbm{C} \;. \end{equation} If one takes $A$ to be the para-complex numbers $C$, one obtains the same result: \begin{equation} \label{ComplexifyParaComplex} \mathbbm{C} \otimes_{\mathbbm{R}} C \simeq \mathbbm{C} \oplus \mathbbm{C} \end{equation} The isomorphisms (\ref{ComplexifyComplex}) and (\ref{ComplexifyParaComplex}) can easily be written down explicitly in terms of bases. Alternatively we can simply note that $\mathbbm{C}$ and $C$ are real Clifford algebras: \[ Cl_{1,0} \simeq \mathbbm{C} \;,\;\;\; Cl_{0,1} \simeq \mathbbm{R} \oplus \mathbbm{R} \simeq C \;, \] which is obvious from the relations $i^2=-1$ and $e^2=1$ of the generating elements $i$ and $e$. Given this, we can refer to the known fact that the complexifications of these two Clifford algebras are \cite{Lou,Spin}: \[ \mathbbm{C}l_1 = \mathbbm{C} \otimes_{\mathbbm{R}} Cl_{1,0} = \mathbbm{C} \otimes_{\mathbbm{R}} Cl_{0,1} \simeq \mathbbm{C} \oplus \mathbbm{C} \;. \] For models with $2n$ free scalar fields the target spaces are simply (the affine spaces associated to the) vector spaces $M=C^n$ and $M'=\mathbbm{C}^n$. Both are real $2n$-dimensional vector spaces, but carry additional structures: $M'=\mathbbm{C}^n$ is a (complex-) $n$-dimensional vector space, $M=C^n$ is an $n$-dimensional free module over the algebra of para-complex numbers $C$ \cite{EucI,EucIII}. 
Since the complexifications of the underlying algebras $C$ and $\mathbbm{C}$ coincide, the complexifications of $M$ and $M'$ are also isomorphic: \[ M_c \simeq \mathbbm{C} \otimes_{\mathbbm{R}} \mathbbm{C}^n \simeq \mathbbm{C} \otimes_{\mathbbm{R}} C^n \simeq \mathbbm{C}^n \oplus \mathbbm{C}^n \simeq \mathbbm{C}^{2n} \;. \] For models with $2n$ interacting fields, the target spaces $M$ and $M'$ are (para-)complex manifolds, i.e. manifolds modelled on $C^n$ and $\mathbbm{C}^n$, respectively. Both are in particular $2n$-dimensional real manifolds, and the complexification $M_c$ is a (complex-)$2n$-dimensional complex manifold. The tangent spaces of $M,M',M_c$ are $T_PM=C^n$, $T_PM'=\mathbbm{C}^n$ and $T_P M_c = \mathbbm{C}^{2n} \simeq \mathbbm{C}\otimes_{\mathbbm R} C^n \simeq \mathbbm{C} \otimes_{\mathbbm{R}} \mathbbm{C}^n$, respectively. The dynamics of the scalar fields is controlled by the (pseudo-)Riemannian metrics of $M$ and $M'$. To study the effect of complexification, let us start with the case of two free real scalar fields $\sigma$ and $b$. Then the real, positive definite line element of $M'$ is \[ ds^2_{M'} = d \sigma d \sigma + d b db \;. \] This can be complexified by promoting the real fields $\sigma, b$ to complex fields: \begin{eqnarray} \sigma &\rightarrow & \Sigma = \sigma_1 + j \sigma_2 \;, \nonumber \\ b & \rightarrow & B = b_1 + j b_2 \;.\nonumber \label{Complexification} \end{eqnarray} Here $j$ is the imaginary unit associated with the complex structure of $M_c$. The resulting complex line element is \[ ds^2_{M'_c} = d \Sigma d \Sigma + d B d B = [d \sigma_1 d \sigma_1 - d\sigma_2 d \sigma_2 + db_1 db_1 - d b_2 db_2] + 2j [d \sigma_1 d\sigma_2 + d b_1 db_2] \;. \] The line element of $M'$ is recovered by taking the real section $\sigma_2 = b_2=0$. If we take instead the real section $\sigma_2=b_1=0$ we obtain the real line element \[ ds^2 = d \sigma_1 d \sigma_1 - d b_2 db_2 \;, \] which has split signature.
Upon setting $\sigma_1=\sigma$ and $b=b_2$ we obtain the line element \[ ds^2_M = d\sigma d\sigma - d b db \] of $M$. Conversely, if we complexify $ds^2_M$ by (\ref{Complexification}), then we obtain the complex line element \[ ds^2_{M_c} = d \Sigma d\Sigma - d B d B = [d \sigma_1 d \sigma_1 - d\sigma_2 d \sigma_2 - db_1 db_1 + d b_2 db_2] + 2j[ d \sigma_1 d\sigma_2 - d b_1 db_2] \;. \] The line elements $ds^2_{M_c}$ and $ds^2_{M'_c}$ are related by the substitution $B \rightarrow j B$, and define the same complex metric on $M_c$. The complexification can also be formulated in terms of the complex field $Y=\sigma + ib$. Here the distinction between $i$ and $j$ is important to avoid confusion. In terms of $Y$, the line element of $M'$ is \[ ds^2_{M'}= dY d\bar{Y} \;. \] Here `complexification of $Y$' can be understood as `taking $Y$ and $\bar{Y}$ to be independent complex variables'. Using the distinction between $i$ and $j$ we can make this precise: \begin{eqnarray} Y = \sigma + i b &\rightarrow& \Sigma + i B \;,\nonumber \\ \bar{Y} = \sigma - i b &\rightarrow& \Sigma - i B \;,\nonumber \end{eqnarray} with complex fields $\Sigma = \sigma_1 + j \sigma_2$ and $B= b_1 + j b_2$. Similarly, the complex line element of $M_c$ can be obtained by `complexifying the para-complex field' $X=\sigma +e b$. The most general case we are interested in is that of line elements of the form \[ ds^2_{M/M'} = N_{IJ}(\sigma) (d\sigma^I d \sigma^J \mp db^I d b^J) \;. \] The increase in the number of fields does not change much, as we only need to introduce indices $I,J$ to label the fields. If the real metric $N_{IJ}(\sigma)$ is not flat, we need to assume that it is real-analytic in the $\sigma^I$, so that it can be extended analytically to a holomorphic matrix function $N_{IJ}(\Sigma)$ in some neighbourhood of $\sigma^I_{2}=0$. The resulting complex manifold $M_c$ contains $M$ and $M'$ as the real submanifolds $\sigma_2=b_2=0$ and $\sigma_2=b_1=0$, respectively.
For the purpose of embedding $M$ and $M'$ into some complex manifold, it is not relevant how we choose the neighbourhood of $\sigma_2=0$. The resulting line element \[ ds^2_{M_c} = N_{IJ}(\Sigma) (d \Sigma^I d \Sigma^J + d B^I dB^J) \] defines a complex-Riemannian metric on $M_c$. A complex-Riemannian metric on a complex manifold is a complex bilinear form on the holomorphic tangent bundle.\footnote{ A definition of complex-Riemannian manifolds and some further references can be found in \cite{Ivanov}. The extension of the bilinear form to the anti-holomorphic tangent bundle is given by complex conjugation. Taking the holomorphic and anti-holomorphic tangent bundles to be orthogonal, one obtains a natural extension to the full (complexified) tangent bundle.} Note that K\"ahler (more generally Hermitean) manifolds are Riemannian manifolds and not complex-Riemannian manifolds: they carry a positive definite hermitean sesquilinear form on their (complexified) tangent bundle, whose real part is a Riemannian metric. Similarly para-K\"ahler (and pseudo-K\"ahler, para-Hermitean, pseudo-Hermitean) manifolds are pseudo-Riemannian, not complex-Riemannian. The $n$ real shift isometries $b^I \rightarrow b^I + c^I$ induce $n$ complex shift isometries $B^I \rightarrow B^I + C^I$ on $M_c$. Symmetric spaces (which are listed in \cite{Gilmore} and \cite{Helgason}) provide plenty of examples for triples $(M,M',M_c)$. The simplest example is \[ M \simeq \frac{SL_2(\mathbbm{R})}{SO(1,1)}\;,\;\;\; M' \simeq \frac{SL_2(\mathbbm{R})}{SO(2)}\;,\;\;\; M_c \simeq \frac{SL_2(\mathbbm{C})}{GL(1,\mathbbm{C})}\;. \] The space $\frac{SL(2,\mathbbm{R})}{SO(1,1)}$ occurred in several examples in the main paper. In Section 4 we have seen explicitly that this pseudo-Riemannian symmetric space is para-K\"ahler, and that it is related by analytic continuation to the Riemannian symmetric space $\frac{SL(2,\mathbbm{R})}{SO(2)}$, which is K\"ahler.
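As a quick dimension count (for the case $n=1$ of one para-complex, respectively complex, scalar), this triple fits the general pattern described above: \[ \dim_{\mathbbm{R}} \frac{SL_2(\mathbbm{R})}{SO(1,1)} = \dim_{\mathbbm{R}} \frac{SL_2(\mathbbm{R})}{SO(2)} = 3 - 1 = 2 = 2n \;,\;\;\; \dim_{\mathbbm{C}} \frac{SL_2(\mathbbm{C})}{GL(1,\mathbbm{C})} = 3 - 1 = 2 = 2n \;, \] so that $M$ and $M'$ are real $2n$-dimensional, while the complexification $M_c$ is a complex $2n$-dimensional manifold, as required.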
Note that while the above example uses symmetric spaces, the discussion in this appendix applies to analytic (pseudo-)Riemannian manifolds in general. \end{appendix} \subsubsection*{Acknowledgements} We would like to thank Gabriel Lopes Cardoso and Vicente Cort\'es for various valuable discussions. T.M. thanks the Center for Mathematical Physics of the University of Hamburg for support and hospitality during several visits to Hamburg. He also thanks the LMU Munich for hospitality and the Royal Society for support of this work through the Joint Projects Grant `Black Holes, Instantons and String Vacua'.
\section{INTRODUCTION} In contrast with the Standard Model (SM), exotic gauge particles such as leptoquarks and bileptons, predicted in many extensions of the SM, carry global quantum numbers. Bileptons are defined as bosons carrying two units of lepton number and are present in the $3-3-1$ model. The version of the $3-3-1$ model with right-handed neutrinos \cite{RHN} predicts the existence of single-charged and neutral bileptons. The family lepton number violation offers a clear signature for bilepton detection \cite{DION}. The possibility of detecting neutral bileptons in future colliders has not been studied in the literature. In the present work we obtain the partial and the total width of the neutral bilepton $X^0$, two elementary distributions, the total cross section and some final bilepton distributions for the LHC energy regime. \section{MODEL} This version of the model includes right-handed neutrinos for each leptonic family; two quark generations ($m=1,2$) belong to the anti-triplet representation and the third to the triplet representation: \begin{eqnarray} Q_{m L} = && \left(d^\prime_{m},\ -u^\prime_{m},\ D^\prime_{m} \right)_L^T \sim ({\bf 3}, {\bf 3^*}, 0), \nonumber \\ Q_{3L} = && \left( t^\prime, \ b^\prime, \ T^\prime \right)_L^T \sim ({\bf 3}, {\bf 3}, 1/3). \end{eqnarray} The new quarks $D_{1}$ and $D_{2}$ carry $-\frac{1}{3}$ units of positron charge, and $T$ carries $\frac{2}{3}$. The gauge sector has an extra neutral boson ($Z^\prime$) and bileptons ($V^\pm$ and $X^0$).
The important relation, used in the present work, between the $Z^\prime$ and the bilepton ($V^\pm$ and $X^0$) masses is: \begin{eqnarray} {M_{V}\over M_{{Z^{\prime}}}}\simeq {M_{X}\over M_{{Z^{\prime}}}}\simeq{{\sqrt{3-4\sin^2\theta_W}}\over {2\cos\theta_W}}. \end{eqnarray} Numerically, taking $\sin^2\theta_W \simeq 0.231$, this gives $M_{V} \simeq M_{X} \simeq 0.82\, M_{Z^{\prime}}$. The neutral current interactions involving quarks and $Z$ or $Z^{\prime}$ are given by \begin{eqnarray} {\cal L} = - \frac{g}{2\cos\theta_W}\sum_f\bigl\lbrace \bar \Psi_f\, \gamma^\mu\ (g_{v f} - g_{a f}\gamma^5)\ \Psi_f\, Z_\mu + \bar \Psi_f\, \gamma^\mu\ (g^\prime_{v f} - g^\prime_{a f}\gamma^5)\ \Psi_f\, { Z_\mu^\prime} \bigr\rbrace, \end{eqnarray} where the couplings $g^\prime_v$ and $g^\prime_a$ can be found in \cite{EYB}. The Lagrangian for the interaction between the quarks and the $X^0$ is: \begin{eqnarray} {\cal L} = && -\frac{g}{2\sqrt{2}}\lbrace \bar t_{L} \gamma^\mu\ T_{L} - \bar D_{m L}\gamma^\mu\ d_{m L} \rbrace X^0 _\mu + H.c., \end{eqnarray} \section{ RESULTS} Let us consider first the partial and total width of the $X^0$. The main decay modes are $\bar q Q$ ($Q$ carries two units of lepton number) and $\nu \nu$. Fixing the exotic quark masses at $600$ GeV, we obtain the following values for $\Gamma_{X^0}$: $1.54$, $6.7$ and $14.7$ GeV for $M_{X^0}= 800 $, $1000$ and $1200$ GeV, respectively. The main contributions for $X^0$ pair production in $p \, p$ collisions depend on the initial quark $q$ charge. When $q = u$, only $Z$ and $ Z^\prime$ {\it via} s-channel contribute and, on the other hand, when $q = d$ we have an additional t-channel heavy quark exchange contribution. In our calculations we employed the CompHep package \cite{COMP}. Beginning with the dominant elementary processes $u \bar u$ and $d \bar d$, we find some final bilepton distributions. In Figure 1, for example, we display the $X^0$ angular distribution relative to the initial beam direction.
These distributions were calculated by imposing the following cuts on the final states: \centerline{$ -0.95 \leq\cos \theta_{1i} \leq 0.95, \,\, -2.5 \leq y_{i} \leq 2.5, \,\, p_{t} \geq 50$ GeV.} We note that the angular distribution shapes are different for $u$ and $d$ initial quarks. This is expected because $d \bar d$ receives an additional heavy quark $t$-channel contribution, leading to a more asymmetric distribution. \begin{figure*}[t] \centering \includegraphics[width=75mm]{figure1.eps} \includegraphics[width=75mm]{figure2.eps} \caption{Final bilepton $X^0$ angular distribution relative to the initial beam direction, considering the $u \bar u$ channel (left) and $d \bar d$ (right) for two different $Z^\prime$ masses.} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=75mm]{figure3.eps} \includegraphics[width=75mm]{figure4.eps} \caption{Final bilepton $X^{0}$ angular distribution relative to the initial beam direction (left) and $X^0$ rapidity distribution (right), for three different $Z^\prime$ masses.} \end{figure*} \begin{figure*} \centering \includegraphics[width=75mm]{figure5.eps} \caption{Differential cross section vs. $ X^{0}$ pair invariant mass considering three different $Z^\prime$ masses.} \end{figure*} Now for $ p p$ collisions at $\sqrt s = 14$ TeV, we employ the CTEQ661 \cite{CTQ} structure functions. We show in Figure 2 the resulting $X^0$ angular distribution relative to the initial beam direction and the $X^0$ rapidity distribution. The angular distribution shape is almost flat and independent of the $Z^{\prime}$ mass, while the $X^0$ rapidity distribution is more concentrated for higher $Z^{\prime}$ masses. Finally, we show in Figure 3 the differential cross section as a function of the $X^0$ pair invariant mass. We note that this distribution decreases when $M_{Z^{\prime}}$ increases.
\section{CONCLUSION} In this work we have presented some initial results for the production of neutral bileptons predicted in the version of the 3-3-1 model with right-handed neutrinos. We have shown some distributions for the elementary processes and for $pp$ collisions. The inclusion of heavy quarks leads to a unique signature, with the $X^0$ decaying into two leptons via a bileptoquark and a charged bilepton. For an annual luminosity at the LHC (${\cal L} = 100$\, fb$^{-1}$) we find $\simeq 100$ $X^0$ pairs produced {\it per} year. The next step in our analysis is to consider the $X^0$ decay into leptons in order to compare it with the $Z$ decay. This comparison can possibly reveal a signature of the neutral bilepton production. \begin{acknowledgments} E. Ramirez Barreto thanks Capes and Y. A. Coutinho thanks FAPERJ for financial support. \end{acknowledgments}
\section{Introduction} Building a human-like task-oriented dialogue system is a long-term goal of AI. Great endeavors have been made in designing end-to-end task-oriented dialogue systems (TDSs) with sequence-to-sequence (Seq2Seq) models \cite{eric2017copy,madotto2018mem2seq,gangi-reddy-etal-2019-multi,qin2020dynamic,mi2019meta,he2020amalgamating,wang2020dual,qin2021exploring}, which have taken the state of the art of TDSs to a new level. Generally, Seq2Seq models leverage an encoder to create a vector representation of the dialogue history and KB information, and then pass this representation to a decoder to output a response word by word. For example, GLMP \cite{DBLP:journals/corr/abs-1901-04713} is a representative end-to-end TDS, which incorporates KB information into the Seq2Seq model by using a global memory pointer to filter irrelevant KB knowledge and a local memory pointer to instantiate entity slots. Despite the remarkable progress of previous works, the current dominant paradigm for TDSs is to learn a Seq2Seq model on a given dataset specifically for a particular purpose, which is referred to as isolated learning. Such a learning paradigm has limited ability to accumulate the knowledge it has learned before. When a stream of domains or functionalities arrives to be trained sequentially, isolated learning faces catastrophic forgetting \cite{mccloskey1989catastrophic,yuan2020parameter,yuan2020one}. In contrast, humans retain and accumulate knowledge throughout their lives, becoming more efficient and versatile when facing new tasks \cite{thrun1998lifelong}. If one desires to create a human-like dialogue system, imitating such a lifelong learning skill is necessary. Motivated by the fact that a cognitive AI naturally possesses continual learning ability, this paper develops a task-oriented dialogue agent that can accumulate knowledge learned in the past and use it seamlessly in new domains or functionalities.
Continual learning \cite{parisi2019continual,wu2018memory,yuan2020parameter,yuan2020one} is hardly a new idea for machine learning, but remains a non-trivial step toward building empirically successful AI systems. This is especially the case for creating a high-quality TDS. On the one hand, a dialogue system is expected to reuse previously acquired knowledge, but focusing too much on stability may hinder a TDS from quickly adapting to a new task. On the other hand, when a TDS pays too much attention to plasticity, it may quickly forget previously acquired abilities \cite{mallya2018packnet}. In this paper, we propose a continual learning method for \underline{t}ask-oriented dialogue systems with iterative network \underline{p}runing, \underline{e}xpanding and \underline{m}asking (TPEM), which preserves performance on previously encountered tasks while accelerating learning progress on future tasks. Concretely, TPEM adopts the global-to-local memory pointer networks (GLMP)~\cite{DBLP:journals/corr/abs-1901-04713} as the base model due to its powerful performance in the literature and its ease of implementation. We leverage iterative pruning to keep old-task weights and thereby avoid forgetting. Meanwhile, a network expanding strategy is devised to gradually create free weights for new tasks. Finally, we introduce a task-specific binary matrix to mask some old-task weights that may hinder the learning of new tasks. It is noteworthy that TPEM is model-agnostic, since the pruning, expanding, and binary masking mechanisms merely operate on the weight matrices of GLMP. We conduct extensive experiments on seven different domains from three benchmark TDS datasets. Experimental results demonstrate that our TPEM method significantly outperforms strong baselines for task-oriented dialogue generation in the continual learning scenario.
\section{Our Methodology} \subsection{Task Definition} Given the dialogue history $X$ and KB tuples $B$, TDS aims to generate the next system response $Y$ word by word. Suppose a lifelong TDS model that can handle domains 1 to $k$ has been built, denoted as $\mathcal{M}_{1:k}$. The goal of TDS in the continual learning scenario is to train a model $\mathcal{M}_{1:k+1}$ that can generate responses for the $(k+1)$-th domain without forgetting how to generate responses for the previous $k$ domains. We use the terms ``domain'' and ``task'' interchangeably, because each of our tasks is from a different dialogue domain. \subsection{Overview} In this paper, we adopt the global-to-local memory pointer networks (GLMP)~\cite{DBLP:journals/corr/abs-1901-04713} as the base model, which has shown powerful performance in TDS. We propose a continual learning method for TDS with iterative pruning, expanding, and masking. In particular, we leverage pruning to keep the knowledge of old tasks. Then, we adopt network expanding to create free weights for new tasks. Finally, a task-specific binary mask is adopted to mask part of the old-task weights that may hinder the learning of new tasks. The proposed method is model-agnostic, since the pruning, expanding and binary masking mechanisms merely operate on the weight matrices of the encoder-decoder framework. Next, we introduce each component of our TPEM framework in detail. \subsection{Preliminary: The GLMP Model} GLMP contains three primary components: external knowledge, a global memory encoder, and a local memory decoder. We briefly introduce them below; readers can refer to \cite{DBLP:journals/corr/abs-1901-04713} for implementation details. \paragraph{External Knowledge} To integrate external knowledge into the Seq2Seq model, GLMP adopts end-to-end memory networks to encode word-level information for both the dialogue history (dialogue memory) and the structural knowledge base (KB memory).
Bag-of-words representations are utilized as the memory embeddings for the two memory modules. Each object word is copied directly when a memory position is pointed to. \paragraph{Global Memory Encoder} Each input token of the dialogue history is converted into a fixed-size vector via an embedding layer. The embedding vectors go through a bidirectional gated recurrent unit (BiGRU) \cite{chung2014empirical} to learn contextualized dialogue representations. The original memory representations and the corresponding implicit representations are summed up, so that these contextualized representations can be written into the dialogue memory. Meanwhile, the last hidden state of the dialogue representations is used to generate two outputs (i.e., the global memory pointer and the memory readout) by reading from the external knowledge. Note that an auxiliary multi-label classification task is added to train the global memory pointer. \paragraph{Local Memory Decoder} Taking the global memory pointer, the encoded dialogue history, and the KB knowledge as input, a sketch GRU is applied to generate a sketch response $Y^s$ that includes sketch tags rather than slot values. If a sketch tag is generated, the global memory pointer is passed to the external knowledge and the retrieved object word is picked up by the local memory pointer; otherwise, the output word is generated by the sketch GRU directly. To effectively transfer knowledge to subsequent tasks and reduce space consumption, the global memory encoder and external knowledge in GLMP are shared among all tasks, while a separate local memory decoder is learned for each task. \subsection{Continual Learning for TDS} We employ an iterative network pruning, expanding and masking framework for TDS in the continual learning scenario, inspired by \cite{mallya2018packnet,mallya2018piggyback}.
\paragraph{Network Pruning} To avoid ``catastrophic forgetting'' in GLMP, a feasible way is to retain the acquired old-task weights and enlarge the network by adding weights for learning new tasks. However, as the number of tasks grows, the complexity of the model architecture increases rapidly, making the deep model difficult to train. To avoid constructing a huge network, we compress the model for the current task by releasing a certain fraction of negligible weights of old tasks~\cite{DBLP:journals/corr/abs-1803-03635,geng2021iterative}. Suppose that for task $k$, a compact model $\mathcal{M}_{1:k}$ that is able to deal with tasks 1 to $k$ has been created and is available. We then free up a certain fraction of negligible weights (denoted as $\mathbf{W}^F_k$) that have the lowest absolute values by setting them to zero. The weights released from task $k$ are extra weights that can be utilized repeatedly for learning newly arriving tasks. However, pruning a network suddenly changes the network connectivity and thereby degrades performance. To regain the original performance after pruning, we re-train the preserved weights for a small number of epochs. After a period of pruning and re-training, we obtain a sparse network with minimal loss on the performance of task $k$. These network pruning and re-training procedures are performed iteratively for learning multiple subsequent tasks. When inferring task $k$, the released weights are masked in a binary on/off fashion so that the network state stays consistent with the one learned during training. \paragraph{Network Expanding} The amount of preserved weights for old tasks becomes larger as new tasks arrive, leaving fewer free weights for learning new tasks, which slows down the learning process and makes the found solution non-optimal.
An intuitive solution is to expand the model while learning new tasks so as to increase the capacity of the GLMP model for subsequent tasks~\cite{10.1145/3323873.3325053,hung2019compacting}. To effectively perform network expansion while keeping the network architecture compact, we should consider two key factors: (1) the proportion of free weights for new tasks (denoted as $F_k$) and (2) the number of training batches (denoted as $N_k$). Intuitively, it is difficult to optimize parameters that are newly added and randomly initialized with only a small amount of training data. To this end, we define the following strategy to expand the hidden size $H_k$ for the $k$-th task from $H_{k-1}$: \begin{equation} \label{eq:expand} \resizebox{0.87\hsize}{!}{% $H_k = H_{k-1} + \alpha * (P_{k-1} - F_k) * \log(1+N_k/\beta)$} \end{equation} where $\alpha$ and $\beta$ are two hyperparameters, and $P_{k-1}$ is the pruning ratio of task $k-1$. In this way, we tend to expand more weights for tasks that have fewer free weights but more training data. \begin{table*}[ht!] \centering {\renewcommand{\arraystretch}{1.0} \resizebox{2.0\columnwidth}{!}{ \begin{tabular}{ccccccccc} \toprule Task ID & 1 & 2 & 3 & 4 & 5 & 6 & 7 & \\ \midrule Task & Schedule & Navigation & Weather & Restaurant & Hotel & Attraction & CamRest & Avg.
\\ \midrule Ptr-Unk & 0.00/23.33 & 0.36/14.17 & 1.26/12.62 & 1.20/21.21 & 1.66/16.14 & 0.84/19.16 & 8.40/39.45 & 1.96/20.87 \\ Mem2Seq & 0.66/23.32 & 3.87/23.37 & 3.21/38.90 & 1.37/14.17 & 0.95/10.25 & 0.19/4.80 & 10.10/43.07 & 2.91/22.55 \\ GLMP & 0.95/15.01 & 3.91/24.34 & 2.56/27.12 & 6.51/32.76 & 5.24/29.60 & 6.72/30.31 & 16.96/\textbf{52.85} & 6.12/30.28 \\ UCL & 12.60/60.24 & 4.42/33.06 & 4.27/47.93 & 3.57/15.60 & 2.40/10.34 & 1.20/14.24 & 12.77/39.74 & 5.89/31.59 \\ Re-init & 16.21/64.06 & 9.38/42.47 & 11.54/50.30 & 8.97/\textbf{34.06} & 6.52/\textbf{33.60} & 3.78/18.05 & 16.88/48.15 & 10.47/41.53 \\ Re-init-expand & 15.98/64.29 & 9.92/40.15 & 11.50/54.12 & \textbf{9.41}/30.98 & 6.07/31.54 & 5.80/17.56 & 16.60/46.42 & 10.75/40.72 \\ \midrule TPEM & \textbf{16.72}/\textbf{67.15} & \textbf{11.95}/\textbf{49.74} & \textbf{13.27}/\textbf{55.60} & 7.98/31.90 & \textbf{7.07}/30.99 & \textbf{9.11}/\textbf{33.74} & \textbf{17.60}/51.77 & \textbf{11.96}/\textbf{45.84} \\ w/o Pruning & 16.68/66.74 & 11.33/45.01 & 13.07/51.76 & 7.67/30.02 & 6.57/33.25 & 8.96/23.56 & 17.48/52.08 & 11.68/43.20 \\ w/o Expansion & \textbf{16.72}/\textbf{67.15} & \textbf{11.95}/\textbf{49.74} & 11.35/51.85 & 7.40/31.73 & 5.17/32.89 & 8.71/29.63 & 15.17/52.16 & 10.92/45.02 \\ w/o Masking & \textbf{16.72}/\textbf{67.15} & 11.35/48.48 & 11.88/54.25 & 7.29/31.79 & 6.21/32.59 & 8.42/30.78 & 16.71/51.35 & 11.23/45.20 \\ \bottomrule \end{tabular}}} \caption{BLEU/Entity F1 results evaluated on the final model after all 7 tasks are visited. We use Avg. to represent the average performance of all tasks for each method.} \label{table2}% \end{table*}% \paragraph{Network Masking} The preserved weights $\mathbf{W}_k^P$ of old tasks are fixed so as to retain the performance of learned tasks and avoid forgetting. However, not all preserved weights are beneficial to learn new tasks, especially when there is a large gap between old and new tasks. 
To resolve this issue, we apply a learnable binary mask $\mathbf{M}^k$ for each task $k$ to filter out old weights that may hinder the learning of new tasks. We additionally maintain a matrix $\tilde{\mathbf{M}}^k$ of real-valued mask weights, which has the same size as the weight matrix $\mathbf{W}$. The binary mask matrix $\mathbf{M}^k$, which participates in the forward computation, is obtained by passing each element of $\tilde{\mathbf{M}}^k$ through a binary thresholding function: \begin{equation} \mathbf{M}^k_{ij} = \begin{cases} 1, & \mbox{if ~~}\tilde{\mathbf{M}}^k_{ij}>\tau \\ 0, & \mbox{otherwise } \end{cases} \end{equation} where $\tau$ is a pre-defined threshold. The real-valued mask $\tilde{\mathbf{M}}^k$ is updated in the backward pass via gradient descent. After obtaining the binary mask $\mathbf{M}^k$ for a given task, we discard $\tilde{\mathbf{M}}^k$ and only store $\mathbf{M}^k$. The selected weights are then given by $\mathbf{M}^k \odot \mathbf{W}_k^P$, which, together with the free weights $\mathbf{W}_k^F$, are used to learn new tasks. Here, $\odot$ denotes the element-wise product. Note that the old weights $\mathbf{W}^P_k$ are only ``picked'' and remain unchanged during training. Thus, old tasks can be recalled without forgetting. Since a binary mask requires only one extra bit per parameter, TPEM introduces an overhead of approximately 1/32 of the backbone network size, given that a typical network parameter is represented by a 32-bit float value.
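Since all three mechanisms act directly on weight matrices, they can be illustrated with a short numpy sketch. This is not the authors' implementation, only a minimal rendering of magnitude-based pruning, the expansion rule in Eq. (\ref{eq:expand}), and the binary thresholding of the mask; all numeric inputs below are made up for illustration.

```python
import numpy as np

def prune_by_magnitude(W, ratio):
    """Zero out the fraction `ratio` of weights with the smallest |W|,
    returning the pruned matrix and a mask marking the freed positions."""
    k = int(ratio * W.size)
    thresh = np.sort(np.abs(W), axis=None)[k - 1] if k > 0 else -np.inf
    freed = np.abs(W) <= thresh          # ties may free slightly more than k
    return np.where(freed, 0.0, W), freed

def expand_hidden_size(H_prev, P_prev, F_k, N_k, alpha=32, beta=50):
    """Expansion rule: H_k = H_{k-1} + alpha*(P_{k-1} - F_k)*log(1 + N_k/beta)."""
    return int(round(H_prev + alpha * (P_prev - F_k) * np.log(1.0 + N_k / beta)))

def binarize_mask(M_real, tau=0.0):
    """Threshold the real-valued mask into the binary on/off mask M^k."""
    return (M_real > tau).astype(np.float32)

# Toy weight matrix: pruning at ratio 0.5 frees the two smallest entries.
W = np.array([[0.1, -2.0], [0.5, -0.05]])
W_pruned, freed = prune_by_magnitude(W, ratio=0.5)
# Hypothetical task with pruning ratio 0.5, 20% free weights, 100 batches.
H_next = expand_hidden_size(H_prev=128, P_prev=0.5, F_k=0.2, N_k=100)
# Entries strictly above tau survive; zeros and negatives are masked off.
M = binarize_mask(np.array([[0.3, -0.1], [0.7, 0.0]]), tau=0.0)
```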
\section{Experimental Setup} \paragraph{Datasets} Since there is no authoritative dataset for TDS in the continual learning scenario, we evaluate TPEM on 7 tasks from three benchmark TDS datasets: (1) In-Car Assistant~\cite{eric2017copy}, which contains 2,425/302/304 dialogues for training/validation/testing, belonging to the calendar scheduling, weather query, and POI navigation domains; (2) Multi-WOZ 2.1~\cite{budzianowski2018multiwoz}, which contains 1,839/117/141 dialogues for training/validation/testing, belonging to the restaurant, attraction, and hotel domains; and (3) CamRest~\cite{DBLP:journals/corr/WenGMRSUVY16a}, which contains 406/135/135 dialogues from the restaurant reservation domain for training/validation/testing. \paragraph{Implementation Details~~} Following~\cite{DBLP:journals/corr/abs-1901-04713}, the word embeddings are randomly initialized from the normal distribution $\mathcal{N}(0,0.1)$ with a size of 128. We set the sizes of the encoder and decoder to 128. We conduct one-shot pruning with ratio $P=0.5$. The hyperparameters $\alpha$ and $\beta$ are set to 32 and 50, respectively. We use the Adam optimizer to train the model, with an initial learning rate of $1e^{-3}$. The batch size is set to 32 and the number of memory hops is set to 3. We set the maximum number of re-training epochs to 5; that is, we adopt the same re-training epochs for different tasks. We run our model three times and report the average results. \paragraph{Baseline Methods~~} First, we compare TPEM with three widely used TDSs: \textbf{Ptr-Unk}~\cite{eric2017copy}, \textbf{Mem2Seq}~\cite{madotto2018mem2seq}, and \textbf{GLMP}~\cite{DBLP:journals/corr/abs-1901-04713}. In addition, we compare TPEM with \textbf{UCL}~\cite{ahn2019uncertainty}, a popular continual learning method. Furthermore, we report results obtained by the base model when its parameters are re-initialized after each task has been visited (denoted as \textbf{Re-init}).
We also report the results of Re-init with network expansion (denoted as \textbf{Re-init-expand}). Different from GLMP, which keeps learning a TDS by utilizing parameters learned from past tasks as initialization for the new task, both Re-init and Re-init-expand save a separate model for each task at inference time, without considering the continual learning scenario. \begin{figure*}[t] \centering \includegraphics[width=2\columnwidth]{task1_7.png} \caption{The change of BLEU/Entity F1 scores for each task during the whole learning process (i.e., after learning new tasks).} \label{fig:task1_7} \end{figure*} \begin{figure}[t] \centering \includegraphics[width=1\columnwidth]{shuffle.png} \caption{The average results of TPEM over 7 domains with 5 different orderings randomly sampled from the 7 domains.} \label{fig:shuffle} \end{figure} \section{Experimental Results} \paragraph{Main Results} We evaluate TPEM and the baselines with BLEU~\cite{papineni2002bleu} and entity F1~\cite{madotto2018mem2seq}. We conduct experiments following the common continual learning setting, where experimental data from 7 domains arrives sequentially. The results of each task are reported after all 7 tasks have been learned; that is, each model keeps learning a new task by using the weights learned from past tasks as initialization. The evaluation results are reported in Table \ref{table2}. The typical TDSs (i.e., Ptr-Unk, Mem2Seq, GLMP) perform much worse than the continual learning methods (UCL and TPEM). This is consistent with our claim that conventional TDSs suffer from catastrophic forgetting. TPEM achieves significantly better results than the baseline methods (including Re-init and Re-init-expand) on both new and old tasks. The improvement mainly comes from the iterative network pruning, expanding and masking.
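For reference, entity F1 in this line of work is computed by micro-averaging precision and recall over gold and predicted entity sets. The following is a simplified, set-based sketch rather than the exact benchmark evaluation script (which may differ in normalization and matching details); the entity names in the example are hypothetical.

```python
def entity_f1(gold_entities, pred_entities):
    """Micro-averaged F1 over per-response entity sets.

    gold_entities / pred_entities: lists (one entry per response) of
    iterables of entity strings.
    """
    tp = fp = fn = 0
    for gold, pred in zip(gold_entities, pred_entities):
        gold, pred = set(gold), set(pred)
        tp += len(gold & pred)   # entities correctly generated
        fp += len(pred - gold)   # spurious entities
        fn += len(gold - pred)   # missed entities
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# One response, one of two gold entities recovered: P = R = F1 = 0.5.
score = entity_f1([["pizza_hut", "5_miles"]], [["pizza_hut", "2_miles"]])
```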
\paragraph{Ablation Study} To investigate the effectiveness of each component of TPEM, we conduct an ablation test by removing network pruning (w/o Pruning), network expansion (w/o Expansion), and network masking (w/o Masking). The experimental results are reported in Table \ref{table2}. The performance of TPEM drops more sharply when discarding network pruning than when discarding the other two components. This is within our expectation, since the expansion and masking strategies rely, to some extent, on network pruning. Not surprisingly, combining all the components achieves the best results. Furthermore, by comparing the results of Re-init and Re-init-expand, we observe that network expanding alone cannot improve the performance of Re-init. \paragraph{Case Study} We provide a visual analysis of the intermediate states of all the models. Figure~\ref{fig:task1_7} shows how the results of each task change as new tasks are learned subsequently. Taking the third task as an example, we observe that the performance of the conventional TDSs and UCL starts to decay sharply after learning new tasks, probably because the knowledge learned from these new tasks interferes with what was learned previously. In contrast, TPEM achieves stable results over the whole learning process, without suffering from knowledge forgetting. \paragraph{Effect of Task Ordering} To explore the effect of task ordering on our TPEM model, we randomly sample 5 different task orderings in this experiment. The average results of TPEM over the 7 domains with the 5 different orderings are shown in Figure \ref{fig:shuffle}. Although our method behaves differently under different task orderings, TPEM is in general insensitive to ordering, because the results show similar trends, especially for the last 2 tasks. \section{Conclusion} In this paper, we propose a continual learning method for task-oriented dialogue systems with iterative network pruning, expanding and masking.
Our dialogue system preserves performance on previously encountered tasks while accelerating learning progress on subsequent tasks. Extensive experiments on 7 different tasks show that our TPEM method performs significantly better than the compared methods. In the future, we plan to automatically and adaptively choose the pruning ratio and the number of re-training epochs in the network pruning process for each task. \section*{Acknowledgments} This work was partially supported by the National Natural Science Foundation of China (No. 61906185), the Natural Science Foundation of Guangdong Province of China (No. 2019A1515011705), the Youth Innovation Promotion Association of CAS China (No. 2020357), the Shenzhen Science and Technology Innovation Program (Grant No. KQTD20190929172835662), and the Shenzhen Basic Research Foundation (No. JCYJ20200109113441941). \bibliographystyle{acl_natbib}
\section{Introduction} The rate and direction of inventive activity have been recognized as one of the main themes in economics since at least the conference of the same title in 1960 (\cite{Nelson1962}, \cite{LernerStern2012}). Whereas the rate of innovation has been studied extensively, research on its direction has seen much less progress. Recent studies suggest the direction of scientific change is both an important choice for individual researchers and a critical outcome for scientific communities (\cite{AzoulayEtAl2019}, \cite{Myers2020}). These observations, along with the central role of product differentiation in the theory of industrial organization (IO), suggest the direction of inventive activity is important for firms and industries as well. Mapping the locations and directions of firms' research and development (R\&D) activities is a challenging problem because technological space is unstructured and has many dimensions, unlike physical/geographical space.\footnote{Whereas a large literature exists on the geography of innovation (pioneered by \cite{JaffeTrajtenbergHenderson1993}), relatively few papers explore technological space, because of methodological challenges.} Even a relatively high-level classification system by the US Patent and Trademark Office (USPTO) uses more than 400 categories (patent classes), and large firms frequently conduct R\&D in more than 100 classes, obtaining thousands of patents each year. Thus, the dimensionality of the action/state space is extremely high, and infinitely many directions of inventive activity are possible in principle. Before we can hope to model and understand the technological space, developing a method for mapping it and documenting empirical regularities (i.e., exploratory data analysis) would be a useful step. Given the high dimensionality of the problem, some dimensionality reduction is clearly needed. 
Commonly used methods include principal component analysis (PCA), multi-dimensional scaling (MDS), and various algorithms for clustering (e.g., k-means clustering). However, even though these existing methods provide some simplified visualization and description, fundamental issues remain unresolved: collapsing data in space or time would eliminate useful information about the direction of inventive activity. For example, Figure \ref{Figure - mapper(n20_m0_cos)} (a) shows a PCA that projects onto a two-dimensional plane 333 major firms' patent portfolios (vectors of logged patent counts across 430 USPTO classes) in 1976--2005. It visualizes the data and shows some patterns. For instance, huge clusters of points on the left side would seem to suggest many firms conduct R\&D in close proximity, but this ``densely populated area'' could partly be an artifact of collapsing the other 428 dimensions.\footnote{ See Appendix Figure \ref{Figure - mapper(cos_log_pca3d)} for a three-dimensional PCA, which is slightly more informative than the two-dimensional PCA. Note the other 427 dimensions are still missing.} Similar issues arise in other existing methods, due to information loss (see section 6 for an example of clustering). Thus, a faithful representation of the positions and directions of R\&D requires new descriptive tools that avoid arbitrarily collapsing data, provide intuitive visualizations of how firms' patent portfolios evolve over time, and permit quantification of these dynamics. \begin{figure}[htb!]
\caption{Firms' Locations in Technological Space, 1976--2005}% \begin{subfigure}{0.5\textwidth} \caption{Two-Dimensional PCA}% \centering \includegraphics[width=0.9\linewidth]{Figures/cos_log_pca2d_no_axes.eps} \end{subfigure} \begin{subfigure}{0.5\textwidth} \caption{Shape Graph by Mapper}% \centering \includegraphics[width=0.9\linewidth]{Figures/cos_log_n20.eps} \end{subfigure} \caption*{\footnotesize {% \textit{Note}: Both pictures represent the evolution of 333 major firms' portfolios of US patents that are acquired by in-house R\&D between 1976 and 2005. Each firm-year is a vector of log patent counts across 430 technological classes. The left panel is a two-dimensional PCA (red markers are IT firms, green markers are drug makers, and blue markers are all others). See Appendix Figure \ref{Figure - mapper(cos_log_pca3d)} for a three-dimensional PCA. The right panel is a Mapper graph based on the same data (see sections 3 and 4 for details).}}% \label{Figure - mapper(n20_m0_cos)} \end{figure}% This paper presents such a new method to represent firms' locations as a combinatorial/topological object (shape graph), which can be easily visualized and quantified in a variety of ways using graph theory. We adapt and extend a tool from computational topology called the Mapper procedure \parencite{singh2007topological}. This algorithm is well founded on mathematical concepts from computational topology and geometry, such as the Reeb graph, and aims to preserve the topological and geometric information of the original data, in two steps. First, it clusters data points in each local neighborhood based on a distance metric of one's choice (e.g., cosine distance). Second, it connects clusters with edges if a pair of clusters shares at least one data point. 
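The two steps just described can be sketched in code. The toy implementation below uses a one-dimensional filter function, a fixed overlapping interval cover, and single-linkage clustering under a distance threshold; all of these are illustrative choices, not the exact specification used in this paper (which clusters under a cosine metric in the full 430-dimensional space).

```python
import numpy as np
from itertools import combinations

def mapper_graph(X, filter_vals, n_intervals=4, overlap=0.5, eps=1.0):
    """Build a Mapper-style shape graph.

    Step 1: cover the filter range with overlapping intervals and cluster
    the points inside each interval (single-linkage, threshold eps).
    Step 2: connect two clusters by an edge if they share a data point.
    """
    X = np.asarray(X, dtype=float)
    filter_vals = np.asarray(filter_vals, dtype=float)
    lo, hi = filter_vals.min(), filter_vals.max()
    # Interval length so that n_intervals intervals with fractional
    # `overlap` exactly cover [lo, hi].
    length = (hi - lo) / (n_intervals - (n_intervals - 1) * overlap)
    step = length * (1 - overlap)
    clusters = []
    for i in range(n_intervals):
        a = lo + i * step
        idx = np.where((filter_vals >= a) & (filter_vals <= a + length))[0]
        remaining = set(idx.tolist())
        while remaining:  # grow single-linkage components under eps
            comp = {remaining.pop()}
            grew = True
            while grew:
                grew = False
                for j in list(remaining):
                    if any(np.linalg.norm(X[j] - X[c]) <= eps for c in comp):
                        comp.add(j)
                        remaining.discard(j)
                        grew = True
            clusters.append(frozenset(comp))
    edges = [(u, v) for u, v in combinations(range(len(clusters)), 2)
             if clusters[u] & clusters[v]]
    return clusters, edges

# Toy example: four points on a line, with the coordinate itself as filter.
clusters, edges = mapper_graph(np.array([[0.0], [1.0], [2.0], [3.0]]),
                               np.array([0.0, 1.0, 2.0, 3.0]),
                               n_intervals=2, overlap=0.5, eps=1.0)
```

The two overlapping intervals produce one cluster each, and because the clusters share points, the resulting shape graph is a single edge: the continuity of the line is preserved rather than discretized away.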
Hence, even though the resulting graph might appear to visualize data on a two-dimensional plane---see Figure \ref{Figure - mapper(n20_m0_cos)} (b)---as in the PCA plot, the shape graph retains the notions of proximity and continuity (in the original space) with edges between neighboring nodes. We apply this method to the dynamic evolution of the 333 major firms' patent portfolios across 430 USPTO classes in 1976--2005, and discover empirical regularities that are intuitive and relevant, which we document both qualitatively and quantitatively. We find many engineering firms remain undifferentiated and cluster together in the densely populated ``trunk'' or the ``continental'' part of the map. However, a few dozen firms, primarily in the information technology (IT) sector, start differentiating from the rest in the 1980s and the 1990s, developing unique portfolios and exhibiting distinctive trajectories, as represented by long ``branches'' or ``flares'' that spike out of the main trunk. In the topological space, which is coordinate free, these shapes and their time evolution provide explicit signatures of the unique ``directions'' of inventive activity. We propose a formal definition of such flares based on graph theory, as well as a computational method to measure their length, which suggests 40.3\% of the firms in our data exhibit some flares. We assess the empirical relevance of this ``uniqueness'' measure by following a traditional approach in the economics of innovation, such as \cite{PakesGriliches1984} and \cite{hall2005market}, which is to evaluate its statistical relationships with the firms' financial performances (revenue, profit, and market value). Regression results suggest positive correlations between the flare length and the performance metrics.
This association is statistically significant at conventional levels, and economically significant in magnitude (e.g., an extra length of flare in 1976--2005 is associated with 31\%--40\% higher performances as of 2005). Moreover, these patterns continue to hold (i) within each sector and industry, (ii) after controlling for portfolio size (i.e., total patent count), and (iii) in balanced-panel subsamples (i.e., after controlling for firm survivorship). We further investigate how our method compares with that of \cite{JAFFE198987}, which is based on k-means clustering and is one of the most prominent methods to study firms' locations in the technological space. Our method differs from Jaffe's in two ways: data-transformation protocol (logs vs. shares) and the scope of clustering (local vs. global). The first difference is trivial in the sense that we can use his protocol within our scheme as well, but the second is more fundamental. Whereas Jaffe's global clustering is essentially a big \textit{discretization} operation that generates a list of disjoint clusters of firms, ours is designed to retain and recover the \textit{continuum} of firms and industries in the original data. Using Jaffe's measure within our method, we show that seemingly unrelated industries are in fact connected in the technological space, sometimes in surprising ways. Thus, our approach is complementary to the existing methods and can generate new insights that are difficult to obtain otherwise. It helps us answer some of the most basic questions, including where firms innovate, how industries and technologies are connected, and how they evolve over time, without losing the shape and continuity of the original data. Because its underlying mathematics is general, we believe this method is potentially useful for describing and characterizing other high-dimensional data in economics as well, such as product characteristics and international trade. 
\medskip We organize the rest of the paper as follows. The remainder of section 1 provides the literature context in economics. Section 2 introduces the idea of topological data analysis (TDA) and explains our method. Section 3 explains the data. Sections 4 and 5 provide qualitative and quantitative explanations of the main results, respectively. Section 6 presents an alternative version of the Mapper graph based on Jaffe's protocol, and compares and contrasts it with his method. Section 7 concludes. \paragraph{\protect Related Literature in Economics} Methodologically, we study the problem of characterizing firms' behavior/status in a high-dimensional space (i.e., when the dimensionality of the action/state space is large). Exploratory data analysis (EDA) has been a well-developed idea in statistics for summarizing the characteristics of datasets since at least \cite{tukey1977}, and has more recently been rebranded as part of machine learning. As such, this paper fits into the growing literature that introduces novel techniques from mathematics, statistics, and computer science to analyze high-dimensional data.\footnote{For surveys and examples, see \cite{belloni2014high}, \cite{varian2014big}, \cite{athey2017state}, \cite{brumm2017using}, \cite{ChernozhukovEtAl2018ECTJ}, \cite{cattaneo2019two}, \cite{gentzkow2019text}, \cite{IskhakovRustSchjerning2020}, and \cite{Igami2020}.} The distinguishing feature of our paper from the rest of EDA and the ``machine learning in economics'' literature is the use of TDA. In fact, to our knowledge, this paper is the first in economics to apply and extend TDA methods in general, and the Mapper algorithm in particular.\footnote{To be precise, we are aware of several applications of TDA to financial time series (\cite{Gidea2017}, \cite{GIDEA2018820}, \cite{GUO2020124956}, \cite{GOEL2020113222}, \cite{MAJUMDAR2020113868}, \cite{QIU2020113475}).
However, they are relatively short papers primarily by non-economists and published in either computer science or physics journals, with limited connections to economics.} We explain TDA and its literature in section 2. Substantively, this paper builds on and contributes to IO and the economics of innovation in general, and the empirical analysis of R\&D, patents, and firm performance in particular. The literature is too large to summarize here; see \cite{Griliches1990} for an overview of patent statistics as an indicator of innovation, and \cite{Cohen2010}, \cite{NagaokaMotohashiGoto2010}, and \cite{LernerSeru2017} for surveys. Despite well-known limitations,\footnote{For example, see Griliches (1990), Lerner and Seru (2017), and Igami and Subrahmanyam (2019).} patent statistics remain one of the most systematic sources of information on inventive activities, and continue to play a central role in the economics of innovation.\footnote{Recent micro-econometric publications at top economics journals that use patents include \cite{cockburn2016patents}, \cite{howell2017financing}, \cite{AzoulayEtAl2019}, and \cite{sampat2019patents}, among many others. A major data-cleaning project is undertaken by \cite{AroraBelenzonSheer2019}, who extend the NBER patent database of \cite{hall2001nber} and \cite{bessen2009nber} to 2015, linking firm names to the Compustat financial data. Since December 2019, such efforts to construct and improve patent-based innovation metrics have been renewed and coordinated under the \textit{NBER Innovation Information Initiative} organized by Adam Jaffe.} The most closely related work to ours is \cite{JAFFE198987}, who pioneered the use of patent-class information to characterize firms' positions, and is therefore a direct predecessor of this paper. 
The literature has continued to use both his distance metric and his clustering algorithm (cosine distance and k-means clustering, respectively).\footnote{For example, \cite{BloomVanReenenSchankerman2013} present a sophisticated version of Jaffe's (1986) study of R\&D spillovers. \cite{moser2020immigration} use k-means clustering to simplify the areas of science. \cite{BennerWaldfogel2008} scrutinize the USPTO's classification procedures, investigate statistical biases in the analysis of patent-class data, and offer practical suggestions. \cite{BarLeiponen2012} propose a new distance metric, called the min-complement distance, which satisfies a desirable property (independence of irrelevant patent classes) that no other conventional measures satisfy.} Finally, this paper also contributes to the rapidly growing literature that applies ``big data'' techniques to patents. Because patents are fundamentally legal documents, researchers are introducing a range of modern tools from computational linguistics to conduct text analysis (e.g., \cite{bryan2016impact}, \cite{kuhn2020patent}, \cite{MyersLanahan2020research}).\footnote{Similarly, \cite{hoberg2016text} use text analysis on firms' Form 10-K product descriptions to describe and characterize product-market competition.} Whereas these papers focus on (i) converting text in the original documents into high-dimensional numerical data and (ii) proposing dissimilarity measures between patents, our method takes any kind of (i) and (ii)---text-based or not---as given, and (iii) aims to faithfully represent the shape of such ``big data,'' in a form that permits visualization and further analysis. Thus, our proposal is complementary to both Jaffe's traditional framework and more recent (e.g., text-based) approaches. \section{Method} This section explains the idea of TDA, the Mapper algorithm, our specifications, and our original method for detecting and measuring flares.
\subsection{Brief Introduction to TDA} Most data-analysis techniques in economics and elsewhere concern the evaluation of parameters or other quantities that characterize the system (the data-generating process, or DGP).\footnote{This and the next paragraph borrow expositions from \cite{EpsteinCarlssonEdelsbrunner2011} and \cite{SizemoreEtAl2018}.} However, not all aspects of a system are readily summarized by numerical quantities. In particular, the ``shape'' of the data (i.e., the properties that remain invariant under ``stretching'' and ``shrinking,'' e.g., loops and branching patterns) can offer significant insight into real phenomena. Shape is a somewhat nebulous concept and may appear too intuitive to define precisely and describe quantitatively, but the unique strength of TDA is its ability to capture and summarize such information in a useful, small representation of the data. Even though it is not among the usual tools for empirical economists, topology as an area of pure mathematics has existed for more than a century, and provides a theoretical foundation for the analysis of shapes. The adaptation of topological techniques to real data has been undertaken only recently (\cite{edelsbrunner2000topological}, \cite{zomorodian2005computing}, \cite{carlsson2009topology}, \cite{edelsbrunner2010computational}). Nevertheless, TDA has already been successfully applied to an increasing number of fields, including biology, chemistry, and materials science (e.g., \cite{nicolau2011topology}, \cite{hiraoka2016hierarchical}). See \cite{chazal2017introduction} for a brief introduction. Among the techniques in TDA, the study of persistent homology has emerged as the most popular.\footnote{\cite{EpsteinCarlssonEdelsbrunner2011} explain the popularity of homology groups by pointing out that they offer an attractive combination of strong explanatory power, a clear intuitive meaning, and a low computational cost.
Because the notion of shape within (finite) datasets is inevitably stochastic, and because homology is sensitive to noise in the data, \textit{persistent} homology is used to quantify the stability of geometric features with respect to perturbations, so that real phenomena could be distinguished from artifacts of noise.} However, its application to high-dimensional data is constrained by the computational cost of constructing combinatorial models (e.g., \v{C}ech complex, Alpha complex, Rips complex, etc.), which requires one to check higher-order intersections of the balls in that space and to store all the information. Various methods have been proposed to address this ``curse of dimensionality,'' but persistent homology can handle only tens of dimensions in the current state of the art. By contrast, Mapper can easily handle thousands and even millions of dimensions, by focusing on the global topology of the data and providing simplified representations of their shape via nonlinear transformations.\footnote{For example, \cite{rizvi2017single} use Mapper to study single-cell gene expression, where the number of dimensions equals the number of expressed genes (up to 10,000).} Thus, whereas persistent homology offers a fine-grained characterization of cavities in relatively low-dimensional data, Mapper enables a relatively coarse characterization of \textit{very} high-dimensional data, which makes it particularly suitable for our empirical context. Since \cite{singh2007topological} introduced Mapper, it has been applied to study an RNA folding pathway \parencite{yao2009topological}, the DNA microarray data of breast cancer \parencite{nicolau2011topology}, cellular differentiation and development \parencite{rizvi2017single}, and the organization of whole-brain activity maps \parencite{saggar2018towards}. Methodologically, \cite{lum2013extracting} is the most closely related work to ours, because they also propose a flare-detection algorithm.
Their method uses global graph-theoretic properties that are applicable to any graph, without using any additional information from the Mapper algorithm.\footnote{Specifically, their flare detection algorithm uses the $0$-dimensional persistent homology \parencite{edelsbrunner2000topological} of the graph filtered by an eccentricity measure on its nodes. An eccentricity measure tends to give a higher value to nodes that are ``eccentric'' (on tips of flares) compared to central nodes (on the trunks).} By contrast, our algorithm takes advantage of the particularities of our Mapper graph, where each node is a set of firm-years. We ensure that each flare we identify is associated with a specific firm. Hence, it can be interpreted as \textit{a flare of that firm}. \subsection{The Mapper Algorithm} \label{sec:mapper_algorithm} We provide a quick review of the Mapper method introduced by \cite{singh2007topological}. Given some complicated and high-dimensional data, Mapper provides a simplified representation of the data via a graph that captures some of its important ``topological features'' such as branching, flares, and islands. We assume the data are given as a set of points $X$ together with a dissimilarity function $\delta:X\times X\rightarrow \mathbb{R}_{\geq 0}$. The Mapper graph is constructed in four steps. \begin{enumerate} \item Project $X$ into $\mathbb{R}^d$ by some filter function $f:X \rightarrow \mathbb{R}^d$. \item Cover the image $f(X)$ using an overlapping cover $\mathcal{C} = \{C_j\}_{j=1}^J$. \item For each cover element $C_j$, apply some clustering algorithm to its pre-image $f^{-1}(C_j)$ based on the dissimilarity function $\delta$ to obtain a partition of $f^{-1}(C_j)$ into $K_j$ clusters, $V_{j,k}$ ($k=1,\hdots,K_j$):\footnote{The notation $\sqcup$ represents a disjoint union.} \begin{equation} f^{-1}(C_j) = \bigsqcup_{k=1}^{K_j} V_{j,k}. \end{equation} \item Construct the graph $G$ with nodes (vertices) consisting of all $V_{j,k}$s.
Connect two nodes, $V_{j,k}$ and $V_{j',k'}$, by an edge if $V_{j,k} \cap V_{j',k'} \neq \emptyset$. \end{enumerate} \begin{figure}[htb!!!!]\centering% \caption{Illustration of the Mapper Procedure} \includegraphics[width=0.9\textwidth]{Figures/Figure_steps.eps} \label{fig:mapper_proc} \end{figure}% Figure \ref{fig:mapper_proc} illustrates this procedure with an example. Let us start with data $X$ given by points in two-dimensional space. Our goal is to obtain a simplified representation of $X$ while preserving its topological features, such as holes and branches. In step 1, we project $X$ onto the horizontal axis (i.e., $d=1$). This operation reduces the dimensionality of the data by eliminating the second dimension (i.e., information on the vertical axis in this case). In step 2, we cover these points on the horizontal axis by four equal-sized intervals (i.e., cover elements) $C_1, C_2, C_3$, and $C_4$ (i.e., $J=4$) with overlaps.\footnote{The degree of overlap is approximately 20\% in the pictured example. The analyst can alter it (see section 2.3).} In step 3, we look at each interval $C_j$, and cluster adjacent points \textit{in the original two-dimensional data space}. In step 4, we represent these clusters by nodes, and connect them with edges whenever adjacent clusters share the same points within their overlapping regions. The resulting graph is much simpler than the original data and amenable to graph-theoretic analyses, but it still preserves the ``global structure'' of $X$ (i.e., topological features that span multiple local regions, such as loops and long branches/flares). By contrast, using conventional techniques for dimensionality reduction alone would be similar to performing only step 1. Likewise, directly performing clustering in the original data would be the same as skipping steps 1 and 2, which would probably generate a single big cluster for the entire data in this case.
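To make the four steps concrete, the toy example can be sketched in a few lines of Python. This is a minimal illustration under simplifying assumptions (a one-dimensional filter, Euclidean single-linkage clustering cut at a fixed distance, and illustrative parameter names); it is a sketch, not the KeplerMapper implementation used in our analysis.

```python
import numpy as np

def mapper_graph(X, n_intervals=4, overlap=0.2, cluster_gap=0.3):
    """Toy Mapper with a one-dimensional filter (projection onto the x-axis).
    Parameter names are illustrative. Clustering: connected components of
    points within `cluster_gap` of each other, i.e., single-linkage
    clustering cut at that distance."""
    f = X[:, 0]                                    # step 1: filter function
    lo, hi = f.min(), f.max()
    width = (hi - lo) / n_intervals
    members = []
    for j in range(n_intervals):                   # step 2: overlapping cover
        a = lo + j * width - overlap * width
        b = lo + (j + 1) * width + overlap * width
        idx = np.where((f >= a) & (f <= b))[0]
        unvisited = set(idx)
        while unvisited:                           # step 3: cluster the pre-image
            stack = [unvisited.pop()]
            cluster = set(stack)
            while stack:
                u = stack.pop()
                near = {v for v in unvisited
                        if np.linalg.norm(X[u] - X[v]) <= cluster_gap}
                unvisited -= near
                cluster |= near
                stack.extend(near)
            members.append(cluster)                # one graph node per cluster
    edges = [(p, q) for p in range(len(members))   # step 4: connect overlaps
             for q in range(p + 1, len(members))
             if members[p] & members[q]]
    return members, edges
```

Cutting single-linkage clustering at a fixed distance is equivalent to taking connected components of the graph that links points closer than `cluster_gap`, which is what the inner loop computes.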
Neither approach (projection alone or clustering alone) would be able to recover the \textit{shape} of the data (i.e., a collection of global structures). For this particular example, the usefulness of the Mapper graph is limited, as the original data are only two-dimensional and can be readily visualized. However, for more complicated high-dimensional data, a simplified graph representation offers a helpful visual aid. One way to interpret the Mapper procedure is to view it as a kind of local clustering together with ``global reconstruction'' (i.e., replication of global structures). The choice of the filter function and cover determines the local regions $f^{-1}(C_j) \subset X$ of the data. Then, the clustering algorithm is applied only locally, to each local region. The construction of the graph $G$ recovers some of the global information by connecting nodes (each of which is a cluster of points in $X$) whenever they share points in the original data. \subsection{Application of Mapper to Our Data} \label{sec: our mapper} Recall that our data are a panel of $333$ firms' yearly patent applications (and/or acquisitions) across $430$ technology classes between the years $1976$ and $2005$.\footnote{Section 3 explains the data more thoroughly, including their sources and summary statistics.} For each firm $i$, each year $t$, and each patent class $c$, we have patent count $p_{i,t,c}$. We regard each firm-year as a single observation represented by a vector $p_{i,t} \in \mathbb{R}^{430}$. Hence, each firm-year is a point $p_{i,t}$ in the 430-dimensional technology space. Firms' patent applications in any single year tend to be volatile and less representative of their underlying R\&D activities. This issue is particularly important in the use of patent-class data, as \cite{BennerWaldfogel2008} point out. We follow their recommendation to smooth out yearly fluctuations by using a five-year moving window: \begin{equation} \tilde p_{i,t} = p_{i,t} + p_{i,t+1} + \hdots + p_{i,t+4}.
\end{equation} Another practical consideration is the highly skewed distribution of patent counts, which we explain in section 3. We address this issue by applying a logarithmic transform,\footnote{We use an alternative data-transformation protocol (calculating shares of classes within each firm-year) in section 6, in which we compare our method with Jaffe's (1989).} \begin{equation} \tilde p_{i,t} \mapsto \ln (\tilde p_{i,t} + 1) =: x_{i,t}. \end{equation} Let $X = \{x_{i,t}\}$ be the point cloud consisting of the transformed data. We use the following specifications in constructing a Mapper graph for $X$. We use the Python implementation, KeplerMapper, by \cite{KeplerMapper2019}. \begin{enumerate} \item The filter function is $f:X \rightarrow \mathbb{R}^2$, which projects $X$ to its first two principal axes as obtained by PCA.\footnote{Note that we use PCA only to determine the local regions. The subsequent clustering is performed in each pre-image in the original space and \textit{not} in the PCA space.} \item For the cover of the image of $f$, we use the default cover implementation in KeplerMapper. We set the resolution level (called the ``number of cubes,'' $n$) to $20$ in our baseline specification, and the degree of overlap, $o$, to 50\%, which creates a $20 \times 20$ grid of overlapping squares.\footnote{We set $n$ to $15$ and $25$, and $o$ to $30\%$, in sensitivity analysis, which result in qualitatively similar Mapper graphs. These choices are based on the following practical considerations. Lower resolution levels ($n$) lead to Mapper graphs that are too simple and coarse to reveal interesting structures in the data; higher values of $n$ lead to more detailed outputs that are computationally more difficult to render and navigate.
Similarly, lower degrees of overlap ($o$) lead to graphs without much connectivity to study; higher values of $o$ lead to too many trivial connections that increase computational burden without additional insights.} \item For the clustering algorithm, we use single-linkage clustering together with the heuristic proposed by \cite{singh2007topological} for choosing the number of clusters. \end{enumerate} For the dissimilarity measure between points in $X$, we use the cosine distance in our baseline specification, because it is the most commonly used one in the innovation literature, \begin{equation} \delta(x_{i,t},x_{i',t'})= 1 - \frac{\sum_{c} x_{i,t,c} x_{i',t',c}}{\sqrt{\sum_{c} x_{i,t,c}^{2}} \sqrt{\sum_{c} x_{i',t',c}^{2}}}, \end{equation} where $\delta(x_{i,t},x_{i',t'})$ is the distance between firm-years $(i,t)$ and $(i',t')$, and $x_{i,t,c}$ is firm-year $(i,t)$'s patent count in class $c$ in the transformed data.\footnote{As a sensitivity analysis, we also use other distance measures, including Euclidean, correlation, min-complement, and Mahalanobis. The results are broadly similar (see Appendix).} \subsection{Detection of Flares} \label{subsec:detection_flares} As section~\ref{sec:mapper_algorithm} explained, Mapper provides a simplified representation of complicated data via a graph $G$ that captures some of its most important topological features. In this section, we discuss the detection of one such feature: flares. Let us review some basic concepts from graph theory. In general, a \emph{graph} $G=(V,E)$ is a set $V$ of nodes (vertices) and a set $E$ of edges. We assume that each edge $e \in E$ of $G$ is assigned the weight $w(e) = 1$.\footnote{The theory can be extended to handle positive weights $w(e) > 0$ that are different across edges.} For $u, v \in G$, the \emph{length} $\ell(p)$ of a path $p$ from $u$ to $v$ is the sum of the weights of the edges of $p$.
The \emph{distance} $d_G(u,v)$ between $u$ and $v$ is the minimum length of all paths $p$ in $G$ from $u$ to $v$. For simplicity, we write $d(u,v)$ for $d_G(u,v)$. For a graph $G$ and a subset $V'$ of the nodes of $G$, the \emph{full subgraph} of $G$ with nodes $V'$, denoted by $G[V']$, is the graph with the set of nodes $V'$ and edges consisting of all edges of $G$ whose endpoints are both in $V'$. It is the maximal subgraph of $G$ with set of nodes $V'$. \begin{definition}[Ball] Let $r \in \mathbb{R}$ and $u \in G$. The (closed) ball $B_r(u)$ in $G$ is \[ B_r(u) = G[\{v \in G \mid d(u,v) \leq r\}]. \] In words, it is the full subgraph of $G$ of all nodes at most distance $r$ from $u$. \end{definition} Now, consider a graph $G=(V,E)$ obtained from the Mapper algorithm applied to our data. From the construction of the Mapper graph, each node $v \in V$ will consist of points (firm-years) of the form $x_{i,t}$. To simplify, we adopt the following notation, because we want to consider firms and not firm-years for the analysis. \begin{notation} In the setting above, firm $i$ is said to be in node $v$, or, equivalently, $v$ contains firm $i$ if node $v$ contains an observation of firm $i$ at some time $t$, that is, $x_{i,t} \in v$ for some $t$. In this situation, we write $i \in v$. \end{notation} For each firm $i$, we want to determine whether $i$ appears as a flare in $G$. One way to extract flares is to use global graph-theoretic properties of $G$, as in the method proposed in \cite{lum2013extracting} using $0$-persistence of eccentricity (or centrality). Instead, we start with the requirement that we only consider a structure to be a ``flare of $i$'' if each node in the flare contains $i$. This way, we focus on a smaller graph $G_i$ defined below, which contains only nodes that involve $i$, and look for flares therein.\footnote{More generally, one may consider a flare that involves multiple firms.
We restrict our attention to single-firm flares in this paper because they are the most salient feature of our Mapper graphs.} We see later that this perspective simplifies computations. \begin{definition}[Induced subgraph $G_i$ of firm $i$] Let $i$ be a firm. Define $G_i$ to be \[ G_i = G[\{v \in G \mid i \in v \}]. \] \end{definition} That is, $G_i$ is the full subgraph of $G$ formed by nodes that contain firm $i$. We decompose the nodes of $G_i$ into ``interior'' and ``boundary.'' \begin{definition}[Interior and boundary of $G_i$] \leavevmode \begin{enumerate} \item The \emph{interior} $F_i$ of $i$ in $G$ is defined to be $ F_i = G[\{v \in G_i \mid B_1(v) \subseteq G_i\}]. $ \item The \emph{boundary} of $i$ in $G$ is $G_i \setminus F_i$. \end{enumerate} \end{definition} In words, the interior $F_i$ contains all nodes $v$ of $G_i$ such that $G_i$ contains all neighbors of $v$ (i.e., the ball of radius $1$ around $v$). Lemma~\ref{lem:pathexit} in the Appendix shows that the boundary $G_i\setminus F_i$ indeed serves as a ``boundary'' for $F_i$: to get from the interior $F_i$ to outside of $G_i$, one always needs to go through the boundary. Figure \ref{fig:interior_boundary} illustrates the definitions of interior and boundary. The pink region represents firm $i$'s subgraph $G_i$, the green nodes are in the interior $F_i$, and the purple nodes are in the boundary $G_i\setminus F_i$. \begin{figure}[htb!!!!]\centering% \caption{Interior and Boundary} \includegraphics[width=0.4\textwidth]{Figures/interior-boundary.eps} \label{fig:interior_boundary} \end{figure}% Next, let us define flares and islands in graph-theoretic terms. \begin{definition}[Flares and Islands] A connected component $L$ of the interior $F_i$ of firm $i$ is said to be an \emph{island of firm $i$} if $L$ is also a connected component of $G$, and said to be a \emph{flare of firm $i$}, otherwise.
\end{definition} For example, two flares and one island (the triangle on the right) exist in Figure \ref{fig:interior_boundary}. As defined above, a flare may not always ``look like'' what one may imagine a flare to be; the next subsection therefore refines these notions using numerical indices. \subsection{Measuring Flares} \label{subsec:measuring_flares} We introduce the following definition and proposition, which serve as the foundations for defining our concept of flare length. \begin{definition}[Exit distance] \label{defn:exit} Let $u \in F_i$ be a node in the interior of firm $i$. The exit distance of $u$ in $F_i$ is \[ e_i(u) = \min\{d(u,w) \mid {w\in G\setminus F_i}\}. \] In the case in which no path exists from $u$ to any $w \in G\setminus F_i$, we put $e_i(u) = \infty$. \end{definition} \begin{restatable}{proposition}{thmexit} \label{thm:exit} Let $u \in F_i$. Then, \[ e_i(u) = \min\{d_{G_i}(u,v) \mid {v \in G_i\setminus F_i}\}, \] where $d_{G_i}(u,v)$ is the distance between $u$ and $v$ in $G_i$. \end{restatable} See Appendix A for the proof. Using Proposition~\ref{thm:exit}, we can compute $e_i(u)$ using only the information of $G_i$, because the distance $d_{G_i}(u,v)$ is the minimum length of all \ul{paths in $G_i$} from $u$ to $v$. By contrast, directly using Definition~\ref{defn:exit} would necessitate the computation of $d(u,w)$, the minimum length of all \ul{paths in $G$} from $u$ to $w$. We use the exit distance $e_i(u)$ to refine our notion of flares. \begin{definition}[Flare index] For a connected component $L$ of $F_i$ (a flare or island of firm $i$), the \emph{flare index} of $L$ is defined to be \[ k_i(L) = \max_{u\in L} e_i(u). \] \end{definition} We immediately obtain the following characterization of islands using $k_i$. \begin{lemma} Let $L$ be a connected component of $F_i$. Then, $k_i(L) = \infty$ if and only if $L$ is an island of firm $i$.
\end{lemma} \begin{proof} Immediate from the definitions. \end{proof} Finally, to aggregate all the information, we define the flare signature. \begin{definition}[Flare signature] Let $F_i = L_1 \sqcup L_2 \sqcup \hdots \sqcup L_M$ be a decomposition of $F_i$ into its connected components. The \emph{flare signature} of $i$ is the multiset\footnote{A multiset is a modification of the concept of a set that, unlike a set, allows for multiple instances for each of its elements. We denote it by double braces $\{\{,\}\}$ to distinguish it from a set.} \[ \vec{k}_i = \{\{k_i(L_j) \mid j=1,\hdots,M\}\}. \] Note that if $F_i$ is empty, we simply put the empty multiset as the flare signature of $i$. \end{definition} We link the flare signature to the following ``types.'' \begin{enumerate} \item \textbf{$\vec{k}_i$ is empty.} This case occurs if and only if $F_i = \emptyset$, meaning every node containing firm $i$ neighbors at least one node not containing $i$. We call this case \textbf{Type 0: no flare or island}. \item \textbf{$\vec{k}_i$ contains only finite elements}. In this case, each connected component $L$ of $F_i$ is connected to some point $w \in G \setminus F_i$, meaning each $L$ itself cannot be a connected component of $G$. Thus, each $L$ is not an island; it is a flare. We call this case \textbf{Type 1: flares only}. \item \textbf{$\vec{k}_i$ contains finite elements, and some copies of $\infty$}. This case corresponds to \textbf{Type 2: flares and islands}. \item \textbf{$\vec{k}_i$ contains only copies of $\infty$}. This case corresponds to \textbf{Type 3: islands only}. \end{enumerate} The flare signature is defined as a multiset of flare indices. It is sometimes convenient, however, to have a single number that describes how much firm $i$ looks like a flare in the Mapper graph. Thus, we define the following.
\begin{definition}[{Flare length}] The \emph{flare length} (or just \emph{length}, for short) of firm $i$ is \[ k_i = \left\{ \begin{array}{ll} 0 & \text{if } \vec{k}_i \text{ is empty,}\\ \mathop{\mathrm{finmax}}(\vec{k}_i) & \text{if } \vec{k}_i \text{ has at least one finite element,}\\ \infty & \text{otherwise,} \end{array} \right. \] where $\mathop{\mathrm{finmax}}(\vec{k}_i)$ is the maximum among all finite elements of $\vec{k}_i$. \end{definition} Type 0 gets flare length $0$, Type 3 gets flare length $\infty$, and Types 1 and 2 occupy the range in between, where the flare length of a firm is determined by its ``longest'' flare. \paragraph{Computation of Flare Signatures} Let $G=(V,E)$ be the Mapper graph of our data $X$. For each firm $i$, the computation of the subgraph $G_i$ involving $i$ can be done by iterating through all nodes $v \in V$ and checking membership of firm $i$ in $v$. The interior-boundary decomposition of $G_i$ can be computed by considering the boundary first. For each $v \in G_i$, we simply check if $v$ has a neighbor that is not in $G_i$; if so, $v$ is part of the boundary $G_i\setminus F_i$. The nodes of $G_i$ not in the boundary are then automatically part of the interior. Next, let us consider the computation of the flare signature $\vec{k}_i$ of firm $i$. First, we need a decomposition of $F_i$ into its connected components: \[ F_i = L_1 \sqcup L_2 \sqcup \hdots \sqcup L_M, \] which can be done, for example, via a breadth-first search. For each connected component $L$ of $F_i$, its flare index is given by \[ k_i(L) = \max_{u\in L} e_i(u). \] Because we need to do the same for each connected component $L$ of $F_i$, we compute $e_i(u)$ for all $u \in F_i$.
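Because every edge has weight 1, the shortest-path computation from the boundary reduces to a multi-source breadth-first search restricted to $G_i$, which Proposition~\ref{thm:exit} justifies. The following is a minimal sketch in plain Python; the adjacency-list representation and function name are ours, for illustration only, not our production code.

```python
from collections import deque

def flare_signature(adj, G_i):
    """Exit distances and flare signature of one firm (illustrative sketch).
    adj: dict mapping each node of G to the set of its neighbors.
    G_i: set of nodes that contain the firm."""
    # Interior: nodes of G_i whose neighbors all stay inside G_i.
    F_i = {v for v in G_i if adj[v] <= G_i}
    boundary = G_i - F_i
    # Multi-source BFS from the boundary, restricted to G_i
    # (unit edge weights make Dijkstra equivalent to BFS).
    dist = {v: 0 for v in boundary}
    queue = deque(boundary)
    while queue:
        u = queue.popleft()
        for w in adj[u] & G_i:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    # Exit distance: infinity for interior nodes unreached from the boundary.
    exit_dist = {u: dist.get(u, float("inf")) for u in F_i}
    # Connected components of F_i; flare index = max exit distance in each.
    signature, seen = [], set()
    for s in F_i:
        if s in seen:
            continue
        seen.add(s)
        comp, stack = [], [s]
        while stack:
            u = stack.pop()
            comp.append(u)
            for w in adj[u] & F_i:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        signature.append(max(exit_dist[u] for u in comp))
    return exit_dist, signature
```

Islands are handled automatically: their nodes are never reached by the search, so their flare index is $\infty$, consistent with the characterization of islands above.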
By Proposition~\ref{thm:exit}, the exit distance is \[ e_i(u) = \min\{d_{G_i}(u,v) \mid {v \in G_i\setminus F_i}\}, \] which can be computed using a multi-source version of Dijkstra's shortest-path algorithm, with sources $G_i\setminus F_i$. \section{Data} \paragraph{\protect Patents} We use Ozcan's (2015) data on patents granted by the USPTO between 1976 and 2010.\footnote{\cite{Ozcan2015} uses the USPTO's Patent Data Files, which contain raw assignee names at the individual patent level. By contrast, the NBER Patent Data File (another commonly used source of patent data) records standardized assignee names at the ``pdpass'' (unique firm identifier) level, which is less granular than the original assignee name.} We use their application years (instead of years in which they are granted) in our analysis, because the former is closer than the latter to the time of actual invention. We focus on patents applied for through 2005, because a substantial fraction of later applications would still be under review as of 2010, which raises concerns about sample selection. We sometimes call these patents ``R\&D patents'' to distinguish them from ``M\&A patents'' (see below). \paragraph{\protect Mergers and Acquisitions (M\&As)} Aside from conducting in-house R\&D and applying for patent protection, firms often obtain patents by acquiring firms that have their own portfolios of patents. Ozcan's (2015) dataset links the USPTO data to the Securities Data Company's M\&A data module. This part of the dataset contains M\&A deals between 1979 and 2010 in which both the acquiring firm and the target firm have at least one patent between 1976 and 2010.\footnote{The data include merger, acquisition, acquisition of majority interest, acquisition of assets, and acquisition of certain assets, but exclude incomplete deals, rumors, and repurchases.
We use data on these transactions through 2005.} \paragraph{\protect Financial Performances} We use Compustat data on the firms' revenues, EBIT (earnings before interest and taxes), and stock-market capitalization in 2005 (or the last available fiscal year if the firm disappears before 2005). Our purpose is to assess the relevance of our topological measures in terms of their correlations with the firms' eventual financial performances (in section 5). \paragraph{\protect Descriptive Statistics} To keep the sample size suitable for visual inspection and detailed exploratory analysis, we focus on firms that acquired at least four firms with patents between 1976 and 2005. This criterion keeps 333 major firms that conduct a nontrivial amount of both R\&D and M\&A. Table \ref{Table - Sumstats} reports their descriptive statistics. The average patent count (2,081 for R\&D and 268 for M\&A) is much higher than the median, which suggests that relatively few firms hold disproportionately large portfolios even within our selective sample. The three financial-performance metrics exhibit similar skewness. Consequently, we use the natural logarithm of these variables to mitigate heteroskedasticity in our subsequent analysis (except for section 6, in which we use percentage shares). \input{Tables/Table_sumstats.tex} \paragraph{\protect Where Do Firms Patent?} Panel (c) of Table \ref{Table - Sumstats} counts the number of USPTO classes in which the firms have patents. The median firm conducts R\&D in 34.5 classes, whereas the mean is 65. The most diversified portfolio (Mitsubishi Electric) covers 358 of the 430 classes, followed by General Electric's 347. Hence, the portfolio aspect of innovation is highly heterogeneous. Appendix A illustrates what these portfolios look like in raw data.
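As a minimal illustration of how raw counts of this shape are turned into the point cloud $X$ of section~\ref{sec: our mapper}, the following sketch applies the five-year moving window and the log transform to a made-up count array (toy dimensions and random counts standing in for the firm $\times$ year $\times$ class panel described above).

```python
import numpy as np

# Toy panel p[i, t, c]: 2 firms, 8 years, 3 patent classes (made-up counts).
rng = np.random.default_rng(0)
p = rng.poisson(lam=3.0, size=(2, 8, 3))

W = 5  # five-year moving window, following Benner and Waldfogel's advice
n_windows = p.shape[1] - W + 1
# p_tilde[i, t, c] = p[i, t, c] + p[i, t+1, c] + ... + p[i, t+4, c]
p_tilde = np.stack([p[:, t:t + W, :].sum(axis=1) for t in range(n_windows)],
                   axis=1)
# Log transform to tame the skewed distribution of patent counts.
x = np.log(p_tilde + 1)
```

Each firm-year row of `x` is then one point of the point cloud on which the Mapper graph is built.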
\section{Results} \subsection{Qualitative Assessment of the Mapper Graph} Figure \ref{Figure - mapper(n20_m0_cos)} (b) in the introduction is the main output of our Mapper procedure in section~\ref{sec: our mapper}, which we reproduce with greater detail in Figure \ref{Figure - mapper(cos_log_details)} below. Let us investigate its details before proceeding to more formal analysis. \begin{figure}[htb!!!!] \caption{Mapper Graph (Details)}% \begin{subfigure}{1\textwidth} \centering \includegraphics[width=0.75\linewidth]{Figures/cos_log_n20_part1.eps} \caption{IT and Engineering}% \end{subfigure} \begin{center} \begin{subfigure}{0.3\textwidth} \centering \includegraphics[width=0.9\linewidth]{Figures/cos_log_n20_part3.eps} \caption{Conglomerates}% \end{subfigure} \begin{subfigure}{0.6\textwidth} \centering \includegraphics[width=0.9\linewidth]{Figures/cos_log_n20_part2.eps} \caption{Pharmaceuticals and Chemicals}% \end{subfigure} \end{center} \caption*{\footnotesize {% \textit{Note}: Node colors represent the average year of the firm-years in that cluster, with earlier years in blue and later years in red. These figures are enlarged and more detailed versions of the Mapper graph of R\&D patents in Figure \ref{Figure - mapper(n20_m0_cos)}, which uses log-transform, cosine distance, $n=20$, and $o=0.5$. See section~\ref{sec: our mapper} for details. Appendix Figure \ref{Figure - mapper(cos_log_m2)} shows another version with both R\&D and M\&A patents.}}% \label{Figure - mapper(cos_log_details)} \end{figure}% \paragraph{\protect IT} Figure \ref{Figure - mapper(cos_log_details)} (a) shows the northern half of Figure \ref{Figure - mapper(n20_m0_cos)} (b). The main trunk consists of densely connected large nodes, each of which contains dozens of firm-years (e.g., the lower-middle part labeled ``many engineering \& medical-device firms''), because their portfolios are undifferentiated from each other. 
Famous IT firms spike out from the trunk in flares, including Hewlett Packard (HP), Nokia, and Intel. They too start from the densely populated ``heartland'' of electronics and engineering in the 1970s. But their patenting behaviors diverge in the 1980s and evolve into something unique in the 1990s and the 2000s. This chronological pattern coincides with the underlying trend in which IT emerged as a major sector with technological opportunities in multiple different directions. According to the NBER patent database, computers and electronics relate to many USPTO classes: 35 classes (mostly in the 300s and the 700s) belong to ``computers and communications'' technologies, and 52 classes (mostly in the 300s as well) belong to ``electrical and electronic.''\footnote{By contrast, only 14 classes are categorized as ``drugs and medical.'' We discuss them in the following.} Some of the big names are extremely unique, thus forming their own ``islands,'' or smaller connected components that are isolated from the continent. For example, IBM had no peers in the 1970s--1990s (see its small island in the upper-middle part). Its R\&D activities were massive, diverse, and different from any other firm's. Its restructuring in the early 2000s made it somewhat comparable to HP, as suggested by their rendezvous at the end of HP's flare (see the top-left part).\footnote{Flares reflect continuous changes over time. Sudden jumps, such as the disconnection within IBM between the 1990s and the 2000s, tend to occur when firms go through major corporate reorganization.} Other global brands display surprisingly short flares, for multiple reasons. Consumers might perceive Apple's products as innovative, but their main appeal is design and functionality, which do not necessarily represent patentable inventions. Most software/internet firms (e.g., Google, Adobe, and eBay) were relatively new during the sample period and did not have time to develop unique patent portfolios. 
Another curious case is Cisco and Microsoft, both of which heavily patented in classes 370 and 709 in the late 1990s.\footnote{USPTO class 370 is ``multiplex communications'' and class 709 is ``electrical computers and digital processing systems: multicomputer data transferring.''} Their substantial overlap connects the two flares in the middle, without which they would have looked separate and longer. This example highlights an important aspect of our analysis: uniqueness is a relative concept. That is, a firm's flare length is determined by not only its own patenting patterns, but also all other firms' trajectories, because it is based on the entire graph. \paragraph{\protect Engineering Conglomerates} Famous engineering firms cluster together and constitute a large island in Figure \ref{Figure - mapper(cos_log_details)} (b). General Electric (GE), an archetypical conglomerate, holds the most diversified portfolio in our data. Its only peers are similarly diversified manufacturers of electronic and capital goods, such as Siemens, Philips, and Mitsubishi Electric. \paragraph{\protect Pharmaceuticals and Chemicals} Health care is another R\&D-intensive sector, and patent protection is crucial for its business model. Unlike IT firms, however, pharmaceutical firms do not appear in flares. Large drug makers, such as Pfizer, Merck, and Eli Lilly, are clustered in the opposite side from IT firms, as Figure \ref{Figure - mapper(cos_log_details)} (c) shows, because most of the drug patents are in either class 424 or 514 (both are labeled ``drug, bio-affecting, and body-treating compositions''), which limits the extent to which their patent portfolios could differ from each other. Hence, further investigations into pharmaceutical innovation would require subclass-level data or a different data-transformation protocol (see section 6). 
Household chemical brands appear near drug makers, usually in flares that grow outward, because their products are based on similar materials and technologies. Johnson \& Johnson (J\&J), Unilever, Procter \& Gamble (P\&G), and Kimberly-Clark hold patents in not only classes such as 510 (cleaning compositions for solid surfaces, auxiliary compositions therefor, or processes of preparing the compositions), but also 424 (see above) and 604 (surgery). Monsanto, an agrochemical firm, appears in another flare that connects with drug makers. However, a closer look into its time path reveals the flare is moving \textit{inward}, rather than outward as in the case of most other firms. Its patents in the 1970s and the 1980s are mostly unrelated to drugs, but those in the 1990s and the 2000s are in areas in which drug makers patent. Monsanto's is one of the few centripetal flares in the graph, which suggests the firm employed highly idiosyncratic R\&D strategies. Finally, conglomerates in general chemistry (Dow, DuPont, and 3M) form their own long flares at the southern end of the continent. This pattern is reminiscent of the engineering conglomerates' island in Figure \ref{Figure - mapper(cos_log_details)} (b). The ability to capture and visualize the relative proximity of conglomerates appears to be a unique strength of our approach, because their entire business portfolios are often too large and complicated to study otherwise (e.g., by using conventional IO methods at the individual product-market level). \paragraph{\protect Patents Acquired by M\&As} Figures \ref{Figure - mapper(n20_m0_cos)} and \ref{Figure - mapper(cos_log_details)} show the Mapper graph of R\&D patents. How does the picture change if we incorporate M\&A patents as well? Appendix Figure \ref{Figure - mapper(cos_log_m2)} shows another graph based on both R\&D and M\&A patents. The overall pattern looks familiar, because only 11.4\% of all patents are obtained by M\&As.
Nevertheless, this addition alters the appearance of certain sectors. The main change is that more connections are formed. Computers, semiconductors, and telecommunications firms now form ``super flares,'' each of which contains multiple firms in the same industry, rather than individually spiking out from the main trunk. Even IBM, whose patents in the 1970s--1990s form an island in the previous graph, is now part of the computer super flare. Likewise, whereas AT\&T and other large telecommunications firms form their own island in Figures \ref{Figure - mapper(n20_m0_cos)} and \ref{Figure - mapper(cos_log_details)}, this telecom island becomes a ``peninsula'' in Figure \ref{Figure - mapper(cos_log_m2)} and is connected to the main continent via Nokia's flare. The manufacturers of semiconductor devices (e.g., Intel, Texas Instruments, and LSI Logic) form another such super flare. Aerospace and engineering firms go even further and form a ``loop'' instead of individual flares or super flares. That is, their extended (R\&D + M\&A) patent portfolios connect with the main trunk at multiple points. This pattern suggests these firms operate in a continuum of technological areas that relate to multiple different sectors. Thus, even though M\&A patents account for a small fraction of our sample, they do seem to materially expand the firms' coverage areas. M\&A patents seem to ``fill in the gaps'' between firms and make their eventual portfolios more similar to each other than the R\&D-only versions are. This tendency seems particularly strong in IT-related industries.\footnote{By contrast, engineering conglomerates, pharmaceuticals, and chemical firms exhibit relatively small changes. They are already clustered together and densely connected in the previous graph; hence, M\&A patents can add only so many connections.} \subsection{Measuring Flares} Not all flares can be recognized by visual inspection.
The formal definitions and computational methods in sections~\ref{subsec:detection_flares} and \ref{subsec:measuring_flares} allow us to capture and characterize \textit{all} firms, including those that are located within the densely populated areas. Table \ref{Table - Flare histogram} shows the results. Whereas our visual inspection of Figure \ref{Figure - mapper(cos_log_details)} identified only a few dozen flares and islands, this systematic examination reveals the existence of many more. We find 40.3\% of our sample (133 firms) exhibits flares. \input{Tables/Table_flare_length} \paragraph{\protect What Makes Portfolios ``Unique''?} Raw data at the firm level, such as those reviewed in section 2, suggest both the quantity and variety of patents help make their portfolios unique. For example, HP has a massive portfolio and a flare of length 6, whereas Dell's portfolio is much smaller and its flare length is 1.\footnote{Note HP and Dell are among the largest computer makers, and their main patent classes are similar, but their approaches to R\&D are different. HP is a traditional computer maker, whereas Dell's success is usually attributed to its unique business model in which the company sells directly to consumers and most of the manufacturing is outsourced to third-party suppliers in Asia. Such ``business-model innovations'' do not represent patentable inventions in most cases. Hence, patent statistics (and their topological representations) do not reflect Dell's ``uniqueness'' in this sense.} However, these conditions are not sufficient for long flares, because uniqueness is a relative concept. Our definition of flare length is based on \textit{the graph of all firms in all years} and their distances from each other.
Hence, our notion of \textit{technological differentiation} shares the spirit of product differentiation in standard IO models, and the firm's flare length depends on not only its own activities but also all other firms'.\footnote{Qualcomm, a manufacturer of telecommunication chips, exemplifies this point with a unique portfolio (length 3) despite having relatively few patents and seemingly simple distribution across classes. See Appendix Figures \ref{Figure - revenues & flares by SIC code} and \ref{Figure - bubble (HP, Dell, Qualcomm)} for a comparison of HP, Dell, and Qualcomm.} \section{Correlations with Performance Measures} This section investigates whether flares contain any ``relevant'' information. Following a common practice in the patent statistics literature (e.g., \cite{PakesGriliches1984}, \cite{JAFFE198987}, and \cite{hall2005market}), we look for correlations between these topological characteristics and the firms' performance metrics, including revenue, profit, and stock market value. \begin{figure}[htb!!!!] \caption{Flares and Financial Performances} \begin{subfigure}{0.3\textwidth} \centering \caption{Revenue} \includegraphics[width=\textwidth]{Figures/Graph_revenue.eps} \end{subfigure} \hfill \begin{subfigure}{0.3\textwidth} \centering \caption{EBIT} \includegraphics[width=\textwidth]{Figures/Graph_ebit.eps} \end{subfigure} \hfill \begin{subfigure}{0.3\textwidth} \centering \caption{Market value} \includegraphics[width=\textwidth]{Figures/Graph_mcap.eps} \end{subfigure} \caption*{\footnotesize {% \textit{Note}: The center of each circle represents the firm's revenue in 2005 and the flare length of its patent portfolio in 1976--2005 (based on cosine distance). The circle size reflects the firm's total patent count across all classes and all years. 
Infinitely long flares (i.e., islands-only type) are shown at length 10 for illustration purposes.}}% \label{Figure - bubble plots)} \end{figure}% \paragraph{\protect Scatter Plots} Figure \ref{Figure - bubble plots)} (a) plots each firm's revenue in 2005 (on the vertical axis) against the flare length of its patents in 1976--2005 (on the horizontal axis). The circle size reflects the total count of patents in 1976--2005. The maximum finite flare length of all firms is 8; the figure shows infinitely long flares (i.e., islands-only type) at length 10 for ease of visualization. Two patterns emerge. First, the upper-triangle-like shape of the scatter plot suggests long flares always entail high revenues, but the reverse is not true. Some high-revenue firms show short or no flares. Second, the prevalence of large circles in the upper region suggests large portfolios are frequently associated with both high revenues and long flares. However, some firms have many patents but only short flares of length 2 or 3. Thus, long flares predict high revenues and many patents, but not all ``large'' firms exhibit long flares. Panels (b) and (c) show similar patterns for profit and market value, respectively. These patterns are not an artifact of aggregation or driven by a few specific sectors and industries. Appendix Figure \ref{Figure - revenues & flares by sector} plots revenues and flares by economic sector defined by Standard and Poor's (S\&P), a credit-rating agency. Appendix Figure \ref{Figure - revenues & flares by SIC code} studies the technology sector more deeply at the SIC-code level, with a focus on computers and semiconductor industries. These additional scatter plots show the positive correlations are preserved within each sector and industry. 
\paragraph{\protect Regressions} Let us further investigate these correlations by running regressions of the following form:% \begin{equation} \ln (y_{i})=\alpha _{1}+\alpha _{2}k_{i}+\alpha _{3}1\left \{ k_{i}=\infty \right \} +\alpha _{4} \ln (p_{i}) +\varepsilon _{i}, \end{equation}% where $y_{i}$ is firm $i$'s revenue (or other performance metrics) in 2005, $k_{i}$ is the flare length of its patent portfolio's evolution in 1976--2005, $1\left \{ k_{i}=\infty \right \} $ is a dummy variable indicating the islands-only type, $p_{i}$ is the total count of firm $i$'s patents in 1976--2005 (i.e., $p_{i}=\sum_{t} \sum_{c}p_{i,t,c}$), $\alpha $s are their coefficients, and $\varepsilon _{i}$ is an error term.\footnote{% Note we do not intend to prove causal relationships. Our purpose is to assess the extent to which our uniqueness measures predict these performance metrics.} We include $\ln({p_{i}}) $\ to control for the size of the firm's R\&D/patenting activities.% \input{Tables/Table_Regressions} Table \ref{Table - Regressions} shows flare length is positively correlated with the firm's revenue, EBIT, and market value in 2005. Columns 1, 4, and 7 use the flare variables alone; columns 2, 5, and 8 use $\ln (p_{i})$ alone; and columns 3, 6, and 9 use both. The purpose of comparison is to assess whether our topological characteristics convey additional information above and beyond what patent count alone could predict. The differences between the adjusted $R^{2}$s suggest they do. 
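The specification above, the F-test of the restriction $\alpha_{2}=\alpha_{3}=0$ used in the text, and the percentage reading of log-linear coefficients are all mechanical. The sketch below runs them on synthetic data; the regressor values and coefficients are made up for illustration and are not our estimates.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n = 300

# Synthetic regressors (illustrative values only).
k = rng.integers(0, 9, size=n).astype(float)     # flare length k_i
island = (rng.random(n) < 0.05).astype(float)    # dummy 1{k_i = infinity}
k[island == 1] = 0.0                             # finite length unused for islands-only firms
log_p = np.log(rng.integers(10, 5000, size=n))   # ln(p_i), total patent count

# Synthetic outcome: ln(y_i) = a1 + a2*k_i + a3*1{k_i=inf} + a4*ln(p_i) + noise.
true_alpha = np.array([1.0, 0.3, 0.9, 0.8])
X = np.column_stack([np.ones(n), k, island, log_p])
log_y = X @ true_alpha + rng.normal(scale=0.1, size=n)

# OLS estimates of alpha_1..alpha_4.
alpha, *_ = np.linalg.lstsq(X, log_y, rcond=None)

def f_stat(r2_ur, r2_r, n_obs, q=2, n_params=4):
    """F-test of q linear restrictions (here alpha_2 = alpha_3 = 0):
    F = [(R2_ur - R2_r)/q] / [(1 - R2_ur)/(n_obs - n_params)]."""
    return ((r2_ur - r2_r) / q) / ((1 - r2_ur) / (n_obs - n_params))

def pct(a):
    """Percent change in y per unit change of a regressor in a log-linear model."""
    return 100 * (math.exp(a) - 1)
```

For instance, `pct(0.34)` is approximately 40, which is how a coefficient of 0.34 on flare length translates into a ``40\% higher revenue per extra length'' statement.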
More formally, the F-tests of a linear restriction, $\alpha _{2}=\alpha _{3}=0$, reject the null hypothesis at the 0.01\%, 0.1\%, and 1\% levels for the revenue, EBIT, and market-value regressions, respectively.\footnote{We calculate $F=[(R^{2}_{ur} - R^{2}_{r})/2] / [(1 - R^{2}_{ur})/(\#obs - 4)]$, where $R^{2}_{ur}$ is the $R^{2}$ of the unrestricted model in column 3 (6 or 9), $R^{2}_{r}$ is the $R^{2}$ of the restricted model in column 2 (5 or 8), and $\#obs$ is the number of observations (328, 301, or 325). We reject the null hypothesis, $\alpha_{2}=\alpha_{3}=0$, if $F$ is greater than the corresponding critical value of the F distribution.} Hence, the incremental contribution of the flare-and-island variables is statistically highly significant. What about their economic significance? The estimates of $\alpha _{2}$ are 0.34, 0.33, and 0.27 in columns 3, 6, and 9 (i.e., after controlling for $p_{i}$), respectively, which imply an extra length of flare is associated with 40\%, 39\%, and 31\% higher performances in terms of revenue, EBIT, and market value, respectively.\footnote{Likewise, the estimates of $\alpha _{3}$ (0.95, 0.94, and 0.70 in the same three columns) suggest islands-only firms tend to outperform no-flare firms by 159\%, 156\%, and 101\% in these measures, respectively. However, their standard errors are large. Only a few firms belong to this category, and all of them have relatively large patent portfolios, which makes $\alpha _{3}$ difficult to isolate from $\alpha _{4}$. 
Nevertheless, we keep $1\left \{ k_{i}=\infty \right \} $\ in these columns, because dropping it (and thereby grouping them with no-flare firms) would be unwise in light of Figure \ref{Figure - bubble plots)} and other results (columns 1, 4, and 7).} \paragraph{\protect Robustness to Alternative Distance Metrics and Survivorship Bias} These findings are robust to both the choice of distance metrics and firm survivorship (i.e., the possibility that both flare length and the eventual financial performances might be driven by a third factor: the duration of firm survival). Appendix Table \ref{Table - Regressions (alt-dist)} shows the results are similar under alternative distance metrics that are commonly used in the literature. Appendix Tables \ref{Table - Regressions (Survivors thru 2005)} and \ref{Table - Regressions (Balanced 1976-2005)} show the same patterns hold in balanced panels. \paragraph{\protect Why Do Flares Correlate with Performances?} Why do successful firms grow long flares? Or, why do flares reflect the firms' success and failure? Three factors would seem to constitute the underlying mechanism: size, growth, and market power. First, larger firms tend to spend more on R\&D and obtain more patents. This pattern is well documented in the innovation literature \parencite{Cohen2010}. To the extent that high $p_{i}$ allows the firm to exhibit more uniqueness (i.e., a higher degree of freedom in shaping the distribution of patents across different classes), having a certain size might be a necessary condition for unique portfolios. However, our regressions control for $p_{i}$ in columns 3, 6, and 9. Moreover, a ``large but stagnant'' portfolio would not exhibit long flares, because $p_{i,t}$ and $p_{i,t+1}$ (say) would remain identical and cluster together in that case. Hence, \textit{size} alone cannot explain the results; for a firm to develop flares, its portfolio must also be \textit{growing} and \textit{differentiated}. The second factor is growth. 
If the firm stops expanding its R\&D activities, its evolution on the Mapper graph would also stop. Other abrupt changes, such as corporate restructuring (e.g., scaling down or refocusing R\&D efforts), could also hamper the steady growth of long flares. Even though anecdotal success stories of ``corporate turnaround as a result of radical transformation'' exist, the positive correlations in Table \ref{Table - Regressions} suggest such cases are rare, at least in our data. Hence, long flares could be symptomatic of sustained growth. We already discussed the third factor, differentiation, at the end of section 4. Its connection to financial performances is straightforward, as almost any model of imperfect competition can explain how product differentiation softens price competition and increases profits. To the extent that successful product differentiation relies on technological differentiation, unique R\&D and patents could predict (or at least proxy for) subsequent competitive advantage and market power.\footnote{Building a micro-founded model that incorporates these mechanisms (and implementing it empirically) is beyond the scope of this paper. We leave the task for future research.} \section{Comparison with Jaffe (1989)} How does our approach compare with Jaffe's (1989) clustering method? Both use the same type of data in which a firm-year is represented by a vector of patent counts, and seek to map firms' locations in the technological space. Table \ref{Table - Jaffe comparison} clarifies two differences. \input{Tables/Table_Jaffe_comparison} First, we take a logarithm of patent count, $x_{i,t,c}=\ln(p_{i,t,c}+1)$, whereas he takes a share of each class within a firm-year, $x_{i,t,c}=\frac{p_{i,t,c}}{\sum_{c} p_{i,t,c}}$. These rescaling protocols transform the metric space itself and lead to significant differences in the outputs. Hence, how one pre-processes raw data is an important, substantive choice.
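A toy example makes the substance of this choice concrete under cosine distance: cosine is invariant to uniform rescaling, so share-normalized portfolios with the same class mix are indistinguishable regardless of volume, whereas the log transform retains volume information. (The count vectors below are made up for illustration.)

```python
import numpy as np

def cosine_dist(a, b):
    """Cosine distance: 1 - cos(angle between a and b)."""
    return 1 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def jaffe_shares(p):
    """Jaffe's protocol: each class's share within the firm-year."""
    return p / p.sum()

def log_counts(p):
    """Our protocol: log-transformed patent counts."""
    return np.log(p + 1)

# Two portfolios with the same class mix but 10x different volume.
p_small = np.array([3.0, 1.0, 0.0, 2.0])
p_big = 10 * p_small

d_share = cosine_dist(jaffe_shares(p_small), jaffe_shares(p_big))  # ~0: volume lost
d_log = cosine_dist(log_counts(p_small), log_counts(p_big))        # > 0: volume kept
```

Under shares the two firm-years collapse onto the same point, while under logged counts they remain distinct, which is one reason the two protocols produce visibly different graphs.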
Nevertheless, this difference is secondary in terms of methodology, because it is a matter of input choice rather than the analytical procedure itself. As we demonstrate in this section, we can easily switch to Jaffe's share-based measure while sticking to our overall framework.\footnote{Likewise, one can use our logged patent counts within Jaffe-style clustering.} The second and more important difference is that Jaffe performs clustering at the global level to generate a list of mutually exclusive clusters of firms, whereas our ``clusters'' are local and retain connections through edges between them (which reflect the existence of commonly shared members). In other words, his algorithm is a big \textit{discretization} operation, whereas ours is designed to recover the \textit{continuum} of firms and industries in the data. Uncovering the original, continuous data patterns is important because the underlying DGP is fundamentally continuous, and industry boundaries are fluid as far as innovative activities are concerned. In the following, we demonstrate how our method can help reveal the global shape of the data and generate additional insights beyond what Jaffe-style clustering does. \input{Tables/Table_Jaffe_k-medoids} As a point of departure, Table \ref{Table - K-Medoids Clustering} shows a list of clusters that (global) clustering \`{a} la Jaffe generates. The grouping seems intuitive, with clusters of firms in engineering (cluster 1), telecommunications (cluster 2), materials (cluster 3), medical devices (cluster 4), pharmaceuticals (cluster 5), and so on. However, cluster boundaries are ultimately an artifact of discretization. Some firms belong to multiple clusters over the years, and hence would appear to ``move'' between technological fields. The reality, however, is often that cluster boundaries happen to be drawn through the middle of these firms' data points (firm-years), not that the firms moved much in the original technological space.
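Global clustering of this kind can be emulated with a plain k-medoids loop on a precomputed firm-year distance matrix. The sketch below (random initialization, alternating assignment and medoid updates) is an illustrative stand-in for whatever PAM-style implementation one prefers, not the exact routine behind the table above.

```python
import numpy as np

def k_medoids(D, k, n_iter=100, seed=0):
    """Plain k-medoids on a precomputed n-by-n distance matrix D: assign each
    point to its nearest medoid, then move each medoid to the cluster member
    that minimizes total within-cluster distance; repeat until stable."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if members.size:
                costs = D[np.ix_(members, members)].sum(axis=1)
                new_medoids[c] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return np.argmin(D[:, medoids], axis=1), medoids
```

The output is exactly the kind of mutually exclusive partition discussed above: every firm-year receives one label, and any continuity between clusters is discarded by construction.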
The flip side of this boundary issue is that some industries and technologies are intrinsically connected and form a continuum. Splitting them into separate groups would be unnatural in such cases. For example, clusters 7 (computers), 10 (semiconductors), and 11 (electronics) commonly share Intel and HP as their group members. One interpretation is that these firms are exceptionally active in many different fields, but another is that these industries and technologies form a continuum, founded upon closely related (electrical and electronic engineering) methods, and should not be treated separately. Likewise, one might question whether Monsanto is particularly mercurial in moving between clusters 6 (chemicals) and 15 (genomics), or if they are simply two faces of the same, connected discipline. Medical-device manufacturers are somehow split into two clusters (4 and 20). Many of these ``puzzles'' could be an artifact of discretization. \begin{figure}[htb!!!!]\centering% \caption{Mapper Graph Based on Jaffe's Measure}% \includegraphics[width=0.60\textwidth]{Figures/cos_sumone_n40.eps} \caption*{\footnotesize {% \textit{Note}: Node colors represent the average year of the firm-years in that cluster, with earlier years in blue and later years in red. This figure is a shape-graph representation of 333 major firms' R\&D patents in 1976--2005 based on shares, cosine distance, $n=40$, and $o=0.5$. See Appendix Figure \ref{Figure - mapper(cos_sumone_n40_details)} for a detailed version.}}% \label{Figure - mapper(cos_sumone_n40)} \end{figure}% We address these issues by preserving the underlying continuity in the data. Figure \ref{Figure - mapper(cos_sumone_n40)} is the Mapper graph of the same data, based on the same rescaling protocol (percentage shares) and the same distance metric (cosine). Unlike the 21 mutually exclusive groups from the clustering algorithm, the shape graph recovers a \textit{continuum of industries} from the data. 
Indeed, its main insight is that industries are connected, sometimes in unanticipated ways. \paragraph{\protect Incredibly ``Shrinking'' High-Tech Industries} Many firms populate the upper-north-west corner of the graph. This high-tech region is so densely populated that disentangling it is difficult (see Appendix Figure \ref{Figure - mapper(cos_sumone_n40_details)}, panel a). These firms conduct R\&D in relatively many patent classes. Raw patent counts (and their logged version in sections 1--5) preserve the uniqueness of each firm's portfolio. However, after their conversion into percentage shares (and hence the loss of information on volumes in absolute terms), most portfolios end up looking alike. Thus, the non-share-based Mapper graphs of sections 1--5 seem more informative about high-tech industries. \paragraph{\protect Biomedical Super Flare} By contrast, the share-based Mapper graph maps biomedical areas more clearly and reveals interesting technological connections between industries.\footnote{We thank Elizabeth Lyons for helpful discussions on these industries.} Pharmaceutical companies live in their own world (in the south-west corner of Figure \ref{Figure - mapper(cos_sumone_n40)}), patenting only in a few drug-related classes. Nevertheless, they are not completely isolated, because biochemistry and medical electronics firms stretch from the northern ``heartland'' of engineering, materials, and general chemicals. The detailed maps in Appendix Figure \ref{Figure - mapper(cos_sumone_n40_details)} (panels a and b) show medical-equipment manufacturers (e.g., Perkin Elmer and Beckman Coulter) and genomics-based drug developers (e.g., Amgen and Genzyme) connect with pharmaceutical companies (e.g., Merck and Pfizer), collectively forming a long ``archipelago'' of biomedical industries. These connections are intuitive because genomics firms rely on measurement and data processing to develop new drugs. 
Uncovering them from Table \ref{Table - K-Medoids Clustering} alone would be difficult because it classifies general and agro-chemicals in cluster 6 and biochemicals and medical electronics in cluster 15.\footnote{Both clusters prominently feature Monsanto as a member, but its unique trajectory does not conform to the patterns of any other firms in either cluster (except Bayer). Appendix Figure \ref{Figure - mapper(cos_sumone_n40_details)} shows Bayer did not move much throughout the sample period, whereas Monsanto made a long trip from the crowded center of materials and chemicals industries to Bayer's location. The fact that Bayer acquired Monsanto in 2018 might suggest patent portfolios are a useful predictor of competitive positions and mergers. See \cite{EC2017}.} \paragraph{\protect Two Bridges to Medical Devices} Medical-device manufacturers occupy a large territory in the eastern half of Figure \ref{Figure - mapper(cos_sumone_n40)}.\footnote{We thank Matthew Grennan for helpful discussions on medical devices.} The Mapper graph reveals somewhat surprising ways in which this industry connects with others. Specifically, two types of firms bridge between medical devices and the engineering heartland. One bridge consists of household chemicals and contact lenses. Appendix Figure \ref{Figure - mapper(cos_sumone_n40_details)} (panels a and c) shows household names, such as Unilever, P\&G, and Bausch \& Lomb, were close to the center of materials and general chemicals in the 1970s and the 1980s. But then their R\&D efforts moved in the south-east direction to form their own peninsulas by the 1990s and the 2000s. J\&J has a major health care division and bridges between household chemicals and medical devices. The other bridge is located in the north and builds on dense clusters of less well-known firms specializing in aerodynamics and filters (e.g., Sealed Air, U.S. Filter, and Mine Safety Appliance).
It then extends in the south-east direction and connects with more obviously medical-device-related names, such as Respironics and Vital Signs. The two groups of firms are seemingly unrelated at first glance, but their underlying technologies are common: breathing requires clean air, and the monitoring of vital signs concerns fluid dynamics. Thus, technologically speaking, mine safety and medical devices are closer neighbors than what a conventional industry-classification system would suggest. By contrast, the global clustering in Table \ref{Table - K-Medoids Clustering} is not particularly informative about these connections: P\&G and J\&J appear in cluster 4; the aerodynamics-and-filters firms appear separately in cluster 8; and medical devices are split into clusters 4 and 20. \bigskip These examples highlight Mapper's ability to preserve and represent the continuous patterns in the original space, which could help us draw many additional insights from otherwise intractable, high-dimensional data. Whereas Jaffe-style clustering generates a list of discrete, disjoint groups of firms (or firm-years), the shape graphs from Mapper put them in the global context and reveal continuity. Clustering has the beauty of simplicity; Mapper shows more nuances and the ``big picture'' at the same time. Thus, our approach is highly complementary to the existing methods and is a valuable addition to the toolbox for economists studying any \textit{intrinsically high-dimensional} objects. \section{Conclusion} This paper proposes a new method to map, describe, and characterize firms' inventive activities. The shape graphs from the Mapper procedure help us understand where firms and industries are located, how they connect with each other (or not), and how their innovative activities evolve over time.
In the past, economists' ability to answer these basic, descriptive questions---and hence the ability to ask and answer deeper, causal/policy questions \textit{that presuppose reliable descriptions or stylized facts}---have been constrained by the ``curse of dimensionality'' of the technological space. Now that the topological concepts provide a set of descriptive tools that are both mathematically rigorous and computationally feasible, we can start revisiting and answering some of the long-standing questions in economics, including the rate and direction of inventive activity. We believe TDA enables us to tackle many other data-exploration challenges as well. \section*{Appendix (For Online Publication)} \subsection*{A. \ Raw Data: Where Do Firms Patent?} Let us illustrate with examples what the firms' patent portfolios look like. Figure \ref{Figure - bubble (examples)} visualizes the evolution of patenting activities at six major firms. Each plot lists the 430 USPTO patent classes on the vertical axis, and the year of application (for R\&D patents) or acquisition (for M\&A patents) on the horizontal axis. The circle size represents the number of patents in each class-year. \begin{figure}[htb!!!!] 
\caption{Acquiring a String of Pearls}% \begin{subfigure}{0.5\textwidth} \caption{Cisco Systems}% \centering \includegraphics[width=0.9\linewidth]{Figures/Pearls_001_CiscoSystems.eps} \end{subfigure} \begin{subfigure}{0.5\textwidth} \caption{Seagate Technology}% \centering \includegraphics[width=0.9\linewidth]{Figures/Pearls_218_SeagateTechnology.eps} \end{subfigure} \begin{subfigure}{0.5\textwidth} \caption{Pfizer}% \centering \includegraphics[width=0.9\linewidth]{Figures/Pearls_017_Pfizer.eps} \end{subfigure} \begin{subfigure}{0.5\textwidth} \caption{Medtronic}% \centering \includegraphics[width=0.9\linewidth]{Figures/Pearls_004_Medtronic.eps} \end{subfigure} \begin{subfigure}{0.5\textwidth} \caption{GE}% \centering \includegraphics[width=0.9\linewidth]{Figures/Pearls_002_GE.eps} \end{subfigure} \begin{subfigure}{0.5\textwidth} \caption{IBM}% \centering \includegraphics[width=0.9\linewidth]{Figures/Pearls_003_IBM.eps} \end{subfigure} \caption*{\footnotesize {% \textit{Note}: The circle size represents the number of patents in each class-year. Based on our method and analysis in sections 3 and 4, the ``flare lengths'' (our proposed measure of ``uniqueness'') of these firms' portfolios are: 3 (Cisco), 2 (Seagate), 1 (Pfizer), 2 (Medtronic), 4 (GE), and $\infty$ (IBM).}}% \label{Figure - bubble (examples)} \end{figure}% The top panels show two IT firms. Cisco Systems makes network equipment (e.g., routers) and is famous for its active use of M\&As to acquire new products and talents; it acquired the largest number of target firms with patents in our sample. Nevertheless, most of Cisco's patents are obtained by in-house R\&D and are concentrated in classes 370 (multiplex communications) and 709 (electrical computers and digital processing systems: multicomputer data transferring). Seagate Technology makes hard disk drives (HDDs) and is another example of specialized IT firms. 
Its main patent class is 360 (dynamic magnetic information storage or retrieval), which is central to the HDD technology, but its portfolio gradually diversified as the firm intensified efforts to manufacture key components as well, including heads, media, and their interface.\footnote{See \cite{IgamiSubrahmanyam2019} for the details of patents and innovation in the HDD industry.} The middle panels show two health care firms. The pharmaceutical industry is R\&D-intensive, but the patent portfolio of Pfizer looks simpler than the IT examples. Most of the drug patents are in classes 424 and 514 (drug, bio-affecting, and body-treating compositions), and drug makers hardly patent elsewhere. By contrast, medical devices rely on a variety of technologies, even though their main classes are relatively few (600--607). The plot shows Medtronic, a leading medical-device maker, is active in many areas. The bottom panels present extreme cases, for reference. GE, a conglomerate, has one of the most diversified portfolios in our sample, with patents in more than 300 classes. The picture becomes too messy for human eyes to draw insights from. Finally, IBM has by far the largest number of patents in our sample, but its portfolio looks more organized than GE's, because its activities are more focused. Most of the computers and electronics technologies are in the 300s and the early 700s, which are where IBM's portfolio is concentrated. These examples suggest the portfolio aspect of patents and technologies is interesting and contains potentially important information. However, the high dimensionality of technological space makes conventional data analysis difficult. \subsection*{B. \ Proofs} First, we show the boundary $G_i\setminus F_i$ indeed serves as a ``boundary'' for $F_i$: to get outside of $G_i$, one always needs to go through the boundary. \begin{lemma} \label{lem:pathexit} Let $u \in F_i$ and $w \in G\setminus G_i$, and let $p$ be a path from $u$ to $w$.
Then, the path $p$ passes through some node $v \in G_i \setminus F_i$. \end{lemma} \begin{proof} Let $p$ be such a path from $u\in F_i$ to $w\in G\setminus G_i$, which passes through the nodes \[ u=v_0, v_1, v_2,\hdots, v_{n-1}, v_n = w \] in that order. Suppose, to the contrary, that no $v_j$ is in the boundary $G_i\setminus F_i$. We show by induction that $v_j \in F_i$ for all $j \in \{0,\hdots,n\}$. First, $v_0=u \in F_i$ is clear. Suppose $v_j \in F_i$. Because $v_{j+1} \in B_1(v_j) \subseteq G_i$ by definition of the interior $F_i$, and because $v_{j+1} \notin G_i\setminus F_i$ by assumption, we see $v_{j+1} \in F_i$. Thus, by induction, $v_j \in F_i$ for all $j \in \{0,\hdots,n\}$. In particular, $v_n = w \in F_i$, which is a contradiction, because $w \in G\setminus G_i \subseteq G\setminus F_i$. Therefore, some $v_j$ exists in the boundary $G_i \setminus F_i$. \end{proof} For a firm $i$ and $u \in F_i$, recall that the exit distance of $u$ in $F_i$ was defined to be \[ e_i(u) = \min\{d(u,w) \mid w\in G\setminus F_i\} \] in Definition~\ref{defn:exit}. Here, we reproduce Proposition~\ref{thm:exit} and provide a proof. \noindent\parbox{\textwidth}{\thmexit*} \begin{proof} It is clear that \[ \min\{d(u,w) \mid w\in G\setminus F_i\} \leq \min\{d_{G_i}(u,v) \mid v\in G_i\setminus F_i\}. \] Suppose the minimum of the left-hand side is achieved by a $w \in G\setminus F_i$, and let $d(u,w) = \ell(p)$, the length of a minimum path $p$ in $G$ from $u \in F_i$ to $w \in G\setminus F_i$. Let $v$ be the first node $v \in G_i \setminus F_i$ that $p$ passes through. Note such $v$ exists by Lemma~\ref{lem:pathexit}. In the case in which $v \neq w$, truncate $p$ to the path $p'$ from $u$ to $v$.
By choice of $v$, $p'$ is fully contained in $G_i$, and $\ell(p') < \ell(p)$ because we only have positive weights and $p'$ has strictly fewer edges than $p$. It follows that \[ \min\{d(u,w) \mid w\in G\setminus F_i\} = \ell(p) > \ell(p') \geq \min\{d_{G_i}(u,v) \mid v\in G_i\setminus F_i\}, \] because $p'$ is a path from $u$ to $v$ that is contained in $G_i$. This is a contradiction. Thus, $v=w$, and it follows that \[ \min\{d(u,w) \mid w\in G\setminus F_i\} = \ell(p) \geq \min\{d_{G_i}(u,v) \mid v\in G_i\setminus F_i\}, \] which shows the required equality. \end{proof} \subsection*{C. \ Additional Figures and Tables for Sections 4, 5, and 6} \begin{figure}[htb!!!!]\centering% \caption{Three-Dimensional PCA}% \includegraphics[width=0.60\textwidth]{Figures/cos_log_pca3d.eps} \caption*{\footnotesize {% \textit{Note}: Red markers are IT firms, green markers are drug makers, and blue markers are all others.}}% \label{Figure - mapper(cos_log_pca3d)} \end{figure}% \begin{figure}[htb!!!!] \caption{Mapper Graph of Both R\&D and M\&A Patents}% \centering \includegraphics[width=0.5\linewidth]{Figures/cos_log_n20_m2.eps} \caption*{\footnotesize {% \textit{Note}: This version uses both R\&D and M\&A patents, whereas those in Figures \ref{Figure - mapper(n20_m0_cos)} and \ref{Figure - mapper(cos_log_details)} use only R\&D patents. Both use log-transform, cosine distance, $n=20$, and $o=0.5$.}}% \label{Figure - mapper(cos_log_m2)} \end{figure}% \begin{figure}[htb!!!!]
\caption{Revenues and Flares by Sector}% \begin{subfigure}{0.3\textwidth} \caption{Technology}% \centering \includegraphics[width=\textwidth]{Figures/Graph_rev_flare_pat_technology.eps} \end{subfigure} \hfill \begin{subfigure}{0.3\textwidth} \caption{Capital Goods}% \centering \includegraphics[width=\textwidth]{Figures/Graph_rev_flare_pat_capitalgoods.eps} \end{subfigure} \hfill \begin{subfigure}{0.3\textwidth} \caption{Health Care}% \centering \includegraphics[width=\textwidth]{Figures/Graph_rev_flare_pat_healthcare.eps} \end{subfigure} \begin{subfigure}{0.3\textwidth} \caption{Consumer Goods}% \centering \includegraphics[width=\textwidth]{Figures/Graph_rev_flare_pat_consumergoods.eps} \end{subfigure} \hfill \begin{subfigure}{0.3\textwidth} \caption{Basic Materials}% \centering \includegraphics[width=\textwidth]{Figures/Graph_rev_flare_pat_materials.eps} \end{subfigure} \hfill \begin{subfigure}{0.3\textwidth} \caption{Others}% \centering \includegraphics[width=\textwidth]{Figures/Graph_rev_flare_pat_others.eps} \end{subfigure} \caption*{\footnotesize {% \textit{Note}: ``Consumer goods'' include the S\&P consumer-cyclicals and consumer-staples sectors. ``Others'' include the S\&P energy, communication services, transport, and utilities sectors.}}% \label{Figure - revenues & flares by sector} \end{figure}% \begin{figure}[htb!!!!] 
\caption{Revenues and Flares by SIC Code}% \begin{subfigure}{0.5\textwidth} \caption{Computers and Peripherals}% \centering \includegraphics[width=0.9\linewidth]{Figures/Graph_rev_flare_pat_computers_and_peripherals.eps} \end{subfigure} \begin{subfigure}{0.5\textwidth} \caption{Semiconductors}% \centering \includegraphics[width=0.9\linewidth]{Figures/Graph_rev_flare_pat_semiconductors.eps} \end{subfigure} \caption*{\footnotesize {% \textit{Note}: For computers and their peripherals, we use 3570 (computer and office equipment), 3571 (electronic computers), 3572 (computer storage devices), 3575 (computer terminals), and 3576 (computer communications equipment). For semiconductors, we use SIC code 3674 (semiconductors and related devices).}}% \label{Figure - revenues & flares by SIC code} \end{figure}% \begin{figure}[htb!!!!] \caption{Raw Data on Selected Technology Firms}% \begin{subfigure}{0.3\textwidth} \caption{Hewlett Packard}% \centering \includegraphics[width=\textwidth]{Figures/Pearls_010_HewlettPackard.eps} \end{subfigure} \hfill \begin{subfigure}{0.3\textwidth} \caption{Dell}% \centering \includegraphics[width=\textwidth]{Figures/Pearls_087_Dell.eps} \end{subfigure} \hfill \begin{subfigure}{0.3\textwidth} \caption{Qualcomm}% \centering \includegraphics[width=\textwidth]{Figures/Pearls_097_Qualcomm.eps} \end{subfigure} \caption*{\footnotesize {% \textit{Note}: The circle size represents the number of patents in each class-year. The flare lengths of these firms' portfolios are: 6 (HP), 1 (Dell), and 3 (Qualcomm).}}% \label{Figure - bubble (HP, Dell, Qualcomm)} \end{figure}% \input{Tables/Table_Regressions_alt} \input{Tables/Table_Regressions_survival} \input{Tables/Table_Regressions_balanced} \input{Tables/Table_Jaffe_k-means} \begin{figure}[htb!!!!] 
\caption{Mapper Graph Based on Jaffe's Measure (Details)}% \begin{subfigure}{1\textwidth} \centering \includegraphics[width=0.9\linewidth]{Figures/cos_sumone_n40_part1.eps} \caption{IT, Engineering, Materials, and Chemicals}% \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.9\linewidth]{Figures/cos_sumone_n40_part2.eps} \caption{Biomedicals and Pharmaceuticals}% \end{subfigure} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.9\linewidth]{Figures/cos_sumone_n40_part3.eps} \caption{Medical Devices}% \end{subfigure} \caption*{\footnotesize {% \textit{Note}: These figures are enlarged and more detailed versions of the Mapper graph in Figure \ref{Figure - mapper(cos_sumone_n40)}.}}% \label{Figure - mapper(cos_sumone_n40_details)} \end{figure}% \begin{figure}[htb!!!!]\centering% \caption{Mapper Graph Based on Jaffe's Measure and Mahalanobis Distance}% \includegraphics[width=0.50\textwidth]{Figures/blo_sumone_n40.eps} \caption*{\footnotesize {% \textit{Note}: This is a Mapper graph of 333 major firms' R\&D patents in 1976--2005 based on percentage shares, Mahalanobis distance, $n=40$, and $o=0.5$.}}% \label{Figure - mapper(bloom_sumone_n40)} \end{figure}% \clearpage
\section{Introduction} Given a set $X$, a \emph{$k$-coloring} of $X$ is a surjective mapping $c:X\to \{1,2,...,k\}$, or equivalently a partition $X=C_1\cup C_2\cup ...\cup C_k$ into $k$ nonempty parts called \emph{color classes}. A subset $Y\subseteq X$ is called \emph{monochromatic} under $c$ if it is contained in a single color class. On the other hand, $Y$ is called \emph{rainbow} if the coloring assigns pairwise distinct colors to its elements. Given a coloring of a subset of the integers, we say that a vector is \emph{rainbow} if each of its entries is colored differently. \emph{Arithmetic Ramsey Theory} concerns the study of the existence of monochromatic structures, like arithmetic progressions or solutions of linear equations, in every coloring of subsets of the integers. The classical results in this area include Schur's Theorem: for every $k$, if $n$ is sufficiently large, every $k$-coloring of the initial segment of integers $[n]=\{1,2,...,n\}$ contains a monochromatic solution to the equation $x+y=z$. Another result is Van der Waerden's Theorem, which states that for every pair of integers $t$ and $k$, when $n$ is sufficiently large, every $k$-coloring of $[n]$ contains a monochromatic $t$-term arithmetic progression. One of the most important examples is the famous 1933 theorem of Richard Rado: given a rational matrix $A$, consider the homogeneous system of linear equations $Ax = 0$. This system, or the matrix itself, is called $k$-\emph{regular} if, for every $k$-coloring of the natural numbers, the system has a monochromatic solution. A matrix is \emph{regular} if it is $k$-regular for all $k$. Rado's Theorem characterizes precisely those matrices that are regular. The characterization depends on simple additive conditions satisfied by the columns of $A$ (which can be found in \cite{radoregular,goodbook}). In fact, Rado's Theorem is a common generalization of both Schur's and Van der Waerden's Theorems.
In contrast to Ramsey Theory, \emph{Rainbow Ramsey Theory} refers to the study of the existence of \emph{rainbow} structures in colored combinatorial universes under some density conditions on the coloring. Arithmetic versions of this theory have recently been studied by several authors concerning colorings of integer intervals or cyclic groups, showing the existence of rainbow arithmetic progressions or rainbow solutions to linear equations under some density conditions on the color classes \cite{fmr, rainbowsurvey, llm, ms, rainbowapk}. As pointed out in the papers \cite{fmr, rainbowsurvey}, one natural research direction is to generalize the known monochromatic results to the case of rainbow solutions of systems of linear equations. In particular, the authors of \cite{fmr} stated that it would be very exciting to provide a complete rainbow analogue of Rado's Theorem. The key purpose of this paper is to provide such a theorem. As a consequence, we disprove two conjectures \cite{rainbowsurvey,fmr}. Our techniques combine established combinatorial tools with ideas from convex geometry, particularly Ehrhart's theory of lattice-point counting \cite{BarviPom,beckrobins}. \begin{definition} A matrix $A$ with rational entries is \emph{rainbow partition $k$-regular} if for all $n$ and for every equinumerous $k$-coloring of $[kn]$ (i.e., $k$-colorings in which all color classes have size $n$), there exists a rainbow vector in $\ker(A)$. The smallest $k$ such that $A$ is rainbow partition $k$-regular, if it exists, is called the \emph{rainbow number of $A$} and is denoted by $r(A)$. A matrix $A$ is \emph{rainbow regular} if it is rainbow partition $k$-regular for all sufficiently large $k$.
\end{definition} For instance, it is known that both matrices $$A_1=\begin{pmatrix}1 & -2 & 1 \end{pmatrix}\text{ and }A_2=\begin{pmatrix}1 & 1 & -1 & -1 \end{pmatrix},$$ corresponding to $3$-term arithmetic progressions and solutions to the Sidon equation, are rainbow regular with rainbow numbers $r(A_1)=3$ and $r(A_2)=4$, respectively (see \cite{fmr, rainbowapk}). The authors of \cite{fmr} and \cite{rainbowapk} claimed that every $1\times n$ matrix with nonzero rational entries is rainbow regular if and only if some of the entries have different signs. That is correct for $n\geq 3$, but incorrect for $n=2$. This subtle difference is key for finding the main theorem. Lemma \ref{lem:1x2} shows that no nonzero rational $1\times2$ matrix is rainbow regular and is used to prove Theorem \ref{thm:main}. The papers \cite{fmr, rainbowapk} contain two different conjectures on the classification of rainbow regular matrices. However, both conjectures as originally stated have the trivial counterexample $\begin{pmatrix}1 & -1 & 0\end{pmatrix}$. To avoid this, we state them here in slightly modified versions. \begin{conjecture}[Jungi\'c, Ne\v set\v ril, Radoi\v ci\'c \cite{rainbowsurvey}] \label{wrongrainbow3} A matrix $A$ with integer entries is rainbow regular if and only if there exist two linearly independent vectors with distinct positive integer entries in $\ker(A)$. \end{conjecture} \begin{conjecture}[Fox, Mahdian, Radoi\v ci\'c \cite{fmr}] \label{wrongrainbow4} A matrix $A$ with integer entries is rainbow regular if and only if the rows of $A$ are linearly independent and $\ker(A)$ contains a vector with distinct positive integer entries. \end{conjecture} After finding counterexamples to the above conjectures, we were able to obtain a Rado-style classification theorem for rainbow regular matrices. Moreover, the definition of rainbow regularity is stronger than it seems.
We show that if $A$ is rainbow regular, it satisfies a stronger version of rainbow regularity where the equinumerous condition is relaxed. \begin{definition} A matrix $A$ with rational entries is \emph{robustly rainbow regular} if there exists some constant $C$, depending only on $A$, such that for every $\varepsilon>0$, positive integer $N$, and large enough integer $k$, the following holds: for every $k$-coloring of $[N]$ in which each color class contains at most $(C-\varepsilon)\frac N{\sqrt k}$ elements, there is a rainbow vector in $\ker(A)$. \end{definition} Note that (robust) rainbow regularity is actually a property of the kernel rather than the matrix. \pagebreak In our main theorem below, we classify rainbow regularity in terms of both the matrix $A$ and its kernel. \begin{theorem}\label{thm:main} Let $A$ be an $m \times d$ rational matrix. The following conditions are equivalent. \begin{enumerate}[(i)] \item $A$ is rainbow regular. \item $A$ is robustly rainbow regular. \item There exists a vector in $\ker(A)$ with positive integer entries, and every submatrix of $A$ obtained by deleting two columns has the same rank as $A$. \item There exists at least one vector in $\ker(A)$ with positive integer entries, and for every pair of distinct indices $(i,j)$, there exists a pair of vectors $x=(x_1,\dots,x_d)$ and $y=(y_1,\dots,y_d)$ in $\ker(A)$ such that $x_iy_j\neq x_jy_i$. \end{enumerate} \end{theorem} From Theorem \ref{thm:main} the reader can easily see that the matrix $\begin{pmatrix}a & -b & 0\end{pmatrix}$, with $a,b$ positive integers, gives a counterexample to Conjectures \ref{wrongrainbow3} and \ref{wrongrainbow4}. However, not all counterexamples are this simple. For instance, the matrix $$\begin{pmatrix} 1 & 0 & 1 & -1 & 0 \\ 0 & 1 & 1 & 0 & -1 \\ 1 & 0 & 0 & 1 & -1 \end{pmatrix}$$ has a kernel generated by $(1,2,3,4,5)$ and $(1,2,4,5,6)$ but is not rainbow regular.
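The failure of condition \emph{(iii)} for this last matrix can be checked mechanically. The following Python sketch (ours, not part of the paper's formal development; the function names are our own) computes ranks over $\Q$ with exact arithmetic and lists the column pairs whose deletion lowers the rank:

```python
from fractions import Fraction

def rank(rows):
    """Rank of a rational matrix, via exact Gauss-Jordan elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        # Find a pivot in this column at or below row r.
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        # Eliminate the column from every other row.
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def rank_dropping_pairs(A):
    """Pairs of columns whose deletion lowers rank(A) -- i.e., the pairs
    that violate condition (iii) of the classification theorem."""
    full, d = rank(A), len(A[0])
    return [(i, j) for i in range(d) for j in range(i + 1, d)
            if rank([[row[c] for c in range(d) if c not in (i, j)]
                     for row in A]) < full]

# The counterexample matrix from the text.
A = [[1, 0, 1, -1, 0],
     [0, 1, 1, 0, -1],
     [1, 0, 0, 1, -1]]
```

Here `rank_dropping_pairs(A)` returns `[(0, 1)]` (0-indexed): deleting the first two columns drops the rank from 3 to 2, so the matrix fails condition \emph{(iii)} even though its kernel contains the two positive integer vectors listed above; correspondingly, those kernel generators satisfy $x_1y_2=x_2y_1$.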
In the next section we give the proof of Theorem \ref{thm:main} and we prove the following corollary about $k$-colorings of $\N$ instead of $[N]$. \begin{corollary}\label{thm:cor1} Given a rational rainbow regular $m\times d$ matrix $A$, there exists a constant $C$, depending only on $A$, that satisfies the following: for every $k$-coloring of $\N$ with each color class having upper density less than $\frac{C}{\sqrt k}$, there is a rainbow vector in $\ker(A)$. \end{corollary} Ramsey theory has also been used in graph theory, and rainbow Ramsey theory is no different: we state a corollary of Theorem \ref{thm:main} that describes a property of graphs and suggests an interesting connection to the theory of nowhere-zero flows on graphs (see \cite{diestel,sixflow} and references therein). For the sake of completeness we recall some definitions and well-known facts. A graph is called \emph{$k$-edge-connected} if it has no edge cut of cardinality less than $k$. A \emph{$k$-flow} of an oriented graph $G=(V,E)$ is an integer function $\phi$ on $E$ such that $-k< \phi(e) < k$ for all $e\in E$, which satisfies Kirchhoff's law, that is, $\sum_{e \in \delta^+(v)} \phi(e) = \sum_{e \in \delta^-(v)} \phi(e)$ for each $v\in V$. If in addition $\phi (e)\neq 0$ for every $e\in E$, we call $\phi$ a \emph{nowhere-zero $k$-flow}. If a graph has a nowhere-zero $k$-flow, then some orientation of it has a positive $k$-flow. For a given oriented graph $G=(V,E)$ with $|V|=n$ and $|E|=m$ we consider the incidence matrix $M$, which is an $n\times m$ matrix. The rank of $M$ is $n-c$, where $c$ is the number of connected components of $G$. Theorem \ref{thm:main} has the following graph-theoretic corollary (for a monochromatic analogue see \cite{hogben}).
\begin{corollary} \label{thm:corgraph} The connected components of a graph $G$ are all 3-edge-connected if and only if for some orientation of $G$ there exists a constant $C$ depending only on the graph such that for every $\varepsilon>0$, positive integer $N$, and large enough integer $k$, the following holds: for every $k$-coloring of $[N]$ in which each color class contains at most $(C-\varepsilon)\frac{N}{\sqrt k}$ elements, there is a rainbow flow on that orientation of $G$. \end{corollary} In the third section we look at the matrices which give rise to Fibonacci sequences; we use Theorem \ref{thm:main} to show that they are rainbow regular and give a bound for their rainbow number. \section{Proof of Theorem \ref{thm:main} and its corollaries} \label{arrct} We start this section by considering the simplest case. We show that for any nonzero rational $1\times 2$ matrix $A$ there is an equinumerous $k$-coloring of $[kn]$, for sufficiently large $k$ and $n$, without rainbow vectors in the kernel of $A$. This case will later be used in the proof of the main theorem. \begin{lemma}\label{lem:1x2} If $A$ is a nonzero rational $1\times2$ matrix then $A$ is not rainbow regular. \end{lemma} \begin{proof} Assume $A=\begin{pmatrix}p&q\end{pmatrix}$ with $p,q\in\Q$. Then $\ker(A)$ is generated by some vector $(a,b)$. If either $p$ or $q$ is equal to $0$ then either $a$ or $b$ equals $0$, and thus $\ker(A)$ contains no vectors with positive integer entries. The same conclusion holds if $p$ and $q$ have the same sign. Therefore, in both cases, the matrix $A$ is not rainbow regular. Assume $p$ and $q$ are nonzero and have opposite signs. If $p=-q$, then $\ker(A)$ is generated by $(1,1)$ and $A$ is not rainbow regular. So we may assume that $p\neq-q$, that $a$ and $b$ are relatively prime, and that $a<b$. Let $N=nk$. In order to define an equinumerous coloring without rainbow solutions, we give a partition $\mathcal P$ of $[N]$. Each of its classes will be monochromatic.
So $i$ must be in the same class as $j$ if either $(i,j)$ or $(j,i)$ is in $\ker(A)$, i.e., $ai=bj$ or $bi=aj$. Since $a$ and $b$ are relatively prime, every integer can be written uniquely as $a^\alpha b^\beta c$ where $c$ is divisible by neither $a$ nor $b$. The equivalence classes are of the form \begin{equation*} \left\{a^\alpha b^0 c,a^{\alpha-1} b^1 c,\dots,a^0 b^\alpha c\right\}\cap[N]. \end{equation*} As mentioned before, all the elements in each class will be colored with the same color. Now, using a greedy algorithm, we can define the $k$-coloring of $[N]$: at each step, assign a least-used color to a largest uncolored class. For a step-by-step example of our coloring procedure see Figure \ref{N12}. \begin{figure} \includegraphics{alg} \caption{An explicit coloring for $N=12$ showing that the matrix $\big(1\quad -2\big)$ is not rainbow partition 3-regular.} \label{N12} \end{figure} All that remains is to show that this is an equinumerous $k$-coloring. To do this we need some bounds related to $\mathcal P$; specifically, on the cardinality of the largest class and the number of classes with one element. Observe that each class can be represented by its smallest element. The set of these representative elements is precisely the set of integers in $[N]$ which are not divisible by $b$. Note that the largest element of any class in $\mathcal P$ is of the form $a^{\gamma}b^\beta c$ with $a^{\gamma}b^\beta c\le N$, and the number of elements of this class is $\beta+1$. So clearly the maximum cardinality of any class is at most $1+\log_b(N)$. We proceed by counting the number of classes in $\mathcal P$ with cardinality one. These classes are represented by $a^\alpha c$ with either $\alpha=0$ or $a^{\alpha-1}bc>N$. Here we will assume that $k$ and $n$ are equal and large compared to $a$ and $b$, so we can find asymptotic bounds up to multiplicative constants depending only on $a$ and $b$.
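For concreteness, the partition-and-greedy procedure can be sketched in a few lines of Python (our illustration, specialized to the matrix $\big(1\quad -2\big)$ of Figure \ref{N12}, whose kernel is generated by $(2,1)$; the function name is ours):

```python
def greedy_coloring(N, k):
    """Color [1..N] with k colors so that y and 2y always share a color,
    killing every rainbow solution of x - 2y = 0.  Greedy rule from the
    proof: give a least-used color to a largest uncolored class."""
    # Equivalence classes are the chains {c, 2c, 4c, ...} for odd c <= N.
    chains = []
    for c in range(1, N + 1, 2):
        chain, x = [], c
        while x <= N:
            chain.append(x)
            x *= 2
        chains.append(chain)
    chains.sort(key=len, reverse=True)           # largest classes first
    used, color = [0] * k, {}
    for chain in chains:
        i = min(range(k), key=used.__getitem__)  # a least-used color
        for x in chain:
            color[x] = i
        used[i] += len(chain)
    return color, used

color, used = greedy_coloring(12, 3)
```

For $N=12$ and $k=3$ this produces an equinumerous coloring (each color used $n=4$ times) in which $y$ and $2y$ always share a color, so no rainbow vector of the kernel exists, matching the example in Figure \ref{N12}.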
The number of classes corresponding to $\alpha=0$ is $$\floor{\frac{N}{a}}+\floor{\frac{N}{b}}-\floor{\frac{N}{ab}}=\Omega(N).$$ The classes corresponding to $a^{\alpha-1}bc>N$ are in bijection with the elements $c\in\left(\frac{a}{b}N,N\right]\setminus b\Z$. The number of those classes is approximately $$\frac{b-1}{b}\left(N-\frac{a}{b}N\right)=\Omega(N).$$ Thus the number of classes with cardinality one is $\Omega(N)$. Assume that the coloring defined above is not equinumerous. Then at some point during the greedy algorithm, a color becomes used more than $n$ times. Consider the first time that this happens: some equivalence class of size $m$ is assigned a color that has already been used at least $n-m+1$ times. Note that $m>1$, as otherwise we would have already finished coloring. Because at each step we use a least-used color, each color has already been used at least $n-m+1$ times. It follows that at least $k(n-m+1)\ge N-k(1+\log_b(N))$ integers have been colored. Therefore at most $k(1+\log_b(N))$ elements of $[N]$ remain uncolored, but we have not colored any class of size one yet. Therefore there are at least $\Omega(N)$ uncolored integers. This gives $$\Omega(N)\le k(1+\log_b(N))=O\left(N^{1/2}\log(N)\right),$$ a contradiction. Therefore, for sufficiently large $k=n$, the coloring is equinumerous and thus $A$ is not rainbow partition $k$-regular. \end{proof} Now we are ready to prove Theorem \ref{thm:main}. We point out that implication $(iii)\implies (ii)$ is based on a proof of \cite{rainbowapk}, but with new ideas from the theory of lattice points inside polyhedra started by E. Ehrhart \cite{BarviPom,beckrobins}. \begin{proof}[Proof of Theorem \ref{thm:main}]\mbox{} \noindent\emph{(i)$\implies$(iv):} We show the contrapositive. If $\ker(A)$ contains no vectors with positive integer entries or there exist indices $i\neq j$ such that $x_iy_j=x_jy_i$ for all $x,y\in\ker(A)$, then $A$ is not rainbow regular.
If $\ker(A)$ contains no vector with all positive integer entries then $A$ is clearly not rainbow regular, so we assume that there exists $q\in\ker(A)$ with positive integer entries and that for every $x\in\ker(A)$, $x_iq_j=x_jq_i$. So $\begin{pmatrix}q_j & -q_i\end{pmatrix}$ is a nonzero rational $1\times 2$ matrix which, by Lemma \ref{lem:1x2}, is not rainbow regular. That is, for arbitrarily large $k_0$ there exist $k>k_0$, $n$, and an equinumerous $k$-coloring of $[kn]$ such that whenever $x_iq_j-x_jq_i=0$, the integers $x_i$ and $x_j$ share a color. Therefore $A$ is not rainbow regular.\\ \noindent{\emph{(iv)$\implies$(iii):}} Suppose that there are two columns $i$ and $j$ of $A$ such that the submatrix $A'$ obtained by deleting them from $A$ has a different rank than $A$. If $A$ has linearly dependent rows, we may remove them and pass to the corresponding submatrices of $A$ and $A'$ without changing the kernel. Since $A'$ is a submatrix of $A$ and the ranks differ, $\rank(A')<\rank(A)$. Because $A$ and $A'$ have the same number of rows but $A'$ has smaller rank, the rows of $A'$ are linearly dependent. Let $r_\ell$ and $r'_\ell$ be the $\ell$-th rows of $A$ and $A'$, respectively. Then there exist scalars $\alpha_1,\dots,\alpha_m$, not all zero, such that $\sum_{\ell=1}^m\alpha_\ell r'_\ell=0$. Consequently, the entries of $$r=\sum_{\ell=1}^m\alpha_\ell r_\ell$$ are zero except possibly for the $i$-th and $j$-th entries. Let $q_i$ and $q_j$ be those entries. Then every $x,y\in\ker(A)$ satisfy $r\cdot x=r\cdot y=0$, thus $q_ix_i+q_jx_j=q_iy_i+q_jy_j=0$ and therefore $x_iy_j=x_jy_i$.\\ \noindent{\emph{(iii)$\implies$(ii):}} Since each color class contains at least one element, the hypothesis in \emph{(ii)} forces $(C-\varepsilon)\frac N{\sqrt k}\ge 1$. Consequently, we need only prove that $\ker(A)$ contains a rainbow vector for every large enough $N$, independently of $k$. We wish to find an upper bound for the number of vectors with entries in $[N]$ that are not rainbow.
For this we introduce some new notation. If $i,j\le d$ then $A_{\widehat{i,j}}$ denotes the matrix obtained by removing the $i$-th column $A_i$ and the $j$-th column $A_j$ from $A$, and if $x$ is a vector then $x_{\widehat{i,j}}$ denotes the vector obtained by removing the $i$-th and $j$-th entries of $x$. If $x\in[N]^d\cap\ker(A)$ is not rainbow, then $x$ has two entries that share a color (they may have the same value). Assume $i$ and $j$ are the positions of these entries, and $z_1$ and $z_2$ are their respective values. Then $x_{\widehat{i,j}}$ must solve the equation \begin{equation*}\label{eq:ij} A_{\widehat{i,j}}y=-z_1A_i-z_2A_j. \end{equation*} In other words, if this equation has no solutions in $[N]^{d-2}$ then there is no $x\in[N]^d\cap\ker(A)$ with $x_i=z_1$ and $x_j=z_2$. If this equation has some solution $y_0$, then the set of solutions is simply $y_0+\ker(A_{\widehat{i,j}})$. By assumption, $\text{rank}(A_{\widehat{i,j}}) = \text{rank}(A)$, hence $\dim(\ker(A_{\widehat{i,j}}))=\dim(\ker(A))-2$. Thus we only need to choose values for $\dim(\ker(A))-2$ of the remaining coordinates and the rest will be determined. Since there are at most $N$ possible values for each coordinate, there are at most $N^{\dim(\ker(A))-2}$ ways to choose the remaining entries of $x$. Now we can bound the number of non-rainbow vectors in $[N]^d\cap \ker(A)$. Since each color class contains at most $(C-\varepsilon)\frac N{\sqrt k}$ elements, there are at most $$k\left((C-\varepsilon)\frac N{\sqrt k}\right)^2=(C-\varepsilon)^2N^2$$ pairs of integers in $[N]$ that share a color. Given two such integers $z_1$ and $z_2$, there are at most $\binom{d}{2}$ ways to place them in a vector $x\in[N]^d$. Therefore there are at most \begin{equation}\label{eq:bound} (C-\varepsilon)^2N^2\binom{d}{2}N^{\dim(\ker(A))-2}=(C-\varepsilon)^2\binom d2N^{\dim(\ker(A))} \end{equation} non-rainbow vectors $x$. Now we must bound the number of vectors in $[N]^d\cap\ker(A)$ from below.
Consider the polytope $P=[0,1]^d\cap\ker(A)$ and its interior $P^\circ$. The number of vectors in $[N]^d\cap\ker(A)$ is bounded below by the number of integer vectors in $NP^\circ$. Since $P$ is a rational polytope, the number of integer points in $NP$ is given by its Ehrhart quasi-polynomial $L_P(N)$ \cite[Chapter 3]{beckrobins}, which has degree $\dim(P)=\dim(\ker(A))$. By Ehrhart-Macdonald reciprocity \cite[Chapter 4]{beckrobins}, the number of integer points in $NP^\circ$ is given by the quasi-polynomial $p(N)=(-1)^{\dim(P)}L_P\left(-N\right)$. Let $\nu$ be the leading coefficient of $p$ (this is in fact the volume of the polytope $P$). Note that $\nu$ is a constant depending only on $A$. Take $C=\sqrt{\frac\nu{\binom d2}}$. Then, for every $\varepsilon>0$, the leading coefficient $\nu$ of $p$ is larger than the coefficient $(C-\varepsilon)^2\binom d2$ of $N^{\dim(\ker(A))}$ in \eqref{eq:bound}. Therefore, for sufficiently large $N$, the non-rainbow vectors do not cover $[N]^d\cap\ker(A)$.\\ \noindent{\em (ii)$\implies$(i):} This follows immediately by taking $N=kn$. \end{proof} Now we show, as a consequence of Theorem \ref{thm:main}, that we can deal with colorings of $\N$ where the color classes have bounded upper density: \begin{proof} (of Corollary \ref{thm:cor1}) Let $A$ be a rational rainbow regular $m\times d$ matrix. By Theorem \ref{thm:main}, there is some $C:=C(A)$ such that for every $\varepsilon>0$, positive integer $N$, and large enough integer $k$, the following holds: for every $k$-coloring of $[N]$ in which each color class contains at most $(C-\varepsilon)\frac{N}{\sqrt k}$ elements, there is a rainbow vector in $\ker (A)$. Suppose $\N$ is $k$-colored such that each color class has upper density strictly less than $\frac{C}{\sqrt k}$. Then there exist $N\in\N$ and a small $\varepsilon>0$ such that for each color class $K\subseteq\N$ and all $n>N$, $$\frac{\#(K\cap[n])}{n}\le \frac{C}{\sqrt{k}}-\varepsilon.$$ Consequently, $\ker (A)$ contains a rainbow vector.
\end{proof} We now move to the graph-theoretic consequences: \begin{proof} (of Corollary \ref{thm:corgraph}) Let $G$ be a graph. \noindent{$\implies$:} Suppose each connected component of $G$ is $3$-edge-connected. Then $G$ is bridgeless, so by \cite{sixflow} it has a nowhere-zero $6$-flow. Consequently, we may choose an orientation of $G$ which has a positive $6$-flow. Let $M$ be the incidence matrix corresponding to that orientation. Note that the positive $6$-flow, written as a vector, is an element of $\ker (M)$ with positive integer entries. Consider a submatrix obtained by deleting two columns from $M$. This corresponds to the subgraph obtained by deleting two edges from $G$. Because the connected components of $G$ are $3$-edge-connected, the deletion of two edges from $G$ does not change the number of connected components---and thus the rank---of $G$. Hence any submatrix obtained by deleting two columns from $M$ has the same rank as $M$. Therefore $M$ is (robustly) rainbow regular by Theorem \ref{thm:main}. That is, there exists a constant $C$ depending only on $M$ (and thus $G$) such that for every $\varepsilon>0$, positive integer $N$, and large enough integer $k$, the following holds: for every $k$-coloring of $[N]$ in which each color class contains at most $(C-\varepsilon)\frac{N}{\sqrt k}$ elements, there is a rainbow vector in $\ker (M)$---which corresponds to a rainbow flow on the chosen orientation of $G$. \noindent{$\Longleftarrow$:} Suppose that for some orientation of $G$ there exists a constant $C$ depending only on the graph such that for every $\varepsilon>0$, positive integer $N$, and large enough integer $k$, the following holds: for every $k$-coloring of $[N]$ in which each color class contains at most $(C-\varepsilon)\frac{N}{\sqrt k}$ elements, there is a rainbow flow on that orientation of $G$. Then the incidence matrix $M$ corresponding to the chosen orientation of $G$ is robustly rainbow regular.
By Theorem \ref{thm:main}, the rank of any submatrix obtained by deleting two columns from $M$ is the same as the rank of $M$. This implies that the deletion of any two edges from $G$ does not change the rank---and thus the number of connected components---of $G$. Therefore each connected component of $G$ is $3$-edge-connected. \end{proof} \section{Examples} We note that, as a corollary of Theorem \ref{thm:main}, all known examples of rainbow regular matrices are in fact robustly rainbow regular. This includes several well-known families, such as matrices associated with arithmetic progressions. In this section we use Theorem \ref{thm:main} to analyze Fibonacci sequences and show that their associated matrices are rainbow regular (and thus robustly rainbow regular); we also give bounds for their rainbow number. Here we look at sequences $p_1,\dots,p_d$ where $p_{i+2}=p_i+p_{i+1}$; we call these \emph{Fibonacci sequences}. In this case we use the $(d-2)\times d$ matrices $$A_d=\begin{pmatrix} 1 & 1 & -1 & 0 & 0 & \dots & 0 & 0 \\ 0 & 1 & 1 & -1 & 0 & \dots & 0 & 0 \\ 0 & 0 & 1 & 1 & -1 & \dots & 0 & 0 \\ \vdots & \vdots & \ddots & \ddots & \ddots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & 0 & \dots & -1 & 0\\ 0 & 0 & 0 & 0 & 0 & \dots & 1 & -1 \end{pmatrix}.$$ To verify that this matrix is rainbow regular, we show that it satisfies the condition described in part \emph{(iv)} of Theorem \ref{thm:main}. Let $F_i$ be the usual Fibonacci sequence with $F_0=0$ and $F_1=1$. Let $x=(F_1,\dots,F_d)$ and $y=(F_2,\dots,F_{d+1})$. The vector $x$ has only positive integer entries, and it is easy to see that $x_i y_j=F_iF_{j+1}\neq F_jF_{i+1}=x_j y_i$ for $i\neq j$. Next, we present exponential bounds on the rainbow number $r(A_d)$. Recall that for $d=3$, $A_3$ is the Schur equation and $r(A_3)=3$ (see \cite{probm}). \begin{theorem} For $d>3$, $F_{d+1}\leq r(A_d)\leq (d^2-d+1)F_{d-1}F_{d-2}$.
\end{theorem} \begin{proof} For the lower bound, note that any rainbow solution $x=(x_1,\dots,x_d)$ of $A_d$ has $x_d\ge F_{d+1}$. So for $n=1$ we need $k\ge F_{d+1}$, and therefore $F_{d+1}\leq r(A_d)$. For the upper bound, we compute an Ehrhart-like polynomial to refine the bounds in the proof of the \emph{(iii)$\implies$(ii)} part of Theorem \ref{thm:main}. Consider the polytope $P=[0,1]^d\cap\ker(A_d)$. The dimension of $\ker(A_d)$ is $2$, so $\dim(P)=2$. The following claim ensures that $P$ is a triangle. \begin{claim} The vertices of $P=[0,1]^d\cap\ker(A_d)$ are $O=(0,0,\dots,0)$, $A=\frac{1}{F_{d-1}}(F_0,F_1,\dots,F_{d-1})$, and $B=\frac{1}{F_{d-2}}(1,F_0,\dots,F_{d-2})$. \end{claim} \begin{proof} Suppose that $v=(v_1,\dots,v_d)$ is a vertex of $P$. Since $\dim(P)=2$, at least two entries of $v$ are equal to $0$ or $1$. If both entries are $0$, then the recurrence and the nonnegativity of the entries force every entry to vanish, so $v$ is the origin $O$. So assume $v$ contains an entry $v_i=1$. Because $v$ is a Fibonacci sequence with nonnegative entries, $v_j\le v_{j+1}$ for $j\ge 2$. Consequently, if $v_i=1$, then $i\in\{1,d-1,d\}$, as otherwise $v_{i+2}\ge 2$. If $v_1=1$ then $v_2=0$, as otherwise $v_3>1$. In this case $v=(1,0,1,1,2,\dots)$, which is $B$ if $d=4$ and invalid otherwise. If $v_{d-1}=1$ then $v_d=1$, as $v_d\ge v_{d-1}$. In this case $v=(\dots,-1,1,0,1,1)$, which is $B$ if $d=4$ and invalid otherwise. So assume $v_d=1$ and $v_1,v_{d-1}\neq 1$. No other entry of $v$ can be $1$, so some entry $v_j=0$. If $j\ge 2$ then $v_2=0$ because $v_2\le v_j$; in this case $v=B$. If $v_1=0$ then $v=A$. \end{proof} Therefore the dilation $(F_{d-1}F_{d-2})P$ is an integer polytope. Now we need to count the number $L_d(t)$ of lattice points in $(tF_{d-1}F_{d-2})P$ with positive entries. Let $Q$ be the polytope obtained by projecting $(F_{d-1}F_{d-2})P$ onto its first two coordinates. It is a triangle with vertices $(0,0)$, $(F_{d-1},0)$, and $(0,F_{d-2})$. 
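Both the vertex claim and the lattice-point count can be sanity-checked for small $d$. The sketch below (our own illustration, not part of the proof) verifies with exact rational arithmetic that the claimed vertices $A$ and $B$ lie in $P=[0,1]^d\cap\ker(A_d)$, and compares a brute-force count of the lattice points of $tQ$ with positive entries against the closed form for $L_d(t)$ computed in the next paragraph.

```python
from fractions import Fraction

def fib(n):
    """Return the list [F_0, F_1, ..., F_n]."""
    F = [0, 1]
    while len(F) <= n:
        F.append(F[-1] + F[-2])
    return F

for d in range(4, 9):
    F = fib(d)
    # Claimed vertices A = (F_0,...,F_{d-1})/F_{d-1}, B = (1,F_0,...,F_{d-2})/F_{d-2}.
    A = [Fraction(F[i], F[d - 1]) for i in range(d)]
    B = [Fraction(1, F[d - 2])] + [Fraction(F[i], F[d - 2]) for i in range(d - 1)]
    for v in (A, B):
        assert all(0 <= c <= 1 for c in v)                              # v in [0,1]^d
        assert all(v[i] + v[i + 1] == v[i + 2] for i in range(d - 2))   # v in ker(A_d)

    for t in range(1, 4):
        bound = t * F[d - 1] * F[d - 2]
        # Lattice points (a,b) of tQ with positive entries:
        # a, b > 0 and F_{d-2}*a + F_{d-1}*b <= t*F_{d-1}*F_{d-2}.
        brute = sum(1 for a in range(1, bound + 1)
                      for b in range(1, bound + 1)
                      if F[d - 2] * a + F[d - 1] * b <= bound)
        closed = Fraction(F[d - 1] * F[d - 2], 2) * t**2 - Fraction(F[d] - 1, 2) * t
        assert brute == closed
print("verified")
```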
\begin{claim} For any nonnegative integer $t$, this projection gives a bijection between the lattice points in $(tF_{d-1}F_{d-2})P$ and the lattice points in $tQ$. \end{claim} \begin{proof} Clearly the projection is injective. Suppose $(a,b)$ is an integer point in the dilation $tQ$. Then $a,b\ge 0$ and $F_{d-2}a+F_{d-1}b\leq tF_{d-1}F_{d-2}$. Consider the Fibonacci sequence $$(a,b,a+b,\dots,F_{d-2}a+F_{d-1}b).$$ This sequence is contained in $(tF_{d-1}F_{d-2})P$ and projects to $(a,b)$. \end{proof} We may compute $L_d$ by counting the lattice points in $tQ$ with positive entries; note that the hypotenuse of the right triangle $tQ$ contains $t+1$ lattice points, since $\gcd(tF_{d-1},tF_{d-2})=t$: \begin{align*} L_d(t)&=\frac{(tF_{d-1}+1)(tF_{d-2}+1)+(t+1)}2-(tF_{d-1}+1)-(tF_{d-2}+1)+1\\ &=\frac{F_{d-1}F_{d-2}}2t^2-\frac{F_d-1}{2}t. \end{align*} Now we need to show that if $k=(d^2-d+1)F_{d-1}F_{d-2}$, then for any equinumerous $k$-coloring of $[kn]$ there is a rainbow solution to $A_dx=0$. From the computation in \eqref{eq:bound}, we know there is a solution whenever \begin{align*} L_d\left(\frac{kn}{F_{d-1}F_{d-2}}\right)&=L_d\left((d^2-d+1)n\right)\\ &=\frac{F_{d-1}F_{d-2}}2(d^2-d+1)^2n^2-\frac{F_d-1}{2}(d^2-d+1)n\\ &>\frac{d(d-1)}{2k}(kn)^2=\frac{F_{d-1}F_{d-2}}{2}d(d-1)(d^2-d+1)n^2. \end{align*} This is equivalent to $F_{d-1}F_{d-2}n>F_d-1$, which is true for all integers $d>3$ and $n\ge 1$. \end{proof} \section*{Acknowledgments} We are grateful to UC MEXUS for the support received in funding this research collaboration. We are also indebted to UC Davis and CINNMA for their hospitality and support. The second author was supported by NSF grant DMS-0135345, the third author was supported by PAPIIT IA102013, the fourth author was supported by PAPIIT IN101912, and the last three authors were supported by CONACyT project 166306. Last but not least, we wish to thank Prof. Jacob Fox and Prof. Leslie Hogben for their helpful comments and suggestions.